Interaction Design Project | 04 Usability Testing
Usability Testing Plan
After assessing online tools such as Maze and Loop11, we chose Lookback.io for its ease of use and feature set. Lookback allowed us to record each participant's face, actions and screen during a video call. Some members ended up using different tools, but this didn't hinder any results as we all recorded sessions and took notes.
Our testing methodology was mixed: we conducted moderated remote usability testing through different online tools for qualitative data, and used the System Usability Scale (SUS) for quantitative results. Our objectives were to test the new layout and features, identify usability issues across the app, discover navigation problems, and flag confusing or unclear UI. Given the limitations of our prototype, we decided to test on desktop rather than mobile.
Michael created a testing plan: an introduction, a warm-up with pre-task questions, Tasks 01–05 with post-task questions after each, a wrap-up SUS, and finishing with additional comments or questions. We recorded 10 participants in total, aged 25–34, 9 of whom had previously used RTE Player.
After testing, we held a working group session where we grouped our cards by theme, synthesised them, and identified recommendations based on those insights. We calculated our SUS score (89.2) and used the Action Priority Matrix to vote on effort and impact.
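For readers unfamiliar with how a SUS score is derived, the standard formula converts ten 1–5 Likert responses into a 0–100 score: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch of that calculation is below; the example responses are hypothetical, not our participants' actual data.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert answers,
    given in questionnaire order (items 1-10)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = 0
    for item, answer in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5  # scale the 0-40 sum to 0-100

# Hypothetical example: one fairly positive participant
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 1]))  # 90.0
```

A group score like our 89.2 is simply the mean of the individual participants' scores.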
Usability testing was generally positive, with little confusion across the board. We were mindful that testing with our peers may have influenced this.
One participant felt the landing page's colours were ‘quite sinister’. We solved this by removing the red duotone on the featured image, giving the overall layout a brighter, more inviting feel.
Random Pick is now Smart Pick. 40% of participants noticed this feature at first glance, and 20% confirmed they would use it, but there was little interest in it being ‘random’. We solved this by renaming it ‘Smart Pick’; I explained our reasoning behind that term in Interaction Design Project | 03 Design Process.
Participants were generally unlikely to give a rating unless it was a low-effort prompted action that didn't interfere with their watching, or it improved their experience. There was little desire among participants to leave a review, though they would read those of others. We solved this by showing feedback after a rating was selected, letting the user know the purpose of the action: “Review submitted. This helps us improve suggestions.”
Participants were generally uneasy about chatting in a public channel, and the ‘Friends’ filter option we had went unnoticed by the majority of participants. We solved this by clearly displaying the words ‘Public’ and ‘Private’. 60% of participants approved of this feature, with one saying that “Live chatting is like enjoying an event together.”
Participants were confused by the ‘More Episodes’ and ‘Share’ icons, with 7 giving negative feedback. We solved this by redesigning the ‘More Episodes’ icon and making the ‘Share’ icon more identifiable.
In addition to our refinements, Tom noticed a gap: he suggested an onboarding flow to showcase RTE Player's new features on first use. This design pattern gives first-time users an introduction to RTE Player's new competitive edge, and it was implemented in our new prototype.
Watch a walkthrough of the full prototype here.
Prototype iteration 01 Figma
Prototype iteration 02 Figma
Video of prototype Sharepoint
Usability testing plan Miro
Usability testing consent forms Sharepoint
Usability testing transcripts Sharepoint
Usability testing recording participant 01 Lookback
Usability testing recording participant 02 Lookback