Usability testing
Last modified on Mon 12 Sep 2022

Usability is the ability to use a product with ease. When we talk about usability, we make sure that products are easy to use for everyone, in any usage context. It should be evident to people where they are, what they should do next, and how to complete the task at hand. From the design perspective, we want to give our users a product they can use without much thought or frustration.

This chapter goes into a lot of detail about user testing. To make it easier to implement best practices, we've created a Project Lifecycle Checklist in Productive. In the Design Phase board, there's a task list called Validation. Ping your TL to give you access to it.

How to conduct usability testing

Usability testing is a direct way of seeing how users interact with our product.

In usability testing, we show the clickable prototype (whether it's a mobile or web app) to one user at a time, and we ask them to:

a. figure out what it is
b. try to use the prototype to do a typical task
c. find some critical information.

Usability testing is crucial because it helps us learn whether our product is really based on users' behaviours and expectations, which leads to many ideas for improving the design. We usually perform “task testing” — we ask the user to complete a task, and then we observe how well they do.

If it’s the first time you’re running usability testing, don’t hesitate to ask a more experienced colleague for advice, help, or a quick walkthrough.

For each usability test, we need at least two people to set everything up:

a. facilitator,
b. observer.

Preferably two designers, but you can also take someone from the project team (e.g., QA). If you are the lead designer on the project, don’t take the user feedback personally, and don’t let attachment to your design bias you against users’ opinions.

The facilitator is one of the most critical roles in usability testing. It needs to be someone patient, calm, empathetic, and a good listener.

The observer is someone who follows the session in person, via Lookback, or on a video call, takes notes, and takes care of the tech setup (phones, batteries).

We can divide usability testing into 3 phases:

  1. Setting up — planning and preparing everything we need for testing
  2. Conducting usability testing — live or remote sessions with the users
  3. Analysis — providing actionable insights to the client and our team

The following paragraphs will guide you through each of these phases.

1. Setting up

How you organize usability testing depends on many factors (the number of participants, hosting location, etc.). There are three usual options:

  1. Infinum will find users and host the testing
  2. The client will find users and host the testing
  3. Remote testing, with the client and us sharing participant recruitment and hosting

Some essential steps don't differ from one setup to another. We'll explain the differences as we come across them.


1. Send the Statement of Consent and NDA to your project’s PM and the client to make sure everything is OK from a legal standpoint. If you're running the research in Croatian, here's the local version of the Statement of Consent and NDA.


2. Discuss with the client early on who is recruiting participants and where the testing will take place. One significant point to discuss: how participants will be rewarded. Usually, it's a gift voucher or some small gadget. Make sure the client knows about this and budgets for rewards.


3. Go over the 🔒 requirements document with the client in person or on a call. Also, send this document in a follow-up email.

Here’s an example of an email we send alongside the requirements doc:

Hi everyone,
I am sending the Usability testing requirements document with information about the criteria for setting up the remote usability testing:

Participant requirements:
- from Ghana
- fluent in English
- at least 1 year of experience using mobile applications
- between 25 and 45 years old

Mobile set-up:
- A mobile phone for testing purposes: Sony Xperia XZ (with Android 8.0 installed) or a similar device (no older devices)
- The InVision application installed on the testing device
- The Participate application installed on the testing device
- A high-speed Internet connection

Methodology:
- The think-aloud protocol will be used
- Users will be given 5-6 tasks
- Users will read each task aloud
- Users will complete a short questionnaire between each task
- Testing schedule: each session (per user) is scheduled for 40 minutes with 20 minutes between each session

Online training session for the facilitator in Ghana:
- A call during which our design team will explain how to conduct usability testing
- One (maybe two) internal usability testing sessions conducted in real conditions for training purposes
If you have any questions, let me know.

Regards,
Your name


4. Create a 🔒 testing schedule table. Depending on the setup, it will be up to the client or us to fill the timeslots. Keep each session to about 60 min per person (40 min of interacting with the participant, 20 min for reviewing and preparing for the next participant) — five participants per day max. If the testing is conducted remotely or in a different country, keep time zones in mind when planning and scheduling.


5. Recruit participants using our 🔒 testers database, or ask the good people of Infinum in the #infinum Slack channel if they happen to know someone who fits the target audience. For each group of users, have one backup participant. When using the testers database, please consult with Team Leads or the Product Strategist.


6. Prepare an 🔒 agenda for all testing sessions based on participant availability. Here is an example agenda for one day:

09:00 AM - 10:00 AM – 1st user
10:00 AM - 11:00 AM – 2nd user
11:00 AM - 12:00 PM – 3rd user

12:00 PM - 01:00 PM – Lunch & prep for the next user

02:00 PM - 03:00 PM – 4th user
03:00 PM - 04:00 PM – 5th user
04:00 PM - 05:00 PM – conclusions and preparation for the next day
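
If you're planning several days of sessions, the timeslot math is easy to script instead of typing out by hand. Here's a minimal sketch; the 09:00 start and the 40/20-minute split come from the guideline above, but the function name is ours and it doesn't model the lunch break, so treat the output as a starting point:

```python
# Sketch: generate one day's testing agenda from the 40/20-minute guideline.
# Start time and participant count are assumptions; adjust as needed.
from datetime import datetime, timedelta

SESSION = timedelta(minutes=40)  # time spent with the participant
PREP = timedelta(minutes=20)     # review and prep before the next session

def day_agenda(start: str = "09:00", participants: int = 5) -> None:
    slot = datetime.strptime(start, "%H:%M")
    for i in range(1, participants + 1):
        end = slot + SESSION
        print(f"{slot:%I:%M %p} - {end:%I:%M %p} – user {i}")
        slot = end + PREP  # each full slot comes out to about 60 minutes

day_agenda()
```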


7. Define your goals — Before preparing specific tasks for our users, we need to define what we want to achieve. We’re looking for answers to crucial questions about usability, findability, discoverability, or general pain points. Then we use this info during the next design phase or iteration. We should ask ourselves questions like:
- Are the users aware of the main app functionalities?
- How can we make sure they understand the content correctly?
- Can users use the main functionalities effortlessly?
- What dilemmas did we run into while designing? How can we be sure users won’t run into the same issues?
These goals will help you focus while preparing the tasks.


8. Prepare tasks — After the client has approved the agenda and the responsibilities are assigned, the facilitator starts preparing the tasks and prototype(s). Be careful with the wording of your tasks; you don't want to give away the answer by mentioning the exact copy that is visible on the screen. For a more in-depth study of this process, take a look at NNG’s guide.

Live testing: You should print tasks on individual cards. You'll be giving participants those cards one by one during the testing. The observer should have a list of all tasks with additional notes. These notes should specifically instruct them on what to look for while the user is completing a task.


Remote testing: Keep all the tasks in one file so that both the facilitator and the observer can keep up with users going through tasks. You'll upload these tasks to Maze and also set the expected user paths there. What's Maze? Glad you asked - here's a whole chapter about it with detailed instructions on setting it up.

This is where live and remote usability testing diverge. Steps are similar, but to avoid confusion, we'll cover them separately.

Live usability testing, hosted either by the client or by us


9. Prepare 🔒 task rating tables. Task rating tables are useful for measuring success on each task and having an objective overview of pain points and room for improvement.


10. Print out these documents:

- Statement of Consent (equal to the number of participants),
- Confidentiality statement (NDA) (equal to the number of participants),
- Interviewer guides (1 copy),
- Tasks for participants (1 copy of each task),
- Tasks with additional notes for the observer (1 copy of each task),
- Task rating tables (equal to the number of participants × number of tasks).


11. Define which mobile devices you will use in the testing sessions. You can use new phones, but always check that Participate (from Lookback) and InVision support that specific phone and operating system. To be safe, don’t use the latest devices, because they may still have unresolved compatibility bugs. Here’s a list of compatible Android devices for Lookback. Prepare two devices for each platform because the battery drains fast when you’re running sessions back to back. Assign who is responsible for charging the batteries before the sessions (usually the observer).


12. Install Participate on mobile phones.


13. As a thank-you, we usually give participants rewards (vouchers or cash), so have them handy at the testing site.

Preparing the project on Lookback and the testing device

  1. Log in to Lookback and go to Dashboard → New project
  2. Choose the device type
  3. Name the project and skip the rest
  4. Open the link for the InVision prototype on the testing device (in the native browser)
  5. On iOS, tap Share and “Add to home screen” (Safari); on Android, tap More and “Add to home screen”
  6. The link should now be on the home screen and look like an app icon with InVision’s logo

Remote usability testing


9. Do the paperwork.
Prepare digital versions of these documents; it's best to keep them all in one Google Drive folder:
- Interviewer guides,
- List of all tasks,
- Document for the observer's notes (one per participant). Good practice: keep each task and question on a separate page.

Send documents to participants to sign:
- Confidentiality statement (NDA),
- Statement of Consent.


Ping HR for this. We use HelloSign for remote document signing. You'll need to send HR the list of participants' email addresses and the versions of the NDA and Consent that participants need to sign. Colleagues from HR will ping you as participants start signing the documents.


10. Prep participants for testing. Send them the 🔒 prep email so that they know what to expect and have an up-to-date browser, which Maze needs to run smoothly.


11. Reserve timeslots on Infinum's Zoom account. Send your PM the testing schedule and ask them to reserve these timeslots for you. Zoom usually has the most stable video connection, and it lets you record the call in the cloud. Cloud recording is better because your Mac will quickly start lagging if you record locally.

2. Conducting usability testing

Do a test run

Before the official user sessions, make sure you do a test run. Simulate testing in Infinum’s offices or via video call, and choose a colleague as similar to the target audience as possible.

We do test runs to notice smaller issues and any improvements we can make before conducting the actual testing.

Preparing the session

Live testing: Before the user enters the room, we have to set everything up:

  1. Open Lookback on desktop (in Chrome) and go to your project.
  2. Click on the Live button and copy the link.
  3. Paste the link in a web browser on the testing device.
  4. Choose “I already have Participate”.
  5. For the name, write User 1, User 2, … (the email can be your email or something random).
  6. Turn on the audio and camera.
  7. Tap “Start broadcast”.
  8. Go to Lookback on desktop and open the green live session.
  9. Click “Start session” and “Call participant”.
  10. Answer the call on the test device.
  11. On Android, click “Show my screen”.
  12. Leave the Participate app open in the background and open the InVision link added to the home screen.

Remote testing: Much easier than the above.
1. Send the participant the 🔒 email with links to both Zoom and Maze.

The session


1. In the beginning, make sure the user feels comfortable and calm. Be friendly, so the user feels relaxed, but not too friendly because we need to get objective answers and opinions from them. In the true Balkan manner, offer them some juice and snacks.


2. Next, use the script from the Interviewer guides to conduct the usability testing, and always keep the text in front of you. Don’t hesitate to read from it, but it’s OK to ad-lib a little, even if it means making mistakes. When users see that you are comfortable making mistakes, it helps take the pressure off them.


3. Before you start, make it clear that nothing we do or say is personal and that they can always ask questions. It's helpful to tell them that giving answers too soon can affect the testing, so you will wait until the end of the session to answer their questions. It’s important to mention this because otherwise it will seem rude not to answer their questions as you go along. If you want honest answers, it can be useful to point out that you did not work on this project (whether that’s true or not) and that you are just overseeing the testing.

As with preparation, this is where live and remote testing diverge. First we'll cover live testing, then remote.

Live testing


4. Give out the Statement of Consent and NDA (when mentioned in the Interviewer guides). Statement of Consent — explain that we will record the screen and the user's face and use the recording only for this research. NDA — point out that all the information regarding the app and this testing is confidential. Your role here is not to come across as an expert but as a good listener, so don’t hesitate to admit your ignorance about anything.


5. After reading the Interviewer guides, give the user the 🔒 Out loud instruction document. This activity helps the user get comfortable with talking out loud. Then give the first task.

It is crucial to give only one task at a time so the user can focus. Ideally, the user will think out loud on their own; if they don’t, encourage them to do so. It's OK to ask questions like “What are you thinking?” or “What do you think…” to get them to start talking about their experience, but try not to start a conversation. Talking too much at inappropriate times or leading the user can affect what they do and say, which can ruin the research findings. From here on, observe while the participants try to complete the given task, letting them continue until either:
- they finish the task,
- they get frustrated, or
- we’re not learning anything new by watching them try to muddle through.
If the participant can't solve the task after a couple of minutes, suggest that they read it out loud again. Remind the user during the task to tell you when they think they are done.


6. After the task, give the user the 🔒 task rating table for that specific task.


7. After each task, while the user is filling out the task rating table, the facilitator prepares the prototype for the next task. Set up the prototype on the screen where the next task begins and make sure everything is working.


8. When the user finishes testing the app, ask them some general follow-up questions (e.g., the best and worst thing about using the app).


9. When you complete the session, don't forget to give the user their reward for participating.


10. Make sure that you have 10-15 minutes before the next session to prepare everything again.

Remote testing


4. Double-check that the participant has signed the recording consent and NDA. If they have, turn on the recording of the session.


5. After reading the Interviewer guides, tell the participant to open the Maze link. Tell them to follow the instructions and think out loud as they go through the tasks.
If the participant's a bit quiet, it's OK to ask questions like “What are you thinking?” or “What do you think…”. Try not to start a conversation; just remind them to think out loud. Talking too much at inappropriate times or leading the user can affect what they do and say, which can ruin the research findings.


6. Ask participants not to refresh their browser or use the browser's back navigation, as this might affect Maze's tracking of their behavior.


7. When users finish the Maze (they'll see a "Thank you" screen), ask them follow-up questions.


8. Tell users how they'll get their reward, e.g., you'll send a voucher code via email.

3. Analysis

Live testing: After each session, or after all sessions, enter the data collected from the task rating tables into the 🔒 Task rating spreadsheet.

Once you have all the data, go through the rating table or Maze's report, the observer's notes, and the video recordings.
Build your report around these (a small script for computing the first two follows the list):
1. Percentage of success for each task,
2. Ease of use for each task,
3. Pain points from your notes of participants thinking out loud,
4. First clicks or taps for each task,
5. General appeal of the app to participants,
6. Improvement suggestions based either on participants' direct ideas or your analysis of all of the above.
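
If you transcribe the rating tables (or export Maze's results) into a simple CSV, the success and ease-of-use numbers can be computed mechanically. A minimal sketch; the ratings.csv name, the 0/1 success column, and the 1-5 ease scale are our assumptions, so adapt them to your actual table:

```python
# Sketch: per-task success rate and average ease-of-use from a hypothetical
# ratings.csv with columns: participant, task, success (0/1), ease (1-5).
# The file name and column layout are assumptions, not a fixed template.
import csv
from collections import defaultdict

successes = defaultdict(list)
ease = defaultdict(list)

with open("ratings.csv", newline="") as f:
    for row in csv.DictReader(f):
        successes[row["task"]].append(int(row["success"]))
        ease[row["task"]].append(int(row["ease"]))

for task in sorted(successes):
    rate = 100 * sum(successes[task]) / len(successes[task])
    avg = sum(ease[task]) / len(ease[task])
    print(f"{task}: {rate:.0f}% success, ease {avg:.1f}/5")
```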

You can write your report as a Pages/Word document or as a presentation. You can find both templates 🔒 in this folder. If you follow the templates' structure, you won't miss any significant part of the report, such as describing the participants.

You'll be sending this report to the client, so try to write it in plain language with no UX/UI lingo, and lay out the improvement ideas that resulted from the testing. That's what clients care about - how we'll use the insights to make their product better.

Resources