Remote Usability Testing

Written by Brian Smith, Vice President of Design

Designing a product always begins with your own assumptions. Once you have a prototype, you put it in front of real users to assess whether those assumptions were valid. This process is called usability testing.


Usability testing is a form of qualitative research that is usually done in person, requires a lot of scheduling, and is limited by geographic location. Luckily, with tools such as Zoom and Google Meet, usability testing can be done remotely. This guide will help you make the most of your remote usability testing and will cover:

1. Defining Goals & Target Group

2. Identifying Testers

3. Writing Assumptions

4. Writing the Test Script

5. Conducting the Test

6. Compiling & Analyzing the Results

7. Writing Up Your Findings

Defining Goals & Target Group

Every usability test should have a well-documented goal. You might want to decrease the time it takes someone to complete a membership form, or you might want to learn how to improve customer satisfaction. Knowing this goal will help you craft the entire test. The same goes for the target group: define up front which users the change is meant to serve, because they are the people you will recruit your testers from.

Identifying Testers

Who Should You Pick?

Find people who fit the profile of the group that will be affected by the feature, and make sure they will be able to participate in the test. With remote testing, it's ideal that they have a reliable internet connection and a webcam, and that they are reasonably comfortable with the basics of computers.

How Many Testers Are Ideal?

Four to six testers should give you enough variety. Additional testers may be required for more complex sessions, but fewer than four may give you skewed results if a single unhappy tester dominates the data. Research from the Nielsen Norman Group suggests that five testers is the most cost-efficient number.

Writing Assumptions

When you design a solution without testing it, you are working from assumptions you made through research: who the users are and what goals they hope to achieve with your app. These assumptions serve as your baseline in usability tests. For example, you might believe that users will prefer logging in by connecting their Google account. By identifying this as your assumption, you now have something to test against: will users choose Google login over other options?

Writing the Test Script

Using a three-phase approach will help ensure that the test does not get derailed. The phases can be described as context gathering, scenario setting & tasks, and debrief.

Context Gathering

To begin the session, it's useful to spend some time breaking the ice. This will make the tester feel more at ease and help you understand the context in which they came into the test. Some possible questions might be:

  1. How are you doing today?
  2. Tell me a bit about yourself.
  3. What do you do for work? 
  4. Have you done this sort of testing before? 
  5. How comfortable are you with technology? 

Scenario Setting & Tasks

You will get the best results if the tester understands a bit about the app and why they might be arriving at the task you will ask them to complete. For example:

Scenario

You are in a new city and are trying to meet with a client to close an important deal. A coffee shop would be too casual for this type of meeting. You only have two hours until the meeting is supposed to begin. 

Task

Get out your phone, open the app, book a conference room, and send an invite to your client to join you. 
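
If you run several sessions, it can help to keep the script in a structured format so every tester gets the same prompts. Below is a minimal sketch in TypeScript; the shape and field names are hypothetical, just one possible way to organize the three phases:

  // A three-phase test script: context gathering, scenario setting & tasks, debrief.
  interface TestScript {
    contextQuestions: string[];
    scenarios: { scenario: string; tasks: string[] }[];
    debriefQuestions: string[];
  }

  const conferenceRoomScript: TestScript = {
    contextQuestions: ["How are you doing today?", "What do you do for work?"],
    scenarios: [
      {
        scenario: "You are in a new city and need a professional space to meet a client in two hours.",
        tasks: ["Open the app and book a conference room.", "Send an invite to your client."],
      },
    ],
    debriefQuestions: ["What, if anything, was particularly frustrating about the process?"],
  };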

Debrief

After they complete the tasks, you should gather feedback on their experience. This is the most critical part of the test script. Incomplete or poorly written questions will confuse testers or fail to elicit the detailed feedback you were hoping for.

Conducting the Test

Before testing your target audience, run a pilot with a colleague to ensure there are no misleading or missing questions. It's also a good idea to prepare the tester for the process: send an email a few days in advance that introduces yourself, describes the process and the time commitment, and briefly explains what you hope to accomplish.

With remote sessions, it's important for the tester to be comfortable and not feel like a lab rat. A tester who feels they are being tested will be tempted to do what you expect them to do rather than what they would do naturally. To help, never have anyone on the call besides yourself and the tester.

When the call begins, introduce yourself and thank the tester for their time. Then move into the context-gathering questions. Keep the tone informal and enjoyable to help develop trust.

Before jumping into the scenarios and tasks, orient the tester with a few suggestions:

  1. Encourage them to talk through their process and emotions as they are completing tasks.
  2. Remind the tester that there is no wrong way to complete the tasks.
  3. Let them know that you won’t be offended by any of their feedback and they should be as honest and blunt as possible. 

Then, get started with the first scenario and task. If the tester gets stuck at any point and isn't sure what to do, try not to lead them. Instead, ask, “What do you think you should do next?” or “How do you think you could go back to try again?”

During the test, don't write detailed notes on their process; instead, record the session so you can take notes later. You want to pay attention to what the tester is doing and quickly note the results. A simple method is to score their completion of each task on a 0-10 scale along with the amount of time it took: a successful completion is a 10, a task they couldn't complete at all is a 0, and a partial completion gets a number that represents how far they were able to get. Noting the time taken will help you understand how easy the task was for the tester to complete.
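
If you keep those scorecards in a simple structured form, compiling them later takes only a few lines of code. Here is a minimal sketch in TypeScript; the row shape and the sample numbers are hypothetical:

  // One row per tester per task: completion score (0-10) and time in seconds.
  type ScorecardRow = { tester: string; task: string; score: number; seconds: number };

  const rows: ScorecardRow[] = [
    { tester: "T1", task: "Book a conference room", score: 10, seconds: 95 },
    { tester: "T2", task: "Book a conference room", score: 6, seconds: 210 },
    { tester: "T3", task: "Book a conference room", score: 0, seconds: 300 },
  ];

  // Average completion score and time for one task across all testers.
  function summarize(task: string) {
    const forTask = rows.filter((r) => r.task === task);
    const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
    return {
      avgScore: avg(forTask.map((r) => r.score)),
      avgSeconds: avg(forTask.map((r) => r.seconds)),
    };
  }

  console.log(summarize("Book a conference room")); // roughly { avgScore: 5.33, avgSeconds: 201.67 }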

After completing the tasks, it’s time to debrief. Ask follow-up questions about how they felt during the tasks, what they wish they could have done but couldn’t, what (if anything) was particularly enjoyable about the process, or what (if anything) was particularly frustrating. 

Compiling & Analyzing the Results

For each test, you'll have a scorecard and a video recording of the session. Go back through the video and write down any salient points, then categorize those notes into topics. At this point, you should begin to notice themes in the results; consider the “why” behind them. For example, you might notice that 60% of the testers got stuck at the checkout page. That's the time to ask why it is happening and what in the UX could be causing the issue.
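
Counting how many distinct testers hit each topic makes those themes easier to spot, and keeps one vocal tester from inflating a theme. A short sketch in TypeScript, with hypothetical note data:

  // Each salient point from the recordings is tagged with a topic during categorization.
  const notes: { tester: string; topic: string }[] = [
    { tester: "T1", topic: "checkout" },
    { tester: "T2", topic: "checkout" },
    { tester: "T2", topic: "navigation" },
    { tester: "T3", topic: "checkout" },
  ];

  // Tally the distinct testers affected per topic.
  const testersPerTopic = new Map<string, Set<string>>();
  for (const n of notes) {
    if (!testersPerTopic.has(n.topic)) testersPerTopic.set(n.topic, new Set());
    testersPerTopic.get(n.topic)!.add(n.tester);
  }
  for (const [topic, testers] of testersPerTopic) {
    console.log(`${topic}: ${testers.size} tester(s)`); // e.g. "checkout: 3 tester(s)"
  }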

Writing Up Your Findings

To turn the findings into actionable items, write a summary. Include details on the hypothesis you went in with, how the test was conducted, data points from the results, and the conclusions you drew from them. This will help those who were not involved in the process understand the actions that should be taken as a result.

Conclusion

This isn't a one-and-done process. After your first test, make some revisions to the UX and conduct another round; this way you'll be able to validate that the changes you made were warranted. Usability testing removes unknowns and assumptions from your design process, making product development more efficient. Ultimately, you will save the client time and money and deliver a higher-quality product that their users will actually enjoy.

Written by Brian Smith

As the Vice President of Design at FullStack Labs, I lead a team that specializes in user experience and interface design, through rapid high-fidelity prototyping, user flow creation, and feature planning. I have over a decade of experience designing complex software applications with a focus on user-centered design principles. Prior to FullStack Labs I was the Creative Director at Bamboo Creative and the Director of Design at Palmer Capital. I hold a B.A. in Design from UCLA.
