A quantitative method that can be both generative and evaluative.
Intro
A user survey is a research tool used to collect data from a specific group of users to gain information and insights into various topics of interest. It usually consists of a series of questions that users can answer online, on paper, or during an interview. If you're aiming for a mix of quantitative and qualitative insights from a larger pool of users, gathered in a timely fashion, a user survey is the right tool in your belt.
Depending on your goal, there are two main types of surveys:
- User persona survey: Used when you want to do market research, understand your (target) customers better, and gather information about users’ demographics, behaviors, and attitudes. You can use the same pool of participants to ask what your audience likes or dislikes about your competitors. This type of survey falls under the category of generative research and is usually conducted in the discovery phase of the project lifecycle. Combine surveys with user interviews to add depth to your insights.
- User satisfaction survey: Allows you to quantitatively measure user sentiment while also capturing qualitative insights for deeper understanding. You can add one of the standardized usability measures (covered later in this chapter) to your own pool of questions. Another tip is to include questions that surface areas for improvement (“If you could change one thing about this product, what would it be?”).
Execution
There are a lot of tools you can use to conduct the survey. When doing research for a product that is not live yet, our tool of choice is Typeform. If you are interested in user feedback on a live product and the product already supports tools like HotJar, Intercom, etc., use those instead!
- Introduction: Always start with a brief intro explaining the purpose, how long it will take, and how the data will be used. Note if the survey is NDA protected.
- Demographics: These could range from age and gender to occupation and location. Aim for what is relevant for your target persona and their relationship with the product or industry.
- Survey questions: This is the meat and potatoes of your survey. Craft questions that align with your research objectives. Be clear and concise to avoid confusion. Mix up question types—multiple choice, Likert scales, and open-ended questions—to garner both quantitative and qualitative data. Add standardized usability metrics to the mixture to benchmark your product.
- Closing: Wrap up your survey by thanking the participants for their time and input. Provide them with a space for additional comments or questions they may have. Phrases like "Is there anything I forgot to ask?" or "Feel free to share any additional thoughts you may have" can be useful here. This not only closes the loop but may also provide you with insights you hadn't initially considered.
Types of Questions
- Multiple-choice questions: Great for quantitative data and easier analysis. Offer options, but don't forget the 'Other' category.
- Likert Scale Questions: These ask participants to indicate their level of agreement or satisfaction with statements. Ideal for attitude or opinion data.
- Open-Ended Questions: As open-ended questions are time-consuming to analyze, reserve them for more nuanced insights, such as understanding the reasons behind a specific behavior or sentiment.
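If you draft your question bank outside the survey tool, a simple data structure can help you keep the mix of question types balanced. Below is a minimal sketch in Python; the Question class and its fields are illustrative assumptions, not any survey tool's API.

```python
from dataclasses import dataclass, field

# Illustrative structure for drafting a mixed question set before it goes
# into a survey tool. Field names here are assumptions, not a real API.

@dataclass
class Question:
    text: str
    kind: str                                          # "multiple_choice", "likert", or "open"
    options: list[str] = field(default_factory=list)   # empty for open-ended questions

questions = [
    Question(
        text="Which of these tools do you currently use?",
        kind="multiple_choice",
        options=["Tool A", "Tool B", "Tool C", "Other"],   # keep the 'Other' category
    ),
    Question(
        text="I found the product easy to use.",
        kind="likert",
        options=["1", "2", "3", "4", "5"],                 # 1 = strongly disagree, 5 = strongly agree
    ),
    Question(
        text="If you could change one thing about this product, what would it be?",
        kind="open",
    ),
]

# Quick sanity check that the set covers all three question types.
print({q.kind for q in questions})
```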
Tips and Tricks
- Keep it Short: The longer the survey, the greater the bounce rate. Aim for a 5-10 minute completion time.
- Randomize Answer Order: Randomize the order of answer options whenever possible to minimize response bias.
- Mobile-Friendly: Make sure the survey is easy to complete on mobile devices.
- Pilot Test: Always conduct a small-scale test run to catch errors or confusing questions.
Data analysis
Before analyzing the data, examine the answers for outliers or anomalies. If you find that some users wrote gibberish as a response, finished the survey much faster than the rest, or selected the first option on every multiple-choice question, you might want to remove their answers.
On top of that, check if users misunderstood the questions. You don’t want faulty answers driving your insights and design in the wrong direction.
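As a rough illustration, here is how such a cleaning pass could look in Python with pandas. The file name and the column names (completion_seconds, the q1-q5 answer columns) are hypothetical and will depend on your survey tool's export format.

```python
import pandas as pd

# Hypothetical export: one row per respondent, answer columns q1..q5,
# plus the time each respondent took to finish the survey.
responses = pd.read_csv("survey_export.csv")

# Flag respondents who finished suspiciously fast (here, in less than a
# quarter of the median completion time; the exact threshold is a judgment call).
too_fast = responses["completion_seconds"] < 0.25 * responses["completion_seconds"].median()

# Flag straight-liners who picked the same option on every multiple-choice question.
choice_columns = ["q1", "q2", "q3", "q4", "q5"]
straight_lined = responses[choice_columns].nunique(axis=1) == 1

# Review the flagged rows manually before dropping them, then keep the rest.
flagged = responses[too_fast | straight_lined]
cleaned = responses[~(too_fast | straight_lined)]
print(f"Flagged {len(flagged)} of {len(responses)} responses for review")
```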
Once you have cleaned the data, you can start analyzing it. Dive deeper: filter and compare results by demographics, length of product use, and other relevant segments. Code verbal responses, create charts and graphs, calculate statistics, or do whatever helps you understand the survey results.
The goal is to identify patterns and trends in the data that let you draw conclusions about the participants' opinions, habits, satisfaction, attitudes, and experiences.
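A short segmentation pass in the same spirit might look like this; the column names (age_group, months_using_product, satisfaction) are again made up for illustration.

```python
import pandas as pd

# Assumes a cleaned export with hypothetical columns: age_group,
# months_using_product, and satisfaction (a 1-5 Likert rating).
cleaned = pd.read_csv("survey_cleaned.csv")

# Compare average satisfaction across demographic segments.
print(cleaned.groupby("age_group")["satisfaction"].agg(["mean", "count"]))

# Cross-tabulate how long people have used the product against how satisfied
# they are, shown as row percentages.
print(pd.crosstab(cleaned["months_using_product"],
                  cleaned["satisfaction"],
                  normalize="index"))
```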
Quantifying and benchmarking usability
Yes, you can (and should) measure the usability of your product. And yes, there are a lot of good and reliable measures that can help you do that usability health check better than the NPS.
There are several good reasons to use usability metrics:
- They can help you compare different design options or products.
- They can help you track usability progress over time.
- They are reliable and valid as most of them are standardized.
- They will allow you to compare your results against the industry standard as they are norm-referenced.
Some usability metrics are geared toward evaluating specific aspects like user flows, designs, or features. These include quantitative measures such as the Single Ease Question (SEQ), error rate, and task completion time, which are elaborated upon in the usability testing chapter.
On the other hand, we have more complex psychometrically standardized questionnaires. These are tailored to measure the effectiveness, efficiency, and user satisfaction of the product as a whole, serving as ideal instruments for periodic health checks or industry benchmarking.
Here are some of the questionnaires you can pick from:
- SUS: System Usability Scale is one of the most widely used usability metrics and works best with complex systems and applications.
- SUPR-Q: Standardized User Experience Percentile Rank Questionnaire measures usability, trust, loyalty and appearance and works great on websites.
- UMUX: Similar to the SUS, but shorter and targeted toward the ISO 9241 definition of usability (effectiveness, efficiency, and satisfaction).
- UEQ: User Experience Questionnaire is available in Croatian, comes in a longer or shorter form, and measures attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty.
System usability scale
The System Usability Scale (SUS) is a widely accepted scale, used and validated in a variety of contexts, making it a reliable and valid instrument for measuring usability. It provides a quick and easy way to gather user feedback and assess the overall usability of a product or system.
The SUS consists of a 10-item questionnaire that asks users to rate their level of agreement with various statements about the usability of the product or system.
The statements are designed to measure different aspects of usability, such as ease of use, learnability, and satisfaction. The responses are given on a 5-point Likert scale, where 1 represents "strongly disagree" and 5 represents "strongly agree."
System usability scale questions:
- I think that I would like to use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
The SUS is unique in that it provides a single score, called the SUS score, that represents the overall usability of the product or system. To calculate it, each rating is first converted to a 0-4 score: for the odd-numbered (positively worded) statements, subtract 1 from the rating; for the even-numbered (negatively worded) statements, subtract the rating from 5. The converted scores are then summed and multiplied by 2.5. The final score ranges from 0 to 100, with higher scores indicating better usability.
You can interpret a SUS score by comparing it to a benchmark or to a previous score for the same product. The benchmark average is 68: scores above 68 indicate above-average usability, while scores below 68 suggest there are usability issues that need to be addressed.
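As a worked example, here is the scoring arithmetic in Python. The ratings for the single respondent are made up for illustration.

```python
def sus_score(ratings: list[int]) -> float:
    """Compute a SUS score from ten 1-5 ratings, given in questionnaire order."""
    if len(ratings) != 10:
        raise ValueError("SUS needs exactly 10 ratings")
    total = 0
    for index, rating in enumerate(ratings):
        if index % 2 == 0:
            total += rating - 1     # odd-numbered (positively worded) statements
        else:
            total += 5 - rating     # even-numbered (negatively worded) statements
    return total * 2.5              # scale the 0-40 sum to a 0-100 score

# Made-up ratings for one respondent, in the order the statements appear above.
ratings = [4, 2, 5, 1, 4, 2, 5, 2, 4, 2]
score = sus_score(ratings)          # 82.5 for this example
print(f"SUS score: {score} ({'above' if score > 68 else 'below'} the 68 benchmark)")
```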
It's also important to look at the individual item scores and the patterns of the responses, in addition to the overall score. This can help identify specific areas of the product or system that need improvement.