A Deep Dive into User Research Methods
User research plays a crucial role in shaping any successful product or service. It keeps the user at the heart of the experience by tailoring it to their needs, and in turn provides a real advantage over competitors. But with a growing arsenal of research methods out there, it can be a challenge to know which is best to use, and when.
This guide offers an overview of the fundamentals for each of the most commonly used methods, providing direction on when to use them — and more importantly, why.
- the origins of user research
- discovery and evaluative research
- quant and qual, and the difference between them
- core methodologies:
- user interviews
- ethnography and field studies
- surveys and questionnaires
- analytics and heatmaps
- card sorts and tree tests
- usability studies
- further reading and resources
- key takeaways
The Origins of User Research
Product designers and engineers have incorporated user feedback into their process for centuries. However, it wasn’t until 1993 that the term “user experience” (UX) was coined by Don Norman during his time at Apple.
As the discipline of UX evolved and matured, practitioners began to use investigative research techniques from other fields, such as science and market research. This enabled decisions to be informed by the end user, rather than the design teams’ assumptions, laying the groundwork for UX research as we know it today.
That’s a quick rundown of the origins. Now let’s dive into some research frameworks.
Discovery and Evaluative Research
User-centered design means working with your users all throughout the project — Don Norman
Broadly speaking, user research is used to either discover what people want and need or evaluate if ideas are effective. The methods to achieve these two distinct outcomes can be loosely divided into two groups.
Strategize: Discovery Research
Methods that help to answer unknowns at the beginning of a project can be referred to as Discovery Research. These methods range from reviewing existing reports, data and analytics to conducting interviews, surveys and ethnographic studies. These methods ensure that you have a solid understanding of who your user is, what they need and the problems they face in order to begin developing a solution.
Execute and Assess: Evaluative Research
Once a clearer picture of the end user and their environment has been established, it’s time to explore possible solutions and test their validity. Usability studies are the most common method employed here. Evaluative research provides you with the knowledge you need to stay focused on the user and their specific requirements.
|Discovery Research Methods|Evaluative Research Methods|
|---|---|
|User interviews|User interviews|
|Field studies|Surveys and questionnaires|
|Surveys and questionnaires|Web analytics and heatmaps|
|Web analytics and heatmaps|Card sorting and tree tests|
||Usability studies|
Quant and Qual, and the Difference Between Them
Although every design problem is different, it’s generally agreed that a combination of both qualitative and quantitative research insights will provide a balanced foundation with which to form a more successful design solution. But what do these pronunciation-averse words mean?
Quantitative (statistical) research techniques involve gathering large quantities of user data to understand what is currently happening. This answers important questions such as “Where do people drop off during a payment process?”, “Which products were most popular with certain user groups?” and “What content is most/least engaging?”
Quantitative research methods are often used to strategize the right direction at the start of a project and assess the performance at the end using numbers or metrics. Common goals include:
- comparing two or more products or designs
- getting benchmarks to compare the future design against
- calculating expected cost savings from some design changes
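To make the benchmarking goal concrete, here’s a minimal Python sketch of the kind of metric a quantitative study might produce: step-by-step drop-off through a checkout funnel. The step names and counts are invented purely for illustration.

```python
# Hypothetical page-view counts for each step of a checkout funnel
# (the step names and numbers are invented for illustration).
funnel = [
    ("Cart", 1000),
    ("Shipping details", 620),
    ("Payment", 405),
    ("Confirmation", 380),
]

def drop_off_rates(steps):
    """Percentage of users lost between each pair of consecutive steps."""
    rates = []
    for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
        lost = (prev_count - count) / prev_count * 100
        rates.append((f"{prev_name} -> {name}", round(lost, 1)))
    return rates

for transition, pct in drop_off_rates(funnel):
    print(f"{transition}: {pct}% drop-off")
```

In practice these counts would come from your analytics tool rather than a hard-coded list, but the resulting numbers are exactly the sort of benchmark you’d compare a redesign against.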
Qualitative (observational) research techniques involve directly observing small user groups to understand attitudes, behaviors and motivations. This is where we begin to understand why something is happening and how to solve a problem.
You can optimize everything and still fail. That’s where qualitative approaches come in. By asking “why”, we can see the opportunity for something better beyond the bounds of the current best. ― Erika Hall
Qualitative research methods are also used to strategize the right direction at the start of a project, and to inform design decisions throughout the ideation process. Common goals include:
- to uncover trends in thoughts and opinions
- understand a problem more deeply
- to develop a hypothesis for a quantitative research study
So that’s enough of the background behind the methods. Let’s dive into the methods themselves. It’s worth noting that, since every project is different, there’s no quick way of strictly stating which method is best for what. However, pros and cons have been listed for each.
1. User Interviews
Qualitative | Discover/Evaluate
Interviews allow you to ask questions to help see things from the participants’ perspective. They are usually recorded and later analyzed to find out what the beliefs, attitudes and drivers of users are, alongside uncovering new considerations to aid with ideation.
Stories are where the richest insights lie. Your objective is to get to this point in every interview. — Steve Portigal
Interview length, style and structure can vary depending on what you’re trying to achieve, and the access to and availability of participants. The following are some different types of interviews.
One-to-one interviews are often conducted in a lab or coffee shop, but can be undertaken almost anywhere with a little preparation. In-person interviews are preferable to remote (via phone or video) as they offer additional insights through body language. Sessions are conducted with questions that loosely follow a discussion guide. This allows you to uncover new learnings around an objective and not get sidetracked.
Focus groups are used to gain a consensus from a group of 3–10 representatives of a target audience when you’re short on time or available participants. Focus groups take the form of discussions and exercises and are a good way of assessing what people want from a product or service and their opinions on things. They’re not recommended for evaluating interface usability, due to their lack of focus and the potential for groupthink bias.
Contextual inquiry interviews are the holy grail of interview methods. They’re conducted within the participants’ everyday environment whilst they go about their daily activities. A researcher can observe a participant and discuss what they did, and why, whilst the activities take place. Unlike other interviews, the researcher usually summarizes the findings back to the participant at the end, offering them a chance to give final corrections and clarifications. This method is used to generate highly relevant and reliable insights from real situations, but it can be very time consuming.
For more on user interviews, there are some great resources on the Interaction Design Foundation website.
2. Field Studies
Qualitative | Discover
Field studies involve observing people as they interact with a product, service, or each other, in their natural working or living environment (rather than in a lab) to better understand user behavior and motivations in context. These studies are usually conducted over longer periods of time than most other methods, recording extensive field notes for later analysis.
Ethnographic research involves researchers actively participating within a group setting, becoming the subject themselves. This method is particularly useful when studying a target audience that is culturally or socially different from your own, and it can uncover lots of unknowns and important considerations.
Direct observation involves passively observing from a distance (like a curious fly on a wall), allowing researchers to uncover problems and workarounds in user journeys and flows (such as retail store layouts), and also allowing for future improvements.
User logs involve diary studies and video journals, and are sometimes referred to as the “poor man’s field study”. They allow the user to generate the data for you by recording their experiences with the focus of the study at a specific time each day over a period of time. The real-time insights provided can be useful for understanding long-term behaviors such as habits, workflows, attitudes, motivations, or changes in behavior.
3. Surveys and Questionnaires
Quantitative | Discover/Evaluate
Surveys are a quick, cost effective and relatively easy way to get data insights on your existing, lapsed or prospective users. They can be used for a variety of purposes, such as gaining quantitative feedback on a new feature, or trends in attitudes and beliefs from your target audience.
As with interview guides, there’s something of an art to writing survey questions that collect the right amount of useful and focussed data required to meet the study objective. Surveys should be short and simple enough to avoid drop-off and guard against misinterpretation or confirmation bias from leading or confusing questions. Participants also need to be screened to make sure the study is completed by the right audience.
Surveys can be deployed in various ways to gather data from your participants.
Email surveys can be put together very quickly using online tools such as Survey Monkey, Google Forms and Typeform. They rely on you already having an email list for recruiting participants, which tends to produce higher response rates, as recipients are likely to be engaged with the product or service. Emails have the added benefits of allowing the researcher to better target and control the sample, and of letting participants respond at their convenience.
Intercept surveys and pop-ups are a good way of reaching out to existing and prospective users when you don’t have a customer database. This is a useful method for gaining quick insights whilst in the context of the product experience, such as through a customer satisfaction poll. On the downside, this method is likely to provide low completion rates and can lead to a negative overall experience, as you’re interrupting users from the task at hand.
In-person surveys rarely gather enough responses to be statistically useful on their own, and they remove anonymity, but they can be useful for wrapping up and enriching other studies. For example, short surveys can be combined with a usability study to measure the perceived ease of completing tasks. When closely monitored, these surveys are prone to bias, as the participant may not wish to offend the facilitator.
For more on creating surveys, see Better User Research Through Surveys.
4. Web Analytics and Heatmaps
Quantitative | Discover/Evaluate
Analytics are an incredibly powerful and often inexpensive research tool. They’re a great starting point for seeing how people are actually using your product or service, and for benchmarking and measuring improvements.
Web analytics, in the form of free tools such as Google Analytics (GA), allow you to track how many visitors are coming to your site, where they’re coming from, what they do when they get there, how long they stay, how many complete certain tasks, and so on. You can observe common behavioral patterns, and adapt GA to track goals and events specific to the business or project, such as completion rates on forms. If GA hasn’t been set up yet, basic customer insights such as referral sources can be gathered from free online tools such as Similar Web.
Many modern platforms, such as Adobe Analytics, also allow for more complex measurement of funnels and detailed audience segmentation.
Heatmaps are a fairly recent addition to the UX analytics toolkit. Once set up, they provide a colorful, graphical representation of on-page behavioral data. Paid tools such as Hotjar and Crazy Egg also include other features, such as session replay, which can be set up to anonymously record short sessions within a particular journey, from the user’s perspective — such as the checkout process. Collectively, these tools can help to build a fuller picture of what’s happening.
For more on analytics, see The ultimate guide to Google Analytics for UX designers.
5. Card Sorting and Tree Tests
Quantitative/Qualitative | Evaluate
Card sorting is a research technique commonly used to inform or evaluate the information architecture of a website or app by allowing people to categorize items into groups. It’s a great way to get insights into how users might expect content to be organized and labelled, helping to define navigation and filter sets. Card sorts are usually conducted with around 20 different people either remotely, using software such as Optimal Workshop, or in person, allowing for additional insights through probing questions.
When planning a study, you need to decide between an open or closed sort.
In an open card sort, every participant receives the same set of cards, but they create and label their own categories for grouping them. The results can then be analyzed for common patterns and considerations. For example, in a card sort about music, one user may sort artists by genre, and another by time period.
In a closed sort, you provide participants with both a list of items and fixed categories for them to sort the cards amongst. Closed sorts often follow open sorts to validate the proposed categories.
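One common way to analyze open card sort results is a pairwise agreement count: how many participants grouped each pair of cards together. Tools like Optimal Workshop compute this for you, but here’s a minimal sketch of the idea with invented data:

```python
from collections import Counter
from itertools import combinations

# Invented open card sort results: each participant grouped the same six
# music cards, but chose their own category names (the dict keys).
sorts = [
    {"Rock": ["Beatles", "Queen"],
     "Jazz": ["Miles Davis", "Coltrane"],
     "Pop": ["ABBA", "Madonna"]},
    {"60s-70s": ["Beatles", "Queen", "Miles Davis"],
     "Modern": ["ABBA", "Madonna", "Coltrane"]},
    {"Bands": ["Beatles", "Queen", "ABBA"],
     "Solo": ["Miles Davis", "Coltrane", "Madonna"]},
]

def agreement(results):
    """Count how many participants placed each pair of cards in the same group."""
    pairs = Counter()
    for participant in results:
        for group in participant.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Pairs grouped together by every participant are strong category candidates.
for pair, count in agreement(sorts).most_common(3):
    print(pair, count)
```

Pairs that nearly all participants group together suggest categories users will expect; pairs with low agreement flag content whose placement needs a closed sort or tree test to validate.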
Tree tests are an alternative to closed sorts that are particularly good for validating menus, as the test format closely resembles a navigation interface. This helps to ensure people will easily find content using your proposed categories.
6. Usability Studies
Qualitative | Evaluate
Usability tests (or user tests) are one of the most common types of user research, offering invaluable insights into how an existing or proposed product is performing at fulfilling important tasks with real users. Participants are observed whilst trying to achieve a set of tasks provided by a facilitator. Sessions are usually conducted in person in a controlled “lab” environment, with findings recorded and documented by an observer for later analysis.
After you’ve worked on a site for a few weeks, you can’t see it freshly anymore. If you want a great site, you’ve got to test. — Steve Krug
Test reports summarize findings with supporting photos, quotes and video clips. This makes the method extremely effective at convincing stakeholders to approve recommended changes, commission further research, or release the product. The disadvantages are that the environment is artificial, and results can be unreliable when testing a partially finished prototype.
In-person usability tests are usually preferred, allowing facilitators to read body language and know when to ask follow-up questions. However, when timelines, budgets and distance don’t permit, remote tests can be run using screen-sharing software to observe participants performing the same set of tasks. You just have to hope for no technical difficulties.
Unmoderated online tests involve participants undertaking tests alone using online tools such as usertesting.com. Although generally not encouraged, this method can be useful when testing a small element or minor change. Most tools allow follow-up questions, but these must be predefined, and there’s no ability for real-time support or clarification.
Sample sizes need to be considered. User tests are great at uncovering usability issues, even at early paper prototype level, by showing your sketches to just two or three users. As the design progresses, high-fidelity, clickable prototypes should be tested with five to eight users across a range of devices.
Although usability testing is predominantly considered a qualitative research method, quantitative insights can be gathered for benchmarking by measuring task success rates and completion times. These studies require larger sample sizes (20+ users) and tighter scripts.
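Because benchmark samples are still fairly small, a success rate on its own can mislead; it’s better reported with a confidence interval. Here’s a sketch using the adjusted Wald (Agresti-Coull) interval, an approach recommended for small usability samples in Quantifying the User Experience; the participant counts are invented:

```python
import math

def success_rate_ci(successes, trials, z=1.96):
    """95% adjusted Wald (Agresti-Coull) interval for a task success rate."""
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Invented benchmark: 17 of 20 participants completed the task.
low, high = success_rate_ci(17, 20)
print(f"Observed 85% success; 95% CI roughly {low:.0%} to {high:.0%}")
```

The wide interval from just 20 users is exactly why benchmarking studies need larger samples than discount usability tests.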
Usability isn’t the only factor to consider when testing a product. Other tests have been developed to fulfill other needs, such as impression or concept tests, which assess both behavior and opinion. These can be useful for measuring the aesthetic appeal and impact of branding and content on your audience. A/B or multivariate tests are also popular for measuring preference between anything from UI patterns and headlines to button colors and imagery.
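As a rough illustration of how A/B test results are commonly assessed, here’s a sketch of a two-proportion z-test on invented conversion counts (in practice an A/B testing platform or a stats library would handle this for you):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing two conversion rates, using a pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented A/B test: variant B converted 230/2000 visitors vs A's 180/2000.
z = two_proportion_z(180, 2000, 230, 2000)
print(f"z = {z:.2f}")  # |z| above ~1.96 suggests significance at the 95% level
```

The key practical point is sample size: small differences between variants need thousands of visitors per variant before the result rises above noise.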
Key Takeaways

Before starting any UX project, put some time aside to plan which methodologies, techniques and tools are most likely to get the best results. Less can often be more, so don’t overburden yourself, and keep the study within your available timeline, budget and resources.
It’s all about finding the right balance, which comes with experience. For example, if it’s a new product or complete redesign, try to include some time and budget for quantitative and qualitative methods in both the early discovery phase and evaluative phase later in development.
Sometimes constraints just can’t be overcome and all you have at your disposal are guerrilla tests with small sample sizes. Remember, even a few research insights are better than pure guesswork, and little iterations over time can make a big difference to the experience.
Research is a time consuming and complex process often full of conflicting opinions. Don’t be hard on yourself if you don’t get it perfect first time. Put those learnings towards the next research project plan.
Regardless of what you may hear, nothing is set in stone; methods and techniques can be combined and altered to suit the project and problem you’re solving. Just remember these are tried and tested by many experienced professionals, so do your best to ensure you understand the whys and hows before experimenting.
Finally, keep in mind who you’ll be presenting the research findings to, and what they’ll be using them for. Make sure it’s as relevant and concise as possible.
Recommended Reading and Resources
There are heaps of amazing free articles out there showing you how to conduct each method with tried and tested techniques and considerations, some of which have been included above.
There’s a growing collection of books dedicated to user research. Here are a few of my top picks.
- The User Experience Team of One: A Research and Design Survival Guide, by Leah Buley
- Interviewing Users: How to Uncover Compelling Insights, by Steve Portigal
- Just Enough Research, by Erika Hall
- Don’t Make Me Think, Revisited: A Common Sense Approach to Web Usability, by Steve Krug
- 100 Things Every Designer Needs to Know about People, by Susan Weinschenk
- Quantifying the User Experience: Practical Statistics for User Research, by James R. Lewis and Jeff Sauro
- Observing the User Experience: A Practitioner’s Guide to User Research, by Elizabeth Goodman, Mike Kuniavsky, and Andrea Moed