21 Reasons Research Fails (and how to overcome these challenges)
I spend 90% of my working time with clients and students helping them learn, plan, recruit, conduct, analyze, synthesize, present, and act on user research learnings. There are a lot of considerations, and all of them have to be navigated and resolved to achieve success. It’s no wonder research isn’t always successful!
The major categories of failure are lack of alignment, planning, timing, confidence, and action. Each category is broken out below, along with suggestions for mitigating all 21 pitfalls. Set yourself and your team up for success!
1. The mindset or mental model wasn't aligned
1. The stakeholders aren’t clear that user research is an investment. Try to make sure the following is true before moving forward with a study:
Stakeholders don’t already know what we’re trying to learn, and can’t reasonably infer it from other sources
Stakeholders have the time, and the will, to act on the learnings
The stakes are high enough that stakeholders would regret erring on the side of rapid iteration
2. A stakeholder was using the study to prove a point, or worse, attempting to use it as a weapon.
You should be able to pick up on this very early if you are using our Sample Stakeholder Kick-Off Questions tool.
Remember, you have a responsibility to educate your stakeholder on “ethical research.” Learn more here >>
3. The team did not approach the study with an open mind.
Did they desire a specific outcome or end goal and become fixated on it?
Educate them: “We conduct research to learn, and it’s a continuous learning process. We do not conduct research to be right.”
4. Stakeholders were not included or engaged in the research process.
If they don’t have time to provide feedback on your plan, recruiting criteria, and discussion guide, take it as a sign they are not truly committed or interested. Then politely walk away. Why? Because research cannot, and should not, be conducted in a vacuum!
Several teams require stakeholder engagement throughout the research process, including mandates to observe or take notes in at least two sessions. Consider adopting this practice. It works! Stakeholders also learn firsthand how valuable it is to participate, how much goes into a study, and the complexities involved. NOTE: Lack of participation is also a sign of a lack of UX Maturity. Learn more here >>
2. Proper planning didn't take place
5. The research study topic does not support key business goals.
When gathering initial input about what the team wants to learn, why, and how the learnings will be applied, make sure to find out how the study goals support the organization's larger goals, OKRs, KPIs, etc.
It’s absolutely critical that UXR goals ladder up to broader initiatives. If they don’t, politely share your concerns and then walk away, because the study, and its results, are unlikely to get traction. Business goal alignment generates interest and buy-in. You need buy-in to succeed!
6. The right research question(s) were not identified.
Is a documented and agreed-upon research plan really necessary? YES, IT ABSOLUTELY IS!
The process of authoring the plan, in collaboration with your stakeholders, will tease out the right questions, the necessary context, and the assumptions and hypotheses to test; capture collective knowledge about the topic; and surface other key aspects that will set you and the study up for success. It will also help the team reach a consensus on priorities and ensure the study's goals align with stakeholders' goals. Learn more here >> and here >>
7. The study questions were too large or small, not specific enough, or were already researched.
Remember Goldilocks? Yea, there is a sweet spot in terms of size and scope.
Make sure your research questions are specific, actionable, ethical, tied to business goals, and the right size for the time frame and confidence desired.
8. You explored a study topic that is not answerable with UXR.
Not everything is a user research question!
And not every project is worth investing in either! Learn more here >>
9. The selected methodology was not appropriate for the topic, culture, or confidence level desired.
Mastering the art of conducting live interviews consistently, and gathering reliable results in a culturally relevant manner, should always be a researcher’s first significant performance goal when it comes to methods.
Once you know how to ask solid questions and hone your improv skills, you’ll be in a much better position to choose the right tool or platform for the needs at hand. Nailing down your live moderation skills first will pay off in spades. TRUST ME.
Never focus on mastering tools or platforms before you learn to conduct live interviews like a pro. Why? Here are six concrete reasons.
10. The roadmap or business objectives shifted after study planning started.
Let's face it, nothing about user research is static. Not our competition, the people we study, teams we support, what we design, how we collaborate, etc. Your plan WILL and SHOULD evolve in response to the decisions made and the learnings acquired along the way.
Research plans are iterative. A plan should be revisited and updated several times over the course of the study; it is a living, breathing document. When you and your team revisit and update the plan consistently, changes and challenges surface in real time.
See 6-9 above.
3. The timing was off
11. The study results came in too late.
The study's key milestones and dates should be included in the research plan to make sure this doesn't occur. Check that the plan and timeline were revisited throughout the course of the study.
When you and your team revisit and update the plan consistently throughout the journey, any timing or other issues surface in real time (which will save you considerable time in the long run).
12. The team did not set aside time to act on the learnings.
I really hate hearing about this one!
People, we do not do research just to check off a box! See points 1 and 11 above :)
4. The team isn't confident about the research
13. Stakeholders are often skeptical about research results when they aren’t included in the process.
It's MUCH harder for them to say “you asked the wrong people” or “the wrong questions” when they are involved and have provided feedback and confirmation along the way.
Inclusion equals buy-in. Buy-in translates into confidence. Confidence translates into understanding and action. Our end goal is to move our teams into action - with confidence!
14. The study didn’t recruit the right people or gather input from the right number of people.
Generative research typically includes 7-10 people per segment (at minimum).
Evaluative research typically includes 4-5 people per segment.
Quantitative research requires a larger sample to achieve statistical significance. Sometimes this is important, other times not. Confirm this in the planning stage so you choose the right approach to meet the confidence level desired (a rough illustration follows below).
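Not from the article, but as a back-of-the-envelope illustration of why the quantitative case needs larger numbers: assuming the study is a simple survey estimating a proportion (my assumption, not the author's), the classic sample-size formula looks like this.

```python
import math

# Illustrative sketch only: classic sample-size estimate for a proportion,
# assuming a simple random sample. z = 1.96 corresponds to a 95% confidence
# level, p = 0.5 is the most conservative guess for the underlying proportion,
# and margin_of_error is the +/- you are willing to tolerate.
def sample_size_for_proportion(margin_of_error=0.05, z=1.96, p=0.5):
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion(0.05))  # ~385 responses for +/-5% at 95% confidence
print(sample_size_for_proportion(0.03))  # ~1068 responses for +/-3% at 95% confidence
```

The per-segment counts above for generative and evaluative work are about spotting patterns and reaching saturation, not statistical significance, which is why they can stay so much smaller.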
15. The study didn’t gather input in a credible way.
Several factors may sway stakeholders' confidence in both primary and secondary research.
To troubleshoot, consider the research plan, the recruit, the discussion guide, the artifacts tested, the activities within sessions, the facilitation, the analysis and synthesis, whether bias was introduced, and other contributing factors.
16. The researcher was not inclusive or credible.
Part of a researcher's role is akin to herding a bunch of cats (the stakeholders)! It is imperative stakeholders have a strong voice during the process and truly feel heard.
Another culprit may be that the data collected was not analyzed or considered thoroughly and evenhandedly, or that other biases were introduced that tainted it.
See 6, 14, and 15 above.
17. Stakeholders lack UX Maturity.
Consider your core stakeholders' previous experience with user research (positive, negative, ethical, actionable and otherwise) and tailor your collaboration and study approach accordingly.
Reflect on the UX of the research process itself, and how easy or difficult it was to collaborate with you as the researcher. YEA, THAT'S KINDA META. But please think about it, because the UX of working with you really makes a difference.
5. Little or no action was taken after the study
18. The learnings did not reveal any strong patterns.
This is typically one of three problems: the wrong questions were asked, the wrong assumptions were made (before or during research planning), or the wrong people were recruited to participate.
Revisit your research plan and comb through every single word. Find the aspects you may not have questioned, or been able to substantiate, and dig deeper.
You should see at least some broad patterns from the first few participants in an evaluative study and from the first four or five in a generative study. If you are not seeing them, explore the three likely causes above. Do this right away. DON'T WAIT.
19. The learnings were not shared in a culturally meaningful way.
Ask yourself if the deliverables were boring to consume or difficult to decipher. Think of the UX of working with you as a researcher (from your stakeholders' POV)!
Were your deliverables over- or underwhelming in terms of depth, design, or format?
Were your stakeholders included in the creation of the final deliverables? Did they review and provide feedback? Did you pilot the share-out?
Also, consider how the deliverables compare to previous research that was well-received. Look in the archives and find examples to emulate.
20. The data was not triangulated or substantiated in a significant way (if at all).
Data triangulation is key to moving teams, especially the naysayers, into action!
Triangulation demonstrates that you did your homework, and the additional evidence provides further support for the learnings. (NOTE: You should also be triangulating existing data to inform your initial plan.)
21. The learnings presented were not actionable.
If they weren’t actionable, something went sideways earlier in the process.
Revisit the plan to confirm that it accurately captured the goals and constraints, that you gathered input from the right stakeholders and feedback from the right participants, and that the timing aligned with what was agreed upon. One or more of these aspects is the likely culprit!
What are the most common reasons you see research failing? We'd love to hear your thoughts.
Speak up, get involved, and share the love!
Connect with Michele on LinkedIn for more UXR tips and UX discussions
Read about Curiosity Tank workshops. The next series begins in September!
Participate as an On-Demand, or as an Observer, for less!
Dive into a different UX Research term every week! Sign up for UX Lex weekly emails here.
Forward this email to someone you think might enjoy it. Better yet, sign them up here. It will be the gift that keeps on giving.
And that’s a wrap!
We typically alternate between a theme and UX/UXR jobs, events, classes, articles, and other happenings every few weeks. Thank you for all of the feedback. Feedback is a gift, and we continue to receive very actionable input on how to make Fuel Your Curiosity more meaningful to you.
What do you think? Lmk. We're constantly iterating and love to hear your input.
Stay curious,
- Michele and the Curiosity Tank team
PS. Our next Ask Like A Pro user research cohort begins in September. Join us at our info session with alumni on September 13th. Register on Eventbrite.