Summary: This article introduces the 20 biases I see most often in UX work. The goal is to provide insight into the ways biases can mislead us during decision-making and research synthesis.
A few months into a new job, an executive asked me to write a document to help reframe the decision-making process within our UX team. The goal was to eliminate personal bias and foster a more objective way of thinking. Over time, it was adopted not only by the UX team but also by the product and development teams.
Since then, I've shared this document with numerous organizations, all of which continue to use it to this day. I've been told that it's especially useful for onboarding new team members and for quick reference during intense projects. Because of this, I thought you might find it helpful as well. Below, I've outlined the 20 common biases that these teams have found beneficial to be aware of when making decisions and synthesizing user research.
How We Think About Others
Implicit bias - Humans have implicit associations about certain groups and their behavior based on their life experiences.
🌐 Example: Assuming an older user is not tech-savvy.
Attribution error - The false perception that a person's personality traits dictate their actions.
🌐 Example: Assuming an outgoing and talkative user is more likely to click around and explore new features.
False consensus effect - Most people assume that their own personal qualities, characteristics, beliefs, and actions are shared and understood by others.
🌐 Example: Assuming a user describes things using the same terms you do.
Our Own Thinking
Confirmation bias - People tend to look only for evidence that confirms their hypotheses.
🌐 Example: When synthesizing user interview data, writing down only the user quotes that match your research plan's hypothesis.
Myside bias - Most people process information in a manner biased toward their own prior beliefs, opinions, and attitudes.
🌐 Example: A user says, "This works," in a neutral tone during a moderated usability test, and you report that quote as evidence the design is pleasing.
Hindsight bias - People often retrofit logic and reasoning onto their past actions.
🌐 Example: A user cannot figure out how to execute a task during a usability test, but after seeing you do it, says they knew how it worked all along; they just misunderstood the task.
Social desirability bias - People overestimate their positive behaviors and underestimate their negative ones.
🌐 Example: A user says they would use a new feature you just demoed in a concept test "all the time."
Sunk cost fallacy - The more time people invest in something, the harder it becomes to abandon it.
🌐 Example: A usability test unearths an error in a design you've made, and you resist fixing it because you've already spent so much time testing, creating, and socializing the design.
Motivated reasoning - People tend to find arguments in favor of ideas they want to believe more compelling than arguments for ideas they do not want to believe.
🌐 Example: You are made aware of a UX best practice in a design review meeting that you disagree with. The best practice is validated by many studies conducted over 40 years involving tens of thousands of participants. You search the internet, find a single Medium article contradicting that research, and argue it carries the same weight.
Recency bias - People tend to think newer information is more accurate.
🌐 Example: You base a design decision on the feedback you just heard in a recent user interview, even though it contradicts other feedback you've heard many times in past interviews.
Research-Related Phenomena
Framing effect - The way you set up and ask questions can influence people's responses.
🌐 Example: You start a usability test task by saying, "Others we've talked to have found this difficult, so take your time," to the participant.
Serial-position effect - People perceive items at the beginning and end of a list as more valuable.
🌐 Example: Placing the most-used option first in a dropdown select control.
Recall bias - People believe they can accurately remember how things were in the past, but in reality they forget most details and facts as time passes. This often also shifts their feelings about those things without them noticing.
🌐 Example: During a user interview, two colleagues who worked on a project together years ago gave completely different accounts of that project's details. Their memories became fuzzy over time, making it impossible to know whose answers were more accurate.
Clustering bias - People frequently perceive meaningful patterns in random data where none exist.
🌐 Example: A usage analytics report shows that many users are clicking in the white space of a page, so you conclude there is a design error that should be researched further.
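To see how easily chance produces apparent clusters, here is a minimal Python sketch (a hypothetical simulation of my own, not output from any real analytics tool). It scatters clicks uniformly at random across a grid of page regions, yet some regions still collect several times the average number of clicks:

```python
import random
from collections import Counter

random.seed(7)

# Simulate 200 clicks landing uniformly at random on a 10x10 grid of page regions.
clicks = [(random.randrange(10), random.randrange(10)) for _ in range(200)]
counts = Counter(clicks)

# With 200 clicks spread over 100 regions, the average is 2 clicks per region,
# yet pure chance still produces "hot spots" several times that size.
print(f"Average clicks per region: {200 / 100:.1f}")
for region, n in counts.most_common(3):
    print(f"Region {region} received {n} clicks")
```

Before concluding that white-space clicks signal a design error, it is worth checking whether the "pattern" exceeds what randomness alone would generate.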
Self-reporting bias - The gap between people's self-reported values and their actual, measured values.
🌐 Example: During the concluding open-ended questions of a usability test, a user says task #4 was easy even though they could not complete the task successfully during the test itself.
Bias bias - Experts often trust complex explanations over simpler ones.
🌐 Example: You pursue a design solution that requires four different third-party tool integrations, but you overlook a simple label change that would have the same impact on the design's usability.
Base rate neglect - People tend to misunderstand how strongly small sample sizes affect percentages and fractions.
🌐 Example: You optimize a new feature's design based on a single experienced power user's feedback, even though all user types will use the feature frequently.
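As a quick illustration of why tiny samples mislead (a hypothetical simulation, not data from any real study), the Python sketch below draws repeated usability samples from a population with an assumed 70% task-success rate. With 5 participants per round, the observed rate swings wildly; with 100, it stays near the true value:

```python
import random

random.seed(42)
TRUE_SUCCESS_RATE = 0.70  # assumed population task-success rate

def observed_rate(n_participants: int) -> float:
    """Simulate one study: the fraction of participants who complete the task."""
    successes = sum(random.random() < TRUE_SUCCESS_RATE for _ in range(n_participants))
    return successes / n_participants

# Run ten simulated studies at each sample size and compare the spread.
for n in (5, 100):
    rates = [observed_rate(n) for _ in range(10)]
    print(f"n={n:>3}: " + "  ".join(f"{r:.0%}" for r in rates))
# Small samples produce widely scattered rates;
# large samples cluster near the true 70%.
```

The same 70% population looks dramatically better or worse depending on which handful of participants you happened to sample, which is why a single user's feedback is a shaky basis for optimization.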
Power-user fallacy - The false assumption that people who frequently use a product have drastically different behavior patterns than those who use the same product infrequently.
🌐 Example: You argue that UX best practices don't apply to your users because they are highly knowledgeable in their subject area of expertise, yet you provide no data to support this argument.
Prestige bias - People dislike being portrayed negatively and tend to present themselves in a flattering light.
🌐 Example: A survey respondent distorts their answers to make themselves seem more impressive, smart, or successful, skewing your data.
Negativity bias - People default to remembering the bad more than the good.
🌐 Example: A usability test participant encounters a small error in your prototype and keeps bringing it up throughout the session, even though the error did not affect their behavior negatively.
Conclusion
Knowing these biases can help you spot them in others and yourself while making decisions and synthesizing user research.
With that said, things become trickier when multiple biases interact in real-world scenarios.
🌐 Example:
Your team is discussing how to react to some recent user feedback. Since the feedback came from a longtime power user, one team member says, "I don't think this is a real concern; they seem like they are just stuck in their ways."
Power-user fallacy + Myside bias + Attribution error + Implicit bias + Confirmation bias
Someone responds, "I agree; even though the user found an issue during the moderated usability test, I got the sense it wasn't a big deal when they said they only had 'some trouble' finding things."
False consensus effect + Motivated reasoning + Self-reporting bias
Then another teammate says, "Plus, we've spent a lot of time looking into this already. I don't think this changes anything."
Sunk cost fallacy + Hindsight bias
Lastly, you say, "Sounds like we all agree. To be sure, I spoke with 5 new users before this meeting, and they didn't think this was an issue, so we can dismiss the usability test finding and move forward."
Base rate neglect + Self-reporting bias + Recency bias + Motivated reasoning + Implicit bias
The next time your gut tells you something is "off," think back to this list of biases. You never know; it might just be your brain tricking you.
(These last two sentences are an example of the Framing effect. See what I did there?)