The Simplest Mixed Methods(ish) Study You’ll Ever Run
Question Pairing in Surveys
Summary: Question Pairing is a simple, quasi-mixed-methods survey technique that adds an open-ended “why” question after a rating or ranking question. This small addition helps uncover the real reasoning behind respondents’ answers.
This week I was thinking about sharing some simple research tips, and this one came to mind. It’s an easy way to make any survey more insightful without adding much extra work. I call it Question Pairing. Question Pairing is when you include a rating or scale question and follow it with an optional open-ended text box that asks, “Why did you just pick what you picked?” Super simple but extremely valuable in the real world.
⚠️ Disclaimer #1: You don’t need to pair every rating question, only the ones where extra context will help you interpret the number.
⚠️ Disclaimer #2: Question Pairing isn’t technically mixed methods according to the academic definition, but you can think of it that way in practice. Hence the (ish) in the title. Hahahaha
Rating Questions Often Fall Short
In my experience, rating questions are often overused, and the data is often misinterpreted. I think this happens because they produce quant results that stakeholders love and are relatively fast to analyze. But factors like response biases and wording effects make simple scales tricky to interpret, especially when teams treat a single number as the whole story. NN/g highlights how biases distort survey answers and urges careful interpretation. MeasuringU’s work shows that many changes to scale format rarely move the needle as much as people assume, which is another reminder that numbers alone can mislead if you do not know what drove them.
A common example of this is the 5 star rating system. MeasuringU found only small, nonsignificant differences between five-star and five-point numeric scales, and even saw order effects that argue against mixing formats within the same study. The takeaway is not that stars are bad, but that format alone does not guarantee clarity. Pairing a brief “why” often does more to prevent misinterpretation than switching icons or endpoints.
Because of these facts, we as UX researchers need to do better. We need to respect our survey respondents’ time and get the most bang for the buck from our stakeholders’ investment in user-centered data. That’s why I love using Question Pairing.
Real World Example
Scenario: You are writing paired survey questions for an internal market-risk dashboard used by financial analysts.
Q1 (rating): Overall, how satisfied are you with the market risk dashboard for spotting anomalies in your coverage area today?
(1 = very dissatisfied, 5 = very satisfied)

Q2 (open follow-up): What is the main reason for your rating?
Suppose a respondent selects 4 on the rating scale, but their follow-up response in the open-ended text box says:

“I love how easy it is to track the outliers, but the column sort resets on refresh, so I sometimes lose my place and have to hunt for it again.”
That comment is soooooo much more helpful than just the coded idea of generally “satisfied”. The rating looks positive, yet the “why” behind the rating reveals a meaningful workflow issue that should be addressed. This is the kind of thing you often discover in observation, but a paired “why” can surface it quickly and precisely within a survey.
Why Question Pairing Works
Question Pairing works because it captures users’ reasoning in context. People explain what they just considered when they chose a ranking or rating, so you get quant and qual at the same time. It also feels natural for respondents to answer, which keeps survey completion friction low.
Three benefits that stand out the most are:
Efficiency. You collect both data types in one short sequence.
Clarity. You no longer have to guess what a “3” or a “4” means.
Scalability. You can analyze the numbers first, then theme the open responses.
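To make the “analyze the numbers first, then theme the open responses” idea concrete, here is a minimal Python sketch. The data and field names are made up for illustration; in practice your survey tool’s export format will differ.

```python
from collections import defaultdict

# Hypothetical paired responses: (rating 1-5, optional "why" comment).
responses = [
    (4, "Easy to track outliers, but the column sort resets on refresh."),
    (4, "Good overall, sort order resets when the page refreshes."),
    (5, "Does everything I need."),
    (2, "Charts are slow to load in the morning."),
    (4, None),  # the "why" box was optional, so some answers are blank
]

# Step 1: analyze the numbers first.
ratings = [rating for rating, _ in responses]
mean_rating = sum(ratings) / len(ratings)

# Step 2: theme the open responses, grouped by the rating they explain.
comments_by_rating = defaultdict(list)
for rating, comment in responses:
    if comment:
        comments_by_rating[rating].append(comment)

print(f"Mean rating: {mean_rating:.1f}")
for rating in sorted(comments_by_rating, reverse=True):
    print(f"{rating}-star comments:")
    for comment in comments_by_rating[rating]:
        print(f"  - {comment}")
```

Reading the comments grouped by score is what surfaces patterns like the sort-reset complaint hiding behind an otherwise healthy average.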
Because of this, Question Pairing is especially effective when used for post-test pulses, in-product widgets, beta feedback, or feature satisfaction checks.
One thing to watch out for is whether Question Pairing makes your survey too long. Long surveys tend to have lower completion rates and produce lower-quality data. Survey-writing experts have long advised keeping surveys short, and recent NN/g guidance on user feedback requests reinforces the same point. This is where adding “why” follow-ups can actually be a feature, not a bug: because you should only pair the questions where extra context truly matters, Question Pairing naturally nudges you toward shorter, more focused surveys.
What Question Pairing Does Not Do
Question Pairing does not replace behavioral research. Remember, what users say and what they do are often very, very different. Question Pairing only captures what users self-report to have done, so it’s important to keep that in mind.
Question Pairing cannot show hesitation paths, error recovery, or real world taskflows.
Treat Question Pairing as a quick, guerrilla-style tactic for directional insight, or as a way to triangulate with moderated testing, analytics, or prior studies.
NN/g’s survey guidance and MeasuringU’s rating-scale evidence cited above both strongly imply that you should use surveys wisely, then look for converging evidence in behavior.
What Question Pairing can do is highlight patterns worth validating through richer methods. For instance, if several respondents mention confusion about a specific interaction, that signal points to where you should observe users next. In that sense, pairing acts like an early warning system. It flags specific pain points that help us researchers prioritize what to test in future rounds of studies.
Conclusion
Question Pairing is one of those small, deceptively simple techniques that can make your survey data far more meaningful. It helps you capture not just what users say, but a hint of what they really mean. When used selectively, it transforms basic rankings and ratings into something you can actually act on.
Just remember that it is not a replacement for behavioral data or moderated research. It’s a quick, practical way to add context to quant results and to surface signals you might otherwise miss.
Have you ever tried Question Pairing in your own surveys? If so, what did you learn from the “why” behind the numbers? DM me or comment below. I’d love to hear how you’ve used this method in the real world too.




