Summary: In this article, I discuss how Sean Ellis's Product-Market Fit (PMF) survey, originally created for Product Marketing, can be repurposed for UX Researchers working on early-stage products. I explain the methodology, key insights from using the survey in real-world scenarios, and how to combine it with other UX research techniques to understand product-market fit and user needs better.
If you're like me and have been deep in UX research for a while, you've probably noticed the occasional tension between the Product Management/Product Marketing and UX Research disciplines, especially when it comes to overlapping activities like conducting early-stage generative research for digital products.
At my last two jobs, both with early-stage products, I was tasked with quantifying product-market fit. That's when I first came across Sean Ellis's Hacking Growth, where he details his Product-Market Fit (PMF) survey. I successfully used that framework to help define and gain stakeholder alignment on product-market fit in both cases.
Recently, while listening to a great product management podcast called Lenny’s Podcast, I heard Sean Ellis himself break down the methodology, which reminded me of its value and gave me new insights. Although the discussion came from a product management/product marketing perspective, I realized the approach is just as relevant for us as UX researchers, and we shouldn’t throw the baby out with the bathwater on this one.
This survey is a fantastic tool for early-stage companies looking to determine whether they've built something that genuinely resonates with users. It's not about fine-tuning growth strategies or optimizing established products; it's for teams that are still trying to figure out whether what they've built actually works for their users in the real world.
As a UX researcher, I can see how valuable this process is. When we talk about product-market fit, we're talking about understanding if your product has that magic quality, something Sean describes as a "must-have" for users. The survey helps validate that and becomes even more powerful when combined with other UX research methods.
What I appreciate most is that it focuses on asking the right users the right questions, which is exactly where UX research shines.
In this post, I'll share Sean's methodology, explain why it's so effective for early-stage companies, and show how we, as UX Researchers, can integrate it with our own research techniques to gain a clearer picture of product-market fit.
Recruiting the Right Participants
Sean's methodology, like all good user research, centers on asking the right users the right questions, and that starts with recruiting a genuinely representative subset of users. In the podcast, he stressed how crucial it is to select users who have truly experienced your product, not just those who signed up once and never used it again. He recommended focusing on users who have activated (i.e., used the product multiple times and gotten a real sense of its value).
For the PMF survey, Sean suggests a sample size of around 30 representative users. He mentioned that this recommendation came after a ton of trial and error. As Sean and others applied the survey in real-world scenarios, they found that gathering data from 30-35 users typically provided enough signal to move forward with confidence. Without running any formal statistical models, Sean stumbled onto this sweet spot: it's reliable enough to guide decisions, yet small enough to avoid overcomplicating the process or overwhelming teams with too much data to analyze.
What's interesting is how this approach aligns with UX research best practices. In fact, this sample size is right in line with Jakob Nielsen's Discount Usability Philosophy, which advocates for smaller, focused sample sizes that provide high-quality insights without excessive effort. Nielsen's research shows that beyond a certain point, additional participants don't significantly increase the value of the data.
This is especially true when conducting research for an early-stage company trying to determine general product-market fit. By keeping the sample to around 30-35 users, you're maximizing the efficiency of your research while still ensuring the results are solid enough to inform your next steps.
But again, the most important part of this process is making sure you're talking to the right users. If you don't, you run the risk of getting misleading data and making decisions that could send you down the wrong path. As Sean put it, "If you start paying attention to the wrong users, you dilute the product for the ones who matter most." Sound familiar?
The Sean Ellis Product-Market Fit Survey
The Key Question
The heart of Sean Ellis's PMF survey is a single question:
How would you feel if you could no longer use this product?
The answer options for this question, as displayed to the participants in the survey, are:
Very disappointed
Somewhat disappointed
Not disappointed
According to the methodology, if 40% or more of your users say they'd be "very disappointed," you're likely on the right track. If the percentage is lower than that, it might be time to reassess.
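To make the arithmetic concrete, here's a minimal sketch in Python using made-up answers from a hypothetical batch of 30 activated users; the function names and numbers are mine for illustration, not part of Sean's methodology.

```python
from collections import Counter

# Hypothetical answers from ~30 activated users to the PMF question.
responses = (
    ["Very disappointed"] * 13
    + ["Somewhat disappointed"] * 11
    + ["Not disappointed"] * 6
)

def pmf_score(answers):
    """Share of respondents answering 'Very disappointed'."""
    counts = Counter(answers)
    return counts["Very disappointed"] / len(answers)

score = pmf_score(responses)
print(f"PMF score: {score:.0%} of {len(responses)} respondents")
if score >= 0.40:
    print("At or above the 40% benchmark - a promising signal, not a guarantee.")
else:
    print("Below 40% - iterate on the product and re-run the survey with new users.")
```

Keeping the score as a simple proportion of all respondents makes it easy to report alongside the raw counts, which is usually all stakeholders need at this stage.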
This isn't about optimizing growth or squeezing out a few more percentage points in conversion; it's about determining if you have something that truly fits the needs of your target users. I like how, although this was born out of the Product Marketing world, it's completely user-centered in its execution.
Sean mentioned something important in the interview: ignore the users who say they'd be "somewhat disappointed." These users think of your product as "nice to have," but not essential. If you start making changes based on their feedback, you might dilute the product for your most valuable users, the ones who really need it. This methodology does a great job of finding the signal in the noise, something more traditional UX research approaches to assessing product-market fit often miss.
The Power of a 40% Threshold
Sean emphasized in the podcast that the 40% threshold isn't some magical number. It's more of a general benchmark to help early-stage companies understand whether their product is on the right track. He pointed out that hitting 39% doesn't mean you don't have product-market fit, and getting over 40% doesn't guarantee that you do. It's more about using that figure as a signal, not an absolute measure.
As Sean put it, "I don't think it's that firm. To me, the real power is having some kind of target for the team to be shooting for." It's about alignment, getting everyone in the company on the same page regarding whether the product is resonating with users.
He shared an example from the podcast about a startup that initially scored just 7% on the PMF survey. Within two weeks, after targeted changes to their product, they shot up to 40%. But even then, the 40% wasn't the goal; it was a reflection that they'd tapped into what users really needed. Sean said, "It's not about hitting a number and calling it a day; it's about deeply understanding the context behind it."
Ultimately, the 40% threshold is more of a leading indicator than a definitive stamp of product-market fit. It's a valuable signal that you're heading in the right direction, but it's the insights you gather from your users and how you act on them that truly define your product's success. This initial survey is just the starting point, setting you up for deeper research. The methodology emphasizes that you should first aim to get close to the 40% threshold before moving forward. If your score is significantly lower (like Sean's example of the startup that scored just 7%), the focus should be on making changes to the product and running the survey again with a new set of representative users.
Once you're near that benchmark, that's when the real work begins. The first survey helps you identify product-market fit, but it's the next steps, diving deeper into the 'why' behind the feedback, that guide how to improve and refine your product. You're not just looking to hit a number; you're looking to understand what makes your product resonate with users and how to expand that impact.
Follow-Up Surveys: The Next Step
After you've run the initial survey and you've identified your "very disappointed" users, the methodology recommends running a follow-up survey. But here's where it gets specific: this second survey shouldn't go to the same group of users. Instead, send it to a new, but equally representative group of users.
This follow-up survey digs deeper into why those users would be disappointed. Ask questions like:
What is the primary benefit you get from the product?
Why is that benefit important to you?
As Sean explained in the podcast, this is where you start to peel back the layers and understand what's really driving that strong attachment to the product. He shared an example from his time working at Xobni, an email productivity tool, where users mentioned they'd be lost without it because it helped them find emails quickly. When they followed up and asked why that was important, the most common response was, "I'm drowning in email."
It's these kinds of insights that help you not only understand what users value, but why they value it. And that's crucial for making informed product decisions.
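If it helps to see what analyzing those two follow-up questions can look like, here's a small sketch that tallies answers to the primary-benefit question after you've coded them into themes. The themes below are invented (loosely echoing the Xobni example), and the coding step itself remains manual, qualitative work.

```python
from collections import Counter

# Hypothetical follow-up answers, already coded into themes by the researcher.
primary_benefit_themes = [
    "find emails quickly", "find emails quickly", "see contact history",
    "find emails quickly", "prioritize my inbox", "find emails quickly",
    "see contact history", "find emails quickly", "prioritize my inbox",
]

# Rank themes so the most common benefit (the likely "must-have" value) surfaces first.
for theme, count in Counter(primary_benefit_themes).most_common():
    print(f"{theme}: {count}")
```

The point isn't the tooling; it's that the "why is that benefit important to you?" answers get the same treatment, and the pairing of benefit and underlying need (finding emails quickly because "I'm drowning in email") is what informs the next product decision.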
Triangulating Data with Other UX Research Methods
Here's where the PMF survey can be strengthened by other UX research techniques. While the survey gives you great data on how users feel about your product, it doesn't explain why they feel that way. This is where qualitative methods come into play, and Sean emphasized their importance in the podcast too.
After running the PMF survey and follow-up surveys, I'd recommend conducting user interviews or field studies to dig deeper into the behaviors and motivations of your most engaged users. For example:
What specific problems are they solving with your product?
What aspects of the experience are the most frustrating?
Where do they spend the most time while using the product, and why?
Additionally, you can triangulate this qualitative data with product analytics to get a fuller picture. Analytics can show you where users are dropping off, which features are being underused, and where users are spending the most time. In another example, Sean mentioned that looking at this kind of data helped him and his team improve onboarding so it better guided users toward the core value of the product. Combining this behavioral data with qualitative insights helps you understand not just if your product fits, but how it's being used and where it's falling short.
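As a rough illustration of that triangulation step, here's a sketch assuming pandas and two hypothetical exports: one with the PMF survey answers keyed by user ID, and one with per-user feature analytics. The file names and columns are placeholders, not a prescribed schema.

```python
import pandas as pd

# Hypothetical exports: PMF survey answers and per-user product analytics.
survey = pd.read_csv("pmf_survey.csv")    # columns: user_id, pmf_answer
usage = pd.read_csv("feature_usage.csv")  # columns: user_id, feature, minutes_spent

merged = survey.merge(usage, on="user_id", how="inner")
merged["must_have"] = merged["pmf_answer"] == "Very disappointed"

# Compare where the "very disappointed" group spends its time versus everyone else.
comparison = (
    merged.groupby(["must_have", "feature"])["minutes_spent"]
    .mean()
    .unstack("feature")
)
print(comparison.round(1))
```

Reading a table like that alongside the interview themes is what turns "they'd be very disappointed" into a concrete picture of which parts of the product actually carry that value.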
Conclusion
The Sean Ellis PMF survey is a fantastic tool for early-stage companies trying to determine if their product has real potential to solve real user needs in the real world. However, to get the most out of it, it's essential to combine it with other UX research methods like user interviews, field studies, and product analytics. These additional methods help you understand not only if users are attached to your product, but also why and how they're using it.
And remember, this isn't about marketing or making small tweaks to drive growth. This is about figuring out if what you've built solves a real problem in a way that users can't live without. If you do it right, the survey will give you the early signal you need to guide your product development, and the follow-up research will help you refine the experience for those who matter most.
Have you ever used the Sean Ellis PMF survey in your own work? If so, did it give you the clarity you needed to move forward, or were there unexpected insights that shifted your strategy? How did you combine it with other UX research methods, and what were your biggest takeaways? If you haven't tried it yet, do you think it could help you in identifying product-market fit for your next project? I'd love to hear about your experiences. Please feel free to DM me or comment here with your thoughts!