The Dangerous Appeal of Fake AI Users
Rant: There’s Nothing Human-Centered About Talking to a Chatbot
Summary: The proxy user trend comes straight out of product management culture, where speed and efficiency are often mistaken for insight. This article breaks down why that idea will collapse under the weight of real user research and the demands of actual human behavior.
I usually don't weigh in on trendy topics, but I feel like I have to on this one because I've had to deal with it in my recent day-to-day interactions with PMs. The debates have become exhausting, and they've left me pretty discouraged. So here's my rant on proxy, or synthetic, users. I'm already rolling my eyes at myself just for writing this first paragraph, but I need to get this off my chest. (Hahahaha)
The Dangerous Idea of a Proxy User
A proxy user is an AI-generated stand-in for a real person. I've also heard them called synthetic users. You prompt an LLM with persona data, ask it questions, and treat the responses as if they came from someone who actually uses your product. The back-and-forth can feel like a real user interview. It often results in quotes, takeaways, and insights that get added to a slide deck. But none of it comes from a real user.
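To make the practice concrete, here is a minimal sketch of what a proxy-user "interview" amounts to. Everything here is invented for illustration: the persona fields, the question, and the `ask_llm` stub stand in for whatever model call a team would actually use.

```python
# Hypothetical sketch of a proxy-user "interview". The persona fields and
# the ask_llm() stub are invented for illustration; real setups swap in an
# actual LLM API call.

persona = {
    "name": "Dana",
    "role": "operations manager",
    "pain_points": ["slow reporting", "too many manual exports"],
}

def build_prompt(persona, question):
    """Everything the 'user' can ever say is seeded right here, in the prompt."""
    return (
        f"You are {persona['name']}, a {persona['role']} who struggles with "
        f"{', '.join(persona['pain_points'])}. Answer as this user:\n{question}"
    )

def ask_llm(prompt):
    # Stand-in for a model call. Whatever comes back is generated text
    # conditioned on the prompt above -- not an observation of a real user.
    return "(model-generated answer shaped by the persona in the prompt)"

quote = ask_llm(build_prompt(persona, "What frustrates you about reporting?"))
print(quote)
```

Notice that the resulting "finding" can only echo what the team already wrote into `persona`. The interview is a round trip through the team's own assumptions.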
But no matter how well it's framed, this practice is built on a false premise. It assumes that if the prompt is strong enough and the research plans are thoughtful enough, the answers can be trusted. That assumption is wrong.
LLMs do not generate insight. They generate human-sounding sentences one probable word at a time. What you get is not user data.
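The mechanism is worth seeing in miniature. Below is a toy bigram "language model" built from three invented feedback sentences (all of the data is made up for illustration). At a tiny scale, it does what an LLM does at a massive one: emit a statistically plausible next word, with no person and no experience behind it.

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for a model's training data.
corpus = [
    "the checkout flow was easy to use",
    "the checkout flow was confusing to navigate",
    "the signup flow was easy to finish",
]

# Count which words follow which: the entire "knowledge" of the model.
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def generate(start, length=6, seed=0):
    """Pick a statistically likely next word, repeatedly. No user, no
    experience: just frequency statistics over the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = bigrams.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))  # fluent-sounding, but it is not feedback from anyone
```

Every output is grammatical and plausible, because plausibility is the only thing the model optimizes. That is exactly why it reads like a quote while telling you nothing about any actual user.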
This trend is especially concerning because it reflects a larger shift in tech culture. Over the past several years, PMs have taken on more responsibility for user research. In many cases, research has become a shared team activity without a trained researcher involved. That shift may be well-intentioned, but it has created an environment where teams feel confident in findings that are not grounded in real user behavior, sentiment, or mental models.
The idea of the proxy user is not just another flawed, PM-driven, bastardized quasi-research method. It is a symptom of a bigger problem.
The line between interviewing users and simply thinking about users has already started to blur. And when PMs begin using AI to simulate user data, that line disappears completely.
False Premise
Even with a data-driven persona and a carefully written prompt, a proxy user can never give you valid research findings. An LLM isn't drawing from actual human experience in the real world. It doesn't recall context, solve problems, or navigate constraints. It's just predicting the next likely word based on the words that came before it.
The output might look like insight. It might read like a real quote from a real user. But it just isn't.
Don’t be fooled. LLMs simply produce a string of plausible-sounding sentences based on the structure of the prompt. That’s it.
What you get from an LLM is confirmation of whatever the team already believed. You know, those biases that were unintentionally embedded in the prompt and reflected in the model's training data, which often includes made-up examples of user behavior. That's what makes this so risky. The output looks like legitimate research. It creates an illusion of insight that easily fools the untrained person doing the research.
Experienced researchers know that user research is not about asking questions and recording answers. It's about watching behavior, identifying patterns, and uncovering needs the team didn't already know about.
The things not said are often more important than what is said. That kind of learning cannot happen through an AI chatbot.
The Broken Product Culture
The rise of proxy users is not novel. It is the inevitable result of a product culture that has deprioritized real user research for years. Speed is rewarded. Confidence is faked. And the discipline of UX has been gutted to make room for lightweight rituals that fit neatly into sprint cycles.
PMs are at the center of this shift. Over time, they've taken on responsibility for much of the user research being conducted today, but they have not gained the training, methodologies, or mindset required to do it well. Instead of observation, they rely on conversation. Instead of evidence, they rely on opinion. Research has become a calendar slot, not a discipline.
For example, Teresa Torres, one of the most influential voices in this space, advocates for short "weekly touchpoints with customers by the team building the product, where they conduct small research activities in pursuit of a desired outcome." That mindset has led teams to optimize for frequency over depth. These rituals give the illusion of learning while stripping out the rigor needed to produce real insight. This is not good research. It is theater.
I’m arguing we need to stop asking the same handful of users what they think and start observing how all different kinds of users behave in the real world.
More evidence of this mindset comes from Lenny Rachitsky, who stated on his hugely popular PM-focused podcast that "experienced teams can move fast without sacrificing quality," while also advocating for the use of proxy users. However, this perspective ignores the reality that most teams are not experienced. Most PMs, when told to move fast, have consistently turned to cutting corners when it comes to research. They commonly default to the simplest and most convenient methods, even when those methods are not appropriate for the context. Worse, they often convince themselves that speed is equivalent to effectiveness. They continue down this path despite negative results over time because their focus remains fixed on immediate outcomes.
That's where proxy users enter the picture. They are not being seen as a temporary workaround for moments when constraints are tight. They are being treated by some as the new best practice. This complete perversion of user research is a direct consequence of the modern PM mindset. You take a persona, write a prompt, and get an answer that sounds smart and aligned. There is no recruiting. No interviews. No discomfort. Just clean, agreeable language that fits the roadmap already in motion. Sounds like utopia, right?
This is not research. It is automated bias confirmation. It removes real-world friction, skips discovery, and feeds the team's assumptions back to them with perfect clarity.
Proxy users don't just fail to reveal the truth. They bury it beneath layers of language that feel like progress.
And once a team adopts them, it's nearly impossible to go back. The pace feels good. The transcripts look legitimate. The insights seem useful. But none of it is based on how real users think, act, struggle, or adapt. The further teams move in this direction, the more detached they become from the actual people they claim to build for.
Proxy users are not the problem. They are the proof that the problem already exists.
Ten Things a Proxy User Could Never Tell You
If all that theory and pontificating didn't convince you, think about it from a practical standpoint. Here's a thought exercise: would a proxy user ever say any of the following things we, as user researchers, hear in our day-to-day work?
"I gave up halfway through because I got frustrated."
"I don't know what that button does, so I avoid it."
"I lied in your survey because I thought it would help me finish faster."
"I use this workaround every time because your system is too slow."
"I didn't even notice that feature until you pointed it out just now."
"I feel stupid asking this, but…"
"I clicked that by mistake, but I didn't want to say anything."
"Honestly, I don't trust your product to do this right."
"I had to ask a coworker how to do it."
"I thought it worked. Then I realized it didn't."
These are the parts of a user interview that lead to real learning. They uncover the unknown unknowns that can make or break a project. These kinds of quotes only happen when real people use a real product in the unpredictable ways we know occur in the real world.
Proxy users can't reveal those moments. They never pause. They never get confused. They never choose the wrong path. And when none of that shows up in the research, the team keeps building as if everything is fine.
Proxy Users Scale the Decay of Research
All that said, the biggest threat of proxy users isn't that they give PMs a shortcut to their own opinions. It's that they let entire teams build faster on research practices that were already falling apart. Poor discovery isn't new. But now, it's automated!
PMs have been skipping steps for years. They've been trading synthesis for sentiment and pushing researchers out of the early stages of development. That erosion didn't happen overnight. It came from well-meaning attempts to simplify research so it could scale inside product frameworks.
Proxy users take that oversimplification and crank it up to 11.
It's not just that the insights are synthetic. It's that they now arrive with almost no cost, no resistance, and no pushback. That makes the worst practices infinitely repeatable. Teams can now ship feature after feature, roadmap after roadmap, all backed by fake data.
This is what makes proxy users more than a flawed technique. They are the logical next step in a product culture that has been trading research quality for research convenience. And unless we actively stop it, this is going to scale far beyond the teams using it today.
Real user research strategically slows teams down early so you don't waste time later. It gives you fewer answers upfront, but more reliable ones when it counts.
Proxy users do the opposite. They speed things up in ways that feel productive, right up until the moment the product fails and no one knows why.
Conclusion
PMs didn't set out to break research. But years of pushing for speed over quality have brought us here. The idea that you can talk to a chatbot instead of a user didn't appear out of nowhere. It was seeded by a decade of diminishing rigor, repackaged as efficiency.
Now we have tools that let teams simulate research without actually doing any. And we have AI outputs that look enough like insight to pass for the real thing in most meetings.
This is not a debate about whether AI has value in product development. It clearly does. But using LLMs to replace users is not a value add. It's a liability. It takes every shortcut product teams have already been taking and gives them a faster, shinier version.
And that's the real danger. Proxy users don't just threaten research quality. They accelerate its collapse.
We don't need better prompts. We need to restore the practice of actual UX. The kind that involves people. We need to become human-centered again.
The kind of research that finds what's not working before it's too late. The kind that earns its place in the product lifecycle by improving outcomes, not just speed.
Because if we let this continue, it won't just be bad research. It'll be scalable, repeatable, institutionalized bad research. And at that point, there won't be much UX left.
Rant over. Thanks for reading!