Summary: Product demos are often mistaken for cognitive walkthroughs, which is why I wrote this week's post. This how-to shows you how to run a simple Cognitive Walkthrough using only four questions, and it includes a downloadable template you can use in the real world.
Early in my career, I was working as a UX researcher at a SaaS company. One day, I sat in on what was being called a Cognitive Walkthrough. I expected to witness something like what I had read about on the Nielsen Norman Group website. What I saw instead was a PM-style demo. The PM clicked through the interface, explained what each interaction did, and everyone in the room nodded along. It felt more like a sales pitch than a research method.
At the time, I remember thinking, this can't be what this method is all about. My confusion must have been written all over my face, because later a UX Research Lead who acted as a mentor to me took me aside. She explained that what we had just watched was not a Cognitive Walkthrough at all; it only passed for one because the PM knew the product inside and out. She then sat me down and taught me the actual framework and structure for conducting a proper Cognitive Walkthrough.
I learned a lot about what not to do from the bad demo.
My mentor told me that a Cognitive Walkthrough done wrong is actually worse than not doing one at all because it gives teams a false sense of security.
But, done correctly, it is one of the most efficient ways to surface real UX problems early in a project's lifecycle. I've included a downloadable template in this post that builds on those lessons. It lays out the structure I wish we had used in that very first session, a structure that has since proven itself across countless workflows for organizations of all shapes and sizes.
What Is a Cognitive Walkthrough?
A Cognitive Walkthrough is a structured method for stepping through a task as if you were a brand new user. The goal is not to show how the interface works. The goal is to simulate how someone without prior knowledge would approach each step.
The method is built on four questions:
Will the user know what they are trying to do at this step?
Will they see the control or option that allows them to do it?
Will they understand that this control does what they need?
After the action, will they receive system feedback that tells them it worked as intended?
Test any workflow against these four questions step by step, and you will start to identify where friction is likely to appear once your design reaches real users.
This was what was missing in that first session I watched early in my career. Everyone in the room assumed the answers to those questions were obvious. No one even bothered to ask any of them, so blind spots stayed hidden. The basic structure is what separates a real walkthrough from a demo. I created the Cognitive Walkthrough table below that organizes these four questions into a practical grid. You can use it with your own product and see how quickly weak spots surface.
The example grid below shows how a compliance officer might try to generate and export a quarterly report in a FinTech SaaS platform.
I hope this gives you an idea of what this could look like in the real world.
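If it helps to make the grid concrete before you open the template, here is a minimal sketch in Python of what its rows might hold. The step names, answers, and notes below are hypothetical, loosely based on the report-export example in this post; the real artifact is just a spreadsheet, but modeling it this way shows how each atomic step gets scored against all four questions.

```python
# Illustrative sketch of a Cognitive Walkthrough grid for the
# report-export example. Steps, answers, and notes are hypothetical.

QUESTIONS = [
    "Will the user know what they are trying to do at this step?",
    "Will they see the control or option that allows them to do it?",
    "Will they understand that this control does what they need?",
    "Will they receive feedback that the action worked as intended?",
]

# One row per atomic step; each answer is "yes", "no", or "unsure",
# in the same order as QUESTIONS, plus free-form reviewer notes.
grid = [
    {"step": "Open the quarterly reminder email and click the link",
     "answers": ["yes", "yes", "yes", "unsure"],
     "notes": "Deep link lands on the dashboard, not the Reports page."},
    {"step": "Navigate to Reports",
     "answers": ["yes", "unsure", "yes", "yes"],
     "notes": "Reports entry is buried in a secondary menu."},
    {"step": "Click Generate",
     "answers": ["yes", "yes", "no", "unsure"],
     "notes": "Unclear whether Generate creates a new report or "
              "exports the existing one."},
]

def flagged_steps(grid):
    """Return the steps where any question was not a clear 'yes'."""
    return [row["step"] for row in grid
            if any(answer != "yes" for answer in row["answers"])]

for step in flagged_steps(grid):
    print("Review:", step)
```

Anything that comes back as "no" or "unsure" is a candidate friction point worth a closer look; the spreadsheet version works exactly the same way, just with columns instead of keys.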
Where Confirmation Bias Creeps In
Confirmation bias is the reason so many teams think they ran a walkthrough when all they really did was confirm what they already know.
Bias shows up in a few predictable ways:
Assuming users know what the team knows: For example, believing users will understand a button labeled “Finalize” when a clearer label like “Submit Report” or “Close Task” would describe the action more accurately.
Skipping messy starting points: Real users often begin from deep links, emails, or alerts, yet teams habitually start from a clean homepage.
Filling in gaps without realizing it: Teams narrate the steps out loud and convince themselves that a control is discoverable when it is not.
Any of these sounding familiar?
When I think back to that first session I was a part of, every one of these mistakes was present. The PM sincerely believed that users would interpret the workflow the same way he did. The rest of us followed along because his reasoning seemed logical and convincing.
That experience is why I believe Cognitive Walkthroughs only work when the structure is followed closely. The framework forces you to slow down, ask the right questions, and document the gaps. Without that structure, all you end up doing is walking through the happy path.
The Walkthrough Grid
You can recreate this table in Google Sheets or Excel, or download the template I've provided here.
Download the Cognitive Walkthrough Template Here
How to Use This Template:
Fill it in as a team, but independently first (diverge). Each reviewer answers the four questions on their own to avoid groupthink.
Compare answers and flag mismatches (converge). The disagreements are often where the real problems hide.
Document reviewer notes openly. Do not let assumptions slide into the background.
When you follow this process, you end up with structured evidence in the form of this document. Over time, you also build a reusable artifact that leadership can understand, because it clearly shows where the workflow supports users and where it does not.
Step-by-Step Cognitive Walkthrough
Now that you have the template, the key to keeping bias out is to follow a clear structure. Use the step-by-step instructions below to guide how you fill it in.
Step 1. Define a specific user goal
Do not start with "explore the dashboard" or "use the reporting tool." Pick a goal that is grounded in a real task. For example, in a FinTech SaaS platform, a compliance officer might need to generate and export a quarterly report for an auditor. That goal is narrow and concrete.
Step 2. Select realistic entry points
Most teams default to starting from the homepage. In reality, many tasks begin from deep links, notifications, or email alerts. In our example, the compliance officer clicks into the system from a quarterly reminder email. Starting the walkthrough at that point surfaces issues you will never see if you only examine the clean entry.
Step 3. Break the workflow into atomic steps
Write every action as a specific, observable step. Do not summarize at a high level. "Navigate to Reports" or "Click Generate" is far more useful than "Prepare compliance report." When you list the steps in detail you can map each one against the four walkthrough questions.
Step 4. Assign independent reviewers
One of the biggest sources of bias is groupthink. If a PM or designer drives the walkthrough, others will tend to nod along. Instead, have each reviewer answer the four questions for every step on their own first. Only after that do you compare results. This is where the downloadable template comes in handy. It gives everyone the same grid to work from and makes disagreements visible.
Step 5. Compare results and document disagreements
The mismatches are the most valuable part of the walkthrough. If two reviewers interpret a label differently, or one person thinks feedback is missing, that is a signal that a user might struggle. In our example, half the reviewers assumed the "Generate" button created a new report rather than exporting the existing one. That disagreement flagged a real usability risk.
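If each reviewer fills in their own copy of the grid, surfacing those disagreements is mechanical: compare the answers cell by cell and flag any step where the reviewers did not agree. A minimal sketch, where the reviewer names and answers are hypothetical stand-ins for one question on the "Click Generate" step:

```python
# Hypothetical independent answers to question 3 ("Will they understand
# that this control does what they need?"), recorded per reviewer.
reviewers = {
    "Reviewer A": {"Click Generate": "yes"},  # assumed Generate exports the report
    "Reviewer B": {"Click Generate": "no"},   # assumed Generate creates a new one
    "Reviewer C": {"Click Generate": "no"},
}

def mismatches(reviewers):
    """Return the steps where reviewers did not all give the same answer."""
    steps = next(iter(reviewers.values())).keys()
    flagged = []
    for step in steps:
        answers = {name: grid[step] for name, grid in reviewers.items()}
        if len(set(answers.values())) > 1:  # more than one distinct answer
            flagged.append((step, answers))
    return flagged

for step, answers in mismatches(reviewers):
    print(f"Disagreement on '{step}': {answers}")
```

You do not need code to do this, of course; scanning the template rows side by side works fine. The point is that independent answers make the disagreement visible instead of letting the loudest voice in the room resolve it silently.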
Step 6. Validate risky steps with users
If the walkthrough reveals uncertainty on a critical step, validate it quickly. You do not need a full usability study to test a label or a menu placement. In our example, running a short test with three compliance officers confirmed that "Generate" was confusing. That evidence supported a label change before the feature went live.
Tips to Avoid Confirmation Bias
The walkthrough framework itself helps push bias out of the room, but a few additional habits make it even stronger.
Bring in a naive reviewer: Someone from outside the product team can highlight assumptions you did not realize you were making.
Rotate facilitators: Do not always let the PM or lead designer run the walkthrough. A fresh voice changes the dynamic.
📝 NOTE: These types of activities are not inherently the role of the UX researcher, but I believe we are perfectly positioned to contribute to our teams in this way.
Timebox each step: If reviewers cannot answer the four questions in less than a few minutes it often means the step is unclear to begin with. If this happens, pause and rewrite the step at a more granular level, then run it back through the four questions to see where the ambiguity lives.
Separate assumptions from findings: If someone says "I think users will know this because…" write it down under assumptions, not in the main findings.
These small practices stop the session from slipping back into the guided demo pattern, and they make the results more reliable and the grid more valuable when you present it to leadership.
The Bigger Picture
Cognitive Walkthroughs are not a replacement for usability testing or usage analytics. They sit earlier in the research funnel and help catch obvious issues before you put designs in front of users. When done well, they save teams time and energy by preventing flaws from making it into usability sessions where they would waste limited participant hours.
The reason the faux walkthrough given by my old PM failed was not only bias, but also the way the team treated it as the finish line.
Cognitive Walkthroughs give you early warnings about discoverability, labeling, and system feedback, which you then must validate with real users and monitor with analytics after launch.
When you connect walkthroughs to the rest of your user research toolkit, they become one of the most cost-effective ways for teams building complex products to avoid design errors early. Cognitive Walkthroughs are great at catching small errors before they grow into larger problems. My favorite part is how they produce structured evidence that grounds everything else in the project back to UX basics. Having that documentation early gives researchers a shorthand for discussing issues that pop up later.
For example, I have often said something like, “Remember back to the cognitive walkthrough when we identified that the split button control was not appropriate for this screen? I think the same applies to this new screen.”
I hope you can see the real-world benefits of documenting walkthrough findings early.
Conclusion
That first session I sat in on early in my career taught me more than I realized at the time. Treating product demos as research not only fails to catch problems, it also creates a false sense of confidence that can be more damaging than skipping early-stage research altogether. Once I learned the structured version, I saw the difference immediately. The framework caught issues that power users and insiders never noticed, and it gave teams evidence they could act on.
I have carried that lesson into every project since, from small SaaS startups to global platforms in fintech, medical tech, and manufacturing. I have also seen the same framework work for people I now have the honor of mentoring. That is how I know this approach scales and is likely to work well for your next project.
If you are a UX pro and want to add a reliable method to your toolkit, start with the template in this post. Run one Cognitive Walkthrough this way, and see if it shifts the conversation away from assumption and toward evidence on your next project. If you try it out, I would love to hear how it goes. Drop a comment here or DM me and share your experience so others in the community can learn from it too. Thanks all!