Summary: Forced ranking data can be misleading if you rely on simple averaging or weighted scores. This guide explains how to use RStudio to correct ranking data, with step-by-step instructions.
If you're like me, you're often asked to help stakeholders prioritize features. Product teams, for example, want a list of high-priority features for their backlog. But if your team isn't ready for a big prioritization workshop, you might end up using a forced ranking survey to get quick answers. Forced ranking surveys can seem like a straightforward shortcut, but they're actually more complicated than most UXers think. In this post, I'll break down how I've used forced ranking to get reliable results and why it's not as simple as it looks.
I first learned about forced ranking while working at Minitab. I was asked to help prioritize features for our desktop product, so I went to my usual UX research tools to see which ones offered forced ranking survey options. Sure enough, they all did: Microsoft Forms, Google Forms, Qualtrics, SurveyMonkey, and UserTesting. I set up a drag-and-drop ranking-style question in Microsoft Forms, sent it out, and waited for the results. Simple, right? Not quite.
The Problems with Averaging & Weighted Rank Scoring
When the results came back, I shared them with a colleague, who asked, "Which ranking method did you use?" Ranking method? I assumed the built-in ranking calculation in Microsoft Forms was good enough. Microsoft Forms takes each rank position, assigns it points (e.g., first place gets the most points, last place the fewest), and adds up the scores. But my colleague explained why just averaging ranks, or even using weighted scoring, can be misleading.
Here's the main issue: averaging or adding up weighted scores doesn't always reflect the real preferences of your group. Let's say a few participants rank a feature, like "Automated Reporting," at the top, while others rank it lower. Because of these high ranks, "Automated Reporting" might get more points overall and end up at the top of the list, even if most people didn't rank it highly.
This happens because simple scoring methods, like averaging or point totals, are sensitive to a handful of extreme ranks: a few enthusiastic first-place votes can outweigh the lukewarm rankings of everyone else.
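To make this concrete, here's a tiny, invented example in R (the feature names, number of participants, and ranks are all hypothetical): with a typical points system, a feature that only a minority ranked first can still edge out the feature most people put at the top.

# Hypothetical data: 5 participants rank 3 features (1 = best)
# Three participants put FeatureA first; two put AutomatedReporting first
ranks <- data.frame(
  FeatureA           = c(1, 1, 1, 3, 3),
  AutomatedReporting = c(2, 2, 2, 1, 1),
  FeatureC           = c(3, 3, 3, 2, 2)
)

# Typical weighted scoring: 1st = 3 points, 2nd = 2, 3rd = 1
colSums(4 - ranks)
# FeatureA 11, AutomatedReporting 12, FeatureC 7:
# AutomatedReporting "wins" on points even though a majority (3 of 5)
# ranked FeatureA as their top choice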
And the worst part of all this? The "averaging and weighting" method is the default approach in all the major UX "research" platforms listed above. (See what I did there with "research" in quotes? Hahaha.)
Why the Majority Criterion Matters
So, why does this matter? It's all about capturing what most people actually prefer. In ranking, a fair system should reflect the top choice of the majority. In other words, if a majority of people rank "Advanced Data Cleaning" as their top feature, that feature should come out on top. This principle, called the Majority Criterion, is often violated when we use simple averaging or weighted scoring. As a result, we can end up with rankings that don't accurately reflect group preferences, leading to a misleading final order.
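A quick sanity check you can run on your own data (here reusing the toy ranks data frame from the sketch above, and assuming columns of ranks with 1 = top choice) is to look at how often each option was ranked first:

# Share of participants who put each option in first place
first_place_share <- colMeans(ranks == 1)
first_place_share
# If any option is above 0.5, the Majority Criterion says it should top the final ranking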
Pairwise Comparisons & Ranked Pairs
To avoid these misleading results, we can use pairwise comparisons. Pairwise comparisons evaluate each feature head-to-head against every other feature, showing us which one wins in each matchup. This method provides a more accurate picture of group preferences by focusing on direct comparisons.
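To make that less abstract, here's a small R sketch (again reusing the toy ranks data frame from earlier) that builds a pairwise win matrix, where entry [i, j] counts how many participants preferred option i to option j:

# Count head-to-head wins between every pair of options (lower rank = preferred)
pairwise_wins <- function(ranks) {
  opts <- colnames(ranks)
  wins <- matrix(0, nrow = length(opts), ncol = length(opts),
                 dimnames = list(opts, opts))
  for (i in opts) {
    for (j in opts) {
      if (i != j) wins[i, j] <- sum(ranks[[i]] < ranks[[j]])
    }
  }
  wins
}

wins <- pairwise_wins(ranks)
wins  # each row shows how many participants preferred that option over each column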
My coworker then recommended I try the Ranked Pairs method. So, I read up on it and learned that Ranked Pairs takes things further by "locking in" the strongest preferences without creating conflicts or cycles. Unlike simple scoring, Ranked Pairs ensures that the most consistently preferred options rise to the top, even when there's a lot of variation in rankings. This approach produces a final ranking that better represents what the group as a whole prefers. It's a little more advanced, but the results are worth it: Ranked Pairs helps ensure that the rankings reflect actual, real-world user preferences, which was the goal all along.
NOTE: There are many other methods for aggregating rankings, including MaxDiff, Borda Count, Copeland's Method, Minimax, Black's Method, the Kemeny-Young Method, the Schulze Method, and Dodgson's Method. However, I'm confident that Ranked Pairs will be sufficient for almost any UX research task you're tackling.
⚠️For Dorks Only⚠️
Here's the method broken down if you’re into the details:
Pairwise Comparison Matrix: First, create a matrix where each entry Pij represents the number of votes in which candidate i is preferred over candidate j.
Strength of Each Pair: For each pair of candidates (i, j), calculate the "strength" of preference as Sij = Pij − Pji, where Sij is the strength of preference for i over j. This is the margin by which i is preferred over j.
Sorting Pairs by Strength: Sort all pairs (i, j) by Sij in descending order, from the strongest preference to the weakest.
Locking in Pairs (without Cycles): For each sorted pair (i, j), if adding the edge i → j to the "locked" pairs graph does not create a cycle, lock it in. NOTE: This step keeps the ranking free of contradictions and respects transitive preferences.
Final Ranking: After locking in all eligible pairs, the ranking order is determined by the directed graph, where each candidate points to lower-ranked candidates. (If you want to see these steps as code, there's a short R sketch right after this list.)
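Here's that minimal, illustrative R sketch of the lock-in logic. It isn't production-ready, and it assumes a pairwise win matrix like the one built by the pairwise_wins() sketch earlier: it sorts the margins, locks each pair only if it doesn't create a cycle, and then reads the final order off the locked graph.

# Illustrative Ranked Pairs: takes a pairwise win matrix, returns options in final order
ranked_pairs <- function(wins) {
  opts <- rownames(wins)
  n <- length(opts)

  # Strength of each pair: Sij = Pij - Pji; keep only the winning direction of each pair
  prefs <- expand.grid(i = opts, j = opts, stringsAsFactors = FALSE)
  prefs <- prefs[prefs$i != prefs$j, ]
  prefs$margin <- mapply(function(i, j) wins[i, j] - wins[j, i], prefs$i, prefs$j)
  prefs <- prefs[prefs$margin > 0, ]
  prefs <- prefs[order(-prefs$margin), ]  # strongest preferences first

  # Lock in pairs unless they would create a cycle (tracked via a reachability matrix)
  reach <- diag(n); dimnames(reach) <- list(opts, opts)
  locked <- matrix(FALSE, n, n, dimnames = list(opts, opts))
  for (k in seq_len(nrow(prefs))) {
    i <- prefs$i[k]; j <- prefs$j[k]
    if (reach[j, i] == 0) {  # adding i -> j keeps the locked graph acyclic
      locked[i, j] <- TRUE
      reach <- ((reach + reach[, i] %*% t(reach[j, ])) > 0) * 1
    }
  }

  # Final ranking: repeatedly pick the option with no remaining locked losses
  final <- character(0)
  remaining <- opts
  while (length(remaining) > 0) {
    losses <- colSums(locked[remaining, remaining, drop = FALSE])
    winner <- remaining[which.min(losses)]
    final <- c(final, winner)
    remaining <- setdiff(remaining, winner)
  }
  final
}

ranked_pairs(pairwise_wins(ranks))
# With the toy data above, FeatureA comes out on top, matching the majority's first choice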
Tutorial: How to Recalculate Rank in RStudio
Now that you know your ranking data might not be as accurate as you'd hoped, here's how to fix it using RStudio. This process covers downloading your data, setting it up in R, performing pairwise calculations, and running a Ranked Pairs-style analysis with the PlackettLuce package.
Step 1: Download Your Data
Locate the Export Option: In your UX research platform, go to the survey results section and look for “Export” or “Download Data.”
Select the Right Format: Choose .csv, as this is easily imported into R.
Save the File: Save the file to an accessible location on your computer.
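As a rough illustration only (your platform's export will use its own column names, and these values are made up), the layout the later steps assume is one row per participant and one column per option, holding the rank that participant gave it:

Participant,Option1,Option2,Option3,Option4,Option5
P01,2,1,4,3,5
P02,1,3,2,5,4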
Step 2: Set Up RStudio
Install Necessary Packages: Install the dplyr package for data manipulation and the PlackettLuce package for ranking analysis. Open RStudio and enter the following commands:
install.packages("dplyr")
install.packages("PlackettLuce")
Load the Packages:
library(dplyr)
library(PlackettLuce)
Import Your Data: Use read.csv() to import your downloaded .csv file. Replace "your_file_path.csv" with the path to your file:
data <- read.csv("your_file_path.csv")
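It's worth a quick sanity check that the import looks the way you expect before going further:

# Quick checks: column names, data types, and the first few responses
str(data)
head(data)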
Step 3: Prepare the Data in R
Organize the Rankings: Ensure your dataset is structured so that each row represents a participant and each column holds the rank that participant assigned to each option. If necessary, rename the columns for clarity:
colnames(data) <- c("Participant", "Option1", "Option2", "Option3", "Option4", "Option5")
Convert Rankings to Scores (optional): If you want points-style scores for reporting (e.g., 1st place = 5 points), you can create a scoring system with mutate(). Keep the original rank columns untouched, though: the analysis in Step 4 expects ranks (1 = top choice), not points:
# Optional points-style scores for reporting, assuming a 1-5 ranking system
# Keep `data` itself as raw ranks; Step 4 works on ranks, not points
scores <- data %>%
  mutate(across(starts_with("Option"), ~ 6 - .))
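One extra check that often pays off (assuming five options and that your survey tool doesn't allow ties): confirm each participant's row is a complete ranking, with every rank from 1 to 5 used exactly once.

# TRUE for each participant whose ranks are a clean permutation of 1-5
rank_cols <- dplyr::select(data, starts_with("Option"))
complete_ranking <- apply(rank_cols, 1, function(r) setequal(r, 1:5))
table(complete_ranking)  # investigate any FALSE rows before modelling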
Step 4: Perform Ranked Pairs Analysis in RStudio
Transform Data for Pairwise Comparisons: Use PlackettLuce to convert the data into a rankings object suitable for pairwise comparisons. Because each column holds the rank a participant gave to an option (1 = top choice), use input = "rankings":
library(PlackettLuce)
rankings <- as.rankings(data[, -1], input = "rankings")  # excluding the Participant column
Run Ranked Pairs Analysis: Using the PlackettLuce() function, you can perform a pairwise comparison analysis that accounts for preference strength, similar to Ranked Pairs:
model <- PlackettLuce(rankings)
summary(model)
NOTE: The summary output shows an estimated worth for each option based on all the pairwise comparisons; higher worth means the option is more consistently preferred.
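To pull a simple ordered list out of the model rather than reading the summary table, you can sort the estimated worth values; this is a sketch assuming the default coefficient scale returned by coef():

# Options ordered from most to least preferred, based on estimated worth
worth <- coef(model)
sort(worth, decreasing = TRUE)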
Step 5: Visualize Results
Create a Basic Bar Chart: Use ggplot2 to visualize the final ranking order. Install and load ggplot2 if it isn't already installed:
install.packages("ggplot2")
library(ggplot2)
Plot Rankings:
final_scores <- data.frame(Option = names(coef(model)),
                           Worth = as.numeric(coef(model)))
ggplot(final_scores, aes(x = reorder(Option, -Worth), y = Worth)) +
  geom_bar(stat = "identity") +
  labs(x = "Options", y = "Estimated worth (higher = more preferred)") +
  theme_minimal()
NOTE: This chart shows each option with its estimated worth, making it easier for stakeholders to interpret the results.
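If you need to drop the chart into a deck or report, ggsave() will write out the most recent plot (the file name and dimensions here are just placeholders):

# Save the last plot for sharing with stakeholders
ggsave("feature_ranking.png", width = 8, height = 5, dpi = 300)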
Conclusion
Default ranking methods in platforms like Microsoft Forms, Google Forms, Qualtrics, SurveyMonkey, and UserTesting can distort user preferences, especially when a handful of extreme ranks skew the results. If you want to upgrade your UX research game, R (via RStudio) is the way to do it. It lets you move beyond simple averaging or weighted scores and apply methods like Ranked Pairs to produce more accurate, representative rankings that genuinely reflect participant priorities. Accurate ranking data leads to better prioritization, ensuring that product and design decisions are truly user-centered and data-driven. While it may require a few extra steps, the payoff is a clear, reliable view of what users actually value in the real world.