Screening out liars from your user research

Note: This post is over 4 years old. It was first published in September 2009.

The whole point of user research is that you get to observe real members of your target user group interacting with your product. However, the cash incentive that you offer – typically £50 for an hour – is compelling enough to make some people bend the truth, and this is compounded by the chain of people involved in the recruitment. For example, if you outsource a research project to a UX consultancy, they will probably outsource the recruitment to a specialist agency, who in turn may outsource to a number of independent freelancers. As the client sitting on the receiving end, you have to be confident that the recruitment is being carried out in a rigorous way.

Even if your recruitment agency is trying its best, it’s a sad reality that there are diminishing returns in weeding out end users who fib. They can’t really hire in Columbo to investigate every user. And if, during the sessions, the research facilitator starts to suspect the participant might be a dud, what can they do? It’s an awkward situation, especially if their client is watching from behind the two-way mirror. The researcher can continue the interview without pushing the issue, or they can deviate from the script and start cross-questioning the participant on their honesty, which will ruin the rapport, eat into the session time, and probably won’t be effective in any case.

In fact, a lot of liars can be screened out by writing a really good screener questionnaire. For example, here’s a decoy question that the Mozilla metrics team used in their recent Test Pilot survey.

Screengrab from Mozilla Test Pilot survey

The goal of the question above was to ascertain the experience level of a respondent, so the data could be segmented. To sift out the deluded novices and liars, the Mozilla Metrics team added a made-up acronym – JFW – on the rationale that anyone who ticks “full understanding” for this item and all the others can be flagged as a suspect respondent.
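The decoy-item rule above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the field names, scale wording, and data shape are my own assumptions, not Mozilla’s actual survey schema:

```python
# Hypothetical sketch of the decoy-acronym rule: any respondent who claims
# "full understanding" of the made-up item ("JFW") is flagged as suspect.
# Field names and scale values are invented for illustration.

DECOY_ITEM = "JFW"  # the fictitious acronym planted in the question
FULL_UNDERSTANDING = "full understanding"

def is_suspect(response: dict) -> bool:
    """Flag a respondent who claims full understanding of the decoy item."""
    return response.get(DECOY_ITEM) == FULL_UNDERSTANDING

responses = [
    {"HTML": "full understanding", "JFW": "never heard of it"},
    {"HTML": "full understanding", "JFW": "full understanding"},
]

suspects = [r for r in responses if is_suspect(r)]
```

In a real survey you would apply the same check to every decoy item you plant, and treat a flagged response as grounds for exclusion rather than automatic proof of lying.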

Don’t believe me on the JFW acronym? I asked the Moz Labs team, just to be sure:

Screengrab of a tweet from the Mozilla Labs team confirming that JFW is made up

It’s also fairly likely that you will want to recruit participants who have used your product a certain number of times. If you ask them directly (“Have you used mysite.com at least 3 times in the past month?”), the respondent will easily guess what they are meant to say to “win” the research. So you should always hide the qualifying answer among a number of decoy questions, or ask open questions instead.
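To make the decoy-question idea concrete, here is a minimal sketch of scoring such a screener. The site names, the position of the real question, and the threshold are all invented for illustration – only the technique (burying the qualifying criterion among decoys) comes from the text above:

```python
# Hypothetical screener: the qualifying question is hidden among decoys,
# so the respondent can't tell which answer "wins" them the research.
# All sites, indices, and thresholds here are assumptions.

QUESTIONS = [
    "How many times have you visited amazon.co.uk in the past month?",
    "How many times have you visited mysite.com in the past month?",
    "How many times have you visited ebay.co.uk in the past month?",
]

QUALIFYING_INDEX = 1  # only the mysite.com answer actually matters
MIN_VISITS = 3

def qualifies(answers: list[int]) -> bool:
    """Decide eligibility from the hidden qualifying answer alone."""
    return answers[QUALIFYING_INDEX] >= MIN_VISITS
```

The decoy answers are simply discarded; their only job is to disguise which question carries the recruitment criterion.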

Another trick I’ve recently started using is placing a stern warning on the screener about honesty. For example, if you’re testing an ecommerce site, you can state that a substantial part of the interview will involve being signed in to the site and referring to their purchase history page. If they don’t have a history spanning over 3 months, tell them they will be turned away without payment. This does sound a bit harsh, but it works.

To sum up, you face a real risk if you rely on your recruitment agency to take care of the screener behind the scenes. When engaging a new agency, ask them what they do to screen out liars, and always review the final questionnaire before it gets deployed.

Do you have any other screener tips? Add them in the comments!