TESLA (Time Elapsed Since Labs Attended) and RMU (Range of Methods Used)

In a recent post on Boing Boing, Clay Shirky talks about the user research approach used at meetup.com:

[…] Scott pulled me into a room by the elevators, where a couple of product people were watching a live webcam feed of someone using Meetup. Said user was having a hard time figuring out a new feature, and the product people, riveted, were taking notes. It was the simplest setup I’d ever seen for user feedback, and I asked Scott how often they did that sort of thing. “Every day” came the reply.
Every day. That’s not user testing as a task to be checked off on the way to launch. That’s learning from users as a way of life.
Andres Glusman and Karina van Schaardenburg designed Meetup’s set-up to be simple and cheap: no dedicated room, no two-way mirrors, just a webcam and a volunteer. The goal is to look for obvious improvements continuously, rather than running outsourced, large-N testing every eighteen months. As important, these tests turn into live task lists, not archived reports. As Glusman describes the goal, it’s “Have people who build stuff watch others use the stuff they build.”
Mark Hurst, the user experience expert, talks about Tesla — “time elapsed since labs attended” — a measure of how long it’s been since a company’s decision-makers (not help desk) last saw a real user dealing with their product or service. Measured in days, Meetup approaches a Tesla of 1.
Glusman and van Schaardenburg have also made it possible to take Jakob Nielsen’s user-testing advice — “Test with five users” — and add “…every week.” Obstacles to getting real feedback are now mainly cultural, not technological; any business that isn’t learning from their users doesn’t want to learn from their users. […]

While reading this I found myself nodding. Outsourcing a small lab-based user research project can cost £10-12,000 (depending on your supplier and the project details), so you don’t need to do many each year before it makes sense to bring the work in-house. Rapid, iterative research is something I’ve blogged about enthusiastically before, and it’s a very effective approach. But there were a couple of points in Clay’s article that I just couldn’t swallow.
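The break-even arithmetic is simple enough to sketch. The figures below are illustrative assumptions on my part (the one-off set-up cost and the per-study running cost will vary a lot between teams); only the £10-12,000 outsourcing figure comes from my own experience above:

```python
import math

# Illustrative figures in GBP -- assumptions, not supplier quotes.
OUTSOURCED_COST_PER_STUDY = 11_000  # midpoint of the 10-12k range above
IN_HOUSE_SETUP_COST = 20_000        # one-off: space, kit, training (assumed)
IN_HOUSE_COST_PER_STUDY = 1_500     # incentives + staff time (assumed)

def break_even_studies(setup, in_house_each, outsourced_each):
    """Smallest number of studies per year at which in-house
    total cost drops below the outsourced total cost."""
    saving_per_study = outsourced_each - in_house_each
    return math.ceil(setup / saving_per_study)

print(break_even_studies(IN_HOUSE_SETUP_COST,
                         IN_HOUSE_COST_PER_STUDY,
                         OUTSOURCED_COST_PER_STUDY))  # -> 3
```

With those (made-up) numbers you only need three studies a year before an in-house set-up pays for itself, which is why a weekly cadence looks so attractive on paper.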

Test with 5 users every week.
5 users a week? Or every day? This sounds a bit like announcing that everyone should bench-press 500 lbs as part of their gym work-out just because you do. 5 users a week is impressive, but there’s no need to feel intimidated by it – the scale of your research should match your needs. For many web businesses 5 a week is too many, and you’d be doing great if you achieved half or a quarter of that. Let’s think it through – to achieve 5 a week we’re talking at least one solid day of research sessions each week. As well as the researcher and the user, you also need decision-making stakeholders in the viewing room (otherwise you are back to the old-fashioned giant-report-and-video-analysis scenario). This means you have to take some of the best members of your team out of their normal working life for 1/5 of their working time. That’s pretty expensive, and you’d need to consider expanding your team to make up the loss in man-hours. Research and design are like Yin and Yang: they need to be balanced and they need to work together. In the rush to address the imbalance, you can push the scale too far back over onto the research side.
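The stakeholder-time cost can be put in back-of-envelope terms. The session length and observer count below are assumptions for illustration; the 1/5 figure is the one from the paragraph above:

```python
# Back-of-envelope cost of weekly testing in stakeholder time.
# Session length and observer count are illustrative assumptions.
SESSIONS_PER_WEEK = 5
HOURS_PER_SESSION = 1.5       # session plus turnaround (assumed)
OBSERVERS = 3                 # decision-makers in the viewing room (assumed)
WORKING_HOURS_PER_WEEK = 37.5

observer_hours = SESSIONS_PER_WEEK * HOURS_PER_SESSION * OBSERVERS
share_per_observer = SESSIONS_PER_WEEK * HOURS_PER_SESSION / WORKING_HOURS_PER_WEEK

print(f"{observer_hours:.1f} stakeholder-hours per week")    # 22.5
print(f"{share_per_observer:.0%} of each observer's week")   # 20%
```

So even with modest assumptions, weekly testing eats a full fifth of each observing stakeholder’s week – which is exactly the trade-off you have to weigh against the value of the findings.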

TESLA – Time elapsed since labs attended
If your TESLA is 6 months, you know you’re being naughty. But should you just be counting lab research? Isn’t that a bit like only doing bench-presses to stay healthy? There are lots of other exercises you should be doing.

In response, I propose RMU: “Range of Methods Used”
If your RMU=1, then you know you aren’t being as effective as you could be. Face to face user testing is just one approach amongst masses of others – and there are many times when these other approaches are more appropriate. In fact, I’m pretty certain that Meetup must be doing lots of other kinds of research, but it didn’t come across in Clay’s short article.

So what other research methods am I talking about here? Off the top of my head – diary studies, cultural probes, A/B testing, multivariate testing, analytics, co-discovery sessions, participatory design workshops, surveys, heuristic evaluation, walkthroughs, open, closed and reverse card sorting, remote usability testing, eyetracking (sometimes), telephone interviewing, ethnographic field work, NPS and other one-click feedback tools. All of these have many variants – the list goes on and on.
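If your team keeps even a simple log of research sessions, both metrics fall out of a few lines of code. This is a hypothetical sketch – the log format and field names are my own invention, not anything from Hurst or Meetup:

```python
from datetime import date

# Hypothetical research log: (date, method, did decision-makers attend?)
sessions = [
    (date(2008, 11, 3), "lab usability test", True),
    (date(2008, 11, 20), "remote usability test", False),
    (date(2008, 12, 1), "card sorting", False),
    (date(2008, 12, 15), "A/B test", False),
]

def tesla(log, today):
    """TESLA: days since decision-makers last attended a lab session."""
    lab_dates = [d for d, _method, attended in log if attended]
    return (today - max(lab_dates)).days if lab_dates else None

def rmu(log):
    """RMU: count of distinct research methods used."""
    return len({method for _d, method, _attended in log})

today = date(2008, 12, 31)
print(tesla(sessions, today))  # 58
print(rmu(sessions))           # 4
```

A team chasing a TESLA of 1 but an RMU of 1 is doing one exercise very hard; the point of RMU is that the second number should be well above 1 too.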

Now don’t get me wrong – face to face user testing is a wonderful method, but sometimes it’s inefficient, expensive and can even deliver weak findings if misused. Here’s an example. Last week a client came to me complaining that he needed to validate a large category scheme for his site. The scheme covered about 35 different industry sectors, and so he said he would need to run the testing with 35 different types of users (after all, you would need someone in accounting to tell you whether your accounting IA made sense to them). He was complaining that as a result, he simply couldn’t afford to do user testing. The solution to his problem was of course to escape from the mindset that “research = face to face user testing”. In his case, he could save a lot of time and get more reliable findings by running a remote user test, using a tool like UserZoom’s self-serve offering.

Heading into the 2009 recession, we have to be smarter about the way we use our research budgets. If your RMU=1 for 2008, then it’s definitely time to break out some new methods.
