Rapid Iterative User Testing: what a great method

Having worked at 3 different User-Centred Design (UCD) consultancies in the last few years (Flow Interactive, Amberlight, and Oyster Partners), I can confidently say that the type of project most commonly requested by clients looks like this:

  1. Client delivers test materials (e.g. prototype of new website)
  2. 8-12 user test sessions are carried out by the UCD consultant (the client may watch a small number)
  3. UCD consultant locks him/herself in a room, analyzes data and writes a report
  4. Report is delivered in a debrief presentation to the client
  5. Project ends, UCD consultant leaves and doesn’t get spoken to again for weeks or months.

It suddenly struck me today how un-user-centred this method actually is. Sure, you get a cheaper project with a good number of users, but ultimately the consultant just throws their findings over the wall and hopes that their client catches them.

I've lost count of the number of times I've seen a UCD consultant look at the finished implementation months later and say something like "Oh man! Didn't those guys listen to the presentation?"

The problem is, the client stakeholders simply aren't very engaged in this kind of project. Watching a few sessions just doesn't cut it. Have you ever tried watching a series of user testing sessions? From a darkened room behind a two-way mirror? The passivity and repetitiveness can really send you off to sleep.

So how do you get the client stakeholders involved? The solution I’ve been using recently at Flow Interactive is Rapid Iterative User Testing. Put simply, you run design workshops between the test sessions. You talk to the client, you analyze the findings together and then, most importantly, as a group you tweak the design of the thing you’re testing. (Often the prototypes we test are UI mock-ups, so tweaking is a cinch).

Not only does this involve everyone and keep them interested during the test sessions, but it also means you get to make sure your design recommendations actually work in practice.

The main problem with this method is that it can cost more. More client stakeholders need to turn up, and the sessions may need to be spread over more days. But it hugely improves the output of the project. Instead of a report detailing the prototype’s failures, the final prototype that emerges from the testing is a living, tested implementation of your design recommendations.

Try it!