# UX101: On usability testing
Once you have created an interface, you need to test it; this is known as usability testing. How do you do that? Well, it’s easier than you may think.
First, let’s define what we mean by the term “usability test.” A usability test is a structured session in which a person engages with a design and provides feedback on it to a facilitator. Usability testing allows for an open, objective conversation about a design to identify what works and what doesn’t. Why perform usability tests? By testing designs early, you can quickly identify potential problems with the interface before it is coded and implemented. This saves a lot of time, money, rework… and potential embarrassment.
I’ve met several UX professionals who think you need a huge usability lab with thousands of dollars’ worth of equipment, a two-way mirror, specialized software, and more. While I’ve used (and set up) a couple of usability labs, and there are several benefits to having such a location for user research and testing… you don’t need most of that stuff. What do you need? The only things you absolutely need are:
- The design
- People to show the design to
That’s it. You can do usability testing almost anywhere, and you don’t even need a computer. Now I know that there’s an entire cottage industry out there of people who sell usability-testing equipment, and many of them would prefer I didn’t state this. Sorry, but I call it like I see it… though I will spend some time detailing some usability test tools I have liked and used later.
While you don’t need a lot of James-Bond style gadgets to do a usability test, you do need to plan things out so you can get the most out of the effort. So, let’s talk about how you can plan a usability test.
There are five things you have to do beforehand to have an effective usability test session:
- Define test goals
- Formalize the test artifacts
- Define the type of test and the testing method
- Identify who (and how many people) to test with
- Write the test protocol
Define test goals
What do you want to get out of the testing? What do you want to find out or understand? Define clear goals to focus on, and then make sure that the test details don’t lose sight of (or “muddle”) those goals. If you don’t have any clear goals, then focus on getting answers to three key questions:
- Do they “get” it? If they can grasp the purpose and utility of the design, you’re in a good position.
- Can they use it? When given tasks to accomplish using the design, can they do it?
- Can they explain it to you? Can they describe what it is and how it would work?
Formalize the test artifacts
Where you are in the design process informs which design artifacts you create and use for usability testing. If you are doing early testing, then paper sketches or “prototypes” are fine. You may need to sketch out different “screen states” for some screens to represent the process you need to test, but that depends on how you are testing and what your goals are.
If you are testing more “mature” designs, you are going to want to make the test artifacts more interactive, so the test participant can engage with the design more. Creating an interactive prototype is fairly straightforward if you can code, or have access to a team member who can develop it for you. If you don’t have those options, tools like Axure RP, OmniGraffle or iRise can help you make clickable prototypes (more on those tools later).
Define the type of test and testing method
There are generally three types of usability tests: formative, summative, and A/B testing.
Formative tests are used early in the design process to assess the effectiveness and usability of a preliminary design, as well as to capture users’ thought processes and conceptual understanding. They can be done with sketches or more “formalized” designs, and can be task-driven or an open-ended conversation.
Summative tests evaluate more detailed designs to determine satisfaction, effectiveness, and overall usability. They usually take place after an earlier formative test has occurred, and are usually very structured and task-driven. You can perform these types of tests with existing systems to evaluate the current state of a design.
A/B testing compares two or more products or designs to identify and distinguish the strengths and weaknesses of each. If you have two differing approaches to solve a design problem, this is a good way to evaluate them. This can be done at any time, and can also be used as a competitive analysis tool (test two existing sites with the same tasks to identify which site better supports the user). What you will want to do is have two sets of participants, with one set testing design A and the other set testing design B.
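Splitting participants into the two groups should be done randomly so that neither group is biased toward a design. A minimal sketch of that assignment step (the participant names and the fixed seed are illustrative assumptions, not anything prescribed by a particular tool):

```python
import random


def split_ab(participants, seed=42):
    """Randomly assign participants to design A or design B in roughly equal halves."""
    shuffled = list(participants)       # copy so the caller's list is untouched
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (group A, group B)


if __name__ == "__main__":
    group_a, group_b = split_ab(["P1", "P2", "P3", "P4", "P5", "P6"])
    print("Design A:", group_a)
    print("Design B:", group_b)
```

The fixed seed just makes the assignment reproducible for your notes; drop it if you want a fresh split each time.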
Now that you have identified the type of test you want to do, how do you want to execute the test? Again, there are three different ways you can do it.
In-person, facilitated tests provide the opportunity to gather the most comprehensive feedback from participants about a design. They allow for any type of testing, and provide the most flexibility in what and how you test. In my experience, it’s the best method for testing mobile application designs.
An in-person facilitated test takes about an hour, and you should expect to pay participants at least $100 for their time (usually with a Visa or Amex gift card).
Remote usability testing allows you to get feedback on designs without having to facilitate any sessions. It costs less than in-person testing, since the services typically charge a flat fee of less than $50 per participant. However, it doesn’t allow for extensive conversations or in-depth testing of designs, and the results can be hit-or-miss. In my opinion, this method is good for getting feedback on early concepts, or when you need user feedback really quickly.
If you’re strapped for cash or time, consider in-person friends-and-family testing, AKA “guerrilla” testing. While not as useful as formal in-person testing, it’s still a good way to get “casual” feedback or do some simple A/B testing. It has no costs besides people’s time (and maybe some snacks or a lunch).
Identify who (and how many people) to test with
Who do you test with? People who reflect the key characteristics of your personas, of course. Use the information that fleshes out these personas to inform the creation of a recruiting screener. Be sure to have some “disqualifying” questions in this screener to prevent the wrong type of people from getting into the test group (an obvious disqualifying question: “Do you work for a competitor?”).
When you test a design, you need to get a lot of different people to look at it. The question, of course, is how many is “a lot”? Well, it depends on the type of test you are doing. If it’s an early test and you want to get high-level feedback, you may decide to have a larger group of people. If it’s a task-driven test that is focused on identifying potential design or usability issues, then you can “get away” with only five people.
Why five people? Because the law of diminishing returns applies. Jakob Nielsen, after his company spent years doing formal usability tests, looked at the data and identified that, on average, five test participants will uncover about 85% of the usability issues in the design tested, and each additional participant after that mostly re-discovers problems you have already seen. So, unless you REALLY want to test with lots of users, five is enough (details here: http://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/).
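The diminishing-returns curve behind that claim comes from the problem-discovery formula in the article linked above: the share of problems found by n users is 1 − (1 − L)^n, where L ≈ 0.31 is the average probability that a single user exposes any given problem. A quick sketch of the math:

```python
def problems_found(n_users, l=0.31):
    """Expected fraction of usability problems found by n test users,
    per the Nielsen/Landauer model: 1 - (1 - L)^n with L ~= 0.31."""
    return 1 - (1 - l) ** n_users


if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} users -> {problems_found(n):.0%} of problems found")
    # Five users land at roughly 84-85%; going from 5 to 8 users
    # adds only about 10 more percentage points.
```

Plug in other values of L if your product has an unusually homogeneous (or diverse) user base; the shape of the curve, and the case for small test groups, stays the same.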
When it comes to recruiting participants, there are usually local companies that do market research and can find people for you. Expect to pay a $100 “finder’s fee” for each person (not counting the compensation you will need to pay participants for their time).
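Putting the two per-person figures together gives a quick back-of-the-envelope budget for a five-person, in-person test (the dollar amounts are the rough estimates from this chapter, not fixed market rates):

```python
# Rough budget for an in-person facilitated test, using the
# ballpark figures from the text above.
RECRUITING_FEE = 100   # per participant, paid to the recruiting firm
COMPENSATION = 100     # per participant, e.g. a Visa/Amex gift card
N_PARTICIPANTS = 5     # the usual size for a task-driven test

total = N_PARTICIPANTS * (RECRUITING_FEE + COMPENSATION)
print(f"Estimated recruiting + compensation budget: ${total}")  # $1000
```

So plan on roughly a thousand dollars before you account for your own team’s time, a venue, or any tools.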
Write the test protocol
When writing the test protocol, you need to include the goals you defined earlier, as well as the list of materials/designs you are going to review. A lot of people like to script out everything: the questions, the introduction, the whole thing. I leave this to you, but the key is to present the design artifacts in a consistent way to all participants.
If you defined user scenarios to inform your designs (through either a narrative story or a journey map) then you will already have a list of tasks to use in your test, making this job easier. Using these scenarios in a usability test will also allow you to “sanity-check” these design artifacts to make sure that you have correctly understood user needs and intent – if many of your test participants say “I’d never do that” in response to the task they are given, you may need to rethink some previous decisions.
A decision you need to make at some point is whether you or someone on your team will do the testing, or whether you should “farm it out” to an independent UX consultancy. The answer depends on some key questions you should ask yourself. Can you be objective? Do you have the proper skills in-house? Do you have confidence in your team’s ability to do the testing? Consider doing both: do the formative tests internally, then bring in an outside consultant for the summative testing.
Now that you have fleshed out your plan, identified what you are testing, defined your participants, and have everything ready, it’s time to test. Which is what we’ll cover next time…