# UX101: How to analyze user research data (and produce results)

Once you have gathered your user data, through either user interviews or usability testing, you need to analyze it. While some may consider this a daunting task, it isn’t; it is, however, an activity that requires focus and objectivity. To make sure you are not unconsciously letting preconceived notions taint your analysis and final findings, do the analysis with at least one other person – preferably two or three. That way everyone can “check” each other’s interpretation of the data and mitigate the chance that the results will contain errors or misinterpretations.

The most important thing about analysis is this: don’t reach any conclusions before you start analyzing! If you do, you will (unconsciously) try to make the data fit your premise and will (unintentionally) distort the data to fit that preconceived notion. Always remember: the data doesn’t care if you’re right or wrong; it’s just data. I have repeatedly been pleasantly surprised and delighted in research analysis sessions when the data revealed insights and understanding that were totally unexpected and cool – insights we would never have identified if we had come to the data with “blinders” on.

The second thing to be mindful of is “pattern recognition.” We are wired to make connections… sometimes when no connections exist. Analysis involves interpreting data to form insights and findings, and much of that involves identifying patterns and trends in the data… but don’t invent connections just because you want to find them. Again, having more than one person involved helps mitigate this kind of “incorrect” pattern recognition.

## Analysis methods

When you are looking at data from user research or testing, there are multiple techniques you can use to analyze it. No matter which technique you use, there are common activities you will always have to do. You will always need to review the notes and audio/video of the session to get a sense of the person and their responses. Spend some time identifying any pain points or frustrations that were captured and the root cause of each frustration – was it the software or process being used, or an underlying issue the person has outside of that? Finally, when you are looking at usability test data, be fair and objective – you need to identify what worked as well as what didn’t in order to produce accurate findings.

Use one or more of the following data analysis techniques, and be aware that there are pros and cons to all of them:

## Affinity diagrams (aka Card Sorting)

Card sorting involves writing individual data points on sticky notes and putting them up on a wall. This lets you start looking at the data in isolation, without any preconceived notions. It is a good way to identify patterns and works especially well when you are building an information architecture or creating personas based on attributes of interview subjects.

Another benefit of this technique is that you can bring in other people to look at the data and let them organize it for and with you. This is called an open card sort: participants sort the cards with no pre-established groupings, and the groups they create reflect how they think about the data. When they are done, they are asked to name and describe the groups they created. (A closed card sort is when participants are asked to sort cards into groups provided to them.)
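If you run an open card sort with several participants, one common way to quantify the results is a co-occurrence count: how often did each pair of cards land in the same group? Here is a minimal sketch in Python; the card names and sorts below are hypothetical placeholders, not real study data:

```python
# Summarize an open card sort by counting how often each pair of cards
# was grouped together across participants (hypothetical example data).
from collections import Counter
from itertools import combinations

# One entry per participant: a list of groups, each group a set of card labels.
sorts = [
    [{"login", "password reset"}, {"billing", "invoices"}],
    [{"login", "billing"}, {"password reset", "invoices"}],
    [{"login", "password reset", "billing"}, {"invoices"}],
]

co_occurrence = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Pairs grouped together by most participants suggest strong categories.
for (card_a, card_b), count in co_occurrence.most_common():
    print(f"{card_a} + {card_b}: grouped together by {count} of {len(sorts)} participants")
```

Pairs with high counts are good candidates to live together in your information architecture; pairs that never co-occur probably belong in different sections.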

## Data-crunching

This means using tools like Excel to look at the data and identify patterns. When doing usability testing, I like to use a standard spreadsheet template with columns that list the task being tested, an area for notes, a drop-list that allows the notetaker to classify the note as they are typing (“usability issue”, “participant question”, etc.), and a flag for whether or not the person was successful in the task. This allows me to reconcile all the notes from the sessions and quickly analyze what worked, what didn’t, and what issues the participants encountered.
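If you record sessions in a spreadsheet like this, the number-crunching itself can be scripted. Here is a minimal sketch using pandas; the file name and the column names (“Participant”, “Task”, “Category”, “Success”) are assumptions based on the template described above, so adjust them to match your own:

```python
# Crunch usability-test notes exported from a spreadsheet. Column names are
# assumptions; Success is assumed to be recorded as 0/1 or True/False.
import pandas as pd

notes = pd.read_excel("usability_sessions.xlsx")

# Each participant/task pair appears once per note row, so de-duplicate
# before computing the success rate per task.
task_outcomes = notes.drop_duplicates(subset=["Participant", "Task"])
success_rate = task_outcomes.groupby("Task")["Success"].mean()

# Count how many usability issues were logged against each task.
issue_counts = (
    notes[notes["Category"] == "usability issue"]
    .groupby("Task")
    .size()
    .sort_values(ascending=False)
)

print("Success rate per task:\n", success_rate, "\n")
print("Usability issues per task:\n", issue_counts)
```

Tasks with low success rates and high issue counts are where your findings (and your recommendations) should focus.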

When you are looking at notes from user interviews, you will have to spend some time retyping the handwritten notes into Excel, so this approach has some extra effort baked in. However, it leaves you with a permanent electronic record of all the interview notes, which is a benefit over card sorting. Another benefit is that you can produce nice charts and graphs from the data, and many stakeholders like that kind of thing…

## Mind Mapping

Creating a mind map lets you build a visual map of the information you gathered through your research, and this “picture” gives you and your analysis team an opportunity to look at things differently. You can use a software tool or, if you are artistically inclined, create the mind map on a whiteboard or on large sheets of paper. Visualizing the information helps identify patterns and surfaces insights that might not be understood otherwise (remember: many people are visual learners, and this exercise works extremely well for those types of people).
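If you go the software route, even a few lines of code can generate a mind map you can iterate on. A minimal sketch using the Python graphviz package (which requires the Graphviz binaries to be installed); the themes and sub-themes here are hypothetical placeholders:

```python
# Render a simple research mind map as an image (example themes are made up).
from graphviz import Digraph

mindmap = Digraph("research_mindmap")
mindmap.node("root", "Interview findings")

themes = {
    "Pain points": ["Slow checkout", "Confusing navigation"],
    "Motivations": ["Saving time", "Price comparison"],
}
for theme, details in themes.items():
    mindmap.node(theme, theme)
    mindmap.edge("root", theme)
    for detail in details:
        mindmap.node(detail, detail)
        mindmap.edge(theme, detail)

# Writes research_mindmap.png to the working directory.
mindmap.render("research_mindmap", format="png", cleanup=True)
```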

## Dimensions

“Dimensions” is a good tool for identifying patterns to inform personas. You look at the data gathered from all the user interviews and define the key characteristics that came out of those conversations – for example, “Tech savviness”, “Confidence”, “Extrovert/Introvert” and “Charitable giving”. You then define the two ends of each dimension and place every interview subject on the line where they fall. This lets you see where there are similarities and where there are differences, which informs the creation of more accurate, representative personas. Outside of persona creation, however, this technique has limited application to other design or research activities.
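Dimensions are easy to chart once you have scored each participant. Here is a minimal sketch using matplotlib; the participants, dimensions, and scores below are hypothetical placeholders:

```python
# Plot interview subjects along dimension lines (all values are made up).
import matplotlib.pyplot as plt

# Scores run from 0 (left pole) to 10 (right pole).
dimensions = {
    "Tech savviness (novice / expert)": {"P1": 2, "P2": 8, "P3": 7, "P4": 3},
    "Confidence (hesitant / assured)": {"P1": 4, "P2": 9, "P3": 6, "P4": 2},
}

fig, ax = plt.subplots(figsize=(8, 1.5 * len(dimensions) + 1))
for row, (dimension, scores) in enumerate(dimensions.items()):
    ax.hlines(row, 0, 10, color="lightgray")  # the dimension line
    for participant, score in scores.items():
        ax.plot(score, row, "o", color="steelblue")
        ax.annotate(participant, (score, row),
                    textcoords="offset points", xytext=(0, 6), ha="center")

ax.set_yticks(range(len(dimensions)))
ax.set_yticklabels(list(dimensions.keys()))
ax.set_xlim(-0.5, 10.5)
ax.set_xlabel("0 = left pole, 10 = right pole")
plt.tight_layout()
plt.show()
```

Participants who cluster together across several dimensions are good candidates to collapse into a single persona.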

## Forming results and making recommendations

After you have done the proper amount of due diligence analyzing the data and forming results (you’ll know you’re done when your fingers are numb and your eyes feel like they are bleeding), you will need to package your results and define your recommendations. In the past I’ve written detailed, lengthy Word documents as well as large PowerPoint presentations… and I’ve found PowerPoint (or Keynote if you use OS X) works best.

Business stakeholders like to see presentations, and many of them have looked at my lengthy Word documents with a reaction bordering on contempt. “I have no time to read that,” they say, “give me a 10-page summary.” Do not take offense at such a reaction – most senior folks are like Jack Webb on Dragnet: they want “just the facts.”

You may be working on a project with an aggressive timeline and be tempted to just send the results in a quick e-mail to the key players and the rest of the team. Don’t do it. Spend the time pulling together a formal, professional document, because if you don’t, odds are a stakeholder will at some point question the budget line item for user experience research or testing and ask, “Why are we doing this? What are we getting out of this spend?” Having documented results from all your testing will allow you to respond convincingly to that type of question. It will also add to your portfolio, so it’s worth doing and doing well.
