# UX101: How to analyze user research data (and produce results)

Once you have gathered your user data, through either user interviews or usability testing, you need to analyze it. While some may consider this a daunting task, it’s not: it is, however, an activity that requires focus and objectivity. In order to make sure that you are not unconsciously letting your preconceived notions taint your analysis and final findings, do the analysis activity with at least one other person – preferably two or three. This way everyone can “check” each other’s interpretation of the data and mitigate the chance that the results will contain fallacies or mistakes.

The most important thing about analysis is this: Don’t reach any conclusions before you start analysis! When you do, you will (unconsciously) try to make the data fit your premise and you will (unintentionally) distort the data to fit that preconceived notion. Always remember: the data doesn’t care if you’re right or wrong, it’s just data. I have repeatedly been pleasantly surprised and delighted in research analysis sessions when the data revealed insights and understanding that were totally unexpected and cool – and we would never have identified those insights if we had come to the data with “blinders” on.

The second thing to be mindful of is “pattern recognition.” We are wired to make connections… sometimes when no connections exist. Analysis involves interpreting data to form insights and findings, and a lot of that involves identifying patterns and trends in the data… but don’t make false connections that aren’t there just because you want to find such things. Again, having more than one person helps to mitigate any such “incorrect” pattern recognition.

Analysis methods

When you are looking at data from user research or testing, there are multiple techniques you can use to analyze it. No matter which technique you choose, some activities are common to all of them. You will always need to review the notes and audio/video of the session to get a sense of the person and their responses. Spend some time identifying any pain points or frustrations that were captured, and the root cause of each frustration – was it the software or process being used, or was it an underlying issue the person has outside of that? Finally, when you are looking at usability test data, be fair and objective – you need to identify what worked and what didn’t in order to produce accurate findings.

Use one or more of the following data analysis methods, and be aware there are pros and cons to all of them:

Affinity diagrams (aka Card Sorting)

Card sorting allows you to write down individual data points on sticky notes and then put them on the wall. This allows you to start looking at the data in isolation, without any preconceived notions. This is a good way to identify patterns and works especially well when you are building an information architecture or creating personas based on attributes of interview subjects.

Another benefit of this technique is that you can bring in other people to look at the data and let them organize it for and with you. This is called an open card sort: participants are asked to sort cards with no pre-established groupings, and the groups they create reflect how they think of the data. After they are done, they are asked to describe the groups they created. (A closed card sort is when participants are asked to sort cards into groups provided to them.)

Data-crunching

This means using tools like Excel to look at the data and identify patterns. When doing usability testing, I like to use a standard spreadsheet template that has columns that list the task being tested, an area for notes, a drop-list that allows the notetaker to classify the note as they are typing (“usability issue”, “participant question”, etc.) and a “flag” for whether or not the person was successful in the task. This allows me to reconcile all the notes from the sessions to quickly analyze what worked, what didn’t, and what issues the participant encountered.

When you are looking at notes from user interviews, you will have to spend some time “retyping” the handwritten notes into Excel, so this approach has some extra effort baked in. However, it gives you a permanent electronic record of all the interview notes, which is a benefit over card sorting. Another benefit is that you can produce nice charts and graphs from the data, and many stakeholders like that kind of thing…
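The reconciliation step described above can be sketched in a few lines of code. This is a minimal illustration, not the actual spreadsheet template: the task names, note categories, and row layout below are hypothetical stand-ins for whatever your template uses.

```python
# A minimal sketch of reconciling usability-test notes, assuming each row
# mirrors a line in the note-taking spreadsheet: (task, note, category, success).
# All names and categories here are hypothetical examples.
from collections import defaultdict

notes = [
    ("Transfer funds", "Couldn't find the menu item", "usability issue", False),
    ("Transfer funds", "Completed on second try", "observation", True),
    ("Pay a bill", "Asked what 'payee' meant", "participant question", True),
    ("Pay a bill", "Finished quickly", "observation", True),
]

# Tally task success and count usability issues per task.
success = defaultdict(list)
issues = defaultdict(int)
for task, note, category, ok in notes:
    success[task].append(ok)
    if category == "usability issue":
        issues[task] += 1

for task in success:
    rate = sum(success[task]) / len(success[task])
    print(f"{task}: {rate:.0%} success, {issues[task]} usability issue(s)")
```

Even a toy tally like this shows why the classification drop-list matters: it lets you separate “what went wrong” from “what was merely asked” without rereading every note.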

Mind Mapping 

Doing a mind-map allows you to create a visual map of the information that you gathered through your research, and this “picture” gives you and your analysis team an opportunity to look at things differently. You can use a software tool or, if you are artistically inclined, you can create a mind-map on a whiteboard or on large sheets of paper. Visualizing the information helps identify patterns and informs insights that may not be understood otherwise (remember: many people are visual learners, and this exercise works extremely well for those types of people).

Dimensions

“Dimensions” is a good tool for identifying patterns to inform personas. You look at the data you have gathered from all the user interviews and you define key characteristics that came out of these conversations. Some examples are “Tech savviness”, “Confidence”, “Extrovert/Introvert” and “Charitable giving”. You then identify the two ends of the dimension and you place all the interview subjects on the line where they fall. This allows you to see where there are similarities and where there are differences, and this informs the creation of more accurate representative personas. Other than persona creation, however, it has limited application for other design or research activities.
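The technique above is easy to mock up in code. In this sketch, the participants, dimensions, and 0–10 scores are all hypothetical; the point is just to show how placing subjects on a line makes clusters visible.

```python
# A sketch of the "Dimensions" technique: hypothetical 0-10 scores place each
# interview subject on a line, so similarities and differences stand out.
subjects = {
    "P1": {"Tech savviness": 9, "Confidence": 7},
    "P2": {"Tech savviness": 2, "Confidence": 8},
    "P3": {"Tech savviness": 8, "Confidence": 6},
}

def render(dim):
    """Render one dimension as a text line with each subject's position marked."""
    line = ["-"] * 11  # positions 0..10 along the dimension
    for name, scores in subjects.items():
        line[scores[dim]] = name[-1]  # mark the spot with the participant number
    return f"low |{''.join(line)}| high"

for dim in ("Tech savviness", "Confidence"):
    print(f"{dim:15s} {render(dim)}")
```

Here P1 and P3 cluster at the high end of “Tech savviness” while P2 sits alone at the low end – exactly the kind of grouping that suggests two distinct personas.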

Forming results and making recommendations 

After you spend the proper amount of time analyzing the data and forming results (you’ll know you’re done when your fingers are numb and your eyes feel like they are bleeding) you will need to package your results and define your recommendations. In the past I’ve written detailed and lengthy Word documents as well as large PowerPoint presentations… and I’ve found PowerPoint (or Keynote if you use OS X) works best.

Business stakeholders like to see presentations, and many of them have looked at my lengthy Word documents with a reaction bordering on contempt. “I have no time to read that,” they say, “give me a 10-page summary.” Do not take offense at such a reaction – most senior folks like this are like Jack Webb on Dragnet: They want “just the facts.”

You may be working on a project with an aggressive timeline, and may be tempted to just send the results in a quick e-mail to the key players and the rest of the team. Don’t do it. Spend the time pulling together a formal, professional document, because if you don’t, odds are a stakeholder at some point will question the budget line item for user experience research or testing and ask, “Why are we doing this? What are we getting out of this spend?” Having documented results from all your testing will allow you to best respond to this type of question. It will also add to your portfolio, and so it’s worth doing and doing well.

What are best practices for participatory design sessions?

Participatory design is a great way to have an engaging conversation with users about a particular task or situation. The main thing to remember about participatory design is this obvious point: Users aren't designers. During the design session they are going to sketch out screens and workflows, but those artifacts aren't important: the thoughts behind the designs are.

My best practices:

It's a conversation, not a classroom exercise.

Don't look at participatory design as an assignment you give your "students." Consider the design activity an opportunity to have a conversation about the design problem/scenario. You need to do X – how would you see doing X? What process makes sense to you? If you need to enter information to do X, what would it be?

Be clear about the problem space/scenario.

Your time with each participant is important and valuable (they are taking time out of their personal and professional lives to meet with you). To get the most out of that time you need to be clear about the design problem and scenario you will be working on with the participant. Keep that description simple and obvious, to prevent confusion and ambiguity.

Don't judge.

Again, don't judge the designs or the participants – many of them will be very self-conscious about their design and drawing abilities. Don't criticize, even in a joking way. Be positive. And there are no bad ideas… even if the participant comes up with a contraption that would embarrass Rube Goldberg, don't let the participant know that.

Encourage, but don't patronize.

While you shouldn't judge, you also shouldn't be condescending. Be positive and polite but don't overpraise – it can come off as phony and turn off the participant.

Have someone take notes.

Have an assistant to the side take notes and (optimally) video and audio record the session. If the participant is not comfortable being recorded, just take notes. But the key is to capture as much of the conversation as possible. As the facilitator, you shouldn't be trying to capture everything as you go, you should be responding to the participant – again, it's a conversation.

See question on Quora

Can the physical appearance of a moderator influence the results of usability testing?


Yes, physical “attractiveness” can influence the responses of test participants. If you appear well dressed, it may make the participant feel they have to be more formal, and some may look to you as an authority figure; either reaction can affect how they respond. If you dress sloppily, then some participants may not take you or the exercise very seriously.

A personal example: I don’t consider myself model-handsome, but I do consider myself reasonably good looking. I used to dress very well when I facilitated tests (suit and tie), and at 6 foot 4 in a suit I can make quite an impression (think Penn Jillette).

Over a couple of tests I noticed that a couple of the female participants (and one male) were being… Well, they were flirting with me. Being an unbiased facilitator, I could not reciprocate, and I had to “throw out” one of the interviews because the person was… Well, she was pretty blatant.

I was flattered, but I also decided to have less of a “presence” in the sessions – the focus of the test obviously is what you are testing and what the participant does and says… Not on you. So now it’s low-key polo shirts and nondescript khaki pants, to “blend in” and not distract.

See question on Quora

Lessons in UX: How a high-level design makes everything better


I’ve been in several meetings where the phrase “high level design” was bandied about by designers and stakeholders. The importance of a high level design… well, it was unquestioned. A high level design was absolutely vital to ensure the success of the project. Everyone agreed we needed one, of course.

At one such meeting, a long time ago, someone asked the obvious question, “What is a high level design?” That person was me. I kinda sorta knew what it was, but not really. My asking resulted in a very heated discussion with my fellow designers. I found out that none of the people in the room could define it succinctly… and several disagreed with each other’s definitions. We hashed it out, and settled on a definition that was agreeable to all concerned.

After some research, I’ve settled on my own personal definition of what a high level design is, and what it brings to a design project – which is quite a lot, in my opinion.

A high level design represents the “foundation” of what you are trying to produce as a design solution. It is the core definition of what it is, who it’s for, why you are doing it, and what it will do. It contains details about user needs, business drivers, design principles, the core conceptual model of how people do things, and the core interactive model that the design needs to follow. A potentially more approachable way to think about it is to look at it as a brief such as one that would be given to an ad agency, a “project charter” for a UX team. So, now, the big question: how do you create one, and how do you use it? The following sections outline what I have done on a couple of projects to create a high level design.

First, know what you know.

Even if you haven’t formally done any research in the domain you are designing, you probably already know a lot about the space. Get it down. Do you have access to any research about what you are doing? Study it, and capture the key learnings that can apply to this project. Have different people look at the same data, to get a different perspective. Throw all this stuff up on the walls where you are working, and do an affinity exercise to organize it and identify trends and patterns.

Talk to business stakeholders

Get a sense of what is important to them on the project, and capture what they know. Depending on the experience of the people involved, they may be able to provide volumes of contextual information and understanding. An added bonus: you can capture, and keep in mind, the business goals of the project, to sanity check the design work against.

Define who you are designing for

Use or create personas to have an empathetic “target” for your work. Capture and/or define the “I want to” users bring to what you are creating. Understand the emotional and rational landscape that exists in the space, so you can either align with the desires (“I want to be noticed”) or identify potential points of resistance (“I don’t want to spend a lot of time doing X”).

Create a conceptual model

Visualize the space in a way that represents what people do in the domain you are representing, based on all of the above “intelligence” you have gathered. Keep it simple and approachable, and consider using visual weighting to represent aspects of the experience that are more important than others. A quick case study: My team created a model for mobile banking that represented what people did in a mobile banking context. We identified five areas of action, and also noticed that, frequently, one action could lead to another. The visual we produced presented the interconnected aspect of the domain and highlighted the key actions. It was very helpful as a reference doc that we could then “map” features to, and therefore prioritize them.

Sketch and collaborate

Take all the above and start sketching out how it could work. Use key functions as the basis of what you are sketching. “Sketchstorm” together, to compare and share ideas. Through elimination and discussion, identify the key sketches and ideas that look best. Flesh these out in something that you can get feedback on.

Test and refine

Get the conceptual sketches out to users. See if they can understand how the intended UI would work. Have the participants describe to you how it would work, to see if it is obvious enough and quickly understandable. Refine the design based on testing.

Create a draft information architecture

Group the functions and the information, using the aforementioned conceptual model to group like things and to make the more frequently accessed “stuff” be front and center, and the less important “stuff” at a lower level.

Define the interaction model

This is more for a mobile or tablet app than a web app, but you need to define how users engage with the information and functionality. Is it pinch? Swipe? Using pagination?

Document and package

It can be contained in one page or a hundred pages, but the more concise you make it, the better. Present it to stakeholders to get buy-in and (if needed) approval. Keep all your working artifacts in case you need to use them to “show your work” to any skeptics.

Doing a high level design is absolutely crucial to ensure success. It allows everyone to create and share a core vision as to what they are doing, a vision that the design team can then follow through on and quickly execute. Try it, it works.

UX for Mobile: What are the best methods for doing contextual user research for mobile?

Here’s what I did when I did a contextual research project around mobile usage and habits two years ago:

First, I interviewed people. A lot of people. 30 people in three different states. I asked them about how they used their phones, what they liked, what they disliked, their pain points, what they wanted in the next phone they bought, etc. Then I asked everyone I interviewed if they would be interested in logging their mobile usage for a full week in a daily diary, as well as filling out a daily “report”. 20 of them agreed, and 18 of them lived up to their commitment.

I had designed and had printed up small pocket-sized notebooks, with areas for comments as well as checkboxes that the participant could use to indicate what the activity was (“Game”, “Call”, “Text Message”, “Web”, etc.), and I also created a standard questionnaire for the daily reports. As the participants were in different states, I also gave them a self-addressed stamped envelope for them to send the diaries back to me.

All the participants were given VISA gift cards – one card for the interview time, and another card was mailed to them when they completed and returned the diaries.

After all the data was received, I typed it into Excel and analyzed it to identify usage patterns and form mobile personas, which our mobile design team uses to this very day…

The genesis for this approach came from a panel on mobile testing and research I attended at a UX conference – my article on that panel and additional context on the above idea is here: http://www.josephdickerson.com/b…

See question on Quora

Is NPS a good measure of a user’s experience? Also, are there any other calculations that could measure how a product is perceived by its users?

Yes, I think NPS is a fairly good indicator, but it’s a “one-dimensional” indicator. No matter how many customers you survey it is still just a survey, without much context.
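For context, the NPS calculation itself is simple: respondents rate “How likely are you to recommend…” on a 0–10 scale; 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch (the sample scores are made up):

```python
# Net Promoter Score from raw 0-10 survey responses.
# Promoters score 9-10, detractors 0-6; passives (7-8) only affect the total.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 9, 8, 7, 6, 3]))  # 3 promoters, 2 detractors out of 7
```

The single-number output is exactly why it’s “one-dimensional”: a score of 14 tells you nothing about *why* the detractors are detractors.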

Some other ways you can get a sense of how customers think of (and just as important, use) your site or product:

Analytics can be a useful indicator, but not in the way you might think. If you have any way of tracking through analytics how often the application is used, it can give you some great insights into how your product or site is used. If customers tend to “go” to a particular “place” often, this will tell you where to focus your attention. It also may help you identify potential pain points in your experience (I’ve used analytics to identify “drop off” points in an application process to make that process better).
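Finding those “drop off” points is just a matter of comparing counts between consecutive steps of a funnel. A sketch with made-up step names and page-view counts:

```python
# Hypothetical page-view counts for each step of an application funnel.
funnel = [
    ("Start application", 1000),
    ("Personal details", 720),
    ("Upload documents", 310),
    ("Review & submit", 290),
]

# Drop-off rate between consecutive steps; the biggest drop is where to focus.
drops = {}
for (step, n), (next_step, m) in zip(funnel, funnel[1:]):
    drops[f"{step} -> {next_step}"] = round(100 * (n - m) / n)

for transition, pct in drops.items():
    print(f"{transition}: {pct}% drop-off")
```

In this made-up data, the 57% drop at the document-upload step is the obvious candidate for a usability investigation.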

Social reputation is also a key indicator. Set up saved searches for your product name(s) in Facebook, Twitter and Google+ to monitor what customers are saying about your offerings. While not “scientific,” it can give you some practical “real world” understanding.

Finally, have regular in depth discussions with customers. Even if it’s a group of 10 people, having a conversation where they discuss what they like and don’t like about your site or application will give you insight into potential areas of improvement.

See question on Quora