The 5 W’s
Getting feedback from users is a critical part of designing the user experience for a product. But gathering feedback can be a daunting task. What is the best format? What questions do I ask? How do I translate what they are saying into something usable? Here is my guide to making user feedback more successful; I call it the 5 W’s:
- What are your objectives?
- What are your assumptions?
- How will you collect your data?
- Who are you going to engage?
- Who is going to conduct the session?
What are your objectives?
As with any activity, it’s important to approach it with intent. To get the best results, you need to be deliberate about what you are asking and, more importantly, why. Write out your objectives, turn them into questions, and refine those questions with your team. If you know that you will be testing a live product or a working prototype, then turn your objectives into realistic tasks for the test users to complete (for example, the objective “users can locate past orders” becomes the task “find the order you placed last month”). These questions/tasks are now the basis upon which everything else will be built and will determine the answers to the remaining W’s.
What are your assumptions?
Remember the scientific method from high school? You are doing something very similar here. Define what it is that you think you know about your users and how you think they will respond to the demo/questions/product. In many cases, your objective in conducting a user test or gathering feedback is to validate your assumptions about a product or experience. But I’ve seen enough aimless testing and useless feedback sessions to warrant calling this out as a separate step. Regardless of the order in which you approach these first two, make sure that your objectives and assumptions are aligned. They need to support each other or you will not end up with a clear, consistent focus in your session.
Side note: Defining your team’s assumptions going into the testing is also important because it lets you show them that their assumptions do not always line up with user expectations or abilities. A record of genuine assumptions that have been proven wrong will help to disarm office politics and “design by committee” mindsets by asserting that internal stakeholders may not actually know the right answers without talking to users.
This is especially helpful when dealing with upper management who may not have the best answer but still mandate that you do X in a certain way. Use these feedback sessions as a chance to test management’s assumptions in addition to your own; the resulting data will be a powerful tool against bad ideas.
How will you collect your data?
No matter what you are doing (user interview, focus group, product test, etc.), always record your session. I’ll say it again: always record your session. There are several reasons for this:
- Not everyone who needs to see/hear the session will be available at the time
- If you are running the session on your own, it will be hard to take notes and respond appropriately to the users. The recording will allow you to go back and review so that you can take notes when not under pressure. The quality of your notes will be much higher.
- You will be able to use actual excerpts from the session in presentations later. These can be powerful aids when dealing with “challenging” stakeholders.
Now you just need to determine which type of session is going to give you the most usable feedback. There are many different formats for collecting feedback, and not all of them are created equal:
Survey
Surveys tend to have very low engagement, and the success or failure of a survey (assuming you find a sample representative of your user base) depends entirely on how you phrase the questions and responses. Unless you’re an epic-level survey writer, I would generally say that surveys have a very low yield in terms of usable feedback. Things you can do to help, though (a sketch of what these rules might look like in practice follows the list):
- Avoid open-ended questions
- Use sorts/ratings/scales
- Force respondents to make a trade-off when choosing between answers
- Include specific null values (avoid “other” and “I don’t know”; instead use “not applicable” or “I don’t use this”)
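To make those rules concrete, here is a minimal sketch of how a question might be encoded; the `Question` structure and field names are my own illustration, not any particular survey tool’s API:

```python
from dataclasses import dataclass

@dataclass
class Question:
    """One closed-ended survey question."""
    prompt: str
    choices: list[str]                     # no open-ended/free-text answers
    max_selections: int = 1                # forces a trade-off between choices
    null_value: str = "I don't use this"   # a specific null, not "other"

# Respondents must pick exactly one choice, so they cannot hedge
# by selecting everything they mildly like.
q = Question(
    prompt="Which report do you open first on a typical day?",
    choices=["Sales summary", "Inventory", "Customer activity"],
)
print([*q.choices, q.null_value])
```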
Card Sorting
Card sorting exercises are ideal for determining your users’ priorities or understanding how they categorize and group certain functions/activities. However, card sorting is very difficult to do remotely. I would suggest using Trello for real-time sessions and Optimal Sort when you can’t get everyone on at the same time. Both pose unique challenges, but are generally better than trying to create your own setup in Google Drive.
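If you run a real-time sort in Trello, you will likely have to tabulate the results yourself. The standard first step is a co-occurrence count: how often each pair of cards landed in the same group. A minimal sketch, with invented sample data:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort: a list of groups, each group a set of card names.
sorts = [
    [{"Invoices", "Payments"}, {"Reports", "Dashboards"}],
    [{"Invoices", "Reports"}, {"Payments", "Dashboards"}],
    [{"Invoices", "Payments", "Reports"}, {"Dashboards"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        pair_counts.update(combinations(sorted(group), 2))

# Pairs grouped together most often suggest the categories users expect.
for pair, count in pair_counts.most_common(3):
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```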
User Interview
You are already gathering this type of feedback (albeit in a less structured manner) whenever you talk to customers about your product. When conducting a user interview, you are typically engaging one user at a time for qualitative feedback. You generally want to ask lots of open-ended questions and encourage them to put things into their own words. User interviews are good in that the data you get is a completely accurate picture, but only of that one user. Look for the “why” behind everything they say. What is their motivation for using this product? Why did they choose your product as the solution to their problem? What is their precise problem, and how could you better solve it?
Focus Group
Focus groups are great for those of us already proficient at herding cats. You are able to get information similar to what you would get from an interview, but now you can filter it through the context of a larger sample. The group can feed off of each other and provide more complete feedback. The major drawbacks are that, unless you work really hard to prevent it, you will usually only hear from the one or two loudest voices, and the sessions can very easily derail into a bitch fest. I recommend rotating the members of this group on a staggered schedule, so as to keep the group sessions small while still engaging a larger sample of your user base (see the scheduling sketch below). One final warning on focus groups: they tend to have a larger “observer dependency”, so I would encourage you to have several members of your team sitting in or watching from a live stream to balance that natural bias we have.
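One way to build that staggered rotation is to carry a couple of members forward from each session while cycling in the rest. A rough sketch of the scheduling logic; the group size and overlap are assumptions you would tune:

```python
def staggered_groups(participants, group_size=5, overlap=2):
    """Yield session rosters that carry `overlap` members forward
    from the previous session and rotate in fresh voices."""
    step = group_size - overlap
    for start in range(0, len(participants) - group_size + 1, step):
        yield participants[start:start + group_size]

pool = [f"user{i}" for i in range(1, 12)]
for session, roster in enumerate(staggered_groups(pool), start=1):
    print(f"Session {session}: {roster}")
```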
User Acceptance Testing
User acceptance testing (UAT) usually involves placing some portion of the product in front of a user and observing/measuring how well they interact with it. UAT has historically been done in a lab on-site, but technology is making this increasingly easy to handle remotely. You can easily set up a WebEx or GoToMeeting and record the user’s screen while they interact with the product. This is an excellent opportunity to test your assumptions in a way that most closely mirrors the real world. UAT, more than any other format, gives you usable, quantitative measurements to bring back to your team.
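What do those quantitative measurements look like? A minimal sketch of tallying task success rate and time-on-task from your session recordings; the tasks and numbers are invented for illustration:

```python
# Per-task observations pulled from recorded UAT sessions:
# (task, completed?, seconds to finish or give up)
observations = [
    ("create invoice", True, 74),
    ("create invoice", True, 91),
    ("create invoice", False, 203),
    ("export report", True, 45),
    ("export report", True, 52),
]

for task in sorted({t for t, _, _ in observations}):
    runs = [(ok, secs) for t, ok, secs in observations if t == task]
    success = sum(ok for ok, _ in runs) / len(runs)
    avg_time = sum(secs for _, secs in runs) / len(runs)
    print(f"{task}: {success:.0%} success, {avg_time:.0f}s average")
```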
Things to consider when choosing a format:
- Are you likely to find users (or representative samples) willing to engage with you in that format?
- If you find the engagement you need, is that format likely to yield usable data?
- How much engagement will you need to validate your assumptions? (5 users will give you 80-90% of all the responses you are going to get, but will not tell you the statistical significance of those responses; a quick check of this math follows the list.)
- Would your data quality be improved by sitting down with users 1-on-1?
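That 5-user figure comes from the widely cited problem-discovery model from Nielsen and Landauer: the share of problems found by n test users is 1 - (1 - L)^n, where L is the probability that a single user surfaces a given problem, commonly estimated at about 0.31. A quick check of the numbers:

```python
# Problem-discovery model: proportion of problems found = 1 - (1 - L)**n
L = 0.31  # commonly cited per-user discovery rate (Nielsen & Landauer)
for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> {found:.0%} of problems found")
```

Five users lands at roughly 84%, which is where the 80-90% rule of thumb comes from; going from 5 to 15 users buys you comparatively little.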
Who are you going to engage?
Choose your participants carefully; not everyone needs to be involved. As I mentioned in the last section, engaging with only 5 users will give you almost all of the responses you are going to get, even if you engaged with every single user or target in your market. But the burden of identifying priorities falls on you when reviewing this data. If you have the ability to pull from actual users of your product, you want to look for two types: (1) those who are active, generally happy with your product, and already provide you with usable, unsolicited feedback; and (2) customers who are unhappy with your product, at risk of leaving, or otherwise not engaged. I recommend about a 70/30 split between them, so that the positive feedback outweighs the negative. This gives “bad” users a chance to see that you are looking to improve your product and also a chance to interact with users who have perhaps already solved the issues with which they are currently frustrated.
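A trivial sketch of drawing that 70/30 mix, assuming you keep simple lists of both pools (the names and pool sizes here are invented):

```python
import random

happy = [f"happy_user_{i}" for i in range(1, 21)]      # active, satisfied, already giving feedback
at_risk = [f"at_risk_user_{i}" for i in range(1, 11)]  # unhappy, disengaged, or likely to leave

def pick_participants(n=10, happy_share=0.7):
    """Draw a session roster with roughly a 70/30 happy/at-risk split."""
    k = round(n * happy_share)
    return random.sample(happy, k) + random.sample(at_risk, n - k)

print(pick_participants())
```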
Speaking to those of you with existing customer/user groups: do not feel like you have to include everyone. Sometimes, despite their best intentions, some people just do not provide good feedback, so don’t waste your time trying to get it from them. In focus groups, the best feedback comes from smaller groups where people feel less intimidated and there are fewer egos getting in the way. Also, be mindful of those users likely to derail the discussion or bully others into their way of thinking.
Who is going to conduct the session?
There are several roles that need to be considered here, provided you have the team to support it:
- Interviewer (required)
Someone has to moderate the session, and they must remain neutral. Therefore, the best person for that job might not be the product owner; it might even be someone outside the organization. Don’t let egos prevent you from putting the best person in this role. It’s important that the interviewer remain impartial and avoid inserting their own views and opinions into the session. They must feel comfortable saying “no” and dealing with conflict, but they are not there to contribute to the discussion in any way other than asking questions and keeping the session on track.
- Reporter (recommended)
You are recording your session, so you could always go back and take notes later. However, having a reporter in the room to take notes provides a different human perspective, which can be valuable in removing the observer dependency (in any live format).
- Product Expert (recommended)
Again, this might not be the product owner, but having someone there to answer the tough product questions when they come up can make a tremendous difference in the users’ perception of the session. Having a product expert available becomes required any time you are showing a product demo or asking the user to interact with the product directly.
- Devil’s Advocate (optional)
If you have the ability, it can be beneficial to have a dedicated advocate for the “other side”. Challenging what the user thinks and says (in a constructive way) can force them to really think about what they are telling you. Often, we have gut reactions to things, and the Advocate can help by forcing the user to reevaluate their stance on an issue.