
New to Usability Testing? Avoid These Newbie Mistakes

  • Writer: Elizabeth Benker
  • Jan 5, 2019
  • 4 min read

Updated: Jul 13



When I was in graduate school, I took a class on usability testing. We read multiple textbooks and articles on the subject. We formed teams and practiced planning and conducting testing sessions. We analyzed and reported on the results. When the class was over, I used the techniques I’d learned in several other courses. I even attempted a usability test at work! By the time I graduated, I was confident I knew what I was doing.

Then I got the chance to see a seasoned professional conduct a usability test. And my whole approach changed overnight. It was like sitting in the first row behind the dugout at a major league baseball game instead of watching 6-year-olds play little league. Yes, the game mechanics are the same, but that’s where the similarity ends. There’s just no substitute for playing with and getting coached by experts.


(Are you thinking to yourself, “How hard is it to conduct a usability test? Isn’t it just watching someone use a tool and seeing where they get confused?” Well, this is accurate in the same way that surgery is just using a scalpel to cut someone open. There’s a lot that differentiates the beginners from the pros.)

In my career as a user experience professional, I’ve been fortunate to conduct over 100 usability tests. I’ve also had the pleasure of teaching usability testing to university and corporate students. What I’ve observed from helping newcomers hone their technique is that everyone makes the same two mistakes when they’re first starting out. Yes, everyone.

If you’re making these mistakes, you’re leaving a lot of valuable data on the table. You could even be skewing your usability testing results to be more favorable than they truly are.


Screenshot of MLS.com
Note: I have not performed usability testing on this site nor do I have any affiliation with it. It is illustrative only.

Mistake #1: You’re testing features, not how your product supports user needs

The first mistake is testing an application’s features instead of testing to support real-world tasks.


Let’s say you’re searching for real estate listings on MLS.com. How might you structure an initial usability testing task? Many beginners start with something like this:


Moderator: “How would you view MLS listings in Illinois?”


User: “Well, I’d just click the state of Illinois right here.”


Moderator: {Thinks to self: “Great! We’ve got a winning design here!”}


The problem with phrasing your testing task in this way is that it treats your user like a monkey instead of a human being with goals to accomplish. This task reveals if a user can click the link you’re signaling them to click. It doesn’t determine if people can use the site to do things that matter to them. This type of question doesn’t reflect a realistic scenario of use, or approximate what would drive a customer to use your product. (I mean, who searches for houses by state?)


A better way to structure a testing task is to describe a realistic scenario that would prompt a user to interact with your tool. Here’s an example:


Moderator: “Please imagine that you’re interested in viewing houses for sale in your area. How would you use this site to do that?”


User: “Well, I probably wouldn’t come to the MLS website to do that. I’m signed up through my realtor’s website and I get alerts when new houses come on the market that match my interests.”


Moderator: “That’s great to know!... {Asks a few questions to understand the user’s search criteria. Then asks:} Do you see a way to do that on this site?”


Can you spot the difference in these two approaches? In the first example, you’re not asking the user to connect what they’re doing on the site with something they do in real life; you’re really just asking them to click a button. In the second, you’re situating the use of this tool in context of something the user might actually do. Phrasing the task in this way opens the door for people to reflect on how they’d normally approach such a task, and even paves the way for them to tell you if your scenario is realistic or not. This is important to know because situating your testing tasks in a real-life example always nets you better data than when the task is purely speculative.




Mistake #2: You’re using the language of the interface

The second mistake beginners make is using words from the interface in the testing task itself. Let’s use our MLS example again:


Moderator: “How would you view MLS listings in Illinois?”

User: {Thinks to self: “I don’t know what MLS stands for, but I see it listed right here... I’ll just click it and see what happens.”} “I’d click this link that says ‘MLS Listings’.”

Moderator: {Thinks to self: “And we have another winner!”}

When you use the language of the interface in the testing task, the user can just click on things that match what the moderator says. It becomes a label matching game and not a test of the usability and suitability of an interface.

To fix this, use another word that’s not in the interface. Or better yet, describe a real-world scenario that would prompt the user to complete the task in the first place.

Moderator: “Please imagine that you’re in the market for a new house. How would you use this site to help with that?”

User: “I guess I’d use the ‘MLS Listings’ search... but I also see this tab that says ‘Find New Homes.’ Aren’t those the same thing?”

In the first example, the user’s confusion doesn’t come through because they’re just matching a label to the words the moderator said. In the second example, the user has freedom to explore the site more naturally and is able to voice their confusion without losing face.


Making these two subtle shifts when you’re creating usability testing tasks will ensure that you capture richer, more accurate data. And better-quality testing data is what leads to true insights for improving your product.


What tips do you find most helpful when conducting a usability test? I’d love to hear your ideas and suggestions.


