In this five-part series, Lauren discusses Website Usability in terms of knowing how to give your users what they want. She goes from helping you make the decision to conduct a Usability Study to interpreting the results and producing a report.
It’s a must-read for anyone who wants to make the site they’re developing more effective and user-friendly.
PART 1: Pay Now or Pay Later
PART 2: Planning a Web Usability Study
PART 3: Preparing for a Web Usability Study
PART 4: Implementing a Web Usability Study
PART 5: Interpreting Results and Producing a Report
Part 1: Pay Now or Pay Later
Usability isn’t necessarily a new term, but when it comes to how users interact with the Web, it takes on a whole new meaning. Web usability is evolving as we learn more and more about how our users interact with our online information, how they retrieve it and use it, how they want to move on our site, what they anticipate and what they expect within the realm of their experience.
The term ‘usability’ has historically involved testing how users actually get on a system and use it. With Web usability, I prefer to take it one step further and think of it in terms of how a user gets on a Web page and a) anticipates how to interact with it and b) actually interacts with it. This anticipation is what we need to test for along with use; it’s the intuition and the logic involved in the Web experience that differentiates Web usability from other types of usability. Failing to rethink our approach to usability predisposes us to failure – I don’t know about you, but I prefer the success path. Web users are a fickle group; let’s face it, if you don’t anticipate what they want, then give them what they want, how they want it and when they want it, you can forget the bookmark; they’re not returning.
Think of Web usability as a ‘pay now or pay later’ proposition. If you don’t check in with users early in the development process, you run the risk that you won’t meet their needs when you launch the site. When you realize their needs haven’t been met, you have to go back and rethink your whole approach, rehire the web developers and get them refocused on the project so they can redevelop the site. The terms ‘rethink’, ‘rehire’, ‘refocus’ and ‘redevelop’ should be conjuring up visions of dollar signs for you, not to mention the fact that your brand, image and credibility were damaged in the process of launching a site users weren’t able to use.
Testing for usability is a choice, not a requirement, and it is often the first step in the development process that management will scratch if time is running short. The next time you approach your web project, consider using the web usability strategy I developed to understand the scope of testing usability, how you could easily administer a test session, who you should test and what you want to test for. It may be less burdensome than you think!
Your Usability Strategy
Below are some considerations for your Web usability strategy:
- Goals of the study: What do you want to achieve with this study?
- Criteria: What do you want to test and what are you going to test for?
- Participants: Who will you invite to participate and why did you choose them?
- Resources: Who will need to be involved? (Consider whether observers will be needed to document the participants’ movements — this normally involves one participant and one observer working together.)
- Timeframe: When should this study take place to be meaningful for the project? (dates and times of day)
- ROI: Will there be a return on investment if this study goes forward? For example, if it costs $1,000 to conduct the study, would you save an anticipated $15,000 in rework and redevelopment if it’s not done? Will you gain any competitive advantage with your end result if the quality is higher than a competitor’s because you performed the study?
- Cost: How much will it cost to administer this study?
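To make the ROI question concrete, the back-of-the-envelope arithmetic can be sketched in a few lines; the figures below are the hypothetical ones from the example above (a $1,000 study that avoids $15,000 of rework):

```python
# Back-of-the-envelope ROI check for a usability study.
# The dollar figures are the hypothetical ones from the example above.

def study_roi(study_cost, avoided_rework_cost):
    """Return net savings and the savings-to-cost ratio."""
    net = avoided_rework_cost - study_cost
    ratio = avoided_rework_cost / study_cost
    return net, ratio

net, ratio = study_roi(study_cost=1_000, avoided_rework_cost=15_000)
print(f"Net savings: ${net:,} ({ratio:.0f}x return)")
# Net savings: $14,000 (15x return)
```

Even a rough calculation like this is often enough to make the case to management.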
With this Web strategy, you’ll be in a good position to seek approval from management to continue with your study. Your management will appreciate the time you took up front to understand what this study will entail and how much it is expected to cost.
Web developers, just like the application and software developers of the past, are too close to their projects and too close to the terminology, labels, navigation, look/feel, context, text and messages to truly know what users want. They’ve taken their best guess and used whatever knowledge they’ve gathered during their Web development career to make critical decisions and assumptions about users. Put a check in the process to validate the approach and confirm that the direction chosen was the right one and all the elements on the Webpage will be meaningful for the user.
With the ‘pay now or pay later’ philosophy, you’ll want to pay now, while your brand, image and credibility are still intact. Paying later, after the user has had an awful experience, compounds both your financial and reputation costs, and the user experience cannot be salvaged.
Part 2: Planning A Study
So, you’ve learned from earlier mistakes (or read Part 1 and avoided mistakes!) and decided that adding a step up-front in your web development process is a wise and prudent way to spend your resources, secure the confidence of your users and keep your brand image strong. The next step is to actually sit down with the Web Usability Strategy you completed in Part 1 of this series and decide how you want to administer the study.
I can’t stress enough that planning is the key to a successful study. With enough time to understand your approach, your surroundings, your users, and designing a quality evaluation tool, you’ll have all the necessary ingredients you need.
Along with the information gathered in the strategy, the approach involves making decisions early about the following:
- Where you’ll conduct the study.
You may have a training facility you can use — a conference room where you can hook up PCs — or you may use an off-site location or have the resources to utilize the services of a usability lab. Early in the planning process, you’ll want to decide where you will conduct the study and secure that space. Many studies fail because the facilities weren’t secured.
- How you’ll conduct the study.
Testing on the Web may mean that you’ll need flash cards that users will sort for logical order or navigation, or it may mean having computers with browsers or modems connected. Don’t underestimate the logistics! The level of effort involved in administering the study depends on what props and equipment you need.
- How you’ll select your participants, seek their willingness to participate, communicate the logistics and what incentive they will receive.
You could study the demographics of your users ahead of time and select users that meet the criteria for who you want to test (new versus experienced users, technology-savvy versus not, certain age ranges, etc.). Once you have their names or email addresses, you’d simply send them an invitation. The invitation could include what you’re trying to achieve with the study (e.g., “We’ve tried to anticipate your needs; now we need you to tell us if we’ve done our jobs well.”), a description of the incentive and how they will receive it. (It’s not unusual to offer a user $100 for a one-hour study or to have food at the session.)
With these foundation pieces in place and key decisions made, you’re ready to begin to develop your plan document.
Your Usability Plan
As with any plan, you’ll want to bring all the pieces of your plan together in one place. I prefer to put my plans in table format simply because it’s easier to read and forces me to be brief and to the point when I describe my activities. The headings for this plan could be:
- Phases
e.g. planning phase, logistics phase, user selection and communication phase, evaluation tool development, study implementation, results interpretation, reporting
- Activities within each phase
e.g. finalize location, send invitation using distribution list, confirm attendance, send payment for food or room, etc.
- Roles and responsibilities
e.g., identifying who will carry out each activity
- Planned and actual dates
e.g. the date by which the activity should be complete along with the actual date the activity was complete; if you’re running behind, this will help you get back on track
- Expected results tied to each activity
e.g. these are the deliverables that would be done in each phase such as completing the strategy, securing the location, completing the plan, completing the evaluation tool, finalizing the invitation, etc.
- Planned and actual costs
e.g. identify where charges will be incurred to carry out the study which may include how much the location costs, how much travel or hotel costs, how much for food, how much to copy any materials, how much to rent or secure any cameras, recorders, computers, etc. Compare your planned cost to your actual so you can work to stay within your budget.
- Issues
e.g. issues do arise, so add a column or separate area to keep track of them: how they will be resolved, by whom, date opened, date closed, etc. Not all issues are show-stoppers, but you want to be proactive and stay on top of them and their timely resolution, because one or two significant ones could bring your study to a halt.
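As a rough illustration of the planned-versus-actual comparison, here is a minimal sketch of tracking study costs; all line items and dollar amounts are hypothetical:

```python
# Track planned vs. actual costs per line item to spot budget slippage.
# All line items and amounts below are hypothetical.

budget = {
    # item: (planned, actual)
    "room rental": (300, 350),
    "food": (150, 140),
    "participant incentives": (1000, 1000),
    "materials and copying": (50, 75),
}

total_planned = sum(planned for planned, _ in budget.values())
total_actual = sum(actual for _, actual in budget.values())
print(f"Planned ${total_planned}, actual ${total_actual}, "
      f"variance ${total_actual - total_planned}")

# Flag the items that came in over budget.
for item, (planned, actual) in budget.items():
    if actual > planned:
        print(f"  over budget: {item} (+${actual - planned})")
```

A spreadsheet works just as well; the point is to compare the two columns regularly so you can stay within your budget.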
Design the plan document in whatever way works best for you and in whatever software package you choose. This plan document will be shared with the members of your team as well as your management, so choose a software package that produces a file you can email and that everyone can open. Now that you have secured management approval for your strategy document, you need to secure their approval for your plan document. This helps bring management along in the process and educates them on the benefits of performing the study, the returns that will be gained, and the level of effort it takes to conduct a Web usability study.
Part 3: Preparing for a Web Usability Study
Preparing for your Website Usability study involves two steps:
- Developing your evaluation method or tool
- Finalizing logistics
Let’s walk through them both in detail.
1. Developing your Evaluation Method or Tool
Regardless of how you plan to administer your usability study, you’ll want to establish an approach for recording the raw data generated by your participants. This may take the form of using recording equipment for later review and analysis or designing a hard-copy or online form that participants or observers will complete. Because resources were available, my experience has generally included one observer for each participant with the observer making the notations on the evaluation tool. This frees up the participant so they can focus on the task at hand. If I didn’t have access to these observers, I certainly would use a camera or other recording device to capture the participant’s information.
Developing the evaluation tool is an important step in the process, and you should include time in your plan to draft the tool, have it reviewed by your team or staff, approved by management and walked through one or two times by an objective third party (time the walkthrough and modify the tool as needed). The evaluation tool itself can be a Word document or take the form of a web form for automated input. Regardless of how you produce the tool, it should have several basic components.
- A number assigned to each participant (for anonymity)
- The observer’s name (in case you need to follow-up with them later)
The five parts below make up the contents of the evaluation tools I’ve created, but you can certainly develop your own.
Specific to Website studies, I’ve gotten all the information I needed using these basic components – it is a simple approach and it works. The main limitation on the length of the evaluation criteria is the time allotted for the study. If you have a 30-minute study, you want an evaluation tool that fits within that time and still leaves room for those who are more diligent or not as well paced as the others (and there are always a few stragglers!).
With the time constraint, you’ll want to be selective in your activities and questions and really focus on and prioritize the activities you need to test the most. I’ve managed 5-minute, 10-minute, 30-minute and 1-hour studies. If the study is more than 1 hour, you might want to rethink or re-prioritize your questions and activities; asking a participant to give more than an hour of their time sitting at a computer and working through what they might perceive as a difficult assignment may not be very appealing.
If the study takes too long to complete or is perceived to be boring or dragging on, it may impact participation for this or future studies. But, if your incentive is really good, they may be willing to stay, but I would bet that the quality of the participant’s output drops significantly after an hour. Another alternative would be to break a two-hour session in two by providing a meal in the middle, then resuming for the remaining hour.
These are the five parts I’ve used for Website Usability studies. The testing criteria are the questions, statements and activities that take place under each of the five parts:
Part 1 – General Survey
Includes a few general questions about web use (How often do you use the web? Have you ever visited our website? When you did, what were you primarily looking for?); you could certainly add other questions here if you needed more specific information from your participants about how they want to interact with your company (How would you like to communicate with us? From our company, do you prefer instructor-led or online training? Do you wish to subscribe to our newsletter?) This section includes the questions you need to better understand your user base and should be no more than 5-7 open-ended questions.
Scoring the General Survey
Responses will be text, so read through each carefully to learn more about your participant: how they use the web, how they use your website and how they want to interact with you. The information from these results can build on your understanding of the demographics associated with web use relative to your industry, user base or specific websites. These results can be used to improve business processes, create new communications strategies or even identify new market niches.
Part 2 – Treasure Hunt
In the Treasure Hunt, the participant is asked to take a deeper dive into the site to find, retrieve or download specific pieces of information, or perform a function or transaction. For example, you may have the following statement: “From the home page, find the benefits change form and modify your status from single to married.” You could make these activities increasingly difficult. For example, select several items on the website that you feel are particularly buried; you’ll want to test these especially if they are more than 3 clicks away from the home page.
In addition, make the direction more complex, for example, “From the home page, which path would you follow to find and download a leave slip; without returning to the home page, describe three features of the telephone directory.” If the site includes interactivity, add a few exercises to test the usability of these features and have the participant perform a mock interaction. If you want to perform the Treasure Hunt, place it as the first section of your study so the participants don’t have a chance to become more familiar with your site through other activities – you want first-impression material here!
Scoring the Treasure Hunt
(I’m not a statistician or rocket scientist, but I made up this intuition scoring system and it has worked well for me.) These exercises produce text responses, so it is difficult to apply a numeric scoring methodology directly. Instead, use four categories and ask the participant to circle the one that is most relevant to their experience: a) I completed this task, b) I’m not sure if I completed this task, c) I did not complete this task, or d) I gave up. Then, when you return to the raw results, you can count up how many were a, b, c or d and determine participant ease or difficulty based on these findings. For example, let’s say you have 10 participants; you can convert the a-d choices into a numbering scheme by counting up all your a’s, b’s, c’s and d’s.
Total up each to produce how many participants fell into each category. By specifying ranges, you can then determine the level of intuitiveness (0-3=low intuitive, 4-7=medium intuitive and 8-10=high intuitive). If you have 10 participants and all 10 were a) I completed this task, you can surmise that the activity was high intuitive (they found what they were looking for and you did a great job!). On the other hand, if 8 of the 10 were d) I gave up, you have a low intuitive score and need to do more work to place that item more intuitively for the user. When you prioritize which changes to make, focus on those low to medium intuitive scores first.
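The counting scheme above can be sketched in a few lines of Python; the function name and category letters are just illustrative, and the ranges assume 10 participants as in the example:

```python
from collections import Counter

def intuitiveness(responses):
    """responses: one letter per participant.
    'a' = completed, 'b' = not sure, 'c' = did not complete, 'd' = gave up.
    Ranges assume 10 participants, as in the example above."""
    counts = Counter(responses)
    completed = counts["a"]  # only clear completions count toward intuitiveness
    if completed <= 3:
        return "low intuitive"
    elif completed <= 7:
        return "medium intuitive"
    return "high intuitive"

print(intuitiveness("a" * 10))        # all 10 completed -> high intuitive
print(intuitiveness("aa" + "d" * 8))  # 8 of 10 gave up  -> low intuitive
```

With a different number of participants, you would scale the ranges (e.g., use the fraction who completed the task rather than the raw count).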
Part 3 – Anticipation/Intuition
This section may include a few exercises to confirm that your assumptions about labeling and navigation are meaningful to the user (e.g., when they click a link, they get exactly what they expected to get). This section can be done in a two-column table with “Label/Link” as the first column header (and your labels below – one on each row) and “Expectation” as the second column header. The observer will note what the participant expected to find in the second column. Users expect to find what they need easily and quickly on your site, so check to be sure that you’re referring to these items correctly, and that the names and identifiers make sense and are intuitive to the user.
This test with the table will net text responses, so you’ll want to read through the comments in the second column to see where labeling or navigation cues didn’t work well for the participants. If the majority noted a similar issue, I would tend to agree that more work is needed on that particular item. If you can’t score it, go with what the majority said, I always say! On the other hand, if you restructure the question to give simple statements like, “The screen layout works well for me,” you could use a 1-5 scale (1=strongly disagree, 2=disagree, 3=agree somewhat, 4=agree or 5=strongly agree) and have the participant circle the number that most closely matches their opinion about the statement.
I also like to give room for the participant to tell me why an item didn’t work well for them (so for any scores under 3, I request more information). Then, you can total up all the individual scores and divide by the number of participants to produce the average. With 10 participants, if 5 participants strongly agree (5×5=25), 3 participants agree somewhat (3×3=9) and 2 participants strongly disagree (2×1=2), add up the totals (25+9+2=36) and divide by 10 (your number of participants). 36 divided by 10 = 3.6. Against the 1-5 scoring, 3.6 would mean that some work needs to be done, but it’s in the middle-to-high range. Closer to 1 would indicate dissatisfaction and that more work is required; closer to 5 would mean that the participant was more satisfied and less work is required. You have the detail behind the scoring to determine which areas need work. You get the idea!
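The worked example above can be double-checked in a couple of lines; the function name is just illustrative:

```python
def likert_average(scores):
    """scores: one 1-5 rating per participant."""
    return sum(scores) / len(scores)

# 5 participants rated 5 (strongly agree), 3 rated 3 (agree somewhat),
# and 2 rated 1 (strongly disagree), as in the example above.
scores = [5] * 5 + [3] * 3 + [1] * 2
print(likert_average(scores))  # 3.6
```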
Part 4 – Terms and Language Use
Within the evaluation tool, ask if the writing was clear and easy to understand and any terms were encountered that the participant didn’t understand. Web writers and developers often use language, terms and acronyms specific to their company or industry, and it’s important to check to see if you’ve used any on your website that will be misunderstood.
Scoring Terms and Language
There is no system for scoring language use, but you’ll want to review the feedback to identify where any issues were uncovered and make changes to the site accordingly. You may find that several trends develop where several participants noted the same feedback. It would be a good idea to focus on these items first and the remainder after these high priority items are resolved.
Part 5 – Look & Feel
Close out your evaluation tool by soliciting feedback on the color scheme used, the readability of the text and font sizes, and the consistent use of a metaphor, theme or template through graphics from page to page, at both high levels and deeper levels of the site. You could use statements like, “The text and lettering are legible and readable.” Other items to test might be: “The colors were appropriate for the content and messages being presented,” “Graphics and pictures were not distracting,” or “The theme for the site was a good representation of the content.”
Scoring Look & Feel
You can use the 1-5 scale (1=strongly disagree, 2=disagree, 3=agree somewhat, 4=agree or 5=strongly agree) and have the participant circle the number that most closely matches their opinion about each statement. Provide room for more information about items that were scored 3 or lower, and focus on these items to prioritize which changes to make.
Within each of the five parts, make room on the evaluation tool for the observer to make their notations or attach additional sheets. This is the document that will be used to tally all the results, so give your observers the room they need to express their observations clearly.
Now to the second part in preparing for your Website Usability Test…
2. Finalizing Logistics
When the evaluation tool is near completion and participants and observers are confirmed, it’s also a good time to reconfirm that your location, equipment, food order, incentive or other items are secured. Ensure that everyone involved knows the exact times for the study and when they should arrive. Ask observers to arrive at least 30 minutes before the session. You don’t want any surprises on the day of the study, so take the extra step to confirm all these items again. I generally confirm three or four days before the study so there is time to resolve any issues.
There are several risks associated with administering a Website Usability test that you should be aware of and plan for. Some of these you can’t control and others you can. My philosophy is: focus on what you can control and pray about the rest! If you don’t manage these risks, the success of your entire study will be in jeopardy. Plan for contingencies or other workarounds, because you can have unanticipated problems.
Here is what you need, and a few of the most commonly encountered problems:
You will need the correct equipment to arrive on time and be set up, ready to log on to the Internet Service Provider, get through any firewalls, and arrive at the intended URL.
What could happen:
Equipment doesn’t arrive on time, the ISP is down, the firewall doesn’t allow you in or out, or the URL isn’t valid or operational.
What you can do:
Test, test, test. Give yourself plenty of time to get in the actual room the day or two before the session and test all the PCs, laptops, monitors, connections or other technology to be used. Call your technical representative, vendor or ISP to ensure they don’t have any planned maintenance, outages or anticipated issues on the day or time of your session. Call your firewall administrator, vendor or ISP well ahead of time to ensure that all firewall ‘maneuvering’ issues can be understood and addressed.
You will need one observer for each participant.
What could happen:
An observer is out unexpectedly sick and can’t make the session.
What you can do:
Set up two or three fill-in observers who can stand in and be ready to go if you are short observers. Or, use the facilitator as an observer after they do the opening. If you have a sick participant, you can simply reassign or release the observer (better to be short a participant than an observer!)
Access to the room.
You will definitely need access to a room for the duration of the test.
What could happen:
The room is locked when you get there. This is annoying enough when you’re in your own facility, and even more of a problem when you’re off-site.
What you can do:
Add to your ‘to do’ list a note to confirm with the organizer where keys are or who is responsible for providing you access to the room. If a pager number is available, that’s even better.
An Incentive or Reward for Participants
Ensure that the incentive or other token of appreciation that will be given to participants is in your possession or on site. You could give one to observers, as well.
What could happen:
The item you ordered specifically for this study hasn’t arrived.
What you can do:
Make a mad dash to the bank or ATM and get enough cash to give your participants money instead. I suggest $25 for a 15-minute study, $50 for a 30-minute study and $100 for a one-hour study. Remember to get a receipt so you can expense it when you get back to the office! Let’s face it, some of your participants are coming not out of the kindness of their hearts, but because you told them in your invitation and confirmation that there would be an incentive for them. You’ve got to deliver on this!
Okay, let’s recap where we are with all this, shall we?
Summary So Far
You’re making progress pulling all the details of your Website Usability study together.
In Part 1, you learned about the concept of usability testing, how it influences the outcome of your web project and how the strategy can lay the foundation for all the components of a study.
Part 2 revealed that planning is the secret. With an approved strategy, approach and implementation plan in place, the foundation pieces are in place for a successful study.
Part 3 moved you closer to implementation by introducing you to the five-part evaluation tool, the roles and responsibilities needed to administer a study and finalizing logistics.
Next is Part 4, which will walk you through the day of the study.
Part 5 will conclude the series with a review of the materials and the results produced by each.
Part 4: Implementing a Web Usability Study
Web usability testing isn’t as easy as you thought, is it? If you’re not a planner, it’s probably particularly difficult, but hopefully, this series of articles has given you tools and information to make you more comfortable. (I’m a planner, so I like this stuff…go figure!) Doing a study by the seat of your proverbial pants isn’t a good idea; the risks are too high for you, your company and your website. Now that you’ve moved through three parts of this five-part series, I hope you get a sense that there are many individual pieces that need to work together to implement your study. Implementing your study generally involves the following:
- Identifying roles and responsibilities
- Getting a snapshot of a typical session
1) Identifying roles and responsibilities
It’s truly a team effort to run a successful Website Usability study. One person physically could not do it alone unless all the participants were in one room, with a sophisticated interactive, audio/video interface and cameras and voice recorders poised in front of each participant. Yes, then, the facilitator could observe from a distance and sip a cup of expensive coffee and take in all the sights. It wouldn’t be much fun for the participants and the cost would be more than most of our budgets could afford. There are circumstances when these resources can be deployed, but I don’t see my management putting up that kind of money any time soon.
There are three key roles to administering the study. These are:
The facilitator is generally the organizer of the study and the one who will (before the session) facilitate a dry run with the observers, confirm participants, order food, confirm the location and equipment and secure the incentive and (during the session) open the actual session with a welcoming message, provide an overview of how the study will be run, communicate when there will be a break, what the observer will do during the study, and the ground rules. The facilitator will also close the session with a thank you message, give out the token of appreciation or other incentive and describe next steps or how the data will be used.
The participant will focus on addressing the activities and questions in the evaluation tool. During the study, the majority of participants will take it seriously and be glad that you asked for their input. They will focus on each activity and question and try to give an honest response. Some participants are conscientious to the point where they need to be prodded by the observer to move along and to refocus on providing first impressions. A participant who gets lost performing an activity will tend to become quiet and should be prompted by the observer to express their frustration or what’s happening. Some participants struggle with this because they are so used to figuring these types of challenges out on their own. While the observer should encourage them to verbalize, they should not give the participant the answer.
The observer focuses on documenting the user experience. A good observer will establish rapport with the participant at first, asking where they’re from or what their job function is. This relationship-building pays off because the more at ease the participant is, the more naturally they tend to perform each activity. Observers should note the responses clearly and try to summarize what the participant has said using complete sentences in clear handwriting.
2) Getting a Snapshot of a Session
Let’s transport you to the day of the study now where all this work will come together.
The time of the session: The actual session(s) could be held:
In the morning
Start no earlier than 9:00 a.m. and take into account any rush-hour traffic in the area to give everyone time to arrive and reduce the number of late arrivals. Consider the 9-11:45 timeframe as your window of opportunity for the morning (remember, the closer you get to the lunch hour, the more distracted the participants will be).
In the late afternoon
A session right after lunch would be disastrous, so I wouldn’t recommend an afternoon session until after 2:00 p.m. and it should end by 4:00 p.m. Learn about the traffic in the area and be respectful of your participants and end the session earlier if needed.
In the evening
A session after dinner would work (especially if you’re providing the dinner), so another window of opportunity might be the 7:00 p.m. to 9:00 p.m. timeframe. I wouldn’t run it after 9:00 p.m. because people get tired and the quality of the results would go down.
The facilitator, who probably cares the most about how the session goes, should:
- Arrive two hours early to ensure that equipment is installed properly and in good working order.
- Have in his/her possession all the documents generated to date in a project folder: the approved strategy, the implementation plan, the evaluation tool and criteria.
- Have in his/her possession a list of all the participants and observers who are expected to attend. Each should be checked off as they arrive.
- Have the incentive or other token of appreciation for the participants on their person or confirmed at the site.
Each observer should be seated at a terminal, laptop or PC and be ready to accept and greet participants as they enter.
Each participant should be introduced to their observer and asked to take a seat. Try to start at the appointed time; if you’re still missing participants, wait 3-5 more minutes, but no more. I like to respect the effort made by those who arrived on time, so I do go ahead and get started.
When the session is ready to begin, the facilitator should step to the front of the room and give a welcoming message and communicate ground rules. Below are the types of ground rules that the facilitator should describe during the opening of the session. These break the ice and give participants the “permission” they need to provide open and honest feedback and enjoy the session. The facilitator would say:
- There are no wrong answers; the purpose of the study is to validate the assumptions made by the web designers; this is not a test of you.
- You will not hurt anyone’s feelings with your input; we want to create a website that will work for our users, and the best way to achieve this is to put the site in front of you in this type of study.
- Perform the activities as you would back at your desk or in your work environment.
- Give the same amount of time to each activity as you normally would; let the observer know when you would give up.
- Speak out loud as you move through activities so the observer can track where you go, when you get lost, why you chose one direction over another, what you expected to find versus what you really found.
- Don’t ask your observer to provide direction or hints; they’ve been asked to prompt you to think through the solution, but not give you clues or answers.
- When the break will be.
- Relax, and have fun.
With this, the facilitator should look at the clock, note the time and begin the session. There will be a few minutes of chatter as observers and their participants get acquainted, then it will quiet down as they begin to move through the exercises. If the facilitator remains in the facilitator role, he/she may begin to move around the room, listening and observing. This makes the facilitator available to an observer or participant who may have a question.
Questions will come up during a session. Often, they come from a participant who can’t seem to work through an issue and wants further hints. The facilitator or observer should respond as follows:
If the question is:
"I’ve looked all over for this and can’t find it. Would you start here and move to here?"
The answer could be:
"Go ahead and give it a try and we’ll note where it took you. Of course, the observer would make notations about the path."
If the question is:
"I’m stuck at the home page and don’t have a clue where to go. How should I approach this task?"
The answer might be:
"Are there any logical options you see? Which one is the most logical that you think might move you in the right direction? Which way do you think you should go? Go ahead and try it."
Answer the question with a question and prompt the participant to choose the most logical path or the path they would most likely take. Giving hints or answers can invalidate the scoring and quality of the responses, so don’t let yourself get trapped.
Because participants will finish at different times, I recommend letting them go as they finish. Thank them individually, hand them their incentive or token of appreciation and let them go. There’s no need to hold up the quicker finishers for the slower ones.
When everyone has completed their work, the facilitator can step to the front of the room and thank everyone again for their participation and their time. Let them know how the results will be used and whether there are any other next steps you planned in your implementation.
Sometimes, a participant will ask to review changes and be kept in the loop. These participants feel a real commitment to improving the product beyond just giving feedback, so if you’re able, go ahead and include them.
That’s how you perform the actual study and script the session. It’s not hard, just details and some protocol to learn.
You’re well on your way to performing your own Website Usability study, and I hope Part 4 has given you some tips and tricks.
Part 5 is the last article in this series; it will focus on using the raw data and scoring results to create a meaningful report.
Part 5: Interpreting Results and Producing a Report
Quite a lot of work has taken place to bring you to this point in the website usability testing process. You’ve produced a strategy and plan, evaluation criteria and an evaluation tool, and conducted your website usability study. The final step is to package all the feedback and scoring together into a meaningful report that can be used to change and improve the website. How you package this information and the level of formality used depends on whether you are the sponsor yourself (it’s your website) or whether you were the facilitator who conducted the study for a business sponsor (it’s their website).
Of course, if you’re the sponsor, the report can be more informal, but you’ll still want it to be complete and to-the-point for your own records and for your management. For the business sponsor and their management, though, who committed human and budgetary resources, you’ll want to produce a more formal report. Regardless of how you package the report, the information should be easily understood, free of technical jargon and provide a complete picture of how the study was conducted, what users experienced and what conclusions have been formed from the results.
Regardless of whether you were the sponsor or if you facilitated the study for someone else, you could describe results in these high-level categories:
- Overview: State the goals of the study and what you hoped to achieve.
- Background: Describe when, where and how the study was conducted, how participants were selected, how many people participated in the study and how they were incented or rewarded; describe the timeframes for the study and how they were or were not met and why.
- Study Plan: Provide a synopsis of the evaluation criteria, what you tested against and why you tested those particular features.
- Study Results: Break down the five categories according to the evaluation tool you used. For example, in this series of articles, I recommended 1) a General Survey, 2) Treasure Hunt, 3) Anticipation/Intuition, 4) Terms and Language and 5) Look and Feel. These would be the headings for the results report.
- Budget Expenditures: Describe the costs associated with administering the study and provide an analysis of the forecasted and actual costs; if you were over or under budget, describe why. Include costs for communications/mailings, transportation, lodging, facilities, equipment rentals, food, incentives, premiums, etc.
- Conclusion: Reconsider the goals you or your sponsor set out to achieve and state how you did or did not achieve them. Communicate how this study benefited your project, product, process, organization, industry, customer, corporation, etc., and whether you felt the time and energy spent was worth it. Note that conclusions have been formed as they relate to the evaluation criteria and are described in further detail in this report.
Interpret and Summarize Your Findings
For the purposes of this part of the article, we’re going to focus specifically on the study results and how to interpret and package them for the final report. I am not a statistician, so these are simply ideas and ways that I’ve compiled my reports in the past; if you have access to a statistician, certainly see if they have a more proven methodology for tabulating these types of results. If there’s an easier way to sift through all the raw information to uncover the proverbial ‘golden nuggets,’ I’d love to hear about it.
You’ve collected raw data and now you need to summarize the findings, form conclusions and recommendations and offer any other feedback that’s been provided to you. Most of your information will come from the facts that have been provided by the participants and their experience. It’s important to use the facts that have been provided by your participants and not read too much into the information: interpret what’s comfortable for you, collaborate with others on your team if something proves difficult to understand or interpret, and try not to embellish, over-simplify or exaggerate. Be honest in reporting what the data is telling you. If your study produced significant issues, remember that’s what you came here for. You did this study to improve the quality of the website, and whatever the findings are, you tested your users and achieved your goal. Your management bought into the study and agreed that it was a worthwhile activity to undertake; now, if the results reveal that there are problems, that’s what you were looking for and what you can now begin to digest.
Scoring and Conclusions
We’ll use the five parts of the study described in Part 3 of this series and review the results based on the scoring system we used for each part. If you haven’t already done so, go ahead and create a document in your word processing system now so you can begin to document your interpretations and conclusions. Then, you can build the other sections of the report around this most important section.
Part 1 — General Survey
This survey collected general information about your participants and may have included text or check box responses. As you read through the responses to each of the questions, notice not only what was said but how many said it to form a conclusion.
For example: You might report "20 people participated in the usability study and 15 stated that they use the web every day and 5 use it once a week. From this, I can conclude that the majority of the participants are regular web users, familiar with how the web and web pages work and that they are well-versed in the capabilities and functions of web-related navigation and materials." If these 15 were priority customers or an audience that you are purposefully driving to your website, you can make a more educated assumption about their skill level and use it to improve the site.
Look for logical groupings of feedback and report them accordingly. For example: You might report "20 people participated in the study, and all 20 stated that they visit our website to retrieve operating forms. From this, I can conclude that we should do an extra-special job of providing navigation and labeling that leads our users quickly to forms." If only 10 of the 20 came to your site for forms, your conclusion may be different depending on what the other 10 came for.
Part 2 — Treasure Hunt
In the Treasure Hunt, you provided very specific directions and instructions and the participant carried them out. They may have been asked to find a specific piece of information, use an interactive feature, download an application or document, etc. (Note that I could have used the term "Wild Goose Chase," but I chose "Treasure Hunt" to make it a little more appealing!) You also may have sent them after a piece of information that you felt was particularly buried in the site or that you had a difficult time naming or placing on the site. You’ll have to read text answers, but if you used the scoring system a) I completed this task, b) I’m not sure if I completed this task, c) I did not complete this task or d) I gave up, you can count how many participants gave each response.
For example: Of the 20 participants, 7 completed the task, 4 weren’t sure if they completed the task, 6 did not complete the task and 3 gave up. What can you conclude from these numbers? The best way is to see if they convert to the intuitive scoring system discussed in Part 3. While you won’t want to go into all the details of how the scoring system works, you will want to let the reader know that you’ve determined an equally balanced low, medium and high intuitive range and how these results relate to those ranges. For example, you might report, "The Treasure Hunt produces results that relate to how intuitive a particular activity was. There were a total of 20 participants, so a range of 0-20 will be used to help you understand these results. The ranges for the Treasure Hunt for intuitiveness have been determined as follows: 0-7=low intuitive, 8-14=medium intuitive, 15-20=high intuitive. You can see that the more people who complete the task, the more intuitively we’ve apparently built that specific component (we’ve been successful and our assumptions paid off!). Likewise, fewer people completing the task would be evidence that more work is needed to increase intuitiveness for that item."
By reading the text responses and manually counting for the intuitive score, you can give the reader a good picture of the user experience while finding information and performing activities on your site.
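The tallying and range-mapping described above can be sketched in a few lines of Python. This is purely a hypothetical illustration: the response labels and the band boundaries (0-7, 8-14, 15-20 for 20 participants) are taken from the example, and assigning a count of exactly 7 to the low band is an assumption.

```python
# Hypothetical sketch: tally Treasure Hunt responses and map the number of
# completions onto the low/medium/high intuitive ranges from the example.
from collections import Counter

# 20 participants, using the example counts from the article
responses = (["completed"] * 7 + ["not sure"] * 4 +
             ["not completed"] * 6 + ["gave up"] * 3)

tally = Counter(responses)
completed = tally["completed"]
total = len(responses)

# Example band boundaries for 20 participants (an assumption for illustration)
if completed <= 7:
    rating = "low intuitive"       # 0-7 completions
elif completed <= 14:
    rating = "medium intuitive"    # 8-14 completions
else:
    rating = "high intuitive"      # 15-20 completions

print(f"{completed} of {total} completed the task: {rating}")
```

With the example numbers, 7 of 20 completions lands in the low-intuitive band, matching the article’s point that few completions signal more work is needed.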
Part 3 — Anticipation/Intuition
Web developers make many assumptions about labeling and navigation on websites, so this section will help clarify and validate that users can actually use those paths to reach their desired destination. You may have text responses and a scoring system if you provided a statement and asked the participant to circle their range of agreement and disagreement with the statement. Of course, being able to produce a number helps us again as we use it for our scoring and conclusions.
For example: By counting all the 1, 2, 3, 4, and 5 scores (assuming it’s a five-point scale you used), you can again see how many participants found a certain statement to be true (3-agree somewhat, 4-agree, 5-strongly agree) or through the range to false (2-disagree, 1-disagree strongly). Totaling all these and dividing by the number of participants can produce an average score. With this score you can form conclusions or prioritize which work you might want to perform first. In the example used in the previous article, you could total up all the individual scores and divide by the number of participants to produce the average. With 10 participants, if 5 participants strongly agree (5×5=25), 3 participants agree somewhat (3×3=9) and 2 participants strongly disagree (2×1=2), add up the totals (25+9+2=36) and divide by 10 (your number of participants). 36 divided by 10 = 3.6. From this, we can conclude that 3.6 against the 1-5 scoring would mean that more work needs to be done, but it’s in the middle range. If you asked the participants to provide more information on items they scored 3 or lower, you can zoom in on the areas in the website that need work. Closer to the 1 would mean dissatisfaction and that more work is required; closer to the 5 means that the participant was more satisfied and less work is required. It depends, of course, how you structure your statements as to how you use this type of scoring.
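The averaging arithmetic above can be sketched as a tiny Python calculation. This is a minimal illustration only, assuming the five-point scale and the hypothetical response counts from the example (5 strongly agree, 3 agree somewhat, 2 strongly disagree).

```python
# Hypothetical sketch of the item-average calculation described above:
# each participant circles a score from 1 (disagree strongly) to
# 5 (strongly agree); the average is the sum of all circled scores
# divided by the number of participants.
score_counts = {5: 5, 3: 3, 1: 2}  # score -> number of participants

total_points = sum(score * count for score, count in score_counts.items())
participants = sum(score_counts.values())
average = total_points / participants

print(average)  # (25 + 9 + 2) / 10 = 3.6
```

The same calculation applies to the Look and Feel statements later in this part, since they reuse the 1-5 scale.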
Use a conversational tone in your report with language that’s easy for the reader to understand. You might report, "Results were averaged among all participants and produced a 3.6 total for this item. With the scoring system we’ve used, an average closer to a 1 would mean that we have much work left to do; closer to a 5 would mean less. Regardless, several improvements have been recommended and they are as follows…"
Part 4 — Terms and Language Use
In this section you probably asked specific questions to see if specific or generic terms made sense and were understood and if language use was appropriate. The feedback provided here will likely be text or check boxes, so you’ll have to read through each participant’s feedback. As you uncover trends, note how many people contributed to that trend. It helps justify why a change is needed if more people made similar recommendations.
For example: If there were 10 participants, and 3 noted that they preferred the word ‘car’ to ‘automobile,’ you could decide if it was meaningful that 30% of your participants asked for the same change. If all 10 found some glaring item that they asked be changed, you’d have 100% of your participants asking for it, and you’d likely make it. Again, as these trends appear, count them — it really helps justify changes if priorities are tight or resources are running thin. In your conclusion, decide if you want to report that 3 participants felt a certain way or if 30% felt a certain way. There may be cases where you believe reporting 30% is more compelling than a measly old 3.
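The count-versus-percentage choice above is simple enough to sketch in Python. The numbers here are the hypothetical ones from the example (3 of 10 participants requesting a wording change).

```python
# Hypothetical sketch: report a requested change both as a raw count and
# as a percentage, so you can pick the more compelling figure.
participants = 10
requested_change = 3  # e.g. participants who preferred 'car' over 'automobile'

percentage = round(100 * requested_change / participants)
print(f"{requested_change} of {participants} participants "
      f"({percentage}%) requested the change")
```

A small script like this is overkill for one item, but the same pattern scales when you are tallying dozens of term-preference trends across a long survey.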
Part 5 — Look and Feel
If you asked participants to comment on the look of the site, you probably asked about graphic usage, colors, font sizes, readability of text, foreground versus background compatibility, use of a metaphor, ability to move through the site, etc. You can use the 1-5 scoring system again here as well. (1-disagree strongly, 2-disagree, 3-agree somewhat, 4-agree or 5-strongly agree). How you phrase the statements will determine how this scoring will work for you.
For example: If the statement was, "I like the bright purple background with yellow checkered squares," count how many participants chose each response (1-5) and do the math as you did previously. With 10 participants, if 5 participants strongly agree (5×5=25), 3 participants agree somewhat (3×3=9) and 2 participants strongly disagree (2×1=2), add up the totals (25+9+2=36) and divide by 10 (your number of participants). 36 divided by 10 = 3.6. From this, we can conclude that 3.6 against the 1-5 scoring would mean that some work needs to be done, but it’s in the middle range. Closer to the 1 would mean dissatisfaction and that more work is required; closer to the 5 means that the participant was more satisfied and less work is required. You’ll be able to produce an average and discuss the areas needing work if you asked for more detail on items scored 3 and lower. You might report "The bright purple background with yellow checkered squares was met with mixed emotion. An average of 3.6 was produced by this item, which may mean that it’s not an ideal color/shape combination. Participants who scored this item 3 or lower provided the following ideas for improving the color and shapes of the background…"
What’s Your Approach?
In closing, each of us has a unique way that we approach our projects and usability is no different. It’s subjective, driven by the experiences or lack of experience of those involved in designing the strategy, plan, evaluation tool and interpreting the results, etc. So, be creative and do some research on your own until you find a system that works for you. There are certainly world-renowned experts, books, websites and associations on the subject, so you have many resources to explore as you find your unique approach. Just search for the keyword ‘usability’ on the web and you’ll see what I mean.
Through experimentation and research, you’ll soon become a usability expert within your own organization. This field adds incredible value to our business processes, products and systems, and I believe that management has not yet fully understood and embraced this fact. If usability is a pay now or pay later proposition, why not do it up-front, before coding has begun, while a website is in the planning or development stage, to validate the designer’s assumptions and make sure users get what they need? Failing to do so is an error that will have negative repercussions for customers, your corporate brand, your system’s credibility and your staff; it could be a fatal, unrecoverable error. Stress to your management and the management in your information systems areas that usability is a key step in the development process and that it must remain in the plan no matter how short timeframes are or how expensive it is perceived to be. The usability process can be shortened, fewer participants can be sought, internal staff can be used to administer the test and other means can be taken to reduce the cost and still produce meaningful results.
Thank you for the opportunity to share my approaches with you! I hope you’ve found this series helpful and that you’ll experiment with some of my strategies the next time you have a website usability opportunity.