Today’s corporate firms focus increasingly on their online presence. However, few understand the long-term implications of not testing a site’s usability before it goes online, and in a recessionary era like the one just past, usability is all too easily forgotten. Often no funds are allocated to usability testing, even though it’s a key component of any online or interactive project. In an ideal world, a Website would be evaluated for usability from a concept’s inception through to final execution and upload.
So if the budget’s tight, does it automatically mean an unusable site? Not according to Dr. Deborah J. Mayhew, proponent of "Discount Usability" — low-cost testing techniques. According to Dr. Mayhew,
Discount Usability engineering started partly in response to the fact that investment in usability has always been a hard sell in the software development industry, but mostly in response to the fast-paced and cost-conscious development cycles of early Websites.
Discount usability techniques can be used to test a site with real users without setting up a state-of-the-art usability lab. The methodologies are simple and easy to implement, and a test can be completed in a short period of time, which puts discount usability well within the reach of those who can’t afford the time or money to commission professional laboratory usability studies.
According to Jakob Nielsen,
Discount usability engineering plays two roles in the path toward higher usability maturity:
- it smoothes the way by lowering the threshold of getting started, and
- it can be used on fast-paced or low-budget projects even in organizations that use a more careful approach for their high-priority projects.
Define your Goals
Before jumping into the test, you need to define your goal: what you want the test to achieve. As user testing occurs at various stages of the project, you need to define your requirements for the test at each stage. The requirements can vary considerably:
- confirming that the user can perform a certain task,
- ascertaining how long it takes them to perform that task, and
- identifying whether the user understands the buttons or icons you have created,
…all represent potential goals of those undertaking discount usability.
Indeed, the "dreamed for" goals can be many, but you really need to zero in on the primary requirements of the stage for which you’re testing the site. You’ll also need to decide whether the results from the test can be used directly, or whether they’ll have to be calculated from the observations before they represent ‘actionable’ findings. Considering these aspects will help you chose an appropriate test methodology and task list.
Discount usability methods encourage regular evaluations of the interface, allow problems to be identified and addressed early, and facilitate the execution of more efficient and focused formal usability testing. But for discount methods, just as for all usability testing methods, we need to be able to ascertain which evaluation goals are achieved and how, the costs, the benefits, and the conditions of application.
As Wayne D. Gray says, "The time required to apply these techniques is almost totally a function of the amount, degree, and level of analysis required to understand how the human and the computer must interact to perform the task. Shortcutting the time required to do these analyses may make interface design faster but the result is no bargain."
Decide on the Evaluation Method
Some methods are suitable for one stage — and others for another stage — of the project’s development. Some of the most frequently used methods of Discount Usability testing are:
- Paper Prototyping
- Heuristic Evaluation
- Scenarios
Paper Prototyping

For this test, sketches or textual navigation are placed on paper cards, or printouts of interfaces mocked up in HTML or created with Photoshop are used. Paper prototypes can be used throughout the development cycle, or before the "real" interface is created. Task-based testing is completed using the paper prototypes, and user feedback is gathered verbally and/or by observing participants’ usage of the prototypes.
Advantages of using the paper prototyping method are:
- it’s inexpensive,
- problems can be fixed quickly,
- it can be used at any stage of development, and most importantly,
- the test can be conducted anywhere.
The main disadvantage of paper prototypes is that complex prototypes are difficult to manage.
Heuristic Evaluation

Heuristic Evaluation should be performed either early in the development phase, or after the usability test has been conducted. In this method, the testers perform an informed, critical inspection using a set of rules, checking for usability problems, lack of adherence to standards, and inconsistency of design — so this method doesn’t require the involvement of test participants.
Heuristic Evaluation is quite inexpensive and consumes the least time of these methods. However, the task list to be reviewed can be exhaustive. Evaluators’ inexperience and team members’ resistance to the findings can reduce the method’s effectiveness. And while the task list helps verify problems, this method doesn’t provide any design solutions.
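One way to keep a heuristic evaluation’s output manageable is to tag each problem with the heuristic it violates and a severity rating (Nielsen’s conventional scale runs from 0, not a problem, to 4, usability catastrophe), then tally the results. The findings below are hypothetical examples, invented for illustration:

```python
from collections import Counter

# Hypothetical findings from a heuristic evaluation. Each problem is tagged
# with the heuristic it violates and a severity rating on Nielsen's
# conventional 0-4 scale (0 = not a problem, 4 = usability catastrophe).
findings = [
    {"heuristic": "Visibility of system status", "severity": 3},
    {"heuristic": "Consistency and standards",   "severity": 2},
    {"heuristic": "Consistency and standards",   "severity": 4},
    {"heuristic": "Error prevention",            "severity": 1},
]

# Count problems per heuristic, and pull out the high-severity ones
# that should be fixed first.
by_heuristic = Counter(f["heuristic"] for f in findings)
must_fix = [f for f in findings if f["severity"] >= 3]

print("Problems per heuristic:", dict(by_heuristic))
print(f"High-severity problems to prioritise: {len(must_fix)}")
```

Sorting findings this way helps counter the "exhaustive task list" problem: the team sees immediately which heuristics are violated most often and which problems demand attention first.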
Scenarios

Creating scenarios is another popular discount usability method. According to Jakob Nielsen,
The entire idea behind prototyping is to cut down on the complexity of implementation by eliminating parts of the full system. Horizontal prototypes reduce the level of functionality and result in a user interface surface layer, while vertical prototypes reduce the number of features and implement the full functionality of those chosen (i.e. we get a part of the system to play with).
While horizontal scenarios are effective for evaluating site navigation (during the early phases of development), vertical scenarios are helpful in the later phases, when development is in progress and the search or submission systems are being finalized.
As well as being inexpensive, scenarios allow independent areas of the interface to be simulated for usability evaluation. However, their limitation is that the user has to follow a predetermined path, with options predefined by the evaluators.
Conducting the Test
In a study conducted by Rolf Molich and Christian Gram at the Technical University of Denmark, 50 teams of students conducted usability tests of commercial Websites as part of a user interface design class. The average time spent by each team was 39 hours. From this it was inferred that, on average, a discount usability test need take no more than about 39 hours.
And Jakob Nielsen argues that:
"A usability test with 5 users will typically uncover 80% of the site-level usability problems plus about half of the page-level usability problems on those pages that users happen to visit during the test."
The guidelines for conducting discount tests are simple: use a limited number of participants, and conduct the tests in an informal environment (thus making the participants comfortable with their surroundings). Don’t isolate the participants completely, as they might feel as if they’re "being tested" and hence become more self-conscious about their actions, which would, in turn, generate misleading results. On the other hand, don’t have too many distractions or too much noise around the participants, as this might also affect your results.
If you’ve selected the participants properly, you’ll have a mix of individuals with varied skills and experience for your evaluation. Before you start the test, give them an overview of the process and what they can do to help you get the best information. Be honest with your instructions. Assure the participants that it is not they who are being evaluated, but the site. And most importantly, ask them to "think aloud".
The method you’ve designed for the evaluation will be influenced by the goals you’ve defined for the test. However, it’s important to build in the flexibility to deviate from the anticipated procedure, in case the participants supply useful information that wasn’t planned for during the course of the test. Responses will be more insightful when the evaluation is run like a conversation rather than an interview or test.
At times, details can be missed during the test: it’s hard to moderate and collect feedback at the same time. So it’s best to have at least one person observe the test and note down the participants’ responses in each situation. Make sure the observer clearly documents the problems the participants face, and also any solutions they suggest. If you have a Webcam, try to capture participants’ movements while they perform their assigned tasks.
Don’t lead the participants: let them vocalize their thoughts and lead you through their experience. Avoid any phrase that can influence their views on the product. Ensure that the observer records all the changes in facial expressions or body language the participant makes as they perform the task. More often than not, these expressions are more revealing than the participants’ verbal communication.
Evaluating and Reporting your Findings
Identify the key comments, phrases, problems and expressions of the participants at each stage so that you can make a critical analysis of the evaluation. Clearly differentiate between your observations and the verbal response of the participant. This could make all the difference in the way you analyze the report.
Base the conclusion of your report on the usability shortcomings of the Website rather than focusing attention on how an individual participant performed his or her assigned task. Mention the task list, the participants’ profiles, and the version of the Website being tested (with a screenshot, if possible). After you compile the report, share the experiences and findings with the team so that they understand the strong points and drawbacks of each evaluated feature.
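A report entry that keeps the moderator’s observations separate from the participant’s own words makes that analysis easier. The structure below is a hypothetical sketch, with invented example content, of one way to record a single finding:

```python
# Hypothetical structure for one entry in a findings report. It keeps the
# moderator's observation distinct from the participant's verbatim quote,
# as recommended when analyzing the results.
finding = {
    "task": "submit the contact form",
    "observation": "Participant scrolled past the Submit button twice.",
    "quote": "I can't tell which of these is the real button.",
    "suggested_fix": "Give the primary button a distinct colour and label.",
}

report = (
    f"Task: {finding['task']}\n"
    f"  Observed : {finding['observation']}\n"
    f"  Quote    : \"{finding['quote']}\"\n"
    f"  Suggested: {finding['suggested_fix']}\n"
)
print(report)
```

Whether you keep such entries in a spreadsheet, a document, or a script like this matters far less than the separation itself: anyone reading the report can tell what was seen from what was said.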
And once you feel that the feedback from the test has been incorporated into the latest version of the site, perform another evaluation before uploading the Website, and continue to test once the site’s been launched. Remember — testing is about trial and error. Each evaluation provides new challenges, but each amendment delivers a more usable site.