I’ve just published the transcript of an interview I did with Andy Budd at Web Directions UX last week. It’s quite long, but well worth the read — we cover all sorts of topics such as careers in web design, the future of CSS, IE8, HTML 5, the role of usability testing in the design process, CSS frameworks, CSS gallery sites and more!
Sifting through the notes I took last Friday, here are some snippets that I jotted down from another speaker whose talk I got a lot out of — Steve Baty, who spoke about Analysing User Research Data.
Steve managed to introduce a number of scary, complex-looking statistical formulae without having his audience drift off to sleep or turn and run for the exit. Being passionate about his chosen field and a charismatic presenter certainly helped matters. Perhaps it’s just because, with his glasses off, he looks like Charlie (David Krumholtz) from Numb3rs, which probably reinforced his credibility in my mind.
The takeaway that I got from Steve’s talk is that user research data is useless unless you do something with it, and that “something” needs to be well-defined before you collect it. He advocated:
- defining the level of precision that you’ll be measuring up front
- taking into account the mean, variance and standard deviation of your sample data, and
- collecting two sets of data, so that you can compare them and determine whether deviations in your data stem from the design you’re testing or from differences between the users in your test group
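To make the second and third points concrete, here’s a minimal sketch (not from Steve’s slides — the sample data and function names are hypothetical) of how you might summarise one sample and compare two samples with Welch’s t statistic, using only Python’s standard library:

```python
import statistics

def summarize(sample):
    """Mean, sample variance (n-1 denominator), and standard deviation."""
    return {
        "mean": statistics.mean(sample),
        "variance": statistics.variance(sample),
        "stdev": statistics.stdev(sample),
    }

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    A large |t| suggests the gap between the groups reflects the
    designs being tested rather than ordinary variation between
    the users in each group.
    """
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

# Hypothetical task-completion times (seconds) for two designs:
design_a = [41, 38, 45, 50, 39, 44, 42, 47]
design_b = [55, 49, 60, 52, 58, 51, 56, 54]

print(summarize(design_a))
print(f"t = {welch_t(design_a, design_b):.2f}")
```

With made-up numbers like these, |t| well beyond 2 would point to a real difference between the designs; a |t| near zero would mean the spread within each group swamps any difference between them.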
Steve recommended approaches for interpreting data from A/B testing, task completion rates, time-to-completion and page views, and his heavily scientific approach to usability testing reinforced the term “user science” (most of us are probably guilty of taking an approach that is more indicative of “user art”).
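For the A/B testing and task-completion-rate cases, the usual tool is a two-proportion z test. This sketch is my own illustration rather than anything from the talk, and the completion counts are invented:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic comparing two completion rates, using a pooled
    estimate of the underlying proportion for the standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 18 of 20 users completed the task on design A,
# but only 11 of 20 on design B.
z = two_proportion_z(18, 20, 11, 20)
print(f"z = {z:.2f}")
```

A |z| above about 1.96 corresponds to the conventional 5% significance level, i.e. the kind of pre-defined precision threshold Steve argued you should fix before collecting the data.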
The podcasts and slides from both Web Directions UX and Web Directions Government will be appearing on the conference site’s blog soon. Go check them out!