Chapter 20

Presentation Transcript


  1. Chapter 20: Deciding on what to evaluate: the strategy

  2. Introduction • What’s the purpose of the evaluation? • What data should be collected? • What product, system, or prototype is being tested? • What constraints apply? • The answers to these four questions form the strategy

  3. Purpose of evaluation • Qualitative or quantitative? • Qualitative: not easily defined or measured • Often gathered from user comments, e.g., “easy”, “difficult”, “boring”, etc., so… • Listen to your subjects (and yes, use a video camera) • Quantitative: explicit usability metrics • Clearly easier to crunch the numbers if you have some numbers to crunch • Of course, measurements need to be set up: in the code, with a stopwatch, or via a “wrapper” program such as Tobii’s ClearView (records time, keystrokes, etc.); a home-grown sketch follows below
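Instrumentation need not mean a commercial package; a few lines of code can timestamp each user entry during a task and write a log for later number-crunching. Below is a minimal Python sketch of such a wrapper. The prompts, file name, and function name are illustrative assumptions, not from the chapter or from ClearView.

```python
import csv
import time

# Minimal sketch of a home-grown logging "wrapper": it records the
# response, the per-step time, and the entry length for each prompt,
# producing quantitative data from an otherwise unmeasured session.

def run_logged_task(prompts, log_path="session_log.csv"):
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "response", "seconds", "chars"])
        start = time.monotonic()
        for prompt in prompts:
            t0 = time.monotonic()
            response = input(prompt + " ")       # one user entry
            elapsed = time.monotonic() - t0      # time for this step
            writer.writerow([prompt, response, round(elapsed, 3), len(response)])
        total = time.monotonic() - start
        print(f"Task finished in {total:.1f} s; log written to {log_path}")

if __name__ == "__main__":
    run_logged_task(["Find the search box and type a query:",
                     "Name the menu item you used:"])
```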

  4. Priorities and levels • Prioritize the usability requirements • What’s more important: domain, users, tasks, environment, or constraints (costs, budgets, timescales, technology)? Whatever matters most drives the design • Erm, what’s this got to do with evaluation? • Setting usability metric levels: • Has to do with baseline and desired performance levels, e.g., “speed will improve by 50%” • Can be based on a model, e.g., Fitts’ Law (see the sketch below) • Can be stated as a testable hypothesis
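For instance, a Fitts’ Law prediction can supply the “desired level” before anyone is tested. A minimal sketch, assuming the Shannon formulation MT = a + b * log2(D/W + 1); the coefficient values below are placeholders that in practice you would fit from pilot data:

```python
import math

# Fitts' Law: predicted movement time for a pointing action, where
# D is the distance to the target and W is the target's width.
# a and b are device/user coefficients; these values are made up.

def fitts_movement_time(d, w, a=0.1, b=0.15):
    """Predicted movement time in seconds for one pointing action."""
    return a + b * math.log2(d / w + 1)

# Example: predict the time to hit a 20-px-wide button 400 px away,
# then use the prediction as the desired level for the speed metric.
predicted = fitts_movement_time(d=400, w=20)
print(f"Predicted movement time: {predicted:.2f} s")
```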

  5. What type of data to collect • Quantitative or qualitative data? • Didn’t we already go over this? • (Doncha hate it when textbooks are overly repetitive? I don’t know what it is about HCI books, but they tend to be this way) • Anyway, this is a kind of wasted slide… • Oh wait, I get it: it’s the second question of the strategy

  6. What to test? • (Question 3 of the strategy) • What’s being evaluated: a low-fidelity prototype or a high-fidelity prototype? (Why not an existing system?) • Low-fidelity: more for the guidance and direction of the design (more exploratory in nature) • High-fidelity: used for exposing problems with a preliminary version of the UI

  7. What are the constraints? • (Question 4 of the strategy) • Hmm, they say this is the most important question; is it? Practically speaking, I guess so… • These are the pragmatic concerns (a quick sketch of costing them out follows below): • How much time do I have to run the experiment? • Money? (Paying subjects, yeah right…) • What equipment is available? • Subjects? Where do I get them? (The Psych pool!) • How much time do I have to analyze the data? • Document the strategy (a good idea)
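It can help to write the constraints down as numbers before committing to a strategy. A back-of-envelope sketch, where every figure is a made-up placeholder rather than anything from the chapter:

```python
# Back-of-envelope costing of the pragmatic constraints.
n_subjects = 12            # e.g., recruited from the Psych pool
session_min = 45           # minutes per subject, including briefing
payment = 10.00            # dollars per subject (if you pay at all)
analysis_per_session = 90  # minutes to analyze one session's data

run_hours = n_subjects * session_min / 60
analysis_hours = n_subjects * analysis_per_session / 60
budget = n_subjects * payment

print(f"Running the study:  {run_hours:.1f} h")
print(f"Analyzing the data: {analysis_hours:.1f} h")
print(f"Subject payments:   ${budget:.2f}")
```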

  8. Global warming example • Evaluating the Global Warming CD: • Learnability: is it easy to learn? • Satisfaction: is it enjoyable to use? • Navigation: is it easy to install, navigate, and use? • Exercise 21.1 (a good one): • Suppose you’re a consultant and you get hauled in by the Global Warming developers (who think usability testing is a waste of time but want to adhere to ISO 9241) • What are you going to tell them your strategy is? • What concerns/requirements do you have as the experimenter?

  9. Global Warming strategy • Purpose: evaluate whether the navigation will be effective for students • Concerns: will this be an enjoyable learning experience? (Will students actually learn anything?) • Data to collect: comments on the UI during use (what about the learning effect?) • To test: a prototype (just the UI, no math model) • Constraints: newbie evaluators • A sketch of turning the collected data into metrics follows below
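To make the “data to collect” item concrete, here is a minimal sketch, assuming the evaluators end up with a list of timed navigation attempts plus free-form comments; the record format, task names, and keyword tally are illustrative assumptions, not from the chapter:

```python
# Turning raw session records into the simple metrics the
# Global Warming strategy calls for.
sessions = [
    {"task": "find ozone section", "seconds": 42.0, "completed": True,
     "comment": "easy once I saw the menu"},
    {"task": "open quiz", "seconds": 95.5, "completed": False,
     "comment": "confusing, kept clicking the wrong icon"},
    {"task": "return to start", "seconds": 12.3, "completed": True,
     "comment": "fine"},
]

completed = [s for s in sessions if s["completed"]]
success_rate = len(completed) / len(sessions)
mean_time = sum(s["seconds"] for s in completed) / len(completed)

print(f"Navigation success rate: {success_rate:.0%}")
print(f"Mean time on completed tasks: {mean_time:.1f} s")

# Qualitative side: the comments still need a human read, but a crude
# keyword tally can flag which recordings to review first.
negative = sum("confus" in s["comment"] for s in sessions)
print(f"Sessions mentioning confusion: {negative}")
```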
