Empirical Methods in Human-Computer Interaction
Empirical methods in HCI
• Where do good designs come from?
• Observation
• Experience
• Experiments
UCSD: Iterative Design
• a repeating cycle: DESIGN ↔ TEST
Evolutionary design vs. radical new designs
Empirical methods in HCI
• Task analysis*: Ethnographic & other observations
• Requirements analysis
• Rapid prototyping, scenarios, storyboards
• Simulation/Wizard of Oz studies
• Heuristic evaluation; cognitive walkthroughs (by experts)
• Usability testing & user studies (qualitative & quantitative)
• Controlled experiments
*done first!
These methods share some of the same measures. Often, the best projects use several methods in combination! Good design requires iterating between design and observation (or testing).
User-Centered System Design
• Task analysis tells us how people currently accomplish a task.
• Requirements analysis tells us what a system should do.
• Usability testing tells us whether a system performs acceptably when a user tries to carry out certain tasks.
User-Centered System Design brings these things together.
Methods for task analysis
• Questionnaires
• Interviews
• Ethnographic observation
• Verbal protocols
• Formal models and notations (GOMS, hierarchical task analysis)
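The GOMS family of formal models includes the Keystroke-Level Model (KLM), which predicts an expert's execution time for a task by summing standard per-operator times. A minimal sketch, using the average operator times published by Card, Moran & Newell; the function name and the example operator sequence are illustrative, not from the slides:

```python
# Hypothetical KLM sketch: operator times (seconds) from Card, Moran & Newell.
KLM_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Predicted execution time (seconds) for a sequence of KLM operators."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: mentally prepare, point at a field, home to keyboard, type 4 keys.
print(round(klm_estimate("MPH" + "K" * 4), 2))
```

Such a model complements observation: it gives a quick analytic time estimate before any users are tested.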
Verbal protocols
• pioneered by psychologists studying problem-solving
• have people “think out loud” as they do some activity, step by step
• Advantage: can get at some of the invisible steps that people go through on the way to a solution, steps that aren’t evident in their behavior.
Task & requirements analysis, usability testing
These are pragmatic activities that require observations and systematic analyses. BUT: they’re not the same thing as the scientific method! How do they differ?
A note on scientific method
Two important steps:
1. Observing and describing
2. Testing theories and hypotheses
HCI specialists get many useful principles and solutions from what they see users do (#1), not only from theories (#2). But they sometimes test theories.
Ethnographic observation: very different from controlled observations in the laboratory! The observer looks at what people do in real life, recording data in great detail, and then tells a story rather than quantifying the data.
Ethnographic observation vs. experiments
Ethnographic studies:
• study behavior taking place naturally
• fewer observations
• very rich observations
• no hypotheses
• results may differ; speculative; contain confounds
Experiments:
• study behavior during a controlled task
• many observations
• limited observations
• hypothesis-testing
• reliable results; scientific, replicable
• eliminate confounds
Scenarios or storyboards
• Write a story or dialog of a sample interaction*
• Draw key frames (as in animation)
• Act out functionality; role-play
*Not unlike what a telephone speech dialog designer would do…
Scenarios or storyboards: pros and cons
• don’t require programming
• require readers or “users” to use their imaginations
• may fail to convey the interactive aspects of a design
• may fail to find problems with a design
• are often good for getting started
Rapid prototyping
• risky if based on the designer’s intuitions
• works well combined with user studies: observe naive users using the prototype
• Examples of prototyping languages: HyperCard, Director, Smalltalk, Logo, LISP, HTML, VoiceXML, or Java
• Pro: the system need not be finished
• Con: you can’t thoroughly test the system
Wizard of Oz studies: laboratory studies of simulated systems
The experimenter intercepts the subject’s input to the computer and may provide responses as if they were coming from the computer.
Wizard of Oz: pros & cons
• test something without having to build it
• can be difficult to run
• you need a consistent set of response rules to avoid bias
• the “system’s” reaction times may be slow
• sometimes subjects catch on
• can control what the user experiences
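The "consistent set of response rules" above can be made concrete with a small rule table the wizard follows during the session. A hypothetical sketch (the rule table, function, and replies are invented for illustration): canned rules handle common inputs identically across subjects, and the wizard only improvises when no rule matches.

```python
# Hypothetical Wizard-of-Oz response rules: keyword -> canned "system" reply.
# A fixed table keeps responses consistent across subjects, reducing bias.
RESPONSE_RULES = [
    ("hello", "Welcome. How can I help you?"),
    ("weather", "I'm sorry, I can't help with that task."),
]

def wizard_reply(user_utterance, ask_wizard):
    """Return a canned reply if a rule matches; otherwise defer to the wizard."""
    text = user_utterance.lower()
    for keyword, reply in RESPONSE_RULES:
        if keyword in text:
            return reply
    return ask_wizard(user_utterance)  # wizard types a reply live

# In a real session, ask_wizard would prompt the experimenter (e.g. input);
# here a placeholder shows the fallback path.
print(wizard_reply("Hello there", ask_wizard=lambda u: "(wizard improvises)"))
```

Keeping the rules in data rather than in the wizard's head also makes the simulated system's behavior easy to document and replicate.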
Usability testing
• evaluation of an existing system or prototype
• less formal than a laboratory study, more formal than just asking users what they think
• for instance, watch people use a prototype to do an assigned task (user studies)
Usability testing assesses:
• Performance (speed, errors, tasks)
• Learnability (How long does it take to get started? To become an expert? What is forgotten between sessions?)
• User satisfaction (both by self-report and by behavior)
• Extensibility (can the system be tailored?)
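The performance measures above (speed, errors, task completion) are typically summarized from logged test sessions. A minimal sketch, with invented trial data and field names, of how such a summary might be computed:

```python
# Hypothetical usability-test log: one record per task attempt.
trials = [
    {"time_s": 40.0, "errors": 1, "completed": True},
    {"time_s": 50.0, "errors": 0, "completed": True},
    {"time_s": 60.0, "errors": 1, "completed": True},
    {"time_s": 90.0, "errors": 2, "completed": False},
]

def summarize(trials):
    """Aggregate speed, error, and completion measures across trials."""
    n = len(trials)
    return {
        "mean_time_s": sum(t["time_s"] for t in trials) / n,
        "errors_per_task": sum(t["errors"] for t in trials) / n,
        "completion_rate": sum(t["completed"] for t in trials) / n,
    }

print(summarize(trials))
```

Quantitative summaries like these complement, but do not replace, the qualitative observations gathered while watching users work.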
User studies
Systematic user testing, often having each user do the same task(s).
User studies (Gomoll, 1990)
1. Set up the observation (tasks, users, situation)
2. Describe the evaluation’s purpose
3. Tell the user she can quit at any time
4. Introduce the equipment
5. Explain how to “think aloud”
6. Explain that you will not provide help
7. Describe the tasks and the system
8. Ask for questions
9. Conduct the observations (debrief the subject)
10. Summarize the results
Writing for the Internet (Nielsen)
• How users read on the Web (read about the different variables that influence readability; follow the links to the full report of the study)