
Lecture 8: Evaluation Using User Studies




  1. Lecture 8: Evaluation Using User Studies Brad Myers 20-790 Human-Computer Interaction in eCommerce

  2. Note: UARs for reports • Used for both Heuristic Evaluation (HE) and User Evaluation (UE) homeworks • UARTemplate20790.doc

  3. UAR “slots” for reporting results of HE • UAR Identifier -- <Problem or Good Feature> • Succinct name of the usability aspect • Name which heuristic applies (e.g., “Consistency”) • Reference • Include a screen shot pointing out where you found it • Evidence for the aspect • How do you know? (unless obvious) • Explanation of the aspect • Explain what you found • Severity • How important do you think this is to fix? E.g., • 0 – not a real usability problem • 1 – cosmetic problem only; need not be fixed • 2 – minor usability problem; low priority • 3 – major usability problem; important to fix • 4 – usability catastrophe; imperative to fix before releasing the product • Good reference: http://www.useit.com/papers/heuristic/severityrating.html • Possible solution or trade-off • Help the designers figure out what to do, if not obvious • Relationships with other UARs, if not obvious
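
For illustration, a minimal Python sketch of how these UAR slots could be captured as a record; the class and field names are hypothetical, not part of the official template:

```python
# Hypothetical sketch: one UAR record per finding, so reports can be
# collected, sorted, and cross-referenced. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class UAR:
    identifier: str            # e.g., "HE-03 <Problem>"
    name: str                  # succinct name of the usability aspect
    heuristic: str             # e.g., "Consistency"
    reference: str             # path to the annotated screen shot
    evidence: str              # how you know (unless obvious)
    explanation: str           # what you found
    severity: int              # 0 (not a problem) .. 4 (catastrophe)
    solution: str = ""         # possible fix or trade-off, if not obvious
    related: list = field(default_factory=list)  # related UAR identifiers

# Example: sort a report by severity, worst first
reports = [UAR("HE-01 <Problem>", "Inconsistent labels", "Consistency",
               "checkout.png", "Seen on steps 2 and 4",
               "Button labels differ between pages", 3)]
reports.sort(key=lambda u: u.severity, reverse=True)
```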

  4. User Studies Also called: User Evaluations (UE)

  5. Why User Studies? • Following guidelines never sufficient for good UIs • Heuristic analysis by experts not sufficient • Experts are not the same as users • Experts will generate long list of issues • Which are the important problems? • Experts miss issues • Need both good design and user studies • [Figure: quality before and after user tests, for good vs. average designers]

  6. “Don’ts” of User Studies • Don’t test whether it works (quality assurance) • Don’t have experimenters evaluate it – get users • Don’t just ask users questions: it is not an “opinion survey” – instead, watch their behavior • Don’t test with groups: see how well the site works for each person individually (not a “focus group”) • Don’t train users: we want to see if they can figure it out themselves

  7. Issue: Reliability • Do the results generalize to other people? • Individual differences • Up to a factor of 10 in performance • If comparing two systems • Statistics for confidence intervals, p<.01 (see the sketch below) • But we are rarely doing A vs. B studies • Also, a small number of users cannot test an entire site • Just a sample
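
For the occasional A-vs-B comparison, a minimal sketch of a significance test, assuming SciPy is available; the task times here are made up for illustration:

```python
# Minimal sketch of an A-vs-B comparison on task times (seconds).
from scipy import stats

times_a = [41.2, 55.0, 38.7, 62.1, 47.5, 51.3]   # invented data, system A
times_b = [58.9, 71.4, 49.8, 80.2, 66.0, 62.7]   # invented data, system B

t, p = stats.ttest_ind(times_a, times_b)          # two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.01:
    print("Difference is significant at the slide's p < .01 level")
```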

  8. Issue: Validity • Did the study measure what we wanted? • Wrong users • “Confounding” factors, etc. • Issues which were not controlled and are not what is being studied, but can affect the results • Other usability problems, setting, etc. • Ordering effects • Learning effects • Too much help given to some users

  9. Make a Test Plan • Goals: • Formative – help decide features and design • Summative – evaluate system • Pilot tests • Preliminary tests to evaluate materials, look for bugs, etc. • Test the instructions, timing • Users do not have to be representative

  10. Types of Tests • Paper prototypes • “Low fidelity prototyping” • Often surprisingly effective • Experimenter plays the computer • “Wizard of Oz” • “Pay no attention to the man behind the curtain” • User’s computer is “slave” to experimenter’s computer • Experimenter provides the computer’s output • Prototype • Visual Basic, web browser, etc. (no database) • Real system

  11. Test Design • “Between subjects” vs. “within subjects” • For comparing different conditions • Within: • Each user does all conditions • Removes individual differences • But adds ordering effects • Between: • Each user does one condition • Quicker for each user • But need more users due to huge variation in people • Randomized assignment of conditions • To people, or order (see the sketch below)
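
A small sketch of randomized assignment, assuming two hypothetical conditions "A" and "B":

```python
# Between subjects: each user is randomly given one condition.
# Within subjects: each user sees all conditions, in a shuffled order,
# to spread the ordering/learning effects noted above across users.
import random

conditions = ["A", "B"]

def between_subjects(user_id):
    """Randomly assign this user to a single condition."""
    return random.choice(conditions)

def within_subjects(user_id):
    """Give this user all conditions in a randomized order."""
    order = conditions[:]
    random.shuffle(order)
    return order  # e.g., ["B", "A"]
```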

  12. Performance Measurements • Efficiency, learnability, user’s preference • Time, number of tasks completed, number of errors, severity of errors, number of times help needed, quality of results, emotions, etc. • Decide in advance what is relevant • Can instrument software to take measurements (see the sketch below) • Or try to log results “live” or from videotape • Emotions and preferences from questionnaires and from apparent frustration or happiness with the system
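
A sketch of what instrumenting the software might look like, assuming a hypothetical CSV event log; the event names and file name are invented:

```python
# Log timestamped events so task time and error counts can be
# computed afterwards. Format and event vocabulary are assumptions.
import csv
import time

class EventLog:
    """Append timestamped events to a CSV file for later analysis."""
    def __init__(self, path):
        self.f = open(path, "w", newline="")
        self.w = csv.writer(self.f)
        self.w.writerow(["timestamp", "event", "detail"])

    def log(self, event, detail=""):
        self.w.writerow([time.time(), event, detail])

    def close(self):
        self.f.close()

log = EventLog("session01.csv")        # file name is illustrative
log.log("task_start", "task 1")
log.log("error", "clicked wrong link")
log.log("task_end", "task 1")
log.close()
```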

  13. Questionnaire Design • Collect general demographic information that may be relevant • Age, sex, computer experience, etc. • Evaluate feelings towards your product and other products • Important to design questionnaire carefully • Users may find questions confusing • May not answer the question you think you are asking • May not measure what you are interested in

  14. Problematic Questionnaire

  15. Questionnaire, 2 • “Likert scale” • Propose something and let people agree or disagree:
      The system was easy to use:  agree 1 .. 2 .. 3 .. 4 .. 5 disagree
  • “Semantic differential scale” • Two opposite feelings:
      Finding the right information was:  difficult -2 .. -1 .. 0 .. 1 .. 2 easy
  • If multiple choices, rank order them:
      Rank the choices in order of preference (with 1 being most preferred and 4 being least): Interface #1, Interface #2, Interface #3, Interface #4
  • (in a real survey, describe the interfaces)
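
A tiny sketch of summarizing Likert responses on the 1-5 scale above; the response values are invented:

```python
# "The system was easy to use": 1 = agree .. 5 = disagree
responses = [1, 2, 2, 4, 1, 3]   # invented data from six users

mean = sum(responses) / len(responses)
print(f"mean rating: {mean:.2f} (lower = more agreement)")
```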

  16. Survey example

  17. Videotaping • Often useful for measuring after the test • But very slow to analyze and transcribe • Useful for demonstrating problems to developers, management • Compelling to see someone struggling • Facilitates impact analysis • Which problems will be most important to fix? • How many users were affected and how much time was wasted on each problem (see the sketch below) • But careful notetaking will often suffice when usability problems are noticed
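
A sketch of the impact calculation suggested above (users affected times time wasted); the problems and numbers are invented:

```python
# Invented data: (problem, number of users affected, avg. seconds wasted)
problems = [("confusing checkout flow", 7, 90),
            ("search box hard to find", 3, 40),
            ("unclear error message", 5, 120)]

# Rank problems by total impact = users * time wasted
for name, users, secs in sorted(problems, key=lambda p: p[1] * p[2],
                                reverse=True):
    print(f"{name}: impact {users * secs} user-seconds")
```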

  18. “Think Aloud” Protocols • “Single most valuable usability engineering method” • Get users to continuously verbalize their thoughts • Find out why the user does things • What they thought would happen, why they were stuck, frustrated, etc. • Encourage users to expand on whatever is interesting • But interferes with timings • May need to “coach” users to keep talking • Unnatural to describe what you are thinking • Ask general questions: “What did you expect?”, “What are you thinking now?” • Not: “What do you think that button is for?”, “Why didn’t you click here?” • Will “give away” the answer or bias the user • Alternative: have two test users and encourage discussion

  19. Getting Users • Should be representative • If multiple groups of users • Representatives of each group, if possible • Issues: • Managers will pick most able people for testing • Getting users who are specialists • E.g., doctors, dental assistants • Maybe can get students, retirees • Paying users • Novices vs. experts • Very different behaviors, performance, etc.

  20. Number of test users • About 10 for statistical tests • As few as 5 for evaluation • Can update after each user to correct problems • But can be misled by “spurious behavior” of a single person • Accidents or just not representative • Five users cannot test all of a system

  21. Number of users, cont. • Jared Spool claims • Only found 35% of problems after 5 users • Needed about 25 users to get 85% of the problems • But they were testing large, complete web sites • Jared Spool and Will Schroeder, “Testing Web Sites: Five Users is Nowhere Near Enough,” SIGCHI’2001 Extended Abstracts, pp. 285-286.
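
These claims can be framed with the standard problem-discovery model, Found(n) = 1 - (1 - p)^n, where p is the probability that a single user reveals a given problem. In the sketch below, p = 0.31 is Nielsen's often-cited figure; p = 0.08 is an assumption back-fit to roughly match Spool's numbers:

```python
# Expected fraction of problems found after n users, each of whom
# reveals a given problem with probability p.
def found(n, p):
    return 1 - (1 - p) ** n

print(f"p=0.31, 5 users:  {found(5, 0.31):.0%}")   # ~84%, Nielsen's claim
print(f"p=0.08, 5 users:  {found(5, 0.08):.0%}")   # ~34%, close to Spool
print(f"p=0.08, 25 users: {found(25, 0.08):.0%}")  # ~88%, matches ~25 users
```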

  22. Ethical Considerations • No harm to the users • Emotional distress • Highly trained people especially concerned about looking foolish • Emphasize that the system is being tested, not the user • Don’t use terms like “subject” • Results of tests and users’ identities kept confidential • Stop the test if the user is too upset • At the end, ask for comments, explain any deceptions, thank the participants • At universities, studies must be approved by an “Institutional Review Board” (IRB)

  23. Example consent form from CMU

  24. Define a Framework for the Test • What problem is this system trying to solve? • What level of support will our users have? • What types of use do we hope to evaluate? • What are the usability goals?

  25. Prepare for the Test • Set up realistic situation • Write up task scenarios • PRACTICE • Recruit users

  26. Who runs the experiment? • Trained usability engineers know how to run a valid study • Called “facilitators” • Good methodology is important • e.g., 2-3 vs. 5-6 of 8 usability problems found • But useful for developers & designers to watch • Available if the system crashes or the user gets completely stuck • But have to keep them from interfering • Randy Pausch’s strategy • Having at least one observer (notetaker) is useful • Common error: helping too early!

  27. Where to Test? • Usability Labs • Cameras, 2-way mirrors, specialists • Separate observation and control rooms • Should disclose who is watching • Having one may increase usability testing in an organization • Can usually perform a test anywhere • Can use a portable videotape recorder, etc.

  28. Test Tasks • Task design is the difficult part of usability testing • Representative of “real” tasks • Sufficiently realistic and compelling so users are motivated to finish • Can let users create their own tasks if relevant • Appropriate coverage of the UI under test • Developed based on task analysis, scenarios • Short enough to be finished, but not trivial • Tasks should not be humorous, frivolous, or offensive • Easy task first, then progressively harder • But better if tasks are independent

  29. Test Script • Useful to have a script • Makes sure you say everything you want • Makes sure all users get the same instructions • Can even read instructions out loud • Ask if users have any questions • Make sure the introduction discusses goals only in a general way, and doesn’t give away information • Describe the goal and not the steps • Avoid product names and technical terms that appear on the web site • Don’t give away the vocabulary

  30. Stages of a Test • Preparation • Make sure the test is ready to go before the user arrives • Introduction • Say the purpose is to test the software • Consent form • Give instructions • Pre-test questionnaire • Write down an outline to make sure it is consistent for all users • Running the test • Debriefing after the test • Post-test questionnaire, explain purpose, thanks

  31. Introduce the Participants to the Observation • Introduce yourself • Ask them if they are willing to hear your “pitch” for participating in a study • Describe the purpose in general terms • Explain the terms of the study and get consent • Give them consent form & get signature • Ask them background questions

  32. Conduct the Observation • Introduce the observation phase • Instruct them on how to do a think aloud • Final instructions (“Rules”) • “I won’t be able to answer questions during the test, but if questions cross your mind, say them aloud” • “If you forget to think aloud, I’ll say ‘Please keep talking’”

  33. Cleaning up After a Test • In real tests of web sites, need to remove history to avoid giving hints to the next user • Browser history, “cookies”, etc. (see the sketch below)
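
A hedged sketch using Selenium (assuming the selenium package and a Chrome driver are installed): a fresh driver instance gives an empty profile and history, and cookies set during the session can be wiped explicitly:

```python
# Clear state between participants so the next user gets no hints.
from selenium import webdriver

driver = webdriver.Chrome()           # new instance: clean profile/history
driver.get("https://example.com")     # placeholder for the site under test
# ... participant completes the tasks ...
driver.delete_all_cookies()           # remove cookies left by this user
driver.quit()
```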

  34. Analyze Think-Aloud Data • NOT just a transcription of the tape. • Establish criteria for critical incidents • Record critical incidents on UAR forms (Usability Aspect Report)

  35. Critical Incident Technique in Human Factors • Definition (Flanagan, 1954, Psychological Bulletin, 51(4), 327-358): “By an incident is meant any observable human activity that is sufficiently complete in itself to permit inferences and predictions to be made about the person performing the act. To be critical, an incident must occur in a situation where the purpose or intent of the act seems fairly clear to the observer and where its consequences are sufficiently definite to leave little doubt concerning its effects.” (p. 327) “Such incidents are defined as extreme behavior, either outstandingly effective or ineffective with respect to attaining the general aims of the activity.” (p. 338) • Origin: Aviation Psychology Program during WWII

  36. UAR “Slots” and Critical Incident Definition • UAR Identifier -- <Problem or Good Feature>: “outstandingly effective or ineffective” • Succinct description of the usability aspect: purpose or intent... “consequences are sufficiently definite” • Evidence for the aspect: “observable human activity” • Explanation of the aspect: “permit inferences and predictions”

  37. Possible Criteria for Identifying a Bad Critical Incident • -- Fill this in!

  38. Possible Criteria for Identifying a Bad Critical Incident • The user articulates a goal and does not succeed in attaining that goal within 3 minutes (then the experimenter steps in and shows him or her the next step). • The user articulates a goal, tries several things or the same thing over again, and then explicitly gives up. • The user articulates a goal and has to try three or more things to find the solution. • The user accomplishes the task, but in a suboptimal way. • The user does not succeed in a task; that is, there is a difference between the task the user was given and the solution the user produced. • The user expresses hesitation or surprise. • The user expresses some negative affect or says something is a problem. • The user makes a design suggestion (don’t ask them to do this, but sometimes they do it spontaneously as they think aloud).

  39. Previous slide had…. • Possible Criteria for Identifying a Bad Critical Incident • NOT the only criteria that are reasonable • Your organization or design team should think about what’s reasonable for your product

  40. Possible Criteria for Identifying a Good Critical Incident • -- Fill this in!

  41. Possible Criteria for Identifying a Good Critical Incident • The user expresses some positive affect or says something is really easy. • The user expresses happy surprise. • Some previous analysis predicted a usability problem, but this user has no difficulty with that aspect of the system.

  42. Previous slide had…. • Possible Criteria for Identifying a Good Critical Incident • NOT the only criteria that are reasonable • Your organization or design team should think about what’s reasonable for your product

  43. Analyzing the data • Numeric data • Example: times, number of errors, etc. • Tables and plots using a spreadsheet • Look for trends and outliers • Organize problems by scope and severity • Scope: How widespread is the problem? • Severity: How critical is the problem?
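
A sketch of the spreadsheet-style analysis in pandas, assuming a hypothetical results.csv with per-user, per-task columns (the column names are invented):

```python
# Tabulate numeric results and flag outliers, per the bullets above.
import pandas as pd

df = pd.read_csv("results.csv")      # columns: user, task, seconds, errors

# Per-task summary table: look for trends across tasks
summary = df.groupby("task")["seconds"].agg(["mean", "std", "min", "max"])
print(summary)

# Crude outlier flag: times more than 2 standard deviations above the mean
cutoff = df["seconds"].mean() + 2 * df["seconds"].std()
print(df[df["seconds"] > cutoff])
```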

  44. Scope and Severity Separately

  45. Composite Severity Ratings • Probably easier to use: • 0 – not a real usability problem • 1 – cosmetic problem only; need not be fixed • 2 – minor usability problem; low priority • 3 – major usability problem; important to fix • 4 – usability catastrophe; imperative to fix before releasing the product

  46. Practice Analyzing • Usability test of “wearable computer” • cockpit_p004.mov • Looking for picture: onr_3_magnification.mov • Asking for help: onr_7_useoflink_interface.mov

  47. UAR Template • UARTemplate.doc

  48. UAR “slots” for reporting results of Think-aloud tests • UAR Identifier -- <Problem or Good Feature> • Succinct description of the usability aspect • Evidence for the aspect • What’s on the screen or what’s presented aurally • What the user says • What the user does • What the system does in response to what the user does • Include a reference to the time code or a hyperlink, so it is easy to find • Explanation of the aspect • What you infer the user’s goal was from what they said • What you infer the user was thinking from what the user said • How the system interpreted the user’s action (i.e., what the system was “thinking”) • etc.

  49. Find Possible Redesigns • Relate UARs • Similar goals • System feature • Integration issues • If many UARs, could do affinity diagrams (like in Contextual Inquiry) • Find Plausible Solutions • Review all UARs to activate them in memory (invention favors the “well-prepared” mind) • Some hints from relationships • Often “just” creativity • Review proposed solutions against UARs

  50. Write a Summarizing Report • “Executive” summary • Conceptual re-designs are most important • If just “tuning”, then a “top ten” list • Levels of severity help rank the problems • “Highlights” video is often a helpful communications device
