
Chapter 22


Presentation Transcript


  1. Chapter 22 Planning who, what, when, and where

  2. Intro • We have the strategy: • purpose • type of data to collect • system • constraints • Now: • choose users (population sample) • create a timetable (and stick to it) • prepare task descriptions (script it!) • decide where to evaluate (field or lab)

  3. Choosing participants • Each participant should be: • a real (actual) user, or • representative user (from requirements), or • usability or domain expert • Participants should not be: • chosen at random • (hmm, this is contrary to “traditional” experimental criteria…why?)

  4. Screening or pretesting • To gauge whether a user fits the desired subject profile, you may need to screen users • For example, if testing a Spanish language training program, don't use fluent Spanish speakers • Questionnaires may be used to record users' experience levels (can be useful in analysis later, e.g., discard experts' data)
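
A minimal sketch of how screening-questionnaire data might be used to filter participants, assuming hypothetical fields such as a self-rated fluency score (the cutoff and record layout are illustrative, not from the chapter):

```python
# Hypothetical screening sketch: exclude fluent Spanish speakers from a
# Spanish-language training study, but keep experience data for later analysis.
def screen_participants(responses, max_fluency=2):
    """Keep respondents whose self-rated fluency (1-5 scale) is at or below the cutoff."""
    return [r for r in responses if r["spanish_fluency"] <= max_fluency]

responses = [
    {"name": "P1", "spanish_fluency": 1, "years_programming": 3},
    {"name": "P2", "spanish_fluency": 5, "years_programming": 10},  # fluent: screened out
    {"name": "P3", "spanish_fluency": 2, "years_programming": 0},
]
print([p["name"] for p in screen_participants(responses)])  # ['P1', 'P3']
```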

  5. Examples

  6. Working alone or in Pairs? • Usually users are tested alone • When may pairs be a good idea? • working cooperatively, sharing a computer • different culture (e.g., Japanese) • they prefer it (e.g., husband/wife team) • Hire a facilitator / caretaker / custodian? • when working with children, disabled, etc. • when interpreter is needed

  7. How many participants? • Depends on the problem and stage of testing • "trivial" or "easy" trouble spots will be identified quickly (and often, if there are many users), so only a few participants are needed during early stages • OTOH, if a couple of users find no problems, does that mean the UI is acceptable in general? • Key issue: generalizability • Ideally, you'd conduct a power analysis of the experiment • But one typically goes with a "rule of thumb": start with 5, go to 10, etc.
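
For the power-analysis point, here is a minimal sketch, assuming Python with statsmodels is available; the effect size, alpha, and power values are illustrative defaults, not recommendations from the chapter:

```python
# Sketch: participants per group needed to detect a "large" effect
# (Cohen's d = 0.8) with 80% power at alpha = 0.05 in a two-group comparison.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"Participants needed per group: {n_per_group:.1f}")  # roughly 26
```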

  8. University participants • The book’s discussion seems to be aimed at practitioners • What about at the Uni? • use the Psych pool, other students, etc. • problems: • restricted age group • users may not have required expertise (e.g., evaluating a Fortran debugger) • motivation may be wanting (e.g., doing it for credit, not so much for science or “good of humanity” :)

  9. Incentive • If possible, compensate users: • extra credit • real credit (e.g., on an e-commerce web site) • food • knick-knacks (mugs, pens, etc.) • soap (true story :) • money is always good, if you have enough to spare

  10. Global Warming App Users • How they picked users for the Global Warming App study: • email solicitation • experienced users (not novices) • various disciplines (e.g., not CS necessarily) • 10 users • no incentives

  11. Create a timetable • Timetable: • How long do you need per evaluation session? • may need to run a quick pilot study to determine this • How much time will the whole process take? • quick "back-of-the-envelope" calculation: 100 subjects, 10 minutes each ≈ 17 hours (not counting introductions, filling out questionnaires, lunch, dinner, classes, interruptions, Survivor episodes, etc.) • how many sessions can you run per day? Maybe 4-5 hours' worth? • so what's a realistic estimate for 100 subjects? A week? Two weeks?
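
The back-of-the-envelope arithmetic from the slide, written out as a small sketch (the 4.5 hours of testing per day is an assumed midpoint of the slide's "4-5 hours"):

```python
# Rough timetable estimate using the slide's example figures.
subjects = 100
minutes_per_session = 10
testing_hours_per_day = 4.5  # assumed midpoint of "4-5 hours' worth" per day

total_hours = subjects * minutes_per_session / 60   # about 16.7 hours of pure testing
days_needed = total_hours / testing_hours_per_day   # about 3.7 working days
print(f"{total_hours:.1f} hours of sessions, roughly {days_needed:.1f} testing days")
# Introductions, questionnaires, breaks, and no-shows easily stretch this to 1-2 weeks.
```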

  12. Timetable (cont.) • Keep evaluation sessions short, ideally no more than 1 hour (subjects get bored and tired) • Create a timetable "sign-up sheet" • very useful for signing up subjects and reserving lab space (e.g., eye trackers) • Allocate time for analysis • 80% of time spent in analysis (true? I dunno, just guessing) • just like debugging code?

  13. Task Descriptions • Create task descriptions • similar to the idea of scripts for evaluators (so they know what to say and say the same thing to each participant, thereby reducing bias) • these are scripts for users • I think they're a good idea (task cards), so long as they're not too detailed • case study: in my Navy usability study, participants read directions from a script; this made the tasks too easy, everyone performed similarly, and no clear performance problems were identified

  14. Where to do evaluation? • Field studies • observations in the field: most realistic environment, obviously • lacks control • Controlled studies • in a lab, usually • or some kind of mock scenario (e.g., “shoot house” for cops, SWAT personnel)

  15. Usability Lab • How to build a good usability lab: • http://www.stcsig.org/usability/topics/usability-labs.html • often a separate room is used for participants (one-way mirror) • various logging devices, e.g., cameras, keystroke logging software, etc. • do we have one at Clemson? We should…
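
As one illustration of the keystroke-logging software mentioned above, a minimal sketch using the pynput library (an assumed choice; any event-logging tool would do) that timestamps each keypress to a file:

```python
# Minimal keystroke-with-timestamp logger sketch (requires: pip install pynput).
# In a usability session this would run alongside screen and camera recording.
import time
from pynput import keyboard

LOG_PATH = "session_keystrokes.log"  # hypothetical output file

def on_press(key):
    with open(LOG_PATH, "a") as log:
        log.write(f"{time.time():.3f}\t{key}\n")

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()  # runs until the listener is stopped (e.g., Ctrl+C)
```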
