Learn about tools and techniques for collecting data, classify info types (subjective, objective), explore methods like questionnaires, talk-aloud protocols, note-taking, and recording technologies. Understand how to gather accurate and valuable data for your research purposes.
Chapter 23: How to collect data
How to collect data • This chapter is about the tools/techniques used to collect data • Hang on, let’s review: what are we collecting? What’s being measured?
What are we collecting? • Classify the type of info we want: • Subjective: • Satisfaction: qualitative / quantitative (e.g., Likert) • Objective: Performance measures • Efficiency: speed • Effectiveness: accuracy (error rates) • Objective: Process measures • Eye movements, brain waves, physiological data (heart rate, skin response, etc.)
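The objective performance measures above can be computed directly from per-trial data. A minimal sketch in Python; the trial records here are made up for illustration:

```python
# Hypothetical trial records: (task_time_seconds, errors_made) per participant.
trials = [(42.1, 0), (55.3, 2), (38.7, 1), (61.0, 3)]

# Efficiency: mean time to task completion (speed).
mean_time = sum(t for t, _ in trials) / len(trials)

# Effectiveness: accuracy, reported as errors per trial.
total_errors = sum(e for _, e in trials)
error_rate = total_errors / len(trials)

print(f"mean time: {mean_time:.1f}s, errors/trial: {error_rate:.2f}")
```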
Timing and logging • Performance measures: • e.g., time to task completion • Start / stop times: stopwatch • Specific events (e.g., time Help was clicked): time stamp (best if code is retooled to provide this) • Maybe use logging software?
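If the code can be retooled to emit time stamps, the instrumentation is only a few lines. A sketch, not any particular logging tool; the event names are hypothetical:

```python
import time

log = []  # in-memory event log; in practice, write each entry to a file

def log_event(event):
    """Record an event name together with a wall-clock timestamp."""
    log.append((time.time(), event))

# Instrumenting the app (hypothetical events):
log_event("task_start")
# ... user works on the task ...
log_event("help_clicked")   # e.g., the time Help was clicked
log_event("task_complete")

# Time to task completion = last timestamp minus first.
elapsed = log[-1][0] - log[0][0]
```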
Logging software • Can be expensive • Ovo Logger appears to be free… does it come with source? (doesn’t look like it) • Does it provide a mechanism for aligning notes or comments? (You may want to add comments at specific points in time) • Does it interfere with the running program? (e.g., ClearView samples at 15 fps and therefore slows the PC app down a bit)
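If the logging software does not align comments for you, time-stamped observer notes can be matched to logged events after the session. A sketch using nearest-timestamp matching; the event and comment data are invented:

```python
import bisect

# Hypothetical logger output: (seconds_from_start, event), sorted by time.
events = [(0.0, "task_start"), (12.4, "help_clicked"), (30.1, "task_complete")]
# Observer comments recorded separately, also time-stamped.
comments = [(12.0, "user looks confused"), (29.5, "user smiles")]

event_times = [t for t, _ in events]

def nearest_event(comment_time):
    """Return the index of the logged event closest in time to a comment."""
    i = bisect.bisect_left(event_times, comment_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(events)]
    return min(candidates, key=lambda j: abs(event_times[j] - comment_time))

for t, note in comments:
    j = nearest_event(t)
    print(f"{note!r} -> {events[j][1]}")
```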
Talk Aloud • Subjective process measures: • The user says what s/he’s thinking as s/he works • Talk-aloud protocols: • good: gives glimpses of what the user is thinking • bad: may influence performance (probably slows the user down and reduces errors), and may be distracting • Alternative: record (log) the session, then ask questions in review: “what did you do here?” • use cognitive walkthrough questions
Note taking • Take good notes • Bring paper and pens • Don’t use the subject’s name: data should be anonymous
Debrief • Following the session, you may want/need to: • ask the user more questions • explain in more detail what was going on (participants may be curious about what was really being tested) • some users may blame themselves for problems during the test; this may sound silly, but you need to be sensitive to it. If in doubt, blame the machine, the software, etc.
Questionnaires • May be difficult to design • “did this program meet your expectations?” • some expect a crappy program, so “yes” • others expect a good program, so also “yes” • a badly formed question… • Supplement questionnaires with interviews • Use pre-designed questionnaires • SUMI: Software Usability Measurement Inventory • WAMMI: Website Analysis and MeasureMent Inventory
SUMI • Questions such as: • “I’ll never learn all the features of this software” • “The instructions are helpful” • “I sometimes don’t know what to do next” • “I would recommend this software to my Mom” • Of course you may want to adapt the questions to your study
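When adapting SUMI-style items, note that the negatively worded statements (“I’ll never learn all the features…”) must be reverse-scored before averaging them with the positive ones. A sketch assuming a 1–5 Likert scale; the item list and the responses are hypothetical:

```python
# Hypothetical questionnaire: (statement, negatively_worded?)
items = [
    ("I'll never learn all the features of this software", True),
    ("The instructions are helpful", False),
    ("I sometimes don't know what to do next", True),
    ("I would recommend this software to my Mom", False),
]

SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

def score(responses):
    """Average satisfaction score; reverse-code negatively worded items."""
    adjusted = [
        (SCALE_MAX + 1 - r) if negative else r
        for r, (_, negative) in zip(responses, items)
    ]
    return sum(adjusted) / len(adjusted)

# One participant's raw answers, in item order:
print(score([2, 4, 1, 5]))  # negatives reversed to 4 and 5, so mean is 4.5
```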
Recording technologies • Video and audio recording: good idea • But what kind of data does it record? • satisfaction (maybe) • efficiency (only if you look at the length of the video) • effectiveness (if you can tell an error is made) • Doesn’t seem to offer any data recording, so what’s the point? • process measures, perhaps… • don’t forget to ask for permission to record…