HCI460: Week 3 Lecture • September 23, 2009
Outline • Project 1 recap • Q & A • Research overview • Defining usability • Usability measures • Testing with users • Preparation for a usability study • Assignment for next week
Project 1 Recap Feedback on Project 1a (Individual Notes) • No deductions for late submissions, but from now on: • In-class students: Projects are due Wed 11:59 pm Central Time • DL students: Projects are due Sun 11:59 pm Central Time • Please submit through COL. • One person should submit for the entire group.
Project 1 Recap Feedback on Project 1a (Individual Notes) • Assignments differed along two dimensions: • Number of issues found • Some found only a few; some found a lot. • Level of polish • Some turned in polished deliverables; some turned in “quick and dirty” work. • Real world: perfection vs. efficiency • Recognize when “quick and dirty” work is enough and when a polished, professional deliverable is needed. • At this stage of the project (individual notes), it was better to find more issues and present them in a “quick and dirty” way than to find fewer issues and present a more polished deliverable.
Project 1 Recap Project 1b: Discussion • Project 1b is due today at midnight for in-class students and on Sunday at midnight for DL students. • One person from each group should upload it to COL. • What have you learned? • What was difficult / challenging? • About the evaluation itself • About evaluating as a group • How did you overcome the challenges? • What will you do differently next time? • Did you see any value in conducting the evaluation as a group?
Q & A Class Interaction ~ Opportunity • Interaction has been excellent thus far • Clarification will always be provided, just ask • Exercises allow for alternative feedback • Feedback • Opportunities • Push the User eXperience (UX) envelope • Professional development
Q & A Conference Recap • UXalliance.com • mobileHCI • Eating usability • Touch interface debate
Research Overview Why Are You Here? What do we do? What do you want to do?
Research Overview Paths Into UX Research • Research background • Design background
Research Overview Thinking About UX Research • If someone asked, what is it? • What we strive to do is answer questions. • Methodologies and techniques are just tools. • As practitioners, we must stay focused on the questions. • Often, finding the answer will require deeper and more complex methodologies than discount usability testing provides… OR NOT! • Let’s explore, and quite possibly push, the boundaries of user experience research tactics.
Research Overview What Can’t We Evaluate? • Some examples we may have mentioned… • Web sites • Software applications • Interactive Voice Response systems (IVRs) • Speech recognition systems • Text-to-speech (TTS) interfaces • Voice mail systems • Unified messaging interfaces • Internet applications • Games and gaming systems • Telecommunication products and services • Call center applications • Consumer products • Wireless devices • Packaging
Research Overview What Can We Test? • Consider an iPhone…
Research Overview Three Dimensions of Feedback • How people evaluate objects • Commercials • Packages • Online advertisements • Products • …
Research Overview Attitude • What users “say”… • Influencers: • Social status • Emotion • Coolness / hipness • Reveals: • Feature importance • Purchase intent
Research Overview Behavior • What users actually “do” • Ultimately, behaviors are what we wish to shape • Give users context, a task, and stimuli, then observe what they do • Behavior drives usage
Research Overview Attention • What users “focus” on… • What happens inside the head • Sometimes users are unaware of it • Attention is often measured by eye tracking
Defining Usability Can Usability Be Defined? We want to make things better. But how do you measure “better”? Can usability be defined?
Defining Usability The UX Honeycomb
Defining Usability What Makes Something Usable? Lights are on. Turn off the light.
Defining Usability Are There Universal Rules or Stereotypes? Turn the knob to move the arrow to the right.
Defining Usability Interaction Not Always Universal Make the shower temperature “just right.”
Defining Usability How About More Complicated Interfaces?
Defining Usability Can Interfaces Be Learned?
Defining Usability Emphasis on Usability in Many Fields
Defining Usability Naysayers: You Cannot Measure Usability • No measures exist. • Why? • What can we measure? • Success/Fail • Time • Accuracy • Satisfaction • ...
Defining Usability Definitions of Usability • From the UPA (Usability Professionals’ Association): • Usability is an approach to product development that incorporates direct user feedback throughout the development cycle in order to reduce cost and create products and tools that meet user needs. • From Krug’s “Don’t Make Me Think”: • Usability really means making sure that something works well… so that a person of average ability and experience can use the thing for its intended purpose without being hopelessly frustrated.
Defining Usability Definitions of Usability • From ISO 9241-11: • The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. • Effectiveness: Being able to complete the task • Efficiency: The amount of effort required to complete the task • Satisfaction: The degree of happiness or fulfillment while performing the task (or rather, the absence of pain and frustration)
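To make the three ISO components concrete, here is a minimal sketch (ours, not from the lecture; the task-log fields are hypothetical) mapping each component onto a simple summary statistic from one test task:

```python
# Hypothetical per-participant task logs for one task.
tasks = [
    {"completed": True,  "seconds": 95,  "satisfaction": 6},  # 1-7 rating
    {"completed": True,  "seconds": 120, "satisfaction": 5},
    {"completed": False, "seconds": 300, "satisfaction": 2},
    {"completed": True,  "seconds": 80,  "satisfaction": 7},
]

# Effectiveness: share of participants who completed the task.
effectiveness = sum(t["completed"] for t in tasks) / len(tasks)

# Efficiency: mean time on task, counting successful attempts only.
success_times = [t["seconds"] for t in tasks if t["completed"]]
efficiency = sum(success_times) / len(success_times)

# Satisfaction: mean post-task rating.
satisfaction = sum(t["satisfaction"] for t in tasks) / len(tasks)

print(f"effectiveness={effectiveness:.0%}, "
      f"mean time on task={efficiency:.0f}s, "
      f"mean satisfaction={satisfaction:.1f}/7")
```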
Defining Usability Naysayers: Usability Data Are Too Noisy • There is just too much going on to get reliable data… NOT! • Measurements must: • Be observable • Be quantifiable • Have sound controls • Be grounded in an understanding of context
Defining Usability Not All Measures Are Created Equal • [Slide graphic: measures labeled bad, inappropriate, and good]
Defining Usability Types of Usability Metrics • Performance metrics • Issues-based metrics • Self-reported metrics • Behavioral and physiological metrics • Combined and comparative metrics
Defining Usability Performance Metrics • Task success • Binary: success/fail • Levels of success: complete success / partial success / … / failure • Time on task • Actual time • Thresholds • Errors • Actual number • Efficiency • Deviations from the optimal path • Lostness (Smith, 1996); see the sketch below • Combination of task success and time (NIST, 2001) • Learnability • The metrics above collected across trials: within the same session, with breaks, or between sessions
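As an illustration of the efficiency metrics above, here is a minimal sketch of Smith’s (1996) lostness measure, assuming we log every page a participant visits during a task; the function name and example data are ours, not from the lecture:

```python
from math import sqrt

def lostness(visited_pages, optimal_path_length):
    """Smith's (1996) lostness measure for one task attempt.

    L = sqrt((N/S - 1)**2 + (R/N - 1)**2), where
      S = total page views (including revisits),
      N = distinct pages visited,
      R = minimum number of pages needed for the task.
    0 means a perfectly efficient path; values above roughly 0.5
    are commonly read as "the participant was lost."
    """
    S = len(visited_pages)
    N = len(set(visited_pages))
    R = optimal_path_length
    return sqrt((N / S - 1) ** 2 + (R / N - 1) ** 2)

# Example: the optimal path is 3 pages, but this participant made
# 8 page views across 6 distinct pages.
visits = ["home", "search", "results", "home", "about",
          "search", "faq", "product"]
print(f"lostness = {lostness(visits, 3):.2f}")  # 0.56
```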
Defining Usability Issues-Based Metrics • Usability issues • Number of issues found • Percentage of participants who found an issue • Severity ratings • High severity • Medium severity • Low severity • (A tallying sketch follows below.)
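A minimal sketch of how these issues-based metrics might be tallied for a small study; the participant IDs and issue labels are hypothetical:

```python
from collections import Counter

# Which issues each participant encountered (hypothetical data).
observations = {
    "P1": {"login-label", "nav-depth"},
    "P2": {"login-label"},
    "P3": {"nav-depth", "error-text"},
    "P4": {"login-label", "error-text"},
}

n = len(observations)
counts = Counter(issue for found in observations.values() for issue in found)

print(f"Unique issues found: {len(counts)}")
for issue, k in counts.most_common():
    print(f"  {issue}: {k}/{n} participants ({k / n:.0%})")
```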
Defining Usability Self-Reported Metrics • Post-task ratings (Likert scales) • Post-session ratings (Likert scales) • System Usability Scale (SUS) (Brooke, 1996) • NASA TLX • Agreement with specific attributes • Answers to open-ended questions
Defining Usability System Usability Scale (SUS)
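The standard SUS scoring procedure (Brooke, 1996) is easy to get wrong by hand; here is a minimal sketch, with hypothetical example responses:

```python
def sus_score(responses):
    """Score a System Usability Scale questionnaire (Brooke, 1996).

    `responses` are the ten 1-5 Likert answers, in item order.
    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response).
    The sum is multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r    # 0-based even index
                for i, r in enumerate(responses))  # = odd-numbered item
    return total * 2.5

# Example: one participant's answers to items 1-10.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```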
Defining Usability Behavioral and Physiological Metrics • Verbal and non-verbal behaviors • Positive / negative comments • Facial expressions (computerized analysis) • Eye movements (attention) • Pupil diameter (workload) • Skin conductance and heart rate (stress)
Defining Usability Combined and Comparative Metrics • Usability scorecards • Comparison • Harvey balls • Radar charts
Defining Usability Again, Metric Selection Is Non-Trivial • [Slide graphic: measures labeled bad, inappropriate, and good]
Defining Usability Consider a Horse Race: Which Measure is Good?
Defining Usability Different Conditions: Yes, That is Snow
Defining Usability Appropriate Metrics Depend on Many Factors
Defining Usability Exercise: Measurement Scale • Objective: • Tons of user complaints that the volume is too soft. • We have three new versions and want to know whether the problem has been fixed. • Measure: • Rating of the perceived sound level on a 1-7 scale
Defining Usability Evaluating Usability
Testing with Users Methods with No Users vs. Methods with Users • Inspection methods = no users, only UX experts • Heuristic evaluation • Expert evaluation • Competitive evaluation • Cognitive walkthrough • Pluralistic walkthrough • Methods involving users • User testing (lab, longitudinal) • Eye tracking • Focus groups • Surveys • Ethnographic research
Testing with Users Limitations of Lab Testing (Rubin) • Testing creates artificial situations • Results cannot prove that a product will work • Participants are not necessarily representative of the actual target market • Testing might not be the right thing to do • Others? (Beyond Rubin?) • Sample size considerations • Feature coverage