
Mobile Softwareteknologier

Learn about involving users, planning a usability evaluation, conducting field or lab tests, and presenting results for mobile app usability.


Presentation Transcript


  1. Mobile Softwareteknologier: Interaction Design and Usability Evaluation

  2. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  3. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  4. Status from last time • Brief status per group • Continued work on the exercise

  5. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • User-Centred Design • Prototypes • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  6. Involving users • Goal: involve users actively in the development work • Users take part in making design decisions • User-Centred Design is attributed to Norman (1986) • UCD changed usability engineering from evaluation of human performance and errors to a coupling of design and evaluation activities in one overall development process • Many have worked with it since

  7. Adoption of UCD Techniques • Venturi et al. 2006 • Investigated UCD adoption through a web survey • Respondents: 83 • Early involvement of UCD practitioners in the product life cycle is more frequent than it was ten years ago • The methods and techniques employed have shifted focus from summative evaluation to rapid development cycles, and from quantitative to qualitative evaluation methods

  8. Prototype • Definition: a representation of a design, which can be used to test the quality of that design against specific criteria • Two categories – based on closeness to the final product • Low fidelity: design sketches on paper • High fidelity: emulator, tool-built prototype

  9. Evaluating prototypes • Basically the same procedure as other evaluations • Depends on the category

  10. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Design and evaluation • Concept, activities and results • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  11. Design and Evaluation: Interplay • User interface design provides design products: the system, an operational prototype, a user interface specification, a paper prototype, … • Usability evaluation provides feedback to user interface design • This feedback forms the basis for redesign and further development • This is called formative evaluation

  12. Usability: Concept • Usability: effective to use (effectiveness), efficient to use (efficiency), safe to use (safety), has good utility (utility), easy to learn (learnability), easy to remember how to use (memorability) • Experience: satisfying, enjoyable, fun, entertaining, helpful, motivating, aesthetically pleasing, supportive of creativity, rewarding, emotionally fulfilling • A usability problem is something that reduces the usability of the system

  13. Usability Evaluation • A usability evaluation is a set of activities that produces a coherent assessment of the usability of a software system • A usability test is a limited activity that contributes to the evaluation, e.g. one user applies the system to solve a number of tasks • A usability evaluation is documented in some way

  14. Activities in a Usability Evaluation • Plan process – determine basics: when in the development process (exploratory, assessment, validation, comparison), location and equipment, participants and roles; write a test plan: purpose, key questions, user profile, test method, task list, context and equipment, test monitor role, data to be collected, report structure • Create test situation: recreation of context, selection of test subjects, design of tasks • Conduct test: introduction, task solving, data collection, logging, debriefing • Interpret data: transcription of log files, data summary, data analysis, documentation (report)
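To make the planning step concrete, the sketch below shows how the test plan elements listed above could be captured in code. This is an illustrative Python structure, not part of the original slides; all field names and example values are made up.

```python
from dataclasses import dataclass

@dataclass
class TestPlan:
    """Illustrative container for the test plan elements listed on the slide."""
    purpose: str
    key_questions: list[str]
    user_profile: str
    test_method: str              # e.g. think-aloud test in lab or field
    task_list: list[str]
    context_and_equipment: str
    test_monitor_role: str
    data_to_collect: list[str]    # e.g. video, audio, logs, NASA-TLX
    report_structure: list[str]

# Hypothetical plan for a field evaluation of a mobile shopping-cart app
plan = TestPlan(
    purpose="Assess the usability of the shopping-cart prototype",
    key_questions=["Can users add items to the cart while walking?"],
    user_profile="Regular smartphone users, age 20-60",
    test_method="Think-aloud field test with a wireless camera",
    task_list=["Add three items to the cart", "Check out"],
    context_and_equipment="Supermarket aisle; phone plus mobile lab gear",
    test_monitor_role="Introduces tasks and prompts thinking aloud",
    data_to_collect=["video", "audio", "NASA-TLX", "task completion time"],
    report_structure=["Executive summary", "Method", "Results", "Conclusion"],
)
print(plan.purpose)
```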

  15. Result of a Usability Evaluation • The result of a usability evaluation is typically documented in a usability report; this report is the feedback • Characteristics: 40-80 pages; 30-80 usability problems, with a list and a detailed description of each; problems are identified through user-based tests; problems are categorized (cosmetic, serious, critical); additional: TLX and task completion times; log files transcribed from the video (15-20 pages) • Typical report structure: 1. Executive summary; 2. Method (a. purpose, b. procedure, c. test participants, d. test procedure, e. location & equipment, f. identification & categorization of problems); 3. Results (a. workload (NASA-TLX), b. time used, c. problem overview, d. detailed description of problems); 4. Conclusion; 5. Appendix (a. tasks, b. interview guide, c. questionnaires, d. video log-files, e. system log-files, f. task solutions)
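As a small illustration of how the problem overview (section 3c) can be produced from the detailed problem descriptions, here is a Python sketch. The example problems are invented; only the severity categories come from the slide.

```python
from collections import Counter

# Invented example problems; only the severity categories (cosmetic, serious,
# critical) come from the slide above.
problems = [
    {"id": 1, "description": "Button labels are truncated on small screens", "severity": "cosmetic"},
    {"id": 2, "description": "No feedback after saving a measurement", "severity": "serious"},
    {"id": 3, "description": "Patient list loses its scroll position", "severity": "critical"},
]

# Problem overview: number of problems per severity category.
overview = Counter(p["severity"] for p in problems)
for severity in ("critical", "serious", "cosmetic"):
    print(f"{severity}: {overview.get(severity, 0)}")
```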

  16. Key Result: Usability Problems

  17. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • MobileWard: Is it worth the hassle? • NetMill: It’s worth the hassle! • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  18. Two Evaluation Approaches • Field experiments • Realistic use context • Difficult to control • Complicated data collection • Complex and time consuming • Safety and ethical issues • Laboratory experiments • Experimental control • High quality data collection • Lack of realism

  19. Laboratory vs. Field • Most usability evaluations of mobile systems are currently conducted in laboratory settings • A recent literature study revealed that… • 41% of mobile HCI research involves evaluation • 71% of this is done in laboratory settings • It is a widely adopted point of view that mobile systems require field evaluations, but… • It is difficult to conduct field evaluations • The added value of testing in the field is unknown • Additional problems come at a high cost (time & effort)

  20. MobileWard: Method • Laboratory evaluation • Lab at Aalborg University, Denmark • 6 test subjects (trained nurses) • Tasks derived from user study • Laboratory furnished as hospital, divided into two wards + corridor • Field evaluation • Frederikshavn Hospital, Denmark • 6 test subjects (trained nurses) • No specified tasks • Involving real work activities

  21. Findings (1) • 37 different usability problems in total • The lab evaluation resulted in 36 problems • 8 critical, 18 serious, and 10 cosmetic • The field evaluation resulted in 23 problems • 7 critical, 10 serious, and 6 cosmetic • The lab primarily found more serious and cosmetic problems

  22. Findings (2) • The lab found more problems per session • 18.8 (2.0) problems versus 11.8 (3.3) problems (U=2.651, p<0.01) • Critical: 5.3 (1.2) versus 4.5 (2.2) problems • Serious: 7.5 (1.0) versus 4.5 (0.8) problems • Cosmetic: 6.0 (0.9) versus 2.8 (1.0) problems • The lab identified significantly more serious (U=2.79, p<0.01) and cosmetic problems (U=2.84, p<0.01)
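The comparison above relies on Mann-Whitney U tests of problem counts per session. The sketch below shows how such a test could be run with SciPy on hypothetical counts for six lab and six field sessions; the numbers are not the study's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical problems-per-session counts (NOT the MobileWard data):
# six lab sessions and six field sessions.
lab_problems = [17, 19, 21, 18, 20, 18]
field_problems = [10, 14, 9, 12, 15, 11]

# Two-sided Mann-Whitney U test for two independent samples, the same kind of
# test reported on the slide.
stat, p_value = mannwhitneyu(lab_problems, field_problems, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```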

  23. Conclusions • Was it worth the hassle? • Not really, at least not for usability problem identification • However, the real use situation provided additional information on use • Replicating the context – always possible? • Lab evaluation without context replication • Field evaluation with task assignments

  24. NetMill: Method • Two user-based usability evaluations: • A usability laboratory • A field-based setting • System: a mobile system used by tradesmen and workers to register use of time and materials • Test subjects: 14 tradesman students at a technical high school • Tasks: • Nine tasks, identical for the two settings, except for a single task where the field evaluation included a physical aspect in order to complete the task • A pre-questionnaire was used to gather data about the participants’ experience with different types of information technology • A NASA-TLX test • A post-questionnaire to reveal the participants’ subjective opinions • Two separate teams, each with a test monitor and a logger, conducted the two evaluations
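Since the NetMill study (like MobileWard) uses NASA-TLX, here is a minimal sketch of how an overall weighted NASA-TLX workload score is commonly computed from the six subscale ratings and the 15 pairwise-comparison weights; all numbers below are made-up example values.

```python
# Made-up example ratings (0-100) for the six NASA-TLX subscales.
ratings = {
    "mental": 55, "physical": 30, "temporal": 40,
    "performance": 35, "effort": 50, "frustration": 25,
}
# Made-up weights from the 15 pairwise comparisons (they sum to 15).
weights = {
    "mental": 4, "physical": 1, "temporal": 2,
    "performance": 3, "effort": 3, "frustration": 2,
}

# Overall weighted workload: weighted mean of the subscale ratings.
overall = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())
print(f"Overall weighted workload: {overall:.1f}")
```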

  25. Data Collection in the Two Settings • Field • Laboratory

  26. Findings (1) • The same number of critical and serious problems in the two settings • More cosmetic problems in the field

  27. Findings (2) • Significant difference in critical problems

  28. Conclusion • Two evaluations were conducted, in the field and in a usability laboratory, using identical data collection techniques • The field-based evaluation was more successful in uncovering usability problems: • It identified significantly more problems • It was only in this evaluation that we identified problems related to cognitive load and interaction style • 58% of the identified usability problems were unique to one of the two settings, but the more severe a problem was, the more likely it was to be identified in both evaluations • The context of use influences the usability of a system, which indicates that more realistic context settings can help provide more valid information about the overall usability of a system • The overall conclusion is that it is worthwhile to conduct user-based usability evaluations in the field, even though it is more complex and time consuming

  29. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  30. Exercise 1: Planning • Plan a usability evaluation of your shopping cart • Consider and determine the basics • Write a test plan • Design the tasks the user must carry out • Present any difficulties

  31. The mobile usability laboratory: the problem

  32. The mobile usability laboratory: the solution (1)

  33. The mobile usability laboratory: the solution (2)

  34. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  35. Activities in a Usability Evaluation • Plan process – determine basics: when in the development process (exploratory, assessment, validation, comparison), location and equipment, participants and roles; write a test plan: purpose, key questions, user profile, test method, task list, context and equipment, test monitor role, data to be collected, report structure • Create test situation: recreation of context, selection of test subjects, design of tasks • Conduct test: introduction, task solving, data collection, logging, debriefing • Interpret data: transcription of log files, data summary, data analysis, documentation (report)

  36. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  37. What is context? • Physical context: how the system is used; physical similarity, for example furnishing the setting like the use context (a bank, a shop, …) • Technological context: the technology available to the user, for example networks and devices • Social context: the social situation in which the system is used • Other contexts • The 5 Ws (five forms of context): Who, Where, What, When, Why

  38. Techniques for evaluating mobile systems • Pirhonen et al. (2002) • Windows Media Player on a PDA • Gestures as the form of interaction • Carried out a usability evaluation • In the evaluation, they tried to recreate a mobile context

  39. Two techniques • Technique 1 • Obstacles in a corridor • Projection screen with tasks • Technique 2 • Step machine • Video recording

  40. New Techniques: Two Experiments • Kjeldskov & Stage (2004) • Two experiments comparing techniques for lab- and field-based usability testing of mobile systems were conducted • Experiments explored different techniques requiring… • Different levels of physical movement • Divided cognitive attention • Example application: use of Short Message Service (SMS) on PDAs and mobile phones

  41. Five Laboratory Techniques

  42. The Laboratory Experiments • 5 conditions (6 test subjects per condition) • Number of usability problems • Performance (task completion time) • Subjective workload (NASA TLX)

  43. The Field Experiment • 1 condition: walking in a pedestrian street (6 test subjects) • Number of usability problems • Performance (task completion time) • Subjective workload (NASA TLX)

  44. Comparison: Usability Problems Number of identified usability problems categorized by severity • No technique identified all problems • Most problems found when seated at table (34) • Statistical significance • Comparable numbers of critical problems found (3-4) • More than double the number of cosmetic problems were found while seated compared to the other lab techniques

  45. Comparison: Workload Subjective experience of workload with the different techniques • Sitting at a table (lab 1) required significantly less mental activity than all other techniques except lab 2 • Overall, sitting or walking at constant speed was experienced as significantly less demanding than any other technique

  46. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results

  47. Exercise 2 • Carry out the usability evaluation of your shopping cart • Results: • Problem list • Experiences with testing in a mobile setting • Put it on slides for a presentation (10 min. per group) • Demo of the system • Usability problems found • Lessons learned

  48. Session 6: Usability Evaluation of Mobile Devices • Status from last time • Involving users • Planning a usability evaluation • Field or laboratory • Exercise 1: Planning a usability evaluation • Conducting a usability evaluation • Techniques for (re)creating context • Exercise 2: Carrying out the usability evaluation • Presenting results
