Case Study: One-Or-More Buttons
• Task: Design a good control that gives the user a "one or more, but not none" selection
  • A cross between radio buttons and check boxes
  • Select as many as you want, but always keep at least one selected
from Bruce "Tog" Tognazzini's Tog on Interface, 1991
CS774 – Spring 2006
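The target behavior can be sketched as a small model (a hypothetical Python sketch, not from the lecture): a group of toggles that behaves like check boxes, except that turning off the last remaining selection is refused.

```python
class OneOrMoreGroup:
    """A selection group that behaves like check boxes, except that
    the last selected option can never be turned off."""

    def __init__(self, options, initially_on):
        self.on = {opt: (opt in initially_on) for opt in options}
        if not any(self.on.values()):
            raise ValueError("at least one option must start selected")

    def click(self, option):
        """Toggle an option; return False if the click is refused."""
        if self.on[option] and sum(self.on.values()) == 1:
            return False  # refuse: this is the only button still on
        self.on[option] = not self.on[option]
        return True

group = OneOrMoreGroup(["a", "b", "c"], initially_on=["a"])
group.click("b")   # turns "b" on: "a" and "b" are now selected
group.click("a")   # turns "a" off: only "b" remains
group.click("b")   # refused: "b" is the last button still on
```

This "refuse the click" behavior is the naive version; as the iterations below show, silently refusing turned out to confuse users.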
Approaching the Problem
• Guidelines for expanding an interface:
  • If it ain't broke real bad, don't fix it
  • Build on the existing visual/behavioral language
  • Invent new objects, with new appearances, for new behavior
  • When possible, evolve objects rather than starting from scratch
  • Make changes clearly visible
  • Interpret users' responses consistently
  • Multiplex meanings
Constraints
• People have to figure out how to use the control on their own, first time out
• The appearance should reflect the marriage between radio buttons and check boxes
Ten Steps for User Testing on the Cheap
• Introduce yourself
• Describe the purpose of the observation (in general terms)
• Stress that you want to find problems in the product
• Tell the participant that it's OK to quit at any time
• Talk about the equipment in the room
• Explain how to "think aloud"
Ten Steps for User Testing on the Cheap (cont.)
• Explain that you will not provide help
• Describe the tasks and introduce the product
• Ask if there are any questions before you start; then begin the observation
• Conclude the observation
• Use the results
The Paper Test
Next Iteration
• Response to the first iteration: more designs, more testing
• Testing revealed:
  • Subjects thought it looked like an icky bug
  • Subjects thought the X meant the option was inactive (cancelled)
  • Subjects invented a rule about which particular buttons had to be used
3rd Iteration
• Next design:
• Tested 5 subjects
  • None developed a superstitious rule
  • The object works!
• But…
  • Prototypes started with at least two buttons on
  • If subjects got locked up, we hounded them until they tried clicking the immutable button
Peer Design Review
• What was the funny-looking line?
• New design:
• The superstitious rule came back
• Solving the superstitious rule
  • Mimic mercury: pressing the only button turned on causes it to turn off and the button below it to turn on
• Test: an 11-year-old son immediately knew the rule
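The "mimic mercury" rule can be sketched in code (hypothetical Python; the function name and list-of-booleans representation are illustrative): instead of refusing the click on the last selected button, the selection flows to the next button, so exactly one always stays on.

```python
def mercury_click(states, index):
    """Toggle states[index] under the 'mercury' rule: if it is the
    only button on, turn it off and turn on the next button instead,
    so at least one button always stays selected."""
    if states[index] and sum(states) == 1:
        states[index] = False
        states[(index + 1) % len(states)] = True
    else:
        states[index] = not states[index]
    return states

states = [True, False, False]
mercury_click(states, 0)   # the selection "flows" to the button below
# states is now [False, True, False]
```

The design win is visibility: the user's click always does something observable, rather than being silently ignored, which is what killed the superstitious rule.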
Final Design
• Validation testing with 5 people:
  • Two nine-year-olds
  • A woman who uses a Mac 2 hours per week
  • A woman who uses a Mac 2 hours per day
  • A self-identified power user
• All learned the rule within 15 seconds
Survey Instruments
• Written user surveys are a familiar, inexpensive, and generally acceptable companion to usability tests and expert reviews
• Keys to successful surveys:
  • Clear goals in advance
  • Development of focused items that help attain those goals
• Survey goals can be tied to the components of the Object-Action Interface model of interface design
• Users can be asked for their subjective impressions of specific aspects of the interface, such as the representation of:
  • Task-domain objects and actions
  • Syntax of inputs and design of displays
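As an illustration of focused survey items, a minimal sketch (the aspect names and Likert responses are made up for illustration, not from the lecture) that aggregates 1-7 ratings per interface aspect:

```python
from statistics import mean

# Hypothetical 1-7 Likert responses, one list per survey item,
# grouped by the interface aspect each item probes.
responses = {
    "task-domain objects": [6, 5, 7, 6],
    "input syntax":        [3, 4, 2, 3],
    "display design":      [5, 6, 5, 4],
}

# Report a mean rating per aspect; a low mean flags where to dig deeper.
for aspect, scores in responses.items():
    print(f"{aspect}: mean {mean(scores):.1f} (n={len(scores)})")
```

Tying each item to one aspect of the interface is what turns a vague "do you like it?" survey into data you can act on.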
Surveys (cont.)
• Online surveys avoid the cost of printing and the extra effort of distributing and collecting paper forms
• Many people prefer to answer a brief survey displayed on a screen instead of filling in and returning a printed form, although this introduces a potential sampling bias
Acceptance Test
• For large implementation projects, the customer or manager usually sets objective, measurable goals for hardware and software performance
• If the completed product fails to meet these acceptance criteria, the system must be reworked until success is demonstrated
• Rather than the vague and misleading criterion of "user friendly," measurable criteria for the user interface can be established for:
  • Time to learn specific functions
  • Speed of task performance
  • Rate of errors by users
  • Human retention of commands over time
  • Subjective user satisfaction
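Because these criteria are measurable, the pass/fail decision can be mechanized. A minimal sketch, with hypothetical metric names, thresholds, and measured values chosen for illustration:

```python
# Hypothetical acceptance criteria: each metric maps to a threshold
# and the direction of the comparison.
criteria = {
    "time_to_learn_min":  (30.0, "<="),   # minutes to learn key functions
    "task_time_sec":      (120.0, "<="),  # mean time per benchmark task
    "error_rate":         (0.05, "<="),   # errors per task
    "satisfaction_score": (5.0, ">="),    # mean rating on a 1-7 scale
}

# Hypothetical results from the acceptance test sessions.
measured = {
    "time_to_learn_min": 24.5,
    "task_time_sec": 131.0,
    "error_rate": 0.03,
    "satisfaction_score": 5.6,
}

def acceptance_report(criteria, measured):
    """Return the list of metrics that fail their criterion."""
    failures = []
    for metric, (threshold, op) in criteria.items():
        value = measured[metric]
        ok = value <= threshold if op == "<=" else value >= threshold
        if not ok:
            failures.append(metric)
    return failures  # an empty list means the system passes

print(acceptance_report(criteria, measured))  # task_time_sec fails here
```

An empty failure list is exactly the "yes" in the yes-or-no release decision described on the next slide.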
Acceptance Test (cont.)
• In a large system, there may be eight or ten such tests to carry out on different components of the interface and with different user communities
• Once acceptance testing has succeeded, there may be a period of field testing before national or international distribution
• The outcome is a yes-or-no decision: is it good enough to release?
Evaluation During Active Use
• Interviews and focus-group discussions
• Continuous user-performance data logging
• Online or telephone consultants
• Online suggestion box or e-mail trouble reporting
• Discussion groups and newsgroups
Controlled Psychologically-oriented Experiments
• Scientific and engineering progress is often stimulated by improved techniques for precise measurement
• Rapid progress in interface design will be stimulated as researchers and practitioners evolve suitable human-performance measures and techniques
Controlled Psychologically-oriented Experiments (cont.)
• The scientific method as applied to human-computer interaction might comprise these tasks:
  • Deal with a practical problem and consider the theoretical framework
  • State a lucid and testable hypothesis
  • Identify a small number of independent variables to be manipulated
  • Carefully choose the dependent variables that will be measured
  • Judiciously select subjects, and carefully or randomly assign subjects to groups
  • Control for biasing factors (non-representative samples of subjects or tasks, inconsistent testing procedures)
  • Apply statistical methods to data analysis
  • Resolve the practical problem, refine the theory, and give advice to future researchers
Controlled Psychologically-oriented Experiments (cont.)
• Controlled experiments can help fine-tune the human-computer interface of actively used systems
• Performance can be compared with a control group
• Dependent measures can include performance times, subjective user satisfaction, error rates, and user retention over time
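Comparing a dependent measure such as task time against a control group usually means a significance test. A minimal sketch of Welch's t statistic using only the standard library (the task-time data are made up; in practice the statistic would be compared against a t distribution to obtain a p value):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples, e.g. task
    times with a redesigned interface vs. the control group."""
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Hypothetical task-completion times in seconds.
control = [52, 48, 55, 60, 50]
redesign = [41, 45, 39, 44, 42]
t = welch_t(redesign, control)  # strongly negative: redesign is faster
```

Welch's version is used here because it does not assume the two groups have equal variance, which is rarely guaranteed for user populations.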