Learn about evaluating user interfaces, essential paradigms, and techniques to improve usability. Understand the importance of evaluation, when to evaluate, and how to interpret and present data effectively.
The next two weeks • Oct 21 & 23: • Lectures on user interface evaluation • Oct 28: • Lecture by Dr. Maurice Masliah • No office hours (out of town) • Oct 30: • Midterm in class • No office hours (out of town)
Midterm material • Everything up to exactly this point (including DemoCustomDialog) • Things to study: • Slides • Programs • Javadoc • No need to memorize all methods of Swing classes. Familiarity with the most common ones will be tested though.
Evaluating User Interfaces Material taken mostly from “Interaction Design” (Preece, Rogers, Sharp 2002)
User Interface Evaluation • Users want systems that are easy to learn and use • Systems also have to be effective, efficient, safe, satisfying • Important to know: • What to evaluate • Why it is important • When to evaluate
What to evaluate • All evaluation studies must have specific goals and must attempt to address specific questions • There is a vast array of features that could be evaluated • Some are best evaluated in a lab, e.g. the sequence of links needed to find information on a website • Others are better evaluated in natural settings, e.g. whether children enjoy a particular game
Why it is important to evaluate • Problems are fixed before the product is shipped, not after • One can concentrate on real problems, not imaginary ones • Developers code instead of debating • Time to market is sharply reduced • Finished product is immediately usable
When to evaluate • Ideally, as early as possible (from the prototyping stage) and then repeatedly throughout the development process. “Test early and often.”
Evaluation Paradigms • “Quick and Dirty” evaluation • Usability Testing • Field studies • Predictive evaluation
“Quick and Dirty” evaluation • User-centered, highly practical approach • Used when quick feedback about a design is needed • Can be conducted in a lab or the user’s natural environment • Users are expected to behave naturally • Evaluators take minimal control • Sketches, quotes, and descriptive reports are fed back into the design process
Usability Testing • Applied approach based on experimentation • Used when a prototype or a product is available • Takes place in a lab • Users carry out set tasks • Evaluators are strongly in control • Users’ opinions are collected by questionnaire or interview • Reports of performance measures, errors, etc. are fed back into the design process
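To make the performance measures concrete, here is a minimal Java sketch (Java because that is the course language) of logging usability-test trials and summarizing mean time, mean error count, and completion rate. The Trial class, its fields, and the sample numbers are illustrative assumptions, not part of the lecture material.

    import java.util.ArrayList;
    import java.util.List;

    // One logged trial from a usability test session (hypothetical structure).
    class Trial {
        String participant;
        String task;
        double seconds;    // time taken to complete the task
        int errors;        // number of errors observed
        boolean completed; // did the user finish the task?

        Trial(String participant, String task, double seconds, int errors, boolean completed) {
            this.participant = participant;
            this.task = task;
            this.seconds = seconds;
            this.errors = errors;
            this.completed = completed;
        }
    }

    public class UsabilityTestReport {
        public static void main(String[] args) {
            List<Trial> trials = new ArrayList<Trial>();
            trials.add(new Trial("P1", "book a flight", 74.2, 1, true));
            trials.add(new Trial("P2", "book a flight", 112.5, 3, true));
            trials.add(new Trial("P3", "book a flight", 180.0, 5, false));

            double totalTime = 0;
            int totalErrors = 0;
            int completions = 0;
            for (Trial t : trials) {
                totalTime += t.seconds;
                totalErrors += t.errors;
                if (t.completed) completions++;
            }
            System.out.printf("Mean time: %.1f s%n", totalTime / trials.size());
            System.out.printf("Mean errors: %.1f%n", (double) totalErrors / trials.size());
            System.out.printf("Completion rate: %.0f%%%n", 100.0 * completions / trials.size());
        }
    }

Summaries like these are what gets fed back to the design team, alongside the questionnaire and interview responses.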
Field studies • Often used early in design to check that users’ needs are met or to assess problems or design opportunities • Conducted in the user’s natural environment • Evaluators try to develop relationships with users • Qualitative descriptions that include quotes, sketches, anecdotes are produced
Predictive evaluation • Does not involve users • Expert evaluators use practical heuristics and practitioner expertise to predict usability problems • Usually conducted in a lab • Reviewers provide a list of problems, often with suggested solutions
Evaluation techniques • Observing users • Asking users their opinions • Asking experts their opinions • Testing users’ performance • Modeling users’ task performance to predict the efficacy of a user interface
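The last technique, modeling users’ task performance, can be as simple as a Keystroke-Level Model (KLM) estimate: sum a fixed time per primitive operator to predict how long an expert user needs to execute a task. The sketch below is a hedged illustration; the operator times are the commonly cited approximations from the KLM literature, and the example operator sequence is made up.

    import java.util.HashMap;
    import java.util.Map;

    // Keystroke-Level Model sketch: predict execution time for a sequence of operators.
    public class KeystrokeLevelModel {
        static final Map<Character, Double> SECONDS = new HashMap<Character, Double>();
        static {
            SECONDS.put('K', 0.28); // press a key (average typist)
            SECONDS.put('P', 1.10); // point at a target with the mouse
            SECONDS.put('B', 0.10); // press or release a mouse button
            SECONDS.put('H', 0.40); // move hand between keyboard and mouse
            SECONDS.put('M', 1.35); // mental preparation
        }

        // An operator sequence such as "MHPBB" yields a predicted time in seconds.
        static double predict(String operators) {
            double total = 0;
            for (char op : operators.toCharArray()) {
                total += SECONDS.get(op);
            }
            return total;
        }

        public static void main(String[] args) {
            // Example: think, reach for the mouse, point at a menu item, click it.
            System.out.printf("Predicted time: %.2f s%n", predict("MHPBB"));
        }
    }

A prediction like this lets designers compare alternative dialog layouts before any user testing takes place.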
The DECIDE framework • Determine the overall goals that the evaluation addresses • Explore the specific questions to be answered • Choose the evaluation paradigm and techniques • Identify practical issues • Decide how to deal with the ethical issues • Evaluate, interpret, and present the data
Determine the overall goals • What are the high level goals of the evaluation? • Examples: • Check that evaluators have understood the users’ needs • Ensure that the final interface is consistent • Determine how to improve the usability of a user interface
Explore specific questions • Break down overall goals into relevant questions • Overall goal: Why do customers prefer paper tickets to e-tickets? • Specific questions: • What are customers’ attitudes? • Do they have adequate access to computers? • Are they concerned about security? • Does the electronic system have a bad reputation? • Is its user interface poor?
Choose paradigm and techniques • Practical and ethical issues must be considered when making the choice • Factors: • Cost • Timeframe • Available equipment or expertise • Compromises may have to be made
Identify practical issues • Important to do this before starting • Find appropriate users • Decide on the facilities and equipment to be used • Schedule and budget constraints • Prepare testing conditions • Plan how to run the tests
Decide on ethical issues • Studies involving human subjects must follow an ethical code • The privacy of subjects must be protected • Personal records must be kept confidential • An exact description of the experiment must be submitted for approval
Evaluate the data • Should quantitative data be treated statistically? • How to analyze qualitative data? • Issues to consider: • Reliability (consistency) • Validity • Biases • Scope • Ecological validity
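For the quantitative side, a first step is usually descriptive statistics over the raw measurements. A minimal sketch, assuming the data are task-completion times in seconds (the numbers are made up):

    // Mean and sample standard deviation of task-completion times.
    public class DescriptiveStats {
        public static void main(String[] args) {
            double[] timesInSeconds = { 74.2, 112.5, 95.0, 88.3, 130.1 };

            double sum = 0;
            for (double t : timesInSeconds) sum += t;
            double mean = sum / timesInSeconds.length;

            double squaredDiffs = 0;
            for (double t : timesInSeconds) squaredDiffs += (t - mean) * (t - mean);
            double stdDev = Math.sqrt(squaredDiffs / (timesInSeconds.length - 1));

            System.out.printf("Mean = %.1f s, SD = %.1f s%n", mean, stdDev);
        }
    }

Whether to go further (confidence intervals, significance tests) depends on the reliability, validity, and scope questions above.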
We’ll take a closer look at… • Two predictive evaluation techniques: • Heuristic evaluation • Cognitive walkthroughs • A usability testing technique • User testing
Heuristic Evaluation • Heuristic evaluation is a technique in which experts, guided by a set of usability principles known as heuristics, evaluate whether user interface elements conform to the principles. • Developed by Jakob Nielsen • Heuristics bear a close resemblance to design principles and guidelines • Interesting article on Heuristic Evaluation:http://www.useit.com/papers/heuristic/heuristic_evaluation.html
List of heuristics • Visibility of system status • Match between system and the real world • User control and freedom • Consistency and standards • Help users recognize, diagnose, and recover from errors
List of heuristics (cont.) • Error prevention • Recognition rather than recall • Flexibility and efficiency of use • Aesthetic and minimalist design • Help and documentation
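In practice, each evaluator records the problems they find against these heuristics, usually with a severity rating, and the individual reports are then merged. Below is a hypothetical sketch of such a log; the 0–4 severity scale follows Nielsen’s commonly used rating (0 = not a problem, 4 = usability catastrophe), while the class names and example findings are illustrative assumptions.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Sketch of an evaluator's findings list from a heuristic evaluation.
    public class HeuristicEvaluationLog {

        // One usability problem, tied to the heuristic it violates.
        static class Finding {
            final String heuristic;
            final String problem;
            final int severity; // 0 (not a problem) .. 4 (usability catastrophe)

            Finding(String heuristic, String problem, int severity) {
                this.heuristic = heuristic;
                this.problem = problem;
                this.severity = severity;
            }
        }

        public static void main(String[] args) {
            List<Finding> findings = new ArrayList<Finding>();
            findings.add(new Finding("Visibility of system status",
                    "No progress indicator while a long search runs", 3));
            findings.add(new Finding("Error prevention",
                    "Delete sits next to Save with no confirmation dialog", 4));
            findings.add(new Finding("Consistency and standards",
                    "OK/Cancel button order differs between dialogs", 2));

            // List the most severe problems first for the design team.
            findings.sort(Comparator.comparingInt((Finding f) -> f.severity).reversed());
            for (Finding f : findings) {
                System.out.printf("[%d] %s: %s%n", f.severity, f.heuristic, f.problem);
            }
        }
    }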