Chapter 8: Usability Specification Techniques (Hix & Hartson)
Usability Specifications
• Quantitative usability goals
Usability Attributes
• Usability characteristics to be measured:
  • Initial performance
  • Long-term performance
  • Learnability
  • Retainability
  • Advanced feature usage
  • First impression
  • Long-term user satisfaction
How can attributes be measured?
• Objective tasks (called benchmark tasks)
  • Tasks must be representative of what users would actually perform
  • Measure performance on the benchmark tasks
  • Tasks must be specific
  • Do not tell the user how to carry out the tasks
  • Tasks should be simple, or small combinations of simple tasks
  • Consider who your end users are!
Objective Tasks (cont.)
• Example: for the Initial Performance attribute, you might measure how well users perform a specific task that is primary to the software. Time and error data can be collected (see the sketch below).
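To make time and error collection concrete, here is a minimal sketch in Python; the `BenchmarkResult` structure, task name, and participant IDs are hypothetical illustrations, not anything from Hix & Hartson.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    """One participant's performance on one benchmark task (hypothetical structure)."""
    participant: str
    task: str
    seconds: float  # time to complete the task
    errors: int     # number of errors committed

# Invented data for the Initial Performance attribute
results = [
    BenchmarkResult("P1", "create new account", 95.0, 2),
    BenchmarkResult("P2", "create new account", 120.5, 4),
    BenchmarkResult("P3", "create new account", 88.0, 1),
]

mean_time = sum(r.seconds for r in results) / len(results)
mean_errors = sum(r.errors for r in results) / len(results)
print(f"mean time: {mean_time:.1f} s, mean errors: {mean_errors:.1f}")
```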
Subjective Questionnaires
• Ask users for their opinions on use.
• QUIS is an existing validated questionnaire.
• Questionnaires produce quantitative data as well as qualitative data.
• Example: for the First Impression attribute, you would want certain rankings on the questionnaire (see the scoring sketch below).
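A minimal sketch of scoring such rankings, assuming QUIS-style items rated on a 1-to-9 scale; the item wording and scale range are illustrative assumptions, not actual QUIS content.

```python
# Hypothetical questionnaire ratings: item -> list of 1-9 rankings from participants.
ratings = {
    "overall reaction (terrible .. wonderful)": [6, 7, 5, 8],
    "learning to operate the system (difficult .. easy)": [4, 5, 6, 5],
}

for item, scores in ratings.items():
    mean = sum(scores) / len(scores)
    print(f"{item}: mean = {mean:.1f} (n = {len(scores)})")
```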
Usability Specifications
• For each benchmark task, the specification records the following levels (a sketch follows the list):
  • Current level of task performance
  • Worst acceptable level
  • Planned target level
  • Best possible level
  • Observed results
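One way to picture a specification row is as a small record that an observed result is checked against; this is a minimal sketch with an invented attribute, measure, and values.

```python
# A usability specification row for one attribute, using the level names above.
# All names and numbers are invented for illustration.
spec = {
    "attribute": "Initial Performance",
    "measuring_instrument": "benchmark task: create new account",
    "measure": "time to complete (seconds)",
    "current_level": 150,      # e.g. measured on the existing system
    "worst_acceptable": 120,
    "planned_target": 90,
    "best_possible": 60,
    "observed": None,          # filled in after evaluation
}

def check(observed: float) -> str:
    """Compare an observed result against the specified levels (lower is better here)."""
    if observed <= spec["planned_target"]:
        return "meets planned target"
    if observed <= spec["worst_acceptable"]:
        return "acceptable, but below target"
    return "fails worst acceptable level"

print(check(95.0))  # -> acceptable, but below target
```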
Types of Measures
• Objective measures (several are computed in the sketch below):
  • Time to complete a task
  • Number or percentage of errors
  • Percentage of task completed in a given time
  • Ratio of successes to failures
  • Time spent in errors and recovery
  • Number of commands/actions performed
  • Frequency of help and documentation use
  • Number of repetitions of failed commands
  • Number of available commands
  • Number of times the user expresses frustration or satisfaction
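A minimal sketch of deriving a few of these measures from a timestamped event log; the event names and fields are assumptions for illustration.

```python
# Hypothetical event log for one task session (times in seconds).
events = [
    {"t": 0.0,  "type": "task_start"},
    {"t": 12.3, "type": "error"},
    {"t": 15.0, "type": "error_recovered"},
    {"t": 20.1, "type": "help_opened"},
    {"t": 41.7, "type": "task_complete"},
]

task_time = events[-1]["t"] - events[0]["t"]
error_count = sum(1 for e in events if e["type"] == "error")
help_uses = sum(1 for e in events if e["type"] == "help_opened")
# Time spent in errors and recovery: pair each error with the event that follows it.
recovery_time = sum(
    nxt["t"] - e["t"]
    for e, nxt in zip(events, events[1:])
    if e["type"] == "error" and nxt["type"] == "error_recovered"
)

print(f"time: {task_time:.1f} s, errors: {error_count}, "
      f"help uses: {help_uses}, error/recovery time: {recovery_time:.1f} s")
```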
Set current levels based on:
• A previous or existing system
• Similar competitive systems
• Performing computer tasks
• Performing manual tasks
• Market input
• Previous prototypes
Considerations when developing specifications
• Is each attribute practically measurable?
• Are the user classes specified clearly?
• Are the values for the levels reasonable?
• How well do the attributes capture usability for the design?
What is meant by formative evaluation?
• A formal evaluation plan applied during the design process.
• Begin it as early as possible in the design cycle.
• The first evaluation should take place when 10% of project resources have been expended.
Summative Evaluation
• A human factors engineer's worst nightmare
• Evaluation performed only after the design is complete.
Types of Evaluation Data
• Objective – directly observed and measurable
• Subjective – opinions
• Quantitative – numerical data
• Qualitative – lists of user problems, suggestions, etc.
Steps of Formative Evaluation
• Develop the evaluation plan (or experiment):
  • Selecting participants
  • Developing tasks and task orders
  • Determining protocol and procedures
  • Pilot testing
• Direct the evaluation
Direct the evaluation
• Data generation:
  • Benchmark tasks
  • User preference questionnaires
  • Concurrent verbal protocol
  • Retrospective verbal protocol
  • Critical incident taking
  • Structured interviews
Direct the evaluation (cont.)
• Data collection:
  • Real-time note taking
  • Videotaping
  • Audiotaping
  • Internal instrumentation of the interface (sketched below)
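To make internal instrumentation concrete, a minimal sketch of an in-application event logger that interface handlers could call; the function name and log format are assumptions, not a standard API.

```python
import time

# Hypothetical in-application event log for internal instrumentation.
event_log: list[dict] = []

def log_event(event_type: str, detail: str = "") -> None:
    """Record one timestamped user-interface event."""
    event_log.append({"t": time.time(), "type": event_type, "detail": detail})

# The interface's own handlers would call this, for example:
log_event("menu_open", "File")
log_event("command", "Save As")
log_event("help_opened", "keyboard shortcuts")

for e in event_log:
    print(e)
```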
Direct the evaluation (cont.)
• Analyzing the data:
  • Compute averages for benchmark tasks
  • Determine problems or sources of user difficulty
  • Determine effects on user performance
  • Impact analysis (a ranking sketch follows the list):
    • Importance – how important is this problem to the design?
  • Generate solutions
  • Consider the costs to fix problems
  • Redesign, implement, retest
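A minimal sketch of ranking problems by importance and cost to fix; the 1-5 scales and the ordering rule are invented heuristics for illustration, not the book's method.

```python
# Hypothetical problem list from an evaluation session; importance and
# cost-to-fix are on invented 1-5 scales.
problems = [
    {"problem": "users miss the Save button", "importance": 5, "cost_to_fix": 2},
    {"problem": "confusing error message",    "importance": 3, "cost_to_fix": 1},
    {"problem": "slow search results",        "importance": 4, "cost_to_fix": 5},
]

# Rank: high importance first, then low cost to fix.
for p in sorted(problems, key=lambda p: (-p["importance"], p["cost_to_fix"])):
    print(f'{p["problem"]}: importance {p["importance"]}, cost {p["cost_to_fix"]}')
```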
Formative Evaluation Pros & Cons
• Pros:
  • 4 to 5 subjects find 80% of problems (see the model below)
  • Sensitive to major problems
  • Can be a very thorough process
  • Developers empathize with users
• Cons:
  • Time consuming
  • Expensive
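The "4 to 5 subjects find 80%" figure is consistent with the commonly cited Nielsen-Landauer model, in which the proportion of problems found by n users is 1 − (1 − λ)^n with λ ≈ 0.31; the sketch below simply evaluates that formula and is not from Hix & Hartson.

```python
# Proportion of usability problems found by n evaluators under the
# Nielsen-Landauer model, assuming the commonly cited lambda of 0.31.
LAMBDA = 0.31

def problems_found(n: int, lam: float = LAMBDA) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10):
    print(f"{n} users: {problems_found(n):.0%}")
# 5 users -> roughly 84%, in line with the 80% claim above
```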
Other Usability Testing Methods
• Heuristic evaluation
• Guidelines
• Computer evaluation
Heuristic Evaluation
• A usability engineer reviews and evaluates the program with no standard procedure
• Pros:
  • Quickly identifies problems
  • Major problems are discovered
• Cons:
  • Must use more than one usability engineer
  • Minor problems are not discovered
Guidelines
• An evaluator examines the design to see whether it meets established guidelines
• Pros:
  • Finds general and recurring errors
  • Easily applied
• Cons:
  • Major problems can be missed
  • Guidelines are not exhaustive
  • Not all programs are created equal
  • Not all guidelines apply
Computer Evaluation
• An automated computer program evaluates the software
• Pros:
  • A potential tool for the future
• Cons:
  • Expensive
  • Currently finds only primitive problems
  • Will designers lose creativity by trying to design to meet the tests?
Conclusions
• Heuristic evaluation can be cost-effective
• Use more than one method
• Users determine the success of software and companies