Usability and Accessibility Test Methods: Preliminary Findings on Validation
Sharon Laskowski, Ph.D.
Manager, NIST Visualization and Usability Group, Information Access Division, ITL
http://vote.nist.gov
U&A Test Methods: Face Validity
• Does the test method test the requirement, and does the requirement improve usability/accessibility?
• U&A requirements and tests are based on 30 years of best-practice human factors research and design applied to user interfaces in similar domains
• Comments by the public, test labs, and manufacturers on the proposed test methods
Goal: Procedural Validity of the Tests
• Reliability: correct and reproducible
  • Do the tests produce true pass/fail determinations, for any given system, independent of testers and test labs?
• Primary concerns
  • Are test procedures clear, complete, and easy to execute?
  • Are special tester qualifications required?
Validation Team
• Design and Usability Center, Bentley University
• Testers: students enrolled in the MS in Human Factors (HF) Program
  • Range of experience in usability testing, user interface and web design, and media
• Advisors:
  • HF Ph.D. researcher, usability metrics
  • Senior usability consultant
Validation Process: Scope
• VVSG test methods for design requirements only
  • Did not have manufacturer summative usability test reports (no Technical Data Package)
  • Did not test with alternative languages
• Next phases
  • Accessibility throughout the voting session usability test
  • Documentation usability / poll worker usability test
  • Usability performance benchmark test
  • Any additional VVSG 1.1 design requirements
  • Validate the test for evaluating manufacturer reports
Validation Process: Protocol
• 2 voting systems: accessible DRE, optical scanner
• Round 1
  • 4 individual testers, then team pairs, executed the tests
  • Recorded pass/fail decisions, their confidence in the decisions, and any problems that arose
• Round 2 (ongoing)
  • 1 team, detailed recording of test execution, measurements, observations, and pass/fail decisions
  • Analysis and feedback on test methods
  • Meta-analysis of process and comparison of pass/fail decisions against expected outcomes
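Reproducibility of pass/fail decisions across testers can be quantified with a standard inter-rater agreement statistic. As an illustrative sketch only (not part of the NIST protocol; the tester labels below are hypothetical), Cohen's kappa corrects raw agreement for chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' pass/fail labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items where the two raters gave the same label
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's own label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail decisions by two testers on ten requirements
a = ["P", "P", "F", "P", "F", "P", "P", "F", "P", "P"]
b = ["P", "P", "F", "P", "P", "P", "P", "F", "P", "F"]
print(round(cohens_kappa(a, b), 3))  # → 0.524
```

A kappa near 1 would indicate the test methods produce tester-independent determinations; values well below 1 flag requirements whose test procedures need clarification.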
Preliminary Findings: Validation Process
• In general, testers understood the intention of the requirements and the HF focus
• HF testers need a clear understanding of the validation and certification process
  • Test execution with pass/fail + evaluation of the test method
• Partial interpretation of some tests was useful, e.g., no logging of the alternative language used
• Materials to support recording of all data are important
  • We enhanced the test method documentation and added detailed data collection forms for each system
Qualifications
• Testers had detailed knowledge of and experience in usability and accessibility, evaluation of user interfaces, and best practices
  • Few questions concerning the requirements and how to evaluate them
• More training in and knowledge of voting systems and certification would have improved Round 1
  • Validation and execution of the tests depend heavily on contextual knowledge
  • E.g., there are many system acronyms and multiple documents (VVSG, test methods, glossaries, etc.)
Tester Skill Sets
• Some requirements have test methods that require skills/knowledge atypical for HF professionals, e.g.,
  • Use of photometers or oscilloscopes
  • Measuring contrast ratios, saturation, screen flicker, and sound levels
• Might some of these tests be assigned to in-house VSTL testers?
  • But the split is not obvious; e.g., measurements for wheelchair reachability may require some HF expertise
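The VVSG's exact photometric procedures are not reproduced here, but the contrast-ratio calculations involved are of the kind standardized in WCAG 2.x. As a sketch under that assumption, the WCAG relative-luminance and contrast-ratio formulas applied to 8-bit sRGB values:

```python
def srgb_to_linear(c8):
    """Linearize one 8-bit sRGB channel (WCAG 2.x relative-luminance formula)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) triple, 0.0 (black) to 1.0 (white)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with the lighter luminance as L1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white screen: the maximum possible ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

A measurement-based test would take the luminances from a photometer rather than from nominal pixel values, which is where the atypical instrumentation skills come in.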
Specific Test Issues
• Based on Round 1 and information from VSTLs
  • What is the ideal bystander distance for privacy testing?
  • What is the ideal distance from the screen for measuring font size with a magnifier?
  • What is the best way to simulate 20/70 farsighted vision?
  • What constitutes "adequate notification" of failure?
    • Opportunistic test if a particular failure state arises
• Will verify lack of problems in some areas, e.g., plain language requirements, in Round 2
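The font-size and 20/70-vision questions above both reduce to visual angle: the physical height a character must have to subtend a given angle at a given viewing distance, with 20/70 acuity needing 70/20 = 3.5 times the visual angle of 20/20. A minimal geometric sketch (the 500 mm viewing distance is an assumed example, not a VVSG value):

```python
import math

def required_height_mm(distance_mm, arcmin):
    """Physical height that subtends a visual angle of `arcmin` at `distance_mm`."""
    theta = math.radians(arcmin / 60.0)  # arcminutes -> radians
    return 2 * distance_mm * math.tan(theta / 2)

# A 20/20 letter subtends about 5 arcminutes; 20/70 acuity needs 3.5x that angle.
d = 500  # assumed viewing distance in mm
print(round(required_height_mm(d, 5 * 70 / 20), 2))
```

Because the required height scales linearly with distance, the "ideal distance" question matters: a pass/fail font-size measurement is only reproducible if the assumed viewing distance is fixed in the test method.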
Workflow
• The workflow of the usability/accessibility test methods was designed to optimize the process
  • Designed to test multiple requirements
  • Conduct functional and design tests before usability tests with test participants
• VSTLs pointed out that some tests require that a new election definition be installed
  • These should be minimized and incorporated into other types of tests where possible
  • E.g., overvote/undervote warning tests
Next Steps
• Collect and analyze results from Round 2 validation
  • Determine correctness of pass/fail decisions
• Revise test methods based on findings
  • Clarity
  • Specificity
  • Identification of tests that do not require HF expertise
  • Placement in workflow
• Revise the draft qualifications document
• Continue validation of remaining test methods and additional tests for changes to VVSG 1.1 and 2.0
Discussion/Questions