Evaluating Search Interfaces
Marti Hearst, UC Berkeley
Enterprise Search Summit West, Search UI Design Panel
Evaluating Search Interfaces
• This is very hard to do well
• First, a recap on iterative design and evaluation
• Then I’ll present some do’s and don’ts
Interface design is iterative
Design → Prototype → Evaluate (and repeat)
Discount Testing vs. Formal Testing
Discount testing:
• Fast
• A small number of participants (5)
• Test mock-ups and prototypes in addition to finished designs
• Learn about what doesn’t work, a bit about what does, and maybe new good ideas for future iterations
Formal testing:
• More time-consuming
• Need many participants (often still too few)
• Test particular components or principles to be used by others
• Learn if something is better than something else, and by how much
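The “5 participants” figure for discount testing is worth making concrete. A minimal sketch, assuming the commonly cited Nielsen–Landauer problem-discovery model (not stated in this talk); the per-participant discovery rate λ ≈ 0.31 is an assumed illustrative value:

```python
# Sketch of the Nielsen-Landauer problem-discovery model (an assumption,
# not from this talk): each participant independently surfaces a fraction
# lam of the usability problems; lam ~= 0.31 is a commonly cited value.

def proportion_found(n: int, lam: float = 0.31) -> float:
    """Expected proportion of usability problems found by n participants."""
    return 1 - (1 - lam) ** n

for n in range(1, 11):
    print(f"{n:2d} participants -> ~{proportion_found(n):.0%} of problems found")

# Around n = 5 the curve passes ~84%, which is why discount tests stop
# early and spend the savings on another design-evaluate iteration.
```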
Qualitative Semi-Formal Studies
After the design has been mocked up, evaluated, and redesigned several times:
• Evaluate the system holistically or in parts with a large user base
• Watch participants use the system on their own queries
• Use Likert scales to get subjective responses to different features
• Find bugs
• Find features/tasks that need to be streamlined
• Determine the next round of useful features
• Refine and test again
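To make the Likert-scale step concrete, here is a minimal sketch of summarizing per-feature responses; the feature names and ratings are hypothetical illustration data, not from the talk:

```python
# Sketch: summarizing Likert-scale responses per feature.
# Feature names and ratings below are hypothetical illustration data.
from statistics import mean, median

responses = {  # 1 = strongly dislike ... 5 = strongly like
    "faceted navigation": [4, 5, 4, 3, 5, 4],
    "query suggestions":  [2, 3, 2, 4, 3, 2],
}

for feature, ratings in responses.items():
    print(f"{feature}: median={median(ratings)}, "
          f"mean={mean(ratings):.1f}, n={len(ratings)}")

# Report the median (Likert data is ordinal), and keep the raw
# distribution around: a bimodal feature needs redesign, not averaging.
```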
Do Use Motivated Participants
Participants need to know and care about the search goal (Jared Spool, UIE.com).
Do Longitudinal Studies
Have people use the system for their own needs for several weeks or months. Observe changes in behavior and in subjective preferences.
Do Add New Features Gradually
If you’re doing something new with search, start simple, see what works, then add more features, evaluating as you go.
Beware of Query Sensitivity
In search engine comparisons, variability between queries/tasks can be greater than variability between systems.
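One way to see query sensitivity in your own data is to compare the spread of scores across queries with the spread of paired per-query differences between systems. A minimal sketch with hypothetical scores:

```python
# Sketch: query variability vs. system variability.
# Hypothetical per-query success scores for two systems, measured on
# the same six queries (a paired design).
import statistics

system_a = [0.90, 0.20, 0.70, 0.30, 0.80, 0.40]
system_b = [0.85, 0.30, 0.75, 0.25, 0.90, 0.45]

diffs = [b - a for a, b in zip(system_a, system_b)]
print("spread across queries (system A):", statistics.stdev(system_a))
print("spread between systems (paired): ", statistics.stdev(diffs))

# Here the per-query spread (~0.29) dwarfs the paired system difference
# (~0.07), so compare systems on the same queries and analyze the
# paired differences rather than pooled averages.
```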
Beware of Cool vs. Usable
Some features are eye-catching but serve mainly to draw the user in. Will users really like them over time? Or, if they dislike them at first, will they learn to like them? (The latter is rarer.)
Do Compare Against a Strong Baseline
Compare your new idea against the best, most popular current solution. A good test: “How often would you use this system?”
Subjective vs. Quantitative Measures
Time to complete the task can be a misleading metric. Subjective impressions are key for determining search interface success.
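A small illustration of why completion time alone can mislead: pair each timing with the participant’s satisfaction rating and look for fast-but-unhappy cases. All numbers below are hypothetical:

```python
# Sketch: task time alone can mislead; pair it with subjective ratings.
# Hypothetical per-participant data for one interface.
times   = [42, 55, 38, 61, 47]   # seconds to complete the task
ratings = [2, 4, 2, 5, 3]        # 1-5 satisfaction (Likert)

fast_but_unhappy = [t for t, r in zip(times, ratings) if t < 50 and r <= 2]
print("fast completions with low satisfaction:", fast_but_unhappy)

# Participants may finish quickly because they gave up or satisficed;
# report completion time alongside satisfaction, not instead of it.
```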
Summary
• Search evaluation is hard because of huge variations in:
• Information needs
• Searchers’ knowledge and skills
• Collection contents
• A good strategy is to:
• Add a few features at a time, testing as you add
• Obtain subjective preference information
• Measure over time using longitudinal studies