Your Search Returned 0 Results: Improving Digital Library Search Tools. Paul Aumer-Ryan, School of Information, The University of Texas at Austin. November 29, 2006.
1 Foreword • “No Results Found” can have several meanings: • “The explicit assemblage of characters you submitted does not occur anywhere in our index of items in our collection.” • “We don’t understand what you just typed.” • “We understand some of the things you typed, but not all of them.” • “We have what you are looking for, but we call it something else.” • “We don’t have what you are looking for.” • “Go away.”
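These distinctions are worth making explicit in the interface itself. As a minimal sketch (the toy index, helper function, and message wording are illustrative assumptions, not part of the talk), a search backend could classify why a query came up empty and say so, rather than emitting a blanket “No Results Found”:

```python
def explain_empty_result(query: str, index: set[str]) -> str:
    """Classify why a query matched nothing and return a specific message."""
    terms = query.lower().split()
    if not terms:
        return "We don't understand what you just typed."
    known = [t for t in terms if t in index]
    if not known:
        return ("The assemblage of characters you submitted does not occur "
                "anywhere in our index.")
    if len(known) < len(terms):
        unknown = sorted(set(terms) - set(known))
        return f"We understand {known}, but not {unknown}."
    # Every term is indexed on its own, yet the combination matched nothing:
    # perhaps the collection calls the concept something else.
    return "We may have this under a different name; try a related term."

# Toy index: the engine knows these words individually.
INDEX = {"digital", "library", "search"}
print(explain_empty_result("digital preservation", INDEX))
```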
1 Foreword • How is a patron supposed to determine which meaning is being conveyed? • “No Results Found” seems pretty authoritative and final; it’s a statement of fact, and it’s coming from a computer. • In a world where information overload has become cliché, how do we react to the opposite?
1 Let’s Waste Some Time… • http://www.lib.utexas.edu/ • Does it know acronyms? (JCDL) • Does it deal with misspellings? (digitul) • Can it search on subsets of terms? • Does it understand singular/plural?
2 Introduction • Overview of related work: • Searcher Behaviors, Collection “Behaviors” • Suggestions • Social Computing • Meta Search Engines • Visualizing Search Results • Experiment • Design • Expected Findings • Contributions
3 Searcher Behaviors • Models of Search Behavior: • Deep Divers vs. Broad Scanners vs. Fast Surfers • Query refiners vs. “I’m Feeling Lucky!”-ers • Expert vs. Novice • Seeking vs. Encountering vs. Exploring • Digital Libraries vs. The Web
3 Collection Behaviors • Different searchers have different wants, and different collection types call for different search tools • Models of collection “behavior”: • Small vs. Large • Homogeneous vs. Heterogeneous • Interrelated vs. Distinct • Single medium vs. Many media
3 The Helping Hand: Suggestions • Misspelled word suggestions • Automatic permutation suggestions • Acronym recognition [Example screenshot: eBay.com]
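A minimal “did you mean?” sketch for the first and third suggestion features, using Python’s difflib string similarity as a stand-in for a real spelling model. The vocabulary and acronym table are illustrative assumptions, not from the talk:

```python
from difflib import get_close_matches

VOCABULARY = ["digital", "library", "search", "retrieval", "archive"]
ACRONYMS = {"jcdl": "Joint Conference on Digital Libraries"}

def suggest(term: str) -> list[str]:
    """Offer alternatives for a term that matched nothing."""
    term = term.lower()
    if term in ACRONYMS:                          # acronym recognition
        return [ACRONYMS[term]]
    # misspelled-word suggestions via fuzzy string matching
    return get_close_matches(term, VOCABULARY, n=3, cutoff=0.6)

print(suggest("digitul"))   # -> ['digital']
print(suggest("JCDL"))      # -> ['Joint Conference on Digital Libraries']
```

A production system would presumably rank suggestions by corpus frequency as well as string similarity, but the interaction is the same: offer a path forward instead of a dead end.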
3 The Helping Hand: Suggestions • Avoiding the back button • Maintaining a consistent direction of flow • Minimizing swapping between keyboard and mouse
3 Social Computing in Digital Libraries • Personalization • Search results are tailored based on the patron’s history… • With obvious privacy implications • Peer Recommendations • At the very least, links that were followed and/or rated highly by searchers using the same search terms will be preferred • More involved: results from peers with similar interests will be preferred... • With obvious privacy implications
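As a concrete illustration of the “at the very least” case above, here is a hedged sketch that prefers links earlier searchers followed after issuing the same query. The click-log layout and document IDs are hypothetical:

```python
from collections import Counter

# (query, document_id) pairs recorded from earlier sessions.
CLICK_LOG = [
    ("metadata standards", "doc42"),
    ("metadata standards", "doc42"),
    ("metadata standards", "doc17"),
]

def peer_boost(query: str, doc_id: str) -> int:
    """How often peers issuing this query clicked this document."""
    return Counter(CLICK_LOG)[(query, doc_id)]

def rerank(query: str, results: list[str]) -> list[str]:
    """Reorder results so peer-preferred documents come first."""
    return sorted(results, key=lambda d: peer_boost(query, d), reverse=True)

print(rerank("metadata standards", ["doc17", "doc42", "doc99"]))
# -> ['doc42', 'doc17', 'doc99']
```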
3 Social Computing in Digital Libraries • Patron Tagging • Objects in the DL can be tagged by patrons, and these tags can be searched • Thumbs Up / Thumbs Down • A simple, patron-driven measure of the applicability of a document to a given search term • Popularity Rankings • “Popular” documents ranked higher; could be measured in many ways
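A minimal sketch of how the three signals on this slide (patron tags, thumbs up/down, popularity) might combine into one ranking score. The weights and record layout are illustrative assumptions; a real system would tune them empirically:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    tags: set[str] = field(default_factory=set)  # patron-applied, searchable tags
    thumbs_up: int = 0
    thumbs_down: int = 0
    views: int = 0                               # one possible popularity measure

def score(doc: Document, query_terms: set[str]) -> float:
    tag_hits = len(doc.tags & query_terms)       # patron tagging
    vote = doc.thumbs_up - doc.thumbs_down       # thumbs up / thumbs down
    return 2.0 * tag_hits + 1.0 * vote + 0.1 * doc.views

docs = [
    Document("doc1", tags={"archives"}, thumbs_up=3, views=40),
    Document("doc2", tags={"archives", "oral-history"}, thumbs_up=1, views=5),
]
docs.sort(key=lambda d: score(d, {"archives", "oral-history"}), reverse=True)
print([d.doc_id for d in docs])  # -> ['doc1', 'doc2']
```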
3 Meta Search Engines • If one search engine returns no results, how about three or five? [Screenshot: MetaCrystal]
3 Meta Search Engines • Problems with aggregators: • Always done by a third party • Relies on all member engines being available and up to date • Only as fast as the slowest member • Adds a layer of complexity (Schwartz’s “The Paradox of Choice”)
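The “slowest member” problem can be softened by giving every engine a deadline. A minimal fan-out sketch (Python 3.9+; the engine stubs and the two-second deadline are illustrative assumptions, not a real aggregator’s API):

```python
import concurrent.futures as cf

def search_engine_a(query: str) -> list[str]:  # stand-in backends
    return [f"A:{query}"]

def search_engine_b(query: str) -> list[str]:
    return [f"B:{query}"]

ENGINES = [search_engine_a, search_engine_b]

def meta_search(query: str, timeout_s: float = 2.0) -> list[str]:
    """Query every engine in parallel; keep whatever answers in time."""
    pool = cf.ThreadPoolExecutor(max_workers=len(ENGINES))
    futures = [pool.submit(engine, query) for engine in ENGINES]
    results: list[str] = []
    try:
        for future in cf.as_completed(futures, timeout=timeout_s):
            try:
                results.extend(future.result())
            except Exception:
                pass  # an unavailable engine should not sink the rest
    except cf.TimeoutError:
        pass  # engines that miss the deadline are simply dropped
    pool.shutdown(wait=False, cancel_futures=True)
    return results

print(meta_search("digital libraries"))
```

Engines that error out or miss the deadline are dropped, so the aggregator degrades gracefully instead of inheriting the slowest member’s latency.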
3 Visualizing Search Results • Relational Maps
3 Visualizing Search Results • Topic Maps
3 Visualizing Search Results • Concept Maps
3 Visualizing Search Results • Maps, Maps, Maps, and Complexity
3 Visualizing Search Results • In general, visualization addresses the problem of too much complexity rather than too little (though it can be useful in certain circumstances)
3 Keep the Baby, Not the Bathwater • Rather than performing an end-run around our problem (e.g., visualization maps), the focus here is on classic textual search and retrieval • “No Results Found” is applicable to all types of searches, but visualization adds another layer of complexity that we don’t need to deal with now
4 Experiment: Reaction to Ø (Ø = “No Results Found”) • Broad Questions: • What are the affective implications of encountering a null result set? • What impact does the digital library interface have on the interpretation of its contents? • Focused Question: After encountering a null result set, how do participants’ emotional responses affect further searches on the same topic?
4 Experimental Design • A mock digital library will be created: • Participants will interact with it via a simple search tool, which they will be told they are evaluating; • Participants will be given a topic to search for and several questions to answer regarding that topic; • The digital library will contain a small set of results pertaining to that topic.
4 Experimental Design • Participants will be divided into 3 groups: • Control Group: Get appropriate results from their first search term; • Experimental Group 1: Encounter Ø once, then a subsequent search will return appropriate results; • Experimental Group 2: Encounter inappropriate results. • There will ideally be at least 50 people in each group.
4 Experimental Design • Before searching the digital library, Participants will: • Answer a set of demographic questions • Rate their general mood (affect) • Rate their familiarity with computers, digital libraries, and research
4 Experimental Design • After evaluating the results in their own fashion, Participants will: • Answer a set of questions confirming comprehension; • Rate the authoritativeness of the results they found; • Rate their impressions of the digital library and the search tool; • Rate their general mood (affect) • (Would behavioral measures, e.g. skin conductivity and heart rate, be worthwhile during the seeking process?)
4 Experimental Design • Data Collected: • Pre-test questionnaires (demographics, baseline affect, familiarity measures) • Experimental data (time-on-task, search queries, number of mouse clicks, back button presses, etc., and possible behavioral measures) • Post-test questionnaires (authoritativeness, opinion of DL, affect)
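As a minimal sketch of how the experimental interaction data could be captured, assuming a search tool instrumented in Python (the event names and JSON layout are illustrative assumptions, not part of the study design):

```python
import json
import time

class SessionLog:
    """Timestamped record of one participant's search session."""

    def __init__(self, participant_id: str):
        self.participant_id = participant_id
        self.start = time.monotonic()
        self.events: list[dict] = []

    def record(self, kind: str, detail: str = "") -> None:
        """Append an event such as 'query', 'click', or 'back'."""
        self.events.append({"t": time.monotonic() - self.start,
                            "kind": kind, "detail": detail})

    def dump(self) -> str:
        """Serialize the session, including total time-on-task."""
        return json.dumps({"participant": self.participant_id,
                           "time_on_task": time.monotonic() - self.start,
                           "events": self.events})

log = SessionLog("P01")
log.record("query", "digitul libraries")
log.record("click", "result-3")
log.record("back")
print(log.dump())
```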
4 Expected Findings • Participants who encounter Ø will: • Take more time completing the task (of course); • Rank the results as less authoritative; • Have a lower opinion of the search tool; • Exhibit more negative affect (frustration, anger, distress). • Participants who encounter inappropriate results are expected to be similar. • Novice users are expected to be more susceptible than expert users (see Chesney’s First Monday paper)
4 Contribution to the Field • This study hopes to elucidate the dangers of “no results found” responses by showing the actual effects on digital library users; • If Participants do indeed see results following Ø as less authoritative, it means the contents of a digital library are being evaluated not on their own merit, but by the interface’s effect on them; • If Participants have a lower opinion of a digital library because it returns Ø, then they are likely to go elsewhere; • If Participants exhibit more negative affect because of Ø, that’s just generally bad.
5 Conclusions • Empty search result pages tend to get ignored in the design and testing process: • Because they are not destinations; • They are just fleeting error messages; • They have little impact other than saying “Try Again” (and our captive users have no choice, right?); • Patrons won’t be spending any time there anyway; • Testers are so familiar with the interface that they hardly ever see them. • Ignore them no more!
5 Conclusions • In short, it is no longer enough to simply “put a digital library out there” for consumption; we need to make sure that we aren’t misleading patrons by saying we don’t have what we actually do have.