Principles of Information Retrieval
Lecture 2: Concepts and Elements
Prof. Ray Larson, University of California, Berkeley School of Information
Review • Review • IR History, Readings • Central Concepts in IR • Documents • Queries • Collections • Evaluation • Relevance • Elements of IR System
Review – IR History • Journal Indexes • “Information Explosion” following WWII • Cranfield Studies of indexing languages and information retrieval • Paper by Joyce and Needham on Thesauri for IR. • Development of bibliographic databases • Chemical Abstracts • Index Medicus -- production and Medlars searching
Development of IR Theory and Practice • Phase I: circa 1955-1975 • Foundational research • Fundamental IR concepts advanced in research environments • Phase II: 1975 to present • Slow adoption of IR research into operational systems • Accelerated in the mid-1990s due to WWW search engines
Information Retrieval – Historical View • Research: Boolean model, statistics of language (1950s) • Vector space model, probabilistic indexing, relevance feedback (1960s) • Probabilistic querying (1970s) • Fuzzy set/logic, evidential reasoning (1980s) • Regression, neural nets, inference networks, latent semantic indexing, TREC (1990s) • Industry: DIALOG, Lexis-Nexis, STAIRS (Boolean based) • Information industry (O($B)) • Verity TOPIC (fuzzy logic) • Internet search engines (O($100B?)) (vector space, probabilistic)
Readings and Discussion • Joyce and Needham • Assigned index terms or Automatic? • Lattice theory (extension of Boolean algebra to partially ordered sets) • Notice the Vector suggestion? • Luhn • Document/Document similarity calculations based on term frequency • KWIC indexes • Doyle • Term associations
Readings (Next time) • Saracevic • Relevance • Maron and Kuhns • Probabilistic Indexing and matching • Cleverdon • Evaluation • Salton and Lesk • The SMART system • Hutchins • Aboutness and indexing
Documents • What do we mean by a document? • Full document? • Document surrogates? • Pages? • Buckland (JASIS, Sept. 1997) “What is a Document” • Bates (JASIST, June 2006) “Fundamental Forms of Information” • Are IR systems better called Document Retrieval systems? • A document is a representation of some aggregation of information, treated as a unit.
Collection • A collection is some physical or logical aggregation of documents • A database • A Library • An index? • Others?
Queries • A query is some expression of a user’s information needs • Can take many forms • Natural language description of need • Formal query in a query language • Queries may not be accurate expressions of the information need • Differences between conversation with a person and formal query expression
User Information Need • Why build IR systems at all? • People have different and highly varied needs for information • People often do not know what they want, or may not be able to express it in a usable form • Filling the gaps in Boulding’s “Image” • How to satisfy these user needs for information?
Controlled Vocabularies • Vocabulary control is the attempt to provide a standardized and consistent set of terms (such as subject headings, names, classifications, or the thesauri discussed by Joyce and Needham) with the intent of aiding the searcher in finding information. • Controlled vocabularies are a kind of metadata: • Data about data • Information about information • Uncontrolled vs. Controlled • E.g., tagging images in Flickr
Pre- and Postcoordination • Precoordination relies on the indexer (librarian, etc.) to construct some adequate representation of the meaning of a document. • E.g., “United States -- History -- Civil War, 1861-1865” • Postcoordination relies on the user or searcher to combine more atomic concepts in the attempt to describe the documents that would be considered relevant (see the sketch below).
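A minimal sketch of the contrast, assuming a toy catalog with invented headings and document identifiers (none of this data is from the lecture): a precoordinated search looks up one composite heading built by the indexer, while a postcoordinated search intersects the posting sets of atomic terms at search time.

    # Hypothetical toy data, purely to illustrate pre- vs. postcoordination.
    precoordinated = {
        "United States -- History -- Civil War, 1861-1865": {"doc1", "doc3"},
    }
    postings = {  # atomic terms assigned (or extracted) per document
        "united states": {"doc1", "doc2", "doc3"},
        "history":       {"doc1", "doc3", "doc4"},
        "civil war":     {"doc1", "doc3"},
    }

    # Precoordination: the indexer constructed the composite heading in advance.
    hits_pre = precoordinated["United States -- History -- Civil War, 1861-1865"]

    # Postcoordination: the searcher combines atomic concepts at query time.
    hits_post = postings["united states"] & postings["history"] & postings["civil war"]

    print(hits_pre, hits_post)   # both: {'doc1', 'doc3'}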
Structure of an IR System (adapted from Soergel, p. 19) • Storage line: Documents & data → Indexing (descriptive and subject) → Storage of documents → Store 2: Document representations • Search line: Interest profiles & queries → Formulating the query in terms of descriptors → Storage of profiles → Store 1: Profiles / search requests • The two stores feed a Comparison/Matching step, which yields potentially relevant documents • Rules of the game = rules for subject indexing + thesaurus (which consists of a lead-in vocabulary and an indexing language)
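A compact sketch of those two lines as code, with invented function and store names (nothing here is taken from Soergel's figure itself): the storage line indexes documents into one store, the search line turns a request into descriptors held in another, and a matching step compares the two.

    # Hypothetical sketch of the storage line / search line architecture.
    document_store = {}   # Store 2: document representations (doc id -> descriptor set)
    profile_store = {}    # Store 1: profiles / search requests (query id -> descriptor set)

    def index_document(doc_id, text, thesaurus):
        """Storage line: descriptive and subject indexing into Store 2."""
        document_store[doc_id] = {thesaurus.get(w, w) for w in text.lower().split()}

    def formulate_query(query_id, request, thesaurus):
        """Search line: express the request in the indexing language, into Store 1."""
        profile_store[query_id] = {thesaurus.get(w, w) for w in request.lower().split()}

    def match(query_id):
        """Comparison/matching: return potentially relevant documents."""
        profile = profile_store[query_id]
        return [d for d, descriptors in document_store.items() if profile & descriptors]

Here `thesaurus` stands in for the "rules of the game": a mapping from lead-in vocabulary to preferred descriptors, applied on both the storage and search lines.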
Uses of Controlled Vocabularies • Library Subject Headings, Classification and Authority Files. • Commercial Journal Indexing Services and databases • Yahoo, and other Web classification schemes • Online and Manual Systems within organizations • SunSolve • MacArthur
Types of Indexing Languages • Uncontrolled Keyword Indexing • Folksonomies • Uncontrolled but somewhat structured • Indexing Languages • Controlled, but not structured • Thesauri • Controlled and Structured • Classification Systems • Controlled, Structured, and Coded • Faceted Classification Systems and Thesauri
Thesauri • A Thesaurus is a collection of selected vocabulary (preferred terms or descriptors) with links among Synonymous, Equivalent, Broader, Narrower and other Related Terms
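As a rough sketch, using the conventional USE/UF/BT/NT/RT relationship labels but entirely made-up terms, a thesaurus entry can be represented as a small record per preferred term, with lead-in terms pointing at the descriptor the searcher should use:

    # Hypothetical miniature thesaurus; terms and relationships are illustrative only.
    thesaurus = {
        "information retrieval": {
            "UF": ["document retrieval"],        # used-for (lead-in / non-preferred terms)
            "BT": ["information science"],       # broader term
            "NT": ["relevance feedback"],        # narrower terms
            "RT": ["library classification"],    # related terms
        },
    }
    lead_in = {"document retrieval": "information retrieval"}   # USE references

    def expand(term):
        """Map a lead-in term to its descriptor, then add narrower and related terms."""
        descriptor = lead_in.get(term, term)
        entry = thesaurus.get(descriptor, {})
        return [descriptor] + entry.get("NT", []) + entry.get("RT", [])

    print(expand("document retrieval"))
    # ['information retrieval', 'relevance feedback', 'library classification']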
Development of a Thesaurus • Term Selection. • Merging and Development of Concept Classes. • Definition of Broad Subject Fields and Subfields. • Development of Classificatory structure • Review, Testing, Application, Revision.
Categorization Summary • Processes of categorization underlie many of the issues having to do with information organization • Categorization is messier than our computer systems would like • Human categories have graded membership, consisting of family resemblances. • Family resemblance is expressed in part by which subset of features are shared • It is also determined by underlying understandings of the world that do not get represented in most systems
Classification Systems • A classification system is an indexing language often based on a broad ordering of topical areas. Thesauri and classification systems both use this broad ordering and maintain a structure of broader, narrower, and related topics. Classification schemes commonly use a coded notation for representing a topic and its place in relation to other terms.
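A minimal sketch of how coded notation can carry that structure, using a few DDC-style codes for illustration (captions are approximate, not quoted from any scheme): broader and narrower classes fall out of prefix relationships between codes.

    # Illustrative DDC-style codes; captions are approximate.
    classes = {
        "004":   "Computer science",
        "004.6": "Interfacing and communications",
        "005":   "Computer programming, programs, data",
        "005.1": "Programming",
    }

    def broader(code):
        """A code that is a proper prefix of another denotes a broader class."""
        return [c for c in classes if code.startswith(c) and c != code]

    def narrower(code):
        """A code that another code is a proper prefix of denotes a narrower class."""
        return [c for c in classes if c.startswith(code) and c != code]

    print(broader("005.1"))   # ['005']
    print(narrower("004"))    # ['004.6']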
Classification Systems (cont.) • Examples: • The Library of Congress Classification System • The Dewey Decimal Classification System • The ACM Computing Reviews Categories • The American Mathematical Society Classification System
Evaluation • Why Evaluate? • What to Evaluate? • How to Evaluate?
Why Evaluate? • Determine if the system is desirable • Make comparative assessments • Others?
What to Evaluate? • How much of the information need is satisfied. • How much was learned about a topic. • Incidental learning: • How much was learned about the collection. • How much was learned about other topics. • How inviting the system is.
What to Evaluate? What can be measured that reflects users’ ability to use the system? (Cleverdon 66) • Coverage of information • Form of presentation • Effort required / ease of use • Time and space efficiency • Effectiveness, measured by: • Recall: the proportion of relevant material actually retrieved • Precision: the proportion of retrieved material actually relevant
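Given a set of retrieved documents and a set of judged-relevant documents, recall and precision reduce to simple ratios; a minimal sketch with made-up document identifiers:

    # Hypothetical relevance judgments and a hypothetical retrieved set.
    relevant  = {"d1", "d2", "d3", "d4"}          # documents judged relevant to the query
    retrieved = {"d2", "d3", "d5"}                # documents the system returned

    true_hits = relevant & retrieved              # relevant documents actually retrieved
    recall    = len(true_hits) / len(relevant)    # 2/4 = 0.50
    precision = len(true_hits) / len(retrieved)   # 2/3 ≈ 0.67

    print(f"recall={recall:.2f} precision={precision:.2f}")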
Relevance • In what ways can a document be relevant to a query? • Answer precise question precisely. • Partially answer question. • Suggest a source for more information. • Give background information. • Remind the user of other knowledge. • Others ...
Relevance • “Intuitively, we understand quite well what relevance means. It is a primitive “y’ know” concept, as is information, for which we hardly need a definition. … if and when any productive contact [in communication] is desired, consciously or not, we involve and use this intuitive notion of relevance.” • Saracevic, 1975 p. 324
Relevance • How relevant is the document • for this user, for this information need. • Subjective, but • Measurable to some extent • How often do people agree a document is relevant to a query? • How well does it answer the question? • Complete answer? Partial? • Background Information? • Hints for further exploration?
Relevance Research and Thought • Review to 1975 by Saracevic • Reconsideration of user-centered relevance by Schamber, Eisenberg and Nilan, 1990 • Special Issue of JASIS on relevance (April 1994, 45(3))
Saracevic • Relevance is considered as a measure of effectiveness of the contact between a source and a destination in a communications process • Systems view • Destinations view • Subject Literature view • Subject Knowledge view • Pertinence • Pragmatic view
Define your own relevance • Relevance is the (A) gage of relevance of an (B) aspect of relevance existing between an (C) object judged and a (D) frame of reference as judged by an (E) assessor • Where… From Saracevic, 1975 and Schamber 1990
A. Gages • Measure • Degree • Extent • Judgement • Estimate • Appraisal • Relation
B. Aspect • Utility • Matching • Informativeness • Satisfaction • Appropriateness • Usefulness • Correspondence
C. Object judged • Document • Document representation • Reference • Textual form • Information provided • Fact • Article
D. Frame of reference • Question • Question representation • Research stage • Information need • Information used • Point of view • Request
E. Assessor • Requester • Intermediary • Expert • User • Person • Judge • Information specialist
Schamber, Eisenberg and Nilan • “Relevance is the measure of retrieval performance in all information systems, including full-text, multimedia, question-answering, database management and knowledge-based systems.” • Systems-oriented relevance: Topicality • User-Oriented relevance • Relevance as a multi-dimensional concept
Schamber, et al. Conclusions • “Relevance is a multidimensional concept whose meaning is largely dependent on users’ perceptions of information and their own information need situations • Relevance is a dynamic concept that depends on users’ judgements of the quality of the relationship between information and information need at a certain point in time. • Relevance is a complex but systematic and measurable concept if approached conceptually and operationally from the user’s perspective.”
Froehlich • Centrality and inadequacy of Topicality as the basis for relevance • Suggestions for a synthesis of views
Janes’ View • A diagram relating the concepts of Satisfaction, Topicality, Relevance, Utility, and Pertinence
Operational Definition of Relevance • From the point of view of IR evaluation (as typified in TREC and other IR evaluation efforts) • Relevance is a term used for the relationship between a user’s information need and the contents of a document, where the user determines whether or not the contents are responsive to his or her information need
IR Systems • Elements of IR Systems • Overview – we will examine each of these in further detail later in the course
What is Needed? • What software components are needed to construct an IR system? • One way to approach this question is to look at the information and data, and see what needs to be done to allow us to do IR
What, again, is the goal? • Goal of IR is to retrieve all and only the “relevant” documents in a collection for a particular user with a particular need for information • Relevance is a central concept in IR theory • OR • The goal is to search large document collections (millions of documents) to retrieve small subsets relevant to the user’s information need
Collections of Documents… • Documents • A document is a representation of some aggregation of information, treated as a unit. • Collection • A collection is some physical or logical aggregation of documents • Let’s take the simplest case, and say we are dealing with a computer file of plain ASCII text, where each line represents the “UNIT” or document.
How to search that collection? • Manually? • cat, more • Scan for strings? • grep • Extract individual words to search??? • “tokenize” (a Unix pipeline) • tr -sc 'A-Za-z' '\012' < TEXTFILE | sort | uniq -c • See “Unix for Poets” by Ken Church • Put it in a DBMS and use pattern matching there… • assuming the lines are smaller than the text size limits for the DBMS
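For comparison, a rough Python equivalent of that pipeline (the file name is a placeholder): split the text on anything that is not a letter, then count how often each token occurs.

    # Rough equivalent of: tr -sc 'A-Za-z' '\012' < TEXTFILE | sort | uniq -c
    import re
    from collections import Counter

    with open("TEXTFILE") as f:                  # placeholder file name
        tokens = re.split(r"[^A-Za-z]+", f.read())

    counts = Counter(t for t in tokens if t)     # drop empty strings from the split
    for token, n in sorted(counts.items()):
        print(n, token)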
What about VERY big files? • Scanning becomes a problem • The nature of the problem starts to change as the scale of the collection increases • A variant of Parkinson’s Law that applies to databases: • Data expands to fill the space available to store it • A current approach to this scale problem is MapReduce (e.g., Hadoop)
The IR Approach • Extract the words (or tokens) along with references to the record they come from • I.e. build an inverted file of words or tokens • Note that Google and others use MapReduce approaches for this step • Is this enough?
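A minimal sketch of that step, keeping the earlier convention that each line of a plain-text file is a "document" and recording only term-to-document references (MapReduce implementations distribute exactly this grouping step across machines); file name and tokenization are placeholders.

    # Build an inverted file: each term maps to the set of "documents"
    # (here, line numbers) in which it occurs.
    import re
    from collections import defaultdict

    inverted = defaultdict(set)
    with open("TEXTFILE") as f:                  # placeholder file name
        for doc_id, line in enumerate(f):
            for term in re.split(r"[^A-Za-z]+", line.lower()):
                if term:
                    inverted[term].add(doc_id)

    print(sorted(inverted["retrieval"]))         # line numbers containing "retrieval"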
What about … • The structure information, POS info, etc.? • Where and how to store this information? • DBMS? • XML structured documents? • Special file structures • DBMS File types (ISAM, VSAM, B-Tree, etc.) • PAT trees • Hashed files (Minimal, Perfect and Both) • Inverted files • How to get it back out of the storage • And how to map to the original document location?
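One simple answer to those last two questions, sketched under the same placeholder assumptions as above: store an offset alongside each posting so a match can be traced back to its location in the original file. The structures named on the slide (inverted files held in B-trees, hashed files, etc.) would then hold this same term-to-location mapping on disk.

    # Postings carry (doc_id, offset) pairs so results map back to the original
    # document location. Offsets are character offsets, which equal byte offsets
    # for the plain ASCII case assumed earlier.
    import re
    from collections import defaultdict

    postings = defaultdict(list)
    with open("TEXTFILE") as f:                  # placeholder file name
        offset = 0
        for doc_id, line in enumerate(f):
            for match in re.finditer(r"[A-Za-z]+", line.lower()):
                postings[match.group()].append((doc_id, offset + match.start()))
            offset += len(line)

    # Map a query term back to where it occurs in the original file.
    for doc_id, position in postings.get("retrieval", []):
        print(f"line {doc_id}, offset {position}")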