Principles of Case-Based Reasoning Gilles Fouque Postdoctoral Fellow UCLA Computer Science Department CoBase Research Group
Plan • Case-Based Reasoning - an overview • An ideal CBR architecture • Important CBR steps
Definitions (Riesbeck, C.K. and Schank, R.C.) A CBR system solves new problems by adapting solutions that were used to solve old problems. Example: X describes how his wife would never cook his steak as rare as he likes it. When X told this to Y, Y was reminded of the time he tried to get his hair cut and the barber would not cut it as short as he wanted. CBR motto (Hammond, K.): If it worked, use it again. If it works, don't worry about it. If it did not work, remember not to do it again. If it does not work, fix it.
A Brief History • R. Schank – 1982, Yale: Dynamic Memory, Memory Organization Packets (MOPs), scripts • J. Kolodner – 1983, Georgia Tech: memory organization and retrieval
CBR Important Steps • Retrieval: case representation, indexing, similarity metrics • Adaptation: case transformation • Testing: evaluation and repair • Learning: the utility problem, forgetting
Case Retrieval • Case Representation: what information to represent in a case; which memory organization to use • Indexing: selection of indexing features; the search problem • Similarity Metrics: retrieval of relevant cases; partial matching based on similarities/dissimilarities (classical distance, knowledge-based, or inductive hierarchies)
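To make the partial-matching idea concrete, here is a minimal Python sketch of a weighted attribute-based similarity metric and a retrieval step over a flat case memory. The feature names, weights, and flight data are illustrative assumptions, not part of any particular CBR system.

# Minimal sketch of weighted partial matching (illustrative example only).

def similarity(new_case, old_case, weights):
    """Weighted fraction of the new case's features that the old case matches."""
    shared = set(new_case) & set(old_case)
    matched = sum(weights.get(f, 1.0) for f in shared if new_case[f] == old_case[f])
    total = sum(weights.get(f, 1.0) for f in new_case)
    return matched / total if total else 0.0

def retrieve(new_case, case_memory, weights, top_k=3):
    """Return the top_k stored cases most similar to the new case."""
    return sorted(case_memory,
                  key=lambda c: similarity(new_case, c, weights),
                  reverse=True)[:top_k]

memory = [
    {"origin": "LA", "destination": "Detroit", "meal": "dinner"},
    {"origin": "Chicago", "destination": "San Francisco", "meal": "snack"},
]
query = {"origin": "LA", "destination": "Detroit"}
print(retrieve(query, memory, weights={"origin": 2.0, "destination": 2.0}))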
Adaptation A retrieved case is rarely a perfect match. • Case Transformation: domain-specific strategies / CBR
Testing • Evaluation: appropriateness of the solution (simulation, user feedback) • Repair: domain-specific repair strategies; identification of the cause of failure; preventing the problem from arising again (learning from failure)
Learning A CBR system becomes more efficient and more competent by enlarging its memory of old solutions and adapting them. • Case memory modification: generalization, accumulation, indexing; new failure strategies; new repair strategies • The utility problem: anticipating the usefulness of a newly stored case; what to forget?
Some CBR Applications • Recipes: CHEF • Patent law: HYPO • Medical diagnosis: PROTOS • Catering: JULIA • Software reuse: CAESAR
Conclusion • Relies on experience rather than theory • Learns from both success and failure • Does the best it can
Bibliography • Riesbeck, C.K. and Schank, R.C. 1989. Inside Case-Based Reasoning. Lawrence Erlbaum Associates, Hillsdale, NJ. • Kolodner, J.L. 1980. Retrieval and Organization Strategies in Conceptual Memory: A Computer Model. Ph.D. dissertation, Yale University. • Hammond, K.J. 1989. Case-Based Planning: Viewing Planning as a Memory Task. Academic Press, Perspectives in AI. • Bareiss, E.R. 1989. Exemplar-Based Knowledge Acquisition: A Unified Approach to Concept Representation, Classification, and Learning. Academic Press, Perspectives in AI. • Machine Learning, Vol. 10, No. 3.
A Case-Based Reasoning Approach to Associative Query Answering Gilles Fouque Wesley W. Chu Henrick Yau
Associative Query Answering • Provides information useful to but not explicitly asked for by the user
Example:
Query: "What is the flight schedule of UA1002?"
Answer: Departure 9:00am LAX; Arrival 7:00pm Detroit Metro
Possible Associations:
1. Stopover at Chicago
2. Stopover time is 1 hr
3. Dinner is provided on the flight
4. Price of ticket is $300
5. No in-flight movies
6. Flight will probably be delayed because of snowy conditions in Chicago
Association depends on User Type and Query Context
User Type:
1. A passenger [all]
2. A person whose friend is a passenger [6]
Query Context:
1. A person who has bought the ticket [not 3]
2. A person who is going to buy the ticket [3]
A Case-Based Reasoning Approach CBR: Case-Based Reasoning systems store information about past situations in their memory. As new situations arise, similar past situations are retrieved to help solve the new problems. CBR systems evolve over time: • they learn from their own mistakes and failures • they learn from their own successes • they acquire more knowledge in the process Goodness of a CBR system depends on: 1. How much experience/knowledge it has 2. Its ability to understand new cases in terms of old ones 3. Its ability to respond quickly 4. Its ability to incorporate user feedback into the system 5. Its ability to acquire new experience
Associative Query Answering in CoBase Idea: Store past user queries in Case Memory as knowledge. When a new query comes in, it is compared to the past cases. Inferences are made from queries similar to the new query.
Example:
Past Query:
  Select departure_time, arrival_time, fare
  From Flights
  Where origin = "LA" and destination = "Detroit" and date = 12/03/93
New Query:
  Select departure_time, arrival_time
  From Flights
  Where origin = "Chicago" and destination = "San Francisco" and date = 06/06/94
-> A possible association is airfare.
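A small Python sketch (with hypothetical names, not the actual CoBase case format) of how a past SQL query might be stored as a case, using its projected and selection attributes as features; the attributes the past case asked for but the new query did not become candidate associations:

# Treat a past query as a case whose features are its SELECT and WHERE attributes.
# The structure and names are illustrative assumptions.

past_case = {
    "select": {"departure_time", "arrival_time", "fare"},
    "where":  {"origin", "destination", "date"},
}
new_query = {
    "select": {"departure_time", "arrival_time"},
    "where":  {"origin", "destination", "date"},
}

def candidate_associations(new, case):
    """Attributes the past case asked for that the new query did not."""
    return case["select"] - new["select"]

print(candidate_associations(new_query, past_case))   # -> {'fare'}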
Associative Query Answering Inclusion of additional information not explicitly asked for by the user but relevant to his/her query.
Association Control • Search for Associative Links • Termination
Methodology for Association Control • Case-Based Reasoning (CBR) is used • Cases (past queries) are stored in the Case Memory and similar cases are linked together • Weights are assigned to the links to represent the usefulness of an association • The usefulness of an association depends on the query context and user type • CBR uses the user's feedback (success or failure) to update the link weight between a case and its associations
Selection and Ranking of Attributes To find cases similar to the user query Q: • CBR searches the case memory for all association links containing the set of attributes in Q • CBR scores the candidate cases based on their similarity to the user query and their weights in the association links • The weight of an association link depends on the query context and user profile The selected attributes are appended to the user query to derive the associated information
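The sketch below shows one plausible way to combine link weights with case similarity when ranking candidate attributes. The combination rule (weight times similarity, summed over links) and the sample data are assumptions, not the actual CoBase formula.

# Rank candidate attributes by summing weight * case-similarity over their association links.

def rank_attributes(query_attrs, links, case_similarity):
    scores = {}
    for case_id, attribute, weight in links:          # each link: (case, attribute, weight)
        if attribute in query_attrs:
            continue                                  # already asked for by the user
        scores[attribute] = scores.get(attribute, 0.0) + weight * case_similarity[case_id]
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

links = [("c1", "fare", 0.8), ("c2", "fare", 0.5), ("c1", "meal", 0.3)]
case_similarity = {"c1": 0.9, "c2": 0.4}
print(rank_attributes({"departure_time", "arrival_time"}, links, case_similarity))
# -> fare ranked first (score about 0.92), then meal (about 0.27)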
Schema for Reducing Search Complexity in Case Memory • Based on the attributes in the input query, a list of the association links that are useful for association is generated from the Case Memory • Since only cases similar to the user query need to be extracted, if the cases are indexed on the attributes that they contain, then the search time complexity: • depends only on the number of attributes in the user query • is independent of the number of stored cases
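A minimal sketch of such an attribute index, assuming a simple hash map from attribute name to case identifiers; with this layout the lookup cost is proportional to the number of query attributes rather than to the number of stored cases. Class and variable names are illustrative.

# Attribute index over the case memory: one hash lookup per query attribute.
from collections import defaultdict

class CaseIndex:
    def __init__(self):
        self.by_attribute = defaultdict(set)      # attribute -> ids of cases that use it

    def add_case(self, case_id, attributes):
        for a in attributes:
            self.by_attribute[a].add(case_id)

    def candidates(self, query_attributes):
        """Cases sharing at least one attribute with the query."""
        found = set()
        for a in query_attributes:
            found |= self.by_attribute[a]
        return found

index = CaseIndex()
index.add_case("c1", {"origin", "destination", "fare"})
index.add_case("c2", {"airline", "destination"})
print(index.candidates({"origin", "destination"}))    # -> {'c1', 'c2'}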
Updating Association Link Weights (Learning) • Users select the attributes that are relevant for association. • Based on the attributes selected, CBR updates the weights of the corresponding association links.
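One plausible update rule, shown as a sketch: nudge a link weight toward 1 when the user selects the associated attribute and toward 0 when it is ignored. The learning rate and the rule itself are assumptions; the actual CoBase updating formulae are not reproduced here.

# Simple reinforcement-style weight update (illustrative; not the CoBase formula).

def update_weight(weight, selected, rate=0.1):
    """Move the weight toward 1 if the attribute was selected, toward 0 otherwise."""
    target = 1.0 if selected else 0.0
    return weight + rate * (target - weight)

w = 0.5
w = update_weight(w, selected=True)    # user found the association useful -> weight rises
w = update_weight(w, selected=False)   # user ignored it -> weight falls back slightly
print(round(w, 2))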
Learning Three forms of learning: • Update weights on association links based on user feedback • Addition of a new case into the case memory • Modification of an existing case to reflect the newly learned experience Idea: When a new query comes in, the CBR either adds it as a new case or it modifies existing cases to incorporate the new experience. Criterion: The similarity of the new case with the old cases. Compare the new case with the case most similar to it in the case memory. If they are: • Exactly the same – Not much to be done • Very similar to each other – Modify the existing case to incorporate new features • Not very similar to each other – Add the new case into case memory
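A sketch of that add-or-modify decision, using two similarity thresholds; the cutoff values and the attribute-overlap similarity are illustrative assumptions.

# Decide whether a new query becomes a new case or is folded into an existing one.
# Thresholds (0.95, 0.6) and the overlap similarity are illustrative assumptions.

def attribute_overlap(a, b):
    """Similarity as the fraction of shared attribute names (demo only)."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

def incorporate(new_case, case_memory, similarity=attribute_overlap):
    if not case_memory:
        case_memory.append(dict(new_case))
        return "added"
    best = max(case_memory, key=lambda c: similarity(new_case, c))
    s = similarity(new_case, best)
    if s > 0.95:                          # essentially the same case: nothing to do
        return "unchanged"
    if s > 0.6:                           # very similar: fold the new features into the old case
        best.update(new_case)
        return "modified"
    case_memory.append(dict(new_case))    # not similar enough: remember it separately
    return "added"

memory = [{"origin": "LA", "destination": "Detroit"}]
print(incorporate({"origin": "LA", "destination": "Detroit", "fare": 300}, memory))  # -> modified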
Acquiring Experience From New Queries Example:
Past Query:
  Select departure_time, arrival_time
  From Flights
  Where origin = "Chicago" and destination = "San Francisco" and date = 12/03/93
New Query:
  Select departure_time, arrival_time, fare
  From Flights
  Where origin = "LA" and destination = "Detroit" and date = 06/06/94
The attribute Flights.fare should be added to the existing case.
Acquiring Experience From New Queries (cont'd) Example:
New Query:
  Select destination
  From Flights
  Where airline = "United" and origin = "LA"
A new case should be added.
Incremental Query Answering/Reusable Queries Idea: The methodology of incremental query answering, or reusable queries, helps the CBR identify query dependencies.
Example:
First Query:
  Select flight_number marked FN
  From Flights
  Where origin = "LA" and destination = "Detroit" and date = 06/06/94
Second Query:
  Select fare
  From Flights
  Where flight_number = $(FN)
The two queries are clearly closely related to each other, and future associations can be made between them.
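A rough sketch of how such a dependency could be detected, by matching a value the first query marked against a placeholder the second query uses. The marked/$() syntax is taken from the example above, but the regular-expression parsing is a simplification, not the actual CoBase mechanism.

# Detect a reusable-query dependency through a shared marked variable ($FN).
import re

first_query  = 'Select flight_number marked FN From Flights Where origin = "LA" and destination = "Detroit"'
second_query = 'Select fare From Flights Where flight_number = $(FN)'

marked = set(re.findall(r"marked\s+(\w+)", first_query))
used   = set(re.findall(r"\$\((\w+)\)", second_query))

shared = marked & used
if shared:
    print("queries are dependent via:", shared)    # -> {'FN'}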
Implementation • Step 1 – Selection of similar cases • The indexing schema is used to select the association links whose attributes appear in the user query. • The set of cases similar to the user query is selected from those association links. • Step 2 – Selection and ranking of associated attributes • The selected cases for association are ranked by their usefulness. • Step 3 – Updating of association weights • The association weights are updated based on user feedback.
User Interface • A set of relevant associative attributes is presented to the user. • The attributes are ranked according to their association usefulness, based on the user profile and query context.
Revised (After User Feedback) Candidate Attributes for Association
Characteristics of the CoBase System 1. Case Memory • A history of past user queries is stored 2. Similarity Computation • It is both attribute-based and value-based • The value-based method makes use of Type Abstraction Hierarchies (TAHs) 3. Inter-relation of Cases • Similar cases within the case memory are interconnected by association links. • Weights are assigned to the association links to indicate their usefulness for association. 4. User Feedback • The system ranks all associations it has computed and returns the top candidates to the user. 5. Adaptation • Based on user feedback, association weights are increased or decreased.
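To illustrate the value-based side of the similarity computation, here is a small sketch of how an abstraction hierarchy could score two attribute values: the deeper their lowest common ancestor, the more similar they are. The hierarchy, the airports, and the scoring rule are illustrative assumptions, not the actual CoBase TAHs.

# Value-based similarity over a toy abstraction hierarchy (illustrative, not a real TAH).

parent = {
    "LAX": "West Coast", "SFO": "West Coast",
    "ORD": "Midwest",    "DTW": "Midwest",
    "West Coast": "US",  "Midwest": "US",
}

def ancestors(value):
    """The value followed by its chain of increasingly abstract parents."""
    chain = [value]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def value_similarity(a, b):
    """1.0 for identical values; smaller as the common ancestor gets more abstract."""
    chain_a = ancestors(a)
    for depth, node in enumerate(ancestors(b)):
        if node in chain_a:
            return 1.0 / (1 + depth)
    return 0.0

print(value_similarity("LAX", "SFO"))   # same region      -> 0.5
print(value_similarity("LAX", "ORD"))   # only share "US"  -> about 0.33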
Characteristics of the CoBase System (cont’d) Goodness of a CBR system depends on: • How much experience/knowledge it has: - Initial case history of 100+ queries • Its ability to understand new cases in terms of old - Attribute-based and value-based similarity ranking - Currently, joint attributes not considered • Its ability to respond quickly - Hashing is used to store features of cases - Quick retrieval: 300+ cases => around 2-3s 1500+ cases => around 6-7s - Scalability • Its ability to incorporate user feedback into the system - Updating formulae available to incorporate user feedback into the system • Its ability to acquire new experience - An important feature which is still lacking
Conclusions • Association control is based on experience acquisition, query context, and user profile. • The evolution of associations is managed by the CBR, which adapts its knowledge from user feedback. • A prototype has been constructed on top of CoBase, showing that the approach is feasible and scalable. • Further investigation areas: • Behavior of the learning algorithm that uses user feedback to modify association weights.