
Explainable Systems: The Inference Web Approach

Explainable Systems: The Inference Web Approach. Paulo Pinheiro da Silva, Stanford University.



Presentation Transcript


  1. Explainable Systems: The Inference Web Approach. Paulo Pinheiro da Silva, Stanford University. In collaboration with Deborah L. McGuinness, Richard E. Fikes, Cynthia Chang, Priyendra Deshwal, Dhyanesh Narayanan, Alyssa Glass, Selene Makarios, Jessica Jenkins, Bill Millar, Eric Hsu, and many people from IBM, SRI, ISI, IHMC, U. Toronto, U. Trento, U. Fortaleza, U. Texas Austin, Rutgers U., U. Maryland, Battelle, SAIC, UCSF, MIT, and W3C.

  2. Overview • What are explainable systems and why should we care about them? • Inference Web: Enabling Explainable Systems • Explainable Systems in Action • Explainable Systems 10 years from now

  3. Explanation Need [Betty]: I need to send Paulo a letter but I don’t know his address. Google-2.0, where is Paulo’s office? Answers: 1) Stanford, CA, USA; 2) Manchester, UK. Google-2.0, why is Paulo’s address “Manchester, UK”? I believe Paulo lives in the U.S., so “Stanford, CA, USA” appears to be a possible answer.

  4. Explanation in Action [Betty]: Why should I believe this? Why should I believe these? Justification: Paulo At University of Manchester (Source: http://www.cs.man.ac.uk/~pinheirp, source usage: May 2002); University of Manchester At Manchester, UK (Source: http://www.cs.man.ac.uk, source usage: May 2002); by transitivity of At: Paulo At Manchester, UK. OK, “Manchester, UK” was Paulo’s address in May 2002 and we are in 2005!! I’ll send his letter to Stanford.

  5. What are Explainable Systems? [Bob]: the user poses a question and receives an answer; the user then issues explanation request 1 and receives explanation 1, …, explanation request n and receives explanation n, until the answer is understood.

  6. Why should we care about explainable systems? • As system users, we often need: • To understand a system’s responses • To trust a system’s responses • Many explanation concerns are the same as in early systems such as • Shortliffe’s MYCIN [1976] • Swartout’s XPLAIN [1983]

  7. Why should we care about explainable systems even more now? • Systems are far more complex than 30 years ago • Hybrid and distributed processing, e.g., web services, the Grid • Large number of heterogeneous, distributed information sources, e.g., the Web • More variation in reliability of information sources, e.g., information extraction • Sophisticated information integration methods, e.g., SIMS, TSIMMIS • Now we have less understanding (and sometimes less trust) of systems’ answers and behavior • Now we have even more reasons for systems to explain their responses

  8. How to Enable Explainable Systems? Which information do I have to generate an explanation? I may have (or may be able to record) data describing how I manipulate information to produce answers! Sample reasoner trace: 1 -> ((allof (the played-by of (the instances of Project-Leader)) where (It isa Person)) = (:set *Helen *Jody)) 2 -> (allof (the played-by of (the instances of Project-Leader)) where (It isa Person)) 3 -> (forall (the played-by of (the instances of Project-Leader)) where (It isa Person) It) 4 -> (the played-by of (the instances of Project-Leader)) 5 -> (the instances of Project-Leader) 5 (1) Local value(s): (:set *COGS-Proj-Leader-1 *HI-LITE-ProjectLeader-1 *SKIPR-ProjectLeader-1) 6 -> (:set *COGS-Proj-Leader-1 *HI-LITE-ProjectLeader-1 *SKIPR-ProjectLeader-1) [for (the instances of Project-Leader)] 6 <- (*COGS-Proj-Leader-1 *HI-LITE-ProjectLeader-1 *SKIPR-ProjectLeader-1) [(:set... 5 (2) From inheritance: (:set *COGS-Proj-Leader-1 *HI-LITE-ProjectLeader-1 *SKIPR-ProjectLeader-1)

  9. Explainable System Challenge: the GAP between Information Manipulation Data (what systems record) and Explanation, Understanding, and Trust (what users need).

  10. Overview • What are explainable systems and why should we care about them? • Inference Web: Enabling Explainable Systems • Explainable Systems in Action • Explainable Systems 10 years from now

  11. Requirements for Explainable Systems • Information Manipulation Traces • hybrid, distributed, portable, shareable, combinable encoding of proof fragments supporting multiple justifications • Presentation • multiple display formats supporting browsing, visualization, etc. • Abstraction • understandable summaries • Interaction • multi-modal mixed initiative options including natural-language and GUI dialogues, adaptive, context-sensitive interaction • Trust • source and reasoning provenance, automated trust inference • [McGuinness & Pinheiro da Silva, ISWC 2003, J. Web Semantics 2004]

  12. Explainable System Challenge: Explanation / Proof Markup Language / Information Manipulation Data

  13. Proof Markup Language: Node Sets and Inference Steps • A DAG of PML node sets (a collection of justifications) for the conclusion A^B • Inference rules: Direct Assertion (DA), Modus Ponens (MP), AND Introduction (^I) • Direct assertions: A->(A^B) from KB1, A from Doc1, B from Doc2 • Extracted proofs for the conclusion A^B: one via MP from A->(A^B) and A, one via ^I from A and B
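The node-set structure on this slide can be sketched in code. The Python below is illustrative only (the class and field names are not the actual PML vocabulary): a node set pairs a conclusion with alternative justifications, and one proof is extracted by picking a single justification per node set.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceStep:
    rule: str                 # e.g. "DA" (direct assertion), "MP", "^I"
    antecedents: list         # node sets this step's conclusion follows from
    source: str = ""          # provenance of a direct assertion

@dataclass
class NodeSet:
    conclusion: str
    justifications: list = field(default_factory=list)  # alternative steps

# The DAG from the slide: two alternative justifications for A^B
ns_impl = NodeSet("A->(A^B)", [InferenceStep("DA", [], "KB1")])
ns_a = NodeSet("A", [InferenceStep("DA", [], "Doc1")])
ns_b = NodeSet("B", [InferenceStep("DA", [], "Doc2")])
ns_ab = NodeSet("A^B", [
    InferenceStep("MP", [ns_impl, ns_a]),   # from A->(A^B) and A
    InferenceStep("^I", [ns_a, ns_b]),      # from A and B
])

def extract_proof(ns, pick=0):
    """Extract one proof tree by picking one justification per node set."""
    step = ns.justifications[pick]
    return (ns.conclusion, step.rule,
            [extract_proof(a) for a in step.antecedents])
```

Because `ns_a` is shared by both justifications, the collection forms a DAG rather than a tree, which is what lets one node set participate in many proofs.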

  14. Encoding Hybrid and Distributed Proof Fragments • Proof Markup Language has a web-based solution for distribution • Specification written in W3C’s OWL • Each node set has one URI, e.g., http://foo.com/NS.owl#NS123, http://foo.com/NS.owl#NS124, http://bar.com/NS.owl#NS125 • Node sets can be used to combine proofs generated by multiple agents • OMEGA [Siekmann et al., CADE 2002] has a nice solution for hybrid proofs • Example node set (http://foo.com/NS.owl#NS123): conclusion: A ^ B, written as (and A B); hasLanguage: KIF; rule: Modus Ponens (MP); hasEngine: JTP
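A minimal sketch of how URI-identified node sets support distribution, using the slide's URIs. The `registry` dict stands in for dereferencing each URI over the web, and the contents assigned to NS124 and NS125 are assumptions made for illustration:

```python
# Illustrative only: node sets live at their own URIs and refer to their
# antecedents by URI, so a combined proof is assembled by dereferencing.
registry = {
    "http://foo.com/NS.owl#NS124": {
        "conclusion": "A", "rule": "Direct Assertion", "antecedents": []},
    "http://bar.com/NS.owl#NS125": {
        "conclusion": "(implies A (and A B))", "rule": "Direct Assertion",
        "antecedents": []},
    "http://foo.com/NS.owl#NS123": {
        "conclusion": "(and A B)", "rule": "Modus Ponens",
        "engine": "JTP", "language": "KIF",
        "antecedents": ["http://foo.com/NS.owl#NS124",
                        "http://bar.com/NS.owl#NS125"]},
}

def assemble(uri):
    """Recursively dereference node-set URIs into one combined proof tree."""
    ns = registry[uri]
    return {"conclusion": ns["conclusion"], "rule": ns["rule"],
            "antecedents": [assemble(a) for a in ns["antecedents"]]}

proof = assemble("http://foo.com/NS.owl#NS123")
```

The point of the design is that NS123 (at foo.com) and NS125 (at bar.com) can be produced by different agents yet still combine into one proof, because references cross hosts the same way links do on the web.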

  15. Information Manipulation Traces Proof Markup Language covers the full spectrum of information manipulation traces! [Pinheiro da Silva, McGuinness & Fikes, IS 2005]

  16. Explainable System Challenge: Explanation / Proof Markup Language / Information Manipulation Data, Provenance Meta-data

  17. Infrastructure: IWBase • Meta-data useful for disclosing knowledge provenance and reasoning information, such as descriptions of • inference engines along with their supported inference rules • information sources such as organizations, publications and ontologies • languages along with their axioms • Core IWBase as well as domain IWBases • OWL files for interoperability and a database for scaling • [McGuinness & Pinheiro da Silva, IIWeb 2003]

  18. Infrastructure: Core IWBase Statistics for relevant domain-independent meta-data: Inference Engines: 29; Declarative Rules: 38; Method Rules: 10; Derived Rules: 6; Axioms: 56; Languages: 12

  19. Explainable System Challenge: Explanation, Presentation / Proof Markup Language / Information Manipulation Data, Provenance Meta-data

  20. Browsing Proofs (1/2) • Enable the visualization of proofs (and abstracted proofs) • Proofs can be “extracted” and browsed from both local and remote PML node sets and can be combined • Links provide access to proof-related meta-information

  21. Browsing Proofs (2/2)

  22. Explainable System Challenge: Explanation, Presentation, Abstraction / Proof Markup Language / Information Manipulation Data, Provenance Meta-data

  23. Knowledge Provenance Elicitation • Google-2.0 says ‘A^B’ is the answer to my question. Why should I believe this? • Provenance information may be essential for users to trust answers • Data provenance (aka data lineage) is defined and studied in the database literature [Buneman et al., ICDT 2001; Cui and Widom, VLDB 2001] • Knowledge provenance extends data provenance by adding data derivation provenance information [Pinheiro da Silva, McGuinness & McCool, Data Eng. Bulletin, 2003] • Diagram: sources BBC, NYT and CNN with “has opinion” links to the direct assertions A->(A^B) (CNN, BBC), A (BBC, NYT) and B (CNN), combined by MP and ^I to justify A^B
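Eliciting knowledge provenance can be sketched as a walk over the justification DAG that collects every source a conclusion ultimately depends on. The dict-based proof representation below is illustrative, not PML's actual encoding:

```python
# Illustrative only: each node may carry the sources that directly support
# it; the provenance of a conclusion is the union over the whole DAG.
def provenance(proof):
    """Collect every source the proof's conclusion ultimately depends on."""
    sources = set(proof.get("sources", []))
    for sub in proof.get("antecedents", []):
        sources |= provenance(sub)
    return sources

# Direct assertions with the slide's source attributions
impl = {"conclusion": "A->(A^B)", "sources": ["CNN", "BBC"]}
a = {"conclusion": "A", "sources": ["BBC", "NYT"]}
b = {"conclusion": "B", "sources": ["CNN"]}

# A^B justified by AND-introduction from A and B
ab = {"conclusion": "A^B", "antecedents": [a, b]}
deps = provenance(ab)
```

A user shown `deps` can judge the answer by the reputations of BBC, NYT and CNN rather than having to trust Google-2.0 blindly.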

  24. Knowledge Provenance Example (screenshot showing an answer and its sources)

  25. Abstracting Proofs • Explanation tactics (a.k.a. rewriting rules) may be used to abstract proofs into more understandable and manageable explanations • Enable the use of axioms as inference rules, avoiding the presentation of primitive (and potentially less interesting and useful) rules • Eliminate intermediate results from proofs

  26. Abstracting Proofs: An Example (1/2) • Explanation tactic (from the tactic library): “Organization Owner Typically Has Office at Organization” • Abstractor algorithm: 1) Match conclusion (the key for selecting tactics) 2) Match leaf nodes 3) Unify 4) Propagate conclusion 5) Apply the assertion-level rule 6) Propagate justified nodes • Original proof: direct assertions (implies (and (Holds (owner ?person ?object) ?when) (organization ?object)) (Holds* (hasOffice ?person ?object) ?when)), (implies (and (Holds* ?f ?t) (not (Ab ?f ?t))) (Holds ?f ?t)), (Holds (owner JosephGradgrind GradgrindFoods) Apr1_03) and (organization GradgrindFoods), plus the assumption (not (Ab (hasOffice JosephGradgrind ?where) ?when)), combined by Generalized Modus Ponens into (Holds* (hasOffice JosephGradgrind GradgrindFoods) Apr1_03) and then into (Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03) • ABSTRACTED PROOF: the two direct assertions about ownership and organization justify (Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03) in a single step via the tactic
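A drastically simplified sketch of the abstractor, assuming proofs are `(conclusion, rule, subproofs)` tuples: the tactic is selected by its conclusion, its premises are matched against the proof's leaves, and the intermediate steps are collapsed into one assertion-level rule application. Steps 3, 4 and 6 (unification and propagation) are omitted here, so this handles only ground proofs:

```python
def leaves(proof):
    """Set of leaf conclusions (the proof's assertions and assumptions)."""
    concl, _rule, subs = proof
    if not subs:
        return {concl}
    return set().union(*(leaves(s) for s in subs))

def abstract(proof, tactic):
    """Apply one explanation tactic if conclusion and leaf premises match."""
    concl, _rule, _subs = proof
    if concl == tactic["conclusion"] and tactic["premises"] <= leaves(proof):
        # Collapse the derivation into one assertion-level rule application
        return (concl, tactic["name"],
                [(p, "Direct assertion", []) for p in sorted(tactic["premises"])])
    return proof

tactic = {
    "name": "Organization Owner Typically Has Office at Organization",
    "conclusion": "(Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03)",
    "premises": {
        "(Holds (owner JosephGradgrind GradgrindFoods) Apr1_03)",
        "(organization GradgrindFoods)",
    },
}

detailed = (  # the slide's proof, with the intermediate Holds* step
    "(Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03)",
    "Generalized Modus Ponens",
    [("(Holds* (hasOffice JosephGradgrind GradgrindFoods) Apr1_03)",
      "Generalized Modus Ponens",
      [("(Holds (owner JosephGradgrind GradgrindFoods) Apr1_03)",
        "Direct assertion", []),
       ("(organization GradgrindFoods)", "Direct assertion", [])])],
)
abstracted = abstract(detailed, tactic)
```

The abstracted result has the three-node shape shown on the next slide: two direct assertions justifying the conclusion through the named assertion-level rule.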

  27. Abstracting Proofs: An Example (2/2) • ABSTRACTED PROOF IN DISCURSIVE STYLE: • A rule says that • the owner of an organization typically has an office in that organization • Because • JosephGradgrind owned GradgrindFoods on April 1st, 2003 • GradgrindFoods is an organization • therefore • JosephGradgrind had an office at GradgrindFoods on April 1st, 2003. • ABSTRACTED PROOF: direct assertions (Holds (owner JosephGradgrind GradgrindFoods) Apr1_03) and (organization GradgrindFoods) justify (Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03) via “Organization Owner Typically Has Office at Organization” • Assertion-level rules are introduced in [Huang, PRICAI 1996]. Maybury describes strategies for rewriting abstracted proofs into English [AAAI 1991, AAAI 1993]. Explanation tactics support multi-level abstraction of proofs.

  28. Explainable System Challenge: Explanation, Understanding, Interaction, Presentation, Abstraction / Proof Markup Language / Information Manipulation Data, Provenance Meta-data

  29. Explaining Answers: GUI Explainer • Users can ask for alternative explanations • Users can exit the explainer, providing feedback about their satisfaction with the explanation(s)

  30. Explainable System Challenge: Explanation, Understanding, Interaction, Presentation, Abstraction / Proof Markup Language, Inference Meta-Language / Information Manipulation Data, Provenance Meta-data, Inference Rule Specs

  31. Inference Meta Language (InferenceML) • An inference rule involves patterns of transformations on expressions to produce a conclusion • InferenceML uses schemas to state such transformations • InferenceML defines a schema to be a pattern, which is any expression of CL in which some lexical items have been replaced by a schematic variable (or meta-variable) • Example: ndUI: '(forall (' N ')' q ')' |- '(forall (' N - N.i ')' q[t/N.i] ')' ;; (Name N) (Sent q) (Term t)

  32. Checking Proofs • Rule schema from the IWBase: MP: x ; '(implies ' x y ')' |- y ;; (Sent x y) • Given the direct assertions (A) and (implies (A) (and A B)), MP derives (and A B) • Binding of expressions to schematic variables: • x binds to (A) • y binds to (and A B)  the rule schema instantiates directly to: (A) ; (implies (A) (and A B)) |- (and A B)
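The binding step on this slide can be illustrated with a toy checker for the MP schema. This is not InferenceML's actual checker, just a sketch over a simple s-expression representation:

```python
def parse(s):
    """Parse an s-expression string into nested lists of tokens."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(pos):
        if tokens[pos] == "(":
            out, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = read(pos)
                out.append(item)
            return out, pos + 1
        return tokens[pos], pos + 1
    expr, _ = read(0)
    return expr

def check_mp(premises, conclusion):
    """MP: x ; (implies x y) |- y  -- bind x and y, then check the step."""
    x, impl = parse(premises[0]), parse(premises[1])
    if impl[0] != "implies" or impl[1] != x:
        return False           # second premise is not (implies x ...)
    y = impl[2]                # y binds to the consequent
    return parse(conclusion) == y

ok = check_mp(["(A)", "(implies (A) (and A B))"], "(and A B)")
```

Here `x` binds to `(A)` and `y` to `(and A B)`, mirroring the instantiation shown on the slide; any conclusion other than `(and A B)` is rejected.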

  33. Explainable System Challenge: Explanation, Understanding, Trust, Interaction, Presentation, Abstraction / Proof Markup Language, Inference Meta-Language / Information Manipulation Data, Provenance Meta-data, Inference Rule Specs

  34. IWTrust: Trust in Action • Google-2.0 says ‘A^B’ is the answer to my question. Why should I trust the answer? • Trust can be inferred from a web of trust • IWTrust provides infrastructure for building webs of trust • The infrastructure includes a trust component responsible for computing trust values for answers • Diagram: sources XYZ, NYT and CNN support the direct assertions A->(A^B) (CNN, XYZ), A (XYZ, NYT) and B (CNN); trust annotations ranging from ‘?’ and ‘0’ to ‘+’ and ‘++’ are propagated through MP and ^I to the answer A^B • IWTrust is described in [Zaihrayeu, Pinheiro da Silva & McGuinness, iTrust 2005]
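IWTrust's actual computation is described in the cited paper; the sketch below only illustrates the general idea of propagating trust values through a proof DAG. The numeric values and the min/max combination rule are assumptions for illustration, not IWTrust's algorithm:

```python
def trust(node, source_trust):
    """Trust of a conclusion: max over alternative justifications,
    min over each justification's sources and antecedents."""
    _conclusion, justifications = node
    best = 0.0
    for sources, antecedents in justifications:
        vals = ([source_trust[s] for s in sources] +
                [trust(a, source_trust) for a in antecedents])
        if vals:
            best = max(best, min(vals))
    return best

# Made-up trust values for the slide's sources
source_trust = {"CNN": 0.9, "NYT": 0.8, "XYZ": 0.3}

# A and B each have two alternative single-source justifications;
# A^B is derived from both A and B, so it needs both to be trusted.
a = ("A", [(["NYT"], []), (["XYZ"], [])])
b = ("B", [(["CNN"], []), (["XYZ"], [])])
ab = ("A^B", [([], [a, b])])

value = trust(ab, source_trust)
```

The end-to-end value for the answer reflects its weakest necessary link while still benefiting from the strongest alternative justification of each subconclusion.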

  35. Inference Web and Paulo • Paulo is a co-technical leader of the Inference Web project • Paulo was the main IW developer for 1½ years • Paulo has been the manager of the IW development team, whose members have included: • 1 research programmer • 3 masters students • 1 Ph.D. student • Paulo has organized the IW weekly meetings • Paulo has been responsible for presenting and demonstrating IW solutions at several DARPA and ARDA PI meetings • Paulo has participated in writing grant proposals

  36. Overview • What are explainable systems and why should we care about them? • Inference Web: Enabling Explainable Systems • Explainable Systems in Action • Explainable Systems 10 years from now

  37. Application Areas • Information extraction – IBM (UIMA), Stanford (TAP) • Information integration – USC ISI (Prometheus/Mediator); Rutgers University (Prolog/Datalog) • Task processing – SRI International (SPARK) • Theorem proving • First-order theorem provers – SRI International (SNARK); Stanford (JTP); University of Texas, Austin (KM) • SATisfiability solvers – University of Trento (J-SAT) • Expert systems – University of Fortaleza (JEOPS) • Service composition – Stanford, University of Toronto, UCSF (SDS) • Semantic matching – University of Trento (S-Match) • Debugging ontologies – University of Maryland, College Park (SWOOP/Pellet) • Problem solving – University of Fortaleza (ExpertCop) • Trust networks – University of Trento (IWTrust) • No single explanation approach has been used in as many diverse areas as Inference Web!

  38. Extraction as Inference • Goal: to provide browsable justifications of information extraction • Strategy: reuse, adapt, and integrate existing technology: • justification technology – Inference Web • extraction technology – IBM’s UIMA • Requires systems to describe their processing as logical inferences • Requires a new perspective: IE as inference • [Murdock, Pinheiro da Silva et al., AAAI SSS 2005]

  39. Extraction as Inference: An Example (1/2) • Solution: • a taxonomy of extraction tasks expressed as inference rules • components that record IE justifications using rules in the taxonomy • We have identified 9 types of extraction inferences: 6 for analysis and 3 for integration • Extraction chain: direct assertion from gradgrind.txt “Joseph Gradgrind is the owner of Gradgrind Foods” → Entity Recognition (IBM EAnnotator): “Gradgrind Foods” [organization] → Entity Identification (IBM Cross-Annotator Coreference): “Gradgrind Foods” [organization] [refers to GradgrindFoods] → Extracted Entity Classification: (organization GradgrindFoods) → Document Coreference: (organization GradgrindFoods) • The extracted facts then feed the reasoning chain of slide 26: direct assertions from KB1, including (Holds (owner JosephGradgrind GradgrindFoods) Apr1_03), combined by Generalized Modus Ponens into (Holds (hasOffice JosephGradgrind GradgrindFoods) Apr1_03)

  40. Extraction as Inference: An Example (2/2) [Betty]: Why should I believe this? Why should I believe these? Why should I believe that these documents say that? • Theorem proving: Paulo At University of Manchester; University of Manchester At Manchester, UK; by transitivity of At: Paulo At Manchester, UK • Information extraction: http://www.cs.man.ac.uk/~pinheirp says “Paulo is a PhD student at University of Manchester.”; http://www.cs.man.ac.uk says “University of Manchester is located in Manchester, UK.”

  41. Explaining Tool Responses • Explain (v. tr.): “To offer reasons for the actions, beliefs, or remarks of (oneself).” (Dictionary.com) • Questions and answers: inferences for explaining answers (aka beliefs) • Generalization – requests and responses: inferences for explaining answers (aka beliefs) and tasks (including actions) • New perspective: task processing as inference

  42. NL Explainer: An Example <user>: What are you doing now? <system>: I am trying to get an approval to buy a laptop. <user>: Why? [note: “Why?” is rephrased as “Why are you trying to get an approval to buy a laptop?”] <system>: I have completed the previous requirement to get quotes, so I am now working on “get approval”. <user>: OK, I am happy with your explanation. • Leveraging explanation dialogues as in [Fiedler, IJCAI 2001] • Using natural language support as in [Allen et al., AAMAS 2002]

  43. Overview • What are explainable systems and why should we care about them? • Inference Web: Enabling Explainable Systems • Explainable Systems in Action • Explainable Systems 10 years from now

  44. Inference Web Contributions • 1. Language for encoding hybrid, distributed proof fragments based on web technologies, with support for both formal and informal proofs (information manipulation traces) • 2. Support (registry, language, services) for knowledge provenance • 3. Declarative inference rule representation for checking hybrid, distributed proofs • 4. Multiple strategies for proof abstraction, presentation and interaction • 5. End-to-end trust value computation for answers • 6. Comprehensive solution for explainable systems • (Architecture layers: Explanation, Understanding, Trust, Interaction, Presentation, Abstraction / Proof Markup Language, Inference Meta-Language / Information Manipulation Data, Provenance Meta-data, Inference Rule Specs)

  45. Open Issues • Automated generation of explanation tactics • Performance for abstracting and checking proofs • Use of machine learning and user modeling to support interaction • Adaptive explanations • Explanation contexts • Modeling user knowledge • Metrics and evaluations for explainable systems

  46. Three Years From Now • An initial research community working on explainable systems • Adaptive explanations based on user modeling • IWBase registration of a large set of software systems • Registration of a comprehensive set of primitive rules • Established library of explanation tactics • First generation of metrics and evaluation methods for explainable systems • Inference Web as a solution for the Semantic Web proof and trust layers (cf. http://www.w3.org/2004/Talks/0412-RDF-functions/slide4-0.html)

  47. Ten Years From Now • An established research community working on explainable systems • A theory for explainable systems • Established metrics for explainable systems • First (or second) generation of industrial explainable systems • A standard language for encoding information manipulation traces (probably derived from PML among other proposals). The language will include support for the following: • probabilistic reasoning • inductive reasoning

  48. and Inference Web • Immediate connections • Explaining Task Processing • TaskTracer • CALO with Intelligent Information Systems team • Explaining Tool Responses • Explaining WYSIWYT – with End Users Shaping Effective Software team • Potential connections • Explanation generation • Filtering Learning • Explanation-based learning • with Learning and Adaptive Systems team • Explaining pattern and object recognition from videos and graphs • with Computer Graphics and Vision
