Ideas for Explainable AI
Steve Solomon, Research Programmer
USC Institute for Creative Technologies
The Need for Explainable AI
• Complex computer-generated forces (CGFs) can be difficult to understand
• New simulation systems markedly increase AI complexity
• XAI for simulation-based training
  • Useful during after-action review
    • What happened?
    • Why did it happen?
    • What is the advantage or disadvantage of employing one tactic vs. another?
• XAI for simulation-based analysis
  • Validation of CGF behaviors
    • Standard practice is subject matter expert (SME) approval of observed CGF behavior
    • Difficult for SMEs to fully judge behavior from observation alone
    • Reduces false negatives
• XAI's demonstrated value in debugging AI behavior
XAI Production Research
• XAI in Full Spectrum Command (sketched below)
  • Log the AI's behavior during the training session
  • Pattern match for "Decision Points"
    • Task start/end, first contact, KIA, WIA, …
  • Provide who, what, when, where
  • Q&A-based interface to AI state information
    • Question and answer templates
    • Current task, organization, and status
    • Parameters that affect task performance
• XAI in Full Spectrum Warrior
  • Graphically depict lines of sight and awareness
  • Graphically depict "cones of attention"
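To make the FSC approach concrete, here is a minimal Python sketch of decision-point pattern matching over a behavior log, with template-based who/what/when/where answers. The event types, field names, and templates are illustrative assumptions, not FSC's actual implementation.

```python
from dataclasses import dataclass

# Event types treated as "Decision Points" (illustrative set, per the slide).
DECISION_EVENTS = {"task_start", "task_end", "first_contact", "kia", "wia"}

@dataclass
class LogEvent:
    time: float      # when
    entity: str      # who
    event: str       # what
    location: tuple  # where (grid x, y)

def find_decision_points(log):
    """Scan the behavior log for events that count as decision points."""
    return [e for e in log if e.event in DECISION_EVENTS]

# Template-based Q&A over matched decision points (hypothetical templates).
QA_TEMPLATES = {
    "what happened": lambda e: f"{e.entity} reported {e.event}",
    "when":          lambda e: f"at t={e.time:.1f}s",
    "where":         lambda e: f"at grid {e.location}",
}

def answer(question, decision_point):
    render = QA_TEMPLATES.get(question.lower())
    return render(decision_point) if render else "No template for that question."

if __name__ == "__main__":
    log = [
        LogEvent(12.0, "1st Squad", "task_start", (3, 7)),
        LogEvent(47.5, "1st Squad", "first_contact", (5, 9)),
    ]
    for dp in find_decision_points(log):
        print(answer("what happened", dp), answer("when", dp), answer("where", dp))
```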
Limitations of XAI in FSC
• Limited depth of explanation
  • Can explain "how" but can't explain "why"
  • "Why" is important for training, validation, and analysis
• Behavior and explanations are separate
  • Requires an additional step in AI development
  • Explanations must be kept in sync with behaviors
• XAI can cover for "invalid" behavior (bogus explanations)
  • Explanations can seem valid even if the behavior isn't
  • False sense of confidence in AI systems
New, Improved XAI
• User specifies "why" and XAI figures out "how"
  • User provides strategic goals and constraints
    • Goals: mission objective, Commander's Intent
    • Constraints: Rules of Engagement, doctrine, …
  • XAI generates the specific tactical behaviors
• AI planning system (see the sketch after this slide)
  • Generates multiple ranked tactical plans with embedded meta-knowledge about the tactics
  • Records a trace of its plan-generation process
  • Works from first principles
• Resolves all the limitations
  • XAI can explain "why"
  • Unifies behavior and XAI meta-knowledge
    • Doesn't add steps to CGF development
  • Enforces "valid" behavior
    • Easier to validate "why" knowledge than "how" knowledge
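A rough Python sketch of what the planner's output could look like: each tactical plan embeds the "why" meta-knowledge (the goals and constraints it satisfies) and a trace of the plan-generation process. The data structures, ranking heuristic, and candidate tactics below are all hypothetical, not the actual XAI planner's types.

```python
from dataclasses import dataclass, field

@dataclass
class TacticalPlan:
    steps: list                                # the "how": ordered tactical actions
    score: float                               # rank among candidate plans
    satisfies: list                            # the "why": goals and constraints met
    trace: list = field(default_factory=list)  # record of the generation process

def generate_plans(goals, constraints):
    """Toy stand-in for the AI planner: enumerate candidate tactics,
    check each against the constraints, and record why it was kept."""
    plans = []
    for route, score in (("flank left", 0.9), ("frontal assault", 0.4)):
        trace = [f"considered '{route}': consistent with {c}" for c in constraints]
        plans.append(TacticalPlan(
            steps=[f"move via {route}", "assault objective"],
            score=score,                       # dummy ranking heuristic
            satisfies=goals + constraints,
            trace=trace,
        ))
    return sorted(plans, key=lambda p: p.score, reverse=True)

best = generate_plans(["seize OBJ ALPHA (Commander's Intent)"],
                      ["ROE: no indirect fire"])[0]
print("Why:", "; ".join(best.satisfies))       # answers "why" questions
print("How:", " -> ".join(best.steps))         # answers "how" questions
print("Trace:", best.trace[0])                 # plan-generation record
```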
Figure: Explainable AI architecture. As an off-line activity, a Plan/Explanation Generator takes mission and entity goals and produces tactical plans with XAI meta-knowledge; a Natural Language Explainer then uses those plans, together with the simulation environment, to answer questions with tactical-plan explanations.
Using Soar in the Natural Language Explainer
• Provides discourse explanations using Rhetorical Structure Theory (RST) structures, cf. Carenini and Moore, ILEX
  • Nucleus + satellites
• Template-based approach to storing clauses, with variables
• Output of the XAI planner is loaded into working memory
  • Tactical plans
  • Meta-knowledge: raw RST template data
• Soar rules encode an RST rewrite grammar for the small-unit tactics domain (sketched below)
  • Different Soar operators for Why, How, and Compare queries
  • Propose operators that inform the user about discourse elements
  • Reason about elements the user already knows about to avoid repetition
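The actual operators are Soar rules, but the RST assembly logic they encode can be sketched in Python: a nucleus clause plus satellite clauses, templates with variables filled from the planner's meta-knowledge, and a simple user model so satellites the user already knows are not repeated. All template text, field names, and the "known elements" set are illustrative assumptions.

```python
# RST templates for a "why" query: one nucleus, optional satellites
# (hypothetical template text and variable names).
RST_TEMPLATES = {
    "why": {
        "nucleus":    "{unit} {action} in order to {goal}.",
        "satellites": [
            ("roe",     "This complies with {roe}."),
            ("terrain", "The {terrain} provides cover along the route."),
        ],
    },
}

def explain(query, facts, user_knows):
    """Render the nucleus plus any satellites whose facts the user
    doesn't already know, updating the user model as we go."""
    tpl = RST_TEMPLATES[query]
    clauses = [tpl["nucleus"].format(**facts)]
    for fact_key, satellite in tpl["satellites"]:
        if fact_key not in user_knows:      # avoid repeating known elements
            clauses.append(satellite.format(**facts))
            user_knows.add(fact_key)        # mark the element as now known
    return " ".join(clauses)

facts = {"unit": "1st Squad", "action": "flanks left", "goal": "seize OBJ ALPHA",
         "roe": "the no-indirect-fire ROE", "terrain": "tree line"}
user_knows = set()
print(explain("why", facts, user_knows))    # full nucleus + satellites
print(explain("why", facts, user_knows))    # satellites suppressed on repeat
```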