AI on the Battlefield: an Experimental Exploration
Robert Rasch, US Army Battle Command Battle Lab • Kenneth Forbus, Northwestern University • Alexander Kott, BBN Technologies
Views expressed in this paper are those of the authors and do not necessarily reflect those of the U.S. Army or any agency of the U.S. government.
Outline • Motivation for the experiment • The experimental rig • Experimental procedure • Findings • A surprising challenge uncovered
The Role of BCBL-L • Exploration of new techniques and tools for Army C2 – a key focus of BCBL-L • Apparent emergence and maturing of multiple technologies for MDMP • What is the right way to apply such technologies? Value? Drawbacks? • BCBL-L proposed and executed the Concept Experimentation Program (CEP) - Integrated Course of Action Critiquing and Elaboration System (ICCES)
Room for Controversy • Some call for “…fast new planning processes… between man and machine… decision aids…” • Extensive training and specialization requirements? • Detract from intuitive, adaptive, art-like aspects of military command? • Undue dependence on vulnerable technology? • Make the plans and actions more predictable to the enemy? • The experiment was designed to address such concerns
The Experimental Rig • Pipeline: input (mission and intelligence analysis) → COA Creator Tool + COA Statement Tool → Fusion Tool → CADET Tool → output (detailed synchronization matrix) • COA Creator, by the Qualitative Reasoning Group at Northwestern University - allows a user to sketch a COA • The COA statement tool, by Alphatech, allows the user to enter the COA statement • Fusion engine, by Teknowledge, fuses the COA sketch and statement • CADET, by Carnegie Group & BBN – elaborates the fused sketch-and-statement into a detailed plan and estimates
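As a rough, non-authoritative sketch of how this four-stage pipeline fits together, the toy Python below chains stand-ins for the tools; every class, function, and field name here is an illustrative assumption, not the actual ICCES interfaces.

    # Illustrative sketch only: the real ICCES tools are interactive applications;
    # the names below are assumptions, not the actual APIs.
    from dataclasses import dataclass

    @dataclass
    class COASketch:          # product of the COA Creator (Northwestern)
        units: list
        tasks: list

    @dataclass
    class COAStatement:       # product of the COA statement tool (Alphatech)
        text: str

    def fuse(sketch: COASketch, statement: COAStatement) -> dict:
        """Stand-in for the Teknowledge fusion engine: merge sketch and statement."""
        return {"sketch": sketch, "statement": statement}

    def elaborate(fused_coa: dict) -> list:
        """Stand-in for CADET: expand the fused COA into rows of a detailed plan
        (this toy version leaves scheduling and resourcing empty)."""
        return [{"task": t, "start": None, "resources": []} for t in fused_coa["sketch"].tasks]

    # End-to-end flow: sketch + statement -> fused COA -> detailed synchronization matrix rows
    sketch = COASketch(units=["TF 1-77"], tasks=["seize OBJ SLAM"])
    statement = COAStatement(text="TF 1-77 attacks to seize OBJ SLAM ...")
    matrix_rows = elaborate(fuse(sketch, statement))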
The COA Entry Bottleneck • The key bottleneck in MDMP digitization: • Time / effort / distraction • Training requirements • Downstream representation language • Our approach – COA Creator, based on nuSketch • Sketching = interactive drawing plus linguistic I/O • Rich conceptual understanding of the domain • Speech often not preferred in mix of modalities • Include “speechless” multimodal interface (buttons plus gestures) • Expressible in the underlying knowledge representation
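As a rough illustration of the "speechless" button-plus-gesture entry described above, the toy handler below maps a palette button and a drawn glyph to a symbolic assertion; the event fields and predicate names are assumptions, not nuSketch internals.

    # Toy illustration of a "speechless" multimodal entry: a palette button fixes the
    # task type, a drawn glyph fixes its location, and the pair becomes a symbolic
    # assertion. Predicate and field names are assumptions, not the nuSketch KR.
    def interpret_gesture(button: str, glyph_points: list[tuple[float, float]]) -> str:
        xs = [p[0] for p in glyph_points]
        ys = [p[1] for p in glyph_points]
        centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
        # Emit a knowledge-representation-style assertion for downstream tools.
        return f"(taskAt {button} (Point {centroid[0]:.1f} {centroid[1]:.1f}))"

    print(interpret_gesture("AttackByFire", [(10.0, 20.0), (14.0, 22.0), (12.0, 24.0)]))
    # -> (taskAt AttackByFire (Point 12.0 22.0))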
The Experimental Procedure • Comparison with the conventional process • Exploratory vs. statistical rigor • Crossover schedule (beginning with training, ending with interviews and products review):
    Conventional manual process: Team 1, Case 1; Team 2, Case 2
    ICCES-based process: Team 2, Case 1; Team 1, Case 2
Key Findings • Low training requirements • Largely due to “naturalness” of sketching • Simple, frugal CONOPS • No impact on creative aspects of the process • Largely driven by human-generated sketch-and-statement • Opportunity to explore more options • Dramatic time savings (3-5 times faster) • Mainly in downstream processing (e.g., planning) • Comparable quality of products • Few edits of ICCES-built products • Comparable quantitative measures (e.g., friendly losses)
Parallel Experiments – Quality of Plans • Rigorous experimental comparison: computer-assisted vs. conventional • Flow: products of 5 past exercises serve as inputs; the original human-built outputs are given a "computer look" while CADET generates its own; both are graded by 9 "blind" judges • Multiple cases, subjects, judges • Conclusions: indistinguishable quality of products, dramatically faster
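For flavor only, a minimal sketch of how blinded judges' grades for the two conditions might be compared; the scores and the choice of test are illustrative assumptions, not the study's actual data or analysis.

    # Hypothetical judge grades for human-built vs. CADET-built plans;
    # the numbers and the test choice are assumptions, not the study's data.
    from scipy.stats import mannwhitneyu

    human_grades = [7.5, 8.0, 6.5, 7.0, 8.5, 7.0, 6.0, 7.5, 8.0]   # 9 blind judges
    cadet_grades = [7.0, 8.5, 6.0, 7.5, 8.0, 6.5, 6.5, 7.0, 8.5]

    stat, p_value = mannwhitneyu(human_grades, cadet_grades, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p_value:.2f}")   # a large p is consistent with indistinguishable quality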
Surprise: Plan Presentation is a Key Concern • Conventional output presentation paradigms (e.g., the synchronization matrix) are ineffective • Larger number of elements • Inadequate spatial aspect • Difficult to detect errors • Alternatives: • Animation? • Cartoon sketches?
Conclusions
For Army professionals: • Technologies like ICCES have near-term deployment potential • No impact on creativity, predictability • Dramatic acceleration, comparable quality • Challenges in inspecting, comprehending the new MDMP products
For the AI R&D community: • Dominant role of HMI challenges calls for new mechanisms • Value of natural sketch-based interfaces • Simple, straightforward, all-in-one CONOPS for users • No substitute for comparative experiments, from both practical and research perspectives