
AI on the Battlefield: an Experimental Exploration

Robert Rasch, US Army Battle Command Battle Lab; Kenneth Forbus, Northwestern University; Alexander Kott, BBN Technologies.




Presentation Transcript


  1. AI on the Battlefield: an Experimental Exploration Robert Rasch, US Army Battle Command Battle Lab; Kenneth Forbus, Northwestern University; Alexander Kott, BBN Technologies. Views expressed in this paper are those of the authors and do not necessarily reflect those of the U.S. Army or any agency of the U.S. government.

  2. Outline • Motivation for the experiment • The experimental rig • Experimental procedure • Findings • A surprising challenge uncovered

  3. The Role of BCBL-L • Exploration of new techniques and tools for Army C2 – a key focus of BCBL-L • Apparent emergence and maturing of multiple technologies for the MDMP (Military Decision-Making Process) • What is the right way to apply such technologies? Value? Drawbacks? • BCBL-L proposed and executed the Concept Experimentation Program (CEP) – Integrated Course of Action Critiquing and Elaboration System (ICCES)

  4. Room for Controversy • Some call for “…fast new planning processes… between man and machine… decision aids…” • Extensive training and specialization requirements? • Detract from intuitive, adaptive, art-like aspects of military command? • Undue dependence on vulnerable technology? • Make the plans and actions more predictable to the enemy? • The experiment was designed to address such concerns

  5. The Experimental Rig • Data flow: input (mission and intelligence analysis) → COA Creator Tool + COA Statement Tool → Fusion Tool → CADET Tool → output (detailed synchronization matrix) • COA Creator, by the Qualitative Reasoning Group at Northwestern University – allows a user to sketch a COA • The COA statement tool, by Alphatech – allows the user to enter the COA statement • Fusion engine, by Teknowledge – fuses the COA sketch and statement • CADET, by Carnegie Group & BBN – elaborates the fused sketch-and-statement into a detailed plan and estimates
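The tool chain on this slide can be pictured as a simple data-flow pipeline. The sketch below is purely illustrative: the class and function names (`COASketch`, `fuse`, `elaborate`) and the toy data are hypothetical stand-ins, not the actual ICCES interfaces, showing only how a fused sketch-and-statement might feed CADET-style elaboration into synchronization-matrix rows.

```python
from dataclasses import dataclass

@dataclass
class COASketch:
    """Graphical course-of-action sketch (the COA Creator's product)."""
    glyphs: list  # drawn map annotations, e.g. objectives, engagement areas

@dataclass
class COAStatement:
    """Structured textual COA statement (the statement tool's product)."""
    tasks: list

def fuse(sketch, statement):
    # Fusion engine's role (grossly simplified): pair each textual task
    # with the map glyph it refers to
    return list(zip(statement.tasks, sketch.glyphs))

def elaborate(fused):
    # CADET's role (grossly simplified): expand each fused task into a
    # synchronization-matrix row with a time slot and a location
    return [{"task": task, "slot": i, "where": glyph}
            for i, (task, glyph) in enumerate(fused)]

sketch = COASketch(glyphs=["OBJ SLAM", "EA FOX"])
statement = COAStatement(tasks=["seize", "destroy"])
matrix = elaborate(fuse(sketch, statement))
print(matrix)
# → [{'task': 'seize', 'slot': 0, 'where': 'OBJ SLAM'},
#    {'task': 'destroy', 'slot': 1, 'where': 'EA FOX'}]
```

The point of the staging is separation of concerns: the human supplies intent (sketch plus statement), and only the last stage does combinatorial plan expansion.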

  6. The COA Entry Bottleneck • The key bottleneck in MDMP digitization: • Time / effort / distraction • Training requirements • Downstream representation language • Our approach – COA Creator, based on nuSketch • Sketching = interactive drawing plus linguistic I/O • Rich conceptual understanding of the domain • Speech often not preferred in mix of modalities • Include “speechless” multimodal interface (buttons plus gestures) • Expressible in the underlying knowledge representation
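The "speechless" multimodal idea on this slide – a button press supplying the entity type while a drawn gesture supplies its spatial extent – can be sketched in a few lines. The `interpret` function and its output format are hypothetical illustrations of the principle, not nuSketch's actual API.

```python
def interpret(button: str, gesture: list) -> dict:
    """Combine a button press (symbolic entity type) with a drawn gesture
    (a list of (x, y) ink points) into one symbolic fact - no speech
    channel needed, and the result is directly expressible in an
    underlying knowledge representation."""
    cx = sum(x for x, _ in gesture) / len(gesture)
    cy = sum(y for _, y in gesture) / len(gesture)
    return {"type": button, "centroid": (cx, cy)}

# User taps the "armor-unit" button, then circles a map region
fact = interpret("armor-unit", [(0, 0), (2, 0), (2, 2), (0, 2)])
print(fact)
# → {'type': 'armor-unit', 'centroid': (1.0, 1.0)}
```

Because the button carries the semantics, the gesture recognizer only has to recover geometry, which is one reason sketching needs so little training.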

  7. Terrain features and characterization

  8. Units and control lines

  9. Objective and engagement areas

  10. Friendly tasks are defined

  11. The Experimental Procedure • Comparison with the conventional process • Exploratory vs. statistical rigor • Crossover schedule: Training → Case 1 (Team 1: conventional manual process; Team 2: ICCES-based process) → Case 2 (Team 2: manual; Team 1: ICCES) → Interviews, products review
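The counterbalanced team/case assignment described on this slide can be written down and sanity-checked programmatically. This is only a sketch of the design's balance property, not the authors' tooling.

```python
from collections import Counter

# Crossover schedule from the slide: each team plans each case once,
# alternating between the manual and the ICCES-based process
schedule = {
    ("Team 1", "Case 1"): "manual",
    ("Team 2", "Case 1"): "ICCES",
    ("Team 2", "Case 2"): "manual",
    ("Team 1", "Case 2"): "ICCES",
}

# Balance checks: every team uses each process exactly once, and each
# case is planned exactly once with each process
by_team = Counter((team, proc) for (team, _), proc in schedule.items())
by_case = Counter((case, proc) for (_, case), proc in schedule.items())
assert all(n == 1 for n in by_team.values())
assert all(n == 1 for n in by_case.values())
```

The balance means that differences between processes cannot be explained away by one team or one scenario being easier than the other.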

  12. Key Findings • Low training requirements • Largely due to “naturalness” of sketching • Simple, frugal CONOPS • No impact on creative aspects of the process • Largely driven by human-generated sketch-and-statement • Opportunity to explore more options • Dramatic time savings (3-5 times faster) • Mainly in downstream processing (e.g., planning) • Comparable quality of products • Few edits of ICCES-built products • Comparable quantitative measures (e.g., friendly losses)

  13. Parallel Experiments – Quality of Plans • Rigorous experimental comparison: computer-assisted vs. conventional • Multiple cases, subjects, judges • Inputs: products of 5 past exercises • Procedure: give the inputs a “computer look”, generate outputs with CADET, grade by 9 “blind” judges • Conclusions: indistinguishable quality of products, dramatically faster

  14. Surprise: Plan Presentation is a Key Concern • Conventional output presentation paradigms (e.g., the synchronization matrix) are ineffective • Larger number of elements • Inadequate spatial aspect • Difficult to detect errors • Alternatives: • Animation? • Cartoon sketches?
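To make the presentation problem concrete: a synchronization matrix is a unit-by-phase grid, and a machine-elaborated plan fills far more of its cells than a hand-built one. The renderer and the three-action plan below are hypothetical, illustrating only the grid layout whose scaling the slide criticizes.

```python
def render_sync_matrix(plan):
    """Lay out a plan as a unit-by-phase text grid - the conventional
    synchronization-matrix presentation that scales poorly as the
    number of plan elements grows."""
    units = sorted({a["unit"] for a in plan})
    phases = sorted({a["phase"] for a in plan})
    header = "\t".join([""] + phases)
    rows = ["\t".join([u] + [next((a["task"] for a in plan
                                   if a["unit"] == u and a["phase"] == p), "-")
                             for p in phases])
            for u in units]
    return "\n".join([header] + rows)

# Hypothetical plan fragment: two units across two phases
plan = [
    {"unit": "1-77 AR", "phase": "Phase 1", "task": "attack"},
    {"unit": "1-77 AR", "phase": "Phase 2", "task": "seize OBJ"},
    {"unit": "2-5 IN",  "phase": "Phase 1", "task": "support"},
]
print(render_sync_matrix(plan))
```

The grid grows as units × phases, it carries no map geometry at all, and a single wrong cell is easy to overlook, which is why the slide floats animation and cartoon sketches as alternatives.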

  15. Conclusions • For Army professionals: • Technologies like ICCES have near-term deployment potential • No impact on creativity, predictability • Dramatic acceleration, comparable quality • Challenges in inspecting, comprehending the new MDMP products • For the AI R&D community: • Dominant role of HMI challenges calls for new mechanisms • Value of natural sketch-based interfaces • Simple, straightforward, all-in-one CONOPS for users • No substitute for comparative experiments, from both practical and research perspectives

  16. BACKUP SLIDES
