1. Research on Development of Intelligent Tutoring Systems (ITS) to Support Embedded Training (ET) in Future Army Systems. Presented by: Henry Marshall, RDECOM Simulation and Training Technology Center (STTC)
2. 2 Outline Background in ET
Proposed ITS Design/ Intelligent Structured Training Concept
Applications
C2V Robotics Testbed
CAT ATD Testbed Experiment
Virtual Warrior Testbed
Issues and Conclusion
3. 3 Embedded Combined Arms Team Training and Mission Rehearsal ATO
4. 4 ITS and ET, a Good Match? Training (to include Embedded Training) is a Key Performance Parameter for the Future Combat System (FCS) and Ground Soldier Systems (GSS). It is also a requirement for Abrams, Bradley, and Stryker.
The intent of ET is to fully embed the training system on the operational platforms
The instructional staff at current Army fixed sites will likely not be available for deployed forces
Can ITS-based technology be integrated with current simulation common components to replace the role of instructors for ET?
5. 5 FCS/ET – Training Challenge New paradigm requires scenario-based practice for FCS warfighters
CCTT has successfully used a Structured Training concept for basic techniques, tasks, and procedures (Structured Training for CCTT, STRUCTT)
Formal tactical doctrine for FCS operational concept is still evolving
Desirable to minimize costs of developing and administering training – reduce requirements for human instructors and simplify scenario definition
ITS are effective for simulating some of the benefits of a human instructor, especially for a domain with focused, task-based exercises
Enter our ITS research to prototype possible solutions
6. 6 Intelligent Structured Training? Goals are to
Maximize use of Simulation Common Components in developing the ITS system
Operate in typical virtual training environment
Develop a system capable of replacing instructors where possible
Assumption
Because of the complexity of free play exercises and the number of possible solutions, ITS would be useable for only a limited set of predefined training scenarios
Answer
Develop an ITS-based system that supports finite state transitions and provides prompts and feedback, operating in a virtual training scenario. We have named this Intelligent Structured Training (IST)
7. 7 Finite State Machines (FSMs) – What are they?
Transition networks executing in coordination with a simulation to gather data about instructionally significant events and states, and make evaluation conclusions in real time
Why use them in an ITS?
Several benefits:
Modularity – they can be used separately or in conjunction for a variety of scenarios
Instructional correspondence – individual instructional principles can be associated with independent evaluations
Integration – the FSM structure is easily integrated with free-play simulations and maps well to diagnostics for widely varied outcomes
Authoring ease – they can be represented visually, making them easy for non-programmers to create, maintain, and revise
The application implements these transition networks as a Behavior Transition Network (BTN), as sketched below
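The following is a minimal illustrative sketch of the idea, not the fielded BTN implementation: a small transition network that watches simulation events and reaches an evaluation conclusion in real time. The event names, the SITREP reporting rule, and the feedback strings are hypothetical examples.

```python
# Minimal sketch of an evaluation transition network (hypothetical; not the
# fielded BTN code). It watches simulation events and concludes in real time
# whether a SITREP was sent soon enough after crossing a control measure.
from dataclasses import dataclass

@dataclass
class SimEvent:
    kind: str      # e.g. "CROSSED_CONTROL_MEASURE", "SITREP_SENT"
    t: float       # simulation time in seconds

@dataclass
class SitrepEvaluator:
    deadline_s: float = 60.0
    state: str = "WAIT_FOR_CROSSING"
    crossed_at: float | None = None

    def on_event(self, ev: SimEvent) -> str | None:
        """Advance the network on each event; return feedback text, if any."""
        if self.state == "WAIT_FOR_CROSSING" and ev.kind == "CROSSED_CONTROL_MEASURE":
            self.state, self.crossed_at = "WAIT_FOR_SITREP", ev.t
        elif self.state == "WAIT_FOR_SITREP":
            if ev.kind == "SITREP_SENT":
                self.state = "DONE"
                return "OK: SITREP submitted after reaching the control measure."
            if ev.t - self.crossed_at > self.deadline_s:
                self.state = "DONE"
                return "Error: submit a SITREP after reaching a control measure."
        return None
```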
8. 8
9. 9 Intelligent Structured Training Concept Goal: Provide the benefits of instructor-led training in an embedded setting
Methods
Based on Intelligent Tutoring System (ITS) technology
Intelligent agents perform automated evaluation during execution
Subject matter experts define agent behaviors
Behaviors defined in hierarchical behavior transition networks (BTNs); see the sketch after this list
Real-time feedback, hinting, or coaching presented in the Soldier Machine Interface (SMI)
Must operate on small “footprint” of embedded computer systems
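As a rough illustration of the hierarchical BTN and SMI feedback points above, the sketch below composes several child evaluators (such as the SitrepEvaluator sketched earlier) under a parent network and routes their conclusions to a display callback. The class and method names are assumptions, not the actual system interfaces.

```python
# Illustrative sketch only: a parent network that runs child evaluators and
# forwards any feedback they produce to the crew-station display callback.
from typing import Callable, List

class HierarchicalEvaluator:
    def __init__(self, children: List, present_feedback: Callable[[str], None]):
        self.children = children                   # child BTN-style evaluators
        self.present_feedback = present_feedback   # e.g. draws a prompt on the SMI

    def on_event(self, event) -> None:
        """Fan each simulation event out to the children; surface any feedback."""
        for child in self.children:
            message = child.on_event(event)
            if message is not None:
                self.present_feedback(message)

# Usage (hypothetical): HierarchicalEvaluator([SitrepEvaluator()], print),
# then feed it events as they arrive from the exercise network.
```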
10. 10 C2V Experiment – Credit: LTC Mike Sanders, FA 57, mike.sanders@us.army.mil
11. 11 Command and Control Vehicle Crewstation Based on CAT ATD The device used in this experiment was the Robotics NCO crew station in a prototype FCS C2V. The Robotics NCO crew station consists of six flat-panel touch screens for the visualization and control of entities within the synthetic training environment (STE), haptic devices for the control of robotic assets (driver/gunner yokes, buttons, triggers, etc.), speakers for audio output, a finite state machine (FSM) embedded simulation component for computer generated forces (CGF) behaviors, a tutoring-based feedback application, and a data collection system.
12. 12 Command and Control Vehicle Crewstation The top three flat panel screens display a three-dimensional view of the STE, each representing a different view from the robotic vehicles. The top left screen displays the view from the visual sensor mounted on the Unmanned Aerial Vehicle (UAV). The top center screen displays the driver’s position view from the visual sensor mounted on the Unmanned Ground Vehicle (UGV). The top right screen displays the view seen by the visual sensor from the gunner’s position in the turret. When displayed, feedback prompts also appear on this screen.
13. 13 Command and Control Vehicle Crewstation The primary control environment that the test subjects used is called an Operator Control Unit (OCU) and is displayed on the bottom center screen. The OCU functions as the control interface for networked robotic vehicles under the test subject's command. It operates directly with the OneSAF Testbed Baseline (OTB) to control and monitor status for robotic entities under the test subject's control. It also provides situational awareness to the test subject through a map, scenario-defined graphic control measures (GCM), and icon tracking and locations.
An additional user interface, the Robotic Assets/Mission Status Tool, provides a high-level control interface for viewing the status of the unmanned vehicles under the operator's control, initiating commands to the vehicle entities, submitting reports, and acknowledging targets. It is located on the lower left screen, and the test subjects touch the screen to activate the appropriate buttons. The lower right screen hosts another user interface, the Tele-Operation Asset Tool, which provides a high-level control interface for viewing the status of the Tele-Op UGV under the operator's control and for issuing commands.
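As a compact summary of the crew-station layout described above, a configuration sketch is shown below. The screen assignments are taken from the text; the data structure itself is a hypothetical illustration, not an actual system artifact.

```python
# Hypothetical configuration sketch summarizing the crew-station screens
# described in the text above; not part of the actual system.
CREWSTATION_SCREENS = {
    "top_left":      "UAV visual sensor view",
    "top_center":    "UGV driver's position view",
    "top_right":     "Gunner's turret sensor view (feedback prompts appear here)",
    "bottom_left":   "Robotic Assets / Mission Status Tool (touch controls)",
    "bottom_center": "Operator Control Unit (OCU): map, GCMs, icon tracking",
    "bottom_right":  "Tele-Operation Asset Tool (Tele-Op UGV status and commands)",
}
```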
14. 14 ITS Generated Feedback Prompt
15. 15 Approach and Scenario Overview Task Analysis for an FCS-equipped Unit of Action (UA)
Required Functional Capabilities include sensor fusion and engagement techniques
User ~ Robotics Operator in the C2V at the Company level
Scenario ~ Route Reconnaissance
ITS modified to trigger OneSAF behaviors via DIS (see the sketch after the notes below)
Task decomposition focused on the common functional capabilities of the FCS C2V, unmanned robotic vehicles, and fires systems organic to either the Infantry or Mounted Combat System (MCS) Company levels. Additional fires platforms organic to the CA Bn were included in this functional capabilities review, as well as aviation assets, although aviation assets were quickly eliminated to reduce the scope of the evaluation set.
Information dominance is a cornerstone of the Future Force, so sensor fusion became the focus of the task analysis. The O&O defines fusion as "the combining or blending of data and information from single or multiple sources into information." This fusion provides the FCS-equipped UA the "Quality of Firsts," which enables it to dominate the battlefield. This battlefield dominance centers on the coupling of new lethality concepts (conceptual engagement types) and methods of receiving targeting information.
Lethality modes include the traditional Line of Sight (LOS) and Beyond Line of Sight (BLOS) engagements (direct fire) and Non-Line of Sight (NLOS) (indirect fire). BLOS is an extension of the traditional direct fire engagement because the shooter "sees" the target through a sensor that has a sensor-to-shooter link. Cooperative Engagement is an engagement method where the sensor and shooter are not together in a single platform. Point and Shoot is a subset of Cooperative Engagement and allows a platform to designate a target for engagement by a different platform. Point and Shoot requires a highly responsive effect (5 seconds or less) but occurs within the same tactical echelon.
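The bullet above notes that the ITS was modified to trigger OneSAF behaviors via DIS. Below is a purely hypothetical sketch of that kind of hook; the port, the encode_trigger() helper, and the packet layout are illustrative stand-ins, not the actual DIS PDU encoding or integration.

```python
# Hypothetical sketch of broadcasting a behavior trigger to a SAF on the
# exercise network. The real integration used DIS PDUs to cue OneSAF Testbed
# Baseline behaviors; the payload and helper below are illustrative stand-ins.
import socket
import struct

EXERCISE_PORT = 3000  # a commonly used DIS exercise port; site-specific in practice

def encode_trigger(exercise_id: int, behavior_id: int) -> bytes:
    """Toy fixed-layout payload standing in for a real DIS PDU."""
    return struct.pack("!BB", exercise_id, behavior_id)

def send_trigger(behavior_id: int, exercise_id: int = 1,
                 addr: str = "255.255.255.255") -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(encode_trigger(exercise_id, behavior_id), (addr, EXERCISE_PORT))
```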
16. 16 Robotics Operator Tasks Coordinated use of robotic assets
Example: Maintain proper separation between air asset and ground vehicle
Proper reporting procedures
Example: Send SITREP after reaching a control measure
Proper engagement procedures
Example: Lase a target before sending a call for fire (see the sketch after this list)
Proper use of asset control tools
Example: Make sure a vehicle is currently being controlled before issuing commands in the control interface.
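To illustrate how task rules like these could be encoded as evaluation checks, here is a hedged sketch of the engagement-procedure example above (lase before calling for fire). The event record fields and prompt text are hypothetical.

```python
# Hypothetical sketch: the "lase before call for fire" rule as a simple
# evaluation check over an ordered list of simulation events.
def check_call_for_fire(events: list[dict]) -> str | None:
    """Return a feedback prompt if a call for fire precedes a lase of the
    same target; return None if the procedure was followed."""
    lased_targets = set()
    for ev in events:
        if ev["kind"] == "LASE":
            lased_targets.add(ev["target_id"])
        elif ev["kind"] == "CALL_FOR_FIRE" and ev["target_id"] not in lased_targets:
            return "Lase the target before sending a call for fire."
    return None
```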
17. 17 Experimental Design Test group: 20 subjects
Comparison conditions
Immediate Directive Feedback (IDF) only
ITS generated feedback through prompts
AAR Only (Delayed Feedback)
Human-facilitated AARs used open-ended, content-neutral prompts
Experiment phases
Training and test phase
Initial human-tutored and computer-aided instruction
Two-phased execution
Paper and pencil test
Retention and post-test phase after a one-week delay
ITS-generated feedback: if the trainee's response was incorrect, immediate feedback was provided to the test subject in the form of an error message. No immediate feedback was provided for following correct procedures. The three types of immediate feedback included:
battlefield heuristic feedback, “Conduct a sensor scan before beginning movement”;
error detection feedback, “Submit a report anytime there is a change to the tactical situation”;
directive feedback, “You have failed to correctly submit a SITREP. The correct procedure is . . .”.
Directive feedback prompts were triggered when a test subject failed to take an appropriate action after receiving an error detection prompt, or conducted a procedure incorrectly.
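The escalation implied by these notes could look roughly like the sketch below. The prompt strings are taken from the slide; the function interface and the error codes are assumptions made for illustration.

```python
# Hypothetical sketch of selecting among the three immediate-feedback types
# described above (battlefield heuristic, error detection, directive).
def immediate_feedback(error_kind: str, had_error_detection_prompt: bool) -> str:
    if error_kind == "MOVED_WITHOUT_SCAN":
        # Battlefield heuristic feedback
        return "Conduct a sensor scan before beginning movement."
    if error_kind == "MISSED_SITREP" and not had_error_detection_prompt:
        # Error detection feedback on the first miss
        return "Submit a report anytime there is a change to the tactical situation."
    # Directive feedback: the failure persists after an error-detection prompt,
    # or the procedure was performed incorrectly
    return "You have failed to correctly submit a SITREP. The correct procedure is ..."
```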
18. 18 Experimental Results – Procedural Errors (note)
An ANOVA was performed with the between-subjects factor being Feedback Condition, F(1,18) = 5.87, and Training Trials, F(2,24) = 13.05, both with p < .05.
The ANOVA results tell us that there are significant differences in the number of IDF prompts triggered by each test subject, and that these differences are explained both by the number of training trials (learning occurring over time) and by the type of feedback received. The significantly lower number of error prompts triggered during the execution of training and transfer scenarios demonstrates that the IDF Only feedback condition had a significant effect on the acquisition and transfer of the performance procedures.
The differences in retention scores following feedback, while large, were not statistically significant.
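For readers who want to reproduce this style of analysis, the sketch below shows how a comparable mixed ANOVA (between-subjects Feedback Condition, within-subjects Training Trials) could be run with the pingouin package. It uses synthetic placeholder data, and the column names and data layout are assumptions; it is not the original analysis script.

```python
# Hedged sketch of a mixed ANOVA like the one reported above, on synthetic
# placeholder data (NOT the experiment's data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for s in range(20):                                  # 20 subjects, two conditions
    condition = "IDF" if s < 10 else "AAR_only"
    for trial in (1, 2, 3):                          # three training trials each
        base = (6 if condition == "IDF" else 8) - trial
        rows.append({"subject": f"S{s:02d}", "condition": condition,
                     "trial": trial,
                     "errors": int(max(0, base + rng.integers(-1, 2)))})
df = pd.DataFrame(rows)

# Between-subjects factor: condition; within-subjects factor: trial.
aov = pg.mixed_anova(data=df, dv="errors", within="trial",
                     between="condition", subject="subject")
print(aov)   # F and p for condition, trial, and their interaction
```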
19. 19 Analysis The timing and type of feedback received during training do affect the acquisition, retention, and transfer of knowledge
Procedural knowledge
Retention scores elevated in both comparison conditions
Lower number of errors with Immediate Directive Feedback
Conceptual knowledge
Retention scores elevated in both comparison conditions
Higher retention scores with AAR
Bottom line – proven learning from embedded ITS feedback!
This could provide deployable training and save $$$ if authoring costs were economical
Improved Pre-Brief or Post-Brief capability could help improve ITS conceptual knowledge retention
20. 20 TARDEC CAT ATD Integration
21. 21 Virtual Warrior Experiment – Credit: Major Jason Sims, FA 57, jason.sims@us.army.mil
22. 22 Authoring Tool Design
23. 23 VW-ITS Design
24. 24 VW-ITS Issues Numerous problems with OTB SAF behaviors filling out the rest of the Squad and OPFOR
Evaluations limited by troop availability
Most preferred a system that would allow most of the squad to be live (e.g., team training) rather than interacting with SAF
VW virtual locomotion (the ability to move around the database) was not well liked; exposure to fire was also a concern
Difficulty sending messages with the C2 system
25. 25 Issues Integration/Interoperation with OOS and Training Common Components
Best future direction: Game SDK vs. Training Common Components?
Mix with an Operational Coach/Mentor?
Experiment with enhanced / multi-modal feedback for the ITS
Integration into vehicles' information/ET systems
Transition focus to the PM; the systems we developed were focused on experimentation
Authoring systems for ease of production and usability by topic SMEs; low software license costs
Production cost per task will drive the economics of ITS acceptance
Explore Team Training
Improved CGF for ITS for control of OPFOR/BLUFOR behaviors
Exercise automated Pre-Brief/Post Brief
Ability to adjust the difficulty level either before execution or dynamically
26. 26 Discussion??