Intelligent Behaviors for Simulated Entities I/ITSEC 2006 Tutorial Presented by: Ryan Houlette Stottler Henke Associates, Inc. houlette@stottlerhenke.com 617-616-1293 Jeremy Ludwig Stottler Henke Associates, Inc. ludwig@stottlerhenke.com 541-302-0929
Outline • Defining “intelligent behavior” • Authoring methodology • Technologies: • Cognitive architectures • Behavioral approaches • Hybrid approaches • Conclusion • Questions
The Goal • Intelligent behavior • a.k.a. entities acting autonomously • generally replacements for humans • when humans are not available • scheduling issues • location • shortage of necessary expertise • simply not enough people • when humans are too costly
“Intelligent Behavior” • Pretty vague! • General human-level AI not yet possible • computationally expensive • knowledge authoring bottleneck • Must pick your battles • what is most important for your application • what resources are available
Decision Factors • Entity “skill set” • Fidelity • Autonomy • Scalability • Authoring
Factor: Entity Skill Set • What does the entity need to be able to do? • follow a path • work with a team • perceive its environment • communicate with humans • exhibit emotion/social skills • etc. • Depends on purpose of simulation, type of scenario, echelon of entity
Factor: Fidelity • How accurate does the entity’s behavior need to be? • correct execution of a task • correct selection of tasks • correct timing • variability/predictability • Again, depends on purpose of simulation and echelon • training => believability • analysis => correctness
Factor: Autonomy • How much direction does the entity need? • explicitly scripted • tactical objectives • strategic objectives • Behavior reusable across scenarios • Dynamic behavior => less brittle
Factor: Scalability • How many entities are needed? • computational overhead • knowledge/behavior authoring costs • Can be mitigated • aggregating entities • distributing entities
Factor: Authoring • Who is authoring the behaviors? • programmers • knowledge engineers • subject matter experts • end users / soldiers • Training/skills required for authoring • Quality of authoring tools • Ease of modifying/extending behaviors
Choosing an Approach • Weigh the five factors against one another: skill set, fidelity, autonomy, scalability, ease of authoring • Also ease of integration with simulation...
Agent Technologies • Wide range of possible approaches • Will discuss the two extremes • [Figure: a spectrum from deliberative to reactive, with cognitive architectures (EPIC, ACT-R, Soar) at the deliberative end and behavioral approaches (scripting, FSMs) at the reactive end]
Authoring Methodologies • [Figure: the behavior model runs on an agent architecture, which in turn interacts with the simulation]
Basic Authoring Procedure • Determine desired behavior • Build behavior model • Run simulation • Evaluate entity behavior • If not done, refine behavior model and run again
Iterative Authoring • Often useful to start with limited set of behaviors • particularly when learning new architecture • depth-first vs. breadth-first • Test early and often • Build initial model with revision in mind • good software design principles apply: modularity, encapsulation, loose coupling • Determining why model behaved incorrectly can be difficult • some tools can help provide insight
The Knowledge Bottleneck • Model builder is not subject matter expert • Transferring knowledge is labor-intensive • For TacAir-Soar, 70-90% of model dev. time • To reduce the bottleneck: • Repurpose existing models • Use SME-friendly modeling tools • Train SMEs in modeling skills • => Still an unsolved problem
The Simulation Interface • Simulation sets bounds of behavior • the primitive actions entities can perform • the information about the world that is available to entities • Can be useful to “move interface up” • if simulation interface is too low-level • abstract away simulation details • in wrapper around agent architecture • in “library” within the behavior model itself • enables behavior model to be in terms of meaningful units of behavior
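The "library" idea above can be sketched as a thin wrapper class. This is a minimal sketch only: SimWrapper, plan_path, sensor_contacts, and every other name here are hypothetical, not part of any particular simulation API.

```python
class SimWrapper:
    """Hypothetical wrapper that lifts a low-level simulation API
    up to behavior-level actions (all names are illustrative)."""

    def __init__(self, sim):
        self.sim = sim

    def move_to(self, waypoint):
        # One behavior-level action decomposes into primitive sim calls.
        for heading in self.sim.plan_path(self.sim.position(), waypoint):
            self.sim.set_heading(heading)
            self.sim.advance()

    def enemy_visible(self):
        # Collapse raw sensor contacts into one meaningful predicate.
        return any(c["hostile"] for c in self.sim.sensor_contacts())


class FakeSim:
    """Stand-in for the simulation's primitive interface (illustrative)."""
    def __init__(self):
        self.log = []
    def position(self):
        return (0, 0)
    def plan_path(self, start, goal):
        return ["N", "N", "E"]  # canned path, just for the sketch
    def set_heading(self, h):
        self.log.append(("heading", h))
    def advance(self):
        self.log.append(("advance",))
    def sensor_contacts(self):
        return [{"hostile": False}, {"hostile": True}]


agent_view = SimWrapper(FakeSim())
agent_view.move_to((2, 1))
```

The behavior model now reasons in terms of move_to and enemy_visible rather than headings and contact lists, which is exactly the "meaningful units of behavior" the slide argues for.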
Cognitive Architectures • Overview • EPIC, ACT-R, & Soar • Examples of Cognitive Models • Strengths / Weaknesses of Cognitive Architectures
Introduction • What is a cognitive architecture? • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003) • Why cognitive architectures? • Advance psychological theories of cognition • Create accurate simulations of human behavior
Introduction • What is cognition? • Where does psychology fit in?
A Theory – The Model Human Processor • Some principles of operation • Recognize-act cycle • Fitts's law • Power law of practice • Rationality principle • Problem space principle (from Card, Moran, & Newell, 1983)
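Two of these principles are concrete enough to state as formulas. A minimal sketch follows; the constants a, b, t_first, and alpha are illustrative fitted parameters, not values from the tutorial or from Card, Moran, & Newell.

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's law: movement time grows with the index of
    difficulty log2(2D/W); a and b are empirically fitted."""
    return a + b * math.log2(2 * distance / width)

def power_law_of_practice(t_first, trial, alpha):
    """Power law of practice: time on trial n falls as n**(-alpha)."""
    return t_first * trial ** (-alpha)

# Illustrative parameter values only:
mt = fitts_movement_time(a=0.1, b=0.15, distance=8, width=2)
t16 = power_law_of_practice(t_first=2.0, trial=16, alpha=0.5)
```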
Architecture • Definition • “a broad theory of human cognition based on a wide selection of human experimental data and implemented as a running computer simulation” (Byrne, 2003) • Two main components in modeling • Cognitive model programming language • Runtime Interpreter
EPIC Architecture • Processors • Cognitive • Perceptual • Motor • Operators • Cognitive • Perceptual • Motor • Knowledge Representation (from Kieras, http://www.eecs.umich.edu/~kieras/epic.html)
Components of a Model • [Figure: a model combines a task description and a task strategy, written in the architecture language, and runs on the architecture runtime against the task environment]
Task Description • There are two points on the screen: A and B. • The task is to point to A with the right hand, and press the “Z” key with the left hand when it is reached. • Then point from A to B with the right hand and press the “Z” key with the left hand. • Finally point back to A again, and press the “Z” key again.
Task Environment • [Figure: a screen showing the two points, A and B]
EPIC Production Rule

(Top_point_A
 IF
 ((Step Point AtA)
  (Motor Manual Modality Free)
  (Motor Ocular Modality Free)
  (Visual ?object Text My_Point_A))
 THEN
 ((Send_to_motor Manual Perform Ply Cursor ?object Right)
  (Delete (Step Point AtA))
  (Add (Step Click AtA))))
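The rule above follows the recognize-act cycle named earlier: match condition elements against working memory, then add and delete elements. A toy interpreter for that cycle might look like this; run_cycle and the tuple encoding are inventions for this sketch, not the EPIC engine.

```python
# Working memory is a set of tuples; a rule is (conditions,
# additions, deletions) and fires when all conditions are present.
def run_cycle(wm, rules):
    fired = True
    while fired:
        fired = False
        for conditions, additions, deletions in rules:
            if all(c in wm for c in conditions):
                wm |= set(additions)
                wm -= set(deletions)
                fired = True
                break  # one rule firing per recognize-act cycle
    return wm

rules = [
    # Rough analogue of Top_point_A: when at the point-to-A step and
    # the motor processor is free, issue the Ply and advance the step.
    ((("step", "point_a"), ("motor", "free")),
     [("sent", "ply_to_a"), ("step", "click_a")],
     [("step", "point_a")]),
]
wm = run_cycle({("step", "point_a"), ("motor", "free")}, rules)
```

Deleting the old step element is what keeps the rule from firing forever, mirroring the Delete/Add pair in the EPIC rule.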
ACT-R and Soar • Motivations • Features • Models
Initial Motivations • ACT-R • Memory • Problem solving • Soar • Learning • Problem solving
ACT-R Architecture (from Budiu, R., http://actr.psy.cmu.edu/about/)
Some ACT-R Features • Declarative memory stored in chunks • Memory activation • Buffer size between modules is one chunk • One rule fires per cycle • Learning • Memory retrieval, production utilities • New productions, new chunks
Task Description • Simple Addition • 1 + 3 = 4 • 2 + 2 = 4 • Goal: mimic the performance of four-year-olds on simple addition tasks • This is a memory retrieval task, where each number is retrieved (e.g. 1 and 3) and then an addition fact is retrieved (1 + 3 = 4) • The task demonstrates partial matching of declarative memory items, and requires tweaking a number of parameters • From the ACT-R tutorial, Unit 6
ACT-R 6.0 Production Rules

(p retrieve-first-number
   =goal>
      isa problem
      arg1 =one
      state nil
==>
   =goal>
      state encoding-one
   +retrieval>
      isa number
      name =one
)

(p encode-first-number
   =goal>
      isa problem
      state encoding-one
   =retrieval>
      isa number
==>
   =goal>
      state retrieve-two
      arg1 =retrieval
)
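The partial matching this task relies on can be illustrated with a toy retrieval function. The scoring below is a deliberate simplification invented for this sketch; ACT-R's real activation math involves base-level learning, noise, and spreading activation.

```python
def retrieve(request, chunks, mismatch_penalty=1.0):
    """Toy partial matching: each chunk starts at its base score and
    pays a penalty per slot that differs from the request; the best
    scoring chunk wins (not ACT-R's actual activation equations)."""
    def score(chunk):
        s = chunk.get("_base", 0.0)
        for slot, value in request.items():
            if chunk.get(slot) != value:
                s -= mismatch_penalty
        return s
    return max(chunks, key=score)

# A few addition facts as chunks (slot names are illustrative):
facts = [
    {"arg1": 1, "arg2": 3, "sum": 4, "_base": 0.0},
    {"arg1": 2, "arg2": 2, "sum": 4, "_base": 0.0},
    {"arg1": 1, "arg2": 2, "sum": 3, "_base": 0.5},
]
best = retrieve({"arg1": 1, "arg2": 3}, facts)
```

An exact match wins outright here; with no exact match, a similar chunk is retrieved instead, which is how partial matching lets a model produce the near-miss answers four-year-olds give.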
Some Relevant ACT-R Models • Best, B., Lebiere, C., & Scarpinatto, C. (2002). A model of synthetic opponents in MOUT training simulations using the ACT-R cognitive architecture. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL. • Craig, K., Doyal, J., Brett, B., Lebiere, C., Biefeld, E., & Martin, E. (2002). Development of a hybrid model of tactical fighter pilot behavior using IMPRINT task network model and ACT-R. In Proceedings of the Eleventh Conference on Computer Generated Forces and Behavior Representation. Orlando, FL
Soar Architecture • Problem Space Based
Some Soar Features • Problem space based • Attribute/value hierarchy (WM) forms the current state • Productions (LTM) transform the current state to achieve goals by applying operators • Cycle • Input • Elaborations fired • All possible operators proposed • One selected • Operator applied • Output • Impasses & Learning
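The propose/select/apply portion of the cycle can be sketched in a few lines. This is a toy analogue only: real Soar also runs elaboration waves, detects impasses, and learns, and the state encoding here is invented for illustration.

```python
def decision_cycle(state, propose, prefer, apply):
    """Toy Soar-style cycle: propose all candidate operators,
    select one by preference, apply it."""
    candidates = propose(state)
    if not candidates:
        return state, None  # in Soar this would raise an impasse
    chosen = max(candidates, key=prefer)
    return apply(state, chosen), chosen

# Wander behavior loosely analogous to the tank productions:
def propose(s):
    return ["move"] if not s["blocked_forward"] else ["turn"]

state, op = decision_cycle(
    {"blocked_forward": True},
    propose,
    prefer=lambda o: {"move": 1, "turn": 0}[o],
    apply=lambda s, o: {**s, "last_action": o},
)
```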
Task Description • Control the behavior of a Tank on the game board. • Each tank has a number of sensors (e.g. radar) to find enemies, missiles to launch at enemies, and limited resources • From the Soar Tutorial
Propose Moves

sp {propose*move
   (state <s> ^name wander
      ^io.input-link.blocked.forward no)
-->
   (<s> ^operator <o> +)
   (<o> ^name move
      ^actions.move.direction forward)}

sp {propose*turn
   (state <s> ^name wander
      ^io.input-link.blocked <b>)
   (<b> ^forward yes
      ^ { << left right >> <direction> } no)
-->
   (<s> ^operator <o> + =)
   (<o> ^name turn
      ^actions <a>)
   (<a> ^rotate.direction <direction>
      ^radar.switch on
      ^radar-power.setting 13)}

sp {propose*turn*backward
   (state <s> ^name wander
      ^io.input-link.blocked <b>)
   (<b> ^forward yes ^left yes ^right yes)
-->
   (<s> ^operator <o> +)
   (<o> ^name turn
      ^actions.rotate.direction left)}
Prefer Moves

sp {select*radar-off*move
   (state <s> ^name wander
      ^operator <o1> +
      ^operator <o2> +)
   (<o1> ^name radar-off)
   (<o2> ^name << turn move >>)
-->
   (<s> ^operator <o1> > <o2>)}
Apply Move

sp {apply*move
   (state <s> ^operator <o>
      ^io.output-link <out>)
   (<o> ^direction <direction>
      ^name move)
-->
   (<out> ^move.direction <direction>)}
Elaborations

sp {elaborate*state*missiles*low
   (state <s> ^name tanksoar
      ^io.input-link.missiles 0)
-->
   (<s> ^missiles-energy low)}

sp {elaborate*state*energy*low
   (state <s> ^name tanksoar
      ^io.input-link.energy <= 200)
-->
   (<s> ^missiles-energy low)}
Some Relevant Soar Models • Wray, R.E., Laird, J.E., Nuxoll, A., Stokes, D., Kerfoot, A. (2005). Synthetic adversaries for urban combat training. AI Magazine, 26(3):82-92. • Jones, R. M., Laird, J. E., Nielsen, P. E., Coulter, K. J., Kenny, P., & Koss, F. V. (1999). Automated intelligent pilots for combat flight simulation. AI Magazine, 20(1), 27-41.
Strengths / Weaknesses of Cognitive Architectures • Strengths • Support aspects of intelligent behavior, such as learning, memory, and problem solving, that other types of architectures do not • Can accurately model human behavior, especially human-computer interaction, at small grain sizes (measured in ms) • Weaknesses • Complicated sets of production rules can be difficult to author, modify, and debug • mitigations: high-level modeling languages (e.g. CogTool, Herbal, High Level Symbolic Representation language) and automated model generation (e.g. Konik & Laird, 2006) • Computational issues when scaling to large numbers of entities
Behavioral Approaches • Focus is on externally-observable behavior • no explicit modeling of knowledge/cognition • instead, behavior is explicitly specified: “Go to destination X, then attack enemy.” • Often a natural mapping from doctrine to behavior specifications
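A specification like "Go to destination X, then attack enemy" maps naturally onto a finite-state machine, one common behavioral representation. A minimal sketch, with states and events invented for illustration:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("moving", "arrived"): "attacking",
    ("attacking", "enemy_destroyed"): "idle",
}

def step(state, event):
    # Stay in the current state unless an explicit transition matches.
    return TRANSITIONS.get((state, event), state)

state = "moving"
state = step(state, "arrived")
state = step(state, "enemy_destroyed")
```

The transition table reads almost directly off a doctrinal statement of the behavior, which is the "natural mapping" the slide refers to.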
Hard-coding Behaviors • Simplest approach is to write behavior directly in C++/Java: MoveTo(location_X); AcquireTarget(target); FireAt(target); • Don’t do this! • Can only be modified by programmers • Hard to update and extend • Behavior models not easily portable
Scripting Behaviors • Write behaviors in a scripting language (e.g. UnrealScript) • Avoids many problems of hard-coding • not tightly coupled to simulation code • more portable • often simplified to be easier to learn & use • Fine for linear sequences of actions, but scripts do not scale well to complex behavior
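A linear behavior script can be mimicked with a generator that yields one action per step (sketched in Python rather than an actual scripting language; the action names echo the hard-coded example above). Its weakness is visible in the structure: the sequence is fixed, so reacting to events mid-sequence requires bolting on extra control logic.

```python
def behavior_script(sim):
    """A linear behavior script (illustrative): fine for a fixed
    sequence, but it cannot react partway through on its own."""
    yield ("move_to", "location_X")
    yield ("acquire_target", "enemy")
    yield ("fire_at", "enemy")

# The entity's executor would pull one action per simulation tick;
# here we just collect the whole sequence (sim argument unused).
actions = list(behavior_script(None))
```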