Topic 12: Level 3
David L. Hall
Topic Objectives
• Introduce the JDL Level 3 process
• Describe modeling, prediction, and analysis techniques for Level 3
• Identify limitations and issues for Level 3 processing
Comments on this lecture
• The lectures on Level 2 and Level 3 could easily be merged (although the result would be a very long lecture), since the methods described for Level 3 are a continuation of the automated reasoning methods introduced in the Level 2 lecture
• The examples presented have a DoD/military flavor, since an enormous amount of research has been funded in this area, with extensive developments
• These examples and methods are easily extended to non-military applications such as environmental monitoring, public health, disaster relief, and other areas
[Figure: Situation and Threat Assessment Functions. Level 1 data fusion products feed Level 2/3 data fusion products, which (1) construct representations of the data and (2) interpret and express the environment (objects, groups, events, activities) and predict future courses of action from three perspectives (White, Blue, Red). Multiple possible explanations of the situation and threat form situation abstractions, supported by a priori intelligence preparation, threat (risk) analysis, and technical/doctrinal databases; the abstractions support command and control decision-making, including planning and the expected outcomes of Blue courses of action (COAs).]
Level Three Processing: Threat Refinement
• Estimate/aggregate force capabilities (Red/Blue)
• Predict enemy intent
• Identify threat opportunities
• Multi-perspective assessment
  • Offensive/defensive
• Estimate implications
  • Force vulnerabilities
  • Timing of critical events
  • Threat system priorities
  • Friendly system opportunities
[Figure: Commander's Decision-Making Methodology. Beginning from the senior commander's concept, the analysis of mission yields particular missions, a mission tree (objectives to be hit), and intermediate missions. The analysis of situation considers the situation elements: enemy, own troops, own troops' mission, adjacent units, terrain, hydrometeorological conditions and time of year, the radiation situation, and the economic condition of the combat operations area and sociopolitical makeup of the population. Decision element options include the concept of combat operations, tactical missions of sub-units of troop branches, the troop coordination procedure, measures for political work, and combat operations support and organization of command and control, leading to selection and formulation of the best decision option.]
[Figure: Notional Enemy Course of Action Display, showing terrain features (mountains, roads, bridges, rivers, a lake, forest, marsh) annotated with extended and large barriers. From Antony, R., Principles of Data Fusion Automation, Artech House, Inc., Norwood, MA, 1995, p. 92.]
Richard Antony discusses the "Intelligence Preparation of the Battlefield" concept and related situation and decision-support displays; such concepts are used in a wide variety of areas, including business continuity planning, preparation for disasters and disaster relief, and many other large-scale operations.
[Figure: Intelligence Preparation of the Battlefield Process. Weather analysis, terrain analysis, intelligence data, identified enemy holdings, and enemy doctrine feed threat evaluation and threat integration, producing the perceived enemy situation, likely enemy actions, and areas of interest; these inform commander decisions and collection management, and expected enemy reactions to decisions feed back into the process.]
Assessing the Threat/Consequences
General notion of the threat model:
• White view of the battlefield environment
  • Effects of the environment: weather, terrain, political treaties, communication nets
• Blue view of the friendly force missions
  • Operational art: objectives, offensive, economy of force, maneuver, unity of command, security, surprise, simplicity
• Red view of the enemy war plan
  • Enemy battle plans: why, when, where, force structure, objectives, timetable, options, tactics, doctrine
Note: the concept of "shifting" perspectives is a valuable tool in addressing nearly any situation and its consequences: what am I planning to do or want to happen (Blue view); how might others react to my plans and activities (Red view); and how might the environment affect both me and others (White view)?
Knowledge Representation
• Physical and mathematical models
  • Equations
  • Neural nets
• Language constructs
  • Ontology/taxonomy
  • Logical constructs (e.g., predicate logic)
• Examples, stories, and cases
• Analogical models
  • Graphs
  • Trees
  • Special notations (chemical symbols, musical notes)
  • Diagrams
  • Cognitive maps
• Etc.
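To make the frame and taxonomy ideas concrete, here is a minimal Python sketch. It is illustrative only; the Concept class and the vehicle/tank examples are hypothetical, not from the lecture.

```python
# A minimal sketch of two common knowledge representations:
# a taxonomy (is-a hierarchy) and frames (slot/value structures).
# All names here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node in an is-a taxonomy, with frame-style slots."""
    name: str
    parent: "Concept | None" = None
    slots: dict = field(default_factory=dict)

    def isa(self, ancestor: "Concept") -> bool:
        """True if this concept is (a descendant of) ancestor."""
        node = self
        while node is not None:
            if node is ancestor:
                return True
            node = node.parent
        return False

    def get(self, slot: str):
        """Slot lookup with inheritance: fall back to parent frames."""
        node = self
        while node is not None:
            if slot in node.slots:
                return node.slots[slot]
            node = node.parent
        return None

vehicle = Concept("vehicle", slots={"mobile": True})
tank = Concept("tank", parent=vehicle, slots={"armored": True})

print(tank.isa(vehicle))   # True
print(tank.get("mobile"))  # True, inherited from vehicle
```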
Major Reasoning Approaches
Knowledge representation:
• Rules
• Frames
• Scripts
• Semantic nets
• Parametric templates
• Analogical methods
Uncertainty representation:
• Confidence factors
• Probability
• Dempster-Shafer evidential intervals
• Fuzzy membership functions
• Etc.
Reasoning methods and architectures:
• Implicit methods
  • Neural nets
  • Cluster algorithms
• Templating methods
  • Pattern templates
  • Case-based reasoning
• Process reasoning
  • Script interpreters
  • Plan-based reasoning
• Deductive methods
  • Decision trees
  • Bayesian belief nets
  • D-S belief nets
• Hybrid architectures
  • Agent-based methods
  • Blackboard systems
  • Hybrid symbolic/numerical systems
Pattern Templates
• Logical templating methods
• Case-based reasoning
Logical Templates
[Figure: a test paper and an answer sheet; student answers are matched against the correct answer sheet to see how many the student got correct.]
• Based on a concept similar to grading tests or papers using a template (score sheet) to quickly determine the number of correct answers
• Logical templates can be created including parametric relations, causal factors, sub-entities, etc. to characterize a complex entity, activity, or event
• Logical template methods are an extension of decision trees and pattern recognition
Template Processing Flow
[Flowchart] Processing proceeds roughly as follows:
1. START: receive triggering information
2. While more templates remain, receive a candidate template
3. Receive related events from the database
4. Perform logic checks; if the necessary test fails, return to step 2
5. If the sufficiency test passes, make an identification declaration
6. Otherwise compute the measure of correlation (MOC): if MOC > TA, make an identification declaration; if MOC > TR (but not above TA), make an ambiguity declaration; otherwise reject and consider the next template
7. Ending processes; STOP when no templates remain
Notes:
• TA = acceptance threshold
• TR = rejection threshold
• MOC = measure of correlation
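The flowchart logic can be sketched in a few lines of Python. This is an illustrative reconstruction under the reading given above, not code from the lecture; the template fields, the MOC computation, and the threshold values are all assumptions.

```python
# Illustrative sketch of the template processing flow.
# Template structure, the MOC computation, and the thresholds
# TA (acceptance) and TR (rejection) are hypothetical.

TA = 0.8  # acceptance threshold
TR = 0.4  # rejection threshold

def moc(template: dict, events: list) -> float:
    """Measure of correlation: fraction of template conditions
    satisfied by the observed events (a simple hypothetical metric)."""
    conditions = template["conditions"]
    satisfied = sum(1 for cond in conditions if cond(events))
    return satisfied / len(conditions)

def process(trigger, templates, database):
    declarations = []
    for template in templates:                 # "more templates?"
        events = database.get(template["name"], [])
        if not template["necessary"](trigger, events):
            continue                           # fails necessary test
        if template["sufficient"](trigger, events):
            declarations.append(("identification", template["name"]))
            continue                           # sufficiency test passed
        score = moc(template, events)
        if score > TA:
            declarations.append(("identification", template["name"]))
        elif score > TR:
            declarations.append(("ambiguity", template["name"]))
        # else: reject this template
    return declarations

# Hypothetical usage with one toy template
templates = [{
    "name": "convoy",
    "conditions": [lambda ev: "vehicles>3" in ev, lambda ev: "road" in ev],
    "necessary": lambda trig, ev: bool(ev),
    "sufficient": lambda trig, ev: False,
}]
db = {"convoy": ["vehicles>3", "road"]}
print(process("sensor cue", templates, db))  # [('identification', 'convoy')]
```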
Speeding Ticket Threat Template (Example)
• Threat elements
  • Moving violations: speeding ticket, DUI, reckless driving, failure to stop at a stop sign or stoplight
  • Non-moving violations: illegal parking, failure to have vehicle inspected
  • Other
• Blue conditions: own car speed, condition of driver, appearance of driver, gender of driver, color of vehicle, etc.
• White conditions: speed limit, visibility, posted speed limit, location with respect to known speed traps, etc.
• Red conditions: RWR indicator, visible enemy, COMINT externals, COMINT internals, etc.
• Logical relations
  • If RWR and own car speed > (1.2 × speed limit), declare threat
  • Etc.
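As a sketch, the template's logical relation might be encoded as follows. The 1.2 × speed limit rule comes from the slide; the field names and sample values are hypothetical.

```python
# Hypothetical encoding of the speeding-ticket threat template.

def speeding_threat(blue: dict, white: dict, red: dict) -> bool:
    """Logical relation from the template: threat if a radar warning
    receiver (RWR) indication is present and own speed exceeds
    1.2 times the posted speed limit."""
    return red["rwr"] and blue["own_speed"] > 1.2 * white["speed_limit"]

blue = {"own_speed": 80}      # Blue conditions: own state
white = {"speed_limit": 55}   # White conditions: environment
red = {"rwr": True}           # Red conditions: threat indicators

print(speeding_threat(blue, white, red))  # True: 80 > 1.2 * 55 = 66
```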
What is Case-Based Reasoning?
• Cases are descriptions of situations and the actions taken to respond to them
• Case-based reasoning is an approach to building knowledge systems that:
  • Bases reasoning on retrieval of cases that are similar to the current situation
  • Supports learning from experience
Reference: J. Dannenhoffer, Case-Based Reasoning, presented to the AIAA AI Technical Committee, January 1992.
The Case-Based Reasoning Process
1. Accept new case
2. Retrieve relevant cases
3. Select most relevant case(s)
4. Construct solution or interpretation of new case
5. Validate solution/interpretation
6. Update memory with new case
Reference: J. Dannenhoffer, Case-Based Reasoning, presented to the AIAA AI Technical Committee, January 1992.
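A minimal sketch of this cycle, assuming a nearest-neighbor similarity over numeric feature vectors; the case library, feature encoding, and metric are hypothetical, and real CBR systems use much richer case representations and adaptation steps.

```python
# Minimal sketch of the case-based reasoning cycle described above.
import math

case_library = [
    {"features": [1.0, 0.0, 0.5], "action": "monitor"},
    {"features": [0.9, 0.8, 0.1], "action": "intercept"},
]

def similarity(a, b):
    """Inverse-distance similarity between two feature vectors."""
    return 1.0 / (1.0 + math.dist(a, b))

def solve(new_case_features):
    # Steps 2-3: retrieve and select the most relevant stored case
    best = max(case_library,
               key=lambda c: similarity(c["features"], new_case_features))
    # Step 4: construct a solution by reusing the retrieved case's action
    solution = best["action"]
    # Step 5: validation would involve a user or simulation (omitted here)
    # Step 6: update memory with the new case and its adopted solution
    case_library.append({"features": new_case_features, "action": solution})
    return solution

print(solve([0.95, 0.7, 0.2]))  # "intercept" (closest stored case)
```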
Process Reasoning
• Script interpreters
• Rule-based systems (expert systems)
• Plan-based reasoning
General Characteristics of Rule-Based Systems (Process Reasoning)
• Application domain: specific, fairly narrow real-world problems (poor/missing data)
• Approach: heuristic, rule-based search strategies in general, plus facts and computation methods; knowledge engineering/knowledge representation; control is data-driven or goal-directed; software in LISP, PROLOG, or another script-like language, with a development support system
• Development: incremental, evolutionary development process
• Evaluation: no absolutes; experts are the evaluators
In the early heyday of AI research (the 1980s and early 1990s) these types of reasoning systems were termed "expert" systems
[Figure: Basic Structure of an Expert System. The user interacts through a man-machine interface with a control structure (rule interpreter/inference engine), which draws on a knowledge base (heuristics, facts, algorithms) and a global database holding dynamic system status; system input feeds the global database.]
Conceptual Inference Cycle
[Flowchart] Using the dynamic data and the knowledge base (KB):
1. Search the KB for applicable rules; if none, quit
2. Select a rule
3. Fire/execute the rule (update dynamic data, execute a sub-routine, request input data, etc.)
4. If not done, repeat from step 1; otherwise quit
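A toy forward-chaining interpreter illustrating this cycle; the rules, facts, and the select-the-first-match conflict-resolution strategy are hypothetical.

```python
# Illustrative forward-chaining inference cycle (not from the lecture).

rules = [
    {"if": {"rwr_alert", "high_speed"}, "then": "threat"},
    {"if": {"threat"}, "then": "evasive_action"},
]

def inference_cycle(facts: set) -> set:
    """Repeatedly search the knowledge base, select a rule whose
    conditions are satisfied, and fire it until no rule applies."""
    facts = set(facts)
    while True:
        # 1. Search the KB for an applicable, not-yet-fired rule
        applicable = [r for r in rules
                      if r["if"] <= facts and r["then"] not in facts]
        if not applicable:
            return facts          # no applicable rules: quit
        # 2. Select a rule (here: simply the first match)
        rule = applicable[0]
        # 3. Fire/execute the rule: update the dynamic data
        facts.add(rule["then"])

print(inference_cycle({"rwr_alert", "high_speed"}))
# {'rwr_alert', 'high_speed', 'threat', 'evasive_action'}
```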
[Figure: Knowledge Engineering. A military/domain expert contributes military organization/protocols, rules of engagement, military doctrine, and military equipment characteristics (weapons, electronics, communications). A software engineer contributes the development support system, soft programming, software architecture, the computer environment, and numerical techniques. Together they build the knowledge base: scenario, rule base, tree constructs, database design, and facts/algorithms.]
Planning/Goal Decomposition
[Figure: two alternative plan hierarchies, Plan A and Plan B.]
• Planning provides another effective way to represent knowledge, including timelines, roles and responsibilities, hierarchies of plans, causality, etc.
• Planning analogies have been used effectively for automated reasoning, including course-of-action analysis tools, impact analysis, decision trees, hypothesis evaluation, gaming methods, and more recently team-based intelligent agents
Goal Decomposition
[Figure: goal tree. The top-level goal ACCOMPLISH MISSION decomposes into ATTACK TARGET and SURVIVE THREAT. The attack branch includes: detect candidate target; evaluate target (estimate range/bearing, specify target, target of opportunity); determine attack tactic; select attack profile; acquire target; select weapon; revise plan. The survive branch includes: determine threat tactic; identify threat; monitor threat (infer status/intention); avoid threat; suppress threat; revise plan.]
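The goal tree can be represented as a simple dictionary. The node names follow the slide, but the connectivity below is an assumption about the figure; the traversal shows how a mission goal decomposes into primitive tasks.

```python
# Illustrative goal-decomposition structure and traversal.

goal_tree = {
    "accomplish mission": ["attack target", "survive threat"],
    "attack target": ["detect candidate target", "evaluate target",
                      "determine attack tactic", "select attack profile",
                      "acquire target", "select weapon", "revise plan"],
    "survive threat": ["determine threat tactic", "identify threat",
                       "monitor threat", "avoid threat",
                       "suppress threat", "revise plan"],
}

def leaf_tasks(goal: str) -> list:
    """Depth-first decomposition of a goal into primitive tasks."""
    subgoals = goal_tree.get(goal)
    if not subgoals:
        return [goal]              # primitive task: no further decomposition
    tasks = []
    for sub in subgoals:
        tasks.extend(leaf_tasks(sub))
    return tasks

print(leaf_tasks("accomplish mission"))
```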
[Figure: Concept of Goal/Plan Hierarchy. Military goals (e.g., defend against target, destroy target, defend mission, given known target locations) map into a multi-target mission template library (monitor mission, attack mission), which in turn maps into a single-agent plan library: feint, blockage, reconnaissance, countermeasures, coordination, strike, surveillance, tank operations, damage assessment.]
Deductive Methods
• Decision trees
• Bayesian belief nets
  • Also known as belief networks, Bayesian networks, causality nets, etc.
• Dempster-Shafer belief nets
Bayesian Belief Nets
• Representation of relationships or causality via Bayesian probability (popularized by Judea Pearl in 1988)
• Knowledge contained in a directed acyclic graph
  • Nodes represent variables
  • Links express (parent/child) relationships (e.g., causal relationships)
  • Each node has a conditional probability relation specified
• Knowledge propagation via the Bayesian chain rule
• The network as a whole represents the joint probability distribution
• Note: a Markov chain can be viewed as a simple special case of a Bayesian network
See for example the tutorial at: http://www.cs.ubc.ca/~murphyk/Bayes/bayes.html
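The chain rule referred to above can be written explicitly: for variables x_1, ..., x_n with parent sets Pa(x_i) in the directed acyclic graph, the network encodes

```latex
% Joint distribution factorization for a Bayesian network
P(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{Pa}(x_i)\bigr)
```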
Bayesian Belief Nets
[Figure: example of a directed acyclic graph over five nodes A, B, C, D, E.]
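To illustrate, here is a tiny binary network with an assumed structure A → B, A → C, B → D, C → E (the slide does not specify the edges). All probability values are hypothetical, and inference is done by brute-force enumeration over the joint distribution; practical systems use message passing instead.

```python
# Sketch of a tiny Bayesian network over binary variables A..E.
from itertools import product

p_a = {True: 0.3, False: 0.7}                       # P(A)
p_b = {True: {True: 0.8, False: 0.2},               # P(B | A)
       False: {True: 0.1, False: 0.9}}
p_c = {True: {True: 0.5, False: 0.5},               # P(C | A)
       False: {True: 0.2, False: 0.8}}
p_d = {True: {True: 0.9, False: 0.1},               # P(D | B)
       False: {True: 0.3, False: 0.7}}
p_e = {True: {True: 0.6, False: 0.4},               # P(E | C)
       False: {True: 0.1, False: 0.9}}

def joint(a, b, c, d, e):
    """Chain rule: P(A,B,C,D,E) = P(A) P(B|A) P(C|A) P(D|B) P(E|C)."""
    return p_a[a] * p_b[a][b] * p_c[a][c] * p_d[b][d] * p_e[c][e]

def query(target, evidence):
    """P(target=True | evidence) by brute-force enumeration."""
    num = den = 0.0
    for vals in product([True, False], repeat=5):
        world = dict(zip("ABCDE", vals))
        if any(world[k] != v for k, v in evidence.items()):
            continue
        p = joint(*vals)
        den += p
        if world[target]:
            num += p
    return num / den

print(query("A", {"D": True}))  # posterior belief in A given evidence on D
```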
Bayes Net for Target Identification
[Figure: a Bayesian network whose root distinguishes target vs. no target (land, sea, air), with nodes for target type (Tgt1, Tgt2, ..., non-target), target activity (launch, hide, reload, move), target dimension (width and length from IMINT), radar type and radar activity (PRI and frequency from ELINT), and communications equipment and activity (frequency and duration from COMINT). Example provided by KC Chang via M. Liggins.]
• Evidence can be injected into any node in the form of a likelihood function; the update propagates to the parent nodes and the children nodes, and propagation continues until all nodes have been updated
• The sum of probabilities in a set of children equals that of the parent
Hybrid Methods
• Blackboard systems
• Agent-based architectures
• Hybrid symbolic/numeric systems
[Figure: Blackboard Architecture Concept. A shared memory (the blackboard) is divided into partitions 1..N of the problem domain; each partition is served by a knowledge agent (KA) with its own knowledge base (KB). A control structure schedules the agents, and the system connects to an external interface and a human-computer interface. KA = knowledge agent; KB = knowledge base.]
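A minimal sketch of the blackboard idea, with two hypothetical knowledge agents posting to a shared memory under a simple control loop; agent behaviors and the control strategy are illustrative, not from the lecture.

```python
# Minimal blackboard sketch.

blackboard = {"tracks": ["unknown contact"], "assessments": []}

def classifier_agent(bb):
    """Knowledge agent for one partition: classify raw tracks."""
    if "unknown contact" in bb["tracks"]:
        bb["tracks"].remove("unknown contact")
        bb["tracks"].append("hostile aircraft")
        return True
    return False

def threat_agent(bb):
    """Knowledge agent for another partition: assess classified tracks."""
    if "hostile aircraft" in bb["tracks"] and not bb["assessments"]:
        bb["assessments"].append("high threat: recommend intercept")
        return True
    return False

agents = [classifier_agent, threat_agent]

# Control structure: keep invoking agents while any can contribute
progress = True
while progress:
    progress = any(agent(blackboard) for agent in agents)

print(blackboard["assessments"])  # ['high threat: recommend intercept']
```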
Summary of Agent Attributes
Bradshaw (1997) lists the following as possible agent attributes:
• Reactivity: the ability to selectively sense and act
• Situatedness: being in continuous interaction with a dynamic environment, able to perceive features of the environment important to them, and to effect changes to the environment
• Autonomy: goal-directedness, proactive and self-starting behavior
• Temporal continuity: persistence of identity and state over long periods of time
• Inferential capability: can act on abstract task specifications using prior knowledge of general goals and preferred methods to achieve flexibility; goes beyond the information given, and may have explicit models of self, user, situation, and/or other agents
• Adaptivity: being able to learn and improve with experience
• Mobility: being able to migrate in a self-directed way from one host platform to another across a network
Summary of Agent Attributes (cont.)
• Social ability: the ability to interact with other agents (and possibly humans) via some kind of agent-communication language, and perhaps cooperate with others
• Knowledge-level communication ability: the ability to communicate with persons and other agents in language more resembling human-like "speech acts" than typical symbol-level program-to-program protocols
• Collaborative behavior: can work in concert with other agents to achieve a common goal
Wooldridge and Jennings [4] add the following as possible agent attributes:
• Veracity: an agent will not knowingly communicate false information
• Benevolence: agents do not have conflicting goals, so every agent will always try to do what is asked of it
• Rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit
Intelligent Agent Automated Reasoning
Agent characteristics:
• We wish the agent to be pro-active
• The agent maintains a list of one or more goals
• A goal is a description of a desirable situation (a state of the world)
• Actions are chosen so as to achieve the goals
• Deliberative: the agent needs to reason about the actions to take to achieve its goals
• Goal achievement may involve long sequences of actions, which may require extensive search and planning
Goal-based reactive agents can be developed to emulate human-like behavior for information search and understanding
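A sketch of the deliberative loop, assuming a toy deterministic world model; the states, actions, and goal are hypothetical. The agent plans a sequence of actions (via breadth-first search) that reaches a goal state.

```python
# Sketch of a deliberative, goal-based agent (illustrative).
from collections import deque

# World model: (state, action) -> resulting state
transitions = {
    ("start", "search"): "found_source",
    ("found_source", "query"): "have_data",
    ("have_data", "fuse"): "situation_assessed",
}

def plan(state, goal):
    """Breadth-first search for a sequence of actions reaching the goal."""
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, actions = frontier.popleft()
        if current == goal:
            return actions
        for (src, act), dst in transitions.items():
            if src == current and dst not in visited:
                visited.add(dst)
                frontier.append((dst, actions + [act]))
    return None  # goal unreachable

# Deliberation: choose actions so as to achieve the goal
print(plan("start", "situation_assessed"))  # ['search', 'query', 'fuse']
```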
Example of Team-Based Intelligent Agents to Support Data Fusion
[Figure: architecture in which multiple Level 1 information fusion agents feed Level 2+ fusion and team decision processes; each agent carries a computational shared mental model (SMM) combining team knowledge (encoded in MALLET), belief responsibilities (Petri nets), and domain knowledge (JARE), supporting belief updates, responsibility selection, identification of information needs (DIARG), and acting on those needs within the team and decision contexts.]
• MALLET: a multi-agent logic language for encoding teamwork
See research by Dr. John Yen at http://agentlab.psu.edu/
Topic 12 Assignments
• Preview the on-line Topic 12 materials
• Read Wark and Lambert, chapter 11, referenced above
• Read chapter 2 in Mlodinow (2008)
Data Fusion Tip of the Week
Level 3 processing is ultimately about consequence prediction: assisting a user/analyst in determining how the current situation may evolve (i.e., alternate hypothetical futures), how those alternative futures may affect the current situation, how to identify potential decisions, and how to evaluate the consequences of alternative decisions. We need to seek a balance between providing insight for the analyst/decision-maker and avoiding "analysis paralysis".