How to assess deception in a complex environment



  1. How to assess deception in a complex environment Paul R. Syms, Dstl LBSD Fort Halstead 26 ISMOR, Bishop’s Waltham, 1–4 September 2009 Dstl/CP37862

  2. Contents • Introduction: why try to model deception? • Examples of deception from history • Psychology of deception • Assessing deception in OA models • modelling the intelligence cycle in an imperfect world • Applying cognitive modelling and historical analysis • Summary and conclusions • References and questions

  3. Why try to model deception? • Because it happens! • it is one of the best ways of achieving surprise in war … • surprise is a driver of success in battle (Rotte & Schmidt, 2003) • To promote awareness, to avoid falling victim to it • To use deception to achieve military goals • including own force protection • To assess the value of investing in CCD/D&D • training and CCD equipments

  4. Examples of deception from history • 1250 BC: The Trojan Horse • played to superstitions, and poor Trojan Opsec • included a feigned withdrawal to complete the scenario • 1781: Yorktown … or was it to be New York? • dummy American positions supported by unexpected manoeuvre • also Megiddo (1918), Alamein (1942), D-Day (1944), Iraq (1991) • each played to the enemy’s expectations • 1999: Serbian CCD blunted NATO airpower in Kosovo • NATO killed 28 AFVs, 17 guns, in a campaign of 38,000 sorties

  5. Where does deception occur? • At the sensor/platform level? • At the command level? • Or in the mind of the decision-maker? • at any or all command level(s) • concluded by Whaley (1982); accepted by all authorities • Conclusion: we must model the decision-maker (DM) • and all relevant inputs to their decision, e.g. sensors • with appropriate detail in their environment

  6. Defining deception – Whaley (1982) • A typology of perception: • perception divides into pluperception (accurately seen) and misperception (wrongly seen) • misperception is either other-induced or self-induced • other-induced: deception (intentional) or misrepresentation (unintentional) • self-induced: self-deception (delusion) or illusion (cannot see)

  7. Whaley’s structure of deception • Two strategies for deception: • dissimulation: hiding the real, and simulation: showing the unreal • each by 3 means: hiding, disguising and distracting • resulting in six basic types of deception • May be neat and correct … but is it useful for modelling? • only if it leads to patterns that can be exploited • e.g. generalizing P(success) over one class of deception • or patterns regarding the sustainability of the deception

  8. Psychology of deception • Lots of work on this … by psychologists • studies are largely qualitative, and outside the military context • ‘gullibility’ has not been quantified by established psychometrics • Deception plays on target expectations • reinforced by confirmatory bias, i.e. not seeking negative data • Deceptions can exploit cultural features of the target … • discussed by Sheppard et al. (2000); Fidock (2002) • but can also fail because of these (if C2 ignores ISTAR input)

  9. Why current tools are inadequate • Current models centre on ISTAR or the attrition battle • link from deception through C3 to outcome is weak • Deception is poorly understood • Current models typically evaluate P(success) • whereas deception largely arises from C3ISTAR failures • Deception exploits environmental clutter • most models ‘subsume’ clutter into their data (or ignore it) • Data on deception are scarce to non-existent

  10. Key modelling issues • Camouflage is poorly modelled in OA studies • older STA models ignored shadow, camouflage and clutter • field trials did not support OA needs • Concealment is poorly modelled in OA studies • most terrain and LoS models are inadequate • high-level OA models don’t account for enemy use of terrain • Deception is inherently hard to model • complex, high cost and risk to include it in OA studies • lack of understanding, lack of data
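As an aside on why simple LoS treatments fall short, the sketch below shows the kind of bare straight-line height test many models reduce LoS to. The one-ridge terrain function is invented for illustration, and note what the test omits: shadow, camouflage and clutter, the very factors the slide flags as missing.

    # Minimal straight-line LoS test over a terrain height function.
    # Illustrative only: ignores vegetation, shadow, camouflage and clutter.
    def line_of_sight(height, a, b, eye=2.0, samples=50):
        (ax, ay), (bx, by) = a, b
        h0 = height(ax, ay) + eye              # observer eye height
        h1 = height(bx, by) + eye              # target height
        for i in range(1, samples):
            t = i / samples
            x, y = ax + t * (bx - ax), ay + t * (by - ay)
            sight_line = h0 + t * (h1 - h0)    # sight-line height at this sample
            if height(x, y) > sight_line:
                return False                   # terrain intervenes
        return True

    ridge = lambda x, y: 40.0 if 40 < x < 60 else 10.0   # toy terrain: one ridge
    print(line_of_sight(ridge, (0, 0), (100, 0)))        # False: ridge blocks the view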

  11. Must focus on the intelligence cycle [Figure: the intelligence cycle – Command issues RFIs to Direction, which sets IRs and PIRs and cues Collection; Collection (sensor capabilities, target signatures) yields Data; Analysis (analyst training) turns Data into Information; Dissemination returns Intelligence to Command]

  12. Intelligence cycle: real world – complexity warning! [Figure: the same cycle overlaid with its real-world vulnerabilities – the ‘long screwdriver’ of command interference, confirmatory bias, disinformation and decoys, ECM against sensor capabilities, imperfect cueing, clutter, information overload, culture, expectations, time pressures, and personality and ‘Yes, Sir!’ vulnerabilities in analysis]
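To make the ‘real world’ point concrete: a minimal sketch (Python; not Dstl’s model, and all stage losses, bias mechanisms and scenario strings are illustrative assumptions) of the intelligence cycle as a pipeline in which each stage degrades the picture before it reaches the decision-maker.

    # Toy intelligence-cycle pipeline: Collection -> Analysis -> Dissemination.
    # Each stage loses or distorts part of the ground truth.
    import random

    def collection(ground_truth, p_detect=0.7):
        # Collection: each true signature is detected only with some probability.
        return [sig for sig in ground_truth if random.random() < p_detect]

    def analysis(data, p_misread=0.2, expected="attack on the left flank"):
        # Analysis with confirmatory bias: ambiguous data is read as the
        # expected picture rather than queried.
        return [expected if random.random() < p_misread else sig for sig in data]

    def dissemination(information, p_lost=0.1):
        # Dissemination: some reporting never reaches the DM in time.
        return [item for item in information if random.random() >= p_lost]

    random.seed(1)
    truth = ["armour massing on the right flank",
             "decoy tanks on the left flank",
             "bridging equipment moving forward"]
    picture = dissemination(analysis(collection(truth)))
    print(picture)   # what the DM sees: already a step removed from ground truth

Even this caricature shows why models that feed decisions directly from ground truth cannot represent deception: the deceiver’s opportunities live entirely in the gaps the pipeline introduces.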

  13. A role for cognitive modelling? • Currently aims to analyse a single decision-maker (DM) • Could be extended to DM groups • more complexity: a DM group is not the sum of its participants • cultural effects such as ‘group-think’ • Bayesian belief networks (e.g. Stech & Elsaesser, 2007) • “what is the probability of A being true, given B?” • widely used for automated diagnostics, medical etc. • Dempster-Shafer inference (e.g. Dean et al., 2005) • DMs’ beliefs represented as changing ‘confidence masses’
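To illustrate the two inference schemes named above, a hedged sketch on a toy two-hypothesis frame {decoy, genuine}; every prior, likelihood and mass below is an invented number, not a value from Stech & Elsaesser (2007) or Dean et al. (2005).

    # Bayesian update: P(H | E) from P(H), P(E | H) and P(E | not H).
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
        return p_e_given_h * prior / evidence

    # "What is the probability the sighting is a decoy, given a weak thermal return?"
    print(bayes_update(prior=0.2, p_e_given_h=0.8, p_e_given_not_h=0.1))   # ~0.67

    # Dempster-Shafer: combine two mass functions over frozenset hypotheses.
    def dempster_combine(m1, m2):
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb          # mass assigned to contradictions
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    D, G = frozenset({"decoy"}), frozenset({"genuine"})
    EITHER = D | G                               # the whole frame: "don't know"
    sensor  = {D: 0.6, EITHER: 0.4}              # leans 'decoy', 0.4 uncommitted
    analyst = {G: 0.3, EITHER: 0.7}              # weakly says 'genuine'
    print(dempster_combine(sensor, analyst))     # decoy ~0.51, genuine ~0.15, rest uncommitted

One design point the sketch brings out: Dempster-Shafer keeps an explicit uncommitted mass on the whole frame, a natural home for a DM’s ‘don’t know’ that a single Bayesian point probability cannot carry.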

  14. Applying cognitive modelling • CCD/D&D is a more complex question than CID • must represent competing Blue and Red C3 processes • decisions are between the main options for each side • e.g. attack left flank, centre or right flank • Blue choosing convoy routes, Red choosing ambushes • Must include other elements of the appreciation • feasibility of routes, terrain, etc. • rapidly leads to a ‘combinatorial explosion’ of options • Still needs link from C3 model to battle outcome
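To put a number on the ‘combinatorial explosion’, a toy count (all option lists invented for illustration) of how the joint Blue/Red decision space multiplies as elements of the appreciation are added.

    # Joint Blue/Red option space grows multiplicatively with each factor.
    from itertools import product

    blue_axes    = ["attack left flank", "attack centre", "attack right flank"]
    red_ambushes = ["ridge", "village", "defile", "bridge"]
    routes_per_axis, weather_states, time_windows = 5, 3, 4

    pairs = list(product(blue_axes, red_ambushes))
    print(len(pairs))    # 12 competing Blue/Red decision pairs

    # each extra element of the appreciation multiplies the space:
    print(len(pairs) * routes_per_axis * weather_states * time_windows)   # 720

And each of those cases still needs the missing link from the C3 model to a battle outcome, which is why the cost and risk rise so quickly.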

  15. A role for historical analysis? • Appropriate to study the human factors in deception • Two possible applications of HA: • 1. A catalogue of deceptions • to correlate factors and circumstances with P(success) • requires ‘slow time’ collection – harder to search deliberately • 2. Assess value of deception in historical battles • probit analysis, as used by Rotte & Schmidt (2003) • Linked to Rowland’s surprise? US Helmbold HA database?
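A sketch of application 2, in the spirit of the probit analysis used by Rotte & Schmidt (2003): regress a binary battle outcome on a deception indicator plus a control. The data below are synthetic, generated purely to show the mechanics; a real HA study would substitute a battle database such as Helmbold’s.

    # Probit regression of victory on deception, on synthetic data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    deception   = rng.integers(0, 2, n)          # 1 = attacker used deception
    force_ratio = rng.normal(1.0, 0.3, n)        # attacker:defender strength
    latent  = 0.8 * deception + 1.2 * (force_ratio - 1.0) + rng.normal(0.0, 1.0, n)
    victory = (latent > 0).astype(int)           # 1 = attacker won

    X = sm.add_constant(np.column_stack([deception, force_ratio]))
    result = sm.Probit(victory, X).fit(disp=False)
    print(result.summary())   # the coefficient on 'deception' estimates its value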

  16. Summary • CCD/D&D can be a battle-winning phenomenon • deception happens in the mind of the deceived • therefore understanding its psychology is key • Requires the most complex form of C3 model • incorporating human fallibility, imperfect comms within an HQ ... • current models take too much information from ‘ground truth’ • First step must be to model the decision-maker • the technique of cognitive modelling shows promise • HA may improve understanding and data provision

  17. Sources and references
DEAN D.F., HYND K.S., MISTRY B., VINCENT A. & SYMS P.R. (2005) ‘A new technique to address CID and IFF studies’ Paper presented to 22nd International Symposium on Military Operational Research, Bishop’s Waltham, Hants., 30 August – 2 September 2005; DSTL/CP16723
FIDOCK J.J.T. (2002) ‘Building representations of organizational behaviour in models of military headquarters’ DSTL/CR03842/1.0
HATHERLEY A.G.C. (2000) ‘Why were NATO air operations so ineffective against Serbian land forces in Kosovo, and what lessons can be learned from this?’ Cranfield University/RMCS Shrivenham 6 Military Studies MA Course dissertation R/00/1112
ROTTE R. & SCHMIDT C.M. (2003) ‘On the production of victory: empirical determinants of battlefield success in modern war’ Defence and Peace Economics 14(3): 175–192
SHEPPARD C., MATHIESON G.L. & CORRIE N. (2000) ‘Human Decision-Making in OA: Knowledge Requirements and Guideline Structure’ DERA/CDA/SA/CR000070/1.0
SYMS P.R. (2005) ‘Quantifying CCD effectiveness: measuring the unmeasurable?’ J. Defence Science 10(1): 18–26; DSTL/JA13058
WHALEY B. (1982) ‘Toward a General Theory of Deception’ J. Strategic Studies 5(1): 178–192

  18. Questions?
