Overview: The MARMML Reasoning System and Simulator-Based Modeling for Intelligence Analysis, Wargaming, and Training Therein
Selmer Bringsjord & Yingrui Yang (with Bram van Heuveln, Konstantine Arkoudas, Ron Sun, & Paul Bello)
Rensselaer AI & Reasoning (RAIR) Laboratory (SB Director)
Department of Cognitive Science
Department of Computer Science (SB)
Department of Decision Sciences & Engineering Systems
Rensselaer Polytechnic Institute (RPI), Troy NY 12180 USA
8.26.03
The Rensselaer AI & Reasoning Lab (The RAIR Lab)
• Intelligent Tutoring Systems (mathematical logic)
• Over $1 million internal seeding
• Intelligence Analysis
• Item generation (theorem proving-based generation)
• Synthetic characters/psychological time
The Paradox of Human vs. Machine Reasoning • On the one hand, theorem provers are getting faster, and can do some impressive things. • On the other hand, as Herb Simon, one of the grandfathers of AI, admitted before he died, machine reasoning is absolutely nowhere when stacked against first-rate human reasoning.
Response: Next-Generation AI
Including, specifically, … MARMML
Next-Generation AI includes Cognitive Science: Empirical/Experimental Study of Human Reasoning (a glimpse)
• Mental MetaLogic: A New, Unified Theory of Human and Machine Reasoning (Yang & Bringsjord; forthcoming from Lawrence Erlbaum)
• Models of Reasoning: From Symbolic, Connectionist, and Psychological Perspectives (Bringsjord, Sun, and Yang)
• Simon, Nash, Kahneman: From Economic Rationality to Ordinary Rationality (Yang & Bringsjord)
• Journal of Experimental and Theoretical AI: special issues on heterogeneous reasoning (Bringsjord & Yang, co-editors)
• Cognitive Systems Research (Ron Sun, Editor-in-Chief)
The Next-Generation AI Theory, Mental MetaLogic (MML), includes Cognitive Counterparts to Purely Formal Ones, e.g.,
• Empirical Consistency: Consider two theories of reasoning, A and B. Assume TA is a problem type supporting theory A, and TB a problem type supporting theory B. We say A and B are cognitively consistent iff (i) a new problem type TC can be generated by integrating TA and TB, and (ii) TC supports the interaction between A and B.
• Empirical Completeness: Let C be a theory integrated from A and B. C is cognitively complete iff it can predict performance on a problem type TC generated from TA and TB.
So, turn to the Human Side for six distinguishing attributes…
1 Resolution? No. Natural deduction
2 Always expressed in a formal language? No. Natural language (e.g., English)
J-L 1
Suppose that the following premise is true: If there is a king in the hand, then there is an ace in the hand, or else if there isn't a king in the hand, then there is an ace. What can you infer from this premise?
There is an ace in the hand. NO! NO!
In fact, what you can infer is that there isn't an ace in the hand!
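The trap can be checked mechanically. On the intended reading, "or else" makes the premise an exclusive disjunction of the two conditionals; a brute-force truth-table check (an illustrative sketch, not part of the original experiment) confirms that every model of the premise lacks an ace:

```python
from itertools import product

# Premise, read as an exclusive "or else": (K -> A) XOR (~K -> A),
# where K = "there is a king in the hand", A = "there is an ace".
def premise(k, a):
    return ((not k) or a) != (k or a)   # (K -> A) XOR (~K -> A)

# Enumerate all truth assignments; keep the models of the premise.
models = [(k, a) for k, a in product([True, False], repeat=2) if premise(k, a)]

# In every model of the premise, the hand contains no ace.
assert models and all(not a for _, a in models)
print(models)  # -> [(True, False), (False, False)]
```

Both surviving assignments make A false, which is exactly the slide's point: the premise entails that there isn't an ace in the hand.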
---------------- PROOF ---------------- 1 [] -Yar(x)|Terrorists(x). 2 [] -WindAccessible(x,y)| -USBase(x)| -Bioagents(z)| -Terrorists(z)|AttackPosition(y,z,x). 3 [] -CaveSystem(x,aconvoy)| -Accessible(x,aconvoylocation). 4 [] -Camp(x,aconvoy)| -Accessible(x,aconvoylocation). 5 [] -Village(x,aconvoy)| -Accessible(x,aconvoylocation). 6 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|CaveSystem($f1(x,y,u,z),y)|Village(z2,y)|Camp(z3,y)|Destination(x,y). 7 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|CaveSystem($f1(x,y,u,z),y)|Village(z2,y)|Accessible(z3,u)|Destination(x,y). 8 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|CaveSystem($f1(x,y,u,z),y)|Accessible(z2,u)|Camp(z3,y)|Destination(x,y). 9 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|CaveSystem($f1(x,y,u,z),y)|Accessible(z2,u)|Accessible(z3,u)|Destination(x,y). 10 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|Accessible($f1(x,y,u,z),u)|Village(z2,y)|Camp(z3,y)|Destination(x,y). 11 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|Accessible($f1(x,y,u,z),u)|Village(z2,y)|Accessible(z3,u)|Destination(x,y). 12 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|Accessible($f1(x,y,u,z),u)|Accessible(z2,u)|Camp(z3,y)|Destination(x,y). 13 [] -AttackPosition(x,y,z)| -Convoy(y)| -Terrorists(y)| -PresentLocation(y,u)| -Accessible(x,u)|Accessible($f1(x,y,u,z),u)|Accessible(z2,u)|Accessible(z3,u)|Destination(x,y). 14 [] -Destination(amountain46,aconvoy). 15 [] Convoy(aconvoy). 16 [] Yar(aconvoy). 17 [] PresentLocation(aconvoy,aconvoylocation). 19 [] Accessible(amountain46,aconvoylocation). 20 [] WindAccessible(amilbase33,amountain46). 21 [] Bioagents(aconvoy). 22 [] USBase(amilbase33). 
23 [hyper,16,1] Terrorists(aconvoy). 24 [hyper,20,2,22,21,23] AttackPosition(amountain46,aconvoy,amilbase33). 25 [hyper,24,13,15,23,17,19,unit_del,14] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Accessible(z2,aconvoylocation)|Accessible(z3,aconvoylocation). 26 [hyper,24,12,15,23,17,19,unit_del,14] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Accessible(z2,aconvoylocation)|Camp(z3,aconvoy). 27 [hyper,24,11,15,23,17,19,unit_del,14] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Village(z2,aconvoy)|Accessible(z3,aconvoylocation). 28 [hyper,24,10,15,23,17,19,unit_del,14] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Village(z2,aconvoy)|Camp(z3,aconvoy). 29 [hyper,24,9,15,23,17,19,unit_del,14] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Accessible(z2,aconvoylocation)|Accessible(z3,aconvoylocation). 30 [hyper,24,8,15,23,17,19,unit_del,14] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Accessible(z2,aconvoylocation)|Camp(z3,aconvoy). 31 [hyper,24,7,15,23,17,19,unit_del,14] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Village(z2,aconvoy)|Accessible(z3,aconvoylocation). 32 [hyper,24,6,15,23,17,19,unit_del,14] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Village(z2,aconvoy)|Camp(z3,aconvoy). 33 [hyper,26,4,25,factor_simp,factor_simp] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Accessible(z2,aconvoylocation). 34 [hyper,27,5,33,factor_simp] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Accessible(z3,aconvoylocation). 35 [hyper,28,5,33,factor_simp] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation)|Camp(z3,aconvoy). 36 [hyper,35,4,34,factor_simp] Accessible($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoylocation). 
37 [hyper,29,3,36] Accessible(z2,aconvoylocation)|Accessible(z3,aconvoylocation). 38 [hyper,30,3,36] Accessible(z2,aconvoylocation)|Camp(z3,aconvoy). 39 [hyper,38,4,37,factor_simp] Accessible(z2,aconvoylocation). 40 [hyper,31,5,39] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Accessible(z3,aconvoylocation). 41 [hyper,40,3,36] Accessible(z3,aconvoylocation). 42 [hyper,32,5,39] CaveSystem($f1(amountain46,aconvoy,aconvoylocation,amilbase33),aconvoy)|Camp(z3,aconvoy). 43 [hyper,42,3,36] Camp(z3,aconvoy). 44 [hyper,43,4,41] $F. ------------ end of proof -------------
Slate Hypothesis Generation in our Narrative Scenario
• What is the destination of the convoy?
• Customary destinations are ruled out
• Shows that mountain46 is the convoy's destination
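The proof above works by refutation: the negated goal (clause 14, -Destination(amountain46,aconvoy)) is added to the knowledge base, and deriving the empty clause $F shows that the goal follows. A minimal propositional sketch of the same refutation pattern (a toy clause set, not the actual scenario):

```python
from itertools import product

# Toy knowledge base: p, and p -> q.  Candidate hypothesis (goal): q.
def kb(p, q):
    return p and ((not p) or q)

def goal(p, q):
    return q

# Refutation style: KB together with the NEGATED goal must be
# unsatisfiable.  Finding no model of KB & ~goal refutes ~goal,
# which establishes that KB entails the goal.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if kb(p, q) and not goal(p, q)]
assert counterexamples == []   # KB & ~goal unsatisfiable, so KB |= goal
```

A hyperresolution prover like OTTER reaches the same verdict symbolically, deriving the empty clause instead of exhausting truth assignments.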
Resolution → Athena → English
• Automatic generation of proofs in natural language, roughly in the same style that one encounters in rigorous proofs appearing in mathematical texts.
• Coupled with the automatic generation of counter-examples (in the form of finite models), such a feature should greatly help engineers building digital systems.
• Automatically generated counter-examples will help to catch bugs in the early stages of design and implementation; automatically generated proofs expressed in English will validate design and implementation choices in later stages by demonstrating why the systems work.
MARMML: Multi-Agent Reasoning and Mental MetaLogic
MARMML moves out in four dimensions revealed by an honest, systematic study of the best of human reasoning:
3 Mode: traditional syntactic proofs, exclusively semantic/visual proofs, and proofs that synthesize the two in "hybrid" reasoning (based on MML theory)
4 Type: deductive, inductive, "creative," "narratological"/abductive
5 Expressivity (syntactic and semantic): propositional, first-order, second-order, …, higher-order, modal, temporal, etc.; and ever more expressive modeling
6 Logical Levels (multi-agent reasoning): Agent 2 can evaluate and refute Agent 1's object-level proof with meta-proof P'; Agent 3 can evaluate and refute P' with meta-meta-proof P'', etc.
Simple Reasoning Problem
Everyone loves anyone who loves someone. Alvin loves Bill. Can you infer that everyone loves Bill?
ANSWER:
JUSTIFICATION:
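The entailment can be sanity-checked by exhaustive search over small finite models (an illustrative sketch, not part of the original deck; a finite check is supporting evidence, not a proof for all domains). The first premise is formalized as forall x, y: (exists z: Loves(y, z)) -> Loves(x, y):

```python
from itertools import product

# Domain of three individuals; Alvin is 0, Bill is 1.
DOMAIN = range(3)
ALVIN, BILL = 0, 1

def check(loves):
    """loves is a dict (x, y) -> bool giving the Loves relation."""
    # Premise 1: everyone loves anyone who loves someone.
    p1 = all((not any(loves[y, z] for z in DOMAIN)) or loves[x, y]
             for x in DOMAIN for y in DOMAIN)
    # Premise 2: Alvin loves Bill.
    p2 = loves[ALVIN, BILL]
    # Conclusion: everyone loves Bill.
    c = all(loves[x, BILL] for x in DOMAIN)
    return (not (p1 and p2)) or c   # premises entail conclusion

# Enumerate all 2^9 Loves-relations on the 3-element domain:
# every model of the premises satisfies the conclusion.
pairs = [(x, y) for x in DOMAIN for y in DOMAIN]
assert all(check(dict(zip(pairs, bits)))
           for bits in product([True, False], repeat=len(pairs)))
```

The check never finds a countermodel, which agrees with the syntactic derivation: since Alvin loves someone, everyone loves Alvin; then everyone loves someone, so everyone loves everyone, and in particular Bill.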
Mode: Solution using Hybrid Reasoning from Mental MetaLogic (Proof Construction in Hyperproof)
MARMML Proofs in Chess Microworld
Mode, Expressivity, Type: no other reasoning system in the world can do this
Type (new): Creative Reasoning…
Key Functions Used for This • rank of formulas (based on number of connectives and quantifiers) • length of proof • # and complexity of “mental models” • # of inference rules used • no simple reiteration from givens • number of relation symbols (propositional variables in propositional case …) • …
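The first function listed, the rank of a formula based on its number of connectives and quantifiers, is easy to sketch. The following is an illustrative guess at such a metric over an assumed ASCII syntax, not the lab's actual implementation:

```python
# Count connectives and quantifiers in a formula written with
# standard ASCII operators (hypothetical syntax, for illustration).
CONNECTIVES = ["<->", "->", "&", "|", "~"]   # match "<->" before "->"
QUANTIFIERS = ["forall", "exists"]

def rank(formula: str) -> int:
    """Rank = total number of connective and quantifier occurrences."""
    count = 0
    for op in CONNECTIVES:
        count += formula.count(op)
        formula = formula.replace(op, " ")   # avoid recounting "->" inside "<->"
    for q in QUANTIFIERS:
        count += formula.count(q)
    return count

print(rank("forall x (P(x) -> exists y Q(x,y))"))  # -> 3
```

The other listed functions (proof length, number and complexity of mental models, rules used, etc.) would be computed analogously over proof objects rather than formula strings.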
Hard: a coherent, commonsense-psychological narrative entails (|-) a prediction
A Remarkable Empirical Result
• Mental Logic (Rips, Braine): people reason by schemas (syntactically) -- competes ferociously with:
• Mental Models (Johnson-Laird): people reason by mental models (semantically)
• We (Bringsjord & Yang) carried out a pilot experiment:
• A problem generally unsolvable under ML was given to a group. Those who solved it were told to leave. The group was converted to teams. The teams solved the problem!
• A problem generally unsolvable under MM was given to a group. Those who solved it were told to leave. The group was converted to teams. The teams solved the problem!
• Why? Multi-agent reasoning!
• A remarkable amount of metareasoning and metametareasoning, lots of thinking about thinking, visual aids, diagrams, syntactic and semantic, etc.