
  1. Sigma: Towards a Graphical Architecture for Integrated Cognition Paul S. Rosenbloom | 7/27/2012

  2. The Goal of this Work • A new cognitive architecture – Sigma (𝚺) – based on • The broad yet theoretically elegant power of graphical models • The unifying potential of piecewise continuous functions • As an approach towards integrated cognition • Consolidating the functionality and phenomena implicated in natural minds/brains and/or artificial cognitive systems • That meets two general desiderata • Grand unified • Functionally elegant • In support of developing functional and robust virtual humans (and intelligent agents/robots) • And ultimately relating to a new unified theory of cognition

  3. Example Virtual Humans (USC/ICT) Gunslinger INOTS Ada & Grace SASO

  4. Cognitive Architecture • Fixed structure underlying intelligent behavior • Defines mechanisms for memory, reasoning, learning, interaction, etc. • Intended to yield integrated cognition when knowledge and skills are added • May serve as the basis for a Unified Theory of Cognition, and for virtual humans, intelligent agents and robots • Induces a language, but not just a language (or toolkit) • Embodies a theory of, and constraints on, the parts and their combination • Overlaps in aims with what are variously called AGI architectures and intelligent agent/robot architectures • Examples include ACT-R, AuRA, Clarion, Companions, Epic, Icarus, MicroPsi, OpenCog, Polyscheme, RCS, Soar, and TCA. Example: Soar 3-8 (CMU/UM/USC) • Symbolic working memory: (x1 ^next x2)(x2 ^next x3) • Long-term memory of rules: (a ^next b)(b ^next c) → (a ^next c) • Decide what to do next based on preferences generated by rules • Reflect when it can't decide • Learn results of reflection • Interact with world [Images: USC/ICT – SASO; USC/ISI & UM – IFOR]

  5. Outline of Talk • Desiderata • Sigma’s core • Progress • Wrap up

  6. Desiderata

  7. Desideratum I: Grand Unified • Unified: Cognitive mechanisms work well together • Share knowledge, skills and uncertainty • Provide complementary functionality • Grand Unified: Extend to non-cognitive aspects • Perception, motor control, emotion, personality, … • Needed for virtual humans, intelligent robots, etc. • Forces important breadth up front • Mixed: General symbolic reasoning with pervasive uncertainty • Hybrid: Discrete and continuous • Towards synergistic robustness • General combinatoric models • Statistics over large bodies of data • Expansive base for mechanism development and integration

  8. Desideratum II: Functionally Elegant • Broad scope of functionality and applicability • Embodying a superset of existing architectural capabilities (cognitive, perceptuomotor, emotive, social, adaptive, …) • Simple, maintainable, extendible & theoretically elegant • Functionality from composing a small set of general mechanisms [Figure: architecture comparison – Soar 3-8, Soar 9 (UM), Sigma – spanning hybrid mixed long-term memory, decision, learning, and hybrid mixed short-term memory]

  9. Candidate Bases for Satisfying Desiderata • Programming languages (C, C++, Java, …) • Little direct support for capability implementation or integration • AI languages (Lisp, Prolog, …) • Neither hybrid nor mixed, nor supportive of integration • Architecture specification languages (Sceptic, …) • Neither hybrid nor mixed, nor sufficiently efficient • Integration frameworks (Storm, …) • Nothing to say about capability implementation • Neural networks • Symbols still difficult, as is achieving necessary capability breadth • Statistical relational languages (Alchemy, BLOG, …) • Exploring a variant tuned to architecture implementation and integration • Based on graphical models with piecewise continuous functions

  10. Sigma's Core

  11. Graphical Models • Enable efficient computation over multivariate functions by decomposing them into products of subfunctions • Bayesian/Markov networks, Markov/conditional random fields, factor graphs • E.g., f(u,w,x,y,z) = f1(u,w,x)f2(x,y,z)f3(z), or p(u,w,x,y,z) = p(u)p(w)p(x|u,w)p(y|x)p(z|x) • Yield broad capability from a uniform base • State of the art performance across symbols, probabilities and signals via uniform representation and reasoning algorithm • (Loopy) belief propagation, forward-backward algorithm, Kalman filters, Viterbi algorithm, FFT, turbo decoding, arc-consistency, production match, … • Can support mixed and hybrid processing • Several neural network models map onto them [Figure: a factor graph over u, w, x, y, z with factor nodes f1, f2, f3, and the corresponding Bayesian network]
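The decomposition above is what buys the efficiency: sums can be pushed inside the product so the full joint is never enumerated. A minimal Python sketch of the slide's f(u,w,x,y,z) = f1(u,w,x)f2(x,y,z)f3(z) example, with invented binary domains and factor tables (nothing here is from Sigma itself):

```python
import itertools

# Invented binary domains and factor tables for the slide's example:
#   f(u,w,x,y,z) = f1(u,w,x) * f2(x,y,z) * f3(z)
f1 = {(u, w, x): 1.0 + u + 2 * w + x for u, w, x in itertools.product((0, 1), repeat=3)}
f2 = {(x, y, z): 1.0 + x * y + z for x, y, z in itertools.product((0, 1), repeat=3)}
f3 = {z: 2.0 - z for z in (0, 1)}

# Brute force: enumerate the full joint (exponential in the number of variables).
brute = sum(f1[u, w, x] * f2[x, y, z] * f3[z]
            for u, w, x, y, z in itertools.product((0, 1), repeat=5))

# Factored: push the sums inside the product. u and w touch only f1, so they
# can be summed out locally, leaving a small function of x alone.
g1 = {x: sum(f1[u, w, x] for u, w in itertools.product((0, 1), repeat=2)) for x in (0, 1)}
factored = sum(g1[x] * f2[x, y, z] * f3[z]
               for x, y, z in itertools.product((0, 1), repeat=3))

assert abs(brute - factored) < 1e-9
print("total =", factored)
```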

  12. Factor Graphs and the Summary Product Algorithm (based on Kschischang, Frey & Loeliger, 1998) • Factor graphs handle arbitrary multivariate functions • Variables in the function map onto variable nodes • Factors in the decomposition map onto factor nodes • Bidirectional links connect factors with their variables • Summary product algorithm processes messages on links • Messages are distributions over link variables (starting with evidence) • At variable nodes, messages are combined via pointwise product • At factor nodes, take products and summarize out unneeded variables • A single settling can efficiently yield: marginals on all variables (integral/sum); maximum a posteriori – MAP (max); can mix across segments of the graph [Figure: worked example with message tables over x, y, and z at factor nodes f1 and f2, starting from evidence "2" and "3"]
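A tiny, conventional illustration of summary product on a three-variable chain, with invented potentials. This is generic sum-product in numpy, not Sigma's piecewise-linear message machinery:

```python
import numpy as np

# A chain factor graph  x -- f1 -- y -- f2 -- z  with invented potentials
# (domains of size 3) and evidence pinning x to its middle value.
f1 = np.array([[1., 2., 0.],
               [0., 1., 3.],
               [2., 1., 1.]])          # f1[x, y]
f2 = np.array([[1., 0., 1.],
               [2., 1., 0.],
               [0., 1., 2.]])          # f2[y, z]
evidence_x = np.array([0., 1., 0.])    # like the slide's [00100…] evidence vector

# Variable node x: only neighbor is f1, so its outgoing message is the evidence.
m_x_to_f1 = evidence_x
# Factor node f1: pointwise product with the incoming message, then summarize
# (sum) out the variable not needed downstream (x).
m_f1_to_y = (f1 * m_x_to_f1[:, None]).sum(axis=0)
# Variable node y: with only one incoming message, the product is trivial.
m_y_to_f2 = m_f1_to_y
# Factor node f2: product, then summarize out y -> unnormalized marginal on z.
marginal_z = (f2 * m_y_to_f2[:, None]).sum(axis=0)
print("p(z) =", marginal_z / marginal_z.sum())

# Swapping sum for max gives MAP-style summarization instead of marginals.
map_z = (f2 * m_y_to_f2[:, None]).max(axis=0)
print("MAP z =", int(np.argmax(map_z)))
```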

  13. Mixed Hybrid Representation for Functions/Messages • Multidimensional continuous functions • One dimension per variable • Approximated as piecewise linear over arrays of rectilinear (orthotopic) regions [Figure: a 2D domain tiled with regions carrying linear pieces such as .7x+.3y+.1, .5x+.2, .6x−.2y, and x+y] • Discretize the domain for discrete distributions & symbols: [1,2)=.2, [2,3)=.5, [3,4)=.3 • Booleanize the range (and add a symbol table) for symbols: [0,1)=1 ⇒ Color(x, Red)=True; [1,2)=0 ⇒ Color(x, Green)=False • Analogous to implementing digital circuits by restricting an inherently continuous underlying technology
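A sketch of what such a piecewise representation might look like, reduced to one dimension for brevity. The `Region` and `PiecewiseLinear` names are invented; real Sigma functions are n-dimensional, with linear pieces over orthotopic regions:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """An axis-aligned region [lo, hi) of a 1D domain carrying a linear piece a*x + b."""
    lo: float
    hi: float
    a: float
    b: float

class PiecewiseLinear:
    """A 1D piecewise-linear function over rectilinear regions (a toy stand-in for
    the hybrid mixed message representation; real messages are n-dimensional)."""
    def __init__(self, regions):
        self.regions = regions

    def __call__(self, x):
        for r in self.regions:
            if r.lo <= x < r.hi:
                return r.a * x + r.b
        return 0.0  # outside all regions

    def integral(self):
        # Exact integral of a*x + b over [lo, hi): a*(hi^2 - lo^2)/2 + b*(hi - lo)
        return sum(r.a * (r.hi**2 - r.lo**2) / 2 + r.b * (r.hi - r.lo)
                   for r in self.regions)

# A discrete distribution is just constant (a = 0) pieces over unit intervals,
# as in the slide's [1,2)=.2, [2,3)=.5, [3,4)=.3 example.
dist = PiecewiseLinear([Region(1, 2, 0, .2), Region(2, 3, 0, .5), Region(3, 4, 0, .3)])
print(dist(2.5), dist.integral())   # -> 0.5 1.0
```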

  14. Constructing Sigma: Defining Long-Term and Working Memories • Predicate-based representation • E.g., Object(s,O1), Concept(O1,c) • Arguments are constants in WM but may be variables in LTM • LTM is composed of conditionals (generalized rules) • A conditional is a set of patterns joined with an optional function • Conditionals compile into graph structures • WM comprises nD continuous functions for predicates • These compile to evidence at peripheral factor nodes • LTM access: message passing until quiescence, and then modify WM. Example: CONDITIONAL Concept-Prior Conditions: Object(s,O1) Condacts: Concept(O1,c) [Figure: the conditional compiled into WM, constant, pattern, join, and function nodes]

  15. The Structure of Conditionals • Patterns can be conditions, actions or condacts • Conditions and actions embody normal rule semantics • Conditions: messages flow from WM • Actions: messages flow towards WM • Condacts embody (bidirectional) constraint/probability semantics • Messages flow in both directions: local match + global influence • Pattern networks connect via join nodes • Product (≈ AND for 0/1) enforces variable binding equality • Functions are defined over pattern variables. Example: CONDITIONAL Concept-Prior Conditions: Object(s,O1) Condacts: Concept(O1,c) [Figure: compiled graph with WM, constant, pattern, join, and function nodes]

  16. Some More Detail on Predicates and Patterns • May be closed world or open world • Do unspecified WM regions default to unknown (1) or false (0)? • Arguments/variables may be unique or universal • Unique act like random variables: P(a) • Distribution over values: [.1 .5 .4] • Basis for rating and choice • Universal act like rule variables: (a ^next b)(b ^next c)(a ^next c) • Any/all elements can be true/1: [1 1 0 0 1] • Work with all matching values Key distinctions between Procedural and Declarative Memories

  17. Key Questions to be Answered • To what extent can the full range of mechanisms required for intelligent behavior be implemented in this manner? • Can the requisite range of mechanisms all be sufficiently efficient for real time behavior on the part of the whole system? • What are the functional gains from such a uniform implementation and integration? • To what extent can the human mind and brain be modeled via such an approach?

  18. Progress

  19. Progress • Mental imagery [BICA 11a] • 2D continuous imagery buffer • Transformations on objects • Perception • Edge detection • Object recognition (CRFs) [BICA 11b] • Localization (of self) [BICA 11b] • Statistical natural language • Question answering (selection) • Word sense disambiguation • Graph integration [BICA 11b] • CRF + Localization + POMDP • Memory [ICCM 10] • Procedural (rule) • Declarative (semantic, episodic) • Constraint • Problem solving • Preference based decisions [AGI 11] • Impasse-driven reflection • Decision-theoretic (POMDP) [BICA 11b] • Theory of Mind • Learning • Episodic • Gradient descent • Reinforcement • (Some of these are very much just beginnings!)

  20. Memory (Rules) • Procedural if-then structures • Just conditions and actions • CW and universal variables. Example: CONDITIONAL Transitive Conditions: Next(a,b) Next(b,c) Actions: Next(a,c) (type 'X :constants '(X1 X2 X3)) (predicate 'Next '((first X) (second X)) :world 'closed) [Figure: the rule compiled into pattern and join nodes over WM, with message tables over the constants X1, X2, X3]
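The rule's match semantics can be mimicked outside Sigma as a relational join, with the join node's product enforcing equality on the shared variable b. A hypothetical plain-Python rendering (not Sigma code):

```python
# Minimal sketch of the Transitive conditional's match as a relational join:
# the join enforces equality of the shared variable b, mirroring how Sigma's
# product of 0/1 functions acts like AND over variable bindings.
wm_next = {("X1", "X2"), ("X2", "X3")}   # closed-world WM contents for Next

def apply_transitive(next_rel):
    # Join Next(a,b) with Next(b,c) on b, projecting the action Next(a,c).
    return {(a, c) for (a, b1) in next_rel for (b2, c) in next_rel if b1 == b2}

wm_next |= apply_transitive(wm_next)
print(sorted(wm_next))   # now includes ('X1', 'X3')
```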

  21. Memory (Semantic) • Given cues, retrieve (predict) object category and missing attributes • E.g., given Color=Silver, retrieve Category=Walker, Legs=4, Mobile=T, Alive=F, Weight=10 • Naïve Bayes classifier • Prior on concept + CPs on attributes • Just condacts (in pure form) • OW and unique variables. Examples: CONDITIONAL Concept-Prior Conditions: Object(s,O1) Condacts: Concept(O1,c) CONDITIONAL Concept-Weight Conditions: Object(s,O1) Condacts: Concept(O1,c) Weight(O1,w) [Figure: compiled graph with WM, constant, pattern, join, and function nodes]

  22. Example Semantic Memory Graph [Figure: factor graph over Concept (S), Color (S), Weight (C), Legs (D), Mobile (B), and Alive (B), with sample function values such as Concept: Dog=.21; Color: Silver=.01, Brown=.14, White=.05; Weight: [1,50)=.00006w-.00006, [50,150)=.004-.00003w; Mobile: F=.01, T=.2] • Key: B = Boolean, S = Symbolic, D = Discrete, C = Continuous • Just a subset of the factor nodes (and no variable nodes) is shown
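The retrieval behavior of slides 21–22 is ordinary naive Bayes inference, sketched below with invented priors and conditional probabilities (the slide shows only a few of the real function values):

```python
import numpy as np

concepts = ["Walker", "Dog", "Human"]
# Invented prior and conditional probabilities, standing in for the graph's
# function-node values.
prior          = np.array([.3, .3, .4])
p_color_silver = np.array([.6, .01, .05])   # P(Color=Silver | concept)
p_legs_4       = np.array([.9, .95, .01])   # P(Legs=4 | concept)

# Condact-style retrieval: posterior over concept given the cues...
posterior = prior * p_color_silver * p_legs_4
posterior /= posterior.sum()

# ...then predict a missing attribute by mixing its CP under the posterior.
p_alive_true = np.array([.01, .99, .99])    # P(Alive=T | concept)
print(dict(zip(concepts, posterior.round(3))))
print("P(Alive=T | cues) =", round(float(posterior @ p_alive_true), 3))
```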

  23. Local, Incremental, Gradient Descent Learning (w/ Abram Demski & Teawon Han; based on Russell et al., 1995) • Gradient defined by feedback to the function node • Normalize (and subtract out the average) • Multiply by learning rate • Add to function, (shift positive,) and normalize [Figure: the semantic memory graph with learning feedback arriving at each function node]
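The update recipe on this slide can be sketched directly. The normalization details below are simplified guesses rather than Sigma's actual learning rule:

```python
import numpy as np

def update_function(fn, feedback, lr=0.05):
    """One local learning step on a discrete function node, following the slide's
    recipe; normalization details are simplified guesses, not Sigma's code."""
    grad = feedback / feedback.sum()        # normalize the feedback message
    grad -= grad.mean()                     # subtract out the average
    fn = fn + lr * grad                     # scale by learning rate and add
    fn -= min(fn.min(), 0.0)                # shift positive if any value went negative
    return fn / fn.sum()                    # renormalize

concept_prior = np.array([.25, .25, .25, .25])   # invented function-node values
feedback = np.array([.1, .6, .2, .1])            # invented feedback message
print(update_function(concept_prior, feedback).round(4))
```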

  24. Procedural vs. Declarative Memories • Similarities: All based on WM and LTM • All LTM based on conditionals • All conditionals map to graphs • Processing by summary product • Differences: Procedural vs. declarative • Conditions+actions vs. condacts • Directionality of message flow • Closed vs. open world • Universal vs. unique variables • Constraints are actually hybrid: condacts, OW, universal • Other variations also possible

  25. Mental Imagery • How is spatial information represented and processed in minds? • Add and delete objects from images • Translate, scale and rotate objects • Extract implied properties for further reasoning • In a symbolic architecture either need to • Represent and reason about images symbolically • Connect to an imagery component (as in Soar 9) • Here goal is to use same mechanisms • Representation: Piecewise continuous functions • Reasoning: Conditionals (FGs + SP)

  26. 2D Imagery Buffer in the Eight Puzzle • The Eight Puzzle is a classic sliding tile puzzle • Represented symbolically in typical AI systems • LeftOf(cell11, cell21), At(tile1, cell11), etc. • Instead represent as a 3D function • Continuous spatial x & y dimensions • (type 'dimension :min 0 :max 3) • Discrete tile dimension (an xy plane) • (type 'tile :discrete t :min 0 :max 9) • Region of plane with tile has value 1 • All other regions have value 0 • (predicate 'board '((x dimension) (y dimension) (tile tile !)))

  27. Affine Transformations • Translation: Addition (offset) • Negative (e.g., y + -3.1, i.e., y − 3.1): shift to the left • Positive (e.g., y + 1.5): shift to the right • Scaling: Multiplication (coefficient) • <1 (e.g., ¼ × y): shrink • >1 (e.g., 4.37 × y): enlarge • -1 (e.g., -1 × y or -y): reflect • Requires translation as well to scale around the object center • Rotation (by multiples of 90°): swap dimensions • x ⇄ y • In general also requires reflections and translations
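On a discretized grid these transformations reduce to familiar array operations. A numpy sketch, pixel-based for brevity whereas Sigma operates on continuous regions; the tetromino layout is invented:

```python
import numpy as np

# A Z tetromino on a 4x4 binary grid (rows = y, columns = x); layout invented.
img = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])

def translate_x(a, offset):
    """Translate right by `offset` columns: pad with zeros on one side and crop
    what falls off the other (the pad/crop of the next slide, on pixels)."""
    out = np.zeros_like(a)
    out[:, offset:] = a[:, :a.shape[1] - offset]
    return out

reflected = img[:, ::-1]       # reflection: x -> (width-1) - x
rotated = img.T[:, ::-1]       # 90° clockwise: swap dimensions, then reflect
print(translate_x(img, 1))
print(rotated)
```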

  28. Translate a Tile • Offset boundaries of regions along a dimension • Special purpose optimization of a delta function [Figure: pad and crop of the shifted regions] CONDITIONAL Move-Right Conditions: (selected state:s operator:o) (operator id:o state:s x:x y:y) (board state:s x:x y:y tile:t) (board state:s x:x+1 y:y tile:0) Actions: (board state:s x:x+1 y:y tile:t) (board – state:s x:x y:y tile:t) (board state:s x:x y:y tile:0) (board – state:s x:x+1 y:y tile:0)

  29. Transform a Z Tetromino CONDITIONAL Scale-Half-Horizontal Conditions: (tetromino x:x y:y) Actions: (tetromino x:x/2+1 y:y) CONDITIONAL Rotate-90-Right Conditions: (tetromino x:x y:y) Actions: (tetromino x:4-y y:x) CONDITIONAL Reflect-Horizontal Conditions: (tetromino x:x y:y) Actions: (tetromino x:4-x y:y)

  30. Comments on Affine Transformations • Support feature extraction • Edge detection with no fixed pixel size: CONDITIONAL Edge-Detector-Left Conditions: (tetromino x:x y:y) (tetromino – x:x-.00001 y:y) Actions: (edge x:x y:y) • Support symbolic reasoning • Working across time slices in episodic memory • Working across levels of reflection • Asserting equality of different variables • Need polytopic regions for any-angle rotation (http://mathworld.wolfram.com/ConvexPolyhedron.html)

  31. Problem Solving • In cognitive architectures, the standard approach is combinatoric search for a goal over sequences of operator applications to symbolic states • Architectures like Soar also add control knowledge for decisions, based on associative (rule-driven) retrieval of preferences • E.g., operators that move tiles into position are best • A decision-theoretic approach maximizes utility over sequences of operators with uncertain outcomes • E.g., via a partially observable Markov decision process (POMDP) • This work integrates the latter into the former, while exploring (an aspect of) grand unification with perception [Figure: POMDP graph over states X0…X3, transitions XT1…XT3, actions A0…A2, and utilities U1…U3]
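The POMDP machinery referenced here rests on belief-state updating and expected-utility lookahead. A generic sketch with an invented three-location corridor; these are the standard POMDP equations, not Sigma's graph encoding of them:

```python
import numpy as np

# Toy 1D corridor with 3 locations; all matrices are invented for illustration.
# T[a, s, s'] = P(s' | s, a) for actions 0 = left, 1 = right (moves can fail).
T = np.array([[[1.0, 0.0, 0.0], [0.8, 0.2, 0.0], [0.0, 0.8, 0.2]],
              [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.0, 0.0, 1.0]]])
# O[s', o] = P(o | s') for two observations: 0 = "wall", 1 = "door".
O = np.array([[0.9, 0.1], [0.2, 0.8], [0.9, 0.1]])
U = np.array([0.0, 0.0, 1.0])   # utility of each location (goal at the right)

def belief_update(b, a, o):
    """Standard POMDP belief update: b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s)."""
    b2 = O[:, o] * (b @ T[a])
    return b2 / b2.sum()

def expected_utility(b, a):
    """One-step lookahead: expected utility of taking action a under belief b."""
    return float((b @ T[a]) @ U)

b = np.array([1/3, 1/3, 1/3])     # uniform initial belief over locations
b = belief_update(b, a=1, o=1)    # moved right, then saw a door
print(b.round(3), [round(expected_utility(b, a), 3) for a in (0, 1)])
```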

  32. Standard (Soar-like) Problem Solving • Base level: Generate, evaluate, select, apply operators • Generate (retractable): OW actions – LTM(WM) → WM • Evaluate (retractable): OW actions + functions – LTM(WM) → LM • Link memory (LM) caches the last message in both directions • Subsumes Soar's alpha, beta and preference memories • Select: Unique variables – LM(WM) → WM • Apply (latched): CW actions – LTM(WM) → WM • Meta level: Reflect on impasse (not the focus here) [Figure: decision subgraph connecting LTM, WM, and LM through generation, evaluation, selection, and application]

  33. Eight Puzzle Problem Solving • All knowledge encoded as conditionals • Total of 17 conditionals to solve simple problems • 667 nodes (359 variable, 308 factor) and 732 links • Sample problem takes 5541 messages over 7 decisions • 792 messages per graph cycle, and .8 msec per message (on an iMac) CONDITIONAL Move-Left; Move tile left (and blank right) Conditions: (selected state:s operator:left) (operator id:left state:s x:x y:y) (board state:s x:x y:y tile:t) (board state:s x:x-1 y:y tile:0) Actions: (board state:s x:x y:y tile:0) (board – state:s x:x-1 y:y tile:0) (board state:s x:x-1 y:y tile:t) (board – state:s x:x y:y tile:t) CONDITIONAL Goal-Best; Prefer operator that moves a tile into its desired location Conditions: (blank state:s cell:cb) (acceptable state:s operator:ct) (location cell:ct tile:t) (goal cell:cb tile:t) Actions: (selected state:s operator:ct) Function: 1

  34. Decision Theoretic Problem Solving + Perception • Challenge problem: Find a way in a corridor from I to G • Locations are discrete, and a map is provided • Vision is local, and feature based rather than object based • Can detect walls (rectangles) and doors (rectangles + circles, colors) • Integrates perception, localization, decisions & action • Both perception and action introduce uncertainty • Yielding distributions over objects, locations and action effects [Figure: corridor map with initial location I, goal G, walls, and Doors 1–3]

  35. Integrated Graph for Challenge Problem • Yields a distribution over A0 from which the best action can be selected [Figure: integrated factor graph combining a CRF for perception, SLAM-style localization, and a POMDP – over utilities U1…U3, states X-3…X3, transitions XT-3…XT3, actions A-3…A2, maps M-2…M0, observations O-2…O0, and perceptual nodes P and S] • Contributors: Teawon Han (USC), Abram Demski (USC/ICT), Nicole Rafidi (Princeton), David Pynadath (USC/ICT), Junda Chen (USC), Louis-Philippe Morency (USC/ICT)

  36. Comments on Problem Solving & Integrated Graph • Shows decision-theoretic problem solving within the same architecture as symbolic problem solving • Ultimately using the same preference-based choice mechanism • Capable of reflecting on impasses in decision making • Implemented within the graphical architecture without adding CRF, localization and POMDP modules to it • Instead, knowledge is added to LTM and evidence to WM • Distribution on A0 defines operator selection preferences • Just as when solving the Eight Puzzle in the standard manner • Total of 25 conditionals • 293 nodes (132 variable, 161 factor) and 289 links • Sample problem takes 7837 messages over 20 decisions • 392 messages per graph cycle, and .5 msec per message (on an iMac)

  37. Reinforcement Learning • Learn values of actions for states from rewards • SARSA: Q(st, at) ← Q(st, at) + α[rt + γQ(st+1, at+1) − Q(st, at)] • Deconstruct in terms of: • Gradient-descent learning • Schematic knowledge for prediction • Synchronic learning/prediction of: • Current reward (R) • Discounted future reward (P) • Q values (Q) • Learn given an action model • Diachronic learning/prediction of: • Action model (transition function) (SN) • Requires addition of an intervening decision cycle [Figure: two temporal graphs over Rt, Pt, Q(A)t, At, St, and St+1, the second adding the learned action model SNt]

  38. RL in 1D Grid • Sampling of conditionals: CONDITIONAL Reward Condacts: (Reward x:x value:r) Function<x,r>: .1:<[1,6)>,*> … CONDITIONAL Backup Conditions: (Location state:s x:x) (Selected state:s operator:o) (Location*Next state:s x:nx) (Reward x:nx value:r) (Projected x:nx value:p) Actions: (Q x:x operator:o value:.95*(p+r)) (Projected x:x value:.95*(p+r)) CONDITIONAL Transition Conditions: (Location state:s x:x) (Selected state:s operator:o) Condacts: (Location*Next state:s x:nx) Function<x,o,nx>: (.125 * * *) [Figure: 1D grid of locations 0–7 with goal G, and plots of Reward, Q, and Projected values] • Graphs are of expected values, but learning is of full distributions
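To ground the SARSA equation from slide 37 in this 1D grid setting, here is a conventional tabular implementation. Grid size, rewards, and parameters are invented, and unlike Sigma it learns expected values rather than full distributions:

```python
import random
import numpy as np

# Tabular SARSA on a 1D grid of 8 locations with the goal at the right end,
# loosely mirroring the slide's grid; all parameters here are invented.
N, GOAL, ALPHA, GAMMA, EPS = 8, 7, 0.1, 0.95, 0.2
Q = np.zeros((N, 2))                       # two actions: 0 = left, 1 = right

def policy(s):                             # epsilon-greedy over current Q
    return random.randrange(2) if random.random() < EPS else int(np.argmax(Q[s]))

def step(s, a):                            # deterministic moves, reward 1 at goal
    s2 = max(0, min(N - 1, s + (1 if a else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(2000):                      # episodes from random non-goal starts
    s = random.randrange(N - 1)
    a = policy(s)
    for _ in range(100):                   # cap episode length
        s2, r = step(s, a)
        a2 = policy(s2)
        # The slide's update: Q(st,at) <- Q(st,at) + α[rt + γQ(st+1,at+1) − Q(st,at)]
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2, a2] - Q[s, a])
        s, a = s2, a2
        if s == GOAL:
            break

print(Q.round(2))                          # rightward Q values dominate near the goal
```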

  39. Theory of Mind (ToM) (w/ David Pynadath & Stacy Marsella) • Modeling the minds of others • Assessing and predicting complex multiparty situations • My model of her model of … • Building social agents and virtual humans • Can Sigma (elegantly) extend to ToM? • Based on PsychSim (Pynadath & Marsella) • Decision theoretic problem solving based on POMDPs • Recursive agent modeling • Preliminary work in Sigma on intertwined POMDPs (w/ Nicole Rafidi) • Belief revision based on explaining past history • Can the cost and quality of ToM be improved? • Initial experiments with one-shot, two-person games • Cooperate vs. defect

  40. One-Shot, Two-Person Games • Two players • Played only once (not repeated) • So do not need to look beyond the current decision • Symmetric: Players have the same payoff matrix • Asymmetric: Players have distinct payoff matrices • Socially preferred outcome: optimum in some sense • Nash equilibrium: No player can increase their payoff by changing their choice if the others stay fixed • Sigma is finding the best Nash equilibrium [Figure: payoff matrix over players A and B]
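The Nash-equilibrium notion above is easy to make concrete for pure strategies. A sketch using an invented Prisoner's-Dilemma-style payoff matrix (the payoffs used in the actual Sigma experiments are not given in the slides):

```python
import itertools

# Invented payoffs for a symmetric one-shot two-person game.
OPS = ("cooperate", "defect")
payoff_A = {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
            ("defect", "cooperate"): 5, ("defect", "defect"): 1}
# Symmetric game: B's payoff is A's payoff with the roles mirrored.
payoff_B = {(a, b): payoff_A[(b, a)] for a, b in itertools.product(OPS, OPS)}

def is_nash(a, b):
    """No player can gain by unilaterally changing their own choice."""
    return (all(payoff_A[(a, b)] >= payoff_A[(a2, b)] for a2 in OPS) and
            all(payoff_B[(a, b)] >= payoff_B[(a, b2)] for b2 in OPS))

equilibria = [(a, b) for a, b in itertools.product(OPS, OPS) if is_nash(a, b)]
# "Best" equilibrium here read as highest joint payoff among the equilibria.
print(max(equilibria, key=lambda ab: payoff_A[ab] + payoff_B[ab]))
```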

  41. Symmetric, One-Shot, Two-Person Games • Agent A: CONDITIONAL Payoff-A-A Conditions: Choice(A,B,op-b) Actions: Choice(A,A,op-a) Function: payoff(op-a,op-b) CONDITIONAL Payoff-A-B Conditions: Choice(A,A,op-a) Actions: Choice(A,B,op-b) Function: payoff(op-b,op-a) • Agent B: CONDITIONAL Payoff-B-B Conditions: Choice(B,A,op-a) [B's model of A] Actions: Choice(B,B,op-b) [B's model of B] Function: payoff(op-b,op-a) CONDITIONAL Payoff-B-A Conditions: Choice(B,B,op-b) Actions: Choice(B,A,op-a) Function: payoff(op-a,op-b) • Shared: CONDITIONAL Select-Own-Op Conditions: Choice(ag,ag,op) Actions: Selected(ag,op) • 602 Messages / 962 Messages

  42. Graph Structure [Figure: nominal vs. actual (abstracted) graph structures for Agents A and B, with payoff nodes (PAB, PBA), choice nodes (AA, AB, BA, BB), and Select nodes; all one predicate]

  43. Asymmetric, One-Shot, Two-Person Games • Agent A: CONDITIONAL Payoff-A-A Conditions: Choice(A,B,op-b) Actions: Choice(A,A,op-a) Function: payoff(A,op-a,op-b) CONDITIONAL Payoff-A-B Conditions: Choice(A,A,op-a) Model(m) Actions: Choice(A,B,op-b) Function: payoff(m,op-b,op-a) • Agent B: CONDITIONAL Payoff-B-B Conditions: Choice(B,A,op-a) Actions: Choice(B,B,op-b) Function: payoff(B,op-b,op-a) CONDITIONAL Payoff-B-A Conditions: Choice(B,B,op-b) Model(m) Actions: Choice(B,A,op-a) Function: payoff(m,op-a,op-b) • Shared: CONDITIONAL Select-Own-Op Conditions: Choice(ag,ag,op) Actions: Selected(ag,op) • 374 Messages / 636 Messages

  44. Wrap Up

  45. Broad Set of Capabilities from Space of Variations, Highlighting Functional Elegance and Grand Unification • Capabilities: Rule memory, preference-based decisions, episodic memory, POMDP-based decisions, semantic memory, localization, mental imagery, edge detectors, … • Dimensions of variation: Uni- vs. bi-directional links • Max vs. sum summarization • Long- vs. short-term memory • Product vs. affine factors • Closed vs. open world functions • Universal vs. unique variables • Discrete vs. continuous variables • Boolean vs. numeric function values • Knowledge above the architecture is also involved • Conditionals that are compiled into subgraphs [Figure: capabilities mapped onto factor graphs with summary product (f(u,w,x,y,z) = f1(u,w,x)f2(x,y,z)f3(z)) and piecewise continuous functions (pieces such as .5y, x+.3y, x−y, and 6x)]

  46. Conclusion • Sigma is a novel graphical architecture • With potential to support integrated cognition and the development of virtual humans (and intelligent agents/robots) • Focus so far is not on a unified theory of human cognition • However, makes interesting points of contact with existing theories • Grand unification • Demonstrated mixed processing • Both general symbolic problem solving and probabilistic reasoning • Demonstrated hybrid processing • Including forms of perception integrated directly with cognition • Need much more on perception, plus action, emotion, … • Functional elegance • Demonstrated aspects of memory, learning, problem solving, perception, imagery, Theory of Mind [and natural language] • Based on factor graphs and piecewise continuous functions

  47. Publications
Rosenbloom, P. S. (2009). Towards a new cognitive hourglass: Uniform implementation of cognitive architecture via factor graphs. Proceedings of the 9th International Conference on Cognitive Modeling.
Rosenbloom, P. S. (2009). A graphical rethinking of the cognitive inner loop. Proceedings of the IJCAI International Workshop on Graphical Structures for Knowledge Representation and Reasoning.
Rosenbloom, P. S. (2009). Towards uniform implementation of architectural diversity. Proceedings of the AAAI Fall Symposium on Multi-Representational Architectures for Human-Level Intelligence.
Rosenbloom, P. S. (2010). An architectural approach to statistical relational AI. Proceedings of the AAAI Workshop on Statistical Relational AI.
Rosenbloom, P. S. (2010). Speculations on leveraging graphical models for architectural integration of visual representation and reasoning. Proceedings of the AAAI-10 Workshop on Visual Representations and Reasoning.
Rosenbloom, P. S. (2010). Combining procedural and declarative knowledge in a graphical architecture. Proceedings of the 10th International Conference on Cognitive Modeling.
Rosenbloom, P. S. (2010). Implementing first-order variables in a graphical cognitive architecture. Proceedings of the First International Conference on Biologically Inspired Cognitive Architectures.
Rosenbloom, P. S. (2011). Rethinking cognitive architecture via graphical models. Cognitive Systems Research, 12, 198-209.
Rosenbloom, P. S. (2011). From memory to problem solving: Mechanism reuse in a graphical cognitive architecture. Proceedings of the Fourth Conference on Artificial General Intelligence. Winner of the 2011 Kurzweil Award for Best AGI Idea.
Rosenbloom, P. S. (2011). Mental imagery in a graphical cognitive architecture. Proceedings of the Second International Conference on Biologically Inspired Cognitive Architectures.
Chen, J., Demski, A., Han, T., Morency, L-P., Pynadath, D., Rafidi, N. & Rosenbloom, P. S. (2011). Fusing symbolic and decision-theoretic problem solving + perception in a graphical cognitive architecture. Proceedings of the Second International Conference on Biologically Inspired Cognitive Architectures.
Rosenbloom, P. S. (2011). Bridging dichotomies in cognitive architectures for virtual humans. Proceedings of the AAAI Fall Symposium on Advances in Cognitive Systems.
Rosenbloom, P. S. (2012). Graphical models for integrated intelligent robot architectures. Proceedings of the AAAI Spring Symposium on Designing Intelligent Robots: Reintegrating AI.
Rosenbloom, P. S. (2012). Towards a 50 msec cognitive cycle in a graphical architecture. Proceedings of the 11th International Conference on Cognitive Modeling.
Rosenbloom, P. S. (2012). Towards functionally elegant, grand unified architectures. Proceedings of the 21st Behavior Representation in Modeling & Simulation (BRIMS) Conference. Abstract for panel on "Accelerating the Evolution of Cognitive Architectures," K. A. Gluck (organizer).
