
A Soar’s Eye View of ACT-R


Presentation Transcript


1. A Soar’s Eye View of ACT-R
   John Laird, 24th Soar Workshop, June 11, 2004

2. Soar / ACT-R Comparison
   • What changes relative to ACT-R would significantly alter Soar?
     - Not just extensions (activation, RL, EpMem) but fundamental changes.
   • What changes relative to Soar would significantly alter ACT-R?

3. Obvious Similarities

                           Soar 9                  ACT-R 5
   Input/Output            Buffers & Async.        Buffers & Async.
   Short-term memories     Graph Structure         Chunks in buffers
                           Activation              Base Activation
   Long-term memories      Production Rules        Production Rules
                           Episodes                Declarative Memory
                           Rule Utilities          Chunk Associations
   Sequential control      Operator                Production
   Goal Structures         State stack             Goal & Declarative Memory
   Learning                Chunking                Production Composition
                           Reinforcement           Utility Learning
                           Episodes                Chunks -> Decl. Memory
                                                   Goal & Chunk Association
                                                   Base Activation

4. Short-term Memories

   Soar:
   • Unbounded graph structure
   • Multi-valued attributes: sets
   • Decision on ^operator of state
   • I-support and o-support
   • Explicitly represent state
   • Short-term identifiers
     - Generated each time retrieved
     - Values can be long-term symbols

   ACT-R:
   • Chunks (flat structures) in buffers
   • One chunk/buffer
   • Chunk types with fixed slots
   • Goal, Declarative Memory, Perception
   • All persistent until replaced/modified
   • Long-term identifiers for each chunk
     - Provides hierarchical structure

   [Slide figure: Soar state graph vs. ACT-R goal, declarative memory, and perception buffers holding chunks with ids such as #3, #45, #9]
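To make the two representations concrete, here is a minimal Python sketch of the contrast described on this slide. The structures and names (soar_wm, actr_buffers, the block chunks) are illustrative assumptions, not either architecture's actual data model or syntax.

```python
# Soar-style working memory: an unbounded graph of (identifier, attribute, value)
# elements. Attributes can be multi-valued, so sets fall out of the representation.
soar_wm = {
    ("s1", "block", "b1"),
    ("s1", "block", "b2"),       # multi-valued ^block attribute: a set of blocks
    ("b1", "color", "red"),
    ("b1", "on", "b2"),          # objects link directly to other objects
}

# ACT-R-style short-term memory: one flat chunk per named buffer, each chunk an
# instance of a chunk type with a fixed set of slots; substructure is reached
# only indirectly, through long-term chunk identifiers.
actr_buffers = {
    "goal":      {"isa": "stack-blocks", "top": "b1", "bottom": "b2"},
    "retrieval": {"isa": "block", "name": "b1", "color": "red"},
}
```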

5. Implications for Soar
   • Unbounded working memory
   • No easy way to move subset of short-term memory to long-term memory piece by piece
   • Can’t maintain connections between objects without long-term memory symbols
   • Makes it possible to determine results automatically
   • Supports automatic removal of irrelevant data

6. Implications for ACT-R
   • Bounded representation
     - Long-term memory symbols allow dynamic encapsulation
     - Can learn to test only chunk id instead of substructure
   • Flat representation
     - Hard to represent sets
     - Requires “unpacking” of object symbols to access features
     - But can learn rules that access symbols directly
     - How can it recognize structured objects from perception? (Blending?)
   • Unitary object representation primacy (vs. independent features)
     - All features are equally important (activation is object based)
     - Chunk types are architecturally meaningful

7. Implications for ACT-R II
   • Persistence
     - Easy to have inconsistent beliefs
     - Consistency always competes with other reasoning
   • Working Memory = retrieved Declarative Memory (LTM)
     - Changes in working memory change declarative memory (see the sketch below)
     - No memory of old values in chunks
     - Difficult to maintain independent copies of same object
       - Hypothetical reasoning
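The “changes in working memory change declarative memory” point can be illustrated with a small aliasing sketch in Python. The chunk contents are made up, and this is only an analogy for the buffer/LTM relationship described above, not ACT-R's implementation.

```python
# If the buffer holds a reference to the retrieved chunk rather than a copy,
# editing the "working" version silently rewrites long-term declarative memory.
declarative_memory = {
    "fact-7": {"isa": "addition-fact", "addend1": 3, "addend2": 4, "sum": 7},
}

retrieved = declarative_memory["fact-7"]    # retrieval returns the same object
retrieved["sum"] = 8                        # a hypothetical working-memory edit

print(declarative_memory["fact-7"]["sum"])  # 8: the old value is gone
# Keeping an independent copy (e.g., for hypothetical reasoning) requires an
# explicit deep copy, which is why maintaining two versions of the same object
# is awkward under this scheme.
```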

8. Fundamental Issue: Long-Term Object Identity
   • Architectural (ACT-R) vs. Knowledge-based (Soar)
   • Connecting to perception
   • Connecting to other long-term memories
   • Copying structures

9. Decision Making

                                    Soar              ACT-R
   Generate features                Parallel rules    Sequential rules
   Generate alternatives            Parallel rules    Match rule conditions
   Compare & rate alternatives      Parallel rules    Rule utility
   Select                           Architecture      Architecture
   Apply                            Parallel rules    Rule actions

   Dimensions for comparison:
   • Simple metrics
     - # of reasoning steps required
     - # of sequential rule firings
     - # knowledge units (rules) required
     - ACT-R often trades off chunks + interpretation + learning for rules.
   • Capabilities
     - Expressibility
     - Use context
     - Open to meta-reasoning
     - Modification through learning

10. Execution Steps

                                    Soar              ACT-R
    Generate features (F)           Parallel rules    Sequential rules
    Generate options (O)            Parallel rules    Match rule conditions
    Compare & rate options (C)      Parallel rules    Rule utility
    Select                          Architecture      Architecture
    Apply (A)                       Parallel rules    Rule actions
    # of rule firings               F + O + C + A     F + 1
    # of sequential steps           1                 F + 1

    This is complicated by declarative memory retrievals in ACT-R – but they are not really procedural knowledge directly involved in decision making, although they are sometimes involved indirectly.
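To make the two cost rows of the table explicit, here is a minimal sketch of the counting scheme as stated on this slide; the example counts are hypothetical.

```python
# Illustrative cost model for a single decision, following the table above.
# F, O, C, A = number of feature, option, comparison, and application rules.
def soar_costs(F, O, C, A):
    rule_firings = F + O + C + A   # every piece of knowledge fires as a rule
    sequential_steps = 1           # rules fire in parallel within the cycle
    return rule_firings, sequential_steps

def actr_costs(F, O, C, A):
    rule_firings = F + 1           # F sequential feature rules + 1 selected rule
    sequential_steps = F + 1       # each firing is a serial step
    return rule_firings, sequential_steps

# Hypothetical counts: 3 features, 4 options, 4 comparisons, 1 application rule.
print(soar_costs(3, 4, 4, 1))      # (12, 1)
print(actr_costs(3, 4, 4, 1))      # (4, 4)
```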

11. Propose and Apply Knowledge Units
    • For a single operator that can be selected in O situations and has A ways of applying:
      - Soar: O + A rules
      - ACT-R: O * A rules
    • O: Independent Proposals
    • A: Independent Applications
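A worked instance of the additive vs. multiplicative counting claim; the counts are made up for illustration.

```python
# Knowledge-unit counts for one operator, following the slide's claim.
O, A = 5, 4                       # O proposal contexts, A ways of applying
soar_rules = O + A                # independent proposal rules + apply rules
actr_rules = O * A                # one rule per (context, application) pairing
print(soar_rules, actr_rules)     # 9 20
```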

12. Selection Knowledge Units
    • In Soar, independent numeric indifferent rules combine values for the decision (see the sketch below)
      - Allows linear combinations of desirability
    • In ACT-R, only a single utility value is associated with each rule
      - No run-time combination
      - Conflates legality (proposal) and desirability
      - Must have separate rule for each unique context-application pair
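Here is a minimal sketch of the two selection schemes just described. The operator names, values, and utilities are made up, and neither half is meant as either architecture's actual selection equation.

```python
# Soar-style: each matched numeric indifferent preference contributes a value,
# and the values are combined (here, summed) per candidate at decision time.
preferences = [("op-a", 0.3), ("op-b", 0.5), ("op-a", 0.4), ("op-b", -0.2)]
totals = {}
for op, value in preferences:
    totals[op] = totals.get(op, 0.0) + value
soar_choice = max(totals, key=totals.get)      # op-a (0.7) beats op-b (0.3)

# ACT-R-style: each production carries a single utility; conflict resolution
# picks the one rule with the highest utility, with no run-time combination.
rule_utilities = {"rule-a-in-context-1": 2.1, "rule-b-in-context-1": 1.7}
actr_choice = max(rule_utilities, key=rule_utilities.get)

print(soar_choice, actr_choice)
```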

13. Expressibility
    • Soar allows “open decisions”
      - Which knowledge contributes is determined at run time
      - Does not require pre-compilation of important features
      - Separates knowledge about whether an action “can” be done from whether it “should” be
      - Makes it easy to express and add knowledge to modify a method
    • Symbolic preferences
      - Possibility of one-shot learning for decision making
      - Can be told not to do an action (and overcome statistical knowledge; see the sketch below)
      - Can learn to not do an action
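A sketch of the “can be told not to do an action” point: a symbolic reject-style preference filters a candidate out regardless of its numeric desirability. All names and numbers are hypothetical.

```python
# Candidates with numeric desirability plus a symbolic prohibition.
numeric = {"op-a": 0.9, "op-b": 0.4}
symbolic_rejects = {"op-a"}            # told once: never do op-a in this context

viable = {op: v for op, v in numeric.items() if op not in symbolic_rejects}
choice = max(viable, key=viable.get)
print(choice)                          # op-b, despite op-a's higher value
```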

14. Use Run-time Context

                                  Soar            ACT-R
    Generate alternatives         Yes – rules     Yes – rule conditions
    Compare/rate alternatives     Yes – rules     No – rule utility
    Select                        Architecture    Architecture
    Apply                         Yes – rules     No – rule action

15. Meta-Reasoning
    • Soar has tie impasses & subgoals
      - Can detect when knowledge is uncertain/incomplete
      - Can use arbitrary reasoning to analyze and make the decision
        - Including look-ahead planning with hypothetical states
      - Can return results that modify the decision
      - Learning can directly modify the decision
    • ACT-R
      - Difficult to detect uncertainty & reason about the decision
      - Could create an impasse when utilities are close or uncertain (see the sketch below)
      - Difficult to modify the decision without experience
      - How could other reasoning change a production rule selection?
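An illustrative sketch of the “impasse when utilities are close or uncertain” idea; the threshold, the values, and the select_or_impasse helper are invented for this example.

```python
# Signal an impasse (and hence a subgoal) when the top candidates are too close
# to distinguish; otherwise select the best candidate.
def select_or_impasse(candidates, threshold=0.1):
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < threshold:
        return None                    # knowledge is insufficient to decide
    return ranked[0][0]

print(select_or_impasse({"op-a": 0.72, "op-b": 0.70}))  # None -> impasse
print(select_or_impasse({"op-a": 0.90, "op-b": 0.40}))  # op-a
```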

16. Predictions!
    • ACT-R
      - Something to deal with meta-cognition
        - Detecting uncertainty and deliberate reasoning to deal with it (and the learning)
      - Planning
      - Integration of emotion/pain/pleasure for learning
      - Episodic memory
    • Soar
      - Long-term declarative memory & architectural declarative learning
    • Someone will build ASCOT-ARR!
      - ACT-R memory structure with Soar operators

17. Gold and Coal
    • Gold: Having alternative architectures
      - Provides inspiration for architectural modification
      - Provides comparison
      - Forces us to examine arbitrary decisions
    • Coal: Most comparisons to date are:
      - Informal (such as this)
      - Not theory directed (AMBER)
      - Confound programming & architecture
      - Not exactly the same task
