How Can the Human Mind Occur in the Physical Universe? John R. Anderson
Contents
• 1. Newell's ultimate scientific question
• 2. What is a cognitive architecture?
• 3. Alternatives to cognitive architectures
• 4. ACT-R: a cognitive architecture
• 5. Symbols vs. connections in a cognitive architecture (CA)
1. Newell's Ultimate Scientific Questions (1/2)
• Allen Newell (March 19, 1927 - July 19, 1992)
• Ultimate scientific questions
  • Why does the universe exist? When did it start?
  • What is the nature of life?
  • For Newell: How can the human mind occur in the physical universe?
  ※ This question led him to worry about the architecture.
• Last lecture (Dec 4, 1991): "Desires and Diversions"
1. Newell's Ultimate Scientific Questions (2/2)
• Purpose of this book
  • to report on some of the progress that has come from taking a variety of perspectives, including the biological
  • The answer takes the form of a cognitive architecture.
• Purpose of this chapter
  • What is a cognitive architecture?
  • How the idea came to be
  • What the (failed) alternatives are
  • Introduce the cognitive architecture
2. What is a Cognitive Architecture? (1/4)
• Cognitive architecture
  • The architect is concerned with how the structure achieves the function:
    • structure (the domain of the builder)
    • function (the domain of the dweller)
  ☞ Architecture is the art of specifying the structure of a building at a level of abstraction sufficient to assure that the builder will achieve the functions desired by the user.
• Architecture of buildings → computer architecture → cognitive architecture
  • Fred Brooks (1962) introduced "architecture" into computer science through an analogy to the architecture of buildings.
  • Newell (1971) introduced the cognitive architecture through an analogy to computer architecture.
2. What is a Cognitive Architecture? (2/4)
• Brooks (in Planning a Computer System)
  • Computer architecture is the art of determining the needs of the user and then designing to meet those needs.
  ☞ Brooks is using "architecture" to mean the activity of design.
• Definitions of cognitive architecture
  • Newell (1990)
    ☞ the fixed (or slowly varying) structure that forms the framework for the immediate processes of cognitive performance and learning
  • Pylyshyn (1984)
    ☞ the functional architecture includes the basic operations provided by the biological substrate, say, for storing and retrieving symbols, comparing them, and treating them differently
  • Anderson (1983)
    ☞ a theory of the basic principles of operation built into the cognitive system
2. What is a Cognitive Architecture? (3/4)
• Structure
  • A building's architecture: physical components
  • A cognitive architecture: the definitions do not mention the brain
• Function
  • A building's architecture: habitation
  • A cognitive architecture: cognition
• Functional shift: the activity of another agent → the structure's own activity
  • In a building, the agent (the dweller) is distinct from the structure; in a cognitive architecture, the agent is the structure itself.
  ☞ Except for this shift, the same structure-function relationship holds: the function of the structure is to enable the behavior.
2. What is a Cognitive Architecture? (4/4)
• Before the idea of a cognitive architecture emerged, a scientist had two options:
  • focus on structure and get lost in the endless details of the brain, or
  • focus on function and get lost in the endless details of behavior.
  ☞ A cognitive architecture reflects the relationship between structure and function rather than focusing on either individually.
• Definition (for the purposes of this book)
  • A cognitive architecture is a specification of the structure of the brain at a level of abstraction that explains how it achieves the function of the mind.
  • "Function of the mind" can be roughly interpreted as referring to human cognition in all of its complexity.
3. Alternatives to Cognitive Architecture (1/8)
• The architectural program requires paying attention to three things:
  • the brain, the mind, and the architectural abstraction.
• The alternatives are shortcuts that ignore one of the three. This chapter examines three of the more prominent instances of such shortcuts; for each one,
  • Success: discuss what it can accomplish,
  • Demerit: note where it falls short of being able to answer Newell's question,
  • Problem: identify what its problems are.
3. Alternatives to Cognitive Architecture (2/8)
• Shortcut 1. Classic Information-Processing Psychology: Ignore the Brain
• Success
  • Information-processing psychology was very successful during the 1960s-1970s.
  • Inspecting the human brain → a neural explanation is too complex, so we need a level of analysis that is more abstract.
  • Example: the Sternberg task and model
• Demerit
  • a "computer-inspired" model of discrete serial search
• Problem
  • Ignoring the brain (the structure) is like a specification of a building's architecture that ignores what the building is made of.
3. Alternatives to Cognitive Architecture (3/8)
• Saul Sternberg's (1966) task and his model of it
  • information-processing stages
  • comparison time: 35-40 msec per item (see the sketch below)
  • Sternberg reached for the computer metaphor:
    "when the scanner is being operated by the central process it delivers memory representations to the comparator. If and when a match occurs a signal is delivered to the match register."
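As a concrete illustration of the discrete serial-search model, the sketch below computes the predicted reaction time as a base time plus a fixed comparison time per item in the memory set. The base time is an illustrative assumption; the 38 ms per item falls in the 35-40 msec range noted above.

```python
# Minimal sketch of Sternberg-style serial exhaustive search:
# predicted reaction time grows linearly with memory-set size,
# each comparison adding a fixed scan time.

def predicted_rt(set_size: int,
                 base_ms: float = 400.0,       # encoding + decision + response (illustrative)
                 scan_ms: float = 38.0) -> float:  # per-item comparison time (~35-40 ms)
    """RT = base time + (comparison time per item) * (number of items scanned)."""
    return base_ms + scan_ms * set_size

if __name__ == "__main__":
    for n in (1, 2, 4, 6):
        print(f"set size {n}: {predicted_rt(n):.0f} ms")
```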
3. Alternatives to Cognitive Architecture (4/8)
• Connectionism
  • arose in the 1980s
  • bolstered Anderson's general claim about the difference between information processing in the brain and in a computer
• Neural imaging
  • arose in the 1990s
  • showed the importance of understanding the brain as the structure underlying cognition
  • showed where cognition plays out in the brain
3. Alternatives to Cognitive Architecture (5/8)
• Shortcut 2. Eliminative Connectionism: Ignore the Mind
• Success
  • notable successes during the 1980s-1990s
  • an abstract description of the computational properties of the brain: "neurally inspired" computation
  • Example: Rumelhart and McClelland's (1986) past-tense model
• Demerit
  • It is not concerned with how the system might be organized to achieve functional cognition.
• Problem
  • It ignores mental function (the mind) as a constraint and just provides an abstract characterization of brain structure.
  • The claim: all we have to do is pay attention to the brain and describe what is happening there at some level of abstraction.
3. Alternatives to Cognitive Architecture (6/8)
• Rumelhart and McClelland's (1986) past-tense model
  • The conventional wisdom: children's acquisition of the irregular past tense is U-shaped, e.g. sing: sang → singed → sang.
    They first produce correct irregulars, then overgeneralize, then get it right again.
  • The past-tense model simulated a neural network that learned the past tenses of verbs.
  ☞ The claim: one can understand function by just studying structure.
  • The sleight of hand becomes apparent: the task the model performs (producing past-tense forms in isolation) is not a common human behavior.
3. Alternatives to Cognitive Architecture (7/8)
• Shortcut 3. Rational Analysis: Ignore the Architecture
• Success
  • Rational analyses (e.g., of vision, memory, categorization) have characterized features of the environment that all primates experience.
• Demerit
  • Rather than focusing on the architecture as the key abstraction, it focuses on adaptation to the environment.
  ☞ rational analysis (Anderson, 1990)
  ☞ Anderson's application of this approach was Bayesian.
• Problem
  • The human mind is not just the sum of core competences such as memory, categorization, or reasoning.
3. Alternatives to Cognitive Architecture (8/8)
• The Bayesian approach (see the sketch below)
  • Start with a set of prior constraints about the nature of the world.
  • Given various experiences, one can calculate the conditional probability of the input.
  • Given the input, one can calculate the posterior probabilities from the priors and conditional probabilities.
  • After making this calculation, one engages in Bayesian decision making and takes the action that optimizes expected utility.
  ☞ Example: the demands the world makes on our memory (Fig 1.4, e-mail messages)
  ※ This indicates that the time since a memory was last used is an important determinant of whether the memory will be needed now.
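A small sketch of the Bayesian recipe just listed: priors are combined with the conditional probability of the observed input to give posteriors, and the action with the highest expected utility under the posterior is chosen. The states, actions, probabilities, and utilities below are made-up illustrations, not values from the book.

```python
# Sketch of the Bayesian recipe: combine priors with the conditional
# probability of the observed input, normalize to get posteriors, then
# pick the action with the highest expected utility.
# All numbers and labels are illustrative.

priors = {"needed": 0.2, "not_needed": 0.8}      # prior over world states
likelihood = {                                   # P(observation | state)
    "needed": 0.7,       # e.g., the memory was used recently
    "not_needed": 0.1,
}
utility = {                                      # utility of each action in each state
    ("retrieve", "needed"): 10, ("retrieve", "not_needed"): -1,
    ("skip", "needed"): -5,     ("skip", "not_needed"): 0,
}

# Posterior: P(state | observation) is proportional to P(observation | state) * P(state)
unnorm = {s: priors[s] * likelihood[s] for s in priors}
z = sum(unnorm.values())
posterior = {s: p / z for s, p in unnorm.items()}

# Bayesian decision making: expected utility of each action under the posterior
expected = {
    a: sum(posterior[s] * utility[(a, s)] for s in posterior)
    for a in ("retrieve", "skip")
}
best = max(expected, key=expected.get)
print(posterior, expected, best)
```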
4. ACT-R: a Cognitive Architecture (1/4)
• Goal of this book
  • to use one architecture (ACT-R) to try to convey what we have learned about the human mind
• ACT-R's modular organization (see the buffer sketch below)
  • visual module: holds the representation of the equation on the screen (3X - 5 = 7)
  • problem state module (imaginal module): holds a current mental representation of the problem (3X = 12)
  • control module (goal module): keeps track of one's current intentions
  • declarative module: retrieves critical information from memory (7 + 5 = 12)
  • manual module: programs the output (X = 4)
  ☞ Each of these modules is associated with specific brain regions.
  ※ ACT-R contains elaborate theories about the internal processes of these modules.
• Fig 1.5. The interconnections among modules in ACT-R 5.0
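A toy snapshot, in Python rather than ACT-R's own notation, of what the module buffers described above might contain at one moment while solving 3X - 5 = 7. The buffer names, slot names, and dictionary layout are simplified illustrations, not ACT-R syntax.

```python
# Toy snapshot of ACT-R-style module buffers while solving 3X - 5 = 7.
# Buffer and slot names are simplified illustrations, not ACT-R syntax.

buffers = {
    "visual":    {"equation": "3X - 5 = 7"},   # what is on the screen
    "imaginal":  {"equation": "3X = 12"},      # current problem state
    "goal":      {"task": "solve-equation"},   # current intention
    "retrieval": {"fact": "7 + 5 = 12"},       # fact fetched from declarative memory
    "manual":    {"type": "X = 4"},            # motor output being programmed
}

for module, content in buffers.items():
    print(f"{module:9s} -> {content}")
```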
4. ACT-R: a Cognitive Architecture (2/4)
• ACT-R's modular organization (continued)
  • production system (the sixth module: the central procedural module)
    • can recognize patterns of information in the buffers and respond by sending requests to the modules
    • these recognize-act tendencies are characterized by production rules
  • Example production rule (a toy re-implementation follows this list):
    IF the goal is to solve an equation,
    and the equation is of the form "expression - num1 = num2",
    THEN write "expression = num2 + num1".
• Experiment: children 11-14 years of age solved three classes of equations on a computer:
  • 0-step: e.g., 1X + 0 = 4
  • 1-step: e.g., 3X + 0 = 12, 1X + 8 = 12
  • 2-step: e.g., 7X + 1 = 29
• Fig 1.6. Mean solution times (and predictions of the ACT-R model) for the three types of equations as a function of delay.
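Below is a minimal Python sketch of the production rule just stated, written as a pattern match plus a transformation. The function name, the regular expression, and the "solve-equation" goal label are illustrative assumptions, not ACT-R's actual production syntax.

```python
import re
from typing import Optional

# Sketch of the production rule above as pattern match + transform:
# IF   the goal is to solve an equation
#      and the equation has the form "expression - num1 = num2"
# THEN rewrite it as "expression = num2 + num1".
# (A toy re-implementation, not ACT-R's production syntax.)

def subtraction_production(goal: str, equation: str) -> Optional[str]:
    if goal != "solve-equation":
        return None                                    # condition on the goal buffer
    m = re.fullmatch(r"(.+?)\s*-\s*(\d+)\s*=\s*(\d+)", equation)
    if not m:
        return None                                    # pattern does not match the buffer
    expression, num1, num2 = m.groups()
    return f"{expression} = {num2} + {num1}"           # the rule's action

print(subtraction_production("solve-equation", "3X - 5 = 7"))  # -> "3X = 7 + 5"
```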
4. ACT-R: a Cognitive Architecture (3/4)
• Brain imaging data and the problem of identifiability
  • Five brain regions were scanned in the children (Fig 1.8).
  • These regions are associated with specific modules in the ACT-R theory.
• Predicting the BOLD response in different brain regions (see the sketch below)
  • x-axis: time from the onset of the trial
  • left graph: the effect of the number of operations, averaging over days
  • right graph: the effect of days, averaging over operations
  • The response shifts a little forward in time from day 1 to day 5, reflecting the speed increase.
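As a rough illustration of how an architecture can predict a BOLD response, the sketch below convolves a module's engagement over time with a gamma-shaped hemodynamic response, which is the general approach described in the ACT-R literature. The function, time course, and parameter values here are illustrative assumptions, not the model's fitted values.

```python
import numpy as np

# Sketch of the general idea behind module-based BOLD predictions:
# convolve a module's engagement over time with a gamma-shaped
# hemodynamic response. Parameter values are illustrative.

def hemodynamic_response(t, scale=1.5, shape=5.0):
    """Gamma-shaped response: (t/scale)^shape * exp(-(t/scale))."""
    x = t / scale
    return np.where(t > 0, x**shape * np.exp(-x), 0.0)

dt = 0.1                                    # seconds per time step
time = np.arange(0, 30, dt)
demand = np.zeros_like(time)                # 1 while the module is engaged, else 0
demand[(time >= 1.0) & (time < 2.5)] = 1.0  # e.g., a 1.5 s retrieval

predicted_bold = np.convolve(demand, hemodynamic_response(time), mode="full")[:len(time)] * dt
print(f"predicted peak at t = {time[np.argmax(predicted_bold)]:.1f} s")
```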
4. ACT-R: a Cognitive Architecture (4/4)
• Summary
  • Unlike the classic information-processing approach, the architecture is directly concerned with data about the brain.
  • Unlike eliminative connectionism, an architectural approach also focuses on how a fully functioning system can be achieved.
  • Unlike the rational approach and some connectionist approaches, ACT-R does not ignore issues about how the components of the architecture are integrated.
5. Symbols vs. Connections in a CA (1/6)
• Debate
  • There is a notorious debate between symbolic and connectionist architectures.
  • There is no consensus about what role symbols play in an explanation of mind.
  ※ "+" indicates an explanatory role, "-" a non-explanatory role
• 1. +Symbols, -Connections:
  • Explanation lies in the transformation of the structural properties of symbolic representations.
  • The physical processes that realize these symbols are unimportant.
• 2. -Symbols, +Connections:
  • This position is called eliminative connectionism.
  • It seeks to eliminate symbols from the explanation of cognition.
  • It views symbols much like elements in explicitly stated rules: "if the verb ends in d or t, add ed."
5. Symbols vs. Connections in a CA (2/6)
• 3. +Symbols, +Connections:
  • Both play an important explanatory role.
  • e.g., the Integrated Connectionist/Symbolic (ICS) architecture
• 4. -Symbols, -Connections:
  • Reject both architectures and offer other explanatory devices.
  • Functionalism, some varieties of behaviorism
  • Situated cognition: the explanation resides in what is outside the human.
  ※ Because there is no agreement about what symbols mean, these debates are a waste of time.
5. Symbols vs. Connections in a CA (3/6)
• The symbolic-subsymbolic distinction
  • The symbolic level in ACT-R is an abstract characterization of how brain structures encode knowledge.
  • The subsymbolic level is an abstract characterization of the role of neural computation in making that knowledge available.
• Newell (1990) identifies the critical role of symbols
  • Symbols provide distal access to knowledge: information must be brought from other locations.
  • This is exactly what they do in ACT-R.
• Question
  • What information will be brought, and how quickly will that information appear?
  • This is what the subsymbolic level is about.
5. Symbols vs. Connections in a CA (4/6)
• The symbolic-subsymbolic distinction in the declarative module
  • Sugar factory task (Fig 1.9)
  • Chunks (symbolic level)
    • ACT-R has networks of knowledge encoded in what we call chunks.
    • Chunks have activations at the subsymbolic level.
  • Activations (subsymbolic level; see the sketch below)
    • The most active chunk will be the one retrieved.
    • Its activation value is determined by computations that attempt to abstract the impact of neural Hebbian-like learning and the spread of activation among neurons.
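The sketch below illustrates, under simplified assumptions, the subsymbolic computation just described: each chunk gets a base-level activation plus activation spreading from the cues in the current context, and the most active chunk is retrieved, more quickly the more active it is. The chunks, numbers, and the exponential latency rule are illustrative and in the spirit of ACT-R rather than its exact equations.

```python
import math

# Sketch of activation-based retrieval at the subsymbolic level:
# each chunk has a base-level activation plus spreading activation
# from the current context, and the most active chunk wins.
# Chunk contents and numbers are illustrative.

chunks = {
    "fact-7+5": {"base": 1.2, "associations": {"7": 0.8, "5": 0.8, "12": 0.6}},
    "fact-7+3": {"base": 0.9, "associations": {"7": 0.8, "3": 0.8, "10": 0.6}},
}
context = ["7", "5"]        # cues currently in the buffers
source_weight = 0.5         # attention spread over the cues

def activation(chunk):
    spread = sum(source_weight * chunk["associations"].get(cue, 0.0) for cue in context)
    return chunk["base"] + spread

best_name = max(chunks, key=lambda name: activation(chunks[name]))
latency = 0.3 * math.exp(-activation(chunks[best_name]))  # higher activation -> faster retrieval
print(best_name, f"retrieved in {latency:.3f} s")
```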
5. Symbols vs. Connections in a CA (5/6)
• The symbolic-subsymbolic distinction in the procedural module
  • The procedural module consists of production rules.
  • Illustration of a production rule in ACT-R (Fig 1.10)
    • a general pattern and the location of information
    ☞ symbolic level
  • When multiple production rules apply in a situation (see the sketch below):
    • Productions have utilities, and the production with the highest utility is chosen.
    ☞ subsymbolic level
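A minimal sketch of utility-based conflict resolution in the spirit of the point above: several matching production rules compete, the one with the highest (noisy) utility fires, and its utility is then adjusted toward the reward it produced. The rule names, numeric values, and learning rate are illustrative assumptions.

```python
import random

# Sketch of subsymbolic conflict resolution: when several production
# rules match the current buffers, the one with the highest (noisy)
# utility fires, and utilities are nudged toward the rewards they earn.
# Rule names, utilities, and the learning rate are illustrative.

utilities = {"unwind-subtraction": 4.0, "guess-answer": 2.5}  # matching rules
noise_sd = 0.5
learning_rate = 0.2

def choose_production():
    noisy = {rule: u + random.gauss(0, noise_sd) for rule, u in utilities.items()}
    return max(noisy, key=noisy.get)

def update_utility(rule, reward):
    """Move the rule's utility toward the reward it just earned."""
    utilities[rule] += learning_rate * (reward - utilities[rule])

fired = choose_production()
update_utility(fired, reward=5.0)
print(fired, utilities)
```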
5. Symbols vs. Connections in a CA (6/6)
• Final reflections on the symbolic-subsymbolic distinction
• A source of confusion
  • Nothing in the production rule in Fig 1.10 is different from the pattern-matching capabilities of standard connectionist networks.
  • The actual code looks like the cognitive-science stereotype of a symbol as a piece of text, but those are symbols for the simulation program, not the symbols of the ACT-R architecture.
• Level of description
  • Choosing the best level of description is a strategic decision; the same level might not be best for all applications.
  • ACT-R: higher-level processes such as equation solving; the gap from neurons and brain processes is smaller in the case of ACT-R.
  • Connectionist models: perceptual processing.