SEL2211: Contexts Lecture 7: Cognitive Science A Necker Cube http://en.wikipedia.org/wiki/Necker_Cube
The last few lectures: • Looked at the theory of computers and its relationship to cognition • The Computational Theory of Mind • The mind uses algorithms to build up structured objects Today: • Various disciplines that use this idea • ‘Connectionism’ as an implementation
Cognitive Science: An Introduction • Many phenomena look as though mental representations, not the stimuli themselves, are what's important. Your mental representation determines what you 'see'.
‘Folk psychology’ (aka ‘common sense’ or ‘naïve’ psychology) • People have beliefs, desires, intentions, etc. • A remarkably accurate theory • The question of levels: what is multiplication? • A high level description: (25, 15) → 375 • A lower level description: the decimal multiplication algorithm • A still lower level: neurons and their organization in John’s brain
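To make the question of levels concrete, here is a minimal sketch (my own illustration, not from the lecture) of two different lower-level algorithms that compute the same high-level function, e.g. (25, 15) → 375:

```python
# Two different algorithms that realize the same high-level mapping,
# e.g. (25, 15) -> 375. Illustrative sketch only; the names are mine.

def multiply_by_addition(a, b):
    """Successive addition: add a to itself b times."""
    total = 0
    for _ in range(b):
        total += a
    return total

def multiply_decimal(a, b):
    """Schoolbook long multiplication: one partial product per decimal digit of b."""
    total, place = 0, 1
    while b > 0:
        digit = b % 10              # lowest remaining decimal digit of b
        total += a * digit * place  # add this digit's partial product
        place *= 10
        b //= 10
    return total

print(multiply_by_addition(25, 15))  # 375
print(multiply_decimal(25, 15))      # 375
```

At the highest level the two are doing 'the same thing'; at the formal level they differ.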
Marr (1982): three levels • The ‘functional’/’knowledge’/’competence’ level (the highest level) • The ‘formal’ level (specific representations and algorithms) • The ‘implementation’ level (physical realization) • The key observation: all of these levels have something to contribute • The functional level – the locus of ‘meaning’ • Without this, there is no way to say that John and a digital computer are doing ‘the same thing’.
The formal level – the specific algorithm • Which specific procedure are you using to multiply (e.g., decimal multiplication vs. successive addition, as sketched above)? • The physical level • Important for both obvious and non-obvious reasons • Some algorithms will be impossible given certain physical implementations So now, on to some specific sub-areas of cognitive science and the representations they use
Cognitive Psychology • the nature of concepts [http://en.wikipedia.org/wiki/Dog] How do these things figure in our mental life? What’s the nature of the representation?
The Definitional View • DOG is a network of propositions • X has four legs • X barks • etc. • However, • (a) probably false: people don’t actually know the exact definition • (b) a dog can lose whatever its most important/typical property is without ceasing to be a dog
A ‘family resemblance’ view • Looser, more probabilistic • Some members of a family are very typical, in that they share many features with other family members; others are very atypical • So magpies are typical birds, penguins not so much
Experimental evidence for the ‘family resemblance’ view of mental representations • Rosch & Mervis (1975): fruit, vegetable, clothing, furniture, vehicle, weapon • Subjects were given 20 examples of each and asked to list features of each item (e.g., chair – “for sitting on”, “made of wood or metal”, “has a seat, back and legs”, etc.) • Then a ‘family resemblance’ score was calculated for each item, based on how commonly its features were also used to describe the other examples in the set (of, e.g., furniture) • So ‘chair’ is a very typical instance of furniture, ‘telephone’ very atypical
What’s actually interesting about that? • Family resemblance score could predict people’s performance on other tasks. • Independent group of subjects asked “is this item an example of a particular category (Yes/No)?” • Speed of ‘yes’ answer correlated with typicality score. • Conclusion: looks like ‘family resemblance’ plays some role in humans’ mental representation of concepts
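A minimal sketch of how such a family resemblance score can be computed (a simplified version of Rosch & Mervis's measure; the toy feature lists below are invented for illustration):

```python
# A simplified sketch of a Rosch & Mervis-style family resemblance
# score: an item scores higher the more its features are shared by
# other members of the category. The toy feature lists are invented.

furniture = {
    "chair":     {"for sitting on", "has legs", "made of wood or metal"},
    "table":     {"has legs", "made of wood or metal", "flat top"},
    "sofa":      {"for sitting on", "has legs", "upholstered"},
    "telephone": {"made of plastic", "has buttons"},
}

def family_resemblance(item, category):
    """Sum, over the item's features, of how many members share each feature."""
    return sum(
        sum(feature in features for features in category.values())
        for feature in category[item]
    )

for item in furniture:
    print(item, family_resemblance(item, furniture))
# 'chair' scores highest (typical); 'telephone' scores lowest (atypical)
```

On this toy data 'chair' comes out as the most typical item of furniture and 'telephone' as the least, mirroring the pattern Rosch & Mervis found.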
Marr’s Representational Theory of Vision • Vision starts with very little to work with: just patterns of light intensity on the retina (from http://webvision.med.utah.edu/book/part-i-foundations/simple-anatomy-of-the-retina)
How do you get from patterns of light intensity to 3D objects? • Marr says: via a series of representations • The ‘Primal Sketch’ • A ‘2.5D’ representation • A full 3D representation
The Primal Sketch • From changes in light intensity you get • Edges • Lines • Curves • ‘Blobs’ • A primal sketch model in action http://vcla.stat.ucla.edu/old/Chengen_Research/primal_sketch.htm
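The core idea of the primal sketch — edges sit where light intensity changes sharply — can be illustrated with a toy one-dimensional example (my own sketch; Marr's actual operator was more sophisticated, based on zero-crossings of a smoothed second derivative):

```python
# A toy sketch of the idea behind the primal sketch: edges sit where
# light intensity changes sharply. Simple differencing on a 1-D row of
# pixel intensities; the intensity values are invented for illustration.

row = [10, 10, 11, 10, 80, 82, 81, 80, 12, 11, 10]

threshold = 30  # minimum intensity jump that counts as an edge
edges = [
    i for i in range(1, len(row))
    if abs(row[i] - row[i - 1]) > threshold
]
print(edges)  # [4, 8]: one edge into the bright region and one out of it
```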
The 2.5D Sketch • Represents the orientation and depth of the visible surfaces. Draws not only on information from the primal sketch, but also on information about motion and surface texture. However, the surfaces themselves aren’t yet grouped into objects. (Figure 2 from Marr & Nishihara (1978: 274))
The full 3D representation • A primitive vocabulary of shapes (generally various kinds of cylinders) is used to build up parts of objects, which are then combined to form the whole. (from http://www.doc.gold.ac.uk/~mas02fl/MSC101/Vision/Marr.html)
A (very brief) introduction to connectionism • Background: we’ve been assuming the ‘classical’ model of cognitive architecture • The mind works directly with structured symbolic mental representations of various sorts (phrase-structure trees for syntax, family-resemblance clusters for concepts, etc.) • Beginning of the 1980s: • Move cognitive science closer to neuroscience • The mind works with artificial neural networks, not symbolic representations
A connectionist model in action (from Stillings et al. (1995)) [Figure sequence: a small feedforward network with a two-unit input layer, a two-unit hidden layer, and a one-unit output layer, stepped through the four input pairs (0, 0), (0, 1), (1, 0), and (1, 1). Each input unit connects to the first hidden unit with weight 7 and to the second with weight -4; the hidden units carry biases of -3 and 7; both connect to the output unit with weight 7, and the output unit has bias -10. The network outputs 0, 1, 1, 0 respectively: it computes exclusive-or.]
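The network stepped through on these slides can be reconstructed as a small program. A minimal sketch, assuming simple binary threshold units (fire if net input is positive) and the weights and biases read off the figure; the variable names are mine, not Stillings et al.'s:

```python
# A minimal sketch of the network stepped through above, assuming
# binary threshold units (output 1 if net input > 0, else 0) and the
# weights/biases read off the slides. Names are illustrative only.

def step(net):
    """Threshold activation: fire iff net input is positive."""
    return 1 if net > 0 else 0

def network(i1, i2):
    # Input -> hidden: each input feeds h1 with weight 7 and h2 with -4.
    h1 = step(7 * i1 + 7 * i2 - 3)   # hidden unit with bias -3
    h2 = step(-4 * i1 - 4 * i2 + 7)  # hidden unit with bias +7
    # Hidden -> output: both hidden units feed the output with weight 7.
    return step(7 * h1 + 7 * h2 - 10)  # output unit with bias -10

for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", network(*pair))  # prints 0, 1, 1, 0: exclusive-or
```

Running it prints 0, 1, 1, 0 for the four input pairs: the network computes exclusive-or, a function famously not computable without the hidden layer.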
Some points to note • The ‘magic’ is in the weights • Change the weights between nodes and you change the outcome • Neural nets can also ‘learn’, by gradually adjusting their weights in response to feedback (see the sketch below)
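What 'learning' means here can be illustrated with the classic perceptron weight-update rule (a standard textbook rule, not taken from the slides): nudge each weight in proportion to the error the unit makes on an example.

```python
# A minimal sketch of learning by weight adjustment, using the classic
# perceptron rule (a standard textbook rule, not taken from the slides).
# A single threshold unit learns logical OR from labelled examples.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
rate = 0.5  # learning rate: how strongly each error nudges the weights

for _ in range(10):  # repeated passes over the training data
    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - out
        # Nudge each weight toward reducing the error on this example.
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
    print((x1, x2), "->", out)  # after training: 0, 1, 1, 1 (logical OR)
```

No one programs the weights in by hand; they are shaped by repeated exposure to examples, which is the point of the 'learning' claim on the slide above.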
Advantages • Neural plausibility • Neurons are (we think) the key relevant part of the brain • Learning by adjusting weights seems to fit with the idea of changes in how efficiently neurons conduct their electrical signals (Lecture 3) • Well-suited for computing ‘soft’ rather than ‘hard’ constraints
Disadvantages • Can’t capture ‘systematicity’ (Fodor & Pylyshyn 1988) • If you know that “John likes Bill” is a well-formed expression, then you also know that “Bill likes John” is, because “John” and “Bill” are literally component parts of the representation “John likes Bill”. While you could set up a neural network that had that result, nothing about the way it works forces it to be true. You could just as easily set one up so that if “John likes Bill” is well-formed, so is “The Last of the Mohicans”.