
Presentation Transcript


  1. SEL2211: Contexts Lecture 7: Cognitive Science A Necker Cube (http://en.wikipedia.org/wiki/Necker_Cube)

  2. The last few lectures: • Looked at the theory of computers and its relationship to cognition • The Computational Theory of Mind • The mind uses algorithms to build up structured objects Today: • Various disciplines that use this idea • ‘Connectionism’ as an implementation

  3. Cognitive Science: An Introduction • Many phenomena look as though mental representations are what’s important, not the stimuli themselves • Your mental representation determines what you ‘see’ (the Necker cube on slide 1: the stimulus stays constant while the perceived 3D orientation flips)

  4. ‘Folk psychology’ (aka ‘common sense’ or ‘naïve’ psychology) • People have beliefs, desires, intentions, etc. • A remarkably accurate predictive theory • The question of levels: what is multiplication (say, John multiplying 25 by 15)? • A high-level description: the mapping (25, 15) → 375 • A lower-level description: the decimal multiplication algorithm • A still lower level: neurons and their organization in John’s brain

  5. Marr (1982): three levels • The ‘functional’/‘knowledge’/‘competence’ level (the highest level) • The ‘formal’ level (specific representations and algorithms) • The ‘implementation’ level (physical realization) • The key observation: all of these levels have something to contribute • The functional level – the locus of ‘meaning’ • Without this, there is no way to say that John and a digital computer are doing ‘the same thing’.
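To make the levels concrete, here is a minimal sketch (mine, not from the lecture) of the multiplication example: two different algorithms at the formal level realizing one and the same function at the functional level.

```python
def multiply_builtin(m, n):
    """Formal level, algorithm 1: the machine's native multiplication."""
    return m * n

def multiply_by_addition(m, n):
    """Formal level, algorithm 2: successive addition."""
    total = 0
    for _ in range(n):
        total += m
    return total

# Functional level: both realize the same mapping, (25, 15) -> 375,
# even though the algorithms (and the physical machinery running them) differ.
print(multiply_builtin(25, 15))      # 375
print(multiply_by_addition(25, 15))  # 375
```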

  6. The formal level – the specific algorithm • In which specific way are you carrying out the multiplication (e.g., decimal multiplication vs. successive addition, as in the sketch above)? • The physical level • Important for both obvious and non-obvious reasons • Some algorithms will be impossible given certain physical implementations. So now, on to some specific sub-areas of cognitive science and the representations they use

  7. Cognitive Psychology • The nature of concepts, e.g., DOG [http://en.wikipedia.org/wiki/Dog] • How do these things figure in our mental life? • What’s the nature of the representation?

  8. The Definitional View • DOG is a network of propositions • X has four legs • X barks • etc. • However: • (a) probably false – people don’t in fact know an exact definition • (b) a dog can lose whatever the most important/typical property is without ceasing to be a dog
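A minimal sketch of objection (b), with made-up defining features: treating the propositions as necessary conditions wrongly excludes atypical dogs.

```python
# DOG as a network of propositions, treated as necessary conditions.
DOG_DEFINITION = {"has four legs", "barks", "has fur"}

def is_dog(features):
    # Definitional membership: every defining proposition must hold.
    return DOG_DEFINITION <= features

print(is_dog({"has four legs", "barks", "has fur"}))  # True
# Objection (b): a three-legged dog that never barks fails the test,
# yet it is still a dog.
print(is_dog({"has three legs", "has fur"}))          # False
```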

  9. A ‘family resemblance’ view • Looser, more probabilistic • Some members of a family are very typical, sharing many features with the other members; others are very atypical, sharing few • So, magpies are typical birds, penguins not so much

  10. Experimental evidence for the ‘family resemblance’ view of mental representations • Rosch & Mervis (1975): fruit, vegetable, clothing, furniture, vehicle, weapon • Subjects were given 20 examples of each and asked to list features of each item (e.g., chair – “for sitting on”, “made of wood or metal”, “has a seat, back and legs”, etc.) • A ‘family resemblance’ score for each item was then calculated from how commonly its feature phrases were also used to describe the other examples in the set (of, e.g., furniture) • So ‘chair’ is a very typical instance of furniture, ‘telephone’ very atypical
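A minimal sketch of the scoring idea, with invented feature lists (the actual Rosch & Mervis materials were far richer): an item’s score is the sum, over its features, of how many category members share each feature.

```python
# Invented feature lists; illustrative only.
furniture = {
    "chair":     {"has legs", "has a seat", "made of wood", "for sitting on"},
    "table":     {"has legs", "made of wood", "has a flat surface"},
    "sofa":      {"has a seat", "for sitting on", "has cushions"},
    "telephone": {"has buttons", "used for calls"},
}

def family_resemblance(item, category):
    """Sum, over the item's features, of how many members share each one."""
    return sum(
        sum(feature in member_features for member_features in category.values())
        for feature in category[item]
    )

for item in furniture:
    print(item, family_resemblance(item, furniture))
# "chair" scores highest (its features recur across the category);
# "telephone" scores lowest (its features are unique to it).
```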

  11. What’s actually interesting about that? • The family resemblance score could predict people’s performance on other tasks • An independent group of subjects was asked “Is this item an example of category X?” (Yes/No) • The speed of the ‘yes’ answer correlated with the typicality score • Conclusion: it looks like ‘family resemblance’ plays some role in humans’ mental representation of concepts

  12. Marr’s Representational Theory of Vision • Vision has very little to work with: just patterns of light intensity falling on the retina (figure from http://webvision.med.utah.edu/book/part-i-foundations/simple-anatomy-of-the-retina)

  13. How do you get from patterns of light intensity to 3D objects? • Marr says: via a series of representations • The ‘Primal Sketch’ • A ‘2.5D’ representation • A full 3D representation

  14. The Primal Sketch • From changes in light intensity you get • Edges • Lines • Curves • ‘Blobs’ • A primal sketch model in action: http://vcla.stat.ucla.edu/old/Chengen_Research/primal_sketch.htm
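A minimal sketch of the core idea that edges come from changes in light intensity (this is deliberately crude; Marr’s actual procedure used filtered images and zero-crossings):

```python
import numpy as np

# A toy "image": a dark region next to a bright one.
image = np.array([
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
    [10, 10, 10, 200, 200],
], dtype=float)

# Horizontal intensity change at each pixel: the difference between
# neighbouring columns. Large magnitudes mark candidate edge points.
dx = np.diff(image, axis=1)

# Threshold the changes to get a crude edge map.
edges = np.abs(dx) > 50
print(edges.astype(int))
# Every row reads [0 0 1 0]: the edge sits between columns 3 and 4.
```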

  15. The 2.5D Sketch • Represents the orientation and depth of the various visible surfaces • Draws not only on information from the primal sketch but also on information about motion and surface texture • However, the surfaces themselves aren’t yet grouped into objects (Figure 2 from Marr & Nishihara (1978: 274))

  16. The full 3D representation • A primitive vocabulary of shapes (generally various kinds of cylinders) is used to build up parts of objects, which are then combined to form the whole. (from http://www.doc.gold.ac.uk/~mas02fl/MSC101/Vision/Marr.html)
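A toy sketch of the hierarchical decomposition idea, using Marr & Nishihara’s stock human-figure example; the class, part names, and proportions are illustrative assumptions, not from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class Cylinder:
    """A primitive shape with an axis; length/width values are relative."""
    name: str
    length: float
    width: float
    parts: list = field(default_factory=list)  # sub-cylinders

# A human figure as nested cylinders: parts combine to form the whole.
human = Cylinder("body", 1.0, 0.3, parts=[
    Cylinder("head", 0.2, 0.2),
    Cylinder("arm", 0.5, 0.1, parts=[Cylinder("forearm", 0.25, 0.08)]),
    Cylinder("leg", 0.6, 0.12),
])

def describe(c, depth=0):
    print("  " * depth + f"{c.name}: length {c.length}, width {c.width}")
    for part in c.parts:
        describe(part, depth + 1)

describe(human)
```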

  17. A (very brief) introduction to connectionism • Background: we’ve been assuming the ‘classical’ model of cognitive architecture • The mind works directly with symbolic mental representations of various sorts (phrase-structure trees for syntax, family-resemblance clusters for concepts, etc.) • From the beginning of the ’80s: • Move cognitive science closer to neuroscience • The mind works with (artificial) neural networks rather than structured symbolic representations

  18.–31. A connectionist model in action (from Stillings et al. (1995)) [Figure series: a three-layer network – input layer, hidden layer, output layer – of threshold units, stepped through on each of the four binary input pairs (0,0), (0,1), (1,0) and (1,1). The weights shown in the figures: each input unit connects to one hidden unit with weight 7 and to the other with weight −4; the hidden units carry biases of −3 and 7 respectively; both hidden-to-output connections have weight 7; and the output unit carries a bias of −10. The network fires for (0,1) and (1,0) only – it computes exclusive-or.]
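A minimal sketch of the stepped-through network, with the weights and biases read off the figures; the strict threshold activation (fire when the net input exceeds 0) is an assumption, but it reproduces the traced values.

```python
from itertools import product

def step(net):
    """Threshold activation: the unit fires if its net input exceeds 0."""
    return 1 if net > 0 else 0

def network(i1, i2):
    # Input -> hidden: each input sends weight 7 to one hidden unit and
    # weight -4 to the other; hidden biases are -3 and +7.
    h1 = step(7 * i1 + 7 * i2 - 3)
    h2 = step(-4 * i1 - 4 * i2 + 7)
    # Hidden -> output: both weights 7; output bias -10.
    return step(7 * h1 + 7 * h2 - 10)

for i1, i2 in product([0, 1], repeat=2):
    print((i1, i2), "->", network(i1, i2))
# (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, (1, 1) -> 0: exclusive-or.
```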

  32. Some points to note • The ‘magic’ is in the weights • Change the weights between nodes and you change the outcome • Neural nets can also ‘learn’: the weights can be adjusted step by step in response to feedback (a minimal sketch follows)
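One standard way to make ‘learning’ concrete is the perceptron learning rule (an assumption here – the slides don’t name a specific procedure): nudge each weight whenever the network’s answer is wrong. This toy run learns logical OR.

```python
def step(net):
    return 1 if net > 0 else 0

# Training data for logical OR: ((input1, input2), target output).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0
rate = 0.5

for _ in range(10):  # a few passes over the training set
    for (x1, x2), target in data:
        error = target - step(w1 * x1 + w2 * x2 + bias)
        # Nudge each weight in the direction that reduces the error.
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

print([step(w1 * x1 + w2 * x2 + bias) for (x1, x2), _ in data])
# -> [0, 1, 1, 1]: the adjusted weights now implement OR.
```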

  33. Advantages • Fits with neural plausibility • Neurons are (we think) the key relevant part of the brain • Learning by adjustable weights seems to fit with the idea of changes in how efficiently neurons conduct their electrical signals (Lecture 3) • Well suited to computing ‘soft’ rather than ‘hard’ constraints

  34. Disadvantages • Can’t capture ‘systematicity’ (Fodor & Pylyshyn 1988) • If you know that “John likes Bill” is a well-formed expression, then you also know that “Bill likes John” is, because “John” and “Bill” are literally component parts of the representation “John likes Bill” • While you could set up a neural network that had that result, nothing about the way it works forces it to be true: you could just as easily set one up so that if “John likes Bill” is well-formed, so is “The Last of the Mohicans”
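A minimal illustration of what ‘component parts’ buys you in a classical representation (a deliberately crude sketch, not Fodor & Pylyshyn’s formalism):

```python
# A classical representation: constituents are literally parts of the whole.
sentence = ("John", "likes", "Bill")
subject, verb, obj = sentence

# Recombination is automatic: anything that can build the first structure
# can build the second from the very same component symbols.
swapped = (obj, verb, subject)
print(sentence)  # ('John', 'likes', 'Bill')
print(swapped)   # ('Bill', 'likes', 'John')
```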
