What do experiments on mental imagery tell us about consciousness?
• The rediscovery of mental imagery in the 1950s marked the return of cognitive concepts within a behaviorist context.
• The term mental image has been universally assumed to mean a conscious mental state, similar to a perceptual state. Thus it provides a way to examine the cortical embodiment of one of the most frequently studied conscious states.
• Some empirical properties of mental images, together with assumptions about their connection with perception, have led many people to search for specific pictorial objects in the brain.
• NB: The question whether there are (or can be) unconscious mental images has, to my knowledge, never been raised. This is an interesting idea which runs counter to everyone's assumptions about images (and is therefore worth exploring).
One of the strongest cases for the causal role of consciousness comes from the study of reasoning with mental images
• The 'psychological reality' of mental images is unquestioned because many experiments have shown that manipulating imagery instructions has a large and reliable effect on behavior.
• For example, there are many demonstrations that in order to recall something you must first recall an image of something else, and then examine that image for the answer:
• On which side of your front door is the handle? How many windows are there in your kitchen? What is 6 feet behind you? What is in your pocket? Did any US president have a beard (or glasses)?
• Imagine a cube balanced on one corner, then point to and count its corners.
• What do you 'see' if you rotate a 'D' by 90° and put it on top of a 'J'?
Among the early findings suggesting a causal role for consciousness is the use of mental images in learning and memory
• A good mnemonic for recalling a list of items is to combine an image of each item on the list with a place along the image of a familiar path (street, sidewalk...). Then imagine walking along the path and leaving the items from your list at familiar places. To recall the list you reverse this walk and notice the items.
• E.g., the items might be a shopping list, such as bread, milk, eggs, ... Then imagine walking along a path from the store to this room, dropping off the items on the list: the bread goes at the corner, the milk goes under the tree, the eggs go by the light, etc.
• To recall the list, imagine walking back along that path while looking for any objects you recognize as being on the list. You might see the eggs, then the milk, then the bread. This "method of loci" is used by actors for obvious reasons (a small sketch of the idea follows below).
• In order to remember pairs of words, do this: imagine the objects denoted by each of the word pairs interacting in some way – the more bizarre the better. Imagine the pairs and remember the image of each pair. When it's time to recall you will be given one member of the pair. Then you must find the image that contains that object and see what other object is with it.
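Purely as an illustration (this sketch and its example lists are mine, not part of the lecture), the method of loci can be caricatured as a simple data structure: encoding pairs each list item with successive places along a familiar path, and recall walks those places in reverse.

```python
# Minimal sketch of the method of loci (illustrative only; the lists are hypothetical).
loci = ["corner", "tree", "light"]          # ordered places along a familiar path
shopping_list = ["bread", "milk", "eggs"]   # items to be remembered

# Encoding: imagine each item left at the corresponding place along the path.
memory = dict(zip(loci, shopping_list))     # {'corner': 'bread', 'tree': 'milk', 'light': 'eggs'}

# Recall: walk the path in reverse and "notice" whatever was left at each place.
recalled = [memory[place] for place in reversed(loci)]
print(recalled)                             # ['eggs', 'milk', 'bread']
```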
One of the strongest cases for the causal role of consciousness comes from the study of reasoning using mental images
• In the 1970s most research on mental imagery studied the effect of imagery (or of the "imageability" rating of words) on learning and memory. This research brought the idea of mental imagery back into mainstream psychology, but the next big thing in imagery research was to ask subjects to do things with their image – to derive conclusions by examining their image. This era was most visibly occupied by Stephen Kosslyn (with whom I have debated often – see my paper "The Imagery Debate: …" in your reading list).
• Research in this period found more and more ways in which imagining something was like seeing a picture of that thing. It was a case study in the Stimulus Error or the Intentional Fallacy. Before describing some of the research and the way conclusions were drawn from it, I will reiterate why this has something to do with the course theme, Consciousness, and will also give you a demonstration of the Intentional Fallacy at work.
On the difference between explanations that appeal to mental architecture and those that appeal to tacit knowledge
Methodological aside: This difference is closely related to the intentional fallacy and so deserves a special aside.
Aside on the parable of the mystery box
• A Cognitive Scientist, out walking in a field, comes upon a black box which happens to have a meter, and a recorder that records the meter's changes over time.
• The Cognitive Scientist examines lots of records generated by the box and finds the pattern to be quite systematic. It follows this regular pattern:
An illustrative example: Mystery Code Box
What does this behavior pattern tell us about the nature of the box?
An illustrative example: Mystery Code Box
Careful study revealed that pattern #2 only occurs in one special context: when it is preceded by pattern A. What does this behavior pattern tell us about the nature of the box?
The punch line:
• The black box is transmitting International Morse Code – in this case, English messages in IMC.
• The Morse code for e is ▪, for i is ▪ ▪, and for c is ▬ ▪ ▬ ▪.
• There is a spelling 'rule' in English; roughly, it's i (▪ ▪) before e (▪) except after c (▬ ▪ ▬ ▪).
• The pattern that was observed is due entirely to this rule of English spelling applied to Morse code.
• Because this box can presumably transmit any pattern of dots and dashes, the regularity will not be found in the way it is wired, but in what it (correctly) represents.
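To make the parable concrete, here is a minimal sketch (my illustration, not part of the lecture) of how the observed regularity falls out of English spelling applied to Morse code rather than out of the box's wiring; the example words are arbitrary.

```python
# A minimal sketch of the mystery box: its "regularity" comes from English
# spelling, not from how the box is wired.
MORSE = {"b": "-...", "c": "-.-.", "e": ".", "i": "..",
         "l": ".-..", "r": ".-.", "v": "...-"}   # just the letters needed here

def transmit(word):
    """Return the sequence of dot-dash patterns the box would emit for a word."""
    return [MORSE[ch] for ch in word]

# "i before e except after c": normally '..' (i) precedes '.' (e),
# but after '-.-.' (c) the order reverses.
print(transmit("believe"))  # ['-...', '.', '.-..', '..', '.', '...-', '.']
print(transmit("receive"))  # ['.-.', '.', '-.-.', '.', '..', '...-', '.']
```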
The Moral: Regularities in behavior may be due to either:
• The inherent nature of the system (its structure), or its Architecture
• The nature of what the system represents (what it "knows"), or its Representations
Given this regularity, are we now in a position to figure out how the box works? • Does the regularity itself place constraints on potential structural-functional properties of the black box? • Will it help in inferring how the box works?
In general the behavioral repertoire does not determine the structure of the box
• That's because the particular behavior of the box in the example is not constrained by its structure, but by its relation to its environment and its function as described at a particular level – in an intentional vocabulary.
• To find out which vocabulary is the correct one, a scientist must study the behavior of the box in special test situations (not in "ecologically valid" ones). Only rarely will the right vocabulary be obvious, which is why science is hard.
Note the parallel with regularities attributed to the nature of mental images. In most cases they are attributable to what the subject knows about the imagined situation.
Before turning to the puzzle of conscious thoughts, here is a short detour to the intuitive idea that we think in words or pictures
• Recall the seductive idea behind the dual code theory of the mental.
• We experience our conscious thoughts as either sensory or verbal (spoken). What else could they be? What would it be like to experience something consciously that we could not in principle have perceived?
• How about pain (headache), dizziness, elation, fear, happiness? Are these basic, learned, or inferred experiences? NB the James-Lange theory of emotions.
• Consider first the intuitive idea that we think in our native language (e.g., English). This idea is so natural that most people do not even consider that it might need support!
• I claim the problem with either option in the dual code view is this: neither words nor pictures have the right variable grain or specificity to represent the content of thought.
As I sit here, writing some text to go with the heading I just typed, I am aware of imagining saying ("thinking") to myself: "I'd better hurry and finish this section or I will be late for my meeting."
Inner speech is not thinking!
In "saying" that sentence to myself in my inner voice, I meant something more than what appears in the sequence of words. I knew which particular text I meant when I thought "this section"; I knew how much time would have to pass (roughly minutes rather than hours or days) in order for it to count as my being "late" for a meeting; I knew which meeting I had in mind even though I only thought "my meeting"; and I knew what counts as "hurrying" when typing a section of text, as opposed to running a race. And for that matter I knew what "I" referred to, although the sentence did not specify who that was ("I" refers to different people on different occasions).
Imagined speaking is not thinking!
What is going on in my mind when I imagine speaking is the same sort of thing that goes on when I am actually speaking to someone (or to myself, as I sometimes do). It involves deciding how best to communicate some idea to another person by uttering a sequence of words, and then using the resources of the grammar of my language, as well as principles of conversation, to construct a sentence that conveys that meaning when conjoined with my other beliefs, including what I believe my hearer already knows (or is in a position to infer in that context). The sentence I decide to speak or imagine speaking is the end product of such a thought process; it is not the thought.
Neither inner nor outer speech expresses all the thoughts behind it
Sentences alone never express all and only what their speakers mean or think. Because what we experience when we seem to be "thinking in words" is an imaginary dialogue, the sentences of this imagined dialogue follow rules of "discourse" such as those referred to as Gricean Maxims. These include principles of cooperation such as "make your statements as informative as required, but not more informative than necessary" (i.e., don't express what your hearer already knows). They include such principles as Truth (don't say what you know to be false), Informativeness (only say what your hearer does not know), Relevance, and Clarity (e.g., avoid ambiguity, vagueness, redundancy). Simple as these maxims are, they apply to all human dialogues, including imagined ones.
What we just saw is a case where much of the regularity does not reflect the nature of thoughts, but the constraints on discourse
• Like the pattern of dots and dashes in the code-box example, much of the pattern is caused not by the nature of the intentional representations but by the rules of discourse. The person determines the content by virtue of his thoughts, but the rest of the regularities are determined by the constraints on discourse.
• The person controls the content of thoughts, just the way he does in a conversation, but this content is given prior to the sentences being formed.
Preview of what is to come…
• Most, if not all, of the work on mental imagery in the past 40 or so years has been fueled by phenomenology and by the metaphor of mental space and mental pictures, which in turn has been used to explain empirical findings the way one might be tempted to explain the behavior of the black box – by looking for things inside instead of things outside the mind.
• In the case of mental imagery, much of the work has appealed to relevant spatial properties, but has done so by assuming that these properties are replicated inside the head!
Speculative aside concerning the role of space: space and self-consciousness?
• Some writers have suggested that self-consciousness arises from the ability to distinguish self from other on the basis of location.
• So being conscious of oneself is being able to situate oneself apart from others in space.
• If there is anything to this view of consciousness, the work on imagery could help place it in context, since empirical findings concerning mental imagery have been essentially about how we mentally represent space.
What constraints does imagery impose on representations?
• Recalling the code-box example, we might ask whether the regularities are due to the nature of our cognitive architecture or to what we want or believe.
• As with the code-box example we should ask what constraints are imposed by the mental architecture. Can your image have any properties you choose? If not, ask: why not? Some examples:
• Can you imagine an object viewed from all directions at once, or from no particular direction?
• Can you imagine a 4-dimensional object?
• Can you imagine a written character that is neither upper nor lower case? Or a figure that does not have a particular size or shape (e.g., Berkeley asked: can you imagine a triangle that is neither equilateral, nor isosceles, nor scalene, etc.)?
• Can you imagine a 3x3 array of numbers and read them back in any order?
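A small illustrative contrast (my example, not from the lecture): a genuine 3x3 array, as a data structure, can be read out in any order with equal ease, which is precisely the property the last question above suggests mental images lack.

```python
# Illustrative only: a real array supports read-out in any order equally easily,
# which is the property mental images conspicuously lack.
import random

grid = [[4, 9, 2],
        [3, 5, 7],
        [8, 1, 6]]

cells = [(r, c) for r in range(3) for c in range(3)]
random.shuffle(cells)                        # pick an arbitrary read-out order
print([grid[r][c] for r, c in cells])        # any order is as easy as any other for the array
```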
What constraints does imagery impose on representations? Images are better at representing spatial layouts than temporal patterns
More on Consciousness of space and arguments from neuroscience
• The intuition that images are spatial (or, as some put it, that they "have space") is deeper than most other questions about images. I have discussed this in mind-numbing detail in "Things and Places" (Chapter 4).
• According to Pictorialists (like Kosslyn), the argument over imagery is now in its third and final stage, where evidence from neuroscience has essentially put an end to the debate. I will look at some of this alleged debate-ending evidence and will show that it is characterized by the pervasive acceptance of the intentional fallacy.
One of the earliest and most-cited studies interpreted as showing that images are laid out in space: Mental Scanning – a 'window on the mind'
• The experiments seem to show that when you move your attention across a mental image, it takes longer (in real time) to move it a greater (imagined) distance – i.e., time and distance on the image appear to follow the same law as they do in the real world: time increases linearly with distance when speed is constant (a brief sketch of this relation follows below).
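Purely as an illustration (the scanning speed and base time below are made-up values, not data from the experiments), the linear relation being described is simply: predicted time equals a base response time plus distance divided by an assumed constant speed. On the tacit-knowledge reading developed in the next slides, subjects could produce such times just because they know this is how real scanning behaves.

```python
# Minimal sketch of the predicted scan-time pattern (all parameter values assumed).
def predicted_scan_time(distance_cm, speed_cm_per_s=10.0, base_s=0.4):
    """Time = base response time + distance / (constant) speed."""
    return base_s + distance_cm / speed_cm_per_s

for d in (2, 6, 12, 18):                     # imagined distances on the memorized map, in cm
    print(d, round(predicted_scan_time(d), 2))
# 2 0.6, 6 1.0, 12 1.6, 18 2.2  -- a straight line in distance
```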
Mental Scanning ("Window on the Mind")
• Kosslyn (and many hundreds of researchers and students) showed that if you ask subjects to move their attention from one place on an imagined map to another, it takes longer to do so when the distance represented is greater.
• This series of studies was judged by the journal editor to be an example of important new findings in psychology. It has become a classic.
• You might keep watch for possible cases of the intentional fallacy or the stimulus error!
Studies of mental scanning: do they show that images have metrical space? (Pylyshyn & Bannon; described in Pylyshyn, 1981)
Conclusion: The image scanning effect is Cognitively Penetrable – i.e., it depends on Tacit Knowledge.
Does visual imagery use the visual system?
• It depends on what you mean by use and visual system.
• To use vision can mean to visually perceive the image – a non-starter.
• Only early vision is relevant to this use of "perceive", since general vision can involve all of cognition. This point requires a whole lecture!
• This question is of interest to picture theorists because if vision is involved it would suggest that images are uninterpreted picture-like spatial displays. The visual module cannot be applied to already-interpreted data structures – we don't see symbolic descriptions.
• Let's assume that "the visual cortex" is "active" during mental imagery, as some fMRI studies have suggested.
• What follows from that? Does it tell us what the image itself must be like?
• Why should we believe that vision is involved?
• In the last 15 years the main support for the assumption that the visual system is involved has come from neuroscience.
Reasons for thinking that images are interpreted by the visual system
• Similar phenomenology of imagining & seeing (this reason actually overshadows all others)
• Re-perceiving and novel construals (a large but very problematic literature)
• Superposition & interference studies
• Visual illusions with projected images
• The ubiquitous role of attention
Reasons for thinking that images are interpreted by the visual system • Similar phenomenology of imagining & seeing • This phenomenology is what leads to the intentional fallacy as shown in the next cartoons from Kliban
This is what our conscious experience suggests goes on in vision… Kliban
This is what the demands of explanation suggest must be going on in vision… Conceptual representation – not a picture or icon
More demonstrations of the relation between vision and imagery
• Images constructed from descriptions
• The two-parallelogram example
• "Seeing" mental images lacks the critical signature properties of vision
• Involuntary properties of vision such as amodal completion, automatic 3D & apparent motion, different off-retina analysis, sensitivity to hints, …
• Reconstruals: Slezak
Do this imagery exercise: Imagine a parallelogram like this one. Now imagine an identical parallelogram directly below it. Connect each corner of the top parallelogram with the corresponding corner of the bottom parallelogram: what do you see when you imagine the connections? Did the imagined shape look (and change) like the one you see now?
Viewing Mental Images lacks signature properties of vision
• No 'amodal completion' of partly-occluded objects.
• Off-retina figures do not combine.
• Superposition is not as common as thought – it does not occur in general (so the assumed independent motivation for an internal screen is not supported).
• Most psychophysical phenomena do not occur in viewing mental images (notwithstanding some bad published studies).
Off-retinal information is different from retinal info (which itself suggests that all mental information is different from visual inputs)
Standard view of saccadic integration by superposition: Is it plausible? It is one of the arguments given in support of the view that mental imagery is like vision except that it enters higher up in the 'visual pathway'.
More questions about the relation between vision and imagery
• Conceptual information is never iconic or graphic, as picture theorists must assume.
• Images constructed from descriptions
• The D-J example(s)
• The two-parallelogram example
• Amodal completion
• Reconstruals: Slezak
Visual-motor adaptation and image-motor adaptation
• The basic prism adaptation setup: arm movement towards a target while wearing prism glasses.
• Now repeat with the arm unseen but the subject told where it is (actually where it would have been in the prism case).
• You get adaptation (Finke, R. A. (1979). The functional equivalence of mental images and errors of movement. Cognitive Psychology, 11, 235-264).
• But in the original experiment it has been shown that you don't need to see a hand; any indicator of where the hand is will do, as long as the subject believes his hand is where indicated: its being your hand is irrelevant.
Can images be visually reinterpreted?
• There have been many claims that people can visually reinterpret images.
• These have all been cases where one could easily figure out what the combined image would look like without actually seeing it (e.g., the J-D superposition mentioned earlier).
• Peterson's careful examination of visual "reconstruals" showed (contrary to her own conclusion) that images are never bistable (no Necker cube or figure-ground reversals), and when new construals were achieved from images they were quite different from those achieved in vision (more variable, more guessing from cues, etc.).
• The best evidence comes from a philosopher (Slezak, 1992, 1995).
Slezak figures
Pick one (or two) of these animals and memorize what they look like. Now rotate it in your mind by 90 degrees clockwise and see what it looks like.
What do parallels between seeing and imaging show? Phenomenology is ambiguous between:
1. Showing that imagery uses vision to examine an internal picture, or
2. Showing that vision does not involve an internal picture either!
There is little doubt that (2) is the correct option. Why would we need a second (internal) image when we have the original to look at? As Nelson Goodman said, 'One of the damn things is enough'!
Neuroscience: The picture-theory's last hope. Are there pictures in the brain?
There is no evidence of cortical displays of the right kind to explain imagery phenomena. Here is what there is that gives hope to picture-theorists:
Neuroscience has shown that the retinal pattern of activation is displayed on the surface of the cortex.
There is a topographical projection of retinal activity on the visual cortex of the cat and monkey.
Tootell, R. B., Silverman, M. S., Switkes, E., & de Valois, R. L. (1982). Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science, 218, 902-904.
Problems with drawing conclusions about the nature of mental images from neuroscience data
• The capacity for imagery and the capacity for vision are known to be independent. Moreover, all imagery results are also observed in the blind.
• Cortical topography is 2-D, but mental images are 3-D – all phenomena (e.g., rotation) occur in depth as well as in the plane.
• Patterns in the visual cortex are in retinal coordinates, whereas images are in world coordinates: your image stays fixed in the room when you move your eyes, turn your head, or even walk around the room.
• Accessing information from an image is very different from accessing it from the perceived world. The order of access from images is highly constrained.
• Conceptual rather than graphical properties are relevant to image complexity (e.g., in mental rotation).
Problems with drawing conclusions about mental images from the neuroscience evidence
• Retinal and cortical images are subject to Emmert's Law, whereas mental images are not;
• The signature properties of vision (e.g., spontaneous 3D interpretation, automatic reversals, apparent motion, motion aftereffects, and many other phenomena) are absent in images;
• A cortical display account of most imagery findings is incompatible with the cognitive penetrability of mental imagery phenomena, such as scanning and image size effects;
• The fact that the Mind's Eye is so much like a real eye (e.g., oblique effect, resolution fall-off) should serve to warn us that we may be studying what observers know about how the world looks to them, rather than what form their images take.