Readers routinely represent implied object rotation: The role of visual experience

Presentation Transcript


  1. Readers routinely represent implied object rotation: The role of visual experience Wassenburg & Zwaan, in press, QJEP Brennan Payne Psych 525 10.27.10

  2-3. Theories of discourse comprehension • Construction-Integration (Kintsch & van Dijk, 1978; Kintsch, 1998) • Structure-Building (Gernsbacher, 1990, 1997) • Event-Indexing Model (Zwaan et al., 1995) • Resonance Model (O’Brien et al., 1993, 1995, 1998) • Shared assumption that discourse comprehension can be modeled as the integration of abstract and amodal representations.

  4. Construction-Integration • Computational model • Different levels of representation: • Surface form of the text • Text base: propositional information from the text • Situation model: representation of the situation implied by the text; derived from the propositional text base • Proposition: “idea unit”; the smallest unit of knowledge, following predicate-argument form: PREDICATE(ARGUMENT1, ARGUMENT2)
  1a. The ranger saw the eagle in the sky. [SAW(RANGER, EAGLE)], [IN(EAGLE, SKY)]
  1b. The ranger saw the eagle in its nest. [SAW(RANGER, EAGLE)], [IN(EAGLE, NEST)]
  2a. The carpenter pounded the nail into the wall. [POUNDED(CARPENTER, NAIL)], [INTO(NAIL, WALL)]
  2b. The carpenter pounded the nail into the floor. [POUNDED(CARPENTER, NAIL)], [INTO(NAIL, FLOOR)]

  5. Alternative Account
  1. The carpenter pounded the nail into the wall. [POUNDED(CARPENTER, NAIL)], [INTO(NAIL, WALL)]
  2. The carpenter pounded the nail into the floor. [POUNDED(CARPENTER, NAIL)], [INTO(NAIL, FLOOR)]
  Proposition account: the two representations are nearly identical; the only difference is the noun specifying the implied orientation.
  Perceptual symbol account (Barsalou, 1999a, b; Stanfield & Zwaan, 2001; Zwaan et al., 2002): the two sentences imply very different perceptual representations.
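To make the contrast concrete, here is a minimal illustrative sketch in Python (my own, not part of the paper or of either account): it encodes the two nail sentences as predicate-argument propositions and shows that, under a purely propositional representation, they differ in just one argument even though the implied nail orientation differs completely.

    # Build the proposition set for a simple transitive sentence,
    # in PREDICATE(ARGUMENT1, ARGUMENT2) form.
    def propositions(agent, verb, obj, relation, location):
        return {(verb, agent, obj), (relation, obj, location)}

    wall_sentence = propositions("CARPENTER", "POUNDED", "NAIL", "INTO", "WALL")
    floor_sentence = propositions("CARPENTER", "POUNDED", "NAIL", "INTO", "FLOOR")

    # Symmetric difference: the only propositions not shared by the two sentences.
    # Only the location argument (WALL vs. FLOOR) distinguishes them, yet the
    # implied nail orientation (horizontal vs. vertical) is entirely different --
    # the contrast emphasized by the perceptual symbol account.
    print(wall_sentence ^ floor_sentence)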

  6. Do readers represent perceptual information? Stanfield & Zwaan (2001), Psych. Sci. • Sentence-picture verification task: “Was this mentioned in the previous sentence?” Example sentence: “John put the pencil in the cup.” [Slide figure: matching and mismatching pictures of the object, with mean verification latencies of 882 (329) and 838 (331) ms] • Significant differences in RT latencies when the pictured object’s orientation matched vs. mismatched the sentence.
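To spell out the task logic, the sketch below is a hypothetical Python outline of a single sentence-picture verification trial; the helpers show and get_keypress are placeholders rather than any real experiment software, and the point is only what the match/mismatch RT comparison is computed over.

    import time

    def verification_trial(sentence, picture, orientation_matches, show, get_keypress):
        # Present the sentence, then a probe picture of the mentioned object,
        # shown in an orientation that either matches or mismatches the sentence.
        show(sentence)
        show(picture)
        t0 = time.monotonic()
        response = get_keypress()                # "yes" = object was mentioned
        rt_ms = (time.monotonic() - t0) * 1000.0
        return {"rt_ms": rt_ms,
                "correct": response == "yes",    # critical objects are always mentioned
                "condition": "match" if orientation_matches else "mismatch"}

Averaging rt_ms within the match and mismatch conditions yields the latency difference summarized above.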

  7. Do readers represent perceptual information? • Perceptual traces (Zwaan & Kaschak, 2008) • Orientation (Stanfield & Zwaan, 2001) • Shape (Zwaan et al., 2002; Dijkstra et al., 2004) • Size (Taylor & Zwaan, 2010) • Movement and motion (Kaschak et al., 2005, 2006) • Color (Richter et al., 2009; Therriault et al., 2009) Previous research: nature of language representation (Sentence → Picture). Can this perceptual information affect online language processing? (Picture → Sentence)

  8. Current Study Wassenburg & Zwaan (in press), QJEP. Does recent visual exposure to an object in a specific orientation affect later language comprehension? • A 3-phase “visual memory” paradigm (Zwaan et al., 2010): • Phase 1: word-picture verification task; experimental items shown in a vertical or horizontal orientation • Phase 2: 15-minute filler task • Phase 3: eye-tracking session The three phases are presented as unrelated experiments to discourage strategic processing. Prediction: a match/mismatch effect, such that fixation times on the prepositional phrase that implies the object’s orientation (into the wall / in the cup) should be sensitive to the orientation of the previously seen image.

  9. Method • Participants: N = 34 (N = 28 after track loss/errors); 50% female; mean age = 20.3 years (range 18-24); native Dutch speakers • Materials:
  Phase 1: 80 word-picture items (60 fillers, 20 critical items); each critical item formed a match with its word; orientation (horizontal/vertical) counterbalanced across participants over 2 lists
  Phase 2: filler task (possibly the flag test)
  Phase 3: Tobii 2150 eye tracker; 40 Dutch sentences (20 fillers, 20 critical sentences using the critical words from Phase 1); half of each orientation matched and half mismatched
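As a concrete illustration of the counterbalancing just described, the sketch below (my own, with made-up item IDs rather than the authors' materials or scripts) assigns each critical object a horizontal orientation on one presentation list and a vertical orientation on the other, so that across participants every item contributes equally to the match and mismatch conditions in Phase 3.

    # Hypothetical item IDs standing in for the 20 critical objects from Phase 1.
    critical_items = [f"item_{i:02d}" for i in range(1, 21)]

    list_A, list_B = {}, {}
    for i, item in enumerate(critical_items):
        list_A[item] = "horizontal" if i % 2 == 0 else "vertical"
        list_B[item] = "vertical" if i % 2 == 0 else "horizontal"   # opposite of list A

    def phase3_condition(picture_orientation, implied_orientation):
        # Label an eye-tracking trial by whether the Phase 1 picture orientation
        # matches the orientation implied by the Phase 3 sentence.
        return "match" if picture_orientation == implied_orientation else "mismatch"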

  10. Method: Procedure • Phase 1: word-picture verification (e.g., a picture of a toothbrush) • Phase 2: VSP rotation filler task (approx. 15 min) • Phase 3: eye-tracking while reading sentences divided into five analysis regions, with the critical prepositional-phrase region (4) starred, e.g., “Aunt Karen finally found the toothbrush in the sink of the bathroom.”
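For readers unfamiliar with the eye-movement measure used here, the sketch below shows one simplified way a first-pass reading time for a sentence region could be computed from a chronological fixation record; this is my own illustration under common assumptions, not the authors' analysis code.

    def first_pass_time(fixations, region):
        # Sum fixation durations in `region` from the first entry until the eyes
        # leave it; `fixations` is a chronological list of (region_index, duration_ms).
        total, entered = 0, False
        for reg, dur in fixations:
            if reg == region:
                entered = True
                total += dur
            elif entered:        # region was entered and then exited: first pass ends
                break
        return total

    # Example: region 4 is the critical prepositional phrase.
    fixations = [(1, 210), (2, 185), (3, 240), (4, 230), (4, 190), (5, 260), (4, 300)]
    print(first_pass_time(fixations, region=4))   # 420 -- the later 300 ms refixation
                                                  # counts toward later-pass measures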

  11. Results [Slide figure: reading-time results by match/mismatch condition; * p < .05, † p = .06]

  12. Conclusion Much in the same way that language affects later visual processing (Stanfield & Zwaan, 2001), visual memory also influences language processing. Prior exposure to a picture of an object in a particular orientation affects later reading times for phrases that imply the orientation of that object. Match/mismatch effects occur on first-pass measures on the disambiguating prepositional phrase and diminish quickly, suggesting that these effects are early and immediate. Reading comprehension may be multimodal, drawing not only on linguistic representations but on sensory/perceptual representations as well.
