Cognitive Processes PSY 334 Chapter 2 – Perception April 11, 2003
Categorical Perception • For speech, perception does not change continuously; it shifts abruptly at a category boundary. • Categorical perception – the failure to perceive gradations among stimuli within a category. • Pairs of [b]’s or [p]’s sound alike despite differing in voice-onset time.
Two Views of Categorical Perception • Weak view – stimuli are grouped into recognizable categories. • Strong view – we cannot discriminate among items within such a category. • Massaro – people can discriminate within a category but are biased to say items are the same despite their differences. • Category boundaries can be shifted by fatiguing the feature detectors (a sketch of such a boundary follows below).
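The abrupt boundary can be pictured as a steep identification function. Below is a minimal sketch, assuming a logistic identification curve; the boundary at 25 ms of voice-onset time and the slope are illustrative values of ours, not data from the studies discussed:

```python
import math

def p_hear_b(vot_ms, boundary=25.0, slope=0.8):
    """Probability of labeling a stop consonant as [b] rather than [p],
    modeled as a logistic function of voice-onset time (VOT).
    Boundary and slope are illustrative values, not measured data."""
    return 1.0 / (1.0 + math.exp(slope * (vot_ms - boundary)))

# Within-category tokens (0 vs 15 ms) receive nearly identical labels, while
# an equal 15 ms difference straddling the boundary (20 vs 35 ms) flips the label.
for vot in (0, 15, 20, 35, 50):
    print(f"VOT {vot:2d} ms -> P([b]) = {p_hear_b(vot):.3f}")
```

On the strong view, the 0 ms and 15 ms tokens are indistinguishable because their labels match; Massaro's point is that listeners retain some sensitivity to the underlying difference even so.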
Top-Down Processing • General knowledge (context, high-level thinking) combines with the interpretation of low-level perceptual units (features). • Context limits the possibilities, so fewer features must be processed: • Word superiority effect – subjects identify D vs K about 10% more accurately in the context WORD vs WORK than in isolation. • To xllxstxatx, I cxn rxplxce xvexy txirx lextex of x sextexce xitx an x, anx yox stxll xan xanxge xo rxad xt wixh sxme xifxicxltx. (The sketch below shows how such text can be generated.)
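The degraded sentence can be generated mechanically. A minimal sketch, assuming the rule is "replace every third letter, counting alphabetic characters across the whole sentence"; it reproduces the opening of the slide's example:

```python
def x_out_every_third(text):
    """Replace every third alphabetic character with an x, counting
    letters across the sentence and skipping spaces and punctuation."""
    out, count = [], 0
    for ch in text:
        if ch.isalpha():
            count += 1
            if count % 3 == 0:
                ch = "X" if ch.isupper() else "x"
        out.append(ch)
    return "".join(out)

print(x_out_every_third(
    "To illustrate, I can replace every third letter of a sentence with an x."
))
# -> To xllxstxatx, I cxn rxplxce xvexy txirx lextex of x sextexce xitx an x.
```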
Context and Speech • Phoneme restoration effect: • It was found that the *eel was on the axle. • It was found that the *eel was on the shoe. • It was found that the *eel was on the orange. • It was found that the *eel was on the table. • Listeners report hearing wheel, heel, peel, or meal: identification of the missing sound depends on a word that arrives after it.
Faces and Scenes • When parts are presented in isolation, more feature information is needed to recognize them. • Face parts are recognized with less detail when in the context of a face. • Subjects are better able to identify objects when they are part of coherent novel scenes rather than jumbled scenes.
Models of Object Perception • Two competing models explain how context and feature information are combined: • Massaro’s FLMP (fuzzy logic model of perception) – context and detail are two independent sources of information. • McClelland & Rumelhart’s PDP model – a connectionist model in which the two sources of information interact.
Testing the FLMP Model • Four kinds of stimuli: • Only an e makes a real word. • Only a c makes a real word. • Either letter makes a word. • Neither letter makes a word. • Within each group, the critical letter varies along a continuum from a clear e to a clear c. • Subjects saw each stimulus word briefly and had to identify the letter as e or c.
FLMP Results • Observed frequencies of naming the letter e increase as the stimulus has more e features, but also as the context demands an e. • Bayes’ theorem gives a formula for combining the independent contributions of two sources of information (written out below). • Massaro’s results conform to the predictions of Bayes’ theorem, suggesting that the two information sources are independent of each other.
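The combination rule can be written out explicitly. A sketch in FLMP style, where the symbols for featural support (f_e, f_c) and contextual support (c_e, c_c) are our notation, not Massaro's exact formulation:

```latex
% Support for e from each source is combined multiplicatively and
% normalized against the support for c:
P(e \mid \text{features}, \text{context})
  = \frac{f_e \, c_e}{f_e \, c_e + f_c \, c_c}
```

With equal priors, this is Bayes' theorem for two conditionally independent sources: each source multiplies the odds in favor of e without changing the other source's contribution, which is why a good fit of this equation argues for independence.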
Testing the PDP Model • Activation spreads from features to excite letters and from letters to excite words (bottom-up processing). • Activation also spreads from words back down to their component letters (top-down processing). • The more activation a letter receives, the more likely it is to be identified correctly: an ambiguous third letter between A and I is pulled toward whichever word, TRAP or TRIP, the surrounding letters support (see the toy sketch below).
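A toy version of this loop makes the interaction concrete. This is a drastically simplified sketch: the two-word lexicon, the single modeled letter position, and the weights are illustrative assumptions, not McClelland & Rumelhart's actual architecture or parameters.

```python
WORDS = ("TRAP", "TRIP")  # toy lexicon; only position 3 (index 2) is ambiguous

def recognize(letter_evidence, steps=10, up=0.1, down=0.05):
    """letter_evidence: bottom-up feature support for the ambiguous
    third letter, e.g. {'A': 0.6, 'I': 0.4}."""
    letters = dict(letter_evidence)
    words = {w: 0.0 for w in WORDS}
    for _ in range(steps):
        for w in words:                  # bottom-up: letters excite words
            words[w] += up * letters.get(w[2], 0.0)
        for w in words:                  # top-down: words excite their letters
            letters[w[2]] = letters.get(w[2], 0.0) + down * words[w]
    return letters, words

letters, _ = recognize({"A": 0.6, "I": 0.4})
print(letters)  # 'A' wins: feedback from TRAP amplifies its small initial edge
```

Because activation cycles between levels, the word level reshapes the letter evidence itself; that interaction is exactly what distinguishes the PDP account from FLMP's independent sources.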
Comparing the Two Models • Subjects heard a phoneme that varied from an r to an l in two contexts: • A syllable beginning with t – tr or tl. • A syllable beginning with s – sr or sl. • Both the FLMP and PDP models were compared against the subjects’ actual data. • FLMP came close to what subjects did. • PDP was too strongly affected by context.
PDP Model Describes More • The PDP model holds that sources of information are not processed separately; each letter influences the identification of every other letter. • Recognition of the “a” in MAVE is almost as good as in MADE. • This occurs because MAVE is similar to many other words with an a in that position. • There is no separate context plus a target letter; there are four letters, each influencing the others.
Marr • Depth cues (texture gradient, stereopsis) – where are edges located in space? • How are visual cues combined to form an image with depth? • Primal sketch – extracts low-level features. • 2½-D sketch – locates visual features in relation to the observer (depth). • 3-D model – an object-centered representation of the objects in a scene, combining context. (A schematic pipeline follows below.)
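Read as a processing pipeline, the stages are successive transformations of the image. The following is a schematic sketch only; the function names, signatures, and stub bodies are placeholders, not an implementation of Marr's algorithms:

```python
def primal_sketch(image):
    """Stage 1: extract low-level features (edges, bars, blobs)."""
    ...

def sketch_2_5d(features, depth_cues):
    """Stage 2: place features in viewer-relative depth, using cues
    such as texture gradients and stereopsis."""
    ...

def model_3d(surfaces, context):
    """Stage 3: build object-centered 3-D representations of the
    objects in the scene, combining context."""
    ...

def perceive(image, depth_cues, context):
    # Each stage's output is the next stage's input.
    return model_3d(sketch_2_5d(primal_sketch(image), depth_cues), context)
```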
Putting it All Together • The output of these stages (see Fig 2.31) is a representation of an object and its location. • This output serves as input to higher-level cognitive processes. • Conscious awareness (a higher-level process) involves the recognition stage, but much processing occurs before it.