Object & Face Perception • Outline • The problem (for the visual system) • Theories of object recognition • Evidence (mostly effects of rotation) • Reconciliation (maybe all “views” are right)? • Beyond viewpoint effects • Faces - are they special? • Evidence for specialness • configuration effects • Neurophysiological/neuropsychological evidence • But is this really special (Gauthier)? • Probing configurational information further
Perceiving objects • People have argued that object recognition is the hardest thing the visual system does. • It is tricky because the retinal information available to discriminate one object from another changes dramatically whenever you or the object changes position/orientation. • Object perception research has (mostly) restricted itself to situations where non-shape info. is not useful, and has concentrated on viewpoint constancy.
How do we achieve viewpoint constancy? Two categories of theory: • Viewpoint invariant theories • Recognition By Components (in which information about the 3D structure of the object is extracted from a single view of it). • Viewpoint dependent theories • In which specific views are “stored” and performance is (somehow) based on generalisation from these.
RBC or “geon” theory • The most obvious solution to the problem of how we manage to generalise across viewpoints is to suggest that viewpoint invariant info. can be extracted from a single view. • In RBC, Biederman proposes that under most circumstances we are able to extract a “structural description” of the object • The way its parts (its geons) relate to one another.
Objects made of geons Since the structural description is “object centred”, if you can extract one you should show no viewpoint costs when the object is rotated in depth (unless parts appear or disappear). Alas, this is almost never found (even with single geons!)
Peissig et al 2002 • Here’s data from pigeons showing decidedly incomplete generalisation. • Which is affected by the training views they get. • And this is a paper Irv’s on!
Other problems • While Biederman recognises that faces are a problem for his theory, he claims they are a special case, but…..
Evidence for geon theory - Biederman & Bar (1999): Metric vs non-accidental properties
Biederman & Bar (1999) • Found much bigger viewpoint costs for objects which differed in metric properties (MPs) than for those that differed by a non-accidental property (NAP) • Even though performance was identical if the 1st and 2nd stimulus were at the same viewpoint (suggesting that the changes were equally salient)
However…. • Essentially every experiment conducted by other researchers has shown substantial viewpoint costs even when the objects are made of geons and have different structural descriptions • This evidence suggests that maybe object recognition is based on generalising from the particular views that you have actually seen previously • This conclusion is supported by the physiological research which has been done (but…)
Logothetis, Pauls & Poggio (1994, 1995) • Showed (using paper clip objects) that monkeys developed IT cells which responded (with fairly narrow tuning) to particular training views • Behavioural generalisation (after training with one view) correlated well with neural generalisation • Cells were position and size invariant, despite their viewpoint specificity • Are these results due to using these stimuli?
And… • Showing that an obvious feature permits viewpoint generalisation does not constitute evidence for RBC. • If the objects were different colours, say, then you would get perfect viewpoint generalisation • This suggests that viewpoint costs depend on the stimuli from which the object must be discriminated - the context.
Williams & Hayward 2000 • Tested this possibility • But found the same viewpoint cost in each condition • Of course this is likely to be influenced by task as well
Reconciliation? Vanrie et al 2002 • Suggested there may be two routes to object recognition - one viewpoint invariant and one view specific. • Compared performance on a mental rotation task with performance on objects differing by a NAP (contrasting rotation-related and invariance-related activity)
But…mental rotation? • Maybe mental rotation is specific to making handedness judgements • And so is not like normal object recognition • And so these results might just be a physiological confirmation of that possibility • Consider the difference between judging the identity of a rotated object and doing a genuine mental rotation task
More reconciliation: Burgund and Marsolek (2000) • Examined priming of object recognition. • Found viewpoint dependent priming when the test object was shown first to the right hemisphere, and viewpoint independent priming when it was presented to the left hemisphere
What causes viewpoint effects? • Simons et al 2002 showed that viewpoint costs are reduced if the subject actually does change viewpoint. • So information from sources other than the image matters
Beyond viewpoint effects • Keane, Hayward & Burke (in press) investigated the kind of information that is most useful for discriminating between objects • Found that configuration changes are detected more quickly and more accurately than switch or identity changes • Identifying the info that is most useful for telling objects apart might help us understand how they are recognised
Faces • Faces are objects that we are all expert at recognising • And there is evidence that they are processed in a different way to other objects • As though they are a special case • But you can probably already guess what might be different about recognising faces and telephones, say…. • so guess….
Face effects Negation Inversion
Boutet and Chaudhuri (2001) • You can clearly see two alternating (rivalling) faces when they are superimposed only if they are near upright (not if they are upside down)
But what does configuration mean? • Maurer et al (2002) distinguish three senses: • 1st order relationships • The fact that the parts are in the right categorical relationships • 2nd order relationships • Distances between the parts (metric or coordinate relationships) • Holistic processing • Faces are seen as “wholes” (making parts hard to differentiate)
Neural evidence • Neurophysiology • There are cells in IT which respond selectively to faces. • And a part of IT (the FFA in humans) that responds exclusively to faces. • Neuropsychology • Prosopagnosia is the inability to recognise faces while otherwise seeing normally. • Some agnosics can recognise faces but not other objects. • These dissociations are usually used as evidence for or against a special face area (as we shall see) - but these patients may be differentially insensitive to first order and second order relationships.
But - Enter Isabel • Gauthier and colleagues have argued that faces seem special because they are a rare instance of a subordinate level classification at which we are expert • They have gathered evidence to support this: • Bird and dog experts show activity in the FFA when classifying birds and dogs (or faces!) • When people are trained with novel objects which require subordinate level classification (Greebles), activity in FFA increases as they become more expert.
Meet the greebles • As well as showing the appropriate neural changes, greeble experts (and only experts) show traditional face effects: • Inversion • Recognise parts more easily if they are part of the right greeble • Have trouble recognising the top half if it is paired with the wrong bottom half
But babies recognise faces! • It is well known that newborns preferentially look at faces - suggesting an innate face area. • But they are probably only sensitive to 1st order relationships (they have prosopagnosia!) • There is good evidence that sensitivity to second order relationships develops quite slowly (is incomplete at 7 or 8 - Mondloch et al., 2002) • and young children in fact show inverse inversion effects (Brace et al., 2001) • And so this is not really a problem for Isabel, but suggests that we should be….
Probing configuration - Cooper & Wojan (2000) • Asked subjects to match names to faces • Found one-eye-shifted matches faster than two-eye-shifted matches
Keane, Burke & Hayward • Also examined Cooper and Wojan’s findings, but more completely • Based on results with novel objects
Task Respond: Same or Different
Categorical Changes • 16 pixels moved • Half up, half down & half right, half left
Coordinate Changes • 16 pixels moved • Half up, half down
Identity Changes • Original feature manipulated
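The slides don't say exactly how these 16-pixel displacements were implemented. As a toy sketch (the array sizes, feature locations, and function name are all hypothetical, not taken from the study), one way to generate coordinate-style and categorical-style changes is to cut a feature region out of an image array and paste it back at a shifted location:

```python
import numpy as np

def shift_region(img, top, left, h, w, dy, dx):
    """Cut an h x w feature region out of img and paste it shifted by (dy, dx)."""
    out = img.copy()
    patch = img[top:top + h, left:left + w].copy()
    out[top:top + h, left:left + w] = 0.0            # blank the old location
    out[top + dy:top + dy + h, left + dx:left + dx + w] = patch
    return out

# A toy 64x64 "face" with one bright feature (an eye) at rows 20-28, cols 14-22.
face = np.zeros((64, 64))
face[20:28, 14:22] = 1.0

# Coordinate-style change: slide the feature 16 pixels down
# (inter-feature distances change, categorical layout preserved).
coord = shift_region(face, 20, 14, 8, 8, dy=16, dx=0)

# Categorical-style change: move it 16 down and 16 right
# (large enough that categorical relations between parts can flip).
categ = shift_region(face, 20, 14, 8, 8, dy=16, dx=16)
```

An identity change would instead replace the patch's contents rather than move it; the point of the sketch is only that coordinate and categorical manipulations can displace the same number of pixels while altering different kinds of relational information.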
Results • UPRIGHT FACES: categorical=coordinate>identity • INVERTED FACES: categorical>coordinate>identity • Coordinate information is as important as categorical information for upright faces • Upside down faces are processed in the same way as novel objects.