Detecting Natural Occlusion Boundaries Using Local Cues
Christopher DiMattina, Sean A. Fox & Michael S. Lewicki
Presented by: Ying Li, Elderlab, 2012.06.15
Overview • Motivation • Methods • Results • Conclusion • Contribution
Motivation • Investigate what features are used by the visual system to detect natural occlusions. • Address the above question using a psychophysical experiment. • Analyze a mixture model classifier trained on natural occlusion and surface patches. • Develop a novel occlusion database.
Methods • Image Databases
Quantifying subject consistency
-- MCS (the Most Conservative Subject): for a given image, the subject who labeled the fewest pixels.
-- Generate a binary image mask F: '1' for any pixel within R pixels of an occlusion labeled by the MCS; '0' for all other pixels.
-- Apply this mask to the labelings of each subject.
-- Compare each pair of subjects:
   -- Randomly assign one edge map as the 'reference' and the other as the 'test'.
   -- Use the reference map to define a weighting function over pixels, and apply it to all of the pixels in the test map.
   -- Compute the consistency index from the weighted test map.
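The masking-and-weighting comparison above can be sketched as follows. This is a minimal sketch: the slide's exact weighting function and index formula were in the figures and are not shown here, so a binary within-R weighting and a weighted-overlap index are assumed.

```python
import numpy as np

def within_radius(mask, R):
    """Binary map: 1 at pixels within R (Chebyshev distance) of any labeled
    pixel in `mask` -- usable both for the MCS mask F and for the
    reference-map weighting (binary weighting is an assumption here)."""
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - R):y + R + 1, max(0, x - R):x + R + 1] = 1
    return out

def consistency_index(reference, test, R=2):
    """Fraction of test-map edge pixels falling within R pixels of a
    reference edge (an assumed form of the slide's consistency index)."""
    w = within_radius(reference, R)   # weighting function from the reference map
    n_test = test.sum()
    if n_test == 0:
        return 0.0
    return float((w * test).sum() / n_test)
```

The random reference/test assignment from the slide would happen outside this function, before calling it.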
Patch extraction and region labeling
(Only want patches containing a single occlusion separating two regions of roughly equal size.)
Criteria for accepting a candidate patch:
-- The composite occlusion edge map from all subjects consisted of a single, connected piece in the analysis window.
-- The occlusion contacted the sides of the window at two distinct points and divided the patch into two regions.
-- Each region comprised at least 35% of the pixels.
• Statistical Measurements
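The patch-acceptance criteria can be sketched as below. Assumptions not in the slide: 8-connectivity for the edge map, 4-connectivity for the regions, and the border-contact criterion folded into the requirement that the edge splits the window into exactly two regions.

```python
import numpy as np
from collections import deque

def label_components(mask, nbrs):
    """Label connected components of a binary mask by BFS flood fill."""
    H, W = mask.shape
    labels = np.zeros((H, W), int)
    n = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and labels[sy, sx] == 0:
                n += 1
                labels[sy, sx] = n
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            q.append((ny, nx))
    return labels, n

N8 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def accept_patch(edge, min_frac=0.35):
    """Apply the three acceptance criteria to a binary composite edge map."""
    edge = edge.astype(bool)
    # 1. the occlusion is a single connected piece in the window
    _, n_edge = label_components(edge, N8)
    if n_edge != 1:
        return False
    # 2. the edge divides the window into exactly two regions
    labels, n_reg = label_components(~edge, N4)
    if n_reg != 2:
        return False
    # 3. each region holds at least min_frac of the patch's pixels
    total = edge.size
    return all((labels == k).sum() >= min_frac * total for k in (1, 2))
```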
-- Convert the raw images into gamma-corrected RGB images.
-- Map the RGB color space to the NTSC color space, obtaining the grayscale luminance: Y = 0.299R + 0.587G + 0.114B.
Grayscale scalar measurements
-- Measure the following visual features:
G1. Luminance difference: |μ1 − μ2|, where μ1, μ2 are the mean luminances in regions R1, R2 respectively.
G2. Contrast difference: |σ1 − σ2|, where σ1, σ2 are the contrasts (standard deviations) in regions R1, R2 respectively.
G3. Boundary luminance gradient: ‖∇I(xc)‖ / ī, where ∇I(xc) is the gradient of the image patch evaluated at the central pixel xc, and ī is the average intensity of the image patch.
G4. Oriented energy: E(θ) = (gθᵀ p)², where p is the image patch in vector form and gθ is a Gabor filter of orientation θ, evaluated at evenly spaced orientations.
G5. Global patch contrast: the contrast (standard deviation of luminance) computed over the entire patch.
Note: G1 and G2 are differences between statistics measured from the two regions of the patch, while G3–G5 are measured globally from the entire patch.
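The grayscale measurements G1–G5 can be sketched as below. Assumptions not given on the slides: NTSC luminance weights for the RGB-to-grayscale map, central finite differences for the boundary gradient, illustrative Gabor parameters, and 8 evenly spaced orientations (the slide does not state the count).

```python
import numpy as np

def ntsc_luminance(rgb):
    """Grayscale luminance from gamma-corrected RGB (NTSC weights)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def gabor(size, theta, freq=0.2, sigma=2.0):
    """Cosine-phase Gabor filter at orientation theta (illustrative parameters)."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    xr = X * np.cos(theta) + Y * np.sin(theta)
    return np.exp(-(X**2 + Y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def grayscale_features(patch, r1, r2, n_orient=8):
    """G1-G5 for a square luminance patch; r1, r2 are boolean region masks."""
    mu1, mu2 = patch[r1].mean(), patch[r2].mean()
    g1 = abs(mu1 - mu2)                           # G1: luminance difference
    g2 = abs(patch[r1].std() - patch[r2].std())   # G2: contrast difference
    gy, gx = np.gradient(patch)
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    g3 = np.hypot(gy[cy, cx], gx[cy, cx]) / patch.mean()  # G3: boundary gradient
    p = patch.ravel()
    g4 = max((gabor(patch.shape[0], t).ravel() @ p) ** 2  # G4: oriented energy
             for t in np.linspace(0, np.pi, n_orient, endpoint=False))
    g5 = patch.std()                              # G5: global patch contrast
    return dict(G1=g1, G2=g2, G3=g3, G4=g4, G5=g5)
```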
-- Convert the images from RGB to LMS color space.
-- Convert the LMS images into an opponent color space.
Color scalar measurements
-- Measure two additional properties from the LMS color image patches represented in the opponent basis:
C1. Blue–Yellow difference: |BY1 − BY2|, where BY1, BY2 are the mean values of the B–Y opponency component in regions R1, R2 respectively.
C2. Red–Green difference: |RG1 − RG2|, where RG1, RG2 are the mean values of the R–G opponency component in regions R1, R2 respectively.
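A sketch of C1 and C2. The slide does not specify the transforms, so the following are assumptions: sRGB→XYZ followed by the Hunt–Pointer–Estevez XYZ→LMS matrix, and simple opponent axes RG = L − M and BY = S − (L + M)/2.

```python
import numpy as np

# Standard sRGB -> XYZ matrix (linear RGB assumed).
RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])
# Hunt-Pointer-Estevez XYZ -> LMS matrix (an assumed choice of LMS transform).
XYZ2LMS = np.array([[ 0.38971, 0.68898, -0.07868],
                    [-0.22981, 1.18340,  0.04641],
                    [ 0.0,     0.0,      1.0]])

def color_features(rgb, r1, r2):
    """C1 (B-Y difference) and C2 (R-G difference) between regions r1, r2."""
    lms = rgb @ (XYZ2LMS @ RGB2XYZ).T      # per-pixel RGB -> LMS
    L, M, S = lms[..., 0], lms[..., 1], lms[..., 2]
    by = S - 0.5 * (L + M)                 # blue-yellow opponency (assumed axis)
    rg = L - M                             # red-green opponency (assumed axis)
    c1 = abs(by[r1].mean() - by[r2].mean())
    c2 = abs(rg[r1].mean() - rg[r2].mean())
    return c1, c2
```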
Quadratic classifier analysis
• Machine classifiers
Assume that there are two categories of stimuli from which we can measure n features x = (x1, …, xn), and that instances of each category i ∈ {1, 2} are Gaussian distributed:
p(x | Ci) = (2π)^(−n/2) |Σi|^(−1/2) exp(−(1/2)(x − μi)ᵀ Σi⁻¹ (x − μi)),
where μi, Σi are the means and covariances of each category. Given a novel observation with feature vector x, and assuming equal priors for the two categories, assign it to the category with the larger posterior
P(Ci | x) = p(x | Ci) / (p(x | C1) + p(x | C2)).
The resulting decision boundary is quadratic in x.
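The Gaussian classifier above, as a minimal numpy sketch (equal priors, class-conditional Gaussians fit by sample mean and covariance):

```python
import numpy as np

class QuadraticClassifier:
    """Two-class Gaussian classifier with per-class covariance,
    giving a quadratic decision boundary."""

    def fit(self, X1, X2):
        self.params = []
        for X in (X1, X2):
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            self.params.append((mu, np.linalg.inv(cov),
                                np.linalg.slogdet(cov)[1]))
        return self

    def log_likelihood(self, x, i):
        # Log Gaussian density up to a constant shared by both classes.
        mu, inv, logdet = self.params[i]
        d = x - mu
        return -0.5 * (d @ inv @ d + logdet)

    def posterior(self, x):
        """P(C1 | x) under equal priors (softmax of the two log-likelihoods)."""
        l = np.array([self.log_likelihood(x, i) for i in (0, 1)])
        e = np.exp(l - l.max())
        return e[0] / e.sum()

    def predict(self, x):
        return 0 if self.posterior(x) >= 0.5 else 1
```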
ICA (Independent Components Analysis) mixture model classifier
By Bayes' law,
P(Ck | x) = p(x | Ck) P(Ck) / Σj p(x | Cj) P(Cj).
In the case of the complete ICA model, each category's likelihood is
p(x | Ck) = |det Wk| Πi pi([Wk x]i),
where Wk is the unmixing matrix learned for category k and pi are the source priors.
A second classifier is trained on occlusion edges and on patches not centered on occlusions, in order to detect occlusions in natural images.
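A sketch of classification with complete ICA models per category. The Laplacian source prior p(s) = 0.5·exp(−|s|) is an assumption; the unmixing matrices Wk would come from ICA training on each category's patches.

```python
import numpy as np

def ica_log_likelihood(x, W):
    """log p(x | C) = log|det W| + sum_i log p(s_i), with s = W x and
    Laplacian source priors p(s) = 0.5 * exp(-|s|) (assumed)."""
    s = W @ x
    return np.linalg.slogdet(W)[1] + np.sum(-np.abs(s) - np.log(2.0))

def ica_posterior(x, W_list, priors=None):
    """P(C_k | x) via Bayes' law over the per-category ICA likelihoods."""
    K = len(W_list)
    priors = np.full(K, 1.0 / K) if priors is None else np.asarray(priors)
    log_post = (np.array([ica_log_likelihood(x, W) for W in W_list])
                + np.log(priors))
    e = np.exp(log_post - log_post.max())   # normalize in a stable way
    return e / e.sum()
```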
Experimental paradigm
-- Subjects were given as much time as they needed for each patch.
-- Prior to the first session, naïve subjects were allowed to browse grayscale versions of all of the images in the database; after viewing each original image for 2-3 seconds, they were shown the union of all subjects' labelings superimposed on the image.
Results • Case Occlusion Boundary (COB) database
Conclusion • Simple local features like luminance and contrast differences are insufficient to account for human performance for detecting natural occlusion boundaries. • Human observers integrate complex information over a large spatial region to detect natural occlusions. • Simple computations like luminance edge detection are inadequate for detecting natural occlusions, and more sophisticated cues are required.
Contribution
• Investigate psychophysically which features the visual system uses to detect natural occlusions.
• Offer a novel occlusion database.
• Demonstrate that complex information is necessary for human performance in detecting occlusions.