“When” rather than “Whether”: Developmental Variable Selection
Melissa Dominguez, Robert Jacobs
Department of Computer Science, University of Rochester
Introduction • Using human developmental theories as an inspiration for machine learning • Don't use all variables at once • Focus on the choice of when to include each variable • A system that uses this process to learn disparity sensitivities
Human Perceptual Development • Humans are born with limited sensory and cognitive abilities • Two main schools of thought about early limitations • Traditional view • Immaturities are barriers to be overcome • “Less is More” view • Early limitations are helpful
Less is More in vision • Newborns have poor visual acuity • Acuity improves approximately linearly, reaching near-adult levels by about 8 months of age • Other visual skills are acquired at the same time • Sensitivity to binocular disparities emerges at around 4 months • We propose that early poor acuity helps in the acquisition of disparity sensitivity
Less is More and binocular disparity detection • [Figure: a richly detailed pair of pictures, and the same pair of pictures blurred]
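As a rough illustration of this manipulation, the sketch below low-pass filters a stereo pair to remove fine detail, mimicking an infant's poor acuity. The filter width and function names are assumptions for illustration, not the preprocessing actually used in the paper.

```python
# Hypothetical sketch: simulating poor infant acuity by blurring a
# stereo image pair, leaving only coarse-scale structure visible.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_poor_acuity(left_img, right_img, sigma=4.0):
    """Low-pass filter both views; sigma controls how much fine
    detail is removed (an assumed, illustrative value)."""
    return gaussian_filter(left_img, sigma), gaussian_filter(right_img, sigma)
```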
Previous coarse-to-fine approaches • First search the low-resolution image pair • Then refine the estimate with the high-resolution pair • Marr and Poggio, 1979; Quam, 1986; Barnard, 1987; Iocchi and Konolige, 1998 • Previous approaches are processing strategies, not developmental sequences
Left and Right Images • 1-dimensional images • Horizontal and vertical disparities exist • Only horizontal disparities convey depth
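A minimal sketch of how such a 1-dimensional stereo pair might be generated: a textured object on a textured background, shifted horizontally between the two views. The sizes, textures, and names below are illustrative assumptions, not the paper's exact stimulus generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stereo_pair(n=128, obj_len=32, disparity=3):
    """Return (left, right) 1-D images: a random-texture object on a
    random background, shifted `disparity` pixels between the views."""
    background = rng.uniform(size=n)
    obj = rng.uniform(size=obj_len)
    # Place the object so the shifted copy stays inside the image.
    start = int(rng.integers(abs(disparity), n - obj_len - abs(disparity)))
    left = background.copy()
    right = background.copy()
    left[start:start + obj_len] = obj
    right[start + disparity:start + disparity + obj_len] = obj
    return left, right
```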
Binocular Energy Filters • Make comparisons in the energy domain • Based on neurophysiology • Compute Gabor filter responses to the left- and right-eye images (see the sketch below)
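A minimal sketch of the standard binocular energy computation for 1-D images, assuming quadrature Gabor pairs per eye; the filter parameters are illustrative, not the exact filters used in the paper.

```python
import numpy as np

def gabor_pair(size, wavelength, sigma):
    """Even (cosine) and odd (sine) Gabor filters in quadrature."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    return (envelope * np.cos(2 * np.pi * x / wavelength),
            envelope * np.sin(2 * np.pi * x / wavelength))

def binocular_energy(left, right, wavelength, sigma, size=31):
    even, odd = gabor_pair(size, wavelength, sigma)
    # Monocular quadrature responses of each eye at every position.
    le = np.convolve(left, even, mode='same')
    lo = np.convolve(left, odd, mode='same')
    re = np.convolve(right, even, mode='same')
    ro = np.convolve(right, odd, mode='same')
    # Binocular energy: squared sums of matched quadrature responses.
    return (le + re) ** 2 + (lo + ro) ** 2
```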
Unstaged Model • All input at once
Progressive Models • Developmental Model: input presented in stages during training, coarse scales first • Inverse Developmental Model: input presented in stages, fine scales first
Random Model • Still has 3 stages • Stage 1 consists of a randomly selected third of the input units • Each subsequent stage adds another randomly selected third of the input units • Each stage uses the same subset of inputs for all data items
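One way to express the four schedules is as cumulative per-stage input masks over three spatial scales. The grouping by a `scale_idx` array and the masking mechanics are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def stage_masks(schedule, scale_idx, n_inputs, rng=None):
    """Boolean mask per stage; a unit stays active once introduced.
    `scale_idx[i]` gives input i's scale: 0 coarse, 1 medium, 2 fine."""
    if schedule == 'unstaged':
        return [np.ones(n_inputs, dtype=bool)]     # all input at once
    if schedule == 'random':
        perm = (rng or np.random.default_rng()).permutation(n_inputs)
        groups = np.array_split(perm, 3)           # random thirds
    elif schedule == 'developmental':              # coarse -> medium -> fine
        groups = [np.flatnonzero(scale_idx == s) for s in (0, 1, 2)]
    elif schedule == 'inverse':                    # fine -> medium -> coarse
        groups = [np.flatnonzero(scale_idx == s) for s in (2, 1, 0)]
    masks, active = [], np.zeros(n_inputs, dtype=bool)
    for g in groups:
        active[g] = True
        masks.append(active.copy())
    return masks
```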
Data • Solid object • Noisy object • Planar stereogram
Procedures • Conjugate gradient training procedure • 10 runs of each model for each data set • 35 iterations per run • Stages of 10, 10, and 15 iterations • Randomly generated training set • Test sets had evenly spaced disparities • Randomly generated object size and location
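A hedged sketch of how these settings might fit together: a conjugate-gradient optimizer (here SciPy's, as a stand-in for the authors' implementation) run for 10, 10, and 15 iterations, widening the active-input mask at each stage. `model_loss`, `params0`, and the masking convention are placeholders.

```python
from scipy.optimize import minimize

def train_staged(model_loss, params0, masks, iters=(10, 10, 15)):
    """Run one CG optimization per stage, carrying parameters over."""
    params = params0
    for mask, maxiter in zip(masks, iters):
        # Inactive inputs are zeroed by the loss, so their weights
        # receive no training signal in this stage (assumed convention).
        result = minimize(lambda p: model_loss(p, input_mask=mask),
                          params, method='CG',
                          options={'maxiter': maxiter})
        params = result.x
    return params
```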
Result summary • Overall, the Developmental and Inverse Developmental models performed best • The Random and Unstaged models performed worst
Why do Developmental and Inverse Developmental models work best? • Limitations on initial input size? No: the Random model results show otherwise • Hypothesis: it is important to combine features at the same scale in early stages, and to proceed to neighboring scales in later stages
Development Aids Learning • Prediction: sequences such as F-CF-CMF or C-CF-CMF, which combine non-neighboring scales (C = coarse, M = medium, F = fine), should perform poorly • Suitably designed developmental sequences can aid learning of complex vision tasks
Conclusions • Performance of a system can be improved by judiciously choosing when to include each variable • Randomly staggering variables is not enough