Learning on the Fly: Rapid Adaptation to the Image Erik Learned-Miller with Vidit Jain, Gary Huang, Laura Sevilla-Lara, Manju Narayana, Ben Mears
“Traditional” machine learning • Learning happens from large data sets • With labels: supervised learning • Without labels: unsupervised learning • Mixed labels: semi-supervised learning, transfer learning, learning from one (labeled) example, self-taught learning, domain adaptation
Learning on the Fly • Given: • A learning machine trained with traditional methods • a single test image (no labels) • Learn from the test image! • Domain adaptation where the “domain” is the new image • No covariate shift assumption. • No new labels
An Example in Computer Vision • Parsing Images of Architectural ScenesBerg, Grabler, and Malik ICCV 2007. • Detect easy or “canonical” stuff. • Use easily detected stuff to bootstrap models of harder stuff.
Claim • This is so easy and routine for humans that it’s hard to realize we’re doing it. • Another example…
What about traditional methods… • Hidden Markov Model for text recognition: • Appearance model for characters • Language model for labels • Use Viterbi to do joint inference • DOESN’T WORK! Prob(glyph image | Label = A) cannot be well estimated, fouling up the whole process.
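The joint inference the slide refers to can be made concrete with a minimal Viterbi decoder. The sketch below assumes we already have a per-glyph appearance model P(glyph | char) and a bigram language model P(char_t | char_{t-1}); all array shapes and the toy usage are illustrative, not from the talk.

```python
import numpy as np

def viterbi(log_appearance, log_trans, log_prior):
    """Most probable character sequence under an HMM.

    log_appearance: (T, K) log P(observed glyph_t | char = k)
    log_trans:      (K, K) log P(char_t = j | char_{t-1} = i)
    log_prior:      (K,)   log P(char_0 = k)
    Returns a length-T list of state indices.
    """
    T, K = log_appearance.shape
    score = log_prior + log_appearance[0]      # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)         # backpointers for path recovery
    for t in range(1, T):
        cand = score[:, None] + log_trans      # (K, K): previous state -> next state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_appearance[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

The slide's point is that this machinery is only as good as its inputs: when the appearance term is junk for an unusual glyph, Viterbi confidently propagates the error through the whole sequence.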
Lessons • We must assess when our models are broken, and use other methods to proceed… • Current methods of inference assume probabilities are correct! • “In vision, probabilities are often junk.” • Related to similarity becoming meaningless beyond a certain distance.
2 Examples • Face detection (CVPR 2011) • OCR (CVPR 2010)
Preview of results: Finding false negatives (Viola-Jones vs. Learning on the Fly)
Eliminating false positives (Viola-Jones vs. Learning on the Fly)
Run a pre-existing detector… (Key: face, non-face, close to boundary)
Gaussian Process Regression • Learn a smooth mapping from appearance to score (negative to positive) • Apply the mapping to borderline patches
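The rescoring step can be sketched with a small hand-rolled GP regressor. The assumption here (not stated on the slide) is that each patch is summarized by a feature vector x with a detector score y; clearly-face and clearly-non-face patches form the training set, and the posterior mean rescores the borderline ones. Kernel width and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_rescore(X_conf, y_conf, X_borderline, noise=1e-2):
    """GP posterior mean at borderline patches, trained on confident ones."""
    K = rbf_kernel(X_conf, X_conf) + noise * np.eye(len(X_conf))
    K_star = rbf_kernel(X_borderline, X_conf)
    return K_star @ np.linalg.solve(K, y_conf)
```

Because the GP is fit per image to a handful of confident patches, this matches the slide's point that no retraining of the original detector (or access to its training data) is needed.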
Comments • No need to retrain original detector • It wouldn’t change anyway! • No need to access original training data • Still runs in real-time • GP regression is done for every new image.
Noisy Document → Initial Transcription (sample OCR output, errors intact): “We fine herefore tlinearly rolatcd to the when this is calculated equilibriurn. In short, on the null-hypothesis:”
Premise • We would like to find confident words to build a document-specific model, but it is difficult to estimate Prob(error). • However, we can bound Prob(error). • Now, select words with Prob(error) &lt; epsilon.
Document-specific OCR • Extract clean sets (error-bounded sets) • Build document-specific models from clean-set characters • Reclassify the other characters in the document • 30% error reduction on 56 documents.
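The reclassification step can be illustrated with the simplest possible document-specific model: clean-set characters serve as labeled exemplars for a nearest-neighbor classifier that relabels the remaining characters in the same document. The paper builds richer models; 1-NN here is only a hypothetical stand-in to show the bootstrap idea.

```python
import numpy as np

def reclassify(clean_images, clean_labels, other_images):
    """Label each remaining character by its nearest clean-set exemplar.

    clean_images: array-like of character images from the clean set
    clean_labels: matching list of character labels
    other_images: array-like of character images to relabel
    """
    exemplars = np.asarray(clean_images, dtype=float)
    labels = []
    for img in np.asarray(other_images, dtype=float):
        # squared distance to every exemplar, summed over pixel axes
        d = ((exemplars - img) ** 2).sum(axis=tuple(range(1, exemplars.ndim)))
        labels.append(clean_labels[int(d.argmin())])
    return labels
```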
Summary • Many applications of learning on the fly. • Adaptation and bootstrapping of new models are more common in human learning than is generally believed. • Starting to answer the question: “How can we do domain adaptation from a single image?”