From Virtual to Reality: Fast Adaptation of Virtual Object Detectors to Real Domains Baochen Sun and Kate Saenko UMass Lowell
Conventional approach: collect real training images for each category (backpack model, bicycle model) and apply the learned models to test images. Our approach: generate the 2D detector directly from a 3D model (virtual backpack model, virtual bicycle model), then adapt it to the real domain.
Where is "mug"? Detector fails! Solution?
Solution #1: Get more training data from the web. Problem: limited data, rare poses.
Solution #2: Synthetic data from crowdsourced CAD models
• many models available
• any pose possible!
…but can we generate realistic images?
CAD models + Photorealistic Rendering
• Example images from Project Cars
Bad news
• photorealism is hard to achieve
• must model BRDF, albedo, lighting, etc.
• must also model the scene: cast shadows, etc.
• most free models don't
Good news
• we don't need photorealism!
• model the information we have, ignore the rest*
• model: 3D shape, pose
• ignore: scene, textures, color
*Obviously this will not work for all objects, but it will work for many.
Example models
• Collected CAD models for the 31 object categories in the "Office" dataset
• Selected 2 models per category manually from the first page of search results
Virtual data generation
• For each CAD model, 15 random poses are generated by rotating the original 3D model by a random angle between 0 and 20 degrees about each of the three axes.
• Note: we do not attempt to model multi-view components in this paper, so pose variation is intentionally small.
• Two types of background and texture are then applied to each pose to get the final 2D virtual image: Virtual or Virtual-Gray.
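The pose-jitter step above can be sketched as follows. This is a minimal illustration using NumPy, not the authors' rendering code; the function and parameter names are assumed.

```python
import numpy as np

def random_pose(max_deg=20.0, rng=np.random.default_rng()):
    """Sample a small random rotation: one angle in [0, max_deg]
    degrees per axis, composed into a single 3x3 rotation matrix."""
    ax, ay, az = np.deg2rad(rng.uniform(0.0, max_deg, size=3))
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

# 15 jittered poses for one CAD model; each matrix would be
# applied to the model's vertices (an N x 3 array) before rendering
poses = [random_pose() for _ in range(15)]
```

Each sampled matrix is a proper rotation, so applying it to the mesh only changes the pose, never the shape.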
Two types of virtual data generated:
• "Virtual": random background and texture from ImageNet
• "Virtual-Gray": gray texture and white background
Goal: model what we have, ignore the rest.
• Model: 3D shape, viewpoint
• Ignore: specularity, light spectrum, light direction, color, background, camera noise, etc.
Question: how do we ignore the difference between the resulting non-realistic images and real test images?
Answer: domain adaptation.
Detection approach
• We start with the standard discriminative scoring function f(I, b) = w^T φ(I, b) for image I and candidate region b.
• Keep regions with a high score.
• In our case, w is learned via Linear Discriminant Analysis (LDA).
• The features φ are HOG (others could be used).
Hariharan, Malik and Ramanan, ECCV 2012
• Inspired by this work.
• Assume a generative model: a Gaussian class-conditional feature distribution.
• Assume the covariance S is the same for all classes.
• Then the classifier is efficiently computed as w = S^{-1}(μ_c − μ_0), where μ_c is the class mean and μ_0 the background mean.
Bharath Hariharan, Jitendra Malik, and Deva Ramanan. Discriminative decorrelation for clustering and classification. In Computer Vision–ECCV 2012.
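A minimal sketch of this LDA detector on toy features (illustrative only; the function names and the regularization term are assumed, and random vectors stand in for HOG features):

```python
import numpy as np

def lda_detector(pos_feats, neg_mean, cov, reg=1e-3):
    """LDA weights w = S^-1 (mu_c - mu_0) under a shared,
    class-independent covariance S (regularized for stability)."""
    d = cov.shape[0]
    mu_pos = pos_feats.mean(axis=0)
    S = cov + reg * np.eye(d)
    return np.linalg.solve(S, mu_pos - neg_mean)

def score(w, feats):
    """Linear scoring f(I, b) = w . phi(I, b); keep high-scoring regions."""
    return feats @ w

# toy example: stand-in "HOG" features for background and object windows
rng = np.random.default_rng(0)
d = 16
neg = rng.normal(0.0, 1.0, size=(500, d))   # background statistics
pos = rng.normal(0.5, 1.0, size=(50, d))    # object windows
w = lda_detector(pos, neg.mean(axis=0), np.cov(neg, rowvar=False))
```

The appeal of this formulation is that the background mean and covariance are shared across all classes, so a new detector costs only one mean computation and one linear solve.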
Main idea
• Source points x: labeled virtual images
• Target points u: unlabeled real-domain images
Main idea
(a) source
(b) source decorrelated with source covariance
(c) target decorrelated with source covariance
(d) decorrelated target
Main idea
• If T is the target covariance and S is the source covariance, the adapted detector is w = T^{-1}(μ_c − μ_0).
• Note: this corresponds to a different whitening operation, using target rather than source statistics.
• Also, if source and target are the same (T = S), this reduces to the original LDA detector w = S^{-1}(μ_c − μ_0).
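The adaptation step, swapping the target covariance into the LDA weights, can be sketched as follows (a toy sanity check, not the authors' implementation; names and the regularizer are assumed):

```python
import numpy as np

def adapted_detector(mu_pos, mu_neg, target_cov, reg=1e-3):
    """Whiten with the *target* covariance T, estimated from
    unlabeled real-domain images: w = T^-1 (mu_c - mu_0).
    When T equals the source covariance S, this reduces to
    the standard source-trained LDA detector."""
    d = target_cov.shape[0]
    T = target_cov + reg * np.eye(d)
    return np.linalg.solve(T, mu_pos - mu_neg)

# sanity check: with T = S the adapted weights match plain LDA
rng = np.random.default_rng(1)
d = 8
mu_pos, mu_neg = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))
S = A @ A.T + np.eye(d)                              # source covariance (PD)
w_lda = np.linalg.solve(S + 1e-3 * np.eye(d), mu_pos - mu_neg)
w_adapt = adapted_detector(mu_pos, mu_neg, S)
```

Note that only second-order statistics of the target are needed, so no target labels are required for the adaptation.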
Bike model example: Synthetic vs. Real
The synthetic "bike" detector performs better on webcam target images!
Data
• Training: virtual images generated from CAD models
• Test: webcam images*
*[Saenko et al., ECCV 2010]
Results: compare to [10]
[10] Bharath Hariharan, Jitendra Malik, and Deva Ramanan. Discriminative decorrelation for clustering and classification. In Computer Vision–ECCV 2012.
Results: compare to [7]
[10] Bharath Hariharan, Jitendra Malik, and Deva Ramanan. Discriminative decorrelation for clustering and classification. In Computer Vision–ECCV 2012.
[7] Daniel Goehring, Judy Hoffman, Erik Rodner, Kate Saenko, and Trevor Darrell. Interactive adaptation of real-time object detectors. In International Conference on Robotics and Automation (ICRA), 2014.
Related work: Synthetic data in Computer Vision
• Kaneva '11: local features
• Optic flow benchmark
Biliana Kaneva, Antonio Torralba, and William T. Freeman. Evaluation of Image Features Using a Photorealistic Virtual World. ICCV 2011.
Related work: CAD models for Object Detection
Wohlkinger, Walter, et al. "3DNet: Large-scale object class recognition from CAD models." ICRA 2012. (point cloud test data)
Stark, Michael, Michael Goesele, and Bernt Schiele. "Back to the Future: Learning Shape Models from 3D CAD Data." BMVC 2010. (limited to cars)
Next Steps: more factors
• Generate multi-component models
• Pose variation
• Intra-category variation
• Analytic model of illumination?
• Illumination manifolds can be computed from just 3 linearly independent sources (Nayar and Murase '96)
• Model other reconstructive factors?
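The three-source illumination idea above can be illustrated with a toy Lambertian model: if pixel intensity is albedo times the dot product of surface normal and light direction (ignoring shadows and specularities), then the image under any light is a linear combination of images under 3 linearly independent source directions. A minimal sketch, with all quantities synthetic:

```python
import numpy as np

# Lambertian toy: pixel i has albedo a_i and unit normal n_i;
# its intensity under light direction l is a_i * (n_i . l)
rng = np.random.default_rng(2)
npix = 100
albedo = rng.uniform(0.2, 1.0, size=npix)
normals = rng.normal(size=(npix, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

def render(light):
    return albedo * (normals @ light)

# three linearly independent source directions -> three basis images
L = np.eye(3)
basis = np.stack([render(L[i]) for i in range(3)], axis=1)  # npix x 3

# any new light is a linear combination of the basis lights,
# so its image is the same combination of the basis images
new_light = np.array([0.3, -0.5, 0.8])
coeffs = np.linalg.solve(L.T, new_light)
img_direct = render(new_light)
img_from_basis = basis @ coeffs
```

Real objects break the linearity through attached shadows (the clamp at zero) and specular reflection, which is why this is only a starting point for an analytic illumination model.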
Next steps: "go deep" with R-CNN
60,000 images of 20 categories