
Internet Vision - Lecture 3






Presentation Transcript


  1. Internet Vision - Lecture 3 Tamara Berg Sept 10

  2. New Lecture Time Mondays, 10:00am-12:30pm in 2311. Monday (9/15) we will have a general Computer Vision & Machine Learning review. Please look at the papers and decide which one you want to present by Monday • Read topics/titles/abstracts to get an idea of which papers you are interested in

  3. Thanks to Lalonde et al. for providing slides!

  4. Algorithm Outline

  5. Inserting objects into images Have an image and want to add realistic looking objects to that image

  6. Inserting objects into images User picks a location where they want to insert an object

  7. Inserting objects into images Based on some properties calculated from the image, possible objects are presented.

  8. Inserting objects into images User selects which object to insert and the object is placed in the scene at the correct scale for the location
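Placing the object "at the correct scale for the location" falls out of a simple ground-plane camera model. The sketch below is an assumption on my part (the paper's exact formulation may differ): with image rows measured upward from the image bottom, the pixel height of an inserted object is proportional to the distance from its ground-contact row to the horizon.

```python
def pixel_height_at_location(true_height, camera_height, v_horizon, v_bottom):
    """Pixel height an object of `true_height` metres should be drawn at
    when its ground-contact point sits at row `v_bottom`.

    Assumes a roughly level camera and a flat ground plane, with image
    rows measured upward from the image bottom; inverts the relation
    H / camera_height = h_pixels / (v_horizon - v_bottom).
    """
    return true_height * (v_horizon - v_bottom) / camera_height
```

An object whose base is placed exactly on the horizon gets pixel height 0, i.e. it vanishes at infinity, which is a quick sanity check on the model.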

  9. Inserting objects into images – Possible approaches
• Insert a clip art object with some idea of the environment
• Insert a clip art object
• Insert a rendered object with a full model of the environment

  10. Some objects will be easy to insert because they already “fit” into the scene

  11. Collect a large database of objects. Let the computer decide which examples are easy to insert. Allow the user to select only among those.

  12. When will an object “fit”? 1.) When the lighting conditions of the scene and object are similar 2.) When the camera pose of the scene & object match

  13. 2D vs 3D Use 3D information for: 1.) Annotating objects in the clip-art library with camera pose 2.) Estimating the camera pose in the query image 3.) Computing illumination context in both library & query images

  14. Phase 1 - Database Annotation For each object we want: • Estimate of its true size and the camera pose it was captured under • Estimate of the lighting conditions it was captured under

  15. Phase 1 - Database Annotation: Estimate object size. Objects closer to the camera appear larger than objects farther from the camera

  16. Phase 1 - Database Annotation: Estimate object size. *If* you know the camera pose, then you can estimate the real height of an object from its location in the image and its pixel height
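The relation alluded to on this slide can be sketched in code. This is the standard ground-plane approximation (rows measured upward from the image bottom, roughly level camera); the function name and coordinate conventions are mine, not the paper's:

```python
def estimate_object_height(camera_height, v_horizon, v_bottom, v_top):
    """Estimate an object's real-world height from one image.

    Assumes the object rests on a flat ground plane, the camera is
    roughly level at `camera_height` metres, and image rows are measured
    upward from the image bottom. Then
    H / camera_height = (v_top - v_bottom) / (v_horizon - v_bottom).
    """
    pixel_height = v_top - v_bottom
    return camera_height * pixel_height / (v_horizon - v_bottom)
```

An object whose top exactly reaches the horizon row comes out at the camera's own height, another quick sanity check on the model.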

  17. Phase 1 - Database Annotation: Estimate object size. Annotate objects with their true heights and resize examples to a common reference size

  18. Phase 1 - Database Annotation: Estimate object size & camera pose. We don't know the camera pose or object heights! Trick - infer camera pose & object heights across all object classes in the database given only the height distribution for one class

  19. Phase 1 - Database Annotation: Estimate object size & camera pose. Start with known heights for people

  20. Phase 1 - Database Annotation: Estimate object size & camera pose. Estimate camera pose for images with multiple people
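One way slide 20's step could work in practice: each detected person gives one linear constraint relating pixel height to the ground-contact row, so a handful of people in a single image pin down the horizon and camera height by least squares. This is my own sketch of the idea, not the paper's estimator:

```python
import numpy as np

def fit_horizon_and_camera_height(v_bottoms, pixel_heights, true_height=1.7):
    """Fit the horizon row and camera height from people in one image.

    Assumes every person stands on the ground, is about `true_height`
    metres tall, and image rows are measured upward from the image
    bottom. The ground-plane model h_i = (H / y_c) * (v0 - vb_i) is
    linear in a = (H / y_c) * v0 and b = H / y_c, so we solve for
    (a, b) by least squares and read off v0 and y_c.
    """
    vb = np.asarray(v_bottoms, dtype=float)
    h = np.asarray(pixel_heights, dtype=float)
    A = np.stack([np.ones_like(vb), -vb], axis=1)  # columns for a and b
    (a, b), *_ = np.linalg.lstsq(A, h, rcond=None)
    return a / b, true_height / b  # horizon row v0, camera height y_c
```

With more people the fit becomes overdetermined, which is exactly why images with multiple people are the useful ones here.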

  21. Phase 1 - Database Annotation: Estimate object size & camera pose. Use these images to estimate a prior over the distribution of poses. How do people usually take pictures? Standing on the ground at eye level.

  22. Phase 1 - Database Annotation: Estimate object size & camera pose. Use the learned pose distribution to estimate heights of other object categories that appear with people. Iteratively use these categories to learn more categories. Annotate all objects in the database with their true size and originating camera pose.

  23. Phase 1 - Database Annotation: Estimate object size & camera pose

  24. Phase 1 - Database Annotation For each object we want: • Estimate of its true size and the camera pose it was captured under • Estimate of the lighting conditions it was captured under

  25. Phase 1 - Database Annotation: Estimate lighting conditions. Estimate which pixels are ground, vertical, or sky. Black box for now (we'll cover this paper later in the course)

  26. Phase 1 - Database Annotation: Estimate lighting conditions. Distribution of pixel colors
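A minimal stand-in for "distribution of pixel colors": build a normalized joint RGB histogram per region (ground, vertical, sky) and compare two images' corresponding regions with a chi-squared distance. The paper's actual illumination representation may differ; the bin count and distance choice here are assumptions for illustration:

```python
import numpy as np

def region_color_histogram(pixels, bins=8):
    """Normalized joint RGB histogram of one region's pixels.

    `pixels` is an (N, 3) array-like of RGB values in [0, 256).
    """
    hist, _ = np.histogramdd(np.asarray(pixels), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist.ravel() / max(hist.sum(), 1)

def illumination_distance(hist_a, hist_b):
    """Chi-squared distance between two normalized histograms:
    0 for identical distributions, larger for dissimilar lighting."""
    denom = hist_a + hist_b
    mask = denom > 0
    return 0.5 * float(np.sum((hist_a[mask] - hist_b[mask]) ** 2 / denom[mask]))
```

Computing one such histogram per region gives each image an illumination context that database objects can be matched against.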

  27. Phase 2 – Object Insertion Query Image

  28. Phase 2 – Object Insertion User specifies the horizon line – used to calculate the camera pose with respect to the ground plane (horizon lower in the image -> camera tilted up, higher -> tilted down). Illumination context is calculated in the same way as for the database images.

  29. Phase 2 – Object Insertion Insert an object into the scene whose lighting and camera pose match the query image
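Ranking candidates by "matching lighting and camera pose" could be as simple as a weighted sum of a pose term and an illumination term. The record format, the weights, and the chi-squared illumination distance below are all my own placeholders, not the paper's scoring function:

```python
def chi2(p, q):
    """Chi-squared distance between two equal-length histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

def object_match_score(obj, query, w_pose=1.0, w_light=1.0):
    """Lower is better. `obj` and `query` are dicts with a hypothetical
    'camera_height' field (a crude pose proxy) and an 'illum_hist'
    color histogram (the illumination context)."""
    pose_term = abs(obj["camera_height"] - query["camera_height"])
    light_term = chi2(obj["illum_hist"], query["illum_hist"])
    return w_pose * pose_term + w_light * light_term
```

Database objects would then be sorted by this score, and only the best matches shown to the user in the interface (slides 11 and 36).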

  30. Phase 2 – Object Insertion But wait – it still looks funny!

  31. Phase 2 – Object Insertion Shadows are important!

  32. Phase 2 – Object Insertion

  33. Phase 2 – Object Insertion

  34. Phase 2 – Object Insertion

  35. Phase 2 – Object Insertion Shadow Transfer
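As a rough illustration of shadow transfer, one simple model darkens the target image multiplicatively wherever the transferred shadow falls, using the shadowed-to-unshadowed intensity ratio observed around the source object. This is a simplification I'm assuming for illustration; the paper's method is more involved:

```python
import numpy as np

def transfer_shadow(target, shadow_ratio, mask):
    """Composite a transferred shadow into a single-channel float image.

    `shadow_ratio` holds the per-pixel shadowed/unshadowed intensity
    ratio (values <= 1 darken); `mask` marks where the shadow lands in
    the target. A purely multiplicative model, chosen for simplicity.
    """
    out = np.asarray(target, dtype=float).copy()
    ratio = np.asarray(shadow_ratio, dtype=float)
    m = np.asarray(mask, dtype=bool)
    out[m] = out[m] * ratio[m]
    return np.clip(out, 0.0, 1.0)
```

Because the shadow is a ratio rather than a fixed color, it adapts to the brightness of whatever ground it falls on, which is what makes the inserted object look anchored to the scene.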

  36. Categorize images for easy selection in the user interface
