Novel view synthesis from still pictures

Presentation Transcript


  1. Novel view synthesis from still pictures by Frédéric ABAD, under the supervision of Olivier FAUGERAS¹, Luc ROBERT² and Imad ZOGHLAMI² (¹ ROBOTVIS Team, INRIA Sophia Antipolis; ² REALVIZ SA)

  2. Novel view synthesis • Given data: • Few reference photographs • Reference camera calibration • Objective: • Photo-realistic image generation • Free virtual camera motion • In particular: correct handling of parallax and image resolution

  3. Novel view synthesis • Usual approaches: • Model-based rendering (light simulation with mathematical models) • Image-based rendering (image interpolation) • Our approach: • Hybrid image-model based rendering (texture mapping)

  4. Our approach • Based on a hybrid scene representation • Rough 3D model + few images (reference images and masks) • Layer factorization • Rendering engine (main processing step) • View-dependent texture mapping • Double-layered structure • Refinement step (post-processing step) • Rendering errors occur when the 3D model is too rough • Mask extraction (pre-processing step) • Segmentation of the layers in the reference images

  5. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  6. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  7. Scene representation • Hybrid representation: • Few reference images • Rough 3D model (built by image-based modeling) • 3D structure decomposed into layers • Binary layer masks extracted from the reference images

  8. Scene representation

  9. Scene representation (example) • Reference images

  10. Scene representation (example) • 3D model

  11. Scene representation (example) • Layer map

  12. Scene representation (example) • Masks extracted from reference image #1

  13. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  14. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  15. Rendering engine • View-dependent texture mapping [Debevec:96] • Efficient combination of the different reference images with respect to the virtual viewpoint • Optimal image resolution • Double-layered structure, three steps: • Independent rendering of each geometric layer with the three best reference textures • Intra-layer compositing (for VDTM) • Inter-layer compositing (for occlusion processing)

  16. View-dependent texture mapping • Figure: basic texture mapping vs. reference-image weighting
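
The weighting idea can be sketched as follows. This is a minimal illustration assuming per-polygon viewing directions; the inverse-angle rule below is one common VDTM choice, not necessarily the exact formula used in this work:

```python
import numpy as np

def vdtm_weights(virtual_dir, ref_dirs, eps=1e-6):
    """Blending weights for view-dependent texture mapping.

    virtual_dir : (3,) unit direction from the surface point (or
                  polygon) towards the virtual camera.
    ref_dirs    : (n, 3) unit directions towards the n reference
                  cameras.
    References seen from an angle close to the virtual viewpoint get
    the largest weight (inverse-angle rule, a common VDTM choice --
    an assumption, not necessarily the thesis' exact formula).
    """
    angles = np.arccos(np.clip(ref_dirs @ virtual_dir, -1.0, 1.0))
    w = 1.0 / (angles + eps)
    return w / w.sum()
```

Keeping only the three best reference textures, as on slide 15, amounts to zeroing all but the three largest weights before normalizing.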

  17. Double-layered structure • Figure: intra-layer compositing within each layer, then inter-layer compositing between layers
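
Assuming each geometric layer has already been rendered into an RGB image plus a binary mask in the virtual view (intra-layer compositing being the weighted blend sketched above), inter-layer compositing reduces to a standard back-to-front 'over' operation. A minimal sketch:

```python
import numpy as np

def composite_layers(layers):
    """Inter-layer compositing by back-to-front 'over'.

    layers : list of (rgb, mask) pairs ordered from the farthest
             layer to the nearest; rgb is (h, w, 3) float, mask is
             (h, w) in {0, 1} (the layer's binary mask re-projected
             into the virtual view).
    Nearer layers overwrite farther ones wherever their mask is set,
    which is how occlusions are processed.
    """
    out = np.zeros_like(layers[0][0])
    for rgb, mask in layers:          # far to near
        a = mask[..., None]
        out = a * rgb + (1.0 - a) * out
    return out
```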

  18. Rendering engine (example) • Results: hole-filling by VDTM

  19. Rendering engine (example) • Results: generated movie

  20. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  21. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  22. Refinement step • Rendering errors occur with basic texture mapping if the 3D model is too rough ('geometric rendering errors', or GREs) • GREs cause 'ghosting artefacts' with view-dependent texture mapping

  23. Refinement step • Origin of the Geometric Rendering Errors

  24. Refinement step • Origin of the ghosting artefacts

  25. Refinement step • Our correcting approach: 1) Detect GREs in auxiliary reference images 2) Propagate them into newly generated images 3) Correct them by image morphing

  26. Our correcting approach • Step 1: detect GREs in an auxiliary image • Model-based stereo [Debevec:96]
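
Model-based stereo renders a reference image through the rough model into the auxiliary viewpoint; the residual displacements between this prediction and the real auxiliary image are the GREs. A minimal sketch of the matching half only (the rendering step is omitted), with a plain horizontal SSD search standing in for the search along the epipolar line:

```python
import numpy as np

def detect_gre(predicted, observed, x, y, half=3, search=8):
    """Residual displacement of one pixel between the model-predicted
    image and the real auxiliary image (grayscale float arrays).

    Compares the (2*half+1)^2 patch around (x, y) in `predicted`
    with horizontally shifted patches of `observed` (SSD score);
    assumes the patches stay inside the image.  A non-zero result
    marks a geometric rendering error at that pixel.
    """
    ref = predicted[y - half:y + half + 1, x - half:x + half + 1]
    best_ssd, best_dx = np.inf, 0
    for dx in range(-search, search + 1):
        cand = observed[y - half:y + half + 1,
                        x + dx - half:x + dx + half + 1]
        ssd = np.sum((ref - cand) ** 2)
        if ssd < best_ssd:
            best_ssd, best_dx = ssd, dx
    return best_dx
```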

  27. Our correcting approach • Step 2: GRE propagation by point prediction • Figure: the searched point in the new view is predicted from the known points in the other views

  28. Point prediction methods • Example 1: epipolar transfer
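
Epipolar transfer predicts the point in the new image as the intersection of the two epipolar lines induced by the known points. A minimal sketch in homogeneous coordinates, assuming fundamental matrices F13 and F23 oriented so that they map a point of image 1 (resp. 2) to its epipolar line in image 3:

```python
import numpy as np

def epipolar_transfer(x1, x2, F13, F23):
    """Predict the point in image 3 from its matches x1, x2 in
    images 1 and 2 (homogeneous 2D points, shape (3,)).

    The intersection of two lines is their cross product in
    homogeneous coordinates.  The construction degenerates when the
    two epipolar lines are near-parallel, which happens for points
    close to the trifocal plane -- the instability noted on slide 32.
    """
    l1 = F13 @ x1            # epipolar line of x1 in image 3
    l2 = F23 @ x2            # epipolar line of x2 in image 3
    x3 = np.cross(l1, l2)    # line intersection
    return x3 / x3[2]        # normalized homogeneous point
```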

  29. Point prediction methods • Example 2: Shashua’s cross-ratio method [Shashua:93]

  30. Point prediction methods • Other point prediction methods: • 3D point reconstruction and projection • Trifocal transfer • Compact cross-ratio method • Irani’s parallax-based multi-frame rigidity constraint method
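
The first alternative, 3D point reconstruction and projection, is easy to make concrete: triangulate the point from the two known views, then project it with the third camera. A minimal sketch using linear (DLT) triangulation, assuming the 3x4 projection matrices of the calibrated reference cameras:

```python
import numpy as np

def triangulate_and_project(x1, x2, P1, P2, P3):
    """Predict the point in image 3 by reconstruction/projection.

    x1, x2     : pixel coordinates (u, v) in images 1 and 2
    P1, P2, P3 : 3x4 camera projection matrices
    Each view contributes two rows to the homogeneous linear system
    A X = 0; the 3D point X is the right singular vector associated
    with the smallest singular value.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # homogeneous 3D point
    x3 = P3 @ X                   # project into image 3
    return x3[:2] / x3[2]
```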

  31. Our correcting approach • Step 3: correct the GREs by image morphing
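
A minimal sketch of the warping half of this step, assuming a dense displacement field has already been interpolated from the sparse propagated GRE vectors (the scattered-data interpolation itself is not shown); OpenCV's remap performs the backward warp:

```python
import cv2
import numpy as np

def deghost(rendered, dx, dy):
    """Morph the ghosted rendering with a dense displacement field.

    dx, dy : (h, w) float32 maps interpolated from the sparse GRE
             displacement vectors (hypothetical inputs).
    cv2.remap is a backward warp: output pixel (x, y) is sampled at
    (x + dx, y + dy) in the input image.
    """
    h, w = rendered.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(rendered, xs + dx, ys + dy, cv2.INTER_LINEAR)
```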

  32. Our correcting approach • Experimental comparison of point prediction methods: • Epipolar transfer: simplest implementation, but imprecise and unstable close to the trifocal plane • Irani's approach: complex, imprecise and unstable • Cross-ratio approaches: simple, precise and stable

  33. Our correcting approach • Experimental application to deghosting

  34. Experimental application to deghosting • Before deghosting

  35. Experimental application to deghosting • After deghosting

  36. Experimental application to deghosting • Comparison: before deghosting vs. after deghosting

  37. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  38. Our approach • Hybrid scene representation • Rendering engine (main processing step) • Refinement step (post-processing step) • Mask extraction (pre-processing step)

  39. Mask extraction • Extract the layer masks from the reference images: reference image Ii + layer Cj → mask Mij

  40. Mask extraction • Region-based image segmentation: pixel labelling by energy minimization • Energies • Optimization techniques

  41. Mask extraction • Region-based image segmentation: pixel labelling by energy minimization • Energies • Optimization techniques

  42. Mask extraction • Energies: Data attachment term + Regularization term

  43. Energies • Data attachment term • Ensures the adaptation of the labelling to the observed data in the image • Inversely related to the labelling likelihood (typically a negative log-likelihood)

  44. Data attachment term • Usual segmentation criteria: • Luminance: luma-key • Color: chroma-key • Texture: texture-key • Emphasis on a new geometric criterion: planar-key

  45. Regularization term • Ensures a stable and unique solution • 'Markov Random Field' prior • 4-connectivity neighborhood • Second-order cliques • Generalized Potts model potential function
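
Combining slides 42-45, the labelling energy has the form E(L) = Sum_p D(L_p, p) + lambda * Sum_(p,q) V(L_p, L_q), where V is the Potts potential over pairs of 4-neighbours. A minimal sketch that only evaluates this energy (a uniform Potts potential is used here; the generalized model allows per-clique weights), leaving aside the optimization techniques of slide 41:

```python
import numpy as np

def labelling_energy(labels, data_cost, lam=1.0):
    """Energy of a pixel labelling on a 4-connected grid.

    labels    : (h, w) integer label map
    data_cost : (h, w, n_labels), data_cost[y, x, l] = D(l, p)
    lam       : regularization weight
    The smoothness term is a Potts potential on second-order cliques
    (pairs of 4-neighbours): 0 when the two labels agree, lam
    otherwise.
    """
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    data = data_cost[ys, xs, labels].sum()            # attachment term
    smooth = (labels[:, 1:] != labels[:, :-1]).sum() \
           + (labels[1:, :] != labels[:-1, :]).sum()  # horiz. + vert. cliques
    return data + lam * smooth
```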

  46. Planar-key • Exploits geometric a priori knowledge: the scene is made of planar patches (3D model = triangular mesh) • 1 label = 1 plane = 1 homography (between the image to segment and an auxiliary image) • Data attachment energy: dissimilarity between the labelled pixel and its image under the homography associated with the label
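
This data term can be computed for a whole label at once by warping the auxiliary image with the label's homography and comparing the result with the image to segment. A minimal sketch, with plain absolute colour difference standing in for the dissimilarity measure (slide 49 points to more robust choices); each Hj is assumed to map pixels of the main image onto the auxiliary image:

```python
import cv2
import numpy as np

def planar_key_cost(main_img, aux_img, homographies):
    """Planar-key data attachment costs.

    main_img     : image to segment, (h, w, 3)
    aux_img      : auxiliary reference image
    homographies : list of 3x3 matrices; homographies[j] maps a
                   pixel p of the main image onto the auxiliary
                   image for plane (label) j.
    Returns (h, w, n_labels): for each pixel p and label j, the
    dissimilarity between main_img(p) and aux_img(Hj . p).
    """
    h, w = main_img.shape[:2]
    costs = []
    for H in homographies:
        # warped(p) = aux_img(H @ p); WARP_INVERSE_MAP applies H as-is
        warped = cv2.warpPerspective(
            aux_img, H.astype(np.float64), (w, h),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        diff = np.abs(main_img.astype(float) - warped.astype(float))
        costs.append(diff.sum(axis=2))   # per-pixel dissimilarity
    return np.stack(costs, axis=-1)
```

These per-label costs plug directly into the data_cost array of the energy sketch above.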

  47. Planar-key • For a pixel p lying on plane C: Dissimilarity(p, HC·p) < Dissimilarity(p, HA·p), i.e. D(C,p) < D(A,p), and Dissimilarity(p, HC·p) < Dissimilarity(p, HB·p), i.e. D(C,p) < D(B,p)

  48. Planar-key • Example (figure panels): main image, auxiliary image, structure of the scene, segmented image

  49. Planar-key • The technique is more complex than it seems: • Dissimilarity measures • Robustness to photometric discrepancies • Robustness to geometric inaccuracy • Occlusion shadow error management

  50. Dissimilarity measures
