Exploitation of 3D Video Technologies Takashi Matsuyama Graduate School of Informatics, Kyoto University 12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS’04)
Outline • Introduction • 3D Video Generation • Deformable Mesh Model • Texture Mapping Algorithm • Editing and Visualization System • Conclusion
Introduction • PC cluster for real-time active 3D object shape reconstruction
3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping
Silhouette Volume Intersection • Plane-to-Plane Perspective Projection • 3D voxel space is partitioned into a group of parallel planes
Plane-to-Plane Perspective Projection • Project the object silhouette observed by each camera onto a common base plane • Project each base plane silhouette onto the other parallel planes • Compute 2D intersection of all silhouettes projected on each plane
Linearized Plane-to-Plane Perspective Projection (LPPPP) algorithm
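A minimal sketch of the plane-by-plane silhouette intersection described above (function and variable names are illustrative; OpenCV is assumed for the homography warps; LPPPP accelerates the base-plane-to-parallel-plane step that this naive version repeats per plane):

```python
import numpy as np
import cv2  # assumed available; used only for the homography warps

def plane_based_volume_intersection(silhouettes, homographies, plane_shape):
    """Intersect projected silhouettes on each parallel slicing plane.

    silhouettes  -- list of binary (0/255) masks, one per camera
    homographies -- homographies[k][j]: 3x3 map from camera j's image
                    to slicing plane k (illustrative layout)
    plane_shape  -- (height, width) of each plane's sampling grid
    """
    h, w = plane_shape
    volume = []
    for plane_Hs in homographies:            # one slicing plane at a time
        section = np.full((h, w), 255, np.uint8)
        for sil, H in zip(silhouettes, plane_Hs):
            # Project this camera's silhouette onto the current plane.
            warped = cv2.warpPerspective(sil, H, (w, h),
                                         flags=cv2.INTER_NEAREST)
            section = cv2.bitwise_and(section, warped)  # 2D intersection
        volume.append(section)
    return np.stack(volume)                  # one occupancy slice per plane
```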
Parallel Pipeline Processing on a PC cluster system • [Figure: per-node pipeline stages (Silhouette Extraction, Projection to the Base-Plane, Base-Plane Silhouette Duplication, Object Cross Section Computation), distributed over PCs with and without cameras]
Parallel Pipeline Processing on a PC cluster system • [Graph: average computation time for each pipeline stage]
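The per-node pipeline can be sketched as chained worker processes connected by queues (stage names follow the figure above; everything else, including the use of Python's multiprocessing, is illustrative):

```python
from multiprocessing import Process, Queue

def stage(work_fn, q_in, q_out):
    """Generic pipeline stage: pull an item, process it, pass it on."""
    while (item := q_in.get()) is not None:
        q_out.put(work_fn(item))
    q_out.put(None)  # propagate shutdown to the next stage

def run_pipeline(frames, stage_fns):
    """Run frames through chained stages; frames overlap across stages."""
    queues = [Queue() for _ in range(len(stage_fns) + 1)]
    workers = [Process(target=stage, args=(fn, queues[i], queues[i + 1]))
               for i, fn in enumerate(stage_fns)]
    for w in workers:
        w.start()
    for frame in frames:
        queues[0].put(frame)   # successive frames fill successive stages
    queues[0].put(None)
    results = list(iter(queues[-1].get, None))
    for w in workers:
        w.join()
    return results
```

Here stage_fns would be something like [extract_silhouette, project_to_base_plane, duplicate_base_plane, compute_cross_sections] (hypothetical names), so while one frame is being intersected, the next is already being segmented.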
3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping
Deformable Mesh Model • Dynamic 3D shape reconstruction • Reconstruct the 3D shape for each frame (intra-frame deformation) • Estimate 3D motion by establishing correspondences between frames t and t+1 (inter-frame deformation) • Constraints • Photometric constraint • Silhouette constraint • Smoothness constraint • 3D motion flow constraint • Inertia constraint
Intra-frame deformation • step 1 Convert the voxel representation into a triangle mesh [1]. • step 2 Deform the mesh iteratively: • step 2.1 Compute the force acting on each vertex. • step 2.2 Move each vertex according to the force. • step 2.3 Terminate if all vertex motions are small enough; otherwise go back to step 2.1. [1] Y. Kenmochi, K. Kotani, and A. Imiya. Marching cubes method with connectivity. In Proc. of 1999 International Conference on Image Processing, pages 361–365, Kobe, Japan, Oct. 1999.
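Step 2 is a standard iterative relaxation loop; a schematic version with the force computation left abstract (all names and the step size are illustrative):

```python
import numpy as np

def deform_mesh(vertices, compute_force, step=0.1, eps=1e-3, max_iters=500):
    """Iteratively move vertices along their forces until motion is small.

    vertices      -- (N, 3) array of mesh vertex positions
    compute_force -- callable returning an (N, 3) force per vertex
                     (photometric + smoothness + silhouette terms)
    """
    for _ in range(max_iters):
        forces = compute_force(vertices)        # step 2.1
        vertices = vertices + step * forces     # step 2.2
        if np.linalg.norm(forces, axis=1).max() < eps:
            break                               # step 2.3: converged
    return vertices
```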
Intra-frame deformation • External Force: • satisfies the photometric constraint
Intra-frame deformation • Internal Force: • smoothness constraint • Silhouette Preserving Force: • silhouette constraint • Overall Vertex Force:
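The slide leaves the force expressions blank; a natural form for the overall vertex force, and the usual one for deformable mesh models, is a weighted sum of the three terms (this reconstruction and the coefficient names are assumptions):

```latex
% Intra-frame overall force on vertex v: internal (smoothness),
% external (photometric), and silhouette-preserving terms,
% balanced by coefficients alpha, beta, gamma (assumed form).
F(v) = \alpha\,F_{int}(v) + \beta\,F_{ext}(v) + \gamma\,F_{sil}(v)
```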
Dynamic Shape Recovery • Inter-frame deformation • By deforming the model at time t so that it satisfies the constraints at time t+1, we obtain the shape at t+1 and the motion from t to t+1 simultaneously.
Dynamic Shape Recovery • Drift Force: • 3D motion flow constraint • Inertia Force: • inertia constraint • Overall Vertex Force:
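Correspondingly, the inter-frame overall force plausibly extends the intra-frame sum with the drift and inertia terms (again an assumed reconstruction):

```latex
% Inter-frame overall force: the three intra-frame terms plus the
% drift (3D motion flow) and inertia terms, with balancing
% coefficients delta and epsilon (assumed form).
F(v) = \alpha\,F_{int}(v) + \beta\,F_{ext}(v) + \gamma\,F_{sil}(v)
     + \delta\,F_{drift}(v) + \epsilon\,F_{inertia}(v)
```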
3D Video Generation • Synchronized Multi-View Image Acquisition • Silhouette Extraction • Silhouette Volume Intersection • Surface Shape Computation • Texture Mapping
Viewpoint Independent Patch-Based Method • Select the most “appropriate” camera for each patch • For each patch pi • Compute the locally averaged normal vector Vlmn using the normals of pi and its neighboring patches. • For each camera cj, compute the viewline vector Vcj directing toward the centroid of pi. • Select the camera c* for which the angle between Vlmn and Vcj becomes maximum. • Extract the texture of pi from the image captured by camera c*.
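A compact sketch of this per-patch camera selection (the array layout and names are assumptions):

```python
import numpy as np

def select_camera(patch_normals, neighbor_ids, centroids, cam_positions):
    """Pick the most 'appropriate' camera for each patch.

    patch_normals -- (P, 3) unit normals, one per patch
    neighbor_ids  -- list of neighbor-index lists, one per patch
    centroids     -- (P, 3) patch centroids
    cam_positions -- (C, 3) camera centers
    """
    best = np.empty(len(centroids), dtype=int)
    for i, (c, nbrs) in enumerate(zip(centroids, neighbor_ids)):
        # Locally averaged normal over the patch and its neighbors (Vlmn).
        v_lmn = patch_normals[[i] + list(nbrs)].mean(axis=0)
        v_lmn /= np.linalg.norm(v_lmn)
        # Viewline vectors from each camera toward the patch centroid (Vcj).
        v_c = c - cam_positions
        v_c /= np.linalg.norm(v_c, axis=1, keepdims=True)
        # Maximum angle == minimum cosine between Vlmn and the viewline.
        best[i] = np.argmin(v_c @ v_lmn)
    return best
```

Because Vcj points from the camera toward the patch while the averaged normal points outward, the most frontal camera gives the largest angle, i.e. the smallest cosine.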
Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • Parameters • c: camera • p: patch • n: normal vector • I: RGB value
Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • A depth buffer of cj: Bcj • Records the patch ID and the distance to that patch from cj
Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • Visible Vertex from Camera cj: • The face of pi can be observed from camera cj • Project visible patches onto Bcj • Check the visibility of each vertex using the buffer
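A simplified point-based stand-in for the depth-buffer construction and the per-vertex visibility test (true triangle rasterization is omitted; all names are illustrative):

```python
import numpy as np

def build_depth_buffer(patches, project, image_shape):
    """Rasterize patches into Bcj: a per-pixel (distance, patch ID) buffer.

    project -- maps a 3D point to (u, v, depth) in camera cj's image;
    points sampled on each patch stand in for true rasterization.
    """
    h, w = image_shape
    depth = np.full((h, w), np.inf)
    ids = np.full((h, w), -1, dtype=int)
    for pid, patch_points in enumerate(patches):
        for point in patch_points:
            u, v, d = project(point)
            u, v = int(round(u)), int(round(v))
            if 0 <= v < h and 0 <= u < w and d < depth[v, u]:
                depth[v, u], ids[v, u] = d, pid
    return depth, ids

def vertex_visible(vertex, pid, project, depth, ids, tol=1e-3):
    """A vertex is visible if its own patch wins the depth test there."""
    u, v, d = project(vertex)
    u, v = int(round(u)), int(round(v))
    return ids[v, u] == pid or d <= depth[v, u] + tol
```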
Viewpoint Dependent Vertex-Based Texture Mapping Algorithm • Compute the RGB values of all vertices visible from each camera • Specify the viewpoint eye • For each patch pi, do steps 4 to 9 • If pi is visible ( ), do steps 5 to 9 • Compute the weight • For each vertex of patch pi, do steps 7 to 8 • Compute the normalized weight • Compute the RGB value • Generate the texture of patch pi by linearly interpolating the RGB values of its vertices
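The weight definition is not visible on the slide; a plausible sketch weights each visible camera by how closely its viewing direction agrees with the virtual viewpoint, then blends the per-vertex colors (all names and the weighting scheme are assumptions):

```python
import numpy as np

def blend_vertex_colors(eye_dir, cam_dirs, vertex_rgbs, visible):
    """Blend per-camera vertex colors for one patch seen from `eye`.

    eye_dir     -- unit vector from the patch toward the virtual viewpoint
    cam_dirs    -- (C, 3) unit vectors from the patch toward each camera
    vertex_rgbs -- (C, V, 3) RGB of each patch vertex in each camera image
    visible     -- (C,) bool mask: the patch faces camera cj
    """
    # Weight each camera by its agreement with the viewing direction.
    w = np.clip(cam_dirs @ eye_dir, 0.0, None) * visible
    w_sum = w.sum()
    if w_sum == 0:
        return np.zeros(vertex_rgbs.shape[1:])  # patch unseen; leave black
    w_norm = w / w_sum                          # normalized weights
    # RGB of each vertex: weighted average over the visible cameras.
    return np.tensordot(w_norm, vertex_rgbs, axes=1)
```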
Performance • Viewpoint Independent Patch-Based Method (VIPBM) • Viewpoint Dependent Vertex-Based Texture Mapping Algorithm (VDVBM) • VDVBM-1: including the real image captured by camera cj itself • VDVBM-2: excluding real images • Mesh: converted from voxel data • D-Mesh: after deformation
Performance • [Graph: comparison over frame number]
Editing and Visualization System • Methods to generate camera-works • Key Frame Method • Automatic Camera-Work Generation Method • Virtual Scene Setup • Virtual camera • Background • Object
Key Frame Method • Specify the parameters (positions, rotations of a virtual camera, object, etc.) for arbitrary key frames
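Between key frames, the specified parameters are interpolated; a minimal linear-interpolation sketch (rotations would normally use quaternion slerp; names are illustrative):

```python
import numpy as np

def interpolate_keyframes(key_times, key_params, t):
    """Linearly interpolate camera/object parameters between key frames.

    key_times  -- sorted 1D array of key-frame times
    key_params -- (K, D) array: one parameter vector per key frame
                  (positions, rotations, etc.)
    """
    i = int(np.searchsorted(key_times, t, side="right")) - 1
    i = min(max(i, 0), len(key_times) - 2)
    t0, t1 = key_times[i], key_times[i + 1]
    a = (t - t0) / (t1 - t0)                   # blend factor in [0, 1]
    return (1 - a) * key_params[i] + a * key_params[i + 1]
```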
Automatic Camera-Work Generation Method • Object's parameters (standing human) • Position, Height, Direction • The user only has to specify • Framing of a picture • Appearance of the object from the virtual camera • From these we can compute the virtual camera parameters • Distance between virtual camera and object d • Position of the virtual camera (xc, yc, zc)
Automatic Camera-Work Generation Method • Distance between virtual camera and object d • Position of the virtual camera (xc, yc, zc)
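Under simple pinhole assumptions, the distance d and the camera position follow from the requested framing roughly like this (all symbols and the geometry conventions are illustrative, not the paper's formulas):

```python
import math

def camera_from_framing(obj_pos, obj_height, obj_dir, frame_frac, fov_y, azimuth):
    """Place a virtual camera so the object fills `frame_frac` of the frame.

    obj_pos    -- (x, y, z) of the standing object's base
    obj_height -- object height H
    obj_dir    -- object's facing angle in the world frame (radians)
    frame_frac -- desired fraction of the image height the object spans
    fov_y      -- camera's vertical field of view (radians)
    azimuth    -- desired viewing direction relative to obj_dir (radians)
    """
    # Visible height at distance d is 2*d*tan(fov_y/2); solve
    # H = frame_frac * 2*d*tan(fov_y/2) for the distance d.
    d = obj_height / (2.0 * frame_frac * math.tan(fov_y / 2.0))
    theta = obj_dir + azimuth
    xc = obj_pos[0] + d * math.cos(theta)
    yc = obj_pos[1] + d * math.sin(theta)
    zc = obj_pos[2] + obj_height / 2.0   # aim at the object's middle
    return d, (xc, yc, zc)
```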
Conclusion • A PC cluster system with distributed active cameras for real-time 3D shape reconstruction • Plane-based volume intersection method • Plane-to-Plane Perspective Projection algorithm • Parallel pipeline processing • A dynamic 3D mesh deformation method for obtaining accurate 3D object shape • A texture mapping algorithm for high-fidelity visualization • A user-friendly 3D video editing system