Full Body 3D Scanning Team KIPPA Sam Calabrese, Abhishek Gandhi, Changyin Zhou {smc2171, asg2160, cz2166}@columbia.edu
Outline • Background • Motivation • Our Plan • Data Capture • Data Processing • Result Comparison • Discussion
Background • Laser scanner using time of flight (TOF) • High precision, long range, slow • Laser scanner using triangulation • High precision, shorter range, occlusion problems, slow • Image-based methods using triangulation (motion, stereo, focus/defocus, shading…) • Relatively low precision and resolution • Impose assumptions on the surface • Fast (can be real-time) • … with structured light • Improved precision and resolution • Indoor only
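As a quick reminder of the triangulation geometry these methods share, the standard rectified-stereo relation (a textbook identity, not specific to any scanner used here) ties depth to disparity and shows why image-based precision falls off with range:

```latex
Z = \frac{f\,B}{d}, \qquad \delta Z \approx \frac{Z^{2}}{f\,B}\,\delta d
```

Here Z is depth, f the focal length, B the baseline between the two viewpoints (or between camera and projector for structured light), d the measured disparity, and δd the matching error; the quadratic growth of δZ with Z is the precision limitation noted above.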
Motivation • Many people dream of having an accurate 3D model of themselves • Many applications for a real 3D model (animation, augmented reality, even clothes design…) • Difficulties • Laser scanner – too slow to scan a live person (moving and non-rigid) • Image-based methods – not enough resolution and precision (even with structured light) • Lasers may hurt the eyes
Our Plan • Image-based method to model the head • Laser scanner to capture the body • Proper experimental setup to minimize the model's movement during scanning • Proper post-processing to tolerate slight movement • Software to merge the two together • Map skin and animate afterward
Data Capture – Head • FaceGen: www.facegen.com • Uses face symmetry and a large set of 3D face models (gender, age, race…)
Data Capture – Body • A professional model + a large private room + 4 hours • Tripods to rest the arms on (10 cm lower than the shoulders) • Marked the feet position • A camera to track movement • Scanner set at shoulder height • Distance to the scanner ranges from 2.5 m to 3 m
Data Capture – Body (cont'd) • Four scan positions (front/back × left/right) • Ten targets placed around the model for later registration • Precision set to 2 mm • ≈10 min per scan, but a long time to move the scanner • 90–130 K vertices per range scan
Registration with Cyclone • Body movement between scans causes big problems • We need stronger non-rigid mesh merging methods
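Rigid registration of the kind Cyclone performs can be sketched as point-to-point ICP. The minimal NumPy/SciPy version below is our own illustration, not Cyclone's actual algorithm; it makes clear that each scan gets exactly one rotation R and translation t, which is why movement of a non-rigid subject between scans cannot be compensated:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Align point cloud src (Nx3) to dst (Mx3) with point-to-point ICP."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                  # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```

A non-rigid merge would instead solve for many local transforms (or a deformation field), which is the "stronger" method the slide calls for.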
MeshLab • Initially used to convert PTX to PLY • Problems arose with the original PTX files exported from Cyclone • Used MeshLab to reorient the scans • Used again to clean the data and resurface after VRip surfacing
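PTX from Cyclone is a simple ASCII format, so the PTX-to-PLY step MeshLab handled can be sketched in a few lines. This assumes the basic single-cloud, intensity-only layout (column count, row count, four scanner-pose lines, a 4×4 transform, then one point per line) and is an illustration rather than what MeshLab actually runs:

```python
def ptx_to_ply(ptx_path, ply_path):
    """Convert a single-cloud ASCII PTX file to ASCII PLY (positions only)."""
    with open(ptx_path) as f:
        cols = int(f.readline())
        rows = int(f.readline())
        for _ in range(8):        # scanner position + axes (4 lines) + 4x4 transform (4 lines)
            f.readline()
        pts = []
        for _ in range(cols * rows):
            x, y, z = map(float, f.readline().split()[:3])
            if (x, y, z) != (0.0, 0.0, 0.0):      # PTX encodes missing returns as zeros
                pts.append((x, y, z))
    with open(ply_path, "w") as out:
        out.write("ply\nformat ascii 1.0\n"
                  f"element vertex {len(pts)}\n"
                  "property float x\nproperty float y\nproperty float z\n"
                  "end_header\n")
        out.writelines(f"{x} {y} {z}\n" for x, y, z in pts)
```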
Scanalyze • Used to re-register the range scans after the initial registration errors • Prepares data for VRip directly by saving out .conf and .xf files, which VRip reads to orient the different scans relative to each other while maintaining their original orientation
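The .xf files Scanalyze writes hold one 4×4 homogeneous transform per scan in plain text, and the .conf is the scan list VRip reads. A small helper to load a .xf and move a scan into the shared frame (a sketch under that assumption about the file layout):

```python
import numpy as np

def load_xf(path):
    """Read a .xf file: a 4x4 homogeneous transform stored as whitespace-separated text."""
    M = np.loadtxt(path)
    assert M.shape == (4, 4)
    return M

def apply_xf(M, verts):
    """Apply the transform to an Nx3 array of vertices (row vectors)."""
    homog = np.hstack([verts, np.ones((len(verts), 1))])
    return (homog @ M.T)[:, :3]
```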
VRip • Used to create a surface from the registered point clouds • Uses view direction and ramp weights to compute a confidence for each vertex • Sampled at 0.002 m in each direction per voxel • Used a ramp weight of 0.004 m with a slight increase in the standard weights
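VRip follows the Curless–Levoy volumetric approach: every registered scan deposits a weighted signed distance to its surface into a shared voxel grid, and the final mesh is the zero crossing of the averaged field. A toy NumPy sketch of that accumulation step (illustrative only; VRip's real ramp, carving and confidence handling are more involved), using the 0.004 m ramp mentioned above:

```python
import numpy as np

def accumulate_scan(D, W, signed_dist, weight, ramp=0.004):
    """Fold one scan's signed distances into the running weighted average.

    D, W        : voxel grids holding the current averaged distance and total weight
    signed_dist : this scan's per-voxel signed distance to its surface (metres),
                  measured along the scanner's view direction
    weight      : per-voxel confidence (e.g. from the view direction / surface angle)
    ramp        : only voxels within +/- ramp of the surface are updated
    """
    near = np.abs(signed_dist) < ramp
    d = np.clip(signed_dist, -ramp, ramp)
    W_new = W + np.where(near, weight, 0.0)
    D_new = np.where(near, (W * D + weight * d) / np.maximum(W_new, 1e-12), D)
    return D_new, W_new

# The merged surface is then extracted as the D == 0 isosurface (marching cubes).
```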
PlyCrunch and MeshLab (again) • PlyCrunch is packaged with VRip • Decimated the mesh from 3.5 million polygons to just over 10,000 • The result was full of holes because PlyCrunch deleted triangles but left the vertices in place • MeshLab used to clean and resurface the resulting mesh, filling most of the holes
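Decimation of this order (3.5 M faces down to about 10 K) can be illustrated with grid-based vertex clustering: snap vertices to a coarse grid, merge those that land in the same cell, and drop the triangles that collapse. This is our own illustration of the idea, not PlyCrunch's actual algorithm, and the dropped triangles are exactly where holes like the ones described can appear:

```python
import numpy as np

def cluster_decimate(verts, faces, cell=0.02):
    """Grid-based vertex clustering with cell size `cell` (metres)."""
    keys = np.floor(verts / cell).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    # Representative vertex of each cell = mean of the vertices it absorbed.
    counts = np.bincount(inv, minlength=len(uniq)).astype(float)
    new_verts = np.stack(
        [np.bincount(inv, weights=verts[:, k]) / counts for k in range(3)], axis=1)
    new_faces = inv[faces]                              # remap face indices to cells
    keep = (new_faces[:, 0] != new_faces[:, 1]) & \
           (new_faces[:, 1] != new_faces[:, 2]) & \
           (new_faces[:, 2] != new_faces[:, 0])         # drop collapsed triangles
    return new_verts, new_faces[keep]
```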
3ds Max • Used to finalize the mesh by capping the remaining holes and smoothing the result • Used to attach the head from FaceGen • Attached a skin shader for mental ray, one of the built-in renderers, which uses sub-surface scattering for realism, modified to match the texture from FaceGen • Rigged with a Biped object • Animated with stock motion-capture data • Rendered into animations
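Rigging with a Biped ultimately drives the body mesh through skinning. The common linear-blend-skinning formulation, shown here as a generic NumPy sketch rather than 3ds Max's exact implementation, deforms each vertex by a weighted blend of its bones' transforms; the weights come from the skin binding and the bone matrices are updated every frame by the motion-capture clip:

```python
import numpy as np

def linear_blend_skin(rest_verts, weights, bone_mats):
    """Deform rest-pose vertices by a weighted blend of per-bone transforms.

    rest_verts : (N, 3) rest-pose positions
    weights    : (N, B) per-vertex bone weights, each row summing to 1
    bone_mats  : (B, 4, 4) current bone transforms relative to the rest pose
    """
    homog = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])       # (N, 4)
    per_bone = np.einsum('bij,nj->nbi', bone_mats, homog)[..., :3]       # (N, B, 3)
    return np.einsum('nb,nbi->ni', weights, per_bone)                    # (N, 3)
```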
Result Comparison • We compared our output with that of two professional companies: • Headus – based in Australia • AvatarMe (initially developed by the University of Surrey) • Criteria for comparison: • Resolution of the output • Accuracy of scanned data • Error prevention during scanning • Cost/ease of setup
Resolution of the output • We went for quality through quantity • KIPPA • Number of polygons in each original scan: • Scan 1: 168,855 • Scan 2: 179,868 • Scan 3: 174,686 • Scan 4: 101,010 • Total number of points: 500,000 • Data density: 2 mm • Others • Data density: 4 mm (Headus) • Average number of points: 300,000
Resolution of the output • Side-by-side comparison: AvatarMe vs. KIPPA (image slide)
Accuracy of Scanned Data (smoothness, details captured) • Used more than one tool in the pipeline – polygon counts at each stage: • VRip: 3,858,996 • MeshLab: 3,858,783 • PlyCrunch: 10,019 • MeshLab: 15,056 • 3ds Max: 15,080 • 93,108
Error prevention during scanning & Cost/Ease of setup • KIPPA • We didn't have a pre-defined model/shape against which to map our data • Cheap/simple setup • Separate scanning for head and body • Limited body movement below the neck • Others • WBX Platform (movable platform, cables, etc.) http://www.cyberware.com/documentation/digisize/www/info/WBXPlatform.html
Discussion & Conclusion • Reducing and dealing with motion… • Repositioning the scanner was difficult… • Problems during transformation… • Efficient use of markers… • Ignored hair… • Missed details of hands and toes… To conclude, 3D human scanning was a very interesting problem to work on. We managed to get a satisfying output in a very efficient manner. It was enjoyable learning and using software such as VRip, Cyclone, MeshLab, PlyCrunch and 3ds Max.
References & Resources • ALLEN, P. 2007. 3D Photography, Fall 2007. Class notes on Active 3D Sensing. • ALOIMONOS, Y., AND SPETSAKIS, M. 1989. A unified theory of structure from motion. • BERALDIN, J., BLAIS, F., COURNOYER, L., GODIN, G., AND RIOUX, M. 2000. Active 3D Sensing. Modelli e Metodi per lo studio e la conservazione dell'architettura storica, 22–46. • BLAIS, F., PICARD, M., AND GODIN, G. 2004. Accurate 3D acquisition of freely moving objects. In 3DPVT04, 422–429. • BLAIS, F. 2004. Review of 20 years of range sensor development. Journal of Electronic Imaging 13, 231. • DHOND, U., AND AGGARWAL, J. 1989. Structure from stereo - a review. IEEE Transactions on Systems, Man and Cybernetics 19, 6, 1489–1510. • DU, H., ZOU, D., AND CHEN, Y. Q. 2007. Relative epipolar motion of tracked features for correspondence in binocular stereo. In IEEE International Conference on Computer Vision (ICCV). • NAYAR, S., WATANABE, M., AND NOGUCHI, M. 1996. Real-time focus range sensor. IEEE Transactions on Pattern Analysis and Machine Intelligence 18, 12, 1186–1198. • RUSINKIEWICZ, S., HALL-HOLT, O., AND LEVOY, M. 2002. Real-time 3D model acquisition. Proceedings of the 29th annual conference on Computer graphics and interactive techniques, 438–446. • SCHECHNER, Y., AND KIRYATI, N. 2000. Depth from Defocus vs. Stereo: How Different Really Are They? International Journal of Computer Vision 39, 2, 141–162. • WATANABE, M., AND NAYAR, S. 1998. Rational Filters for Passive Depth from Defocus. International Journal of Computer Vision 27, 3, 203–225. • ZHANG, R., TSAI, P., CRYER, J., AND SHAH, M. 1999. Shape from shading: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 8, 690–706. • FaceGen: http://www.facegen.com, MeshLab: http://meshlab.sourceforge.net/ • VRip, Scanalyze, PlyCrunch: http://graphics.stanford.edu/software/vrip/, 3ds Max: www.autodesk.com/3dsmax • Prometheus: http://personal.ee.surrey.ac.uk/Personal/A.Hilton/research/PrometheusResults/index.html • Headus: www.headus.com/au/3D_scans/index.html • TC Square: http://www.tc2.com/what/bodyscan/index.html • Cornell University Body Scan: http://www.bodyscan.human.cornell.edu/scene0037.html
Thanks to Prof. Allen, Karan, Matei and Paul for your kind help and support. Special thanks to our model, Daniel, for his professionalism, passion and great cooperation.
Any Questions… ~Thank You (team KIPPA)