2D Tracking to 3D Reconstruction of Human Body from Monocular Video. Moin Nabi, Mohammad Rastegari
Introduction to 3D Reconstruction. Approaches: • Stereo [multiple cameras] • Monocular [single camera] (difficult!)
Difficulties of Monocular 3D Reconstruction: local properties are not enough for depth estimation; we need to learn global structure • Overall organization of the image • Contextual information
Difficulties of Monocular 3D Reconstruction: the depth ambiguity problem. A single observation is consistent with innumerable 3D states, so depth must be estimated.
Difficulties of Monocular 3D Reconstruction: the forward-backward ambiguity problem. There are 2^#limbs possible configurations; they can be pruned • with physical constraints • with learning
Applications of Monocular 3D Reconstruction • 3D Motion Capture • 3D Medical Imaging
Applications of Monocular 3D Reconstruction • Human-Computer Interfaces • Video games with greater realism
Problem Background • Humans interpret 2D video easily • 2D video offers limited clues about the actual 3D motion • Goal: reliable 3D reconstruction from standard single-camera input
Workflow of Monocular 3D Reconstruction: Skeleton Extraction → 2D Tracking → 3D Reconstruction → Build Flesh (from 2D to 3D)
Skeleton Extraction • Proposed Skeleton for Human Body
Overview of the approach: 2D Tracking → 3D Reconstruction
Reconstruction of Articulated Objects from Point Correspondences in a Single Uncalibrated Image [Camillo J. Taylor, 2000] • Objective: to recover the configuration of an articulated object from image measurements • Assumptions: scaled orthographic projection (unknown scale); relative lengths of segments in the model are known • Input: correspondences between joints in the model and points in the image • Output: characterization of the set of all possible configurations
Reconstruction of Articulated Objects from Point Correspondences in a Single Uncalibrated Image [Camillo J. Taylor, 2000]
Reconstruction of Articulated Objects from Point Correspondences in a Single Uncalibrated Image [Camillo J. Taylor, 2000] The set of all possible solutions can be characterized by a single scalar parameter, s, and a set of binary flags indicating the direction of each segment. Solutions for various values of the s parameter:
Reconstruction of Articulated Objects from Point Correspondences in a Single Uncalibrated Image [Camillo J. Taylor, 2000] In practice, the policy of choosing the minimum allowable value of the scale parameter as the default usually yields acceptable results, since it reflects the fact that one or more segments of the model are typically quite close to perpendicular to the viewing direction and are therefore not significantly foreshortened. The scalar s was chosen to be the minimum possible value, and the segment directions were specified by the user.
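A minimal Python sketch of this depth-recovery step under the stated assumptions (scaled orthographic projection, known relative segment lengths, given joint correspondences). Function and variable names are illustrative, not from the paper; the sign of each recovered depth difference is exactly the per-segment binary choice discussed above and is left to further constraints or the user.

```python
import numpy as np

def taylor_reconstruct(joints_2d, segments, lengths, s=None):
    """Sketch of Taylor-style depth recovery under scaled orthographic projection.

    joints_2d : (J, 2) array of image coordinates of the model joints
    segments  : list of (i, j) joint-index pairs, one per rigid segment
    lengths   : relative 3D length of each segment (same order as `segments`)
    s         : scale parameter; if None, use the minimum allowable value,
                which makes at least one segment parallel to the image plane
    Returns (s, |dZ| per segment); the sign of each dZ (towards/away from
    the camera) is the remaining binary ambiguity per segment.
    """
    joints_2d = np.asarray(joints_2d, dtype=float)
    lengths = np.asarray(lengths, dtype=float)
    d_uv = np.array([joints_2d[j] - joints_2d[i] for i, j in segments])
    proj_len = np.linalg.norm(d_uv, axis=1)           # foreshortened image length
    if s is None:
        s = np.max(proj_len / lengths)                # minimum allowable scale
    # From (du, dv) = s * (dX, dY) and dX^2 + dY^2 + dZ^2 = l^2:
    dz_sq = lengths ** 2 - (proj_len / s) ** 2
    return s, np.sqrt(np.clip(dz_sq, 0.0, None))
```

Choosing the minimum allowable s forces the most foreshortened segment to lie parallel to the image plane, matching the default policy described above.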
Reconstruction of Articulated Objects from Point Correspondences in a Single Uncalibrated Image [Camillo J. Taylor, 2000] Experimental results:
Bayesian Reconstruction of 3D Human Motion from Single-Camera Video [N. R. Howe, M. E. Leventon, W. T. Freeman, 2001] Motion divided into short movements, informally called snippets. Assign probability to 3D snippets by analyzing knowledge base. Each snippet of 2D observations is matched to the most likely 3D motion. Resulting snippets are stitched together to reconstruct complete movement.
Learning Priors on Human Motion. Bayesian Reconstruction of 3D Human Motion from Single-Camera Video [N. R. Howe, M. E. Leventon, W. T. Freeman, 2001] • Choose the snippet length: long enough to be informative, but short enough to characterize • Collect known 3D motions and form snippets • Group similar movements and assemble a matrix • SVD gives a Gaussian probability cloud that generalizes to similar movements (sketched below)
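A short Python sketch of how such a Gaussian "probability cloud" could be fitted from a matrix of training snippets via the SVD; the function name and data layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def fit_snippet_prior(snippets):
    """Fit a Gaussian cloud to a set of 3D motion snippets.

    snippets : (N, D) matrix, each row one snippet flattened into a vector
               (e.g. joint positions over a few consecutive frames).
    Returns the mean snippet plus the principal directions and per-direction
    variances that define the Gaussian, from the SVD of the centred matrix.
    """
    mean = snippets.mean(axis=0)
    centred = snippets - mean
    # SVD of the centred snippet matrix: rows of Vt are principal directions,
    # singular values give the spread of the training data along each one.
    _, sing_vals, Vt = np.linalg.svd(centred, full_matrices=False)
    variances = sing_vals ** 2 / (len(snippets) - 1)
    return mean, Vt, variances
```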
Posterior Probability. Bayesian Reconstruction of 3D Human Motion from Single-Camera Video [N. R. Howe, M. E. Leventon, W. T. Freeman, 2001] Bayes' law gives the probability of a 3D snippet given the 2D observations: P(snip | obs) = k P(obs | snip) P(snip) • The training database gives the prior P(snip) • Assume a normal distribution of tracking errors to get the likelihood P(obs | snip)
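A hedged sketch of scoring a candidate 3D snippet with this posterior, assuming the Gaussian prior fitted in the previous sketch and independent Gaussian tracking errors; the projection function and noise level are placeholders, not values from the paper.

```python
import numpy as np

def log_posterior(snip_3d, obs_2d, project, prior_mean, prior_dirs, prior_vars,
                  obs_sigma=1.0):
    """Unnormalised log P(snip | obs) = log P(obs | snip) + log P(snip) + const.

    snip_3d   : candidate 3D snippet (flattened vector)
    obs_2d    : observed 2D tracks for the same frames (flattened vector)
    project   : function mapping a 3D snippet to its 2D image measurements
    prior_*   : Gaussian prior as returned by fit_snippet_prior()
    obs_sigma : assumed std-dev of the 2D tracking errors (Gaussian)
    """
    # Likelihood: independent Gaussian tracking errors on each 2D coordinate.
    residual = obs_2d - project(snip_3d)
    log_lik = -0.5 * np.sum(residual ** 2) / obs_sigma ** 2
    # Prior: Gaussian cloud evaluated in the principal subspace of the
    # training snippets (directions with larger variance are penalised less).
    coeffs = prior_dirs @ (snip_3d - prior_mean)
    log_prior = -0.5 * np.sum(coeffs ** 2 / (prior_vars + 1e-9))
    return log_lik + log_prior
```

The snippet maximising this score is the one matched to the 2D observations.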
Stitching Bayesian Reconstruction of 3D Human Motion from Single-Camera Video [N. R. Howe, M. E. Leventon, W. T. Freeman, 2001] Snippets overlap by n frames. Use weighted interpolation for frames of overlapping snippets.
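A minimal sketch of such a weighted interpolation over the overlap, assuming per-frame pose vectors; the linear weight ramp is one simple choice for illustration, not necessarily the authors' exact scheme.

```python
import numpy as np

def stitch(snippet_a, snippet_b, overlap):
    """Blend two consecutive snippets that overlap by `overlap` frames.

    snippet_a, snippet_b : (T, D) arrays of per-frame pose vectors, where the
                           last `overlap` frames of A cover the same time
                           steps as the first `overlap` frames of B.
    In the overlap, each frame is a weighted interpolation: the weight on A
    ramps down from 1 to 0 while the weight on B ramps up from 0 to 1.
    """
    w = np.linspace(1.0, 0.0, overlap)[:, None]
    blended = w * snippet_a[-overlap:] + (1.0 - w) * snippet_b[:overlap]
    return np.vstack([snippet_a[:-overlap], blended, snippet_b[overlap:]])
```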
Bayesian Reconstruction of 3D Human Motion from Single-Camera Video [N. R. Howe, M. E. Leventon, W. T. Freeman, 2001]
Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] • Depth ambiguity: resolved using Taylor's method • Forward-backward ambiguity: prune the possible binary configurations
Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] Forward-backward ambiguity: for any point set X representing a motion, we can represent its binary configuration with respect to the image plane, where 0 means the limb points outwards (away from the image plane) and 1 means the limb points inwards (towards the image plane).
Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] Example for 4 limbs: in this case, limb 1 and limb 2 are both parallel to the image plane. If limb 1 is the root segment, limb 3 points towards the image plane, while limb 4 points away from it. Any infinitesimal rotation of this structure (except for rotations around limb 1 or limb 2) will put it into one of the following four binary configurations: [0, 0, 1, 0], [0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 1, 0]
Limited Domain: Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] • 3D reconstruction in a limited domain • Key-frame selection
Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] Qualitative measure: • Sign of determinant • Hamming distance
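A small Python sketch of this bookkeeping, under the convention stated above (0 = limb points outwards, 1 = inwards). The function names are illustrative, and how the per-limb direction is obtained (e.g. from the sign of the Taylor depth differences) is left abstract.

```python
def binary_configuration(points_inwards):
    """Binary configuration of a pose: one bit per limb,
    0 = limb points outwards (away from the image plane),
    1 = limb points inwards (towards the image plane)."""
    return tuple(1 if inward else 0 for inward in points_inwards)

def hamming_distance(config_a, config_b):
    """Number of limbs whose forward/backward direction differs between
    two binary configurations of the same skeleton."""
    return sum(a != b for a, b in zip(config_a, config_b))
```

For example, the configurations [0, 0, 1, 0] and [0, 1, 1, 0] from the 4-limb example above differ in a single bit, i.e. Hamming distance 1.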
Monocular Reconstruction of 3D Human Motion by Qualitative Selection [M. Eriksson, S. Carlsson, 2004] Experimental results:
Learning to Reconstruct 3D Human Pose and Motion from Silhouettes [A. Agarwal, B. Triggs, 2004] • Recover 3D human body pose from image silhouettes • 3D pose = joint angles • Use either individual images or video sequences
2 Broad Classes of Approaches • Model based approaches • Presuppose an explicitly known parametric body model • Inverting kinematics / Numerical optimization • subcase: Model based tracking • Learning based approaches • Avoid accurate 3D modeling/rendering • e.g. Example based methods
“Model Free” Learning-Based Approach • Recovers 3D pose (joint angles) by direct regression on robust silhouette descriptors • Sparse kernel-based regressor trained using human motion capture data • Advantages: • no need to build an explicit 3D model • easily adapted to different people / appearances • may be more robust than model-based approaches • Disadvantages: • harder to interpret than an explicit model, and may be less accurate
The Basic Idea To learn a compact system that directly outputs pose from an image • Represent the input (image) by a descriptor vector z. • Write the multi-parameter output (pose) as a vector x. • Learn a regressor x = F(z) + ε Note: this assumes a functional relationship between z and x, which might not really be the case.
Why Use Silhouettes ? • Captures most of the available pose information • Can (often) be extracted from real images • Insensitive to colour, texture, clothing • No prior labeling (e.g. of limbs) required Limitations • Artifacts like attached shadows are common • Depth ordering / sidedness information is lost
Ambiguities Which arm / leg is forwards? Front or back view? Where is occluded arm? How much is knee bent? Silhouette-to-pose problem is inherently multi-valued … Single-valued regressors sometimes behave erratically
Shape Context Histograms • Need to capture silhouette shape but be robust against occlusions/segmentation failures • Avoid global descriptors like moments • Use Shape Context Histograms – distributions of local shape context responses
Shape Context Histograms Encode Locality • First 2 principal components of the Shape Context (SC) distribution from the combined training data, with k-means centres superimposed, and an SC distribution from a single silhouette • SCs implicitly encode position on the silhouette: in the average over all human silhouettes, a silhouette-like form is discernible
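A minimal sketch of how such a histogram descriptor could be assembled, assuming the per-point shape contexts and the k-means centres are already computed. Hard nearest-centre assignment is used here for simplicity (a softer weighting over nearby centres is a common refinement), and all names are illustrative.

```python
import numpy as np

def silhouette_descriptor(shape_contexts, centres):
    """Vector-quantise a silhouette's shape contexts into a fixed-length histogram.

    shape_contexts : (P, d) array, one local shape-context vector per sampled
                     boundary point of the silhouette
    centres        : (K, d) k-means centres learned from shape contexts pooled
                     over the training silhouettes
    Returns a normalised K-bin histogram describing the whole silhouette.
    """
    # Assign each local shape context to its nearest centre.
    dists = np.linalg.norm(shape_contexts[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centres)).astype(float)
    return hist / hist.sum()
```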
Regression Model
Predict the output vector x (here 3D human pose), given the input vector z (here a shape context histogram):
x = ∑_{k=1}^{p} a_k φ_k(z) + ε ≡ A f(z) + ε
• {φ_k(z) | k = 1…p}: basis functions
• A ≡ (a_1 a_2 … a_p)
• f(z) = (φ_1(z) φ_2(z) … φ_p(z))^T
• Kernel bases φ_k = K(z, z_k) for given centre points z_k and kernel K, e.g. K(z, z_k) = exp(-β ║z - z_k║²)
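A small sketch of evaluating this regressor with Gaussian kernel bases; the names predict_pose, centres and beta are illustrative placeholders, not identifiers from the paper.

```python
import numpy as np

def predict_pose(z, A, centres, beta):
    """Evaluate x = A f(z) with Gaussian kernel bases phi_k(z) = K(z, z_k).

    z       : input descriptor (e.g. a shape-context histogram), shape (d,)
    A       : (m, p) weight matrix, m = number of pose parameters (joint angles)
    centres : (p, d) kernel centre points z_k chosen from the training inputs
    beta    : kernel width in K(z, z_k) = exp(-beta * ||z - z_k||^2)
    """
    f = np.exp(-beta * np.sum((centres - z) ** 2, axis=1))  # basis responses f(z)
    return A @ f
```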
Regularized Least Squares
A = arg min_A { ∑_{i=1}^{n} ║A f(z_i) − x_i║² + R(A) } = arg min_A { ║A F − X║² + R(A) }
• R(A): regularizer / penalty function to control overfitting
• Ridge regression: R(A) = trace(AᵀA)
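A sketch of the ridge-regression case in closed form, with an explicit penalty weight lam (the slide folds this constant into R(A)); the data layout and names are assumptions for illustration, and the paper's sparser regressor is not reproduced here.

```python
import numpy as np

def fit_ridge(F, X, lam=1e-3):
    """Closed-form ridge solution of  A = argmin_A ||A F - X||^2 + lam * trace(A^T A).

    F   : (p, n) matrix of basis responses, column i is f(z_i)
    X   : (m, n) matrix of training outputs, column i is the pose x_i
    lam : ridge penalty weight controlling overfitting
    Returns the (m, p) weight matrix A = X F^T (F F^T + lam I)^(-1).
    """
    p = F.shape[0]
    # (F F^T + lam I) is symmetric, so solve for A^T and transpose back.
    return np.linalg.solve(F @ F.T + lam * np.eye(p), F @ X.T).T
```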
Spiral Walk Test Sequence Mostly OK, but ~15% “glitches” owing to pose ambiguities
Glitches • Results are OK most of the time, but there are frequent “glitches”: the regressor either chooses the wrong case of an ambiguous pair, or remains undecided • The problem is especially evident for the heading angle, the most visible pose variable