
Online Motion Capture Marker Labeling for Multiple Articulated Interacting Targets

This research paper presents an online marker labeling algorithm for motion capture of multiple interacting subjects. The algorithm reconstructs the 3D positions of markers and is tolerant to sporadic missing markers. It uses both spatial and motion cues to track and label markers, making it suitable for capturing complex interactions. The paper includes evaluations of the proposed algorithm and comparisons with existing approaches.


Presentation Transcript


  1. Online Motion Capture Marker Labeling for Multiple Articulated Interacting Targets. Qian Yu¹,², Qing Li¹, Zhigang Deng¹. ¹University of Houston, ²University of Southern California

  2. Motivations • Passive optical motion capture system • Calibrated cameras • Strongly retro-reflective markers • 3D marker positions are reconstructed automatically • Limitations • Sporadic missing markers • Capturing multiple interacting subjects is challenging due to occlusion and inter-subject contact

  3. Related Work (assuming correct marker labels are given) • Novel human motion synthesis [Rose et al. 98, Pullen and Bregler 02, Arikan and Forsyth 03] • Motion data reuse and editing [Witkin and Popovic 95, Gleicher 98, Arikan and Forsyth 02, Kovar et al. 02, Bregler et al. 02] • Marker labeling for a single subject [Herda et al. 00, Lasenby and Ringer 02] • A multiple-hypothesis tracking algorithm • Recovering joint positions [Silaghi et al. 98, Kirk et al. 05, Lasenby and Ringer 02] • Spectral clustering for one target • Multiple subjects but little interaction [Guerra 05], where the closest-point heuristic still works

  4. Our Work • Track and label markers for multiple interacting subjects • Online marker labeling • Tolerant to sporadic missing markers

  5. Two Cues • Spatial cue (structure model) • Subjects are made of rigid bodies • Rigid bodies are constructed automatically from the input sequences • The relative positions of markers on a rigid body are fixed, so the standard deviation of each marker-pair distance is close to zero • Motion cue (motion model) • Motion smoothness of each marker • Narrows down the legitimate candidates for each marker in the next frame

  6. Online Labeling Algorithm (approach overview) • Model training produces a structure model and a motion model • Each incoming motion capture frame is labeled against both models, and the labeled markers in turn update the models

  7. Structure Model • Two matrices, updated whenever a new frame is available • D, the distance matrix: D_ij is the distance between the i-th marker and the j-th marker • A, the standard deviation matrix: A_ij is the standard deviation of the distance between the i-th marker and the j-th marker
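The two matrices can be maintained incrementally as labeled frames arrive. A minimal sketch, assuming numpy and a Welford-style online update (the class name and API are illustrative, not from the paper):

```python
import numpy as np

class StructureModel:
    """Running pairwise-distance statistics over labeled frames.

    D[i, j] holds the mean distance between markers i and j,
    A[i, j] the standard deviation of that distance.
    """

    def __init__(self, num_markers):
        self.n = 0
        self.D = np.zeros((num_markers, num_markers))    # mean distances
        self._m2 = np.zeros((num_markers, num_markers))  # sum of squared deviations

    def update(self, positions):
        """positions: (num_markers, 3) array of labeled 3D marker positions."""
        diff = positions[:, None, :] - positions[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)             # pairwise distances
        self.n += 1
        delta = dist - self.D
        self.D += delta / self.n
        self._m2 += delta * (dist - self.D)              # Welford's online variance

    @property
    def A(self):
        if self.n < 2:
            return np.zeros_like(self.D)
        return np.sqrt(self._m2 / (self.n - 1))
```

A rigidly linked marker pair then shows A[i, j] near zero, while a pair spanning two rigid bodies accumulates a clearly positive standard deviation.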

  8. Structure Model • A rigid body is typically composed of several markers (three to eight) • How are the matrices used? • A matrix: construct the rigid bodies • D matrix: validate the marker correspondence between consecutive frames
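Using the A matrix to construct rigid bodies can be sketched as a connected-components pass over the "rigidly linked" marker graph; the threshold and minimum body size below are illustrative choices, not values from the paper:

```python
import numpy as np

def find_rigid_bodies(A, max_std=0.005, min_size=3):
    """Cluster markers into rigid bodies: connect marker pairs whose
    inter-marker distance varies by less than max_std across the
    training frames, then take connected components."""
    n = A.shape[0]
    adjacent = (A < max_std)
    np.fill_diagonal(adjacent, False)

    seen, bodies = set(), []
    for start in range(n):
        if start in seen:
            continue
        # breadth-first search over the rigid-link graph
        component, queue = {start}, [start]
        while queue:
            i = queue.pop()
            for j in np.flatnonzero(adjacent[i]):
                if int(j) not in component:
                    component.add(int(j))
                    queue.append(int(j))
        seen |= component
        if len(component) >= min_size:
            bodies.append(sorted(component))
    return bodies
```

Markers that never form a stable distance with anything else (e.g. spurious reflections) end up in components below min_size and are discarded.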

  9. Motion Model • Build the candidate set for each marker • At most K candidates per marker; experimentally K is small (two to four) • Estimate each marker's position in the next frame with a Kalman filter • Legitimate candidates are the detections within a Mahalanobis-distance gate of the estimated position
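Given the Kalman filter's predicted position and covariance for a marker, gating the candidate set might look like the following sketch (the gate value and K are illustrative assumptions):

```python
import numpy as np

def candidate_set(predicted, covariance, detections, k_max=4, gate=3.0):
    """Keep at most k_max detections whose Mahalanobis distance from the
    predicted marker position falls under the gate.

    predicted:  (3,) position predicted by the motion model
    covariance: (3, 3) prediction covariance
    detections: (m, 3) unlabeled 3D points in the current frame
    Returns indices into detections, closest first.
    """
    inv_cov = np.linalg.inv(covariance)
    diff = detections - predicted
    # squared Mahalanobis distance of every detection from the prediction
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    order = np.argsort(d2)
    return [int(i) for i in order if d2[i] < gate**2][:k_max]
```

The small candidate sets (two to four per marker) are what keep the later per-rigid-body assignment enumeration tractable.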

  10. Training Stage • An initial short sequence (the first 50 frames) • No interaction among the subjects • Used to build the structure model

  11. Online Labeling Stage: Fitting-Rigid-Bodies Algorithm • Input • Labels of the previous frames • 3D positions of all markers in the current frame • The trained structure and motion models • Output • The marker labeling of the current frame • The updated structure and motion models

  12. Online Labeling Stage • Fitting-rigid-bodies algorithm (pseudocode) • Construct a candidate set for each marker • Set the flag of every rigid body to "unlabeled" • While at least one rigid body is "unlabeled" do • For each "unlabeled" rigid body r • Enumerate and evaluate all possible assignments (based on the candidate sets) • Keep the optimum assignment MaxScore(r) for rigid body r • End for • Select the rigid body and assignment with the maximum score over all r, argmax_r MaxScore(r) • Set the flag of this rigid body to "labeled" • Update the candidate sets of the other, unassigned markers • Update the structure and motion models • End while
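The greedy loop above can be sketched in Python. This is a minimal reading of the pseudocode, with the scoring function left abstract (names and structure here are illustrative, not the authors' implementation):

```python
def label_frame(bodies, candidates, score):
    """Greedy body-at-a-time labeling.

    bodies:     list of marker-index lists, one per rigid body
    candidates: dict marker -> list of detection indices (from the motion model)
    score:      function(markers, assignment) -> float fitness, higher is better
    Returns dict marker -> detection index.
    """
    labels = {}
    unlabeled = list(range(len(bodies)))
    while unlabeled:
        best = None  # (score, body index, assignment)
        for b in unlabeled:
            markers = bodies[b]
            for combo in _assignments(markers, candidates):
                s = score(markers, combo)
                if best is None or s > best[0]:
                    best = (s, b, combo)
        if best is None:
            break                      # remaining bodies enclose missing markers
        _, b, combo = best
        unlabeled.remove(b)
        for marker, det in zip(bodies[b], combo):
            labels[marker] = det
        used = set(combo)
        for m in candidates:           # prune detections claimed by this body
            candidates[m] = [d for d in candidates[m] if d not in used]
    return labels

def _assignments(markers, candidates):
    """All ways to pick distinct detections, one from each marker's set."""
    def rec(i, used, acc):
        if i == len(markers):
            yield tuple(acc)
            return
        for d in candidates[markers[i]]:
            if d not in used:
                yield from rec(i + 1, used | {d}, acc + [d])
    yield from rec(0, set(), [])
```

Labeling the highest-scoring body first and then pruning its detections from the other candidate sets is what limits error propagation compared with a purely local closest-point rule.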

  13. Online Labeling Stage • Measure how well a rigid body fits a marker assignment: the distances between marker pairs in a rigid body are consistent over a short time span • D_ij: distance between markers i and j in the structure model • d_ij: distance between markers i and j under a candidate assignment • A_ij: standard deviation of the distance between markers i and j in the structure model • L: the number of links in the rigid body
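The slide lists the ingredients but not the formula itself. One plausible consistency score built from exactly these quantities (a sketch, not the paper's exact functional form) is a per-link Gaussian likelihood averaged over the body's links:

$$\mathrm{score}(r) = \frac{1}{L} \sum_{(i,j) \in \mathrm{links}(r)} \exp\!\left( -\frac{(d_{ij} - D_{ij})^2}{2 A_{ij}^2} \right)$$

Each link contributes close to 1 when the assigned distance d_ij matches the learned distance D_ij within its observed variability A_ij, and close to 0 when it does not.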

  14. Missing Marker Recovery • Rigid bodies left "unlabeled" by the fitting-rigid-bodies algorithm are regarded as rigid bodies enclosing missing markers • The displacement vectors between markers in a rigid body are (approximately) fixed • These displacement vectors are known from previous frames • So the position of a missing marker can be estimated from the other markers of its rigid body
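A minimal sketch of this recovery step, assuming a translation-only model between consecutive frames (the paper's recovery may also account for the body's rotation; function name and signature are illustrative):

```python
import numpy as np

def recover_missing(prev, curr, missing):
    """Estimate a missing marker from its rigid-body neighbours, using the
    displacement vectors observed in the previous frame.

    prev:    dict marker -> (3,) position in the previous frame (all known)
    curr:    dict marker -> (3,) position in the current frame (missing absent)
    missing: id of the marker to recover
    Returns the estimated (3,) position.
    """
    estimates = []
    for m, p in curr.items():
        offset = prev[missing] - prev[m]   # displacement inside the rigid body
        estimates.append(p + offset)
    return np.mean(estimates, axis=0)
```

Averaging over all visible neighbours damps the noise of any single marker; with no visible neighbours in the body, nothing can be estimated, which is exactly the limitation slide 20 describes.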

  15. Result & Evaluation • Motion capture sequences with 5 and 10 subjects • Recorded at 120 frames/sec • 45-49 markers per subject • Each subject has a different marker layout and a different total number of markers • The first 50 frames are used for initial training (no interaction) • Frame-by-frame online labeling

  16. Result & Evaluation • [Figure: labeling results on an input frame, comparing the closest-point method with our method]

  17. Result & Evaluation • Our algorithm vs. the closest-point approach • X axis: downsampling rate • Y axis: number of wrong marker labels

  18. Result & Evaluation • Missing marker recovery experiment • Randomly remove several markers in the middle of the mocap sequences (for up to 20 consecutive frames) • X axis: length of the missing span • Y axis: recovery error over the maximum distance within the rigid body

  19. Conclusions • Adaptive • Tracks and labels markers for multiple interacting subjects • Adaptively clusters markers into rigid bodies • Efficient • Online, frame-by-frame marker labeling • Robust • Automatically detects and recovers sporadic missing markers • Little error propagation (compared with the closest-point approach)

  20. Limitations • If most of the markers in a rigid body are missing, they are hard to recover, because the current auto-recovery mechanism depends on the other markers of the same rigid body • Not real time on a single computer • Currently 227 milliseconds per subject per frame on a PC (Intel Xeon 3.0 GHz, 4 GB memory)

  21. Future Work • Introduce specific human motion models • Eliminate candidates that conflict with plausible human motion • Add a sample-based method • Avoid enumerating all labelings for each rigid body • Improve efficiency to achieve real-time performance • GPU-accelerated parallel computing • Test more complex motion capture scenarios

  22. Acknowledgements • Vicon Motion Capture Inc., for providing experimental motion capture data and relevant software support • University of Houston

  23. Thank you!
