Performance Evaluation of Vision-based Real-time Motion Capture Naoto Date, Hiromasa Yoshimoto, Daisaku Arita, Satoshi Yonemoto, Rin-ichiro Taniguchi Kyushu University, Japan
Background of Research • Motion Capture System • Interaction of humans and machines in a virtual space • Remote control of humanoid robots • Creating character actions in 3D animations or video games • Sensor-based Motion Capture System • Uses special sensors (magnetic, infrared, etc.) • The user's actions are restricted by the attached sensors • Vision-based Motion Capture System • No sensor attachments • Multiple cameras and a PC cluster
Key Issue • The features that can be acquired by the vision process are limited. • Heads, hands, and feet can be detected robustly. • How to estimate human postures from such limited visual features? • Three kinds of estimation algorithms • A comparative study of them
System Overview
• 10 cameras for robust motion capture: 1 top-view camera on the ceiling and 9 side-view cameras around the user
• A PC cluster for real-time feature extraction
• First, each camera takes images
• Image features are extracted on the first-stage PCs
• The human CG model is reconstructed from the feature parameters of each image
• Synchronized IEEE 1394 cameras: 15 fps
• CPU: Pentium III 700 MHz x 2, OS: Linux, Network: Gigabit LAN (Myrinet)
[Diagram: cameras → first-stage PCs → CG model reconstruction] (a sketch of the per-frame data flow follows below)
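To make the data flow concrete, here is a minimal sketch of the per-frame messages the first-stage PCs could send to the PC that reconstructs the CG model. The class and field names are hypothetical, not the authors' actual protocol.

```python
# Hypothetical per-frame messages in the distributed pipeline (illustrative
# names): each first-stage PC reduces its camera image to a few feature
# parameters; the integrating PC collects one message per camera per frame.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SideViewFeatures:
    camera_id: int
    frame_id: int
    blob_centroids: List[Tuple[float, float]]  # 2D centroids of skin-color blobs

@dataclass
class TopViewFeatures:
    frame_id: int
    body_direction: float  # orientation of the inertia principal axis (radians)

def integrate(top: TopViewFeatures, sides: List[SideViewFeatures]) -> None:
    """Runs on the integrating PC: triangulate blobs, estimate joints, update the CG model."""
    assert all(s.frame_id == top.frame_id for s in sides)
    # ... 3D blob estimation and posture estimation follow (next slides)
```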
Top-view camera process
• Background subtraction
• Opening operation
• Feature extraction: inertia principal axis
• Detect the body direction and transfer it
(a processing sketch follows below)
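A minimal sketch of the top-view processing chain described above, using OpenCV for illustration; the threshold and kernel size are assumed values, not the authors' parameters.

```python
# Sketch: background subtraction, morphological opening, and the inertia
# principal axis of the silhouette, taken here as the body direction.
import cv2
import numpy as np

def body_direction(frame: np.ndarray, background: np.ndarray) -> float:
    # Background subtraction + binarization (threshold value is illustrative)
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    # Opening operation removes small noise blobs
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Inertia principal axis from second-order central moments
    m = cv2.moments(mask, binaryImage=True)
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return theta  # body direction (radians), transferred to the integrating PC
```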
Side-view camera process • Background subtraction • Calculate centroids of skin-color blobs (a processing sketch follows below)
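A minimal sketch of the side-view processing, assuming a simple HSV skin-color threshold; the exact color model and thresholds used by the authors are not given on the slides.

```python
# Sketch: keep foreground pixels, segment skin color, and return one
# 2D centroid per remaining blob.
import cv2
import numpy as np

def skin_blob_centroids(frame: np.ndarray, background: np.ndarray):
    # Background subtraction keeps only foreground pixels
    fg = cv2.absdiff(frame, background)
    moving = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY) > 30
    # Skin-color segmentation in HSV (threshold values are illustrative)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = ((skin > 0) & moving).astype(np.uint8) * 255
    # Connected components -> centroid of each skin-color blob
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 50]  # drop tiny noise blobs
```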
Estimate 3D positions of skin-color blobs • From all combinations of cameras and blob centroids, we form every possible pair of lines of sight and calculate the intersection point of each pair. If the distance between the two lines is larger than a threshold, we decide that the pair has no intersection point (see the sketch below).
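The slides do not give the formula, but a standard way to implement this test is to take the midpoint of the closest approach of the two lines of sight and reject pairs whose closest distance exceeds the threshold; the threshold value below is illustrative.

```python
# Sketch: closest approach of two 3D lines of sight.
import numpy as np

def intersect_lines(p1, d1, p2, d2, threshold=0.05):
    """p: a point on the line of sight (camera center), d: unit direction."""
    n = np.cross(d1, d2)
    n2 = np.dot(n, n)
    if n2 < 1e-12:
        return None                      # (nearly) parallel lines of sight
    r = p2 - p1
    t = np.dot(np.cross(r, d2), n) / n2  # parameter along line 1
    s = np.dot(np.cross(r, d1), n) / n2  # parameter along line 2
    c1, c2 = p1 + t * d1, p2 + s * d2    # closest points on each line
    if np.linalg.norm(c1 - c2) > threshold:
        return None                      # lines too far apart: no intersection
    return 0.5 * (c1 + c2)               # midpoint as the 3D candidate point
```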
Estimate 3D positions of skin-color blobs • The calculated intersection points are clustered according to their distances from the feature points (head, hands, feet) of the previous frame. • Points in dense clusters are selected as the 3D positions of the true feature points (see the sketch below).
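A minimal sketch of this clustering step, under the assumption that each candidate point is gated by its distance to the nearest previous-frame feature; the gating radius is illustrative.

```python
# Sketch: assign candidates to the nearest previous-frame feature and
# average each cluster to get the new feature positions.
import numpy as np

def update_features(candidates, prev_features, radius=0.2):
    """candidates: list of (3,) arrays; prev_features: dict name -> (3,) array."""
    clusters = {name: [] for name in prev_features}
    for p in candidates:
        name = min(prev_features,
                   key=lambda k: np.linalg.norm(p - prev_features[k]))
        if np.linalg.norm(p - prev_features[name]) < radius:
            clusters[name].append(p)
    # Keep the previous position when no candidate supports a feature
    return {name: np.mean(pts, axis=0) if pts else prev_features[name]
            for name, pts in clusters.items()}
```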
Estimate 3D position of torso
• A method based on a simple body model
• V: the vector perpendicular to both the body axis and the body direction
[Figure: head, right shoulder, torso center point, model lengths L1 and L2]
Estimate 3D positions of elbows and knees • 3 estimation methods • Inverse Kinematics (IK) • Search by Reverse Projection (SRP) • Estimation with Physical Restrictions (EPR)
Estimate 3D positions of elbows and knees • IK • The joint parameter f3 is assumed to be a constant (a sketch follows below)
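The slide only states that f3 is held constant; as an illustration, the sketch below solves a standard two-link IK for an elbow, treating the fixed parameter as the swivel angle of the elbow circle around the shoulder-hand axis. This parameterization is an assumption, not necessarily the authors' formulation.

```python
# Sketch: elbow position from shoulder and hand positions, limb lengths,
# and a fixed swivel angle (two-sphere intersection circle).
import numpy as np

def elbow_ik(shoulder, hand, l_upper, l_fore, swivel=0.0):
    d = hand - shoulder
    raw = np.linalg.norm(d)
    axis = d / raw                                          # unit shoulder-to-hand axis
    dist = np.clip(raw, abs(l_upper - l_fore) + 1e-6, l_upper + l_fore)
    # Circle where the sphere of radius l_upper (around the shoulder) meets
    # the sphere of radius l_fore (around the hand): law of cosines
    a = (l_upper**2 - l_fore**2 + dist**2) / (2.0 * dist)
    r = np.sqrt(max(l_upper**2 - a**2, 0.0))
    # Orthonormal basis of the circle's plane
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, axis)) > 0.9:
        ref = np.array([1.0, 0.0, 0.0])
    u = np.cross(axis, ref); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    center = shoulder + a * axis
    return center + r * (np.cos(swivel) * u + np.sin(swivel) * v)
```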
Estimate 3D positions of elbows and knees
• EPR
• An arm is modeled as two connected springs.
• The ends of the springs are fixed at the shoulder position and at the hand position.
• The elbow position is converged to the point where both springs reach their natural lengths (the natural lengths are the upper-arm and forearm lengths, acquired beforehand).
(a sketch follows below)
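A minimal sketch of the spring relaxation idea; the step size, iteration count, and starting point (the previous frame's elbow) are illustrative choices, not the authors' values.

```python
# Sketch: the elbow is pulled by two springs whose natural lengths are the
# upper-arm and forearm lengths, and is iterated until it settles.
import numpy as np

def elbow_epr(shoulder, hand, l_upper, l_fore, elbow0, steps=200, k=0.1):
    elbow = np.asarray(elbow0, dtype=float).copy()  # start from the previous frame
    for _ in range(steps):
        force = np.zeros(3)
        for anchor, rest in ((shoulder, l_upper), (hand, l_fore)):
            d = elbow - anchor
            length = np.linalg.norm(d)
            if length > 1e-9:
                # Spring force drives the segment toward its natural length
                force += -k * (length - rest) * (d / length)
        elbow += force
    return elbow
```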
Computation time required in each algorithm
• Top-view camera processing: 50 ms
• Side-view camera processing: 26 ms
• 3D blob calculation: 2 ms
• IK calculation: 9 ms
• SRP calculation: 34 ms
• EPR calculation: 22 ms
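As a rough sanity check (my arithmetic, not from the slides), the per-frame budget at 15 fps is about 66.7 ms; assuming side-view processing, 3D blob calculation, and one estimation method run in sequence while the top-view camera runs on its own PC, each method fits within the budget.

```python
# Rough frame-budget check using the timings listed above (assumed to be
# sequential per frame on the integrating side).
frame_budget_ms = 1000.0 / 15                  # ~66.7 ms per frame at 15 fps
for name, est_ms in (("IK", 9), ("SRP", 34), ("EPR", 22)):
    total = 26 + 2 + est_ms                    # side-view + 3D blobs + estimation
    status = "within" if total <= frame_budget_ms else "over"
    print(f"{name}: {total} ms ({status} budget)")
```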
Conclusions • We have constructed a vision-based real-time motion capture system and evaluated its performance • Future work • Improvement of the posture estimation algorithms • Construction of various applications • Human and machine interaction in a virtual space • A humanoid robot remote control system