Multiple View Based 3D Object Classification Using Ensemble Learning of Local Subspaces (ThBT4.3)
Jianing Wu, Kazuhiro Fukui
lacarte@cvlab.cs.tsukuba.ac.jp, kfukui@cs.tsukuba.ac.jp
Graduate School of Systems and Information Engineering, University of Tsukuba (Japan)
Abstract • We propose a statistical method for object classification based on multiple views. • The proposed method is an extension of the Mutual Subspace Method (MSM). • It resolves the problems of previous methods (MSM, KMSM). • We evaluate the classification performance of the proposed method against these previous methods.
Table of Contents • Background • Multi-view classification and the problems existing methods have with nonlinear distributions • The proposed method • Approximation by local subspaces and ensemble learning • Experimental results • Performance comparison using multi-view images of objects • Summary
Multi-view Based Object Classification • Multiple views are more informative for object classification than a single view. • A view-based approach does not need a 3D model for classification. • Subspace-based methods have been proposed for the multi-view framework.
Existing view-based approaches • Mutual Subspace Method (MSM): O. Yamaguchi, K. Fukui, K. Maeda: Face recognition using temporal image sequence. Proc. IEEE 3rd International Conference on Automatic Face and Gesture Recognition, pp. 318-323, 1998. • Pros: Low computation cost. • Cons: Cannot handle nonlinear distributions. (A sketch of the MSM similarity follows below.)
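For context, here is a minimal sketch of the subspace fitting and canonical-angle similarity underlying MSM, written in plain NumPy. The PCA-based basis computation and the mean-squared-cosine score are common choices, not necessarily the exact settings of the cited work.

```python
import numpy as np

def fit_subspace(X, dim):
    """Fit a linear subspace (orthonormal basis) to the rows of X via PCA."""
    Xc = X - X.mean(axis=0)                      # center the feature vectors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T                            # (n_features, dim) orthonormal basis

def msm_similarity(B1, B2):
    """MSM-style similarity: the singular values of B1^T B2 are the cosines
    of the canonical angles between the two subspaces."""
    cosines = np.linalg.svd(B1.T @ B2, compute_uv=False)
    return float(np.mean(cosines ** 2))          # mean squared cosine as the score
```

In MSM both the reference set and the input set are represented this way, and the class with the highest subspace-to-subspace similarity is selected.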
Multi-view Based Object Classification • Feature vectors extracted from multi-view input are likely to have a nonlinear distribution. • A single linear subspace therefore approximates them poorly.
Existing view-based approaches • Kernel Mutual Subspace Method (KMSM): H. Sakano, N. Mukawa: Kernel mutual subspace method for robust facial image recognition. Proc. 4th International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, Vol. 1, pp. 245-248, 2000. • Projects feature vectors into a high-dimensional space to reduce their nonlinearity. • Pros: Can handle nonlinear distributions. • Cons: Consumes more computation time, which grows on the order of N^2 as the amount of learning data increases; several critical parameters must be optimized.
Motivation of the Proposed Method • Approximate the nonlinear distribution. • Divide the original distribution and approximate each subset separately. • Achieve classification performance comparable to KMSM, using less computation. • Perform classification without the kernel framework.
Table of Contents • Background • Multi-view classification and the problems existing methods have with nonlinear distributions • The proposed method • Approximation by local subspaces and ensemble learning • Experimental results • Performance comparison using multi-view images of objects • Summary
The Proposed Method • Divide the nonlinear distribution into several subsets based on Euclidean distance. • The nonlinearity within each subset is weaker. • Approximate each subset with a local subspace (see the sketch below).
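As an illustration, here is a hedged sketch of the local-subspace construction. The slides only state that the division is based on Euclidean distance; the k-means partition below is one plausible reading, and the subspace fitting reuses fit_subspace from the MSM sketch above.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """A small k-means used to divide the distribution by Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # keep the previous center if a cluster happens to become empty
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def local_subspaces(X, n_subsets, dim):
    """Divide X into subsets and fit a local subspace (fit_subspace above) to each."""
    labels = kmeans(X, n_subsets)
    return [fit_subspace(X[labels == j], dim) for j in range(n_subsets)]
```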
Ensemble Classification • The number of subsets and the dimension of each local subspace are parameters that affect classification performance. • We therefore construct local subspaces under multiple combinations of these parameters. • Each combination is treated as a weak classifier, and the weak classifiers are combined by ensemble learning (see the sketch below).
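The following sketch shows one way to realize this ensemble, reusing local_subspaces and msm_similarity from the sketches above. The hyperparameter grids, the max-over-local-subspaces decision rule, and the majority vote are illustrative assumptions, not values from the paper.

```python
def build_weak_classifiers(class_data, subset_grid=(2, 4, 8), dim_grid=(3, 5, 7)):
    """class_data: {class_label: feature matrix}. Each (n_subsets, dim) pair yields
    one dictionary of local subspaces per class, i.e. one weak classifier."""
    classifiers = []
    for n_subsets in subset_grid:
        for dim in dim_grid:
            dictionary = {c: local_subspaces(X, n_subsets, dim)
                          for c, X in class_data.items()}
            classifiers.append(dictionary)
    return classifiers

def weak_classify(dictionary, B_input):
    """Assign the class whose local subspaces are most similar to the input subspace."""
    scores = {c: max(msm_similarity(B, B_input) for B in subspaces)
              for c, subspaces in dictionary.items()}
    return max(scores, key=scores.get)

def ensemble_classify(classifiers, B_input):
    """Combine the weak classifiers by simple majority voting (one possible rule)."""
    votes = [weak_classify(d, B_input) for d in classifiers]
    return max(set(votes), key=votes.count)
```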
The Proposed Method with Weights • Each local subspace carries a weight coefficient based on its classification performance. • This coefficient weights the canonical-angle similarity. • The coefficients are simply the accuracy rate of each local subspace measured in preliminary experiments (see the sketch below).
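A hedged sketch of the weighted variant follows; scaling the similarity score multiplicatively by the accuracy-based coefficient is an assumption about how the weight enters the decision.

```python
def weighted_classify(dictionary, weights, B_input):
    """dictionary: {class: [local subspace bases]};
    weights: {class: [accuracy of each local subspace from a preliminary run]}."""
    scores = {}
    for c, subspaces in dictionary.items():
        # scale each canonical-angle similarity by the subspace's accuracy weight
        scores[c] = max(w * msm_similarity(B, B_input)
                        for B, w in zip(subspaces, weights[c]))
    return max(scores, key=scores.get)
```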
Table of Contents • Background • Multi-view classification and the problems existing methods have with nonlinear distributions • The proposed method • Approximation by local subspaces and ensemble learning • Experimental results • Performance comparison using multi-view images of objects • Summary
Classification Experiment • Compare the classification performance and computation cost of MSM, KMSM, and the proposed method. • Use the multi-view image dataset 'The ETH-80 Image Set': B. Leibe, B. Schiele: Analyzing appearance and contour based methods for object categorization. Proc. CVPR'03, Vol. 2, pp. 409-415, 2003.
Classification Experiment • The dataset contains 8 classes with 10 objects per class. • Each object is imaged from 41 viewpoints. • The feature vector is the image resized to 16x16 pixels (see the sketch below).
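A minimal sketch of this feature extraction, assuming grayscale conversion, unit-norm scaling, and Pillow for image handling (none of which are stated in the slides):

```python
import numpy as np
from PIL import Image

def image_to_feature(path):
    """Resize a view to 16x16 and flatten it into a 256-dimensional feature vector."""
    img = Image.open(path).convert('L').resize((16, 16))  # grayscale conversion is an assumption
    v = np.asarray(img, dtype=np.float64).ravel()
    return v / np.linalg.norm(v)                          # unit-norm scaling is an assumed step
```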
Classification Experiment • 164 images per class are used for learning (to generate the dictionary). • The evaluation input is a set of images from an object unseen during learning (10 images / viewpoints per input). • By exchanging the learning and evaluation data in a leave-one-out manner, the experiment was repeated 1640 times.
Classification Experiment • Classification performance of each method
Classification Experiment • The proposed method improved classification performance over MSM. • The proposed method achieved performance comparable to KMSM. • Introducing a weight for each local subspace further improved the classification performance of the proposed method.
Classification Experiment • Computation cost of each method
Classification Experiment • The proposed method consumes less computation time than KMSM. • Its computation time increases only linearly (order N) as the amount of learning data increases.
Classification Experiment • Classification performance of weak classifiers
Table of Contents • Background • Multi-view classification and the problems existing methods have with nonlinear distributions • The proposed method • Approximation by local subspaces and ensemble learning • Experimental results • Performance comparison using multi-view images of objects • Summary
Summary • We proposed a method that achieves performance comparable to KMSM with less computation. • Classification performance is further improved by introducing a weight for each local subspace. • The advantages of the proposed method are demonstrated in the object classification experiments.
Thank You Multiple View Based 3D Object Classification Using Ensemble Learning of Local Subspaces (ThBT4.3) Jianing Wu, Kazuhiro Fukui