Incremental Learning of Temporally-Coherent Gaussian Mixture Models
Ognjen Arandjelović, Roberto Cipolla
Engineering Department, University of Cambridge

Introduction
• The main contributions of this work:
• A framework for space- and time-efficient incremental learning of GMMs
• The temporal-coherence assumption in GMMs, proposed as valid in many important computer vision applications
• The notion of the Historical GMM, a model fitting a salient portion of the historical data

Algorithm summary
1. Fixed-complexity model update
2. Model 'splitting'
3. LOOP: pair-wise component merging
3.1 Expected description lengths
3.2 Model complexity update
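The skeleton below summarizes this control flow in code: splitting may raise the model order, and the merging loop then prunes it back until no merge shortens the expected description length. All function names are hypothetical and the step bodies are placeholder stubs, not the authors' procedures; the actual steps are described in the panels that follow.

# Control-flow sketch of one incremental TC-GMM update (steps 1-3 above).
# Hypothetical names; step bodies are placeholder stubs only.

def update_fixed_complexity(model, x):
    """Step 1: EM-style parameter update, model complexity held fixed."""
    pass  # see 'GMM update for fixed model complexity'

def postulate_components(model, historical_model):
    """Step 2: model 'splitting' against the Historical GMM."""
    pass  # see 'Model splitting (postulating new components)'

def best_merge_saving(model):
    """Step 3.1: component pair whose merge most shortens the expected
    description length, together with the corresponding saving."""
    return None, 0.0  # see 'Component merging'

def merge_pair(model, pair):
    """Step 3.2: merge the pair, reducing model complexity by one."""
    pass

def incremental_step(model, historical_model, x):
    """Process one novel sample x."""
    update_fixed_complexity(model, x)                  # 1.
    postulate_components(model, historical_model)      # 2.
    while True:                                        # 3. LOOP
        pair, saving = best_merge_saving(model)
        if pair is None or saving <= 0.0:
            break  # no merge shortens the description any further
        merge_pair(model, pair)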
Method details

Temporally coherent GMMs
The assumption of temporal coherence is based on the empirical observation that in many practical applications temporally successive samples from a GMM are correlated. Formally, we assume:

‖x_t − x_{t−1}‖ ∼ p_s

where p_s is a unimodal distribution. A good and practically important example is the variation of face appearance in a continuous video of face motion, see Fig. 1.

Figure 1: TC-GMMs: (left) Average distribution of Euclidean distance between temporally consecutive faces across video sequences of faces in unconstrained motion. (right) A typical sequence projected to the first three principal components estimated from the data, the corresponding MDL EM fit and the component centres visualized as images. On average, over 80% of pairs of successive faces have the highest likelihood of having been generated by the same Gaussian component.

GMM update for fixed model complexity
• Key assumptions:
• Model complexity is fixed
• Model parameters are at a local minimum of the EM algorithm's objective
• Component likelihoods do not change much with novel information (ensured by the model complexity update steps that follow)
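Under these assumptions a single novel point only perturbs the current parameters. Below is a minimal numpy sketch of such a fixed-complexity step, assuming a standard single-sample online EM update of per-component sufficient statistics (one possible body for update_fixed_complexity in the skeleton above); the paper's exact update rule may differ.

import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of N(mean, cov) evaluated at x."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm)

def fixed_complexity_update(weights, means, covs, counts, x):
    """Single-sample update of a K-component GMM, K held fixed.

    counts[k] is the effective number of samples so far soft-assigned
    to component k."""
    K = len(weights)
    # E-step for the novel point: responsibilities under the current model.
    resp = np.array([weights[k] * gaussian_pdf(x, means[k], covs[k])
                     for k in range(K)])
    resp /= resp.sum()
    # M-step: incremental (Welford-style) sufficient-statistics update.
    for k in range(K):
        counts[k] += resp[k]
        lr = resp[k] / counts[k]               # per-component step size
        delta_old = x - means[k]
        means[k] = means[k] + lr * delta_old
        covs[k] = covs[k] + lr * (np.outer(delta_old, x - means[k]) - covs[k])
    weights[:] = counts / counts.sum()

# Example: one update of a 2-component, 2-D model.
weights = np.array([0.5, 0.5])
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
counts = np.array([10.0, 10.0])
fixed_complexity_update(weights, means, covs, counts, np.array([0.3, -0.2]))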
Model splitting (postulating new components)
• Since no historical data is available, a single novel point per se never carries enough information to cause a change in the model order.
• To retain some historical information while keeping memory demands low, we define a Historical GMM:
• Historical GMM: the model corresponding to the portion of historical data seen up to the last model complexity change (increase or decrease)
• The key ideas are then:
• Components of the Historical and the current GMM are in one-to-one correspondence
• New components are postulated by 'subtracting' corresponding components of the two GMMs

Component merging
• Based on the well-known Description Length criterion; a minimal sketch of the pair-wise merge test is given after the Conclusions
• Since historical data is not available, we compute the expected Description Length
• Postulated components are merged analogously to model splitting (step 2)

Experimental evaluation
The method was evaluated on several synthetic data sets (see Fig. 2) and on real face appearance data extracted from realistic videos of random head motion (see Fig. 3).

Figure 2: Synthetic data sets: (1s) Data and the initial model. (2s) MDL-EM GMM fit. (3s) Incremental GMM fit. (4s) Description length of GMMs fitted using EM and the proposed incremental algorithm (of the final GMM estimate).

Figure 3: Face motion data set: Data and (a) MDL-EM GMM fit. (b) Incremental GMM fit. (c) Description length of GMMs fitted using EM and the proposed incremental algorithm (of the final estimate). (d) GMM component centres visualized as images for the MDL-EM fit (top) and the incremental algorithm (bottom).

Conclusions
• Introduced a novel method for incremental learning of GMMs
• Temporally-coherent Gaussian mixtures are a practically interesting class of models
• More evaluation is needed: quantification of temporal coherence, failure modes, etc.
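As referenced in the Component merging panel, the following is a minimal sketch of the step-3 pair-wise merge test. It approximates the expected data-coding cost of a component by its differential entropy under hard assignment, and moment-matches the merged Gaussian; the paper's expected description lengths may be computed differently, so treat this as illustrative only.

import numpy as np

def entropy(cov):
    """Differential entropy of a Gaussian with covariance cov (in nats)."""
    d = cov.shape[0]
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + np.log(np.linalg.det(cov)))

def merge_moments(w1, m1, c1, w2, m2, c2):
    """Moment-matched single Gaussian replacing two mixture components."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    c = (w1 * (c1 + np.outer(m1 - m, m1 - m)) +
         w2 * (c2 + np.outer(m2 - m, m2 - m))) / w
    return w, m, c

def merge_saving(w1, m1, c1, w2, m2, c2, n_total):
    """Approximate description-length saving from merging two components.

    Positive saving: the merged model is expected to describe the data
    more compactly, so the pair should be merged (step 3.2)."""
    d = len(m1)
    params_per_component = d + d * (d + 1) // 2 + 1   # mean + covariance + weight
    n1, n2 = w1 * n_total, w2 * n_total
    # Expected data-coding cost before the merge (entropy + assignment cost).
    before = n1 * (entropy(c1) - np.log(w1)) + n2 * (entropy(c2) - np.log(w2))
    w, m, c = merge_moments(w1, m1, c1, w2, m2, c2)
    after = (n1 + n2) * (entropy(c) - np.log(w))
    # Merging removes one component's worth of parameters from the model cost.
    return (before - after) + 0.5 * params_per_component * np.log(n_total)

The merging loop of step 3 would repeatedly merge the pair with the largest positive saving and terminate once no pair yields a positive saving.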