
Bayesian Frameworks for Deformable Pattern Classification and Retrieval

Presentation Transcript


  1. Bayesian Frameworks for Deformable Pattern Classification and Retrieval by Kwok-Wai Cheung January 1999

  2. Model-Based Scene Analysis [Figure: knowledge (an “H” model) applied to an input image to produce an output; integrated segmentation and recognition.]

  3. Template Matching: Limitation [Figure: reference models “H” and “A” both match the input with a score of 10/12, so simple template matching cannot distinguish them.]

  4. Deformable Models • A deformable model is an abstraction of an object’s shape with the capability to vary that shape, used for modeling non-rigid objects. [Figure: a deformable “6” model.]

  5. A Common Formulation • Four stages: modeling, matching, classification, and retrieval.

  6. Modeling (A Common Formulation) • Model representation Hj • Model shape parameter vector w [Figure: points w1, w2, w3 in the parameter space map to shape instances Hj(w1), Hj(w2), Hj(w3).]

  7. Matching (A Common Formulation) • A search process (multi-criterion optimization): a model deformation criterion and a data mismatch criterion are combined (regularization). [Figure: search path from w0 through w1 to the final estimate wf in parameter space, giving the matched shape Hj(wf).]

  8. Classification (A Common Formulation)

  9. Retrieval (A Common Formulation)

  10. Thesis Overview • Reasoning: Bayesian framework • Approach: deformable models • Problem: deformable pattern classification; application: handwritten digit recognition • Problem: deformable pattern retrieval; application: handwritten word retrieval

  11. Presentation Outline • A Bayesian framework for deformable pattern classification (applied to handwritten character recognition) • Extensions of the framework • A competitive mixture of deformable models • Robust deformable matching • A Bayesian framework for deformable pattern detection (applied to handwritten word retrieval) • Conclusions and future work

  12. A Bayesian Framework for Deformable Pattern Classification with Application to Isolated Handwritten Character Recognition

  13. Bayesian Background [Figure: prior distribution over w, likelihood function, and data distribution over D combine to give the posterior distribution over w.]

  14. Bayesian Formulation Shape Parameter Distribution • Prior distribution (without data) • Likelihood function • Posterior distribution (with data)
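The slide lists the three distributions without their formulas. A reconstruction in standard Bayesian notation (a sketch; Hj denotes the model, w its shape parameters, D the observed image data):

$$ p(\mathbf{w} \mid D, H_j) \;=\; \frac{p(D \mid \mathbf{w}, H_j)\,p(\mathbf{w} \mid H_j)}{p(D \mid H_j)}, $$

with p(w | Hj) the prior (without data), p(D | w, Hj) the likelihood function, and p(w | D, Hj) the posterior (with data).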

  15. Bayesian Inference: Matching • Matching by maximum a posteriori (MAP) estimation. [Figure: the MAP estimate located in parameter space.]
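In the same notation, matching corresponds to the usual MAP estimate (again a sketch of the standard form rather than the thesis’ exact expression):

$$ \mathbf{w}_{\mathrm{MAP}} \;=\; \arg\max_{\mathbf{w}}\; p(D \mid \mathbf{w}, H_j)\,p(\mathbf{w} \mid H_j), $$

i.e. minimizing the combined data mismatch and model deformation criterion of slide 7.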

  16. Bayesian Inference: Classification • Classification by computing the model evidence (Laplacian approximation).
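The model evidence integrates the shape parameters out, and the Laplacian (Laplace) approximation replaces the integral with a Gaussian around the MAP estimate. In a standard form, writing E(w) = -log p(D | w, Hj) - log p(w | Hj) for the combined criterion and k for the dimension of w (a sketch, not necessarily the thesis’ exact expression):

$$ p(D \mid H_j) = \int p(D \mid \mathbf{w}, H_j)\,p(\mathbf{w} \mid H_j)\,d\mathbf{w} \;\approx\; p(D \mid \mathbf{w}_{\mathrm{MAP}}, H_j)\,p(\mathbf{w}_{\mathrm{MAP}} \mid H_j)\,(2\pi)^{k/2}\,\bigl|\nabla\nabla E(\mathbf{w}_{\mathrm{MAP}})\bigr|^{-1/2}. $$

Classification then selects the model with the highest evidence.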

  17. Model Representation • Cubic B-splines for modeling handwritten character shape. • Shape parameter vector { w, A, T } • w = spline control points (local deformation) • {A, T} = affine transform parameters (global deformation) • Mixture of Gaussians for modeling black pixels.
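A minimal sketch of this representation in Python (the function name spline_shape, the uniform knot vector and the control-point values are all illustrative assumptions; the black-pixel Gaussians of the next slide are only hinted at in the final comment):

```python
import numpy as np
from scipy.interpolate import BSpline

def spline_shape(control_points, A, T, n_samples=100):
    """Closed cubic B-spline defined by 2-D control points w (local deformation),
    followed by a global affine transform x -> A x + T (global deformation)."""
    w = np.asarray(control_points, dtype=float)         # shape (K, 2)
    k = 3                                                # cubic
    wc = np.vstack([w, w[:k]])                           # wrap control points to close the curve
    knots = np.arange(len(wc) + k + 1, dtype=float)      # uniform knot vector
    curve = BSpline(knots, wc, k)
    u = np.linspace(knots[k], knots[len(wc)], n_samples)
    pts = curve(u)                                       # (n_samples, 2) points on the curve
    return pts @ np.asarray(A, dtype=float).T + np.asarray(T, dtype=float)

# Hypothetical control points sketching a "6"-like shape.
ctrl = [[0.55, 0.95], [0.25, 0.70], [0.20, 0.30], [0.50, 0.10],
        [0.75, 0.30], [0.60, 0.50], [0.35, 0.45], [0.30, 0.30]]
pts = spline_shape(ctrl, A=np.eye(2), T=np.zeros(2))
# Black pixels would then be modelled by Gaussians centred along `pts`.
```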

  18. Model Representation [Figure: a spline curve with control points numbered 1–8, Gaussian distributions modeling the black pixels along the curve, and the stroke width.]

  19. Criterion Function Formulation • Model deformation criterion: a Mahalanobis distance. • Data mismatch criterion: the negative log of a product of mixture-of-Gaussians densities.
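Written out with generic symbols (the slide omits the formulas, so this is a sketch): the deformation criterion is a Mahalanobis distance of the shape parameters w from a mean shape w̄ with covariance Σ, and the data mismatch is the negative log of the product of mixture-of-Gaussians densities evaluated at the black pixels xi,

$$ E_{\mathrm{def}}(\mathbf{w}) = (\mathbf{w}-\bar{\mathbf{w}})^{\top}\Sigma^{-1}(\mathbf{w}-\bar{\mathbf{w}}), \qquad E_{\mathrm{data}}(\mathbf{w}) = -\sum_{i}\log\sum_{k}\pi_{k}\,\mathcal{N}\!\bigl(\mathbf{x}_{i}\mid\boldsymbol{\mu}_{k}(\mathbf{w}),\,\sigma^{2}I\bigr), $$

where the Gaussian centres μk(w) lie along the spline. The hyperparameters a and b estimated during matching presumably weight these terms: slide 22 suggests a governs how much the model may deform and b relates to the stroke width.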

  20. Matching • MAP estimation for {w, A, T, a, b} using the expectation-maximization (EM) algorithm [Dempster et al. 1977]. • No closed-form solution exists, so the algorithm iterates between estimating {w, A, T} (a linear step) and estimating {a, b}.
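The toy sketch below only illustrates the alternating E-step/M-step structure on a heavily simplified problem (Gaussians placed directly at the model points, fixed isotropic variance, and a quadratic pull towards the prior shape standing in for the deformation criterion); it is not the thesis’ full {w, A, T, a, b} estimation:

```python
import numpy as np

def match_em(pixels, prior_shape, alpha=1.0, sigma=1.0, n_iter=30):
    """Toy EM matching: `pixels` are (N, 2) black-pixel coordinates and
    `prior_shape` are (K, 2) model points; returns the deformed model points."""
    mu = prior_shape.copy()
    for _ in range(n_iter):
        # E-step: responsibility of each model Gaussian for each pixel.
        d2 = ((pixels[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
        r = np.exp(-0.5 * d2 / sigma ** 2)
        r /= r.sum(axis=1, keepdims=True) + 1e-12
        # M-step: weighted mean of the pixels, shrunk towards the prior shape;
        # the alpha term plays the role of the model deformation criterion.
        n_k = r.sum(axis=0)[:, None]
        mu = (r.T @ pixels / sigma ** 2 + alpha * prior_shape) / (n_k / sigma ** 2 + alpha)
    return mu

# Example: pixels scattered around a shifted copy of the prior shape.
rng = np.random.default_rng(1)
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
prior = np.stack([np.cos(angles), np.sin(angles)], axis=1)
pixels = np.repeat(prior + 0.3, 20, axis=0) + 0.05 * rng.standard_normal((160, 2))
fitted = match_em(pixels, prior)   # moves towards the shifted data
```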

  21. Matching Results [Figure: simple initialization, affine transform initialization, and the final match.]

  22. Matching Results [Figure: matches under different hyperparameter values — a* = 3.54, b* ≈ 0.9: deformed less; a* = 0.89, b* ≈ 0.9: deformed more; a* ≈ 3.0, b* = 0.52: thicker stroke; a* ≈ 3.0, b* = 0.9: thinner stroke.]

  23. Classification Best Match with highest P(D|H6). The output class is “Six”.

  24. Critical Factors for Higher Accuracy • Size of the model set: how many models for each class? • Model flexibility constraints. [Figure: an unconstrained vs. a constrained match.] • Likelihood inaccuracy: use the prior only for the best few candidates.

  25. Critical Factors for Higher Accuracy • Filtering: for the NIST dataset used, all characters are normalized to 20x32, and some abnormal normalized “1”s are observed. • Sub-part detection: unmatched portions remain, e.g. when matching the model “2” to the data “0”.

  26. Experiment • Training Set (NIST SD-1) • 11,660 digits (32x32 by 100 writers) • Test Set (NIST SD-1) • 11,791 digits (32x32 by 100 writers) • Size of Model Set = 23 (manually created)

  27. Experimental Results

  28. Previous Works

  29. Accuracy and Size of Model Set [Figure: accuracy vs. number of models — 99.25% with 2000 models (nearest neighbor) [Jain et al. 1997] vs. 94.7% with 23 manually created models (our system), alongside an optimal accuracy curve.]

  30. Summary • A unified framework based on Bayesian inference is proposed for modeling, matching and classifying non-rigid patterns, with promising results for handwritten character recognition. • Several critical factors related to the recognition accuracy are carefully studied.

  31. Extensions of the Bayesian Framework

  32. Major Limitations of the Framework • The scale-up problem: the classification time increases linearly with the size of the model set. • The outlier problem: the framework is very sensitive to the presence of outlier data (e.g., strokes from adjacent characters).

  33. The Scale-up Problem: Solutions • Hardware solution: the matching processes are independent, so a highly parallel computing architecture can be used. • Software solution: cut down unnecessary computation by carefully designing the data structures and the implementation of the algorithm.

  34. A Competitive Mixture of Deformable Models • Let H = {H1, H2, … , HM, p1, p2, … , pM} denote a mixture of M models with mixing weights pi. [Figure: input data D fed to the models H1, H2, … , HM with weights p1, p2, … , pM.]

  35. A Competitive Mixture of Deformable Models • The Bayesian framework is extended so that {pi} can be estimated using the EM algorithm. • By maximizing p(D|H) and assuming the data D comes from Hi, the ideal outcome is {pi} = [0 0 … 0 1 0 … 0], with the single 1 at position i.
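A toy numerical sketch of this competitive behaviour (the function name and the log-evidence values are made up, and the full framework refines log p(D | Hi) as each model’s match proceeds rather than fixing it in advance):

```python
import numpy as np

def competitive_weights(log_evidence, n_iter=10):
    """Re-estimate the mixing weights {pi} for a single input D; with repeated
    updates they approach a one-hot vector for the best-matching model."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    p = np.full(len(log_evidence), 1.0 / len(log_evidence))   # uniform start
    for _ in range(n_iter):
        # E-step: posterior responsibility of each model Hi for the input D.
        log_post = np.log(p + 1e-300) + log_evidence
        post = np.exp(log_post - log_post.max())
        p = post / post.sum()                                  # M-step with one datum
    return p

print(competitive_weights([-52.0, -48.5, -47.9, -55.0]))  # ~[0, 0, 1, 0]
```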

  36. Speed-up: Elimination Process [Figure: input data D matched against models H1, H2, … , HM with weights p1, p2, … , pM; poorly matching models are eliminated.]

  37. Experiment • Training Set (NIST SD-1) • 2,044 digits (32x32 by 30 writers) • Test Set (NIST SD-1) • 1,427 digits (32x32 by 19 writers) • Size of Model Set = 10 (manually created) • Elimination Rule • After the first iteration, only the best R models are retained.
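A hypothetical sketch of that elimination rule, keeping only the R models with the highest approximate evidence after the first matching iteration:

```python
def eliminate(models, log_evidence, R=3):
    """Keep the R best-matching models; only these continue to be matched,
    which is where the speed-up of the competitive mixture comes from."""
    ranked = sorted(zip(models, log_evidence), key=lambda pair: pair[1], reverse=True)
    return [model for model, _ in ranked[:R]]

# Example with made-up evidences: keep the best 3 of 10 digit models.
survivors = eliminate([f"H{i}" for i in range(10)],
                      [-52.1, -48.7, -60.3, -45.0, -55.9,
                       -44.2, -58.8, -47.5, -62.0, -50.6], R=3)
```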

  38. Experimental Results: Accuracy [Chart: accuracies of 95.1%, 94.2%, and 92.7%.]

  39. Experimental Results: Speedup [Chart: speedup factors of 2.1, 1.9, and 1.4.]

  40. The Outlier Problem • The mixture-of-Gaussians noise model fails when gross errors (outliers) are present. [Figure: a well segmented input vs. a badly segmented input.]

  41. The Outlier Problem • The true data must be distinguished from the outliers. • Utilize the true data and suppress the outliers. [Figure: outliers vs. true data.]

  42. Use of Robust Statistics • Robust statistics takes the outliers into account by either: 1) modeling them explicitly with probability distributions, e.g. a uniform distribution, or 2) discounting their effect (M-estimation), e.g. by defining the data mismatch measure (which is normally quadratic) so that large residuals contribute less than quadratically.
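For route 2), the quadratic measure ρ(e) = e² is replaced by a function whose growth is bounded for large residuals. One common choice, shown only as an illustration (the thesis’ own M-estimator may differ), is the Huber-style function

$$ \rho(e) = \begin{cases} e^{2}, & |e| \le c, \\ c\,(2|e| - c), & |e| > c, \end{cases} $$

which stays quadratic for small residuals but grows only linearly once |e| exceeds the threshold c, so gross errors no longer dominate the criterion.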

  43. Use of Robust Statistics • Suppressing the outliers’ contribution

  44. Robust Linear Regression [Figure: a line fit without robust statistics vs. with robust statistics.]
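A minimal runnable sketch of the idea behind this slide, using iteratively reweighted least squares with Huber-style weights (the helper name robust_line_fit, the threshold c and the synthetic data are all assumptions; the thesis’ estimator may differ):

```python
import numpy as np

def robust_line_fit(x, y, c=1.0, n_iter=20):
    """Robust linear regression y ~ a*x + b: outliers get small weights,
    so the fitted line follows the bulk of the data."""
    X = np.column_stack([x, np.ones_like(x)])
    wts = np.ones_like(y, dtype=float)
    theta = np.zeros(2)
    for _ in range(n_iter):
        WX = X * wts[:, None]
        theta = np.linalg.solve(WX.T @ X, WX.T @ y)   # weighted normal equations
        resid = y - X @ theta
        # Huber-style weights: full weight near zero, down-weighted for large residuals.
        wts = np.minimum(1.0, c / np.maximum(np.abs(resid), 1e-12))
    return theta

# A clean line plus a few gross errors.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)
y[[5, 20, 35]] += 5.0
a, b = robust_line_fit(x, y)   # close to the true slope 2 and intercept 1
```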

  45. Robust Deformable Matching • An M-estimator is proposed for the data mismatch criterion. [Figure: the original data mismatch criterion compared with the data mismatch criterion with robust statistics.]

  46. Experiment • Goal: To extract the leftmost characters from handwritten words. • Test Set - CEDAR database • Model Set - manually created • Model Initialization • Chamfer matching based on a distance transform.
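A minimal sketch of this kind of initialization (hypothetical helper, not the thesis’ exact procedure): the distance transform of the binary word image gives, at every pixel, the distance to the nearest ink pixel, so averaging it over a candidate model placement scores that placement.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(binary_image, model_points):
    """Chamfer-style score: lower means the model points sit closer to the ink.
    `binary_image` has ink pixels equal to 1; `model_points` are (row, col)."""
    dist = distance_transform_edt(binary_image == 0)   # distance to the nearest ink pixel
    rows = np.clip(np.round(model_points[:, 0]).astype(int), 0, binary_image.shape[0] - 1)
    cols = np.clip(np.round(model_points[:, 1]).astype(int), 0, binary_image.shape[1] - 1)
    return dist[rows, cols].mean()

# Sliding the character model over candidate positions near the left edge of the
# word and keeping the lowest score gives the initialization for deformable matching.
```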

  47. Experimental Results [Figure: initialization, fixed window widths 1–3, and the robust window.]

  48. More Experimental Results

  49. Summary • The basic framework can be extended to a competitive mixture of deformable models where significant speedup can be achieved. • The robust statistical approach is found to be an effective solution for robust deformable matching in the presence of outliers.

  50. Deformable Pattern Detection
