
Character Recognition using Hidden Markov Models





Presentation Transcript


  1. Character Recognition using Hidden Markov Models Anthony DiPirro, Ji Mei Sponsor: Prof. William Sverdlik

  2. Our goal • Recognize handwritten Roman and Chinese characters • This is an example of the Noisy Channel Problem

  3. Noisy Channel Problem • Find the intended input, given the noisy input that was received • Examples • iPhone 4S Siri speech recognition • Human handwriting

  4. Markov Chain • We use a Hidden Markov Model to solve the Noisy Channel Problem • An HMM is a Markov chain for which the state is only partially observable • Markov Chain • Definition • Illustration
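The definition on this slide can be made concrete with a small sketch. The states and probabilities below are hypothetical, chosen only to illustrate what a transition table looks like; they are not the project's actual values.

```python
# Minimal Markov chain sketch: three illustrative states and a
# transition table. All numbers here are made up for illustration.
states = ["S1", "S2", "S3"]

# transition[i][j] = P(next state = states[j] | current state = states[i])
transition = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
]

def next_state_distribution(current):
    """Return P(next state | current state) as a dict."""
    row = transition[states.index(current)]
    return dict(zip(states, row))

# Each row is a probability distribution, so it must sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in transition)
print(next_state_distribution("S2"))
```

In an HMM, this chain is the hidden part: the model walks over states like these, but we only observe symbols emitted from each state, not the states themselves.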

  5. Hidden Markov Model

  6. Our Project

  7. How to solve our problem? • Using an HMM, we can calculate the hidden state chain from the observation chain • We used our collected samples to build the transition probability table and the emission probability table • We use the Viterbi algorithm to find the most likely result
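The Viterbi step described above can be sketched as follows. The implementation is the standard dynamic-programming form of the algorithm; the states, observations, and probability tables in the usage example are hypothetical stand-ins for the tables the project estimated from its samples.

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for the observation sequence."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical two-state model with observations "a"/"b".
states = ["S1", "S2"]
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"a": 0.5, "b": 0.5}, "S2": {"a": 0.1, "b": 0.9}}
print(viterbi(["a", "b", "b"], states, start_p, trans_p, emit_p))
```

For longer sequences a real implementation would work in log probabilities to avoid underflow; this sketch keeps raw products for readability.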

  8. Pre-Processing • Shrink • Median filter • Sharpen
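Of the three steps, the median filter is the denoising one: each pixel is replaced by the median of its neighbourhood, which removes salt-and-pepper noise while keeping edges. A minimal pure-Python sketch of a 3x3 median filter (a real pipeline would more likely use a library such as SciPy or OpenCV):

```python
def median_filter(img):
    """3x3 median filter on a 2D list of grayscale values.

    Border pixels are copied unchanged for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            )
            out[y][x] = window[4]  # median of the 9 neighbours
    return out

# A single bright noise pixel is removed, since 8 of its 9
# neighbourhood values are 0 and the median is therefore 0.
noisy = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(median_filter(noisy)[1][1])
```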

  9. Feature Extraction • We count the regions in each area of the image to represent the observation states
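The region-counting idea can be sketched as follows: split the binarized character image into a grid of areas, and for each area count the connected foreground regions; that count (or a symbol derived from it) becomes the observation for that area. The 4-connectivity and flood-fill approach here are assumptions for illustration, not necessarily the project's exact method.

```python
def count_regions(cell):
    """Number of 4-connected regions of 1s in a small 2D binary grid."""
    h, w = len(cell), len(cell[0])
    seen = [[False] * w for _ in range(h)]
    regions = 0
    for y in range(h):
        for x in range(w):
            if cell[y][x] and not seen[y][x]:
                regions += 1
                stack = [(y, x)]  # flood-fill this region
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and cell[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return regions

# Three separate foreground regions in this hypothetical area.
area = [[1, 0, 1],
        [0, 0, 0],
        [1, 1, 0]]
print(count_regions(area))
```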

  10. Compare • Canonical A: S2 S2 • Adjusted Input: S3 S3 S2 S2 • Canonical B: S3 S2 S2 S3 S1 S3 …

  11. Experimenting: How to split characters

  12. Experimenting: How to represent states

  13. Results

  14. Conclusions • Factors that affect accuracy • Pre-processing • How the word is split • Number of states

  15. In the future • Spend more time on different features, e.g. pixel density and counting lines • Use other algorithms, such as a neural network, to implement character recognition
