
Fusion of HMM’s Likelihood and Viterbi Path for On-line Signature Verification

Presentation Transcript


1. Fusion of HMM’s Likelihood and Viterbi Path for On-line Signature Verification
Bao Ly Van - Sonia Garcia Salicetti - Bernadette Dorizzi
Institut National des Télécommunications
Presented by Bao Ly Van
Prague – May 2004

2. Overview
• HMM for Online Signature
• Likelihood Approach: Normalized Log-Likelihood information given by the HMM
• Comparison with Dolfing’s system on Philips database [Ref] J.G.A. Dolfing, "Handwriting recognition and verification, a Hidden Markov approach", Ph.D. thesis, Philips Electronics N.V., 1998.
• Viterbi Path Approach (new): exploits the Viterbi Path information given by the HMM
• Motivation of the Viterbi Path approach
• Fusion of Likelihood and Viterbi Path
• Experiments & Results

3. Introduction to Online Signature
• Captured by a digitizing tablet
• A signature: a sequence of sampled points
• Raw data:
  • Coordinates: x(t), y(t)
  • Pressure: p(t)
  • Pen inclination angles: azimuth (0°-359°) and altitude (0°-90°)
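For concreteness, here is a minimal sketch (in Python, with hypothetical field names) of how the raw data of one sampled point could be held in memory; the tablet's actual record format is not specified in the slides.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PenSample:
    """One point sampled by the digitizing tablet (hypothetical field names)."""
    x: float         # pen coordinate x(t)
    y: float         # pen coordinate y(t)
    p: float         # pen pressure p(t)
    azimuth: float   # pen azimuth angle, 0°-359°
    altitude: float  # pen altitude angle, 0°-90°

# A signature is simply the ordered sequence of sampled points.
Signature = List[PenSample]
```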

4. HMM Architecture
• Continuous, left-right HMM
• Mixture of 4 Gaussians
• Personalized number of states
• About 30 points needed to estimate a Gaussian
• Example: when using 5 training signatures, the personalized number of states for this signer is 10
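The slide suggests the state count is tied to the amount of training data: with 4 Gaussians per state and roughly 30 points needed per Gaussian, the total number of points in the training signatures bounds how many states can be reliably estimated. A minimal sketch under that assumption (the exact rule used in the talk is not spelled out here):

```python
def personalized_state_count(training_signatures, gaussians_per_state=4,
                             points_per_gaussian=30, min_states=2):
    """Derive a per-signer number of HMM states from the training data size.

    Assumption: each Gaussian needs about `points_per_gaussian` sampled points,
    so the total number of points limits how many (state, Gaussian) pairs
    can be reliably estimated.
    """
    total_points = sum(len(sig) for sig in training_signatures)
    return max(min_states, total_points // (gaussians_per_state * points_per_gaussian))
```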

5. Feature Extraction
• Features extracted from the coordinates:
  • Velocity
  • Acceleration
  • Curvature radius
  • Coordinates normalized by the gravity center
  • Length-to-width ratio
  • ...
• 25 features at each point of the signature: a signature = a sequence of feature vectors
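A minimal numpy sketch of how a few of these per-point features could be computed from the coordinates; the exact feature definitions used in the system may differ.

```python
import numpy as np

def extract_dynamic_features(x, y):
    """Compute a few per-point features from pen coordinates x(t), y(t)."""
    vx, vy = np.gradient(x), np.gradient(y)      # velocity components
    ax, ay = np.gradient(vx), np.gradient(vy)    # acceleration components
    speed = np.hypot(vx, vy)
    accel = np.hypot(ax, ay)
    xc, yc = x - x.mean(), y - y.mean()          # coordinates relative to the gravity center
    # Curvature radius of a plane curve: (x'^2 + y'^2)^(3/2) / |x'y'' - y'x''|,
    # with the denominator floored to avoid division by zero on straight segments.
    radius = speed ** 3 / np.maximum(np.abs(vx * ay - vy * ax), 1e-9)
    return np.column_stack([speed, accel, xc, yc, radius])
```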

6. Personalized Feature Normalization
• Goals:
  • Same variance for all features = same importance
  • A good choice of this common variance leads to faster convergence
  • Avoid the overflow problem in the training phase
• Implementation:
  • Normalization factors (one per feature) for each signer are stored with his/her signature model (HMM)
  • A test signature is normalized according to these factors
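A plausible reading of this normalization, sketched below: one scale factor per feature is estimated from the signer's training signatures so that every feature ends up with the same variance, and the factors are stored alongside the HMM. The exact target variance used in the talk is not given here.

```python
import numpy as np

def fit_normalization_factors(training_features):
    """One factor per feature, from all training feature vectors of a signer.

    training_features: array of shape (n_points, n_features) obtained by
    stacking the feature vectors of all training signatures.
    The returned factors scale each feature to unit variance (assumption).
    """
    std = training_features.std(axis=0)
    return 1.0 / np.where(std > 0, std, 1.0)   # guard against constant features

def normalize(features, factors):
    """Apply the signer's stored factors to a training or test signature."""
    return features * factors
```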

7. HMM Likelihood Approach
• Log-likelihood of a signature, normalized by the signature length (LLN)
• Score:
  • Based on the distance between the LLN of the test signature and the average LLN of the training signatures: |LLN - LLNmean|
  • Converted to a similarity in [0, 1] (Likelihood Score)
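A small sketch of this score, assuming the distance |LLN - LLNmean| is mapped to a similarity with exp(-d); the slides only state that the distance is converted to a value in [0, 1], so the exponential mapping is an assumption.

```python
import numpy as np

def likelihood_score(test_lln, training_llns):
    """Likelihood Score in (0, 1] from Normalized Log-Likelihoods (LLN).

    test_lln: log-likelihood of the test signature divided by its length.
    training_llns: LLN values of the signer's training signatures.
    """
    d = abs(test_lln - float(np.mean(training_llns)))
    return float(np.exp(-d))   # exp(-d) mapping is an assumption, not from the slides
```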

8. What is the Viterbi Path Approach? (new)
• The Viterbi Path (VP) is the sequence of states that maximizes the likelihood of the test signature
• [Diagram: the signature is the input of the HMM (Viterbi algorithm); the outputs are the Normalized Log-Likelihood and the Viterbi Path (VP)]

9. Representation of the Viterbi Path
• The VP generated by an N-state HMM is represented by an N-component Segmentation Vector (SV)
• Each component of the SV contains the number of points modeled by the corresponding state
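A minimal sketch of this representation, assuming the Viterbi path is available as a sequence of state indices (one per sampled point):

```python
import numpy as np

def segmentation_vector(viterbi_path, n_states):
    """N-component Segmentation Vector: number of points modeled by each state."""
    return np.bincount(np.asarray(viterbi_path, dtype=int), minlength=n_states)

# e.g. a 78-point path through a 3-state model might give array([21, 30, 27])
```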

10. Complementarity between VP and LL
• Genuine and forged signatures can have very close Normalized Log-Likelihoods although their VPs (SVs) are quite different
• Example from the slide: one signature with LL = -1166.10, LLN = -14.95, SV = (21, 30, 27); another with LL = -296.46, LLN = -16.47, SV = (18, 0, 0): the LLN values are close although the SVs are very different
• It is easier to fool the system when the score is based only on the Normalized Likelihood

11. How to use the VP (SV) information?
• The SVs of the HMM's training signatures are saved as References
• [Diagram: the SV of the test signature is compared by a Hamming distance to the SV of each of the K training signatures; the K distances are averaged]
• The Average Distance is converted to a similarity in [0, 1] (Viterbi Score)
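A sketch of the Viterbi Score under two assumptions: the "Hamming distance" is read as the number of SV components that differ between the test signature and a reference, and the average distance is mapped to [0, 1] with exp(-d), as for the Likelihood Score; the talk may use a different distance or mapping.

```python
import numpy as np

def viterbi_score(test_sv, reference_svs):
    """Viterbi Score in (0, 1] from the Segmentation Vector of the test signature.

    reference_svs: the SVs of the K training signatures, stored as References.
    """
    test_sv = np.asarray(test_sv)
    # Hamming-style distance: number of SV components that differ (assumption).
    distances = [np.count_nonzero(test_sv != np.asarray(ref)) for ref in reference_svs]
    return float(np.exp(-np.mean(distances)))   # exp(-d) mapping is an assumption
```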

12. Viterbi Score vs Likelihood Score
• Significant overlap when using only one score
• The Viterbi and Likelihood scores are complementary
• A simple arithmetic mean is used for fusion (no extra training)
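Since both scores already lie in [0, 1], the fusion described on this slide is just their arithmetic mean; a one-line sketch:

```python
def fused_score(likelihood_score, viterbi_score):
    """Score-level fusion by simple arithmetic mean (no extra training needed)."""
    return 0.5 * (likelihood_score + viterbi_score)
```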

13. Experiments Overview
• Protocol P1: exploits only the likelihood score on Philips database (with the same protocol as Dolfing) [Ref] J.G.A. Dolfing, "Handwriting recognition and verification, a Hidden Markov approach", Ph.D. thesis, Philips Electronics N.V., 1998.
• Protocol P2: performs fusion of the 2 scores on Philips database
• Protocol P3: performs fusion of the 2 scores on BIOMET database

14. P1: Likelihood Score on Philips Database

              NN     0.7    1      1.3    1.6    2      2.5    3.2    6      10
TE min (%)    1.32   1.59   0.97   0.92   0.88   0.97   1.10   1.23   1.98   1.98
EER (%)       1.35   2.04   1.02   0.96   0.95   1.03   1.13   1.24   1.99   2.02

• 15 signatures to train the HMM
• Repeated 10 times: robust results
• Our result is 0.95% EER, compared to 2.2% EER for Dolfing (1998)

15. P2: Fusion on Philips Database

              Likelihood   Viterbi Path   Fusion
TE min (%)    3.73         7.66           3.26
EER (%)       4.18         8.12           3.54

• Only 5 signatures to train the HMM
• Repeated 50 times: robust results
• Fusion lowers the error rate by 15% (compared to the likelihood alone)

16. P3: Fusion on BIOMET Database

Genuine test data                        Likelihood   Viterbi Path   Fusion
No time variability        TE min (%)    5.27         3.71           2.47
                           EER (%)       6.45         4.07           2.84
Time variability           TE min (%)    14.30        7.44           6.95
(5 months before)          EER (%)       16.70        9.21           8.57

• 5 signatures to train the HMM
• Genuine tests on two sessions
• Repeated 50 times: robust results
• Fusion lowers the error rate by a factor of 2 (compared to the likelihood alone)

  17. P3: Confidence Level on 50 trials

18. Conclusions
• We have built an HMM-based system and introduced 2 measures of information:
  • Likelihood score
  • Viterbi score
• We have compared both scores on two databases: Philips and BIOMET
• The new approach using VP information can give better results than the LL approach (BIOMET)
• Fusion of both scores improves results, which shows their complementarity

19. Thank you for your attention!

20. Protocol 1: Only Likelihood
• Philips database:
  • 51 signers, 30 genuine signatures and about 70 forgeries per signer
  • Forgeries of high quality
• Dolfing's protocol:
  • 15 genuine signatures to train the HMM
  • 15 other genuine signatures and the forgeries to test the HMM (~4000 signatures)
  • Fixed partition of training and testing genuine signatures
• Mean result of 10 trials:

              NN     0.7    1      1.3    1.6    2      2.5    3.2    6      10
TE min (%)    1.32   1.59   0.97   0.92   0.88   0.97   1.10   1.23   1.98   1.98
EER (%)       1.35   2.04   1.02   0.96   0.95   1.03   1.13   1.24   1.99   2.02

• Our result is 0.95% EER, compared to 2.2% EER for Dolfing (1998)

21. Protocol 2: Fusion on Philips Database
• Protocol:
  • Only 5 signatures to train the HMM, randomly selected from the 30
  • Test on the remaining 25 genuine signatures and the forgeries
  • Repeated 50 times: robust results

              Likelihood   Viterbi Path   Fusion
TE min (%)    3.73         7.66           3.26
EER (%)       4.18         8.12           3.54

• Fusion lowers the error rate by 15% (compared to the likelihood alone)

22. Protocol 3: Fusion on BIOMET
• BIOMET database:
  • 87 signers
  • Two sessions, 5 months apart: 5 + 10 genuine signatures, 12 forgeries per signer
• Protocol:
  • 5 signatures (2nd session) to train the HMM, randomly selected from the 10
  • Test on the remaining 5 genuine signatures of the 2nd session, on the 5 genuine signatures of the 1st session, and on the forgeries
  • Repeated 50 times: robust results

Genuine test data                        Likelihood   Viterbi Path   Fusion
2nd session                TE min (%)    5.27         3.71           2.47
                           EER (%)       6.45         4.07           2.84
1st session                TE min (%)    14.30        7.44           6.95
(5 months before)          EER (%)       16.70        9.21           8.57

• Fusion lowers the error rate by a factor of 2 (compared to the likelihood alone)
