
Introduction to Biometrics

Understand facial recognition systems, image acquisition, processing, distinctive characteristics, template creation, and more in biometric technologies.



Presentation Transcript


  1. Introduction to Biometrics Dr. Bhavani Thuraisingham The University of Texas at Dallas Lecture #8 Biometric Technologies: Face Scan September 19, 2005

  2. Outline • Introduction • Basic Terms • Face Scan Process • Technologies • Market and Deployment • Strengths and Weaknesses • Research Directions • Conclusions • Appendix

  3. References • Course textbook, Chapter 5 • http://www.biometricsinfo.org/facerecognition.htm • Face Detection in Color Images, R.-L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002

  4. Introduction • Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them. • The programs take a facial image, measure characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw, and create a unique file called a "template". • Using templates, the software then compares one facial image with another and produces a score that measures how similar the images are to each other. • Typical sources of images for use in facial recognition include video camera signals and pre-existing photos such as those in driver's license databases.
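
To make the idea concrete, here is a minimal sketch, assuming (hypothetically) that a template is just a short vector of such measurements; real products use far richer, proprietary representations and scoring.

```python
import numpy as np

# Hypothetical templates: each face reduced to a few illustrative measurements
# (eye distance, nose length, jaw angle), in arbitrary but consistent units.
enrolled = np.array([62.0, 48.5, 118.0])
probe    = np.array([61.2, 49.1, 117.4])

# One simple way to produce a similarity score: cosine similarity in [-1, 1].
score = enrolled @ probe / (np.linalg.norm(enrolled) * np.linalg.norm(probe))
print(f"similarity score: {score:.4f}")
```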

  5. Basic Terms • Components • Image processing software-only systems • Full-fledged acquisition and processing systems • Cameras, workstations, backend processors • Enroll, verify, and identify facial images • Acquired through photographs, images, surveillance videos, etc. • Two major components • Face location engine: finds and tracks faces • Face recognition engine: compares the faces • Template matching • Takes place either on PCs or embedded in PDAs and mobile phones

  6. Verification vs. Identification • System designs for facial scan verification vs. identification differ in a number of ways. • The primary difference is that identification does not utilize a claimed identity. Instead of employing a PIN or user name and then delivering confirmation or denial of the claim, identification systems attempt to answer the question "Who am I?" • As databases grow very large, as in the case of facial scans, the system may narrow the database to a number of likely candidates. Human intervention may then be required. • Uncooperative subjects actively avoid recognition, and may use disguises or take evasive measures. • Facial scan technologies are much more capable of identifying cooperative subjects, and are almost entirely incapable of identifying uncooperative subjects.

  7. Process • Steps • Image acquisition • Image processing • Distinctive characteristics • Template creation • Template matching • Process • An image repository stores static images • The PC converts the static images into templates (e.g., zeros and ones) and stores them in a template database • A camera takes a photo of the person and sends it to a PC or image processing device • The PC converts the image into a template • Matching is carried out against the template database
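
A self-contained sketch of that pipeline on synthetic data is shown below; the downsample-and-normalize "template" and the cosine threshold are stand-ins for vendor-specific processing, not the method described in the course text.

```python
import numpy as np

def create_template(image):
    """Toy template: crudely downsample a grayscale image and unit-normalize it.
    Stands in for the vendor-specific image processing and feature extraction."""
    vec = image[::8, ::8].astype(float).flatten()
    return vec / (np.linalg.norm(vec) + 1e-9)

def match(probe, template_db, threshold=0.95):
    """Score the probe template against every enrolled template; keep candidates."""
    scores = {name: float(probe @ t) for name, t in template_db.items()}
    return sorted(((s, n) for n, s in scores.items() if s >= threshold), reverse=True)

# Synthetic images stand in for repository photos and a live camera frame.
rng = np.random.default_rng(0)
alice = rng.random((64, 64))
db = {"alice": create_template(alice), "bob": create_template(rng.random((64, 64)))}
live_frame = alice + 0.02 * rng.random((64, 64))   # noisy re-capture of alice
print(match(create_template(live_frame), db))
```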

  8. Image Acquisition • Can acquire images through cameras or video systems • Ideal to have high-resolution images for more accuracy • High-quality enrollment is essential for identification and verification • Issues with acquisition • Quality of the image usually decreases with distance • Need proper angles • Appropriate lighting • Skin color causes problems with current facial scanning systems

  9. Image Quality • The performance of facial recognition technology is very closely tied to the quality of the facial image. • Low-quality images are much more likely to result in enrollment and matching errors than high-quality images. • Many photograph databases associated with drivers' licenses or passports contain photographs of marginal quality • Importing these files and executing matches may lead to reduced accuracy. • Problems exist with surveillance deployments. • If facial images for enrollment and matching can be acquired from live subjects with high-quality equipment, system performance increases substantially.

  10. Image Processing • Images are cropped and color images are converted to black and white for comparisons • Facial images are normalized • Basic characteristics such as the middle of the eyes are located and used as a frame of reference • The image can then be rotated and magnified • Multiple images per person may be needed for enrollment
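
A hedged sketch of the normalization step, assuming the eye coordinates have already been found by a face location engine; it levels the eyes by rotating about their midpoint and rescales to a fixed size (Pillow is used here purely for illustration).

```python
import math
import numpy as np
from PIL import Image

def normalize_face(img, left_eye, right_eye, size=(128, 128)):
    """Rotate so the eyes are level, then scale to a fixed size.
    Eye coordinates are assumed to come from a separate face-location step."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = math.degrees(math.atan2(ry - ly, rx - lx))  # tilt of the eye line
    gray = img.convert("L")                             # color -> grayscale
    leveled = gray.rotate(angle, center=((lx + rx) / 2, (ly + ry) / 2))
    return leveled.resize(size)

# Usage on a synthetic frame (a real deployment would load a camera image).
frame = Image.fromarray((np.random.default_rng(0).random((240, 320)) * 255).astype("uint8"))
aligned = normalize_face(frame, left_eye=(120, 100), right_eye=(180, 104))
print(aligned.size)  # (128, 128)
```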

  11. Distinctive Characteristics • After the image is normalized, distinctive features are located • Features that usually do not change • Example distinctive features • Upper ridges of the eye sockets • Areas around the cheekbones • Nose shape • Size of the mouth • Positioning of major features relative to each other • Issues • Facial expressions may change • Facial features may change due to cosmetic surgery

  12. Template Creation • Enrollment templates are created from the facial images • Numbers are assigned to the distinctive features and combined by some vendor-specific computation • A template is compact, typically ranging from under 1 KB to a few KB in size • An image cannot be recreated from a template • Larger templates may be used for behavioral biometrics where it is difficult to locate distinctive features

  13. Template Matching • Proprietary vendor algorithms are used to match templates • A confidence level is assigned for the matching process • If the score of the match exceeds the confidence level, then a positive match is obtained • Many matching attempts may be carried out before an answer is produced • Rejection implies a failure to match after X attempts, not a complete failure • Facial scan is not as effective as an iris scan or a fingerprint scan at identifying a person from a large database • The system may return many images and a human then decides which one is a good match
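
The decision logic can be sketched as below; the threshold, scores, and identifiers are made-up values for illustration, and real systems use proprietary scoring.

```python
def verify(score, threshold=0.80):
    """Verification: accept the claimed identity only if the score
    exceeds the configured confidence level."""
    return score >= threshold

def identify(scores, threshold=0.80, top_n=5):
    """Identification: rank database entries by score and hand the best
    few candidates above the threshold to a human operator."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, s) for name, s in ranked[:top_n] if s >= threshold]

# Illustrative scores against an enrolled database.
scores = {"id_017": 0.91, "id_102": 0.84, "id_233": 0.62, "id_305": 0.88}
print(verify(0.91))       # True
print(identify(scores))   # [('id_017', 0.91), ('id_305', 0.88), ('id_102', 0.84)]
```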

  14. Example Technologies • Methods used by vendors • Eigenface • Feature analysis • Neural network • Automatic face processing • Particular method chosen will depend on application • Forensics, network access, surveillance, etc.

  15. Eigenface • Invented at MIT • Two-dimensional grayscale facial images called eigenfaces • A face can be recreated from its eigenfaces • Templates are created during enrollment and verification • An eigenface has distinctive characteristics • Process • During enrollment several eigenfaces are created from the image • The eigenfaces are mapped to numbers (called coefficients) according to some computation • For identification the template of the person is matched against the enrolled template • Determine coefficient variation
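
A minimal numpy sketch of the underlying linear algebra (PCA on enrollment images), using synthetic data; it is not the original MIT implementation, just an illustration of eigenfaces and coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "enrollment" set: 20 grayscale face images of 32x32 pixels, flattened.
faces = rng.random((20, 32 * 32))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components of the training set play the role of eigenfaces.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:10]                       # keep the 10 strongest components

def coefficients(image):
    """Project a face onto the eigenfaces -> a short vector of coefficients."""
    return eigenfaces @ (image - mean_face)

enrolled = coefficients(faces[0])
probe = coefficients(faces[0] + 0.01 * rng.random(32 * 32))   # noisy re-capture

# Matching compares coefficient vectors, e.g. by Euclidean distance.
print(np.linalg.norm(enrolled - probe))
```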

  16. Eigenface (Concluded) http://www.biometricsinfo.org/facerecognition.htm

  17. Feature Analysis • Most widely utilized facial recognition technique • More capable of accommodating changes to facial features • Smiling vs. Frowning • Feature extraction • Extracts dozens of features from different parts of the face • Determines the relative location of the features • These features are the building blocks • Assumption: Slight movement of a feature will cause a movement of the neighboring feature

  18. Neural Networks • Neural networks are first trained with different features • Getting a good training set is important • Need to train the network to identify different changes in the features • Half smile, full smile, etc. • When a new image arrives, it is given as input to the network • The network works with values between zero and one • The network tries to identify the image • Numerous applications in image processing, unusual image detection, etc.

  19. Automatic Face Processing • A more rudimentary technology used in early systems • Uses distances between the eyes, end of nose, corners of mouth, distance between eyebrows and hairline, etc. • Templates are created and matched • More suitable for pictures taken in dimly lit environments where it may be difficult to extract the distinctive features used in eigenface-based methods

  20. Accuracy of the Technologies • Government studies of face-recognition software have found high rates of both "false positives" (wrongly matching innocent people with photos in the database) and "false negatives" (not catching people even when their photo is in the database). • One problem is that unlike our fingerprints or irises, our faces do not stay the same over time. These systems are easily tripped up by changes in hairstyle, facial hair, or body weight, by simple disguises, and by the effects of aging. • A study by the government's National Institute of Standards and Technology (NIST), for example, found false-negative rates for face-recognition verification of 43 percent using photos of subjects taken just 18 months earlier. • The NIST study also found that a change of 45 degrees in the camera angle rendered the software useless.

  21. Accuracy of the Technologies (Concluded) • The technology works best under tightly controlled conditions, when the subject is staring directly into the camera under bright lights • However, a study by the Department of Defense found high error rates even in ideal conditions • Dated video surveillance photographs of the type likely to be on file for suspected terrorists may be of very little use • Questions have been raised about how well the software works on dark-skinned people, whose features may not appear clearly on lenses optimized for light-skinned people.

  22. Privacy Considerations • One threat is that facial recognition, in combination with wider use of video surveillance, would be likely to grow increasingly invasive over time. • Once installed, this kind of surveillance system rarely remains confined to its original purpose. • Subjects may be unaware that they are being monitored, and as a result privacy may be compromised. • Another problem is the threat of abuse. • The use of facial recognition in public places like airports depends on widespread video monitoring • This may be an intrusive form of surveillance that can record personal and private behavior in graphic detail. • Video monitoring may be misused. • Gender and racial biases

  23. Privacy Considerations (Concluded) • How do we decide whether to install facial recognition systems? • Is the technology effective? Does it significantly increase our safety and security? If the answer is no, then perhaps we should not pursue this technology. • If the answer is yes, then it must be asked whether the technology violates the appropriate balance between security and civil liberty. • Why face recognition? • The most compelling reason is that humans identify other people by their face and voice, and are therefore likely to be comfortable with systems that use face and voice recognition

  24. Market and Deployment • Facial recognition revenues are projected to grow from $34.4M in 2002 to $429.1M in 2007 and are expected to comprise approximately 10% of the entire biometric market • Deployed in many places • Public sector ID card applications • Surveillance systems • Casinos, voting booths, hotels, etc. • Deployments have included the following • The Mexican government uses the technology for voter registration (source: text book) • The 2001 Super Bowl in Tampa, used at entrances • London public spaces • Surveillance cameras at Underground stations and other public places

  25. Where Should We Deploy Face Recognition Systems? • Should we deploy face recognition in airports to prevent terrorism? • It makes sense if the government has a database of all terrorists • Fast-moving crowds may be an issue • Will it create a false sense of security? • Should we use the technology in other public places? • Surveillance footage was a great help in identifying the London bombers • Prior to this, some believed that surveillance in London did not deter criminals

  26. Strengths of Facial Scan • Software-based technology that can be deployed without additional hardware • Could use existing imaging systems • Video cameras, surveillance cameras, etc. • Usable without the subject knowing • Surveillance systems • Unlike finger scans, a person need not be in contact with physical devices • Images are often already available for enrollment • Fingerprint databases may take years to build • With facial scanning systems, new images can be added to the existing images

  27. Weaknesses of Facial Scan • Potential for privacy abuse • One could take images of a person without the person knowing • Mobile phone cameras, Surveillance cameras, etc. • Public concerns over biometric technologies • Matching may be difficult due to physical changes • Facial hair, Hairstyle changes, Cosmetic surgery • Not a major issue with fingerprinting systems • Need better technology to detect such changes • Matching accuracy depends on the acquisition environment • Sensitive to lighting • Person must be positioned at the right angle

  28. Research Directions • Reduce false positives and false negatives • Better performance for large numbers of images • Ability to learn the aging process in humans and match images that have aged • Ability to see through cosmetic surgery • Based on the assumption that people change only a few features • What happens in the case of "face transplants"? • Machine learning techniques

  29. Research Directions (Concluded) • An example paper • Face Detection in Color Images • Hsu, Abdel-Mottaleb, and Jain, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002 • A face detection algorithm is proposed • Able to handle color images based on a "lighting compensation technique" and a "nonlinear color transformation" • Skin color is modeled using a parametric ellipse in a two-dimensional transformed color space • Facial features are extracted by constructing feature maps for the eyes, mouth, and face boundary

  30. Summary • Homeland security applications such as surveillance • Can leverage preexisting images • The technology is not as accurate as some of the other biometric technologies • False matches • Need better technologies to understand facial expressions • Need technologies to detect facial changes • With the boom in cosmetic surgery, and recent discussions of face transplants, will face scan and recognition remain a viable technology?

  31. Introduction to Biometrics Dr. Bhavani Thuraisingham The University of Texas at Dallas Appendix to Lecture #8 Neural Networks September 19, 2005

  32. Neural Networks: Outline of the Appendix • What is a neural network? • Example – Home sales • Basic process • Types of Neural Networks • Back Propagation Learning • Key Issues in Learning • Neural networks for time series • Strengths/Weaknesses • Example: Image Change Detection

  33. Reference • Course Notes • Data Mining Technologies and their Applications in Counter-terrorism • Three day course taught at the Armed Forces Communications and Electronics Association, Fair Lakes, Virginia • Instructor: B. Thuraisingham • Dates: June 2003, December 2003, December 2004, December 2005 (Scheduled) • Book: Web Data Mining Technologies and Their Applications in Business Intelligence and Counter-terrorism, B. Thuraisingham, CRC Press, June 2003

  34. What is a Neural Network? • Neural networks consist of basic units modeled on biological neurons • Each unit has inputs that it combines into a single output • Main questions • What are units and how do they behave? • How are they connected together? • Neural networks are trained using examples and then they are run on data sets • Based on the training they receive, they carry out activities • Used both for directed and undirected data mining

  35. Example • A network for appraising a house: inputs such as living space, size of garage, age of house, and other features feed into the network, which produces an appraised value as output

  36. Example (Continued) • Features describing a house, listed as feature: description (range of values) • # apts: number of dwelling units (1-3) • Yr built: year built (1900-2000) • Plumb. fix: number of plumbing fixtures (5-17) • Heating type: heating system type (coded as A or B) • Base gar.: basement garage (0-2) • Att. gar: attached garage (0-228) • Living: total living space (714-4185) • Deck: deck space (0-738) • Porch: porch space (0-452) • Recroom: recreation space (0-672) • Basement: basement space (9-810)

  37. Example (Continued) • Training set example (feature: value) • Sales price: 171,000 • Months ago: 4 • Num apts: 1 • Yr built: 1923 • Plum. fix: 9 • Heat type: A • Base gar: 0 • Att. gar: 120 • Living: 1614 • Deck: 0 • Porch: 210 • Recroom: 0 • Basement: 175

  38. Example (Continued) • Massaged training set example (feature: original value, range of values, massaged value) • Sales price: 171,000, range 103,000-250,000, massaged 0.4626 • Months ago: 4, range 0-23, massaged 0.1739 • Num apts: 1, range 1-3, massaged 0.0000 • Yr built: 1923, range 1850-1986, massaged 0.5328 • Plum. fix: 9, range 5-17, massaged 0.3333 • Heat type: A, range A or B, massaged 1.0000 • Base gar: 0, range 0-2, massaged 0.0000 • Att. gar: 120, range 0-228, massaged 0.5263 • Living: 1614, range 714-4185, massaged 0.2593 • Deck: 0, range 0-738, massaged 0.0000 • Porch: 210, range 0-452, massaged 0.4646 • Recroom: 0, range 0-672, massaged 0.0000 • Basement: 175, range 0-810, massaged 0.2160

  39. Example (Concluded) • Computing the appraised value • Example: sales price • 103,000 maps to 0.0000 and 250,000 maps to 1.0000 • Therefore, 171,000 maps to 0.4626 • Using some computation, the neural network computes the appraised value to be $213,250 • The output of the neural network is between 0 and 1. To get the appraised value of $213,250, the output will be 0.75 (the massaged output) • After the neural network is trained with examples, it determines the mapping and then computes the appraised value
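
The "massaging" on these slides is ordinary min-max scaling into [0, 1]; the short sketch below reproduces the 0.4626 input value and maps the 0.75 network output back to $213,250.

```python
def massage(value, lo, hi):
    """Scale a raw value into [0, 1] over its observed range."""
    return (value - lo) / (hi - lo)

def unmassage(scaled, lo, hi):
    """Map a network output in [0, 1] back to the original units."""
    return lo + scaled * (hi - lo)

print(round(massage(171_000, 103_000, 250_000), 4))   # 0.4626 (sales price input)
print(unmassage(0.75, 103_000, 250_000))              # 213250.0 (appraised value)
```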

  40. Basic Process • Inputs are given to the network • A combination function combines the inputs into a single value, usually a weighted summation • A transfer function calculates the output value from the result of the combination function • The result is exactly one output between 0 and 1
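
Written out directly, a single unit is a weighted sum (the combination function) passed through a sigmoid (one common transfer function), so its output is always between 0 and 1; the weights below are arbitrary illustrations, not trained values.

```python
import math

def unit(inputs, weights, bias=0.0):
    """One neural-network unit: weighted sum, then a sigmoid transfer function."""
    combination = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-combination))     # output always in (0, 1)

# Massaged house features (e.g., living space, year built, plumbing fixtures).
print(unit([0.2593, 0.5328, 0.3333], [0.8, -0.4, 0.3]))
```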

  41. Types of Neural Networks • Feed-forward neural network • Processing propagates from inputs to outputs without any loops • In a layered representation of a feed-forward network, there are no links between nodes within a layer • Hidden layers sit between the input and output layers • Recurrent neural network • There are feedback links that form a circular path in the network • The most widely used network is the feed-forward network with a back propagation learning algorithm

  42. Back Propagation Learning • Training the network means setting the best weights on the inputs of each unit • Use weights for which the output is close to the desired output • Consists of the following steps • The network gets a training example and, using the existing weights, calculates the output or outputs for the example • Back propagation calculates the error by comparing the calculated result with the actual result • The error is fed back through the network and the weights are adjusted to minimize the error
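
A sketch of one back propagation step for a single sigmoid unit with a squared-error loss, using the massaged house features as illustrative inputs; a full network repeats this update for every weight in every layer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One illustrative weight update for a single sigmoid unit.
x = [0.2593, 0.5328, 0.3333]      # massaged inputs (illustrative)
w = [0.8, -0.4, 0.3]              # current weights (illustrative)
target, lr = 0.75, 0.5            # desired output and learning rate

out = sigmoid(sum(xi * wi for xi, wi in zip(x, w)))
error = out - target                          # calculated vs. actual result
grad = error * out * (1 - out)                # sigmoid derivative term
w = [wi - lr * grad * xi for wi, xi in zip(w, x)]   # adjust to reduce the error
print(out, w)
```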

  43. Key Issues • Choosing appropriate training sets • The training set needs to cover the full range of values • Number of inputs and number of outputs need good coverage • Need computational power • Preparing the input data • Features with continuous, discrete, and categorical values • Need to interpret the results

  44. Neural Networks for Time Series Data • Data may form a time series • E.g., stock market values fluctuate • Need data mining techniques to predict the next value in the time series • Neural networks have been adapted to handle time series data • The network is trained using the time series, starting with the oldest point, then the second oldest point, etc. • Feed-forward back propagation network • May take multiple time series as inputs • E.g., Euro, Dollar, etc. • Heuristics may be used for feed-forward back propagation networks
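
A small sketch of how a time series becomes training examples for a feed-forward network: each window of past values forms the inputs and the following value is the target; the exchange-rate numbers are invented.

```python
def windows(series, width=3):
    """Turn a time series into (inputs, target) pairs, oldest first."""
    return [(series[i:i + width], series[i + width])
            for i in range(len(series) - width)]

rates = [1.08, 1.09, 1.07, 1.10, 1.12, 1.11]   # e.g., a daily exchange rate
for inputs, target in windows(rates):
    print(inputs, "->", target)
```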

  45. Strengths/Weaknesses • Strengths • Applicable to a wide range of problems • Good results in complicated domains • Handle discrete, continuous, and categorical values • Many packages available • Weaknesses • Require inputs in the range 0 to 1 • Cannot explain results • May produce solutions prematurely

  46. Image Processing Example: Change Detection • Trained a neural network to predict the "new" pixel value from the "old" pixel value • Neural networks are good for multidimensional continuous data • Multiple nets give a range of "expected values" • Identified pixels where the actual value is substantially outside the range of expected values • A pixel is an anomaly if three or more bands (of seven) are out of range • Identified groups of anomalous pixels
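
A hedged numpy sketch of the decision rule: flag a pixel as anomalous when its actual value falls outside the "expected value" range in three or more of the seven bands; the predicted values and ranges here are synthetic stand-ins for the trained networks' outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
bands, pixels = 7, 1000

predicted = rng.random((bands, pixels))
low, high = predicted - 0.1, predicted + 0.1      # "expected value" range per band
actual = predicted + rng.normal(0, 0.05, (bands, pixels))
actual[:, :5] += 0.5                              # inject a few changed pixels

out_of_range = (actual < low) | (actual > high)   # per band, per pixel
anomalous = out_of_range.sum(axis=0) >= 3         # three or more of seven bands
print("anomalous pixels:", np.flatnonzero(anomalous)[:10])
```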
