This paper presents the Hold and Sign authentication system, which profiles smartphone users based on how they hold and sign on the screen. It combines touch-points and micro-movements of the phone to create a safer and more efficient authentication method compared to PINs or passwords. The system uses built-in three-dimensional sensors and touchscreen data to create user profiles. Four different classifiers are used for verification, and success metrics include TAR, FAR, FRR, TRR, and FTAR.
Hold and Sign: A Novel Behavioral Biometrics for Smartphone User Authentication Presented by: Dhruva Kumar Srinivasa Team-mate: Nagadeesh Nagaraja
Authors • Attaullah Buriro - Dept. of Inf. Eng. & Comput. Sci., Univ. of Trento, Trento, Italy • Bruno Crispo - DistriNet, KU Leuven, Leuven, Belgium • Filippo Delfrari - Dept. of Inf. Eng. & Comput. Sci., Univ. of Trento, Trento, Italy • Konrad Wrona - NATO Commun. & Inf. Agency, The Hague, Netherlands
User authentication • Pattern • PIN • Password • Gestures • Biometrics
Biometric-based authentication Physiological • Fingerprint • Face • Retina • Odor Behavioral • Typing rhythm • Gait • Voice
Handwritten Signature? • Socially and legally accepted form of personal identification. • Feasible to implement on smartphones. Challenges? • Intra-class variability leads to high FRR • Inter-class similarity leads to high FAR
Proposed approach • Authentication system based on how the user holds the phone while signing on the screen. • The system profiles the user based on the touch-points and micro-movements of the phone. • Safer than a PIN or password, as "shoulder surfing" is almost impossible.
Existing authentication systems • Sensor-based authentication • Physical three-dimensional sensors built into most smartphones – accelerometers, gyroscopes and orientation sensors. • E.g. on-body detection • Touch-based authentication • Compares the geometry of the gesture/pattern. • E.g. gestures, patterns, knock codes • Signature-based authentication • Computes a similarity score between signatures. • E.g. voice, face, signature
What makes Hold & Sign different? • It is bi-modal; it takes into account phone and finger movements during the signing process. • It relies on the screen touch-points and velocity of finger movement during signing – neither the image nor the geometry of the signature is used. • It does not impose a restriction on the gesture to be used – user is free to choose any pattern he/she is already familiar with such as a signature.
Threat model • Attacker is already in possession of the device. • Attacker can be a stranger, family member, friend or co-worker. • Goal of the attacker: To gain access to the device and its contents.
Solution • Consider all the touch-points recorded over the entire signature and the velocity of the finger movement. • All the physical sensors are triggered and kept running during the whole signing process. • Combine the extracted features from both built-in sensors and the touchscreen to profile user behavior. • A user profile template is formed from the selected feature subset and is then stored in the main database.
Data Source - Sensors • Three built-in three-dimensional sensors: the accelerometer, the gravity sensor, and the magnetometer. • Two additional sensor readings derived from the accelerometer: • High-Pass Filter – the contribution of the force of gravity is eliminated. • Low-Pass Filter – the force of gravity is isolated. • In Android, the SensorEvent API is used to collect these readings. • A fourth dimension was calculated for each of these sensors: the magnitude A_M = √(A_X² + A_Y² + A_Z²), where A_M is the resultant dimension and A_X, A_Y and A_Z are the acceleration along the X, Y and Z directions.
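As a concrete illustration, here is a minimal sketch (not the authors' code) of computing that fourth "magnitude" dimension inside an Android SensorEventListener; the class name and the buffering comment are illustrative assumptions:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Minimal sketch of deriving the fourth "magnitude" dimension
// from a three-dimensional sensor reading in Android.
public class MagnitudeListener implements SensorEventListener {

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds the X, Y and Z components of the reading.
        float ax = event.values[0];
        float ay = event.values[1];
        float az = event.values[2];

        // Fourth dimension: the resultant magnitude A_M = sqrt(Ax^2 + Ay^2 + Az^2).
        double magnitude = Math.sqrt(ax * ax + ay * ay + az * az);

        // A real implementation would buffer (ax, ay, az, magnitude) for the
        // duration of the signing gesture before extracting features.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}
```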
Data Source - Touchscreen • In Android, the MotionEvent API provides a class for tracking the motion of the finger on the screen. • The VelocityTracker API is used to track the motion of the pointer on the touchscreen.
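A hedged sketch of how touch-points and finger velocity could be captured with these APIs during signing; the surrounding class and the buffering step are assumptions, while the MotionEvent and VelocityTracker calls themselves are standard Android API:

```java
import android.view.MotionEvent;
import android.view.VelocityTracker;

// Sketch of tracking finger position and velocity during the signing gesture.
public class SignTouchTracker {

    private VelocityTracker velocityTracker;

    // Called from a View's onTouchEvent(MotionEvent) during signing.
    public void onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                velocityTracker = VelocityTracker.obtain();
                velocityTracker.addMovement(event);
                break;
            case MotionEvent.ACTION_MOVE:
                velocityTracker.addMovement(event);
                // Compute velocity in pixels per second (units = 1000).
                velocityTracker.computeCurrentVelocity(1000);
                float vx = velocityTracker.getXVelocity();
                float vy = velocityTracker.getYVelocity();
                // Touch-point coordinates come from the MotionEvent itself.
                float x = event.getX();
                float y = event.getY();
                // Buffer (x, y, vx, vy) for feature extraction.
                break;
            case MotionEvent.ACTION_UP:
                velocityTracker.recycle();
                velocityTracker = null;
                break;
        }
    }
}
```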
Classifiers chosen • Generally, the problem of user biometric authentication is solved in two ways: with binary classification and anomaly detection. • Four different verifiers were chosen: • BayesNet • K-Nearest Neighbor (KNN) • Multilayer Perceptron (MLP) • Random Forest (RF)
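The slides do not name the toolkit, but all four verifiers are available in the Weka library for Java; a minimal sketch, assuming a hypothetical ARFF file of labeled fused feature vectors (user_patterns.arff), of how such a verifier could be trained and queried:

```java
import weka.classifiers.Classifier;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class VerifierSketch {
    public static void main(String[] args) throws Exception {
        // "user_patterns.arff" is a hypothetical file of fused feature
        // vectors labeled owner vs. non-owner.
        Instances data = new DataSource("user_patterns.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Any of the four verifiers could be plugged in here, e.g.
        // weka.classifiers.bayes.BayesNet, weka.classifiers.lazy.IBk (KNN),
        // or weka.classifiers.functions.MultilayerPerceptron.
        Classifier verifier = new RandomForest();
        verifier.buildClassifier(data);

        // Verifying an attempt: classify a new fused feature vector.
        double predicted = verifier.classifyInstance(data.instance(0));
        System.out.println("Predicted class index: " + predicted);
    }
}
```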
Success metrics • True Acceptance Rate (TAR) - The proportion of attempts of a legitimate user correctly accepted by the system. • False Acceptance Rate (FAR) - The proportion of attempts of an adversary wrongly granted access to the system. FAR = 1 - TRR • False Rejection Rate (FRR) - The proportion of attempts of a legitimate user wrongly rejected by the system. FRR = 1 - TAR • True Rejection Rate (TRR) - The proportion of attempts of an adversary correctly rejected by the system. • Failure to Acquire Rate (FTAR) - The proportion of failed recognition attempts (due to system limitations). A reason for this failure could be the inability of the sensor to capture, insufficient sample size, number of features, etc.
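The relationships FAR = 1 - TRR and FRR = 1 - TAR are easy to see in code; the following sketch computes all five metrics from attempt counts, where the numbers are made up purely for illustration:

```java
// Sketch of computing the reported metrics from raw attempt counts;
// the counts below are hypothetical.
public class Metrics {
    public static void main(String[] args) {
        int genuineAccepted = 90, genuineRejected = 10;   // legitimate user
        int impostorAccepted = 3, impostorRejected = 97;  // adversary
        int failedToAcquire = 2, totalAttempts = 202;     // acquisition failures

        double tar = (double) genuineAccepted / (genuineAccepted + genuineRejected);
        double frr = 1.0 - tar; // legitimate attempts wrongly rejected
        double trr = (double) impostorRejected / (impostorAccepted + impostorRejected);
        double far = 1.0 - trr; // adversary attempts wrongly accepted
        double ftar = (double) failedToAcquire / totalAttempts;

        System.out.printf("TAR=%.3f FRR=%.3f TRR=%.3f FAR=%.3f FTAR=%.3f%n",
                tar, frr, trr, far, ftar);
    }
}
```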
Data Collection • Android supports data collection in both fixed and customized intervals after registering the sensors. Such intervals are often termed Sensor_Delay_Modes. • Hold & Sign uses SENSOR_DELAY_GAME, since SENSOR_DELAY_NORMAL and SENSOR_DELAY_UI were too slow and SENSOR_DELAY_FASTEST introduces noise into the data collection. • 30 volunteers (22 male and 8 female) of several nationalities; the majority of them are either Master's or Ph.D. students, but not security experts. • Data was collected during three different activities – sitting, standing and walking – on a Google Nexus 5.
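A short sketch of registering the three sensors at SENSOR_DELAY_GAME, as the slides describe; the helper class is illustrative and assumes a listener such as the one sketched earlier:

```java
import android.hardware.Sensor;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch of registering the three built-in sensors at SENSOR_DELAY_GAME.
public class SensorSetup {
    public static void register(SensorManager sensorManager,
                                SensorEventListener listener) {
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        Sensor gravity = sensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY);
        Sensor magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);

        // SENSOR_DELAY_GAME balances sampling rate against noise.
        sensorManager.registerListener(listener, accelerometer, SensorManager.SENSOR_DELAY_GAME);
        sensorManager.registerListener(listener, gravity, SensorManager.SENSOR_DELAY_GAME);
        sensorManager.registerListener(listener, magnetometer, SensorManager.SENSOR_DELAY_GAME);
    }
}
```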
Features • Gathered 4 data streams from every 3-dimensional sensor, and extracted 4 statistical features, namely mean, standard deviation, skewness, and kurtosis, from every data stream. In total, 16 features were obtained from the four dimensions of each sensor. • Similarly, 13 features were extracted from the touchscreen data (the full feature list is given in the paper).
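A minimal sketch of extracting those four statistics from one buffered data stream, using population central moments; applied to all four dimensions of a sensor, it yields the 16 per-sensor features:

```java
// Sketch of extracting the four statistical features (mean, standard
// deviation, skewness, kurtosis) from one buffered data stream.
public class StatFeatures {

    public static double[] extract(double[] stream) {
        int n = stream.length;
        double mean = 0;
        for (double v : stream) mean += v;
        mean /= n;

        double m2 = 0, m3 = 0, m4 = 0; // central moments
        for (double v : stream) {
            double d = v - mean;
            m2 += d * d;
            m3 += d * d * d;
            m4 += d * d * d * d;
        }
        m2 /= n; m3 /= n; m4 /= n;

        double std = Math.sqrt(m2);
        double skewness = m3 / Math.pow(m2, 1.5);
        double kurtosis = m4 / (m2 * m2);

        return new double[] { mean, std, skewness, kurtosis };
    }
}
```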
Features fusion • Fusing data as early as possible may increase the recognition accuracy of the system. • Data fusion was done at the feature level. The fusion of the 16 features from each sensor makes a new feature vector, called the pattern of the user's hold behavior. Similarly, the feature vector of sign behavior is called the sign pattern. • The length of the fused feature vector for both modalities becomes 93 features.
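Feature-level fusion here amounts to vector concatenation; a small sketch (the method name is illustrative):

```java
// Sketch of feature-level fusion: concatenating per-sensor and touchscreen
// feature vectors into a single fused vector (93 features in the paper).
public class FeatureFusion {

    public static double[] fuse(double[]... featureVectors) {
        int total = 0;
        for (double[] v : featureVectors) total += v.length;

        double[] fused = new double[total];
        int offset = 0;
        for (double[] v : featureVectors) {
            System.arraycopy(v, 0, fused, offset, v.length);
            offset += v.length;
        }
        return fused;
    }
}
```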
Feature Subset Selection • Feature subset selection is the process of choosing the best possible subset, i.e. the set that gives the maximum accuracy, from the original feature set. • The feature set was evaluated with the Recursive Feature Elimination (RFE) feature subset selection method using scikit-learn.
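The paper used scikit-learn's RFE implementation; to keep all examples in one language, here is a language-agnostic sketch of the RFE loop itself in Java, where the Ranker interface is a hypothetical stand-in for the importance scores a fitted model provides:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the RFE idea: repeatedly train a model, rank the remaining
// features by importance, and drop the weakest until the target size is met.
public class RecursiveFeatureElimination {

    interface Ranker {
        // Returns one importance score per entry of 'active'
        // (a stand-in for scores derived from a fitted model).
        double[] rankFeatures(List<Integer> active, double[][] data, int[] labels);
    }

    public static List<Integer> select(Ranker ranker, double[][] data, int[] labels,
                                       int targetSize) {
        List<Integer> active = new ArrayList<>();
        for (int i = 0; i < data[0].length; i++) active.add(i);

        while (active.size() > targetSize) {
            double[] scores = ranker.rankFeatures(active, data, labels);
            // Find and eliminate the least important remaining feature.
            int worst = 0;
            for (int i = 1; i < scores.length; i++) {
                if (scores[i] < scores[worst]) worst = i;
            }
            active.remove(worst);
        }
        return active; // surviving feature indices = selected subset
    }
}
```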
Analysis • Data was analyzed in two settings: • the verifying legitimate user scenario • the attack scenario • In the verifying legitimate user scenario, the system was trained with the data from the owner and then tested with the patterns belonging to that owner. • The results were reported in terms of TAR and FRR. • In the attack scenario, the system was trained with all the data samples from the owner and then tested with the patterns belonging to the other 29 users. • The results were reported in terms of FAR and TRR.
Results • Results were reported in three ways: intra-activity, inter-activity and activity fusion. • Intra-activity - training and testing on each single activity (i.e. training on walking to test walking only). • Inter-activity - training with one single activity and using that training for testing all activities. • Activity fusion - the combined data of all 3 activities for both training and testing (i.e. training with fused data from walking, sitting and standing) to test all activities.
Results contd. • Intra-activity: ≥79% TAR with the full feature set and ≥85% TAR with the chosen RFE feature subset. • Inter-activity: unsatisfactory results (65.82% at best).
Need for activity fusion • Training the system in just one activity and using it in multiple activities does not lead to good results. • Instead, the patterns of multiple activities were combined and the RFE feature selection method was applied to the combined data.
Hold & Sign implementation • Uses the MLP classifier based on the feature set extracted using the RFE method. • Analysis was performed using this application on a Google Nexus 5 smartphone running Android 4.4.4
Performance • Measured three different timings: • sample acquisition time • training time • testing time • Computed these times for 3 different settings: with 15, 30 and 45 patterns. • Tested each setting on the Google Nexus 5 with 35 tries for each time. Results are averaged over all 35 runs.
Performance contd. • Training time is the time required to train the classifier: 3.497 s, 6.193 s and 9.310 s with 15, 30 and 45 patterns, respectively. • Testing time is the time required by the system to accept/reject the authentication attempt: 0.200 s, 0.213 s and 0.253 s with 15, 30 and 45 patterns, respectively. • Sample acquisition time was reported in a chart on the original slide (not reproduced here).
Power consumption • Hold & Sign: • All the steps (sensor data collection, feature extraction, etc.) disabled = 460 mW • Only sensor data collection enabled = 493 mW • Sensor data collection and feature extraction enabled = 588 mW • Full functionality enabled ≈ 1000 mW • Common tasks, for comparison: • A one-minute phone call: 1054 mW • Sending a text message: 302 mW • Sending or receiving an email over WiFi: 432 mW • Sending or receiving an email over a mobile network: 610 mW
Feedback • An 11-question questionnaire, adapted from the System Usability Scale (SUS), was given to the chosen volunteers (30 users). • An optional subjective question: What did you like or dislike about the mechanism? • Feedback was received from 18 out of 30 volunteers (60%). • Achieved an average SUS score of 68.33%, better than the well-established voice recognition score (66%) and its fusion with face (46%) and gestures (50%). • Some negative responses: • Initial setup too cumbersome. • Having to sign multiple times, whereas setting up a PIN is easier. • Requires the use of both hands.
Limitations • Requires the use of both hands. • Cannot predict the user's ongoing activity in order to extract the best pre-selected features and use them for verifying user identity. • How does it stack up against the increasingly popular fingerprint authentication?
References • Attaullah Buriro, Bruno Crispo, Filippo Delfrari and Konrad Wrona, "Hold and Sign: A Novel Behavioral Biometrics for Smartphone User Authentication", IEEE Security and Privacy Workshops (SPW), 2016. • https://developer.android.com/reference/android/view/MotionEvent.html • https://developer.android.com/reference/android/hardware/SensorEvent.html