
Side-Channel Inference Attacks: Activities of Daily Living Inference from Encrypted Video Streams

This presentation surveys side-channel inference attacks, culminating in how activities of daily living can be inferred from the traffic patterns of encrypted video surveillance streams, along with related results on privacy-preserving smart metering, spectrum auctions, online social networks, and secure big data computing.





Presentation Transcript


  1. Side-Channel Inference Attacks: Activities of Daily Living Inference from Encrypted Video Streams. Xiuzhen (Susan) Cheng, Professor of Computer Science, The George Washington University. Email: cheng@gwu.edu

  2. Privacy-Preserving Smart Metering • Fine-grained smart meter readings can release private information • Absence/presence, activities, etc., pose strong privacy (safety and security) concerns • Hiding the fine-grained data via aggregation or other approaches does not work • It makes billing, statistical analysis, etc. hard, if not impossible • Our approach: achieve privacy preservation and billing via delayed information release • Employ a token that changes at each billing period • Employ Rabin's cryptosystem • Propose a novel group signature scheme. Figure from G. Hart, "Nonintrusive appliance load monitoring," Proceedings of the IEEE, 80(12):1870–1891, Dec. 1992.
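Rabin's cryptosystem is mentioned above only as a building block; for reference, a minimal Python sketch of textbook Rabin encryption/decryption (toy parameters, not the paper's billing protocol) could look like this:

```python
# Toy sketch of textbook Rabin encryption/decryption (NOT the paper's billing
# scheme; the parameters are illustrative and far too small for real use).
p, q = 2063, 2087          # Blum primes: p = q = 3 (mod 4)
n = p * q                  # public key; (p, q) is the private key

def rabin_encrypt(m: int) -> int:
    return pow(m, 2, n)    # c = m^2 mod n

def rabin_decrypt(c: int):
    # Square roots modulo each prime (works because p, q = 3 mod 4).
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)
    # Combine with CRT; Rabin decryption yields four candidate roots.
    yp, yq = pow(p, -1, q), pow(q, -1, p)
    r1 = (yp * p * mq + yq * q * mp) % n
    r3 = (yp * p * mq - yq * q * mp) % n
    return {r1, n - r1, r3, n - r3}

c = rabin_encrypt(1234)
assert 1234 in rabin_decrypt(c)   # the plaintext is among the four roots
```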

  3. Self-Collusion and Truthful Auctions: Dynamic Spectrum Access, Cognitive Radio (CR) Enabled Spectrum Auctions • Market manipulation • In a 1999 German spectrum auction, T-Mobile saved at least 1.82 million DM (Deutsche Mark) per MHz on blocks 1-5 by colluding with Mannesmann • In the 1994 FCC DEF auction, on blocks D and E, the colluding bidders paid $2.50/pop while non-colluding bidders paid $4.34/pop • Challenges • Spatial and price diversity • Channel spatial reuse • Heterogeneous spectrum attributes • What we have done • Propose practical auction models for the spectrum market • Prove that all the traditional auctions (VCG, MOM, McAfee) lose their truthfulness • Identify the root cause of untruthfulness and the conditions for truthful auctions • Develop frameworks for self-collusion-resistant and truthful auctions

  4. Safety and Efficiency in ITS • Extra devices: On-Board Unit, unidirectional microphone • Feature extraction: Mel-Frequency Cepstral Coefficients (MFCC) • Pattern matching: Gaussian Mixture Model (GMM), Maximum Likelihood (ML) • Extra devices: On-Board Unit, lane identifier, rear camera • Canny edge detection, Hough line detection • Lane tracker: accelerometer and gyroscope, moving average, disturbance cancellation • Bending roads, driving styles
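As a rough illustration of the MFCC + GMM pattern-matching pipeline listed above, here is a minimal sketch assuming the librosa and scikit-learn libraries; the file names and class labels are hypothetical:

```python
# Sketch: classify short audio clips by fitting one GMM per class to MFCC
# features and picking the class with the maximum likelihood.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_features(path):
    y, sr = librosa.load(path, sr=None)                  # load audio
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 MFCCs per frame
    return mfcc.T                                        # shape (frames, 13)

# Hypothetical training data: {class label: list of wav files}
train = {"car": ["car1.wav", "car2.wav"], "truck": ["truck1.wav"]}

models = {}
for label, files in train.items():
    X = np.vstack([mfcc_features(f) for f in files])
    models[label] = GaussianMixture(n_components=8, covariance_type="diag").fit(X)

def classify(path):
    X = mfcc_features(path)
    # score_samples gives per-frame log-likelihoods; sum over frames (ML decision)
    return max(models, key=lambda lbl: models[lbl].score_samples(X).sum())
```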

  5. Online Social Networking: Attacks and Defenses • Privacy: interests, profession, personal background, friend preference, etc. • An ever-growing number of social network attack instances • In 2009, a market strategist named Carri Bugbee was tracked; in early 2016, Facebook recommended a psychiatrist's patients to friend each other; a 2013 CNN report describes grandparent scams that steal thousands from seniors ... • We have considered identity and attribute inference, differential-privacy-based defenses, and cryptographic approaches.

  6. Secure and Privacy-Aware Big Data Computing • Mutual privacy-preserving k-means clustering in social participatory sensing • Secure outsourcing for matrix inverse computation • Secure and verifiable secret sharing for big data storage in clouds • Based on the NTRU cryptosystem

  7. Privacy Leakage in Encrypted Video Streams • Video surveillance systems • Video stream encoding: intra-coding vs. inter-coding • Leakage in encrypted video streams: infer whether you are at home, infer your basic activities, infer your living habits. [Figure: example inferences include nobody there, dressing, moving, styling hair, turning the light on/off, and eating]

  8. Side-Channel Attacks • Side-channel attacks are attacks that exploit any information other than the plaintext and ciphertext. [Diagram: the attacker (1) obtains public information from the target system (WiFi signal strength, timing, memory, etc.) and (2) infers private information (location, passwords, activities, etc.) with ML models]

  9. Why Side-Channel Attacks? • Side-channel information is easy to retrieve • Such attacks are hard to defend against • They are difficult to detect • They are highly profitable. http://www.infoworld.com/article/2624981/network-monitoring/hackers-find-new-way-to-cheat-on-wall-street----to-everyone-s-peril.html

  10. Recent Major Research Results • Cracking passwords/PINs/pattern locks and stealing user privacy: NDSS'17 Cracking Android Pattern Lock, CCS'16 WindTalker, S&P'16 Interrupts • Cross-app unauthorized access: CCS'16 SideDroid, USENIX'14 UI State Inference

  11. CCS'16 WindTalker Attack • 1. The attacker sets up a rogue (malicious) WiFi AP • 2. The victim device connects to the AP • 3. The attacker constantly sends ICMP ECHO requests • 4. The victim constantly replies with ICMP responses • 5. The attacker measures the Channel State Information (CSI) • Inference attack: Principal Component Analysis (PCA) on CSI, Discrete Wavelet Transform, classification using Dynamic Time Warping • Recovers the Alipay PIN
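A minimal Python sketch of that inference pipeline (PCA on CSI traces, wavelet denoising, then nearest-template matching with Dynamic Time Warping), assuming NumPy, scikit-learn, and PyWavelets, with hypothetical CSI arrays:

```python
# Sketch of a WindTalker-style pipeline: PCA -> DWT denoising -> DTW matching.
# csi_trace: hypothetical array of shape (time, subcarriers).
import numpy as np
import pywt
from sklearn.decomposition import PCA

def preprocess(csi_trace):
    pc1 = PCA(n_components=1).fit_transform(csi_trace).ravel()  # dominant component
    coeffs = pywt.wavedec(pc1, "db4", level=3)                   # discrete wavelet transform
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]          # drop detail coefficients
    return pywt.waverec(coeffs, "db4")                           # denoised 1-D signal

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic time warping distance.
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def classify(unknown, templates):
    # templates: {digit: denoised reference signal}; pick the closest by DTW.
    return min(templates, key=lambda d: dtw_distance(unknown, templates[d]))
```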

  12. CCS'16 SideDroid Attack • A zero-permission malicious app resides in the victim Android device • It collects TCP counts, IP, WiFi BSSID, and audio status and reports them to the attacker's server • The attacker infers user identity (Twitter account), health and investment information, and where you are going

  13. USENIX'14 UI State Inference • A zero-permission malicious app resides in the victim Android device • 1. It collects memory and CPU information • 2. UI state inference with a Hidden Markov Model (HMM) • 3. It learns the device's current foreground activity • 4. UI hijacking
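A generic illustration (not that paper's implementation) of decoding hidden states from such observations with a Gaussian HMM, assuming the hmmlearn library and synthetic memory/CPU readings:

```python
# Sketch: fit a Gaussian HMM to observed (memory, CPU) readings and decode
# the most likely hidden "UI state" sequence. The data here is synthetic.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 2))   # hypothetical (memory, cpu) features

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(observations)                    # unsupervised fit of 4 hidden states
states = model.predict(observations)       # Viterbi-decoded state sequence
print(states[:20])
```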

  14. Inferring Victim Social Network Activities Based on Cellphone Side Channels • A zero-permission malicious app resides in the victim Android device • 1. It collects all available side-channel information from all available processes • 2. It passes all the information to a deep neural network • 3. The network outputs the desired features, e.g., the timestamp at which the victim tweets, the timestamp at which the victim comments, and the number of characters
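A minimal sketch of the learning step, assuming scikit-learn's MLPClassifier and hypothetical per-window side-channel feature vectors with activity labels:

```python
# Sketch: train a small feed-forward network on side-channel feature vectors
# (e.g., TCP counts, memory deltas per time window) to predict the activity.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16))            # hypothetical side-channel features
y = rng.integers(0, 3, size=1000)          # hypothetical labels: tweet/comment/idle

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```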

  15. Social Network Activity Inference Results • We conducted 1,125 tests on three of the most popular social networking apps: Twitter, Flickr, and Instagram. The inference accuracies for these three apps are 90.67%, 98.67%, and 99.73%, respectively.

  16. Side-Channel Information Leakage of Encrypted Video Stream in Video Surveillance Systems [INFOCOM 2016] Hong Li1,3, Yunhua He2,3, Limin Sun1, Xiuzhen Cheng3 and Jiguo Yu4 1 Institute of Information Engineering, Chinese Academy of Sciences, China 2 School of Computer Science, Xidian University, China 3 Department of Computer Science, The George Washington University, USA 4 School of Information Science and Engineering, Qufu Normal University, China

  17. Video Surveillance Systems and Market • Video surveillance systems are widely used • Video surveillance market: according to market research by ReportLinker, the market will reach $42.81 billion by 2019, growing at an annual rate of 19.1% • Examples: Google's Dropcam, Baidu's Ermu, D-Link's cameras

  18. Redundancies in Video Streams • Spatial redundancy: caused by the high correlation between pixels within one frame • Temporal redundancy: caused by the high correlation between adjacent frames

  19. Reducing Temporal Redundancy by Difference Coding • Difference coding: a frame is compared with a reference frame, and only the pixels that have changed with respect to the reference frame are coded (as in MPEG-4 and H.264). [Figure: original frames vs. difference-coded frames]
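A toy NumPy sketch of the idea (not an actual MPEG-4/H.264 encoder): only pixels that differ from the reference frame beyond a threshold are kept, so a mostly static scene produces very little data.

```python
# Toy difference coding: encode a frame as (positions, values) of pixels that
# changed relative to a reference frame. Real codecs are far more elaborate.
import numpy as np

def diff_encode(frame, reference, threshold=5):
    changed = np.abs(frame.astype(int) - reference.astype(int)) > threshold
    idx = np.flatnonzero(changed)           # indices of changed pixels
    return idx, frame.ravel()[idx]          # "P-frame": only the changes

def diff_decode(reference, idx, values):
    out = reference.copy().ravel()
    out[idx] = values
    return out.reshape(reference.shape)

ref = np.zeros((120, 160), dtype=np.uint8)      # static background
cur = ref.copy()
cur[40:80, 60:100] = 200                        # someone moves into the scene
idx, vals = diff_encode(cur, ref)
print("changed pixels:", idx.size, "of", cur.size)   # traffic grows with motion
```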

  20. Reducing Temporal Redundancy by Difference Coding • [Figure: traffic size of the encoded video streams, original frames vs. difference coding, organized into GOPs (Groups of Pictures)]

  21. Side-Channel Information Leakage • When the monitored area is static, the B/P-frames are very small, and the traffic is steady and small • When someone appears in the monitored area, the B/P-frames are relatively large, and the traffic is large and unstable • The traffic patterns differ when a user performs different activities: large-scale and complex movements generally yield heavy and unstable traffic, caused by the large and varied differences between adjacent frames • Are these claims right?
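Those claims suggest a very simple presence detector; a hedged sketch that thresholds the mean and variance of the sniffed per-second traffic sizes (the window length and thresholds are made-up example values):

```python
# Sketch: flag "someone is present" when encrypted-video traffic in a sliding
# window is both large and unstable. Thresholds here are arbitrary examples.
import numpy as np

def presence(traffic, window=10, mean_thresh=5000, std_thresh=2000):
    """traffic: 1-D sequence of bytes per second sniffed from the camera stream."""
    traffic = np.asarray(traffic, dtype=float)
    flags = []
    for i in range(len(traffic) - window + 1):
        w = traffic[i:i + window]
        flags.append(w.mean() > mean_thresh and w.std() > std_thresh)
    return np.array(flags)
```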

  22. An Experimental Study • Experiment settings: we used Google's Dropcam Pro and recorded the traffic size data of an encrypted video stream without I-frames • A volunteer conducted four basic activities of daily living (ADLs): dressing, moving, styling hair, and eating. [Photos: dressing, moving, styling hair, eating]

  23. An Experimental Study • [Figures: traffic patterns with and without movements; traffic patterns under different activities]

  24. An Experimental Study • [Figures: traffic patterns under different activities]

  25. Detection of Basic Activities of Daily Living • 1. Get the time-series traffic size data by sniffing • 2. Split the traffic size data into small pieces, each corresponding to one activity • 3. Use ML algorithms to recognize the underlying activities

  26. Activity Segmentation • Objective: split the traffic size data into small pieces, each corresponding to one activity • Basic idea: locate the beginning/ending points of each activity • Step 1: detect all change points (of statistical properties) using binary segmentation, which approximately minimizes an objective of the form Σ_i C(y_{t_i..t_{i+1}}) + β·m, where the sum of segment costs C is the fitness function and β·m (with m the number of change points) is a penalty against overfitting
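A sketch of the change-point step, assuming the ruptures library's binary segmentation with an L2 cost as the fitness function and a linear penalty; the paper's exact cost model and penalty value may differ:

```python
# Sketch: binary-segmentation change-point detection on the traffic time series.
# Uses the ruptures library; the cost model and penalty value are illustrative.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(2)
# Synthetic traffic: quiet -> activity -> quiet (bytes per second).
traffic = np.concatenate([rng.normal(500, 50, 60),
                          rng.normal(6000, 1500, 40),
                          rng.normal(500, 50, 60)])

algo = rpt.Binseg(model="l2").fit(traffic)   # L2 cost plays the role of the fitness function
change_points = algo.predict(pen=1e8)        # penalty term guards against overfitting
print(change_points)                         # expected near [60, 100, 160] (last index = series end)
```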

  27. Activity Segmentation • Step 2: merge segments with short durations • Step 3: merge periodic segments, based on the similarity between segments computed by dynamic time warping and on the duration of each segment
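A rough sketch of these merging heuristics; the duration threshold and DTW similarity threshold are invented example values, and the DTW helper is the same kind of routine as in the WindTalker sketch above:

```python
# Sketch of the merging heuristics: (2) absorb very short segments into their
# neighbors, (3) merge adjacent segments that look periodic (similar under DTW).
import numpy as np

def dtw_distance(a, b):
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def merge_short(segments, min_duration=5):
    # Step 2: append segments shorter than min_duration samples to the previous one.
    merged = [segments[0]]
    for seg in segments[1:]:
        if len(seg) < min_duration:
            merged[-1] = np.concatenate([merged[-1], seg])
        else:
            merged.append(seg)
    return merged

def merge_periodic(segments, dtw_threshold=100.0):
    # Step 3: merge adjacent segments whose DTW distance is small (repetitive activity).
    merged = [segments[0]]
    for seg in segments[1:]:
        if dtw_distance(merged[-1], seg) < dtw_threshold:
            merged[-1] = np.concatenate([merged[-1], seg])
        else:
            merged.append(seg)
    return merged
```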

  28. Activity Recognition • Step 1: filter out segments without activities • Step 2: feature extraction, including skewness (a measure of the symmetry of a probability distribution) and kurtosis (a measure of whether the data are peaked or flat relative to a normal distribution)
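A small sketch of per-segment statistical feature extraction, assuming SciPy; the authors' exact feature set may differ:

```python
# Sketch: turn a traffic segment into a statistical feature vector.
import numpy as np
from scipy.stats import skew, kurtosis

def extract_features(segment):
    seg = np.asarray(segment, dtype=float)
    return np.array([
        seg.mean(),      # average traffic size
        seg.std(),       # burstiness
        seg.max(),
        skew(seg),       # symmetry of the distribution
        kurtosis(seg),   # peakedness relative to a normal distribution
        len(seg),        # segment duration (in samples)
    ])
```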

  29. Activity Recognition • Step 3: machine learning algorithms • k-NN classification: when training data are available • DBSCAN-based clustering: when training data are unavailable • Step 4: error correction for the recognition of eating • Objective: alleviate the impact of activity segmentation on the recognition accuracy of eating • Cause: periodic segment merging causes more errors for eating • Method: given 2k+1 consecutive activities, if all segments but the middle one are recognized as "eating", the middle one is corrected to "eating"
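A sketch of the recognition and error-correction steps, assuming scikit-learn and the feature extractor above; the values of k, eps, and min_samples are illustrative:

```python
# Sketch: k-NN when labeled data exist, DBSCAN otherwise, plus the
# sliding-window error correction for "eating". Parameter values are examples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import DBSCAN

def recognize_supervised(train_X, train_y, test_X, k=5):
    return KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y).predict(test_X)

def recognize_unsupervised(X, eps=0.5, min_samples=3):
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)   # cluster labels

def correct_eating(labels, k=2):
    # If, in a window of 2k+1 consecutive activities, every segment except the
    # middle one is "eating", relabel the middle one as "eating".
    labels = list(labels)
    for i in range(k, len(labels) - k):
        window = labels[i - k:i + k + 1]
        others = window[:k] + window[k + 1:]
        if all(lbl == "eating" for lbl in others) and labels[i] != "eating":
            labels[i] = "eating"
    return labels
```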

  30. Experiments • Hardware • Cameras: Google's Dropcam Pro, Samsung's SmartCam HD Pro • Settings: [table of camera settings] • Experimental methodology • The volunteer conducted 4 basic activities of daily living • Experiment scenarios: • Different distances to the cameras: 1m, 2m, and 3m • Different illumination: morning, afternoon, and evening

  31. Experiments • Accuracy of activity segmentation • Accuracy of classification • [Figure: impact of training set size on the classification accuracy]

  32. Experiments • Accuracy of clustering • [Figures: impact of epsilon on the clustering accuracy; impact of min_samples on the clustering accuracy]

  33. Experiments • Varying the distance between the cameras and the activities • [Figures: impact of distance on the clustering accuracy; impact of distance on the classification accuracy]

  34. Experiments • Impact of illumination • [Figures: impact of illumination on the classification accuracy; impact of illumination on the clustering accuracy]

  35. Experiments • Impact of error correction on the recognition of eating

  36. Limitations and Ongoing Research • Inferring activities of daily living in a multiple-person case • Attacks to infer other living habits • Attacks to infer more complex activities of daily living • Countermeasures for privacy preservation

  37. Questions and Comments Thank you! Mom said I am a trouble-maker but I think I am very COOL
