Evaluating LBS Privacy in Dynamic Context
Outline • Introduction • Attack Model Overview • Defense Model Classification • Evaluation Module • Conclusion
What do we need now? • The problem of privacy preservation in a context-aware environment • Different services require different algorithms • Even within a single service? How do we solve this? • Provide suitable privacy-preserving services (algorithms) after forecasting the user's privacy concern • Evaluate the results from the privacy-preserving services (algorithms), then refine them
Key problem • Provide a privacy protection level that is suitable to the context • rather than increasing the protection level unconditionally • Each service provider must obtain the user's current privacy concern • and optimize the privacy-preserving level → the service provider's problem
Our assumptions (architecture) • The LBS middleware is a trusted third party responsible for blurring the exact location information • 1: the user sends the query + location information to the middleware • 2: the middleware forwards the query + a cloaked spatial region to the privacy-aware query processor of the location-based database server • 3: the server returns a candidate answer to the middleware • 4: the middleware returns the candidate answer to the user
Key problem (1) • Provide a privacy protection level that is suitable to the context, rather than increasing it unconditionally • Each service provider must obtain the user's current privacy concern and optimize the privacy-preserving level • → Push this puzzle to the middleware: we need an efficient context management and privacy evaluation system
Related Work • An index-based privacy-preserving service trigger, by Y. Lee and O. Kwon [13]
Output model of the privacy concern index (figure: index levels 1–5)
Related Work • An index-based privacy-preserving service trigger, by Y. Lee and O. Kwon [12] • Advantages • Easy to implement, good performance • Disadvantages • Static context • Results come mostly from users' feelings (gathered through surveys) • How can a privacy-preserving service use the result? • Support for privacy algorithms? → We need an efficient context management method
Related Work (1) • Efficient profile aggregation and policy evaluation in a middleware for adaptive mobile applications – the CARE Middleware – Claudio Bettini [13]
Related Work (1) • Efficient profile aggregation and policy evaluation in a middleware for adaptive mobile applications – the CARE Middleware – Claudio Bettini [14] • Advantages • Manages context efficiently and dynamically • Results can be used directly by privacy algorithms • Scalability
Evaluation Module: Version 1
Outline • Introduction • Attack Model Overview • Defense Model Classification • Evaluation Module • Conclusion
Attack Model Overview [1] • What is a privacy threat? • Whenever an adversary can associate • the identity of a user with • information that the user considers private
Attack Model Overview (1) • What is a privacy attack? • A specific method used by an adversary to obtain the sensitive association • How do we classify privacy attacks? • By the parameters of an adversary model • What are the main components of an adversary model? • The target private information • The ability to obtain the transmitted messages • The background knowledge and inferencing abilities
How can the adversary model be used? • The target private information • Explicit in the message (i.e., the real ID) → eavesdropping on the channel • Implicit (using a pseudo-ID) → inference with external knowledge, e.g., joining the pseudo-ID with location data → attacks exploiting quasi-identifiers in requests
How can the adversary model be used? • The ability to obtain the transmitted messages • Messages: snapshot or chain • Issuers: single or multiple → single- versus multiple-issuer attacks
How can the adversary model be used? • The background knowledge and inferencing abilities • Unavailable: the threat depends on the sensitive information in the message (implicit or explicit) • Completely available: a privacy violation can occur independently of the service request → attacks exploiting knowledge of the defense
Outline • Introduction • Attack Model Overview • Defense Model Classification • Evaluation Module • Conclusion
Defense Model Classification • Our target • Architecture: centralized • Techniques: anonymity-based and obfuscation • Defense models against • Snapshot, Single-Issuer and Def-Unaware Attacks • Snapshot, Single-Issuer and Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks
Outline • Introduction • Attack Model Overview • Defense Model Classification • Snapshot, Single-Issuer and Def-Unaware Attacks • Snapshot, Single-Issuer and Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks • Evaluation Module • Conclusion
Single-issuer and def-unaware attacks • Assumptions • The attacker can acquire knowledge of the exact location of each user • The attacker knows that the generalized region g(r).Sdata always includes the point r.Sdata • The attacker cannot reason over more than one request → the uniform attack
Single-issuer and def-unaware attacks • A uniform attack (figure: the cloaked region covers only users u1, u2, u3): not safe if the user requires k = 4 (with threshold h = ¼)
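To make the uniform attack concrete, here is a minimal sketch (all names and coordinates are illustrative, not taken from the slides): an attacker who knows every user's exact position counts how many users fall inside the published cloaked region and assigns each of them the same probability of being the issuer; if that probability exceeds the tolerated threshold h, the generalization is unsafe.

```python
def users_in_region(region, locations):
    """Users whose exact position (known to the attacker) falls inside the cloaked region."""
    x1, y1, x2, y2 = region
    return [u for u, (x, y) in locations.items() if x1 <= x <= x2 and y1 <= y <= y2]

def uniform_attack_confidence(region, locations):
    """Uniform attack: every user inside the region is considered equally likely to be
    the issuer, so the attacker's confidence in any candidate is 1 / (users inside)."""
    inside = users_in_region(region, locations)
    return 1.0 / len(inside) if inside else 0.0

# Scenario as read from the slide (values are illustrative): the cloaked region covers
# only u1, u2, u3, so the confidence is 1/3, which violates k = 4 (threshold h = 1/4).
locations = {"u1": (1.0, 1.0), "u2": (2.0, 1.5), "u3": (1.5, 2.0)}
region = (0.0, 0.0, 3.0, 3.0)
h = 1.0 / 4
print("unsafe" if uniform_attack_confidence(region, locations) > h else "safe")  # -> unsafe
```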
Single-issuer and def-aware attacks • Assumptions • The same as in the def-unaware case • In addition, the attacker may know the generalization function g → the uniform attack and the outlier problem (figure: users u1, u2, u3)
Example attack: the outlier problem (figure)
Cst+g-unsafe generalization algorithms • The following algorithms are Cst+g-unsafe: • IntervalCloaking • Nearest Neighbor Cloak • … • Why are they not safe?
Cst+g-unsafe generalization algorithms • These algorithms are not safe because not every user in the anonymizing set (AS) generates the same AS for a given k → uniform attack • A property that Cst+g-safe generalization algorithms must satisfy (the reciprocity property): • the AS contains the issuer U and at least k-1 additional users • every user in the AS generates the same AS for the given k
Cst+g-safe generalization algorithms • hilbASR • dichotomicPoints • Grid • All of the above algorithms satisfy the reciprocity property, so they remain safe even against an attacker who knows the generalization function
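The sketch below illustrates why bucket-style algorithms of this family satisfy reciprocity. It is not the actual hilbASR or Grid implementation: it orders users by their x-coordinate as a stand-in for the Hilbert value and cuts the ordering into fixed buckets of size k, so the anonymity set depends only on the bucket boundaries and never on who issued the request.

```python
def bucket_cloak(users, issuer, k):
    """Simplified hilbASR-style cloaking (illustrative): order users along a fixed
    1-D ordering (x-coordinate here; a Hilbert value in the real algorithm) and cut
    the ordering into consecutive buckets of size k starting from index 0.
    The anonymity set is the bucket containing the issuer."""
    ordered = sorted(users, key=lambda u: users[u][0])  # fixed, issuer-independent order
    idx = ordered.index(issuer)
    start = (idx // k) * k                              # bucket boundary does not depend on the issuer
    return set(ordered[start:start + k])

# Reciprocity: every member of a bucket maps back to the same bucket.
users = {"u1": (1, 1), "u2": (2, 5), "u3": (4, 2), "u4": (6, 6), "u5": (7, 1), "u6": (9, 3)}
aset = bucket_cloak(users, "u3", k=3)
assert all(bucket_cloak(users, u, k=3) == aset for u in aset)
```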
Centralized defenses against snapshot, single-issuer and def-aware attacks • To defend against snapshot, single-issuer and def-aware attacks, the generalization must satisfy the reciprocity property • How do we know whether an algorithm satisfies that property?
Deciding whether an algorithm satisfies reciprocity • For a request r with anonymity level k: • run the algorithm to get the AS • for each user ui in the AS, run the algorithm again to get ASi • if AS = ASi for every i, the algorithm satisfies reciprocity for this request • otherwise, it is not safe
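A direct implementation of this check, assuming the generalization algorithm is exposed as a callable mapping (issuer, k) to an anonymity set (the callable signature is an assumption made for illustration):

```python
from typing import Callable, Set

def satisfies_reciprocity(cloak: Callable[[str, int], Set[str]],
                          issuer: str, k: int) -> bool:
    """Direct reciprocity check for one request: the generalization is safe for this
    request only if every user in the anonymity set it produces would have produced
    exactly the same anonymity set for the same k."""
    anonymity_set = cloak(issuer, k)
    if issuer not in anonymity_set or len(anonymity_set) < k:
        return False
    return all(cloak(member, k) == anonymity_set for member in anonymity_set)
```

For example, satisfies_reciprocity(lambda u, kk: bucket_cloak(users, u, kk), "u3", 3) returns True for the bucket sketch shown earlier.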
Checking reciprocity based on previously calculated results • After checking reciprocity directly, save the result to a database • For a new request r, look for a similar case: • the result of a previous request from the same issuer (if the movement is not large) • the result of another request with • the same issuer location • the same surrounding users' locations
Case-based module • Run the algorithm to generate the AS • Look for a similar case in the database; if found, return its result • If not found, check the reciprocity property directly • Change the k parameter if necessary • Save the result to the database • Send the result to the next step (see the sketch below)
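A hedged sketch of this case-based flow, reusing satisfies_reciprocity from the previous sketch. The similarity test here is deliberately crude (an identical user snapshot and the same k), whereas the module described above would also accept small movements of the issuer and of the surrounding users; all names are illustrative.

```python
def evaluate_request(issuer, k, users, cloak, case_db, max_k=16):
    """Case-based evaluation sketch:
    1. look for a previously evaluated, sufficiently similar case;
    2. otherwise run the direct reciprocity check, raising k until it passes;
    3. store the verdict so later similar requests can reuse it."""
    # Crude similarity key: identical user snapshot and same k.
    key = (frozenset(users.items()), k)
    if key in case_db:
        return case_db[key]
    while k <= max_k:
        if satisfies_reciprocity(cloak, issuer, k):   # from the earlier sketch
            result = (True, k, cloak(issuer, k))
            case_db[key] = result
            return result
        k += 1                                        # adjust the parameter and retry
    result = (False, k, frozenset())
    case_db[key] = result
    return result

# Usage (assumes users and bucket_cloak from the earlier sketch are in scope):
case_db = {}
print(evaluate_request("u3", 3, users, lambda u, kk: bucket_cloak(users, u, kk), case_db))
```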
Experimental results: figures reproduced from [9], Spatial Generalization Algorithms for LBS Privacy Preservation
Chosen algorithms • Selection criteria • Architecture: centralized • Efficiency • Approach • Security • Candidates • Interval cloaking: predefined regions, def-unaware • nnASR: dynamic regions, def-unaware • Grid: dynamic regions, def-aware (see the interval cloaking sketch below)
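For illustration, here is a simplified sketch of the Interval Cloaking idea (recursive quartering of a predefined region hierarchy); the unit square, the stopping rule, and the depth guard are assumptions for this sketch, not the original algorithm's details. As noted above, this plain form is only def-unaware safe: an attacker who knows the function can mount the uniform attack.

```python
def interval_cloak(users, issuer, k, region=(0.0, 0.0, 1.0, 1.0), depth=0):
    """Simplified Interval Cloaking sketch: keep descending into the predefined
    quadrant that contains the issuer; stop and return the current region as soon
    as the next quadrant would contain fewer than k users."""
    x1, y1, x2, y2 = region
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    ix, iy = users[issuer]
    # quadrant of the predefined grid that contains the issuer
    quad = (x1 if ix < mx else mx, y1 if iy < my else my,
            mx if ix < mx else x2, my if iy < my else y2)
    inside = [u for u, (x, y) in users.items()
              if quad[0] <= x <= quad[2] and quad[1] <= y <= quad[3]]  # boundary handling simplified
    if len(inside) < k or depth > 20:   # depth guard in case of co-located users
        return region                   # parent region still covers at least k users
    return interval_cloak(users, issuer, k, quad, depth + 1)

demo = {"A": (0.1, 0.2), "B": (0.15, 0.3), "C": (0.2, 0.1), "D": (0.8, 0.9), "E": (0.7, 0.6)}
print(interval_cloak(demo, "A", k=3))   # -> (0.0, 0.0, 0.5, 0.5)
```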
Outline • Introduction • Attack Model Overview • Defense Model Classification • Snapshot, Single-Issuer and Def-Unaware Attacks • Snapshot, Single-Issuer and Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks • Evaluation Module • Conclusion
Memorization Property • Definition • Single-Issuer Historical Attacks • Query Tracking Attack • Maximum Movement Boundary Attack • Multiple-Issuer Historical Attacks • The Notion of Historical k-Anonymity
Memorization Property: Definition • k-anonymity property: the spatial cloaking algorithm generates a cloaked area that covers k different users, including the real issuer • (figure: issuer A among users A–E; A sends request r to the privacy middleware, which forwards the generalized request r' to the service provider; the cloaked area contains k users)
Memorization Property: Definition • The k users in the cloaked area can easily move to different places over time • An attacker with knowledge of the users' exact locations then has a chance to infer the real issuer from the anonymity set → RISK!
Memorization Property: Definition • Memorization property [5]: the spatial cloaking algorithm memorizes the movement history of each user and utilizes this information when building the cloaked area • (figure: the spatial cloaking algorithm processor takes the users' movement patterns as input and produces the cloaked region)
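A minimal sketch of the memorization idea (not the algorithm of [5]; the class and method names are invented for illustration): the anonymizer remembers which users accompanied each issuer in earlier anonymity sets and keeps building new cloaked regions over those same users, so the set of candidate issuers does not shrink as users move.

```python
class MemorizingCloaker:
    """Illustrative sketch of the memorization property: for each issuer, remember
    which users appeared in her previous anonymity sets and keep new cloaked regions
    anchored to (at least k of) those same users."""

    def __init__(self, k):
        self.k = k
        self.history = {}   # issuer -> users common to all of her past anonymity sets

    def cloak(self, issuer, users):
        # Candidate companions: users present now AND in all previous anonymity sets
        # of this issuer (on the first request, every current user qualifies).
        companions = self.history.get(issuer, set(users)) & set(users)
        if len(companions) < self.k:
            return None     # cannot preserve the historical anonymity set; suppress the request
        ix, iy = users[issuer]
        chosen = sorted(companions,
                        key=lambda u: (users[u][0] - ix) ** 2 + (users[u][1] - iy) ** 2)[:self.k]
        self.history[issuer] = set(chosen)        # memorize: future regions reuse these users
        xs = [users[u][0] for u in chosen]
        ys = [users[u][1] for u in chosen]
        return (min(xs), min(ys), max(xs), max(ys))

anonymizer = MemorizingCloaker(k=3)
snapshot_t1 = {"A": (1, 1), "B": (2, 1), "C": (1, 2), "D": (8, 8), "E": (9, 7)}
snapshot_t2 = {"A": (5, 5), "B": (5, 6), "C": (6, 5), "D": (1, 1), "E": (2, 1)}
print(anonymizer.cloak("A", snapshot_t1))   # region built over {A, B, C}
print(anonymizer.cloak("A", snapshot_t2))   # still built over {A, B, C}, even though D, E moved closer
```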