
Evaluating LBS Privacy in Dynamic Context



  1. Evaluating LBS Privacy in Dynamic Context

  2. Outline • Overview of the Attack Model • Classification of Defense Models • Evaluation Module • Conclusion

  3. Outline • Overview of the Attack Model • Classification of Defense Models • Evaluation Module • Conclusion

  4. Overview of the Attack Model [1] • What is a privacy threat? • A threat arises whenever an adversary can associate • the identity of a user with • information that the user considers private

  5. Overview of the Attack Model (1) • What is a privacy attack? • A specific method used by an adversary to obtain the sensitive association • How do we classify privacy attacks? • By the parameters of an adversary model • What are the main components of an adversary model? • The target private information • The ability to obtain the transmitted messages • Background knowledge and inference abilities

  6. How can the adversary model be used? • The target private information • Explicit in the message (i.e., the real id) • Eavesdropping on the channel • Implicit (using a pseudo id) • Inference with external knowledge • E.g., joining a pseudo id with location data ⇒ attacks exploiting quasi-identifiers in requests
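As a toy illustration of the quasi-identifier attack above, the sketch below joins pseudonymous requests with an external directory of user locations. All names, ids, and coordinates are invented for the example.

```python
# Hypothetical sketch of a quasi-identifier attack: joining pseudo ids
# in observed requests with external knowledge of user locations.
# All identifiers and data below are invented for illustration.

# Requests eavesdropped on the channel: (pseudo_id, location)
requests = [("p17", (3, 4)), ("p17", (3, 5)), ("p42", (9, 9))]

# External knowledge: who was at which location at request time
directory = {(3, 4): "alice", (9, 9): "bob"}

def reidentify(requests, directory):
    """Link pseudo ids to real identities via a location join."""
    mapping = {}
    for pid, loc in requests:
        if loc in directory:
            mapping[pid] = directory[loc]
    return mapping

print(reidentify(requests, directory))  # {'p17': 'alice', 'p42': 'bob'}
```

Even though the pseudo id hides the identity, the location acts as a quasi-identifier once it can be joined with external data.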

  7. How can the adversary model be used? • The ability to obtain the transmitted messages • Message • Snapshot • Chain • Issuer • Single • Multiple ⇒ single- versus multiple-issuer attacks

  8. How can the adversary model be used? • The background knowledge and inference abilities • Unavailable • Depends on the sensitive information in the message (implicit or explicit) • Completely available • The privacy violation occurs independently of the service request ⇒ attacks exploiting knowledge of the defense

  9. Outline • Overview of the Attack Model • Classification of Defense Models • Evaluation Module • Conclusion

  10. Classification of Defense Models • Our target • Architecture: centralized • Techniques: anonymity-based and obfuscation • Defense models against • Snapshot, single-issuer, def-unaware attacks • Snapshot, single-issuer, def-aware attacks • Historical, single-issuer attacks • Multiple-issuer attacks

  11. Outline • Overview of the Attack Model • Classification of Defense Models • Snapshot, Single-Issuer, Def-Unaware Attacks • Snapshot, Single-Issuer, Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks • Evaluation Module • Conclusion

  12. Single-issuer, def-unaware attacks • Some assumptions • The attacker can acquire knowledge of the exact location of each user • The attacker knows that the generalized region g(r).Sdata always includes the point r.Sdata • The attacker cannot reason over more than one request ⇒ uniform attack

  13. Single-issuer, def-unaware attacks • A uniform attack: the generalization is not safe if the user requires k = 4 (threshold h = 1/4) but the cloaked region contains only three users (u1, u2, u3).
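The slide's uniform attack can be made concrete with a small sketch: under a uniform prior, each user in the cloaked region is the issuer with probability 1/|region|, and the request is safe only if that probability does not exceed the threshold h. The function names below are illustrative.

```python
def uniform_attack_prob(users_in_region):
    """Under a uniform attack, every user in the generalized region
    is considered the issuer with equal probability."""
    return 1.0 / len(users_in_region)

def is_safe(users_in_region, h):
    """The request is safe iff the adversary's per-user probability
    stays at or below the privacy threshold h."""
    return uniform_attack_prob(users_in_region) <= h

# Slide example: the user requires k = 4 (h = 1/4), but the cloaked
# region covers only three users, so 1/3 > 1/4 and the request leaks.
print(is_safe(["u1", "u2", "u3"], 0.25))        # False
print(is_safe(["u1", "u2", "u3", "u4"], 0.25))  # True
```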

  14. Single-issuer, def-aware attacks • Some assumptions: • The same as the def-unaware case • In addition, the attacker knows the generalization function g ⇒ uniform attack and the outlier problem

  15. Example attack: the outlier problem

  16. Cst+g-unsafe generalization algorithms • The following algorithms: • IntervalCloaking • Casper • Nearest Neighbor Cloak • Why are they not safe?

  17. Cst+g-unsafe generalization algorithms • These algorithms are not safe: • Not every user in the anonymizing set (AS) generates the same AS for a given k • Uniform attack • A property that Cst+g-safe generalization algorithms must satisfy: • The AS contains the issuer U and at least k-1 additional users • Every user in the AS generates the same AS for the given k ⇒ reciprocity property

  18. Cst+g-safe generalization algorithms • hilbASR • dichotomicPoints • Grid ⇒ All of the above algorithms satisfy the reciprocity property, so they remain safe even when the attacker knows the generalization function.

  19. Grid algorithm

  20. Centralized defenses against snapshot, single-issuer, def-aware attacks • To defend against snapshot, single-issuer, def-aware attacks, the generalization must satisfy the reciprocity property • How do we decide whether an algorithm satisfies that property?

  21. Deciding whether an algorithm satisfies reciprocity • For a request r with parameter k • Run the algorithm to get AS • For each id ui in AS, run the algorithm to get ASi • If AS = ASi for every i, then the algorithm satisfies reciprocity for r • Otherwise, it is not safe
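The check on this slide can be sketched directly. `grid_generalize` and `nn_generalize` below are toy stand-ins (a fixed partition versus nearest-neighbour cloaking over users on a line), not the actual algorithms from the cited papers.

```python
def satisfies_reciprocity(generalize, anonymity_set, k):
    """Slide's test: safe only if every user in the anonymizing set
    would generate the same set when issuing a request with the same k."""
    return all(generalize(u, k) == anonymity_set for u in anonymity_set)

users = [0, 1, 2, 3]  # toy user ids doubling as 1-D positions

def grid_generalize(u, k):
    """Fixed partition into blocks of k users (reciprocal)."""
    start = (u // k) * k
    return frozenset(range(start, start + k))

def nn_generalize(u, k):
    """Nearest-neighbour cloaking (in general not reciprocal)."""
    return frozenset(sorted(users, key=lambda v: (abs(v - u), v))[:k])

print(satisfies_reciprocity(grid_generalize, grid_generalize(0, 2), 2))  # True
print(satisfies_reciprocity(nn_generalize, nn_generalize(2, 2), 2))      # False
```

The nearest-neighbour set of user 2 is {1, 2}, but user 1's own nearest-neighbour set is {0, 1}, so the sets differ and a def-aware attacker can exclude user 1 as the issuer.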

  22. Checking reciprocity from previously computed results • After checking reciprocity directly, save the result to a database • For a new request r, find a similar case: • the result of a previous request by the same issuer (if the movement is small), or • the result of another request with • the same issuer location and • the same surrounding users' locations

  23. Case-based module • Run the algorithm to generate the AS. • Look for a similar case in the database; if found, return its result. • If not found, check the reciprocity property directly. • Change the k parameter if necessary. • Update the database with the result. • Send the result to the next step.

  24. Outline • Overview of the Attack Model • Classification of Defense Models • Snapshot, Single-Issuer, Def-Unaware Attacks • Snapshot, Single-Issuer, Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks • Evaluation Module • Conclusion

  25. Memorization Property • Definition • Single-Issuer Historical Attacks • Query Tracking Attack • Maximum Movement Boundary Attack • Multiple-Issuer Historical Attacks • Notion of Historical k-Anonymity

  26. Memorization Property: Definition • k-anonymity property: the spatial cloaking algorithm generates a cloaked area that covers k different users, including the real issuer. (Diagram: issuer A sends request r to the privacy middleware, which forwards the generalized request r' to the service provider; the cloaked area contains the k users A–E.)

  27. Memorization Property: Definition • The k users in the cloaked area can easily move to different places. An attacker with knowledge of the exact locations of users has a chance to infer the real issuer from the anonymity set. RISK!

  28. Memorization Property: Definition • Memorization property [5]: the spatial cloaking algorithm memorizes the movement history of each user and utilizes this information when building the cloaked area. (Diagram: the spatial cloaking algorithm processor builds the cloaked region from users' movement patterns.)

  29. Memorization Property: Definition • Without the memorization property, the issuer may suffer from the following attacks: • Single-issuer historical attacks: the attacker uses the movement history of a single issuer • Query Tracking Attack • Maximum Movement Boundary Attack • Multiple-issuer historical attacks: the attacker uses multiple users' movement histories • Notion of Historical k-Anonymity

  30. Memorization Property: Query Tracking Attack [6] • Case description: • The user's query is issued multiple times, at ti, ti+1, etc. • The attacker knows the exact location of each user. • Attack description: • The attacker reveals the real issuer by intersecting the candidate sets of the query instances: at time ti {A,B,C,D,E}, at time ti+1 {A,B,F,G,H}, at time ti+2 {A,F,G,H,I} ⇒ reveals A
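The intersection step described on this slide is a one-liner in code; the candidate sets below are taken from the slide's example.

```python
# Query tracking attack: intersecting the candidate (anonymity) sets
# observed at consecutive query instances shrinks the issuer's cover.
snapshots = [
    {"A", "B", "C", "D", "E"},  # at time t_i
    {"A", "B", "F", "G", "H"},  # at time t_i+1
    {"A", "F", "G", "H", "I"},  # at time t_i+2
]

def track_issuer(snapshots):
    """Return the users common to every candidate set."""
    candidates = set(snapshots[0])
    for s in snapshots[1:]:
        candidates &= s
    return candidates

print(track_issuer(snapshots))  # {'A'} -- the issuer is revealed
```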

  31. Memorization Property: Query Tracking Attack [6] • Possible instant solutions: • Delay the request until most of the candidate users return to the cloaked area • Build a new cloaked area using the users' location histories • Etc. (Figure: at time ti+k the intersection is still risky, so delay; at time ti+k+m it is safe, so forward the request.)

  32. Memorization Property: Maximum Movement Boundary Attack [6] • Case description: • Consider the movement rate (speed) of users. • The attacker knows the exact location and speed of each user. • Attack description: • The attacker limits the real issuer to the area where Ri+1 overlaps the maximum movement boundary (MMB) of Ri: "I know you are here!"

  33. Memorization Property: Maximum Movement Boundary Attack [6] • The solution must satisfy one of three cases: • The overlapping area satisfies the user's requirements • The MMB of Ri totally covers Ri+1 • Ri totally covers Ri+1 • Possible solutions are patching and delaying

  34. Memorization Property: Maximum Movement Boundary Attack [6] • Patching: combine the current cloaked spatial region with the previous one • Delaying: postpone the update until the MMB covers the current cloaked spatial region
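Patching and delaying can be sketched with axis-aligned rectangles (x_min, y_min, x_max, y_max); the geometry helpers and all parameters below are illustrative, not taken from the cited paper.

```python
def mmb(rect, speed, dt):
    """Maximum movement boundary: grow a region by the distance a
    user could travel in time dt at the given maximum speed."""
    x0, y0, x1, y1 = rect
    d = speed * dt
    return (x0 - d, y0 - d, x1 + d, y1 + d)

def covers(outer, inner):
    """True if rectangle `outer` fully contains rectangle `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def patch(r_prev, r_next):
    """Patching: merge the new cloaked region with the previous one."""
    return (min(r_prev[0], r_next[0]), min(r_prev[1], r_next[1]),
            max(r_prev[2], r_next[2]), max(r_prev[3], r_next[3]))

def delay_until_safe(r_prev, r_next, speed, dt, step=1):
    """Delaying: postpone the update until the MMB of the previous
    region has grown enough to cover the new cloaked region."""
    while not covers(mmb(r_prev, speed, dt), r_next):
        dt += step
    return dt

print(patch((0, 0, 2, 2), (3, 3, 5, 5)))                            # (0, 0, 5, 5)
print(delay_until_safe((0, 0, 2, 2), (3, 3, 5, 5), speed=1, dt=1))  # 3
```

Patching trades a larger (less precise) region for an immediate answer, while delaying trades response time for region precision.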

  35. Memorization Property: Historical k-Anonymity [7] • If the attacker also considers users' frequent movement patterns, he has a better chance of distinguishing the real issuer from the other candidates.

  36. Memorization Property: Historical k-Anonymity Terminology • Quasi-identifier (QID): a set of attributes that can be used to identify an individual. • Location-Based QIDs (LBQIDs): • Spatio-temporal movement patterns consisting of • a set of elements <Area, Timestamp> and • a recurrence formula rec1.G1, …, recn.Gn • Depict frequent user movement patterns • E.g., <Home, 8am>, <Park, 8:30am>, <Work, 9am>, 1.day, 5.week • Personal History of Locations (PHL): • A sequence of elements (x, y, t) indicating the location (x, y) of a user U at time t.

  37. Memorization Property: Historical k-Anonymity Terminology • … • Historical k-anonymity: • A set of requests R of user U satisfies historical k-anonymity if there exist k-1 PHLs P1, …, Pk-1 for k-1 users other than U, such that each Pi is LT-consistent with R.

  38. Memorization Property: Historical k-Anonymity Terminology • Request: • A tuple R = (x, y, t, S), where S is service-specific data. • Element matching: • A user request Ri = (x, y, t, S) matches an element E of an LBQID if (x, y) ∈ E.coord and t ∈ E.timestamp • E.g., R = (park, 8:30am) matches the element <Park, 8:30am> • Request-LBQID matching: • A set of user requests R matches the user's LBQID iff: • each request matches an element E and • all requests satisfy the recurrence formula.

  39. Memorization Property: Historical k-Anonymity Terminology • LT-consistency: a PHL is Location- and Time-consistent with a set of requests R if: • each request ri matches an element in the PHL, or • the request was sent at a time/location that can be interpolated from consecutive elements of the PHL. • When a user U sends a set of requests R, historical k-anonymity is preserved if at least k-1 users other than U have PHLs that are LT-consistent with R.
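A simplified check for the definitions above can be sketched as follows. Note the consistency test here only accepts exact sample matches; the definition also allows a request's time/location to be interpolated from consecutive PHL elements, which this sketch omits. All data values are invented.

```python
def lt_consistent(phl, requests):
    """Simplified LT-consistency: every request's (x, y, t) appears as
    a sample in the PHL (interpolation between samples is omitted)."""
    samples = set(phl)
    return all((x, y, t) in samples for (x, y, t, _s) in requests)

def historical_k_anonymous(requests, other_phls, k):
    """Requests of user U are historically k-anonymous if at least k-1
    other users' PHLs are LT-consistent with them."""
    matches = sum(1 for phl in other_phls if lt_consistent(phl, requests))
    return matches >= k - 1

requests = [(3, 4, 10, "poi query"), (3, 5, 20, "poi query")]
other_phls = [
    [(3, 4, 10), (3, 5, 20)],              # consistent with R
    [(3, 4, 10), (9, 9, 20)],              # diverges at t = 20
    [(3, 4, 10), (3, 5, 20), (0, 0, 30)],  # consistent with R
]
print(historical_k_anonymous(requests, other_phls, 3))  # True  (2 >= k-1)
print(historical_k_anonymous(requests, other_phls, 4))  # False (2 < 3)
```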

  40. Memorization Property: Historical k-Anonymity Algorithm [7] (algorithm figure)

  41. Memorization Property: Historical k-Anonymity Algorithm [7] (algorithm figure, continued)

  42. Memorization Property: Historical k-Anonymity Algorithm [7] • Input: • The spatio-temporal information (x, y, t) of the request R. • The desired level of anonymity (k). • The spatial and temporal constraints. • Output: • The generalized 3D area. • A boolean value b denoting success/failure. • A list N of the k-1 neighbors (after execution of the first-element matching phase)

  43. Memorization Property: Historical k-Anonymity Algorithm • Problems to consider: • The LTS has to generalize each request when it is issued, without knowledge of users' future locations and future requests. • The longer the required PHL traces, the higher the computational cost. • Our approach: • The PHLs of users are predefined (for testing only), not updated in real time. • Only short PHL traces are considered.

  44. Memorization Property: Summary & Work Flow • Memorization is the 2nd property we consider. • Memorization property checking comes after reciprocity property checking. • Memorization property checking covers 3 phases: • Check the Maximum Movement Boundary Attack. • Check the Query Tracking Attack. • Check the Frequent Pattern Attack.

  45. Memorization Property: Summary & Work Flow • Memorization is the 2nd property we consider. • Memorization property checking comes after reciprocity property checking. • Memorization property checking initially covers 3 phases: • P1: Check the Maximum Movement Boundary Attack. • P2: Check the Query Tracking Attack. • P3: Check the Frequent Pattern Attack. • If the request fails in any phase, the algorithm stops and reports the result to the next property check.

  46. Outline • Overview of the Attack Model • Classification of Defense Models • Snapshot, Single-Issuer, Def-Unaware Attacks • Snapshot, Single-Issuer, Def-Aware Attacks • Historical, Single-Issuer Attacks • Multiple-Issuer Attacks • Evaluation Module • Conclusion

  47. Multiple-issuer attacks • Problem: • The attacker can acquire multiple requests from multiple users • The attacker can infer the sensitive association for a user from the sensitive attribute(s) involved in the user's request ⇒ query association

  48. Two methods to prevent query association: • Many-to-one queries: a k-anonymous cloaking region is associated with a single service attribute value ⇒ k potential users may be the owner of the service attribute • Many-to-many queries: a cloaking region is associated with a set of service attribute values

  49. Example

  50. l-diversity [2] • Query l-diversity: ensures that a user cannot be linked to fewer than ℓ distinct service attribute values.
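A minimal sketch of the query l-diversity condition above; the attribute values are invented for illustration.

```python
def query_l_diverse(region_values, l):
    """A cloaked region satisfies query l-diversity if it is published
    with at least l distinct service attribute values, so no user in
    it can be linked to fewer than l values."""
    return len(set(region_values)) >= l

print(query_l_diverse(["bar", "clinic", "bank"], 3))     # True
print(query_l_diverse(["clinic", "clinic", "bank"], 3))  # False
```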
