1. Publishing Set-Valued Data via Differential Privacy
Rui Chen, Concordia University
Noman Mohammed, Concordia University
Benjamin C. M. Fung, Concordia University
Bipin C. Desai, Concordia University
Li Xiong, Emory University
2. Outline Introduction
Preliminaries
Sanitization algorithm
Experimental results
Conclusions
3. Introduction The problem: non-interactive set-valued data publication under differential privacy
4. Introduction Set-valued data refers to data in which each record owner is associated with a set of items drawn from an item universe.
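As a hypothetical illustration, a market-basket dataset is set-valued: each record owner is associated with the set of items that person bought (the owners and items below are invented):

  Record owner   Items
  t1             {apple, milk}
  t2             {milk, bread, beer}
  t3             {apple}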
5. Introduction Existing works [1, 2, 3, 4, 5, 6, 7] on publishing set-valued data are based on partition-based privacy models [8].
They provide insufficient privacy protection:
Composition attack [8]
deFinetti attack [9]
Foreground knowledge attack [10]
They are vulnerable to background knowledge.
6. Introduction Differential privacy is independent of an adversary's background knowledge and computational power (with exceptions [11]).
The outcome of any analysis should not overly depend on any single data record.
Existing differentially private data publishing approaches are inadequate for our problem in both utility and scalability.
7. Introduction Problems of data-independent publishing approaches (e.g., [15]): they must take the entire output domain into account, which is exponential in |I| for set-valued data, so most noise is added to itemsets that never occur in the data, hurting both utility and scalability.
8. Outline Introduction
Preliminaries
Sanitization algorithm
Experimental results
Conclusions
9. Preliminaries Context-free taxonomy tree
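A hypothetical context-free taxonomy tree over a four-item universe {i1, i2, i3, i4}: internal nodes are arbitrary groupings with no semantic meaning (the grouping below is invented for illustration):

          I{1,2,3,4}
          /        \
      I{1,2}      I{3,4}
      /    \      /    \
     i1    i2   i3     i4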
10. Preliminaries Differential privacy [12]
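For reference, the standard definition from [12]: a randomized algorithm A is ε-differentially private if for any two databases D and D' differing in at most one record, and any set of outputs S,

```latex
\Pr[\mathcal{A}(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[\mathcal{A}(D') \in S]
```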
11. Preliminaries Laplace mechanism [12]
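The Laplace mechanism answers a numeric query f by adding noise calibrated to its sensitivity Δf:

```latex
\hat{f}(D) \;=\; f(D) + \mathrm{Lap}\!\left(\frac{\Delta f}{\epsilon}\right),
\qquad
\Delta f = \max_{D, D'} \lVert f(D) - f(D') \rVert_1
```

A minimal sketch for a counting query, whose sensitivity is 1 (function names are illustrative):

```python
import numpy as np

def noisy_count(true_count, epsilon, rng=np.random.default_rng()):
    # A count changes by at most 1 when one record is added or removed,
    # so Laplace noise of scale 1/epsilon gives epsilon-differential privacy.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)
```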
12. Preliminaries Exponential mechanism [13]
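The exponential mechanism [13] selects a discrete output r with probability proportional to its quality score q(D, r):

```latex
\Pr[\mathcal{E}(D) = r] \;\propto\; \exp\!\left(\frac{\epsilon \, q(D, r)}{2\,\Delta q}\right)
```

where Δq is the sensitivity of the quality function.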
13. Preliminaries Composition properties [14]
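The two composition properties from [14] that the sanitization algorithm relies on:

```latex
\text{Sequential (mechanisms on the same data): } \epsilon_{\text{total}} = \sum_i \epsilon_i
\qquad
\text{Parallel (mechanisms on disjoint data): } \epsilon_{\text{total}} = \max_i \epsilon_i
```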
14. Preliminaries Utility metrics
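One such metric is (α, δ)-usefulness [15] (notation here is assumed): a sanitized database D̂ is (α, δ)-useful for a class of counting queries if

```latex
\Pr\big[\,|Q(\widehat{D}) - Q(D)| \le \alpha\,\big] \;\ge\; 1 - \delta
\quad \text{for every query } Q \text{ in the class.}
```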
15. Outline Introduction
Preliminaries
Sanitization algorithm
Experimental results
Conclusions
16. Sanitization Algorithm Top-down partitioning
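A minimal Python sketch of the top-down idea: starting from the taxonomy root, recursively specialize a node, use noisy counts to decide which branches look non-empty, and release noisy counts at the leaves. All names are illustrative, the per-level budget and threshold are fixed constants here for simplicity, and this is not the paper's exact algorithm, which allocates the budget adaptively:

```python
import numpy as np

rng = np.random.default_rng()

class Node:
    """Illustrative taxonomy node: leaves cover one item, internal nodes group children."""
    def __init__(self, items, children=()):
        self.items = frozenset(items)
        self.children = list(children)

def noisy_count(n_true, eps):
    # Counting queries have sensitivity 1, so Laplace(1/eps) noise suffices.
    return n_true + rng.laplace(scale=1.0 / eps)

def top_down_partition(records, node, eps_level, threshold, out):
    """Recursively specialize `node`, pruning branches whose noisy counts
    fall below `threshold`; release noisy counts only at the leaves."""
    covered = [r for r in records if r & node.items]   # records with an item under node
    count = noisy_count(len(covered), eps_level)
    if count < threshold:
        return                          # looks empty: stop expanding this branch
    if not node.children:
        out[node.items] = count         # leaf partition: release its noisy count
    else:
        for child in node.children:
            top_down_partition(covered, child, eps_level, threshold, out)

# Hypothetical usage over a four-item universe:
tree = Node({1, 2, 3, 4}, [Node({1, 2}, [Node({1}), Node({2})]),
                           Node({3, 4}, [Node({3}), Node({4})])])
data = [frozenset({1, 2}), frozenset({2}), frozenset({3, 4})]
released = {}
top_down_partition(data, tree, 0.5, 2.0, released)
print(released)   # noisy counts for the leaf itemsets judged non-empty
```

Note that in this simplified sketch a record can fall under several siblings; the paper instead assigns each record to exactly one disjoint sub-partition (see the sub-partition generation sketch below), so that parallel composition bounds each level's budget cost.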
17. Sanitization Algorithm Privacy budget allocation
18. Sanitization Algorithm Privacy budget allocation (cont.)
19. Sanitization Algorithm Privacy budget allocation (cont.)
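The slides' formulas are not reproduced here; as a hedged illustration of one scheme in this spirit (the halving split and the uniform per-level division are assumptions, not the paper's exact adaptive scheme): reserve half of the total budget ε for releasing leaf counts and spread the rest over the h internal levels of the taxonomy tree, so that sequential composition bounds the total cost by ε:

```latex
\epsilon_{\text{leaf}} = \frac{\epsilon}{2},
\qquad
\epsilon_{\text{level}} = \frac{\epsilon}{2h}
\quad (h \text{ internal levels}),
\qquad
\epsilon_{\text{leaf}} + h \cdot \epsilon_{\text{level}} = \epsilon .
```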
20. Sanitization Algorithm Sub-partition generation
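A hedged sketch of the disjoint grouping step, reusing the illustrative Node class above: each record joins the sub-partition labeled by the exact subset of the expanded node's children it intersects, so sub-partitions are disjoint. The paper's algorithm additionally avoids enumerating the exponentially many candidate subsets that turn out to be empty; that logic is omitted here:

```python
from collections import defaultdict

def generate_subpartitions(records, children):
    # Label each record by WHICH children it intersects; records with the
    # same label form one sub-partition, and distinct labels are disjoint.
    subparts = defaultdict(list)
    for r in records:
        label = frozenset(i for i, c in enumerate(children) if r & c.items)
        if label:                      # skip records touching none of the children
            subparts[label].append(r)
    return subparts
```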
21. Outline Introduction
Preliminaries
Sanitization algorithm
Experimental results
Conclusions
22. Experiments Two real-life set-valued datasets are used.
23. Experiments Average relative error vs. privacy budget
24. Experiments Utility for frequent itemset mining
25. Experiments Scalability: O(|D| · |I|)
26. Outline Introduction
Preliminaries
Sanitization algorithm
Experimental results
Conclusions
27. Conclusions Differential privacy can be successfully applied to non-interactive set-valued data publishing with guaranteed utility.
Differential privacy can be achieved by data-dependent solutions with improved efficiency and accuracy.
The general idea of data-dependent solutions applies to other types of data, for example, relational data [17] and trajectory data [18].
28. References [1] J. Cao, P. Karras, C. Raissi, and K.-L. Tan. ρ-uncertainty: Inference-proof transaction anonymization. In VLDB, pp. 1033–1044, 2010.
[2] G. Ghinita, Y. Tao, and P. Kalnis. On the anonymization of sparse high-dimensional data. In ICDE, pp. 715–724, 2008.
[3] Y. He and J. F. Naughton. Anonymization of set-valued data via top-down, local generalization. In VLDB, pp. 934–945, 2009.
[4] M. Terrovitis, N. Mamoulis, and P. Kalnis. Privacy-preserving anonymization of set-valued data. In VLDB, pp. 115–125, 2008.
[5] M. Terrovitis, N. Mamoulis, and P. Kalnis. Local and global recoding methods for anonymizing set-valued data. VLDBJ, 20(1):83–106, 2011.
[6] Y. Xu, B. C. M. Fung, K. Wang, A. W. C. Fu, and J. Pei. Publishing sensitive transactions for itemset utility. In ICDM, pp. 1109–1114, 2008.
[7] Y. Xu, K. Wang, A. W. C. Fu, and P. S. Yu. Anonymizing transaction databases for publication. In SIGKDD, pp. 767–775, 2008.
29. References [8] S. R. Ganta, S. P. Kasiviswanathan, and A. Smith. Composition attacks and auxiliary information in data privacy. In SIGKDD, pp. 265–273, 2008.
[9] D. Kifer. Attacks on privacy and deFinetti's theorem. In SIGMOD, pp. 127–138, 2009.
[10] R. C. W. Wong, A. Fu, K. Wang, P. S. Yu, and J. Pei. Can the utility of anonymized data be used for privacy breaches? ACM Transactions on Knowledge Discovery from Data, to appear.
[11] D. Kifer and A. Machanavajjhala. No free lunch in data privacy. In SIGMOD, 2011.
[12] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pp. 265–284, 2006.
[13] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pp. 94–103, 2007.
[14] F. McSherry. Privacy integrated queries: An extensible platform for privacy-preserving data analysis. In SIGMOD, pp. 19–30, 2009.
[15] A. Blum, K. Ligett, and A. Roth. A learning theory approach to non-interactive database privacy. In STOC, pp. 609–618, 2008.
30. References [16] G. Cormode, M. Procopiuc, D. Srivastava, and T. T. L. Tran. Differentially private publication of sparse data. CoRR, 2011.
[17] N. Mohammed, R. Chen, B. C. M. Fung, and P. S. Yu. Differentially private data release for data mining. In SIGKDD, 2011.
[18] R. Chen, B. C. M. Fung, and B. C. Desai. Differentially private trajectory data publication. ICDE, under review, 2012.
31. Thank you!
Q & A
32. Backup Slides
33. Lower Bound Results In the interactive setting, only a limited number of queries can be answered; otherwise, an adversary would be able to reconstruct almost the entire original database precisely.
In the non-interactive setting, one can only guarantee the utility of restricted classes of queries.
36. Threshold Selection We design the threshold as a function of the standard deviation of the noise and the height of a partition's hierarchy cut:
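The slide's exact formula is not reproduced here; a hedged reconstruction from the description (the constant C and the exact form are assumptions): Laplace noise with per-operation budget ε̃ has standard deviation σ = √2/ε̃, and a partition whose hierarchy cut has height h needs up to h further expansions, suggesting

```latex
\theta \;=\; C \cdot h \cdot \sigma \;=\; C \cdot h \cdot \frac{\sqrt{2}}{\tilde{\epsilon}} .
```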
37. Relative Error (α, δ)-usefulness is effective for giving an overall estimate of utility, but fails to produce intuitive experimental results.
We experimentally measure the utility of sanitized data for counting queries by relative error:
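A common form of this metric (the sanity bound s, which prevents division by very small true counts, is stated here as an assumption):

```latex
\mathrm{error}(Q) \;=\; \frac{|Q(\widehat{D}) - Q(D)|}{\max\{\,Q(D),\; s\,\}}
```

where D̂ is the sanitized database and Q(D) the true answer.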
38. Experiments Average relative error vs. taxonomy tree fan-out