TR2024-141
Analyzing Inference Privacy Risks Through Gradients In Machine Learning
- "Analyzing Inference Privacy Risks Through Gradients In Machine Learning", ACM Conference on Computer and Communications Security (CCS), DOI: 10.1145/3658644.3690304, October 2024, pp. 3466-3480.
@inproceedings{Li2024oct,
  author = {Li, Zhuohang and Lowy, Andrew and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Malin, Bradley and Wang, Ye},
  title = {Analyzing Inference Privacy Risks Through Gradients In Machine Learning},
  booktitle = {Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security},
  year = 2024,
  pages = {3466--3480},
  month = oct,
  publisher = {Association for Computing Machinery},
  doi = {10.1145/3658644.3690304},
  isbn = {9798400706363},
  url = {https://www.merl.com/publications/TR2024-141}
}
Abstract:
In distributed learning settings, models are iteratively updated with shared gradients computed from potentially sensitive user data. While previous work has studied various privacy risks of sharing gradients, our paper aims to provide a systematic approach to analyze private information leakage from gradients. We present a unified game-based framework that encompasses a broad range of attacks including attribute, property, distributional, and user disclosures. We investigate how different uncertainties of the adversary affect their inferential power via extensive experiments on five datasets across various data modalities. Our results demonstrate the inefficacy of solely relying on data aggregation to achieve privacy against inference attacks in distributed learning. We further evaluate five types of defenses, namely, gradient pruning, signed gradient descent, adversarial perturbations, variational information bottleneck, and differential privacy, under both static and adaptive adversary settings. We provide an information-theoretic view for analyzing the effectiveness of these defenses against inference from gradients. Finally, we introduce a method for auditing attribute inference privacy, improving the empirical estimation of worst-case privacy through crafting adversarial canary records.
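To make the defense taxonomy in the abstract concrete, the following minimal Python/NumPy sketch (not the paper's implementation; the function names, keep ratio, clipping norm, and noise multiplier are illustrative assumptions) shows how three of the listed defenses, gradient pruning, signed gradient descent, and differential-privacy-style clipping with Gaussian noise, could transform a client's gradient before it is shared.

# Illustrative sketch of gradient-sharing defenses; parameter values are placeholders.
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Gradient pruning: keep only the largest-magnitude entries, zero the rest."""
    k = max(1, int(keep_ratio * grad.size))
    threshold = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.where(np.abs(grad) >= threshold, grad, 0.0)

def sign_gradient(grad):
    """Signed gradient descent: share only the sign of each gradient entry."""
    return np.sign(grad)

def dp_gaussian_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """DP-SGD-style defense: clip the gradient's L2 norm, then add Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

if __name__ == "__main__":
    g = np.random.default_rng(0).normal(size=100)   # stand-in for a model gradient
    print("pruned nonzeros:", np.count_nonzero(prune_gradient(g)))
    print("sign values:", np.unique(sign_gradient(g)))
    print("dp-noised norm:", np.linalg.norm(dp_gaussian_gradient(g)))

In each case the shared quantity is a lossy or randomized function of the true gradient, which is consistent with the information-theoretic view of defense effectiveness described above.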
Related Publication
@article{Li2024aug,
  author = {Li, Zhuohang and Lowy, Andrew and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Malin, Bradley and Wang, Ye},
  title = {Analyzing Inference Privacy Risks Through Gradients in Machine Learning},
  journal = {arXiv},
  year = 2024,
  month = aug,
  url = {https://arxiv.org/abs/2408.16913}
}