TR2022-081

Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning


    •  Liu, Bryan, Koike-Akino, Toshiaki, Wang, Ye, Kim, Kyeong Jin, Brand, Matthew, Aeron, Shuchin, Parsons, Kieran, "Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning", Tech. Rep. TR2022-081, Mitsubishi Electric Research Laboratories, Cambridge, MA, August 2022.
      @techreport{MERL_TR2022-081,
        author = {Liu, Bryan and Koike-Akino, Toshiaki and Wang, Ye and Kim, Kyeong Jin and Brand, Matthew and Aeron, Shuchin and Parsons, Kieran},
        title = {Data Privacy and Protection on Deep Leakage from Gradients by Layer-Wise Pruning},
        institution = {MERL - Mitsubishi Electric Research Laboratories},
        address = {Cambridge, MA 02139},
        number = {TR2022-081},
        month = aug,
        year = 2022,
        url = {https://www.merl.com/publications/TR2022-081/}
      }
  • Research Areas: Artificial Intelligence, Machine Learning

Abstract:

In this paper, we study a data privacy and protection problem in a federated learning system for image classification. We assume that an attacker has full knowledge of the gradients shared during each model update. We propose a layer-wise pruning defense to prevent data leakage to the attacker. We also propose a sequential update attack, which accumulates information across training epochs. Simulation results show that the sequential update attack gradually improves the attacker's image reconstructions. Moreover, the layer-wise pruning defense is shown to be more efficient than classical element-wise, threshold-based pruning of the shared gradients.
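
To make the comparison of pruning strategies concrete, the following is a minimal PyTorch sketch of element-wise threshold pruning versus a layer-wise pruning of a client's shared gradients. It is illustrative only: the layer selection rule (zeroing the layers with the smallest gradient norm), the threshold value, and the toy model are assumptions, since the abstract does not specify the report's actual criteria.

    import torch

    def elementwise_prune(grads, threshold=1e-3):
        # Classical element-wise pruning: zero gradient entries whose magnitude
        # falls below a fixed threshold.
        return [torch.where(g.abs() < threshold, torch.zeros_like(g), g) for g in grads]

    def layerwise_prune(grads, num_layers_to_prune=1):
        # Illustrative layer-wise pruning: zero the gradients of entire layers.
        # Here the layers with the smallest gradient norm are dropped; the
        # selection rule used in the report may differ.
        norms = torch.stack([g.norm() for g in grads])
        pruned = set(torch.argsort(norms)[:num_layers_to_prune].tolist())
        return [torch.zeros_like(g) if i in pruned else g for i, g in enumerate(grads)]

    # Toy client update: compute gradients, prune them layer-wise, then share.
    model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))
    x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    shared = layerwise_prune(list(grads), num_layers_to_prune=1)  # gradients sent to the server

In this sketch, element-wise pruning keeps the shape of every layer's gradient but zeroes small entries, whereas layer-wise pruning removes whole layers' gradients from the update shared with the server.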