TR2022-141
Improving Adversarial Robustness by Learning Shared Information
"Improving Adversarial Robustness by Learning Shared Information", Pattern Recognition, DOI: 10.1016/j.patcog.2022.109054, Vol. 134, pp. 109054, November 2022.
@article{Yu2022nov,
  author  = {Yu, Xi and Smedemark-Margulies, Niklas and Aeron, Shuchin and Koike-Akino, Toshiaki and Moulin, Pierre and Brand, Matthew and Parsons, Kieran and Wang, Ye},
  title   = {Improving Adversarial Robustness by Learning Shared Information},
  journal = {Pattern Recognition},
  year    = 2022,
  volume  = 134,
  pages   = 109054,
  month   = nov,
  doi     = {10.1016/j.patcog.2022.109054},
  issn    = {0031-3203},
  url     = {https://www.merl.com/publications/TR2022-141}
}
Research Areas:
Artificial Intelligence, Machine Learning, Signal Processing
Abstract:
We consider the problem of improving the adversarial robustness of neural networks while retaining natural accuracy. Motivated by the multi-view information bottleneck formalism, we seek to learn a representation that captures the shared information between clean samples and their corresponding adversarial samples while discarding these samples' view-specific information. We show that this approach leads to a novel multi-objective loss function, and we provide mathematical motivation for its components toward improving the robust vs. natural accuracy tradeoff. We demonstrate an improved tradeoff compared to current state-of-the-art methods through extensive evaluation on various benchmark image datasets and architectures. Ablation studies indicate that learning shared representations is key to improving performance.
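To make the multi-objective idea concrete, the following is a minimal sketch (not the paper's actual loss) of a generic objective in this spirit: a natural cross-entropy term, a robust cross-entropy term on the adversarial view, and an alignment penalty that pushes the clean and adversarial embeddings together, standing in for "discarding view-specific information". All function names, weights, and the choice of a squared-distance alignment term are illustrative assumptions, not the method from the paper.

```python
# Hedged sketch of a shared-representation multi-objective loss.
# The specific terms and weights below are illustrative assumptions,
# not the loss proposed in the paper.
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of a single example from unnormalized logits."""
    shifted = logits - logits.max()                      # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())  # log-softmax
    return -log_probs[label]

def shared_info_loss(z_clean, z_adv, logits_clean, logits_adv, label,
                     lam_rob=1.0, lam_align=0.5):
    """Toy multi-objective loss: natural CE + robust CE + alignment penalty.

    The squared distance between the clean and adversarial embeddings is
    a simple stand-in for penalizing view-specific information.
    """
    ce_nat = cross_entropy(logits_clean, label)
    ce_rob = cross_entropy(logits_adv, label)
    align = np.sum((z_clean - z_adv) ** 2)
    return ce_nat + lam_rob * ce_rob + lam_align * align

# Tiny example: identical embeddings/logits, so the alignment term is zero
# and the loss reduces to twice the natural cross-entropy.
z = np.array([0.2, -0.1, 0.4])
logits = np.array([2.0, 0.5, -1.0])
loss = shared_info_loss(z, z, logits, logits, label=0)
```

In practice such an objective would be minimized jointly over an encoder producing `z` and a classifier producing the logits, with the adversarial view generated by an attack such as PGD; those pieces are omitted here to keep the sketch self-contained.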