Details
| Original language | English |
| --- | --- |
| Article number | 142 |
| Journal | ACM computing surveys |
| Volume | 57 |
| Issue number | 6 |
| Publication status | Published - 10 Feb 2025 |
Abstract
We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
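The two central quantities the survey revolves around, a network's Lipschitz constant and the robustness guarantee it yields, can be illustrated with a short sketch. The snippet below (illustrative only; the weights and network are toy examples, not from the paper) computes the standard naive upper bound on the ℓ2 Lipschitz constant of a ReLU MLP as the product of per-layer spectral norms, and the resulting certified radius margin(x) / (2L): if every logit is L-Lipschitz, any difference of two logits is 2L-Lipschitz, so the predicted class cannot change within that radius.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(3, 16))

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Naive l2 Lipschitz upper bound: product of layer spectral norms.
# ReLU is 1-Lipschitz, so it does not enlarge the bound.
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# Certified l2 radius at a point x: the gap between the top two
# logits, divided by 2L, is a ball on which the prediction is stable.
x = rng.normal(size=8)
logits = f(x)
top2 = np.sort(logits)[-2:]
margin = top2[1] - top2[0]
radius = margin / (2.0 * L)
```

This product bound is cheap but typically loose; tighter estimation methods are among the algorithms the survey covers.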
Keywords
- adversarial examples, Adversarial robustness, deep neural networks, Lipschitz constant
ASJC Scopus subject areas
- General Mathematics
- Theoretical Computer Science
- General Computer Science
Cite this
Zühlke, M. M., & Kudenko, D. (2025). Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey. In: ACM computing surveys, Vol. 57, No. 6, 142, 10.02.2025. https://doi.org/10.1145/3648351
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus
T2 - A Survey
AU - Zühlke, Monty Maximilian
AU - Kudenko, Daniel
N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s)
PY - 2025/2/10
Y1 - 2025/2/10
N2 - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
AB - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
KW - adversarial examples
KW - Adversarial robustness
KW - deep neural networks
KW - Lipschitz constant
U2 - 10.1145/3648351
DO - 10.1145/3648351
M3 - Article
AN - SCOPUS:85219749041
VL - 57
JO - ACM computing surveys
JF - ACM computing surveys
SN - 0360-0300
IS - 6
M1 - 142
ER -