Details
Original language | English |
---|---|
Article number | 142 |
Journal | ACM computing surveys |
Volume | 57 |
Issue number | 6 |
Publication status | Published - 10 Feb 2025 |
Abstract
We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
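The abstract mentions algorithms that estimate a network's Lipschitz constant. A standard, simple approach from this literature is to upper-bound the constant of a feedforward network by the product of its layers' spectral norms, since 1-Lipschitz activations such as ReLU do not increase the bound. The sketch below illustrates this with hypothetical random weights (`W1`, `W2` are illustrative, not from the paper); the resulting bound is known to be loose in general.

```python
import numpy as np

# Hypothetical weights of a 2-layer ReLU network f(x) = W2 @ relu(W1 @ x).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def spectral_norm(W):
    """Largest singular value = Lipschitz constant of x -> W @ x (in the l2 norm)."""
    return np.linalg.svd(W, compute_uv=False)[0]

# ReLU is 1-Lipschitz, so the Lipschitz constant of the composition is
# bounded above by the product of the per-layer spectral norms.
upper_bound = spectral_norm(W1) * spectral_norm(W2)
print(upper_bound)
```

By submultiplicativity of the spectral norm, this product always dominates the spectral norm of the composed linear map `W2 @ W1`, which is why it is a valid (if conservative) certificate.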
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
In: ACM computing surveys, Vol. 57, No. 6, 142, 10.02.2025.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus
T2 - A Survey
AU - Zühlke, Monty Maximilian
AU - Kudenko, Daniel
N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s)
PY - 2025/2/10
Y1 - 2025/2/10
N2 - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
AB - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
KW - adversarial examples
KW - Adversarial robustness
KW - deep neural networks
KW - Lipschitz constant
U2 - 10.1145/3648351
DO - 10.1145/3648351
M3 - Article
AN - SCOPUS:85219749041
VL - 57
JO - ACM computing surveys
JF - ACM computing surveys
SN - 0360-0300
IS - 6
M1 - 142
ER -