
Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Monty Maximilian Zühlke
  • Daniel Kudenko


Details

Original language: English
Article number: 142
Journal: ACM Computing Surveys
Volume: 57
Issue number: 6
Publication status: Published - 10 Feb 2025

Abstract

We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
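One of the basic facts underlying the algorithms the abstract mentions is that the Lipschitz constant of a feedforward network with 1-Lipschitz activations (such as ReLU) is upper-bounded by the product of the spectral norms of its weight matrices. The following sketch illustrates this bound on a toy two-layer network; it is a generic illustration under those assumptions, not code from the surveyed paper:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper bound on the l2 Lipschitz constant of a feedforward network
    with 1-Lipschitz activations: the product of the spectral norms
    (largest singular values) of the weight matrices."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((3, 8))

def net(x):
    # Toy two-layer ReLU network without biases.
    return W2 @ np.maximum(W1 @ x, 0.0)

L = lipschitz_upper_bound([W1, W2])

# The bound holds for any pair of inputs: ||f(x) - f(y)|| <= L * ||x - y||.
x, y = rng.standard_normal(4), rng.standard_normal(4)
ratio = np.linalg.norm(net(x) - net(y)) / np.linalg.norm(x - y)
assert ratio <= L
```

The bound follows by composing per-layer bounds: ReLU is 1-Lipschitz, so each layer contracts distances by at most its spectral norm. It is generally loose, which is why the survey discusses tighter estimation algorithms.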


Cite

Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey. / Zühlke, Monty Maximilian; Kudenko, Daniel.
In: ACM Computing Surveys, Vol. 57, No. 6, 142, 10.02.2025.


Zühlke, Monty Maximilian; Kudenko, Daniel. / Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey. In: ACM Computing Surveys. 2025; Vol. 57, No. 6.
BibTeX
@article{1be2740ff4d4419789acc55273f8b3d5,
title = "Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey",
abstract = "We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network{\textquoteright}s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model{\textquoteright}s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).",
keywords = "adversarial examples, Adversarial robustness, deep neural networks, Lipschitz constant",
author = "Z{\"u}hlke, {Monty Maximilian} and Daniel Kudenko",
note = "Publisher Copyright: {\textcopyright} 2025 Copyright held by the owner/author(s)",
year = "2025",
month = feb,
day = "10",
doi = "10.1145/3648351",
language = "English",
volume = "57",
journal = "ACM computing surveys",
issn = "0360-0300",
publisher = "Association for Computing Machinery (ACM)",
number = "6",
}

RIS

TY - JOUR

T1 - Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus

T2 - A Survey

AU - Zühlke, Monty Maximilian

AU - Kudenko, Daniel

N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s)

PY - 2025/2/10

Y1 - 2025/2/10

N2 - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).

AB - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).

KW - adversarial examples

KW - Adversarial robustness

KW - deep neural networks

KW - Lipschitz constant

U2 - 10.1145/3648351

DO - 10.1145/3648351

M3 - Article

AN - SCOPUS:85219749041

VL - 57

JO - ACM computing surveys

JF - ACM computing surveys

SN - 0360-0300

IS - 6

M1 - 142

ER -