Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Monty Maximilian Zühlke
  • Daniel Kudenko

Metrics

  • Citations
    • Citation Indexes: 3
  • Captures
    • Readers: 18

Details

Original language: English
Article number: 142
Journal: ACM computing surveys
Volume: 57
Issue number: 6
Publication status: Published - 10 Feb 2025

Abstract

We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).
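
As a concrete illustration of the central quantity (not taken from the survey itself): for a feed-forward network built from affine layers and 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, and an L-Lipschitz classifier whose top two logits differ by a margin m certifies that no ℓ2 perturbation smaller than m / (sqrt(2) · L) can change the predicted class. The Python sketch below uses a hypothetical toy architecture with random placeholder weights.

# Illustrative sketch (not from the survey): a naive upper bound on a ReLU
# network's Lipschitz constant and the l2 robustness radius it certifies.
# The architecture and weights are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer network: x -> W3 relu(W2 relu(W1 x)); input dim 32, 10 classes.
weights = [rng.normal(size=(64, 32)) / 8,
           rng.normal(size=(64, 64)) / 8,
           rng.normal(size=(10, 64)) / 8]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)   # ReLU is 1-Lipschitz
    return weights[-1] @ x

# The product of spectral norms (largest singular values) upper-bounds the
# network's Lipschitz constant w.r.t. the l2 norm; it is generally loose.
lip_bound = float(np.prod([np.linalg.norm(W, 2) for W in weights]))

# Margin-based certificate: if the gap between the two largest logits is m,
# no perturbation with ||delta||_2 < m / (sqrt(2) * lip_bound) flips the argmax.
x = rng.normal(size=32)
logits = forward(x)
top2 = np.sort(logits)[-2:]          # two largest logits, ascending
margin = float(top2[1] - top2[0])
radius = margin / (np.sqrt(2) * lip_bound)
print(f"Lipschitz upper bound: {lip_bound:.2f}, certified l2 radius: {radius:.3e}")

Because this product bound is typically very loose for deep networks, the certified radius it yields is tiny; tightening such bounds is precisely the aim of the estimation algorithms the abstract refers to.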

Keywords

    adversarial examples, Adversarial robustness, deep neural networks, Lipschitz constant

Cite this

Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey. / Zühlke, Monty Maximilian; Kudenko, Daniel.
In: ACM computing surveys, Vol. 57, No. 6, 142, 10.02.2025.

BibTeX
@article{1be2740ff4d4419789acc55273f8b3d5,
title = "Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus: A Survey",
abstract = "We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network{\textquoteright}s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model{\textquoteright}s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).",
keywords = "adversarial examples, Adversarial robustness, deep neural networks, Lipschitz constant",
author = "Z{\"u}hlke, {Monty Maximilian} and Daniel Kudenko",
note = "Publisher Copyright: {\textcopyright} 2025 Copyright held by the owner/author(s)",
year = "2025",
month = feb,
day = "10",
doi = "10.1145/3648351",
language = "English",
volume = "57",
journal = "ACM computing surveys",
issn = "0360-0300",
publisher = "Association for Computing Machinery (ACM)",
number = "6",

}

RIS

TY - JOUR

T1 - Adversarial Robustness of Neural Networks from the Perspective of Lipschitz Calculus

T2 - A Survey

AU - Zühlke, Monty Maximilian

AU - Kudenko, Daniel

N1 - Publisher Copyright: © 2025 Copyright held by the owner/author(s)

PY - 2025/2/10

Y1 - 2025/2/10

N2 - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).

AB - We survey the adversarial robustness of neural networks from the perspective of Lipschitz calculus in a unifying fashion by expressing models, attacks and safety guarantees—that is, a notion of measurable trustworthiness—in a mathematical language. After an intuitive motivation, we discuss algorithms to estimate a network’s Lipschitz constant, Lipschitz regularisation techniques, robustness guarantees, and the connection between a model’s Lipschitz constant and its generalisation capabilities. Afterwards, we present a new vantage point regarding minimal Lipschitz extensions, corroborate its value empirically and discuss possible research directions. Finally, we add a toolbox containing mathematical prerequisites for navigating the field (Appendix).

KW - adversarial examples

KW - Adversarial robustness

KW - deep neural networks

KW - Lipschitz constant

U2 - 10.1145/3648351

DO - 10.1145/3648351

M3 - Article

AN - SCOPUS:85219749041

VL - 57

JO - ACM computing surveys

JF - ACM computing surveys

SN - 0360-0300

IS - 6

M1 - 142

ER -