
Releasing Graph Neural Networks with Differential Privacy Guarantees

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Iyiola E. Olatunji
  • Thorben Funke
  • Megha Khosla

External organisations

  • Delft University of Technology

Details

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
Early online date: 30 Nov 2023
Publication status: Published electronically (E-pub ahead of print) - 30 Nov 2023

Abstract

With the increasing popularity of graph neural networks (GNNs) in several sensitive applications such as healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even if only black-box access to the trained model is granted. We propose PrivGnn, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming access to a public unlabeled graph, PrivGnn provides a framework to release GNN models trained explicitly on public data along with knowledge obtained from the private data in a privacy-preserving manner. PrivGnn combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the Rényi differential privacy framework. In addition, we show the solid experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn.
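To make the mechanism sketched in the abstract concrete, the snippet below shows one plausible way its ingredients (random subsampling of the private graph, noisy labeling of public query nodes, and knowledge distillation into a student trained only on public data) could fit together. It is a hypothetical illustration, not the authors' released code (see the linked repository for that); the callable train_teacher and all helper names are placeholders. For reference, a randomized mechanism M satisfies (alpha, epsilon)-Rényi differential privacy if the Rényi divergence of order alpha between M(D) and M(D') is at most epsilon for every pair of adjacent datasets D and D'.

from typing import Callable, Dict, Sequence

import numpy as np


def noisy_label(scores: np.ndarray, beta: float, rng: np.random.Generator) -> int:
    # "Noisy labeling": perturb the teacher's per-class scores with Laplace
    # noise of scale 1/beta and return the arg-max class.
    noisy = scores + rng.laplace(scale=1.0 / beta, size=scores.shape)
    return int(np.argmax(noisy))


def release_noisy_labels(
    num_private_nodes: int,
    query_nodes: Sequence[int],
    train_teacher: Callable[[np.ndarray], Callable[[int], np.ndarray]],
    subsample_ratio: float = 0.1,
    beta: float = 1.0,
    seed: int = 0,
) -> Dict[int, int]:
    # For each public query node: draw a fresh Bernoulli subsample of the
    # private node set ("random subsampling"), train a teacher GNN on it via
    # the caller-supplied placeholder train_teacher (boolean node mask ->
    # per-node scoring function), and emit a Laplace-noised label.
    rng = np.random.default_rng(seed)
    labels: Dict[int, int] = {}
    for v in query_nodes:
        mask = rng.random(num_private_nodes) < subsample_ratio
        score_fn = train_teacher(mask)
        labels[v] = noisy_label(score_fn(v), beta, rng)
    # A student GNN would then be trained on the public graph using only
    # these noisy labels (knowledge distillation), so the released model
    # never touches the private data directly.
    return labels

Any concrete GNN library could supply train_teacher; for a smoke test, a dummy scorer returning random class scores over the subsample suffices.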

Cite this

Releasing Graph Neural Networks with Differential Privacy Guarantees. / Olatunji, Iyiola E.; Funke, Thorben; Khosla, Megha.
In: Transactions on Machine Learning Research, Vol. 2023, 30.11.2023.


Olatunji, IE, Funke, T & Khosla, M 2023, 'Releasing Graph Neural Networks with Differential Privacy Guarantees', Transactions on Machine Learning Research, vol. 2023. https://doi.org/10.48550/arXiv.2109.08907
Olatunji, I. E., Funke, T., & Khosla, M. (2023). Releasing Graph Neural Networks with Differential Privacy Guarantees. Transactions on Machine Learning Research, 2023. Advance online publication. https://doi.org/10.48550/arXiv.2109.08907
Olatunji IE, Funke T, Khosla M. Releasing Graph Neural Networks with Differential Privacy Guarantees. Transactions on Machine Learning Research. 2023 Nov 30;2023. Epub 2023 Nov 30. doi: 10.48550/arXiv.2109.08907
Olatunji, Iyiola E. ; Funke, Thorben ; Khosla, Megha. / Releasing Graph Neural Networks with Differential Privacy Guarantees. In: Transactions on Machine Learning Research. 2023 ; Vol. 2023.
BibTeX
@article{b113973d992f4077a95858bd45f9e65a,
title = "Releasing Graph Neural Networks with Differential Privacy Guarantees",
abstract = "With the increasing popularity of graph neural networks (GNNs) in several sensitive applications like healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs. More notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even if only black-box access to the trained model is granted. We propose PrivGnn, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming an access to a public unlabeled graph, PrivGnn provides a framework to release GNN models trained explicitly on public data along with knowledge obtained from the private data in a privacy preserving manner. PrivGnn combines the knowledge-distillation framework with the two noise mechanisms, random subsampling, and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the R{\`e}nyi differential privacy framework. Besides, we show the solid experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn.",
author = "Olatunji, {Iyiola E.} and Thorben Funke and Megha Khosla",
note = "Publisher Copyright: {\textcopyright} 2023, Transactions on Machine Learning Research. All rights reserved.",
year = "2023",
month = nov,
day = "30",
doi = "10.48550/arXiv.2109.08907",
language = "English",
volume = "2023",

}

RIS

TY - JOUR

T1 - Releasing Graph Neural Networks with Differential Privacy Guarantees

AU - Olatunji, Iyiola E.

AU - Funke, Thorben

AU - Khosla, Megha

N1 - Publisher Copyright: © 2023, Transactions on Machine Learning Research. All rights reserved.

PY - 2023/11/30

Y1 - 2023/11/30

N2 - With the increasing popularity of graph neural networks (GNNs) in several sensitive applications such as healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even if only black-box access to the trained model is granted. We propose PrivGnn, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming access to a public unlabeled graph, PrivGnn provides a framework to release GNN models trained explicitly on public data along with knowledge obtained from the private data in a privacy-preserving manner. PrivGnn combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the Rényi differential privacy framework. In addition, we show the solid experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn.

AB - With the increasing popularity of graph neural networks (GNNs) in several sensitive applications such as healthcare and medicine, concerns have been raised over the privacy aspects of trained GNNs. Notably, GNNs are vulnerable to privacy attacks, such as membership inference attacks, even if only black-box access to the trained model is granted. We propose PrivGnn, a privacy-preserving framework for releasing GNN models in a centralized setting. Assuming access to a public unlabeled graph, PrivGnn provides a framework to release GNN models trained explicitly on public data along with knowledge obtained from the private data in a privacy-preserving manner. PrivGnn combines the knowledge-distillation framework with two noise mechanisms, random subsampling and noisy labeling, to ensure rigorous privacy guarantees. We theoretically analyze our approach in the Rényi differential privacy framework. In addition, we show the solid experimental performance of our method compared to several baselines adapted for graph-structured data. Our code is available at https://github.com/iyempissy/privGnn.

U2 - 10.48550/arXiv.2109.08907

DO - 10.48550/arXiv.2109.08907

M3 - Article

AN - SCOPUS:86000123949

VL - 2023

JO - Transactions on Machine Learning Research

JF - Transactions on Machine Learning Research

SN - 2835-8856

ER -