TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning.

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

Maryam Badar, Sandipan Sikdar, Wolfgang Nejdl, Marco Fisichella

Details

Original language: English
Title of host publication: ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings
Editors: Ulle Endriss, Francisco S. Melo, Kerstin Bach, Alberto Bugarin-Diz, Jose M. Alonso-Moral, Senen Barro, Fredrik Heintz
Pages: 2370-2377
Number of pages: 8
ISBN (electronic): 9781643685489
Publication status: Published - 2024

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 392
ISSN (print): 0922-6389
ISSN (electronic): 1879-8314

Abstract

As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is widely used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment often exhibits class imbalance, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both the statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate our framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to that of state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.
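The abstract brings together three measurable ingredients: balanced accuracy as the performance metric, the statistical parity and equal opportunity fairness notions, and Gaussian differential privacy applied to client updates. The Python sketch below is purely illustrative and is not the authors' implementation; it uses the standard textbook definitions of these quantities, and the names privatize_update, clip_norm, and sigma are hypothetical placeholders rather than values or identifiers from the paper.

import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean of the per-class recalls; unlike plain accuracy, this is
    # not dominated by the majority class under class imbalance.
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def statistical_parity_gap(y_pred, group):
    # |P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)| for a binary group.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # Absolute difference in true-positive rates between the two groups.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

def privatize_update(update, clip_norm=1.0, sigma=0.5, rng=None):
    # Standard Gaussian-mechanism recipe for a client update in FL
    # (placeholder parameters, not values from the paper): clip the
    # update to bound its sensitivity, then add Gaussian noise
    # calibrated to that bound before sending it to the server.
    rng = np.random.default_rng() if rng is None else rng
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip_norm, size=update.shape)

A multi-objective view of the problem would then treat (1 - balanced_accuracy) and one of the fairness gaps as objectives to be traded off jointly, tracking the configurations for which neither objective can be improved without worsening the other (a Pareto front); how TrustFed navigates that front is described in the paper itself.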

Cite

TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning. / Badar, Maryam; Sikdar, Sandipan; Nejdl, Wolfgang et al.
ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. Ed. / Ulle Endriss; Francisco S. Melo; Kerstin Bach; Alberto Bugarin-Diz; Jose M. Alonso-Moral; Senen Barro; Fredrik Heintz. 2024. pp. 2370-2377 (Frontiers in Artificial Intelligence and Applications; Vol. 392).

Badar, M, Sikdar, S, Nejdl, W & Fisichella, M 2024, TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning. in U Endriss, FS Melo, K Bach, A Bugarin-Diz, JM Alonso-Moral, S Barro & F Heintz (eds), ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. Frontiers in Artificial Intelligence and Applications, vol. 392, pp. 2370-2377. https://doi.org/10.3233/FAIA240762
Badar, M., Sikdar, S., Nejdl, W., & Fisichella, M. (2024). TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning. In U. Endriss, F. S. Melo, K. Bach, A. Bugarin-Diz, J. M. Alonso-Moral, S. Barro, & F. Heintz (Eds.), ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings (pp. 2370-2377). (Frontiers in Artificial Intelligence and Applications; Vol. 392). https://doi.org/10.3233/FAIA240762
Badar M, Sikdar S, Nejdl W, Fisichella M. TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning. In Endriss U, Melo FS, Bach K, Bugarin-Diz A, Alonso-Moral JM, Barro S, Heintz F, editors, ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. 2024. p. 2370-2377. (Frontiers in Artificial Intelligence and Applications). doi: 10.3233/FAIA240762
Badar, Maryam ; Sikdar, Sandipan ; Nejdl, Wolfgang et al. / TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning. ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. Ed. / Ulle Endriss ; Francisco S. Melo ; Kerstin Bach ; Alberto Bugarin-Diz ; Jose M. Alonso-Moral ; Senen Barro ; Fredrik Heintz. 2024. pp. 2370-2377 (Frontiers in Artificial Intelligence and Applications).
Download (BibTeX)
@inproceedings{bcf38d2c3e3a41cd9e5193d329c62c5f,
title = "TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning.",
abstract = "As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is widely used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment often exhibits class imbalance, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both the statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate our framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to that of state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.",
author = "Maryam Badar and Sandipan Sikdar and Wolfgang Nejdl and Marco Fisichella",
note = "DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions. ",
year = "2024",
doi = "10.3233/FAIA240762",
language = "English",
series = "Frontiers in Artificial Intelligence and Applications",
pages = "2370--2377",
editor = "Ulle Endriss and Melo, {Francisco S.} and Kerstin Bach and Alberto Bugarin-Diz and Alonso-Moral, {Jose M.} and Senen Barro and Fredrik Heintz",
booktitle = "ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings",

}

Download (RIS)

TY - GEN

T1 - TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning.

AU - Badar, Maryam

AU - Sikdar, Sandipan

AU - Nejdl, Wolfgang

AU - Fisichella, Marco

N1 - DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.

PY - 2024

Y1 - 2024

N2 - As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is widely used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment often exhibits class imbalance, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both the statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate our framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to that of state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.

AB - As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is widely used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment often exhibits class imbalance, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both the statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate our framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to that of state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.

UR - http://www.scopus.com/inward/record.url?scp=85213329800&partnerID=8YFLogxK

U2 - 10.3233/FAIA240762

DO - 10.3233/FAIA240762

M3 - Conference contribution

T3 - Frontiers in Artificial Intelligence and Applications

SP - 2370

EP - 2377

BT - ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings

A2 - Endriss, Ulle

A2 - Melo, Francisco S.

A2 - Bach, Kerstin

A2 - Bugarin-Diz, Alberto

A2 - Alonso-Moral, Jose M.

A2 - Barro, Senen

A2 - Heintz, Fredrik

ER -
