Details
Original language | English |
---|---|
Title of host publication | ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings |
Editors | Ulle Endriss, Francisco S. Melo, Kerstin Bach, Alberto Bugarin-Diz, Jose M. Alonso-Moral, Senen Barro, Fredrik Heintz |
Pages | 2370-2377 |
Number of pages | 8 |
ISBN (electronic) | 9781643685489 |
Publication status | Published - 2024 |
Publication series
Name | Frontiers in Artificial Intelligence and Applications |
---|---|
Volume | 392 |
ISSN (Print) | 0922-6389 |
ISSN (electronic) | 1879-8314 |
Abstract
As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is extensively used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment is often class-imbalanced, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate the framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.
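The building blocks named in the abstract — the Gaussian mechanism applied to client updates, and the statistical parity and equal opportunity fairness notions — can be sketched in a few lines. This is a minimal illustration of the standard definitions only; the function names and parameters are hypothetical and do not reflect TrustFed's actual implementation:

```python
import numpy as np

def gaussian_dp_update(update, clip_norm=1.0, sigma=1.0, rng=None):
    """Gaussian mechanism sketch: clip a client's model update to bound
    its L2 sensitivity, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update if norm == 0 else update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def statistical_parity_diff(y_pred, group):
    """Statistical parity: difference in positive-prediction rates,
    P(yhat=1 | group=0) - P(yhat=1 | group=1)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Equal opportunity: difference in true-positive rates between
    groups, computed over the truly positive examples only."""
    pos = y_true == 1
    return (y_pred[pos & (group == 0)].mean()
            - y_pred[pos & (group == 1)].mean())
```

A value of zero for either fairness measure indicates parity between the two groups; TrustFed's MOO component would treat such measures as objectives alongside balanced accuracy.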
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings. ed. / Ulle Endriss; Francisco S. Melo; Kerstin Bach; Alberto Bugarin-Diz; Jose M. Alonso-Moral; Senen Barro; Fredrik Heintz. 2024. pp. 2370-2377 (Frontiers in Artificial Intelligence and Applications; Vol. 392).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › Peer-review
TY - GEN
T1 - TrustFed: Navigating Trade-offs Between Performance, Fairness, and Privacy in Federated Learning.
AU - Badar, Maryam
AU - Sikdar, Sandipan
AU - Nejdl, Wolfgang
AU - Fisichella, Marco
N1 - DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
PY - 2024
Y1 - 2024
N2 - As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is extensively used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment is often class-imbalanced, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate the framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.
AB - As Federated Learning (FL) gains prominence in secure machine learning applications, achieving trustworthy predictions without compromising predictive performance becomes paramount. While Differential Privacy (DP) is extensively used for its effective privacy protection, its application as a lossy protection method can lower the predictive performance of the machine learning model. Moreover, the data gathered from distributed clients in an FL environment is often class-imbalanced, making traditional accuracy measures less reflective of the true performance of the prediction model. In this context, we introduce a fairness-aware FL framework (TrustFed) based on Gaussian differential privacy and Multi-Objective Optimization (MOO), which effectively protects privacy while providing fair and accurate predictions. To the best of our knowledge, this is the first attempt to achieve Pareto-optimal trade-offs between balanced accuracy and fairness in a federated environment while safeguarding the privacy of individual clients. The framework's flexible design accommodates both statistical parity and equal opportunity fairness notions, ensuring its applicability in various FL scenarios. We demonstrate the framework's effectiveness through comprehensive experiments on five real-world datasets. TrustFed consistently achieves a performance-fairness trade-off comparable to state-of-the-art (SoTA) baseline models while preserving the anonymization rights of users in FL applications.
UR - http://www.scopus.com/inward/record.url?scp=85213329800&partnerID=8YFLogxK
U2 - 10.3233/FAIA240762
DO - 10.3233/FAIA240762
M3 - Conference contribution
T3 - Frontiers in Artificial Intelligence and Applications
SP - 2370
EP - 2377
BT - ECAI 2024 - 27th European Conference on Artificial Intelligence, Including 13th Conference on Prestigious Applications of Intelligent Systems, PAIS 2024, Proceedings
A2 - Endriss, Ulle
A2 - Melo, Francisco S.
A2 - Bach, Kerstin
A2 - Bugarin-Diz, Alberto
A2 - Alonso-Moral, Jose M.
A2 - Barro, Senen
A2 - Heintz, Fredrik
ER -