Details
| Original language | English |
|---|---|
| Title of host publication | Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings |
| Editors | Walter Senn, Marcello Sanguineti, Ausra Saudargiene, Igor V. Tetko, Alessandro E. P. Villa, Viktor Jirsa, Yoshua Bengio |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 391-403 |
| Number of pages | 13 |
| ISBN (electronic) | 978-3-032-04558-4 |
| ISBN (print) | 978-3-032-04557-7 |
| Publication status | Published - 12 Sept. 2026 |
| Event | 34th International Conference on Artificial Neural Networks, ICANN 2025 - Kaunas, Lithuania. Duration: 9 Sept. 2025 → 12 Sept. 2025 |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Volume | 16068 LNCS |
| ISSN (print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
Federated learning is a decentralized machine learning approach in which models are trained collaboratively across multiple devices or nodes holding local data, without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. One of the key challenges in federated learning is its inefficiency in handling scenarios where data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, leading to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to enhance model accuracy in highly imbalanced and non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced and non-IID data. The results demonstrated that the proposed approach achieved higher accuracy (98.17% on MNIST, 84.35% on Fashion-MNIST, and 67.49% on CIFAR-10) while requiring less than half the number of rounds to converge compared to traditional federated learning methods.
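The two standard ingredients named in the abstract — Dirichlet-based non-IID partitioning with a small α, and FedAvg weighted averaging — can be sketched as follows. This is a minimal illustration of those generic techniques, not the paper's FedP2PAvg method; the function names and the NumPy-array stand-in for model parameters are this sketch's own assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.1, seed=0):
    """Assign sample indices to clients with per-class proportions drawn
    from Dir(alpha). A small alpha (e.g. 0.1, as in the abstract) yields
    highly imbalanced, non-IID shards."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of this class that each client receives.
        p = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(shard.tolist())
    return client_idx

def fedavg(client_params, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))
```

With α=0.1, most clients end up holding samples from only one or two classes, which is the regime where plain FedAvg converges slowly and refinements such as the paper's peer-to-peer mechanism are motivated.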
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite
Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings. Ed. / Walter Senn; Marcello Sanguineti; Ausra Saudargiene; Igor V. Tetko; Alessandro E. P. Villa; Viktor Jirsa; Yoshua Bengio. Springer Science and Business Media Deutschland GmbH, 2026. pp. 391-403 (Lecture Notes in Computer Science; Vol. 16068 LNCS).
Publication: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - FedP2PAvg
T2 - 34th International Conference on Artificial Neural Networks, ICANN 2025
AU - Fernandes, Bruno J.T.
AU - Freire, Agostinho
AU - de Andrade, João V. R.
AU - Silva, Leandro H.S.
AU - Navarro-Guerrero, Nicolás
N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026/9/12
Y1 - 2026/9/12
N2 - Federated learning is a decentralized machine learning approach where models are trained collaboratively across multiple devices or nodes holding local data without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. One of the key challenges in federated learning is its inefficiency in handling scenarios where data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, leading to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to enhance model accuracy in highly imbalanced and non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced and non-IID data scenarios. The results demonstrated that the proposed approach achieved higher accuracy, 98.17% in MNIST, 84.35% in Fashion-MNIST and 67.49% in CIFAR-10 while requiring less than half the number of rounds to converge compared to traditional federated learning methods.
AB - Federated learning is a decentralized machine learning approach where models are trained collaboratively across multiple devices or nodes holding local data without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. One of the key challenges in federated learning is its inefficiency in handling scenarios where data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, leading to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to enhance model accuracy in highly imbalanced and non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced and non-IID data scenarios. The results demonstrated that the proposed approach achieved higher accuracy, 98.17% in MNIST, 84.35% in Fashion-MNIST and 67.49% in CIFAR-10 while requiring less than half the number of rounds to converge compared to traditional federated learning methods.
KW - Computer vision
KW - Federated learning
KW - Neural networks
UR - http://www.scopus.com/inward/record.url?scp=105016533353&partnerID=8YFLogxK
U2 - 10.1007/978-3-032-04558-4_31
DO - 10.1007/978-3-032-04558-4_31
M3 - Conference contribution
AN - SCOPUS:105016533353
SN - 9783032045577
T3 - Lecture Notes in Computer Science
SP - 391
EP - 403
BT - Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings
A2 - Senn, Walter
A2 - Sanguineti, Marcello
A2 - Saudargiene, Ausra
A2 - Tetko, Igor V.
A2 - Villa, Alessandro E. P.
A2 - Jirsa, Viktor
A2 - Bengio, Yoshua
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 9 September 2025 through 12 September 2025
ER -