Details
| Original language | English |
|---|---|
| Title of host publication | Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings |
| Editors | Walter Senn, Marcello Sanguineti, Ausra Saudargiene, Igor V. Tetko, Alessandro E. P. Villa, Viktor Jirsa, Yoshua Bengio |
| Publisher | Springer Science and Business Media Deutschland GmbH |
| Pages | 391-403 |
| Number of pages | 13 |
| ISBN (electronic) | 978-3-032-04558-4 |
| ISBN (print) | 9783032045577 |
| Publication status | Published - 12 Sept 2026 |
| Event | 34th International Conference on Artificial Neural Networks, ICANN 2025 - Kaunas, Lithuania Duration: 9 Sept 2025 → 12 Sept 2025 |
Publication series
| Name | Lecture Notes in Computer Science |
|---|---|
| Volume | 16068 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
Federated learning is a decentralized machine learning approach in which models are trained collaboratively across multiple devices or nodes holding local data, without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. A key challenge in federated learning is its inefficiency when data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, which leads to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to improve model accuracy in highly imbalanced, non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced, non-IID data. The proposed approach achieved higher accuracy (98.17% on MNIST, 84.35% on Fashion-MNIST, and 67.49% on CIFAR-10) while requiring less than half the number of rounds to converge compared with traditional federated learning methods.
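The experimental setup described in the abstract can be sketched in a few lines: a Dirichlet split with α=0.1 to produce highly imbalanced, non-IID client shards, and the standard FedAvg weighted average used for aggregation. This is a minimal illustration under stated assumptions, not the paper's FedP2PAvg implementation (the peer-to-peer refinement step is omitted), and all function names here are ours.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
    """Split sample indices across clients using a Dirichlet prior.

    For each class, the proportion of its samples assigned to each client
    is drawn from Dirichlet(alpha). A small alpha (e.g. 0.1) concentrates
    each class on few clients, yielding highly imbalanced, non-IID shards.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # Cut points that divide this class's samples per the drawn proportions.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return [np.array(ix) for ix in client_indices]

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameter lists, weighted by local data size."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]
```

With α=0.1 most clients end up holding samples from only a few classes, which is the regime where plain FedAvg converges slowly and the paper's peer-to-peer refinement is aimed.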
Keywords
- Computer vision
- Federated learning
- Neural networks
ASJC Scopus subject areas
- General Mathematics
- Theoretical Computer Science
- General Computer Science
Cite this
Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings. ed. / Walter Senn; Marcello Sanguineti; Ausra Saudargiene; Igor V. Tetko; Alessandro E. P. Villa; Viktor Jirsa; Yoshua Bengio. Springer Science and Business Media Deutschland GmbH, 2026. p. 391-403 (Lecture Notes in Computer Science; Vol. 16068 LNCS).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - FedP2PAvg
T2 - 34th International Conference on Artificial Neural Networks, ICANN 2025
AU - Fernandes, Bruno J.T.
AU - Freire, Agostinho
AU - de Andrade, João V. R.
AU - Silva, Leandro H.S.
AU - Navarro-Guerrero, Nicolás
N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026/9/12
Y1 - 2026/9/12
N2 - Federated learning is a decentralized machine learning approach in which models are trained collaboratively across multiple devices or nodes holding local data, without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. A key challenge in federated learning is its inefficiency when data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, which leads to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to improve model accuracy in highly imbalanced, non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced, non-IID data. The proposed approach achieved higher accuracy (98.17% on MNIST, 84.35% on Fashion-MNIST, and 67.49% on CIFAR-10) while requiring less than half the number of rounds to converge compared with traditional federated learning methods.
AB - Federated learning is a decentralized machine learning approach in which models are trained collaboratively across multiple devices or nodes holding local data, without sharing that data directly. It enables privacy-preserving, scalable, and collaborative machine learning. A key challenge in federated learning is its inefficiency when data is highly imbalanced and non-independent and identically distributed (non-IID) across local nodes, which leads to biased global models and slow convergence. This paper introduces a peer-to-peer refinement mechanism combined with FedAvg aggregation to improve model accuracy in highly imbalanced, non-IID federated learning scenarios. Experiments were conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using a Dirichlet distribution with α=0.1 to simulate highly imbalanced, non-IID data. The proposed approach achieved higher accuracy (98.17% on MNIST, 84.35% on Fashion-MNIST, and 67.49% on CIFAR-10) while requiring less than half the number of rounds to converge compared with traditional federated learning methods.
KW - Computer vision
KW - Federated learning
KW - Neural networks
UR - http://www.scopus.com/inward/record.url?scp=105016533353&partnerID=8YFLogxK
U2 - 10.1007/978-3-032-04558-4_31
DO - 10.1007/978-3-032-04558-4_31
M3 - Conference contribution
AN - SCOPUS:105016533353
SN - 9783032045577
T3 - Lecture Notes in Computer Science
SP - 391
EP - 403
BT - Artificial Neural Networks and Machine Learning – ICANN 2025 - 34th International Conference on Artificial Neural Networks, 2025, Proceedings
A2 - Senn, Walter
A2 - Sanguineti, Marcello
A2 - Saudargiene, Ausra
A2 - Tetko, Igor V.
A2 - Villa, Alessandro E. P.
A2 - Jirsa, Viktor
A2 - Bengio, Yoshua
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 9 September 2025 through 12 September 2025
ER -