ETRI Knowledge Sharing Platform

Protecting federated learning from malicious attacks using consensus technique
Cited 0 times in Scopus
Authors
Woocheol Kim, Jaehyoung Park, Jin-Hee Cho, Dong Seong Kim, Terrence J. Moore, Frederica F. Nelson, Seunghyun Yoon, Hyuk Lim
Issue Date
2026-02
Citation
Applied Soft Computing, v.187, pp.1-12
ISSN
1568-4946
Publisher
Elsevier
Language
English
Type
Journal Article
DOI
https://dx.doi.org/10.1016/j.asoc.2025.114299
Abstract
In federated learning (FL), each client trains a local model on its own dataset and periodically sends this model to a central server. The server aggregates the local models from participating clients to construct a global model, which is then broadcast back to the clients. However, the security of the global model can be compromised if malicious clients inject poisoned updates, causing significant performance degradation once these updates are aggregated. We propose an attack-resilient FL algorithm, called Federated Learning with Consensus Confirmation (FedCC), designed to protect FL systems from malicious client attacks. FedCC introduces a consensus confirmation step that validates whether a candidate global model improves upon the previous global model before it is broadcast. Specifically, the server first generates a candidate global model, then sends it to a randomly selected group of clients (consensus clients), who evaluate it on their local data. If a majority of these clients report improved performance, the candidate is accepted and broadcast; otherwise, the server discards it and restarts the aggregation process. In this way, FedCC probabilistically filters out poisoned or suboptimal models before global dissemination. The consensus mechanism is compatible with various FL aggregation rules and scenarios, as it operates on top of any algorithm that aggregates and broadcasts a global model. Our experiments on MNIST, Fashion-MNIST, and CIFAR-10 demonstrate that FedCC substantially improves robustness in the presence of malicious clients, achieving up to a 40% improvement in accuracy on MNIST under attack. Furthermore, empirical results show that FedCC consistently outperforms existing robust FL algorithms across diverse attack settings, thereby strengthening the resilience of federated learning systems.
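The consensus-confirmation loop described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: models are represented as plain lists of floats, `aggregate` is a FedAvg-style mean, and `evaluate` is a caller-supplied function standing in for each client's local validation metric (higher is better). The function names, the `quorum` threshold, and the retry policy are all hypothetical choices for this sketch.

```python
import random

def aggregate(updates):
    # FedAvg-style aggregation: element-wise mean of client model updates.
    n = len(updates)
    return [sum(w) / n for w in zip(*updates)]

def consensus_confirm(candidate, previous, consensus_clients, evaluate, quorum=0.5):
    """Return True if a majority of the sampled consensus clients report that
    `candidate` outperforms `previous` on their local data."""
    votes = sum(
        1 for c in consensus_clients
        if evaluate(c, candidate) > evaluate(c, previous)
    )
    return votes > quorum * len(consensus_clients)

def fedcc_round(global_model, client_updates, all_clients, evaluate,
                n_consensus=5, max_retries=3):
    """One FedCC-style round: aggregate, then broadcast only if a randomly
    sampled consensus group confirms the candidate improves on the previous
    global model; otherwise discard it and retry the aggregation."""
    for _ in range(max_retries):
        candidate = aggregate(client_updates)
        group = random.sample(all_clients, min(n_consensus, len(all_clients)))
        if consensus_confirm(candidate, global_model, group, evaluate):
            return candidate  # accepted: this model would be broadcast
        # Rejected: discard the candidate and restart aggregation. Here we
        # simply drop one update; the paper's actual re-aggregation rule
        # is not specified in the abstract.
        client_updates = random.sample(client_updates,
                                       max(1, len(client_updates) - 1))
    return global_model  # no consensus reached: keep the previous model
```

Because the check only needs a candidate model and a scalar evaluation per client, it layers on top of any aggregation rule, which is the compatibility property the abstract claims.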
KSP Keywords
Aggregation process, Attack-resilient, CIFAR-10, Central Server, Federated learning, Global model, Improved performance, Local models, aggregation rules, learning system, malicious attacks