TY - JOUR
T1 - Securing federated learning: a defense strategy against targeted data poisoning attack
T2 - Discover Internet of Things
AU - Khraisat, Ansam
AU - Alazab, Ammar
AU - Alazab, Moutaz
AU - Jan, Tony
AU - Singh, Sarabjot
AU - Uddin, Md Ashraf
N1 - Publisher Copyright:
© The Author(s) 2025.
PY - 2025/12
AB - Ensuring the security and integrity of Federated Learning (FL) models against adversarial attacks is critical. Among these threats, targeted data poisoning attacks, particularly label flipping, pose a significant challenge by undermining model accuracy and reliability. This paper investigates targeted data poisoning attacks in FL systems, where a small fraction of malicious participants corrupt the global model through mislabeled data updates. Our findings demonstrate that even a minor presence of malicious participants can substantially decrease classification accuracy and recall, especially when attacks focus on specific classes. We also examine the longevity and timing of these attacks during early and late training rounds, highlighting the impact of malicious participant availability on attack effectiveness. To mitigate these threats, we propose a defense strategy that identifies malicious participants by analyzing parameter updates across vulnerable training rounds. Utilizing Principal Component Analysis (PCA) for dimensionality reduction and anomaly detection, our approach effectively isolates malicious updates. Extensive simulations on standard datasets validate the effectiveness of our algorithm in accurately identifying and excluding malicious participants, thereby enhancing the integrity of the FL model. These results offer a robust defense against sophisticated poisoning strategies, significantly improving FL security.
KW - Attack detection
KW - Federated learning
KW - Malware
KW - Poisoning attacks
UR - http://www.scopus.com/inward/record.url?scp=85218458049&partnerID=8YFLogxK
DO - 10.1007/s43926-025-00108-6
M3 - Article
AN - SCOPUS:85218458049
SN - 2730-7239
VL - 5
JO - Discover Internet of Things
JF - Discover Internet of Things
IS - 1
M1 - 16
ER -