Securing federated learning: a defense strategy against targeted data poisoning attack

Ansam Khraisat, Ammar Alazab, Moutaz Alazab, Tony Jan, Sarabjot Singh, Md Ashraf Uddin

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Ensuring the security and integrity of Federated Learning (FL) models against adversarial attacks is critical. Among these threats, targeted data poisoning attacks, particularly label flipping, pose a significant challenge by undermining model accuracy and reliability. This paper investigates targeted data poisoning attacks in FL systems, where a small fraction of malicious participants corrupts the global model through mislabeled data updates. Our findings demonstrate that even a minor presence of malicious participants can substantially decrease classification accuracy and recall, especially when attacks focus on specific classes. We also examine the longevity and timing of these attacks during early and late training rounds, highlighting the impact of malicious participant availability on attack effectiveness. To mitigate these threats, we propose a defense strategy that identifies malicious participants by analyzing parameter updates across vulnerable training rounds. Using Principal Component Analysis (PCA) for dimensionality reduction and anomaly detection, our approach effectively isolates malicious updates. Extensive simulations on standard datasets validate the effectiveness of our algorithm in accurately identifying and excluding malicious participants, thereby enhancing the integrity of the FL model. These results demonstrate a robust defense against sophisticated poisoning strategies, significantly improving FL security.
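The defense described in the abstract can be illustrated with a minimal sketch: project each client's flattened parameter update onto its top principal components, then flag clients whose projection lies far from the group's median. This is an assumption-laden toy, not the paper's exact algorithm — the client counts, update dimensions, and the distance-based threshold (`factor`) are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened parameter updates from 10 FL clients (dimension 50).
# Clients 0-7 are benign; clients 8-9 stand in for label-flipping
# attackers whose updates point in a different direction. These values
# are illustrative, not taken from the paper's experiments.
benign = 1.0 + rng.normal(0.0, 0.1, size=(8, 50))
malicious = -1.0 + rng.normal(0.0, 0.1, size=(2, 50))
updates = np.vstack([benign, malicious])

def flag_malicious(updates, n_components=2, factor=5.0):
    """Project client updates onto their top principal components,
    then flag clients whose projection lies far from the median."""
    centered = updates - updates.mean(axis=0)
    # PCA via SVD: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T        # (clients, components)
    dist = np.linalg.norm(scores - np.median(scores, axis=0), axis=1)
    # Anomalies: distance well above the typical (median) distance.
    return np.where(dist > factor * np.median(dist))[0].tolist()

print(flag_malicious(updates))  # indices of suspected malicious clients
```

In a real FL round, the server would compute this on the received updates for the vulnerable rounds the paper identifies, and exclude flagged clients from the aggregation step.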

Original language: English
Article number: 16
Journal: Discover Internet of Things
Volume: 5
Issue number: 1
DOIs
Publication status: Published - Dec 2025

Keywords

  • Attack detection
  • Federated learning
  • Malware
  • Poisoning attacks

