NeoChainDaily
30.12.2025 • 05:09 Research & Innovation

GShield Boosts Federated Learning Resilience Against Data Poisoning

Global: Novel Gradient-Based Defense GShield Enhances Federated Learning Robustness

A team of machine learning researchers has unveiled a new defense mechanism, GShield, designed to counter data‑poisoning attacks in federated learning environments. The approach learns the distribution of benign client gradients during an initial clustering phase and then filters out updates that deviate from this trusted profile, thereby protecting the global model from malicious or low‑quality contributions.

Background on Data Poisoning in Federated Learning

Federated learning enables decentralized model training while keeping raw data on client devices, a feature that promotes privacy but also opens the system to adversaries who can inject crafted data. Such data‑poisoning attacks can degrade overall model performance or cause targeted misclassifications, especially when client data are non‑independent and identically distributed (non‑IID).
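To make the threat concrete, the following minimal sketch shows plain federated averaging (FedAvg) and how a single crafted update can drag the aggregate away from the benign consensus. The client values and the `fedavg` helper are illustrative assumptions, not part of the paper.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Plain federated averaging: weighted element-wise mean of client updates."""
    updates = np.asarray(updates, dtype=float)
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return np.average(updates, axis=0, weights=weights)

# Four benign clients whose gradients roughly agree...
benign = [np.array([1.0, -0.5]), np.array([1.1, -0.4]),
          np.array([0.9, -0.6]), np.array([1.0, -0.5])]
# ...plus one poisoned client pushing hard in the opposite direction.
poisoned = np.array([-10.0, 8.0])

clean_avg = fedavg(benign)              # ≈ [1.0, -0.5]
dirty_avg = fedavg(benign + [poisoned]) # sign of both components flips
```

Because the server never sees raw data, only these update vectors, a defense has to work at the gradient level, which is exactly where GShield intervenes.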

Design of the GShield Defense

GShield operates in two stages. First, it clusters incoming gradients and fits a Gaussian model to characterize the distribution of benign updates. This baseline is established during a dedicated initial round. In the second stage, the system compares each new client update against the learned distribution and aggregates only those that fall within a predefined confidence interval, effectively isolating suspicious contributors.
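The two stages above can be sketched as follows. This is our own simplified illustration of the general idea: we use the gradient L2 norm as a one-dimensional benign statistic and a mean ± z·sigma interval as the confidence test; the paper's actual clustering procedure and Gaussian model are richer than this.

```python
import numpy as np

def fit_benign_profile(gradients):
    """Stage 1 (illustrative): fit a 1-D Gaussian to the L2 norms of
    gradients gathered during the dedicated initial round."""
    norms = np.linalg.norm(gradients, axis=1)
    return norms.mean(), norms.std()

def filter_updates(gradients, mu, sigma, z=2.0):
    """Stage 2 (illustrative): keep only updates whose norm falls inside
    the mu +/- z*sigma confidence interval learned from benign clients."""
    norms = np.linalg.norm(gradients, axis=1)
    mask = np.abs(norms - mu) <= z * sigma
    return gradients[mask], mask

rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(20, 5))    # tightly clustered benign updates
poisoned = rng.normal(10.0, 0.5, size=(2, 5))  # far outside the benign profile
mu, sigma = fit_benign_profile(benign)

all_updates = np.vstack([benign, poisoned])
kept, mask = filter_updates(all_updates, mu, sigma)
global_update = kept.mean(axis=0)  # aggregate only the in-profile updates
```

In this toy run the two out-of-profile updates are excluded before aggregation, which is the isolation behavior the article describes, while nearly all benign updates pass the interval test.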

Experimental Evaluation

The authors conducted extensive experiments on both tabular and image datasets, comparing GShield against several state‑of‑the‑art defenses. Results indicate that GShield consistently preserves higher overall accuracy while limiting the influence of poisoned updates, even under severe non‑IID conditions.

Performance Gains

Across the tested scenarios, GShield improved the accuracy of the targeted class by 43% to 65% after detecting malicious and low‑quality clients. Moreover, the method maintained competitive baseline accuracy for benign clients, demonstrating that security enhancements did not come at the cost of general performance.

Implications and Future Work

The study suggests that gradient‑level monitoring combined with statistical modeling can offer a scalable safeguard for federated learning deployments. Future research may explore adaptive confidence thresholds, integration with differential privacy, and broader evaluation across heterogeneous hardware environments.

Conclusion

By establishing a reliable profile of trusted client behavior, GShield provides a practical and effective tool for strengthening federated learning against data‑poisoning threats, advancing both the security and reliability of collaborative AI training.

This report is based on the abstract of a research preprint distributed via arXiv under an open-access academic preprint license; the full text is available on arXiv.
