NeoChainDaily
27.01.2026 • 05:25 • Research & Innovation

New Black-Box Membership Inference Attack Challenges Federated Learning Privacy

A recently proposed attack, termed Res-MIA, demonstrates that federated learning models can leak membership information even when accessed only through black‑box queries. The technique achieves an area under the curve (AUC) of up to 0.88 against a ResNet‑18 model trained on CIFAR‑10, highlighting a notable privacy vulnerability.

Background on Membership Inference

Membership inference attacks (MIAs) enable an adversary to determine whether a particular data point was part of a model’s training set. Such attacks threaten the confidentiality of individuals whose data may be included in the training process, especially in sensitive domains.
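A common baseline in this family is the confidence-thresholding attack: an overfit model tends to be more confident on samples it was trained on, so high top-class confidence can be used as a membership signal. A minimal sketch (the function name, threshold, and toy data are illustrative, not from the paper):

```python
import numpy as np

def confidence_mia(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Flag a sample as a training member (1) when the model's top-class
    confidence exceeds the threshold; overfit models are typically more
    confident on samples they were trained on."""
    return (probs.max(axis=1) > threshold).astype(int)

# Toy softmax outputs: the first two rows mimic confident, member-like
# predictions; the last two mimic uncertain, non-member-like ones.
probs = np.array([
    [0.97, 0.02, 0.01],
    [0.95, 0.03, 0.02],
    [0.40, 0.35, 0.25],
    [0.50, 0.30, 0.20],
])
print(confidence_mia(probs))  # -> [1 1 0 0]
```

In practice the threshold is calibrated on held-out data; Res-MIA's contribution is a stronger signal that does not require such calibration against shadow models.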

Federated Learning and Expected Privacy

Federated learning (FL) distributes model training across multiple clients, allowing raw data to remain on local devices. This decentralized approach has been promoted as privacy‑preserving, yet recent studies suggest that the aggregated global model can still expose member information.

Mechanics of Res-MIA

Res-MIA operates without auxiliary shadow models or additional training data. It progressively reduces input resolution through controlled down‑sampling and restoration, then measures the decay in the model’s confidence scores. According to the authors, training samples exhibit a steeper confidence decline than non‑members, providing a robust signal for membership status.
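Based on the abstract's description, the query-and-score loop can be sketched as follows. The degradation method (block averaging with nearest-neighbour restoration), the scoring rule (negated slope of the confidence curve), and all names here are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def degrade(img: np.ndarray, factor: int) -> np.ndarray:
    """Down-sample by block averaging, then restore to the original
    size by nearest-neighbour repetition (a stand-in for the paper's
    down-sampling/restoration step)."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def resmia_score(predict, img, label, factors=(1, 2, 4, 8)):
    """Query the black-box model on progressively degraded inputs and
    return the negated slope of its confidence curve: a steeper
    confidence decline (larger score) suggests a training member."""
    confs = [predict(degrade(img, f))[label] for f in factors]
    return -np.polyfit(np.arange(len(factors)), confs, 1)[0]

# Toy black-box "model": confidence for class 0 grows with pixel
# variance, mimicking a model that relies on fine-grained detail.
def toy_predict(img):
    conf = 0.5 + 0.5 * min(1.0, float(img.std()))
    return np.array([conf, 1.0 - conf])

member = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # high-frequency pattern
non_member = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))       # smooth gradient

m_score = resmia_score(toy_predict, member, label=0)
n_score = resmia_score(toy_predict, non_member, label=0)
print(m_score > n_score)  # the "member-like" sample decays more steeply
```

Note that the attack needs only forward queries, one per resolution level, which matches the paper's claim of a limited query budget.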

Experimental Evaluation

The authors evaluated Res-MIA on a federated ResNet‑18 architecture trained on the CIFAR‑10 dataset. Compared with existing training‑free baselines, Res-MIA consistently delivered higher detection performance, reaching an AUC of 0.88 while requiring only a limited number of forward queries.

Implications for Model Design

These findings suggest that overfitting to high‑frequency input details can serve as an underexplored source of privacy leakage. Models that rely heavily on fine‑grained, non‑robust features may be especially susceptible to this class of attacks.

Recommendations and Future Work

Researchers and practitioners are encouraged to explore mitigation strategies that reduce sensitivity to high‑frequency components, such as incorporating frequency‑domain regularization or employing robust training objectives. Further investigation into the trade‑offs between model accuracy and privacy under this attack paradigm is warranted.
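One simple instance of the frequency-domain idea, low-pass filtering training inputs via a 2-D FFT so the model never relies on the finest-grained detail, can be sketched as below. This is an illustrative example of the general direction, not a technique evaluated in the paper:

```python
import numpy as np

def lowpass_filter(img: np.ndarray, keep_frac: float = 0.5) -> np.ndarray:
    """Zero out the highest-frequency 2-D FFT coefficients, keeping a
    centred window covering `keep_frac` of each frequency axis."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    mask = np.zeros_like(spectrum)
    kh, kw = int(h * keep_frac / 2), int(w * keep_frac / 2)
    mask[h // 2 - kh : h // 2 + kh, w // 2 - kw : w // 2 + kw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # pure high frequency
smooth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))            # mostly low frequency
# The checkerboard is flattened almost to a constant, while the smooth
# gradient keeps most of its variation.
```

A filter like this applied as a training-time augmentation would directly attack the signal Res-MIA exploits, though the accuracy cost of discarding high-frequency features is exactly the trade-off the authors flag for future study.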

This report is based on the abstract of a research preprint posted to arXiv under open-access terms; the full text is available via arXiv.
