NeoChainDaily
27.01.2026 • 05:25 • Artificial Intelligence & Ethics

Researchers Reveal New Gradient Inversion Attack Targeting Adapter-Based Federated LLMs

Global: A team of machine‑learning researchers announced in January 2026 that they have developed a novel gradient inversion technique capable of reconstructing private text from federated large language models that use low‑rank adapters. The study, posted on arXiv (ID 2601.17533), demonstrates that the attack can recover input data with ROUGE‑1/2 scores exceeding 99 percent, even when traditional attacks fail. The work aims to highlight privacy risks inherent in current federated learning deployments for web‑scale applications.

Background on Adapter‑Based Federated LLMs

Adapter‑based federated learning has become popular because it freezes the massive backbone of a language model while fine‑tuning only compact, low‑rank modules. Proponents argue that this approach reduces computational, storage, and communication costs and limits the amount of gradient information that could be exposed to adversaries.
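The cost argument is easy to see in code. The sketch below assumes a standard LoRA‑style linear layer; the function name, dimensions, and initialization are illustrative and not taken from the paper. Only the small factors A and B are trainable, so a federated client transmits gradients of size r·(d_in + d_out) rather than d_out·d_in:

```python
import numpy as np

def lora_forward(x, W_frozen, A, B, scale=1.0):
    """Forward pass through a linear layer with a low-rank adapter.

    W_frozen: (d_out, d_in) pretrained weight, never updated.
    A: (r, d_in) and B: (d_out, r) are the trainable low-rank factors,
    with r << d_in. Only A and B produce gradients in training, so only
    their updates ever leave the client in a federated round.
    """
    return x @ W_frozen.T + scale * (x @ A.T @ B.T)

d_in, d_out, r = 64, 64, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))   # common LoRA init: B = 0, adapter starts as a no-op
x = rng.normal(size=(1, d_in))

adapter_params = A.size + B.size   # 512
full_params = W.size               # 4096
print(adapter_params / full_params)  # 0.125: an 8x smaller update to communicate
```

With B initialized to zero, the adapted layer initially behaves exactly like the frozen one; the ratio printed at the end is the communication saving that motivates the approach.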

Introducing the Unordered‑Word‑Bag‑Based Text Reconstruction (UTR) Attack

The authors propose the Unordered‑Word‑Bag‑Based Text Reconstruction (UTR) attack, which exploits three characteristics of adapter‑based systems: (i) low‑dimensional gradients, (ii) frozen backbone layers, and (iii) a combinatorially large reconstruction space. UTR first infers token presence by analyzing attention patterns in the frozen layers, then performs sentence‑level inversion within the adapter’s low‑rank subspace, and finally enforces semantic coherence through constrained greedy decoding guided by language priors.
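The final stage, ordering an unordered bag of recovered tokens into coherent text, can be illustrated with a toy sketch. The `greedy_order` helper and the bigram table below are invented for illustration only: the paper's actual decoder uses language‑model priors and additional constraints, not this hand‑built lookup.

```python
from collections import Counter

# Toy bigram "language prior": a plausibility score for word b following
# word a. In the real attack this role would be played by an LLM prior;
# this table is fabricated for the example.
BIGRAM = {
    ("the", "cat"): 3, ("cat", "sat"): 3, ("sat", "on"): 3,
    ("on", "the"): 3, ("the", "mat"): 3,
}

def greedy_order(bag, start="the"):
    """Greedily order an unordered bag of recovered tokens.

    At each step, pick the remaining word that the prior scores highest
    after the current word. This is the constrained-greedy-decoding idea
    in miniature: the token set is fixed, only the order is searched.
    """
    remaining = Counter(bag)
    remaining[start] -= 1
    out = [start]
    while sum(remaining.values()) > 0:
        best = max((w for w in remaining if remaining[w] > 0),
                   key=lambda w: BIGRAM.get((out[-1], w), 0))
        remaining[best] -= 1
        out.append(best)
    return out

bag = ["cat", "mat", "on", "sat", "the", "the"]
print(" ".join(greedy_order(bag)))  # the cat sat on the mat
```

The combinatorially large reconstruction space the authors mention is visible here: even six tokens admit hundreds of orderings, which is why a prior is needed to make greedy decoding land on the coherent one.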

Experimental Evaluation

Extensive experiments were conducted on models including GPT‑2 Large, BERT, and Qwen2.5‑7B, using benchmark datasets such as CoLA, SST‑2, and Rotten Tomatoes. The researchers varied batch sizes and training configurations to assess the robustness of UTR against prior gradient inversion attacks.

Key Findings

Across all tested scenarios, UTR achieved near‑perfect reconstruction accuracy, with ROUGE‑1 and ROUGE‑2 scores consistently above 99 percent. Notably, the attack succeeded even at large batch sizes, conditions under which earlier gradient inversion attacks typically collapse.
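ROUGE‑N, the metric behind these numbers, measures n‑gram overlap between the reconstructed text and the original input. A simplified recall‑style variant can be computed in a few lines; the `rouge_n` helper below is an illustration, not the evaluation code used in the paper:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Recall-oriented n-gram overlap: the fraction of the reference's
    n-grams that appear in the candidate. A score of 1.0 means every
    reference n-gram was recovered."""
    def ngrams(text):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, cand = ngrams(reference), ngrams(candidate)
    overlap = sum((ref & cand).values())   # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)

ref = "the movie was surprisingly good"
print(rouge_n(ref, ref, n=1))                      # 1.0: perfect reconstruction
print(rouge_n("the movie was good", ref, n=2))     # 0.5: half the bigrams recovered
```

Scores above 99 percent on both ROUGE‑1 and ROUGE‑2 therefore mean the attack recovers not just the vocabulary of the private input but almost all of its local word order.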

Implications for Privacy in Federated Learning

The results suggest a fundamental tension between the efficiency gains of adapter‑based fine‑tuning and the privacy guarantees expected in federated settings. According to the authors, the findings challenge the prevailing belief that lightweight adaptation inherently enhances security.

Open Resources and Future Directions

The full codebase and experimental data have been released publicly on GitHub, enabling the community to verify the results and explore mitigation strategies. The authors recommend further research into gradient‑masking techniques and alternative adaptation mechanisms to address the identified leakage channels.

This report is based on the abstract of the research paper, published as an open‑access academic preprint on arXiv; the full text is available there.
