NeoChainDaily
27.01.2026 • 05:35 Research & Innovation

FedMentor Framework Enables Privacy-Preserving Fine-Tuning of Large Language Models


Researchers have introduced FedMentor, a federated fine‑tuning system that blends Low‑Rank Adaptation (LoRA) with domain‑aware Differential Privacy (DP) to protect confidential data while preserving model performance. The approach targets high‑sensitivity sectors such as mental‑health care, where strict confidentiality and safe model outputs are paramount.
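The core idea of combining LoRA with DP can be illustrated in a short sketch. The function names, clipping norm, and update vectors below are illustrative assumptions, not details from the paper; the sketch shows the standard DP recipe of clipping each client's LoRA update and adding Gaussian noise to the average:

```python
import random

def clip(update, max_norm):
    """Clip an update vector to a maximum L2 norm (standard DP preprocessing)."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, max_norm / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def aggregate_with_dp(client_updates, max_norm, noise_scale, rng):
    """Average clipped LoRA updates and perturb the result with Gaussian noise."""
    clipped = [clip(u, max_norm) for u in client_updates]
    n = len(clipped)
    dim = len(clipped[0])
    avg = [sum(u[i] for u in clipped) / n for i in range(dim)]
    # Gaussian mechanism: noise std scales with per-client sensitivity
    # (max_norm / n) times the chosen noise multiplier.
    sigma = noise_scale * max_norm / n
    return [a + rng.gauss(0.0, sigma) for a in avg]

rng = random.Random(0)
updates = [[0.5, -1.2, 0.3], [0.4, -1.0, 0.2]]  # hypothetical LoRA deltas
noisy_avg = aggregate_with_dp(updates, max_norm=1.0, noise_scale=0.8, rng=rng)
```

Because only the low-rank adapter weights are exchanged, both the clipping and the noise are applied to a far smaller vector than the full model, which is what keeps the privacy overhead manageable.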

Adaptive Privacy Mechanism

FedMentor assigns each participating client a custom DP noise scale that reflects the sensitivity of its local dataset. During training, the central server monitors utility metrics and reduces the noise level when performance drops below a predefined threshold, thereby balancing privacy budgets with output quality.
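The adaptation loop described above can be sketched as follows. The client names, decay factor, floor, and threshold are hypothetical placeholders; the paper's abstract does not specify the exact update rule, only that noise is reduced when utility drops below a threshold:

```python
def adapt_noise(noise_scales, utilities, threshold, decay=0.9, floor=0.1):
    """Per-client noise adaptation: when a client's utility metric falls
    below the threshold, shrink its DP noise multiplier (never below floor)."""
    new_scales = {}
    for client, sigma in noise_scales.items():
        if utilities[client] < threshold:
            new_scales[client] = max(floor, sigma * decay)
        else:
            new_scales[client] = sigma
    return new_scales

# Hypothetical clients with different sensitivity-derived noise levels.
scales = {"clinic_a": 1.2, "clinic_b": 0.8}
utils = {"clinic_a": 0.41, "clinic_b": 0.55}
scales = adapt_noise(scales, utils, threshold=0.5)
# clinic_a falls below the threshold, so its noise multiplier is reduced;
# clinic_b is unchanged.
```

The floor keeps a minimum privacy guarantee in place even for clients whose utility stays persistently low.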

Experimental Design

The framework was evaluated on three publicly available mental‑health corpora, using BERTScore F1 and ROUGE‑L to gauge utility, and safe‑output rates alongside toxicity scores to assess safety. Comparisons were made against a standard federated learning baseline without privacy protection and against a non‑private centralized model.

Key Findings

Results indicate that FedMentor raises safe‑output rates by up to three percentage points and lowers toxicity relative to the non‑private federated baseline, while keeping utility within 0.5 % of that baseline and close to the centralized upper bound. These outcomes suggest that the adaptive DP strategy can enhance safety without sacrificing linguistic quality.

Scalability and Communication Efficiency

The system supports language models up to 1.7 billion parameters on single‑GPU client hardware. Communication overhead remains modest, requiring less than 173 MB per training round, which demonstrates feasibility for real‑world deployments with limited bandwidth.
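A quick back-of-the-envelope calculation shows why exchanging only LoRA adapters keeps per-round traffic low. The layer count, hidden dimension, and rank below are hypothetical values chosen for illustration, not the paper's actual configuration:

```python
def lora_payload_mb(layers, d_in, d_out, rank, bytes_per_param=4):
    """Approximate size of the LoRA adapter weights shipped per round:
    each adapted layer contributes A (d_in x rank) and B (rank x d_out)."""
    params = layers * rank * (d_in + d_out)
    return params * bytes_per_param / (1024 ** 2)

# Hypothetical setup: 24 adapted layers, hidden size 2048, rank 16, fp32.
mb = lora_payload_mb(layers=24, d_in=2048, d_out=2048, rank=16)
# A few megabytes per round, comfortably under the reported 173 MB bound.
```

Even with larger ranks or more adapted layers, the adapter payload stays orders of magnitude smaller than shipping the full 1.7-billion-parameter model each round.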

Implications and Future Directions

FedMentor offers a practical pathway for deploying large language models in healthcare and other privacy‑sensitive domains. Ongoing work aims to extend the framework to multimodal models, refine the noise‑adaptation algorithm, and validate performance on broader clinical datasets.

This report is based on the abstract of the research paper, available as an open-access preprint; the full text is available via arXiv.

