NeoChainDaily
29.12.2025 • 15:19 • Research & Innovation

New SCALA Method Tackles Label Skew in Split Federated Learning

Paper: SCALA: Split Federated Learning with Concatenated Activations and Logit Adjustments

Researchers Jiarong Yang and Yuan Liu released a paper on December 25, 2025 (originally submitted May 8, 2024) that introduces SCALA, a technique designed to improve split federated learning by addressing label distribution skew across participating clients. The work appears on the arXiv preprint server under the identifier arXiv:2405.04875.

Background on Split Federated Learning

Split federated learning (SFL) partitions a machine‑learning model between a central server and multiple edge devices, allowing each client to train its portion of the model on local data while the server processes the intermediate activations. This architecture aims to reduce communication overhead and preserve data privacy.
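The split of a model between client and server can be sketched in a few lines. The layer shapes and names below are illustrative assumptions, not taken from the paper; the point is only that the client transmits intermediate activations rather than raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a tiny two-layer model: the client owns the
# first layer, the server owns the classifier head.
W_client = rng.normal(size=(20, 16))   # client-side layer
W_server = rng.normal(size=(16, 10))   # server-side layer

x = rng.normal(size=(8, 20))           # a client's local mini-batch

# The client computes intermediate activations on its local data...
activations = np.maximum(x @ W_client, 0.0)   # ReLU

# ...and sends only those activations (not the raw inputs) to the
# server, which completes the forward pass.
logits = activations @ W_server
print(logits.shape)   # (8, 10)
```

In training, gradients would flow back from the server through the activation interface into the client's layers; only that interface crosses the network.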

Challenges of Label Distribution Skew

In practice, SFL systems often encounter heterogeneous data and intermittent client participation, which can create imbalances in label distributions among the subsets of data used for training. Such skew can significantly degrade overall model performance, especially when only a fraction of clients contribute updates in a given round.
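To see what label skew looks like concretely, a common simulation device in the federated-learning literature (not necessarily the paper's own protocol) draws each client's class proportions from a Dirichlet distribution; a small concentration parameter yields clients dominated by a few classes.

```python
import numpy as np

rng = np.random.default_rng(42)

num_classes, num_clients = 10, 5
# Small alpha -> highly skewed label distributions per client;
# large alpha -> near-uniform.
alpha = 0.1
proportions = rng.dirichlet([alpha] * num_classes, size=num_clients)

for i, p in enumerate(proportions):
    dominant = p.argmax()
    print(f"client {i}: class {dominant} holds {p[dominant]:.0%} of its data")
```

When only a random subset of such clients participates in a round, the labels seen by the server in that round can differ sharply from the global distribution.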

Proposed SCALA Approach

The SCALA framework addresses these issues through two complementary mechanisms. First, activations generated by client‑side models are concatenated before being fed to the server‑side model, enabling the server to adjust the aggregated label distribution centrally. Second, logit adjustments are applied to the loss functions on both server and client sides, compensating for variations in label frequencies among participating client groups.
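The two mechanisms above can be sketched together. The shapes, the prior estimate, and the specific adjustment below are simplifying assumptions for illustration, in the spirit of the paper's description rather than its exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes = 10

# Activations arriving from two participating clients (hypothetical shapes).
act_a = rng.normal(size=(8, 16))
act_b = rng.normal(size=(4, 16))
labels = rng.integers(0, num_classes, size=12)

# Mechanism 1: the server concatenates client activations, so its
# forward pass sees the aggregated label distribution of the whole
# participating group rather than any single client's skew.
activations = np.concatenate([act_a, act_b], axis=0)

W_server = rng.normal(size=(16, num_classes))
logits = activations @ W_server

# Mechanism 2: a generic logit adjustment -- shift each logit by the
# log of the observed label frequency before the softmax loss, so
# rare classes are not systematically under-predicted.
counts = np.bincount(labels, minlength=num_classes).astype(float)
priors = counts / counts.sum()
adjusted = logits + np.log(priors + 1e-12)

# Numerically stable cross-entropy on the adjusted logits.
z = adjusted - adjusted.max(axis=1, keepdims=True)
log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(len(labels)), labels].mean()
print(round(float(loss), 3))
```

In SCALA the adjustment is applied on both the server-side and client-side losses, each compensating for the label frequencies of the corresponding participating group.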

Theoretical and Experimental Validation

The authors provide a theoretical analysis demonstrating that the concatenation and logit‑adjustment steps reduce the expected risk under label skew conditions. Empirical experiments on publicly available datasets confirm that SCALA outperforms baseline SFL methods, achieving higher accuracy and faster convergence.
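The adjustments analyzed here follow the general pattern of logit-adjusted cross-entropy known from the long-tailed classification literature. A generic form (shown for orientation; not the paper's exact equation) shifts each logit by a temperature-scaled log of the class prior \(\pi_y\):

```latex
\ell\big(y, f(x)\big) \;=\; -\log
\frac{\exp\!\big(f_y(x) + \tau \log \pi_y\big)}
     {\sum_{y'=1}^{K} \exp\!\big(f_{y'}(x) + \tau \log \pi_{y'}\big)}
```

Classes with small priors receive a downward shift at training time, which counteracts the bias a skewed label distribution would otherwise induce in the learned classifier.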

Implications for Distributed AI

By mitigating label distribution skew, SCALA could enhance the reliability of federated learning deployments in environments where client availability and data characteristics are unpredictable, such as mobile edge computing and Internet‑of‑Things networks.

Future Directions

Further research may explore extending SCALA to heterogeneous model architectures, evaluating its robustness against adversarial client behavior, and integrating privacy‑preserving techniques to complement the proposed adjustments.

This report is based on the abstract of the research paper; the full text is available via arXiv as an open-access preprint.
