FedORA Algorithm Facilitates Sample and Label Unlearning in Vertical Federated Learning
A team of machine learning researchers announced a new algorithm designed to remove specific data influences from models trained under vertical federated learning (VFL) frameworks. The work, posted to arXiv in December 2025, aims to support privacy‑preserving requirements such as the “right to be forgotten.” By targeting both sample‑level and label‑level unlearning, the proposed method seeks to reduce the computational and communication burdens typically associated with VFL model updates.
Challenges in Vertical Federated Unlearning
Vertical federated learning distributes complementary feature sets across multiple parties, requiring coordinated updates to a shared model. Unlearning in this setting must address cross‑party dependencies, which introduces significant overhead compared with horizontal federated scenarios. Sample unlearning must erase the influence of individual records, while label unlearning must eliminate entire classes, both demanding careful handling of inter‑feature relationships.
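The cross-party structure described above can be illustrated with a minimal sketch (not the paper's code): two parties hold disjoint feature columns of the same samples, each computes a local embedding, and a server fuses the embeddings. All names and dimensions are hypothetical.

```python
import numpy as np

# Toy vertical FL forward pass: each party holds different feature
# columns of the SAME samples and shares only an embedding, never
# raw features. Sizes and weights are illustrative placeholders.
rng = np.random.default_rng(0)

n_samples, d_a, d_b, d_emb, n_classes = 8, 3, 2, 4, 3
X_a = rng.normal(size=(n_samples, d_a))          # party A's features
X_b = rng.normal(size=(n_samples, d_b))          # party B's features

W_a = rng.normal(size=(d_a, d_emb))              # party A's local model
W_b = rng.normal(size=(d_b, d_emb))              # party B's local model
W_top = rng.normal(size=(2 * d_emb, n_classes))  # server's top model

# Parties exchange embeddings; the server produces the joint prediction.
h_a, h_b = X_a @ W_a, X_b @ W_b
logits = np.concatenate([h_a, h_b], axis=1) @ W_top
print(logits.shape)  # (8, 3)
```

Unlearning one sample or one class in this setting requires coordinated updates to `W_a`, `W_b`, and `W_top`, which is why naive retraining is expensive.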
FedORA: A Primal‑Dual Optimization Approach
The proposed FedORA framework formulates unlearning as a constrained optimization problem solved via a primal‑dual algorithm. This structure enables the system to incorporate removal constraints directly into the training objective, allowing parties to jointly adjust model parameters while respecting the unlearning request.
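The primal-dual pattern can be sketched on a toy problem: minimize a retained-data loss f(θ) subject to an unlearning constraint g(θ) ≤ 0, alternating gradient descent on the primal variable with projected gradient ascent on the dual variable. The specific f and g below are stand-ins, not FedORA's actual objective.

```python
import numpy as np

def f(theta):
    """Toy utility loss on retained data (quadratic, minimized at 2)."""
    return (theta - 2.0) ** 2

def g(theta):
    """Toy unlearning constraint: require theta <= 1."""
    return theta - 1.0

# Lagrangian: L(theta, lam) = f(theta) + lam * g(theta), lam >= 0.
theta, lam, eta = 0.0, 0.0, 0.1
for _ in range(500):
    grad_theta = 2.0 * (theta - 2.0) + lam        # dL/dtheta
    theta -= eta * grad_theta                     # primal descent step
    lam = max(0.0, lam + eta * g(theta))          # projected dual ascent

print(round(theta, 2))  # settles at the constraint boundary, theta = 1.0
```

The dual variable grows while the constraint is violated, pulling the primal iterate toward feasibility; this is how a removal constraint can be folded directly into the training objective.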
Novel Loss Function and Adaptive Step Size
FedORA introduces an unlearning loss that encourages classification uncertainty rather than explicit misclassification, thereby preserving model utility for retained data. An adaptive step‑size mechanism is employed to enhance numerical stability across heterogeneous party contributions.
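One common way to encourage uncertainty rather than misclassification is to push the model's predicted distribution on forgotten samples toward uniform, e.g. by minimizing the KL divergence to the uniform distribution. The form below is an assumption chosen to illustrate the idea, not the paper's exact loss.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def uncertainty_loss(logits):
    """KL(p || uniform) = log K - H(p); zero exactly when p is uniform."""
    p = softmax(logits)
    k = logits.shape[1]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return (np.log(k) - entropy).mean()

confident = np.array([[8.0, 0.0, 0.0]])  # near one-hot prediction
uncertain = np.array([[0.0, 0.0, 0.0]])  # uniform prediction
print(uncertainty_loss(uncertain) < uncertainty_loss(confident))  # True
```

Unlike a loss that rewards wrong labels, this objective is indifferent to which class wins on retained data, which is consistent with the utility-preservation goal the authors describe.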
Theoretical Guarantees
Analytical results demonstrate that the difference between the FedORA‑derived model and a model trained from scratch after data removal is bounded. This bound provides a formal guarantee of unlearning effectiveness while limiting degradation of overall performance.
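A guarantee of this kind is typically stated as a bound of the following illustrative form; the exact norm, constants, and assumptions are not given in the abstract and are supplied here only as a sketch.

```latex
% Illustrative statement, assuming a distance-based guarantee:
\[
  \bigl\lVert \theta_{\mathrm{FedORA}} - \theta_{\mathrm{retrain}} \bigr\rVert
  \;\le\; \epsilon,
\]
% where \theta_{retrain} minimizes the training objective after the
% requested samples or labels are removed, and \epsilon depends on
% problem-specific quantities such as step sizes and loss curvature.
```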
Empirical Evaluation
Experiments conducted on both tabular and image datasets indicate that FedORA attains unlearning effectiveness and utility preservation comparable to the train‑from‑scratch baseline. At the same time, the approach reduces communication rounds and computational effort relative to naïve retraining.
Potential Impact and Future Work
By addressing the unique constraints of VFL, FedORA may facilitate broader adoption of privacy‑centric machine learning practices in multi‑party environments. Ongoing research could explore extensions to heterogeneous model architectures and real‑world deployment scenarios.
This report is based on the abstract of the research paper, posted to arXiv as an open-access preprint; the full text is available via arXiv.