NeoChainDaily
27.01.2026 • 05:15 Research & Innovation

Researchers Propose Prompt-Based Framework for Incremental Multi-View Multi-Label Learning

In a paper posted to arXiv in January 2026, a team of computer scientists introduced a new learning paradigm designed to handle both missing data views and the continual emergence of new categories in large‑scale web environments. The work, titled “Effective and Efficient Prompt Learning for Incomplete Multi‑View Multi‑Label Class Incremental Learning (IMvMLCIL),” outlines a task that reflects real‑world challenges where information sources are heterogeneous and evolving. The authors argue that existing methods either cannot adapt to new classes or suffer from exponential growth in parameters when faced with numerous missing‑view combinations.

Defining a Novel Task

The proposed task, called incomplete multi‑view multi‑label class incremental learning (IMvMLCIL), requires models to simultaneously accommodate arbitrary patterns of missing views and to incorporate newly introduced classes without retraining from scratch. By formalizing this scenario, the researchers aim to bridge a gap between academic benchmarks and the dynamic conditions of production web systems.
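The setup can be made concrete with a toy data generator. Everything below (view count, feature sizes, class splits) is an illustrative assumption, not the paper's actual benchmark configuration: each sample carries several feature views, any subset of which may be missing, and the multi-label target space grows as new tasks introduce new classes.

```python
import random

rng = random.Random(0)

# Illustrative setup: 3 views, classes arriving in incremental tasks.
NUM_VIEWS = 3
VIEW_DIMS = [8, 16, 4]                          # feature size of each view
TASK_CLASSES = [[0, 1, 2], [3, 4], [5, 6, 7]]   # classes introduced per task

def make_sample(task_id):
    """One training sample: per-view features, a missing-view mask,
    and a multi-label target over the classes seen so far."""
    mask = [rng.randint(0, 1) for _ in range(NUM_VIEWS)]
    if sum(mask) == 0:                          # at least one view must be present
        mask[rng.randrange(NUM_VIEWS)] = 1
    views = [[rng.gauss(0, 1) for _ in range(d)] if m else None
             for d, m in zip(VIEW_DIMS, mask)]
    seen = [c for task in TASK_CLASSES[:task_id + 1] for c in task]
    labels = [rng.randint(0, 1) for _ in seen]  # multi-label target vector
    return views, mask, labels

views, mask, labels = make_sample(task_id=1)
assert any(v is not None for v in views)        # never all views missing
assert len(labels) == 5                         # tasks 0 and 1 expose 5 classes
```

The point of the sketch is the two axes of difficulty at once: the mask varies per sample (arbitrary missing-view patterns), while the label dimension varies per task (class increments).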

Prompt‑Based Solution

To address IMvMLCIL, the authors present E2PL, an Effective and Efficient Prompt Learning framework. E2PL integrates two distinct prompt designs: task‑tailored prompts that facilitate class‑incremental adaptation, and missing‑aware prompts that enable flexible handling of any combination of absent views. This dual‑prompt strategy allows the model to remain responsive to both dimensions of change.
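A rough sketch of how such a dual-prompt lookup could work follows; the names, shapes, and assembly rule are assumptions for illustration, not the authors' implementation. One prompt bank is indexed by task (class-incremental side), the other by view, with only the prompts of the views actually present being attached.

```python
import random

EMBED_DIM, PROMPT_LEN, NUM_TASKS, NUM_VIEWS = 16, 4, 3, 3
rng = random.Random(1)

def new_prompt():
    """A prompt: PROMPT_LEN learnable token vectors of size EMBED_DIM."""
    return [[rng.gauss(0, 1) for _ in range(EMBED_DIM)]
            for _ in range(PROMPT_LEN)]

task_prompts = [new_prompt() for _ in range(NUM_TASKS)]   # task-tailored prompts
view_prompts = [new_prompt() for _ in range(NUM_VIEWS)]   # missing-aware prompts

def assemble_prompts(task_id, mask):
    """Concatenate the task-tailored prompt with the missing-aware
    prompts of the views that are actually present."""
    tokens = list(task_prompts[task_id])
    for v in range(NUM_VIEWS):
        if mask[v]:
            tokens.extend(view_prompts[v])
    return tokens

p = assemble_prompts(task_id=2, mask=[1, 0, 1])
# one task prompt + two present-view prompts -> 3 * PROMPT_LEN tokens
assert len(p) == 12 and len(p[0]) == EMBED_DIM
```

Because the prompt sequence is composed at lookup time, any of the 2^V - 1 non-empty missing-view patterns can be served without a dedicated prompt per pattern.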

Reducing Parameter Complexity

A central innovation of E2PL is the efficient prototype tensorization module, which applies atomic tensor decomposition to compress the parameter space. This technique reduces the growth of prompt parameters from exponential to linear relative to the number of views, thereby making the approach scalable to web‑scale datasets.
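The scaling argument can be made concrete with a simple parameter count. The numbers below are illustrative, and the paper's actual tensor decomposition is more involved than this counting exercise, but the contrast holds: a dedicated prompt per missing-view pattern grows with 2^V, while per-view atomic factors grow with V.

```python
# Sketch of the parameter-count argument (counts only; illustrative sizes).
PROMPT_PARAMS = 4 * 16   # tokens per prompt * embedding dim

def naive_params(num_views):
    """One dedicated prompt per non-empty missing-view pattern."""
    return (2 ** num_views - 1) * PROMPT_PARAMS

def factorized_params(num_views):
    """One atomic factor per view; patterns are composed, not stored."""
    return num_views * PROMPT_PARAMS

for v in (3, 6, 10):
    print(v, naive_params(v), factorized_params(v))
# At 10 views the naive scheme stores 1023 prompts, the factorized one 10.
```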

Dynamic Contrastive Learning

The framework also incorporates a dynamic contrastive learning component that explicitly models dependencies among diverse missing‑view patterns. By contrasting representations across varying view configurations, the method enhances robustness and improves classification performance under incomplete data conditions.
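A generic InfoNCE-style contrastive loss illustrates the underlying idea of pulling together embeddings of the same sample under different view subsets while pushing away other samples; the paper's exact loss formulation may differ.

```python
import math
import random

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Standard InfoNCE over cosine similarities: the anchor should be
    closer to its positive than to any negative."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)                              # numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

rng = random.Random(2)
z_full = [rng.gauss(0, 1) for _ in range(8)]               # all views present
z_partial = [x + 0.05 * rng.gauss(0, 1) for x in z_full]   # same sample, fewer views
z_others = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(4)]

loss_good = info_nce(z_full, z_partial, z_others)          # matching pair
loss_bad = info_nce(z_full, z_others[0], [z_partial] + z_others[1:])
assert loss_good >= 0
assert loss_good < loss_bad   # aligned view configurations score lower loss
```

Here `z_full` and `z_partial` stand in for encodings of one sample under two view configurations; treating them as a positive pair is what encourages robustness to whichever views happen to be missing.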

Empirical Evaluation

Experimental results on three benchmark datasets demonstrate that E2PL consistently outperforms state‑of‑the‑art baselines in both accuracy and computational efficiency. The authors report notable gains in handling incremental class additions while maintaining low memory overhead.

Future Directions

The researchers have made their code and datasets publicly available through an anonymous repository, inviting further validation and extension by the community. They suggest that the prompt‑centric design could be adapted to other domains where data heterogeneity and class evolution are prevalent, such as recommendation systems and multimodal content analysis.

This report is based on the abstract of the research paper, posted to arXiv as an open-access academic preprint; the full text is available via arXiv.
