NeoChainDaily
13.01.2026 • 05:05 • Artificial Intelligence & Ethics

Online Bayesian Framework Explains In-Context Learning in Large Language Models


Researchers have introduced a theory‑first framework that treats inference‑time adaptation in large language models (LLMs) as an instance of online Bayesian state estimation. The approach models task‑ and context‑specific learning as the sequential inference of a low‑dimensional latent adaptation state governed by a linearized state‑space model.
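Concretely, the setup can be pictured as a linear-Gaussian state-space model over a latent adaptation state. The sketch below is a minimal illustration of that structure; the dimensions and the matrices A, H, Q, R are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

# Hypothetical linear-Gaussian state-space model for the latent adaptation
# state z_t; all dimensions and matrices below are illustrative assumptions.
d_state, d_obs = 4, 8                     # latent dim, token-feature dim
rng = np.random.default_rng(0)

A = 0.95 * np.eye(d_state)                # linearized state transition (assumed)
H = rng.normal(size=(d_obs, d_state))     # token-level observation Jacobian (assumed)
Q = 0.01 * np.eye(d_state)                # process noise covariance (assumed)
R = 0.10 * np.eye(d_obs)                  # observation noise covariance (assumed)

def step(z):
    """Advance the assumed latent dynamics and emit one noisy token observation."""
    z_next = A @ z + rng.multivariate_normal(np.zeros(d_state), Q)
    y = H @ z_next + rng.multivariate_normal(np.zeros(d_obs), R)
    return z_next, y
```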

Bayesian Filtering Perspective

Assuming Gaussian distributions, the adaptation process follows a Kalman recursion that provides closed‑form updates for both the posterior mean and covariance. This formulation elevates epistemic uncertainty to an explicit dynamical variable within the model.
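In the linear-Gaussian setting the posterior over the adaptation state stays Gaussian, so the recursion reduces to the standard Kalman predict/update equations. The sketch below shows those closed-form updates, reusing the illustrative A, H, Q, R defined above.

```python
def kalman_step(mu, P, y):
    """One closed-form Kalman recursion step for the adaptation posterior.

    mu, P -- posterior mean and covariance of the latent adaptation state
    y     -- the newly observed token features
    """
    # Predict through the (assumed) linearized dynamics.
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Update: epistemic uncertainty P enters explicitly via the gain.
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    mu_new = mu_pred + K @ (y - H @ mu_pred)
    P_new = (np.eye(d_state) - K @ H) @ P_pred
    return mu_new, P_new
```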

Covariance Collapse and Learning Dynamics

The authors identify “covariance collapse” – a rapid contraction of posterior uncertainty driven by informative tokens – as a primary mechanism that precedes convergence of the posterior mean. This phenomenon explains how LLMs quickly assimilate new information during inference.
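The mechanism is easy to observe with the sketch above: starting from a diffuse prior, the trace of the posterior covariance contracts within a few informative tokens, typically before the mean error has fully settled. This is a purely illustrative run, not the paper's experiment.

```python
# Illustrative run: covariance collapses within a few informative tokens,
# ahead of the posterior mean's convergence.
z_true = rng.normal(size=d_state)
mu, P = np.zeros(d_state), 10.0 * np.eye(d_state)    # diffuse prior

for t in range(8):
    z_true, y = step(z_true)
    mu, P = kalman_step(mu, P, y)
    print(f"t={t}  trace(P)={np.trace(P):8.4f}  mean error={np.linalg.norm(mu - z_true):.3f}")
```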

Theoretical Guarantees

By applying observability conditions to token‑level Jacobians, the study establishes stability of the Bayesian filter, proves exponential rates of covariance contraction, and derives mean‑square error bounds for the adaptation process.
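One standard way to express such an observability condition is through the windowed observability Gramian built from the token-level Jacobians: if it is positive definite, every direction of the latent state is eventually reflected in the observations. The check below is a hedged sketch under that standard formulation, reusing the illustrative A and H.

```python
def observability_gramian(A, H, T):
    """Window-T observability Gramian: sum over t of (A^t)^T H^T H A^t.
    Positive definiteness means every latent-state direction is eventually
    informed by the token-level observations."""
    G = np.zeros((A.shape[0], A.shape[0]))
    At = np.eye(A.shape[0])
    for _ in range(T):
        G += At.T @ H.T @ H @ At
        At = A @ At
    return G

G = observability_gramian(A, H, T=4)
print("observable over window:", bool(np.linalg.eigvalsh(G).min() > 1e-8))
```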

Connection to Existing Optimization Methods

Gradient descent, natural‑gradient techniques, and meta‑learning updates emerge as singular, noise‑free limits of the proposed filtering dynamics. Consequently, traditional optimization‑based adaptation can be viewed as a degenerate approximation of the underlying Bayesian inference.
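The correspondence is visible directly in the mean update: the Kalman step mu + K(y - H mu) is a preconditioned gradient step on the squared prediction error, and freezing the gain at a fixed multiple of H^T R^{-1} recovers plain gradient descent. A minimal sketch under those assumptions (eta is an assumed step size):

```python
eta = 0.05                                # fixed step size (assumed)
R_inv = np.linalg.inv(R)

def gradient_step(mu, y):
    """Degenerate limit of kalman_step: freezing the gain at eta * H^T R^{-1}
    reduces the posterior-mean update to plain gradient descent on the
    squared prediction error 0.5 * ||y - H @ mu||^2 weighted by R^{-1}."""
    grad = -H.T @ R_inv @ (y - H @ mu)    # gradient of the squared error
    return mu - eta * grad                # same as mu + eta * H^T R^{-1} (y - H mu)
```

In the same spirit, swapping the frozen gain for an inverse-Fisher preconditioner gives the natural-gradient analogue; the full filter differs in that its gain keeps adapting with the evolving covariance P.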

Implications for In‑Context Learning

The framework offers a unified probabilistic account of in‑context learning, parameter‑efficient adaptation, and test‑time learning without parameter updates. It supplies explicit guarantees on stability and sample efficiency, and provides a principled interpretation of prompt informativeness through information accumulation.
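On this reading, prompt informativeness becomes a matter of bookkeeping: each token contributes an information term of the form H_t^T R^{-1} H_t to the posterior precision, so informative prompts shrink uncertainty faster along more directions. The sketch below illustrates that accumulation with the same assumed H and R; the function name and interface are hypothetical.

```python
def accumulated_information(jacobians, R_inv):
    """Accumulate per-token information contributions H_t^T R^{-1} H_t.
    Larger eigenvalues mean the prompt is more informative along the
    corresponding directions of the adaptation state."""
    d = jacobians[0].shape[1]
    info = np.zeros((d, d))
    for H_t in jacobians:
        info += H_t.T @ R_inv @ H_t
    return info

info = accumulated_information([H] * 5, np.linalg.inv(R))
print("least-informed direction:", np.linalg.eigvalsh(info).min())
```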

Empirical Illustration

Minimal illustrative experiments reported by the authors corroborate the qualitative predictions of the theory, demonstrating consistency between the Bayesian model and observed adaptation behavior in LLMs.

This report is based on the abstract of a research paper published on arXiv (Academic Preprint / Open Access); the full text is available via arXiv.

End of Transmission

