NeoChainDaily
28.01.2026 • 05:25 Research & Innovation

New Metric Quantifies Representational Deviations in Neural Network Compositional Generalization


A team of researchers has introduced a structural metric called Homomorphism Error (HE) to assess why neural networks often stumble on compositional tasks that require interpreting novel combinations of familiar components. The metric, presented in a recent preprint on arXiv, aims to bridge the gap between behavioral evaluations and representational analysis by measuring deviations from approximate homomorphisms between an expression algebra and a model’s hidden‑state space. The study focuses on small decoder‑only Transformer models and evaluates performance on out‑of‑distribution (OOD) compositional generalization scenarios.

Metric Definition

HE quantifies how closely a model’s internal representations preserve the algebraic structure of compositional operators. The authors instantiate two variants for SCAN‑style tasks: modifier HE, which captures unary composition, and sequence HE, which captures binary composition. Both are computed by learning auxiliary operators that predict the representation of a composed input from the representations of its constituent parts.
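The paper's exact formulation is not given in the abstract, but the idea of "learning auxiliary operators that predict composed representations from constituent representations" can be sketched as follows. In this hypothetical illustration (not the authors' implementation), the auxiliary operator is a linear map fit by least squares, and HE is the relative residual of that fit; all names and the linearity assumption are ours.

```python
import numpy as np

def homomorphism_error(H_parts: np.ndarray, H_composed: np.ndarray) -> float:
    """Hypothetical HE sketch: fit a linear operator W mapping constituent
    representations H_parts (n, d_in) to composed representations
    H_composed (n, d_out), then report the relative residual of the best
    fit. For binary (sequence) composition, H_parts would hold the two
    constituents' representations concatenated. A value near zero means
    composition acts approximately homomorphically on the hidden states."""
    W, *_ = np.linalg.lstsq(H_parts, H_composed, rcond=None)
    residual = np.linalg.norm(H_parts @ W - H_composed)
    return float(residual / np.linalg.norm(H_composed))

# Toy usage: representations that compose exactly linearly give HE ~ 0.
rng = np.random.default_rng(0)
parts = rng.normal(size=(100, 16))
true_W = rng.normal(size=(16, 8))
composed = parts @ true_W
print(homomorphism_error(parts, composed))  # ~ 0 for an exact homomorphism
```

A nonlinear auxiliary operator (e.g. a small MLP) would fit the same template; the linear choice here is only the simplest instance of the idea.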

Experimental Evaluation

Across controlled experiments, the researchers found that modifier HE correlates strongly with OOD accuracy under noise injection (R² = 0.73). This suggests that higher representational deviation, as reflected by HE, reliably predicts poorer generalization on novel compositional inputs.
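For readers unfamiliar with the statistic, the R² reported here is the ordinary coefficient of determination of a linear fit between the two quantities. A minimal sketch, using made-up numbers purely for illustration (not the paper's data):

```python
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination for a simple linear fit of y on x:
    1 - (residual sum of squares) / (total sum of squares)."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical points with the reported trend: higher HE, lower OOD accuracy.
he = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
acc = np.array([0.9, 0.8, 0.65, 0.5, 0.45])
print(r_squared(he, acc))
```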

Ablation Studies

Further analysis revealed that increasing model depth had little impact on either HE values or OOD accuracy. In contrast, the breadth of training data exhibited a threshold effect: insufficient coverage caused a sharp rise in HE and a corresponding drop in OOD performance. Additionally, the systematic insertion of random noise tokens consistently elevated HE scores, indicating heightened representational distortion.

HE‑Regularized Training

The authors explored whether explicitly minimizing HE during training could improve generalization. Experiments showed that HE‑regularized training significantly reduced modifier HE (p = 1.1×10⁻⁴) and sequence HE (p = 0.001). Moreover, the approach yielded a statistically significant boost in OOD accuracy (p = 0.023), demonstrating HE’s potential as an actionable training signal.
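The abstract does not spell out the regularizer's form. One natural reading, sketched below with hypothetical names, is to add the composition-operator residual as a penalty term alongside the task loss; in a real training loop both terms would be differentiated jointly by autograd, which this NumPy illustration omits.

```python
import numpy as np

def he_regularized_loss(task_loss: float, h_parts: np.ndarray,
                        h_composed: np.ndarray, W: np.ndarray,
                        lam: float = 0.1) -> float:
    """Hypothetical sketch of HE-regularized training: augment the task
    loss with the mean squared residual of a (jointly learned) linear
    composition operator W, so minimizing the total loss also pushes the
    hidden states toward an approximate homomorphism. `lam` trades off
    task accuracy against homomorphy."""
    he_penalty = np.mean((h_parts @ W - h_composed) ** 2)
    return task_loss + lam * he_penalty

# Toy usage: when W composes the representations exactly, the penalty
# vanishes and the total loss equals the task loss.
rng = np.random.default_rng(1)
h_parts = rng.normal(size=(32, 16))
W = rng.normal(size=(16, 8))
h_composed = h_parts @ W
print(he_regularized_loss(1.0, h_parts, h_composed, W))  # → 1.0
```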

Implications and Availability

These findings position HE as both a diagnostic tool for identifying representational weaknesses and a practical regularizer for enhancing compositional generalization in neural networks. The codebase used for all experiments has been released publicly, enabling replication and further exploration by the research community.

This report is based on the abstract of the research paper; the full text is available via arXiv as an open-access preprint.
