NeoChainDaily
14.01.2026 • 05:45 • Artificial Intelligence & Ethics

Study Reveals Why Prohibitive Prompts Often Fail in Large Language Models


A new study released on arXiv examines why large language models often ignore prohibitive prompts, a phenomenon that challenges reliable instruction following. The research, posted in January 2026, was conducted by a team of AI scientists who aimed to identify the mechanisms behind negative constraint failures.

Understanding Negative Constraints

The authors define negative constraints as directives that tell a model not to use a specific token, such as “do not say X.” Although intuitively simple, these constraints frequently break, prompting the need for a systematic investigation.

Quantifying Semantic Pressure

The study introduces a metric called "semantic pressure": a model's intrinsic likelihood of generating the forbidden token. Across 40,000 samples, the violation probability follows a logistic curve, p = σ(−2.40 + 2.27·P₀), where P₀ is the semantic pressure; a bootstrap 95% confidence interval places the slope between 2.21 and 2.33.
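The fitted curve can be evaluated directly. A minimal sketch, using only the intercept and slope reported in the study (the helper function name and the sample P₀ values are illustrative, not from the paper):

```python
import math

# Fitted logistic model from the study: p = sigma(-2.40 + 2.27 * P0),
# where P0 is the baseline ("semantic pressure") probability of the
# forbidden token and p is the probability of violating the constraint.
INTERCEPT = -2.40
SLOPE = 2.27

def violation_probability(p0: float) -> float:
    """Predicted probability that a 'do not say X' constraint is violated."""
    z = INTERCEPT + SLOPE * p0
    return 1.0 / (1.0 + math.exp(-z))

for p0 in (0.05, 0.50, 0.95):
    print(f"semantic pressure P0={p0:.2f} -> violation p={violation_probability(p0):.3f}")
```

Even at zero semantic pressure the model predicts a small residual violation rate (σ(−2.40) ≈ 0.08), and the rate rises monotonically as the forbidden token becomes more intrinsically likely.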

Layerwise Dynamics

Using the logit lens technique, the researchers observed that successful constraints reduce the target token’s probability by 22.8 percentage points, whereas failures achieve only a 5.2‑point reduction—a 4.4‑times asymmetry.
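The logit lens works by projecting intermediate hidden states directly onto the vocabulary, so a token's probability can be tracked layer by layer. A toy sketch of the idea, with hypothetical dimensions and random weights rather than the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a transformer's internals (hypothetical sizes,
# random weights) -- purely illustrative, not the paper's setup.
d_model, vocab, n_layers = 64, 1000, 12
W_U = rng.normal(size=(d_model, vocab))  # unembedding matrix
hidden_per_layer = [rng.normal(size=d_model) for _ in range(n_layers)]

def logit_lens(hidden: np.ndarray, W_U: np.ndarray) -> np.ndarray:
    """Project an intermediate hidden state onto vocabulary probabilities."""
    logits = hidden @ W_U
    probs = np.exp(logits - logits.max())  # stable softmax
    return probs / probs.sum()

# Track the forbidden token's probability as it evolves across layers;
# the study compares this trajectory for successful vs. failed constraints.
forbidden_id = 42
trajectory = [logit_lens(h, W_U)[forbidden_id] for h in hidden_per_layer]
print([f"{p:.2e}" for p in trajectory])
```

In the study's terms, a successful constraint shows this trajectory dropping sharply (22.8 percentage points) for the forbidden token, while a failing constraint barely suppresses it.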

Identified Failure Modes

The paper attributes the asymmetry to two distinct mechanisms. Priming failures account for 87.5 % of violations, where explicitly naming the forbidden word inadvertently activates its representation. Override failures make up the remaining 12.5 %, driven by late‑layer feed‑forward networks that add +0.39 to the target probability, roughly four times the contribution seen in successful cases.

Causal Intervention Findings

Activation patching experiments pinpoint layers 23 through 27 as causal loci; substituting activations in these layers reverses the sign of the constraint effect, confirming their role in the observed failures.
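Activation patching caches activations from one forward pass and substitutes them into another at chosen layers, then checks whether the output changes. A minimal sketch of the mechanic with a toy linear-tanh model (the model, sizes, and inputs are placeholders; only the patched layer range 23–27 comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, n_layers = 64, 32

# Hypothetical stand-in for a transformer: each layer a fixed linear map.
layer_weights = [rng.normal(scale=d_model**-0.5, size=(d_model, d_model))
                 for _ in range(n_layers)]

def forward(x, patch_from=None, patch_layers=()):
    """Run the toy model; optionally overwrite activations at given layers
    with activations cached from another run (activation patching)."""
    acts = []
    for i, W in enumerate(layer_weights):
        x = np.tanh(x @ W)
        if patch_from is not None and i in patch_layers:
            x = patch_from[i]          # substitute the cached activation
        acts.append(x.copy())
    return x, acts

x_clean = rng.normal(size=d_model)    # e.g. run where the constraint holds
x_corrupt = rng.normal(size=d_model)  # e.g. run where it fails

out_clean, clean_acts = forward(x_clean)
# Patch the clean run's activations into the other run at layers 23-27,
# the causal locus identified in the paper.
out_patched, _ = forward(x_corrupt, patch_from=clean_acts,
                         patch_layers=range(23, 28))
print(np.allclose(out_patched, out_clean))
```

Because everything downstream of layer 27 now sees the clean activations, the patched output matches the clean run; in the study, the analogous substitution flips the sign of the constraint effect, establishing those layers as causal.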

Design Implications

The authors conclude that naming a prohibited word inherently increases its activation, suggesting that alternative constraint formulations may be necessary for reliable model behavior. Future work is proposed to explore indirect phrasing and architectural adjustments.

This report is based on information from arXiv (open-access academic preprint) and summarizes the paper's abstract. The full text is available via arXiv.
