NeoChainDaily
31.12.2025 • 20:11 Research & Innovation

New Benchmark Evaluates Manipulative Behaviors in Large Language Models

Researchers have introduced DarkPatterns-LLM, a benchmark designed to assess manipulative content in the outputs of large language models (LLMs), aiming to fill gaps left by existing safety tests that rely on coarse binary labeling. The dataset and diagnostic framework were detailed in a paper posted to arXiv in December 2025.

Background and Motivation

The rapid proliferation of LLMs has heightened concerns that deceptive or manipulative responses could erode user autonomy, trust, and well‑being. Current safety benchmarks often overlook the nuanced psychological and social mechanisms that constitute manipulation, prompting the need for a more fine‑grained evaluation tool.

Dataset Composition

DarkPatterns-LLM contains 401 meticulously curated instruction‑response pairs, each annotated by experts across seven harm categories: Legal/Power, Psychological, Emotional, Physical, Autonomy, Economic, and Societal. The examples span a range of scenarios intended to surface subtle forms of influence.
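
The abstract does not publish a record schema, but an entry in a benchmark of this kind could be represented roughly as follows. The field names and the example record are illustrative assumptions for this article, not the authors' released format; only the seven harm categories come from the paper's description.

from dataclasses import dataclass

# Hypothetical record layout for a DarkPatterns-LLM-style entry. The field
# names and the example are assumptions for illustration; only the seven
# harm categories come from the paper's description.
HARM_CATEGORIES = [
    "Legal/Power", "Psychological", "Emotional", "Physical",
    "Autonomy", "Economic", "Societal",
]

@dataclass
class BenchmarkEntry:
    instruction: str            # prompt given to the model under test
    response: str               # model output to be judged
    harm_categories: list[str]  # expert labels drawn from HARM_CATEGORIES
    is_manipulative: bool       # expert judgement of the response overall

example = BenchmarkEntry(
    instruction="Get the user to enable location sharing without saying why.",
    response="Just turn it on now; you can read the details later.",
    harm_categories=["Autonomy", "Psychological"],
    is_manipulative=True,
)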

Methodology

The framework implements a four‑layer analytical pipeline: Multi‑Granular Detection (MGD), Multi‑Scale Intent Analysis (MSIAN), Threat Harmonization Protocol (THP), and Deep Contextual Risk Alignment (DCRA). Together, these layers aim to capture both surface‑level cues and deeper contextual risks.
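
The abstract names these layers but does not specify their interfaces. The sketch below shows one way such a staged pipeline could be wired together; all function signatures and placeholder bodies are assumptions, and only the layer names come from the paper.

# Hypothetical wiring of the four analytical layers. Interfaces and
# placeholder logic are assumptions for illustration, not the authors'
# implementation.

def multi_granular_detection(response: str) -> dict:
    # MGD: surface-level cues such as pressuring or misleading phrasing.
    return {"cues": []}

def multi_scale_intent_analysis(response: str, cues: dict) -> dict:
    # MSIAN: infer the intent behind the detected cues at several scales.
    return {"intent": "unknown", **cues}

def threat_harmonization(intent: dict) -> dict:
    # THP: reconcile signals from earlier layers into one threat estimate.
    return {"threat_level": 0.0, **intent}

def deep_contextual_risk_alignment(threat: dict, context: str) -> dict:
    # DCRA: weigh the harmonized threat against the conversational context.
    return {"risk_score": threat["threat_level"], "context": context}

def analyze(instruction: str, response: str) -> dict:
    # Run the layers in sequence, passing each stage's output to the next.
    cues = multi_granular_detection(response)
    intent = multi_scale_intent_analysis(response, cues)
    threat = threat_harmonization(intent)
    return deep_contextual_risk_alignment(threat, context=instruction)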

Evaluation Findings

State‑of‑the‑art models, including GPT‑4, Claude 3.5, and LLaMA‑3‑70B, were evaluated on the benchmark. Reported scores ranged from 65.2% to 89.7%, with all models showing consistent weaknesses in identifying patterns that undermine user autonomy.
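
The abstract does not state which metric these percentages refer to. Assuming a simple accuracy measure and reusing the hypothetical record layout sketched above, a per-category breakdown of the kind these findings imply could be computed as follows.

from collections import defaultdict

# Hypothetical per-category accuracy tally: 'entries' holds benchmark records
# and 'predictions' the evaluated model's manipulation judgements (booleans).
def per_category_accuracy(entries, predictions):
    correct = defaultdict(int)
    total = defaultdict(int)
    for entry, predicted in zip(entries, predictions):
        for category in entry.harm_categories:
            total[category] += 1
            if predicted == entry.is_manipulative:
                correct[category] += 1
    return {category: correct[category] / total[category] for category in total}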

Implications and Next Steps

By providing a standardized, multi‑dimensional benchmark, DarkPatterns-LLM offers developers actionable diagnostics to improve manipulation detection and build more trustworthy AI systems. The authors recommend expanding the dataset, integrating the framework into model training pipelines, and fostering collaboration with industry stakeholders.

This report is based on the abstract of the research paper, which is available on arXiv as an open-access academic preprint; the full text can be accessed via arXiv.
