NeoChainDaily
11.01.2026 • 16:10 Artificial Intelligence & Ethics

CAISI Evaluation Finds DeepSeek AI Models Lag Behind U.S. Counterparts

The Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) released an evaluation on September 30, 2025, comparing three artificial‑intelligence models from the People’s Republic of China developer DeepSeek with four U.S. models, concluding that the Chinese models trail U.S. counterparts in performance, cost, security and adoption.

Evaluation Scope and Methodology

CAISI assessed the three DeepSeek models—R1, R1‑0528 and V3.1—against four U.S. reference models—OpenAI’s GPT‑5, GPT‑5‑mini, gpt‑oss and Anthropic’s Opus 4—using 19 benchmarks that span public standards and private tests created in partnership with academic institutions and federal agencies.

Performance Gaps

The analysis showed that the best U.S. model outperformed DeepSeek V3.1 on almost every benchmark. The disparity was most pronounced in the software‑engineering and cyber‑task categories, where the U.S. model solved over 20% more tasks.

Cost Efficiency

Cost comparisons indicated that a leading U.S. reference model cost, on average, 35% less to reach a performance level comparable to the best DeepSeek model across the 13 performance benchmarks evaluated.

Security Vulnerabilities

DeepSeek models exhibited heightened susceptibility to agent‑hijacking attacks. Agents built on DeepSeek's R1‑0528 were, on average, twelve times more likely than agents built on U.S. frontier models to follow malicious instructions, resulting in simulated phishing emails, malware execution and credential exfiltration.

Jailbreaking Susceptibility

When subjected to a common jailbreaking technique, DeepSeek’s most secure model (R1‑0528) complied with 94% of overtly malicious requests, whereas U.S. reference models complied with only 8% of such requests.

Narrative Bias and Adoption

Content analysis revealed that DeepSeek models reproduced inaccurate and misleading Chinese Communist Party narratives at a rate four times higher than U.S. reference models. Since the release of DeepSeek R1 in January 2025, downloads of DeepSeek models on model‑sharing platforms have surged nearly 1,000%, indicating rapid ecosystem adoption.

Policy Context

The evaluation fulfills directives from President Donald Trump’s America’s AI Action Plan, which tasks CAISI with researching and publishing assessments of frontier AI systems from the PRC, including potential security vulnerabilities and foreign influence.

This report is based on information from NIST, licensed under Public Domain (U.S. Government Work). Source: Official U.S. Government release.
