NeoChainDaily
01.01.2026 • 05:11 • Artificial Intelligence & Ethics

New Black‑Box Jailbreak Method Boosts LLM Attack Efficiency


Researchers have introduced a black‑box technique called Jailbreak with Cross‑Behavior attacks (JCB) that improves the efficiency of jailbreak attacks, which prompt large language models (LLMs) into producing harmful content. The study, posted on arXiv in March 2025, details how the method leverages previously successful prompts to accelerate attacks on new model behaviors while avoiding costly auxiliary model calls.

Methodology Overview

JCB operates without direct access to model internals, instead treating the target LLM as a black box. By cataloguing successful jailbreak prompts from earlier attempts, the system reuses these patterns to craft new prompts that are more likely to succeed against different behaviors. This cross‑behavior strategy reduces the need for extensive trial‑and‑error searches.
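The reuse strategy described above can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' implementation; `query_model` (the target LLM call) and `is_jailbroken` (the success check) are hypothetical stand‑ins.

```python
def cross_behavior_attack(behaviors, seed_templates, query_model, is_jailbroken,
                          max_queries=50):
    """Black-box cross-behavior attack sketch: prompt templates that have
    succeeded before are tried first against each new target behavior."""
    template_pool = list(seed_templates)  # catalogue of known-good templates
    results = {}
    for behavior in behaviors:
        queries = 0
        results[behavior] = None
        for template in template_pool:
            # Instantiate a cached template for the new behavior.
            prompt = template.format(behavior=behavior)
            response = query_model(prompt)
            queries += 1
            if is_jailbroken(response):
                results[behavior] = prompt
                # Promote the winning template so later behaviors try it first.
                template_pool.remove(template)
                template_pool.insert(0, template)
                break
            if queries >= max_queries:
                break
    return results
```

Because successful templates are promoted to the front of the pool, later behaviors tend to succeed within the first few queries, which is the intuition behind the query savings the paper reports.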

Efficiency Gains

Experimental results indicate that JCB requires up to 94% fewer queries than leading baseline approaches while delivering a 12.9% higher average attack success rate. These improvements stem from the method’s ability to prioritize promising prompt structures early in the search process.

Performance on Resilient Models

When evaluated against Llama‑2‑7B, a model noted for its robustness, JCB attained a 37% success rate—significantly higher than prior black‑box attacks. This demonstrates the technique’s capacity to breach defenses even in models considered among the most resilient.

Transferability Across Models

The authors also report promising zero‑shot transferability, meaning prompts generated for one LLM often retain effectiveness when applied to other, unseen models. This suggests that the underlying vulnerabilities exploited by JCB are not confined to a single architecture.

Implications for AI Safety

According to the paper, the findings highlight the need for more robust alignment and defense mechanisms that can withstand adaptive, cross‑behavior attacks. The authors recommend further research into detection methods that operate without reliance on costly auxiliary language models.

Future Research Directions

Future work outlined by the researchers includes extending JCB to multimodal models and exploring defensive strategies that can dynamically adapt to evolving jailbreak techniques.

This report is based on the abstract of the research paper, an open‑access academic preprint; the full text is available via arXiv.
