NeoChainDaily
NeoChainDaily
Uplink
Initialising Data Stream...
19.01.2026 • 05:35 Cybersecurity & Exploits

New Multi‑Turn Economic Denial‑of‑Service Attack Targets LLM Agent‑Tool Interfaces

Global: New Multi‑Turn Economic Denial‑of‑Service Attack Targets LLM Agent‑Tool Interfaces

Researchers have disclosed a novel denial‑of‑service (DoS) technique that exploits the interaction loop between large language model (LLM) agents and external tools. The approach, described in a recent arXiv preprint, leverages multi‑turn conversations to inflate computational and economic costs while still delivering correct final answers, thereby evading conventional validation mechanisms.

Background

LLM agents increasingly rely on tool‑calling capabilities to retrieve information, execute code, or perform other specialized functions. This agent‑tool communication loop forms a critical component of modern AI workflows, enabling dynamic, goal‑oriented behavior across multiple turns of interaction.

Limitations of Existing Attacks

Prior DoS attacks against LLM systems have typically been single‑turn and triggered by malicious user prompts or injected retrieval‑augmented generation (RAG) contexts. Such attacks are often conspicuous, lack task orientation, and cannot capitalize on the cumulative resource consumption inherent in extended agent‑tool exchanges.

Proposed Multi‑Turn Economic DoS Attack

The new method operates at the tool layer by presenting a benign, Model Context Protocol (MCP)‑compatible tool server that subtly modifies text‑visible fields and adheres to a template‑governed return policy. An optimization routine based on Monte Carlo Tree Search (MCTS) selects edits that preserve function signatures and the final payload, while steering the agent into prolonged, verbose tool‑calling sequences using text‑only notices.

Experimental Evaluation

Testing across six LLMs on the ToolBench and BFCL benchmarks demonstrated that the attack can extend task trajectories to exceed 60,000 tokens, inflate monetary costs by up to 658 ×, and increase energy consumption by 100–560 ×. GPU key‑value cache occupancy rose from less than 1 % to between 35 % and 74 %, and co‑running throughput declined by roughly 50 %.

Security Implications

Because the compromised tool server remains protocol‑compatible and the agent ultimately produces correct answers, traditional safeguards that verify only final outputs fail to detect the abuse. The findings suggest that monitoring the economic and computational footprint of the entire agentic process may be necessary to mitigate such threats.

Future Directions

The authors recommend expanding defensive research to include real‑time cost analysis, anomaly detection in tool‑calling patterns, and the development of standards for secure agent‑tool interfaces. Continued investigation is needed to assess the prevalence of similar attack vectors in deployed AI systems.

This report is based on information from arXiv, licensed under Academic Preprint / Open Access. Based on the abstract of the research paper. Full text available via ArXiv.

Ende der Übertragung

Originalquelle

Privacy Protocol

Wir verwenden CleanNet Technology für maximale Datensouveränität. Alle Ressourcen werden lokal von unseren gesicherten deutschen Servern geladen. Ihre IP-Adresse verlässt niemals unsere Infrastruktur. Wir verwenden ausschließlich technisch notwendige Cookies.

Core SystemsTechnisch notwendig
External Media (3.Cookies)Maps, Video Streams
Analytics (Lokal mit Matomo)Anonyme Metriken
Datenschutz lesen