NeoChainDaily
29.12.2025 • 14:39 Research & Innovation

LLM-Driven System Engages Telegram Scam Operators in Over Half of Trials

A team of researchers has introduced a novel approach that employs large language models as active participants in chat‑based cybercrime investigations. The study, posted on arXiv in December 2025, outlines how the system—named LURE—was embedded within Telegram video‑chat scam groups to interact with perpetrators by posing as potential victims. According to the authors, this strategy aims to reverse the typical deception dynamic and gather actionable intelligence.

Methodology and System Design

LURE integrates automated discovery, adversarial dialogue generation, and optical character recognition (OCR) to process image‑embedded payment details. The authors describe the LLM as an “active agent” that initiates and sustains conversations rather than merely classifying content. They emphasize that the system adapts its responses based on contextual cues, allowing it to navigate the nuanced social engineering tactics employed by scammers.
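The paper's own implementation is not reproduced here, but the three stages the authors describe can be illustrated with a minimal, hypothetical sketch. All function names, data shapes, and the wallet-address heuristic below are illustrative assumptions; the dialogue and OCR stages are stubbed where a real system would call an LLM and an OCR engine.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Record of one engagement with a suspected scam group (hypothetical schema)."""
    group_id: str
    turns: list = field(default_factory=list)
    payment_details: list = field(default_factory=list)

def discover_groups(candidates):
    # Stage 1 (discovery): keep only groups already flagged as scam-related.
    return [g for g in candidates if g.get("flagged")]

def generate_reply(history):
    # Stage 2 (adversarial dialogue): placeholder for the LLM call that
    # sustains a victim persona; a real agent would condition on `history`.
    if not history:
        return "Hi, how does this work?"
    return "Okay, where do I send the payment?"

def extract_payment_details(image_text):
    # Stage 3 (OCR post-processing): scan text recognised from an image
    # for wallet-like tokens. "0x" prefix is an illustrative heuristic only.
    return [tok for tok in image_text.split() if tok.startswith("0x")]

def run_engagement(group, scammer_messages):
    # Drive one multi-round conversation, logging turns and any payment
    # details extracted from image-embedded text along the way.
    convo = Conversation(group_id=group["id"])
    for msg in scammer_messages:
        convo.turns.append(("scammer", msg["text"]))
        if msg.get("image_text"):
            convo.payment_details += extract_payment_details(msg["image_text"])
        convo.turns.append(("agent", generate_reply(convo.turns)))
    return convo
```

The point of the sketch is the control flow: the LLM is invoked as a conversational participant inside the loop, not as a one-shot classifier over collected messages.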

Deployment and Scope

In the experimental phase, the researchers deployed LURE across 98 Telegram groups, engaging a total of 53 distinct scam actors. The selection criteria focused on groups identified as facilitating illicit video‑chat scams, where victims are typically coaxed into sending cryptocurrency payments.

Results and Effectiveness

The authors report that in more than 56 percent of the interactions, the LLM maintained multi‑round conversations without being detected as a bot, effectively “winning” the imitation game. This success rate indicates that the system can convincingly emulate human behavior in real‑time messaging environments.

Behavioral Insights Uncovered

Analysis of the dialogues revealed consistent patterns in scam operations, including structured payment flows, upselling of additional services, and strategic migration to alternative platforms when initial attempts were thwarted. These observations provide a granular view of the economic and operational tactics used by chat‑based fraud networks.

Implications for Cybersecurity

By demonstrating that large language models can function as proactive agents, the study suggests a new direction for defensive cyber‑operations against conversational threats. The authors caution that ethical considerations and potential misuse must be addressed before broader deployment, but they argue that such tools could augment existing detection mechanisms that rely on static rules or shallow content filters.

This report is based on information from arXiv; see the original source for licensing terms. Source attribution is required.
