NeoChainDaily
31.12.2025 • 20:00 Research & Innovation

Humans Show Limited Ability to Distinguish AI‑Generated Images, Study Finds

In an online experiment conducted by Adrien Pavão and collaborators, participants were asked to classify images as either real photographs or AI‑generated creations, and the average accuracy was 54 %—only marginally better than random guessing. The study, submitted to arXiv on 23 December 2025, involved 165 users who completed 233 sessions of 20 images each, drawn from a curated pool of 120 difficult cases that paired real photographs from the CC12M dataset with MidJourney‑produced synthetic images.

Methodology and Dataset

The researchers selected real images from the publicly available CC12M collection and paired each with a carefully curated AI‑generated counterpart created with the MidJourney model. Participants accessed the experiment through an interactive web interface, viewing each image for an average of 7.3 seconds before indicating whether it was real or synthetic. The design emphasized portrait images, restricting the task to relatively simple and comparable visual content.

Performance Metrics

Across all sessions, the overall classification accuracy was 54 %, only a slight advantage over the 50 % baseline expected from random guessing. Repeated attempts by the same users showed limited improvement, suggesting that brief exposure and intuitive judgment do not substantially enhance detection ability.
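Whether a 54 % hit rate is genuinely above chance depends on the number of trials. A minimal back-of-the-envelope check, assuming the 233 sessions each contained 20 images (a reading of the figures above, not confirmed by the abstract) and treating classifications as independent trials:

```python
import math

# Assumed trial count: 233 sessions x 20 images each (not stated explicitly
# in the article; individual classifications are treated as independent).
n_trials = 233 * 20
p_hat = 0.54    # reported average accuracy
p_null = 0.50   # chance baseline

# One-proportion z-test against the 50 % baseline
se = math.sqrt(p_null * (1 - p_null) / n_trials)
z = (p_hat - p_null) / se

# Two-sided p-value via the normal approximation
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"z = {z:.2f}, p = {p_value:.1e}")
```

Under these assumptions the 4-point edge over chance is statistically significant simply because the trial count is large, even though it remains practically marginal for any individual judgment.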

Variability Across Images

While the aggregate accuracy was low, certain images consistently deceived participants more than others. The study notes that some AI‑generated portraits exhibited visual cues that closely mimicked authentic lighting and texture, whereas a few real photographs contained atypical features that led to misclassification.

Implications for Detection Strategies

The findings underscore the growing challenge of relying on human perception alone to identify synthetic media. As generative models continue to improve, the research highlights the need for automated detection tools and clearer ethical guidelines to mitigate potential misinformation.

Study Limitations

The experiment focused exclusively on portrait images and employed a single AI model (MidJourney), which may limit the generalizability of the results to other content types or generative systems. Additionally, the participant pool was self‑selected, possibly introducing bias in familiarity with AI‑generated imagery.

Future work is recommended to expand the image set, incorporate multiple generative architectures, and assess the effectiveness of training programs designed to improve human discernment of synthetic media.

This report is based on information from arXiv, licensed under Academic Preprint / Open Access. Based on the abstract of the research paper. Full text available via ArXiv.
