Study Finds Participants Punish LLM Users in Online Experiment
An online experiment conducted in January 2026 with 491 participants revealed that individuals are willing to sacrifice part of their own monetary endowment to reduce the earnings of peers who relied on large language models (LLMs) for a prior task. The two‑phase study, posted to arXiv on 14 January 2026, examined whether negative attitudes toward AI users translate into costly punitive actions.
Methodology
In Phase I, participants completed a real‑effort task either with or without assistance from an LLM, creating a pool of targets for the second phase. Phase II participants then received a personal endowment and the option to spend a portion of it to diminish the earnings of any of the Phase I targets. This design allowed the researchers to measure voluntary, costly financial sanctions as a function of targets' disclosed and actual LLM usage.
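To make the incentive structure concrete, the following minimal Python sketch models the Phase II decision. The abstract does not specify the endowment size, the targets' earnings, or the price of punishment, so the numbers, the 1:3 cost‑to‑destruction ratio (a common convention in experimental punishment games), and the punish helper below are purely illustrative assumptions, not the paper's actual parameters.

    # Hypothetical sketch of the Phase II punishment decision described above.
    # Endowment, target earnings, and cost ratio are NOT from the paper; they
    # are illustrative assumptions for this summary.
    ENDOWMENT = 10.0        # punisher's personal endowment (assumed)
    TARGET_EARNINGS = 10.0  # Phase I target's earnings (assumed)
    COST_RATIO = 3.0        # units destroyed per unit spent (assumed 1:3 convention)

    def punish(spend: float) -> tuple[float, float]:
        """Return (punisher_payoff, target_payoff) after spending `spend`
        of the endowment to reduce the target's earnings."""
        spend = max(0.0, min(spend, ENDOWMENT))
        destroyed = min(spend * COST_RATIO, TARGET_EARNINGS)
        return ENDOWMENT - spend, TARGET_EARNINGS - destroyed

    # Destroying 36% of a target's earnings (the average reported for
    # exclusive LLM users) costs the punisher 1.2 under these assumptions.
    print(punish(0.36 * TARGET_EARNINGS / COST_RATIO))  # (8.8, 6.4)

The feature the sketch captures is that punishment is costly to the punisher, so any positive spending demonstrates a genuine willingness to pay in order to sanction AI‑assisted work.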
Overall Punishment Levels
On average, participants destroyed 36% of the earnings of targets who relied exclusively on the model. The magnitude of punishment rose monotonically with the degree of actual LLM use, indicating a direct relationship between the extent of reliance on AI and the willingness to impose financial harm.
Impact of Disclosure
The study identified a credibility gap in how disclosed LLM usage was judged. Targets who reported not using the model were punished more harshly than those who verifiably did not use it, suggesting that declarations of “no use” are met with suspicion. Conversely, at high levels of reliance, targets who actually used the model faced stronger penalties than those who merely claimed extensive use.
Interpretation of Findings
According to the authors, these results provide the first behavioral evidence that the efficiency gains offered by LLMs may be offset by social sanctions from peers. The willingness to incur personal cost to penalize AI‑assisted work highlights a potential barrier to widespread adoption of such technologies.
Implications and Future Research
The authors caution that the observed antisocial behavior could influence organizational policies, collaborative platforms, and regulatory discussions surrounding AI integration. They recommend further investigation into the underlying motivations for punitive actions and the role of transparency in mitigating distrust.
This report is based on the abstract of the research paper, an open‑access academic preprint; the full text is available via arXiv.