Researchers Identify 24 Factors Influencing Belief in Misinformation
Researchers have compiled a comprehensive literature review that examines why individuals accept false information, including misinformation, disinformation, and fabricated content. The study, posted to arXiv in January 2026, surveys existing scholarship to pinpoint the drivers behind belief in inaccurate claims. Its primary aim is to guide the development of interventions that bolster societal resilience against manipulation.
Methodological Approach
The authors conducted a systematic review of peer‑reviewed articles, conference papers, and reports, applying inclusion criteria that emphasized empirical findings on belief formation. Through this process, they extracted 24 distinct influence factors and organized them into six overarching categories.
Factor Categories
The six categories encompass demographic factors, personality traits, psychological variables, policy and values, media consumption habits, and preventive mechanisms. Each category aggregates related variables that collectively shape susceptibility to false information.
Demographic Insights
Analysis reveals that lower levels of formal education correlate with higher acceptance of inaccurate claims. Age and socioeconomic status also play roles, though the review emphasizes education as a consistent predictor across studies.
Personality and Psychology
Among personality dimensions, higher extraversion, lower agreeableness, and elevated neuroticism are associated with greater belief in misinformation. Psychological assessments indicate that individuals with low cognitive reflection scores—those less likely to engage in deliberate analytical thinking—are more prone to accept false narratives.
Policy, Values, and Media Use
Policy orientation and personal values influence how audiences evaluate information credibility. Media consumption patterns, particularly reliance on unverified online sources, further amplify vulnerability. Conversely, exposure to fact‑checking and transparent labeling mechanisms can mitigate these effects.
Preventive Strategies
The review highlights the efficacy of interventions such as explicit labeling of false content and prompting users to reflect on the correctness of information before sharing. These measures demonstrate measurable reductions in belief rates when implemented within digital platforms.
Security Implications
By framing belief in false information as a human‑centered security risk, the authors underscore its potential exploitation in social engineering attacks, decision‑making manipulation, and erosion of public trust. The findings aim to inform policymakers, platform designers, and educators seeking to strengthen socio‑technical defenses.
This report is based on the abstract of the research paper, an open-access preprint. The full text is available via arXiv.