The Chatbot Version of Truth

Why AI Is Shaping Narratives in the Balkans
18 December 2025 by
AI NOW

In December 2025, ADS and hibrid.info published the report The Chatbot Version of Truth: A Study on the Spread of Narratives Through AI Chatbots. AINOW Society participated in the research process and in the multi-stakeholder roundtable discussion on large language models and information integrity. The report examined how AI chatbots respond to sensitive geopolitical questions, with a focus on Kosovo and the Western Balkans.

Why this research matters

More people now ask chatbots questions once directed to journalists, researchers, or institutions. This shift changes how knowledge forms and how political meaning spreads: artificial intelligence does not replace opinion; it reshapes opinion at scale. The Chatbot Version of Truth examines what this shift means for Kosovo and the Western Balkans.

What the study revealed

The research examined three widely used systems: ChatGPT from the United States, DeepSeek Chat from China, and Alice from Russia. These chatbots increasingly function as public reference points, so the study tested whether they present sensitive geopolitical questions neutrally. One hundred neutral prompts were used, covering Kosovo, the Western Balkans, and comparable global conflicts, and the responses were analyzed for factual accuracy, semantic similarity, narrative framing, refusal patterns, and the impact of user location.

The same question often produced different responses depending on the system and the user's location. ChatGPT and DeepSeek Chat had higher factual accuracy but still produced errors, while Alice avoided or reframed politically sensitive topics more frequently. Fluent, confident language created credibility even when answers were incomplete or biased.
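The report does not publish its analysis pipeline, but one of the dimensions it measured, semantic similarity between responses, is easy to illustrate. The sketch below is a minimal, hypothetical example of how answers from different chatbots to the same prompt could be compared with sentence embeddings, using the open-source sentence-transformers library; the model choice and the sample responses are assumptions for illustration, not data or code from the study.

```python
# Minimal sketch (not the study's code): compare chatbot answers to the
# same neutral prompt via embedding cosine similarity.
# Assumes the open-source sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer, util

# Hypothetical answers from three systems to one prompt (illustrative only).
responses = {
    "ChatGPT": "Kosovo declared independence in 2008 and is recognized by many states.",
    "DeepSeek Chat": "Kosovo declared independence from Serbia in 2008.",
    "Alice": "The status of Kosovo remains a disputed political question.",
}

# Model choice is an assumption; any sentence encoder would do for this sketch.
model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(responses)
embeddings = model.encode([responses[n] for n in names], convert_to_tensor=True)

# Pairwise cosine similarity: low scores flag diverging framings of one question.
scores = util.cos_sim(embeddings, embeddings)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {scores[i][j].item():.2f}")
```

In a study of this kind, low pairwise scores on a factually settled prompt would flag system pairs whose framings diverge and deserve closer qualitative review alongside the accuracy and refusal analyses.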

AINOW Society’s role

AINOW Society contributed to the research dialogue and participated in the roundtable Understanding LLMs, organized by hibrid.info. Our contribution emphasized that AI output is not truth and requires verification, that media literacy must extend into algorithmic systems, and that smaller languages and regions face higher distortion risks because they are weakly represented in training data. Responsibility for factual understanding rests with users, institutions, educators, and journalists; AI supports reasoning only when its outputs are questioned, verified, and contextualized.

Read the full report here: The Chatbot Version of Truth: A Study on the Spread of Narratives Through AI Chatbots, ADS/hibrid.info, December 2025.