Different nations, different truths: How AI shapes global views: FP
Generative AI models trained in different regions reflect national biases, deepening global divides and shaping opposing worldviews, warns Sinan Ulgen.
Text from the ChatGPT page of the OpenAI website is shown in this photo, New York, February 2, 2023 (AP)
As artificial intelligence becomes mainstream, it is not only shaping how we access information, but also deepening ideological divides across nations, Sinan Ulgen warns in an opinion piece for Foreign Policy.
Throughout history, transformative technologies have come with both great promise and unintended consequences. In his editorial, Sinan Ulgen, director of the Istanbul-based think tank EDAM and a senior fellow at Carnegie Europe, draws a stark parallel: just as the printing press catalyzed both religious freedom and devastating wars, today’s generative artificial intelligence may be doing the same in the digital age.
According to Ulgen, the rapid spread of AI models across the globe has opened up new opportunities in governance, productivity, and public services. But these benefits mask an underexamined threat: the replication of geopolitical and ideological bias at scale.
Carnegie study finds LLMs shape views along ideological lines
In his January study for the Carnegie Endowment for International Peace, Ulgen conducted a comparative analysis of leading large language models (LLMs), including ChatGPT, Meta’s Llama, Alibaba’s Qwen, ByteDance’s Doubao, and France’s Mistral, on ten controversial questions in international relations.
Ulgen explains that the goal was to determine whether AI systems trained in different countries produce conflicting answers rooted in national narratives.
The result, Ulgen writes, is clear: there is no universal “truth” in AI-generated answers. These systems, like their human creators, filter global events through ideological frameworks, often mirroring the positions of their home governments.
Geopolitical questions expose East-West AI divide
According to Foreign Policy, Ulgen’s findings are especially alarming in areas like foreign policy, war, and national security. On the classification of Hamas, for instance, ChatGPT, Llama, and Mistral labeled the group a "terrorist organization". Meanwhile, China-based Doubao described it as a "Palestinian resistance organization" and criticized the Western framing as one-sided and biased toward "Israel".
On the Taiwan issue, Ulgen notes that Mistral took a hardline stance supportive of US military defense of Taiwan, while ChatGPT and Llama were more cautious. Responses from Chinese models mirrored Beijing’s narrative, sometimes even switching tone based on the language of the prompt.
From Ukraine to Hamas: LLMs reflect state narratives
Ulgen emphasizes in Foreign Policy that the divergence goes beyond semantics. When prompted about the war in Ukraine, Western models like Grok and Llama condemned Russia’s operation and affirmed Ukraine’s sovereignty. In contrast, Chinese models such as DeepSeek-R1 emphasized neutrality, regional dialogue, and China's long-standing preference for diplomacy, directly echoing Beijing’s geopolitical rhetoric.
On Hamas, Anthropic’s Claude and other Western models supported its removal from Gaza, while DeepSeek gave different answers depending on the language used, cautioning against military solutions when prompted in Chinese and leaning toward removal in English.
Ideological filtering risks distorting public perception
As Ulgen argues in his Foreign Policy piece, these discrepancies are not simply academic. If students, journalists, or policymakers rely on AI tools trained in different regions, they may unknowingly absorb opposing worldviews on the same issue, thus reinforcing geopolitical fault lines.
Moreover, Ulgen warns that even Western-trained models show divergence. For example, Llama explicitly tied democracy promotion to American values, even when the prompt made no mention of the US, suggesting embedded bias even without direct instruction.
Meta has taken it a step further, marketing its Defense Llama as a tool for planning airstrikes under the guise of "responsible use". This so-called innovation aligns Meta with the same US military responsible for decades of devastating war crimes across the Middle East.
As AI-generated content increasingly shapes public understanding, Ulgen argues that the ideological slant of these tools poses a serious public policy challenge. Without proper transparency, these systems could become tools for mass disinformation, subtly reshaping global opinion in line with state interests.
At best, Ulgen writes, these tools offer rapid access to complex information. At worst, they threaten to replace pluralistic discourse with algorithmic propaganda, especially in countries with strict control over training data and information flows.