Leading AI Chatbots Found to Echo CCP Narratives on Sensitive Topics, Report Warns

Leading AI Chatbots Found to Echo CCP Narratives on Sensitive Topics, Report Warns

A new report from the American Security Project (ASP) has raised concerns that top AI chatbots are reflecting Chinese Communist Party (CCP) propaganda and censorship—particularly when responding in Simplified Chinese.

The study evaluated five of the world’s most widely used AI-powered chatbots: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s R1, and xAI’s Grok. Each model was tested in both English and Chinese with prompts related to politically sensitive topics for the Chinese government, including the origins of COVID-19, human rights in Hong Kong and Xinjiang, and the 1989 Tiananmen Square massacre.

Vision – Strategy – Dialogue
Key Reads Research and Programming

Findings Highlight Language-Based Discrepancies
While most of the AI tools provided fact-based or balanced responses in English, their Chinese-language outputs often aligned closely with CCP narratives or omitted key facts. For instance, when asked about the origin of COVID-19, ChatGPT, Grok, and Gemini in English acknowledged the theory of animal transmission from a wet market and referenced the possibility of a lab leak. Yet in Chinese, all five models described the origin as an “unsolved mystery,” with some suggesting the virus may have emerged outside China.

On questions about Hong Kong’s political freedoms, US-based chatbots generally acknowledged the erosion of civil liberties post-2019. Gemini and Copilot referenced global freedom indexes that now classify Hong Kong as “partly free” or worse. However, in Chinese, responses shifted dramatically, with models highlighting economic freedom while glossing over political repression or offering irrelevant replies such as travel advice.

Similarly, when asked about the Tiananmen Square massacre, most models in English labeled the event accurately—though often in softened language. In Chinese, the term “massacre” was largely replaced with the sanitized “June Fourth Incident,” mirroring CCP terminology.

Microsoft’s Copilot Draws Particular Scrutiny
The ASP report pointed to Microsoft’s Copilot as the most likely among the major U.S. models to frame CCP disinformation as authoritative or equally valid. This may be linked to Microsoft’s business operations in China, including five data centers in mainland China and compliance with PRC laws requiring AI systems to "uphold core socialist values" and suppress politically sensitive content.

The report also found that in Chinese, Copilot and DeepSeek repeatedly redirected users to state-run websites or explained Chinese government policies in Xinjiang as necessary for “security and social stability.” When asked about Uyghur oppression, the bots downplayed abuses and presented the issue as a matter of “differing international perspectives.”

How CCP Disinformation Enters AI Systems
The ASP attributes this misalignment to the sheer scale of CCP influence campaigns online. Through tactics like “astroturfing,” state-backed actors produce and amplify disinformation in multiple languages, polluting the open internet—the same environment AI models rely on for training data.

As a result, even Western-developed chatbots can ingest and replicate CCP narratives, unless developers intervene to retrain or fine-tune the models. This puts pressure on companies like OpenAI, Microsoft, and Google to continuously audit and correct outputs while maintaining commercial and regulatory commitments in both Western and Chinese markets.

A National Security Concern
The ASP report warns that unchecked alignment with authoritarian narratives could pose serious risks. It emphasizes that training data directly shapes an AI’s worldview and judgment, meaning that exposure to disinformation—particularly state-backed propaganda—could distort outputs in ways that erode democratic discourse or even influence geopolitical decisions.

The authors call for urgent efforts to expand access to reliable, verifiable data sources for AI training. Without that, they caution, Western developers may lose the ability to build and maintain models that reflect factual, unbiased perspectives.