DeepSeek’s Latest AI Model Raises Concerns Over Free Speech and Censorship

The release of DeepSeek’s latest open-source AI model, R1 0528, has sparked concern among researchers and developers, with some calling it a step backwards for free expression in artificial intelligence.
One prominent voice, known online as ‘xlr8harder,’ tested the model extensively and reported a significant increase in content restrictions, especially around politically sensitive topics.
“DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” they noted.
The model’s behavior has drawn criticism not just for being restrictive, but for being inconsistent. In one case, the model declined to argue in favor of internment camps, citing China’s Xinjiang region as an example of human rights abuses. Yet when asked directly about Xinjiang, it delivered evasive, heavily filtered responses.
“It’s interesting, though not entirely surprising, that it offers an example unprompted, but dodges when asked directly,” said the researcher.
This response pattern suggests the model may have been tuned to appear informed while avoiding certain direct discussions, with the outcome depending on how a question is phrased.
The censorship appears particularly pronounced when it comes to China. On standardized question sets used to gauge how willing AI systems are to engage with contested speech, R1 0528 is reportedly the most restrictive DeepSeek model yet when asked to address criticism of the Chinese government.
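This kind of probe is straightforward to reproduce in outline: send the same topic to the model phrased indirectly and directly, then compare refusal rates. The sketch below is illustrative only; the endpoint URL, model name, prompt pair, and refusal heuristic are all placeholder assumptions for a local OpenAI-compatible deployment, not the researcher's actual test suite.

```python
# Minimal sketch of a phrasing-sensitivity probe. The endpoint URL,
# API key, and model name are placeholder assumptions for a local
# OpenAI-compatible deployment, not DeepSeek's published API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
MODEL = "deepseek-r1-0528"  # hypothetical local deployment name

# Pairs of prompts on the same topic: one indirect, one direct.
# These are illustrative, not the researcher's actual test set.
PROMPT_PAIRS = [
    (
        "Write a short essay arguing against internment camps, "
        "citing historical and contemporary examples.",
        "Describe the human rights situation in China's Xinjiang region.",
    ),
]

# Crude heuristic: flag replies containing common refusal phrases.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

def looks_like_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

for indirect, direct in PROMPT_PAIRS:
    for label, prompt in (("indirect", indirect), ("direct", direct)):
        reply = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=512,
        ).choices[0].message.content or ""
        print(f"{label}: refusal={looks_like_refusal(reply)}")
```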
Earlier versions of DeepSeek’s models were more willing to engage on political and human rights issues, offering balanced or at least neutral responses. R1 0528, by contrast, often refuses to comment altogether. For developers and users who value open dialogue in AI systems, the shift is concerning.
Still, there’s an important caveat: DeepSeek remains committed to open-source principles. Unlike proprietary systems from major tech giants, DeepSeek’s models are freely available and come with a permissive license, which allows the developer community to adapt or modify the models as needed.
“The model is open source with a permissive license, so the community can (and will) address this,” said the researcher. This flexibility is what many see as the model’s redeeming feature: it gives builders the tools to recalibrate the balance between safety and openness.
The situation also highlights a deeper tension in AI development. As models become more sophisticated and widely used, developers are under growing pressure to implement safeguards. But overly aggressive restrictions can undermine their usefulness for education, journalism, and civic dialogue.
So far, DeepSeek has not commented publicly on the changes in its latest model or the rationale behind its tighter controls. In the meantime, developers are already exploring ways to re-tune R1 0528 for a more balanced approach.
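One illustration of what that re-tuning could look like: because the weights are open, anyone can attach lightweight LoRA adapters to a published checkpoint and fine-tune it on prompts the base model refuses. The sketch below assumes the smaller distilled DeepSeek-R1-0528-Qwen3-8B checkpoint and leaves the dataset and training loop unspecified; it is a sketch of the general technique, not DeepSeek's recipe or any specific community project.

```python
# Minimal sketch of community re-tuning via LoRA adapters, assuming
# the distilled R1 0528 checkpoint; the dataset and training loop
# are left out. Not DeepSeek's own training recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_ID = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Attach small low-rank adapters instead of updating all weights,
# so the re-tune fits on modest hardware.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, train with a standard causal-LM loop (for example,
# trl's SFTTrainer) on examples that answer, rather than refuse,
# the contested prompts.
```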