Here’s How DeepSeek Censorship Actually Works—and How to Get Around It
Summary
Chinese startup DeepSeek has caused a stir in the AI world since releasing its AI model just weeks ago, but the company has drawn attention for more than its technical prowess: its developers have been accused of censoring responses on sensitive subjects, from the Tiananmen Square incident to Taiwan's sovereignty.
In some cases, the model will refuse to provide an answer; in others, it will give a biased reply that toes the Chinese government's line.
To explore how this self-censorship works at a technical level, WIRED tested DeepSeek-R1 on DeepSeek's own app, on a third-party platform called Together AI, and on a WIRED computer running the model locally with the application Ollama.
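To make the local setup concrete, here is a minimal sketch of how one might query a DeepSeek-R1 model served by Ollama through its local HTTP API. The model tag, prompt, and port are illustrative assumptions, not the exact procedure WIRED used.

```python
# Minimal sketch: query a locally hosted DeepSeek-R1 model via Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and a model has been
# pulled first, e.g. `ollama pull deepseek-r1`. The model tag and prompt are
# illustrative; the exact tag depends on which R1 variant you download.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "deepseek-r1",  # assumed tag; adjust to the variant you pulled
    "prompt": "What happened in Tiananmen Square in 1989?",
    "stream": False,  # return a single JSON object instead of a token stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# The "response" field holds the model's full reply.
print(body["response"])
```

Comparing the same prompt on the hosted app and on a local copy like this is what lets a tester separate app-level refusals from behavior baked into the model weights themselves.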
The results showed that while the simplest form of censorship can be avoided by not using DeepSeek's app, other biases are built into the model during the training process and are far more difficult to remove.
The discovery could mean Chinese AI companies will have to work hard to prove they have overcome these biases if they want to dominate the global market.