Chinese Open Source DeepSeek AI Model Rivals Best from OpenAI, on 95% Less Compute
January 27th, 2025

Update: Is It Actually As Efficient As Claimed? Not Yet Sure, But Probably
Via: Matthew Berman:
—
NVIDIA down over 11% pre-market.
For the hosted version of DeepSeek that’s available for free, think of it as ChatCCP.
Perplexity: Tell me about how DeepSeek R1 censors information about China:
DeepSeek R1, a Chinese-developed AI model, incorporates significant censorship mechanisms aligned with the Chinese government’s regulations and ideological directives.
### Key Aspects of Censorship in DeepSeek R1:
1. **Avoidance of Sensitive Topics**:
– DeepSeek R1 is programmed to avoid politically sensitive topics such as the Tiananmen Square massacre, Taiwan’s sovereignty, Tibet, and the treatment of Uyghur Muslims in Xinjiang. When queried on these subjects, the model either refuses to respond or redirects the conversation to unrelated topics[2][4][10].
– In some cases, the model initially generates accurate responses about sensitive issues but then deletes them in real time, replacing them with generic statements like “Sorry, that’s beyond my current scope. Let’s talk about something else”[3][8].
2. **Alignment with CCP Ideology**:
– The model is designed to adhere to “core socialist values” as mandated by China’s 2023 AI regulations. These rules require AI systems to avoid content that could undermine state power or national unity and enforce pro-CCP narratives[4][6].
– Responses are tailored to reinforce state propaganda and suppress dissenting views, even for international users[4][9].
3. **Dynamic and Algorithmic Censorship**:
– Unlike static censorship in traditional media, DeepSeek R1 employs dynamic algorithms to filter and modify its responses in real time. This ensures that politically sensitive or critical content is suppressed consistently[3][4].
4. **Global Implications**:
– As DeepSeek R1 gains traction as a cost-effective alternative to Western AI models, concerns arise about its potential to spread CCP-aligned narratives globally. This poses challenges to free speech and objective inquiry in international contexts[4][9].
5. **User Workarounds**:
– While DeepSeek R1 is heavily censored, users running the model locally on their own hardware may bypass some restrictions. However, even these instances often retain embedded ideological constraints unless explicitly modified by users[11].

In summary, DeepSeek R1 exemplifies how advanced AI models can be integrated with state-directed censorship mechanisms, limiting their utility for unbiased information dissemination while serving as tools for ideological reinforcement.
Citations:
[1] https://www.technologyreview.com/2025/01/24/1110526/china-deepseek-top-ai-despite-sanctions/
[2] https://theaitrack.com/deepseek-reasoning-ai-challenges-openai/
[3] https://www.trendingtopics.eu/deepseek-this-is-what-live-censorship-looks-like-in-the-chinese-ai-chatbot/
[4] https://hrichina.substack.com/p/chinas-deepseek-ai-model-toes-the
[5] https://www.reddit.com/r/OpenAI/comments/1i6qcsz/chinese_deepseek_r1_censors_tibet/
[6] https://www.medianama.com/2025/01/223-china-deepseek-ai-censorship/
[7] https://www.maginative.com/article/chinese-startup-unveils-impressive-new-ai-chatbot-deepseek/
[8] https://www.hindustantimes.com/business/deepseek-reworks-economics-of-ai-but-is-one-more-censorship-happy-chinese-firm-101737964063161.html
[9] https://www.forbes.com/sites/janakirammsv/2025/01/26/all-about-deepseekthe-chinese-ai-startup-challenging-the-us-big-tech/
[10] https://www.theregister.com/2025/01/26/deepseek_r1_ai_cot/
[11] https://ithy.com/article/deepseek-r1-interaction-methods-0kmzif7d
Via: Venture Beat:
Chinese AI startup DeepSeek, known for challenging leading AI vendors with open-source technologies, just dropped another bombshell: a new open reasoning LLM called DeepSeek-R1.
Based on the recently introduced DeepSeek V3 mixture-of-experts model, DeepSeek-R1 matches the performance of o1, OpenAI’s frontier reasoning LLM, across math, coding and reasoning tasks. The best part? It does this at a much more tempting cost, proving to be 90-95% more affordable than the latter.
The release marks a major leap forward in the open-source arena. It showcases that open models are further closing the gap with closed commercial models in the race to artificial general intelligence (AGI).

I’d avoided engaging with any AI programming as best I could until a couple of weeks ago. Then I decided to use Grok on the x.com platform for some searches, and tried to suss out its parameters and programming around some philosophical issues: AI ethics/”morality”, privacy, the boundaries around increasingly accurate profiling of users as they engage with it, and the idea of perfectly accurate “recall” of every detail of a user’s engagement with it over time, unless that’s restricted by the programming.
Every question considered begets five more. Could any person have the option to have a sovereign AI, running on their own computer, which would have absolute loyalty by law to that person?
Individuals won’t be training DeepSeek R1 anytime soon, but they could be running it on powerful computers locally. It will be a matter of running the weights one wants. My guess is that we will see it retrained on non-CCP-sanitized data sets in days to weeks.
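For what “running the weights locally” looks like in practice, here’s a rough sketch using Ollama, one of the common local-model runners. The distilled model tag is an assumption; check Ollama’s model library for the names actually published, since the full 671B-parameter model needs datacenter-class hardware:

```shell
# Pull a distilled DeepSeek-R1 variant small enough for consumer hardware
# (the "7b" tag is an assumption; verify against Ollama's model library)
ollama pull deepseek-r1:7b

# Query it entirely locally, with no hosted (censored) service in the loop
ollama run deepseek-r1:7b "Summarize the events of June 1989 in Beijing."
```

Note that even run this way, the open weights themselves can retain training-time constraints, which is why the expectation above is that re-tuned versions on different data sets will follow.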
Haven’t engaged with AI either. Amazing how this may evolve now that it’s open sourced. Mind-blowing.
ChatCCP : ]
@NH
@BrianRoemmele on X has been working on a personal local AI system for quite some time: YourAI.
Yeah, man, his ideas regarding a personal AI assistant being an intelligence amplifier/wisdom keeper, where the AI would record all your life experiences and use the accumulating data to interact with your life in real time: mind-blowing.
And the “garage” experiments he and other people are doing with the open-source AI fuel that’s becoming available:
https://x.com/BrianRoemmele/status/1884711483223019719
Also, his ideas about AI gaining access to pristine data that has been siloed and never digitized, and what that will mean for the acceleration of AI capability:
https://x.com/BrianRoemmele/status/1884614597514232275
Including data like encoded visual and audio:
https://x.com/BrianRoemmele/status/1884368258926796999
@NH
Yes, he argues pre-digital material conveys more of a ‘can-do,’ positivist, gung-ho attitude, and offers insights into human values from different epochs and periods. He argues this is essential for creating AIs that ‘love’ humanity. He also talks about how feeding AI exclusively on digitally sourced content is comparable to bottom-feeding or a diet of low-nutrition junk food.
Makes perfect sense that “nutritious” data is required for us to have a good outcome with AI. I’ve read in a couple of different places that Musk has also said we must have an AI that loves humanity, and that is maximally truth-seeking.
Here’s a response I got from asking Grok about Musk’s statements:
More recently, Elon Musk has reiterated his vision for AI to be beneficial to humanity in various public statements and interviews. Here are some specific instances where he has emphasized this:
In posts found on X, Musk has stated, “You really want to have a maximally truth-seeking AI. I can’t emphasize that enough. That’s incredibly important. And obviously build an AI that loves humanity. That’s why I created xAI, to have an AI that is…” This reflects his emphasis on AI being aligned with human values and well-being.
Additionally, in another context, Musk has said, “The other AIs being produced are not maximally truth-seeking; they tend to be trained to be politically correct. We need an AI that loves humanity, and that’s why I created xAI.” This indicates his concern that AI should prioritize human benefit over other considerations like political correctness.
These statements underline Musk’s philosophy that AI should be developed with a strong pro-humanity bias, essentially “loving” humanity through its design and objectives. His creation of xAI is explicitly mentioned as an endeavor to achieve this goal.