OpenAI is urging the US government to step up its campaign to limit the global reach of China’s artificial intelligence and to ban AI models developed by China’s DeepSeek.
OpenAI, which describes the company as “state-subsidized” and “state-controlled,” warns that DeepSeek’s models pose national security risks, particularly with regard to data privacy and potential foreign influence.
In its policy submission to the Office of Science and Technology Policy, OpenAI highlighted the risk of DeepSeek’s AI systems being manipulated by Chinese authorities. The proposal recommends banning such models in sensitive sectors, including government, defense, and critical infrastructure services.
Rising alarm: Why OpenAI considers DeepSeek a risk
At the heart of OpenAI’s concern is the possibility that DeepSeek’s models could be leveraged to collect sensitive data, manipulate outputs, and conduct cyber operations on behalf of the state.
According to OpenAI, these systems could be compelled to comply with CCP demands, opening the door to misuse in activities such as identity theft and the fraudulent extraction of intellectual property.
Chris Lehane, Vice President of Global Affairs at OpenAI, underscored the stakes, warning that building on DeepSeek’s models in critical infrastructure and other high-risk use cases carries significant risk, given the potential that the CCP could compel DeepSeek to manipulate its models to cause harm.
The proposal further suggests that the unchecked proliferation of models like DeepSeek’s could undermine democratic processes and pose a threat to global cybersecurity. OpenAI’s concerns are rooted in the fear that authoritarian regimes could use advanced AI to undermine open societies.
Racing against the clock: DeepSeek’s rapid growth
Meanwhile, DeepSeek is racing to strengthen its competitive position. The company has accelerated the release of its R2 model, originally scheduled for May 2025, which is now expected to debut within a few weeks. This fast-track decision reflects both intensifying competition and mounting regulatory pressure.
The move is also in line with DeepSeek’s broader strategic response to global scrutiny. The company faces growing restrictions from Western governments: the US Navy banned the use of DeepSeek’s AI models in military systems in January, and Texas followed with a similar ban in February, citing security risks.
The Trump administration is also expected to ban DeepSeek from US government devices due to national security concerns.
Italy has also launched an investigation into DeepSeek’s data privacy practices, examining its compliance with GDPR regulations. Compounding these challenges, OpenAI and Microsoft have reportedly launched an internal review to determine whether DeepSeek gained unauthorized access to OpenAI’s training data, particularly in relation to its earlier R1 model.
The allegations also point to DeepSeek’s controversial use of distillation, a practice in which an AI model learns by mimicking the outputs of another system. According to a report by the Associated Press, OpenAI suspects the technique was used to replicate aspects of its own models, raising further concerns about intellectual property theft.
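The distillation practice described above can be sketched in a few lines: a “student” model is trained to match a “teacher” model’s softened output distribution rather than ground-truth labels. The toy linear models, temperature value, and training loop below are illustrative assumptions only, not a representation of DeepSeek’s or OpenAI’s actual systems.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer targets."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-in "teacher": a fixed linear classifier whose outputs are imitated.
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(256, 4))
T = 2.0  # distillation temperature (illustrative choice)
soft_targets = softmax(X @ W_teacher, T)

# "Student": an independent model trained only on the teacher's soft
# outputs -- no ground-truth labels are involved.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(500):
    p = softmax(X @ W_student, T)
    # Gradient of cross-entropy(soft_targets, p) w.r.t. student weights.
    grad = X.T @ (p - soft_targets) / len(X)
    W_student -= lr * grad

# After training, the student's predictions track the teacher's.
agreement = np.mean(
    softmax(X @ W_student).argmax(1) == softmax(X @ W_teacher).argmax(1)
)
print(f"student/teacher agreement: {agreement:.2f}")
```

The key point for the IP dispute is visible in the loop: the student never sees the teacher’s weights or training data, only its query outputs, which is why distillation against a proprietary model via its API is contentious.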
DeepSeek founder Liang Wenfeng met with Chinese President Xi Jinping earlier this year, further amplifying concerns about the company’s ties to Chinese state policy.
DeepSeek also faces logistical challenges in its development. The company is stockpiling Nvidia chips amid concerns that US sanctions could limit its access to advanced AI hardware. Without cutting-edge hardware, DeepSeek may struggle to keep pace with its global competitors.
The continued tightening of US export controls on AI chips has likewise driven Nvidia chip stockpiling by other Chinese companies that want to run models like DeepSeek’s.
OpenAI’s broader regulatory strategy and policy push
OpenAI’s proposals go beyond a ban on DeepSeek. The company is also calling for broader AI regulations aimed at strengthening US technological leadership.
Key recommendations include increasing investment in AI infrastructure, relaxing restrictive state laws, and implementing stricter export controls to prevent advanced AI technologies from reaching hostile countries.
Additionally, OpenAI advocates allowing AI models to be trained on copyrighted materials under the fair use doctrine. According to OpenAI, this policy is essential to maintaining the competitiveness of US-developed AI systems.
Without it, OpenAI claims, US companies could fall behind in the global AI race, and $175 billion in investment could shift to Chinese firms.