Following on OpenAI’s heels, Google has announced policy proposals in response to the Trump administration’s call for a national “AI Action Plan.” The tech giant endorses weak copyright restrictions on AI training and “balanced” export controls that “protect national security while enabling US exports and global operations.”
“The US needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally,” Google wrote in the document. “For too long, AI policymaking has paid disproportionate attention to risk. In many cases, it has ignored the costs that misguided regulation imposes on innovation, national competitiveness, and scientific leadership. This is beginning to change under the new administration.”
One of Google’s more controversial recommendations concerns the use of IP-protected materials.
Google argues that “fair use and text-and-data-mining exceptions” are “critical” to AI development and AI-related scientific innovation. Like OpenAI, the company is seeking to codify a right to train on publicly available data, including copyrighted data, largely without restriction.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote.
Google has reportedly trained a number of models on publicly available, copyrighted data, and is fighting lawsuits from data owners who accuse the company of failing to notify or compensate them before doing so. US courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says “impose disproportionate burdens on US cloud service providers” and “may undermine economic competitiveness goals.” That contrasts with statements from Google competitors such as Microsoft, which said in January that it could comply with the rules “fully.”
Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted companies seeking large clusters of chips.
Elsewhere in its proposal, Google calls for “long-term, sustained” investment in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that could be useful for commercial AI training, and allocate funding to “early-market R&D” while ensuring that computing and models are “widely available” to scientists and institutions.
Google also urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework, pointing to the chaotic regulatory environment created by the patchwork of US state laws. Just over two months into 2025, the number of pending AI bills in the US had grown to 781, according to an online tracking tool.
Google cautions the government against imposing what it sees as onerous obligations around AI systems, such as usage liability requirements. In many cases, Google argues, model developers should not be held liable for misuse because they have “little or no visibility or control” over how a model is being used.
Historically, Google has opposed laws like California’s defeated SB 1047, which spelled out precisely what precautions an AI developer should take before releasing a model and the circumstances in which developers could be held liable for model-induced harms.
“Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google writes.
Addressing disclosure requirements like those being contemplated by the EU, which Google deems “overly broad,” the company said the US government should oppose transparency rules that “compromise national security” by leaking trade secrets, allowing competitors to replicate products, or handing adversaries a roadmap for circumventing protections or jailbreaking models.
A growing number of countries and states have passed laws requiring AI developers to disclose more about how their systems work. California’s AB 2013 requires companies developing AI systems to publish a high-level summary of the datasets they used to train them. In the EU, complying with the AI Act will require companies to supply model deployers with detailed instructions on a model’s operation, limitations, and risks.