Google calls for weakening copyright and export rules in its AI policy proposal

March 13, 2025

Following on OpenAI’s heels, Google has published policy proposals in response to the Trump administration’s call for a national “AI Action Plan.” The tech giant endorses weakened copyright restrictions on AI training, as well as “balanced” export controls aimed at “protecting national security while enabling US exports and global operations.”

“The US needs to pursue proactive international economic policy to defend America’s values and support AI innovation internationally,” Google wrote in the document. “For too long, AI policymaking has paid disproportionate attention to risk, often ignoring the costs that misguided regulation imposes on innovation, national competitiveness, and scientific leadership. This is beginning to change under the new administration.”

One of Google’s more controversial recommendations concerns the use of IP-protected material.

Google argues that “fair use and text-and-data mining exceptions” are “important” for AI development and AI-related scientific innovation. Like OpenAI, the company is seeking to codify a right to train on publicly available data, including copyrighted data, largely without restriction.

“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote.

Google has reportedly trained a number of models on public, copyrighted data, and it is fighting lawsuits from data owners who accuse the company of failing to notify or compensate them before doing so. US courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.

In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, saying they place “a disproportionate burden on US cloud service providers” and “may undermine our economic competitiveness goals.” That contrasts with statements from Google competitors like Microsoft, which said in January that it was confident it could be “fully compliant” with the rules.

Notably, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted companies seeking large clusters of chips.

Elsewhere in its proposal, Google calls for “long-term and sustainable” investment in basic R&D, pushing back against recent federal efforts to cut spending and eliminate grant awards. The company says the government should release datasets that could be useful for commercial AI training, allocate funding to “early-market R&D,” and ensure that computing and models are “widely available” to scientists and institutions.

Google also urges the government to pass federal legislation on AI, including a comprehensive privacy and security framework, pointing to the chaotic regulatory environment created by the patchwork of US state laws. Just over two months into 2025, the number of pending AI bills in the US had climbed to 781, according to an online tracking tool.

Google also warns the government against imposing what it sees as onerous obligations around AI systems, such as usage-liability requirements. In many cases, Google argues, model developers have little to no visibility into or control over how a model is being used, and so should not be held liable for misuse.

Historically, Google has opposed laws like California’s defeated SB 1047, which spelled out what precautions an AI developer should take before releasing a model and the cases in which developers could be held liable for harms caused by their models.

“Even when developers provide models directly to deployers, deployers are often best positioned to understand the risks of downstream use, implement effective risk management, and conduct post-market monitoring and logging,” Google writes.

In its proposal, Google also pushes back against disclosure requirements like those being contemplated in the EU, which it considers “overly broad.” It says the US government should oppose transparency rules that would require divulging trade secrets, allow competitors to replicate products, or compromise national security by giving adversaries a roadmap for circumventing protections or jailbreaking models.

A growing number of countries and states have passed laws requiring AI developers to disclose more about how their systems work. California’s AB 2013 requires companies developing AI systems to publish a high-level summary of the datasets used to train them. In the EU, complying with the AI Act will require companies to give model deployers detailed instructions on a model’s operation, limitations, and risks.
