What are the important features of the EU AI Act?
The Act sorts AI into four risk tiers: unacceptable, high, limited, and minimal, with stricter rules applied as the risk increases.
What is a risk-based approach to AI?
The Act classifies AI systems into risk levels and scales the obligations to match:
Unacceptable-risk AI systems (such as government-run social scoring) are banned outright.
High-risk AI systems (e.g., AI used to screen job applicants) face strict legal requirements for safety, transparency, and human oversight. This is the most heavily regulated category, because these systems can cause serious harm if they fail or are misused in areas such as law enforcement or recruitment.
Limited-risk AI systems carry a risk of manipulation or deception, such as chatbots and emotion-recognition systems. Users must be informed that they are interacting with AI.
Minimal-risk AI systems cover everything else, such as spam filters, and can be deployed without additional restrictions.
What are the penalties for non-compliance?
Fines reach up to 7% of a company's global annual revenue. At that rate, OpenAI would face roughly $700 million, Alphabet (Google) about $24.5 billion, and Meta about $11.5 billion.
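To make the arithmetic concrete, here is a minimal sketch in Python. The revenue figures are rough estimates back-calculated from the fine amounts cited above, not official filings, and the 7% cap applies only to the most serious violations.

```python
# Illustrative only: revenue estimates are back-calculated from the
# fines quoted in this article, not taken from company filings.
MAX_FINE_RATE = 0.07  # AI Act cap: 7% of global annual revenue

estimated_revenue_usd = {
    "OpenAI": 10e9,      # ~$10 billion
    "Alphabet": 350e9,   # ~$350 billion
    "Meta": 164.5e9,     # ~$164.5 billion
}

for company, revenue in estimated_revenue_usd.items():
    max_fine = revenue * MAX_FINE_RATE
    print(f"{company}: up to ${max_fine / 1e9:.1f} billion")
# OpenAI: up to $0.7 billion
# Alphabet: up to $24.5 billion
# Meta: up to $11.5 billion
```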
Who enforces the law?
Enforcement rests with national authorities, coordinated at the EU level through the European AI Office within the European Commission. The Act also supports innovation, particularly for small and medium-sized enterprises, through regulatory sandboxes.
What is the August 2025 deadline?
The August 2025 deadline brings general-purpose AI (GPAI) models into scope: the models that power applications such as chatbots, image generators, and even creative tools that can animate the Mona Lisa to smile and wave.
What does the EU AI Act mandate?
The Act places strict obligations on AI providers, including the following (illustrated with a brief sketch after the list):
Registration: All GPAI providers must register with the EU AI Office in Brussels.
Transparency: Companies must document training-data sources, perform risk assessments for models designated as posing "systemic risk," and report on copyright compliance.
Accountability: Providers must disclose each model's capabilities and limitations.
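As a rough illustration of what these duties could look like as an internal record, here is a hypothetical sketch. Every field name is invented for this example; the Act prescribes what must be documented, not a data schema.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIComplianceRecord:
    """Hypothetical record of a provider's AI Act duties (illustrative only)."""
    model_name: str
    provider: str
    registered_with_eu_ai_office: bool                              # registration duty
    training_data_sources: list[str] = field(default_factory=list)  # transparency duty
    systemic_risk_assessed: bool = False                            # "systemic risk" models
    copyright_policy_url: str = ""                                  # copyright compliance
    capabilities: list[str] = field(default_factory=list)           # accountability duty
    limitations: list[str] = field(default_factory=list)

record = GPAIComplianceRecord(
    model_name="example-gpai-1",
    provider="Example AI Co.",
    registered_with_eu_ai_office=True,
    training_data_sources=["licensed news corpus", "public-domain books"],
    systemic_risk_assessed=True,
    copyright_policy_url="https://example.com/copyright-policy",
    capabilities=["text generation", "summarization"],
    limitations=["may produce factual errors"],
)
```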
What happens if a company doesn't comply?
There are severe penalties for non-compliance.
For example, based on 2024-2025 revenue forecasts, a 7% fine would cost Alphabet about $24.5 billion or Meta about $11.5 billion, on top of any penalties under other regulatory frameworks.
What has the response to the EU AI Act been?
Google and OpenAI have pledged to comply, and Google has signed the EU's voluntary code of practice for general-purpose AI.
However, Meta has challenged the rules as "legally questionable," a risky stance given the potential fines.
AI companies face a further challenge: legal documentation and compliance requirements can consume critical resources and curb innovation.
Compliance challenges
Behind the scenes, the EU AI Act is reshaping AI development.
Companies need to maintain a detailed inventory of their AI systems, document training data, and prove copyright compliance.
This will be a logistical nightmare for small players.
The legislation also imposes immediate obligations to modify existing models, prompting some companies to split their EU and global deployments.
What have been the effects so far?
Venture capital funding for EU-focused AI startups has already declined as investors grapple with regulatory uncertainty.
Critics such as Andrew Orlowski of The Telegraph argue that adopting AI without robust regulation risks a "race to the bottom" in quality and ethics.
Meanwhile, computer scientist Yejin Choi highlighted AI's paradox in a TED Talk: it is "incredibly smart and shockingly stupid," capable of transformative feats yet error-prone, which is why it needs direction.
The EU AI Act addresses these concerns by prioritizing accountability and safety.