As the European Union continues to advance the regulation of emerging technologies, the Artificial Intelligence Act (AI Act) stands out as a landmark legislative effort aimed at regulating AI systems. As the EU moves forward with the AI Act – supervisory authorities already advise that preparations for compliance should begin now – several important areas of contention have emerged. These discussions highlight the challenge of balancing innovation with ethical and legal considerations, particularly in light of recent political developments (e.g. pressure from the US administration). The European Commission (EC) is reportedly already considering making certain AI Act requirements voluntary, a proposal that faces significant pushback from the European Parliament (EP).
Timeline for implementation
The AI Act will be fully applicable on August 2, 2026, following a two-year implementation period. However, as with most complex legislation, there are exceptions: some rules apply earlier and others later.
Since February 2, 2025, AI systems classified as posing an “unacceptable risk” – such as AI systems that enable social scoring or that create facial recognition databases through untargeted scraping of the internet – are prohibited, an important step towards protecting fundamental rights. Additionally, organizations developing or using AI systems must ensure that their employees are AI literate. This means they must develop sufficient AI knowledge among their staff.
By May 2, 2025, codes of practice must be ready to enable providers of general-purpose AI models to demonstrate compliance with the legal requirements.
Additionally, for certain high-risk AI systems the deadline is extended until August 2, 2027, giving stakeholders more time to adapt to the new regulatory environment.
Current status and developments
Since its proposal in 2021, the AI Act has undergone scrutiny and debate within the EU legislative process. The EP and the Council of the EU were actively involved in refining the Act's provisions and addressing concerns raised by various stakeholders. The AI Act was officially adopted on March 13, 2024. The EC now aims to provide guidance in the ongoing discussions: in February 2025 it released draft guidelines on prohibited AI practices and on the definition of AI systems – documents that, according to critics, create more confusion than clarity. Currently, the key areas of discussion include the definition of high-risk AI systems, the scope of the transparency requirements, and the balance between innovation and regulation.
Definition of high-risk AI systems
One of the most important areas of discussion is the definition and classification of high-risk AI systems. The AI Act imposes strict requirements on systems deemed high risk, such as those used in law enforcement, critical infrastructure, and employment. However, stakeholders have raised concerns about the criteria used to determine what constitutes high risk. Some argue that the current definition is too broad, capturing systems that in fact pose little risk. Others advocate for more precise standards that would allow lower-risk technologies to thrive while ensuring that higher-risk applications are properly regulated.
Transparency and accountability
Transparency and accountability are central principles of the AI Act, but they remain contentious. The AI Act requires that AI systems, particularly those classified as high-risk, be transparent in their operations and subject to human oversight. However, the details of these requirements are still debated. Industry representatives have expressed concern that overly prescriptive transparency obligations could hinder the development of proprietary technologies and compromise competitive advantages. Conversely, consumer advocacy groups emphasize the need for robust transparency measures to protect users and ensure the ethical deployment of AI.
A legal gap in copyright
In a letter to the EC in February 2025, fifteen cultural organizations highlighted the need for new rules to protect writers, musicians, and other creatives, denouncing a “legal gap” in the AI Act. According to copyright experts, the AI Act does not adequately address the challenges posed by generative AI models. The text and data mining exemption was originally aimed at limited private use; it has reportedly been interpreted in a way that allows large tech companies to process vast amounts of intellectual property. This has triggered alarm and lawsuits from authors and musicians. The EC recognizes these challenges and is considering additional measures to balance innovation with the protection of human creativity.
Facial recognition technology in Hungary
A concrete example of the challenges facing the implementation of the AI Act is the use of facial recognition technology in Hungary. Hungary has proposed using AI-based facial recognition to identify participants in the Budapest Pride march. Reports suggest that Hungary's deployment of the technology may violate AI Act provisions; an EC spokesperson stated that the legality assessment depends on whether the facial recognition takes place in real time or retrospectively. Members of the EP are urging the EC to investigate the issue. The case underscores the difficulties of enforcing AI Act requirements across member states. In particular, the liability aspects of the AI Act remain unclear, especially now that the proposed AI Liability Directive has been withdrawn.
Protection of minors
The AI Act also aims to protect minors, but this area remains full of challenges. While ensuring that AI systems do not exploit or harm minors is a priority, guidelines for achieving this effectively are still taking shape. The complexity of regulating AI in contexts involving minors, such as educational technology and social media platforms, requires careful consideration to balance protection with access to beneficial technologies.
Impact on stakeholders
Although parts of the AI Act remain abstract and questions of liability are not yet settled, it is important for organizations using AI systems to familiarize themselves with the rules and begin preparing for compliance. Developers and users seeking further insight into the implications of the AI Act and the steps required for compliance can consult the Dutch and Belgian data protection authorities, each of which provides resources and guidance on its website. Preparation includes conducting thorough risk assessments, implementing transparency measures, and enhancing AI literacy.
Conclusion
Overall, the AI Act could have a widespread impact on AI technology companies, developers, and users across the EU, and it represents a pivotal moment in the regulation of AI within the Union. It has become clear, however, that the AI Act is no “silver bullet” that resolves every difficulty for organizations using and developing AI systems, and the EC guidelines intended to provide clarity cannot do so on their own. Furthermore, the AI Act touches all sectors and areas of practice, and AI systems are also governed by other laws on top of the AI Act, such as the GDPR and anti-discrimination legislation. These complexities underscore the need for comprehensive legal guidance. Future installments in this AI series will navigate these sectors and practice areas, dig deeper into specific topics and aspects of the AI Act, create awareness, and provide practical, interdisciplinary guidance.