The first of the many compliance deadlines in the EU AI Act is fast approaching, and experts have warned that companies must strengthen their preparations over the next few months.
The first provisions of the EU's groundbreaking law, officially passed in March last year, came into force on February 2, 2025, bringing a raft of rules and regulations that AI developers and deployers must comply with.
The law takes a risk-based approach, classifying AI systems as minimal, limited, or high risk. A high-risk system is defined as one that poses a threat to life, financial livelihoods, or fundamental rights.
According to Forrester principal analyst Enza Iannopollo, regulators are focusing on the most dangerous AI use cases with this first enforcement deadline.
“On February 2, some of the EU AI Act’s first — but powerful — requirements come into effect. The use cases the EU has chosen to focus on with this deadline are those with potential adverse effects, which it believes pose the greatest risk to the union’s values and fundamental rights,” said Iannopollo.
“These rules relate to prohibited AI use cases, along with requirements around AI literacy. Organizations that violate these rules could face fines of up to 7% of global revenue, so it’s important to meet the requirements effectively,” she added.
However, she noted that the enforcement regime is still incomplete: no authorities have yet been appointed to enforce the rules, so fines cannot be issued immediately.
Significant fines may not make headlines in the coming months, but Iannopollo said this is still an important milestone.
Companies need to strengthen their risk assessments
Given the law’s global reach, she said organizations around the world need to keep pace with the regulations, as they extend across the entire AI value chain.
“The EU AI Act has a major impact on AI governance worldwide. These regulations have established the de facto standard for trustworthy AI and AI risk management,” she added.
To prepare, Iannopollo said companies need to improve their risk assessments and ensure they are classifying AI use cases according to the Act’s risk categories.
Any system that falls into the “prohibited” category must be switched off immediately.
“Finally, we need to prepare for the next important deadline on August 2. By that date, enforcement machinery and sanctions will be better established, and authorities will be able to sanction non-compliant companies. In other words, this is when enforcement begins in earnest,” she said.