Artificial intelligence (AI) systems pose new risks that cannot be fully addressed by existing laws. In response to these gaps, the European Union (EU) has enacted the Artificial Intelligence Regulation (EU) 2024/1689 (the “EU AI Act”). This regulatory framework imposes additional obligations on providers and deployers of AI systems, with the aim of complementing rather than replacing existing legislation.
This bulletin provides an overview to help users and service providers understand how new regulations such as the EU AI Act interact with existing laws such as the General Data Protection Regulation (“GDPR”). It can be read in conjunction with our bulletin “Navigating New Frontiers: Artificial Intelligence and Privacy Considerations.”
Overview of the EU AI Act
On June 13, 2024, the EU adopted the world’s first comprehensive AI law, which aims to regulate the use of AI systems across EU member states. The EU AI Act officially entered into force on August 1, 2024, but its provisions will apply gradually. None of the requirements apply at this stage; the first prohibitions on certain AI systems will apply from February 2, 2025.[1] On August 2, 2025, the provisions on notified bodies,[2] general-purpose artificial intelligence (“GPAI”) models,[3] governance,[4] confidentiality[5] and penalties[6] will become applicable. By August 2, 2026, most of the remaining provisions will apply. However, Article 6(1) and its corresponding obligations will apply from August 2, 2027.[7]
The EU AI Act works by clearly defining what qualifies as an AI system and outlining the obligations that follow from that qualification. Article 3(1) defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments.[8] This definition is part of a broader framework designed to identify and manage the risks associated with AI: the Act adopts a risk-based regulatory approach, categorizing these risks into four levels:
Unacceptable risk: AI systems or uses that pose a significant or unacceptable risk of harm to individuals and their rights are prohibited. The Act bans harmful systems, including those that use cognitive manipulation (such as dangerous voice-activated toys), social scoring, and certain biometric applications such as real-time facial recognition.[9]
High risk: AI systems and uses that fall into certain high-risk categories of use cases and system types and that are not otherwise prohibited or exempt. Examples of high-risk AI systems include those that pose a threat to safety or fundamental rights. These systems fall into two groups: (1) AI systems integrated into products covered by EU product safety legislation, such as toys, aviation, automobiles (including self-driving cars), medical devices and elevators; and (2) AI systems used in specific areas that must be registered in an EU database, including, but not limited to, access to education and vocational training, employment, workforce management and self-employment.[10]
Limited risk: AI systems or uses that do not fall into the high-risk category but that pose specific transparency risks and carry transparency requirements not associated with minimal-risk systems. Examples include deepfakes and chatbots.
Minimal risk: AI systems or uses that have minimal impact on individuals and their rights and that are subject to little direct regulation under the EU AI Act.[11] Systems classified as minimal risk are those that do not fall into the three categories listed above.
The EU AI Act imposes wide-ranging obligations on the various actors in the lifecycle of high-risk AI systems. For example, high-risk AI systems that rely on techniques involving the training of models with data must be developed on the basis of training, validation and testing datasets that meet the quality standards set out in Article 10 of the EU AI Act.[12] These obligations also vary depending on whether the entity or individual is the creator of the AI system (the “provider”) or simply the user of the system (the “deployer”).[13] It is therefore especially important for both providers and deployers of AI systems to understand not only when they must comply with the EU AI Act, but also when they must comply with already established laws such as the GDPR.
Scope of application
The EU AI Act sets out obligations for providers, deployers, importers, distributors and product manufacturers of AI systems that have a link to the EU market. Because the EU AI Act is broad in scope, it can apply to Canadian companies. For example, the EU AI Act applies to:
providers that place AI systems, or general-purpose AI models (“GPAI models”), on the EU market or put them into service in the EU;
deployers of AI systems established or located in the EU; and
providers and deployers of AI systems located in third countries, where the output produced by the AI system is used within the EU (Article 2(1) of the EU AI Act).
The EU AI Act also enumerates certain exceptions to its broad scope. For example, the EU AI Act does not apply to open-source AI systems (unless they are prohibited or classified as high-risk AI systems), nor does it apply to AI systems used solely for the purpose of scientific research and development.
The EU AI Act and the GDPR: Understanding Compliance Obligations
Both the EU AI Act and the GDPR may apply at various stages of the development, deployment and operation of AI systems.[14] Note, however, that these regulations address different aspects: they are designed to complement, rather than overlap with, each other.
Because the EU AI Act’s requirements are not yet fully applicable, it is important to assess whether you will need to comply with the EU AI Act, the GDPR, or both once the relevant provisions begin to apply. As stated above, this assessment will depend on the specific circumstances surrounding the use of the system in question and the processing of personal data in that context.
The need for continued regulatory efforts
Although the EU AI Act is designed to address many of the challenges related to artificial intelligence, the EU’s regulatory efforts do not end there. Increased data collection practices across various industries may heighten the need for regulatory reform or the creation of new regulations.
For example, algorithmic management (AM) systems in the workplace enable detailed tracking, from monitoring work performance to examining digital behaviour to managing breaks.[15] This intensive data collection raises questions about employee privacy and transparency in how the information is used. The EU’s current directives, some of which have been in force for some time, cover a range of employee-related issues, including the protection of employee health and safety as well as employee notification and consultation. However, these directives could be reinforced by more explicit provisions, and they contain “sleeping clauses” that may be revisited.[16]
As a result, some stakeholders are calling for new regulations to address emerging risks, while others propose adjusting existing laws to make them more comprehensive. What is clear is that compliance will become significantly more complex as more regulations are introduced, and debate continues over whether to create new laws or amend existing ones.
How can Fasken help with AI/privacy compliance?
At Fasken, we remain at the forefront of technology regulation in the EU and Canada and continue to provide updates on new developments in this field. For additional resources, see our knowledge resources on artificial intelligence. If you have any questions or need assistance with issues related to AI or privacy law, please feel free to contact us.
About Fasken’s Privacy and Cybersecurity Group
As one of the longest-running and leading practices in the privacy and cybersecurity field, our national privacy team of 36 attorneys provides a wide range of services. From managing complex privacy issues and data breaches to advising on EU GDPR and emerging legislation, we provide comprehensive legal advisory services and are trusted by top cyber insurers and Fortune 500 companies. Our group is recognized as a leader in our field, earning honors such as PICCASO’s “Privacy Team of the Year” award and recognition from Chambers Canada and Best Lawyers in Canada. Please visit our website for more information.
Footnotes
1. European Parliament, “Artificial Intelligence Act (AI Act),” Chapters I and II, 2024.
2. AI Act, supra note 1, Chapter III, Section 4.
3. AI Act, supra note 1, Chapter V.
4. AI Act, supra note 1, Chapter VII.
5. AI Act, supra note 1, art. 78.
6. AI Act, supra note 1, arts. 99 and 100.
7. European Parliament, “Artificial Intelligence Act” implementation timeline, 2024.
8. European Union (EU), “Artificial Intelligence Act (AI Act),” OJ L, 2024/1689, July 12, 2024, art. 3(1).
9. Ibid., art. 6.
10. European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” June 1, 2023.
11. AI Act, supra note 1.
12. AI Act, supra note 1, art. 10.
13. AI Act, supra note 1, art. 4.
14. CNIL, July 2024, “Entry into force of the European AI Regulation: First questions and answers from the CNIL.”
15. European Parliamentary Research Service, “Addressing AI risks in the workforce and algorithms,” June 2024.
16. Ibid.
The content of this article is intended to provide a general guide on the subject. You should seek professional advice regarding your particular situation.