Grace Nelson, technology and telecommunications analyst and recent graduate of LSE’s Media and Communications Governance program, writes about the EU’s AI Act and the impact its risk-based approach will have in Europe and beyond.
In July 2024, the final text of the European Union’s AI Act was published in the EU Official Journal, ending a four-year legislative process and beginning the countdown to implementation of what the EU claims is the world’s first binding regulation of artificial intelligence. Since the AI Act was first proposed in 2021, at least eight other countries have introduced similar laws, including Canada, Brazil, Mexico, and Vietnam. Each of these proposals echoes the EU’s “risk-based” approach to AI regulation, imposing tiered obligations on developers and deployers according to the recognized and predictable threats their systems are assumed to pose. Although the concepts and algorithmic logic of artificial intelligence date back to at least the mid-20th century, it is only in the last five years that anything like a clear consensus has emerged on how to address the potential benefits and harms of this technology.
In some ways, the current outcome on AI regulation is entirely predictable. The AI Act and its descendants are laws built on principles of consumer protection. From the General Data Protection Regulation (GDPR) to the Digital Services Act (DSA) to the AI Act, the EU has developed, and continues to reuse, a mode of digital rule-making that relies entirely on the assessment and minimization of harm. Even where the definition of consumer protection is expanded beyond economic measures, laws like the AI Act remain limited to harms that are foreseeable and identifiable, whether economic, social or civil. Despite the stated goal of laws such as the AI Act and Canada’s Artificial Intelligence and Data Act to address systemic harm, the protections they offer are still bounded by the recognizable and identifiable risks of the technology, which most often correspond to the risks faced by individual constituents.
As the EU now moves to support the adoption of AI and other emerging technologies, including quantum computing, across the economy, the inadequacies of its risk-based regulatory framework are becoming more apparent. Despite repeated calls from around the world for a principles-based approach to AI governance, which would articulate a rationale for regulation and preserve flexibility to address harms that are not yet clear, laws such as the AI Act have instead adopted a specific and limited list of restrictions on AI development. By failing to capture unknown, unseen, or unrealized risks, the law creates a vacuum that implicitly permits all manner of innovation. That includes innovation which achieves as-yet-undefined positive social outcomes, but also the innovations in malicious compliance and continued exploitation of the digital public that policymakers have come to expect from the operating procedures of dominant platform companies under the enforcement of other laws. Risk-based regulation leaves undetermined the link between the principles policymakers champion for innovation (such as equity and inclusivity) and the realities produced when those principles are, or are not, realized, because risk-based regulation inherently declines to specify how those principles are to be achieved, only what must not be done.
Without a forward-looking, reconstructive vision of socially, politically and economically just AI (and other emerging technologies), adopted not only in regulation but also in industrial policy and in efforts to drive technology adoption, future iterations of so-called revolutionary advances will only ever reveal themselves, more and more clearly, as novel and creative adaptations of an existing oppressive power structure.
Risk-based policymaking: social construction, personal harm, and tacit permission
The AI Act is built on the premise that the harms of technology can be identified, isolated, and outlawed through a hierarchy of risks. That risk is assessed partly on the context in which the technology is applied and partly on the capabilities of the technology itself, particularly where the context of application is not readily knowable, as with “general purpose” AI. A system’s capability is in turn determined through surrogate measures, such as the computational power used to train a particular model. Several similar legislative proposals, such as the one put forward by the Australian Department of Industry, Science and Resources, also treat uncertainty about a system’s capability or application context as itself evidence of higher risk. The AI Act prohibits a limited number of well-defined AI use cases, such as social scoring tools and some real-time biometric systems, as examples of unacceptable risk.
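To make that surrogate measure concrete, the sketch below illustrates how a compute-based capability proxy works in practice. It assumes the Act’s presumption that a general-purpose AI model trained with more than 10^25 floating point operations is classed as posing “systemic risk”; the function name, threshold variable, and obligation summaries are illustrative simplifications, not any official tooling.

```python
# Illustrative sketch (not official tooling): how a compute-based capability
# proxy sorts general-purpose AI (GPAI) models into obligation tiers.
# Assumes the AI Act's presumption that cumulative training compute above
# 10^25 FLOPs indicates "systemic risk"; tier descriptions are simplified.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for GPAI with systemic risk

def classify_gpai(training_flops: float) -> str:
    """Return a simplified obligation tier for a general-purpose AI model."""
    if training_flops > SYSTEMIC_RISK_FLOPS:
        return ("GPAI with systemic risk: model evaluation, adversarial testing, "
                "incident reporting, cybersecurity measures")
    return "GPAI: technical documentation, copyright policy, training-data summary"

if __name__ == "__main__":
    print(classify_gpai(3e25))  # a hypothetical frontier-scale model
    print(classify_gpai(5e23))  # a smaller general-purpose model
```

The point of the example is the proxy logic itself: a single, externally observable number stands in for capability, and capability stands in for risk.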
Like its predecessors the GDPR and the DSA, the AI Act is built around compliance enforcement. Developers and deployers of regulated AI systems must record and document a variety of compliance measures, including data governance policies and quality control processes. Although some new requirements have been added to Europe’s digital compliance repertoire, such as record-keeping of the results of adversarial testing procedures, these socially constructed indicators of compliance reflect the same self-fulfilling policy construction that underpinned each earlier iteration of digital economy law, above all privacy laws like the GDPR. Just as the AI Act borrows the consumer protection framework of its ancestors, it also inherits their narrow window of possibility for what law in this area could be and do. Much like the implementation indicators of privacy law, the AI Act relies on consultative input from the regulated sector to develop codes of practice for acceptable commercial conduct in the AI market, and it leans on industry practices such as “red teaming” and the transfer of responsibilities along the value chain to develop workable compliance standards. The regulator, in this case the newly established EU AI Office, is once again constrained by the limits of existing corporate practice, ultimately dependent on the industries it regulates and lacking the threat of meaningful, punitive enforcement.
The AI Act also reproduces the practical trappings of previous policy efforts, going beyond the hamstringing of regulators through industry-constructed compliance to adopt a largely individualistic model of risk prevention and consumer participation. Much like the GDPR’s consent notices, the AI Act provides an outlet for consumer “autonomy” through transparency about how consumers interact with AI systems and the opportunity to lodge complaints about non-compliant systems. While the AI Act gives greater consideration to the possibility of systemic harm, both in defining the types of technologies assumed capable of causing it and in outlawing a select set of systemically harmful practices, its attempt to identify collective or intangible harms remains limited. As Cathy O’Neil points out in her book Weapons of Math Destruction, the collective harms caused by algorithmic discrimination and dysfunction are pervasive and often difficult to pinpoint, making them all the more likely to escape a system built primarily on the protection of individuals. This individualized framework, common across Europe’s digital rulebook, ultimately precludes effective intervention into the collective or “horizontal” public policy problems created by the digital economy’s gross imbalances of power.
Both of these limitations of the AI Act contribute to the law’s pitfalls as a risk-based regulation built through cost-benefit analysis. As Julie Cohen has explained, this mode of regulation compartmentalizes the uncertain and unknown harms of technology and places them outside the scope of the law. Cohen points out that this vacuum frequently serves to encourage excessive risk-taking that can sit comfortably within compliance procedures while falling beyond the normative range of prohibited conduct. The balancing of purported interests in this perspective also works to downplay collective and intangible harms that are already very difficult to capture in a system based on the protection of individual consumers. In short, the AI Act, and the laws now emerging around the world in its image, cannot dismantle the systems of power that enable, and thrive on, the exploitation of the users and communities who choose to use or are affected by these technologies.
Linking industrial policy to a forward-looking vision of technology regulation
Many governments and governing bodies around the world, including the UK and Singapore governments, which prioritize self-governance and economic growth over binding safety regulation, have sought to articulate the values under which they believe AI systems should operate. Despite these visions of a technological future grounded in principles such as accountability, fairness, and equity, the current settlement on AI ushered in by the AI Act fails to provide the regulatory tools needed to achieve a concrete and just future that enacts those values. Instead, the EU needs to prioritize a positive, reconstructive vision of what its future entails and, specifically, of how technological innovation can contribute to it.
As the European Commission shifts its focus to accelerating technology adoption for a more globally competitive economy, AI and other emerging technologies are positioned as key industries for public investment. In this context, industry players, led by platform giants such as Meta, have unsurprisingly begun campaigning for the repeal of existing EU regulations, citing the need to widen the scope for innovation. In other words, the incumbents of the global digital economy are seeking not only to maintain their positions, which the AI Act leaves largely untouched, but to regain full control over the direction of technological development, even to the point of undoing the law’s attempt to place a select set of AI development paths off limits. In aligning industrial policy and digital regulation into a more comprehensive picture of the digital society it wants, the EU now has both the opportunity and the standing to ask who innovation serves. If the EU is indeed pursuing justice on behalf of its constituents, as it so often claims, the answer to that question demands a departure from its current position, towards just such a reconstructive vision and the regulatory measures needed to enact it.
This post represents the views of the author and does not represent the position of the Media@LSE Blog or the London School of Economics and Political Science.
Featured image: Photo by Igor Omilaev on Unsplash