How the future Code of Practice threatens to undermine the EU AI Act
Starting August 2, 2025, providers of so-called “general-purpose AI” (GPAI) models such as GPT, Dall-E, Gemini, and Midjourney will face far-reaching obligations under the EU’s AI Act. After these large (multimodal) language models emerged in the second half of 2022, EU lawmakers rushed to include GPAI rules in the AI Act. GPAI model providers must draw up technical documentation, implement a copyright policy, and publish a summary of the training content. In addition, particularly powerful models that may pose systemic risks are subject to risk assessments and mitigation measures.
To demonstrate compliance, the AI Act allows providers to rely on a “Code of Practice.” It is currently being drafted by more than 1,000 stakeholders under the auspices of the AI Office and is expected to be adopted by the European Commission by August 2025.
The following provides a critical analysis of the third draft of the Code of Practice. It shows how the drafting process conflicts with the procedural requirements of European law, and how the draft itself exceeds the substantive obligations laid down in the AI Act. Given these concerns, I argue that the Commission should scrutinize the draft before adopting it: approving it without amendments would amount to an unconstitutional overreach of its mandate.
Co-regulation as a core strategy of the AI Act
The AI Act is based on the New Legislative Framework (NLF), which relies on co-regulation: structured dialogue between regulators and industry translates general legal obligations into technical standards. Instead of specifying technical details in the law itself, the AI Act defines essential requirements and leaves the task of fleshing them out to the European standardization organizations CEN and CENELEC, acting through their Joint Technical Committee JTC21.
Harmonized standards provide legal certainty. Once adopted by the Commission, compliance with these standards triggers a presumption of conformity with the AI Act. In theory, companies can develop their own technical solutions, but administrative hurdles and additional costs usually lead them to follow the standards. Recognizing these effects, the European Court of Justice has consistently held that harmonized standards form “part of EU law” and must be developed and published in accordance with the rule of law (Art. 2 TEU).
Code of Practice as “part of EU law”
Although the AI Act envisages harmonized standards for GPAI models as well, standardization efforts in this domain are still in their early stages. To fill this gap, the AI Act introduces a provisional instrument: the Code of Practice. Once adopted by the European Commission by way of an implementing act, compliance with the Code triggers a presumption of conformity under Art. 53(4)(2) and Art. 55(2)(2) AI Act, similar to harmonized standards. In theory, providers may choose to demonstrate compliance through alternative means without relying on the Code. In practice, however, the Code will shape how the Commission interprets and enforces the GPAI obligations.
Given its legal and practical consequences, there is little doubt that the ECJ would recognize the Code as “part of EU law.” The Code must therefore be developed, both procedurally and substantively, in accordance with the rule of law (Art. 2 TEU). Currently, that is not the case.
Unregulated process with 1,000 stakeholders
While the development of harmonized standards is governed by Regulation 1025/2012, the drafting of the Code of Practice rests solely on Art. 56 AI Act, which merely allows the AI Office to invite stakeholders to participate.
The result is a process without structured rules, without transparency, and without democratic safeguards. Initially, the AI Office planned to draft the Code behind closed doors. In response to criticism, it swung to the opposite extreme and opened consultations to nearly 1,000 stakeholders.
With a highly compressed timeline and an unwieldy number of participants, the process left little room for thoughtful deliberation or balanced input. More concerning, the drafting effort has been led by scholars, many of whom have no legal expertise or experience in technical standardization. Yet once adopted, the Code will define what the GPAI obligations mean in practice and shape their enforcement. Remarkably, all of this is happening without meaningful participation by standardization experts, without an opinion from the European Parliament, and without oversight by the Member States.
To be clear, this criticism is not meant to question the technical expertise of the chairs and stakeholders involved, or their willingness to consider diverse perspectives. The key issue is rather that the drafting process does not follow legally structured procedural rules, but has instead become a top-down effort to regulate GPAI models within a very short time frame.
The Code of Practice as a Trojan horse to reshape the AI Act?
The content of the draft is equally concerning. Its purpose is to help providers comply with their existing obligations, but the current draft goes beyond mere clarification and introduces new requirements not envisaged by the AI Act.
One example is the proposed role of “external evaluators” before the release of GPAI models with systemic risk, a role not provided for in the AI Act. The draft requires providers to obtain an external systemic risk assessment, including model evaluations, before placing such a model on the market (Commitment II.11). The AI Act itself (Art. 55(1)(a) and Recital 114), however, imposes no such requirement: it calls for adversarial testing as part of model evaluations, not for independent external risk assessments.
Another example concerns copyright: Measure I.2.4 of the draft requires GPAI model developers to make reasonable efforts to determine whether protected content was collected by robots.txt-compliant crawlers, an obligation the AI Act does not impose. Furthermore, Measure I.2.5 requires GPAI model providers to take reasonable measures to mitigate the risk that downstream AI systems repeatedly generate copyright-infringing content, and to prohibit such uses in their terms of use. These requirements, however, are found neither in the AI Act nor in the Copyright Directive 2019/790, which address only the primary liability of the GPAI model provider arising from text and data mining and do not extend to secondary liability for infringing output.
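As a technical aside, a “robots.txt compliant crawler” in the sense of Measure I.2.4 is one that consults a website’s robots.txt file and respects its access rules before downloading content. The following minimal sketch illustrates such a check using only Python’s standard library; the user-agent string and URLs are hypothetical examples, not taken from the draft:

```python
# Minimal sketch of a robots.txt compliance check, as performed by a
# "robots.txt compliant crawler" in the sense of Measure I.2.4.
from urllib import robotparser

USER_AGENT = "ExampleGPTBot"  # hypothetical crawler name

def may_fetch(page_url: str, robots_url: str) -> bool:
    """Return True if robots.txt permits USER_AGENT to fetch page_url."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(USER_AGENT, page_url)

# A compliant crawler skips any URL for which this check returns False.
if may_fetch("https://example.com/article.html", "https://example.com/robots.txt"):
    print("allowed to crawl")
else:
    print("disallowed by robots.txt")
```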
Again, the question is not whether these requirements are reasonable in themselves. The sole purpose of the Code is to clarify the obligations of the AI Act, not to redefine them. The Code must therefore not be used as a Trojan horse to reshape the AI Act according to political preferences.
Next step: to adopt or not to adopt the draft?
What happens next? The Code of Practice takes effect only if the Commission approves it by way of an implementing act under Art. 56(6) AI Act. Unlike delegated acts (Art. 290 TFEU), implementing acts (Art. 291 TFEU) do not empower the Commission to amend or supplement the basic act, here the AI Act. As the European Court of Justice has repeatedly confirmed, an implementing act “may neither amend nor supplement the legislative act, even as to its non-essential elements.”
The Commission and the AI Board should therefore not simply rubber-stamp the current draft. Instead, both should conduct a thorough critical review to ensure that the proposed measures are genuinely necessary for implementation and neither conflict with nor go beyond the provisions of the AI Act.
Anything less would not only undermine the carefully negotiated political compromise between Parliament and Council enshrined in the AI Act, but would also amount to an unconstitutional overreach of the Commission’s mandate.