On September 17, 2025, Italy secured final parliamentary approval of its national Artificial Intelligence Law, becoming the first EU member state to pass a national AI law that complements the EU AI Act. The domestic law builds on Italy's National AI Strategy, which dates back to 2020 and emphasizes the need for transparency, accountability and credibility to build citizens' trust and engagement in a thriving AI ecosystem.
The law aims to set human-centric guardrails around AI deployment while encouraging innovation across the economy. Its combination of sector-specific rules, criminal code updates, copyright clarifications and institutional choices offers a playbook (and some cautionary notes) for other countries, including the UK, as they chart their own paths.
Overview of key provisions
Human oversight and traceability: The law requires that AI-supported decisions remain subject to human oversight and traceability. In healthcare, for example, decisions on prevention, diagnosis and treatment choices must be taken by a health professional, not by AI, and patients must be informed when healthcare professionals use AI technology.

Access for minors: Children under the age of 14 may only access AI with parental consent.

Criminal penalties: The law introduces new offences for the unlawful dissemination of AI-generated or manipulated content (e.g. deepfakes), carrying prison terms of one to five years where unjust harm is caused, and increases penalties where AI is used to commit existing crimes such as market manipulation.

Governance and enforcement: Two existing government agencies, the Agency for Digital Italy (AgID) and the National Cybersecurity Agency (ACN), are designated to enforce the law, while the Department for Digital Transformation steers the national AI strategy.

Investment signals: Up to €1 billion has been allocated to support companies working in AI, cybersecurity, quantum technologies and telecommunications.

Copyright:
(1) Copyright protection of AI-assisted works. The law provides that works created with "AI assistance" are eligible for protection where they are attributable to "the intellectual efforts of real humans." This appears to be a clarification/restatement of existing EU copyright principles, which are commonly understood to protect those elements of AI-assisted works that result from genuine human creativity (for example, where AI is used as a tool to realize human-driven ideas). Whether works created entirely by AI attract copyright protection remains open for legislative clarification at both the EU and national levels.
(2) Text and data mining (TDM). On the hot-button issue of TDM, the Italian law addresses the extraction or reproduction of text and data from works available online or in databases for the purposes of text and data extraction through artificial intelligence models and systems, including generative AI, provided the principles of the DSM Directive are respected (including "lawful" access).
This appears to be a domestic attempt to carry through the relevant copyright/TDM provisions of the EU AI Act, which essentially confirm that the Art 3 and 4 DSM copyright exceptions are relevant to the "development and training" of generative AI models.
However, on a first reading, the precise language of the new Italian law is not limited to AI training purposes. Provided opt-outs are respected and access to the works is "lawful," the TDM exception would appear to permit broader AI use cases, including LLM "memorization" or retrieval-augmented generation (RAG). Perhaps inadvertently, then, the Italian law goes further than what is assumed to be permissible under the EU AI Act.
It should also be noted that Italy has passed this new law just as the EU Parliament is openly considering tightening the TDM copyright exception under Art 4 of the DSM Directive, which would make it harder for AI companies to rely on it to train AI models. The tectonic plates of TDM law are still shifting, and it remains unclear how this new Italian law will interact with EU-wide law, wherever that lands.
Why is this important to the UK?
Italy pressed ahead with these obligations despite former Italian Prime Minister Mario Draghi calling last week for a pause on the EU AI Act's high-risk provisions. The new law adds to the EU's layered regulatory matrix of regulations, directives and national laws. Historically, UK regulators have advocated a more flexible, "pro-innovation" approach, and the UK may continue its "wait and see" stance on AI regulation. Although Italy's move alone will not force a UK pivot, a cascade of similar national measures across the EU would increase pressure on the UK, particularly around criminal sanctions and protections for minors. But the longer the UK takes to establish its stance on AI regulation, the more likely it is to become a follower of an approach that has emerged elsewhere.

As readers will no doubt remember, the UK's previously proposed TDM exception (which would have permitted wide-scale TDM by AI companies in the UK for AI training) was abandoned by the previous government after a backlash in 2023. The current government chose instead to hold a wide-ranging consultation on copyright and AI last winter, with a preference for introducing a new TDM exception broadly in line with Art 4 of the DSM Directive (essentially aligning the UK with EU law on TDM). The consultation closed in February, but the government has not yet issued a formal response, and the Italian approach (for good or bad reasons) will no doubt inform the government's thinking on legislating in this area.

Meanwhile, companies offering AI into the EU should begin mapping the country-by-country obligations that may apply ahead of the key EU AI Act deadlines, in order to reduce fragmentation and enforcement risk. An "EU-plus" baseline (the AI Act core plus leading national add-ons) may be necessary.