The UK government has postponed plans to regulate AI by at least a year, but is reportedly developing a broader legislative framework to oversee the technology. The proposed AI bill is now expected to be introduced in the next parliamentary session, the Guardian reported, citing Peter Kyle, the UK Secretary of State for Science, Innovation and Technology. The new bill is set to address key issues related to AI, such as safety and copyright.
However, the legislation will not be ready before the next King's Speech. Sources say the bill could be published in May 2026.
The first AI bill focused on LLMs
Initially, the incumbent Labour Party planned to introduce a narrowly focused AI bill shortly after taking office. That bill would have covered large language models (LLMs), such as ChatGPT, and included requirements for companies to submit their models for assessment by the UK AI Security Institute. The goal was to address the potential risks posed by increasingly sophisticated AI models.
The decision to delay the initial bill was attributed to the government's desire to align UK regulations with those of the US and to avoid rules that could deter AI companies from expanding their business in the UK. The current objective is to incorporate copyright provisions into a broader AI bill.
“We feel that we can use that vehicle to find a solution with copyright,” a government source told the Guardian. “We’ve been meeting with both creators and tech people, and there are some interesting ideas for the future. Once the data bill passes, the work will start in earnest.”
The government's stance on copyright has already led to conflict with the House of Lords over a separate data bill. That bill would allow AI companies to use copyrighted materials for training, provided that rights holders have not opted out. Notable artists such as Elton John, Paul McCartney and Kate Bush have backed campaigns against these changes.
Recently, Kyle wrote to lawmakers, pledging to form a cross-party parliamentary group focused on AI and copyright issues.
Public calls for stronger government oversight of AI
The news comes as a survey by the Ada Lovelace Institute and the Alan Turing Institute shows strong public support for government oversight of AI. The research found that 88% of UK citizens believe the government should have the authority to suspend AI products that pose serious risks. Furthermore, over 75% of respondents prefer government or regulatory oversight of AI safety rather than relying solely on private companies.
Meanwhile, a PwC survey highlights a significant increase in UK demand for AI skills despite challenges in the global labour market. It found that job openings requiring AI skills increased consistently from 2012 to 2022. Overall AI job listings declined slightly between 2023 and 2024, but the share of AI-specific roles increased significantly.