
The UK government has officially rebranded its AI watchdog, renaming the AI Safety Institute to the AI Security Institute. The move marks a clear shift from the government's previous risk-averse stance, prioritizing national security over broader AI-related societal harms.
The AI Security Institute will work with other government departments and agencies, including:

- The wider national security community
- The National Cyber Security Centre
- The Ministry of Defence's AI security research laboratory
Additionally, the Institute will stand up a new criminal misuse team, working jointly with the Home Office to study crime- and security-related AI threats.
Shifting AI policy
Technology Secretary Peter Kyle announced the new name at the Munich Security Conference on February 14, explaining that the rebrand reflects a focus on “serious AI risks with security implications.” A government press release stated that the institute “will not focus on bias or freedom of speech.”
This marks a significant departure from the institute's original mission. When it launched at the UK's first AI Safety Summit in November 2023, its stated goal was to explore all the risks, “from social harms such as bias and misinformation, to unlikely but extreme risks, such as humanity losing control of AI entirely.”
“The AI Security Institute’s work remains the same,” Kyle said. “But this new focus will ensure that our citizens, and those of our allies, are protected from those who would seek to use AI against our institutions, democratic values, and way of life.”
Key areas of focus include:
- Ensuring AI is not used to develop chemical and biological weapons
- Strengthening cybersecurity defenses against AI-driven attacks
- Combating AI-enabled crimes, including fraud and child sexual abuse
Earlier this month, the government announced plans to make it illegal to own AI tools designed to produce child sexual abuse material.
A shift toward pro-innovation policy
The UK’s approach to AI has changed dramatically since the Labour government came to power in July 2024. Under former Prime Minister Rishi Sunak, the government took a cautious approach, signing a voluntary AI code of conduct and publishing a white paper on AI regulation and risk assessment. Though intended to protect consumers, these policies were unpopular among tech giants.
However, current Prime Minister Keir Starmer appears to be taking a more business-friendly stance. Strict AI regulation could slow product rollouts for Google, Meta, and other major tech companies, and could drive investors away.
Aligning with US AI policy
Shortly after taking office, Kyle assured executives at Google, Microsoft, Apple, Meta, and other major tech players that the forthcoming AI bill would focus on large language models such as OpenAI’s ChatGPT. The bill would also turn the AI Safety Institute into an “arm’s-length” government agency.
In January 2025, Starmer released the AI Opportunities Action Plan, which puts innovation front and center with little mention of safety. Notably, he skipped the Paris AI Action Summit, where the UK also declined to sign a global pledge for “inclusive and sustainable” AI.
During his speech at that summit, US Vice President JD Vance derided European “overregulation,” saying that the international approach should foster the creation of AI technology, not strangle it. The UK denied that its decision was related to alignment with the US, but the move has signaled greater openness to Silicon Valley investment.
The EU adjusts AI regulations
Despite having signed the summit communiqué and enacted the controversial AI Act, the EU is also adjusting course: the European Commission has announced plans to withdraw 37 legislative proposals from its 2025 work programme, including key proposals on AI liability, patent licensing, and ePrivacy.
The decision suggests a shift in EU regulatory priorities, indicating that even European policymakers are responding to concerns from global technology leaders about overregulation.