The former AI Safety Institute is undergoing major changes, as the government also widens its cooperation with the AI industry.

The revamp of the former AI Safety Institute brings a narrowed remit, a new name and a new partnership with a US-based start-up, with work related to bias and freedom of speech dropped.
Based in the Department for Science, Innovation and Technology (DSIT), the organisation is now known as the AI Security Institute, meaning it at least retains the "AISI" initialism by which it has come to be known.
According to DSIT, the new name reflects a focus on "serious AI risks with security implications, including how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyberattacks, and how it can enable crimes such as fraud and child sexual abuse".
As part of its updated remit, the institute will work more closely with other parts of government. According to DSIT, this collaboration will focus on assessing "the risks posed by frontier AI".
AISI's newly created Criminal Misuse Team will work with the Home Office to "conduct research into a range of crime and security issues that could harm British citizens".
Among the areas no longer part of the lab's focus is the potential impact of AI on bias and freedom of speech.
DSIT Secretary of State Peter Kyle said: "The changes I am announcing today represent the logical next step in how we approach responsible AI development. The work of the AI Security Institute remains the same, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would seek to use AI against our institutions, democratic values and way of life. The main job of any government is ensuring its citizens are safe and protected, and I am confident the expertise our institute can bring to bear will ensure the UK is in a stronger position than ever against those who would use this technology against us."
Alongside the revamped AISI comes a new memorandum of understanding between the UK government and Anthropic, an AI safety and research outfit founded in San Francisco in 2021 as a public benefit corporation. The company created the AI assistant Claude.
The company will work with the government's newly created sovereign AI unit, which was set out in its recent response to the AI Opportunities Action Plan. That document recommended that Whitehall establish a new institution to work directly with AI companies through partnerships and investments.
Engagement between the unit and Anthropic will include "sharing insights on how AI can transform public services and improve the lives of citizens, and using this transformative technology to drive new scientific breakthroughs".
Dario Amodei, CEO of Anthropic, said: "We look forward to exploring how Anthropic's AI assistant Claude could help strengthen public services, with the aim of discovering new ways to make UK residents' access to vital information and services more efficient. We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure safe deployment."
The government is already working with OpenAI, the creator of ChatGPT, which is supporting the development of a new government chatbot tool: GOV.UK Chat. The technology is currently in beta, allowing citizens to use the automated tool to ask questions on business-related topics.
Following the Anthropic partnership, DSIT has indicated that the government "will look to secure further agreements with leading AI companies".