The launch of OpenAI's new GPT-5 model has prompted a warning from the Ada Lovelace Institute that "the gap between AI capabilities and the ability to govern them will widen."
The latest iteration of OpenAI's flagship ChatGPT product launched this week, with the company claiming it has now reached "PhD-level" intelligence.
Billed as a major advance in AI capabilities, the launch has sparked concern at the UK-based research institute.
According to the institute, GPT-5 shows the power and capability of AI technology increasing rapidly, while major questions about its safety, security, legality and impact on human work remain unanswered.
The UK government has held back on legislating for AI in the hope of spurring growth and avoiding the kind of backlash seen in the European Union over its own AI law, but the institute's research suggests public opinion favours regulation.
The institute found that 72% of UK citizens say laws and regulations would increase their comfort with AI, while 87% say it is important that the government or regulators have the power to stop the release of harmful AI systems.
"Nearly three years after the Bletchley AI Summit, the only party deciding whether such a system is safe enough to be released is the company itself," the institute said.
"Today, neither the government nor regulators have any meaningful authority to compel transparency from companies, require incident reporting, conduct safety testing, or force models to be withdrawn from the market if they are unsafe."