Regarding Geoffrey Hinton’s concerns about the dangers of artificial intelligence (‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years, 27 December), I believe that these concerns can best be alleviated through collaborative research on AI safety, with regulators at the table.
Currently, frontier AI is tested after development by “red teams” that do their best to elicit negative outcomes. This approach alone will never be enough. AI must be designed with safety and evaluation in mind, drawing on the expertise and experience of established safety-critical industries.
Hinton does not seem to believe that the existential threat posed by AI would be deliberately designed in, so why not mandate that it be deliberately designed out? While I do not agree with his assessment of the level of risk facing humanity, the precautionary principle suggests that we must act now.
In traditional safety-critical domains, the need to build physical systems such as aircraft limits the rate at which safety can be compromised. Frontier AI has no such physical “rate limiter” on deployment, and this is where regulation needs to play a role. Ideally, there would be a risk assessment before deployment, but current risk metrics are inadequate; for example, they do not take into account the application domain or the scale of deployment.
Regulators need the power to “recall” deployed models (and the large companies developing them need mechanisms to stop particular uses), and risk assessment needs to be supported so that it provides leading indicators of risk rather than relying on lagging indicators alone. In other words, governments need to focus on post-market regulation while supporting research that will give regulators the insight to implement pre-market regulation. This is difficult, but essential if Hinton is right about the level of risk facing humanity.
Professor John McDermid
Institute for Safe Autonomy, University of York