Joint research on AI safety is essential

January 9, 2025 (updated February 13, 2025)

Regarding Geoffrey Hinton’s concerns about the dangers of artificial intelligence (‘The Godfather of AI’ shortens the odds that the technology will wipe out humanity over the next 30 years, December 27), I believe these concerns about AI safety can best be addressed through collaborative research, with regulators at the table.

Currently, frontier AI is tested after development by “red teams” that do their best to elicit negative outcomes. This approach alone will never be enough; AI must be designed with safety and evaluation in mind. That can be achieved by drawing on the established expertise and experience of safety-critical industries.

Hinton does not seem to believe that the existential threat posed by AI would be intentionally engineered, so why not require that this scenario be deliberately designed out? While I don’t share his view of the level of risk facing humanity, the precautionary principle suggests we must act now.

In traditional safety-critical sectors, the need to build physical systems such as aircraft limits the pace of deployment, and hence the rate at which safety can be affected. Frontier AI has no such physical “rate limiter” on deployment, and this is where regulation needs to play a role. Ideally there would be a risk assessment before deployment, but current risk metrics are inadequate: they take no account of, for example, the application area or the scale of deployment.

Regulators need the power to “recall” deployed models (and the large companies developing models need mechanisms in place to stop particular uses), and there is a need to support work on risk assessment that provides leading indicators of risk, rather than relying on lagging indicators alone. In other words, governments need to focus on post-market regulation while supporting research that will give regulators the insights needed to implement pre-market regulation. This is difficult, but essential if Hinton is right about the level of risk facing humanity.
Professor John McDermid
University of York Institute for Safety and Autonomy

