Panda vs. Eagle – Future of Life Institute

By Mark Brakel | November 22, 2024

Crosspost: This is a crosspost from Mark Brakel’s Substack

On Wednesday, Ivanka Trump reshared Leopold Aschenbrenner’s influential essay, Situational Awareness.

Aschenbrenner’s essay created a stir in the AI policy bubble. He claimed that artificial general intelligence (AGI) will soon be built, that the US government can be expected to take the lead in AGI development by 2028, and that the US should step up its efforts to win the race against China. The stakes are high, Aschenbrenner argued: “the Torch of Freedom will not survive Xi Jinping obtaining AGI first.” In my view, America’s national interests are far better served by a cooperative strategy toward China than by an adversarial one.

AGI may be uncontrollable

Aschenbrenner’s recommendation that the United States enter an AGI arms race with China only makes sense if the race can actually be won. Aschenbrenner himself admits that “reliably controlling AI systems that are much smarter than we are is an open technical problem” and that “failure could easily be catastrophic.” The CEOs of the major companies currently developing AGI — OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei — all agree that their technology poses an existential threat to humanity (and not just to China). Leading AI researchers, including Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, express deep skepticism about our ability to reliably control AGI systems. If a US race for AGI carries some chance of annihilating the entire human race, including all Americans, it would be wise for the US government to pursue global cooperation around limits on AI development instead.

China understands its own national interests

But perhaps you believe that the existential risks are small enough to be outweighed by the risk of permanent Chinese (technological) domination, or, like Aschenbrenner, you feel very bullish about breakthroughs in our understanding of what is needed to control a superhuman AI system. Even so, I don’t think this justifies an AI arms race.

In Aschenbrenner’s words, “Superintelligence will be the most powerful technology and most powerful weapon ever developed by humanity,” conferring “a decisive military advantage, perhaps rivaling nuclear weapons.” Clearly, the international system would become unstable if any existing superpower believed that a rival power was about to gain a “decisive military advantage” over it. To prevent a scenario in which the United States becomes a permanent hegemon, China and Russia would likely launch preemptive military action. An AGI arms race could thus bring us to the brink of nuclear war, which seems a very strong argument for global cooperation over frenzied competition.

The view from Beijing

It takes two to tango, and it would be foolish to pursue cooperation on AI if China presses ahead regardless. China certainly has its equivalents of Marc Andreessen and Yann LeCun, the West’s vocal, economically motivated evangelists of unbridled AI development. The Economist recently identified Zhu Songchun, head of a state-sponsored AGI development program, and Yin Hejun, the Minister of Science and Technology, as two key voices resisting any restraint.

Nevertheless, safety-minded voices seem to be winning so far. This summer, the Chinese AI Safety Network was officially launched with support from major universities in Beijing and Shanghai. Andrew Yao, the only Chinese winner of the Turing Award for advances in computer science, Xue Lan, chairman of the state expert committee on AI governance, and a former president of Chinese tech company Baidu have all warned that reckless AI development could threaten humanity. In June, Chinese President Xi Jinping sent a letter praising Andrew Yao’s work, and in July Xi brought AI risks to the forefront at a meeting of the party’s Central Committee.

Cold shoulder?

Last November was a particularly promising month for US-China cooperation on AI. On the first day of that month, representatives from the US and China literally shared the same stage at the Bletchley Park AI Safety Summit in the UK. Two weeks later, Presidents Biden and Xi held a summit in San Francisco and agreed to open bilateral channels, specifically on AI issues. This nascent but fragile collaboration was then reaffirmed at the AI safety summit in South Korea in May.

It is clear that China and the United States are at odds over many issues, including the future of Taiwan, industrial policy, and export controls. However, some issues, such as climate change, nuclear security, and AI safety, cannot be resolved within geopolitical blocs; they demand a global response. The moves countries make in the coming months could determine the trajectory of global AI: towards an AI arms race with a highly uncertain outcome, or towards some form of shared risk management.

Western countries (including the United States) have two opportunities to keep China at the table and elevate Beijing’s safety-minded voices: the AI Safety Institutes’ San Francisco convening in November and the Paris AI Action Summit in February. A significant portion of both gatherings will address safety benchmarking, evaluation, and corporate obligations. Some of these issues are certainly political, but others are not. Ensuring that AI systems remain under human control is as much a concern for China as it is for the West, and the safety institutes in particular can serve as a neutral forum where technical experts gather.
