Crosspost: This is a crosspost from Mark Brakel’s Substack
On Wednesday, Ivanka Trump reshared Leopold Aschenbrenner’s influential Situational Awareness essay.
Aschenbrenner’s essay created a stir in the AI policy bubble. Aschenbrenner argued that artificial general intelligence (AGI) will soon be built, that the US government should expect to take the lead in AGI development by 2028, and that the US should step up its efforts to win the race against China. The stakes are high, Aschenbrenner claimed: “If Xi Jinping obtains AGI first, the Torch of Freedom will not survive.” In my view, America’s national interests are far better served by a cooperative strategy toward China than by an adversarial one.
AGI may be uncontrollable
Aschenbrenner’s recommendation that the United States enter an AGI arms race with China only makes sense if that race can actually be won. Aschenbrenner himself has said that “reliably controlling AI systems that are much smarter than we are is an open technical problem” and that “failure could easily be catastrophic.” The CEOs of the major companies currently developing AGI, OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei, have all acknowledged that their technology could pose an existential threat to humanity (and not just to China). Leading AI researchers, including Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, express deep skepticism about our ability to reliably control AGI systems. If a U.S. race for AGI carries some chance of annihilating the entire human race, Americans included, it would be wise for the U.S. government to pursue global cooperation on limits to AGI development instead.
China understands its own national interests
Perhaps you believe the existential risks are small enough to be outweighed by the risk of permanent (technological) domination by China, or perhaps, like Aschenbrenner, you feel bullish about breakthroughs in our understanding of how to control superhuman AI systems. Even so, I don’t think this justifies an AI arms race.
In Aschenbrenner’s words, “Superintelligence will be the most powerful technology, and most powerful weapon, mankind has ever developed, conferring a decisive military advantage perhaps rivaled only by nuclear weapons.” Clearly, the international system would become deeply unstable if any existing superpower believed that a rival was about to gain a “decisive military advantage” over it. To head off a scenario in which the United States becomes a permanent hegemon, China and Russia would likely take preemptive military action. An AGI arms race could thus bring us to the brink of nuclear war, and this strikes me as a very strong argument for global cooperation over frenzied competition.
View from Beijing
It takes two to tango, and it would be foolish to pursue cooperation on AI if China presses ahead regardless. China certainly has its equivalents of Marc Andreessen and Yann LeCun, the West’s vocal, economically motivated evangelists of unbridled AI development. The Economist recently identified Zhu Songchun, head of the state-sponsored AGI development program, and Yin Hejun, the Minister of Science and Technology, as two key voices resisting any restraint.
Nevertheless, safety-minded voices seem to be winning so far. This summer, the Chinese AI Safety Network was officially launched with support from major universities in Beijing and Shanghai. Andrew Yao, the only Chinese winner of the Turing Award for advances in computer science, Xue Lan, chairman of the state’s expert committee on AI governance, and the former president of Chinese tech company Baidu have all warned that reckless AI development could threaten humanity. In June, Chinese President Xi Jinping sent a letter praising Andrew Yao’s work, and in July, Xi brought AI risks to the forefront at a meeting of the party’s central committee.

Cold shoulder?
Last November was a particularly promising month for U.S.-China cooperation on AI. On the first day of that month, representatives from the US and China literally shared a stage at the Bletchley Park AI Safety Summit in the UK. Two weeks later, Presidents Biden and Xi held a summit in San Francisco and agreed to open bilateral channels, specifically on AI issues. This nascent but fragile collaboration was on display again at the AI Safety Summit in South Korea in May.
It is clear that China and the United States are at odds over many issues, including the future of Taiwan, industrial policy, and export controls. Some issues, however, such as climate change, nuclear security, and AI safety, cannot be resolved within geopolitical blocs; they demand a global response. The moves countries make in the coming months could determine the trajectory of global AI: toward an AI arms race with a highly uncertain outcome, or toward some form of shared risk management.
Western countries (including the United States) have two upcoming opportunities to keep China at the table and elevate Beijing’s safety-minded voices: the AI Safety Institutes’ convening in San Francisco in November and the Paris AI Action Summit in February. A significant portion of both meetings will address safety benchmarking, evaluation, and corporate obligations. Some of these issues are deeply political, but others are not. Ensuring that AI systems remain under human control is as much a concern for China as it is for the West, and the AI safety institutes in particular can serve as a neutral forum where technical experts gather.