A new report by the U.S.-China Economic and Security Review Commission recommends that “Congress establish and fund a Manhattan Project-like program dedicated to acquiring and competing in artificial general intelligence (AGI) capabilities.”
The AGI race is a suicide race. The proposed AGI Manhattan Project, and the fundamental misunderstanding underlying it, represents a growing threat to U.S. national security. Systems with greater general cognitive and problem-solving abilities than humans would, by definition, also be better than humans at AI research and development, and could therefore self-improve and replicate at alarming speed. The world’s leading AI experts agree that we have no way to predict or control such systems, and no reliable way to align their goals and values with our own. This is why the CEOs of OpenAI, Anthropic, and Google DeepMind joined a roster of top AI researchers last year in warning that AGI could cause human extinction. Touting AGI as a boon to national security flies in the face of this scientific consensus; even calling it a threat to national security is a shocking understatement.
While the report dangles benefits of AGI such as curing disease and reducing poverty, it betrays a deeper motive: the false hope that AGI will empower its creators. Indeed, the race with China to build the first AGI can be characterized as a “hopium war,” fueled by delusional hopes of control.
In a competitive race, there is no time to solve the open technical problems of control and coordination, and every incentive to cede decision-making and authority to the AI itself. The almost inevitable result would be an intelligence vastly superior to our own, one that is not only essentially uncontrollable, but that may itself take charge of the very systems that keep America safe and prosperous. Our critical infrastructure, including our nuclear and financial systems, would have little protection against it. As Nobel laureate and AI pioneer Geoffrey Hinton said last month, “Once artificial intelligence becomes smarter than us, they will take control.”
The report commits scientific fraud by suggesting that AGI is almost certainly controllable. More generally, invoking “national security” for such projects obscures the science and implications of this transformative technology, as the report’s own technical confusion demonstrates. It appears the Commission was misinformed, and that the report lacked sufficient input from AI experts. Rather than racing toward AGI and losing control of it, the United States should, and surely will, strengthen its national security by building controllable AI tools that bolster its industry, science, education, health care, and defense. This will secure American leadership for generations to come.
Max Tegmark
Director, Future of Life Institute
This content was first published on futureoflife.org on November 20, 2024.
About Future of Life Institute
Future of Life Institute (FLI) is a global nonprofit with a team of more than 20 full-time staff members in the United States and Europe. Since its founding in 2014, FLI has worked to steer transformative technology toward benefiting life and away from extreme large-scale risks. Learn more about our mission or explore our work.