Former Google CEO Eric Schmidt co-authored a paper warning the US about the dangers of an AI "Manhattan Project." In the paper, Schmidt, Dan Hendrycks, and Alexandr Wang call for a more defensive approach.
Some of the biggest names in AI say that an AI "Manhattan Project," rather than protecting the US, could have a destabilizing effect on it.
The dire warning comes from former Google CEO Eric Schmidt, Center for AI Safety director Dan Hendrycks, and Scale AI CEO Alexandr Wang. They co-authored a policy paper published Wednesday entitled "Superintelligence Strategy."
In the paper, the tech titans urge the US to move away from an aggressive push to develop superintelligent AI, or AGI, which the authors say could provoke international retaliation. In particular, they write, China would not sit idle while the US works to realize AGI, and the race risks a loss of control.
The authors write that circumstances similar to the nuclear arms race that spawned the Manhattan Project, the secret initiative that culminated in the creation of the first atomic bomb, have developed around the AI frontier.
For example, in November 2024, a bipartisan congressional commission called for a "Manhattan Project-like" program dedicated to funding initiatives that would help the US beat China in the race to AGI. A few days before the authors published their paper, Energy Secretary Chris Wright said the country was already "at the start of a new Manhattan Project."
"A Manhattan Project assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it," the authors write. "What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
According to Schmidt, Hendrycks, and Wang, it's not just the government pushing AI progress forward. Private companies are pursuing their own "Manhattan Projects." Demis Hassabis, CEO of Google DeepMind, has said he loses sleep over the possibility of ending up like Robert Oppenheimer.
"A similar urgency is now evident in the global effort to lead in AI, with investment in AI training doubling every year for nearly a decade," the authors write. "Several 'AI Manhattan Projects' aiming to eventually build superintelligence are already underway, financed by many of the world's most powerful corporations."
The authors argue that the United States is already operating under conditions similar to mutually assured destruction. They write that further efforts to dominate the AI space could trigger retaliation from rival global powers.
Instead, the paper suggests, the US could benefit from taking a more defensive approach: not racing to build superweapons of its own, but deterring rivals' "destabilizing" AI projects through methods such as sabotage and cyberattacks.
To tackle the risks posed by rival states, rogue actors, and loss of control all at once, the authors lay out three strategies: deterring destabilizing AI projects through sabotage, restricting rogue actors' access to chips and "weaponizable AI systems," and guaranteeing access to AI chips through domestic manufacturing.
"Just as Cold War deterrence regimes did not mean the US acted purely defensively, the US must make the most of its technology leadership while maintaining strategic stability," Hendrycks told Business Insider. "Deterrence can discourage destabilizing AI projects, but competitiveness remains a critical factor."
Overall, Schmidt, Hendrycks, and Wang push for balance over what might be called a "move fast and break things" strategy. They argue that the US has an opportunity to step back from the urgency of an arms race and shift toward a more defensive posture.
"By methodically constraining the most destabilizing moves, states can channel AI toward unprecedented benefits rather than risk it becoming a catalyst of ruin," the authors write.