As much of Silicon Valley races to develop god-like AI, Microsoft’s AI chief is trying to pump the brakes.
Mustafa Suleyman said on Saturday’s episode of the Silicon Valley Girl Podcast that artificial superintelligence shouldn’t merely be avoided; it should be treated as an “anti-goal”.
Artificial superintelligence, a hypothetical AI that would reason far beyond human capabilities, “doesn’t seem like a positive vision for the future,” Suleyman said.
“It would be very difficult to contain something like that or align it with our values,” he added.
Suleyman, who co-founded DeepMind before moving to Microsoft, said his team is instead trying to build a “humanist superintelligence” – one that serves human interests.
Suleyman also said it would be a mistake to grant AI anything resembling consciousness or moral status.
“These things don’t suffer. They don’t feel pain,” Suleyman said. “They’re just simulating a quality conversation.”
The debate over superintelligence
Suleyman’s comments come as other industry leaders race to build artificial general intelligence and, beyond it, superintelligence, with some predicting it could arrive within 10 years.
OpenAI CEO Sam Altman has repeatedly said that artificial general intelligence (AI that can reason like a human) is the company’s core mission. Altman said earlier this year that OpenAI is already looking beyond AGI to superintelligence.
“Superintelligent tools have the potential to vastly accelerate scientific discovery and innovation, far beyond what we could do alone, and thereby greatly increase wealth and prosperity,” Altman said in January.
Altman also said in a September interview that he would be very surprised if superintelligence did not emerge by 2030.
Demis Hassabis, co-founder of Google DeepMind, offered a similar timeline. He said in April that AGI could be achieved “within the next five to 10 years.”
“We’ll have a system that really understands everything around you in a very subtle and deep way and is built into your daily life,” he said.
Other leaders have voiced skepticism. Yann LeCun, Meta’s chief AI scientist, said it may still take “decades” to reach AGI.
“Most interesting problems scale extremely badly,” LeCun said at the National University of Singapore in April. “We can’t simply assume that more data and more compute will produce smarter AI.”