AI has skyrocketed to the top of the diplomatic agenda over the past few years.
And one of the main topics of discussion among researchers, technology executives and policymakers is how open source models (AI models that are freely available for anyone to use and modify) should be governed.
At the AI Action Summit in Paris earlier this year, Meta’s chief AI scientist Yann LeCun said he wants to see a world where open source platforms are trained “in a distributed fashion” across data centers around the world. Each data center would have access to its own data sources, which could remain confidential, yet “they contribute to a common model that essentially constitutes a repository of all human knowledge,” he said.
This repository would be larger than what any single entity, whether a country or a company, could assemble on its own. India, for example, may not be willing to hand over a body of knowledge covering all the languages and dialects spoken there to a tech company. But “they’re willing to contribute to training a big model, if it’s open source,” he said.
But to achieve that vision, “countries need to be really careful with regulations and laws,” he said, so that they do not hamper open source but instead support it.
Even in the case of closed systems, OpenAI CEO Sam Altman has said international regulation is important.
“I think there will come a time in the not-so-distant future, and we’re not talking decades and decades from now, when frontier AI systems are capable of causing significant global harm,” Altman said on the All-In podcast last year.
Altman said he believes these systems will “have negative impacts beyond the territory of one country,” and he wants to see them regulated by “looking at the most powerful systems and ensuring reasonable safety testing.”