See FLI’s work on the AI Safety Summits, including our recommendations for the upcoming AI Action Summit and our engagement with previous summits.
Technological background
AI’s recent development, exemplified by the o3 model from the US company OpenAI, shows an acceleration in capabilities. Recent benchmarks (ARC-AGI, Codeforces, GPQA) indicate that the latest models exceed human experts in many important domains. Combined with the increasing commercialization of autonomous AI agents, this rapid evolution creates general risks that demand urgent international action.
History of the international AI summits
Achievements of the previous summits
Previous summits delivered: the development and presentation of the (interim) International Scientific Report on the Safety of Advanced AI, sharing scientific understanding of the risks posed by general-purpose AI systems; the Frontier AI Safety Commitments, under which companies developing these systems established minimum standards by defining the risks their general-purpose AI systems would pose absent mitigations; and a role for governments in safety testing, through the creation of AI Safety Institutes and an international network connecting them.
The Paris Summit’s ambitions and expected achievements
At this stage, the Summit week is planned as follows:
February 6–7: Science Days at Saclay, providing diplomats with a scientific overview as a basis for discussions at the summit on February 10–11.
February 10–11: AI Action Summit.
February 10: round tables reserved for leaders.
February 11: heads of state and government gather at the Grand Palais, with a parallel business event at Station F.
The following deliverables can be expected to be announced:
The creation of an AI Foundation to equip developing countries with open-source AI tools based on less powerful AI systems. The presentation of 35 “convergence challenges” showcasing the effects of current AI systems in sectors such as healthcare and climate change. A multilateral agreement on the impact of this technology, to be signed at the end of the summit. The completed International AI Safety Report: 100 independent AI experts from around the world will release the first International AI Safety Report in Paris. Backed by 30 countries, the OECD, the United Nations, and the EU, the report summarizes AI’s capabilities and risks, and how those risks can be mitigated. It will help further a shared understanding of the risks posed by general-purpose AI systems.
For more information, please see the official summit webpage.
Contact:
Ima Vero, Future of Life Institute (FLI)
ima@futureoflife.org | +33 6 28 73 89 64
This content was first published on futureoflife.org on January 31, 2025.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit organization with a team of more than 20 full-time staff across the United States and Europe. Since its founding in 2014, FLI has worked to steer the development of transformative technologies toward benefitting life. Find out more about our mission or explore our work.