New white paper explores models and capabilities for international organizations that could help manage advanced AI opportunities and reduce risks
Increasing awareness of the global impact of advanced artificial intelligence (AI) has fueled public debate about the need for international governance structures to manage opportunities and mitigate the associated risks.
Much of the discussion draws on analogies with the International Civil Aviation Organization (ICAO) in civil aviation, CERN (the European Organization for Nuclear Research) in particle physics, the International Atomic Energy Agency (IAEA) in nuclear technology, and intergovernmental and multi-stakeholder organizations in many other sectors. Although these analogies are a useful starting point, the technology emerging from AI differs from aviation, particle physics, and nuclear technology.
Successful AI governance requires a deeper understanding of:
What specific benefits and risks need to be managed internationally? What governance functions do those benefits and risks require? And what kind of organization is best placed to provide those functions?
A new paper by collaborators at the University of Oxford, University of Montreal, University of Toronto, Columbia University, Harvard University, Stanford University, and OpenAI addresses these questions, exploring how international institutions could help manage the global impact of frontier AI development and ensure that the benefits of AI reach all communities.
The important role of international and multilateral organizations
Access to certain AI technologies could significantly increase prosperity and stability, but the benefits of these technologies may not be evenly distributed or focused on the needs of underrepresented communities and the developing world. Some groups may also be unable to fully benefit from advances in AI because of insufficient access to internet services, computing power, or machine learning training and expertise.
International collaboration could help address these issues by encouraging organizations to develop systems and applications that serve the needs of underserved communities, and by ameliorating the education, infrastructure, and economic obstacles that prevent such communities from taking full advantage of AI technologies.
Additionally, managing the risks posed by powerful AI capabilities may require international effort. Without appropriate safeguards, some of these capabilities (such as automated software development, chemistry and synthetic biology research, and text and video generation) could be misused to cause harm. Advanced AI systems can also fail in ways that are difficult to predict, creating risks of accidents with potentially international consequences if the technology is not deployed responsibly.
International and multi-stakeholder organizations working together could help advance AI development and deployment protocols that minimize such risks. For example, they could foster global agreement on the threats that different AI capabilities pose to society and set international standards for identifying and handling models with dangerous capabilities. International cooperation on safety research would also further our ability to make systems reliable and resistant to misuse.
Finally, in situations where countries have incentives (for example, from economic competition) to undercut each other's regulatory efforts, international organizations may help support and encourage best practices, and could even monitor compliance with standards.
Four potential institutional models
We consider four complementary institutional models for supporting global coordination and governance functions.
1. An intergovernmental Commission on Frontier AI could build international consensus on the opportunities and risks of advanced AI and on how they may be managed. This would increase public awareness and understanding of AI's prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and serve as a source of expertise for policymakers.

2. An intergovernmental or multi-stakeholder Advanced AI Governance Organization could help internationalize and coordinate efforts to address global risks from advanced AI systems by setting governance norms and standards and supporting their implementation. It might also perform compliance monitoring functions for any international governance regime.

3. A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it would help underserved societies benefit from cutting-edge AI technologies and facilitate international access to AI technology for safety and governance purposes.

4. An AI Safety Project could bring together leading researchers and engineers and provide them with access to computational resources, data, and advanced AI models for research on technical mitigations of AI risks. This would increase the scale, resourcing, and coordination of AI safety research and development.
Operational challenges
Many important open questions remain about the viability of these institutional models. A Commission on Frontier AI, for example, would face significant scientific challenges, given the extreme uncertainty about AI's trajectory and capabilities and the limited scientific research on advanced AI issues to date.
The rapid pace of AI progress and limited public-sector capacity on frontier AI issues may also make it difficult for an advanced AI governance organization to set standards that keep up with the risk landscape. The many difficulties of international coordination raise questions about how countries would be incentivized to adopt its standards or accept its oversight.
Similarly, the many obstacles to societies fully harnessing the benefits of advanced AI systems (and other technologies) may prevent a Frontier AI Collaborative from optimizing its impact. There may also be a difficult tension to manage between sharing the benefits of AI and preventing the proliferation of dangerous systems.
For an AI Safety Project, it will be important to carefully consider which elements of safety research are best conducted through collaboration rather than through individual companies' efforts. A Project could also struggle to secure adequate access from all relevant developers to the most capable models for conducting safety research.
Given the enormous global opportunities and challenges posed by the AI systems on the horizon, further discussion of these institutional models is needed among governments and other stakeholders.
We hope this research contributes to the growing international conversation about how to ensure advanced AI is developed for the benefit of humanity.