Foundation models, also known as general-purpose AI, are large models trained on vast amounts of data that serve as a starting point, or “foundation,” for developing AI systems across domains. These models can power a wide range of applications, from content generation to interactive systems that can perform complex digital tasks autonomously. Despite their broad utility, however, these general-purpose AI systems can pose significant risks to humanity if not developed, deployed, and used responsibly.
Ensuring the safety of these systems requires industry leaders, governments, and civil society to work together on practical guidance and regulation that can address these risks. A year ago, the UK government hosted the first global summit on AI safety. Ahead of that summit, PAI released guidance on the safe deployment of foundation models, laying the groundwork for the governance of general-purpose AI. The guidance provides a tailored set of responsible practices for safely developing and deploying models, customized to the functionality and release approach of the model a provider is building.
Over the past year, the EU AI Act has set a global precedent for comprehensive AI regulation. The newly established European AI Office is now taking a multi-stakeholder approach similar to PAI’s, inviting model providers, downstream developers, civil society organizations, and academic experts to help develop a Code of Practice for general-purpose AI models. The first draft of the Code, prepared by independent experts, was published earlier this month, and drafting will continue through April 2025.
Drawing on lessons from the multi-stakeholder process behind our 2023 Model Deployment Guidance and our more recent expanded guidance covering the entire AI value chain, we offer three important considerations for the European AI Office and other policymakers developing foundation model guidelines:
Create iterative guidance:
Policymakers should be prepared to revisit and iterate on guidance. Like the October 2023 version of our Model Deployment Guidance, which opened with a public comment period, guidelines should be refined and updated as the foundation models in widespread use evolve. Feedback from our public comment period emphasized that as these models become more widely available and adaptable, responsibility for safe development and deployment extends beyond model providers alone. As a result, we were able to issue expanded guidance specific to open foundation models.

Tailor guidance to specific models and release types:
AI is not monolithic: different models require uniquely tailored approaches, and not all foundation models warrant the same level of oversight. Guidance should be customizable and reflect these nuances, for example by providing clear recommendations for frontier models (paradigm-shifting general-purpose models that advance the current state of the art). Research releases that demonstrate a new technique or concept do not require the extensive guardrails of large-scale deployments that can affect millions of users. A model’s capabilities, and how it is released, can significantly shape its potential societal impact. Our guidance reflects this by providing customized recommendations across a range of scenarios, from frontier model releases that require extensive safety precautions to closed deployments where models are integrated directly into products without public release. The latter scenario is likely to become increasingly common as some companies follow the pattern seen with recommender systems and choose in-house deployment. That’s why we developed a custom guidance generator. To make this even more accessible, we have published guidance checklists for three scenarios, each ensuring a clear governance approach:

Frontier x Restricted: comprehensive guidelines for paradigm-shifting foundation models that require extensive safeguards
Advanced x Open: a decentralized approach with an emphasis on collaborative value chain governance
Frontier x Closed: guidelines focused on internal deployment, where models are integrated directly into products without public exposure

Extend governance beyond model providers:
Following the public release of our initial guidance, we published expanded recommendations in 2024 that consider the roles and responsibilities of key actors across the open foundation model value chain. While model providers play an important role, effective governance must also address the model adapters that customize these models, the hosting services that make them accessible, and the application developers that build end-user products. Our value chain analysis shows how each of these actors contributes to, and shares responsibility for, the safe development and deployment of AI.
A multi-stakeholder process is not only beneficial, but essential to developing a robust governance framework that can effectively shape responsible AI policies.
More than 40 global institutions, including model providers, civil society organizations, and academic institutions, participated in developing PAI’s Model Deployment Guidance. This work continues, and efforts are now underway to map and make sense of the emerging policy landscape for foundation models. Our recent report on Aligning AI Transparency Policy provides an in-depth analysis of eight key foundation model policy frameworks, with a particular focus on documentation requirements, and offers recommendations on how they can foster interoperability.
Looking ahead, our focus on agentic AI, systems that can act autonomously on behalf of users, will be an important next step in our work. As these systems become more sophisticated, governance frameworks must evolve to ensure they are deployed reliably and ethically. Understanding and addressing the unique challenges posed by agentic AI is critical to the future of human-AI interaction. To stay up to date with our work in this area, subscribe to our newsletter.