The open source community has shown how to scale AI across complex computing infrastructure using tools like TRL, TorchForge, and verl. But compute is only one side of the coin. The other side is the developer community: the people and tools that enable agentic systems. That’s why Meta and Hugging Face are partnering to launch the OpenEnv Hub, a shared open community hub for agent environments.
An agent environment defines everything an agent needs to perform its task, including tools, APIs, credentials, and execution context, bringing clarity, safety, and sandboxed control to agent behavior.
These environments can be used for both training and deployment, and they serve as the foundation for scalable agent development.
The problem
Modern AI agents can operate autonomously across thousands of tasks. However, a large language model alone is not sufficient to actually perform these tasks: the model needs access to the right tools. It’s not reasonable (or safe) to expose millions of tools directly to your model. Instead, we need an agent environment: a safe, semantically clear sandbox that defines exactly what a task needs. These environments handle the following important details (sketched in code after the list):
- Clear semantics about what a task requires
- Sandboxed execution and safety guarantees
- Seamless access to certified tools and APIs
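To make that contract concrete, here is a minimal sketch of the sandbox idea in Python. All names here (`Action`, `Observation`, `ToyEnv`) are illustrative, not the OpenEnv API: the point is that the environment declares a small, explicit tool surface, and everything the agent does goes through `step()`.

```python
# A minimal sketch of the sandbox contract (all names are illustrative,
# not the OpenEnv API): the environment declares an explicit tool surface,
# and every agent action is mediated by step().
from dataclasses import dataclass

@dataclass
class Action:
    tool: str       # which exposed tool to invoke
    argument: str   # payload for that tool

@dataclass
class Observation:
    text: str       # what the agent sees after acting
    reward: float   # task feedback
    done: bool      # whether the episode is over

class ToyEnv:
    """A sandbox that exposes exactly one tool: echo."""

    def reset(self) -> Observation:
        return Observation(text="Say something.", reward=0.0, done=False)

    def step(self, action: Action) -> Observation:
        if action.tool != "echo":
            # Anything outside the declared tool surface is rejected.
            return Observation(text="unknown tool", reward=-1.0, done=True)
        return Observation(text=action.argument, reward=1.0, done=True)

env = ToyEnv()
obs = env.reset()
obs = env.step(Action(tool="echo", argument="hello"))
print(obs.text, obs.reward)  # -> hello 1.0
```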
The solution
To power this next wave of agent development, the PyTorch team at Meta and Hugging Face are partnering to launch an environments hub: a shared space where developers can build, share, and explore OpenEnv-compatible environments for both training and deployment. The diagram below shows how OpenEnv fits into the new post-training stack being developed at Meta, with ongoing integrations into other libraries such as TRL, SkyRL, and Unsloth.
Starting next week, developers will be able to:
- Visit Hugging Face’s new environments hub, seeded with some initial environments.
- Interact directly with an environment as a human agent.
- Run models against the tasks an environment defines.
- Inspect how an environment defines the tools it exposes and its observations.

Every environment uploaded to the Hub that complies with the OpenEnv specification automatically gets this functionality, enabling quick validation and iteration before a full RL training run.
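As an example, driving an environment from code looks roughly like this. The snippet follows the Echo example in the OpenEnv repository at the time of writing; the specific names (`EchoEnv`, `EchoAction`, `from_docker_image`, `observation.echo`) are assumptions that may change as the 0.1 spec evolves.

```python
# Connect to a local, Docker-sandboxed instance of the Echo example
# environment and run one tiny episode. Names follow the OpenEnv repo's
# Echo example as of this writing and may change with the 0.1 spec.
from envs.echo_env import EchoEnv, EchoAction

client = EchoEnv.from_docker_image("echo-env:latest")  # spin up the sandbox

result = client.reset()                                  # start an episode
result = client.step(EchoAction(message="Hello, hub!"))  # act inside it
print(result.observation.echo)

client.close()  # tear down the container
```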
In parallel, we are releasing the OpenEnv 0.1 specification (RFC) to gather feedback from the community and help shape the standard.
RFCs
In its current state, the repository lets environment creators build environments using the step(), reset(), and close() APIs (part of the RFCs below); you can see examples of how to create such an environment here, and a minimal sketch follows the RFC list. Environment users can work with local Docker-based instances of all environments already available in the repository. The following RFCs are under consideration:
- RFC 001: Establishes the core architecture: how components such as environments, agents, and tasks relate to one another.
- RFC 002: Proposes the basic environment interface, along with packaging, isolation, and communication with the environment.
- RFC 003: Proposes encapsulating MCP tools behind the environment abstraction and its isolation boundaries.
- RFC 004: Extends tool support with a uniform action schema that covers tool-invoking agents and the CodeAct paradigm.
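As a rough illustration of the interface in RFC 002, an environment author implements the step()/reset()/close() lifecycle over typed actions and observations. The types below are hypothetical stand-ins, not the actual OpenEnv base classes; see the examples linked above for the real interfaces and packaging.

```python
# A sketch of what an environment author implements under RFC 002:
# the step()/reset()/close() lifecycle over typed actions/observations.
# All type names are hypothetical stand-ins for the real base classes.
from dataclasses import dataclass
import random

@dataclass
class GuessAction:
    guess: int

@dataclass
class GuessObservation:
    hint: str
    reward: float
    done: bool

class GuessNumberEnv:
    """Toy task: guess a hidden number in 1..10 from higher/lower hints."""

    def reset(self) -> GuessObservation:
        self._target = random.randint(1, 10)
        return GuessObservation(hint="Guess a number from 1 to 10.",
                                reward=0.0, done=False)

    def step(self, action: GuessAction) -> GuessObservation:
        if action.guess == self._target:
            return GuessObservation(hint="Correct!", reward=1.0, done=True)
        hint = "higher" if action.guess < self._target else "lower"
        return GuessObservation(hint=hint, reward=0.0, done=False)

    def close(self) -> None:
        # Release any sandbox resources (a no-op for this toy task).
        pass
```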
Use cases
- RL post-training: Gather environments from collections and use them to train RL agents with TRL, TorchForge + Monarch, verl, and others (a rollout-collection sketch follows this list).
- Create an environment: Build your own environment, ensure it interoperates with the popular RL tools in the ecosystem, share it with collaborators, and more.
- Replicate SOTA methods: Easily replicate methods like FAIR’s Code World Model by integrating agentic coding and software engineering environments.
- Deploy: Create an environment, train on it, and then use that same environment for inference across your pipelines.
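For the RL post-training use case, the training loop only needs the generic environment surface. Below is a hedged sketch of rollout collection under the toy Observation shape (text/reward/done) from the earlier sketches; `policy` and `make_env` are placeholders, and how the collected triples feed into TRL, TorchForge, or verl depends on the trainer you pick.

```python
# A hedged sketch of RL post-training on top of any step()/reset()/close()
# environment. `policy` and `make_env` are placeholders; the (observation,
# action, reward) triples would be handed to your RL trainer of choice.
def collect_rollouts(make_env, policy, num_episodes: int = 8):
    triples = []
    for _ in range(num_episodes):
        env = make_env()
        obs = env.reset()
        done = False
        while not done:
            action = policy(obs.text)     # the model proposes an action
            next_obs = env.step(action)   # the sandbox executes it safely
            triples.append((obs.text, action, next_obs.reward))
            obs, done = next_obs, next_obs.done
        env.close()
    return triples
```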
What’s next
This is just the beginning. We’re integrating the OpenEnv Hub with Meta’s new TorchForge RL library and working with other open source RL projects like verl, TRL, and SkyRL to extend compatibility. Join us at the PyTorch Conference on October 23rd for live demos and tutorials of the spec, and stay tuned for future community meetups on environments, RL post-training, and agent development.
👉 Explore Hugging Face’s OpenEnv hub and start building environments that power the next generation of agents.
👉 Check out the 0.1 spec implemented in the OpenEnv project. We welcome your ideas and contributions to make it better!
👉 Join us on Discord to discuss RL, environments, and agent development with the community
👉 Try it yourself: we’ve created a comprehensive notebook that walks you through an end-to-end example. The notebook describes the abstractions we built, along with examples of how to use the existing integrations and add your own, and the packages can also be installed from PyPI via pip. Try it in Google Colab.
👉 Check out our supported platforms – Unsloth, TRL, Lightning.AI
Let’s build the future of open, agentic AI together, one environment at a time 🔥!