Supercharging OSS robotics learning

By versatileai | October 24, 2025 | 9 min read

We’re excited to announce a series of significant advances across LeRobot designed to make open source robot learning more powerful, scalable, and easier to use than ever before. From revamped datasets to versatile editing tools, new simulation environments, and a groundbreaking plugin system for hardware, LeRobot is continually evolving to meet the demands of cutting-edge embodied AI.

TL;DR

LeRobot v0.4.0 delivers a major upgrade in open source robotics, introducing scalable Datasets v3.0, powerful new VLA models such as PI0.5 and GR00T N1.5, and a new plugin system for easier hardware integration. This release also adds support for LIBERO and Meta-World simulations, simplified multi-GPU training, and a new Hugging Face Robot Learning Course.

Datasets: Prepare for the next wave of large-scale robot learning

LeRobotDataset v3.0 completely overhauls our dataset infrastructure with a new chunked episode format and streaming capabilities. It is built to handle large datasets such as OXE (Open X-Embodiment) and DROID, delivering unparalleled efficiency and scalability.

What’s new in dataset v3.0?

  • Chunked episodes for massive scale: Our new format supports OXE-level (>400 GB) datasets, enabling unprecedented scalability.
  • Efficient video storage + streaming: Enjoy faster load times and seamless streaming of your video data.
  • Unified Parquet metadata: Say goodbye to scattered JSON! All episode metadata is now stored in a unified, structured Parquet file for easier management and access.
  • Faster loading and improved performance: Dataset initialization time is significantly reduced and memory usage is more efficient.

We also provide conversion scripts to easily migrate existing v2.1 datasets to the new v3.0 format, ensuring a smooth transition. Please see our previous blog post for more information. Open source robotics continues to improve!
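
As a rough illustration of how a v3.0 dataset is typically consumed in Python, here is a minimal sketch; the import path and attribute names are assumptions based on current LeRobot conventions, so check them against the documentation for your installed version.

# Minimal sketch: loading a LeRobotDataset from the Hugging Face Hub.
# The import path and attribute names are assumptions; verify against
# the LeRobot docs for your installed version.
from lerobot.datasets.lerobot_dataset import LeRobotDataset  # path is an assumption

dataset = LeRobotDataset("lerobot/pusht")          # download (or stream) by repo id
print(dataset.num_episodes, dataset.num_frames)    # basic dataset statistics

# Each item is a dict of tensors (camera frames, robot state, actions, ...).
sample = dataset[0]
print(sample.keys())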

New feature: Dataset editing tools!

Working with LeRobot datasets is now much easier. We’ve introduced a powerful set of utilities for flexible dataset editing.

With the new lerobot-edit-dataset CLI, you can:

  • Remove specific episodes from an existing dataset.
  • Split a dataset by fraction or by episode index.
  • Add or remove features.
  • Merge multiple datasets into one unified set.

lerobot-edit-dataset \
  --repo_id lerobot/pusht_merged \
  --operation.type merge \
  --operation.repo_ids "('lerobot/pusht_train', 'lerobot/pusht_val')"

lerobot-edit-dataset \
  --repo_id lerobot/pusht \
  --new_repo_id lerobot/pusht_after_deletion \
  --operation.type delete_episodes \
  --operation.episode_indices "(0, 2, 5)"

These tools streamline your workflow and allow you to curate and optimize your robotic datasets like never before. See the documentation for more details.

Simulation environment: Expand your training ground

We are continually expanding LeRobot’s simulation capabilities to provide richer and more diverse training environments for robot policies.

LIBERO support

LeRobot now officially supports LIBERO, one of the largest open benchmarks for vision-language-action (VLA) policies, boasting over 130 tasks. This is a major step toward making LeRobot a go-to evaluation hub for VLAs, enabling easy setup and integration for evaluating VLA policies.

To get started, check out the LIBERO dataset and documentation.

Meta-World integration

We have integrated Meta-World, the premier benchmark for testing multitask and generalization abilities in robot manipulation, with over 50 diverse manipulation tasks. This integration, together with the standardized use of gymnasium ≥ 1.0.0 and mujoco ≥ 3.0.0, ensures deterministic seeding and a robust simulation foundation.
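
Because both benchmarks sit behind the standard gymnasium interface, a rollout loop looks the same regardless of which one you use. The sketch below uses the gymnasium ≥ 1.0 API with a placeholder environment id and a random policy purely for illustration; the actual ids registered by LeRobot are listed in the documentation.

# Generic gymnasium >= 1.0 rollout loop. The environment id is a placeholder,
# not the actual id registered by LeRobot's LIBERO/Meta-World wrappers.
import gymnasium as gym

env = gym.make("MyMetaWorldTask-v0")   # hypothetical id; see the docs for real ones
obs, info = env.reset(seed=0)          # deterministic seeding

done = False
while not done:
    action = env.action_space.sample()  # replace with policy.select_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated

env.close()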

Train your policy using the Meta-World dataset today.

Codebase: A powerful tool that anyone can use

We make robot control more flexible and accessible, enabling new possibilities for data collection and model training.

New pipeline for data processing

Getting data from a robot to a model (and back again) is hard. Raw sensor data, joint positions, and language instructions don’t match what the AI model expects: the model requires normalized, batched tensors on the appropriate device, while the robot hardware requires specific action commands.

We’re excited to introduce Processor, a new modular pipeline that acts as a universal translator for your data. Think of it as an assembly line, where each ProcessorStep handles one specific job (normalizing text, tokenizing, moving data to the GPU, etc.).

You can chain these steps into powerful pipelines for complete control over your data flow. To make things easier, we’ve also created two specialized types:

  • PolicyProcessorPipeline: Built for models. Processes batched tensors for high-performance training and inference.
  • RobotProcessorPipeline: Built for hardware. Processes individual data points (such as single observations or actions) for real-time robot control.

obs = robot.get_observation()
obs_processed = preprocess(obs)
action = model.select_action(obs_processed)
action_processed = postprocess(action)
robot.send_action(action_processed)

This system makes it easy to connect any policy to any robot, ensuring that data is always in perfect format every step of the way. For more information, please see the Introduction to Processors document.
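
If the assembly-line metaphor still feels abstract, the toy sketch below chains a few plain Python callables into a single preprocessing function. It mirrors the concept only; the real ProcessorStep and pipeline classes in LeRobot have their own interfaces, so treat every name here as hypothetical.

# Toy illustration of the assembly-line idea behind processor pipelines.
# These classes and functions are hypothetical stand-ins, not the LeRobot API.
from typing import Callable

class ToyPipeline:
    def __init__(self, steps: list[Callable[[dict], dict]]):
        self.steps = steps  # each step transforms the data dict and passes it on

    def __call__(self, data: dict) -> dict:
        for step in self.steps:
            data = step(data)
        return data

def normalize_state(data: dict) -> dict:
    data["state"] = [x / 100.0 for x in data["state"]]  # toy normalization
    return data

def add_task_text(data: dict) -> dict:
    data["task"] = "pick up the cube"  # e.g., attach the language instruction
    return data

preprocess = ToyPipeline([normalize_state, add_task_text])
print(preprocess({"state": [50.0, 25.0]}))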

Multi-GPU training made easy

Training robot policies at scale is now significantly faster. We’ve integrated Accelerate directly into the training pipeline, making it easy to scale your experiments across multiple GPUs with a single command.

accelerate launch \
  --multi_gpu \
  --num_processes=$NUM_GPUS \
  $(which lerobot-train) \
  --dataset.repo_id=${HF_USER}/my_dataset \
  --policy.repo_id=${HF_USER}/my_trained_policy \
  --policy.type=$POLICY_TYPE

Whether you’re fine-tuning policies or running large-scale experiments, LeRobot now handles the complexities of distributed training for you. This can significantly reduce training time: roughly cutting it in half with two GPUs, to about a third with three GPUs, and so on.

Check out our documentation to accelerate your robot’s learning.

Policies: Unleashing open-world generalization

(GR00T demo video)

PI0 and PI0.5

In a major milestone for open source robotics, we have integrated the pi0 and pi0.5 policies from Physical Intelligence into LeRobot. These vision-language-action (VLA) models represent a major advance toward open-world generalization in robotics. But what makes π₀.₅ innovative?

  • Open-world generalization: Designed to adapt to entirely new environments and situations, generalizing across physical, semantic, and environmental levels.
  • Co-training on heterogeneous data: Learns from diverse combinations of multimodal web data, verbal instructions, subtask commands, and multi-environment robot data.
  • Physical Intelligence collaboration: We deeply appreciate the groundbreaking work of the Physical Intelligence team.

You can find the models on the Hugging Face Hub: pi0.5_base, pi0_base, and the corresponding variants fine-tuned on LIBERO. For more information, visit Physical Intelligence’s research page.
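
For a rough idea of how one of these policies can be pulled from the Hub, here is a minimal sketch; the import path, class name, and exact repo id are assumptions based on LeRobot’s usual conventions, so double-check them against the linked model cards and docs.

# Minimal sketch: loading pi0 from the Hugging Face Hub with LeRobot.
# The import path, class name, and repo id are assumptions; verify them
# against the model card and your installed LeRobot version.
from lerobot.policies.pi0.modeling_pi0 import PI0Policy  # path is an assumption

policy = PI0Policy.from_pretrained("lerobot/pi0_base")   # repo id is an assumption
policy.eval()

# At inference time, a preprocessed observation batch (images, robot state,
# language instruction) goes in and an action comes out:
# action = policy.select_action(batch)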

GR00T N1.5

In another exciting development, and thanks to a great collaboration with the NVIDIA Robotics team, we have integrated NVIDIA’s GR00T N1.5 into LeRobot. This open foundation model powers generalized robot reasoning and skills. As a cross-embodiment model, it takes multimodal inputs (e.g., language and images) to perform complex manipulation tasks in diverse environments, representing another major leap forward in generalist robotics. But what makes GR00T N1.5 so transformative?

  • Generalized reasoning and skills: Designed as a cross-embodiment foundation model, GR00T N1.5 has improved language-following ability and excels at generalized reasoning and manipulation tasks.
  • Augmented heterogeneous training: Learns from large datasets that combine real-world captured humanoid data, synthetic data generated with NVIDIA Isaac GR00T Blueprints, and internet-scale video data.
  • NVIDIA collaboration: We are excited to partner with the NVIDIA team to bring this cutting-edge model to the open source LeRobot community.

The model is on the Hugging Face Hub: GR00T-N1.5-3B. For more information, check out the NVIDIA research page and the official GitHub repository.

Integrating these policies natively into LeRobot is a huge step toward making robot learning as open and reproducible as possible. Try them now, share your journey, and let’s advance the frontiers of embodied AI together!

Robots: A new era of hardware integration with plug-in systems

Big news for hardware enthusiasts: we’ve launched an all-new plugin system that revolutionizes how you integrate LeRobot with third-party hardware. Connecting robots, cameras, and teleoperators is now as easy as a pip install and requires no changes to the core library.

Main benefits

  • Extensibility: Develop and integrate custom hardware in separate Python packages.
  • Scalability: Support a growing ecosystem of devices without bloating the core library.
  • Community-friendly: Lower the barrier to entry for community contributions and foster a more collaborative environment.

Check out our documentation to learn how to create your own plugins.

pip install lerobot_teleoperator_my_awesome_teleop
lerobot-teleoperate --teleop.type=my_awesome_teleop
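
To give a feel for what such a package ships, here is a deliberately simplified, hypothetical teleoperator class; the real base class, method names, and registration mechanism are defined in the LeRobot plugin documentation and will differ from this sketch.

# Hypothetical sketch of what a third-party teleoperator plugin might provide,
# shipped as its own pip-installable package. All names here are illustrative
# assumptions, not the actual LeRobot plugin interface.
class MyAwesomeTeleop:
    def connect(self) -> None:
        """Open the connection to the physical input device."""
        ...

    def get_action(self) -> dict:
        """Read the device and return an action for the follower robot."""
        return {"joint_positions": [0.0] * 6}  # placeholder action

    def disconnect(self) -> None:
        """Cleanly release the device."""
        ...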

Reachy 2 integration

Pollen Robotics’ Reachy 2 has also been added to LeRobot thanks to the new plugin system. Reachy 2 can be used for both real robot control and simulation, so you can quickly experiment with teleoperation and autonomous demos.

Phone integration

Thanks to a powerful new pipeline system, you can now remotely control the follower arm directly from your mobile phone (iOS/Android). The phone acts as a remote control device, and our RobotProcessor pipeline handles all the transformations, making it easy to drive the robot in different action spaces (such as the end-effector space). Check out the example.

Hugging Face Robot Learning Course

We are launching a comprehensive, self-paced, fully open source course designed to make robotics learning accessible to everyone. If you’re interested in how real-world robots learn, this is a great place to start.

In this course, you will learn how to:

  • Understand the fundamentals of classical robotics.
  • Use generative models for imitation learning (VAEs, diffusion, etc.).
  • Apply reinforcement learning to real-world robots.
  • Check out the latest generalist robot policies like PI0 and SmolVLA.

Join the Hugging Face Robotics organization, follow us and start your journey!

More information: Latest robot learning tutorials

For those who want to learn more, we’ve also published a practical tutorial on the latest advancements in robotics. The guide provides self-contained instructions, re-derives modern techniques from first principles, and includes ready-to-use code examples built on LeRobot and Hugging Face.

The tutorial is hosted in a Hugging Face Space and features hands-on examples using LeRobot with models and datasets from the Hugging Face Hub. Please also see our paper for a detailed overview.

Final thoughts from the team

In addition to these key features, this release is packed with numerous bug fixes, documentation improvements, updated dependencies, more samples, and better infrastructure to make your LeRobot experience smoother and more reliable.

We would like to express our gratitude to the community for your valuable contributions, feedback, and support. We are very excited about the future of open source robotics and can’t wait to collaborate on what’s next.

Stay tuned for more updates 🤗 Start here! – LeRobot Team ❤️
