Check back here throughout the week for the latest on physical AI, which enables machines to perceive, plan, and act with greater autonomy and intelligence in real-world environments.
During National Robotics Week, which runs through April 12, NVIDIA is highlighting the pioneering technologies that are shaping the future of intelligent machines and driving advances in manufacturing, healthcare, and logistics.
Advances in robot simulation and robot learning are driving this fundamental shift in the industry, and the advent of world foundation models has accelerated the evolution of AI-enabled robots that can adapt to dynamic, complex scenarios.
The NVIDIA Isaac and GR00T platforms power this progress by providing robot foundation models such as NVIDIA Isaac GR00T N1, frameworks such as NVIDIA Isaac Sim and Isaac Lab for robot simulation and training, and synthetic data generation pipelines for training robots on a variety of tasks.
Robots have the potential to automate difficult and repetitive tasks at scale. However, programming robots to perform these tasks safely has traditionally been challenging, expensive, and highly specialized. Scaled Foundations, a member of NVIDIA Inception, a program for cutting-edge startups, is lowering that barrier to entry with its GRID platform.
By integrating NVIDIA Isaac Sim into GRID, Scaled Foundations gives users a way to fast-track the development and deployment of advanced robot AI solutions across new robot types. Developers and students can access cutting-edge tools to develop, simulate, and deploy full robot AI systems directly from their browsers.
https://www.youtube.com/watch?v=t8n-qnsluh8
Access, build, and manage robot intelligence seamlessly from your browser.
To learn more about deploying solutions with the Scaled Foundations GRID platform, watch the NVIDIA GTC session “Introduction to Robot Simulation: Learn How to Develop, Simulate, and Deploy Scalable Robot Intelligence.”
Wheeled Lab, a research project from the University of Washington, brings sim-to-real robotics to low-cost, open-source platforms.
Integrated with NVIDIA Isaac Lab, a unified framework for robot learning, Wheeled Lab can train reinforcement learning policies for complex tasks such as controlled drifting, obstacle avoidance, elevation traversal, and visual navigation. The pipeline uses domain randomization, sensor simulation, and end-to-end learning to bridge the gap between simulated training and real-world deployment, enabling zero-shot sim-to-real transfer.
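As a rough illustration of how domain randomization fits into this kind of pipeline, the sketch below re-samples physical parameters such as friction and mass at the start of every training episode so the learned policy cannot overfit to one simulator configuration. The environment class, parameter names, and setter method are hypothetical placeholders, not Wheeled Lab or Isaac Lab APIs.

```python
import numpy as np

# Hypothetical ranges for physical parameters to randomize each episode.
# Real pipelines expose similar knobs through their environment configs.
RANDOMIZATION_RANGES = {
    "wheel_friction": (0.4, 1.2),   # tire-ground friction coefficient
    "chassis_mass_kg": (2.5, 4.0),  # robot mass
    "motor_gain": (0.8, 1.2),       # actuator strength multiplier
}

def sample_physics_params(rng: np.random.Generator) -> dict:
    """Draw one set of physics parameters for the next episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def train(env, policy, num_episodes: int = 1000, seed: int = 0):
    """Toy training loop: re-randomize the simulator before every rollout."""
    rng = np.random.default_rng(seed)
    for _ in range(num_episodes):
        env.set_physics(sample_physics_params(rng))  # hypothetical setter
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs)
            obs, reward, done, info = env.step(action)
            policy.observe(obs, action, reward, done)  # collect experience
        policy.update()  # reinforcement learning update on the collected rollout
```

Because the policy sees a slightly different "physics" in every episode, it learns behaviors that hold up across the whole range, which is what makes zero-shot transfer to a real vehicle plausible.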

Left: Drift policy. Right: Training in Isaac Lab simulation.
The entire stack spanning simulation, training and deployment is completely open source, allowing developers to iterate, modify policies and experiment with reinforcement learning techniques in reproducible environments.


Get started with the code on GitHub.
What does it take to teach robots to make complex decisions in the real world? For Nicklas Hansen, a doctoral candidate at UC San Diego and an NVIDIA Graduate Fellowship recipient, the answer lies in scalable, robust machine learning algorithms.
With experience from the University of California, Berkeley, Meta AI (FAIR), and the Technical University of Denmark, Hansen is pushing the boundaries of how robots perceive, plan, and act in dynamic environments. His research sits at the intersection of robotics, reinforcement learning, and computer vision, bridging the gap between simulation and real-world deployment.

Hansen’s recent work addresses one of the toughest challenges in robotics: long-horizon manipulation. His paper, Multi-Stage Manipulation with Demonstration-Augmented Reward, Policy, and World Model Learning, introduces a framework that exploits multi-stage task structure to increase data efficiency in sparse-reward environments.
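To make the sparse-reward setting concrete, here is a toy sketch of stage-indexed rewards for a multi-stage task. It illustrates only the general idea of rewarding stage completion, not the paper’s actual method, and the stage predicates and state keys are hypothetical.

```python
# Toy illustration of a multi-stage, sparse-reward manipulation task.
# The agent receives reward only when it completes the next stage,
# e.g., reach -> grasp -> place for a pick-and-place problem.

STAGES = ["reach", "grasp", "place"]

def stage_completed(stage: str, state: dict) -> bool:
    """Hypothetical success predicates for each stage."""
    if stage == "reach":
        return state["gripper_to_object_dist"] < 0.02   # within 2 cm
    if stage == "grasp":
        return state["object_grasped"]
    if stage == "place":
        return state["object_to_goal_dist"] < 0.03
    return False

def sparse_stage_reward(state: dict, stage_idx: int) -> tuple[float, int]:
    """Return (reward, next_stage_idx): +1 only when the current stage is done."""
    if stage_idx < len(STAGES) and stage_completed(STAGES[stage_idx], state):
        return 1.0, stage_idx + 1
    return 0.0, stage_idx
```

Because such rewards are nonzero at only a handful of states, approaches like Hansen’s augment them with demonstrations and learned models to make exploration tractable.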

Another important project from Hansen, Hierarchical World Models as Visual Whole-Body Humanoid Controllers, advances control strategies for humanoid robots, enabling more adaptive, human-like movement.
Beyond his own research, Hansen advocates for making AI-driven robotics more accessible.
“My advice for anyone looking to get started with AI in robotics is to simply play around with the many open-source tools that are available, and gradually start contributing to projects that match your goals and interests,” he said. “Free simulation tools such as MuJoCo, NVIDIA Isaac Lab, and ManiSkill are available, so you can make a big impact on the field without owning a real robot.”
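For readers who want to follow that advice, the snippet below is a minimal example of stepping a simulation with MuJoCo’s official Python bindings; the simple pendulum model is just a placeholder to show the basic load–step–inspect workflow.

```python
import mujoco

# A minimal MJCF model: a single hinge joint with a capsule "pendulum" body.
PENDULUM_XML = """
<mujoco>
  <option gravity="0 0 -9.81"/>
  <worldbody>
    <body name="pendulum" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.5" size="0.02" mass="1.0"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(PENDULUM_XML)
data = mujoco.MjData(model)

# Step the physics for 2 simulated seconds, then inspect the joint angle.
steps = int(2.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)
print("hinge angle (rad):", data.qpos[0])
```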
Hansen is also the lead author of TD-MPC2, a model-based reinforcement learning algorithm that can learn a wide variety of control tasks without domain-specific knowledge. The algorithm is open source and runs on a single consumer-grade GPU.
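At a high level, TD-MPC-style methods plan short action sequences in the latent space of a learned world model. The sketch below shows that planning idea with simple random shooting; it is a conceptual illustration, not the actual TD-MPC2 implementation, and the encoder, dynamics, reward, and value networks are assumed to have been trained elsewhere.

```python
import torch

@torch.no_grad()
def plan_action(obs, encoder, dynamics, reward_fn, value_fn,
                horizon: int = 5, num_samples: int = 256, action_dim: int = 2):
    """Pick the first action of the best sampled action sequence.

    Each candidate sequence is scored by rolling it out in the learned latent
    dynamics model: sum of predicted rewards plus a terminal value estimate.
    """
    z0 = encoder(obs)                                    # latent state, shape (latent_dim,)
    actions = torch.rand(num_samples, horizon, action_dim) * 2 - 1  # uniform in [-1, 1]

    returns = torch.zeros(num_samples)
    z = z0.expand(num_samples, -1)
    for t in range(horizon):
        returns += reward_fn(z, actions[:, t])           # predicted one-step reward
        z = dynamics(z, actions[:, t])                    # predicted next latent state
    returns += value_fn(z)                                # terminal value bootstrap

    best = torch.argmax(returns)
    return actions[best, 0]                               # execute only the first action (MPC)
```

The real algorithm replaces naive random shooting with smarter sampling and a learned policy prior, but the overall structure of the planning loop is similar.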
Learn more about Hansen and the other NVIDIA Graduate Fellowship recipients driving innovation in AI and robotics by watching the replay of the Graduate Fellowship Fast Forward session from the NVIDIA GTC AI conference, where doctoral students in the NVIDIA Graduate Fellowship Program present their groundbreaking research.
Last month, the Seeed Studio Embodied AI Hackathon brought together the robotics community to showcase innovative projects built with the LeRobot SO-100 arm kit.
The event highlighted how robot learning is advancing AI-driven robotics, with teams integrating the NVIDIA Isaac GR00T N1 model to accelerate humanoid robot development. One notable project developed leader-follower robot arm pairs that learn pick-and-place tasks by post-training the robot foundation model on real-world demonstration data.
https://www.youtube.com/watch?v=intdruktx30
How the project works:
- Real-world imitation learning: The robot observes and mimics human-operated demonstrations recorded with an Arducam vision system and an external camera.
- Post-training pipeline: The captured data is structured into a modality.json dataset for efficient GPU-based post-training with GR00T N1.
- Bimanual operation: The model is optimized to control two robot arms simultaneously, improving coordination skills.
The dataset and implementation details are published on GitHub and Hugging Face.
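As a rough sketch of the real-world data-collection step in this kind of imitation-learning pipeline, the loop below records paired camera frames and joint commands from a teleoperated demonstration and saves them per episode. The camera and arm objects and the output format are hypothetical placeholders, not the team’s actual code or the GR00T N1 modality.json schema.

```python
import json
import time
from pathlib import Path

import numpy as np

def record_episode(camera, leader_arm, follower_arm,
                   out_dir: Path, fps: float = 30.0, max_steps: int = 600):
    """Record one teleoperated demonstration as (image, state, action) triples."""
    out_dir.mkdir(parents=True, exist_ok=True)
    frames, states, actions = [], [], []

    for _ in range(max_steps):
        frames.append(camera.read())                    # HxWx3 RGB frame (placeholder API)
        states.append(follower_arm.joint_positions())   # follower joint angles
        actions.append(leader_arm.joint_positions())    # leader pose used as action target
        time.sleep(1.0 / fps)

    np.savez_compressed(out_dir / "episode.npz",
                        frames=np.array(frames),
                        states=np.array(states),
                        actions=np.array(actions))
    # Minimal metadata describing what each array contains.
    (out_dir / "meta.json").write_text(json.dumps({
        "fps": fps,
        "num_steps": len(frames),
        "observation_keys": ["frames", "states"],
        "action_key": "actions",
    }, indent=2))
```

A dataset of such episodes is then used to post-train the foundation model so the follower arms reproduce the demonstrated pick-and-place behavior.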

Find out more about the project.
In March, the IEEE Robotics and Automation Society announced the winners of the 2025 Early Academic Career Award in recognition of their outstanding contributions to the field of robotics and automation.
This year’s winners, including NVIDIA’s Shuran Song, Abhishek Gupta, and Yuke Zhu, are pioneering advances in scalable robot learning, real-world reinforcement learning, and embodied AI. Their work is driving innovation that shapes the next generation of intelligent systems, with impact on both research and real-world applications.
Learn more about the award recipients:
- Shuran Song, a leading research scientist at NVIDIA, was recognized for her contributions to scalable robot learning.
- Abhishek Gupta, an NVIDIA visiting professor, was recognized for his pioneering work in real-world robot reinforcement learning.
- Yuke Zhu, a leading research scientist at NVIDIA, was recognized for his contributions to embodied AI and widely used open-source software platforms.
https://www.youtube.com/watch?v=a2jqz3ohnfu
These researchers will be recognized at the IEEE International Conference on Robotics and Automation (ICRA) in May.
Stay up to date on NVIDIA’s latest robotics research through the Robotics Research and Development Digest (R2D2) tech blog series, subscribe to the newsletter, and follow NVIDIA Robotics on YouTube, Discord, and the developer forums.