Advances in robot dexterity
Published September 12, 2024
Google DeepMind Robotics Team
Two new AI systems, ALOHA Unleashed and DemoStart, help robots learn how to perform complex tasks that require dexterous movement
People perform various tasks every day, such as tying shoelaces and tightening screws. However, it is incredibly difficult for robots to properly learn these highly dexterous tasks. To make robots more useful in people’s lives, we need to enable them to better interact with physical objects in dynamic environments.
Today we’re introducing two new papers featuring our latest artificial intelligence (AI) advances in robot dexterity research: ALOHA Unleashed, which helps robots learn to perform complex, two-armed manipulation tasks, and DemoStart, which uses simulation to improve the real-world performance of a multi-fingered robotic hand.
These systems pave the way for robots to perform a variety of useful tasks by allowing robots to learn from human demonstrations and translate images into actions.
Improving imitation learning with two robot arms
Until now, most state-of-the-art AI robots have only been able to lift and place objects using a single arm. In our new paper, we introduce ALOHA Unleashed, which enables advanced dexterity with two-arm manipulation. With this new method, our robot learned to tie shoelaces, hang shirts, repair another robot, insert gears, and even clean the kitchen.
An example of a two-armed robot stretching shoelaces and tying them into a bow.
An example of a two-armed robot arranging polo shirts on a table, hanging them on hangers, and then hanging them on a rack.
An example of a dual-armed robot repairing another robot.
The ALOHA Unleashed method is built on our ALOHA 2 platform, which is in turn based on Stanford University’s original ALOHA, a low-cost, open-source hardware system for bimanual teleoperation.
ALOHA 2 is significantly more dexterous than previous systems, and its two arms can be easily teleoperated for training and data collection, allowing the robot to learn how to perform new tasks with fewer demonstrations.
We’ve also improved the ergonomics of the robot hardware and enhanced the learning process of our latest system. First, we collected demonstration data by remotely operating the robot, performing difficult tasks like tying shoelaces and hanging T-shirts. Next, we applied a diffusion method to predict robot actions from random noise, similar to how our Imagen model generates images. This helps the robot learn from the data, so it can perform the same tasks on its own.
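To make the diffusion idea concrete, here is a minimal sketch of how such a policy might turn noise into actions: it starts from pure Gaussian noise and repeatedly subtracts predicted noise until a clean action trajectory remains. The denoiser, noise schedule, and dimensions below are illustrative stand-ins, not the actual trained model.

```python
import numpy as np

def denoise_step(actions, noise_pred, alpha):
    """One reverse-diffusion update: remove a fraction of the predicted noise."""
    return (actions - (1 - alpha) * noise_pred) / np.sqrt(alpha)

def sample_actions(denoiser, obs, horizon=16, action_dim=14, steps=50, seed=0):
    """Start from pure Gaussian noise and iteratively denoise it into an
    action trajectory, conditioned on the current observation."""
    rng = np.random.default_rng(seed)
    actions = rng.standard_normal((horizon, action_dim))
    for t in reversed(range(steps)):
        alpha = 1.0 - 0.02 * t / steps          # toy noise schedule
        noise_pred = denoiser(actions, obs, t)  # a real system uses a trained network
        actions = denoise_step(actions, noise_pred, alpha)
    return actions

# Stand-in "denoiser": the real policy conditions on camera images;
# here we simply predict a small fraction of the current sample as noise.
dummy_denoiser = lambda a, obs, t: 0.1 * a
trajectory = sample_actions(dummy_denoiser, obs=None)
print(trajectory.shape)  # one action per timestep across the horizon
```

In a real system the denoiser would be a neural network trained on the teleoperated demonstrations, and the horizon and action dimensions would match the robot’s joints.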
Learning robot behaviors from a few simulated demonstrations
Controlling a dexterous robotic hand is a complex task, made even more complex with each additional finger, joint, and sensor. Another new paper introduces DemoStart, which uses reinforcement learning algorithms to help robots acquire dexterous movements in simulation. These learned behaviors are especially useful for complex embodiments, such as hands with multiple fingers.
DemoStart first learns from easy states, then progresses to more difficult ones over time until it masters a task as well as it can. It requires 100 times fewer simulated demonstrations to learn how to solve a task in simulation than is usually needed when learning from real-world examples for the same purpose.
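The easy-to-hard progression can be sketched as a demonstration-seeded curriculum: each training episode begins from a state sampled along a recorded demonstration, starting near the goal and moving earlier as the success rate improves. Everything below (the state encoding, thresholds, and the attempt function) is a hypothetical toy, not the paper’s actual algorithm.

```python
import random

def curriculum_start_state(demo, progress):
    """Pick a start state from a recorded demonstration.
    progress=0.0 -> start just before the goal (easy);
    progress=1.0 -> start at the very beginning (full task)."""
    idx = int((1.0 - progress) * (len(demo) - 1))
    return demo[idx]

def train(demo, attempt, episodes=200, advance_at=0.8, seed=0):
    """Toy training loop: shift the curriculum toward harder start
    states whenever the recent success rate passes a threshold."""
    random.seed(seed)
    progress, successes, window = 0.0, [], 20
    for _ in range(episodes):
        start = curriculum_start_state(demo, progress)
        successes.append(attempt(start, progress))
        if len(successes) >= window:
            rate = sum(successes[-window:]) / window
            if rate >= advance_at and progress < 1.0:
                progress = min(1.0, progress + 0.1)
                successes.clear()
    return progress

# Hypothetical task: success is more likely when starting closer to the goal.
demo = list(range(10))  # 10 recorded demonstration states
attempt = lambda state, p: random.random() < 0.95 - 0.3 * p
final = train(demo, attempt)
```

The real method would plug a reinforcement learning agent in place of the random `attempt` function, but the curriculum logic follows the same shape: hold difficulty until performance is good, then push further back.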
The robot achieved a success rate of more than 98% on a number of tasks in simulation, including reorienting cubes of a certain color, tightening nuts and bolts, and tidying up tools. On the real robot, it achieved a 97% success rate on cube reorientation and lifting, and a 64% success rate on a plug-socket insertion task that requires a high degree of finger coordination and precision.
An example of a robotic arm learning how to successfully insert a yellow connector in a simulation (left) and real-world setup (right).
An example of a robotic arm learning to tighten bolts and screws in simulation.
DemoStart was developed using MuJoCo, an open-source physics simulator. After mastering a range of tasks in simulation, and using standard techniques like domain randomization to reduce the gap between simulation and reality, our approach was able to transfer to the physical world nearly zero-shot.
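As a rough illustration of domain randomization, each training episode can draw its physics parameters from a range around nominal values, so the learned policy cannot overfit to one exact simulator setting. The parameter names below are illustrative only, not actual MuJoCo fields.

```python
import random

def randomized_physics(base, spread=0.2, rng=random):
    """Perturb each simulator parameter by up to +/- spread (here 20%),
    so every episode trains against slightly different physics."""
    return {k: v * (1 + rng.uniform(-spread, spread)) for k, v in base.items()}

# Hypothetical simulator parameters; names are for illustration.
base_params = {"friction": 1.0, "object_mass": 0.05, "motor_gain": 2.0}
for episode in range(3):
    params = randomized_physics(base_params)
    # sim = build_simulation(**params)  # each episode would see different physics
    print({k: round(v, 3) for k, v in params.items()})
```

A policy that succeeds across all these perturbed worlds is more likely to also succeed in the one perturbation it was never trained on: reality.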
Learning robot behavior in simulation can reduce the cost and time needed to run real, physical experiments. But these simulations are difficult to design, and they don’t always translate successfully back into real-world performance. By combining reinforcement learning with learning from a few demonstrations, DemoStart’s progressive learning automatically generates a curriculum that bridges the sim-to-real gap, making it easier to transfer knowledge from simulation to a physical robot and reducing the cost and time needed for physical experiments.
To enable more advanced robot learning through focused experimentation, we tested this new approach on a three-fingered robot hand called DEX-EE, which we co-developed with Shadow Robot.
Image of the DEX-EE dexterous robot hand developed by Shadow Robot in collaboration with the Google DeepMind robotics team (Credit: Shadow Robot).
The future of robotic dexterity
Robotics is a unique field of AI research that shows how well our approaches work in the real world. For example, a large language model could tell you how to tighten a bolt or tie your shoes, but even if it were embodied in a robot, it wouldn’t be able to perform those tasks itself.
One day, AI robots will assist people with all kinds of tasks, including at home and at work. Dexterity research, including the efficient and general learning approaches described today, can help make that future possible.
Although there is still a long way to go before robots can grasp and manipulate objects with the ease and precision of people, we are making remarkable progress, and each groundbreaking innovation is another step in the right direction.
Acknowledgements
DemoStart authors: Maria Bauza, Jose Enrique Chen, Valentin Dalibard, Nimrod Gileadi, Roland Hafner, Antoine Laurens, Murilo F. Martins, Joss Moore, Rugile Pevceviciute, Dushyant Rao, Martina Zambelli, Martin Riedmiller, Jon Scholz, Konstantinos Bousmalis, Francesco Nori, Nicolas Heess.
ALOHA Unleashed authors: Tony Z. Zhao, Jonathan Tompson, Danny Driess, Pete Florence, Kamyar Ghasemipour, Chelsea Finn, Ayzaan Wahid.