Google’s AI research lab, Google DeepMind, on Wednesday announced a new AI model called Gemini Robotics, designed to enable physical robots to interact with objects and navigate their environments.
DeepMind released a series of demo videos showing robots powered by Gemini Robotics folding paper, putting a pair of glasses into a case, and completing other tasks in response to voice commands. According to the lab, Gemini Robotics was trained to generalize behavior across a range of different robot hardware, and to connect items robots can “see” with actions they might take.
DeepMind claims that in its tests, Gemini Robotics allowed robots to perform well in environments not included in the training data. The lab also released a slimmed-down model, Gemini Robotics-ER, which researchers can use to train their own models for robot control, as well as a benchmark called Asimov for measuring risk in AI-powered robots.