Google has introduced a new AI model called RT-2 that allows robots to think for themselves and complete tasks given to them.
Google's DeepMind AI division calls it a vision-language-action (VLA) model: it learns by scouring the web and parsing heaps of robotics data, then translates that learning into instructions for robotic control.
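To make the idea concrete, here is a minimal, purely illustrative Python sketch of what a VLA-style interface could look like. The class names, token scheme, and scaling below are assumptions made for illustration, not details from Google's announcement.

```python
# A toy sketch of the vision-language-action (VLA) idea described above:
# the model takes a camera image plus a natural-language instruction and,
# like a language model emitting text, generates discrete "action tokens"
# that are decoded into robot commands. Everything here (class names,
# token ranges, the 5 cm scale) is an illustrative assumption, not
# Google's actual interface.

from dataclasses import dataclass
from typing import List


@dataclass
class RobotAction:
    """A simplified end-effector command: position deltas plus gripper state."""
    dx: float
    dy: float
    dz: float
    gripper_closed: bool


def decode_action_tokens(tokens: List[int]) -> RobotAction:
    """Map discrete tokens (0-255) back to continuous motion.

    RT-2 reportedly represents actions as strings of integer tokens so a
    web-trained model can emit them directly; this binning scheme is a
    made-up stand-in.
    """
    def to_meters(t: int) -> float:
        return (t - 128) / 128 * 0.05  # roughly +/- 5 cm per step

    return RobotAction(
        dx=to_meters(tokens[0]),
        dy=to_meters(tokens[1]),
        dz=to_meters(tokens[2]),
        gripper_closed=tokens[3] > 128,
    )


def vla_policy(image_pixels: bytes, instruction: str) -> RobotAction:
    # Stand-in for the trained model: a real VLA jointly attends to the
    # image and the instruction, then autoregressively generates action
    # tokens. Here we simply return a fixed dummy sequence.
    action_tokens = [140, 128, 120, 200]
    return decode_action_tokens(action_tokens)


if __name__ == "__main__":
    action = vla_policy(b"\x00" * 64, "pick up the apple on the table")
    print(action)
```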
In a blog post, the team said RT-2 builds on the work done for RT-1, which was trained on "multi-task demonstrations" and learned through "combinations of tasks and objects seen in the robotic data".
RT-2 improves on these capabilities, allowing it to interpret new commands and respond to user instructions using "chain-of-thought reasoning".
The team says RT-2 can perform "multi-stage semantic reasoning, like deciding which object could be used as an improvised hammer (a rock), or which type of drink is best for a tired person (an energy drink)".
Google says that across more than 6,000 robotic trials, RT-2 performed as well as its predecessor on familiar tasks and nearly doubled its performance in novel, unseen scenarios. The model lets robots learn the way humans do, by applying previously learned concepts to new situations.
With RT-2, the team hopes to build a general-purpose robot that can interpret problems and come up with solutions using multi-layered reasoning.