
Borrowedshorts t1_j5ilq8l wrote

There's already evidence that they do learn world models. Google's robotics lab has demonstrated a sort of 'common sense' task understanding by adding an LLM to its robot control stack, perhaps the first time it's been done. LLMs and multimodal models will greatly speed up the planning and control capabilities of robotics.


blissblogs t1_j6ihukp wrote

I can't quite figure out how Google's robotics lab has shown that they learn world models... do you have more details? Thanks!


Borrowedshorts t1_j6jrc54 wrote

Their SayCan system pairs an LLM with the robot's learned skills, and it demonstrated much higher planning accuracy than what's previously been shown in robotics. So apparently the LLM gives it some real-world smarts and a better understanding of the relationships between objects. Actual task execution still has a ways to go; the main limitation there is the low-level robotic control, which admittedly Google still struggles with.
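For intuition, here's a rough sketch of the SayCan-style idea: each candidate skill gets an LLM relevance score (how useful the skill sounds for the instruction) and an affordance score (how feasible it is in the robot's current state), and the robot picks the skill with the best combined score. This is an illustrative toy, not Google's actual code; the function name and all the numbers below are made up.

```python
# Illustrative sketch of SayCan-style skill selection (not Google's actual code).
# In a real system, llm_score would come from an LLM's likelihood of the skill
# description given the instruction, and affordance_score from a learned value
# function over the robot's current state. Here both are hypothetical numbers.

def pick_next_skill(candidates):
    """Pick the skill maximizing LLM relevance x physical feasibility."""
    best_skill, best_score = None, float("-inf")
    for skill, llm_score, affordance_score in candidates:
        combined = llm_score * affordance_score  # combine both signals
        if combined > best_score:
            best_skill, best_score = skill, combined
    return best_skill

# Instruction: "I spilled my drink, can you help?"
candidates = [
    # (skill, LLM relevance, affordance / feasibility) -- made-up values
    ("find a sponge",   0.40, 0.90),
    ("pick up the can", 0.35, 0.20),  # relevant, but hard to execute right now
    ("go to the table", 0.15, 0.95),  # feasible, but not very relevant
]
print(pick_next_skill(candidates))  # -> find a sponge
```

The point is that neither signal alone is enough: the LLM knows which actions make sense for the request, while the affordance score keeps it from proposing things the robot can't actually do.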
