Comments

Borrowedshorts t1_j5ilq8l wrote

There's already evidence that they do learn world models. Google's robotics lab has demonstrated a sort of 'common sense' task understanding by adding an LLM to its control stack, possibly the first time that's been done. LLMs and multimodal models will greatly speed up the algorithmic control capabilities of robotics.

10

Surur t1_j5j4hx1 wrote

This is such an important and very accessible paper for all the sceptics who don't understand that LLMs have millions of artificial neurons and do a great deal of internal processing in order to accurately "simply predict the next word".

In short, no: ChatGPT is not just "Eliza on steroids."

11

Particular_Number_68 t1_j5km94m wrote

Even after this, people like Gary Marcus will call deep learning a "gimmick" and a "waste of money".

8

Superschlenz t1_j5mudwo wrote

>If it makes it correctly, it will update its parameters to reinforce its confidence

Nonsense. If it predicts correctly, the loss is zero and the parameters remain as they are. Only if it makes a mistake is the loss non-zero, and it changes the parameters as it propagates backward through the network.
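A minimal toy sketch of the point (illustrative only, not any real training loop): with a squared-error loss and a single weight, a correct prediction gives zero loss and hence a zero gradient, so the SGD update leaves the weight unchanged; a wrong prediction gives a non-zero gradient that moves the weight.

```python
def sgd_step(w, x, target, lr=0.1):
    """One gradient-descent step for a toy one-weight model pred = w * x."""
    pred = w * x
    loss = (pred - target) ** 2       # squared-error loss
    grad = 2 * (pred - target) * x    # d(loss)/dw via the chain rule
    return w - lr * grad, loss

w = 2.0

# Correct prediction: 2.0 * 3.0 == 6.0, so loss and gradient are both zero.
w_same, loss_zero = sgd_step(w, x=3.0, target=6.0)
# loss_zero == 0.0 and w_same == 2.0 (parameters untouched)

# Wrong prediction: 2.0 * 3.0 != 9.0, so the gradient is non-zero.
w_moved, loss_pos = sgd_step(w, x=3.0, target=9.0)
# loss_pos > 0 and w_moved != 2.0 (weight pushed toward the target)
```

(For a real LLM trained with cross-entropy the loss on a correct prediction is merely small rather than exactly zero, but the mechanism is the same: the smaller the loss, the smaller the update.)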

2

Borrowedshorts t1_j6jrc54 wrote

They combined a platform called SayCan with an LLM, and it demonstrated much higher planning accuracy than anything previously shown in robotics. So apparently the LLM gives it some real-world smarts and a better understanding of the relationships between objects. Actual task execution still has a ways to go; the main limitation there is robotic control algorithms, which, admittedly, Google is pretty bad at.
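A hedged sketch of the SayCan-style selection idea described above: the LLM scores how relevant each skill description is to the instruction, a separate affordance/value score says whether the skill is feasible in the current scene, and the robot executes the skill with the highest combined score. The skill names and scores below are hypothetical stand-ins, not real model outputs.

```python
def pick_skill(llm_scores, affordance_scores):
    """Combine LLM relevance with affordance feasibility; return best skill."""
    combined = {s: llm_scores[s] * affordance_scores[s] for s in llm_scores}
    return max(combined, key=combined.get)

# Hypothetical scores for the instruction "clean up the spill":
llm = {"find a sponge": 0.6, "go to the table": 0.3, "pick up the apple": 0.1}
afford = {"find a sponge": 0.9, "go to the table": 0.8, "pick up the apple": 0.1}

print(pick_skill(llm, afford))  # → find a sponge
```

The multiplication is the key design choice: a skill the LLM loves but the robot can't execute right now (or vice versa) scores low, so planning stays grounded in what's physically possible.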

1