Submitted by Buck-Nasty t3_10j3sac in singularity
Comments
Borrowedshorts t1_j5ilq8l wrote
There's already evidence that they do learn world models. Google's robotics lab has demonstrated a sort of 'common sense' task understanding by adding an LLM to its control stack, perhaps the first demonstration of its kind. LLMs and multimodal models will greatly speed up algorithmic control capabilities in robotics.
blissblogs t1_j6ihukp wrote
I can't quite figure out how Google robotics has shown that they learn world models. Do you have more details? Thanks!
Borrowedshorts t1_j6jrc54 wrote
They combined a platform called SayCan with an LLM, and it demonstrated much higher planning accuracy than anything previously shown in robotics. So apparently the LLM gives it some real-world smarts and a better understanding of the relationships between objects. Actual task execution still has a ways to go; the main limitation there is the robotic control algorithms, which Google is admittedly pretty bad at.
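For context, the core idea in SayCan-style planning fits in a few lines: the LLM scores how useful each candidate skill would be for the instruction ("say"), a learned affordance model scores how likely the skill is to succeed in the current state ("can"), and the robot picks the skill with the highest combined score. A minimal sketch, where the skill names and the hard-coded scores are illustrative stand-ins rather than the actual SayCan components:

```python
import math

# Candidate low-level skills the robot can execute (illustrative, not the real SayCan skill set).
SKILLS = ["pick up the sponge", "go to the counter", "wipe the spill"]

# Stand-ins for the two learned components. In the real system the "say" score
# is an LLM's log-probability of the skill given the instruction, and the
# "can" score comes from a learned affordance/value function over the state.
LLM_LOGPROB = {"pick up the sponge": -0.4, "go to the counter": -1.2, "wipe the spill": -0.9}
AFFORDANCE = {"pick up the sponge": 0.9, "go to the counter": 0.95, "wipe the spill": 0.1}

def choose_next_skill() -> str:
    # SayCan-style selection: usefulness ("say") times feasibility ("can").
    def combined(skill: str) -> float:
        return math.exp(LLM_LOGPROB[skill]) * AFFORDANCE[skill]
    return max(SKILLS, key=combined)

print(choose_next_skill())  # -> "pick up the sponge"
```

Note how the affordance term vetoes skills the LLM likes but the robot can't currently do ("wipe the spill" scores well linguistically but is infeasible here), which is the "real world smarts" part.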
Particular_Number_68 t1_j5km94m wrote
Even after this, people like Gary Marcus will call deep learning a "gimmick" and a "waste of money".
Superschlenz t1_j5mudwo wrote
>If it makes it correctly, it will update its parameters to reinforce its confidence
Nonsense. If it makes the prediction correctly and confidently, the loss is (near) zero, the gradients vanish, and the parameters stay essentially as they are. Only when it makes a mistake is the loss non-zero, and it's that error signal that changes the parameters as it propagates backward through the network.
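A toy demonstration of the point, assuming a PyTorch-style setup with cross-entropy loss (the logit values are illustrative):

```python
import torch
import torch.nn.functional as F

# Logits over a 3-token vocabulary; the target is token 0.
target = torch.tensor([0])

# Case 1: confident, correct prediction -> loss near zero, gradients near zero.
logits_correct = torch.tensor([[10.0, -10.0, -10.0]], requires_grad=True)
loss = F.cross_entropy(logits_correct, target)
loss.backward()
print(loss.item())                             # ~0.0
print(logits_correct.grad.abs().max().item())  # ~0.0: essentially no update

# Case 2: confident, wrong prediction -> large loss, large gradients.
logits_wrong = torch.tensor([[-10.0, 10.0, -10.0]], requires_grad=True)
loss = F.cross_entropy(logits_wrong, target)
loss.backward()
print(loss.item())                             # ~20.0
print(logits_wrong.grad.abs().max().item())    # ~1.0: strong update signal
```

So "reinforcing confidence" on a correct answer and "getting zero error signal" are nearly the same thing in practice; the gradient scales with how wrong the prediction was.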
Surur t1_j5j4hx1 wrote
This is such an important and very accessible paper for all the sceptics who do not understand that LLMs contain millions of artificial neurons and do a great deal of internal processing in order to "simply predict the next word" accurately.
In short, no ChatGPT is not just "Eliza on steroids."
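For anyone who wants to see what "predicting the next word" actually involves, here is a minimal greedy decoding loop with a small open model (GPT-2 via Hugging Face, chosen purely for illustration). Every "simple" next-token prediction is a full forward pass through all of the network's layers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits        # one full forward pass through every layer
        next_id = logits[0, -1].argmax()  # greedy: take the most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

The interesting part is everything hidden inside that single `model(ids)` call, which is exactly the internal processing the paper is probing.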