beezlebub33 t1_j5yqbuv wrote

>gary marcus' objections have nothing to do with world models,

I think they do. See: https://garymarcus.substack.com/p/how-come-gpt-can-seem-so-brilliant . GPT and other LLMs are not grounded in the real world, so they cannot form an accurate model of it; they only get it secondhand (from human text). This causes them to make mistakes about relationships; they don't 'master abstract relationships'. I know he doesn't use the term there, but that's what he's getting at.

Also, at https://garymarcus.substack.com/p/how-new-are-yann-lecuns-new-ideas he says:

>A large part of LeCun’s new manifesto is a well-motivated call for incorporating a “configurable predictive world model” into deep learning. I’ve been calling for that for a little while....

The essay isn't primarily about his thoughts on world models, but Marcus, for better or worse, thinks they are important.


dasnihil t1_j5z5ijq wrote

disclaimer: idk much about gary marcus, i only follow a few people closely in the field like joscha bach, and i'm sure he wouldn't say or worry about such things.

if you give 3 hands to a generally intelligent neural network, it will figure out how to make use of 3 hands, or no hands. it doesn't matter. so those trivial things are not to be worried about; the problem at hand is different.
