BellyDancerUrgot t1_j30tyxp wrote

Comparing current AI to AGI is laughable. To quote Yoshua Bengio: "current AI algorithms are dumber than a dog", iirc from a 2021 video interview. None of the leading researchers in the field, be it LeCun, Bengio, Parikh, or Hinton, think we are remotely close to basic human intelligence. Comparing GPT to a human is stupid: it literally parrots information it memorized. Attention and self-attention aren't magic. We are at a stage where AI, or rather PI, is good enough to understand some context for some words because it has seen them billions of times. In fact, we aren't even at a stage where any model can reliably avoid hallucinating things that aren't true, so it technically doesn't even understand true context. Ask any worthwhile researcher in the field and they'll tell you this article is complete garbage.

There's an entire branch of ML that focuses on scaling. Irina Rish is one of the big names behind the "scale is all you need" motto. Is she right? Maybe! But even she'll tell you that, when it comes to intelligence, we aren't within reach of even the dumbest human being.

−1

marvinthedog t1_j3123ct wrote

If AI algorithms of 2021 were remotely comparable to a dog it seems to me that we are getting really, really, really close.

4

visarga t1_j30wx6i wrote

> Comparing GPT to a human is stupid. It literally parrots information it memorized.

Couldn't I say you are parroting human language, since you are just using a bunch of words memorised from somewhere else?

No matter how large is our training set, most word combinations never appear.

Google says:

> Your search - "No matter how large is our training set" - did not match any documents.

Not even these specific 8 words are in the training set! You see?

Language Models are almost always in this domain - generating novel word combinations that still make sense and solve tasks. When did a parrot ever do that?
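The combinatorics back this up. A quick back-of-the-envelope sketch (the vocabulary and corpus sizes here are rough assumptions, not measured figures) shows why most 8-word sequences can never have appeared in any training set:

```python
# Assumed figures: a ~50,000-word vocabulary and a generous
# ~1 trillion-token training corpus.
vocab_size = 50_000
seq_len = 8

# Number of possible 8-word sequences over that vocabulary.
possible_sequences = vocab_size ** seq_len   # 5^8 * 10^32 ≈ 3.9e37

# Even a trillion-token corpus contains fewer than 10^12 distinct
# 8-word windows, so the overwhelming majority of grammatical
# 8-word sequences appear in no corpus at all.
corpus_tokens = 10 ** 12
print(possible_sequences)                        # 39062500000000000000000000000000000000
print(possible_sequences > corpus_tokens ** 2)   # True
```

So even sequences as short as the quoted 8-word search string are, statistically, almost always novel.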

2

BellyDancerUrgot t1_j311o8o wrote

No, because humans do not hallucinate information and can derive conclusions from cause and effect on subjects they haven't seen before. LLMs can't even differentiate between cause and effect without memorizing patterns, something humans do naturally.

And no, human beings do not in fact parrot information. I can reason about subjects I have never studied because humans actually understand words rather than just memorizing spatial context. It's like we are back at the stage when people thought we had finally achieved AGI after Goodfellow's paper on GANs was published in 2014.

If you actually get off the hype train you will realize most major industries use gradient boosting and, for their needs, achieve almost the same generalization performance as an LLM trained on gigantic amounts of data, because LLMs can't generalize well at all.

1