Submitted by sideways t3_103hwns in singularity
BellyDancerUrgot t1_j311o8o wrote
Reply to comment by visarga in 2022 was the year AGI arrived (Just don't call it that) by sideways
No, because humans do not hallucinate information and can derive conclusions based on cause and effect for subjects they haven't seen before. LLMs can't even differentiate between cause and effect without memorizing patterns, something humans can naturally do.
And no, human beings in fact do not parrot information. I can reason about subjects I have never studied because humans don't just parrot words; we actually understand them rather than memorizing spatial context. It's like we're back at the stage when people thought we had finally developed AGI, back when Goodfellow's paper on GANs was published in 2014.
If you actually get off the hype train, you will realize most major industries use gradient boosting and achieve almost the same generalization performance for their needs as an LLM trained on giga fking tons of data. Because LLMs can't generalize well at all.
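The gradient-boosting point can be sketched with scikit-learn: a small boosted ensemble reaches strong held-out accuracy on a tabular task with only a couple thousand examples. The synthetic dataset and hyperparameters below are illustrative assumptions, not anything cited in the comment.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative tabular dataset: 2000 rows, 20 features, 10 informative.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# A modest gradient-boosted ensemble; settings are arbitrary defaults-ish.
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 random_state=0)
clf.fit(X_tr, y_tr)

# Generalization measured on the held-out split.
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.3f}")
```

No giant pretraining corpus involved: the model trains in seconds on 1500 rows, which is the kind of tabular workload where boosting is the common industry choice.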
[deleted] t1_j34ba6z wrote
[deleted]
BellyDancerUrgot t1_j34m2fe wrote
Totally irrelevant to the conversation. That doesn't address anything I said.