Submitted by [deleted] t3_115ez2r in MachineLearning
overactor t1_j95hrop wrote
Reply to comment by KPTN25 in [D] Please stop by [deleted]
I really don't think you can say that with such confidence. If you were saying that no existing LLMs have achieved sentience and that they can't at the scale we're working at today, I'd agree, but I really don't see how you can be so sure that increasing the size and training data couldn't result in sentience somewhere down the line.
KPTN25 t1_j95kx5j wrote
Because reproducing language is a very different problem from true thought or self-awareness, that's why.
LLMs are no more likely to become sentient than a linear regression or random forest model. Frankly, they're no more likely than a peanut butter sandwich to achieve sentience.
Is it possible that we've bungled our study of peanut butter sandwiches so badly that we may have missed some incredible sentience-granting mechanism? I guess, but it's so absurd and infinitesimal it's not worth considering or entertaining practically.
The black box argument is intellectually lazy. We have a better understanding of what is happening in LLMs and other models than most clickbaity headlines imply.
overactor t1_j95oem0 wrote
Your ridiculous hyperbole is not helping your argument. It's entirely possible that sentience is an instrumental goal for achieving a certain level of text prediction. And I don't see why a sufficiently large LLM definitely couldn't achieve it. It could be that another few paradigm shifts will be needed, but it could also be that all we need to do is scale up. I think anyone who claims to know whether LLMs can achieve sentience is either ignorant or lying.