Submitted by [deleted] t3_115ez2r in MachineLearning
KPTN25 t1_j94a1y0 wrote
Reply to comment by Metacognitor in [D] Please stop by [deleted]
Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.
Just because I don't positively know how to achieve eternal youth doesn't invalidate my confidence that the answer isn't McDonald's.
Metacognitor t1_j94ois4 wrote
That's a fair enough point; I can see where you're coming from on that. My perspective, though, is that as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, maybe something resembling sentience could emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.