Submitted by [deleted] t3_115ez2r in MachineLearning
Metacognitor t1_j92wykk wrote
Reply to comment by KPTN25 in [D] Please stop by [deleted]
Oh yeah? What is capable of producing sentience?
KPTN25 t1_j92yfz4 wrote
None of the models or frameworks developed to date. None are even close.
the320x200 t1_j93a7sy wrote
Given our track record of mistreating animals and our fellow people, treating them as mere objects, it's very likely that when the day does come, we will cross the line first and only realize it afterward.
Metacognitor t1_j941yl1 wrote
My question was more rhetorical, as in, what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statements of that nature (like yours above) come across as presumptuous. Just my opinion.
KPTN25 t1_j94a1y0 wrote
Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.
Just because I don't positively know how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.
Metacognitor t1_j94ois4 wrote
That's a fair enough point, and I can see where you're coming from. Although my perspective is that, as the models become increasingly large, to the point of being almost entirely a "black box" from a dev perspective, something resembling sentience could perhaps emerge spontaneously as a function of some type of self-referential or evaluative model within the primary one. It would obviously be a more limited form of sentience (not human-level), but perhaps.