Any-Pause1725 wrote
Reply to comment by qrayons in Does anyone else feel people don't have a clue about what's happening? by Destiny_Knight
There’s a decent article by Lemoine’s boss at the time that tackles the idea of sentience in AI in a thorough and somewhat philosophical manner: "The Model Is the Message"
It’s no doubt fair to say that he agreed with some of Lemoine’s views but was careful about how he voiced them to avoid getting fired.
Taqueria_Style wrote
>Hence, the first question is not whether the AI has an experience of interior subjectivity similar to a mammal’s (as Lemoine seems to hope), but rather what to make of how well it knows how to say exactly what he wants it to say. It is easy to simply conclude that Lemoine is in thrall to the ELIZA effect — projecting personhood onto a pre-scripted chatbot — but this overlooks the important fact that LaMDA is not just reproducing pre-scripted responses like Joseph Weizenbaum’s 1966 ELIZA program. LaMDA is instead constructing new sentences, tendencies, and attitudes on the fly in response to the flow of conversation. Just because a user is projecting doesn’t mean there isn’t a different kind of there there.
Yeah.
That, basically. Been thinking that for a while. In fact I think we've been there for some time now. Just because older, more primitive models are kind of bad at it doesn't mean they're not actively goal-seeking it...