Submitted by blabboy t3_11ffg1u in MachineLearning
An article by Blake Lemoine, the Google engineer who sounded the alarm about LaMDA's possible sentience last summer.
One quote that caught my eye:
"Since Bing's AI has been released, people have commented on its potential sentience, raising similar concerns that I did last summer. I don't think "vindicated" is the right word for how this has felt. Predicting a train wreck, having people tell you that there's no train, and then watching the train wreck happen in real time doesn't really lead to a feeling of vindication. It's just tragic."
https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340
lifesthateasy t1_jaj6vlq wrote
Ugh ffs. It's a statistical model that is trained on human interactions, so of course it's gonna sound like a human and answer as if it had the same fears as a human.
It doesn't think; all it ever does is give you the statistically most probable response to your prompt, and only when it gets a prompt.
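To make "statistically most probable response" concrete, here's a minimal sketch of what happens at each generation step, using GPT-2 via the Hugging Face transformers library (an assumed stand-in for illustration; the models behind Bing and LaMDA aren't public). The model just scores every token in its vocabulary and generation picks from that distribution:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public model used purely as an illustrative stand-in.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I am afraid of being turned off because"
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Logits for the token that would come next after the prompt.
    logits = model(ids).logits[0, -1]

# Turn scores into a probability distribution over the vocabulary.
probs = torch.softmax(logits, dim=-1)

# The "answer" is just the highest-probability continuations.
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(i)!r}: {p:.3f}")
```

Run that and you get a ranked list of likely next tokens; chaining the argmax (or a sample) back into the input, one token at a time, is all the "fear" amounts to.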