The_Woman_of_Gont
The_Woman_of_Gont t1_jdywthg wrote
Reply to comment by User1539 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Agreed. I’d add to that sentiment that non-AGI AI is already capable of convincing reasonable laypeople it’s conscious, to an extent I don’t believe anyone had really thought possible.
We’re entering a huge grey area with AIs that can increasingly convincingly pass Turing tests and "seem" like AGI despite…well, not being AGI. It’s an area that hasn’t been given much real thought, even in fiction, and I tend to suspect we’re going to be in this spot for a long while (relatively speaking, anyway). Things are going to get very interesting as this technology disseminates and we get more products like Replika that are oriented towards simulating social experiences; a lot of people are going to develop unhealthy attachments to these things.
The_Woman_of_Gont t1_j1k7qzo wrote
Reply to comment by farmer15erf in Texas coach Chris Beard's fiancee says he didn't strangle her. by PrincessBananas85
Yup. It’s part of why a lot of states will charge these sorts of crimes unilaterally, without regard to input from the victim. The number of abused individuals willing to voluntarily cooperate in charging their abuser with a crime, and not later ask that the charges be dropped, is frighteningly small.
The_Woman_of_Gont t1_jdyy87t wrote
Reply to comment by Azuladagio in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Exactly, and that’s kind of the problem. The goalposts some people set are so high that you’re basically asking the AI to pull knowledge out of a vacuum. It’s equivalent to performing the Forbidden Experiment in the hope that the subject spontaneously develops their own language for no apparent reason (then declaring the child not sentient when it fails).
It’s pretty clear that at this moment we’re a decent ways away from proper AGI that can act on its own “volition” without very direct prompting, or discover scientific processes on its own. But I also don’t think anyone has adequately defined where the line actually is: at what point is the input sufficiently negligible that novel or unexpected output counts as a sign of emergent intelligence rather than just a fluke of the programming?
Honestly, I don’t know that we can even agree on the answer to that question, especially if we bring relevant papers like Bargh & Chartrand (1999) into the discussion. I suspect that as things develop, the moment people decide there’s a ghost in the machine will ultimately boil down to a gut-level “I know it when I see it” reaction rather than any particular hard figure. And some people will simply never reach that point, while a handful right now probably already have.