NikoKun t1_je8g4xn wrote

I agree. Tho I think it's just people using the idea of AI not "understanding" to make themselves feel more comfortable with how good things are getting, and to 'move the bar' on what constitutes "real AI".

I recently stumbled upon this video that does a decent job explaining what I think you're trying to get across.

1

NikoKun t1_j3hm4ym wrote

There does appear to be some level of understanding and problem-solving emerging as more than the sum of its knowledge, & that goes well beyond merely answering with solutions it's already seen. I can assure you, I've asked it to help me with some very obscure coding problems that I'd been stuck on for a while, and I think thanks to its short-term memory, it figured out a solution I never would have. All it took was a little back and forth to give it enough context, and it worked out a solution that really couldn't exist anywhere else.

2

NikoKun t1_iw04ud9 wrote

> It's worth noting The Turing Test is considered obsolete. It only requires an AI to appear to be intelligent enough to fool a human. In some instances, GPT-3 already does that with some of the more credulous sections of the population.

That depends more on the human, the specifications of said Turing Test, and how thoroughly it's conducted. What would be the point of conducting a Turing Test using a "credulous" interviewer? lol

If we're talking about an extended-length test, conducted by multiple experts who understand the concepts and are driven to figure out which participant is the AI... I don't think GPT-3 could pass such a test, at least not for more than a few minutes, at best.. heh

58