NikoKun
NikoKun t1_j3hm4ym wrote
Reply to comment by KSRandom195 in Will ChatGPT be able to write better code than any human within the next year? by [deleted]
There does appear to be some level of understanding and problem-solving emerging as more than the sum of its knowledge, & that goes well beyond merely answering with solutions it's already seen. I can assure you, I've asked it to help me with some very obscure coding problem-solving that I'd been stuck on for a while, and I think thanks to its short-term memory, it figured out a solution I never would have. All it took was a little back and forth to give it enough context, and it worked out a solution that really couldn't exist anywhere else.
NikoKun t1_iw0766m wrote
Reply to comment by Godgobbledmyknoble in Amazon debuts Sparrow, a new bin-picking robot arm by Surur
I assume they'll change a lot about how the items are stored to avoid that issue, rather than leave them on traditional shelves.
NikoKun t1_iw04ud9 wrote
Reply to comment by lughnasadh in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
> It's worth noting The Turing Test is considered obsolete. It only requires an AI to appear to be intelligent enough to fool a human. In some instances, GPT-3 already does that with some of the more credulous sections of the population.
That depends more on the human, the specifications of said Turing Test, and how thoroughly it's performed. What would be the point of conducting a Turing Test using a "credulous" interviewer? lol
If we're talking about an extended-length test conducted by multiple experts who understand the concepts and are driven to figure out which participant is the AI, then I don't think GPT-3 could pass such a test, at least not for more than a few minutes, at best.. heh
NikoKun t1_je8g4xn wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I agree. Tho I think it's just people using the idea of AI not "understanding" to make themselves feel more comfortable with how good things are getting, and to 'move the bar' on what constitutes "real AI".
I recently stumbled upon this video that does a decent job explaining what I think you're trying to get across.