Submitted by Sieventer t3_zj8inu in singularity
hydraofwar t1_izymadp wrote
Reply to comment by User1539 in Exponential improvement in 6 months of AI in image generation ft. Ronald McDonald by Sieventer
Damn, where did Google say "this is nothing compared to what they're working on"? Imagine if LaMDA actually sounds exactly like a human.
User1539 t1_izyqe02 wrote
I've been playing with ChatGPT quite a bit, and you can kind of catch it not really understanding what it's talking about.
I was testing whether it could write code, and it's pretty good at spitting out example code that's 90% of what I want. I'm not saying that isn't impressive as hell, especially for easy boilerplate stuff I'd otherwise google around for.
That said, its summary of what it did was sometimes wrong. Usually just little things like "This opens an HTTP server on port 80", when the actual example it wrote listened on port 8080.
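(To illustrate the kind of mismatch I mean, here's a minimal sketch in Python. The comment doesn't include the actual code, so the handler and port here are my own hypothetical example, not what ChatGPT produced.)

```python
# Hypothetical reconstruction of the kind of boilerplate involved:
# the model's summary claimed "this opens an HTTP server on port 80",
# but the code itself binds to 8080.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # the summary said 80; the code says otherwise

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Serving on port {PORT}")
    server.serve_forever()
```

The code itself runs fine; it's the description of the code that drifts.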
It was like talking to a kid who'd diligently copied their homework from another kid, but didn't quite understand what it said.
Still, as a tool it would be useful as-is, and as an AI it's impressive as hell. But if you play with it long enough, you'll catch it contradicting itself and clearly not quite understanding what it's telling you.
I have seen other PhD-level experiments with AI where you're able to talk to a virtual bot about its surroundings, and it responds in a way that suggests it really does know what's going on around it and can help you find and do things in its virtual world.
I think that level of "understanding" of the text it's producing is still a ways off from what ChatGPT does today. Maybe that's what they're already excited about in the next version, or what Google is talking about?
Either way, I'm prepared to have my mind blown by AI's progress on a weekly basis.