speedywilfork t1_je07y85 wrote
Reply to comment by longleaf4 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
No, it can't. As I have told many people on here, I have been developing AI for 20 years. I am not speculating, I am EXPLAINING what is possible and what isn't. So far the GPT-4 demos are things that were expected, nothing impressive.
>and tell it it needs to figure out where to buy tickets, it probably can.
I want it to do it without me having to tell it. That is the point you are missing.
longleaf4 t1_je09b8h wrote
I've seen a lot of cynicism from the older crowd that has been trying to make real progress in the field. I've also seen examples from researchers who have explained why it shows advancement we never could have expected.
I wonder how much of it is healthy skepticism and how much is arrogance.
speedywilfork t1_je0b9uc wrote
>it shows advancement we never could have expected
This simply isn't true. Everything AI is doing right now has been expected, or should have been expected. Anything that can be learned will be learned by AI. Anything that has a finite outcome, it will excel at; anything that doesn't have a finite outcome, it will struggle with. It isn't arrogance, it is simply the way it works. It is like saying I am arrogant for claiming humans won't be able to fly like birds. Nope, that's just reality.
longleaf4 t1_je10fgu wrote
Refusing to consider conflicting ideas, and assuming that current knowledge is the pinnacle of understanding, seems like an arrogant way to view a developing field that no one person has complete insight into.
To me it seems kind of like saying fusion power will never be possible. Eventually you're going to be wrong, and it is more of a question of when our current understanding breaks down.
The claim is that an AI breakthrough has occurred, and only time can tell whether that is accurate or overly optimistic. Pretending breakthroughs can't happen isn't going to help anything, though. It's just not a smart area to make a lot of assumptions about right now.
speedywilfork t1_je2rdub wrote
AI can't process abstract thoughts. It will never be able to, because there is no way to teach it, and we don't even know how humans understand abstract thoughts. This is the basis for my conclusion: if it can't be programmed, AI will never have that ability.