Submitted by Malachiian t3_12348jj in Futurology
speedywilfork t1_je0b9uc wrote
Reply to comment by longleaf4 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
>it shows advancement we never could have expected
This simply isn't true. Everything AI is doing right now has been expected, or should have been expected. Anything that can be learned will be learned by AI. Anything with a finite outcome, it will excel at; anything without a finite outcome, it will struggle with. That isn't arrogance, it's simply how it works. It's like saying I'm arrogant for claiming humans won't be able to fly like birds. Nope, that's just reality.
longleaf4 t1_je10fgu wrote
An unwillingness to consider conflicting ideas, combined with the assumption that current knowledge is the pinnacle of understanding, seems like an arrogant way to view a developing field that no one person has complete insight into.
To me it seems kind of like saying fusion power will never be possible. Eventually you're going to be wrong, and it's more a question of when our current understanding breaks down.
The claim here is that a breakthrough has occurred, and only time will tell whether that is accurate or overly optimistic. Pretending breakthroughs can't happen doesn't help anything, though. It's just not a smart area to be making a lot of assumptions about right now.
speedywilfork t1_je2rdub wrote
AI can't process abstract thoughts, and it never will be able to, because there is no way to teach it; we don't even know how humans understand abstract thoughts. That is the basis for my conclusion: if it can't be programmed, AI will never have that ability.