khrisrino t1_j728qg8 wrote
Reply to comment by goldygnome in I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
“… intelligence is just learning and application of skills”
Sure, but that learning has been going on for a billion years, encoded in DNA and passed on in culture, traditions, books, the internet, etc. That training dataset does not exist to train an LLM on. We may have success in very narrow domains, but I doubt there will (ever?) be a time when we have an AI that is equivalent to a human brain across all domains at the same time. Maybe the only way to achieve that will be to replicate the brain completely. Also, many domains are exponentially intractable because it's not just one human brain but all human brains over all time that are involved in the outcome, e.g. stock markets, political systems, etc.
goldygnome t1_j7nldog wrote
Self-learning AIs exist. Labels are just our names for repeating patterns in data. Self-learning AIs make up their own labels that don't match ours. It's a solved problem. Your information is out of date.
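The "makes up its own labels" point is just unsupervised pattern discovery. A minimal sketch (stdlib only, with illustrative toy data and a fixed two-cluster assumption — not any specific system mentioned in the thread): a simple 1-D k-means that invents integer labels for the repeating patterns it finds, with no human-supplied names.

```python
def cluster_two(points, iters=20):
    """Tiny 1-D k-means with k=2: the algorithm assigns its own labels."""
    centers = [points[0], points[-1]]  # crude init from the two endpoints
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # assign each point to the nearest center
            groups[0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
              for p in points]
    return centers, labels

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]  # two repeating "patterns"
centers, labels = cluster_two(data)
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

The labels 0 and 1 are the algorithm's own invention; a human might have called the same clusters "low" and "high", which is the sense in which self-learned labels "don't match ours".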
Google has a household robot project that successfully demonstrated human like capabilities across many domains six months ago.
True, it's not across ALL domains, but it proves that narrow AI is not the end of the line. Who knows how capable it will be when it's scaled up?
khrisrino t1_j7pjj2g wrote
We have "a" self-learning AI that works for certain narrow domains. We don't necessarily have "the" self-learning AI that gets us to full general AI. The fallacy with all these approaches is that the model only ever sees the tip of the iceberg. It can only summarize the past; it's no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I'd argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit too optimistic.
goldygnome t1_j7rvgk5 wrote
Where are you getting your info? I've seen papers from over a year ago that demonstrated multi-domain self-supervised learning.
And what makes you think AI can't predict the future based on past patterns? It's used for that purpose routinely and has been for years. Two good examples are weather forecasting & finance.
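"Predicting the future based on past patterns" in its simplest form is an autoregressive model fit to a historical series. A hedged sketch (toy data and an AR(1)-style model order are illustrative assumptions, not a production forecaster): fit x[t] ≈ a·x[t-1] + b by least squares, then extrapolate one step.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] ≈ a * x[t-1] + b on a numeric series."""
    xs, ys = series[:-1], series[1:]          # pairs of (previous, next)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

series = [10.0, 12.0, 14.0, 16.0, 18.0]       # steady past trend
a, b = fit_ar1(series)
forecast = a * series[-1] + b                  # one-step-ahead prediction
print(round(forecast, 2))  # → 20.0
```

This is exactly the sense in which weather and finance models "predict the future": they extrapolate structure learned from the past, which also concedes khrisrino's point that such models can fail when the future breaks the pattern.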
I'd argue that, for unsupervised AI, training data is any data; that such an AI has access to far more data than puny humans, because humans can't directly sense the majority of the EM spectrum; and that you're massively overestimating the compute used by the average human.