goldygnome t1_j7nldog wrote
Reply to comment by khrisrino in I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
Self-learning AIs exist. Labels are just our names for repeating patterns in data; self-learning AIs make up their own labels, which don't have to match ours. It's a solved problem, and your information is out of date.
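To make the "own labels" point concrete, here's a minimal sketch (assuming NumPy and scikit-learn; the blob data and cluster count are invented for illustration). An unsupervised model assigns its own integer labels purely from structure in the data; mapping them to human names is a separate step.

```python
# Minimal sketch: an unsupervised model invents its own "labels"
# (cluster IDs) purely from structure in the data -- no human-provided
# names are involved. KMeans stands in for label-free pattern discovery.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three unnamed "concepts": blobs of points the model has never seen labeled
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2)),
    rng.normal(loc=[0, 5], scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
# The model's "labels" are just integers it made up for the patterns it found;
# matching them to our human names (if any exist) is a separate step.
print(model.labels_[:10])
```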
Google has a household robot project that successfully demonstrated human-like capabilities across many domains six months ago.
True, it's not across ALL domains, but it proves that narrow AI is not the end of the line. Who knows how capable it will be when it's scaled up?
khrisrino t1_j7pjj2g wrote
We have “a” self-learning AI that works for certain narrow domains. We don’t necessarily have “the” self-learning AI that gets us to full general AI. The fallacy with all these approaches is that they only ever see the tip of the iceberg: they can summarize the past, but they're no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I’d argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit over-optimistic.
goldygnome t1_j7rvgk5 wrote
Where are you getting your info? I've seen papers from over a year ago that demonstrated multi-domain self-supervised learning.
And what makes you think AI can't predict the future based on past patterns? It's used for that purpose routinely and has been for years. Two good examples are weather forecasting & finance.
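For a concrete (if toy) example of predicting the future from past patterns, here's a minimal sketch assuming only NumPy: a least-squares autoregressive fit, rolled forward a few steps past the observed series. The series, lag order, and horizon are made up for illustration; real weather and finance models are vastly larger but rest on the same basic idea.

```python
# Minimal sketch: forecast future values from past patterns with a simple
# autoregressive (AR) model fit by least squares. Pure NumPy.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)

p = 5  # lag order: predict each value from its previous 5 values
# Design matrix X = windows of past values, target y = the value that follows
X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
y = series[p:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the fitted model forward 10 steps beyond the observed data
history = list(series[-p:])
for _ in range(10):
    history.append(np.dot(coeffs, history[-p:]))
print(np.round(history[p:], 3))
```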
I'd argue that, for unsupervised AI, any data is training data; that AI has access to far more data than puny humans, because humans can't directly sense most of the EM spectrum; and that you're massively overestimating the compute used by the average human.