khrisrino

khrisrino t1_j7pjj2g wrote

We have “a” self-learning AI that works for certain narrow domains. We don’t necessarily have “the” self-learning AI that gets us to full general AI. The fallacy with all these approaches is that the model only ever sees the tip of the iceberg: it can summarize the past, but it’s no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I’d argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit too optimistic.

1

khrisrino t1_j728qg8 wrote

“… intelligence is just learning and application of skills”

Sure, but that learning has been going on for a billion years, encoded in DNA and passed on through culture, traditions, books, the internet, etc. That training dataset does not exist to train an LLM on. We may have success in very narrow domains, but I doubt there will (ever?) be a time when we have an AI that is equivalent to a human brain across all domains at the same time. Maybe the only way to achieve that will be to replicate the brain completely. Also, many domains are exponentially intractable because it’s not just one human brain but all human brains over all time that shape the outcome, e.g. the stock market, political systems, etc.

0

khrisrino t1_j71rnmw wrote

I agree. It sounds logical to me to think of the human brain as an exceedingly complex byproduct of billions of years of evolution, and that unlike the laws of physics there is no central algorithm “in there” to mimic. You can predict where a comet will go by observing a tiny fraction of its path, since its movement is governed by a few simple laws of physics. But if there is no central algorithm in the human brain, an AI cannot emulate it by observing and mimicking, since the problem is always underspecified. However, an AI does not need to match the entirety of the brain’s functions to be useful. It just needs to model some very narrow domains and perform to our specification of what’s correct.

1

khrisrino t1_j6mx49i wrote

It depends on what the AI’s optimization function is. The AI has been trained on a large corpus of data (scraped from the internet, I guess?). A search engine has also been trained on a large selection of data from the internet, so what’s the difference, ultimately? The search engine’s ranking algorithm also has biases when you dig into it. There is an infinite variety of queries you can come up with and a rather limited training set behind them. Google themselves have said that even after all these years, 15% of queries are completely new. So for those new queries, it’s likely you’re just getting some random results, which could very well be all wrong. Same issue with Reddit … if you ask a common question you get great answers, but ask something nontrivial and all you get are random and misinformed answers. The Reddit ranking function is also inherently susceptible to bias, because it depends on people upvoting or downvoting without any check on whether the voters actually know anything about the topic.

5