khrisrino
khrisrino t1_j728qg8 wrote
Reply to comment by goldygnome in I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
“… intelligence is just learning and application of skills”
Sure, but that learning has been going on for a billion years, encoded in DNA and passed on in culture, traditions, books, the internet, etc. That training dataset does not exist to train an LLM on. We may have success in very narrow domains, but I doubt there will (ever?) be a time when we have an AI that is equivalent to a human brain across all domains at the same time. Maybe the only way to achieve that will be to replicate the brain completely. Also, many domains are exponentially intractable because it’s not just one human brain but all human brains over all time that are involved in the outcome, e.g. the stock market, political systems, etc.
khrisrino t1_j71rnmw wrote
Reply to I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
I agree. It sounds logical to me to think of the human brain as an exceedingly complex byproduct of billions of years of evolution, and that, unlike the laws of physics, there is no central algorithm “in there” to mimic. You can predict where a comet will go by observing a tiny fraction of its path, since its movement is mostly governed by a few simple laws of physics. But assuming there is no central algorithm in the human brain, it’s not possible for an AI to emulate it by observing and mimicking, since the problem is always underspecified. However, an AI does not need to match the entirety of the brain’s functions to be useful. It just needs to model some very narrow domains and perform to our specification of what’s correct.
khrisrino t1_j6n01m9 wrote
Reply to comment by shanoshamanizum in Why AI can not replace search index by shanoshamanizum
Yes, conceptually the search engine is better in that way. But we don’t live in a conceptual world. We are very much exposed to the misbehavior of providers, so we cannot ignore them in the evaluation.
khrisrino t1_j6mzbk0 wrote
Reply to comment by shanoshamanizum in Why AI can not replace search index by shanoshamanizum
You think you’re getting the full set of possible search results from which you then freely choose … but that’s not how it works. They have a ranker behind the scenes that decides what you get to see.
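To make that concrete, here’s a minimal sketch of what any ranker effectively does (hypothetical names and scores, not any real engine’s code):

```python
def rank(candidates, score, k=10):
    """Return only the top-k candidates by the provider's scoring function.

    The scoring function is opaque to the user: anything that doesn't
    make the top k is simply never shown, so you never see the full
    candidate set you think you're choosing from.
    """
    return sorted(candidates, key=score, reverse=True)[:k]

# Hypothetical example: the provider's scores, not yours, decide visibility.
pages = {"site-a": 0.91, "site-b": 0.34, "site-c": 0.77}
top = rank(pages, score=pages.get, k=2)  # ['site-a', 'site-c']; site-b vanishes
```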
khrisrino t1_j6mx49i wrote
Reply to comment by shanoshamanizum in Why AI can not replace search index by shanoshamanizum
It depends on what the AI’s optimization function is. The AI has been trained on a large corpus of data (scraped from the internet, I guess?). The search engine has also been trained on a large selection of data from the internet. So what’s the difference, ultimately? The search engine’s ranking algorithm also has biases when you dig into it. There is an infinite variety of queries you can come up with and a rather limited training set behind them. Google themselves have said that even after all these years, 15% of queries are completely new. So for those new queries it’s likely you’re just getting some random results, which could very well be all wrong.

Same issue with Reddit: if you ask common questions you get great answers, but ask something nontrivial and all you get are random, misinformed answers. The Reddit ranking function is also inherently susceptible to bias because it depends on people upvoting or downvoting with no indication of whether the voters actually know anything about the topic.
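As a rough illustration of that last point (a deliberately simplified sketch, not Reddit’s actual ranking code):

```python
def naive_vote_score(upvotes: int, downvotes: int) -> int:
    # Every vote is weighted equally: a domain expert's downvote counts
    # exactly as much as a drive-by upvote. The resulting ranking reflects
    # popularity among whoever happened to vote, not correctness.
    return upvotes - downvotes
```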
khrisrino t1_j6mssno wrote
Reply to comment by shanoshamanizum in Why AI can not replace search index by shanoshamanizum
“Give me links to the 10 most relevant websites that support or contradict what you just told me.” How’s that?
khrisrino t1_j6msbxx wrote
Reply to Why AI can not replace search index by shanoshamanizum
You could ask the AI to give you 10 answers instead of just 1, right?
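For example, with an API that can sample several completions per request (a sketch assuming the OpenAI Python client; the model name and question are just placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Sample 10 independent answers to the same question instead of one.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "What causes inflation?"}],
    n=10,                   # number of completions to return
    temperature=0.8,        # some randomness so the answers differ
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Answer {i}: {choice.message.content}")
```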
khrisrino t1_j7pjj2g wrote
Reply to comment by goldygnome in I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
We have “a” self-learning AI that works for certain narrow domains. We don’t necessarily have “the” self-learning AI that gets us to full general AI. The fallacy with all these approaches is that the model only ever sees the tip of the iceberg. It can only summarize the past; it’s no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I’d argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit too optimistic.