Submitted by 420BigDawg_ t3_107ve7y in singularity
Now down to July 21st, 2027.
This time last month: November 1st, 2027
This time last year: February 25th, 2040
Sources?
Just trust me, bro.
Maybe you could actually give him time to respond before coming up with some snarky little jab? Heinrich is active in multiple communities and is a trustworthy and reliable person.
/u/rationalkat made a cherry-picked list in the big predictions thread:
[deleted]
All the experts I've seen say 2029, like Altman and Carmack. Musk has also said 2029 if that's an opinion you care about.
"All the experts"
Altman is a CEO with a vested interest in hyping up AI progress for his business.
Musk said we would be on Mars and having self-driving cars take us everywhere by now lol.
I don't see how saying "[thing] will come in 7 years" influences anything as a prediction; it's too far away to generate any tangible hype with the public. If he were going to lie to manipulate a product's value, I'd expect the predictions to be something more near-term, if we're indeed cynically manipulating the market. Not to mention, none of that about Sam Altman changes the fact that he's an expert, and his credibility rests on his correctness; it's in his interest to be right. You can't just claim biased interests here, it's more nuanced than that. And none of it changes the fact that they're all saying the same thing, 2029. That's pretty consistent, and I'm inclined to believe it.
you have it all backwards
generating long term hype is perfect for a tech startup for 2 reasons
Musk is a complete moron, sorry to bust your bubble.
What bubble I agree with you lol
which experts, big dawg
I’d like to see a source for this.
It already happened......
I've commented this before, and since it's relevant, I'll comment it again (almost verbatim):
Take Metaculus seriously at your own risk. Anyone can make a prediction on that website, and those who do tend to be tech junkies who are generally optimistic about timelines.
To my understanding, most AI/ML expert surveys continue to have an AGI arrival year average of some decades from now/mid-century plus, and the majority of individuals who are AI/ML researchers have similar AGI timelines.
Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.
>most AI/ML expert surveys continue to have an AGI arrival year average of some decades from now/mid-century plus, and the majority of individuals who are AI/ML researchers have similar AGI timelines
You know, when the Manhattan Project was being worked on, who would you have trusted for a prediction of the first nuke detonation: Enrico Fermi, or some physicist who had merely worked on radioactive materials?
I'm suspicious that any "experts" with valid opinions exist outside of well-funded labs (OpenAI/Google/Meta/Anthropic/Hugging Face, etc.).
They are saying a median of about ~8 years, which would be 2031.
>They are saying a median of about ~8 years, which would be 2031.
That's an oddly specific number/year.
Also, remember that people who work at AI corporations, as opposed to academia (for example), have the tendency to hype up their work, which makes their timelines (on average) shorter. To me personally, a survey of AI researchers on timelines has more weight than AI Twitter, which is infested with hype.
> That's an oddly specific number/year.
No, that's the median of a spread, and it's stated with the caveat of "about". That's literally the opposite of "specific".
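To illustrate (a toy example, all numbers invented): a median is a summary of a whole spread of guesses, and it can land on a round number like 8 without any individual being "specific":

```python
import statistics

# Hypothetical spread of individual timeline guesses, in years from now.
guesses = [2, 3, 5, 6, 8, 8, 10, 15, 25, 40, 75]

# The median is the middle of the spread, not anyone's precise claim.
print(statistics.median(guesses))  # -> 8, i.e. "about ~8 years"
```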
Source on that 8 years number? Would certainly be quite a compelling argument if a random sampling of exclusively well funded AI PhDs had a median prediction of 8 years.
It's just the opinions on the EleutherAI Discord. Arguably, weak general AI will be here in 1-2 years.
My main point is that the members I'm referring to all live in the Bay Area and work for Hugging Face and OpenAI. Their opinion is more valid than, say, a 60-year-old professor's in the artificial intelligence department at Carnegie Mellon.
> Also, I'm a bit skeptical that the amount of progress that's been made in AI the past year (which has been impressive, no doubt) merits THAT much of a shave-off from the February 2022 prediction. Just my thoughts.
Correct, and if anything, the mere fact that the prediction has changed by over a decade in the span of 12 months is strong evidence of exactly what you’re saying — this prediction is made by people who aren’t really in the know.
If the weatherman told you it was going to be 72 and sunny tomorrow, and then when you woke up he said actually it's going to be -15 and a blizzard, you would probably think: hmmm, maybe this guy doesn't know what the fuck he's talking about.
I agree with all of your comments. And to add what I believe to be a more important point, the Metaculus question defines weakly general AI as (heavily paraphrased):
- Pass the Turing Test (text prompt)
- Achieve human-level written language comprehension on the Winograd Schema Challenge
- Achieve human-level result on the math section of the SATs
- Play the Atari game Montezuma's Revenge at a human level
We already have separate narrow AIs that can do these tasks at either human or nearly human levels. We even have more general AIs that can do several of these tasks at a near-human level. I wouldn't be overly surprised if by the end of 2023 we have a single AI that could do all of these tasks (and many other human-level tasks). But even so, many people wouldn't call it general AI.
Not trying to throw shade here on Metaculus. They had to narrowly define general AI and have concrete, measurable objectives. I just personally disagree with where they drew that line.
Sure, I trust the opinion of an unknown redditor without any links. If you do decide to post a link, it should be a survey from after Stable Diffusion and ChatGPT.
My guess is AGI in 2029, so they're more optimistic than I am, but I hope it happens sooner.
One of the few non-terrible posts in the thread. Otherwise it's been largely garbage.
As a prediction, this is utterly meaningless. I'm not even sure if this is useful at all as a gauge of anything.
it's not just a prediction, it's a crowdsourced prediction. Statistically, crowdsourcing does better at converging to the actual answer.
>Statistically, crowdsourcing does better at converging to the actual answer.
This should be the top reply.
But what is the crowd? Is this based on a sampling of all types of people, or enthusiasts being enthusiastic?
Yes, this is the key question. If I built such a website, I'd try to implement some way to categorize the crowd: "30% expert, 50% enthusiast, 20% hobbyist" or something like that. Of course, getting any kind of certainty on that would be hard, but it turns out that if you ask nicely and with a tone of seriousness, most people just tell the truth, so maybe it wouldn't even be that hard.
> Statistically, crowdsourcing does better at converging to the actual answer.
Statistician here, and this is a good example of a relatively meaningless statistic, to be honest. Crowdsourcing statistically tends to be more accurate than just asking one person, in the average case, for what should be mathematically obvious reasons.
But the “average case” isn’t applicable to literally every situation. I would posit that when we start to talk about areas of expertise that require a PhD to even begin to be taken seriously for your opinion, crowdsourcing from unverified users starts to become a whole lot more biased.
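A toy simulation (all numbers invented) makes both halves of the point concrete: averaging cancels independent noise, but it cannot cancel a bias the whole crowd shares:

```python
import random

random.seed(0)
TRUE_VALUE = 100.0  # the quantity everyone is trying to estimate

def one_guess(shared_bias=0.0, noise=30.0):
    # One person's estimate: truth + individual noise + any bias shared by all.
    return TRUE_VALUE + shared_bias + random.gauss(0, noise)

# Unbiased crowd: the mean converges toward the truth as the crowd grows.
crowd = [one_guess() for _ in range(1000)]
print(abs(sum(crowd) / len(crowd) - TRUE_VALUE))              # small error

# Systematically optimistic crowd: averaging preserves the shared bias.
enthusiasts = [one_guess(shared_bias=25.0) for _ in range(1000)]
print(abs(sum(enthusiasts) / len(enthusiasts) - TRUE_VALUE))  # stays near 25
```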
[removed]
I just feel like a lot of people are seeing some acceleration and think that this is all of it. What I think is that we'll continue seeing regular advances in tech, AI, and science in general. But the '30s will be the start of AGI, and the '40s will be when it really takes off (in terms of adoption and utilization). Even a guess of before 2035 is, in my estimation, an optimistic projection where everything goes right and there aren't any setbacks or delays. But just saying the '30s is a solid guess.
Your prediction and the 2027 prediction could both be right. DeepMind and OpenAI could have something that looks like AGI in 2027, but they keep it within the lab for another 3 years just testing it and building safeguards. Then in the 30s they go public with it and it begins proliferating. Then maybe it takes 10 years for it to transform manufacturing, agriculture, robotics, medicine, and the wider population, etc, due to regulation, ethical concerns, and resource limits.
How big do you think the chances are of it going Paperclip Maximizer-level wrong?
Low. At most 5%. Although 5% is still high.
>But the 30's will be the start of AGI, and 40's will be when it really takes off
I vehemently disagree. How would it take 10 years for such a transformative technology to be optimized and utilized? Do you have a timeline for that 10 years between "start of the AGI" and its takeoff?
I never said it'd be 10 years, though it could for all anyone knows. If I said it would be released in 2035, and widely adopted by 2040, I don't think that's unreasonable. But I also believe in a slow takeoff and more practical timelines. Even Google, as seemingly ubiquitous as it is, did not become that way overnight, it took a few years to become widely known and used. Also we're dealing with multiple unknowns, like how many companies are working on AGI, how far along they are, how long it takes to adequately train them before release, how the rest of the world (not just enthusiasts) accepts or doesn't accept AGI, how many markets will be disrupted and the reaction to that, legal issues along the way, etc. etc. Optimistic timelines don't seem to account for everything.
Edit: I should also mention one of the biggest hurdles is even getting people to understand and agree on what AGI is! We could have it for years and many people might not even realize. Conversely, we have people claiming we have it NOW, or that certain things are AGI when they aren't even close.
I have ChatGPT in my frickin' pocket most of the day. It's amazing, but still mostly just a testbot, so here I am, kind of meh, even though a few months ago I thought something like it was at least a few years away.
Faster than expected. And yet life carries on much as before, with a little sorcerer's apprentice nearby if I want to bother. What a time!
Did 2022 actually feel like just "some" acceleration to you?
Feel? No, not quite. But it's all relative. If one narrows their perspective on what's to come, it could feel like a huge change already. Personally I think this is just us dipping our toes into the water, so to speak. So yes "some" acceleration, especially when considering how many people think that what we've seen so far is half or most of the way to AGI.
Who cares if it’s meaningless?
Fair enough, but it's a thing for a reason. Obviously the date will continue to change, so it could only possibly be a measure of that change. So why is it changing? What is it based on? It would make more sense to say a decade than a specific date or even year.
Why are you asking stupid questions?
What's interesting is, 10 years ago the prediction of a lot of people I knew was 10 years and hey it's 10 years again. I think psychologically, 10 years is about the level people have a hard time imagining past, but still think is pretty close. For most adults, 20-25 years isn't really going to help their life, so they pick 10 years.
As far as the crowdsourcing comment, yikes. We aren't out there crowdsourcing PhDs and open-heart surgery. I know there was that whole crowdfarm article in Communications of the ACM, and I think that is more a degradation of labor rights than evidence of value in random input.
>What's interesting is, 10 years ago the prediction of a lot of people I knew was 10 years and hey it's 10 years again.
May be true for "the people you know", but if you look at the general opinion of people interested in this field, the predictions used to start at the 2040s just last year.
Selection bias is certainly a thing, but "the people I know" are generally software engineers with advanced degrees and philosophers into AI, so as biased samples go, it's a pretty educated one.
In that case maybe educated opinion is worse than the wisdom of the crowds, as the community prediction for AGI was 2040 last year as you can see from the post which is not "10 years away".
It's 18, actually. The point I'm making is that we have a cognitive bias towards estimates of 10-20 years or so, and we also have a difficult time understanding nonlinearity.
The big SingInst (Singularity Institute) hypothesis was that there would be a "foom" moment where we go super-exponential. From that point of view, you would have to start talking about a probability distribution over when that nonlinearity happens.
I prefer stacked sigmoids, where progress goes exponential for a while, hits some limit (think Moore's Law at around 8 nm), flattens out, and then a new paradigm starts the next curve.
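As a concrete sketch (all parameters invented), "stacked sigmoids" just means summing successive S-curves, each of which looks exponential early and saturates at its own ceiling:

```python
import math

def sigmoid(t, midpoint, rate, ceiling):
    # One S-curve: near-exponential growth early, flat near its ceiling.
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t):
    # Total progress = sum of successive paradigms (midpoint, rate, ceiling);
    # each new curve takes over as the previous one saturates.
    paradigms = [(10, 0.8, 1.0), (25, 0.6, 3.0), (40, 0.5, 9.0)]
    return sum(sigmoid(t, m, r, c) for m, r, c in paradigms)

for year in range(0, 51, 5):
    print(year, round(stacked_progress(year), 2))
```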
Training giant neural nets as language models is a very important development, but imho AlphaGo was more interesting technically, with its combination of value and policy networks, versus billions of nodes in some multilayer net.
The problem is that nobody reads Metaculus's definition of what they hold to be "Weakly General AI":
It requires a unified system to accomplish four tasks. But two of those tasks could already be completed by AI (the Winogrande challenge and playing Montezuma's Revenge), one might be hard to accomplish purely because of its training-data requirement (there's a good chance an AI system has more than 10 SAT papers in its training data, and good luck getting those out), and one of the tasks is now defunct.
Aka, I'd rate the ability of a system to meet those 4 requirements as coming probably way earlier than 2027, but that's because the requirements don't seem to hold up great against what the community perceives to be weak AGI. Actual weak AGI I'd rate way later than the Metaculus question.
If you have a single AI system that does all four, we will already have something a lot more powerful than what exists today.
The Metaculus definition will already cause massive waves of disruption. I would consider it indeed a "weak AGI", but this is just more or less fruitless categorization.
making one system do all 4 is a lot harder than making 4 systems that do one each.
My guess is that an "almost" AGI that is 90% correct and almost 90% as productive as a human (even though it has many odd quirks) will happen by 2027. An AI like this that is able to double-check its own work will be enough to radically change the world.
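Roughly, the "double-check its own work" loop could look like this sketch (the generate and critique calls are hypothetical stand-ins for some future model, not a real API):

```python
def solve_with_self_check(task, generate, critique, max_attempts=3):
    # Draft an answer, ask the model to critique it, and retry on failure.
    # `generate` and `critique` are assumed model calls, passed in by the caller.
    answer = generate(task)
    for _ in range(max_attempts):
        verdict = critique(task, answer)  # e.g. "ok" or a description of the flaw
        if verdict == "ok":
            return answer
        answer = generate(f"{task}\nPrevious attempt failed because: {verdict}")
    return answer  # best effort after max_attempts

# Toy stand-ins so the sketch runs end to end:
demo_generate = lambda task: "4"
demo_critique = lambda task, answer: "ok"
print(solve_with_self_check("What is 2+2?", demo_generate, demo_critique))
```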
"90% as productive" - what does that even mean?
[deleted]
If Metaculus was reliable I could be a billionaire in a few weeks.
General AI is a mountain range.
From far away it's easy to point at it and say 'that's it!'
As you get closer though it gets harder and harder to determine when you're actually on or at the top of the mountain, because you're surrounded by other smaller mountains.
I think the same will happen with AI. We're obsessed with the only 3-5 AIs currently available, but by the end of the year there will be multiple AIs doing multiple things very, very well.
The AI landscape is going to change, and we'll be so surrounded by AIs that it will be hard to determine which one, by itself, becomes the general AI of our dreams.
Maybe the General AI is just one that knows which sub-AI model is best for the task you request and farms it out to that one in particular? Kind of like a general contractor and sub-contractors when you do home renovations..
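That "general contractor" idea is basically a router in front of narrow specialists. A minimal sketch (the specialists and keyword routing here are invented placeholders; real routing would itself be a learned model):

```python
from typing import Callable, Dict

# Hypothetical narrow specialists, keyed by task type.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "math": lambda prompt: "x = 7",                  # stand-in math solver
    "code": lambda prompt: "print('hello')",         # stand-in code model
    "chat": lambda prompt: "Sure, happy to help!",   # stand-in chat model
}

def classify(prompt: str) -> str:
    # The "general contractor" part: decide which specialist fits the job.
    if any(w in prompt for w in ("solve", "equation", "integral")):
        return "math"
    if any(w in prompt for w in ("function", "bug", "compile")):
        return "code"
    return "chat"

def general_contractor(prompt: str) -> str:
    return SPECIALISTS[classify(prompt)](prompt)

print(general_contractor("solve this equation for x"))  # routed to "math"
```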
'Prediction for General A.I. continues to drop.'
Probably because of people from this sub lol
AGI tomorrow xD
HeinrichTheWolf_17 t1_j3opxnr wrote
The consensus is the same outside this sub too, albeit not as soon; many experts are moving their timelines to the 2030s.