Submitted by Singularian2501 t3_yrw80z in singularity
AsuhoChinami t1_ivwktl4 wrote
lol. There was a thread just a couple of days ago where an army of le super intelligent and mature self-proclaimed rational skeptics said a bunch of stupid shit about how even the most optimistic of experts expect AGI no earlier than the 2040s, and yet OP has links to many experts who believe it will come during the next 5-10 years. It's... it's almost as though self-proclaimed skeptics and cynics and "realists" pull stuff out of their ass to deflate the other side and might not be 100 percent intellectually honest...? Nah, that's crazy talk, they're all-knowing oracles and the lone voices of sanity and reason (just as they have been since I got into futurism 11 years ago) and anyone who disagrees with them on anything is a delusional fucking moron Singularitarian religious wackjob who needs a "reality check" (just as they have been since, again, 2011 at absolute bare minimum).
As for me, I expect amazing things from 2023. Not AGI, but AIs of such sophistication, intelligence, and generality that it's hard to care too much about the way it falls short because what's there is incredible enough to be deliriously happy. I also expect 2023 AI to be good enough that it becomes easy to pinpoint a specific year (almost certainly within the 20s) for AGI, instead of the current "idk sometime in the next few years/2030s/2040s/whatever." I expect the more intellectually honest "realists" to join the "10 years or less" camp while the stubborn morons who are the complete opposite of "realistic" cling to their 2040s+ stance and sneer at anyone who disagrees with them on anything just as they have for the past 10+ years.
KIFF_82 t1_ivxiidh wrote
I believe many of them come from futurology, which is one of the saddest and most depressing subreddits ever created. Why they're joining this one..? idk.
HeinrichTheWolf_17 t1_iw14qpi wrote
I started over there back in 2011; it was a good subreddit back then, but now it's basically r/climatechangedoomerism, not r/futurology anymore.
A lot of people say the mods ruined it and I tend to agree.
RavenWolf1 t1_iw53t4j wrote
Don't worry, this sub is fast turning like futurology because all the bullshit article spam we are having here these days.
KIFF_82 t1_iw78abj wrote
Let's see what happens after GPT-4. 🤞
PrivateLudo t1_ivx2ne4 wrote
Most people don’t realize and don’t want to realize how quickly technology is advancing.
Consider that the amount of computing power used in AI training is currently doubling every 3.4 months.
In 2017, deepfakes started becoming widespread on the internet. In 2018, we had GPT-1, a much weaker predecessor of GPT-3 (which came out in 2020). DALL-E came out in 2021, with its much-improved successor, DALL-E 2, coming out in 2022. Only one year separates the two versions of DALL-E, and it can now create highly detailed art (one piece even won an art competition). And we're now starting to see videos generated entirely by AI from text prompts.
All of this happened in just FIVE years. If we factor in Moore's law and the growth of computing power, breakthroughs and changes will come even faster. On top of that, the AI industry has been growing extremely rapidly over just the last two years.
It's absolutely not crazy to think AGI could come in the next 5-10 years.
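The arithmetic behind that comparison is easy to check. A minimal Python sketch, assuming the 3.4-month doubling figure above (the function name and the 24-month Moore's-law period are my own choices for illustration):

```python
def compute_growth(months: float, doubling_period_months: float = 3.4) -> float:
    """Factor by which compute grows over `months`, given a doubling period."""
    return 2 ** (months / doubling_period_months)

# Over 5 years (60 months), a 3.4-month doubling period means ~17.6 doublings,
# i.e. a growth factor on the order of 200,000x.
ai_compute = compute_growth(60)

# Classic Moore's law (doubling roughly every 2 years) gives only ~5.7x
# over the same 5 years.
moores_law = compute_growth(60, doubling_period_months=24.0)

print(f"AI training compute over 5 years:  ~{ai_compute:,.0f}x")
print(f"Moore's-law transistor count:      ~{moores_law:.1f}x")
```

The point of the sketch is just the gap between the two curves: the claimed AI-compute trend outpaces Moore's law by four to five orders of magnitude over a five-year window.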
imlaggingsobad t1_ivxhtv0 wrote
By 2023 I think it will become obvious to anyone paying attention that AGI WILL happen and that every job will get replaced in our lifetime.
Russila t1_ivxe531 wrote
I literally responded to someone with this exact attitude. I said I base my expectations on what the best researchers working on the problem say, and the response I got was "That's just selection bias," which, sure, it could be. But if we assume even the best researchers in a field don't know what they're talking about, then why tf are they there?
Thatingles t1_ivxe8ct wrote
They'll just redefine terms. 'Oh, this walking, talking robot isn't AGI, it's just algorithms, I'm not wrong yet'. It's standard in this sort of debate.
TheTomatoBoy9 t1_iw07lpa wrote
I mean... walking and talking isn't AGI either.
Unless you yourself are redefining what the term means. A robot that used ML to learn a functioning walk and speech isn't AGI just because it walks and talks.
Installing speech software on a Boston Dynamics robot doesn't magically make it AGI.
You're doing the same thing they do, just in the other direction.
CriminalizeGolf t1_ivzqzkz wrote
Why don't you go ask the people on /r/machinelearning when they expect AGI?
AsuhoChinami t1_iw02scp wrote
Let me guess - they're a group of skeptical badasses who tell it like it is and as such get your Seal of Approval? What makes you think I really give a shit what they have to say? They aren't going to undo the opinions I've developed over the past 10 years of reading about AI and observing its progress, nor are they going to override the opinions of friends and acquaintances who I respect far, far more than these random nobodies.
CriminalizeGolf t1_iw04wcb wrote
It just seems to me like people who actually work with and understand the SOTA in machine learning are probably the most qualified to make predictions about the future of the field.
AsuhoChinami t1_iw065a9 wrote
You're right, because I can't think of a single person in high places or who works with SOTA who predicts 20s AGI. The only people who say that, ever, are clueless laypersons. Only those who share your exact opinion are in any way informed or worth listening to. Oh, wait... none of that is true at all.
HeinrichTheWolf_17 t1_iw150o3 wrote
IIRC a lot of people at OpenAI and DeepMind said they expected AGI by 2030, Shane Legg comes to mind, and Sam Altman also seems to expect AGI any day now. I think Demis Hassabis of DeepMind was one exception when he said "decades and decades," though he has since retracted that statement. I believe the last time he said it was back when AlphaGo beat Lee Sedol.
AsuhoChinami t1_iw1arq2 wrote
Sam Altman expects AGI any year now? Like, possibly 2023 or something? That's interesting.
"Decades and decades" was a pretty reasonable sentiment in 2016, I think. I myself probably would have expected AGI in either the 2030s or 2040s back then. But now... nah. AI has advanced too much during the 20s, already reached proof-of-concept levels of sophistication and generalization, and each consecutive year makes a bigger difference than the last. It's just... mathematically impossible at this point to have 7+ major leaps forward and not end up with AGI. The gap between modern AI and AGI is not large enough to have seven years on par with 2022 and not end up with AGI (and future years won't be "on par," 2023 will make more progress than 2022, 2024 more than 2023).
Anyway though, apologies to CriminalizeGolf. It's unfair of me to be an asshole when he was perfectly respectful and polite. I'm just fractious after 11 years of dealing with tens of thousands of skeptics and "realists" who are snotty and condescending.