
AsuhoChinami t1_j52x8ly wrote

Weird, some super aggressive, inflammatory guy outright called me a delusional idiot for not believing AGI will take until 2050-2065 (which is, in his words, the consensus among almost all AI experts).

28

icedrift t1_j535agx wrote

He's not wrong... In a 2017 survey distributed among AI veterans, only 50% thought a true AGI would arrive before 2050 https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

I'd be interested in a more recent poll but this was the most up to date that I could find.

EDIT: Found this from last year https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022

Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really knows, that's certain.

10

blueSGL OP t1_j53btzc wrote

You might find this section of an interview with Ajeya Cotra (of biological-anchors-for-forecasting-AI-timelines fame) interesting.

Starts at 29:14 https://youtu.be/pJSFuFRc4eU?t=1754

She talks about how several benchmarks were passed early last year that surveys of ML workers had predicted wouldn't fall until a median date of 2026.
She also casts doubt on people who work in the field but aren't specifically working on forecasting AGI/TAI as a source of useful information.

16

Ortus14 t1_j54byn2 wrote

It has always been the case that people working within a field overestimate how long it will take to achieve things within that field. They are hyper-focused on their tiny part and miss the big picture.

To make accurate predictions you need to use data, trendlines, and growth curves. It literally doesn't matter how many "experts" are surveyed; the facts remain the facts.

A few people making data and trendline based predictions hold far more weight than an infinite number of "experts" that base their predictions on anything other than trendlines.

5

Borrowedshorts t1_j53ksqo wrote

There are two types of AI experts: those who focus their efforts on a very narrow subdomain, and those who study the problem from a broader lens. The latter group, the AGI experts who have actually studied the problem, tend to be very optimistic on timelines. I'd trust the opinion of those who have actually studied the problem over those who haven't. There are numerous examples of experts in narrow subdomains being wrong or just completely overshadowed by changes they could not see.

12

AsuhoChinami t1_j53omnk wrote

No way icedrift and techno-skeptics cannot be wrong on anything ever, AGI in 2150 at EARLIEST and you're delusional if you think otherwise cuz I say so lmao

9

SurroundSwimming3494 t1_j546mre wrote

I think most experts fit in the latter group, though, and the ones with very optimistic timelines are a minority even within that group, not just in general.

1

AsuhoChinami t1_j5360dz wrote

And the half that agrees with you counts more than the half that doesn't because reasons? I'm a delusional idiot for sharing the same opinion as a tiny, minuscule, insignificant, irrelevant, vanishingly small, barely even existent 50 percent demographic?

8

icedrift t1_j537xjq wrote

I'm inclined to trust the people actually building AI. 50% of experts agreeing AGI is likely in the next 30 years is still pretty insane. Personally I think a lot of the AI-by-2030 folks are delusional.

5

Borrowedshorts t1_j53lrlq wrote

The world has never seen anything like AI progress. AI capability has been advancing by nearly an order of magnitude each year. It's completely unprecedented in human history. I think it's much more absurd to be confident that AI progress will cease for no particular reason, which is what would have to happen for the post-2050 predictions to be correct.

9

AsuhoChinami t1_j538ht5 wrote

A lot of the pre-2050 crowd does include people building AI.

4

icedrift t1_j538wbw wrote

Yeah of course; but that's 2050, not 2027 as metaculus predicts.

2

94746382926 t1_j53aqvj wrote

Yeah, the number of AI experts predicting pre-2030 or pre-2035 is probably only like 10%.

1

Borrowedshorts t1_j53m7r6 wrote

That group also consists of a disproportionate number of researchers who have actually studied AGI broadly.

11

SurroundSwimming3494 t1_j54661j wrote

My guess is that most AI researchers are pretty familiar with AI beyond narrow cases, so I think most of them are qualified to give an answer to "will AGI ever arrive, and if so, when?".

Also, I get the sense that a lot of the AGI crowd knowingly engage in hype to get more publicity, and it makes sense. "AGI soon" is a much sexier topic to touch on in a podcast (for example) than "AGI far away".

0

Borrowedshorts t1_j55qvtl wrote

I don't think they are, honestly. They may know some of the intricacies and difficulties of their specific problem and then project that it will be that difficult to make progress in other subdomains. That's probably true, but they also tend to underestimate the efforts other groups are putting in and the progress that can happen in other subdomains, which isn't always linear. So imo, they aren't really qualified to give an accurate prediction, because very few have actually even studied the problem. I'd trust the people who have actually studied the problem; these are AGI experts, and they tend to be much more optimistic than the AI field overall.

3

AsheyDS t1_j55s0h2 wrote

>AGI experts

No such thing yet since AGI doesn't exist. Even when it does, there are still going to be many more paths to AGI in my opinion, so it may be quite a while before anyone can be considered an expert in AGI. Even the term is new and lacks a solid definition.

1

SurroundSwimming3494 t1_j55sj7z wrote

Is studying AGI even a thing, though? AGI does not exist yet, and potentially never will, so I'm not sure how one can study something nonexistent. Having theories about it, sure, but that's another thing.

0