Submitted by kmtrp t3_zybgwc in singularity
Comments
Cryptizard t1_j25218f wrote
It's almost like they know there are a lot of unsolved problems. And they aren't as easily impressed by ChatGPT as normal people, because they know how it actually works and what its limitations are.
kmtrp OP t1_j252a2l wrote
So 50% know something the other 50% don't...?
Cryptizard t1_j253hru wrote
No, you just picked a random date 2060 as a cutoff because it was the 50% mark. On the whole, the vast majority (> 90%) predicted a date that is farther in the future than what the consensus on this sub is.
Kaarssteun t1_j25bl79 wrote
Honestly an argument for us not being batshit crazy. 10% of scientists & well-informed people is a whole lot better than 0%
Calm_Bonus_6464 t1_j25dnei wrote
ChatGPT wasn't even a thing when this survey was taken.
enilea t1_j25dwym wrote
"human-level" isn't really the way to go about it. There are many areas of intelligence and even among humans they vary greatly. There will be areas where humans will be vastly surpassed and others where AIs will still take some decades to get there.
Calm_Bonus_6464 t1_j25dzi9 wrote
I think Cotra's prediction will end up being correct, 2040 - 2050 is the window for AGI and ASI should also be achieved in that window
Sieventer t1_j25j3qw wrote
I love the 50%. It's like "maybe yes, maybe not"
12342ekd t1_j25m381 wrote
AGI will be here probably by 2024. Once we get it, ASI will shortly follow and we will be unable to control it. Maybe it figures out everything that can be figured out and it becomes a peaceful world. Maybe it terminates us for good. Maybe it makes us suffer in unimaginable ways. Who knows what will happen.
turnip_burrito t1_j25mtyn wrote
2024? I want whatever you're having. There are a lot of problems unsolved, and 2024 is not much time to solve them.
Effective-Dig8734 t1_j25v9wc wrote
Yeah 40-50 years sounds right, if ai growth wasn’t exponential.
helliun t1_j263lcc wrote
Anyone with a confident opinion on this just sounds dumb ngl. Like please don't pretend you know what's gonna happen 40 years from now when your predictions for even the next 5 years will probably be dogshit.
EpicMasterOfWar t1_j26e33s wrote
They are actually more pessimistic than you think. The chart shows half the experts think there is a 50% chance there will be AGI by 2060. Not a 100% chance.
Impressive_Oaktree t1_j26ib47 wrote
I just want a robot that helps me around the house (i.e. a butler) thx
94746382926 t1_j27gl2f wrote
GPT-3 and PaLM were, and they're very similar.
kmtrp OP t1_j2dhhk4 wrote
I picked it precisely because it's the 50% mark, how is that random?
Cryptizard t1_j2dhlmy wrote
My point is that you made a completely vacuous statement. Take any set of data, pick the median and say “durrr I guess 50% of the people know something the other 50% don’t.” It means literally nothing.
kmtrp OP t1_j2digyh wrote
I didn't make a spontaneous statement; indeed that would've been nonsensical. It was a reply to your "It's almost like they know there are a lot of unsolved problems," implying that excited people here don't take those into account, no? But by that logic, half of that same set of experts wouldn't take them into account either. You're arbitrarily deciding that the experts voting for AGI after 2061 are the ones who "know how it actually works," which I assume coincidentally lines up with your own beliefs. I'm sure you believe you "know how it actually works" too, yes?
edit: to be clearer, it's an idiotic response on purpose to highlight your idiotic comment, as in the no true Scotsman fallacy.
Cryptizard t1_j2dje9r wrote
Wtf are you talking about? I was comparing the results of the survey to the opinions on THIS sub, which are largely ASI before 2040 (check the survey posts if you don't believe me). You are the one that fixated on 2060, which as I said is a meaningless divide in the data.
kmtrp OP t1_j2djn9l wrote
I know that's what you're comparing it to; I wrote it in the comment you just replied to, mate. I don't think you understood it, but it's not important. Happy new year.
HolyNucleoli t1_j2evixy wrote
>Human-level AI was defined as unaided machines being able to accomplish every task better and more cheaply than human workers.
That definition sets a high bar. Every task? I would imagine the percentage of tasks AI can do unequivocally better than humans will follow an S-curve, where maybe 95%+ of tasks are solved within a relatively short time frame, but a few tricky ones remain unsolved long after AI becomes massively disruptive.
kmtrp OP t1_j24v779 wrote
I can't believe half of the "experts" believe AGI won't appear sooner than 2061.