sticky_symbols
sticky_symbols t1_j9w3kze wrote
Reply to comment by SurroundSwimming3494 in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The thing about ChatGPT is that everyone talked about it and tried it. I and most ML folks hadn't tried GPT-3 before that.
Everyone I know was pretty shocked at how good GPT-3 is. It did change timelines for the folks I know, including the ones who think about timelines a lot as part of their jobs.
sticky_symbols t1_j9w3cav wrote
Reply to comment by DukkyDrake in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
A lot of people who think hard about this believe it does. LLMs seem like they may play an important role in creating genuine general intelligence. But of course they would need many additions.
sticky_symbols t1_j9u4kd2 wrote
Reply to comment by Jinoc in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes; but there's general agreement that tool AI is vastly less dangerous than agentic AI. This seems to be the crux of the disagreement between those who think risk is very high and those who think it's only moderately high.
sticky_symbols t1_j9rf56v wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Many thousands of human hours are cheap to buy, and cycles get cheaper every year. So those things aren't really constraints, except currently for small businesses.
sticky_symbols t1_j9rezil wrote
Reply to [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
ML researchers worry a lot less than AGI safety people. I think that's because only the AGI safety people spend a lot of time thinking about getting all the way to agentic superhuman intelligence.
If we're building tools, not much need to worry.
If we're building beings with goals, smarter than ourselves, time to worry.
Now: do you think we'll all stop with tools? Or go on to build cool agents that think and act for themselves?
sticky_symbols t1_j9qsmt6 wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
Wow. I just don't get it.
This poll was done before ChatGPT, and most people hadn't used GPT-3 before that.
sticky_symbols t1_j9mbzia wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Sorry; my implication was that Asimov introduced the topic but wasn't particularly compelling. Yudkowsky created the first institute and garnered the first funding. But of course credit should be broadly shared.
sticky_symbols t1_j9m8yn3 wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Asimov's rules don't work, and many of the stories were actually about that. But they also don't include civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.
One counterintuitive thing is that people in the field of AI are way harder to convince than civilians. They have a vested interest in research moving ahead full speed.
As for your BS detector, I don't know what to say. I'm not linking this account to my real identity, so you can believe me or not.
If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place we publish.
sticky_symbols t1_j9m6t5d wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Well, I'm now a professional in the field of AGI safety. I'm not sure how you can document influence, but I'd say most of my colleagues would agree with that. It's not that it wouldn't have happened without him, but it might've taken many more years to ramp up to the same degree.
sticky_symbols t1_j9m3uus wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Good point, but those didn't convince anyone to take it seriously because they didn't have compelling arguments. Yudkowsky did.
sticky_symbols t1_j9itrli wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Founding a field is a bit of a rare thing.
sticky_symbols t1_j9i0yw8 wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Well, he's the father of a whole field that might determine the future of humanity. It would be tough to keep your cool the 1009th time you've seen the same poorly thought-out dismissal of the whole thing. If I were in his shoes, I might be even crankier.
sticky_symbols t1_j9gxu26 wrote
Reply to The dreamers of dreams by [deleted]
It's probably mostly a side effect of being able to simulate possible futures. This helps in planning and selecting actions based on likely outcomes several steps away.
And yes, that is also crucial for how we experience our consciousness.
sticky_symbols t1_j9gwa67 wrote
He's the direct father of the whole AGI safety field. I got interested after reading an article by him in maybe 2004. Bostrom credits him with many of the ideas in Superintelligence, including the core logic about alignment being necessary for human survival.
Now he's among the least optimistic. And he's not necessarily wrong.
He could be a little nicer and more optimistic about others' intelligence.
sticky_symbols t1_j9gvp91 wrote
Reply to comment by diabeetis in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
I think "slightly douchy" is fairer. I've read a ton of his stuff, and only a subset is offensive to anyone. But yeah, he's not as considerate as he probably should be.
sticky_symbols t1_j8iu8wz wrote
Reply to comment by Frumpagumpus in Altman vs. Yudkowsky outlook by kdun19ham
Yeah. But if we get it wrong, we're all dead. So we have to try.
sticky_symbols t1_j8g5wkl wrote
Reply to comment by Frumpagumpus in Altman vs. Yudkowsky outlook by kdun19ham
Good sociopolitical breakdown.
But biases aren't the whole story - there's a lot of logic in play. And way more of it is deployed on one side than the other...
sticky_symbols t1_j8g5ij0 wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
I'm pretty deep into this field. I've published in it and have followed it almost since it started with Yudkowsky.
I believe they both have strong arguments. Or rather, those who share Altman's cautious-but-optimistic view have strong arguments.
But both arguments rest on assumptions about how AGI will be built, and we simply don't know that. So we can't accurately guess our odds.
But it's for sure that working hard on this problem will improve our odds of a really good future instead of disaster.
sticky_symbols t1_j7y4etr wrote
Reply to The copium goes both ways by IndependenceRound453
This is an excellent point. Many of us are probably underestimating timelines based on a desire to believe. Motivated reasoning and confirmation bias are huge influences.
You probably shouldn't have mixed it with an argument for longer timelines, though. That gives people an excuse to argue that point and ignore the main one.
The reasonable estimate is very wide. Nobody knows how easy or hard it might be to create AGI. I've looked at all of the arguments, and have enough expertise to understand them. Nobody knows.
sticky_symbols t1_j7hfe3e wrote
Reply to The Simulation Problem: from The Culture by Wroisu
I mean yeah
sticky_symbols t1_j5kqu80 wrote
Reply to comment by Vehks in AGI will not happen in your lifetime. Or will it? by NotInte
You can definitely predict some things outside of five years with good accuracy. Look at Moore's Law; that's way more accurate than predictions need to be to be useful. Sure, if nukes were exchanged, all bets would be off, but outside of that I just disagree with your statement. For instance: will China's gender imbalance cause it trouble in ten years? It almost certainly will.
sticky_symbols t1_j5goljf wrote
Reply to comment by PanzerKommander in Anyway things go downhill? by [deleted]
I'm respectfully going with the opinions of those who have studied the effects of fallout and nuclear winter.
Yes, we could have a large war without a nuclear exchange. That just doesn't seem likely.
sticky_symbols t1_j5gdat2 wrote
Reply to comment by PanzerKommander in Anyway things go downhill? by [deleted]
WW3 will most certainly end civilization, if not the entire species. Experts are unsure whether a full-scale nuclear exchange would kill every single human being, but it would certainly kill the vast majority.
sticky_symbols t1_j5gd1tg wrote
Reply to Anyway things go downhill? by [deleted]
Nukes
sticky_symbols t1_j9w9b6e wrote
Reply to comment by MinaKovacs in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
There's obviously intelligence there under some definitions. It meets a weak definition of AGI, since it reasons about a lot of things almost as well as the average human.
And yes, I know how it works and what its limitations are. It's not that useful yet, but discounting it entirely is as silly as thinking it's the AGI we're looking for.