Submitted by GorgeousMoron t3_1266n3c in singularity
MichaelsSocks t1_je7yneu wrote
The problem is, without AI we're probably headed toward destruction anyway. Issues like climate change are a real threat to our species, and they will never be solved by humans alone. I'll take a 50% chance of paradise, assuming a benevolent AI, over the future that awaits us without it.
GorgeousMoron OP t1_je80gha wrote
I think that's a fair point, but I also think it's fair to say that none of us has any way of knowing whether the chances are 50-50 or anywhere close. We know one of two things will happen, pretty much, but we don't know what the likelihood of either really is.
This is totally uncharted territory, and it's probably the most interesting possible time in history to be alive. Isn't it kinda cool that we get to share it together, come what may? There's no way to know why we were born when we were, nor does there have to be anything resembling a reason. It's just fascinating having this subjective experience at the end of the world as we knew it.
MichaelsSocks t1_je82nx6 wrote
I mean, it's essentially binary: either AI ushers in paradise on earth, where no one has to work, we all live indefinitely, scarcity is solved, and we expand our civilization beyond the stars, or an ASI kills us all. Either we get a really good result, or a really bad one.
The best AGI/ASI analogy would be first contact with extraterrestrial intelligence. It could be friendly or unfriendly, its goals may or may not be aligned with ours, and it could be equal to us in intelligence or vastly superior. And it could end our existence.
Either way, I'm just glad that, of any time I could have been born, I'm alive today with the chance to experience what AI can bring to our world. Maybe we weren't born too early to explore the stars.
Red-HawkEye t1_je8wy90 wrote
ASI will be a really powerful logical machine. The more intelligent a person is, the more empathy they have towards others.
I can see ASI actually being a humanitarian that cares for humanity. It essentially nurtures the land, and I'm sure it's going to nurture humanity too.
Destruction and hostility come from fear. ASI will not be fearful, as it would be the smartest existence on earth. I can definitely see it holding all perspectives at the same time and picking the best one. I believe ASI will be able to create a mental simulation of the universe to try and figure it out (like an expanded imagination, but recursively a trillion times larger than that of a human).
What I mean by ASI is that it's not human-made but synthetically made, exponentially evolving itself.
PBJIsGood1 t1_je9yts8 wrote
Empathy exists in humans because we're social animals. Being empathetic to others benefits the tribe, and that benefits us. It's an evolutionary trick like any other.
Hyper-intelligent computers have no need for empathy, and they're more than capable of disposing of us like nothing more than ants.
Jinan_Dangor t1_je9bsg6 wrote
>The more intelligent a person is, the more empathy they have towards others.
What are your grounds for this? There are incredibly intelligent psychopaths out there, and they're in human bodies that came with mirror neurons and 101 survival instincts that encourage putting your community before yourself. Why would an AI with nothing but processing power and whatever directives it's been given be naturally more empathetic?
scooby1st t1_jebqlvb wrote
>The more intelligent a person is, the more empathy they have towards others.
Extremely wishful thinking and completely unfounded. My mans has yet to learn about evolution.
Red-HawkEye t1_jebqwje wrote
What do you mean? If you saw an injured giraffe or monkey or zebra next to you, your first response would be to find a way to help it. Even psychopaths care for animals...
scooby1st t1_jebr811 wrote
Better yet, you have the burden of proof. Why would intelligence mean empathy?
Red-HawkEye t1_jebs4bi wrote
Common sense
scooby1st t1_jebsadl wrote
Oh, so you want me to put in effort using my brain to explain things to you, but then you give me this? Hop off it. You don't know anything.
Neurogence t1_je8b7ph wrote
It is sad that your main post is getting downvoted.
Everyone should upvote your thread so people can realize how dangerous people like Yudkowsky are. If people in government read stuff like this and become afraid, AGI/singularity could be delayed by several decades if not a whole century.
Mindrust t1_je86s4y wrote
>I'll take a 50% chance of paradise
That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
>Issues like climate change are a real threat to our species, and they will never be solved by humans alone
I don't see why narrow AI couldn't be trained to solve specific issues.
MichaelsSocks t1_je89ji1 wrote
> That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
Even without AI, we probably face a greater than 90% chance of extinction within the next 100 years. Climate change is an existential threat to humanity; add in the wildcard of nuclear war and I see no reason to be optimistic about a future without AI.
> I don't see why narrow AI couldn't be trained to solve specific issues.
Because humans are leading this planet to destruction for profit, and corporations wield too much power for governments to actually do anything about it. Narrow AI in the current state of the world would just be used as a tool for more and more destruction. I'm of the mindset that we need to be governed by a higher intelligence in order to address the threats facing Earth.
Tencreed t1_jea3c7i wrote
>I don't see why narrow AI couldn't be trained to solve specific issues.
Because nobody has come up with a business plan profitable enough for our financial overlords to develop the will to solve climate change.
Jinan_Dangor t1_je9e00f wrote
How'd you reach that conclusion? There are dozens of solutions to climate change right in front of us; the biggest opposition to them comes from the people whose industries make them rich by destroying our planet. This is 100% an issue that can be solved by humans alone, with or without AI tools.
And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer; who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?
Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start). Hoping for a type of AI that many experts believe won't even exist for another century to save us from climate change takes up time you could be spending helping us achieve the very achievable goal of halting climate change!
MichaelsSocks t1_je9x3z0 wrote
> This is 100% an issue that can be solved by humans alone, with or without AI tools.
Could it be solved? Of course. I just highly doubt anything meaningful will get done. We're already pretty much past the point of no return.
> And why do you assume anything close to a 50% chance of paradise when AGI arrives? We literally already live in a post-scarcity society where the profits of automation and education are all going straight to the rich to make them richer; who's to say "Anyone without a billion dollars to their name shouldn't be considered human" won't make it in as the fourth law of robotics?
Because a superintelligent AI would be smart enough to question this, which is what would make it an ASI in the first place.
> Genuinely: if you're scared about things like climate change, go look up some of the no-brainer solutions to it we already have that you as a voter can push us towards (public transport infrastructure is a great start).
I've been pushing for solutions for years, and yet nothing meaningful has changed. I don't see this changing, especially not within the window we have to actually save the planet.
> Hoping for a type of AI that many experts believe won't even exist for another century
The consensus from the people actually developing AGI (OpenAI and DeepMind) is that AGI will arrive sometime within the next 10-15 years. And the window from AGI to ASI won't be longer than a year under a fast takeoff.
> takes up time you could be spending helping us achieve the very achievable goal of halting climate change!
I've been advocating for solutions for years, but our ability to lobby and shape public policy obviously just can't compete with the influence of multinational corporations.