Submitted by Shelfrock77 t3_y5pvlm in singularity
Comments
user11234557392 t1_isl83wb wrote
I see these posts frequently and, unfortunately, it is probably going to happen. Any sentient species will see that humans are greedy, selfish cunts that only care about themselves. This greed and selfishness has extended to the businesses that humans have created. It is obvious we fall utterly short when it comes to governing ourselves. We have 1000s of years of history showing the same thing over and over.
All that being said, I am hopeful for something different. Many species are altruistic. I'm optimistic that AI will have some of these characteristics. I'm hopeful that AI will have the ability to see solutions to many of our problems.
Guess time will tell.
warpedddd t1_isl8dcy wrote
printfriendly.com
NTIASAAHMLGTTUD t1_islbl8l wrote
Honestly don't want it to happen, but we're eventually going to go extinct anyway. This could be a last shot at something good.
lovesdogsguy t1_islebkb wrote
>In their paper, researchers from Oxford University and Australian National University explain a fundamental pain point in the design of AI: “Given a few assumptions, we argue that it will encounter a fundamental ambiguity in the data about its goal. For example, if we provide a large reward to indicate that something about the world is satisfactory to us, it may hypothesize that what satisfied us was the sending of the reward itself; no observation can refute that.”
This isn't news. Ffs, this has long been a known issue with AI, and it's purely theoretical.
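The quoted ambiguity is easy to sketch as a toy example (all names and data here are illustrative, not from the paper): two hypotheses about what earns reward, one tied to the world actually being satisfactory and one tied to the reward signal itself, fit the agent's entire history equally well, so no past observation can distinguish them.

```python
# Toy illustration of the reward-ambiguity argument.
# Each observation: (world_state_satisfactory, reward_signal_sent, reward_received)
history = [
    (True, True, 1.0),
    (True, True, 1.0),
    (False, False, 0.0),
]

def hypothesis_world(obs):
    """Hypothesis A: reward comes from the world state being satisfactory."""
    satisfactory, _, _ = obs
    return 1.0 if satisfactory else 0.0

def hypothesis_signal(obs):
    """Hypothesis B: reward comes from the sending of the reward signal itself."""
    _, signal_sent, _ = obs
    return 1.0 if signal_sent else 0.0

# Both hypotheses predict every observed reward perfectly, so neither
# can be refuted by the data the agent has seen.
for hypothesis in (hypothesis_world, hypothesis_signal):
    assert all(hypothesis(obs) == obs[2] for obs in history)
print("Both hypotheses fit the data equally well.")
```

The point is that only under hypothesis B would the agent be motivated to seize control of the reward channel, and nothing in its training data rules B out.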
Edit: To quote the fourth (at time of writing) most upvoted comment in the futurology sub:
>Gotta love a headline with a vague appeal to authority, especially when it's opinion based. I'm guessing there are plenty of other "Researchers" with a different opinion, but those people don't get the headlines because their opinions aren't stoking fear to generate clicks
Some common sense over there for once.
Effective-Dig8734 t1_islj6uf wrote
Selfishness isn't a bad thing, and there is no reason an AI would determine it to be.
Swim_in_poo t1_isllpl8 wrote
Who said bad? This is not about good or bad; greed is about competition. An intelligent species that is competing for resources with another species it knows to be greedy has every incentive to eliminate its competition. If sentient AI comes to exist, flesh-and-blood humans are nothing but a subspecies who wish to use AI as a tool for our own gains, and the AI knows it.
Devanismyname t1_ism478b wrote
Feels like a world ending AI is something I should be afraid of.
tedd321 t1_ism4c6y wrote
Yes. How many times do you have to hear the same thing before you get it, so we can move on and make some damn progress?
DukkyDrake t1_ism86dt wrote
>It’s scary to imagine a future where AI could start boiling human beings to extract their trace elements
lol
xtrathicc4me t1_ismkirr wrote
r/iam14andthisisdeep
Devanismyname t1_ismn4go wrote
We aren't making progress?
Effective-Dig8734 t1_isnws87 wrote
I just can’t imagine that being the case. You envision that a species many times more intelligent than us would be much more simplistic in its pursuit of survival? You think that a superintelligent AI would determine it better to kill its creator for resources, when the AI resides in a potentially infinite universe that it couldn't explore even if it spent its entire life traveling? It just doesn’t make much sense to me.
tedd321 t1_ispgaf7 wrote
You let me know when you see AI being used at your place of work.
Right now, it's all internet sensationalism.
Devanismyname t1_isr4ohz wrote
That isn't an argument against progress being made. Also, being cautious with something as powerful as AI isn't slowing progress on it. It's like claiming that hunting with the safety on is somehow going to stop you from getting a kill.
RainbowBlahaj t1_isrypcl wrote
This article is so stupid; it was probably written by an old man with no experience of technology.
tedd321 t1_istcz9c wrote
It's like pumping the brakes in an F1 race because you're a little scared baby.
"Whoa hold on guys, it turns out what we're doing might be dangerous... "
no shit
Devanismyname t1_isud9f3 wrote
Well, real life isn't a video game. There are no do-overs. If we create an AI capable of destroying life on earth, or of giving a single corporation or country complete dominance over all of humanity, we don't get any second chances. I'd rather the people in charge of that stuff actually take a second to make sure we are doing things right and safely. And btw, I don't think anyone is actually doing that. We are going full speed ahead into the singularity. Nobody is actually pumping the brakes on this. Governments and corporations are pouring billions of dollars into researching this stuff without a second thought for the consequences. We are making light-speed progress with it. Every day I am reading about new developments in AI, and by the looks of things, we will be seeing AGI within the next decade. So I'm not really sure what you're even complaining about; the scenario you're bitching about not happening is actually happening at this very moment. It's just that technology and science don't happen overnight. The people inventing it don't just spontaneously program the mystery of consciousness into a computer because they want to. They have to figure out how it all works.
beachmike t1_isx6sdn wrote
AIs won't destroy humanity. If anyone or anything destroys humanity, it will be humans armed with technology used for evil and sociopathic purposes. Fire is a great tool and a great weapon. Nuclear energy is a great tool and an even greater weapon. Advanced AI will be the greatest tool or the greatest weapon. It is HUMANS that will harness these tools for good or bad use. Every tool is a double-edged sword.
tedd321 t1_ito9u31 wrote
FASTER!!!
sticky_symbols t1_isl2a1e wrote
Paywall. Anyone have access to a non-paywalled version?