Submitted by Ashamed-Asparagus-93 t3_10fep5n in singularity

A simple YouTube search of "AI" seems to bring up tons of videos with nothing good to say about AI. Here are a few examples of what will pop up.

(I tried to warn you Elon Musk LAST WARNING - 12m views) (Top 10 scary things Robots have said - 3.5m views) (Elon warned us about this | The dangers of AI - 39m views) (Google just shut down its AI after it revealed this - 979k views)

The list goes on. At first I was annoyed; now I just feel kind of sad that this is what everyone wants to see on YouTube.

Why are people like this? Is it because they like scary movies or do they see the potential danger and choose to focus on the possible threat?

What are your thoughts on the majority of the population having a doomer opinion of AI?

Btw maybe it's not the majority but it sure seems like it is.

67

Comments


tatleoat t1_j4warqp wrote

These are the people who are most at risk of falling in love with AIs, because they're so easily led by their emotions.

1

CyberAchilles t1_j4wbgwc wrote

If you watch any of the videos, they do bring up valid points on the subject, mainly the alignment problem, which is a very real issue. It isn't just doom and gloom.

10

Helloscottykitty t1_j4wc105 wrote

Because fear, hate and anger drive engagement, which gets picked up by the algorithms. No one's favourite movie is one they hate, and neither is their favourite book or piece of music, but when it comes to social media, in which I include public-participation video streaming like YouTube, it's hard to have a pleasant experience.

Even if you go out of your way, it doesn't seem to matter: my YouTube is mostly used to watch and rewatch Isaac Arthur, Quinn's Ideas, Comics Explained, Lazy Masquerade, HFY stories and watchculture.

Yet my algorithm is almost all negative stuff, "This is when scientists think the world will end" kind of clickbait. My viewing is deliberately positive; I got rid of social media because it wasn't joyful, and if YouTube keeps trying to push Ben Shapiro videos on me I may have to jump over to Nebula.

31

HeinrichTheWolf_17 t1_j4wcz00 wrote

A lot of it is grifting and clickbait; playing into people's knee-jerk reactions gets attention.

2

Memomomomo t1_j4weomg wrote

sci-fi brainrot moment

The number of people who freaked out over that "LaMDA is sentient!!1!!1" meme (even on this fucking subreddit lol) painted a bad picture for future public perception of AI.

−4

Chad_Abraxas t1_j4wgsvj wrote

I'm kinda old and this is just how people react when new technology comes along or when technology spurs a major change in the way we do things. I've seen it a few times now throughout my life.

People will chill out as they adapt to the new reality.

12

sticky_symbols t1_j4wgu9h wrote

It's a serious discussion that, as usual, has some dumb points on both sides.

As well as some good ones.

Being excited about the future shouldn't mean that we just dismiss reasons to be careful.

7

epixzone t1_j4wqmkw wrote

Technophobia has been around since the dawn of the industrial revolution. Unskilled workers doing manual labor became Luddites around that time, and that attitude is pretty much reflected today amongst blue-collar workers. Technophobia has also been popularized in the arts, with early examples such as Frankenstein, The War of the Worlds and Metropolis, and it is reflected again in today's dystopian themes across all forms of media. Simply put, the poor, uneducated, extremely religious-minded are the main drivers of the fear complex, and sad to say they are gaining more popularity through social media and, even worse, through right-wing conspiracy groups that have infiltrated the political arena.

7

AsuhoChinami t1_j4x7sf1 wrote

Honestly, despite being a tech-optimist myself, Luddites bother me a lot less than technoskeptics do. Luddites at least believe in the power of technology, that it's advancing rapidly and will change the world. I prefer that to the technoskeptics' "lulz we'll just remain in 2006 forever and you're delusional if you think otherwise LMAO" attitude.

38

onyxengine t1_j4xa6hm wrote

To be fair, AI is going to create economic upheaval. In the long term it should be an overall positive; in the short term it should accelerate job loss to the point that governments have no choice but to start rolling out UBI.

8

neo101b t1_j4xk8mg wrote

I can see the movie Transcendence being a template for what may happen: the technophobes will be out trying to destroy technology because of their fear.

They may just go all Sarah Connor. If only humans had embraced AI the world could have been saved, but they had to go out and destroy the technology out of fear.

I do think it was a decent movie and worth watching.

3

Yomiel94 t1_j4xosdp wrote

> Simply put, the poor, uneducated, extremely religious minded, are the main drivers of the fear complex

Oh come on… Have you seen /r/technology recently? Have you read mainstream tech journalism? Have you watched science fiction? There is a very negative, very cynical view of technology that’s become mainstream in recent years, and it’s coming from the cultural elites.

2

Yomiel94 t1_j4xplbb wrote

I feel like even here people generally don’t think big enough. If we manage to create AGI with greater than human capabilities, we’ll have basically invented god.

It’s probably impossible to imagine what that could mean.

25

NarrowTea t1_j4xpxcy wrote

YouTubers need to make money. Truthfulness is secondary...

8

Ortus14 t1_j4xx8ro wrote

Humans naturally polarize into extremes by vilifying those with "opposite" beliefs/opinions/thoughts.

The AI doomers vs. the AI optimists.

We don't know if our solutions to the AI control problem will be sufficient.

6

Baturinsky t1_j4yllun wrote

It's not about whether to do the tech or not. It's about doing it with the care it requires, or fucking everything up forever.

1

dutsi t1_j4ytnom wrote

No technological emergence which you have experienced is anywhere close to what is forthcoming. In a century every example you can point to today will be considered just a stepping stone to this.

4

dutsi t1_j4yttov wrote

Corporations have spent the last couple of centuries systemically harvesting profit from human lives by leveraging their concentration of capital, labor, intelligence, government influence, etc to transfer wealth to shareholders. Governments have used their monopoly on violence to enforce the extraction of taxes since the beginning of civilization itself. This has resulted in nearly every aspect of our lives becoming a profit or tax base for some powerful entity (especially so if you are an American, perhaps less so if you are Scandinavian).

Now we have reached the point where an emergent technology is about to further disempower natural-born beings by devaluing the not-yet-automated work that was the last beachhead upon which they could survive in the increasingly predatory economics of our current reality. The same entity structures (corporations & governments) will now control the most powerful technology ever created to serve their own purposes and goals, which historically have not always been in the best interest of the rest of us.

It is an unpopular opinion here but maybe people are wise to be concerned given the track records of the players involved. Perhaps a rose tinted view of infinite prosperity for all emerging as an immediate result of AGI is not calibrated with the truth of human greed and ignorance. It will get much uglier before AGI delivers us a utopia.

1

V-I-S-E-O-N t1_j4z9v4v wrote

You believe "we'll have basically invented god" and at the same time comment that 'AI doomers' shouldn't be doomers about it? Alright, you guys are fucking weird. Maybe give the ethics department a bigger seat at the table when it comes to AI before you, like you said, literally create some kind of fucking devil. The discussions here are never about ethics, always just about how to get there faster and how wrong people are for wanting to slow down.

4

V-I-S-E-O-N t1_j4za375 wrote

If anything, politics in this world should've taught you that UBI is not gonna happen any time soon, and especially not fast enough. That's why the fuck people want you to look at ethics, slow down, and communicate with the people in the respective fields as well as with those who can actually bring changes when it comes to laws. But the CEOs don't give a shit, as they'll be even richer off it.

3

SmoothPlastic9 t1_j4zc4t9 wrote

You do know that on YouTube "AI" is basically a clickbait title?

2

dragon_dez_nuts t1_j4zemzo wrote

It's only natural human instinct to feel threatened by an entity that could be more powerful and smarter than all humans combined.

1

InquisitiveDude t1_j4zhivm wrote

Sensational or controversial content often receives more views and engagement, and so it's more likely to appear at the top of search results.

Additionally, there may be more videos that are critical of AI simply because many people have concerns or worries about the technology and its impact on society, and so they may be more likely to create or share content about these concerns. Finally, the negative impact of AI can be a trending topic, and it can be an interesting subject for discussion and debate, so it can be more likely to be created or shared. ~ From ChatGPT

3

Bierculles t1_j4zjrfg wrote

That is why it's called the singularity: it's impossible to predict what happens after the singularity starts. It truly is an event horizon on the time axis that we can't see beyond.

9

Bierculles t1_j4zl7he wrote

You are way too optimistic. As always, our governments will be hilariously unprepared. Get ready for a mountain of band-aid fixes before anything is done to actually tackle the problem.

7

thebenshapirobot t1_j4zlchy wrote

I saw that you mentioned Ben Shapiro. In case some of you don't know, Ben Shapiro is a grifter and a hack. If you find anything he's said compelling, you should keep in mind he also says things like this:

>Heterosexual marriage is the cornerstone of society; homosexual marriage offers no benefits to society.


^(I'm a bot. My purpose is to counteract online radicalization. You can summon me by tagging thebenshapirobot. Options: history, civil rights, sex, novel, etc.)

^Opt ^Out

4

Helloscottykitty t1_j4zlt5s wrote

When I see a title that's something like "right-wing voices are being silenced online", I just think: wow, where is that option, please? Because I wanna watch dull science videos and all I get is right-wing voices pushed on me.

2

Dalinian1 t1_j4zohud wrote

Sharing efforts in creating ethical use would help soften the transition. Tech is good and bad; it depends on the handlers. How do we promote the good more? I'm both excited and worried. I'm excited about the efficiencies and the potential to connect more broadly, but worried for the parts of the population that might abuse it or be abused. Same problem, new variable, just a different century.

1

Gordon_Freeman01 t1_j4zpwm7 wrote

I used to think in a similar way. Today I think it is not possible. An AI is just an algorithm. How are you going to generate an algorithm for everything? For every possible situation? It would have to be conscious, and that is impossible. Something that is conscious has to be built in a certain way, which our current computers are not.

0

JVM_ t1_j50gq8b wrote

AI is electricity.

We live with electricity that can do certain things faster and better than we can.

We'll live with AI that can do certain things faster and better than we can.

1

luisbrudna t1_j50lhlo wrote

Low-quality 'Elon Musk' videos. Automatic block.

1

IcebergSlimFast t1_j50w5q2 wrote

“Something that is conscious has to be built in a certain way, which our current computers are not.”

Remarkable! So you’ve single-handedly solved the hard problem of consciousness? Do tell: is consciousness substrate-dependent? What is the specific architecture that makes it possible?

1

luv_ya t1_j50xcyo wrote

Ethics are regularly discussed here from what I've seen. Also, developing an AGI/ASI does not automatically mean it'll turn into a "devil". All we're saying is that there's still some level of uncertainty. The problem with even discussing it is that it's so hard to predict what'll come out of it; all we can do is hope for the best. Some might feel it's not worth the risk, but I feel like it is, considering the potential upside: it could solve just about every human problem known to mankind if we implement it right.

2

Agrauwin t1_j518g0h wrote

But 'viewing' means what exactly? That they actually watched the whole video, or that they just clicked play?

1

Gordon_Freeman01 t1_j51snoc wrote

Thank you, but the honour is not mine. Have you ever heard of the 'Integrated Information Theory of Consciousness'? And yes, consciousness is substrate-dependent. The mechanism is too complicated to explain here, but you can read it for yourself. It's an interesting theory.

1