FomalhautCalliclea t1_ixaqzr8 wrote
Reply to comment by IronJackk in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
The "researcher" in question was Eliezer Yudkowsky.
FomalhautCalliclea t1_ixaqwca wrote
Reply to comment by purple_hamster66 in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
>math is the basis of almost all other processes (even language)
I'm gonna press X extra hard on this one.
FomalhautCalliclea t1_ixaqme7 wrote
Reply to comment by Drunken_F00l in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
Paradoxically, I think a materialistic realist AGI would provoke more turmoil and disbelief than a metaphysical idealist neo-Buddhist one: many people who already hold that opinion would feel coddled and comforted.
Even worse, whatever answer the AGI produced could be a trap, even without any malevolence: maybe offering pseudo-spiritual output is the best way of convincing people to act in a materialistic and rational way. As another redditor said below, the AGI would know the most efficient way to communicate. Basically, the alignment problem all over again.
The thing is, that type of thought has already crossed the minds of many politicians and clergymen; Machiavelli and La Boétie themselves argued, back in the 16th century, that religion was a precious tool for making people obey.
What fascinates me with discussions about AGI is how they tend to generate conversations about topics already existing in politics, sociology, collective psychology, anthropology, etc. But with robots.
FomalhautCalliclea t1_ixaotq9 wrote
Reply to comment by SoylentRox in Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
I know I'm probably interrupting the two of you, but thank you both for this enlightening conversation. Lots of information there, delightful!
FomalhautCalliclea t1_it4qpv2 wrote
Reply to comment by kmtrp in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
I hope it's a "Charisma -100 / Perception +100" rather than "Charisma +100 / Perception -100" character trait.
FomalhautCalliclea t1_it4psvw wrote
Reply to comment by kmtrp in Since Humans Need Not Apply video there has not much been videos which supports CGP Grey's claim by RavenWolf1
Especially since Sam Altman (OpenAI's CEO) has been quite openly and outspokenly optimistic about tech progress, talking about things like "free energy" (fusion) and AGI arriving soon, more or less.
He also spoke about UBI and the need to radically change our economy. I wonder if he (and others) hold multiple opinions and show different faces selectively depending on the context.
FomalhautCalliclea t1_it4mseg wrote
Reply to Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
I think a big part of the answer(s) to your questions is the insane weight of inertia and the labyrinthine difficulty of implementing any policy at the large scales this topic requires.
A very eloquent example: look at how even the most minimal measures against climate change were horribly impeded, diminished, botched, and slowed down, if not stalled entirely. And we still haven't solved the problem.
Just raising the minimum wage in many countries (even the wealthiest and most developed) is seen by politicians, employers, and cultural elites as a daunting, Herculean task or an intractable question.
FomalhautCalliclea t1_ixar32u wrote
Reply to comment by Independent-Still-73 in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
Well your cover is pretty good, to be honest...