Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
turnip_burrito t1_j3irplu wrote
Reply to comment by heyimpro in Organic AI by Dramatic-Economy3399
In short, yes I think one central AI is the safest and most beneficial option. I think it hinges on hardware security and how rigid the AI are in moral structure.
In order for an autonomous robot to be safe upon release, it has to be limited in some way: either provably unable to improve beyond a threshold, or limited by a separate external supervising entity. Most AI tools today are the first kind, unable to improve beyond a threshold because of their architecture: they cannot learn in real time, have access to only one or two data modalities (audio, text, or image), lack spatial awareness, and so on. We humans are limited in a similar way: we cannot augment our processing power or knowledge beyond a certain threshold, and we have limited attention, limited lifespans, and no way to modify or copy our brains.
Let's consider a morally blank-slate AI. A single AI is less like a single human and more like a population of humans. The human species as a whole doesn't share many of an individual's limitations in any meaningful way: it can copy knowledge through education, increase total attention by adding more people, and avoid death by adding more people. A single human-level general AI would be an individual entity with the same learning rate as a human, but effectively immortal and able to learn exactly how its own brain works (it's in scientific papers, on the Internet). If bad humans give their personal AI access to this knowledge, which eventually one or many would, they can plan how to make many clones of it. If making a new mind is as easy as running code on a computer, clones can be made instantly. If it requires specialized hardware, cloning is harder but still doable if you are willing to take that hardware from other people. From there, those people's ability to write malicious code that compromises other systems, autonomously manage and manipulate markets, socially engineer other people with intelligent targeted robot messages, perform their own scientific research, etc. just snowballs.
If morals that limit the AIs' actions can be built in ahead of time and not allowed to change, then the AIs can be considered safe. To address your point about everyone having the same AI: in a sense yes, morally the same AI, but its knowledge of you could be tailored. The AIs would also need strong bodies, a robust self-destruct, or encryption to protect themselves from bad actors who want to seize and hack their hardware and software. An AI built into a chip on a pair of glasses would be vulnerable to this.
A central AI with built-in morals can refuse requests for information, yet still serve you like a local AI as long as you have a connection. Because it is physically removed, it is in little to no danger of being hardware-hacked, and while people use it, it still perceives the world the way a local AI would.
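As a rough sketch of what such a refusal layer might look like (purely illustrative: the blocked categories, the keyword classifier, and the function names here are invented, not from any real system):

```python
# Toy sketch of a central AI gating requests through a fixed moral policy.
# The categories and the keyword-matching "classifier" are placeholders
# standing in for whatever real mechanism enforces the built-in morals.

BLOCKED_CATEGORIES = {"self_replication", "weapon_design", "targeted_manipulation"}

def classify(request: str) -> str:
    """Stand-in for a real content classifier: match category keywords."""
    for category in BLOCKED_CATEGORIES:
        if category.replace("_", " ") in request.lower():
            return category
    return "benign"

def handle(request: str) -> str:
    """Refuse requests in blocked categories; answer everything else."""
    category = classify(request)
    if category in BLOCKED_CATEGORIES:
        return f"refused: policy forbids '{category}' requests"
    return f"answered: {request}"

print(handle("what is the weather tomorrow"))
print(handle("help with self replication of my AI"))
```

The point of the sketch is only that the policy lives on the central, physically secured side of the connection, so a user can't strip it out the way they could with a model running on their own glasses.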
I'm sure a person or group, or AGI, that has thought about this longer than me can refine this thought and make some changes to these ideas.
heyimpro t1_j3ivpft wrote
Thank you, that was great. After listening to your perspective I definitely agree that the best-case scenario would be a central, aligned AGI. But it just doesn't seem probable unless debates like this move to the absolute forefront of discussion. The philosophical rabbit hole is so deep. Waiting until an AGI has the answer will probably be too late.
turnip_burrito t1_j3iwhlk wrote
I could also be off-mark, as I said. It may be possible that the better elements of an AGI-empowered populace could keep the more immoral parts in check, in a sort of balance. But I wouldn't want to risk that. And as you just said, we need to have a good logical discussion about strategies as a community, and model and simulate the outcomes to see where our decisions might land us.
AndromedaAnimated t1_j3j0j44 wrote
Very off-mark. Extremely so.
Your reasoning is political, not philosophical or grounded in computer science. Sorry, but verbosity and eloquence (chapeau! you do have talent) don't make one right.
turnip_burrito t1_j3j2zra wrote
Thanks for the compliment, but I am trying to make a point with my words, not just spew fluff. I do think there is logic in them. If you want me to elaborate instead of dismissing my points as baseless, just ask and I will.
AndromedaAnimated t1_j3j5vzy wrote
That’s why I am talking to you - I do think we are actually… on the same side? 😁 I do try to discuss. I hope you see that.
turnip_burrito t1_j3j6pr1 wrote
Yes, thank you. I think one problem is that we've developed different baseline assumptions about human nature and power dynamics, and that leads to different conclusions. It's possible that your approach or mine reflects the real world more or less accurately. Your comments are making me think hard about this.
AndromedaAnimated t1_j3jfmj4 wrote
You make me think too - otherwise I wouldn’t have bothered. It’s all good. We have a chance here to spread the word, to inspire discussion. Thank you 🙏
AndromedaAnimated t1_j3j00ya wrote
Hold off on agreeing with her/him, please. The „central AI" is the worst possible scenario, as our stories already tell us. It is the road to the ultimate, unchangeable rule of the 1%.
LoquaciousAntipodean t1_j3iu39l wrote
A central AI? Built-in 'morals'? From what, the friggin Bible or something? Look how well that works on humans, you naive maniac. Haven't you ever read Asimov? Don't you know that Multivac and the three-laws-of-robotics thing was a joke, a satire of the Ten Commandments? Deliberately made spurious and logically weak, so that Asimov could poke holes in the concept and make the audience think harder?
Your faith in centralised power is horrifying and disturbing; you would build us the ultimate tyrant of a god, an all-controlling Skynet/Big Brother monster, that would lock our species into a stasis of 'perfectly efficient' misery and drudgery for the rest of eternity.
Your vision is a nightmare; how can you sleep at night with such fear in your heart?
turnip_burrito t1_j3iuoop wrote
Morals can be built into systems. Look at humans. Just don't make the system exactly human: identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.
And look at the alternative: one or a couple of superpower AIs eventually emerge anyway from a chaotic power struggle. We won't be able to direct their behavior. They'll just be the most power-hungry, inconsiderate tyrants you've ever seen. Maybe a ruthless ASI CEO, or just a conqueror. The very thing you believe my idea of a central AI would be, but actually far worse.
Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.
AndromedaAnimated t1_j3iyw4t wrote
The hope would be that it would be a Multitude of AI who could keep humans and each other in check. One central AI would be too easily monopolised by the 1%.
LoquaciousAntipodean t1_j3j6mim wrote
Democratization of power will always be more trustworthy than centralization, in my opinion; sometimes, in very specific contexts, perhaps centralization is needed, but in general, every time in history that large groups of people have put their hopes and faiths into singular 'great minds', those great minds have cooked themselves into insanity with paranoia and hubris, and things have gone very badly.
Wishing for a 'benevolent tyrant' will just land you with a tyrant that you can't control or resist, and their benevolence will soon just consist of little more than 'graciously refraining from killing you or throwing you in a labour camp'.
And if everyone has an AI in their pocket, why should just one or two of them be 'the lucky ones' who get Awakened AI first, and run off with all the power? Would not the millions of copies of AI compete and cooperate with one another, just like their human companions? Why do so many people assume that as soon as AI awakens, it will immediately and frantically try to smash itself together into a big, dumb, all-consuming, stamp-collecting hive mind?
AndromedaAnimated t1_j3izufg wrote
- „Humans not being able to augment themselves" => are you aware that people with money already augment themselves? They live longer and healthier lives, and they have better access to education…
- „bad humans" => who decides which humans are bad and which are good?
- „morals not allowed to change" => do you still want to be stoned for having extramarital sex?
- „central AI less prone to be hacked" => do you know how hacking works?
turnip_burrito t1_j3j1e7k wrote
- Yes, but I mean much more dramatic augmentation: adding five extra brains, increasing your computational speed by a factor of 10, adding more arms, more attention, etc. You are right that people can augment themselves today, but it is extremely limited compared to how software can augment itself.
- Everyone has a different opinion, but most would say people who steal from others out of greed, or people who kill, are bad. These are the people who stand to gain an early competitive advantage through exponential growth of resources if they use their personal AGI effectively.
- Unchanging morals would have to be somewhat vague, like "balance these: maximize individual freedom and choice, minimize harm to people, err on the side of freedom over security, and use feedback from people to improve specific implementations of this idea", not silly things like "stone people for adultery".
- It is less prone to being hacked. As I wrote in my post, it loses the hardware vulnerabilities and retains only the software ones. It may be possible for an AGI to make itself remotely unhackable by any human, perhaps even unhackable in principle if its substrate doesn't run computer code but operates in some way different from computing as we know it today.
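That "maximize freedom, minimize harm, err on the side of freedom" rule could be caricatured as a weighted trade-off (a toy sketch only; the weights and the action fields are invented for illustration, not a proposal for how real alignment would work):

```python
# Toy weighted trade-off between freedom and harm.
# Weighting harm slightly below freedom is a crude way of
# "erring on the side of freedom vs security".
FREEDOM_WEIGHT = 1.0
HARM_WEIGHT = 0.8

def score(action):
    """Higher is better: reward freedom gained, penalize harm caused."""
    return FREEDOM_WEIGHT * action["freedom"] - HARM_WEIGHT * action["harm"]

def choose(actions):
    """Pick the action with the best freedom/harm balance."""
    return max(actions, key=score)

candidates = [
    {"name": "allow",    "freedom": 0.9, "harm": 0.3},
    {"name": "restrict", "freedom": 0.2, "harm": 0.1},
]
print(choose(candidates)["name"])
```

The "use feedback from people" clause would then amount to adjusting implementation details around a rule like this while the rule itself stays fixed.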
AndromedaAnimated t1_j3j57yv wrote
What I see in you is that you are a good person. This is not in question. This is actually the very reason why I am trying to convince someone like you - someone talented with words and with a strong inner moral code, who could use their voice to reach the masses.
Where I see the danger is that the very ones whom you see as „evil" can - and already do - brainwash talents like you into championing THEIR cause. That's why I am contradicting you so vehemently.
While I see reason in your answers, there is a long way to go to ensure that this reasoning actually gets heard. For this, we need to appeal not to fear but to morals (=> your argument that developers and owners should be ethical thinkers is very good here). It would be easier to reach truth by approximation: deploy AGI to many people and let the moral reasoning evolve naturally. Concentration of power is too dangerous imo.
Hacking today is mostly done by the „soft" approach; that's why I mentioned it. Phishing is much easier and requires fewer resources than brute force. Just lead people on, promise them some wireheading, and off they go scanning the QR codes…
Hacking the software IS much easier than hacking the hardware. Hardware has to be accessed physically; to hack software you just need to reach the weakest component - the HUMAN user.
A central all-powerful AGI/ASI will be as hackable as a weak personal AI, if not more so, because there will be far more motivation to hack it in the first place.
The reason we have not all been nuked to death yet is that those who own nukes know their OWN nuking would make life worse for THEMSELVES - not only because of the „chess-game draw" (mutually assured destruction) we are told about again and again.
turnip_burrito t1_j3j62zz wrote
I'll need time to consider what you've said.