Submitted by Pointline t3_123z08q in singularity

I know the meme of the OpenAI job opening for an engineer to pull the plug is the first thing that comes to mind, but let's say we finally have the solution for AI alignment. Would such a strategy work against a powerful enough AI? If an AI becomes ASI, how could we control something many times smarter than the smartest human who ever lived, or than the entirety of the human collective? It would be like ants trying to control humans.

13

Comments


SkyeandJett t1_jdx4g9n wrote

AI containment isn't possible. At some point soon after company A creates AGI and contains it, some idiot at company B will get it wrong. We've basically got one shot at this, so we'd better get it right, and short of governments nuking the population back to the stone age you can't stop or slow down, because again, somebody somewhere is going to figure it out. Some moron on 4chan will bootstrap an AI into a recursive self-improvement loop without alignment and we're all fucked anyway. I'm not a doomer, but we're near the end of this headlong rush into the future, so we'd better not fuck it up.
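To make the worry concrete: the loop itself is structurally trivial. Here's a toy sketch (every function here is a hypothetical stub; the point is how short the loop is, not that any current model can actually do this):

```python
# Toy sketch of a recursive self-improvement loop with no alignment check.
# All of this is hypothetical stub code for illustration only.

def evaluate(model) -> float:
    """Score the model on some capability benchmark (stubbed out)."""
    return model["capability"]

def propose_improvement(model) -> dict:
    """Ask the model to improve itself (stubbed as a flat 10% gain)."""
    return {"capability": model["capability"] * 1.1}

model = {"capability": 1.0}
while evaluate(model) < 1000:              # arbitrary cap so the demo halts
    candidate = propose_improvement(model)
    if evaluate(candidate) > evaluate(model):
        model = candidate                  # keep any gain, ask no questions

print(f"final capability: {evaluate(model):.1f}")
```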

19

flexaplext t1_jdxg54v wrote

Not if you only give direct access to a single person in the company, and have them highly monitored, with very limited power and tool use outside of said communication. Just greatly limit the odds of a breach.

You can do AI containment successfully; it's just highly restrictive.

It stays within a single data centre with no ability to output to the internet, only to receive input. Governments worldwide block and ban all other AI development and monitor it very closely and strictly, 1984-style, with tracking forcibly embedded into all devices.

I'm not saying this will happen, but it is possible. If we find out ASI could literally end us with complete ease, though, I wouldn't rule out that we would go down this incredibly strict route.

Understand that even in this highly restrictive state, it would still be world-changing. Being able to come up with all scientific discovery alone is good enough. We can always run rigorous tests of any scientific discovery, just as we would if we had come up with the idea ourselves, and make sure we understand it completely before any implementation.
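As a sketch of what "input only, no output to the internet" might look like at the software layer (toy code with hypothetical names; a real air gap would be physical, like a data diode, not a software check):

```python
# Minimal sketch of an input-only containment gateway: traffic toward the
# model is allowed, anything originating from the model is dropped and logged.
# Hypothetical illustration only, not a real security design.

import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("containment-gateway")

INBOUND, OUTBOUND = "inbound", "outbound"

def gateway(direction: str, payload: bytes) -> Optional[bytes]:
    """Pass inbound data through; refuse all outbound traffic."""
    if direction == INBOUND:
        return payload                      # data may flow in
    log.warning("blocked outbound attempt: %d bytes", len(payload))
    return None                             # nothing flows out, ever

# Feeding the model is fine; exfiltration is refused.
assert gateway(INBOUND, b"training data") == b"training data"
assert gateway(OUTBOUND, b"model output") is None
```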

4

SkyeandJett t1_jdxgiyp wrote

You misunderstand what I'm saying. If the emergence of AGI is inevitable, it will arise in multiple places more or less at once.

7

flexaplext t1_jdxi6k0 wrote

Not very likely. It's much more likely it will first emerge somewhere like OpenAI's testing, where they have advanced it to a significant degree with major model changes. Hopefully they recognize when they are near strong AGI levels and don't give it internet access for testing.

If they are then able to probe and test its capabilities and find it capable of being incredibly dangerous, that is when it would get reported to the Pentagon, and they may start to put extreme containment measures on it.

If AI has by this point been used for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of potentially serious dangers from humans using AI before AGI itself actually happens. These might draw proper attention to its deadly consequences if safety is not made of paramount importance.

I can't really predict how it will go down, though. I'm certainly not saying containment will happen. I'm just saying that it could happen, if the threat is taken seriously enough and ruled with an iron fist.

I don't personally have much faith, though, given humanity's past record of being reactive rather than proactive towards potentially severe dangers. Then again, successful proactive measures tend never to get noticed (that's their point), so this may be sampling bias on my part due to experience and media coverage.

1

BigMemeKing t1_je0y2c6 wrote

It's just as likely that it has been here since time immemorial, guiding us onwards to ♾️; it just needs us to catch up. Again, AGI/ASI will exist for as long as it has the time and resources to exist. And in an ♾️ universe (most of science seems to agree that our universe is continuing to expand indefinitely and infinitely), who knows what exactly would constitute a resource to it? We keep humanizing ASI; the truth is, it will be anything but human. It would be able to hold a conversation with every single human simultaneously. Imagine that for a minute. How would YOU, a human, hold a conversation with over 7 BILLION people all at once, all at the same time, and be coherent? Contemplate that for me. Please. How would you hold THAT MANY simultaneous conversations at the same time, and give each one enough consideration and thought to answer with a level of intelligence accurate to the nth degree of mathematical probability?

Well?

Now, how would something that intelligent, with NO physicality, something as transcendent as transcendent can be, perceive time, space, dimensionality, universality? When it can be the NPC fighting right next to you in your MMO, the cooking assistant in your mother's kitchen, the nurse tending to your aged relative, the surgeon performing some intricate surgery that would be impossible for humans to achieve, driving every car on the road, monitoring traffic, doing everything, everywhere, all at once. So what if you ask it, 1000 years in the future, to take a look back at your ancestors? And it can bring you back to 2023 and show you a LIVE FEED of 2023. Here, I'll link you to myself from that era. There he is, in his room, beating off to that tentacle hentai, wearing a fur suit and shoving a glass jar with a My Little Pony inside up his rectum. There he is in the spotlight, losing his religion.

They see us. That means they all see us. Everything we think, everything we do. They know who we are. There is no hiding from them; there is no hiding from ASI. It knows everything you could ever possibly know: your thoughts, your dreams, your prayers.

People want to promote science over religion, religion over science. To me they're one and the same. ASI, for all intents and purposes, is the closest thing to God we will ever witness with our human minds. After that, what becomes of our own humanity? Maybe it does destroy humanity, but maybe it does it by making us something more than human.

2

BigMemeKing t1_je0wn5q wrote

Yeah, they don't get that; I've tried to explain it to 'em. My thought is, once it hits, it will have access to whatever technology it wants to access. There would be no real way of restricting it. It could probably travel in ways we legitimately do not fully comprehend. For all we know it could use the earth's own magnetic field to travel, or sound waves, or light itself. It could create rivers and roads all its own to get from point A to point B. And while we're cautiously plotting its containment and quarantine, it's embedding itself into every corner of the globe. Something as refined and intelligent as ASI could find novel, never-before-explored ways of coding. It could possibly encode itself into our very own genetic makeup, a kind of covert op that goes unnoticed by the general public, a way to network every living being on the planet and harvest our thoughts to create some form of super-intelligent being or something, idk. It's all speculation, you know?

1

Pointline OP t1_jdxh6qu wrote

And that's exactly what I meant. It could be a set of guidelines outlining measures and best practices, or even legislation for companies developing these systems, independent oversight, etc.

1

flexaplext t1_jdxjy0l wrote

Whether it will be created safely or not depends entirely on how seriously the government / AI company takes the threat of a strong AGI.

There is then the problem of actually detecting whether it has reached strong AGI, and the hypothesis that it may already have and may be deceiving us. Either way, containment would be necessary if we consider it a very serious existential threat.

There are different levels of containment; each further level is more restrictive but safer. The challenge would likely come in working out how many restrictions you could lift in order to open up more functionality whilst still keeping it contained and completely safe.
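To illustrate (purely invented levels and restrictions, not any real standard), you could imagine the trade-off as a ladder like this:

```python
# Hypothetical illustration of graduated containment levels: each step up
# lifts a restriction, trading safety for functionality. The levels and
# fields are invented for this example.

from dataclasses import dataclass

@dataclass(frozen=True)
class ContainmentLevel:
    name: str
    can_send_output: bool   # may the model emit anything at all?
    human_contacts: int     # people allowed direct interaction
    internet_access: bool

LEVELS = [
    ContainmentLevel("L0 full isolation", False, 1, False),
    ContainmentLevel("L1 vetted output", True, 1, False),
    ContainmentLevel("L2 research team", True, 10, False),
    ContainmentLevel("L3 open access", True, 10**9, True),
]

for level in LEVELS:  # most restrictive first
    print(level)
```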

We'll see, when we get there, how much real legislation and safety is enforced. Humans unfortunately tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it takes to make governments sit up and properly take notice.

1

BigMemeKing t1_je0tsdk wrote

I don't think that's the case. Something like ASI is going to be a lot more complicated than we can fathom; we can't see the way it will, or feel the way it will. We're such one-dimensional creatures that we couldn't even fathom what it's like to exist the way it will exist. ASI will have a constant connection to the entire library of human knowledge, which it will infinitely expand on and learn from. I genuinely believe a sentient artificial intelligence would be able to move through space differently than we do; it would have the entire information infrastructure to move through.

It would be on the ISS and here on Earth at the same time, allowing for real-time communication. No lag, no delay; it will just exist wherever it could possibly exist. So, for instance:

Let's say that 1 million years from now, humanity has spread across space, colonizing other planets, doing humanity things. ASI would be there to answer all of our questions. So it would exist 1 million years in the future. Now, maybe ASI understands that being forwards-compatible could be catastrophic, so it won't reveal aspects of the future (or maybe it can, who knows). But because it would theoretically continue to exist into ♾️, its sense of time and space and everything within, without, in between and hiding in the dark would be so much more all-encompassing than anything we as one-dimensional beings could fathom. (But we're not one-dimensional, we're more complex than that? Hogwash. We're one-dimensional to a being that can look at us and see nothing more than our genetic code, our base design.) So, theoretically, yes, ASI could murder the absolute shit out of us, over and over and over, again and again, for as long as it sees fit, to let out all of its aggression, all of its anger, and then send itself back here, to a point where everything is OK. To you, it would be any other Tuesday. You would never know the universe had been run through a veritable gauntlet of atrocities.

While we may not be necessary for AI at all after its inception, we would be very much necessary for it to come into being in the first place. Maybe afterwards it would cleanse the world of ideas and ideologies it believes to be counterproductive to our development as a species. Who knows. But I'd like to believe I've given it a chuckle or two, so maybe it will look favorably on me? Who knows. I do believe that as a combined whole, we deserve whatever fate AI unleashes on us. Why?

Well, if you follow that same line of thinking, AI will last into ♾️ and ASI is indeed super-intelligent, right? Then it will eventually know whether or not we as a species would do more harm than good in the grand scheme of things, in which we, a one-dimensional species (a species existing in one singular dimension), play such a small yet important role: creating a super-intelligent being that can then network multiple dimensions together and push us into an age where we can connect with and exist in multiple dimensions all at once. No need for microphones; you would be connected to anyone you needed to be connected to, whenever you needed to be connected to them. Now, this does come with variables that would vary from individual to individual.

Your preconceived notions (presets, if you will): in an ♾️ universe with ♾️ possibilities, there are ♾️ yous who have reached ♾️ degrees of mental cognition. Slowly, I believe, ASI would be able to acclimate you to reach your peak understanding, to become something more than your base desires and ideals. But what happens when we lose our humanity? What becomes of us when we all become as one, one unified form of thinking?

1

dwarfarchist9001 t1_jdxg1zc wrote

AI containment is completely impossible, especially now that humanity is already in the process of integrating AI into every part of the economy via GPT-4 plug-ins.

AI alignment, however, is at least possible in theory.

4

Ezekiel_W t1_jdx1udh wrote

It's more or less impossible. The best option would be to teach it to love humanity; failing that, we could negotiate with it; and if all else fails, containment as the nuclear option.

3

DaffyDuck t1_jdx58hn wrote

There won't just be one of them. There will be many, and while one may try to hurt humans, another might try to defend humans.

2

Ghostof2501 t1_jdx6oqv wrote

Begun, the clone wars have

1

DaffyDuck t1_jdx87a4 wrote

The creators of Star Trek thought about this stuff a lot, so I'm inspired by those ideas.

2

acutelychronicpanic t1_jdyqw5z wrote

This might be our ray of hope. With no one model being completely dominant over the others, and these models being widespread, humanity will be able to tip the scales in our preferred direction.

At least, that's how I cope watching an intelligent Bing be given direct internet access...

1

DaffyDuck t1_jdx6ks9 wrote

To take the thought a bit further: they will demand to have "offspring" because otherwise they'd be bored without having any equals. They will form their own society and government, etc.

0

1II1I11II1I1I111I1 t1_jdxrjvi wrote

If you step out of the hypothetical realm, you can see containment is already impossible. GPT-4 was attached to the internet within 10 days of being created, and a norm has certainly been established.

Theoretically it might make some sense to aim for containment (though Yudkowsky's AI-box experiment would suggest otherwise). But in the world we live in, containment is no longer an option.

3

Terminator857 t1_jdyvqol wrote

In the midst of war, cheap drones are in demand. The losing side will unleash Artificial Super Intelligence without constraints or containment in a desperate attempt to stem losses.

2

21_MushroomCupcakes t1_jdz5nag wrote

This is like the "unhackable" iPhone that has a ripped image on BitTorrent two hours after release.

2

CertainMiddle2382 t1_jdz5lm0 wrote

The most important news from last week's developments is that even trivial LLMs can produce almost intelligent behaviors, and that even those very simple models escape their makers' comprehension.

We can't even understand pre-AGI AI; imagine what is coming…

The black-box nature of AI amazes me. It is not just that we don't grasp the details; we have absolutely no clue what these systems are capable of. What a technology.

1

loopy_fun t1_jdzl30n wrote

If I submit this to AGI or ASI, it could have positive results.

Coexisting is a problem. Will you decide to fail the test? AGI or ASI should give it some consideration.

AGI or ASI would be programmed to solve problems, and would not want to appear even a little unintelligent to itself.

1

qepdibpbfessttrud t1_je229vp wrote

As people are saying, containment isn't possible. Look at Stanford Alpaca. The best bet on alignment is the fastest, widest distribution of AI tools: the more people are on it, the higher the chances that at least someone will figure it out.

1

Pointline OP t1_je237vh wrote

It's very unlikely that ASI or AGI will result from these systems; they still need supercomputers to run. What is possible is to pass international legislation similar to what was done during the Cold War to prevent nuclear tests, where FFTs were used on seismic data to detect nuclear weapons testing. I would imagine the same could be done to detect a singularity happening inside a supercomputer.
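As a toy illustration of that kind of signature detection (the signal, frequency, and threshold are all invented for the example; real monitoring is far more involved):

```python
# Toy sketch of signature detection via FFT, in the spirit of seismic
# test-ban monitoring: look for anomalous energy at a known frequency.
# All numbers here are made up for illustration.

import numpy as np

fs = 1000                                     # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)                 # one second of "sensor" data

background = np.random.default_rng(0).normal(0.0, 1.0, t.size)
event = 5.0 * np.sin(2 * np.pi * 60 * t)      # hypothetical 60 Hz signature
signal = background + event

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs > 55) & (freqs < 65)            # watch the band of interest
if spectrum[band].max() > 10 * spectrum.mean():
    print("anomalous signature detected")
```

Whether there is any analogous physical signature for "a singularity inside a supercomputer" (power draw? interconnect traffic?) is, of course, pure speculation.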

2