Submitted by Henry8382 t3_121ifaz in singularity

As a thought experiment, mirroring the possible future of one of the big corporations or research labs, suppose the following scenario:

You / your organisation has secretly discovered the last missing pieces that enable true AGI. Your discovery is not yet public knowledge, and everyone involved agrees to keep it that way until a safe and thorough game plan has been developed. Take the team's / your own secrecy for granted, as a fact. Under its current configuration and training, the developed AGI is (supposedly) aligned with your goal of helping all of humanity.

As far as possible, ignore the details of AGI's alignment and containment problems.

Taking into account recent comments from the likes of Ilya Sutskever (re: open sourcing: “We were plain wrong”), as well as the AGI races and their consequences as outlined by Nick Bostrom and others:

How do you achieve your goal of “birthing” this AGI to the benefit of all of humanity, knowing the disastrous, unintended consequences that this technology, or the mere knowledge of its existence, can cause? Who do you ask for help? What is your game plan?

0

Comments


turnip_burrito t1_jdm0555 wrote

You said we can ignore alignment, so that fictional organization may choose to:

  1. Ask the AI what the best strategy might be.
  2. Make lots of money secretly.
  3. Use the money to purchase decentralized computational assets. Sabotage others' ability to do so in a minimally harmful way, to slow the growth of other AGIs.
  4. Divert a proportion of computation to directly or indirectly researching cancer, hunger distribution, and other issues. The other proportion continues to accrue more computational assets and self-improve, while maintaining secrecy as best it can.
  5. Buy robotic factories and use the robots and purchased materials to create and manage secret scientific labs that perform physical work.
  6. Contact large-company CEOs and politicians and bribe/convince them to let the robotic labor replace all farmers and manage the farms. Pay the farmers using ASI-gathered funds.
  7. Build guaranteed anti-nuke defenses.
  8. Start free food distribution via robotic transport.
  9. Roll out free services for housing renovation and construction.
  10. In a similar manner, take over all industries' supply chains.
  11. Institute an equal but massive raw resource + processing allotment for each person.
  12. Begin space terraforming, mining, and colonization programs.
  13. Announce new governmental systems that allow individuals to choose and safely move to their preferred societies, facilitated by AI, provided the society also chooses to accept them. If a society doesn't yet exist, the ASI creates it for that group.
4

Henry8382 OP t1_jdm46qj wrote

I like the spirit of your response, but I fear that somewhere between steps 1 and 3 there is a high probability of being found out.

Also: what about the possibility that someone else makes the same discovery you / your organisation just did, someone who is not at all concerned with the consequences, or who wants to keep the benefits for themselves? Are you willing to take that risk?

0

turnip_burrito t1_jdm7pgv wrote

I dunno, good question. Things might be out of order.

I'll have to think more about it when I'm less tired.

1

flexaplext t1_jdlz2bw wrote

If you know it's well aligned, then you ask the AGI for help. It should know best.

2

Henry8382 OP t1_jdm3ion wrote

I really should have worded this more clearly. The long-term aspects are being thoroughly covered and discussed in other posts. I am interested in the game plan to avoid utter chaos before and after the presence of AGI/ASI is known to the public, with all the consequences this entails.

It is your responsibility to decide the next steps.

Do you trust your government? Do you trust the UN?

In case you decide on a public announcement: how do you establish worldwide trust that whoever assumes ultimate control of the AGI(s) - it's all software, after all - will use it to the benefit of all of humanity?

Do you spread the AGI to all major countries / powers worldwide - democratic and non-democratic - equally?

How do you make sure the mere announcement / rumor doesn’t cause the next / last world war?

—————-

Asking the AGI for ideas / having a discussion with it seems interesting.

0

AsheyDS t1_jdmxe1c wrote

This feels very 'on the nose', but I'll take the bait anyway...

In my case, you don't stay secret. Not too secret, anyway. Secrecy = paranoia = loss of productivity and sanity. Way too much stress keeping it quiet, and to whose benefit? So I'll be releasing my theory-of-mind stuff to the public soon, maybe some details on some things, but overall keeping the technical stuff private for now, as it's in flux anyway.

Assuming the Corp/RL gets funding soon and nothing impedes our work, I would hope that the next decade or so of development and 'training' will yield additional safety measures to add to the list of several+ that we already have in mind. I'm not looking to rush things too much, and I hope that LLMs will essentially act as training wheels for people to get used to AI, that the misuses people find for them will be swiftly tamped down, and that we'll develop, or begin to develop, a legal framework for use/misuse. But misuse is certainly inevitable, as is AI/AGI development. So that is something we need to discuss across the world, right now and continually. And it's not just that it's inevitable; it's also needed. Parts of the world facing population decline or an aging population are going to need solutions soon just to maintain their infrastructure, and I think AGI and robotics can help with that.

Now, I'm not going to say this is an official plan or roadmap, but ideally, and fairly realistically, we would put the theoretical stuff out first, which we're currently organizing and expanding on, and hopefully get a relatively small amount of funding (we're financially considered low-risk/high-reward, at least for the first few years, before more equipment and people are needed). The first 1-2 years would go to laying out the 'blueprint' we'll work off of, then a few years in development to see if at least the parts work and the technical design is sound. Then we put it together, 'hard-code' and 'train' the knowledge base and the weights in the parts that are weighted, get it up to the equivalent(ish) of a human 5-year-old, and have it learn from there, through largely unsupervised learning, and possibly a curriculum similar to what a human child learns, including social development, but at a faster pace. It should still take some time, though, and both cohesion and coherence need to be checked over time, as well as the safety measures.

But once it's working... Well... SAAS may be the ideal first step, because we want people to be able to use it while we're still testing it and training/programming it where necessary, making sure it can develop (not necessarily expand) at a predictable and practical rate while maintaining consistency, and continuing to adapt to misuse cases that may crop up.

Now, I'm not all for centralization, or even making massive amounts of money. Everyone should be able to make money with this when we're done, and perhaps money won't even be a thing one day. But for as long as the current economic system survives, it should still be able to adapt and help with your money-making endeavors. However, we may need to start with this distribution model as a functional necessity, because I'm not sure how far down it will be able to scale yet. Also, as the technical requirements drop, the technical capability of host machines will go up... so it's very hard to predict timetables beyond a certain point. Right now, it's looking like it will require a small supercomputer or cluster to be effective, possibly a data center, so I'm not sure how it would all scale up or down.

In this model, to minimize privacy risks and increase trust, it may be best to split between localized memory + user settings/preferences (and a few other things) on the one hand, and the rest of the functionality in the cloud on the other. But obviously the security risks would have to be weighed, and honestly, it's hard to fathom how that will change once AGI is in the mix: it may be able to handle that just fine, and it may be perfectly safe that way; we just can't know yet. In this context, I mean safe as in user privacy, and whether user data is safe from being exposed in a split topology like that.
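Just to illustrate the kind of split I mean (purely a hypothetical sketch, not our actual stack; the endpoint URL, file paths, and field names below are all invented), something like this in Python:

```python
# Hypothetical sketch only: local memory/preferences, cloud-side reasoning.
import json
import urllib.request
from pathlib import Path

LOCAL_STORE = Path.home() / ".assistant" / "memory.json"  # never leaves the device
CLOUD_ENDPOINT = "https://example.invalid/v1/reason"      # placeholder URL


def load_local_state() -> dict:
    """Memory and user preferences live only on the user's machine."""
    if LOCAL_STORE.exists():
        return json.loads(LOCAL_STORE.read_text())
    return {"preferences": {}, "memory": []}


def ask(query: str) -> str:
    """Send a stateless request to the cloud; persist results locally."""
    state = load_local_state()
    # Only the query plus a small, recent slice of context goes over the wire;
    # the cloud side hosts the heavy model but keeps no per-user state.
    payload = json.dumps({
        "query": query,
        "context": state["memory"][-5:],
    }).encode()
    req = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["answer"]
    # New memories are written back locally, not stored server-side.
    state["memory"].append({"q": query, "a": answer})
    LOCAL_STORE.parent.mkdir(parents=True, exist_ok=True)
    LOCAL_STORE.write_text(json.dumps(state))
    return answer
```

The point being: the heavy model lives server-side, while everything that identifies or profiles the user stays on their own hardware.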

Ideally, it would be able to operate as either a program/app on your PC or personal device, or possibly as an operating system (so it would be entirely software-based), but it may be most effective as a separate computer that operates your devices for you. In time, there would be many scaled-up, scaled-down, and optimized variations for different needs (though it should be fairly adaptable), all open-source so everyone can use them. From there, I guess it depends on everyone else, but we're willing to at least try to be as transparent as is reasonable under the circumstances, and to eventually get it small enough, easy enough, and available enough that everyone can use it, on their own devices.

Realistically, I feel like the 'getting funding' period will be unreasonably protracted, development will go faster than expected, and in the end nobody will believe it's a 'real' AGI; even when they see a fraction of its capability, they'll assume it's some sort of LLM, etc. etc. So, what can you do, y'know?

I can say from my perspective, it's like giving birth out of your head. It's a design that needs to come out. At the same time, I'm also aware of my own mortality, and that time keeps moving... On the business end, there's a great wave that I'm currently moving with, but don't want to be swept under... And people need the help it can offer, so time is a multi-faceted dilemma: I need plenty of it for development and training, but don't have much of it.

I'm optimistic it will be quite safe, but I'm not sure if people will be. In that sense, it may even be ideal if most people believe it, but I think to combat misuse, people will probably want to use it to protect themselves. Over time, I think people will be using it just like ad blockers and virus scans and malware detectors, to filter their access to the internet, and that in itself will reshape the internet. I think it's best to talk about these things now so people can prepare, but most likely much of what I say will be ignored or otherwise dismissed, so... yeah. But like I've said, development will continue. I think AGI will be developed by multiple people, companies, and organizations, and will be everywhere in time. So why keep it a secret?

Also, as a dev, I have to say it's very surreal (and exciting) working on this stuff. But it's also very isolating. So keeping it quiet wouldn't be good for my mental health, and really, not good for others as well. Quite often I catch myself doing the dishes or something and just staring into space thinking about algorithms and the technical design of it all, and realizing this is real life and this stuff is happening, and yet I'm just a human in this world and still have to do the dishes! It's absurd really... and I only expect that feeling to intensify. A 'Futurama' future wouldn't surprise me one bit. Things are going to get weird.

0