Trackest t1_je9mlrd wrote

First off, I do agree that in an ideal world, AI research would continue under a European-style, open-source, collaborative framework. Silicon Valley companies in the US are really good at "moving fast and breaking things," which is why most AI innovation is happening in the US right now. However, since AI is a major existential risk, I believe moving to strict, controlled progress, like what we see with nuclear fusion at ITER and theoretical physics at CERN, is the best model for AI research.

Unfortunately, there are a couple of points that may make this unfeasible in reality.

  • Unlike nuclear fusion or theoretical physics, where profitability and application potential are extremely low during the R&D phase, every improvement in AI that brings us closer to AGI promises extreme profits in the form of automating more and more jobs. Corporations have no motive to hand their AI research over to a non-profit international organization besides the goodness of their hearts.
  • AGI and proto-AGI models carry such huge national-security stakes that no nation-state would be willing to give them up.
  • Open-sourcing research will greatly increase the risk of misaligned models landing in the wrong hands, or of nations continuing research in secret. If AI research is to be concentrated within an international body, there would have to be a moratorium on large-scale AI research outside that organization. This may be a deal-breaker.

If we can somehow convince all the top AI researchers to quit their jobs and join this LAION initiative, that would be awesome.

14

acutelychronicpanic t1_je9qay6 wrote

I don't mean some open-source ideal. I mean a mixed approach, with governments, research institutions, companies, and megacorporations all doing their own work on models. Too much collaboration on alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.

Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, and even one group ignoring it means the moratorium hurts the 99% who comply and benefits the 1% rogue faction, to the point that apocalypse isn't off the table if that happens.
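A minimal sketch of that incentive structure, with made-up payoff numbers purely to show why defection dominates:

```python
# Toy model of the moratorium dilemma (hypothetical payoffs, chosen only
# to illustrate the incentive structure, not measured from anything).
def payoff(my_choice: str, others_comply: bool) -> int:
    """Rough utility for one actor deciding whether to honor a pause."""
    if my_choice == "comply":
        # Everyone pauses: shared safety win. Pause alone: you fall behind.
        return 5 if others_comply else -10
    # Defect while others pause: you race ahead. Everyone defects: no pause.
    return 10 if others_comply else -5

# Defecting strictly dominates complying, whatever the others do...
assert payoff("defect", True) > payoff("comply", True)
assert payoff("defect", False) > payoff("comply", False)
# ...even though universal compliance beats universal defection.
assert payoff("comply", True) > payoff("defect", False)
```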

It's a knee-jerk reaction.

Strict, controlled research is impossible in the real world and, I think, likely to increase the risks overall, because only the good actors would follow the rules.

The military won't shut its research down, not in any country except maybe some EU states. We couldn't even do this with nukes, and those are far less useful and far less dangerous.

16

Trackest t1_je9s80s wrote

Right, taking real-world limitations into account, perhaps your suggestion is the best approach. A worldwide moratorium is impossible.

Ideally, reaching AGI is harder than we think, so the multiple actors working collaboratively have time to share which alignment methods work and which do not, as you described. I agree that having many actors working on alignment will increase the probability of finding a method that works.

However, with the potential for enormous profits and the fact that the best AI model will reap the most benefits, how can you possibly ensure that these diverse organizations will share their work, apply effective alignment strategies, and not race to the "finish"? Getting everyone to join a nominal "safety and collaboration" organization seems like a good idea, but we all know how easily lofty ideals collapse in the face of raw profits.

3

acutelychronicpanic t1_je9ttym wrote

The best bet is for the leaders to just do what they do (being open would be nice, but I won't hold my breath), and for at least some of the trailing projects to collaborate in the interest of not becoming obsolete. The prize isn't necessarily just getting rich; it's also creating a society where being rich doesn't matter so much. Personally, I want to see everyone get to do whatever they want with their lives. Lots of folks are into that.

Edit & Quick Thought: Being rich wouldn't hold a candle to being one of the OG developers of the system that results in utopia. Imagine the clout. You could make t-shirts. I'll personally get a back tattoo of their faces. Bonus: there's every chance you get to enjoy it for... forever? Aging seems solvable with AGI.

If foundational models become openly available, then people will work more on fine-tuning, which seems to be much cheaper. Ideally, they could explicitly exclude the leading players in their licensing to reduce the gap between whoever is first and everyone else, regardless of who that turns out to be. (But I'm not 100% on that last idea; I'll chew on it.)

If we all have access to very-smart-but-not-AGI systems like GPT-4 and can more easily build narrow AI for cybersecurity, science, etc., then even if the leading player is six months ahead, their intelligence advantage may not be enough to let them leverage their existing resources to dominate the world, just to get very rich. I'm okay with that.

4

Caffdy t1_jebfvjx wrote

> The prize isn't necessarily just getting rich; it's also creating a society where being rich doesn't matter so much

This phrase, this phrase alone says it all. Getting rich and all the profits in the world won't matter when we're an inch away from extinction; from AGI to artificial superintelligence it won't take long. We are a bunch of dumb monkeys fighting over a floating piece of dirt in the blackness of space; we're not prepared to understand, or take on, the risks of developing this kind of technology.

−1

Borrowedshorts t1_je9zb9x wrote

ITER is a complete joke. CERN is doing okay, but doesn't seem to fit the mold of AI research in any way. There's really no basis for holding these up as models for AI research to follow.

5

Trackest t1_jea2k7c wrote

Yes, I know these projects are bureaucratically overloaded and make extremely slow progress. However, they are some of the only examples we have of genuine international collaboration at a large scale. For example, ITER has US, European, and Chinese scientists working together toward a common goal! Imagine that!

This is precisely the kind of AI research we need: slow progress that is transparent to everyone involved, so that we have time to think and adjust.

I know a lot of people on this sub can't wait for AGI to arrive tomorrow and crown GPT as the new ruler of the world. They reflexively oppose anything that might slow down AI development. I think this discourse comes from a dangerously blind belief in the omnipotence and benevolence of ASI, most likely due to a lack of trust in humans stemming from the recent pandemic and fatalist/doomer trends. You can't just wave your hands and bet everything on some machine messiah saving humanity just because society is imperfect!

I would much prefer that we make the greatest possible effort to slow down and adjust before we cross the event horizon.

−2

Borrowedshorts t1_jeabhvm wrote

ITER is a complete disaster. If people thought NASA's SLS program was bad, ITER is at least an order of magnitude worse. I agree that AI development is going extremely fast; I disagree that there's much we can do to stop it or even slow it down. I agree with Sam Altman's take: it's better for these AIs to get out into the wild now, while the stakes are low, than to experience that for the first time when the systems are far more capable. It's inevitable that it's going to happen; better to make our mistakes now.

8

Smellz_Of_Elderberry t1_jebrrey wrote

>However since AI is a major existential risk I believe moving to a strict and controlled progress like what we see with nuclear fusion in ITER and theoretical physics in CERN is the best model for AI research.

This is going to lead to us waiting decades for progress and testing. Look at drug development: it takes decades of clinical trials before a drug even starts to become available, and then it's prohibitively expensive. We might have cured cancer already if we didn't have so many barriers in the way.

>Open-sourcing research will greatly increase risk of mis-aligned models landing in the wrong hands or having nations continue research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large scale AI research outside of that organization. This may be a deal-breaker.

So you want an unelected international body to hold the keys to the most powerful technology in existence? That sounds like a terrible idea. Open source is the only solution to alignment, because it makes the power available to all, allowing all the disparate and opposing ideological groups to align AI to themselves, each in their own custom way.

All an international group will do is align AI in a way that maximizes the benefit of the parties involved, parties which really have no incentive to actually care about you or me.

3