
Smellz_Of_Elderberry t1_jebrrey wrote

>However since AI is a major existential risk I believe moving to a strict and controlled progress like what we see with nuclear fusion in ITER and theoretical physics in CERN is the best model for AI research.

This is going to lead to us waiting decades for progress and testing. Look at drug development: it takes decades of clinical trials before a new drug even becomes available, and then it's prohibitively expensive. We might have cured cancer already if we didn't have so many barriers in the way.

>Open-sourcing research will greatly increase risk of mis-aligned models landing in the wrong hands or having nations continue research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large scale AI research outside of that organization. This may be a deal-breaker.

So you want an unelected international body to hold the keys to the most powerful technology in existence? That sounds like a terrible idea. Open source is the only solution to alignment, because it makes the power available to everyone, allowing all the disparate and opposing ideological groups to align AI to their own values in a custom manner.

All an international group will do is align AI in a way that maximizes the benefit of the parties involved, parties which really have no incentive to actually care about you or me.
