Submitted by No-Performance-8745 t3_125f3x5 in singularity
Recently, the Future of Life Institute (FLI) released an open letter calling on AI labs to "pause for at least 6 months the training of AI systems more powerful than GPT-4." If you are unaware, FLI is a not-for-profit aimed at ensuring positive outcomes for the human race, a subgoal of which is preventing its extinction; something they (and many others) have identified as a potential outcome of transformative artificial intelligence (TAI). There already exists a post for general discussion of the letter here, and that isn't what I intend this post to be.
Many have noticed that influential figures who stand to gain an exceptional amount from the development of artificial intelligence (e.g. Emad Mostaque) have signed this letter, and are curious as to why, speculating that perhaps they know more than we do, or that the signatures are fake, etc. If you are asking these questions, I ask you to consider the possibility that these people are genuinely worried about the outcome of TAI, and that even the people who stand to gain the most from it may still fear it.
To give credence to this, I point you to the fact that Victoria Krakovna (a research scientist at DeepMind) is a board member of FLI, that OpenAI has acknowledged the existential risks of TAI, and that the field of AI safety exists and is populated by people who fear a negative TAI outcome. This is not to say that we should never build TAI, only that we should build it once we know it will be safe. If all that takes is a few more years without TAI, and it could prevent the extinction of the human race, maybe we should consider it.
I want AGI badly, just like almost everyone in this community, and I want a safe iteration of it ASAP; but it is also so critical to consider things like "maybe the DeepMind research scientist is correct", "maybe OpenAI isn't handling safety responsibly" and "what if this could go wrong?".
If you have read this and are thinking something like "why would an AGI ever want to exterminate humanity?", "why would we build an AGI that would do that?" or something along those lines, then you are asking the right questions! Keep asking them, and get engaged with safety research. There is no reason why safety and capabilities need to be opposed or separate; we should all be working toward the same goal: safe TAI.
I wrote the paragraphs above because of how I interpreted the top comments on the post, and I think that regardless of whether or not you believe an open letter like this could ever succeed in slowing down a technology as valuable as AI, we should not dismiss it. Most of the people proposing ideas like this open letter love AI and want safe TAI just as much as the next singularitarian; they simply think it should be developed in a safe, scientific and responsible manner.
Sure_Cicada_4459 t1_je4fln6 wrote
It reeks of sour grapes. Not only are many of the signatures fake, which straight up puts this into at best shady-af territory, but there is literally zero workable plan after 6 months, hell, even during it. No criteria as to what counts as "enough" pause and who decides that. And that also ignores that PAUSING DOESN'T WORK: there are all kinds of open source models out there, and the tech is starting to move away from large = better. It's FOMO + desperate power grab + neurotic unfalsifiable fears. I am not saying x-risk is 0, but drastic action needs commensurate evidence. I get that tail risks are hard to get evidence for in advance, but we have seen so many ridiculous claims of misalignment, like people coaxing ChatGPT or Bing into no-no talk and claiming "It's aggressively misaligned", while at the very same time saying "It's hallucinating and doesn't understand anything about reality". Everything about this signals motivated reasoning, fear of obsolescence, and projection of one's own demons onto a completely alien class of mind.