
MassiveWasabi t1_jegl538 wrote

Just for reference, this paper showed why the safety testing was actually pretty important. The original GPT-4 would answer literally any question with very useful solutions.

People would definitely be able to do some heinous shit if GPT-4 had been released without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or where to get black-market guns and explosives and being given the exact dark web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible to the people who might actually want to commit atrocities.

Also consider that OpenAI would actually be forced to pause AI advancement if people started freaking out over some terrible crime being linked to GPT-4’s instructions. Look at the highest-profile crimes in America (like 9/11) and how our entire legislative landscape changed because of them. I’m not saying you could literally do that kind of thing with GPT-4, but you can see what I’m getting at. So we would actually end up waiting longer for more advanced AI like GPT-5.

I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.


Illustrious_Savior t1_jegue7b wrote

So if Elon Musk wants to keep Twitter, he needs an atrocity. Hope he stays a good boy.
