
3SquirrelsinaCoat t1_je4qpxm wrote

There are a few sides to it. Plenty of leading AI people have been increasingly talking about the ethics of AI, not in terms of "should we or shouldn't we use AI," but instead, how do we use it in a way that doesn't lead to a bunch of unintended consequences. That's a very fuzzy, unclear area until you put some concrete stuff around it, which is AI governance. Governance takes AI innovation from the equivalent of three drunk guys flying down the highway in a Porsche at 150 mph and turns it into three drunk guys being driven in an Uber at a safe speed. It puts guardrails around the whole thing, bringing more people to the table and getting more input - it changes it from the AI engineers doing their thing in a vacuum to an organization doing something together, and when you take that approach, you are much more ready to avoid the harms. This was true of just your run-of-the-mill machine learning a couple years ago. GPT and its friends are different, and what governance looks like for them is new.

So one idea of that letter on GPT-4 is a call for businesses to pump the brakes and ensure all this AI innovation is governed. I don't know that that came through clearly enough, but I imagine part of the audience got it.

The second idea of the letter is a call for governments to set independent guardrails (ie regulations) to guide this maturing tech. That, I believe the scientific term is "absolutely fucking unrealistic" in 6 months. Shit, that won't even happen in 2 years of meetings and rulemaking. Just look at where we were with GPT in January. Government bodies have zero hope of passing regulations in a timeframe where they will still be meaningful. It's why it was so fucking reckless for OpenAI and some others to just throw this shit into the wild with their fingers crossed.

Now the cat is out of the bag, and government can't do anything in time (even if the regulators understood this stuff, and they don't), which means the onus to "stop" falls entirely on the shoulders of the organizations that lack the governance structures to manage this stuff. It's all fucked, man. AI philosophers don't have much to add here in terms of actually doing something. It's much more immediately action-oriented, not idea-oriented. We've got the ideas; many organizations lack the ability to implement them.

That's my two cents anyway.
