Submitted by Klaud-Boi t3_127x67n in singularity
MrEloi t1_jegbh7i wrote
With all the political/ethical moaning, I suspect that it will be greatly delayed... at least for the general public.
It will spend months in 'safety testing' to avoid/control AGI... during which time, of course, the rich and powerful will have access to it.
Any delay will, however, be a mistake: the 'amateurs' out there will use GPT-3.5 and GPT-4 with add-on code etc. to simulate GPT-5.
If amateurs achieve AGI - or quasi-AGI - with a smaller model than GPT-5, then their ad hoc techniques will enable AGI on other small systems too.
In other words, a delay to GPT-5 to block AGI could in fact enable AGI on smaller platforms... which would be contrary to what the delay proponents want.
SkyeandJett t1_jegd2en wrote
You hit the nail on the head. Individual users and groups are cobbling together what could in fact be considered AGI as we speak. Anyone whose AGI prediction is later than 2024 might want to adjust it. Any sort of delay is ill-advised. Most of these systems still use GPT-4 at their core, but I suspect that once they're refined you could get away with something like Dolly for all but the most demanding problems, and that's assuming someone doesn't bootstrap a self-improvement loop that actually takes off.
As an example:
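Here's a minimal sketch of the kind of self-improvement loop being described, assuming a generic chat-completion client. The `chat` helper, the prompts, and the round count are all illustrative placeholders, not any particular project's code:

```python
# Sketch of a critique-and-revise loop around a chat model.
# `chat` is a stand-in for whatever completion API you use
# (e.g. an OpenAI-style client); wire it up for your setup.

def chat(messages: list[dict]) -> str:
    """Placeholder: send messages to the model, return its reply text."""
    raise NotImplementedError("connect this to your model API")

def improve(task: str, rounds: int = 3) -> str:
    """Draft an answer, then repeatedly critique and revise it."""
    draft = chat([{"role": "user", "content": task}])
    for _ in range(rounds):
        # Ask the model to find flaws in its own draft...
        critique = chat([{
            "role": "user",
            "content": f"Task: {task}\n\nDraft:\n{draft}\n\n"
                       "List the biggest weaknesses of this draft.",
        }])
        # ...then rewrite the draft to address that critique.
        draft = chat([{
            "role": "user",
            "content": f"Task: {task}\n\nDraft:\n{draft}\n\n"
                       f"Critique:\n{critique}\n\n"
                       "Rewrite the draft to fix the critique.",
        }])
    return draft
```

The point is that all of this is ordinary glue code around an existing model. None of it needs GPT-5, which is exactly why a delay wouldn't stop it.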
MassiveWasabi t1_jegl538 wrote
Just for reference, this paper showed why the safety testing was actually pretty important. The original GPT-4 would literally answer any question with very useful solutions.
People would definitely be able to do some heinous shit if they just released GPT-4 without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or where to get black market guns and explosives and being given the exact dark web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible for the people who might actually want to commit atrocities.
Also consider that OpenAI would actually be forced to pause AI advancement if people started freaking out due to some terrible crime being linked to GPT-4's instructions. Look at the highest-profile crimes in America (like 9/11) and how our entire legislative landscape changed because of them. I'm not saying you could literally do that kind of thing with GPT-4, but you can see what I'm getting at. So we would actually be waiting longer for more advanced AI like GPT-5.
I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.
Illustrious_Savior t1_jegue7b wrote
So if Elon Musk wants to keep Twitter, he needs an atrocity. Hope he stays a good boy.
DowntownYou5783 t1_jegl688 wrote
That's a really interesting idea I hadn't considered. Are you aware of any articles that further discuss this?