Submitted by Klaud-Boi t3_127x67n in singularity
Predictions?
With Microsoft accelerating the deployment of OpenAI products, I would guess early 2024.
With all the political/ethical moaning, I suspect that it will be greatly delayed, at least for the general public.
It will spend months in 'safety testing' to avoid/control AGI, during which time of course the rich & powerful will have access to it.
Any delay will however be a mistake: the 'amateurs' out there will use GPT-3.5 and GPT-4 with add-on code etc. to simulate GPT-5.
If amateurs achieve AGI - or quasi-AGI - with a smaller model than GPT-5, then their ad hoc techniques will enable AGI on other small systems too.
In other words, a delay to GPT-5 to block AGI could in fact enable AGI on smaller platforms ... which would be contrary to what the delay proponents want.
You hit the nail on the head. Individual users and groups are cobbling together what could in fact be considered AGI as we speak. Anyone whose AGI prediction is later than 2024 might want to adjust it. Any sort of delay is ill advised. Most of these models still use GPT-4 at their core, but I suspect that once they're refined you could get away with something like Dolly for all but the most demanding problems, and that's assuming someone doesn't bootstrap a self-improvement loop that actually takes off.
As an example:
Christmas of 2025
End of 2023 - Judgement Day
Do you mean GPT-5? I would wager a guess of probably around the same time it took GPT-4, so maybe anywhere from 2 to 3 years.
Just for reference this paper showed why the safety testing was actually pretty important. The original GPT-4 would literally answer any question with very useful solutions.
People would definitely be able to do some heinous shit if they just released GPT-4 without any safety training. Not just political/ethical stuff, but literally asking how to kill the most people for cheap and getting a good answer, or where to get black market guns and explosives and being given the exact dark web sites to buy from. Sure, you could technically figure these things out yourself, but this makes it so much more accessible for the people who might actually want to commit atrocities.
Also consider that OpenAI would actually be forced to pause AI advancement if people started freaking out due to some terrible crime being linked to GPT-4’s instructions. Look at the most high profile crimes in America (like 9/11) and how our entire legislation changed because of it. I’m not saying you could literally do that kind of thing with GPT-4, but you can see what I’m getting at. So we would actually be waiting longer for more advanced AI like GPT-5.
I definitely don’t want a “pause” on anything and I’m sure it won’t happen. But the alignment thing will make or break OpenAI’s ability to do this work unhindered, and they know it.
That's a really interesting idea I hadn't considered. Are you aware of any articles that further discuss this?
Sooner than expected. Honestly, I'm excited to see how dramatically things change with GPT-4's full multi-modal abilities and plugins.
I am more afraid of bad people with pre-AGI than of AGI itself.
If they delay it, someone else may become the new Google of the next gen. Maybe another country in Asia with money and resources comes out with the best AI of the next gen, and they will have the power to direct the course of the planet. The thing is that AI may be the only chance we have to save ourselves from extinction, climate change, etc.
So if Elon Musk wants to keep Twitter, he needs an atrocity. Hope he stays a good boy.
Nov 2024
GPT-3 was released three years ago, and it took another three years for GPT-4, so maybe it will take yet another three years. It feels like advancements have come super quickly, in mere months, but this is not true. They just happened to launch the ChatGPT site with conversation tuning shortly before GPT-4, but GPT-3 is not "new".
I don't expect some sort of exponential speed here. They're already running into hardware roadblocks with GPT-4 and currently probably have their hands full trying to accomplish a GPT-4 Turbo, since this is a quite desperate situation. As for exponentials, it looks like resource demand increases exponentially too...
Then there is the political situation as AI awareness is striking. For any progress there needs to be very real financial motives (preferably not overly high running costs) and low political risks. Is that what the horizon looks like today?
Also, there is the question of when diminishing returns hit LLMs of this kind. If we're looking at 10x costs once more for a 20% improvement, it's probably not going to be deemed justified; instead they'd rather innovate on exactly how much you can do with a given parameter size. The Stanford dudes kind of opened some eyes there.
My guess is that the next major advancement will be roughly GPT-4 sized.
You should watch the Ilya interview. He's confident there's still plenty of room for growth with just text but the real advancements will be multi-modal training data. I'd also take a look at Cerebras hardware. There's plenty of room for advancement with training hardware as well. We've got a LOT of runway ahead before hitting any real blocks and by then I'm 100% sure we'll have already hit self-improving AGI.
I would say sometime next year, possibly in March. OpenAI plans to release GPT-4.5 in December, similarly to 3.5, so GPT-5 could go similarly to GPT-4 — that is, if it isn't delayed for whatever reason, like if it scares OpenAI how powerful it is.
Edit: GPT-4.5 will be released in September or October.
QLaHPD t1_jeg8l2h wrote
Never — I think they will change the name to a more appealing one. Should be in 2025. Research takes some time.