was_der_Fall_ist t1_izuipc1 wrote
Reply to comment by Kolinnor in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
> If I had the power to self-improve...
That's really the crux of the matter. What if we scale up to GPT-5 such that it is extremely skilled and reliable at text-based tasks, to the point that it seems reasonable to call it generally intelligent, yet for whatever reason it isn't able to recursively self-improve, whether that would mean training new neural networks, conducting novel scientific research, or something else entirely? Maybe being trained on human data leaves it stuck at roughly human level. It's hard to say right now.
overlordpotatoe t1_izvxoqt wrote
I do wonder if there's a hard limit to the intelligence of large language models like GPT considering they fundamentally don't have any actual understanding.
electriceeeeeeeeeel t1_j01q7j2 wrote
You can already see how good it is at coding. It does lack contextual understanding, memory, and longer-term planning, but honestly that stuff should be here by GPT-5; it seems relatively easier than other problems they have solved. So I wouldn't be surprised if it's already self-improving by then.
Consider this: an OpenAI software engineer has probably already used the chatbot to improve its code, even if only by a line. That means it's already self-improving, just slowly for now, but with increasing speed no doubt.