Ace_Snowlight OP t1_j0tsspm wrote
Reply to comment by Accomplished_Diver86 in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Not trying to prove anything, just sharing. Take a look at this: https://www.adept.ai/act
I can't wait to get my hands on this! Isn't it cool? ✨
GuyWithLag t1_j0ttml7 wrote
Still, to have AGI you need working memory; right now, for all transformer-based models, the working memory is just their input and output. Adding it is... non-trivial.
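A rough sketch of what I mean (everything here is made up for illustration: the `generate` stub, the `ScratchpadAgent` class, the character limit; it is not how Adept or anyone else actually implements this). The only "memory" the model has is whatever gets written back into its next prompt:

```python
def generate(prompt: str) -> str:
    """Placeholder for a transformer-based completion API."""
    return f"<model output for a {len(prompt)}-char prompt>"


class ScratchpadAgent:
    """Fakes 'working memory' by re-feeding past notes through the prompt."""

    def __init__(self, max_chars: int = 4000):
        self.notes: list[str] = []   # the agent's entire "memory"
        self.max_chars = max_chars   # stands in for the context-window limit

    def step(self, request: str) -> str:
        # Memory only exists as text prepended to the next input.
        memory = "\n".join(self.notes)[-self.max_chars:]
        prompt = f"Notes so far:\n{memory}\n\nTask: {request}\nAnswer:"
        output = generate(prompt)
        # Anything not written back into the notes is forgotten on the next step.
        self.notes.append(f"{request} -> {output}")
        return output


agent = ScratchpadAgent()
print(agent.step("Find a 2-bedroom apartment under $2000"))
print(agent.step("Email the top listing to me"))  # only works if the note survived truncation
```

Once the notes overflow the context limit, whatever got truncated is simply gone, which is why bolting real working memory onto these models is the hard part.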
Ace_Snowlight OP t1_j0tv0b4 wrote
You have a point. I edited my post, take a look.
__ingeniare__ t1_j0tvtay wrote
I wouldn't call ACT-1 AGI, but it looks revolutionary nonetheless. If what they show in those videos is legit, it will be a game changer.
Ace_Snowlight OP t1_j0tvzqa wrote
I know, right!
red75prime t1_j0v9jzt wrote
Given the current state of LLMs, I expect it to fail on 10-30% of requests.
Ace_Snowlight OP t1_j0vaeji wrote
Even if you're right, that failure rate will most likely drop fast, and sooner than expected.
And even so, it will still be a huge deal: every failure just boosts the next success (at least in this context).
red75prime t1_j0vpb7l wrote
They haven't provided any information on their online learning method. If it utilizes transformer in-context learning (the simplest thing you can do to boost performance), the results will not be especially spectacular or revolutionary. We'll see.
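For illustration, here's a minimal sketch of what in-context "learning" usually amounts to (the `generate` stub, `act`, and the example store are hypothetical, not Adept's actual method): corrected examples are just pasted back into the prompt as few-shot demonstrations, so the weights never change and the gains are capped by what fits in the context window.

```python
def generate(prompt: str) -> str:
    """Placeholder for a transformer-based completion API."""
    return "<predicted action sequence>"


corrected_examples: list[tuple[str, str]] = []  # (instruction, human-corrected actions)


def act(instruction: str, k: int = 3) -> str:
    # "Learning" here is nothing more than prompt construction: the model's
    # weights never change, so gains are capped by how many demos fit in context.
    demos = corrected_examples[-k:]
    shots = "\n\n".join(f"Instruction: {i}\nActions: {a}" for i, a in demos)
    prompt = f"{shots}\n\nInstruction: {instruction}\nActions:"
    return generate(prompt)


def record_correction(instruction: str, corrected_actions: str) -> None:
    corrected_examples.append((instruction, corrected_actions))


record_correction("Add a row to the spreadsheet", "click(cell_A5); type('...')")
print(act("Add two rows to the spreadsheet"))
```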