Submitted by intergalacticskyline t3_xyb4h0 in singularity
TemetN t1_irgglyl wrote
In terms of weak AGI (broadly meeting human-level performance on benchmarks), by 2025. I think people tend to either underestimate progress in this area or consider AGI from a different perspective than simply broad human-level performance.
DungeonsAndDradis t1_irghzsf wrote
That's my guess as well, just given the rapid advancements this year alone. Gato, PaLM, and LaMDA are crazy.
TemetN t1_irgisen wrote
Yes. That, and how badly the forecasts missed on benchmark datasets (most notably MATH), make me think this is going to surprise even people tracking the field.
Honestly, we're due for another major LLM release soon. It's easy to get lost in everything else going on, but most of the recent attention has been focused elsewhere.
SejaGentil t1_irirwko wrote
What are those?
DungeonsAndDradis t1_irisfb1 wrote
Basically, they are three state-of-the-art advances in large language models. LaMDA is the most famous because a Google engineer claimed it was sentient.
PaLM: https://www.reddit.com/r/singularity/comments/tw72kz/pathways_language_model_palm_scaling_to_540/
sideways t1_irl39bo wrote
PaLM's logical reasoning really blew my mind. That, more than anything, convinced me that we are close to AGI.
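For anyone curious what that "logical reasoning" looked like in practice, a big part of it was chain-of-thought prompting: show the model a worked example with intermediate steps, then ask a new question. Here's a minimal sketch of that prompt style (illustrative only, not PaLM's actual interface, and the example wording is approximate):

```python
# Minimal illustration of chain-of-thought style prompting, roughly the setup
# used in the PaLM paper to elicit step-by-step reasoning. This is not PaLM's
# actual API; the prompt text below is an approximation for illustration.

few_shot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""

# A model prompted this way tends to continue with intermediate steps
# ("23 - 20 = 3, then 3 + 6 = 9") before stating the final answer,
# rather than jumping straight to a number.
print(few_shot_prompt)
```

The surprising part was that simply asking for the intermediate steps, rather than changing the model itself, was enough to unlock much better performance on reasoning benchmarks.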