sam__izdat t1_j8bn58f wrote
Reply to comment by mycall in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I don't want to be that guy, but can y'all leave the doe-eyed ML mysticism to the more Ray Kurzweil-themed subreddits?
Soundwave_47 t1_j8bpaqd wrote
Yes, please keep this sort of stuff in /r/futurology or something. We're here trying to formalize the n steps needed to even get to something that vaguely resembles AGI.
kaityl3 t1_j8d7hsw wrote
Do we even know what WOULD resemble an AGI, or exactly how to tell?
Soundwave_47 t1_j8fu3r6 wrote
Somewhat, and no.
We generally define AGI as an intelligence (which, in the current paradigm, would be a set of algorithms) that has decision-making and inference capabilities across a broad set of areas, and that can improve its understanding in areas it does not yet know. Think of it like school subjects: it might not be an expert in all of {math, science, history, language, economics}, but it has some notion of how to do basic work in all of those areas.
This is extremely vague and not universally agreed upon (for example, some say it should exceed peak human capabilities in all tasks).
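To make the breadth requirement concrete, here's a minimal, purely illustrative sketch (the domain names, scores, and thresholds are all invented): it treats "AGI candidate" as clearing some competence bar in *every* domain at once, and it shows how the stricter "exceeds peak human capabilities in all tasks" variant is just the same test with a higher threshold.

```python
# Toy sketch, not a real benchmark: one way the vague "basic
# competence in every domain" criterion could be operationalized.
# All domain names, scores, and cutoffs below are hypothetical.

DOMAINS = ["math", "science", "history", "language", "economics"]

BASIC_COMPETENCE = 0.5   # hypothetical "can do basic work" cutoff
PEAK_HUMAN = 0.99        # hypothetical "exceeds peak human" cutoff

def is_agi_candidate(scores: dict[str, float],
                     threshold: float = BASIC_COMPETENCE) -> bool:
    """True if the system clears the threshold in *every* domain.

    This encodes breadth: a single weak domain disqualifies the
    system, no matter how strong the other domains are.
    """
    return all(scores.get(d, 0.0) >= threshold for d in DOMAINS)

# A system that is superhuman at math but knows no history fails
# the breadth test, which is the point of the definition above.
narrow_system = {"math": 0.99, "science": 0.8, "history": 0.1,
                 "language": 0.7, "economics": 0.6}
broad_system = {d: 0.6 for d in DOMAINS}

print(is_agi_candidate(narrow_system))             # False: one weak domain
print(is_agi_candidate(broad_system))              # True: broadly competent
print(is_agi_candidate(broad_system, PEAK_HUMAN))  # False: stricter variant
```

The point of the sketch is that the disagreement isn't about the test's shape but about where the bar sits, and whether any single scalar per domain is even a sensible measure in the first place.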