Submitted by Dr_Singularity t3_xu0oos in singularity
Comments
dalayylmao t1_iqt5vcc wrote
Does this mean what I think it means?
CommentBot01 t1_iqtizzd wrote
Maybe the author is an LLM :)
[deleted] t1_iqtjtdn wrote
[deleted]
Smoke-away t1_iqtjxlx wrote
Yes.
AGI 🤖 2022
manOnPavementWaving t1_iqtkra0 wrote
Actually we know what the LM is: it's PaLM, developed by Google under Jeff Dean.
Anonymous peer review is a fucking joke
Tavrin t1_iqtombh wrote
It's anonymous for double peer reviewing (to try to prevent review biases) but like someone said, it's probably PaLM since the model is the same size, so the authors are probably from Google.
FusionRocketsPlease t1_iqtrem9 wrote
Shut up.
jlpt1591 t1_iqts7s5 wrote
If you think agi 2022 your iq is probably below average
GeneralZain t1_iqttze5 wrote
>If you think agi 2022 your iq is probably below average
Ironic
AI_Enjoyer87 t1_iqtxew2 wrote
Is this it? Now begins the curve? AGI in a year? Lol
Professional-Song216 t1_iqu1sdg wrote
If I could upvote this comment again I would lol
Phoenix5869 t1_iqu3akd wrote
Yeah, some people on here are very optimistic, to put it nicely
Bataranger999 t1_iqulkrw wrote
I did it for you
space_spider t1_iqum8oo wrote
This is close to nvidia’s megatron parameter count: https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/
It’s also the same as PaLM: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1
This approach (chain of thought) has been discussed for a few months at least, so I think this could be a legit paper from Nvidia or Google
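For context, the self-training recipe being discussed here roughly works by sampling multiple chain-of-thought answers per question, majority-voting the final answers, and fine-tuning on the chains that agree with the vote. A minimal, model-free sketch of that voting-and-filtering step (all names and data below are illustrative, not taken from the paper):

```python
from collections import Counter

def self_consistency_filter(samples):
    """Majority-vote the final answers across sampled chains of thought,
    then keep only the (rationale, answer) pairs that agree with the vote.
    In the self-training setup, the kept pairs become fine-tuning data."""
    answers = [answer for _, answer in samples]
    majority = Counter(answers).most_common(1)[0][0]
    kept = [(cot, ans) for cot, ans in samples if ans == majority]
    return majority, kept

# Illustrative sampled (chain-of-thought, final-answer) pairs for one question.
samples = [
    ("3 cars arrive, then 2 more: 3 + 2 = 5", "5"),
    ("2 + 3 = 5 cars in total", "5"),
    ("3 * 2 = 6 cars", "6"),  # an inconsistent chain, filtered out by the vote
]
majority, training_pairs = self_consistency_filter(samples)
print(majority, len(training_pairs))  # → 5 2
```

The key point (and why the GoodToKnowYouAll-style skepticism elsewhere in the thread matters) is that the model only ever trains on its own filtered outputs, so improvement can plateau or regress, which matches the result noted below.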
[deleted] t1_iqup47u wrote
[deleted]
GoldenRain t1_iqurjcv wrote
It does not seem to continuously improve. Given the chance for even more self-training in the study, answer quality actually decreased somewhat.
A huge step forward but not quite there yet.
salaryboy t1_iqusrej wrote
What's the cringe here?
Schpaedzles t1_iquxrjs wrote
I wouldn't say cringe, but the constant overreactions are a bit annoying lol
TheSingulatarian t1_iqv0u5x wrote
Religious fanaticism.
EOE97 t1_iqv21rq wrote
A decade ago, most people wouldn't have expected this amount of progress any time soon. Let them have their hopium.
2Punx2Furious t1_iqv2q9n wrote
> double peer reviewing
Wasn't it called "double blind"? (I'm not a researcher).
2Punx2Furious t1_iqv2rvn wrote
I mean, in this case it's obvious, but usually it's not that easy to guess who the authors are.
2Punx2Furious t1_iqv2vn6 wrote
It's not coming, we skipped directly to AGI.
Akimbo333 t1_iqv2z12 wrote
How did we do that exactly?
2Punx2Furious t1_iqv352f wrote
Just a joke (mostly), considering how fast AI progress is going.
manOnPavementWaving t1_iqv3mmi wrote
It's in the authors' best interests to show off who they are; misaligning that tends to just result in subtly cheating the system.
Peer review in AI has become less and less important, though; trial by Twitter tends to perform much better.
NTaya t1_iqv3v9w wrote
Gato was not cringe. It was very impressive due to multimodality—definitely worth celebrating, but for reasons other than this subreddit's.
Self-improving PaLM is interesting, but it uses the same old techniques for that, and it's not continuously improving nor superhuman, so yeah. Comments are definitely full of hopium. The article is still great, though, but as an incremental upgrade.
Aggravating_Ad5989 t1_iqv5jyl wrote
Half these comments feel religious af. I really wouldn't be surprised if most people tried worshiping AGI as their God.
ReasonablyBadass t1_iqv5mze wrote
Guys, relax. This is just about finetuning a few percentage points.
Aggravating_Ad5989 t1_iqv60cm wrote
I won't be celebrating Gato until they can show it can be scaled up. Until then it's just a toy.
Scientific_Thinking t1_iqv8dws wrote
exciting news boys and girls! we're getting there! time to designate a robot bible and start worshiping our new overlords!
letharus t1_iqvhl53 wrote
Considering how easily we are influenced by things like social media algorithms, there's an argument to be made that we're already being "programmed" by A.I. Just replace the instigators (currently humans making marketing decisions) with A.I.
Akimbo333 t1_iqvj507 wrote
Oh ok lol!
yurituran t1_iqvs0iy wrote
Some people absolutely will whenever ASI is developed. Then the Christians will go nuts because a powerful "material god" will be in the world solving problems, and they will be like "This is just like Revelation! ASI is the anti-christ!" and it will be a whole big thing.
quantum1eeps t1_iqvucdq wrote
Yes there will be no reason to learn your mother’s native tongue when there’s true babelfish. We will miss language eventually
Fel1ace t1_iqvw1qf wrote
I mean, think of the benefits:
- it exists
- it is smart
- it is just and unbiased
doodlesandyac t1_iqwh8ou wrote
This is basically just active learning with chain of thought right?
doodlesandyac t1_iqwhczz wrote
Yeah it’s called active learning lol
jlpt1591 t1_iqwsu52 wrote
Agi tomorrow is actually near genius level
[deleted] t1_iqx1f97 wrote
[deleted]
MercuriusExMachina t1_iqx2aqv wrote
ASI 2022 woohoo + happy cake day!
GoodToKnowYouAll t1_iqx7eyd wrote
sparcity_of_time t1_ir2ux0u wrote
Self-training has been around for a while (earlier: https://arxiv.org/abs/2204.12639v1), and likely shows up in humans as System 2 → System 1 compression based on minimizing prediction losses: https://www.youtube.com/watch?v=75d_29QWELk. That said, neat to see.
AKnightAlone t1_iqt59wh wrote
Just imagine when an AI creates a superior dialect that catches on for use by people. Now there's an interesting thought. Human culture being swayed significantly by casual AI creations.