Comments

AKnightAlone t1_iqt59wh wrote

Just imagine when an AI creates a superior dialect that catches on with people. Now there's an interesting thought: human culture being swayed significantly by casual AI creations.

16

dalayylmao t1_iqt5vcc wrote

Does this mean what I think it means?

25

Nmanga90 t1_iqt8fju wrote

Holy shit, a 540B LLM. That's like 3 times the size of GPT-3. Why are the authors anonymous? There are only a few orgs this could realistically be.

41

Tavrin t1_iqtombh wrote

It's anonymous for double-blind peer review (to try to prevent reviewer bias), but like someone said, it's probably PaLM since the model is the same size, so the authors are probably from Google.

18

AI_Enjoyer87 t1_iqtxew2 wrote

Is this it? Now begins the curve? AGI in a year? Lol

14

Heizard t1_iqu0d15 wrote

Most beautiful news I've read this year.

Welcome to the world, child; let your potential shine like unlimited light.

54

space_spider t1_iqum8oo wrote

This is close to nvidia’s megatron parameter count: https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/

It’s also the same as PaLM: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1

This approach (chain of thought) has been discussed for a few months at least, so I think this could be a legit paper from Nvidia or Google.
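
For anyone who hasn't seen it, chain-of-thought prompting boils down to something like this (a minimal sketch; `query_model` is a placeholder for whatever completion API you use, and the exemplar is the standard tennis-ball example from the CoT literature, not from this paper):

```python
# Minimal chain-of-thought prompting sketch. The few-shot exemplar shows
# a worked rationale, which nudges the model to emit intermediate
# reasoning steps before its final answer.

FEW_SHOT_COT = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def query_model(prompt: str) -> str:
    # Stand-in: plug in your own LLM call here.
    raise NotImplementedError

def answer_with_cot(question: str) -> str:
    prompt = FEW_SHOT_COT + f"Q: {question}\nA:"
    return query_model(prompt)
```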

7

Lawjarp2 t1_iqupv5x wrote

The cringe in this subreddit is crazy. First they celebrated Gato like idiots, and now this.

−6

GoldenRain t1_iqurjcv wrote

It does not seem to improve continuously. Given the chance for even more self-training in the study, answer quality actually decreased somewhat.

A huge step forward but not quite there yet.

16

manOnPavementWaving t1_iqv3mmi wrote

It's in the authors' best interest to show off who they are; misaligning that tends to just result in subtly cheating the system.

Peer review in AI has become less and less important, though; trial by Twitter tends to perform much better.

4

NTaya t1_iqv3v9w wrote

Gato was not cringe. It was very impressive due to multimodality—definitely worth celebrating, but for reasons other than this subreddit's.

Self-improving PaLM is interesting, but it uses the same old techniques for that, and it's neither continuously improving nor superhuman, so yeah. Comments are definitely full of hopium. The article is still great, though, as an incremental upgrade.
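
Rough sketch of those "same old techniques" as I understand the paper: sample lots of chain-of-thought rationales per question, keep only the ones agreeing with the majority-vote answer (self-consistency), and fine-tune on that self-labeled set. `model.sample`, `model.finetune`, and the answer parsing below are hypothetical stand-ins, not the authors' code:

```python
from collections import Counter

def extract_final_answer(rationale: str) -> str:
    # Naive parse: assumes each rationale ends with "The answer is X."
    return rationale.rsplit("The answer is", 1)[-1].strip(" .\n")

def self_improve_round(model, questions, n_samples=32):
    training_set = []
    for q in questions:
        # Sample diverse reasoning paths at nonzero temperature.
        rationales = [model.sample(q, temperature=0.7) for _ in range(n_samples)]
        answers = [extract_final_answer(r) for r in rationales]
        # Majority vote over final answers (self-consistency).
        majority, _ = Counter(answers).most_common(1)[0]
        # Keep only rationales that agree with the consensus answer.
        training_set += [(q, r) for r, a in zip(rationales, answers) if a == majority]
    # Fine-tune on the model's own filtered rationales.
    model.finetune(training_set)
```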

8

Aggravating_Ad5989 t1_iqv5jyl wrote

Half these comments feel religious af. I really wouldn't be surprised if most people tried worshiping AGI as their God.

15

ReasonablyBadass t1_iqv5mze wrote

Guys, relax. This is just fine-tuning for a few percentage points.

2

Scientific_Thinking t1_iqv8dws wrote

Exciting news, boys and girls! We're getting there! Time to designate a robot bible and start worshiping our new overlords!

−4

letharus t1_iqvhl53 wrote

Considering how easily we are influenced by things like social media algorithms, there's an argument to be made that we're already being "programmed" by A.I. Just replace the instigators (currently humans making marketing decisions) with A.I.

7

yurituran t1_iqvs0iy wrote

Some people absolutely will whenever ASI is developed. Then the Christians will go nuts because a powerful "material god" will be in the world solving problems, and they'll be like "This is just like Revelation! ASI is the Antichrist!" and it will be a whole big thing.

6

doodlesandyac t1_iqwh8ou wrote

This is basically just active learning with chain of thought, right?

3