
CKtalon t1_j62hmsr wrote

Before people get their hopes up, BLOOM and OPT are known to be seriously undertrained (not Chinchilla-optimal, BLOOM more so than OPT), so it’s possible that most of the weights were useless to begin with. The results of this paper seem to imply that.

97

data-drone t1_j62n3b9 wrote

How much more training do they need?

14

CKtalon t1_j62n9yw wrote

About 10-12 times more than the tokens seen.
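
A rough sketch of where a number like that comes from (my own assumptions, not exact figures: BLOOM-176B saw roughly 366B training tokens, and the Chinchilla rule of thumb is ~20 tokens per parameter):

```python
# Rough sketch of where a "~10x more tokens" figure comes from.
# Assumptions (ballpark, not exact): BLOOM-176B saw roughly 366B training
# tokens, and the Chinchilla rule of thumb is ~20 tokens per parameter.

params = 176e9          # BLOOM parameter count
tokens_seen = 366e9     # approximate tokens BLOOM was trained on
tokens_per_param = 20   # Chinchilla-optimal rule of thumb

optimal_tokens = params * tokens_per_param
print(f"Chinchilla-optimal: ~{optimal_tokens / 1e12:.1f}T tokens")       # ~3.5T
print(f"Multiple of tokens seen: ~{optimal_tokens / tokens_seen:.0f}x")  # ~10x
```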

26

maizeq t1_j66b3l5 wrote

Chinchilla (70B) was trained on 1.4 trillion tokens, so a 140B model would presumably need at least 2.8 trillion (the optimal token count scales linearly with parameter count, afaik).

I’m not sure a 2.8 trillion token dataset actually exists
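
A minimal sketch of that scaling, assuming the ~20 tokens-per-parameter ratio implied by Chinchilla (1.4T tokens / 70B params); the helper name is just for illustration:

```python
# Chinchilla-style compute-optimal scaling: training tokens grow roughly
# linearly with parameter count, at about 20 tokens per parameter
# (1.4e12 tokens / 70e9 params).

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training token count for a given model size."""
    return n_params * tokens_per_param

for n_params in (70e9, 140e9, 175e9):
    tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:.0f}B params -> ~{tokens / 1e12:.2f}T tokens")
# 70B params  -> ~1.40T tokens
# 140B params -> ~2.80T tokens
# 175B params -> ~3.50T tokens
```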

3

rainy_moon_bear t1_j676oo9 wrote

This is something people don't seem to understand. Pretty much all models 100B+ are undertrained.

3

Taenk t1_j688cev wrote

> I’m not sure a 2.8 trillion token dataset actually exists

DeepMind's MassiveText is assumed to be about 10 TB; the largest publicly available dataset is The Pile, which weighs in at about 820 GB.

A 2.8 trillion token dataset would need to be more than 20 TB in size, which could be possible by including more of Common Crawl - which weighs in at about 380 TiB - or non-English resources. I have a suspicion that training LLMs on more languages, especially outside of the Indo-European family, will improve performance within the Indo-European family.
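
For a rough size-to-token conversion (an assumption on my part, not a precise figure): English-heavy web text comes out to somewhere around 4 bytes of cleaned UTF-8 per BPE token, and raw crawl data shrinks a lot after deduplication and filtering. The helper functions below are just illustrative:

```python
# Ballpark conversion between text size on disk and token count.
# Assumption: ~4 bytes of cleaned UTF-8 text per BPE token (the real ratio
# depends on the tokenizer and language mix); raw Common Crawl is much
# larger than what survives deduplication and quality filtering.

BYTES_PER_TOKEN = 4.0

def tokens_from_bytes(n_bytes: float) -> float:
    return n_bytes / BYTES_PER_TOKEN

def bytes_from_tokens(n_tokens: float) -> float:
    return n_tokens * BYTES_PER_TOKEN

# Under this assumption:
print(f"The Pile (~820 GB) ~= {tokens_from_bytes(820e9) / 1e9:.0f}B tokens")        # ~205B
print(f"2.8T tokens ~= {bytes_from_tokens(2.8e12) / 1e12:.1f} TB of cleaned text")  # ~11.2 TB
```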

2

maizeq t1_j69vuec wrote

Nice. How are you converting between dataset size and number of tokens?

Doesn't Common Crawl get deduplicated, and is that why the number of usable tokens decreases - or is it also curation? How much of that 380 TiB is actually usable?

Given the ostensibly impressive performance of the bilingual GLM-130B (Chinese+English) model that came out of Tsinghua University, that might very well be the case.

1

lookatmetype t1_j64nstm wrote

To be fair, most of the weights in every "Foundation" model are useless.

3

flashdude64 t1_j65z2q4 wrote

Do you have a citation for this that I could read?

1