
currentscurrents t1_jajfjr5 wrote

Problem is we don't actually know how big ChatGPT is.

I strongly doubt they're running the full 175B model; you can prune/distill a lot without affecting performance.

11
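
For context, the "distill" step being debated here is usually Hinton-style knowledge distillation: train a smaller student model to match the teacher's softened output distribution rather than only the hard labels. Below is a minimal sketch of that objective, assuming a PyTorch-style setup; the function name, temperature, and alpha weighting are illustrative choices, not anything described in this thread.

```python
# Minimal sketch of logit-based knowledge distillation for a language model.
# Hyperparameters (temperature, alpha) are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term against the teacher with the usual
    hard-label cross-entropy on the next-token targets."""
    # Soften both distributions with the temperature, then match them with KL.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * (temperature ** 2)

    # Standard next-token cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1))

    return alpha * kl + (1.0 - alpha) * ce
```

The temperature-squared factor keeps the gradient magnitude of the soft term comparable as the temperature changes, which is the standard trick from the original distillation paper.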

MysteryInc152 t1_jal7d3p wrote

Distillation doesn't work for token-predicting language models, for some reason.

3

currentscurrents t1_jalajj3 wrote

DistilBERT worked though?

2

MysteryInc152 t1_jalau7e wrote

Sorry, I meant the really large-scale models. Nobody has gotten a GPT-3/Chinchilla-scale model to actually distill properly.

6