Dankmemexplorer t1_iymieav wrote
for a sense of scale, GPT-NeoX, a 20-billion-parameter model, requires ~45GB of VRAM to run in fp16. GPT-3 davinci is 175 billion parameters.
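rough math behind those numbers, as a sketch (weights only, assuming fp16 at 2 bytes per parameter; real inference adds overhead for activations and buffers, and the helper name here is just for illustration):

```python
# back-of-the-envelope VRAM estimate: parameter count x bytes per parameter.
# weights only; real inference needs extra room for activations and buffers.
def weight_vram_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
    """Rough weight footprint in GB, assuming fp16 (2 bytes/param) by default."""
    return n_params * bytes_per_param / 1e9

print(weight_vram_gb(20e9))   # GPT-NeoX-20B: ~40 GB (matches the ~45 GB seen in practice with overhead)
print(weight_vram_gb(175e9))  # GPT-3 davinci: ~350 GB
```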
unless these models can be pared down somehow (unlikely; the whole point of training these huge models is that their performance scales with size), we will have to wait a decade or two for consumer electronics to catch up
Deep-Station-1746 t1_iymmi79 wrote
> we will have to wait a decade or two
The best I can do is 4 years. Take it or leave it.
Dankmemexplorer t1_iymsbgo wrote
my current GPU is 4 years old 😖
the state of the art has gotten a lot better since then, but not that much better
aero_oliver2 OP t1_iymivor wrote
Interesting. So you're saying that rather than adjusting the models to work on current devices, the better option is actually designing the devices to work with these models?
Dankmemexplorer t1_iymjsty wrote
running the full GPT-3 on a laptop would be like running Crysis 3 on a Commodore 64. you can't pare it down enough to run on one without ruining it
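a quick sanity check on that: even assuming you squeezed the weights to a hypothetical 4 bits (0.5 bytes) per parameter, GPT-3 still wouldn't come close to fitting:

```python
# GPT-3 davinci at an assumed 4 bits (0.5 bytes) per parameter, weights only
params = 175e9
print(params * 0.5 / 1e9)  # ~87.5 GB, vs the ~8-16 GB of VRAM on a typical laptop GPU
```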