Submitted by head_robotics t3_1172jrs in MachineLearning
smallfried t1_j9dtyf7 wrote
Reply to comment by catch23 in [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM by head_robotics
That is very interesting!
The code for the paper isn't on GitHub yet, but I'm assuming the hardware requirements are as stated: one beefy consumer GPU (an RTX 3090) and a whole lot of DRAM (>210 GB)?
I've played with OPT-175B, and with a bit of twiddling it can actually generate some Python code :)
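To make the offloading idea concrete, here's a minimal sketch using the Hugging Face transformers/accelerate stack. This is an assumed stand-in, not the paper's own runtime (which isn't public), and the checkpoint name, prompt, and sampling settings are purely illustrative:

```python
# Rough sketch of weight offloading with Hugging Face transformers + accelerate.
# NOT the paper's system; "facebook/opt-6.7b" is just a placeholder checkpoint
# small enough to try on prosumer hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-6.7b"  # illustrative; swap in whatever fits your RAM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",         # fill VRAM first, spill remaining layers to DRAM
    offload_folder="offload",  # and on to disk, if DRAM runs out too
    torch_dtype="auto",        # use the checkpoint's native precision (fp16 for OPT)
)

# The "twiddling": a code-shaped prompt plus sampling instead of greedy decoding.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `device_map="auto"` setting is what ties the DRAM figure to feasibility: layers that don't fit in VRAM are kept in host memory and moved onto the GPU as each forward pass reaches them, trading speed for capacity.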
This is very exciting, as it brings these models within reach of prosumer hardware!
catch23 t1_j9dxlze wrote
Their benchmark was done on a 16 GB T4, which is anything but beefy. The T4 tops out at 70 W of power draw and was marketed primarily for model inference; it's also the cheapest GPU offered by Google Cloud.