[D] Running large language models on a home PC? Submitted by Zondartul t3_zrbfcr on December 21, 2022 at 5:29 AM in MachineLearning 41 comments 86
wywywywy t1_j151o6u wrote on December 21, 2022 at 6:46 PM You could run a cut-down version of such models. I managed to run inference on OPT 2.7B, GPT-Neo 2.7B, etc. on my 8 GB GPU. Now that I've upgraded to a used 3090, I can run OPT 6.7B, GPT-J 6B, etc. Permalink 5
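The sizes mentioned in the comment line up with a common rule of thumb: in fp16, model weights take roughly 2 bytes per parameter, before counting activations, the KV cache, and framework overhead. A minimal sketch of that arithmetic (the function name and the "weights-only" simplification are illustrative assumptions, not from the thread):

```python
def fp16_vram_gib(n_params: float) -> float:
    """Rough VRAM needed for fp16 weights alone: 2 bytes per parameter.

    Actual usage is higher once activations, the KV cache, and
    CUDA/runtime overhead are included.
    """
    return n_params * 2 / 1024**3

# OPT 2.7B: ~5.0 GiB of weights -- tight but feasible on an 8 GB card.
print(round(fp16_vram_gib(2.7e9), 1))
# OPT 6.7B: ~12.5 GiB of weights -- needs something like a 24 GB 3090.
print(round(fp16_vram_gib(6.7e9), 1))
```

This is why the jump from an 8 GB card to a 24 GB 3090 roughly doubles the parameter count you can serve.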