Comments
3deal t1_jbz6b91 wrote
Wait, the https://huggingface.co/decapoda-research/llama-13b-hf-int4/resolve/main/llama-13b-4bit.pt is the Facebook one?
Is it fully open now?
Amazing_Painter_7692 OP t1_jbz7hta wrote
It's the HuggingFace Transformers conversion of the weights from Meta/Facebook Research.
Upstairs_Suit_9464 t1_jbz8dyt wrote
I have to ask… is this a joke or are people actually working on digitizing trained networks?
kkg_scorpio t1_jbz91de wrote
Check out the terms "quantization-aware training" and "post-training quantization".
8-bit, 4-bit, 2-bit, hell, even 1-bit inference are scenarios that are extremely relevant for edge devices.
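For anyone wondering what post-training quantization actually does to the weights, here's a minimal NumPy sketch of plain round-to-nearest symmetric quantization (my own toy example; GPTQ, the method behind the GPTQ-for-LLaMa repo linked elsewhere in the thread, is smarter and minimizes the resulting layer output error instead of rounding naively):

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, n_bits: int = 4):
    """Round-to-nearest symmetric quantization of a weight tensor."""
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax         # one scale per tensor (per-row/group works better)
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)    # stand-in for a weight matrix
q, scale = quantize_symmetric(w, n_bits=4)
print("mean abs rounding error:", np.abs(w - dequantize(q, scale)).mean())
```

Quantization-aware training goes one step further and simulates this rounding during training, so the network learns to compensate for it.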
remghoost7 t1_jbz96lt wrote
><9 GiB VRAM
So does that mean my 1060 6GB can run it....? haha.
I doubt it, but I'll give it a shot later just in case.
Taenk t1_jbzaeau wrote
Isn't 1-bit quantisation qualitatively different, since you can do optimizations that are only available if the parameters are fully binary?
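For instance, with fully binary (±1) weights and activations, a dot product collapses into XNOR plus popcount, with no multiplies at all. A rough Python illustration of that trick (my own toy example, packing ±1 vectors into integer bit masks):

```python
import numpy as np

def pack(v: np.ndarray) -> int:
    """Pack a {-1, +1} vector into an integer bit mask (+1 -> 1, -1 -> 0)."""
    return sum(1 << i for i, x in enumerate(v) if x > 0)

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors: agreements minus disagreements,
    i.e. n - 2 * popcount(a XOR b) -- the XNOR-popcount trick."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

n = 64
a = np.random.choice([-1, 1], size=n)
b = np.random.choice([-1, 1], size=n)
assert binary_dot(pack(a), pack(b), n) == int(a @ b)
print("binary dot product matches the integer one")
```

On real hardware that 64-wide multiply-accumulate becomes one XOR plus one popcount instruction, which is where the big 1-bit speed and memory wins come from.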
Amazing_Painter_7692 OP t1_jbzbcmi wrote
Should work fine with the 7b param model: https://huggingface.co/decapoda-research/llama-7b-hf-int4
stefanof93 t1_jbzeots wrote
Anyone evaluate all the quantized versions and compare them against smaller models yet? How many bits can you throw away before you're better off picking a smaller version?
Dendriform1491 t1_jbzj7zu wrote
Wait until you hear about the 1/2 bit.
remghoost7 t1_jbzmfku wrote
Super neat. Thanks for the reply. I'll try that.
Also, do you know if there's a local interface for it....?
I know it's not quite the scope of the post, but it'd be neat to interact with it through a simple python interface (or something like how Gradio is used for A1111's Stable Diffusion) rather than piping it all through Discord.
Kinexity t1_jbznlup wrote
There is a repo for CPU inference written in pure C++: https://github.com/ggerganov/llama.cpp
The 30B model can run on just over 20 GB of RAM and takes about 1.2 s per token on my i7-8750H. Though proper Windows support has yet to arrive, and as of right now the output is garbage for some reason.
Edit: the fp16 version works. It's the 4-bit quantisation that returns garbage.
Amazing_Painter_7692 OP t1_jbzoq05 wrote
There's an inference engine class if you want to build out your own API:
And there's a simple text inference script here:
Or in the original repo:
https://github.com/qwopqwop200/GPTQ-for-LLaMa
BUT someone has already made a webUI like the automatic1111 one!
https://github.com/oobabooga/text-generation-webui
Unfortunately it looked really complicated for me to set up with 4-bit weights, and I tend to do everything over a Linux terminal. :P
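If you just want a bare-bones local prompt loop rather than the full webUI, the standard HuggingFace path is only a few lines. A minimal sketch, assuming you have the fp16 `decapoda-research/llama-7b-hf` conversion downloaded (that model id is my guess at the fp16 counterpart of the int4 repo above; the 4-bit checkpoints need the GPTQ repo's own loading code on top of this):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "decapoda-research/llama-7b-hf"  # assumed fp16 conversion, not the int4 .pt file

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    torch_dtype=torch.float16,  # fp16 needs roughly 14 GB of VRAM for 7B
    device_map="auto",          # requires the `accelerate` package
)

while True:
    prompt = input("prompt> ")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Wrapping that loop's body in Gradio (`gr.Interface(fn=..., inputs="text", outputs="text").launch()`) gets you the same kind of browser UI that A1111 uses, which is roughly what text-generation-webui does with a lot more polish.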
Amazing_Painter_7692 OP t1_jbzov27 wrote
https://github.com/qwopqwop200/GPTQ-for-LLaMa
Performance is quite good.
remghoost7 t1_jbzqf5m wrote
Most excellent. Thank you so much! I will look into all of these.
Guess I know what I'm doing for the rest of the day. Time to make more coffee! haha.
You are my new favorite person this week.
Also, one final question, if you will. What's so unique about the 4-bit weights and why would you prefer to run it in that manner? Is it just VRAM optimization requirements....? I'm decently versed in Stable Diffusion, but LLMs are fairly new territory for me.
My question seemed to have been answered here, and it is a VRAM limitation. Also, that last link seems to support 4-bit models as well. Doesn't seem too bad to set up.... Though I installed A1111 when it first came out, so I learned through the garbage of that. Lol. I was wrong. Oh so wrong. haha.
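Quick back-of-the-envelope on the VRAM point, since the numbers make it obvious (rough figures, weights only, ignoring activations and other overhead):

```python
params = {"7B": 7e9, "13B": 13e9, "30B": 33e9, "65B": 65e9}   # approximate parameter counts
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for name, n in params.items():
    row = "  ".join(f"{fmt}: {n * b / 2**30:6.1f} GiB" for fmt, b in bytes_per_param.items())
    print(f"{name:>3}  {row}")
# 13B drops from ~24 GiB in fp16 to ~6 GiB in int4, which is why it suddenly fits
# on consumer cards (the "<9 GiB VRAM" figure quoted above is this plus overhead).
```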
Yet again, thank you for your time and have a wonderful rest of your day. <3
The_frozen_one t1_jbzqvwc wrote
I'm running it using https://github.com/ggerganov/llama.cpp. The 4-bit version of 13b runs ok without GPU acceleration.
remghoost7 t1_jbzro03 wrote
Nice!
How's the generation speed...?
The_frozen_one t1_jbzv0gt wrote
It takes about 7 seconds for 13B to generate a full response to a prompt with the default number of predicted tokens (128).
th3nan0byt3 t1_jbzw23a wrote
only if you turn your pc case upside down
toothpastespiders t1_jc01mr9 wrote
> BUT someone has already made a webUI like the automatic1111 one!
There's a subreddit for it over at /r/Oobabooga too that deserves more attention. I've only had a little time to play around with it but it's a pretty sleek system from what I've seen.
> it looked really complicated for me to set up with 4-bits weights
I'd like to say that the warnings make it more intimidating than it really is. I think it was just copying and pasting four or five lines into a terminal for me. Then again, I also couldn't get it to work, so I might be doing something wrong. I'm guessing it's just that my weirdo GPU wasn't really accounted for somewhere. I'm going to bang my head against it when I've got time, just because it's frustrating having tons of VRAM to spare and not getting the most out of it.
currentscurrents t1_jc03yjr wrote
You could pack more information per bit with in-memory compression. You'd need hardware support for decompression inside the processor core.
Dendriform1491 t1_jc0bgxd wrote
Or make it data free altogether
remghoost7 t1_jc0bymy wrote
I'm having an issue with the C++ compiler on the last step.
I've been trying to use python 3.10.9 though, so maybe that's my problem....? My venv is set up correctly as well.
Not specifically looking for help.
Apparently this person posted a guide on it in that subreddit. Will report back if I am successful.
edit - Success! But, using WSL instead of Windows (because that was a freaking headache). WSL worked the first time following the instructions on the GitHub page. Would highly recommend using WSL to install it instead of trying to force Windows to figure it out.
Pathos14489 t1_jc0dame wrote
r/Oobabooga isn't accessible for me.
cr125rider t1_jc0jwka wrote
Wtf is that GitHub handle lol
light24bulbs t1_jc0s4wr wrote
That is slowwwww
MorallyDeplorable t1_jc0tuwg wrote
It got leaked, not officially released. I have 30B 4-bit running here.
APUsilicon t1_jc0zbtj wrote
Oooh, I've been getting trash responses from OPT-6.7B; hopefully this is better.
futilehabit t1_jc10obb wrote
Guess hospice is pretty boring
AsIAm t1_jc168cw wrote
It is. But that doesn't mean 1-bit neural nets are impossible. Even Turing himself toyed with such networks – https://www.npl.co.uk/getattachment/about-us/History/Famous-faces/Alan-Turing/80916595-Intelligent-Machinery.pdf?lang=en-GB
Necessary_Ad_9800 t1_jc1j36g wrote
Where can I see stuff generated from this model?
Kinexity t1_jc1lwah wrote
That is fast. We are literally talking about a high-end laptop CPU from 5 years ago running a 30B LLM.
Raise_Fickle t1_jc1p9x5 wrote
Anyone having any luck fine-tuning LLaMA in a multi-GPU setup?
MorallyDeplorable t1_jc1umt7 wrote
I'm not actually sure. I've just been chatting with people in an unrelated Discord's off topic channel about it.
I'd post some of what I've got from it but I have no idea what I'm doing with it and don't think what I'm getting would be decently representative of what it can actually do.
luaks1337 t1_jc24dqa wrote
They managed to run the 7B model on a Raspberry Pi and a Samsung Galaxy S22 Ultra.
light24bulbs t1_jc2s2oc wrote
Oh, definitely, it's an amazing optimization.
But less than a token a second is going to be too slow for a lot of real-time applications like human chat.
Still, very cool though
3deal t1_jc32dgv wrote
Does it run on a RTX 3090 ?
MorallyDeplorable t1_jc32jfw wrote
It should, yeah. I'm running it on a 4090, which has the same amount of VRAM. It takes about 20-21 GB of RAM.
3deal t1_jc32o55 wrote
Cool, it's sad there's no download link to try it 🙂
LetterRip t1_jc4rifv wrote
Depends on the model. Some have difficulty even with full 8-bit quantization; with others you can go to 4-bit relatively easily. There is some research suggesting 3-bit might be the useful limit, with the occasional model working at 2-bit.
Lajamerr_Mittesdine t1_jc5b99n wrote
I imagine 1 token per 0.2 seconds would be fast enough. That'd be equivalent to a 60 WPM typer.
Someone should benchmark it on an AMD 7950X3D or Intel 13900-KS
light24bulbs t1_jc5e0zk wrote
Yeah, there's definitely a threshold in there where it's fast enough for human interaction. It's only an order of magnitude off; that's not too bad.
wirefire07 t1_jcgx51q wrote
Already heard about this project? https://github.com/ggerganov/llama.cpp -> It's very fast!!
thoughtdrops t1_jcjjq48 wrote
>Samsung Galaxy S22 Ultra.
Can you link to the Samsung Galaxy post? That sounds great.
ML4Bratwurst t1_jbyzell wrote
Can't wait for the 1-bit quantization