
Select_Beautiful8 t1_jbq9m13 wrote

No, I wasn't able to load the 7B model, it still says CUDA out of memory :(

1

KerfuffleV2 t1_jbqo9qh wrote

You might have to reduce the CUDA layers by 1-3, but with only 16GB RAM you're probably going to have trouble.

If you still run out of CUDA memory trying to load it, then maybe you're not setting the strategy correctly. How are you trying to change it?

2

Select_Beautiful8 t1_jbqpd5x wrote

How do I reduce the CUDA layers?

1

KerfuffleV2 t1_jbqtx6j wrote

Note: I'm just a random person on the internet, no affiliation to OP. I also don't really know what I'm doing here, so follow my advice at your own risk.

"cuda fp16i8 *16 -> cpu fp32" as the strategy means: use 16 fp16i8-format CUDA layers and then put the rest on the CPU (as fp32). So if you want to reduce how many layers go to the GPU, you'd reduce "16" there.

Assuming we're talking about the same thing, you'd have the ChatRWKV repo checked out and be editing v2/chat.py

There should be a line like:

args.strategy = 'cuda fp16i8 *16 -> cpu fp32'

Either make sure any other lines setting args.strategy in that area are commented out, or make sure the one with the setting you want to use is the last one. (Otherwise a later assignment would override what you added.)
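To make that concrete, the relevant bit of v2/chat.py could end up looking something like this (just a sketch from my copy; the commented-out example lines may be different in yours):

# args.strategy = 'cuda fp16'                    # everything on the GPU as fp16 (needs lots of VRAM)
# args.strategy = 'cpu fp32'                     # everything on the CPU (no VRAM needed, but slow)
args.strategy = 'cuda fp16i8 *16 -> cpu fp32'    # 16 quantized layers on the GPU, the rest on the CPU

Since it's plain Python, only the last uncommented assignment to args.strategy takes effect.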

2

Select_Beautiful8 t1_jbqyth8 wrote

Thanks. I'm actually using the oobabooga text generation webui on github

1

KerfuffleV2 t1_jbr6r2f wrote

> I'm actually using the oobabooga text generation webui on github

I'm not familiar with that. It does seem like it can run RWKV and supports passing a custom strategy, though: https://github.com/oobabooga/text-generation-webui/wiki/RWKV-model#setting-a-custom-strategy

Are you already using that flag with the correct parameter?

2

Select_Beautiful8 t1_jbr867y wrote

Oh, it loaded! It was because I wrote "cuda fp32" instead of "cpu fp32" in the second half of the argument. Thanks

1

KerfuffleV2 t1_jbr95r5 wrote

No problem. fp16i8 uses about half the memory of fp16 (and a quarter of fp32), so with "cuda fp32" there, the remaining layers would not only take roughly 4x as much memory as fp16i8, they'd all get pushed onto the GPU instead of the CPU!

2

Select_Beautiful8 t1_jbra2af wrote

ok so "cuda fp16i8 *16 -> cpu fp32" would be the most optimal argument for me?

1

KerfuffleV2 t1_jbrb0qa wrote

I'm definitely not qualified to answer a question like that. I'm just a person who managed to get it working on a 6G VRAM GPU. Basically, as far as I understand it, the more you can run on the GPU, the better. So it really depends on what other stuff is using your GPU's memory.

Like I mentioned, when I got it working I already had about 1.25G used by other applications and my desktop environment. From my calculations, it should be possible to fit 21, maybe 22 layers onto the GPU as long as nothing else is using it (so basically, you'd have to be in text mode with no desktop environment running).

If you're using Linux and an Nvidia card, you can try installing an application called nvtop, which can show things like VRAM usage. The way to install it will be specific to your distribution, so I can't help you with that. If you're using Windows or a different OS I can't really help you either.

But anyway, if you can find how much VRAM you have free, you can look at how much of that loading 16 layers uses and calculate how many more you can add before you run out.
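If it helps, this is roughly the back-of-the-envelope calculation I did for my own 6G card. Every number here is either a guess or something I eyeballed in nvtop, so plug in your own measurements rather than trusting mine:

# All values in MB. Substitute what your own monitoring tool actually reports.
total_vram   = 6144   # my GPU's total VRAM
already_used = 1250   # desktop environment + other apps (0 if you're in text mode)
fixed_cost   = 550    # guessed fixed overhead: embeddings, CUDA context, etc.
per_layer    = 260    # guessed cost of one fp16i8 layer of the 7B model

layers_that_fit = (total_vram - already_used - fixed_cost) // per_layer
print(f"roughly {layers_that_fit} fp16i8 layers should fit")

With those guesses it comes out to about 16 layers with my desktop running and 21-22 without, which matches what I saw, but the per-layer number is something you'd want to measure yourself by watching VRAM while the model loads.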

That's still not necessarily going to be optimal, though. I don't know how the speed/precision tradeoff between fp16 and fp16i8 plays out, for example. It's not impossible that some other combination of parameters would be better in some way than just cramming as much as possible onto the GPU in fp16i8 format. You'd have to ask someone more knowledgeable for a real answer.

2

Select_Beautiful8 t1_jbrbor0 wrote

Thanks, I use Windows, but I want to do a dual boot

1

KerfuffleV2 t1_jbz7yfk wrote

I've been playing with this for a bit and I actually haven't found any case where fp16i8 worked better than halving the layers and using fp16.

If you haven't already tried it, give something like cuda fp16 *7 -> cuda fp16 *0+ -> cpu fp32 *1 a try and see what happens. It's around twice as fast as cuda fp16i8 *16 -> cpu fp32 for me, which is surprising.

That one will use 7 fp16 layers on the GPU, and stream all the rest except the very last as fp16 on the GPU also. The 33rd layer gets run on the CPU. Not sure if that last part makes a big difference.
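However you end up passing it in, here's how I read each part, formatted the way it would look in v2/chat.py (same disclaimer as before about me not really knowing what I'm doing):

args.strategy = ('cuda fp16 *7 '      # 7 layers kept resident on the GPU in fp16
                 '-> cuda fp16 *0+ '  # "+" = streaming: remaining layers are copied to the GPU on demand
                 '-> cpu fp32 *1')    # the final layer runs on the CPU in fp32

As far as I can tell, streamed layers save VRAM because the weights only sit on the GPU while that layer is actually running, which is why this can fit even though it uses fp16 instead of fp16i8.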

2

Select_Beautiful8 t1_jc0w1px wrote

This gave me the "out of memory" error again, which did not happen with "cuda fp16i8 *16 -> cpu fp32" :(

1

KerfuffleV2 t1_jc18f6a wrote

Huh, that's weird. You can try reducing the first one from 7 to 6 or maybe even 5:

cuda fp16 *6 -> cuda fp16 *0+ -> cpu fp32 *1

Also, be sure to double check for typos. :) Any incorrect numbers/punctuation will probably cause problems. Especially the "+" in the second part.

2

Select_Beautiful8 t1_jc9lckr wrote

Just got time to try it, but it doesn't load, nor does it give an error message :( Thanks anyway for your help!

1