Sylv__ t1_j2yrga7 wrote
Reply to comment by faschu in [Discussion]: Quantization in native pytorch for GPUs (Cuda)? by faschu
Well, you can always debug or try out quantization configs with fake quantization on the GPU, and once one is good enough for you, move to TensorRT, although AFAIK the support in TRT is quite limited. Of course, this only lets you benchmark configs for prediction quality, not for speedup. A rough sketch of what that looks like is below.
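For example, here is a minimal sketch of the eager-mode QAT flow with PyTorch's `torch.ao.quantization` tooling. The model and sizes are placeholders; the point is that the inserted FakeQuantize modules simulate int8 numerics in fp32, so they run fine on GPU even though the `fbgemm` backend itself targets CPU:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import QuantStub, DeQuantStub, get_default_qat_qconfig, prepare_qat

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()       # marks where quantization begins
        self.fc = nn.Linear(128, 64)
        self.dequant = DeQuantStub()   # marks where it ends

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

device = "cuda" if torch.cuda.is_available() else "cpu"

model = TinyModel().train()                    # prepare_qat requires train mode
model.qconfig = get_default_qat_qconfig("fbgemm")
model = prepare_qat(model).to(device)          # inserts FakeQuantize observers

x = torch.randn(32, 128, device=device)
out = model(x)   # forward pass simulates int8 rounding/clamping in fp32
```

From here you can swap qconfigs (per-channel vs. per-tensor, different observers) and compare prediction quality, which is exactly the debugging loop I mean.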
Maybe there will be support for quantized kernels in torchinductor? I recall reading about this in a GitHub issue at some point.
Otherwise you could try bitsandbytes and pass the right parameter to do all computations in 8-bit.
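Something along these lines (a sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available; the layer sizes are made up). To my knowledge `has_fp16_weights=False` is the flag that keeps the weights in int8 rather than mixed precision:

```python
import torch
import bitsandbytes as bnb

# 8-bit replacement for nn.Linear; int8 conversion happens on .cuda()
linear_8bit = bnb.nn.Linear8bitLt(
    768, 768,
    has_fp16_weights=False,  # store and compute weights in int8, not fp16
    threshold=6.0,           # outliers above this stay in fp16 (LLM.int8() decomposition)
).cuda()

x = torch.randn(1, 768, dtype=torch.float16, device="cuda")
y = linear_8bit(x)           # int8 matmul under the hood
```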
The authors of SmoothQuant also implemented torch-int, a wrapper around CUTLASS for int8 GEMM. You can find it on GitHub!
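To be clear, this is not torch-int's actual API, just a plain-PyTorch emulation of the int8 GEMM pattern (per-tensor symmetric quantization, integer accumulation, floating-point rescale) that such libraries dispatch to dedicated CUTLASS kernels:

```python
import torch

def quantize_sym(x: torch.Tensor, n_bits: int = 8):
    # Per-tensor symmetric quantization to signed int8
    qmax = 2 ** (n_bits - 1) - 1  # 127 for int8
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale

a, w = torch.randn(16, 64), torch.randn(64, 32)
qa, sa = quantize_sym(a)
qw, sw = quantize_sym(w)

# Accumulate in int32, as the real kernels do on tensor cores; the explicit
# casts here are the overhead a fused CUTLASS int8 GEMM avoids.
acc = qa.to(torch.int32) @ qw.to(torch.int32)
out = acc.to(torch.float32) * (sa * sw)  # rescale back to float

print((out - a @ w).abs().max())  # rough quantization error
```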
faschu OP t1_j3q0sr7 wrote
Thanks a lot for the detailed reply! I will try these suggestions.