Submitted by Haghiri75 in deeplearning
Jaffa6 wrote
Reply to comment by Haghiri75 in Alpaca-7B and Dalai, how can I get coherent results? by Haghiri75
That's odd.
Quantisation should make it go from (e.g.) 32-bit floats to 16-bit floats, but I wouldn't expect it to lose that much coherence at all. Did they say somewhere that that's why?
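To make the claim above concrete, here is a minimal NumPy sketch of the fp32 → fp16 round trip being described. It is not the actual Dalai/llama.cpp quantisation path (which targets lower-bit integer formats); the weight distribution and array size here are made up purely for illustration.

```python
import numpy as np

# Illustrative only: simulate casting fp32 weights down to fp16 and
# measure the reconstruction error. The distribution (normal, std 0.02)
# and size (4096) are hypothetical, not taken from any real model.
rng = np.random.default_rng(0)
weights_fp32 = rng.normal(0.0, 0.02, size=4096).astype(np.float32)

# "Quantise" by casting down to 16-bit floats, then back up to compare.
weights_fp16 = weights_fp32.astype(np.float16)
roundtrip = weights_fp16.astype(np.float32)

abs_err = np.abs(weights_fp32 - roundtrip)
print(f"max abs error:  {abs_err.max():.2e}")
print(f"mean abs error: {abs_err.mean():.2e}")
```

The per-weight error comes out tiny relative to the weights themselves, which is why fp32 → fp16 quantisation on its own would be a surprising explanation for badly incoherent output.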
Haghiri75 OP wrote
Apparently I was wrong; the problem is not only quantization. It's that the model isn't Stanford's Alpaca but another Alpaca-like model. That's all I can say for sure at this point.