royalemate357 t1_j4qdfwj wrote
Reply to comment by BeatLeJuce in [D] Tim Dettmers' GPU advice blog updated for 4000 series by init__27
Tbh I don't think it's an especially good name, but I believe the answer to your question is that it actually uses 32 bits to store a TF32 value in memory. It's just that when they pass it into the tensor cores to do matmuls, they temporarily round it down to this 19-bit precision format (1 sign bit, 8 exponent bits, 10 mantissa bits).
>Dot product computation, which forms the building block for both matrix multiplies and convolutions, rounds FP32 inputs to TF32, computes the products without loss of precision, then accumulates those products into an FP32 output (Figure 1).
(from https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)
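If it helps to see the effect concretely, here's a rough numpy sketch (my own, not NVIDIA's code) that emulates the mantissa truncation on the CPU: it keeps the sign, the 8 exponent bits, and the top 10 of FP32's 23 mantissa bits, then does the matmul in full FP32, roughly mirroring "round inputs to TF32, accumulate in FP32". It uses simple truncation rather than whatever rounding the hardware actually does, so treat it as an approximation.

    import numpy as np

    def simulate_tf32(x: np.ndarray) -> np.ndarray:
        """Zero the low 13 mantissa bits of each float32 value.
        This is truncation, not round-to-nearest, so it only
        approximates what the tensor cores do."""
        bits = x.astype(np.float32).view(np.uint32)
        bits &= np.uint32(0xFFFFE000)  # keep sign + 8 exponent + top 10 mantissa bits
        return bits.view(np.float32)

    a = np.random.rand(256, 256).astype(np.float32)
    b = np.random.rand(256, 256).astype(np.float32)

    exact = a @ b                                    # full FP32 matmul
    tf32ish = simulate_tf32(a) @ simulate_tf32(b)    # inputs rounded to TF32 precision,
                                                     # products/accumulation still FP32
    print(np.max(np.abs(exact - tf32ish)))           # small but nonzero difference

On actual Ampere-or-newer hardware you don't do any of this by hand; frameworks just flip a switch, e.g. PyTorch's torch.backends.cuda.matmul.allow_tf32 = True, and the tensors stay plain 32-bit float in memory throughout.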