alexnasla OP t1_iujbukx wrote
Reply to comment by BlazeObsidian in [D] When the GPU is NOT the bottleneck...? by alexnasla
I'm pretty sure it's running on the GPU. I don't remember what the GPU utilization was, though; I'll take a look when I get a chance.
The test that I mentioned ran for 8 hours.
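For anyone curious, a quick way to spot-check this mid-run (assuming the pynvml package is installed, since torch.cuda.utilization() wraps NVML; otherwise `nvidia-smi -l 1` in a shell shows the same thing):

```python
import torch

# Spot-check GPU busy % and allocated memory during training.
# torch.cuda.utilization() requires the pynvml package.
if torch.cuda.is_available():
    print(f"GPU utilization: {torch.cuda.utilization(0)}%")
    print(f"VRAM allocated:  {torch.cuda.memory_allocated(0) / 1e9:.2f} GB")
```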
alexnasla OP t1_iuj9596 wrote
Reply to comment by fnbr in [D] When the GPU is NOT the bottleneck...? by alexnasla
So right now the bottleneck is such that I'd need to speed training up by roughly 10x to match the sampling time, so that I can sample and train at the same time without the bottleneck.
alexnasla OP t1_iuj8se6 wrote
Reply to comment by Kon-kkk in [D] When the GPU is NOT the bottleneck...? by alexnasla
Oh my bad!
- PyTorch
- It's 4 sequential layers: Dense + Conv1d + LSTM + Dense (see the sketch after this list)
- Hmm, do you know of any resources I can check out to learn more about doing that?
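Roughly something like this PyTorch sketch, for what it's worth; the layer widths, input shape, and taking the last timestep are placeholders, not the real config:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    """Dense -> Conv1d -> LSTM -> Dense; all sizes below are placeholders."""
    def __init__(self, in_features=64, hidden=128, out_features=10):
        super().__init__()
        self.fc_in = nn.Linear(in_features, hidden)
        self.conv = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.fc_out = nn.Linear(hidden, out_features)

    def forward(self, x):                  # x: (batch, seq_len, in_features)
        x = self.fc_in(x)                  # (batch, seq_len, hidden)
        x = self.conv(x.transpose(1, 2))   # Conv1d wants (batch, channels, seq_len)
        x = x.transpose(1, 2)              # back to (batch, seq_len, hidden)
        x, _ = self.lstm(x)                # (batch, seq_len, hidden)
        return self.fc_out(x[:, -1])       # last timestep -> (batch, out_features)
```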
alexnasla OP t1_iujn3mm wrote
Reply to comment by K-o-s-l-s in [D] When the GPU is NOT the bottleneck...? by alexnasla
OK, so what I did was actually max out the input buffers to the most the GPU can handle without crashing, so basically fully saturating the VRAM.
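In case it helps, one way to find that ceiling is a sketch along these lines; `make_batch` is a hypothetical helper for building an input batch, and in practice I tuned it by hand rather than with this exact loop:

```python
import torch

def find_max_batch_size(model, make_batch, start=32, device="cuda"):
    """Double the batch size until CUDA runs out of memory, then return
    the last size that fit. make_batch(n) is a hypothetical helper that
    builds a batch of n samples with the model's expected input shape."""
    model = model.to(device)
    batch_size, last_ok = start, None
    while True:
        try:
            x = make_batch(batch_size).to(device)
            model(x).sum().backward()       # forward + backward, like one training step
            last_ok = batch_size
            batch_size *= 2
        except RuntimeError as e:           # CUDA OOM surfaces as a RuntimeError
            if "out of memory" not in str(e):
                raise
            return last_ok
        finally:
            model.zero_grad(set_to_none=True)
            torch.cuda.empty_cache()
```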