staros25 t1_iydwrac wrote
Reply to comment by labloke11 in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
So far I’m happy with it.
Intel publishes extensions for both PyTorch and TensorFlow. I’ve been working with PyTorch, so I just needed to follow these instructions to get everything set up.
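In case it’s useful, here’s roughly what the PyTorch side looks like once the extension is installed — just a minimal sketch assuming the XPU build of intel_extension_for_pytorch; the toy model and tensor shapes are placeholders, not anything from my actual setup:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

# Placeholder model; swap in whatever you're actually training.
model = torch.nn.Linear(128, 10).to("xpu")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Let the extension apply its optimizations to the model/optimizer pair.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

# From here it's a normal PyTorch loop, just with tensors moved to "xpu".
x = torch.randn(32, 128).to("xpu")
loss = model(x).sum()
loss.backward()
optimizer.step()
```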
This was a replacement for my GTX 1070. I don’t have any direct benchmarks, but the memory increase alone let me train some models I was having trouble with before.
For “pros”, I’d say the performance at this price point is hard to beat. For an NVIDIA card with that kind of memory you’re looking at spending quite a bit more — a 3070 alone seems to be in the $600-$700 range. The setup took me an evening to get everything figured out, but it wasn’t too bad.
For “cons”, it’s still a new GPU and there are a couple of open issues, though I haven’t run into any dealbreakers so far. Probably the biggest drawback is that Intel has to pair each extension release with a specific release of PyTorch / TensorFlow. I think the TensorFlow extension works with the newest version, but the PyTorch extension currently supports v1.10 (1.13 is the latest).
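One quick sanity check for that pairing (again a sketch, assuming the GPU/XPU build of the extension) is to print both versions and confirm the card actually shows up — with a mismatched pair the device just won’t be exposed:

```python
import torch
import intel_extension_for_pytorch as ipex

# The extension build has to match the installed PyTorch release,
# e.g. a 1.10-series extension against torch 1.10.x.
print(torch.__version__)
print(ipex.__version__)

# If the pairing (and drivers) are right, the Arc card is visible as an "xpu" device.
print(torch.xpu.is_available())
print(torch.xpu.device_count())
```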
All in all, I think it’s a solid choice if you’re OK diving into the Intel ecosystem. Their extensions aren’t nearly as plug-and-play as CUDA, but you can tell Intel takes open source seriously from the amount of engagement on GitHub. Plus, at $350 you can almost buy two for the cost of a 3070.
staros25 t1_iyd748z wrote
Yes, I’ve been using one for about a month now.
staros25 t1_iye786t wrote
Reply to comment by labloke11 in Does anyone uses Intel Arc A770 GPU for machine learning? [D] by labloke11
Happy to contribute! Hit me up with any questions.