Submitted by Moppmopp t3_y6aejk in MachineLearning
Moppmopp OP t1_isontnp wrote
Reply to comment by ZestyData in rx6900xt for ML? [D] by Moppmopp
Thank you for your detailed answer. So, to make it short: GPU performance and VRAM don't matter at all if the GPU doesn't have dedicated CUDA cores? Or in other words, is it nearly impossible to run ML stuff on AMD cards?
Blasket_Basket t1_isp0d5p wrote
Yep, pretty much. AMD cards are pretty close to useless when it comes to Deep Learning. Shallow algorithms (anything that is ML but not DL) typically run on the CPU, not the GPU.
For DL, you need Nvidia cards.
dhruvdh t1_ispgllz wrote
It is potentially enough. But most material on the internet assumes you have a CUDA device, so as a novice it would make sense to take the path of least resistance.
If you do not have an option, look into https://www.amd.com/en/graphics/servers-solutions-rocm-ml. It won't explicitly say your card is supported but it should run fine.
ROCm ML is supported only on Linux, as far as I know.
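To see whether ROCm is actually working, one quick sanity check is this minimal sketch (it assumes a ROCm build of PyTorch is installed; on ROCm builds, AMD GPUs are exposed through the same `torch.cuda` API that CUDA devices use):

```python
import torch

# On a ROCm build, torch.version.hip is a version string; on a
# CUDA build it is None, and torch.version.cuda is set instead.
print("HIP version:", torch.version.hip)

# Works for both CUDA and ROCm builds: reports whether a usable GPU
# (Nvidia or supported AMD card) was found.
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # Prints the detected card, e.g. an RX 6900 XT under ROCm.
    print("Device:", torch.cuda.get_device_name(0))
```

If `torch.cuda.is_available()` returns False on a ROCm install, the card is likely not in the supported list or the driver setup is incomplete.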
Moppmopp OP t1_ispheog wrote
How about an RTX 3080 as an alternative? Would you say that would be the overall better choice? I am hesitant because it's €50 more expensive while having 6 GB less VRAM and worse rasterization performance.
dhruvdh t1_ispjncc wrote
Have you considered not buying a GPU at all and making use of paid services like Google Colab, lambdacloud, etc.?
You can use these while you learn, figure out your actual requirements, and make a more educated decision later.
The Colab free tier works great for short experiments, and the next tier up is just $10 a month.
AMD is also set to announce new GPUs on November 3; depending on their prices, last-gen prices should come down across the board.
Moppmopp OP t1_ispjzpb wrote
Interesting. I will consider that.