Submitted by Moppmopp t3_y6aejk in MachineLearning
ZestyData t1_isobpf6 wrote
You don't need a GPU to get into ML. You can do everything on CPU.
You only really need a GPU if you're doing industry-grade ML where vast sums of money depend on models training quickly on large datasets. Or if you're hoping to publish research papers, but that would be a long way off regardless.
...However, if it's a non-CUDA card, then it won't be helpful.
Moppmopp OP t1_isoc10i wrote
I am a scientist working in theoretical chemistry. We have several GPU clusters, but I want to get started locally on my home machine.
ZestyData t1_isocotj wrote
Ah gotcha, so you are hoping to publish research!
Well, if you have the GPU clusters, then you can prototype on your home machine, with or without the speed of a GPU (or with a slow one), and run the actual experiments remotely on the clusters. The functionality and process of doing ML are identical whether you use CUDA or not.
Point stands that you'd want a CUDA card.
Moppmopp OP t1_isod3gl wrote
Well, I am not quite sure, let's put it that way. We also have people in our working group explicitly working on neural network potential energy surfaces for evaluating interatomic interactions, and they are quite experienced. I only have limited knowledge of how that stuff works, which is why I want to educate myself more; it's also a fascinating topic in general.
Could you elaborate on why I even need CUDA cores? What is so special about them?
ZestyData t1_isog0e4 wrote
Awesome, glad to hear you have an interest. Coming from the pure computer science side, where I apply ML to purely CS problems, I find ML applied to the natural sciences very exciting!
So, you don't need a GPU at all. Your regular CPU, which runs everything on a computer, can also process ML algorithms perfectly well. GPUs just speed this process up, because their electronic architecture is designed to multiply matrices together (originally because 3D computer graphics is essentially multiplying matrices together). Modern CPUs are quick enough for most tasks in machine learning; it's only when you scale up your experiments for top performance that CPUs take a long time and GPUs make a speedy difference. There is no ultimate difference in experiment outcomes, however. Just time.
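As a rough illustration (a minimal PyTorch sketch; the thread doesn't name a framework, so PyTorch and the 4096x4096 sizes are just assumptions for the example), the exact same matrix multiplication runs on either device, and only the wall-clock time changes:

```python
import time

import torch

# Pick whichever device is available; the math is identical on both.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b  # plain matrix multiplication, the core operation GPUs accelerate
if device.type == "cuda":
    torch.cuda.synchronize()  # GPU kernels run asynchronously; wait before stopping the clock
print(f"{device}: {time.time() - start:.4f} s, result shape {tuple(c.shape)}")
```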
And all of that GPU matrix magic happens way behind the scenes, such that the code you write (and, by extension, the way you implement these neural networks) is identical whether you have a GPU or not. You'd only need one extra line of code to enable GPU support!
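To make that concrete, here's a minimal sketch of what that looks like in PyTorch (an assumption on my part; the same idea applies in other frameworks), where the device selection is the only GPU-specific part:

```python
import torch
import torch.nn as nn

# A toy model and batch, purely for illustration.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(32, 16)

# The only GPU-specific step: move the model and data to the GPU if one exists.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device), x.to(device)

y = model(x)  # everything from here on is written exactly the same either way
print(y.shape)
```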
CUDA is the crucial middle-man technology made by Nvidia that sits between our neural network code and the GPU's circuitry itself, allowing our normal code to run on GPUs without us having to change any details or tinker with super-low-level electronics programming. CUDA is the magic that takes all of our normal code that usually runs on CPUs and instead funnels it to the GPU in a way that makes it run incredibly quickly.
And because Nvidia invented this technology, it only works with Nvidia cards.
AMD is developing its own version of CUDA (ROCm) to let us use AMD cards, but at the moment it's not really ready for use. This is why, in the ML world, the terms 'CUDA' and 'GPU' are often used interchangeably.
When you open your text editor and write a Python script, nothing is different whether you have a (CUDA-enabled / Nvidia) GPU or whether you don't. That's why, if you're just getting started learning, it really won't matter.
Moppmopp OP t1_isontnp wrote
Thank you for your detailed answer. So, to make it short: GPU performance and VRAM don't matter at all if the GPU doesn't have dedicated CUDA cores? Or, in other words, it's nearly impossible to run ML stuff on AMD cards?
Blasket_Basket t1_isp0d5p wrote
Yep, pretty much. AMD cards are pretty close to useless when it comes to Deep Learning. Shallow algorithms (anything that is ML but not DL) typically run on the CPU, not the GPU.
For DL, you need Nvidia cards.
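For what it's worth, here's a small scikit-learn sketch of the "shallow" case described above; the dataset is synthetic and purely for illustration, and the whole thing trains on the CPU with no GPU or CUDA involved:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A classic "shallow" ML workflow: no GPU, no CUDA, just the CPU.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```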
dhruvdh t1_ispgllz wrote
It is potentially enough. But most material on the internet assumes you have a CUDA device, so as a novice it would make sense to take the path of least resistance.
If you have no other option, look into https://www.amd.com/en/graphics/servers-solutions-rocm-ml. It won't explicitly say your card is supported, but it should run fine.
ROCm ML is supported only on Linux, as far as I know.
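As a rough pointer (assuming you install the ROCm build of PyTorch rather than the default CUDA one, and that your card is actually supported), the ROCm backend is exposed through the same torch.cuda API, so a quick check looks like this:

```python
import torch

# On a ROCm build of PyTorch, an AMD GPU still shows up through the torch.cuda API.
print("GPU available:", torch.cuda.is_available())
print("HIP/ROCm version:", torch.version.hip)  # None on a CUDA build of PyTorch
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```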
Moppmopp OP t1_ispheog wrote
How about an RTX 3080 as an alternative? Would you say that would be the overall better choice? I am hesitant because it's 50€ more expensive while having 6 GB less VRAM and worse rasterization performance.
dhruvdh t1_ispjncc wrote
Have you considered not buying a GPU at all and making use of paid services from Google Colab, lambdacloud, etc.?
You can use these while you learn, get a better sense of your requirements, and make a more educated decision later.
The Colab free tier works great for short experiments, and the next tier up is just $10 a month.
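If you go the Colab route, a quick sanity check (run in a notebook cell after enabling a GPU runtime) shows which card, if any, the session was given:

```python
# Runtime -> Change runtime type -> GPU, then run this cell.
import torch

if torch.cuda.is_available():
    print("Assigned GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU attached to this runtime.")
```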
AMD is also set to announce new GPUs on November 3; depending on their pricing, all last-gen prices should go down.
Moppmopp OP t1_ispjzpb wrote
Interesting. I will consider that.