Submitted by digital-bolkonsky t3_zivwuc in deeplearning
I was asked this question today and can’t really think of anything that really stands out. Thoughts?
Quite different tech stack for APIs. DL usually requires some kind of model server with a GPU; for traditional ML you can get away with Lambda or FastAPI on a plain server.
For batch processing it's more similar — depending on your data size, you might not need a GPU even for deep learning.
Also, deep learning usually means unstructured data, which requires different storage and training infrastructure.
You can read whole books on the topic, but at the core that's the difference, and it's why a lot of companies still don't use DL.
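To make the "traditional ML" serving path concrete, here is a minimal sketch assuming scikit-learn: the whole serving artifact can be a pickled estimator plus a small handler function, the kind of thing you could drop behind Lambda or FastAPI. The model, data, and `predict_handler` name are made up for illustration.

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train and pickle a small tabular model (stand-in for your real pipeline)
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
blob = pickle.dumps(model)

def predict_handler(features):
    """Hypothetical request handler; in real code, unpickle once at startup,
    not on every call."""
    clf = pickle.loads(blob)
    return clf.predict([features]).tolist()

print(predict_handler([5.1, 3.5, 1.4, 0.2]))  # one Iris class id
```

No GPU, no model server — a CPU and a few megabytes of memory are enough, which is exactly why the DL serving stack looks so different.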
Right, so when it comes to compute: if I'm building a DL API for someone, how should I address the compute issue?
How do we assess GPU needs in production?
The question is about development and tech stack
How do you manage the GPU issue when building an API?
PyTorch / Keras / TensorFlow for deep learning.
And any basic ML library you want — scikit-learn, etc.
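To show the stack difference those libraries imply, here is a minimal sketch assuming PyTorch and scikit-learn are installed; the tiny models and random data are made up for illustration.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

# DL side: you define the architecture and write the training loop yourself
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 4), torch.randn(16, 1)
for _ in range(5):  # toy training loop
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Classic ML side: fit/predict in two lines, no loop, no device management
reg = Ridge().fit(x.numpy(), y.numpy().ravel())
preds = reg.predict(x.numpy())
```

Same data, very different amount of machinery — and the PyTorch side is what grows to need GPUs, checkpoints, and experiment tracking.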
Deep learning is all about GPU usage and running long experiments in production. I'm confused about what you even want.
Is the question basically asking, what skills would someone specialized in DL have vs someone specializing in non-DL ML have?
The question is about productization
You’re still not asking a clear question. Are you using ML to build a product, or is the model itself the product? If the model is the product, then your question reduces to “What’s the difference between a non-DL ML model and a DL model?”
Sorry for being blunt, but wtf is “productization” in this context — what does this word include? This is way too broad of a question; there are many nuances in ML/DL development, and too many variables change based on the specific use case.
Simple models can be served with just the trained model and some API calls; this is the same for DL and ML. Non-compute-intensive tasks don’t even need GPUs/TPUs — most can even run on embedded hardware. However, they differ in the amount of data required for training. Data formats/types also matter: typical ML algorithms work better with tabular data, but you wouldn’t use them for images. I mean, what kind of garbage question is this lol. You could write a whole book on this.
If I got asked this question, I’d ask back for a more concrete example; throwing out a generalized question like that only indicates the interviewer doesn’t have the know-how in ML/DL operations.
I see a lot of people mentioning needing a GPU for DL, but it appears no one has yet clarified that you typically only need one for training.
If you're looking for the standard use case of training a model, saving it, and then productionizing that model by exposing an API for model inference only, then you only need a GPU for the training phase. For inference, you do not need a GPU. AWS rents specialized EC2 instances with fast CPUs optimized specifically for model inference.
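A minimal sketch of that train-on-GPU / serve-on-CPU split, assuming PyTorch: save the trained weights, then load them on the inference host with `map_location="cpu"` so every tensor lands on CPU regardless of where it was saved. The tiny `nn.Linear` model is a stand-in, and an in-memory buffer replaces a real checkpoint file.

```python
import io

import torch
import torch.nn as nn

# Stand-in model; imagine this state_dict was saved on a GPU training box
model = nn.Linear(4, 1)
buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# On the CPU-only inference host: map_location pulls all tensors onto CPU
buf.seek(0)
cpu_model = nn.Linear(4, 1)
cpu_model.load_state_dict(torch.load(buf, map_location="cpu"))
cpu_model.eval()

with torch.no_grad():  # inference only, no autograd bookkeeping
    out = cpu_model(torch.randn(1, 4))
```

Whether CPU inference is fast *enough* still depends on model size and latency requirements — which is why the GPU-for-serving question keeps coming up.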
Another major difference may be that business requirements may preclude the use of Deep Learning in the solution. For instance, business areas like credit risk are regulated and require a level of model explainability that we can't provide with neural networks.
Others have already made great comments regarding tabular vs unstructured data, no other comments to add there.
One final area is the sheer volume of data needed for a DL solution vs a "shallow" ML solution. You need orders of magnitude more data to successfully train most DL models than you do to get good performance with most other ML algorithms.
This. We have no context of what ML even entails here. It’s too broad.
sqweeeeeeeeeeeeeeeps t1_izspjid wrote
Google difference between ML and Deep Learning.