Submitted by digital-bolkonsky t3_zivwuc in deeplearning
Blasket_Basket t1_izv8icg wrote
I see a lot of people mentioning needing a GPU for DL, but it appears no one has yet clarified you only need that for training.
If you're looking at the standard use case of training a model, saving it, and then productionizing it behind an API that only serves inference, then you only need a GPU for the training phase. Inference does not require a GPU; AWS rents specialized EC2 instances with fast CPUs optimized specifically for model inference.
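To make that split concrete, here's a minimal sketch of the workflow (PyTorch is my assumption, the thread doesn't name a framework): the training loop targets a GPU when one exists, the weights get saved, and the inference side reloads them onto a plain CPU box.

```python
# Minimal sketch: train on a GPU if present, serve on CPU only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

# --- Training phase: use the GPU if one is available ---
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(train_device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 20, device=train_device)   # stand-in training batch
y = torch.randn(256, 1, device=train_device)
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "model.pt")

# --- Inference phase: a CPU-only instance behind an API is enough ---
cpu_model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
cpu_model.load_state_dict(torch.load("model.pt", map_location="cpu"))
cpu_model.eval()
with torch.no_grad():
    prediction = cpu_model(torch.randn(1, 20))  # runs fine without CUDA
```

The `map_location="cpu"` argument is what lets a GPU-trained checkpoint load cleanly on a CPU-only instance.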
Another major difference is that business requirements may preclude the use of deep learning in the solution altogether. For instance, business areas like credit risk are regulated and require a level of model explainability that neural networks can't provide.
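To show what I mean by explainability, here's a hedged sketch (scikit-learn and the feature names are illustrative assumptions, not anything from the thread): with a linear model you can read each coefficient as a per-feature effect on the decision, which is the kind of statement an auditor can check; a deep net has no equivalent one-line readout.

```python
# Illustrative sketch of the explainability point: a logistic regression's
# coefficients map one-to-one to features, so each one can be stated as a
# per-feature effect on the decision. Feature names and data are hypothetical.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # hypothetical features: income, debt_ratio, age
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "debt_ratio", "age"], clf.coef_[0]):
    # Each coefficient is the change in log-odds of approval per unit of the
    # feature -- a statement you can put in front of a regulator.
    print(f"{name}: {coef:+.2f} log-odds per unit")
```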
Others have already made great comments regarding tabular vs. unstructured data; I have nothing further to add there.
One final consideration is the sheer volume of data needed for a DL solution vs. a "shallow" ML solution. You need orders of magnitude more data to train most DL models successfully than you do to get good performance with most other ML algorithms.