Submitted by naequs t3_yon48p in MachineLearning
Naive_Piglet_III t1_ivf7an0 wrote
Reply to comment by naequs in [D] Do you think there is a competitive future for smaller, locally trained/served models? by naequs
This is where I believe the human component of AI/ML lies in the future: being able to discern the use cases where simple models will work from those where complex algorithms will genuinely add value.
If you look at how businesses approach AI/ML today, everyone wants a cloud-based platform integrated with a massive data lake, capable of running deep learning / reinforcement learning algorithms. But the reality is, the majority of business problems (specifically in non-tech businesses like retail, e-commerce, financial services, etc.) don't require anything so complex.
My heart weeps when organisations try to implement a deep learning model for a simple fraud detection use case that could be handled perfectly well by a logistic regression model trained on a much smaller amount of data. What's worse, they'd probably spend millions of dollars trying to develop and operationalise the solution.
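To make that concrete, here's a minimal sketch of the kind of baseline I mean. Everything here is illustrative: the features (amount, hour, recent transaction count) and the synthetic labels are made up for the example, not taken from any real fraud system.

```python
# Minimal sketch: logistic regression as a fraud-detection baseline.
# Data, features, and coefficients are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical transaction features.
X = np.column_stack([
    rng.exponential(100, n),   # transaction amount
    rng.integers(0, 24, n),    # hour of day
    rng.poisson(3, n),         # recent transaction count
])

# Synthetic ground truth: fraud more likely for large amounts at odd hours.
logits = 0.01 * X[:, 0] + 0.5 * (X[:, 1] < 5) + 0.2 * X[:, 2] - 4
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out ROC AUC: {auc:.2f}")
```

A model like this trains in milliseconds on a laptop, is fully interpretable (each coefficient is an odds-ratio you can explain to a regulator), and costs essentially nothing to serve, which is exactly the comparison point against a million-dollar deep learning pipeline.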
The problem, however, is that hype merchants (read: consulting companies) make it sound like this is the only way companies can stay competitive in the future. AI/ML conferences don't help either: they almost always want to showcase an insanely complicated algorithm running on a massive tech stack. There are very few people in the industry who advocate for simplification.
But eventually, I expect the hype to die and companies to realise that the complexity doesn't deliver any incremental benefit in most use cases.
Having said all that, for the specific examples you've given, like language and image processing, I do expect the large / deep models to become the norm, because they are also offered as a service (like GitHub Copilot). It might actually be cheaper to use them directly than to develop a small-scale customised model.