Submitted by fedegarzar t3_z9vbw7 in MachineLearning
Machine learning progress is plagued by conflicts between competing ideas, with no shortage of failed reviews, underdelivering models, and failed investments in expensive, over-engineered solutions.
We don't subscribe to the deep-learning hype for time series, and we present a fully reproducible experiment that shows that:
- A simple statistical ensemble outperforms most individual deep-learning models.
- A simple statistical ensemble is 25,000 times faster and only slightly less accurate than an ensemble of deep-learning models (a sketch of such a statistical ensemble follows this list).
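To make "simple statistical ensemble" concrete, here is a minimal sketch using the statsforecast library: fit a few auto-tuned classical models and combine their forecasts with a median. The file name, model list, and median combination rule are illustrative assumptions; the exact configuration is in the repo linked at the end.

```python
# Minimal sketch of a statistical ensemble (illustrative; see the linked repo for the exact setup).
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, AutoETS, AutoCES, DynamicOptimizedTheta

# Long-format data: one row per (series id, timestamp) with columns unique_id, ds, y.
df = pd.read_csv("m3_monthly.csv")  # hypothetical file name

season_length = 12  # monthly seasonality
models = [
    AutoARIMA(season_length=season_length),
    AutoETS(season_length=season_length),
    AutoCES(season_length=season_length),
    DynamicOptimizedTheta(season_length=season_length),
]

sf = StatsForecast(models=models, freq="M", n_jobs=-1)
forecasts = sf.forecast(df=df, h=18)  # M3 monthly horizon is 18 steps ahead

# Combine the individual model forecasts with a simple median across models.
model_cols = [c for c in forecasts.columns if c not in ("unique_id", "ds")]
forecasts["StatisticalEnsemble"] = forecasts[model_cols].median(axis=1)
```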
In other words, the deep-learning ensemble outperforms the statistical ensemble by just 0.36 points of SMAPE. However, the DL ensemble takes more than 14 days to run and costs around USD 11,000, while the statistical ensemble takes 6 minutes to run and costs around USD 0.50.
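For reference, the accuracy metric is SMAPE (symmetric mean absolute percentage error), the standard metric for the M3 competition, so the 0.36-point gap is 0.36 percentage points of SMAPE. A minimal sketch of one common SMAPE definition (conventions vary slightly between papers):

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percentage points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

# e.g. a 0.36-point gap is the difference between, say, 12.00 and 12.36 SMAPE.
```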
These results cover the 3,003 series of the M3 dataset; the full results table is in the repo linked below.
In conclusion: in terms of speed, cost, simplicity, and interpretability, deep learning is far behind the simple statistical ensemble; in terms of accuracy, the two are rather close.
You can read the full report and reproduce the experiments in this GitHub repo: https://github.com/Nixtla/statsforecast/tree/main/experiments/m3