Submitted by Worth-Advance-1232 t3_10asgah in MachineLearning
Basically what the title says. It seems to me that Super Learners / Stacking are used only rarely, both in industry and in the literature, and I was wondering why that is. Especially since stacking should, at least in theory, perform no worse than the best of its base learners. One reason that comes to my mind is the data cost: the more levels the architecture has, the more data splits are needed, which reduces the training data available to each individual model and thus its performance. Another issue might be the added complexity of building a stacked learner. Still, that doesn't seem like such a bad trade-off. Anything I'm totally missing here?
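For concreteness, here is a minimal stacking sketch in scikit-learn (my own illustration, not tied to any particular paper). Note that `StackingClassifier` builds the meta-learner's training set from cross-validated out-of-fold predictions, so the data cost is smaller than a naive "hold out a separate split per level" scheme would suggest:

```python
# Minimal stacking sketch: two base learners plus a logistic regression
# meta-learner. The meta-learner is fit on out-of-fold predictions
# (cv=5), so no extra holdout split is carved out of the training data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),  # probas for the meta-learner
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),  # the meta-learner
    cv=5,  # out-of-fold predictions feed the meta-learner
)
print(cross_val_score(stack, X, y, cv=5).mean())
```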
ndemir t1_j466c3f wrote
It's just one of the tools you end up with if you use some kind of AutoML. I just confirmed that with h2o ;) https://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html
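Roughly like this (a sketch, assuming a local H2O cluster and a CSV with a hypothetical `label` column; adjust the path and column name to your data):

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()
train = h2o.import_file("train.csv")  # placeholder path
y = "label"  # hypothetical target column
train[y] = train[y].asfactor()  # treat the target as categorical (classification)
x = [c for c in train.columns if c != y]

aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=x, y=y, training_frame=train)

# The leaderboard typically includes StackedEnsemble_AllModels and
# StackedEnsemble_BestOfFamily entries built on top of the base models.
print(aml.leaderboard)
```

So you often get stacked ensembles "for free" from the AutoML run, and in practice they frequently sit at the top of the leaderboard.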