Submitted by Visual-Arm-7375 t3_zd6a6j in MachineLearning
killver t1_iz02ql9 wrote
Reply to comment by Visual-Arm-7375 in [D] Model comparison (train/test vs cross-validation) by Visual-Arm-7375
I think you are misunderstanding it. Each validation fold is always a separate holdout set: when you evaluate your model on it, that fold was not part of training. Why would it be a problem to train on that fold when a different fold is serving as the validation holdout?
Your point 5 is actually what you can do at the end: retrain the production model on all of the data so it makes use of everything.
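A rough sketch of what I mean, using sklearn (the dataset and model here are just placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, val_idx in kf.split(X):
    # The validation fold is held out: the model trained here never sees it.
    model = RandomForestClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    fold_scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))

print("per-fold accuracy:", fold_scores)

# Your point 5: once the setup is chosen, refit on all data for production.
final_model = RandomForestClassifier(random_state=0).fit(X, y)
```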
The main goal of cross-validation is to find hyperparameters that make your model generalize well.
If you take a look at papers or Kaggle, you will never find someone keeping both a validation set and a test set locally. The test data is usually the real production data, or the data the models are compared on. You make your decisions on local cross-validation to find a model that generalizes well to unseen test data (data that is not in your possession).
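That workflow, sketched with sklearn's GridSearchCV (the parameter grid is only an example; the point is that the decision comes entirely from local CV, not from a local test split):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

# Pick hyperparameters by local cross-validation; the real "test" is
# the unseen production data you do not have in your possession.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 8]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```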
Visual-Arm-7375 OP t1_iz03bcf wrote
Mmmm okay. But imagine you have 1000 data points and you want to compare a random forest and a DNN and select which one is best to put into production. How would you do it?
killver t1_iz03hvr wrote
Do 5-fold cross-validation, train both models 5 times, and compare the out-of-fold (OOF) scores.
And of course optimize hyperparameters for each model type.
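A minimal sketch of that comparison, assuming sklearn and using MLPClassifier as a stand-in for the DNN (all hyperparameters are placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

def oof_accuracy(make_model):
    # Train on 4 folds, predict the held-out fold, then score the
    # stitched-together out-of-fold predictions on all 1000 points.
    oof_pred = np.zeros_like(y)
    for train_idx, val_idx in skf.split(X, y):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        oof_pred[val_idx] = model.predict(X[val_idx])
    return accuracy_score(y, oof_pred)

print("RF  OOF accuracy:", oof_accuracy(lambda: RandomForestClassifier(random_state=0)))
print("DNN OOF accuracy:", oof_accuracy(lambda: MLPClassifier(max_iter=500, random_state=0)))
```

Whichever model type wins on the OOF score (after tuning each one) is the one you would then refit on all 1000 points for production.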