Submitted by Visual-Arm-7375 t3_zd6a6j in MachineLearning
Visual-Arm-7375 OP t1_iz065iq wrote
Reply to comment by killver in [D] Model comparison (train/test vs cross-validation) by Visual-Arm-7375
I don't have a clear opinion, I'm trying to learn, I'm proposing a situation, and you're not listening. If you evaluate the model's performance with the same accuracy score you used to select its hyperparameters, that doesn't make sense: the estimate is biased toward whatever the tuning favored.
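This is the problem nested cross-validation is meant to address. A minimal sketch with scikit-learn, assuming an illustrative estimator and parameter grid (neither is from the thread):

```python
# Nested CV: the inner loop selects hyperparameters, the outer loop
# estimates generalization error, so the same accuracy score is never
# used for both jobs. (Model and grid are illustrative placeholders.)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score, KFold
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

inner = KFold(n_splits=5, shuffle=True, random_state=0)
outer = KFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop: hyperparameter selection by CV accuracy
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)

# Outer loop: performance estimate of the whole selection procedure
scores = cross_val_score(search, X, y, cv=outer)
print(scores.mean())
```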
Anyway, thank you for your help, really appreciate it.
killver t1_iz06mz6 wrote
Maybe that's your confusion: reporting a raw accuracy score to communicate performance vs. finding and selecting hyperparameters/models are two different tasks. Your original post asked about model comparison.
Anyway, I suggest you take a look at how research papers do it, and also browse through Kaggle solutions. People almost always do local cross-validation, and the actual production data serves as the test set (e.g. ImageNet, the Kaggle leaderboard, business production data, etc.).
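Concretely, that workflow might look like the following sketch (the candidate models and dataset are placeholders, not anything from the thread): compare models by local CV on the training split only, and touch the held-out test split exactly once.

```python
# Model comparison via local CV; the test split stands in for the
# "production" data and is only used to report the final number.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=5000),
    "rf": RandomForestClassifier(random_state=0),
}

# Comparison happens on CV folds of the training split only.
cv_scores = {name: cross_val_score(m, X_tr, y_tr, cv=5).mean()
             for name, m in candidates.items()}
best = max(cv_scores, key=cv_scores.get)

# The test split is used once, to report the winner's score.
final = candidates[best].fit(X_tr, y_tr).score(X_te, y_te)
print(best, final)
```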
rahuldave t1_iz4u6o4 wrote
Many Kaggle competitions have public and private leaderboards. You are strongly advised to carve your own validation set out of the training data they give you, choose your best model on it, and only then compare on the public leaderboard. People have at times overfit to the public leaderboard, but that can be detected with adversarial validation and the like. If you like this kind of stuff, both Abhishek Thakur's and Konrad Banachewicz's books are really nice...
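For reference, adversarial validation boils down to training a classifier to distinguish train rows from test rows. A minimal sketch, assuming feature matrices `X_train` and `X_test` already exist (the helper name is hypothetical):

```python
# Adversarial validation: if a classifier can't tell train from test
# (AUC near 0.5), the two sets look alike; a high AUC flags
# distribution shift between them.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def adversarial_validation_auc(X_train, X_test):
    X = np.vstack([X_train, X_test])
    # Label 0 = row came from train, 1 = row came from test
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, proba)
```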