Submitted by thanderrine t3_zc0kco in MachineLearning
_Arsenie_Boca_ t1_izgbbrj wrote
Reply to comment by CrazyCrab in [D] Determining the right time to quit training (CNN) by thanderrine
Then how can you tell if you overfitted on the validation set?
CrazyCrab t1_izgcu6i wrote
Ok, so my annotated data consists of about 50 images, 10000x5000 pixels on average. The task is binary segmentation, and positives constitute approximately 8% of all pixels. 38 images are in the training part and 12 in the test part (I divided them randomly).
The batch cross entropy and validation cross entropy curves were extremely unstable during training. After a short initial phase there was mostly no stable trend in either direction, up or down. However, as time went on, the best validation cross entropy over all checkpoints kept creeping lower and lower...
So I think my checkpoint-selection method gave me a model overfit to the validation dataset. That is, I expect performance on future samples to look more like performance on the training dataset than on the validation dataset. The only other likely explanation I can think of is that I got unlucky and my validation dataset happened to be significantly easier than my training dataset.
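The selection effect described here can be sketched with a small simulation (all numbers below are hypothetical, not from the actual experiment): if every checkpoint has the same true expected loss but each validation measurement is noisy, taking the minimum over many checkpoints yields an optimistically biased estimate.

```python
import random

random.seed(0)

TRUE_LOSS = 0.30       # hypothetical true expected cross entropy of every checkpoint
NOISE = 0.05           # spread of the validation estimate (small val set -> high noise)
N_CHECKPOINTS = 200

# One noisy validation-loss measurement per checkpoint.
val_losses = [TRUE_LOSS + random.gauss(0, NOISE) for _ in range(N_CHECKPOINTS)]

# Checkpoint selection: keep the checkpoint with the best (lowest) validation loss.
best = min(val_losses)

print(f"best validation loss over checkpoints: {best:.3f}")
print(f"true expected loss of any checkpoint:  {TRUE_LOSS:.3f}")
# The minimum over many noisy estimates lands below the true loss: selecting
# on the validation set makes its score optimistic, which is why an untouched
# third set is needed for an honest final estimate.
```

This is exactly why the chosen checkpoint's validation score overstates real-world performance, even when no checkpoint is genuinely better than the others.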