trajo123 t1_ja8cyw2 wrote
How is your loss defined? How is your validation set created? Does it happen for any test/validation split?
Apprehensive_Air8919 OP t1_ja96vdu wrote
nn.MSELoss(). I used sklearn's train_test_split() with test_size=0.2. It's consistent behavior across any split I've seen. The weird thing is that it only happens when I run a very low lr.
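For reference, a minimal sketch of the setup described above (X and y are placeholder arrays standing in for the real data, not the actual code):

```python
import numpy as np
import torch.nn as nn
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for the real RGB inputs and depth-map targets.
X = np.random.rand(100, 3, 64, 64).astype(np.float32)
y = np.random.rand(100, 1, 64, 64).astype(np.float32)

criterion = nn.MSELoss()
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)
```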
trajo123 t1_ja9aghn wrote
Very strange.
Are you sure your dataset is shuffled before the split? Have you tried different random seeds, different split ratios?
Or maybe there's a bug in how you calculate the loss, but that should affect the training set as well...
So my best guess is you either don't have your data shuffled and the validation samples are "easier", or maybe it's something more trivial, like a bug in the plotting code. Or maybe that's the point where your model becomes self-aware :)
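To make the shuffle and seed explicit, something like the following (train_test_split shuffles by default, but pinning random_state lets you compare runs; X and y are the same placeholders as in the sketch above):

```python
from sklearn.model_selection import train_test_split

# Shuffle is on by default; a fixed random_state makes the split
# reproducible. Vary the seed and test_size across runs to rule
# out a "lucky" validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42
)
```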
Apprehensive_Air8919 OP t1_jacst55 wrote
omg... I think I found the bug. I had used the depth-estimation image (the target) as the input to the model in the validation loop...
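For anyone hitting the same symptom, a hypothetical sketch of what that bug looks like (toy model and data, just to make the loop runnable; names are illustrative):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins; the real model and loader come from the training script.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
criterion = nn.MSELoss()
images = torch.rand(16, 3, 64, 64)   # RGB inputs
depths = torch.rand(16, 1, 64, 64)   # depth-map targets
val_loader = DataLoader(TensorDataset(images, depths), batch_size=4)

model.eval()
val_loss = 0.0
with torch.no_grad():
    for image, depth in val_loader:
        # The bug was equivalent to: pred = model(depth)
        # i.e. feeding the target into the model during validation.
        pred = model(image)  # correct: the image is the input
        val_loss += criterion(pred, depth).item()
val_loss /= len(val_loader)
```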
trajo123 t1_jaekibz wrote
Apprehensive_Air8919 OP t1_jackmpu wrote
I just did a run with test_size at 0.5. The same thing happened. Wtf is going on :/