trajo123 t1_ja8cyw2 wrote

How is your loss defined? How is your validation set created? Does it happen for any test/validation split?

2

Apprehensive_Air8919 OP t1_ja96vdu wrote

nn.MSELoss(). I used sklearn's train_test_split() with test_size=0.2. The behavior is consistent across every split I've seen. The weird thing is that it only happens when I run with a very low lr.

1
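(For reference, a minimal sketch of the kind of setup described above; the data shapes, stand-in model, and variable names are placeholders, not the OP's actual code:)

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the real images and depth maps.
X = np.random.rand(1000, 3, 64, 64).astype(np.float32)   # input images
y = np.random.rand(1000, 1, 64, 64).astype(np.float32)   # target depth maps

# 80/20 split, as described above.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in for the real network
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # the "very low lr" case
```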

trajo123 t1_ja9aghn wrote

Very strange.

Are you sure your dataset is shuffled before the split? Have you tried different random seeds, different split ratios?

Or maybe there's a bug in how you calculate the loss, but that should affect the training set as well...

So my best guess is that either your data isn't shuffled and the validation samples are "easier", or it's something more trivial, like a bug in the plotting code. Or maybe that's the point where your model becomes self-aware :)

1
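(Concretely, those checks might look something like this, continuing the placeholder setup from the sketch above; the evaluate helper is hypothetical, not from the thread:)

```python
import torch
from sklearn.model_selection import train_test_split

# Shuffle explicitly and fix the seed so different runs and split ratios are comparable.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42
)

def evaluate(model, criterion, inputs_np, targets_np):
    """Compute the loss through the exact same code path for train and validation,
    so a bug in the loss calculation would show up in both curves."""
    model.eval()
    with torch.no_grad():
        inputs = torch.from_numpy(inputs_np)
        targets = torch.from_numpy(targets_np)
        return criterion(model(inputs), targets).item()

train_loss = evaluate(model, criterion, X_train, y_train)
val_loss = evaluate(model, criterion, X_val, y_val)
```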

Apprehensive_Air8919 OP t1_jackmpu wrote

I just did a run with test_size=0.5. The same thing happened. Wtf is going on :/

1

Apprehensive_Air8919 OP t1_jacst55 wrote

omg... I think I found the bug. I had used the depth estimation image as input to the model in the validation loop....................

2
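(For anyone hitting the same symptom: the bug described above amounts to something like the commented-out line below in the validation loop; this is a reconstruction with placeholder names, not the OP's actual code:)

```python
import torch

def validation_loss(model, criterion, val_loader):
    """Average validation loss; val_loader yields (image, depth_map) batches."""
    model.eval()
    total, n_batches = 0.0, 0
    with torch.no_grad():
        for images, depth_maps in val_loader:
            # Bug: feeding the ground-truth depth map to the model instead of the image,
            # e.g. pred = model(depth_maps), means the validation loss no longer measures
            # the actual task and can look far better than the training loss.
            pred = model(images)  # corrected: the image is the input, the depth map is the target
            total += criterion(pred, depth_maps).item()
            n_batches += 1
    return total / max(n_batches, 1)
```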