
trajo123 t1_ja9aghn wrote

Very strange.

Are you sure your dataset is shuffled before the split? Have you tried different random seeds or different split ratios? (A quick sketch of that check follows this comment.)

Or maybe there's a bug in how you calculate the loss, but that should affect the training set as well...

So my best guess is that either your data isn't shuffled and the validation samples are "easier", or it's something more mundane, like a bug in the plotting code. Or maybe that's the point where your model becomes self-aware :)

1
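A minimal sketch of the shuffle-before-split check suggested above. The array names, shapes, and the use of scikit-learn's `train_test_split` are illustrative assumptions, not the OP's actual pipeline.

```python
# Hedged sketch: dummy arrays stand in for the real RGB inputs and depth targets.
import numpy as np
from sklearn.model_selection import train_test_split

rgb_images = np.random.rand(1000, 64, 64, 3)   # illustrative RGB inputs
depth_maps = np.random.rand(1000, 64, 64, 1)   # illustrative depth targets

# Shuffle before splitting, and try a few seeds/ratios to rule out an "easy" validation set.
for seed in (0, 1, 42):
    X_train, X_val, y_train, y_val = train_test_split(
        rgb_images, depth_maps,
        test_size=0.2,        # also worth trying other ratios, e.g. 0.5
        shuffle=True,
        random_state=seed,
    )
    print(seed, X_train.shape, X_val.shape)
```

Re-running training on a few such splits quickly shows whether the odd validation curve is tied to one particular partition of the data.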

Apprehensive_Air8919 OP t1_jacst55 wrote

omg... I think I found the bug. I had used the depth estimation image as the input to the model in the validation loop...

2
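For illustration, a hedged PyTorch-style sketch of the bug described above and its fix; the model, loss, and loader here are stand-ins, not the OP's code.

```python
# Hedged sketch: a tiny stand-in depth estimator (RGB in, depth map out) on dummy data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in depth estimator
criterion = nn.L1Loss()

rgb = torch.rand(8, 3, 64, 64)              # dummy RGB inputs
depth = torch.rand(8, 1, 64, 64)            # dummy depth targets
val_loader = DataLoader(TensorDataset(rgb, depth), batch_size=4)

model.eval()
val_loss = 0.0
with torch.no_grad():
    for rgb_batch, depth_target in val_loader:
        # Bug described above: pred = model(depth_target)  # target fed in as the input
        pred = model(rgb_batch)              # fix: the RGB image is the model input
        val_loss += criterion(pred, depth_target).item()
val_loss /= len(val_loader)
print(f"validation loss: {val_loss:.4f}")
```

Feeding the target into the model during validation would make the validation loss look artificially low compared to training, which matches the symptom in the thread.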

Apprehensive_Air8919 OP t1_jackmpu wrote

I just did a run with test_size set to 0.5. The same thing happened. Wtf is going on :/

1