Submitted by aleguida t3_yde1q8 in MachineLearning
aleguida OP t1_itvhqg8 wrote
Reply to comment by _peabody124 in [D] Tensorflow learning differently than Pytorch by aleguida
We should have 50% of the dataset as cats and 50% as dogs.
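A quick way to sanity-check that split is to count the labels directly. This is a minimal sketch with a hypothetical `labels` list (0 = cat, 1 = dog); the names are illustrative, not from the original code.

```python
from collections import Counter

def class_balance(labels):
    """Return the fraction of samples belonging to each class."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# hypothetical labels: 0 = cat, 1 = dog
labels = [0, 1, 0, 1, 0, 1]
print(class_balance(labels))  # {0: 0.5, 1: 0.5}
```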
- Looks like a bug. The cross entropy loss in PyTorch seems incredibly out of line, as does the training-validation loss gap.
Good point. I need to double-check that. What worries me more is that the TF implementation is struggling to reach OK results.
aleguida OP t1_itw7yh3 wrote
There was indeed a bug in the val loss calculation. Good catch!
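The thread doesn't say what the bug was, but one frequent validation-loss mistake is averaging per-batch losses without weighting by batch size, which overweights a small final batch. A hypothetical sketch of the buggy and corrected versions:

```python
def val_loss_naive(batch_losses, batch_sizes):
    # buggy: unweighted mean over batches; a small last batch
    # contributes as much as a full-sized one
    return sum(batch_losses) / len(batch_losses)

def val_loss_weighted(batch_losses, batch_sizes):
    # correct: weight each batch's mean loss by its sample count
    total = sum(l * n for l, n in zip(batch_losses, batch_sizes))
    return total / sum(batch_sizes)

# two full batches of 32 plus a final batch of 4 with a high loss
losses, sizes = [0.5, 0.5, 2.0], [32, 32, 4]
print(val_loss_naive(losses, sizes))     # 1.0
print(val_loss_weighted(losses, sizes))  # ~0.588
```

The unweighted mean reports 1.0 while the true per-sample loss is about 0.59, which is exactly the kind of discrepancy that makes a training-validation gap look worse than it is.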