Submitted by PleaseKillMeNowOkay t3_xtadfd in deeplearning
SimulatedAnnealing t1_iqs94b6 wrote
Reply to comment by PleaseKillMeNowOkay in Neural network that models a probability distribution by PleaseKillMeNowOkay
The most plausible explanation is overfitting. How do the two models compare in terms of error on the training set?
PleaseKillMeNowOkay OP t1_iqscxo9 wrote
The simpler model had lower training loss with the same number of epochs. I tried training the second model until it reached the same training loss as the first model, which took much longer. The validation loss did not improve and had a slight upward trend, which I know means it's overfitting.
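For reference, here's roughly how I tracked it — a minimal sketch, not my actual setup (the synthetic data, the `nn.Sequential` architecture, and the `patience` value are placeholders; swap in your own models and dataset):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic regression data as a stand-in for the real dataset
X = torch.randn(1000, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(1000, 1)
train_ds = TensorDataset(X[:800], y[:800])
val_ds = TensorDataset(X[800:], y[800:])
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

# Placeholder architecture; substitute the two models being compared
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    # One pass over the training set, accumulating mean training loss
    model.train()
    train_loss = 0.0
    for xb, yb in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
        train_loss += loss.item() * len(xb)
    train_loss /= len(train_ds)

    # Evaluate on the held-out validation set
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(xb), yb).item() * len(xb)
                       for xb, yb in val_dl) / len(val_ds)
    print(f"epoch {epoch:3d}  train {train_loss:.4f}  val {val_loss:.4f}")

    # Validation loss trending upward while training loss keeps falling
    # is the overfitting signature described above; stop after `patience`
    # epochs without a new best validation loss
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print("validation loss stopped improving -- likely overfitting")
            break
```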