twocupv60 OP t1_iytea8i wrote
Reply to comment by MrsBotHigh in [D] Ensemble Training Logistics and Mathematical Equivalences by twocupv60
Can you elaborate?
twocupv60 OP t1_iysmgyv wrote
Reply to comment by Thakshu in [D] Ensemble Training Logistics and Mathematical Equivalences by twocupv60
The initial error is (y - y_hat)^2, where y_hat = mean(y_1, ..., y_n). So the error is divided up among y_1, ..., y_n based on how much each contributes to y_hat. If the models are trained separately, the full error of y is backpropagated through each one. If the models are trained together, one model might have a lot of error, which will influence the proportion assigned to the rest; since each member only receives a 1/n fraction of the gradient through the mean, I believe this effectively lowers the learning rate. Is this what you mean by "loss values will be smoother"?
Is there a mistake here?
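For concreteness, here is a minimal PyTorch sketch of the 1/n gradient scaling described above. The tensor `preds` stands in for the n model outputs; the names and shapes are illustrative, not anyone's actual training code:

```python
import torch

torch.manual_seed(0)
n = 4                                        # ensemble size
y = torch.tensor(2.0)                        # target
preds = torch.randn(n, requires_grad=True)   # stand-ins for the n model outputs

# Joint training: MSE loss on the mean of the members.
loss_joint = (y - preds.mean()) ** 2
loss_joint.backward()
print(preds.grad)  # each entry is -2 * (y - mean) / n  -> scaled down by 1/n

preds.grad = None  # reset before the second comparison

# Separate training: each member sees its own full error.
loss_sep = ((y - preds) ** 2).sum()
loss_sep.backward()
print(preds.grad)  # each entry is -2 * (y - preds[i]) -> full error, no 1/n
```

In the joint case every member gets the same gradient, shrunk by 1/n, which is why it behaves like a reduced learning rate; in the separate case each member gets its own full residual.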
twocupv60 OP t1_ir15jsz wrote
Reply to comment by caedin8 in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
This isn't for a production model
twocupv60 OP t1_ir0zyni wrote
Reply to comment by [deleted] in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
perceptual manifold priors using deep networks
twocupv60 OP t1_ir0uhh2 wrote
Reply to comment by [deleted] in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
Your very last thought seems the most reasonable. I can't imagine shrinking the model; I'd think that would bias the results.
twocupv60 OP t1_ir0udtf wrote
Reply to comment by franztesting in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
zero dollars USD
twocupv60 OP t1_iytf8ei wrote
Reply to comment by Zealousideal_Low1287 in [D] Ensemble Training Logistics and Mathematical Equivalences by twocupv60
Thank you. Figure 1 is exactly the models I am considering for this problem.