Submitted by optimized-adam t3_zay9gt in MachineLearning
In many papers, no confidence estimates are reported at all (one has to assume the best results for the authors' own method are reported). In other papers, min/max or the standard deviation is reported alongside the mean. More rarely, the mean and standard error of the mean are reported. Once in a blue moon, an actual statistical test is run.
Given that there plainly is no consensus in the field on how to handle this issue, what is the best way to do it in your opinion?
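As an illustration of the reporting styles mentioned above, here is a stdlib-only Python sketch (the scores are made-up accuracies from five hypothetical seeds) computing min/max, mean ± standard deviation, and mean ± standard error of the mean:

```python
import math
import statistics

# Hypothetical accuracy scores from 5 training runs with different random seeds.
scores = [0.842, 0.851, 0.838, 0.847, 0.855]

n = len(scores)
mean = statistics.mean(scores)
std = statistics.stdev(scores)   # sample standard deviation (n - 1 denominator)
sem = std / math.sqrt(n)         # standard error of the mean

print(f"min/max:    {min(scores):.3f} / {max(scores):.3f}")
print(f"mean ± std: {mean:.4f} ± {std:.4f}")
print(f"mean ± SEM: {mean:.4f} ± {sem:.4f}")
```

Note that std describes the spread of individual runs, while SEM describes the uncertainty of the mean itself and shrinks as more seeds are added, which is why the two should not be conflated in a results table.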
Superschlenz t1_iypuia6 wrote
In an optimal world there would be no random weight initialisation or other uses of pseudo-random number generators.