Submitted by optimized-adam t3_zay9gt in MachineLearning
In many papers, no confidence estimates are reported at all (one has to assume that the best results for the authors' own method are reported). In other papers, the min/max or the standard deviation is reported alongside the mean. Even more rarely, the mean and the standard error of the mean are reported. Once in a blue moon, an actual statistical test is run.
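For concreteness, here is a minimal sketch of what these reporting options look like over per-seed results (the method names and numbers are made up for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical accuracies over 5 random seeds for two methods
ours     = np.array([0.812, 0.824, 0.809, 0.831, 0.818])
baseline = np.array([0.805, 0.811, 0.799, 0.815, 0.808])

# Mean with min/max
print(f"mean={ours.mean():.3f}, min={ours.min():.3f}, max={ours.max():.3f}")

# Mean +/- standard deviation (ddof=1 for the sample std)
print(f"{ours.mean():.3f} +/- {ours.std(ddof=1):.3f} (std)")

# Mean +/- standard error of the mean
print(f"{ours.mean():.3f} +/- {stats.sem(ours):.3f} (SEM)")

# An actual statistical test: paired t-test across seeds
t, p = stats.ttest_rel(ours, baseline)
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```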
Given that there plainly is no consensus in the field on how to handle this issue, what is the best way to do it in your opinion?
abio93 t1_iyqjfv5 wrote
In an ideal world, the code and the intermediate results (ALL of them, including the ones not used in the final paper) should be available
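One low-effort way to make that practical is to append every run's result to a log as it finishes, whether or not it makes it into the paper. A rough sketch (the file name, schema, and helper are hypothetical):

```python
import json

def log_run(seed, config, metrics, path="all_runs.jsonl"):
    """Append one run's config and metrics as a JSON line."""
    record = {"seed": seed, "config": config, "metrics": metrics}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up values
log_run(seed=0, config={"lr": 3e-4}, metrics={"acc": 0.812})
```

Releasing that log alongside the code lets readers recompute any statistic they like, instead of trusting whichever summary the paper chose.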