lemlo100

lemlo100 t1_iywnr89 wrote

Totally true. I also tend to assume my results are garbage and double- and triple-check everything. For my last project I actually implemented some tests. It was a data augmentation approach for reinforcement learning, so it was testable. My supervisor was not happy about it and considered it a waste of time. After reading the NeurIPS best paper "Deep Reinforcement Learning at the Edge of the Statistical Precipice", I also ran about 50 seeds in my experiments, as opposed to the five my supervisor used to run. We were not able to work together and ended it early because he didn't want a junior interfering with him dashing off cooked results.
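For anyone curious, the kind of aggregation that paper recommends is easy to sketch. This is a toy stand-in (the `run_experiment` function is hypothetical, just a noisy score generator), but it shows why 5 seeds vs 50 seeds matters and how an interquartile mean is computed:

```python
import random
import statistics

def run_experiment(seed: int) -> float:
    # Hypothetical stand-in for a full RL training run: returns a final score.
    rng = random.Random(seed)
    return rng.gauss(100.0, 15.0)

def interquartile_mean(scores: list[float]) -> float:
    # Mean of the middle 50% of scores -- the robust aggregate recommended
    # by "Deep RL at the Edge of the Statistical Precipice".
    s = sorted(scores)
    n = len(s)
    middle = s[n // 4 : n - n // 4]
    return statistics.mean(middle)

scores = [run_experiment(seed) for seed in range(50)]
print(f"IQM over 50 seeds: {interquartile_mean(scores):.1f}")
print(f"Plain mean over first 5 seeds: {statistics.mean(scores[:5]):.1f}")
```

With only 5 seeds the estimate swings around with whichever seeds you happened to pick; the IQM over 50 seeds is far more stable.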

Edit: That same supervisor, by the way, had published a paper that contained a bug: sampling was not implemented quite the way it was described in the paper. When I brought this to his attention, since my project was based on that piece of code, instead of thanking me for spotting the bug he argued that in his opinion it shouldn't make a difference. That was shocking.

43

lemlo100 t1_iywgmvk wrote

I really don't wanna know. I think the problem is huge. Anyone who has worked in software engineering knows that bugs always happen, and that this is exactly what makes unit testing crucial. Many machine learning researchers have never worked in software engineering, so that awareness just isn't there.
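And the tests don't have to be fancy. For something like a data augmentation function, a couple of invariant checks already catch a lot (this `augment` is a made-up example, plain additive noise):

```python
import random

def augment(observation: list[float], rng: random.Random,
            noise: float = 0.01) -> list[float]:
    # Hypothetical augmentation: add small uniform noise to each feature.
    return [x + rng.uniform(-noise, noise) for x in observation]

def test_augment_preserves_shape():
    rng = random.Random(0)
    obs = [0.0, 1.0, 2.0]
    assert len(augment(obs, rng)) == len(obs)

def test_augment_stays_within_noise_bound():
    rng = random.Random(0)
    obs = [0.0, 1.0, 2.0]
    out = augment(obs, rng, noise=0.01)
    assert all(abs(a - b) <= 0.01 for a, b in zip(out, obs))

test_augment_preserves_shape()
test_augment_stays_within_noise_bound()
```

Ten minutes of writing invariants like these is a lot cheaper than retracting a result.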

67