lemlo100 t1_iywgmvk wrote
Reply to comment by lameheavy in [D] NeurIPS 2022 Outstanding Paper modified results significantly in the camera ready by Even_Stay3387
I really don't wanna know. I think the problem is huge. Anyone who has worked in software engineering knows that bugs always happen, and that this is what makes unit testing crucial. Many machine learning researchers have never worked in software engineering, so that awareness just isn't there.
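To make that concrete, even a few pytest-style checks catch a lot. This is just a minimal sketch with a made-up `augment` function standing in for whatever component you'd actually test; the names and the noise model are hypothetical:

```python
import numpy as np

def augment(obs: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Hypothetical augmentation: add small Gaussian noise to observations."""
    return obs + 0.01 * rng.standard_normal(obs.shape)

def test_augment_preserves_shape_and_dtype():
    rng = np.random.default_rng(0)
    obs = np.zeros((32, 84, 84), dtype=np.float64)
    out = augment(obs, rng)
    assert out.shape == obs.shape  # augmentation must not change the batch shape
    assert out.dtype == obs.dtype  # and must not silently change precision

def test_augment_is_deterministic_given_seed():
    obs = np.ones((4, 8))
    a = augment(obs, np.random.default_rng(42))
    b = augment(obs, np.random.default_rng(42))
    assert np.array_equal(a, b)  # same seed, same augmentation
```

Tests like these take minutes to write and catch exactly the class of bug we're talking about.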
lemlo100 t1_iywnr89 wrote
Reply to comment by pyepyepie in [D] NeurIPS 2022 Outstanding Paper modified results significantly in the camera ready by Even_Stay3387
Totally true. I also tend to assume my results are garbage and double- and triple-check everything. For my last project I actually implemented some tests: it was a data augmentation approach for reinforcement learning, so it was testable. My supervisor was not happy about it and considered it a waste of time. I also ran about 50 seeds in my experiments, as opposed to only five like my supervisor used to, after reading the NeurIPS outstanding paper "Deep Reinforcement Learning at the Edge of the Statistical Precipice". We weren't able to work together and ended it early because he didn't want a junior interfering with him dashing off half-baked results.
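For anyone curious what that paper actually recommends in practice: roughly, report the interquartile mean (IQM) over runs with a bootstrap confidence interval, instead of a plain mean over a handful of seeds. A minimal sketch of that aggregation, assuming `scores` holds one final score per seed (the numbers below are placeholders for illustration, not real results):

```python
import numpy as np
from scipy.stats import trim_mean

def iqm(scores: np.ndarray) -> float:
    # Interquartile mean: drop the top and bottom 25% of runs, average the rest.
    return trim_mean(scores, proportiontocut=0.25)

def bootstrap_ci(scores: np.ndarray, n_boot: int = 2000, alpha: float = 0.05):
    # Percentile bootstrap over seeds for the IQM.
    rng = np.random.default_rng(0)
    stats = [iqm(rng.choice(scores, size=scores.size, replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# e.g. final returns from 50 seeds (placeholder data for illustration):
scores = np.random.default_rng(1).normal(100.0, 15.0, size=50)
lo, hi = bootstrap_ci(scores)
print(f"IQM = {iqm(scores):.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```

The authors also released the rliable library for this, which additionally does stratified bootstrapping across tasks; the sketch above is just the single-task version.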
Edit: That same supervisor, by the way, had a paper published that contained a bug: sampling was not implemented quite the way it was described in the paper. When I brought attention to this, since my project was based on that piece of code, instead of thanking me for spotting the bug he argued that, in his opinion, it shouldn't make a difference. That was shocking.