
BrotherAmazing t1_jda5jna wrote

Either it’s an easy problem where 98% - 100% accuracy on samples of this size is just typical and not really worth publishing, or (not mutually exclusive) the study is flawed.

One could get a totally independent data set of FNA images with these features extracted from different patients in different years, etc., and run their random forest on those. If it still gets 98% - 100% accuracy, then this is not a hard problem (the feature engineering might have been hard; not taking away from that if so!). If it fails miserably, or just scores way lower than 100%, you know the study was flawed. See the rough sketch below for what I mean by that check.
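Just to be concrete, the external check is nothing fancy; a minimal sketch, assuming the trained forest was saved somewhere and the new cohort’s features line up column-for-column (the file names here are made up, not from the paper):

```python
# Sketch: external validation of an already-trained random forest.
# Assumes "external_fna_features.csv" holds the same engineered features,
# extracted from different patients in different years, with a binary
# "malignant" label column. File/artifact names are hypothetical.
import pandas as pd
from joblib import load
from sklearn.metrics import accuracy_score, roc_auc_score

external = pd.read_csv("external_fna_features.csv")   # hypothetical cohort
X_ext = external.drop(columns=["malignant"])
y_ext = external["malignant"]

rf = load("trained_random_forest.joblib")             # hypothetical saved model
pred = rf.predict(X_ext)

print("External accuracy:", accuracy_score(y_ext, pred))
print("External ROC AUC:", roc_auc_score(y_ext, rf.predict_proba(X_ext)[:, 1]))
```

If those numbers stay near 100% on a cohort the authors never touched, fine, it’s an easy problem; if they crater, the original evaluation leaked something.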

There are so many ML neophytes making “rookie mistakes” with this stuff and not fully grasping basic concepts that I think you always need a totally new, independent test set the authors never had access to in order to really test a result like this. Honestly, that’s a good idea even for experts.

The paper’s conclusion is likely wrong either way; i.e., that Random Forests are “superior” for this application. Did they get an expert in XGBoost, neural networks, etc., and put as much time and effort into those techniques, using the same training and test sets, to see whether those also hit 99% - 100%? It didn’t appear so from my cursory glance. Something like the comparison sketched below would be the bare minimum.
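A minimal sketch of that comparison, not the authors’ actual pipeline: it uses sklearn’s built-in WDBC breast-cancer FNA features as a stand-in for whatever feature set the paper used, and sklearn’s GradientBoostingClassifier in place of XGBoost just to keep it to one dependency. The models, hyperparameters, and 5-fold setup are my own placeholders.

```python
# Sketch: give each model family the same splits and a comparable tuning
# budget before calling any one of them "superior".
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# WDBC FNA features as a stand-in for the paper's engineered features.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "mlp": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
    ),
}

# Identical stratified folds for every model, so the comparison is apples to apples.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

If the boosted trees and the small neural net land in the same 98% - 100% band on the same splits, the “Random Forests are superior” claim doesn’t hold up.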
