Submitted by orangelord234 t3_11129cq in MachineLearning
For a dataset, the top result achieves accuracy ~10% better than the second-best paper. But this "SOTA" paper uses methods that just don't seem practical for real applications at all. For example, they use an ensemble of 6 different SOTA models and also train on external data. Of course it performs well, but it's a bit ridiculous because it adds almost nothing of value besides "we combined all the best models and got a better score!".
If I have a novel method that, applied to the second-best paper, improves it by ~5% with the same or better compute efficiency but still falls short of the SOTA method, is it still good research worth submitting to conferences? It's also 40% above the baseline model.
I would think so, because it's a decent improvement (with an interesting motivation + method) over prior work while keeping the model reasonable. Would reviewers agree, or would they just see that it isn't better than SOTA and reject it for that reason alone?
Pyramid_Jumper t1_j8cuxdp wrote
Yes, of course. If the research is novel and you believe the methods are interesting and/or of value, then you should definitely seek publication. The goal of research is not to develop SOTA models, but to expand our knowledge in a particular area.
Yes, developing a SOTA method is a great way to get published, but laying the groundwork for other methods and exploring new ideas are crucial parts of ML research too.