Submitted by redlow0992 in MachineLearning
For the papers we have submitted in recent years, there has been a significant increase in the number of reviewers whose only complaint is that the paper does not follow a "hip" version of the research topic. They don't care about the results or the merit of the work; their problem is that our work does not follow the trend. It feels like there is this subset of reviewers who see anything that is more than a year old as "out of date" and a reason for rejection.
Have we been unlucky with our reviewer bingo recently, or is this the case for others as well?
respeckKnuckles wrote
"Not using gpt4" is going to be in all NLP conference paper reviews for the next six months.