Submitted by TiredOldCrow t3_y7mwmw in MachineLearning
gravitas_shortage t1_isvkgcr wrote
Peer review is in a death spiral right now; it's not going to be a solution long-term. I expect the only viable option will be adversarial AIs trained to detect fake papers.
blablanonymous t1_iswe0za wrote
Won’t you need a labeled training set to make that work?
stevewithaweave t1_isy1dji wrote
I think you generate your own fake papers to serve as the labels, and mix them in with real papers.
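Very roughly, something like this (a minimal sketch; the abstracts are toy placeholders and a real detector would need far better features than TF-IDF):

```python
# Minimal sketch: label generated fakes 0 and real papers 1, then train a classifier.
# The abstract lists below are placeholders standing in for scraped real papers
# and model-generated fakes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

real_abstracts = [
    "We propose a convolutional architecture for protein structure prediction ...",
    "This paper studies generalization bounds for overparameterized linear models ...",
]
fake_abstracts = [
    "Our novel quantum blockchain paradigm leverages synergistic meta-gradients ...",
    "The proposed framework achieves infinite accuracy on every benchmark considered ...",
]

texts = real_abstracts + fake_abstracts
labels = [1] * len(real_abstracts) + [0] * len(fake_abstracts)  # 1 = real, 0 = fake

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)
print(detector.predict(["A rigorous analysis of stochastic gradient descent ..."]))
```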
blablanonymous t1_isystb6 wrote
But wouldn't you need a set of real papers that you're actually very confident are real?
stevewithaweave t1_isyt7es wrote
Anything before 2005 lol
blablanonymous t1_isytbad wrote
🤣😂
the_mighty_skeetadon t1_iszprhg wrote
That can't be the only method, because if your model for generating fake papers differs significantly from somebody else's model, you will be both unable to detect those fake papers and unable to detect that you're failing.
Better is to have fake papers that were rejected by journals labeled as such, and to synthetically generate more fake papers with a wide variety of known approaches.
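In practice that could just mean pooling fake sources, e.g. (rough sketch; every source here is a hypothetical placeholder):

```python
# Sketch: pool known fakes (papers journals actually caught and rejected) with
# synthetic fakes from several different generators, so the detector isn't
# tuned to any single generation method. All sources are placeholders.
def load_journal_rejected_fakes():
    return ["Abstract of a paper rejected as machine-generated ..."]

def llm_fake():
    return "Synthetic abstract sampled from a large language model ..."

def template_fake():
    return "Synthetic abstract assembled from paper-mill style templates ..."

fake_papers = load_journal_rejected_fakes()
for make_fake in (llm_fake, template_fake):
    fake_papers += [make_fake() for _ in range(100)]

fake_labels = [0] * len(fake_papers)  # 0 = fake; real papers (label 1) get mixed in as before
```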
stevewithaweave t1_iszusz3 wrote
I think the original commenter was referring to an architecture similar to GANs. I agree that including known fake papers as examples would improve the model, but it isn't required.
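Very loosely, the GAN analogy would look like this (toy sketch over continuous dummy features, since GANs on raw text are much harder to train; all shapes and numbers are made up):

```python
# Toy GAN-style loop: a generator produces "paper-like" feature vectors and a
# discriminator learns to tell them from real ones; each side trains against the other.
import torch
import torch.nn as nn

feat_dim, noise_dim, batch = 64, 16, 64
real_features = torch.randn(1000, feat_dim) + 2.0  # stand-in for embeddings of real papers

generator = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
discriminator = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # Discriminator step: real features labeled 1, generated features labeled 0.
    real = real_features[torch.randint(0, len(real_features), (batch,))]
    fake = generator(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call generated features "real".
    fake = generator(torch.randn(batch, noise_dim))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```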
visarga t1_iszuegi wrote
Rather than detecting fakes, I'd prefer a model that can generate and implement papers. I bet there's a ton of samples to train on. Close the loop on AI self-improvement.