Submitted by AlmightySnoo t3_117iqtp in MachineLearning
One of the things in current publications that completely irritates me is people forcing the use of GANs where they are neither needed nor suited, just to ride the hype of generative AI.
These authors typically have samples (x_1, y_1 = phi(x_1)), ..., (x_n, y_n = phi(x_n)) of a random pair (X, Y = phi(X)), where phi is some unknown target function (i.e., in fancy-pants math, we know that Y is sigma(X)-measurable). The direct way to solve this is to treat it as what it naturally is, a regression problem, and use your usual ML/DL toolkit. These authors, however, think they can make the problem look sexier by introducing GANs. For instance, they'll train a GAN that takes X as input and, via the discriminator, push the generator to output something with the same distribution as Y = phi(X). Some will even add random noise z, which has nothing to do with X, to the generator's inputs, despite knowing that X alone already fully determines Y. GANs would be useful if we didn't have joint observations of X and Y, but that is not the case here.
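To make the point concrete, here is a minimal sketch (the quadratic phi and the polynomial feature map are hypothetical stand-ins, not from any of the papers in question): when y_i = phi(x_i) with no extra randomness, plain least-squares regression on the joint observations recovers phi directly, so there is nothing left for a generator or a noise input z to model.

```python
# Minimal sketch: with joint observations (x_i, y_i = phi(x_i)) and a
# deterministic phi, ordinary regression recovers phi -- no GAN needed.
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Stand-in for the "unknown" target function (hypothetical example).
    return 3.0 * x**2 + 2.0 * x + 1.0

x = rng.uniform(-1.0, 1.0, size=200)  # samples x_1, ..., x_n
y = phi(x)                            # y_i = phi(x_i), fully determined by x_i

# Regress y on polynomial features [1, x, x^2] via ordinary least squares.
A = np.stack([np.ones_like(x), x, x**2], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(coef, 6))  # recovers [1, 2, 3] up to numerical error
```

The same logic carries over to a deep net fit with an MSE loss: the conditional distribution of Y given X is a point mass at phi(X), so matching distributions (the GAN objective) buys nothing over matching values.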
One of the papers I have in mind is this one: https://openreview.net/pdf?id=SDD5n1888
How on earth are these papers getting accepted? To me this is essentially plagiarism of existing work (physics-informed NNs in that case), with a totally useless layer (the GAN) bolted on to make it look like a novel approach. That paper is only one of many cases. I know a professor who actively uses the same trick to churn out cheap articles: he takes an old paper found online and replaces its standard regression NN with a totally unjustified GAN. IMO, reviewers at these journals/conferences need to be more vigilant about this kind of plagiarism/low-effort submission.
Borrowedshorts t1_j9cy0ui wrote
It's not plagiarism. Novelty and plagiarism are two separate concepts.