
DMLearn t1_j7pq8wc wrote

The model is trained by being rewarded for fooling a second model that tries to distinguish real images from fake ones. So no, it won't be perfect, but it will be good enough to trick a detector the vast majority of the time, because that is literally part of the training. And not just a small part: it is the central tenet of training and optimizing generative models, hence generative ADVERSARIAL networks.
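To make the objective concrete, here is a minimal sketch of an adversarial training step, assuming PyTorch and toy MLP networks (the names `G`, `D`, and `train_step` are illustrative, not any particular model):

```python
# Minimal GAN training sketch: the generator's loss is literally
# "how well did I fool the discriminator".
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Toy MLPs; real generative models are far larger, but the objective is the same.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)

    # Discriminator step: learn to separate real from fake.
    fake = G(torch.randn(batch_size, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(batch_size, 1)) + \
             bce(D(fake), torch.zeros(batch_size, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: rewarded only for making D label fakes as "real".
    fake = G(torch.randn(batch_size, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch_size, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stand-in "real" data just to make the sketch runnable.
print(train_step(torch.randn(32, data_dim)))
```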

1

nutpeabutter t1_j7rxvb8 wrote

Your argument falls apart when you realize that there are training artifacts. Ever wonder why FID scales inversely with model size?
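(For reference, FID compares the statistics of real and generated images in a feature space; lower means the distributions match more closely. A minimal sketch of the formula, assuming feature vectors have already been extracted; the Inception network normally used for those features is omitted here:)

```python
# Minimal FID sketch: Frechet distance between Gaussian fits of real vs.
# generated feature vectors. Random data stands in for Inception features.
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # Matrix square root of the covariance product; drop tiny imaginary parts.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2)
                 + np.trace(cov_r + cov_f - 2 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(size=(500, 64)), rng.normal(0.1, 1.0, size=(500, 64))))
```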

−1