Submitted by carlml t3_xw0pql in MachineLearning
Suppose you have an image and pass it through a VAE; it gives you back a similar image. What happens if you feed that output back into the VAE as input, over and over? Is there any theory discussing this? Has anyone done experiments or written papers on it?
I'll be writing some code to try this, but if anyone knows anything about it, please share.
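Roughly, I'm picturing a loop like the one below. This is just a minimal sketch assuming a trained PyTorch VAE with `encode`/`decode` methods; the interface is hypothetical, so adapt the calls to your own model.

```python
import torch

@torch.no_grad()
def iterate_vae(vae, x, n_steps=10, use_mean=True):
    """Feed a VAE's reconstruction back in as its next input.

    `vae` is a hypothetical trained model where `encode` returns the
    posterior mean and log-variance and `decode` maps latents back to
    images. Returns the list of intermediate images.
    """
    vae.eval()
    trajectory = [x]
    for _ in range(n_steps):
        mu, logvar = vae.encode(x)
        # Using the posterior mean makes each step deterministic;
        # use_mean=False resamples z and injects fresh noise each step.
        z = mu if use_mean else mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x = vae.decode(z)  # the reconstruction becomes the next input
        trajectory.append(x)
    return trajectory
```

One choice that presumably matters: with the posterior mean the iteration is a deterministic map, so it could settle into fixed points, whereas resampling z each step makes it a stochastic process that may drift instead.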
Disclaimer: this is driven purely by curiosity.
MagentaBadger t1_ir4zkun wrote
Here is a paper you may be interested in: “Perceptual Generative Autoencoders”. The authors leverage the concept you describe to improve training and generative performance. 🙂