Submitted by aozorahime t3_y2nyn5 in MachineLearning
Hi, I am a master's student working on GANs for speech enhancement. I must say I have learned a lot from this topic, and I had to restudy probability to understand generative models and the like. I am curious whether generative models such as GANs are still a good topic for a Ph.D., since I have recently been exposed to newer approaches such as diffusion models. BTW, I am also interested in the information bottleneck in deep learning. Any suggestions would be helpful :) thanks
M4xM9450 t1_is480yg wrote
Diffusion models seem to be taking somewhat of a lead over GANs because they are more stable to train. They cover the same generative applications as GANs, with the downside of being "slower" at sampling (I need to research that claim a bit more for the details).
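The "slower" point above comes down to the number of network forward passes per generated sample: a GAN generator produces a sample in one pass, while a diffusion model runs an iterative denoising loop. A minimal sketch of that difference, using toy stand-in functions (`gan_generator` and `diffusion_denoiser` are hypothetical placeholders, not real trained models):

```python
import numpy as np

def gan_generator(z):
    # Stand-in for a trained GAN generator: one forward pass, latent -> sample.
    return np.tanh(z)

def diffusion_denoiser(x, t):
    # Stand-in for a trained denoising network at timestep t.
    return 0.1 * x

def sample_gan(dim=16):
    z = np.random.randn(dim)
    return gan_generator(z), 1          # (sample, network calls used)

def sample_diffusion(dim=16, T=50):
    x = np.random.randn(dim)            # start from pure noise
    for t in reversed(range(T)):
        x = x - diffusion_denoiser(x, t)  # one denoising step per timestep
    return x, T                          # (sample, network calls used)

_, gan_calls = sample_gan()
_, diff_calls = sample_diffusion(T=50)
print(gan_calls, diff_calls)  # 1 vs 50: the per-sample cost gap
```

Real diffusion models use hundreds to thousands of steps (or fewer with fast samplers such as DDIM), but the cost structure is the same: sampling cost scales with the number of denoising iterations, whereas a GAN pays a fixed one-pass cost.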