pilooch t1_j17wizo wrote
Reply to comment by Seankala in [D] Hype around LLMs by Ayicikio
OK, but don't miss the key element here: DDPMs capture distribution modes with very high precision, in a supervised manner. That's a massive improvement!
pilooch t1_j0tjsw7 wrote
Hello, the go-to tutorial I recommend to colleagues and customers/researchers is the one from CVPR 2022: https://cvpr2022-tutorial-diffusion-models.github.io/ Some skip the score-based presentations and/or start from the applications instead. Very informative in all cases!
pilooch t1_iy63zvp wrote
Reply to [D] Informal meetup at NeurIPS next week by tlyleung
To let interested people know: the meetup is confirmed at the Rusty Nail on Tuesday after 9pm, from the host tlyleung.
pilooch t1_ixm5ocg wrote
Reply to [D] Informal meetup at NeurIPS next week by tlyleung
Hi, sure, I'll join! There was a fun one back in 2016 :)
pilooch t1_iwybbh6 wrote
Reply to [D] My embarrassing trouble with inverting a GAN generator. Do GAN questions still get answered? ;-) by _Ruffy_
Hey there, this is a truly difficult problem. My colleagues and I train very precise GANs on a daily basis. We gave up on inversion and latent control a couple of years ago, and we actually don't need it anymore.
My rough take on this is that the GAN latent space is too compressed/folded for low-level control. When fine-tuning image-to-image GANs, for instance, we do get a certain fine control over the generator, though we 'see' it snap to one 'mode' or the other. In other words, we witness a lack of smoothness that may implicitly prevent granular control.
Haven't looked at the theoretical side of this in a while though, so you may well know better...
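For reference, the classic optimization-based inversion we gave up on can be sketched as below. The tiny generator `G` here is a hypothetical stand-in for a trained model (not our actual networks); gradient descent on the latent `z` tends to get stuck precisely because of the folded latent space described above.

```python
import torch

# Hypothetical tiny generator standing in for a pretrained GAN (z -> image).
torch.manual_seed(0)
G = torch.nn.Sequential(
    torch.nn.Linear(8, 32), torch.nn.Tanh(), torch.nn.Linear(32, 64)
)

def invert(G, target, steps=200, lr=0.05):
    # Classic optimization-based inversion: gradient descent on z
    # to make G(z) match the target in pixel space.
    z = torch.zeros(1, 8, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), target)
        loss.backward()
        opt.step()
    return z.detach(), loss.item()

# A target known to lie on the generator manifold; real images are far harder.
target = G(torch.randn(1, 8)).detach()
z_hat, err = invert(G, target)
```

On real images, people usually add perceptual losses and/or an encoder initialization, but the snapping behavior mentioned above still shows up.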
pilooch t1_isiuf8z wrote
Reply to comment by ThomaSinn in [D] Is the GAN architecture currently old-fashioned? by teraRockstar
We use https://github.com/jolibrain/joliGAN which is a library for image-to-image translation with additional "semantic" constraints, i.e. for when there's a need to preserve labels, physics, or anything else between the two domains. This lib aggregates and improves on existing works.
If you are looking for more traditional noise -> xxx GANs, go for https://github.com/autonomousvision/projected_gan/. Another recent work is https://github.com/nupurkmr9/vision-aided-gan.
The key element in GAN convergence is the discriminator. JoliGAN above defaults to multiple discriminators by combining and improving on the works above, ensuring fast early convergence and stability, while the semantic constraints narrow the path to relevant modes.
We've found that transformers as generators have interesting properties on some tasks and converge well with a ViT-based projected discriminator.
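The multi-discriminator idea can be sketched as follows (illustrative names and toy networks, not JoliGAN's actual API): every discriminator scores the fakes, and the generator aggregates a non-saturating loss over all of them, so no single discriminator dominates training.

```python
import torch

# Two illustrative discriminators at different scales; a real projected
# discriminator would score frozen pretrained features (e.g. from a ViT),
# which we stub out here with plain linear heads on 1-D "images".
d_full = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 1))
d_pooled = torch.nn.Sequential(
    torch.nn.AvgPool1d(2), torch.nn.Flatten(), torch.nn.Linear(32, 1)
)

def generator_loss(fake, discriminators):
    # Non-saturating GAN loss summed over all discriminators:
    # the generator pushes every discriminator's logits toward 'real'.
    loss = 0.0
    for d in discriminators:
        logits = d(fake)
        loss = loss + torch.nn.functional.softplus(-logits).mean()
    return loss

fake = torch.randn(4, 1, 64)  # batch of fake samples, (N, C, L) for the pool
loss = generator_loss(fake, [d_full, d_pooled])
```

The discriminator side mirrors this with real/fake terms per discriminator; the ensemble is what buys the early-convergence stability mentioned above.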
pilooch t1_isiqaqv wrote
Some of my colleagues and I work daily with GANs in industry-grade applications.
My current understanding is that, due to the explicit supervision, DDPMs do not directly apply to unpaired datasets, which is where GANs shine. There are a few papers about this though, so it should emerge as well. Bear in mind that in industry, some datasets are unpaired by the problem's very nature. DDPMs are insanely good as soon as the dataset is paired.
GAN generators are very controllable at inference time, including in real time. DDPMs will follow, but aren't quite there yet AFAIK.
Another quick observation: GANs are more difficult to train, but modern implementations and libraries do exhibit fast and accurate convergence.
pilooch t1_j59g48g wrote
Reply to comment by samb-t in [D] Question about using diffusion to denoise images by CurrentlyJoblessFML
Absolutely, I second this: Palette is what you are looking for. We have a modified version in JoliGAN, with PRs for various conditionings, including masks and sketches, cf. https://github.com/jolibrain/joliGAN/pull/339
Palette-like DDPMs work exceptionally well (we have industrial-grade use cases), but a paired dataset is required; that's the number one drawback I see atm. My understanding is that unpaired diffusion remains a research field, with at least one work on it (UNIT-DDPM) but no known public implementation.
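To make the paired-data requirement concrete, here is a minimal sketch of Palette-style conditioning (my reading of the approach, not JoliGAN's actual code): the conditioning image (mask, sketch, source photo) is concatenated channel-wise with the noisy target, so the denoiser sees the pair at every diffusion step, which is exactly why unpaired data doesn't fit directly.

```python
import torch

class PairedDenoiser(torch.nn.Module):
    # Toy denoiser: a real Palette model uses a full U-Net with timestep
    # embeddings; a single conv is enough to show the conditioning pattern.
    def __init__(self, channels=3):
        super().__init__()
        # 2 * channels in: noisy target concatenated with the condition image.
        self.net = torch.nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, noisy_target, condition):
        return self.net(torch.cat([noisy_target, condition], dim=1))

x = torch.randn(2, 3, 16, 16)     # target images
cond = torch.randn(2, 3, 16, 16)  # their paired masks/sketches
noise = torch.randn_like(x)
noisy = x + noise                 # stand-in for the forward diffusion step
pred = PairedDenoiser()(noisy, cond)
loss = torch.nn.functional.mse_loss(pred, noise)  # noise-prediction objective
```

With unpaired domains there is no `(x, cond)` correspondence to feed this loss, hence the need for works like UNIT-DDPM.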