Submitted by Beautiful-Gur-9456 t3_124jfoa in MachineLearning
Hey all!
Recently, researchers from OpenAI proposed consistency models, a new family of generative models. They let us generate high-quality images in a single forward pass, just like good old GANs and VAEs.
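To make the "single forward pass" idea concrete, here's a rough toy sketch of consistency-model sampling. The model is a function f(x, t) that maps a noisy sample at noise level t directly to a clean estimate, so one call already yields a sample; extra steps re-noise and denoise again to improve quality. Note that `consistency_fn` below is a hypothetical stand-in (the real thing is a trained network), and the re-noising schedule is simplified:

```python
import numpy as np

def consistency_fn(x_t, t):
    # Stand-in for a trained consistency model f(x, t): maps a noisy
    # sample at noise level t directly to a clean estimate.
    # (Hypothetical toy: just shrink the sample toward zero.)
    return x_t / (1.0 + t)

def multistep_sample(shape, ts, rng):
    # ts: decreasing noise levels, e.g. [80.0, 20.0, 5.0, 1.0]
    x = rng.standard_normal(shape) * ts[0]  # start from pure noise
    x0 = consistency_fn(x, ts[0])           # one call = one sample already
    for t in ts[1:]:
        # re-noise the current estimate to level t, then denoise again
        x = x0 + rng.standard_normal(shape) * t
        x0 = consistency_fn(x, t)
    return x0

rng = np.random.default_rng(0)
sample = multistep_sample((32, 32, 3), [80.0, 20.0, 5.0, 1.0], rng)
print(sample.shape)  # (32, 32, 3)
```

The trade-off is the same one the pipeline exposes: one step is fast, more steps trade speed for sample quality.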
I have been working on it and found it definitely works! You can try it with diffusers:
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "consistency/cifar10-32-demo",
    custom_pipeline="consistency/pipeline",
)

pipeline().images[0]         # super fast single-step generation!
pipeline(steps=5).images[0]  # more steps for better sample quality
It would be fascinating if we could train these models on different datasets and share our results and ideas! So I've made a simple library called consistency that makes it easy to train your own consistency models and publish them. You can check it out here:
https://github.com/junhsss/consistency-models
I would appreciate any feedback you could provide!
noraizon t1_je10328 wrote
x0-parametrization has been used for some time now. IMO, nothing new under the sun. Maybe it's something else I'm not seeing.