Stable Diffusion

Wednesday 10th August, 2022 - Bruce Sterling

Stability AI and our collaborators are proud to announce the first stage of the release of Stable Diffusion to researchers via this form. Once you are granted access, the model weights are hosted by our friends at Hugging Face. The code is available here and the model card here. We are working together towards a public release soon.

This effort has been led by Patrick Esser from Runway and Robin Rombach from the CompVis lab at Heidelberg University (now the Machine Vision & Learning research group at LMU), together with support from communities at EleutherAI, LAION and our own generative AI team.

Stable Diffusion is a text-to-image model that will empower billions of people to create stunning art within seconds. It is a breakthrough in speed and quality, and it can run on consumer GPUs. You can see some of the amazing output created by this model, without any pre- or post-processing, on this page.

The model itself builds upon the work of the team at CompVis and Runway on their widely used latent diffusion model, combined with insights from the conditional diffusion models of our lead generative AI developer Katherine Crowson, DALL-E 2 by OpenAI, Imagen by Google Brain, and many others. We are delighted that AI media generation is a cooperative field and hope it can continue this way, bringing the gift of creativity to all.