Johannes S. Fischer* · Ming Gui* · Pingchuan Ma* · Nick Stracke · Stefan A. Baumann · Vincent Tao Hu · Björn Ommer
CompVis Group, LMU Munich
* denotes equal contribution
⇒ code coming soon!
Recently, there has been tremendous progress in visual synthesis and the underlying generative models. Diffusion models (DMs) stand out in particular, but flow matching (FM) has lately also garnered considerable interest. While DMs excel at producing diverse images, they suffer from long training and slow generation, and latent diffusion alleviates these issues only partially. Conversely, FM offers faster training and inference but exhibits less diversity in synthesis. We demonstrate that introducing FM between the diffusion model and the convolutional decoder of a latent diffusion model (LDM) enables high-resolution image synthesis with reduced computational cost and model size. The diffusion model can then efficiently provide the necessary generation diversity at low resolution, while FM compensates for the lower resolution by mapping the small latent space to a higher-dimensional one. Subsequently, the convolutional decoder of the LDM maps these latents to high-resolution images. By combining the diversity of DMs, the efficiency of FM, and the effectiveness of convolutional decoders, we achieve state-of-the-art high-resolution image synthesis.
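The three-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: every module here is a hypothetical stub standing in for a learned network, and all shapes, factors, and step counts are assumed for the example. The key point it shows is the shape flow, with the FM stage realized as simple Euler integration of an ODE in latent space.

```python
import numpy as np

def diffusion_sample(rng, latent_shape=(4, 32, 32)):
    """Stub for the latent diffusion model: returns a diverse low-res latent."""
    return rng.standard_normal(latent_shape)

def velocity_field(z, t):
    """Stub for the learned FM velocity v(z, t); placeholder dynamics only."""
    return -z * (1.0 - t)

def flow_matching_upsample(z_low, target_hw=(128, 128), steps=10):
    """Map the small latent to a higher-resolution latent by integrating
    dz/dt = v(z, t) with Euler steps, starting from a nearest-neighbor
    upsampling of the low-res latent."""
    c, h, w = z_low.shape
    th, tw = target_hw
    z = z_low.repeat(th // h, axis=1).repeat(tw // w, axis=2)
    dt = 1.0 / steps
    for i in range(steps):
        z = z + dt * velocity_field(z, i * dt)
    return z

def decode(z_high):
    """Stub for the convolutional decoder: latent -> RGB (assumed 8x factor)."""
    return z_high[:3].repeat(8, axis=1).repeat(8, axis=2)

rng = np.random.default_rng(0)
z_low = diffusion_sample(rng)           # (4, 32, 32): diverse, cheap to sample
z_high = flow_matching_upsample(z_low)  # (4, 128, 128): FM fills in resolution
image = decode(z_high)                  # (3, 1024, 1024): final high-res image
print(z_low.shape, z_high.shape, image.shape)
```

Note how the expensive, diversity-providing diffusion step runs only at the small latent size, while the resolution increase happens through the cheaper FM integration and the decoder.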
Figure: samples synthesized with our model.
Super-resolution samples from the LHQ dataset. Left: low-resolution ground-truth image, bilinearly upsampled. Right: high-resolution image upsampled in latent space with our CFM model.
Figure: upsampling results.