Overview

Diffusion models have emerged as powerful tools for various computer vision tasks, showcasing their ability to capture intricate patterns and generate realistic content. Let's take a look at the tasks these models support. The common thread among them is using a diffusion model to capture the distribution of its training data and propagate that information into new images, offering promising solutions for a wide range of image generation and manipulation tasks in computer vision.

Unconditional image generation

Unconditional image generation with a diffusion model involves training the model to generate new images without conditioning on any specific input information. In other words, the model learns to capture the underlying distribution of the entire dataset and can then produce diverse, realistic samples from that distribution. This is the simplest diffusion task: a denoising diffusion probabilistic model (DDPM) receives nothing but random noise as input and generates images with no additional conditioning signal.
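To make this concrete, here is a minimal sketch using Hugging Face's diffusers library (an assumption; the same idea applies to any DDPM implementation) that samples an image from a pretrained unconditional DDPM. The google/ddpm-cat-256 checkpoint is one publicly available example; note that nothing is passed to the pipeline except a noise seed and a step count.

```python
# A minimal sketch of unconditional image generation, assuming the
# Hugging Face `diffusers` library is installed and the public
# "google/ddpm-cat-256" DDPM checkpoint is used as an example.
import torch
from diffusers import DDPMPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained DDPM; it has learned the distribution of its
# training set (cat photos) and needs no conditioning input.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to(device)

# Sampling starts from pure Gaussian noise and iteratively denoises it;
# the only inputs are a random seed and the number of denoising steps.
generator = torch.Generator(device=device).manual_seed(0)
image = pipeline(num_inference_steps=1000, generator=generator).images[0]
image.save("unconditional_sample.png")
```

Because there is no prompt, label, or reference image, each new seed simply yields a different sample from the learned data distribution.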
