Diffusion models have emerged as powerful generative tools, rivaling GANs in sample quality and achieving likelihoods competitive with autoregressive models. A subset of these models, exemplified by DDIMs, exhibits an inherent asymmetry: they are trained over the full diffusion process of many timesteps, yet can generate samples by visiting only a short subsequence of those steps.
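Concretely, a model trained with 1000 diffusion steps can be sampled with far fewer. A minimal sketch of uniform ("skip"-type) step selection, in the spirit of what the `--timesteps` flag below controls (the values here are illustrative only):

```python
# Illustrative only: uniform selection of sampling steps.
T, steps = 1000, 50            # training timesteps vs. sampling timesteps
skip = T // steps
seq = list(range(0, T, skip))  # the 50 timesteps the sampler actually visits
```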
The code has been tested on PyTorch 1.6.
Training is exactly the same as in DDPM; run:
python main.py --config {DATASET}.yml --exp {PROJECT_PATH} --doc {MODEL_NAME} --ni
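For example, assuming the config file names from the upstream DDIM repo (e.g. `cifar10.yml`) and placeholder paths of your own choosing, a training run might look like:

python main.py --config cifar10.yml --exp logs --doc cifar10_model --ni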
To sample from the trained model (here, for FID evaluation), run:

python main.py --config {DATASET}.yml --exp {PROJECT_PATH} --doc {MODEL_NAME} --sample --fid --timesteps {STEPS} --eta {ETA} --ni
where
- `ETA` controls the scale of the variance (0 is DDIM, and 1 is one type of DDPM);
- `STEPS` controls how many timesteps are used in the process;
- `MODEL_NAME` finds the pre-trained checkpoint according to its inferred path.
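For context, `ETA` enters through the noise scale of the generalized DDIM/DDPM update of Song et al. (2020): with eta = 0 each step is deterministic (DDIM), and eta = 1 recovers one type of DDPM sampler. A minimal sketch of a single update step (not the repo's exact code; argument names are illustrative):

```python
import torch

def generalized_step(x_t, eps, a_t, a_prev, eta):
    """One step of the generalized DDIM/DDPM update (Song et al., 2020).

    x_t    -- current noisy sample
    eps    -- model's noise prediction eps_theta(x_t, t)
    a_t    -- cumulative alpha (alpha-bar) at the current timestep
    a_prev -- cumulative alpha at the previous (earlier) timestep
    eta    -- variance scale: 0 is DDIM, 1 is one type of DDPM
    """
    a_t, a_prev = torch.as_tensor(a_t), torch.as_tensor(a_prev)
    # sigma_t: eta scales how much fresh noise each step injects
    sigma = eta * torch.sqrt((1 - a_prev) / (1 - a_t) * (1 - a_t / a_prev))
    x0_pred = (x_t - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)  # predicted x_0
    dir_xt = torch.sqrt(1 - a_prev - sigma**2) * eps               # direction to x_t
    return torch.sqrt(a_prev) * x0_pred + dir_xt + sigma * torch.randn_like(x_t)
```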
If you want to use the DDPM pretrained model:
python main.py --config {DATASET}.yml --exp {PROJECT_PATH} --use_pretrained --sample --fid --timesteps {STEPS} --eta {ETA} --ni
The `--use_pretrained` option will automatically load the model according to the dataset.
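For instance, sampling CIFAR-10 from the pretrained DDPM weights with 50 deterministic DDIM steps might look like this (the config name and paths are assumptions following the upstream repo's conventions):

python main.py --config cifar10.yml --exp logs --use_pretrained --sample --fid --timesteps 50 --eta 0 --ni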
If you want to use the DDPM version with the larger variance choice (β_t itself rather than the smaller posterior variance β̃_t), use the `--sample_type ddpm_noisy` option.
To sample image interpolations, use the `--interpolation` option instead of `--fid`.
To save the sequence of intermediate images that lead to a sample, use the `--sequence` option instead.
Both of the above cases contain some lines hard-coded for producing the images, so modify them according to your needs.
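As background, the interpolation experiment in the DDIM paper interpolates between two Gaussian latents x_T with spherical linear interpolation (slerp) and then decodes each intermediate latent with the deterministic (eta = 0) sampler. A minimal, self-contained sketch (the function and shapes are illustrative, not the repo's exact code):

```python
import torch

def slerp(z1, z2, alpha):
    """Spherical linear interpolation between two noise latents."""
    theta = torch.acos(torch.sum(z1 * z2) / (torch.norm(z1) * torch.norm(z2)))
    return (torch.sin((1 - alpha) * theta) * z1
            + torch.sin(alpha * theta) * z2) / torch.sin(theta)

# Eight latents between two random seeds; each would then be decoded
# by the deterministic sampler into one frame of the interpolation.
z1, z2 = torch.randn(3, 32, 32), torch.randn(3, 32, 32)
latents = [slerp(z1, z2, a) for a in torch.linspace(0, 1, 8)]
```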
This implementation is based on / inspired by:
- https://github.com/ermongroup/ddim (the DDIM repo).