many people asked, so.. #771

Closed · wants to merge 4 commits

15 changes: 13 additions & 2 deletions .github/FUNDING.yml
@@ -1,2 +1,13 @@
ko_fi: hlky_
github: [hlky, altryne]
# These are supported funding model platforms

github: ['neonsecret'] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: neonsecret
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: #[ 'https://paypal.me/' ]
6 changes: 3 additions & 3 deletions .github/ISSUE_TEMPLATE/bug_report.yml
@@ -1,7 +1,7 @@
name: 🐞 Bug Report
description: File a bug report
title: "[Bug]: "
labels: ["bug", "triage"]
labels: [ "bug", "triage" ]
assignees:
- octocat
body:
@@ -40,7 +40,7 @@ body:
- type: dropdown
id: os
attributes:
label: Where are you running the webui?
label: Where are you running the webui?
multiple: true
options:
- Windows
@@ -52,7 +52,7 @@
attributes:
label: Custom settings
description: If you are running the webui with specific settings, please paste them here for reference (like --nitro)
render: shell
render: shell
- type: textarea
id: logs
attributes:
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/config.yml
@@ -6,6 +6,6 @@ contact_links:
- name: Feature Request, Question or Suggestion
url: https://github.com/hlky/stable-diffusion-webui/discussions
about: Please create a discussion and see if folks have already solved it
- name: Colab version specific bug?
- name: Colab version specific bug?
url: https://github.com/altryne/sd-webui-colab/issues/new/choose
about: Please open colab related bugs here
11 changes: 0 additions & 11 deletions .idea/.gitignore

This file was deleted.

229 changes: 140 additions & 89 deletions README.md

Large diffs are not rendered by default.

115 changes: 75 additions & 40 deletions Stable_Diffusion_v1_Model_Card.md
@@ -1,13 +1,20 @@
# Stable Diffusion v1 Model Card
This model card focuses on the model associated with the Stable Diffusion model, available [here](https://github.com/CompVis/stable-diffusion).

This model card focuses on the model associated with the Stable Diffusion model,
available [here](https://github.com/CompVis/stable-diffusion).

## Model Details

- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [Proprietary](LICENSE)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is
a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text
encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in
the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion)
, [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

@InProceedings{Rombach_2022_CVPR,
@@ -21,9 +28,9 @@ This model card focuses on the model associated with the Stable Diffusion model,

# Uses

## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
## Direct Use

The model is intended for research purposes only. Possible research areas and tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
@@ -33,17 +40,27 @@ tasks include

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
### Misuse, Malicious Use, and Out-of-Scope Use

_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies
in the same way to Stable Diffusion v1_.

The model should not be used to intentionally create or disseminate images that create hostile or alienating
environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or
offensive; or content that propagates historical or current stereotypes.

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

The model was not trained to be factual or true representations of people or events, and therefore using the model to
generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not
limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures,
religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
@@ -58,23 +75,23 @@ Using the model to generate content that is cruel to individuals is a misuse of

- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image
corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without
additional safety mechanisms and considerations.

### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images
that are primarily limited to English descriptions. Texts and images from communities and cultures that use other
languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and
western cultures are often set as the default. Further, the ability of the model to generate content with non-English
prompts is significantly worse than with English-language prompts.

## Training

@@ -84,22 +101,32 @@ The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)

**Training Procedure**
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
Stable Diffusion v1 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in
the latent space of the autoencoder. During training,

- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative
downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via
cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the
UNet.
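
As a rough illustration of the shapes and the objective described in the list above (not the actual training code, which lives in the CompVis repository), the sketch below uses random tensors as stand-ins for the encoder output and the UNet prediction; the factor f = 8 and the 4 latent channels come from the bullets, everything else is assumed.

```python
import torch
import torch.nn.functional as F

# Shapes from the bullets above: images of H x W x 3 map to latents of H/f x W/f x 4, f = 8.
H, W, f, latent_channels = 512, 512, 8, 4
batch = 2

latents = torch.randn(batch, latent_channels, H // f, W // f)  # stand-in for the encoder output
noise = torch.randn_like(latents)                              # noise added during diffusion
noisy_latents = latents + noise                                # real code scales both by the noise schedule

# Stand-in for the UNet's noise prediction; in the real model this is conditioned
# on the ViT-L/14 text embedding via cross-attention.
predicted_noise = torch.randn_like(latents)

# The loss is a reconstruction objective between the added noise and the prediction
# (plain MSE here as a placeholder).
loss = F.mse_loss(predicted_noise, noise)
print(latents.shape, loss.item())
```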

We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`,
which were trained as follows,
We currently provide three checkpoints, `sd-v1-1.ckpt`, `sd-v1-2.ckpt` and `sd-v1-3.ckpt`, which were trained as
follows,

- `sd-v1-1.ckpt`: 237k steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194k steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`.
515k steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and 10\% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
194k steps at resolution `512x512`
on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B
with resolution `>= 1024x1024`).
- `sd-v1-2.ckpt`: Resumed from `sd-v1-1.ckpt`. 515k steps at resolution `512x512` on "laion-improved-aesthetics" (a
subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and
an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score
is estimated using
an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- `sd-v1-3.ckpt`: Resumed from `sd-v1-2.ckpt`. 195k steps at resolution `512x512` on "laion-improved-aesthetics" and
10\% dropping of the text-conditioning to
improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
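
The 10% dropping of the text-conditioning mentioned for `sd-v1-3.ckpt` is what makes classifier-free guidance possible at sampling time: the same UNet can produce both a conditional and an unconditional noise estimate, and the two are blended with a guidance scale. A minimal sketch of that blend, with placeholder tensor shapes and a function name that is not the repository's API:

```python
import torch

def classifier_free_guidance(eps_cond: torch.Tensor,
                             eps_uncond: torch.Tensor,
                             guidance_scale: float) -> torch.Tensor:
    """Blend conditional and unconditional noise predictions (Ho & Salimans, 2022)."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# During training the text conditioning is dropped ~10% of the time, so the model
# also learns the unconditional estimate used here.
eps_cond = torch.randn(1, 4, 64, 64)    # UNet output given the text prompt
eps_uncond = torch.randn(1, 4, 64, 64)  # UNet output given the empty prompt
guided = classifier_free_guidance(eps_cond, eps_uncond, guidance_scale=7.0)
```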


- **Hardware:** 32 x 8 x A100 GPUs
@@ -108,25 +135,32 @@ filtered to images with an original size `>= 512x512`, estimated aesthetics scor
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
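
The batch and learning-rate entries above can be read as follows; the breakdown of 32 x 8 x 2 x 4 into nodes, GPUs per node, per-GPU batch and gradient-accumulation steps is an assumption (only the product of 2048 and the warmup target are stated), and the warmup is taken to be linear:

```python
# Effective batch size: 32 x 8 x 2 x 4 = 2048.
effective_batch = 32 * 8 * 2 * 4
assert effective_batch == 2048

def learning_rate(step: int, base_lr: float = 1e-4, warmup_steps: int = 10_000) -> float:
    """Assumed linear warmup to base_lr over warmup_steps, then constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

print(learning_rate(0), learning_rate(5_000), learning_rate(20_000))
```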

## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
## Evaluation Results

Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
steps show the relative improvements of the checkpoints:

![pareto](assets/v1-variants-scores.jpg)
![pareto](assets/v1-variants-scores.jpg)

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512
resolution. Not optimized for FID scores.

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact

**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
Based on that information, we estimate the following CO2 emissions using
the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented
in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region
were utilized to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
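
The emission figure is consistent with the listed GPU-hours under assumptions about the A100's average power draw and the grid's carbon intensity, neither of which is stated in the card; a back-of-the-envelope version of the calculator's formula:

```python
# Carbon emitted ≈ power consumption x time x carbon intensity of the power grid.
gpu_hours = 150_000        # "Hours used" above
power_kw = 0.25            # assumed average draw per A100 PCIe 40GB (~250 W)
carbon_kg_per_kwh = 0.3    # assumed intensity for the compute region

energy_kwh = gpu_hours * power_kw              # 37,500 kWh
emissions_kg = energy_kwh * carbon_kg_per_kwh  # 11,250 kg CO2 eq., matching the card
print(f"{emissions_kg:,.0f} kg CO2 eq.")
```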

## Citation

@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
@@ -136,5 +170,6 @@ Based on that information, we estimate the following CO2 emissions using the [Ma
pages = {10684-10695}
}

*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
*This model card was written by: Robin Rombach and Patrick Esser and is based on
the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*

4 changes: 2 additions & 2 deletions configs/autoencoder/autoencoder_kl_16x16x16.yaml
@@ -18,9 +18,9 @@ model:
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,1,2,2,4] # num_down = len(ch_mult)-1
ch_mult: [ 1,1,2,2,4 ] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [16]
attn_resolutions: [ 16 ]
dropout: 0.0


4 changes: 2 additions & 2 deletions configs/autoencoder/autoencoder_kl_8x8x64.yaml
@@ -18,9 +18,9 @@ model:
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [ 1,1,2,2,4,4] # num_down = len(ch_mult)-1
ch_mult: [ 1,1,2,2,4,4 ] # num_down = len(ch_mult)-1
num_res_blocks: 2
attn_resolutions: [16,8]
attn_resolutions: [ 16,8 ]
dropout: 0.0

data:
30 changes: 15 additions & 15 deletions configs/latent-diffusion/celebahq-ldm-vq-4.yaml
@@ -20,19 +20,19 @@ model:
out_channels: 3
model_channels: 224
attention_resolutions:
# note: this isn\t actually the resolution but
# the downsampling factor, i.e. this corresnponds to
# attention on spatial resolution 8,16,32, as the
# spatial reolution of the latents is 64 for f4
- 8
- 4
- 2
# note: this isn\t actually the resolution but
# the downsampling factor, i.e. this corresnponds to
# attention on spatial resolution 8,16,32, as the
# spatial reolution of the latents is 64 for f4
- 8
- 4
- 2
num_res_blocks: 2
channel_mult:
- 1
- 2
- 3
- 4
- 1
- 2
- 3
- 4
num_head_channels: 32
first_stage_config:
target: ldm.models.autoencoder.VQModelInterface
@@ -48,11 +48,11 @@
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 1
- 2
- 4
num_res_blocks: 2
attn_resolutions: []
attn_resolutions: [ ]
dropout: 0.0
lossconfig:
target: torch.nn.Identity
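
The indentation-only edits above do not change the config's meaning. As the in-file comment notes, `attention_resolutions` lists downsampling factors rather than spatial sizes; a small sanity-check sketch (not code from the repository, and the 256-pixel input for the autoencoder configs is assumed from their file names):

```python
# For this f4 model the latents are 64x64, so factors 8, 4, 2 mean attention
# at spatial sizes 64/8, 64/4 and 64/2.
latent_resolution = 64
attention_resolutions = [8, 4, 2]
print([latent_resolution // f for f in attention_resolutions])  # [8, 16, 32]

# In the autoencoder configs above, the number of downsampling steps is
# len(ch_mult) - 1, so ch_mult [1, 1, 2, 2, 4] gives a spatial factor of 2**4 = 16
# (256x256 inputs -> 16x16 latents, matching autoencoder_kl_16x16x16.yaml).
ch_mult = [1, 1, 2, 2, 4]
num_down = len(ch_mult) - 1
print(2 ** num_down)  # 16
```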
30 changes: 15 additions & 15 deletions configs/latent-diffusion/cin-ldm-vq-f8.yaml
@@ -22,18 +22,18 @@ model:
out_channels: 4
model_channels: 256
attention_resolutions:
#note: this isn\t actually the resolution but
# the downsampling factor, i.e. this corresnponds to
# attention on spatial resolution 8,16,32, as the
# spatial reolution of the latents is 32 for f8
- 4
- 2
- 1
#note: this isn\t actually the resolution but
# the downsampling factor, i.e. this corresnponds to
# attention on spatial resolution 8,16,32, as the
# spatial reolution of the latents is 32 for f8
- 4
- 2
- 1
num_res_blocks: 2
channel_mult:
- 1
- 2
- 4
- 1
- 2
- 4
num_head_channels: 32
use_spatial_transformer: true
transformer_depth: 1
@@ -52,13 +52,13 @@
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 2
- 4
- 1
- 2
- 2
- 4
num_res_blocks: 2
attn_resolutions:
- 32
- 32
dropout: 0.0
lossconfig:
target: torch.nn.Identity