Commit cc40430: test (#1)
* repair file paste for Firefox from AUTOMATIC1111#10615
remove animation when pasting files into prompt
rework two dragdrop js files into one

* Upgrade Gradio, remove docs URL hack

* fix error in dragdrop logic

* Add custom karras scheduler

* remove debug print

* `modules/api/api.py`: disable `timeout_keep_alive`
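
For illustration, a minimal sketch of what disabling uvicorn's keep-alive timeout looks like; the host and port values here are placeholders, and the stand-in `app` is assumed rather than the webui's actual FastAPI instance:

```
import uvicorn
from fastapi import FastAPI

app = FastAPI()  # stand-in for the webui's FastAPI app

# timeout_keep_alive=0 makes uvicorn close idle keep-alive connections
# immediately, avoiding stale-connection errors on long-running API calls
uvicorn.run(app, host="127.0.0.1", port=7860, timeout_keep_alive=0)
```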

* Add dropdown for scheduler type

* Change karras to kdiffusion

* Replace karras by k_diffusion, fix gen info

* only add metadata when k_sched has actually been used

* remove unrelated code

* Avoid circular import

* Minor naming fixes

* Add error information for recursion error

* use sigma_max/min in model if sigma_max/min is 0
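
For reference, the Karras et al. (2022) schedule that such a custom scheduler builds on, as a self-contained sketch; per the commit above, the model's own sigma_min/sigma_max would be substituted when these arguments are 0:

```
import torch

def get_sigmas_karras(n, sigma_min, sigma_max, rho=7.0):
    # interpolate in sigma^(1/rho) space, then raise back to the rho power
    ramp = torch.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
```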

* Revert AUTOMATIC1111#10586

* Fix for AUTOMATIC1111#10643 (pixel noise in the webui inpainting canvas was breaking inpainting, making it behave like plain img2img)

* Better hint for user

Co-authored-by: catboxanon <[email protected]>

* Add hint for custom k_diffusion scheduler

* Use settings instead of main interface

* Use a better implementation approach

* Fix xyz

* Subject:
Improvements to handle VAE filenames in generated image filenames

Body:
1) Added a new line 24 to import the sd_vae module.
2) Added a new method, get_vae_filename, at lines 340-349 to obtain the VAE filename used for image generation and extract only the base name by splitting on the dot symbol.
3) Added a new lambda function 'vae_filename' at line 373 to handle VAE filenames.

Reason:
A function was needed to get the VAE filename and handle it in the program.

Test:
We tested whether this new functionality produces the expected file names.
Correct behaviour was confirmed for the following commonly distributed VAE files:
vae-ft-mse-840000-ema-pruned.safetensors -> vae-ft-mse-840000-ema-pruned
anything-v4.0.vae.pt -> anything-v4.0

ruff response:
There were no problems with the code I added.

There was a minor pre-existing warning on lines I did not modify; I left those untouched as they are not relevant to this change.
Logged:
images.py:426:56: F841 [*] Local variable `_` is assigned to but never used
images.py:432:43: F841 [*] Local variable `_` is assigned to but never used

Impact:
This change makes it easier to retrieve the VAE filename used for image generation and use it in the program.
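
A hedged sketch of the idea, reproducing the two test cases above; this is not the exact implementation, and the suffix list is an assumption:

```
import os

# common VAE filename suffixes, longest first; this list is an assumption
VAE_SUFFIXES = (".vae.safetensors", ".vae.ckpt", ".vae.pt", ".safetensors", ".ckpt", ".pt")

def get_vae_filename(vae_path):
    # reduce a VAE path to a short base name for use in image filename patterns
    name = os.path.basename(vae_path)
    for suffix in VAE_SUFFIXES:
        if name.endswith(suffix):
            return name[:-len(suffix)]
    return name.split('.')[0]

assert get_vae_filename("vae-ft-mse-840000-ema-pruned.safetensors") == "vae-ft-mse-840000-ema-pruned"
assert get_vae_filename("anything-v4.0.vae.pt") == "anything-v4.0"
```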

* Use type to determine if it is enabled

* fix bad styling for thumbs view in extra networks AUTOMATIC1111#10639

* possible fix for empty list of optimizations AUTOMATIC1111#10605

* Fix ruff error

* Use automatic instead of None/default

* improvements

See:
AUTOMATIC1111#10649 (comment)

* use Schedule instead of Sched

* Changed 'images.zip' to generate the filename by pattern

* Optimize tooltip checks

* Instead of traversing tens of thousands of text nodes, only look at elements and their children
* Debounce the checks so they happen at most once per second
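
A language-neutral sketch of the throttling idea; the actual change lives in the webui's JavaScript, and the names here are illustrative:

```
import time

class TooltipCheckThrottle:
    # run a check at most once per interval, coalescing bursts of UI updates
    def __init__(self, interval=1.0):
        self.interval = interval
        self.last_run = 0.0

    def should_run(self):
        now = time.monotonic()
        if now - self.last_run >= self.interval:
            self.last_run = now
            return True
        return False
```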

* Restore support for dropdown tooltips

* Add support for tooltips on dropdown options

* Cleaner image metadata read

* Just use console.error, it's in all browsers

* Merge executeCallbacks and runCallback, simplify + optimize

* Document on* handlers (for extension authors' sake)

* Add onAfterUiUpdate callback

* Use onAfterUiUpdate where possible

* Remove try/except in img metadata read

* Small fixes to prepare_tcmalloc for Debian/Ubuntu compatibility

- /usr/sbin (where ldconfig is usually located) is not typically on users' PATHs by default, so we set that variable before trying to run ldconfig.
- The libtcmalloc library is called libtcmalloc_minimal on Debian/Ubuntu systems. We now check whether libtcmalloc_minimal exists when running prepare_tcmalloc.

* change to AMD only if NVIDIA is not present

* Update webui.sh

* Remove exit() from select_checkpoint()

Raising a FileNotFoundError instead.

* Show full traceback in get_sd_model()

to reveal if an error is caused by an extension

* custom unet support

* fix serving of already-saved images when the temp-files function is disabled, which broke after updating Gradio

* updates for the noise schedule settings

* Ability to zoom and move the canvas

* Formatted with Prettier; added a fullscreen-mode canvas expansion function

* Improve zoom reset when toggling tabs

* add quoting for infotext values that have a colon in them
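
A sketch of the quoting rule; the real helper lives in the infotext code, and this is a simplified stand-in:

```
import json

def quote(value):
    # values containing separators (comma, colon, newline) must be quoted,
    # otherwise "key: value" pairs in the infotext become ambiguous
    text = str(value)
    if ',' not in text and ':' not in text and '\n' not in text:
        return text
    return json.dumps(text, ensure_ascii=False)
```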

* Mark caption_image_overlay's textfont as deprecated; fix AUTOMATIC1111#10778

* Sort requirements files

* Upgrade xformers

* Synchronize requirements/requirements_versions

* Remove deps not listed in _versions from requirements

* Omit versions when they don't match _versions

* fix "hires. fix" prompt/neg sharing same labels as txt2img_prompt/negative_prompt

* typo

vidocard -> videocard

* Corrected the code to conform to the code style

* changed `document` to gradioApp()

* Round down scale destination dimensions to nearest multiple of 8

* Refactor EmbeddingDatabase.register_embedding() to allow unregistering

* fix xyz clip

* Upgrade transformers

Refs AUTOMATIC1111#9035 (comment)

* fix disable png info

* clarify issue template

* Only poll gamepads while connected

* Update imageviewerGamepad.js

* Patch GitPython to not use leaky persistent processes

* Add & use modules.errors.print_error where currently printing exception info by hand

* Revert "fix xyz clip"

This reverts commit edd766e.

* fix get_conds_with_caching()

* improve filename matching for mask

we should not assume that the mask filename has the same extension
as the image filename, so better pattern matching is added
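
A minimal sketch of extension-agnostic mask lookup, assuming masks share the image's stem but not necessarily its extension (function name is illustrative):

```
import glob
import os

def find_mask_for(image_path, mask_dir):
    # match the mask by filename stem, accepting any extension
    stem = os.path.splitext(os.path.basename(image_path))[0]
    pattern = os.path.join(mask_dir, glob.escape(stem) + ".*")
    candidates = sorted(glob.glob(pattern))
    return candidates[0] if candidates else None
```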

* add scale_by to batch processing

* ruffed

* Moved the script to the built-in extensions

* Added VAE listing to web API.
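
A usage sketch against the new endpoint; the response field names are assumed by analogy with the checkpoint-listing endpoint, not verified against the final schema:

```
import requests

resp = requests.get("http://127.0.0.1:7860/sdapi/v1/sd-vae")
resp.raise_for_status()
for vae in resp.json():
    # field names assumed: model_name / filename
    print(vae.get("model_name"), vae.get("filename"))
```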

* Fix s_min_uncond default type int

* Move gamepaddisconnected listener

* Vendor in the single module used from taming_transformers; remove taming_transformers dependency

(and fix the two ruff complaints)

* a small fix for very wide images, where the scroll bar caused a wrong zoom level

* Frontend: only look at top-level tabs, not nested tabs

Refs adieyal/sd-dynamic-prompts#459 (comment)

* Fix typo in `--update-check` help message

Change `chck` to `check`

* rename print_error to report, use it together with the package name

* change UI reorder setting to multiselect

* add an option to show selected setting in main txt2img/img2img UI
split some code from ui.py into ui_settings.py and ui_gradio_extensions.py
add before_process callback for scripts
add ability for alwayson scripts to specify section and let user reorder those sections

* fix [Bug]: LoRA don't apply on dropdown list sd_lora AUTOMATIC1111#10880

* Fixed the problem with sticking to the mouse; created a tooltip

* use ui_reorder_list rather than ui_reorder for the UI reorder option, so the program does not break when reverting to an old version

* fix pnginfo parameters (AUTOMATIC1111#10896)

* remove redundant code

* assign devices.dtype early because it's needed before the model is loaded

* revert default cross attention optimization to Doggettx
make --disable-opt-split-attention command line option work again

* add hiding and colspans to the startup profile table

* add subdir support for images, masks and output; search mask only in subdir

* fall back to original file retrieval; skip image if mask not found

Usage of `shared.walk_files` breaks the controlnet extension: images are
processed in a different order, which leads to a mismatch between the image
file used for img2img and the one used for controlnet (when no folder is
specified for controlnet, or the img2img input dir is reused for it)
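
A sketch of the kind of deterministic, non-recursive listing this fallback restores; illustrative only, not the repo's exact code:

```
import os

def list_input_images(path):
    # a plain sorted listing keeps img2img inputs and controlnet inputs
    # aligned by index, unlike a recursive walk that changes the order
    return [os.path.join(path, name) for name in sorted(os.listdir(path))]
```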

* revert the erroneous change for model setting added in df02498

* Added the ability to configure hotkeys via webui

Now you can configure the hotkeys directly through the settings

JS and Python scripts are tested and code style compliant

* Added a hotkey repeat check to avoid bugs

* Support dynamic sort of extra networks

* lint fixes

* Cross attention optimization

* remove redundant call to list_optimizers()

* remove redundant code

* Simplify a bunch of `len(x) > 0`/`len(x) == 0` style expressions

* fall back to version info from CHANGELOG.md

* Made tooltip optional.

You can disable it in the settings.
Enabled by default.

* Added support for workarounds on external GPU.

lspci detects VGA for main/integrated videocards and Display
for external videocards.

This commit should apply workarounds on computers with more than
one GPU. Useful for most laptops using weak iGPU and good dGPU.

Signed-off-by: Pablo Cholaky <[email protected]>

* Apply suggestions from code review

Co-authored-by: Aarni Koskela <[email protected]>

* Added the ability to swap the zoom hotkeys and resize the brush

* small ui fix

In the error message the user will see R instead of KeyR

* Update modules/launch_utils.py

Co-authored-by: Aarni Koskela <[email protected]>

* fall back to version info from CHANGELOG.md

* yet another method to restart webui

* Added sysinfo tab to settings

* lint

* Round upscaled dimensions only when not divisible by 8

* Use a more concise calculation for dest dims
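
A sketch of the destination-dimension calculation these three commits describe; the function name is assumed:

```
def upscale_dest_dims(width, height, scale_by):
    # floor to the nearest multiple of 8; dimensions already divisible
    # by 8 pass through unchanged
    dest_w = int(width * scale_by) // 8 * 8
    dest_h = int(height * scale_by) // 8 * 8
    return dest_w, dest_h
```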

* Fix missing ext_filter kwarg

* Made the applyZoomAndPan function global for other extensions

* torch.cuda.is_available() check for SdOptimizationXformers

* fix conds caching with extra network

* simplify self.extra_network_data

* remove redundant comparison

* Fixed the red mask bug

* Made the applyZoomAndPan function isolate each instance

Isolated each instance of applyZoomAndPan; now if you add another element to the page, they will work correctly

* Fixed visual bugs

* Correct determination of the zoom level

Changed the regular expression so that the scale is always selected from style.transform

* Update ui_tempdir.py

Make the override function have the same input parameters as the original function

* infer styles from prompts, and an option to control the behavior

* add whitelist for environment in the report
add extra link to view the report instead of downloading it

* fix the broken line for AUTOMATIC1111#10990

* fix for conds of the second hires fix pass being calculated using the first pass's networks, and add an option to revert to old behavior

* prevent calculating conds for the second pass of hires fix when they are the same as for the first pass

* Add endpoint to get latent_upscale_modes for hires fix

* Zoom and Pan: move helpers into its namespace to avoid littering global scope

* Zoom and Pan: use elementIDs from closure scope

* Zoom and Pan: simplify getElements (it's not actually async)

* Zoom and Pan: use for instead of forEach

* Zoom and Pan: simplify waitForOpts

* revert the message to how it was

* rework-disable-autolaunch

* Restart: only do restart if running via the wrapper script

* restore old disable --autolaunch

* SD_WEBUI_RESTARTING
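
A hedged sketch of the wrapper-based restart handshake; the environment variable names come from the commit titles above, but the exact mechanics are assumed:

```
import os
import sys

def restart_program():
    # only restart when launched via the wrapper script (webui.sh / webui.bat),
    # which is assumed to set SD_WEBUI_RESTART and relaunch us after we exit
    if not os.environ.get("SD_WEBUI_RESTART"):
        print("Restart is only available when running via the wrapper script")
        return
    os.environ["SD_WEBUI_RESTARTING"] = "1"  # assumed signal: skip browser auto-launch
    sys.exit(0)  # the wrapper notices the exit and starts a fresh process
```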

* print error and continue

* Forcing Torch Version to 1.13.1 for Navi and Renoir GPUs

* Fix error in webui.sh

* Force python1 for Navi1 only, use python_cmd for python

* Check python version for Navi 1 only

* Write "RX 5000 Series" instead of "Navi" in err

* link footer API to Wiki when API is not active

* Skip forcing python and pytorch versions if TORCH_COMMAND is already set

* Fix upcast attention dtype error.

Without this fix, enabling the "Upcast cross attention layer to float32" option while also using `--opt-sdp-attention` breaks generation with an error:

```
  File "/ext3/automatic1111/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 612, in sdp_attnblock_forward
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::Half instead.
```

The fix is to make sure to upcast the value tensor too.
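
A simplified sketch of the fix; the actual change is in sd_hijack_optimizations.py's sdp_attnblock_forward, and this stand-in function only captures the casting logic:

```
import torch

def sdp_attention_upcast(q, k, v):
    dtype = q.dtype
    # cast all three tensors: previously v stayed in half precision,
    # which made scaled_dot_product_attention raise the dtype error above
    q, k, v = q.float(), k.float(), v.float()
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
    return out.to(dtype)
```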

* persistent conds cache

Update shared.py

* Generate Forever during generation

* Split mask blur into X and Y components

Prerequisite to fixing the Outpainting MK2 mask blur bug.

* Split Outpainting MK2 mask blur into X and Y components

Fixes unexpected noise in non-outpainted borders when using MK2 script.
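
A sketch of the X/Y split using OpenCV; the kernel-size formula is assumed, and the repo's exact constants may differ:

```
import cv2
import numpy as np

def blur_mask(mask, blur_x, blur_y):
    # Gaussian kernel sizes must be odd; a size of 1 leaves that axis
    # unblurred, which stops noise bleeding into non-outpainted borders
    kx = 2 * int(blur_x) + 1
    ky = 2 * int(blur_y) + 1
    return cv2.GaussianBlur(np.asarray(mask), (kx, ky), 0)
```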

* Don't die when a LoRA is a broken symlink

Fixes AUTOMATIC1111#11098

* linter

* add changelog for 1.4.0

* fixed typos

* Improved error output, improved settings menu

* remove console.log

* Use os.makedirs(..., exist_ok=True)

* Reworked the disabling of functions, refactored part of the code

* Formatting code with Prettier

* Fix typo in hints.js

* Strip whitespaces from URL and dirname prior to extension installation

This avoids some cryptic errors caused by accidental spaces around URLs

* add missing infotext entry for the pad cond/uncond option

---------

Signed-off-by: Pablo Cholaky <[email protected]>
Co-authored-by: AUTOMATIC1111 <[email protected]>
Co-authored-by: Aarni Koskela <[email protected]>
Co-authored-by: Kohaku-Blueleaf <[email protected]>
Co-authored-by: Monty Anderson <[email protected]>
Co-authored-by: catboxanon <[email protected]>
Co-authored-by: ArthurHeitmann <[email protected]>
Co-authored-by: fumitaka.yano <[email protected]>
Co-authored-by: strelokhalfer <[email protected]>
Co-authored-by: kernelmethod <[email protected]>
Co-authored-by: Roman Beltiukov <[email protected]>
Co-authored-by: linkoid <[email protected]>
Co-authored-by: Danil Boldyrev <[email protected]>
Co-authored-by: Sakura-Luna <[email protected]>
Co-authored-by: nyqui <[email protected]>
Co-authored-by: yoinked <[email protected]>
Co-authored-by: ramyma <[email protected]>
Co-authored-by: klimaleksus <[email protected]>
Co-authored-by: w-e-w <[email protected]>
Co-authored-by: missionfloyd <[email protected]>
Co-authored-by: Artem Kotov <[email protected]>
Co-authored-by: James <[email protected]>
Co-authored-by: David Chuang <[email protected]>
Co-authored-by: Will Frey <[email protected]>
Co-authored-by: Pablo Cholaky <[email protected]>
Co-authored-by: Chanchana Sornsoontorn <[email protected]>
Co-authored-by: Vivek K. Vasishtha <[email protected]>
Co-authored-by: Vesnica <[email protected]>
Co-authored-by: DGdev91 <[email protected]>
Co-authored-by: Alexander Ljungberg <[email protected]>
Co-authored-by: Splendide Imaginarius <119545140+Splendide-Imaginarius@users.noreply.github.com>
Co-authored-by: arch-fan <[email protected]>
Co-authored-by: zhtttylz <[email protected]>
Co-authored-by: Jabasukuriputo Wang <[email protected]>
1 parent baf6946 commit cc40430
Showing 102 changed files with 3,379 additions and 995 deletions.
9 changes: 6 additions & 3 deletions .eslintrc.js
@@ -50,13 +50,14 @@ module.exports = {
     globals: {
         //script.js
         gradioApp: "readonly",
+        executeCallbacks: "readonly",
+        onAfterUiUpdate: "readonly",
+        onOptionsChanged: "readonly",
         onUiLoaded: "readonly",
         onUiUpdate: "readonly",
-        onOptionsChanged: "readonly",
         uiCurrentTab: "writable",
-        uiElementIsVisible: "readonly",
         uiElementInSight: "readonly",
-        executeCallbacks: "readonly",
+        uiElementIsVisible: "readonly",
         //ui.js
         opts: "writable",
         all_gallery_buttons: "readonly",
@@ -84,5 +85,7 @@ module.exports = {
         // imageviewer.js
         modalPrevImage: "readonly",
         modalNextImage: "readonly",
+        // token-counters.js
+        setupTokenCounters: "readonly",
     }
 };
21 changes: 19 additions & 2 deletions .github/ISSUE_TEMPLATE/bug_report.yml
@@ -43,8 +43,8 @@ body:
   - type: input
     id: commit
     attributes:
-      label: Commit where the problem happens
-      description: Which commit are you running ? (Do not write *Latest version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Commit** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)
+      label: Version or Commit where the problem happens
+      description: "Which webui version or commit are you running ? (Do not write *Latest Version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Version: v1.2.3** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)"
     validations:
       required: true
   - type: dropdown
@@ -80,6 +80,23 @@ body:
         - AMD GPUs (RX 5000 below)
         - CPU
         - Other GPUs
+  - type: dropdown
+    id: cross_attention_opt
+    attributes:
+      label: Cross attention optimization
+      description: What cross attention optimization are you using, Settings -> Optimizations -> Cross attention optimization
+      multiple: false
+      options:
+        - Automatic
+        - xformers
+        - sdp-no-mem
+        - sdp
+        - Doggettx
+        - V1
+        - InvokeAI
+        - "None "
+    validations:
+      required: true
   - type: dropdown
     id: browsers
     attributes:
57 changes: 57 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,60 @@
## 1.4.0

### Features:
* zoom controls for inpainting
* run basic torch calculation at startup in parallel to reduce the performance impact of first generation
* option to pad prompt/neg prompt to be same length
* remove taming_transformers dependency
* custom k-diffusion scheduler settings
* add an option to show selected settings in main txt2img/img2img UI
* sysinfo tab in settings
* infer styles from prompts when pasting params into the UI
* an option to control the behavior of the above

### Minor:
* bump Gradio to 3.32.0
* bump xformers to 0.0.20
* Add option to disable token counters
* tooltip fixes & optimizations
* make it possible to configure filename for the zip download
* `[vae_filename]` pattern for filenames
* Revert discarding penultimate sigma for DPM-Solver++(2M) SDE
* change UI reorder setting to multiselect
* read version info from CHANGELOG.md if git version info is not available
* link footer API to Wiki when API is not active
* persistent conds cache (opt-in optimization)

### Extensions:
* After installing extensions, webui properly restarts the process rather than reloads the UI
* Added VAE listing to web API. Via: /sdapi/v1/sd-vae
* custom unet support
* Add onAfterUiUpdate callback
* refactor EmbeddingDatabase.register_embedding() to allow unregistering
* add before_process callback for scripts
* add ability for alwayson scripts to specify section and let user reorder those sections

### Bug Fixes:
* Fix dragging text to prompt
* fix incorrect quoting for infotext values with colon in them
* fix "hires. fix" prompt sharing same labels with txt2img_prompt
* Fix s_min_uncond default type int
* Fix for #10643 (Inpainting mask sometimes not working)
* fix bad styling for thumbs view in extra networks #10639
* fix for empty list of optimizations #10605
* small fixes to prepare_tcmalloc for Debian/Ubuntu compatibility
* fix --ui-debug-mode exit
* patch GitPython to not use leaky persistent processes
* fix duplicate Cross attention optimization after UI reload
* torch.cuda.is_available() check for SdOptimizationXformers
* fix hires fix using wrong conds in second pass if using Loras.
* handle exception when parsing generation parameters from png info
* fix upcast attention dtype error
* forcing Torch Version to 1.13.1 for RX 5000 series GPUs
* split mask blur into X and Y components, patch Outpainting MK2 accordingly
* don't die when a LoRA is a broken symlink
* allow activation of Generate Forever during generation


## 1.3.2

### Bug Fixes:
8 changes: 2 additions & 6 deletions extensions-builtin/LDSR/scripts/ldsr_model.py
@@ -1,12 +1,10 @@
 import os
-import sys
-import traceback
 
 from basicsr.utils.download_util import load_file_from_url
 
 from modules.upscaler import Upscaler, UpscalerData
 from ldsr_model_arch import LDSR
-from modules import shared, script_callbacks
+from modules import shared, script_callbacks, errors
 import sd_hijack_autoencoder  # noqa: F401
 import sd_hijack_ddpm_v1  # noqa: F401
 
@@ -51,10 +49,8 @@ def load_model(self, path: str):
 
         try:
             return LDSR(model, yaml)
-
         except Exception:
-            print("Error importing LDSR:", file=sys.stderr)
-            print(traceback.format_exc(), file=sys.stderr)
+            errors.report("Error importing LDSR", exc_info=True)
             return None
 
     def do_upscale(self, img, path):
5 changes: 3 additions & 2 deletions extensions-builtin/LDSR/sd_hijack_autoencoder.py
@@ -10,7 +10,7 @@
 from torch.optim.lr_scheduler import LambdaLR
 
 from ldm.modules.ema import LitEma
-from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
+from vqvae_quantize import VectorQuantizer2 as VectorQuantizer
 from ldm.modules.diffusionmodules.model import Encoder, Decoder
 from ldm.util import instantiate_from_config
 
@@ -91,8 +91,9 @@ def init_from_ckpt(self, path, ignore_keys=None):
                 del sd[k]
         missing, unexpected = self.load_state_dict(sd, strict=False)
         print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
-        if len(missing) > 0:
+        if missing:
             print(f"Missing Keys: {missing}")
+        if unexpected:
             print(f"Unexpected Keys: {unexpected}")
 
     def on_train_batch_end(self, *args, **kwargs):
4 changes: 2 additions & 2 deletions extensions-builtin/LDSR/sd_hijack_ddpm_v1.py
@@ -195,9 +195,9 @@ def init_from_ckpt(self, path, ignore_keys=None, only_model=False):
         missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict(
             sd, strict=False)
         print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
-        if len(missing) > 0:
+        if missing:
             print(f"Missing Keys: {missing}")
-        if len(unexpected) > 0:
+        if unexpected:
             print(f"Unexpected Keys: {unexpected}")
 
     def q_mean_variance(self, x_start, t):
147 changes: 147 additions & 0 deletions extensions-builtin/LDSR/vqvae_quantize.py
@@ -0,0 +1,147 @@
# Vendored from https://raw.githubusercontent.com/CompVis/taming-transformers/24268930bf1dce879235a7fddd0b2355b84d7ea6/taming/modules/vqvae/quantize.py,
# where the license is as follows:
#
# Copyright (c) 2020 Patrick Esser and Robin Rombach and Björn Ommer
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
# OR OTHER DEALINGS IN THE SOFTWARE./

import torch
import torch.nn as nn
import numpy as np
from einops import rearrange


class VectorQuantizer2(nn.Module):
    """
    Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly
    avoids costly matrix multiplications and allows for post-hoc remapping of indices.
    """

    # NOTE: due to a bug the beta term was applied to the wrong term. for
    # backwards compatibility we use the buggy version by default, but you can
    # specify legacy=False to fix it.
    def __init__(self, n_e, e_dim, beta, remap=None, unknown_index="random",
                 sane_index_shape=False, legacy=True):
        super().__init__()
        self.n_e = n_e
        self.e_dim = e_dim
        self.beta = beta
        self.legacy = legacy

        self.embedding = nn.Embedding(self.n_e, self.e_dim)
        self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)

        self.remap = remap
        if self.remap is not None:
            self.register_buffer("used", torch.tensor(np.load(self.remap)))
            self.re_embed = self.used.shape[0]
            self.unknown_index = unknown_index  # "random" or "extra" or integer
            if self.unknown_index == "extra":
                self.unknown_index = self.re_embed
                self.re_embed = self.re_embed + 1
            print(f"Remapping {self.n_e} indices to {self.re_embed} indices. "
                  f"Using {self.unknown_index} for unknown indices.")
        else:
            self.re_embed = n_e

        self.sane_index_shape = sane_index_shape

    def remap_to_used(self, inds):
        ishape = inds.shape
        assert len(ishape) > 1
        inds = inds.reshape(ishape[0], -1)
        used = self.used.to(inds)
        match = (inds[:, :, None] == used[None, None, ...]).long()
        new = match.argmax(-1)
        unknown = match.sum(2) < 1
        if self.unknown_index == "random":
            new[unknown] = torch.randint(0, self.re_embed, size=new[unknown].shape).to(device=new.device)
        else:
            new[unknown] = self.unknown_index
        return new.reshape(ishape)

    def unmap_to_all(self, inds):
        ishape = inds.shape
        assert len(ishape) > 1
        inds = inds.reshape(ishape[0], -1)
        used = self.used.to(inds)
        if self.re_embed > self.used.shape[0]:  # extra token
            inds[inds >= self.used.shape[0]] = 0  # simply set to zero
        back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds)
        return back.reshape(ishape)

    def forward(self, z, temp=None, rescale_logits=False, return_logits=False):
        assert temp is None or temp == 1.0, "Only for interface compatible with Gumbel"
        assert rescale_logits is False, "Only for interface compatible with Gumbel"
        assert return_logits is False, "Only for interface compatible with Gumbel"
        # reshape z -> (batch, height, width, channel) and flatten
        z = rearrange(z, 'b c h w -> b h w c').contiguous()
        z_flattened = z.view(-1, self.e_dim)
        # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z

        d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \
            torch.sum(self.embedding.weight ** 2, dim=1) - 2 * \
            torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n'))

        min_encoding_indices = torch.argmin(d, dim=1)
        z_q = self.embedding(min_encoding_indices).view(z.shape)
        perplexity = None
        min_encodings = None

        # compute loss for embedding
        if not self.legacy:
            loss = self.beta * torch.mean((z_q.detach() - z) ** 2) + \
                   torch.mean((z_q - z.detach()) ** 2)
        else:
            loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * \
                   torch.mean((z_q - z.detach()) ** 2)

        # preserve gradients
        z_q = z + (z_q - z).detach()

        # reshape back to match original input shape
        z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous()

        if self.remap is not None:
            min_encoding_indices = min_encoding_indices.reshape(z.shape[0], -1)  # add batch axis
            min_encoding_indices = self.remap_to_used(min_encoding_indices)
            min_encoding_indices = min_encoding_indices.reshape(-1, 1)  # flatten

        if self.sane_index_shape:
            min_encoding_indices = min_encoding_indices.reshape(
                z_q.shape[0], z_q.shape[2], z_q.shape[3])

        return z_q, loss, (perplexity, min_encodings, min_encoding_indices)

    def get_codebook_entry(self, indices, shape):
        # shape specifying (batch, height, width, channel)
        if self.remap is not None:
            indices = indices.reshape(shape[0], -1)  # add batch axis
            indices = self.unmap_to_all(indices)
            indices = indices.reshape(-1)  # flatten again

        # get quantized latent vectors
        z_q = self.embedding(indices)

        if shape is not None:
            z_q = z_q.view(shape)
            # reshape back to match original input shape
            z_q = z_q.permute(0, 3, 1, 2).contiguous()

        return z_q
4 changes: 2 additions & 2 deletions extensions-builtin/Lora/extra_networks_lora.py
@@ -9,14 +9,14 @@ def __init__(self):
     def activate(self, p, params_list):
         additional = shared.opts.sd_lora
 
-        if additional != "None" and additional in lora.available_loras and len([x for x in params_list if x.items[0] == additional]) == 0:
+        if additional != "None" and additional in lora.available_loras and not any(x for x in params_list if x.items[0] == additional):
             p.all_prompts = [x + f"<lora:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
             params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))
 
         names = []
         multipliers = []
         for params in params_list:
-            assert len(params.items) > 0
+            assert params.items
 
             names.append(params.items[0])
             multipliers.append(float(params.items[1]) if len(params.items) > 1 else 1.0)
10 changes: 7 additions & 3 deletions extensions-builtin/Lora/lora.py
@@ -219,7 +219,7 @@ def load_lora(name, lora_on_disk):
         else:
             raise AssertionError(f"Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha")
 
-    if len(keys_failed_to_match) > 0:
+    if keys_failed_to_match:
         print(f"Failed to match keys when loading Lora {lora_on_disk.filename}: {keys_failed_to_match}")
 
     return lora
@@ -267,7 +267,7 @@ def load_loras(names, multipliers=None):
             lora.multiplier = multipliers[i] if multipliers else 1.0
             loaded_loras.append(lora)
 
-    if len(failed_to_load_loras) > 0:
+    if failed_to_load_loras:
         sd_hijack.model_hijack.comments.append("Failed to find Loras: " + ", ".join(failed_to_load_loras))
 
 
@@ -448,7 +448,11 @@ def list_available_loras():
            continue
 
        name = os.path.splitext(os.path.basename(filename))[0]
-        entry = LoraOnDisk(name, filename)
+        try:
+            entry = LoraOnDisk(name, filename)
+        except OSError:  # should catch FileNotFoundError and PermissionError etc.
+            errors.report(f"Failed to load LoRA {name} from {filename}", exc_info=True)
+            continue
 
         available_loras[name] = entry
 
4 changes: 3 additions & 1 deletion extensions-builtin/Lora/ui_extra_networks_lora.py
@@ -13,7 +13,7 @@ def refresh(self):
         lora.list_available_loras()
 
     def list_items(self):
-        for name, lora_on_disk in lora.available_loras.items():
+        for index, (name, lora_on_disk) in enumerate(lora.available_loras.items()):
             path, ext = os.path.splitext(lora_on_disk.filename)
 
             alias = lora_on_disk.get_alias()
@@ -27,6 +27,8 @@ def list_items(self):
                 "prompt": json.dumps(f"<lora:{alias}:") + " + opts.extra_networks_default_multiplier + " + json.dumps(">"),
                 "local_preview": f"{path}.{shared.opts.samples_format}",
                 "metadata": json.dumps(lora_on_disk.metadata, indent=4) if lora_on_disk.metadata else None,
+                "sort_keys": {'default': index, **self.get_sort_keys(lora_on_disk.filename)},
+
             }
 
     def allowed_directories_for_previews(self):