Compare commits

...

106 Commits

Author SHA1 Message Date
w-e-w 08c68820cd gradio 3.41.2 attempt 2023-12-03 20:47:31 +09:00
AUTOMATIC1111 f0f100e67b add categories to settings 2023-11-26 17:56:22 +03:00
AUTOMATIC1111 500de919ed Merge pull request #14108 from AUTOMATIC1111/json.dump(ensure_ascii=False)
json.dump(ensure_ascii=False)
2023-11-26 16:15:56 +03:00
w-e-w a15dd151ff json.dump(ensure_ascii=False)
improve json readability
2023-11-26 21:56:21 +09:00
AUTOMATIC1111 2a40d3c603 compact prompt layout: preserve scroll when switching between lora tabs 2023-11-26 14:58:56 +03:00
AUTOMATIC1111 e44103264d Merge pull request #13936 from cabelo/compatibility
Compatibility
2023-11-26 11:57:13 +03:00
AUTOMATIC1111 6955c210b7 Merge pull request #14059 from akx/upruff
Update Ruff to 0.1.6
2023-11-26 11:54:36 +03:00
AUTOMATIC1111 d1750e5eca fix linter errors 2023-11-26 11:37:12 +03:00
AUTOMATIC1111 c5a0c59a83 do not save HTML explanations from options page to config 2023-11-26 11:36:17 +03:00
AUTOMATIC1111 f7f015e84b Merge pull request #14084 from wfjsw/move-from-sysinfo-to-errors
Move exception_records related methods to errors.py
2023-11-26 11:29:27 +03:00
AUTOMATIC1111 f85b74763d Merge branch 'hypertile-in-sample' into dev 2023-11-26 11:18:49 +03:00
AUTOMATIC1111 fd8674a4bc Merge pull request #13948 from aria1th/hypertile-in-sample
support HyperTile optimization
2023-11-26 11:18:25 +03:00
AUTOMATIC1111 d2e0c1ca13 rework hypertile into a built-in extension 2023-11-26 11:17:38 +03:00
AUTOMATIC1111 3a9bf4ac10 move file 2023-11-26 08:29:12 +03:00
Jabasukuriputo Wang 5cedc8f9b2 remove traceback in sysinfo 2023-11-24 11:30:30 -06:00
Jabasukuriputo Wang 86b99b1e98 Move exception_records related methods to errors.py 2023-11-24 11:28:54 -06:00
Aarni Koskela 066afda2f6 Simplify restart_sampler (suggested by ruff) 2023-11-22 18:05:12 +02:00
Aarni Koskela 8fe1e19522 Update ruff to 0.1.6 2023-11-22 18:05:12 +02:00
AUTOMATIC1111 8aa51f682c fix [Bug]: (Dev Branch) Placing "Dimensions" first in "ui_reorder_list" prevents start #14047 2023-11-21 08:32:07 +03:00
AUTOMATIC1111 5f36f6ab21 Merge pull request #14009 from AUTOMATIC1111/Option-to-show-batch-img2img-results-in-UI
Option to show batch img2img results in UI
2023-11-20 17:44:58 +03:00
AUTOMATIC1111 1463cea949 Merge branch 'dag' into dev 2023-11-20 14:50:01 +03:00
AUTOMATIC1111 73a0b4bba6 Merge pull request #13944 from wfjsw/dag
implementing script metadata and DAG sorting mechanism
2023-11-20 14:49:46 +03:00
AUTOMATIC1111 9b471436b2 rework extensions metadata: use custom sorter that doesn't mess the order as much and ignores cyclic errors, use classes with named fields instead of dictionaries, eliminate some duplicated code 2023-11-20 14:47:09 +03:00
AUTOMATIC1111 411da7c281 Merge pull request #14035 from AUTOMATIC1111/sysinfo-json
save sysinfo as .json
2023-11-20 08:56:45 +03:00
w-e-w 6d337bf23d save sysinfo as .json
GitHub now allows uploading of .json files in issues
2023-11-20 01:38:31 +09:00
w-e-w dea5e43c83 Option to show batch img2img results in UI
shared.opts.img2img_batch_show_results_limit
limit the number of images return to the UI for batch img2img
default limit 32
0 no images are shown
-1 unlimited, all images are shown
2023-11-19 17:37:32 +09:00
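The semantics described in this commit message, as a stand-alone sketch (hypothetical helper, not the actual webui code; the real implementation is in the img2img diff further down):

def apply_batch_results_limit(images: list, limit: int) -> list:
    if limit == 0:    # 0: no images are shown
        return []
    if limit < 0:     # -1: unlimited, all images are shown
        return images
    return images[:limit]  # default limit is 32

assert apply_batch_results_limit([1, 2, 3], 2) == [1, 2]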
wfjsw bde439ef67 use metadata.ini for meta filename 2023-11-19 00:58:47 -06:00
AUTOMATIC1111 fc83af4432 Merge pull request #13931 from AUTOMATIC1111/style-hotkeys
Enable prompt hotkeys in style editor
2023-11-19 09:11:49 +03:00
AUTOMATIC1111 337bc4a2fb Merge pull request #13014 from AUTOMATIC1111/thread-safe-extranetworks-list_items
thread safe extra network list_items
2023-11-19 09:09:21 +03:00
AUTOMATIC1111 6fac65f334 Merge pull request #13929 from kingljl/fix-dependency-address-patch-1
Fix dependency address patch 1
2023-11-19 09:01:39 +03:00
AUTOMATIC1111 5a031d9233 Merge pull request #13962 from kaalibro/dev
Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed
2023-11-19 09:01:11 +03:00
AUTOMATIC1111 e4e875fffe Merge pull request #13968 from kaalibro/extranetworks-path-sorting
Adds 'Path' sorting for Extra network cards
2023-11-19 09:00:05 +03:00
AUTOMATIC1111 b945ba716b Merge pull request #13977 from AUTOMATIC1111/hotfix-postprocessing-state-end
Hotfix: call shared.state.end() after postprocessing done
2023-11-19 08:59:32 +03:00
AUTOMATIC1111 2207ef363a Merge pull request #13692 from v0xie/network-oft
Support inference with OFT networks
2023-11-19 08:59:09 +03:00
AUTOMATIC1111 3a13b0e762 Merge pull request #13996 from Luxter77/patch-1
Adds tqdm handler to logging_config.py for progress bar integration
2023-11-19 08:57:14 +03:00
AUTOMATIC1111 6429c3db11 Merge pull request #13826 from ezxzeng/ui_mobile_optimizations
added accordion settings options
2023-11-19 08:42:58 +03:00
AUTOMATIC1111 5a9dc1c0ca Merge pull request #14004 from storyicon/master
feat: fix randn found element of type float at pos 2
2023-11-19 08:40:29 +03:00
storyicon 4f2a4a3615 feat: fix randn found element of type float at pos 2
Signed-off-by: storyicon <storyicon@foxmail.com>
2023-11-17 09:48:18 +00:00
aria1th 97431f29fe fix double gc and decoding with unet context 2023-11-17 10:05:28 +09:00
aria1th ffd0f8ddc3 set empty value for SD XL 3rd layer 2023-11-17 09:54:33 +09:00
aria1th c0725ba2d0 Fix inverted option issue
I'm pretty sure I was sleepy while implementing this
2023-11-17 09:34:50 +09:00
aria1th c40be2252a Fix critical issue - unet apply 2023-11-17 09:22:27 +09:00
Your Name 7021cdb1de actually adds handler to logging_config.py 2023-11-16 17:53:57 -03:00
Lucas Daniel Velazquez M cdb60a690d Take into account tqdm not being installed before first boot for logging 2023-11-16 16:49:59 -03:00
Lucas Daniel Velazquez M 236eb82c3a Adds tqdm handler to logging_config.py for progress bar integration 2023-11-16 13:20:33 -03:00
AngelBottomless 472c22cc8a fix ruff - add newline 2023-11-16 19:03:45 +09:00
AngelBottomless bcfaf3979a convert/add hypertile options 2023-11-16 18:43:16 +09:00
v0xie eb667e715a feat: LyCORIS/kohya OFT network support 2023-11-15 18:28:48 -08:00
v0xie d6d0b22e66 fix: ignore calc_scale() for COFT which has very small alpha 2023-11-15 03:08:50 -08:00
aria1th af45872fdb copy LDM VAE key from XL 2023-11-15 15:15:14 +09:00
aria1th b29fc6d4de Implement Hypertile
Co-Authored-By: Kieran Hunt <kph@hotmail.ca>
2023-11-15 15:13:39 +09:00
AngelBottomless a292d2c47f hotfix: call shared.state.end() after postprocessing done 2023-11-15 14:26:37 +09:00
kaalibro c1c816006e Adds 'Path' sorting for Extra network cards 2023-11-13 22:01:52 +06:00
kaalibro 94e9669566 Fixes generation restart not working for some users when 'Ctrl+Enter' is pressed 2023-11-13 14:51:06 +06:00
wfjsw 3bb32befe9 bug fix 2023-11-11 11:58:19 -06:00
wfjsw 48d6102b31 fix 2023-11-11 11:17:26 -06:00
wfjsw 520e52f846 allow comma and whitespace as separator 2023-11-11 10:58:26 -06:00
wfjsw 7af576e745 remove the assumption of same name 2023-11-11 10:46:47 -06:00
aria1th 294f8a514f add hyperTile
https://github.com/tfernd/HyperTile
2023-11-11 23:28:12 +09:00
wfjsw bc1a450124 reverse the extension load order so builtin extensions load earlier natively 2023-11-11 04:08:45 -06:00
wfjsw 0d1924c48b populate loaded_extensions from extension list instead 2023-11-11 04:03:55 -06:00
wfjsw 0fc7dc1c04 implementing script metadata and DAG sorting mechanism 2023-11-11 04:01:13 -06:00
Emily Zeng 3a4a6c43a4 ExitStack as alternative to suppress 2023-11-10 16:06:01 -05:00
w-e-w 5432d93013 fix added accordion settings options 2023-11-11 05:30:35 +09:00
Alessandro de Oliveira Faria (A.K.A. CABELO) 6a86b3ad9b Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+ 2023-11-10 14:15:34 -03:00
missionfloyd 7ff54005fe Enable prompt hotkeys in style editor 2023-11-09 23:47:53 -07:00
Alessandro de Oliveira Faria (A.K.A. CABELO) 66767e3876 - opensuse compatibility 2023-11-10 03:45:44 -03:00
fuchen.ljl 6d77a6e1c6 Update README.md
Modify the stablediffusion dependency address
2023-11-10 14:40:39 +08:00
fuchen.ljl 98fc525a2c Update README.md
Modify the stablediffusion dependency address
2023-11-10 14:37:30 +08:00
Emily Zeng ff2952f105 multiline with statement for readibility 2023-11-09 13:35:52 -05:00
Emily Zeng 9aa4d098f0 removed changes that weren't merged properly 2023-11-09 13:25:24 -05:00
Emily Zeng a625a7bb81 moved nested with to single line to remove extra tabs 2023-11-09 13:15:06 -05:00
ezxzeng f9c14a8c8c Merge branch 'dev' into ui_mobile_optimizations 2023-11-07 15:25:27 -05:00
AUTOMATIC1111 5e80d9ee99 fix pix2pix producing bad results 2023-11-07 11:33:33 +03:00
AUTOMATIC1111 47bccbebae Merge pull request #13884 from GerryDE/notification-sound-volume
Add option to set notification sound volume
2023-11-07 08:29:06 +03:00
GerryDE 9ba991cad8 Add option to set notification sound volume 2023-11-07 03:09:08 +01:00
v0xie 7edd50f304 Merge pull request #2 from v0xie/network-oft-change-impl
Use same updown implementation for LyCORIS OFT as kohya-ss OFT
2023-11-04 15:06:04 -07:00
v0xie bbf00a96af refactor: remove unused function 2023-11-04 14:56:47 -07:00
v0xie 329c8bacce refactor: use same updown for both kohya OFT and LyCORIS diag-oft 2023-11-04 14:54:36 -07:00
v0xie 1dd25be037 Merge pull request #1 from v0xie/oft-faster
Support LyCORIS diag-oft OFT implementation (minus MultiheadAttention layer), maintains support for kohya-ss OFT
2023-11-03 19:47:27 -07:00
v0xie f6c8201e56 refactor: move factorization to lyco_helpers, separate calc_updown for kohya and kb 2023-11-03 19:35:15 -07:00
v0xie fe1967a4c4 skip multihead attn for now 2023-11-03 17:52:55 -07:00
Emily Zeng 759515316e added accordion settings options 2023-11-02 21:54:48 -04:00
v0xie d727ddfccd no idea what i'm doing, trying to support both type of OFT, kblueleaf diag_oft has MultiheadAttn which kohya's doesn't?, attempt create new module based off network_lora.py, errors about tensor dim mismatch 2023-11-02 00:13:11 -07:00
v0xie 65ccd6305f detect diag_oft type 2023-11-02 00:11:32 -07:00
v0xie a2fad6ee05 test implementation based on kohaku diag-oft implementation 2023-11-01 22:34:27 -07:00
v0xie 6523edb8a4 style: conform style 2023-10-22 09:31:15 -07:00
v0xie 3b8515d2c9 fix: multiplier applied twice in finalize_updown 2023-10-22 09:27:48 -07:00
v0xie 4a50c9638c refactor: remove used OFT functions 2023-10-22 08:54:24 -07:00
v0xie de8ee92ed8 fix: use merge_weight to cache value 2023-10-21 17:37:17 -07:00
v0xie 76f5abdbdb style: cleanup oft 2023-10-21 16:07:45 -07:00
v0xie fce86ab7d7 fix: support multiplier, no forward pass hook 2023-10-21 16:03:54 -07:00
v0xie 7683547728 fix: return orig weights during updown, merge weights before forward 2023-10-21 14:42:24 -07:00
v0xie 2d8c894b27 refactor: use forward hook instead of custom forward 2023-10-21 13:43:31 -07:00
v0xie 0550659ce6 style: fix ambiguous variable name 2023-10-19 13:13:02 -07:00
v0xie d10c4db57e style: formatting 2023-10-19 12:52:14 -07:00
v0xie 321680ccd0 refactor: fix constraint, re-use get_weight 2023-10-19 12:41:17 -07:00
v0xie eb01d7f0e0 faster by calculating R in updown and using cached R in forward 2023-10-18 04:56:53 -07:00
v0xie 853e21d98e faster by using cached R in forward 2023-10-18 04:27:44 -07:00
v0xie 1c6efdbba7 inference working but SLOW 2023-10-18 04:16:01 -07:00
v0xie ec718f76b5 wip incorrect OFT implementation 2023-10-17 23:35:50 -07:00
w-e-w 74b80e7211 add comment 2023-09-12 09:29:07 +09:00
w-e-w e785402b6a return nothing if not found 2023-09-11 19:37:55 +09:00
w-e-w f5959c1c30 thread safe extra network using list 2023-09-09 17:05:50 +09:00
w-e-w 25de9a785c Revert "thread safe extra network list_items"
This reverts commit aab385d01b.
2023-09-09 16:56:19 +09:00
w-e-w aab385d01b thread safe extra network list_items 2023-09-03 11:56:02 +09:00
43 changed files with 1147 additions and 140 deletions
+1 -1
@@ -20,7 +20,7 @@ jobs:
# not to have GHA download an (at the time of writing) 4 GB cache
# of PyTorch and other dependencies.
- name: Install Ruff
-        run: pip install ruff==0.0.272
+        run: pip install ruff==0.1.6
- name: Run Ruff
run: ruff .
lint-js:
+5 -2
@@ -121,7 +121,9 @@ Alternatively, use online services (like Google Colab):
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
-sudo dnf install wget git python3
+sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3
```
@@ -147,7 +149,7 @@ For the purposes of getting Google and other search engines to crawl the wiki, h
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
-- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
+- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
@@ -174,5 +176,6 @@ Licenses for borrowed code can be found in `Settings -> Licenses` screen, and al
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Hypertile - tfernd - https://github.com/tfernd/HyperTile
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
+47
@@ -19,3 +19,50 @@ def rebuild_cp_decomposition(up, down, mid):
    up = up.reshape(up.size(0), -1)
    down = down.reshape(down.size(0), -1)
    return torch.einsum('n m k l, i n, m j -> i j k l', mid, up, down)
# copied from https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/lokr.py
def factorization(dimension: int, factor: int = -1) -> tuple[int, int]:
    '''
    return a tuple of two values of the input dimension, decomposed by the number closest to factor;
    the second value is greater than or equal to the first value.
    In LoRA with Kronecker Product, the first value is a value for weight scale,
    the second value is a value for weight.
    Because of the non-commutative property, A⊗B ≠ B⊗A, so the meaning of the two matrices is slightly different.
    examples)
    factor
        -1               2                4               8               16
    127 -> 1, 127   127 -> 1, 127    127 -> 1, 127   127 -> 1, 127   127 -> 1, 127
    128 -> 8, 16    128 -> 2, 64     128 -> 4, 32    128 -> 8, 16    128 -> 8, 16
    250 -> 10, 25   250 -> 2, 125    250 -> 2, 125   250 -> 5, 50    250 -> 10, 25
    360 -> 8, 45    360 -> 2, 180    360 -> 4, 90    360 -> 8, 45    360 -> 12, 30
    512 -> 16, 32   512 -> 2, 256    512 -> 4, 128   512 -> 8, 64    512 -> 16, 32
    1024 -> 32, 32  1024 -> 2, 512   1024 -> 4, 256  1024 -> 8, 128  1024 -> 16, 64
    '''
    if factor > 0 and (dimension % factor) == 0:
        m = factor
        n = dimension // factor
        if m > n:
            n, m = m, n
        return m, n
    if factor < 0:
        factor = dimension
    m, n = 1, dimension
    length = m + n
    while m < n:
        new_m = m + 1
        while dimension % new_m != 0:
            new_m += 1
        new_n = dimension // new_m
        if new_m + new_n > length or new_m > factor:
            break
        else:
            m, n = new_m, new_n
    if m > n:
        n, m = m, n
    return m, n
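A few calls matching the docstring table above (expected values taken from it):

print(factorization(128, -1))   # (8, 16)
print(factorization(512, 4))    # (4, 128)
print(factorization(1024, -1))  # (32, 32)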
+97
@@ -0,0 +1,97 @@
import torch
import network
from lyco_helpers import factorization
from einops import rearrange
class ModuleTypeOFT(network.ModuleType):
    def create_module(self, net: network.Network, weights: network.NetworkWeights):
        if all(x in weights.w for x in ["oft_blocks"]) or all(x in weights.w for x in ["oft_diag"]):
            return NetworkModuleOFT(net, weights)

        return None
# Supports both kohya-ss' implementation of COFT https://github.com/kohya-ss/sd-scripts/blob/main/networks/oft.py
# and KohakuBlueleaf's implementation of OFT/COFT https://github.com/KohakuBlueleaf/LyCORIS/blob/dev/lycoris/modules/diag_oft.py
class NetworkModuleOFT(network.NetworkModule):
    def __init__(self, net: network.Network, weights: network.NetworkWeights):
        super().__init__(net, weights)

        self.lin_module = None
        self.org_module: list[torch.nn.Module] = [self.sd_module]

        # kohya-ss
        if "oft_blocks" in weights.w.keys():
            self.is_kohya = True
            self.oft_blocks = weights.w["oft_blocks"]  # (num_blocks, block_size, block_size)
            self.alpha = weights.w["alpha"]  # alpha is constraint
            self.dim = self.oft_blocks.shape[0]  # lora dim
        # LyCORIS
        elif "oft_diag" in weights.w.keys():
            self.is_kohya = False
            self.oft_blocks = weights.w["oft_diag"]
            # self.alpha is unused
            self.dim = self.oft_blocks.shape[1]  # (num_blocks, block_size, block_size)

        is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear]
        is_conv = type(self.sd_module) in [torch.nn.Conv2d]
        is_other_linear = type(self.sd_module) in [torch.nn.MultiheadAttention]  # unsupported

        if is_linear:
            self.out_dim = self.sd_module.out_features
        elif is_conv:
            self.out_dim = self.sd_module.out_channels
        elif is_other_linear:
            self.out_dim = self.sd_module.embed_dim

        if self.is_kohya:
            self.constraint = self.alpha * self.out_dim
            self.num_blocks = self.dim
            self.block_size = self.out_dim // self.dim
        else:
            self.constraint = None
            self.block_size, self.num_blocks = factorization(self.out_dim, self.dim)

    def calc_updown_kb(self, orig_weight, multiplier):
        oft_blocks = self.oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)
        oft_blocks = oft_blocks - oft_blocks.transpose(1, 2)  # ensure skew-symmetric orthogonal matrix

        R = oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)
        R = R * multiplier + torch.eye(self.block_size, device=orig_weight.device)

        # This errors out for MultiheadAttention, might need to be handled up-stream
        merged_weight = rearrange(orig_weight, '(k n) ... -> k n ...', k=self.num_blocks, n=self.block_size)
        merged_weight = torch.einsum(
            'k n m, k n ... -> k m ...',
            R,
            merged_weight
        )
        merged_weight = rearrange(merged_weight, 'k m ... -> (k m) ...')

        updown = merged_weight.to(orig_weight.device, dtype=orig_weight.dtype) - orig_weight
        output_shape = orig_weight.shape
        return self.finalize_updown(updown, orig_weight, output_shape)

    def calc_updown(self, orig_weight):
        # if alpha is a very small number as in coft, calc_scale() will return an almost-zero number, so we ignore it
        multiplier = self.multiplier()
        return self.calc_updown_kb(orig_weight, multiplier)

    # override to remove the multiplier/scale factor; it's already multiplied in get_weight
    def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
        if self.bias is not None:
            updown = updown.reshape(self.bias.shape)
            updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
            updown = updown.reshape(output_shape)

        if len(output_shape) == 4:
            updown = updown.reshape(output_shape)

        if orig_weight.size().numel() == updown.size().numel():
            updown = updown.reshape(orig_weight.shape)

        if ex_bias is not None:
            ex_bias = ex_bias * self.multiplier()

        return updown, ex_bias
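A hedged, stand-alone sketch of the core update in calc_updown_kb above: make each block skew-symmetric, blend it with the identity, and rotate the weight block-wise. Shapes and values are illustrative, not the module itself:

import torch
from einops import rearrange

num_blocks, block_size = 4, 8
orig_weight = torch.randn(num_blocks * block_size, 16)
oft_blocks = torch.randn(num_blocks, block_size, block_size) * 0.01

Q = oft_blocks - oft_blocks.transpose(1, 2)              # skew-symmetric blocks
R = torch.eye(block_size) + 1.0 * Q                      # multiplier = 1.0
w = rearrange(orig_weight, '(k n) i -> k n i', k=num_blocks, n=block_size)
w = torch.einsum('k n m, k n i -> k m i', R, w)          # rotate each block
updown = rearrange(w, 'k m i -> (k m) i') - orig_weight  # delta handed to finalize_updown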
+13
@@ -11,6 +11,7 @@ import network_ia3
import network_lokr
import network_full
import network_norm
import network_oft
import torch
from typing import Union
@@ -28,6 +29,7 @@ module_types = [
    network_full.ModuleTypeFull(),
    network_norm.ModuleTypeNorm(),
    network_glora.ModuleTypeGLora(),
+   network_oft.ModuleTypeOFT(),
]
@@ -189,6 +191,17 @@ def load_network(name, network_on_disk):
key = key_network_without_network_parts.replace("lora_te1_text_model", "transformer_text_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
# kohya_ss OFT module
elif sd_module is None and "oft_unet" in key_network_without_network_parts:
key = key_network_without_network_parts.replace("oft_unet", "diffusion_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
# KohakuBlueLeaf OFT module
if sd_module is None and "oft_diag" in key:
key = key_network_without_network_parts.replace("lora_unet", "diffusion_model")
key = key_network_without_network_parts.replace("lora_te1_text_model", "0_transformer_text_model")
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
if sd_module is None:
keys_failed_to_match[key_network] = key
continue
@@ -17,6 +17,8 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
    def create_item(self, name, index=None, enable_filter=True):
        lora_on_disk = networks.available_networks.get(name)

+       if lora_on_disk is None:
+           return

        path, ext = os.path.splitext(lora_on_disk.filename)
@@ -66,9 +68,10 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
        return item

    def list_items(self):
-       for index, name in enumerate(networks.available_networks):
+       # instantiate a list to protect against concurrent modification
+       names = list(networks.available_networks)
+       for index, name in enumerate(names):
            item = self.create_item(name, index)
            if item is not None:
                yield item
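Why the snapshot matters, in isolation (illustrative, not webui code): iterating a dict that another thread mutates can raise "dictionary changed size during iteration", while iterating a list() copy of its keys does not:

import threading

d = {i: i for i in range(100000)}

def writer():
    for i in range(100000, 200000):
        d[i] = i

t = threading.Thread(target=writer)
t.start()
for name in list(d):  # snapshot first, then iterate safely
    _ = name
t.join()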
+345
@@ -0,0 +1,345 @@
"""
Hypertile module for splitting attention layers in SD-1.5 U-Net and SD-1.5 VAE
Warning: the patch works well only if the input image's width and height are multiples of 128
Original author: @tfernd Github: https://github.com/tfernd/HyperTile
"""
from __future__ import annotations
import functools
from dataclasses import dataclass
from typing import Callable
from functools import wraps, cache
import math
import torch.nn as nn
import random
from einops import rearrange
@dataclass
class HypertileParams:
    depth = 0
    layer_name = ""
    tile_size: int = 0
    swap_size: int = 0
    aspect_ratio: float = 1.0
    forward = None
    enabled = False
# TODO add SD-XL layers
DEPTH_LAYERS = {
0: [
# SD 1.5 U-Net (diffusers)
"down_blocks.0.attentions.0.transformer_blocks.0.attn1",
"down_blocks.0.attentions.1.transformer_blocks.0.attn1",
"up_blocks.3.attentions.0.transformer_blocks.0.attn1",
"up_blocks.3.attentions.1.transformer_blocks.0.attn1",
"up_blocks.3.attentions.2.transformer_blocks.0.attn1",
# SD 1.5 U-Net (ldm)
"input_blocks.1.1.transformer_blocks.0.attn1",
"input_blocks.2.1.transformer_blocks.0.attn1",
"output_blocks.9.1.transformer_blocks.0.attn1",
"output_blocks.10.1.transformer_blocks.0.attn1",
"output_blocks.11.1.transformer_blocks.0.attn1",
# SD 1.5 VAE
"decoder.mid_block.attentions.0",
"decoder.mid.attn_1",
],
1: [
# SD 1.5 U-Net (diffusers)
"down_blocks.1.attentions.0.transformer_blocks.0.attn1",
"down_blocks.1.attentions.1.transformer_blocks.0.attn1",
"up_blocks.2.attentions.0.transformer_blocks.0.attn1",
"up_blocks.2.attentions.1.transformer_blocks.0.attn1",
"up_blocks.2.attentions.2.transformer_blocks.0.attn1",
# SD 1.5 U-Net (ldm)
"input_blocks.4.1.transformer_blocks.0.attn1",
"input_blocks.5.1.transformer_blocks.0.attn1",
"output_blocks.6.1.transformer_blocks.0.attn1",
"output_blocks.7.1.transformer_blocks.0.attn1",
"output_blocks.8.1.transformer_blocks.0.attn1",
],
2: [
# SD 1.5 U-Net (diffusers)
"down_blocks.2.attentions.0.transformer_blocks.0.attn1",
"down_blocks.2.attentions.1.transformer_blocks.0.attn1",
"up_blocks.1.attentions.0.transformer_blocks.0.attn1",
"up_blocks.1.attentions.1.transformer_blocks.0.attn1",
"up_blocks.1.attentions.2.transformer_blocks.0.attn1",
# SD 1.5 U-Net (ldm)
"input_blocks.7.1.transformer_blocks.0.attn1",
"input_blocks.8.1.transformer_blocks.0.attn1",
"output_blocks.3.1.transformer_blocks.0.attn1",
"output_blocks.4.1.transformer_blocks.0.attn1",
"output_blocks.5.1.transformer_blocks.0.attn1",
],
3: [
# SD 1.5 U-Net (diffusers)
"mid_block.attentions.0.transformer_blocks.0.attn1",
# SD 1.5 U-Net (ldm)
"middle_block.1.transformer_blocks.0.attn1",
],
}
# XL layers, thanks to GitHub @gel-crabs for the help
DEPTH_LAYERS_XL = {
0: [
# SDXL U-Net (diffusers)
"down_blocks.0.attentions.0.transformer_blocks.0.attn1",
"down_blocks.0.attentions.1.transformer_blocks.0.attn1",
"up_blocks.3.attentions.0.transformer_blocks.0.attn1",
"up_blocks.3.attentions.1.transformer_blocks.0.attn1",
"up_blocks.3.attentions.2.transformer_blocks.0.attn1",
# SDXL U-Net (ldm)
"input_blocks.4.1.transformer_blocks.0.attn1",
"input_blocks.5.1.transformer_blocks.0.attn1",
"output_blocks.3.1.transformer_blocks.0.attn1",
"output_blocks.4.1.transformer_blocks.0.attn1",
"output_blocks.5.1.transformer_blocks.0.attn1",
# SDXL VAE
"decoder.mid_block.attentions.0",
"decoder.mid.attn_1",
],
1: [
# SDXL U-Net (diffusers)
#"down_blocks.1.attentions.0.transformer_blocks.0.attn1",
#"down_blocks.1.attentions.1.transformer_blocks.0.attn1",
#"up_blocks.2.attentions.0.transformer_blocks.0.attn1",
#"up_blocks.2.attentions.1.transformer_blocks.0.attn1",
#"up_blocks.2.attentions.2.transformer_blocks.0.attn1",
# SDXL U-Net (ldm)
"input_blocks.4.1.transformer_blocks.1.attn1",
"input_blocks.5.1.transformer_blocks.1.attn1",
"output_blocks.3.1.transformer_blocks.1.attn1",
"output_blocks.4.1.transformer_blocks.1.attn1",
"output_blocks.5.1.transformer_blocks.1.attn1",
"input_blocks.7.1.transformer_blocks.0.attn1",
"input_blocks.8.1.transformer_blocks.0.attn1",
"output_blocks.0.1.transformer_blocks.0.attn1",
"output_blocks.1.1.transformer_blocks.0.attn1",
"output_blocks.2.1.transformer_blocks.0.attn1",
"input_blocks.7.1.transformer_blocks.1.attn1",
"input_blocks.8.1.transformer_blocks.1.attn1",
"output_blocks.0.1.transformer_blocks.1.attn1",
"output_blocks.1.1.transformer_blocks.1.attn1",
"output_blocks.2.1.transformer_blocks.1.attn1",
"input_blocks.7.1.transformer_blocks.2.attn1",
"input_blocks.8.1.transformer_blocks.2.attn1",
"output_blocks.0.1.transformer_blocks.2.attn1",
"output_blocks.1.1.transformer_blocks.2.attn1",
"output_blocks.2.1.transformer_blocks.2.attn1",
"input_blocks.7.1.transformer_blocks.3.attn1",
"input_blocks.8.1.transformer_blocks.3.attn1",
"output_blocks.0.1.transformer_blocks.3.attn1",
"output_blocks.1.1.transformer_blocks.3.attn1",
"output_blocks.2.1.transformer_blocks.3.attn1",
"input_blocks.7.1.transformer_blocks.4.attn1",
"input_blocks.8.1.transformer_blocks.4.attn1",
"output_blocks.0.1.transformer_blocks.4.attn1",
"output_blocks.1.1.transformer_blocks.4.attn1",
"output_blocks.2.1.transformer_blocks.4.attn1",
"input_blocks.7.1.transformer_blocks.5.attn1",
"input_blocks.8.1.transformer_blocks.5.attn1",
"output_blocks.0.1.transformer_blocks.5.attn1",
"output_blocks.1.1.transformer_blocks.5.attn1",
"output_blocks.2.1.transformer_blocks.5.attn1",
"input_blocks.7.1.transformer_blocks.6.attn1",
"input_blocks.8.1.transformer_blocks.6.attn1",
"output_blocks.0.1.transformer_blocks.6.attn1",
"output_blocks.1.1.transformer_blocks.6.attn1",
"output_blocks.2.1.transformer_blocks.6.attn1",
"input_blocks.7.1.transformer_blocks.7.attn1",
"input_blocks.8.1.transformer_blocks.7.attn1",
"output_blocks.0.1.transformer_blocks.7.attn1",
"output_blocks.1.1.transformer_blocks.7.attn1",
"output_blocks.2.1.transformer_blocks.7.attn1",
"input_blocks.7.1.transformer_blocks.8.attn1",
"input_blocks.8.1.transformer_blocks.8.attn1",
"output_blocks.0.1.transformer_blocks.8.attn1",
"output_blocks.1.1.transformer_blocks.8.attn1",
"output_blocks.2.1.transformer_blocks.8.attn1",
"input_blocks.7.1.transformer_blocks.9.attn1",
"input_blocks.8.1.transformer_blocks.9.attn1",
"output_blocks.0.1.transformer_blocks.9.attn1",
"output_blocks.1.1.transformer_blocks.9.attn1",
"output_blocks.2.1.transformer_blocks.9.attn1",
],
2: [
# SDXL U-Net (diffusers)
"mid_block.attentions.0.transformer_blocks.0.attn1",
# SDXL U-Net (ldm)
"middle_block.1.transformer_blocks.0.attn1",
"middle_block.1.transformer_blocks.1.attn1",
"middle_block.1.transformer_blocks.2.attn1",
"middle_block.1.transformer_blocks.3.attn1",
"middle_block.1.transformer_blocks.4.attn1",
"middle_block.1.transformer_blocks.5.attn1",
"middle_block.1.transformer_blocks.6.attn1",
"middle_block.1.transformer_blocks.7.attn1",
"middle_block.1.transformer_blocks.8.attn1",
"middle_block.1.transformer_blocks.9.attn1",
],
3 : [] # TODO - separate layers for SD-XL
}
RNG_INSTANCE = random.Random()
def random_divisor(value: int, min_value: int, /, max_options: int = 1) -> int:
    """
    Returns a random divisor x of value such that x * min_value <= value.
    If max_options is 1, the behavior is deterministic.
    """
    min_value = min(min_value, value)

    # All big divisors of value (inclusive)
    divisors = [i for i in range(min_value, value + 1) if value % i == 0]  # divisors in small -> big order

    ns = [value // i for i in divisors[:max_options]]  # has at least 1 element; big -> small order

    idx = RNG_INSTANCE.randint(0, len(ns) - 1)

    return ns[idx]


def set_hypertile_seed(seed: int) -> None:
    RNG_INSTANCE.seed(seed)
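Worked example (deterministic case): the divisors of 64 that are >= 16 are [16, 32, 64]; with max_options=1 only the smallest is considered, so the result is 64 // 16:

set_hypertile_seed(0)
print(random_divisor(64, 16, max_options=1))  # 4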
@functools.cache
def largest_tile_size_available(width: int, height: int) -> int:
    """
    Calculates the largest tile size available for a given width and height.
    Tile size is always a power of 2.
    """
    gcd = math.gcd(width, height)
    largest_tile_size_available = 1
    while gcd % (largest_tile_size_available * 2) == 0:
        largest_tile_size_available *= 2
    return largest_tile_size_available
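Example: gcd(1024, 768) = 256, which is itself a power of two:

print(largest_tile_size_available(1024, 768))  # 256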
def iterative_closest_divisors(hw: int, aspect_ratio: float) -> tuple[int, int]:
    """
    Finds h and w such that h*w = hw and h/w = aspect_ratio.
    We check all possible divisors of hw and return the closest to the aspect ratio.
    """
    divisors = [i for i in range(2, hw + 1) if hw % i == 0]  # all divisors of hw
    pairs = [(i, hw // i) for i in divisors]  # all pairs of divisors of hw
    ratios = [w / h for h, w in pairs]  # all ratios of pairs of divisors of hw
    closest_ratio = min(ratios, key=lambda x: abs(x - aspect_ratio))  # closest ratio to aspect_ratio
    closest_pair = pairs[ratios.index(closest_ratio)]  # closest pair of divisors to aspect_ratio
    return closest_pair
@cache
def find_hw_candidates(hw: int, aspect_ratio: float) -> tuple[int, int]:
    """
    Finds h and w such that h*w = hw and h/w = aspect_ratio.
    """
    h, w = round(math.sqrt(hw * aspect_ratio)), round(math.sqrt(hw / aspect_ratio))
    # find h and w such that h*w = hw and h/w = aspect_ratio
    if h * w != hw:
        w_candidate = hw / h
        # check if w is an integer
        if not w_candidate.is_integer():
            h_candidate = hw / w
            # check if h is an integer
            if not h_candidate.is_integer():
                return iterative_closest_divisors(hw, aspect_ratio)
            else:
                h = int(h_candidate)
        else:
            w = int(w_candidate)
    return h, w
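Example: a 64x96 latent flattened to hw = 6144 with aspect_ratio 64/96 round-trips exactly:

print(find_hw_candidates(6144, 64 / 96))  # (64, 96)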
def self_attn_forward(params: HypertileParams, scale_depth=True) -> Callable:
    @wraps(params.forward)
    def wrapper(*args, **kwargs):
        if not params.enabled:
            return params.forward(*args, **kwargs)

        latent_tile_size = max(128, params.tile_size) // 8
        x = args[0]

        # VAE
        if x.ndim == 4:
            b, c, h, w = x.shape

            nh = random_divisor(h, latent_tile_size, params.swap_size)
            nw = random_divisor(w, latent_tile_size, params.swap_size)

            if nh * nw > 1:
                x = rearrange(x, "b c (nh h) (nw w) -> (b nh nw) c h w", nh=nh, nw=nw)  # split into nh * nw tiles

            out = params.forward(x, *args[1:], **kwargs)

            if nh * nw > 1:
                out = rearrange(out, "(b nh nw) c h w -> b c (nh h) (nw w)", nh=nh, nw=nw)

        # U-Net
        else:
            hw: int = x.size(1)
            h, w = find_hw_candidates(hw, params.aspect_ratio)
            assert h * w == hw, f"Invalid aspect ratio {params.aspect_ratio} for input of shape {x.shape}, hw={hw}, h={h}, w={w}"

            factor = 2 ** params.depth if scale_depth else 1
            nh = random_divisor(h, latent_tile_size * factor, params.swap_size)
            nw = random_divisor(w, latent_tile_size * factor, params.swap_size)

            if nh * nw > 1:
                x = rearrange(x, "b (nh h nw w) c -> (b nh nw) (h w) c", h=h // nh, w=w // nw, nh=nh, nw=nw)

            out = params.forward(x, *args[1:], **kwargs)

            if nh * nw > 1:
                out = rearrange(out, "(b nh nw) hw c -> b nh nw hw c", nh=nh, nw=nw)
                out = rearrange(out, "b nh nw (h w) c -> b (nh h nw w) c", h=h // nh, w=w // nw)

        return out

    return wrapper
def hypertile_hook_model(model: nn.Module, width, height, *, enable=False, tile_size_max=128, swap_size=1, max_depth=3, is_sdxl=False):
    hypertile_layers = getattr(model, "__webui_hypertile_layers", None)
    if hypertile_layers is None:
        if not enable:
            return

        hypertile_layers = {}
        layers = DEPTH_LAYERS_XL if is_sdxl else DEPTH_LAYERS

        for depth in range(4):
            for layer_name, module in model.named_modules():
                if any(layer_name.endswith(try_name) for try_name in layers[depth]):
                    params = HypertileParams()
                    module.__webui_hypertile_params = params
                    params.forward = module.forward
                    params.depth = depth
                    params.layer_name = layer_name
                    module.forward = self_attn_forward(params)

                    hypertile_layers[layer_name] = 1

        model.__webui_hypertile_layers = hypertile_layers

    aspect_ratio = width / height
    tile_size = min(largest_tile_size_available(width, height), tile_size_max)

    for layer_name, module in model.named_modules():
        if layer_name in hypertile_layers:
            params = module.__webui_hypertile_params

            params.tile_size = tile_size
            params.swap_size = swap_size
            params.aspect_ratio = aspect_ratio
            params.enabled = enable and params.depth <= max_depth
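Hedged usage sketch: "unet" below is an assumed, already-loaded SD U-Net module; the call wraps matching attention layers on first use and retunes the per-layer params on every subsequent call:

hypertile_hook_model(unet, 1024, 1024, enable=True, tile_size_max=128, swap_size=1, max_depth=3, is_sdxl=False)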
@@ -0,0 +1,73 @@
import hypertile
from modules import scripts, script_callbacks, shared
class ScriptHypertile(scripts.Script):
    name = "Hypertile"

    def title(self):
        return self.name

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def process(self, p, *args):
        hypertile.set_hypertile_seed(p.all_seeds[0])

        configure_hypertile(p.width, p.height, enable_unet=shared.opts.hypertile_enable_unet)

    def before_hr(self, p, *args):
        configure_hypertile(p.hr_upscale_to_x, p.hr_upscale_to_y, enable_unet=shared.opts.hypertile_enable_unet_secondpass or shared.opts.hypertile_enable_unet)


def configure_hypertile(width, height, enable_unet=True):
    hypertile.hypertile_hook_model(
        shared.sd_model.first_stage_model,
        width,
        height,
        swap_size=shared.opts.hypertile_swap_size_vae,
        max_depth=shared.opts.hypertile_max_depth_vae,
        tile_size_max=shared.opts.hypertile_max_tile_vae,
        enable=shared.opts.hypertile_enable_vae,
    )

    hypertile.hypertile_hook_model(
        shared.sd_model.model,
        width,
        height,
        swap_size=shared.opts.hypertile_swap_size_unet,
        max_depth=shared.opts.hypertile_max_depth_unet,
        tile_size_max=shared.opts.hypertile_max_tile_unet,
        enable=enable_unet,
        is_sdxl=shared.sd_model.is_sdxl
    )


def on_ui_settings():
    import gradio as gr

    options = {
        "hypertile_explanation": shared.OptionHTML("""
            <a href='https://github.com/tfernd/HyperTile'>Hypertile</a> optimizes the self-attention layer within U-Net and VAE models,
            resulting in a reduction in computation time ranging from 1 to 4 times. The larger the generated image is, the greater the
            benefit.
        """),

        "hypertile_enable_unet": shared.OptionInfo(False, "Enable Hypertile U-Net").info("noticeable change in details of the generated picture; if enabled, overrides the setting below"),
        "hypertile_enable_unet_secondpass": shared.OptionInfo(False, "Enable Hypertile U-Net for hires fix second pass"),
        "hypertile_max_depth_unet": shared.OptionInfo(3, "Hypertile U-Net max depth", gr.Slider, {"minimum": 0, "maximum": 3, "step": 1}),
        "hypertile_max_tile_unet": shared.OptionInfo(256, "Hypertile U-Net max tile size", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}),
        "hypertile_swap_size_unet": shared.OptionInfo(3, "Hypertile U-Net swap size", gr.Slider, {"minimum": 0, "maximum": 6, "step": 1}),
        "hypertile_enable_vae": shared.OptionInfo(False, "Enable Hypertile VAE").info("minimal change in the generated picture"),
        "hypertile_max_depth_vae": shared.OptionInfo(3, "Hypertile VAE max depth", gr.Slider, {"minimum": 0, "maximum": 3, "step": 1}),
        "hypertile_max_tile_vae": shared.OptionInfo(128, "Hypertile VAE max tile size", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}),
        "hypertile_swap_size_vae": shared.OptionInfo(3, "Hypertile VAE swap size", gr.Slider, {"minimum": 0, "maximum": 6, "step": 1}),
    }

    for name, opt in options.items():
        opt.section = ('hypertile', "Hypertile")
        shared.opts.add_option(name, opt)


script_callbacks.on_ui_settings(on_ui_settings)
+4
@@ -130,6 +130,10 @@ function extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePromp
    } else {
        promptContainer.insertBefore(prompt, promptContainer.firstChild);
    }

+   if (elem) {
+       elem.classList.toggle('extra-page-prompts-active', showNegativePrompt || showPrompt);
+   }
}
+5 -1
@@ -26,7 +26,11 @@ onAfterUiUpdate(function() {
    lastHeadImg = headImg;

    // play notification sound if available
-   gradioApp().querySelector('#audio_notification audio')?.play();
+   const notificationAudio = gradioApp().querySelector('#audio_notification audio');
+   if (notificationAudio) {
+       notificationAudio.volume = opts.notification_volume / 100.0 || 1.0;
+       notificationAudio.play();
+   }

    if (document.hasFocus()) return;
+25
@@ -44,3 +44,28 @@ onUiLoaded(function() {
    buttonShowAllPages.addEventListener("click", settingsShowAllTabs);
});

onOptionsChanged(function() {
    if (gradioApp().querySelector('#settings .settings-category')) return;

    var sectionMap = {};
    gradioApp().querySelectorAll('#settings > div > button').forEach(function(x) {
        sectionMap[x.textContent.trim()] = x;
    });

    opts._categories.forEach(function(x) {
        var section = x[0];
        var category = x[1];

        var span = document.createElement('SPAN');
        span.textContent = category;
        span.className = 'settings-category';

        var sectionElem = sectionMap[section];
        if (!sectionElem) return;

        sectionElem.parentElement.insertBefore(span, sectionElem);
    });
});
+2 -2
@@ -93,8 +93,8 @@ class PydanticModelGenerator:
            d.field: (d.field_type, Field(default=d.field_value, alias=d.field_alias, exclude=d.field_exclude)) for d in self._model_def
        }
        DynamicModel = create_model(self._model_name, **fields)
-       DynamicModel.__config__.allow_population_by_field_name = True
-       DynamicModel.__config__.allow_mutation = True
+       DynamicModel.model_config['populate_by_name'] = True
+       DynamicModel.model_config['frozen'] = True

        return DynamicModel
StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
+1 -1
@@ -32,7 +32,7 @@ def dump_cache():
    with cache_lock:
        cache_filename_tmp = cache_filename + "-"
        with open(cache_filename_tmp, "w", encoding="utf8") as file:
-           json.dump(cache_data, file, indent=4)
+           json.dump(cache_data, file, indent=4, ensure_ascii=False)
        os.replace(cache_filename_tmp, cache_filename)
+16 -2
@@ -6,6 +6,21 @@ import traceback
exception_records = []
def format_traceback(tb):
    return [[f"{x.filename}, line {x.lineno}, {x.name}", x.line] for x in traceback.extract_tb(tb)]


def format_exception(e, tb):
    return {"exception": str(e), "traceback": format_traceback(tb)}


def get_exceptions():
    try:
        return list(reversed(exception_records))
    except Exception as e:
        return str(e)


def record_exception():
    _, e, tb = sys.exc_info()
    if e is None:
@@ -14,8 +29,7 @@ def record_exception():
    if exception_records and exception_records[-1] == e:
        return

-   from modules import sysinfo
-   exception_records.append(sysinfo.format_exception(e, tb))
+   exception_records.append(format_exception(e, tb))

    if len(exception_records) > 5:
        exception_records.pop(0)
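Usage sketch (assumes these helpers live in modules.errors, as the PR title above says):

try:
    1 / 0
except Exception:
    record_exception()

print(get_exceptions()[0]["exception"])  # most recent first: 'division by zero'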
+84 -12
@@ -1,11 +1,14 @@
from __future__ import annotations
import configparser
import os
import threading
import re
from modules import shared, errors, cache, scripts
from modules.gitpython_hack import Repo
from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path # noqa: F401
extensions = []
os.makedirs(extensions_dir, exist_ok=True)
@@ -19,11 +22,55 @@ def active():
return [x for x in extensions if x.enabled]
class ExtensionMetadata:
    filename = "metadata.ini"
    config: configparser.ConfigParser
    canonical_name: str
    requires: list

    def __init__(self, path, canonical_name):
        self.config = configparser.ConfigParser()

        filepath = os.path.join(path, self.filename)
        if os.path.isfile(filepath):
            try:
                self.config.read(filepath)
            except Exception:
                errors.report(f"Error reading {self.filename} for extension {canonical_name}.", exc_info=True)

        canonical_name = self.config.get("Extension", "Name", fallback=canonical_name)
        self.canonical_name = canonical_name.lower().strip()

        self.requires = self.get_script_requirements("Requires", "Extension")

    def get_script_requirements(self, field, section, extra_section=None):
        """reads a list of requirements from the config; field is the name of the field in the ini file,
        like Requires or Before, and section is the name of the [section] in the ini file; additionally,
        reads more requirements from [extra_section] if specified."""

        x = self.config.get(section, field, fallback='')

        if extra_section:
            x = x + ', ' + self.config.get(extra_section, field, fallback='')

        return self.parse_list(x.lower())

    def parse_list(self, text):
        """converts a line from config ("ext1 ext2, ext3 ") into a python list (["ext1", "ext2", "ext3"])"""

        if not text:
            return []

        # both "," and " " are accepted as separator
        return [x for x in re.split(r"[,\s]+", text.strip()) if x]
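The parse_list splitting rule in isolation (self-contained illustration of the separator behavior documented above):

import re

print([x for x in re.split(r"[,\s]+", "ext1 ext2, ext3 ".strip().lower()) if x])
# ['ext1', 'ext2', 'ext3']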
class Extension:
lock = threading.Lock()
cached_fields = ['remote', 'commit_date', 'branch', 'commit_hash', 'version']
metadata: ExtensionMetadata
-   def __init__(self, name, path, enabled=True, is_builtin=False):
+   def __init__(self, name, path, enabled=True, is_builtin=False, metadata=None):
self.name = name
self.path = path
self.enabled = enabled
@@ -36,6 +83,8 @@ class Extension:
self.branch = None
self.remote = None
self.have_info_from_repo = False
        self.metadata = metadata if metadata else ExtensionMetadata(self.path, name.lower())
        self.canonical_name = self.metadata.canonical_name
def to_dict(self):
return {x: getattr(self, x) for x in self.cached_fields}
@@ -56,6 +105,7 @@ class Extension:
self.do_read_info_from_repo()
return self.to_dict()
try:
d = cache.cached_data_for_file('extensions-git', self.name, os.path.join(self.path, ".git"), read_from_repo)
self.from_dict(d)
@@ -136,9 +186,6 @@ class Extension:
def list_extensions():
extensions.clear()
if not os.path.isdir(extensions_dir):
return
if shared.cmd_opts.disable_all_extensions:
print("*** \"--disable-all-extensions\" arg was used, will not load any extensions ***")
elif shared.opts.disable_all_extensions == "all":
@@ -148,18 +195,43 @@ def list_extensions():
elif shared.opts.disable_all_extensions == "extra":
print("*** \"Disable all extensions\" option was set, will only load built-in extensions ***")
extension_paths = []
for dirname in [extensions_dir, extensions_builtin_dir]:
loaded_extensions = {}
# scan through extensions directory and load metadata
for dirname in [extensions_builtin_dir, extensions_dir]:
if not os.path.isdir(dirname):
-           return
+           continue
for extension_dirname in sorted(os.listdir(dirname)):
path = os.path.join(dirname, extension_dirname)
if not os.path.isdir(path):
continue
extension_paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
canonical_name = extension_dirname
metadata = ExtensionMetadata(path, canonical_name)
for dirname, path, is_builtin in extension_paths:
extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
extensions.append(extension)
# check for duplicated canonical names
already_loaded_extension = loaded_extensions.get(metadata.canonical_name)
if already_loaded_extension is not None:
errors.report(f'Duplicate canonical name "{canonical_name}" found in extensions "{extension_dirname}" and "{already_loaded_extension.name}". Former will be discarded.', exc_info=False)
continue
is_builtin = dirname == extensions_builtin_dir
extension = Extension(name=extension_dirname, path=path, enabled=extension_dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin, metadata=metadata)
extensions.append(extension)
loaded_extensions[canonical_name] = extension
# check for requirements
for extension in extensions:
for req in extension.metadata.requires:
required_extension = loaded_extensions.get(req)
if required_extension is None:
errors.report(f'Extension "{extension.name}" requires "{req}" which is not installed.', exc_info=False)
continue
if not extension.enabled:
errors.report(f'Extension "{extension.name}" requires "{required_extension.name}" which is disabled.', exc_info=False)
continue
extensions: list[Extension] = []
+70 -1
@@ -1,3 +1,5 @@
from inspect import signature
from functools import wraps
import gradio as gr
from modules import scripts, ui_tempdir, patches
@@ -64,10 +66,77 @@ def Blocks_get_config_file(self, *args, **kwargs):
return config
original_IOComponent_init = patches.patch(__name__, obj=gr.components.IOComponent, field="__init__", replacement=IOComponent_init)
def gradio_component_compatibility_layer(component_function):
    @wraps(component_function)
    def patched_function(*args, **kwargs):
        original_signature = signature(component_function).parameters
        valid_kwargs = {k: v for k, v in kwargs.items() if k in original_signature}
        result = component_function(*args, **valid_kwargs)
        return result
    return patched_function


sub_events = ['then', 'success']


def gradio_component_events_compatibility_layer(component_function):
    @wraps(component_function)
    def patched_function(*args, **kwargs):
        kwargs['js'] = kwargs.get('js', kwargs.pop('_js', None))
        original_signature = signature(component_function).parameters
        valid_kwargs = {k: v for k, v in kwargs.items() if k in original_signature}
        result = component_function(*args, **valid_kwargs)
        for sub_event in sub_events:
            component_event_then_function = getattr(result, sub_event, None)
            if component_event_then_function:
                patched_component_event_then_function = gradio_component_sub_events_compatibility_layer(component_event_then_function)
                setattr(result, sub_event, patched_component_event_then_function)
                # original_component_event_then_function = patches.patch(f'{__name__}.', obj=result, field='then', replacement=patched_component_event_then_function)
        return result
    return patched_function


def gradio_component_sub_events_compatibility_layer(component_function):
    @wraps(component_function)
    def patched_function(*args, **kwargs):
        kwargs['js'] = kwargs.get('js', kwargs.pop('_js', None))
        original_signature = signature(component_function).parameters
        valid_kwargs = {k: v for k, v in kwargs.items() if k in original_signature}
        result = component_function(*args, **valid_kwargs)
        return result
    return patched_function


for component_name in set(gr.components.__all__ + gr.layouts.__all__):
    try:
        component = getattr(gr, component_name)
        component_init = getattr(component, '__init__')
        patched_component_init = gradio_component_compatibility_layer(component_init)
        original_IOComponent_init = patches.patch(f'{__name__}.{component_name}', obj=component, field="__init__", replacement=patched_component_init)

        component_events = set(getattr(component, 'EVENTS'))
        for component_event in component_events:
            component_event_function = getattr(component, component_event)
            patched_component_event_function = gradio_component_events_compatibility_layer(component_event_function)
            original_component_event_function = patches.patch(f'{__name__}.{component_name}.{component_event}', obj=component, field=component_event, replacement=patched_component_event_function)
    except Exception as e:
        print(e)

gr.Box = gr.Group

original_IOComponent_init = patches.patch(__name__, obj=gr.components.base.Component, field="__init__", replacement=IOComponent_init)
original_Block_get_config = patches.patch(__name__, obj=gr.blocks.Block, field="get_config", replacement=Block_get_config)
original_BlockContext_init = patches.patch(__name__, obj=gr.blocks.BlockContext, field="__init__", replacement=BlockContext_init)
original_Blocks_get_config_file = patches.patch(__name__, obj=gr.blocks.Blocks, field="get_config_file", replacement=Blocks_get_config_file)

ui_tempdir.install_ui_tempdir_override()
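The heart of this compatibility layer is signature-based kwarg filtering; a stand-alone illustration (not webui code):

from inspect import signature

def demo(a, b=1):
    return a + b

allowed = signature(demo).parameters
kwargs = {"b": 2, "legacy_flag": True}  # legacy_flag would crash demo() if passed directly
print(demo(1, **{k: v for k, v in kwargs.items() if k in allowed}))  # 3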
+20 -4
@@ -44,6 +44,8 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
    steps = p.steps
    override_settings = p.override_settings
    sd_model_checkpoint_override = get_closet_checkpoint_match(override_settings.get("sd_model_checkpoint", None))
+   batch_results = None
+   discard_further_results = False

    for i, image in enumerate(images):
        state.job = f"{i+1} out of {len(images)}"
        if state.skipped:
@@ -127,7 +129,21 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
        if proc is None:
            p.override_settings.pop('save_images_replace_action', None)
-           process_images(p)
+           proc = process_images(p)

+       if not discard_further_results and proc:
+           if batch_results:
+               batch_results.images.extend(proc.images)
+               batch_results.infotexts.extend(proc.infotexts)
+           else:
+               batch_results = proc
+
+           if 0 <= shared.opts.img2img_batch_show_results_limit < len(batch_results.images):
+               discard_further_results = True
+               batch_results.images = batch_results.images[:int(shared.opts.img2img_batch_show_results_limit)]
+               batch_results.infotexts = batch_results.infotexts[:int(shared.opts.img2img_batch_show_results_limit)]
+
+   return batch_results
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_name: str, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
@@ -212,10 +228,10 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
with closing(p):
        if is_batch:
            assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"

-           process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
-           processed = Processed(p, [], p.seed, "")
+           processed = process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args, to_scale=selected_scale_tab == 1, scale_by=scale_by, use_png_info=img2img_batch_use_png_info, png_info_props=img2img_batch_png_info_props, png_info_dir=img2img_batch_png_info_dir)
+
+           if processed is None:
+               processed = Processed(p, [], p.seed, "")
else:
processed = modules.scripts.scripts_img2img.run(p, *args)
if processed is None:
+1 -1
@@ -441,7 +441,7 @@ def dump_sysinfo():
    import datetime

    text = sysinfo.get()
-   filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.txt"
+   filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.json"

    with open(filename, "w", encoding="utf8") as file:
        file.write(text)
+26 -1
@@ -1,16 +1,41 @@
import os
import logging
try:
    from tqdm.auto import tqdm

    class TqdmLoggingHandler(logging.Handler):
        def __init__(self, level=logging.INFO):
            super().__init__(level)

        def emit(self, record):
            try:
                msg = self.format(record)
                tqdm.write(msg)
                self.flush()
            except Exception:
                self.handleError(record)

    TQDM_IMPORTED = True
except ImportError:
    # tqdm does not exist before the first launch;
    # it is imported once the UI finishes setting up the environment and reloads.
    TQDM_IMPORTED = False


def setup_logging(loglevel):
    if loglevel is None:
        loglevel = os.environ.get("SD_WEBUI_LOG_LEVEL")

    loghandlers = []

    if TQDM_IMPORTED:
        loghandlers.append(TqdmLoggingHandler())

    if loglevel:
        log_level = getattr(logging, loglevel.upper(), None) or logging.INFO
        logging.basicConfig(
            level=log_level,
            format='%(asctime)s %(levelname)s [%(name)s] %(message)s',
            datefmt='%Y-%m-%d %H:%M:%S',
            handlers=loghandlers
        )
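Usage sketch (module path assumed from the commit messages above, which name logging_config.py):

import logging
from modules import logging_config

logging_config.setup_logging("INFO")
logging.getLogger(__name__).info("records go through tqdm.write(), so active progress bars stay intact")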
+71 -10
@@ -1,5 +1,6 @@
import json
import sys
from dataclasses import dataclass
import gradio as gr
@@ -8,13 +9,14 @@ from modules.shared_cmd_options import cmd_opts
class OptionInfo:
-   def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after='', infotext=None, restrict_api=False):
+   def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after='', infotext=None, restrict_api=False, category_id=None):
        self.default = default
        self.label = label
        self.component = component
        self.component_args = component_args
        self.onchange = onchange
        self.section = section
+       self.category_id = category_id
        self.refresh = refresh
        self.do_not_save = False
@@ -63,7 +65,11 @@ class OptionHTML(OptionInfo):
def options_section(section_identifier, options_dict):
    for v in options_dict.values():
-       v.section = section_identifier
+       if len(section_identifier) == 2:
+           v.section = section_identifier
+       elif len(section_identifier) == 3:
+           v.section = section_identifier[0:2]
+           v.category_id = section_identifier[2]
return options_dict
@@ -76,7 +82,7 @@ class Options:
    def __init__(self, data_labels: dict[str, OptionInfo], restricted_opts):
        self.data_labels = data_labels
-       self.data = {k: v.default for k, v in self.data_labels.items()}
+       self.data = {k: v.default for k, v in self.data_labels.items() if not v.do_not_save}
        self.restricted_opts = restricted_opts
def __setattr__(self, key, value):
@@ -158,7 +164,7 @@ class Options:
        assert not cmd_opts.freeze_settings, "saving settings is disabled"

        with open(filename, "w", encoding="utf8") as file:
-           json.dump(self.data, file, indent=4)
+           json.dump(self.data, file, indent=4, ensure_ascii=False)
def same_type(self, x, y):
if x is None or y is None:
@@ -206,23 +212,59 @@ class Options:
        d = {k: self.data.get(k, v.default) for k, v in self.data_labels.items()}
        d["_comments_before"] = {k: v.comment_before for k, v in self.data_labels.items() if v.comment_before is not None}
        d["_comments_after"] = {k: v.comment_after for k, v in self.data_labels.items() if v.comment_after is not None}

+       item_categories = {}
+       for item in self.data_labels.values():
+           category = categories.mapping.get(item.category_id)
+           category = "Uncategorized" if category is None else category.label
+           if category not in item_categories:
+               item_categories[category] = item.section[1]
+
+       # _categories is a list of pairs: [section, category]. Each section (a setting page) will get a special heading above it with the category as text.
+       d["_categories"] = [[v, k] for k, v in item_categories.items()] + [["Defaults", "Other"]]

        return json.dumps(d)
    def add_option(self, key, info):
        self.data_labels[key] = info
-       if key not in self.data:
+       if key not in self.data and not info.do_not_save:
            self.data[key] = info.default
    def reorder(self):
-       """reorder settings so that all items related to section always go together"""
+       """Reorder settings so that:
+           - all items related to section always go together
+           - all sections belonging to a category go together
+           - sections inside a category are ordered alphabetically
+           - categories are ordered by creation order
+
+       Category is a superset of sections: for category "postprocessing" there could be multiple sections: "face restoration", "upscaling".
+       This function also changes items' category_id so that all items belonging to a section have the same category_id.
+       """
+
+       category_ids = {}
+       section_categories = {}

-       section_ids = {}
        settings_items = self.data_labels.items()
        for _, item in settings_items:
-           if item.section not in section_ids:
-               section_ids[item.section] = len(section_ids)
+           if item.section not in section_categories:
+               section_categories[item.section] = item.category_id

-       self.data_labels = dict(sorted(settings_items, key=lambda x: section_ids[x[1].section]))
+       for _, item in settings_items:
+           item.category_id = section_categories.get(item.section)
+
+       for category_id in categories.mapping:
+           if category_id not in category_ids:
+               category_ids[category_id] = len(category_ids)
+
+       def sort_key(x):
+           item: OptionInfo = x[1]
+           category_order = category_ids.get(item.category_id, len(category_ids))
+           section_order = item.section[1]
+
+           return category_order, section_order
+
+       self.data_labels = dict(sorted(settings_items, key=sort_key))
def cast_value(self, key, value):
"""casts an arbitrary to the same type as this setting's value with key
@@ -245,3 +287,22 @@ class Options:
value = expected_type(value)
return value
@dataclass
class OptionsCategory:
    id: str
    label: str


class OptionsCategories:
    def __init__(self):
        self.mapping = {}

    def register_category(self, category_id, label):
        if category_id in self.mapping:
            return category_id

        self.mapping[category_id] = OptionsCategory(category_id, label)


categories = OptionsCategories()
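Sketch of how the pieces connect: a category id registered here can then appear as the third element of the section tuple handled by options_section above.

categories.register_category("postprocessing", "Postprocessing")
# then, elsewhere:
# options_section(("upscaling", "Upscaling", "postprocessing"), {...})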
+1 -1
@@ -78,7 +78,7 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
        image_data.close()

    devices.torch_gc()
    shared.state.end()

    return outputs, ui_common.plaintext_to_html(infotext), ''
+2 -7
@@ -296,7 +296,7 @@ class StableDiffusionProcessing:
return conditioning
    def edit_image_conditioning(self, source_image):
-       conditioning_image = images_tensor_to_samples(source_image*0.5+0.5, approximation_indexes.get(opts.sd_vae_encode_method))
+       conditioning_image = shared.sd_model.encode_first_stage(source_image).mode()

        return conditioning_image
@@ -799,7 +799,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
infotexts = []
output_images = []
with torch.no_grad(), p.sd_model.ema_scope():
with devices.autocast():
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
@@ -873,7 +872,6 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
else:
if opts.sd_vae_decode_method != 'Full':
p.extra_generation_params['VAE Decoder'] = opts.sd_vae_decode_method
x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
x_samples_ddim = torch.stack(x_samples_ddim).float()
@@ -1147,6 +1145,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
if not self.enable_hr:
return samples
devices.torch_gc()
if self.latent_scale_mode is None:
decoded_samples = torch.stack(decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)).to(dtype=torch.float32)
@@ -1156,8 +1155,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
with sd_models.SkipWritingToConfig():
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
devices.torch_gc()
return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
def sample_hr_pass(self, samples, decoded_samples, seeds, subseeds, subseed_strength, prompts):
@@ -1165,7 +1162,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
return samples
self.is_hr_pass = True
target_width = self.hr_upscale_to_x
target_height = self.hr_upscale_to_y
@@ -1254,7 +1250,6 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
decoded_samples = decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)
self.is_hr_pass = False
return decoded_samples
def close(self):
+1 -1
@@ -110,7 +110,7 @@ class ImageRNG:
self.is_first = True
def first(self):
noise_shape = self.shape if self.seed_resize_from_h <= 0 or self.seed_resize_from_w <= 0 else (self.shape[0], self.seed_resize_from_h // 8, self.seed_resize_from_w // 8)
noise_shape = self.shape if self.seed_resize_from_h <= 0 or self.seed_resize_from_w <= 0 else (self.shape[0], int(self.seed_resize_from_h) // 8, int(self.seed_resize_from_w) // 8)
xs = []
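Presumably the added int() casts guard against resize values arriving as floats (e.g. from an API payload); a float dimension is not a valid tensor shape. A minimal illustration, independent of webui:

# why the int() casts matter; standalone illustration
seed_resize_from_h = 512.0           # assumption: value deserialized as a float
print(seed_resize_from_h // 8)       # 64.0 -- float; torch.randn rejects float dims
print(int(seed_resize_from_h) // 8)  # 64   -- valid shape dimension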
+103 -16
@@ -311,20 +311,113 @@ scripts_data = []
postprocessing_scripts_data = []
ScriptClassData = namedtuple("ScriptClassData", ["script_class", "path", "basedir", "module"])
def topological_sort(dependencies):
"""Accepts a dictionary mapping name to its dependencies, returns a list of names ordered according to dependencies.
Ignores errors relating to missing or circular dependencies
"""
visited = {}
result = []
def inner(name):
visited[name] = True
for dep in dependencies.get(name, []):
if dep in dependencies and dep not in visited:
inner(dep)
result.append(name)
for depname in dependencies:
if depname not in visited:
inner(depname)
return result
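A quick usage sketch of topological_sort as defined above:

# usage sketch: b loads after a, c after b; d's unknown dependency is ignored
print(topological_sort({"a": [], "b": ["a"], "c": ["b"], "d": ["missing"]}))
# -> ['a', 'b', 'c', 'd']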
@dataclass
class ScriptWithDependencies:
script_canonical_name: str
file: ScriptFile
requires: list
load_before: list
load_after: list
def list_scripts(scriptdirname, extension, *, include_extensions=True):
scripts_list = []
scripts = {}
basedir = os.path.join(paths.script_path, scriptdirname)
if os.path.exists(basedir):
for filename in sorted(os.listdir(basedir)):
scripts_list.append(ScriptFile(paths.script_path, filename, os.path.join(basedir, filename)))
loaded_extensions = {ext.canonical_name: ext for ext in extensions.active()}
loaded_extensions_scripts = {ext.canonical_name: [] for ext in extensions.active()}
# build script dependency map
root_script_basedir = os.path.join(paths.script_path, scriptdirname)
if os.path.exists(root_script_basedir):
for filename in sorted(os.listdir(root_script_basedir)):
if not os.path.isfile(os.path.join(root_script_basedir, filename)):
continue
if os.path.splitext(filename)[1].lower() != extension:
continue
script_file = ScriptFile(paths.script_path, filename, os.path.join(root_script_basedir, filename))
scripts[filename] = ScriptWithDependencies(filename, script_file, [], [], [])
if include_extensions:
for ext in extensions.active():
scripts_list += ext.list_files(scriptdirname, extension)
extension_scripts_list = ext.list_files(scriptdirname, extension)
for extension_script in extension_scripts_list:
if not os.path.isfile(extension_script.path):
continue
scripts_list = [x for x in scripts_list if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
script_canonical_name = ("builtin/" if ext.is_builtin else "") + ext.canonical_name + "/" + extension_script.filename
relative_path = scriptdirname + "/" + extension_script.filename
script = ScriptWithDependencies(
script_canonical_name=script_canonical_name,
file=extension_script,
requires=ext.metadata.get_script_requirements("Requires", relative_path, scriptdirname),
load_before=ext.metadata.get_script_requirements("Before", relative_path, scriptdirname),
load_after=ext.metadata.get_script_requirements("After", relative_path, scriptdirname),
)
scripts[script_canonical_name] = script
loaded_extensions_scripts[ext.canonical_name].append(script)
for script_canonical_name, script in scripts.items():
# "Before" is an inverse dependency:
# append this script's name to the load_after list of each script it must precede
for load_before in script.load_before:
# if the target is an individual script, it must load after this one
other_script = scripts.get(load_before)
if other_script:
other_script.load_after.append(script_canonical_name)
# if the target is an extension, every script it provides must load after this one
other_extension_scripts = loaded_extensions_scripts.get(load_before)
if other_extension_scripts:
for other_script in other_extension_scripts:
other_script.load_after.append(script_canonical_name)
# if After mentions an extension, remove it and instead add all of its scripts
for load_after in list(script.load_after):
if load_after not in scripts and load_after in loaded_extensions_scripts:
script.load_after.remove(load_after)
for other_script in loaded_extensions_scripts.get(load_after, []):
script.load_after.append(other_script.script_canonical_name)
dependencies = {}
for script_canonical_name, script in scripts.items():
for required_script in script.requires:
if required_script not in scripts and required_script not in loaded_extensions:
errors.report(f'Script "{script_canonical_name}" requires "{required_script}" to be loaded, but it is not.', exc_info=False)
dependencies[script_canonical_name] = script.load_after
ordered_scripts = topological_sort(dependencies)
scripts_list = [scripts[script_canonical_name].file for script_canonical_name in ordered_scripts]
return scripts_list
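The Before/After bookkeeping above can be sketched in isolation; the script names here are hypothetical:

# standalone sketch of inverting "Before" into load_after edges
scripts = {
    "builtin/ext-a/one.py": {"load_before": ["builtin/ext-b/two.py"], "load_after": []},
    "builtin/ext-b/two.py": {"load_before": [], "load_after": []},
}
for name, script in scripts.items():
    for target in script["load_before"]:
        if target in scripts:  # append our name to the target's load_after list
            scripts[target]["load_after"].append(name)
print(scripts["builtin/ext-b/two.py"]["load_after"])  # -> ['builtin/ext-a/one.py']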
@@ -365,15 +458,9 @@ def load_scripts():
elif issubclass(script_class, scripts_postprocessing.ScriptPostprocessing):
postprocessing_scripts_data.append(ScriptClassData(script_class, scriptfile.path, scriptfile.basedir, module))
def orderby(basedir):
# 1st webui, 2nd extensions-builtin, 3rd extensions
priority = {os.path.join(paths.script_path, "extensions-builtin"):1, paths.script_path:0}
for key in priority:
if basedir.startswith(key):
return priority[key]
return 9999
for scriptfile in sorted(scripts_list, key=lambda x: [orderby(x.basedir), x]):
# scripts_list is already ordered by dependency at this point
# postprocessing scripts are not covered by that ordering, though
for scriptfile in scripts_list:
try:
if scriptfile.basedir != paths.script_path:
sys.path = [scriptfile.basedir] + sys.path
+1 -1
@@ -60,7 +60,7 @@ def restart_sampler(model, x, sigmas, extra_args=None, callback=None, disable=No
sigma_restart = get_sigmas_karras(restart_steps, sigmas[min_idx].item(), sigmas[max_idx].item(), device=sigmas.device)[:-1]
while restart_times > 0:
restart_times -= 1
step_list.extend([(old_sigma, new_sigma) for (old_sigma, new_sigma) in zip(sigma_restart[:-1], sigma_restart[1:])])
step_list.extend(zip(sigma_restart[:-1], sigma_restart[1:]))
last_sigma = None
for old_sigma, new_sigma in tqdm.tqdm(step_list, disable=disable):
+33 -22
@@ -3,7 +3,7 @@ import gradio as gr
from modules import localization, ui_components, shared_items, shared, interrogate, shared_gradio_themes
from modules.paths_internal import models_path, script_path, data_path, sd_configs_path, sd_default_config, sd_model_file, default_sd_model_file, extensions_dir, extensions_builtin_dir # noqa: F401
from modules.shared_cmd_options import cmd_opts
from modules.options import options_section, OptionInfo, OptionHTML
from modules.options import options_section, OptionInfo, OptionHTML, categories
options_templates = {}
hide_dirs = shared.hide_dirs
@@ -21,7 +21,14 @@ restricted_opts = {
"outdir_init_images"
}
options_templates.update(options_section(('saving-images', "Saving images/grids"), {
categories.register_category("saving", "Saving images")
categories.register_category("sd", "Stable Diffusion")
categories.register_category("ui", "User Interface")
categories.register_category("system", "System")
categories.register_category("postprocessing", "Postprocessing")
categories.register_category("training", "Training")
options_templates.update(options_section(('saving-images', "Saving images/grids", "saving"), {
"samples_save": OptionInfo(True, "Always save all generated images"),
"samples_format": OptionInfo('png', 'File format for images'),
"samples_filename_pattern": OptionInfo("", "Images filename pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
@@ -64,9 +71,10 @@ options_templates.update(options_section(('saving-images', "Saving images/grids"
"save_incomplete_images": OptionInfo(False, "Save incomplete images").info("save images that has been interrupted in mid-generation; even if not saved, they will still show up in webui output."),
"notification_audio": OptionInfo(True, "Play notification sound after image generation").info("notification.mp3 should be present in the root directory").needs_reload_ui(),
"notification_volume": OptionInfo(100, "Notification sound volume", gr.Slider, {"minimum": 0, "maximum": 100, "step": 1}).info("in %"),
}))
options_templates.update(options_section(('saving-paths', "Paths for saving"), {
options_templates.update(options_section(('saving-paths', "Paths for saving", "saving"), {
"outdir_samples": OptionInfo("", "Output directory for images; if empty, defaults to three directories below", component_args=hide_dirs),
"outdir_txt2img_samples": OptionInfo("outputs/txt2img-images", 'Output directory for txt2img images', component_args=hide_dirs),
"outdir_img2img_samples": OptionInfo("outputs/img2img-images", 'Output directory for img2img images', component_args=hide_dirs),
@@ -78,7 +86,7 @@ options_templates.update(options_section(('saving-paths', "Paths for saving"), {
"outdir_init_images": OptionInfo("outputs/init-images", "Directory for saving init images when using img2img", component_args=hide_dirs),
}))
options_templates.update(options_section(('saving-to-dirs', "Saving to a directory"), {
options_templates.update(options_section(('saving-to-dirs', "Saving to a directory", "saving"), {
"save_to_dirs": OptionInfo(True, "Save images to a subdirectory"),
"grid_save_to_dirs": OptionInfo(True, "Save grids to a subdirectory"),
"use_save_to_dirs_for_ui": OptionInfo(False, "When using \"Save\" button, save images to a subdirectory"),
@@ -86,21 +94,21 @@ options_templates.update(options_section(('saving-to-dirs', "Saving to a directo
"directories_max_prompt_words": OptionInfo(8, "Max prompt words for [prompt_words] pattern", gr.Slider, {"minimum": 1, "maximum": 20, "step": 1, **hide_dirs}),
}))
options_templates.update(options_section(('upscaling', "Upscaling"), {
options_templates.update(options_section(('upscaling', "Upscaling", "postprocessing"), {
"ESRGAN_tile": OptionInfo(192, "Tile size for ESRGAN upscalers.", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}).info("0 = no tiling"),
"ESRGAN_tile_overlap": OptionInfo(8, "Tile overlap for ESRGAN upscalers.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}).info("Low values = visible seam"),
"realesrgan_enabled_models": OptionInfo(["R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B"], "Select which Real-ESRGAN models to show in the web UI.", gr.CheckboxGroup, lambda: {"choices": shared_items.realesrgan_models_names()}),
"upscaler_for_img2img": OptionInfo(None, "Upscaler for img2img", gr.Dropdown, lambda: {"choices": [x.name for x in shared.sd_upscalers]}),
}))
options_templates.update(options_section(('face-restoration', "Face restoration"), {
options_templates.update(options_section(('face-restoration', "Face restoration", "postprocessing"), {
"face_restoration": OptionInfo(False, "Restore faces", infotext='Face restoration').info("will use a third-party model on generation result to reconstruct faces"),
"face_restoration_model": OptionInfo("CodeFormer", "Face restoration model", gr.Radio, lambda: {"choices": [x.name() for x in shared.face_restorers]}),
"code_former_weight": OptionInfo(0.5, "CodeFormer weight", gr.Slider, {"minimum": 0, "maximum": 1, "step": 0.01}).info("0 = maximum effect; 1 = minimum effect"),
"face_restoration_unload": OptionInfo(False, "Move face restoration model from VRAM into RAM after processing"),
}))
options_templates.update(options_section(('system', "System"), {
options_templates.update(options_section(('system', "System", "system"), {
"auto_launch_browser": OptionInfo("Local", "Automatically open webui in browser on startup", gr.Radio, lambda: {"choices": ["Disable", "Local", "Remote"]}),
"enable_console_prompts": OptionInfo(shared.cmd_opts.enable_console_prompts, "Print prompts to console when generating with txt2img and img2img."),
"show_warnings": OptionInfo(False, "Show warnings in console.").needs_reload_ui(),
@@ -115,13 +123,13 @@ options_templates.update(options_section(('system', "System"), {
"dump_stacks_on_signal": OptionInfo(False, "Print stack traces before exiting the program with ctrl+c."),
}))
options_templates.update(options_section(('API', "API"), {
options_templates.update(options_section(('API', "API", "system"), {
"api_enable_requests": OptionInfo(True, "Allow http:// and https:// URLs for input images in API", restrict_api=True),
"api_forbid_local_requests": OptionInfo(True, "Forbid URLs to local resources", restrict_api=True),
"api_useragent": OptionInfo("", "User agent for requests", restrict_api=True),
}))
options_templates.update(options_section(('training', "Training"), {
options_templates.update(options_section(('training', "Training", "training"), {
"unload_models_when_training": OptionInfo(False, "Move VAE and CLIP to RAM when training if possible. Saves VRAM."),
"pin_memory": OptionInfo(False, "Turn on pin_memory for DataLoader. Makes training slightly faster but can increase memory usage."),
"save_optimizer_state": OptionInfo(False, "Saves Optimizer state as separate *.optim file. Training of embedding or HN can be resumed with the matching optim file."),
@@ -136,7 +144,7 @@ options_templates.update(options_section(('training', "Training"), {
"training_tensorboard_flush_every": OptionInfo(120, "How often, in seconds, to flush the pending tensorboard events and summaries to disk."),
}))
options_templates.update(options_section(('sd', "Stable Diffusion"), {
options_templates.update(options_section(('sd', "Stable Diffusion", "sd"), {
"sd_model_checkpoint": OptionInfo(None, "Stable Diffusion checkpoint", gr.Dropdown, lambda: {"choices": shared_items.list_checkpoint_tiles(shared.opts.sd_checkpoint_dropdown_use_short)}, refresh=shared_items.refresh_checkpoints, infotext='Model hash'),
"sd_checkpoints_limit": OptionInfo(1, "Maximum number of checkpoints loaded at the same time", gr.Slider, {"minimum": 1, "maximum": 10, "step": 1}),
"sd_checkpoints_keep_in_cpu": OptionInfo(True, "Only keep one model on device").info("will keep models other than the currently used one in RAM rather than VRAM"),
@@ -153,14 +161,14 @@ options_templates.update(options_section(('sd', "Stable Diffusion"), {
"hires_fix_refiner_pass": OptionInfo("second pass", "Hires fix: which pass to enable refiner for", gr.Radio, {"choices": ["first pass", "second pass", "both passes"]}, infotext="Hires refiner"),
}))
options_templates.update(options_section(('sdxl', "Stable Diffusion XL"), {
options_templates.update(options_section(('sdxl', "Stable Diffusion XL", "sd"), {
"sdxl_crop_top": OptionInfo(0, "crop top coordinate"),
"sdxl_crop_left": OptionInfo(0, "crop left coordinate"),
"sdxl_refiner_low_aesthetic_score": OptionInfo(2.5, "SDXL low aesthetic score", gr.Number).info("used for refiner model negative prompt"),
"sdxl_refiner_high_aesthetic_score": OptionInfo(6.0, "SDXL high aesthetic score", gr.Number).info("used for refiner model prompt"),
}))
options_templates.update(options_section(('vae', "VAE"), {
options_templates.update(options_section(('vae', "VAE", "sd"), {
"sd_vae_explanation": OptionHTML("""
<abbr title='Variational autoencoder'>VAE</abbr> is a neural network that transforms a standard <abbr title='red/green/blue'>RGB</abbr>
image into latent space representation and back. Latent space representation is what stable diffusion is working on during sampling
@@ -175,7 +183,7 @@ For img2img, VAE is used to process user's input image before the sampling, and
"sd_vae_decode_method": OptionInfo("Full", "VAE type for decode", gr.Radio, {"choices": ["Full", "TAESD"]}, infotext='VAE Decoder').info("method to decode latent to image"),
}))
options_templates.update(options_section(('img2img', "img2img"), {
options_templates.update(options_section(('img2img', "img2img", "sd"), {
"inpainting_mask_weight": OptionInfo(1.0, "Inpainting conditioning mask strength", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext='Conditional mask weight'),
"initial_noise_multiplier": OptionInfo(1.0, "Noise multiplier for img2img", gr.Slider, {"minimum": 0.0, "maximum": 1.5, "step": 0.001}, infotext='Noise multiplier'),
"img2img_extra_noise": OptionInfo(0.0, "Extra noise multiplier for img2img and hires fix", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext='Extra noise').info("0 = disabled (default); should be lower than denoising strength"),
@@ -188,9 +196,10 @@ options_templates.update(options_section(('img2img', "img2img"), {
"img2img_inpaint_sketch_default_brush_color": OptionInfo("#ffffff", "Inpaint sketch initial brush color", ui_components.FormColorPicker, {}).info("default brush color of img2img inpaint sketch").needs_reload_ui(),
"return_mask": OptionInfo(False, "For inpainting, include the greyscale mask in results for web"),
"return_mask_composite": OptionInfo(False, "For inpainting, include masked composite in results for web"),
"img2img_batch_show_results_limit": OptionInfo(32, "Show the first N batch img2img results in UI", gr.Slider, {"minimum": -1, "maximum": 1000, "step": 1}).info('0: disable, -1: show all images. Too many images can cause lag'),
}))
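A hedged sketch of how the -1/0/N semantics of img2img_batch_show_results_limit could be applied; this is not the actual batch code:

# sketch of the limit semantics: -1 = unlimited, 0 = none, N = first N
def limit_results(images, limit):
    if limit == 0:
        return []
    if limit < 0:
        return images
    return images[:limit]

assert limit_results([1, 2, 3], 2) == [1, 2]
assert limit_results([1, 2, 3], -1) == [1, 2, 3]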
options_templates.update(options_section(('optimizations', "Optimizations"), {
options_templates.update(options_section(('optimizations', "Optimizations", "sd"), {
"cross_attention_optimization": OptionInfo("Automatic", "Cross attention optimization", gr.Dropdown, lambda: {"choices": shared_items.cross_attention_optimizations()}),
"s_min_uncond": OptionInfo(0.0, "Negative Guidance minimum sigma", gr.Slider, {"minimum": 0.0, "maximum": 15.0, "step": 0.01}).link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9177").info("skip negative prompt for some steps when the image is almost ready; 0=disable, higher=faster"),
"token_merging_ratio": OptionInfo(0.0, "Token merging ratio", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}, infotext='Token merging ratio').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9256").info("0=disable, higher=faster"),
@@ -201,7 +210,7 @@ options_templates.update(options_section(('optimizations', "Optimizations"), {
"batch_cond_uncond": OptionInfo(True, "Batch cond/uncond").info("do both conditional and unconditional denoising in one batch; uses a bit more VRAM during sampling, but improves speed; previously this was controlled by --always-batch-cond-uncond comandline argument"),
}))
options_templates.update(options_section(('compatibility', "Compatibility"), {
options_templates.update(options_section(('compatibility', "Compatibility", "sd"), {
"use_old_emphasis_implementation": OptionInfo(False, "Use old emphasis implementation. Can be useful to reproduce old seeds."),
"use_old_karras_scheduler_sigmas": OptionInfo(False, "Use old karras scheduler sigmas (0.1 to 10)."),
"no_dpmpp_sde_batch_determinism": OptionInfo(False, "Do not make DPM++ SDE deterministic across different batch sizes."),
@@ -226,7 +235,7 @@ options_templates.update(options_section(('interrogate', "Interrogate"), {
"deepbooru_filter_tags": OptionInfo("", "deepbooru: filter out those tags").info("separate by comma"),
}))
options_templates.update(options_section(('extra_networks', "Extra Networks"), {
options_templates.update(options_section(('extra_networks', "Extra Networks", "sd"), {
"extra_networks_show_hidden_directories": OptionInfo(True, "Show hidden directories").info("directory is hidden if its name starts with \".\"."),
"extra_networks_hidden_models": OptionInfo("When searched", "Show cards for models in hidden directories", gr.Radio, {"choices": ["Always", "When searched", "Never"]}).info('"When searched" option will only show the item when the search string has 4 characters or more'),
"extra_networks_default_multiplier": OptionInfo(1.0, "Default multiplier for extra networks", gr.Slider, {"minimum": 0.0, "maximum": 2.0, "step": 0.01}),
@@ -234,7 +243,7 @@ options_templates.update(options_section(('extra_networks', "Extra Networks"), {
"extra_networks_card_height": OptionInfo(0, "Card height for Extra Networks").info("in pixels"),
"extra_networks_card_text_scale": OptionInfo(1.0, "Card text scale", gr.Slider, {"minimum": 0.0, "maximum": 2.0, "step": 0.01}).info("1 = original size"),
"extra_networks_card_show_desc": OptionInfo(True, "Show description on card"),
"extra_networks_card_order_field": OptionInfo("Name", "Default order field for Extra Networks cards", gr.Dropdown, {"choices": ['Name', 'Date Created', 'Date Modified']}).needs_reload_ui(),
"extra_networks_card_order_field": OptionInfo("Path", "Default order field for Extra Networks cards", gr.Dropdown, {"choices": ['Path', 'Name', 'Date Created', 'Date Modified']}).needs_reload_ui(),
"extra_networks_card_order": OptionInfo("Ascending", "Default order for Extra Networks cards", gr.Dropdown, {"choices": ['Ascending', 'Descending']}).needs_reload_ui(),
"extra_networks_add_text_separator": OptionInfo(" ", "Extra networks separator").info("extra text to add before <...> when adding extra network to prompt"),
"ui_extra_networks_tab_reorder": OptionInfo("", "Extra networks tab order").needs_reload_ui(),
@@ -243,7 +252,7 @@ options_templates.update(options_section(('extra_networks', "Extra Networks"), {
"sd_hypernetwork": OptionInfo("None", "Add hypernetwork to prompt", gr.Dropdown, lambda: {"choices": ["None", *shared.hypernetworks]}, refresh=shared_items.reload_hypernetworks),
}))
options_templates.update(options_section(('ui', "User interface"), {
options_templates.update(options_section(('ui', "User interface", "ui"), {
"localization": OptionInfo("None", "Localization", gr.Dropdown, lambda: {"choices": ["None"] + list(localization.localizations.keys())}, refresh=lambda: localization.list_localizations(cmd_opts.localizations_dir)).needs_reload_ui(),
"gradio_theme": OptionInfo("Default", "Gradio theme", ui_components.DropdownEditable, lambda: {"choices": ["Default"] + shared_gradio_themes.gradio_hf_hub_themes}).info("you can also manually enter any of themes from the <a href='https://huggingface.co/spaces/gradio/theme-gallery'>gallery</a>.").needs_reload_ui(),
"gradio_themes_cache": OptionInfo(True, "Cache gradio themes locally").info("disable to update the selected Gradio theme"),
@@ -272,11 +281,13 @@ options_templates.update(options_section(('ui', "User interface"), {
"hires_fix_show_sampler": OptionInfo(False, "Hires fix: show hires checkpoint and sampler selection").needs_reload_ui(),
"hires_fix_show_prompts": OptionInfo(False, "Hires fix: show hires prompt and negative prompt").needs_reload_ui(),
"disable_token_counters": OptionInfo(False, "Disable prompt token counters").needs_reload_ui(),
"txt2img_settings_accordion": OptionInfo(False, "Settings in txt2img hidden under Accordion").needs_reload_ui(),
"img2img_settings_accordion": OptionInfo(False, "Settings in img2img hidden under Accordion").needs_reload_ui(),
"compact_prompt_box": OptionInfo(False, "Compact prompt layout").info("puts prompt and negative prompt inside the Generate tab, leaving more vertical space for the image on the right").needs_reload_ui(),
}))
options_templates.update(options_section(('infotext', "Infotext"), {
options_templates.update(options_section(('infotext', "Infotext", "ui"), {
"add_model_hash_to_info": OptionInfo(True, "Add model hash to generation information"),
"add_model_name_to_info": OptionInfo(True, "Add model name to generation information"),
"add_user_name_to_info": OptionInfo(False, "Add user name to generation information when authenticated"),
@@ -291,7 +302,7 @@ options_templates.update(options_section(('infotext', "Infotext"), {
}))
options_templates.update(options_section(('ui', "Live previews"), {
options_templates.update(options_section(('ui', "Live previews", "ui"), {
"show_progressbar": OptionInfo(True, "Show progressbar"),
"live_previews_enable": OptionInfo(True, "Show live previews of the created image"),
"live_previews_image_format": OptionInfo("png", "Live preview file format", gr.Radio, {"choices": ["jpeg", "png", "webp"]}),
@@ -304,7 +315,7 @@ options_templates.update(options_section(('ui', "Live previews"), {
"live_preview_fast_interrupt": OptionInfo(False, "Return image with chosen live preview method on interrupt").info("makes interrupts faster"),
}))
options_templates.update(options_section(('sampler-params', "Sampler parameters"), {
options_templates.update(options_section(('sampler-params', "Sampler parameters", "sd"), {
"hide_samplers": OptionInfo([], "Hide samplers in user interface", gr.CheckboxGroup, lambda: {"choices": [x.name for x in shared_items.list_samplers()]}).needs_reload_ui(),
"eta_ddim": OptionInfo(0.0, "Eta for DDIM", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext='Eta DDIM').info("noise multiplier; higher = more unpredictable results"),
"eta_ancestral": OptionInfo(1.0, "Eta for k-diffusion samplers", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext='Eta').info("noise multiplier; currently only applies to ancestral samplers (i.e. Euler a) and SDE samplers"),
@@ -326,7 +337,7 @@ options_templates.update(options_section(('sampler-params', "Sampler parameters"
'uni_pc_lower_order_final': OptionInfo(True, "UniPC lower order final", infotext='UniPC lower order final'),
}))
options_templates.update(options_section(('postprocessing', "Postprocessing"), {
options_templates.update(options_section(('postprocessing', "Postprocessing", "postprocessing"), {
'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
'upscaling_max_images_in_cache': OptionInfo(5, "Maximum number of images in upscaling cache", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
+1 -17
@@ -1,7 +1,6 @@
import json
import os
import sys
import traceback
import platform
import hashlib
@@ -84,7 +83,7 @@ def get_dict():
"Checksum": checksum_token,
"Commandline": get_argv(),
"Torch env info": get_torch_sysinfo(),
"Exceptions": get_exceptions(),
"Exceptions": errors.get_exceptions(),
"CPU": {
"model": platform.processor(),
"count logical": psutil.cpu_count(logical=True),
@@ -104,21 +103,6 @@ def get_dict():
return res
def format_traceback(tb):
return [[f"{x.filename}, line {x.lineno}, {x.name}", x.line] for x in traceback.extract_tb(tb)]
def format_exception(e, tb):
return {"exception": str(e), "traceback": format_traceback(tb)}
def get_exceptions():
try:
return list(reversed(errors.exception_records))
except Exception as e:
return str(e)
def get_environment():
return {k: os.environ[k] for k in sorted(os.environ) if k in environment_whitelist}
+20 -11
@@ -4,6 +4,7 @@ import os
import sys
from functools import reduce
import warnings
from contextlib import ExitStack
import gradio as gr
import gradio.utils
@@ -31,8 +32,8 @@ from modules.generation_parameters_copypaste import image_from_url_text
create_setting_component = ui_settings.create_setting_component
warnings.filterwarnings("default" if opts.show_warnings else "ignore", category=UserWarning)
warnings.filterwarnings("default" if opts.show_gradio_deprecation_warnings else "ignore", category=gr.deprecation.GradioDeprecationWarning)
# warnings.filterwarnings("default" if opts.show_warnings else "ignore", category=UserWarning)
# warnings.filterwarnings("default" if opts.show_gradio_deprecation_warnings else "ignore", category=gr.deprecation.GradioDeprecationWarning)
# this is a fix for Windows users. Without it, javascript files will be served with text/html content-type and the browser will not show any UI
mimetypes.init()
@@ -270,7 +271,11 @@ def create_ui():
extra_tabs.__enter__()
with gr.Tab("Generation", id="txt2img_generation") as txt2img_generation_tab, ResizeHandleRow(equal_height=False):
with gr.Column(variant='compact', elem_id="txt2img_settings"):
with ExitStack() as stack:
if shared.opts.txt2img_settings_accordion:
stack.enter_context(gr.Accordion("Open for Settings", open=False))
stack.enter_context(gr.Column(variant='compact', elem_id="txt2img_settings"))
scripts.scripts_txt2img.prepare_ui()
for category in ordered_ui_categories():
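The ExitStack idiom above enters a context manager only when a condition holds; a minimal standalone sketch:

# minimal sketch of conditionally entering a context with ExitStack
from contextlib import ExitStack, contextmanager

@contextmanager
def accordion():  # stand-in for gr.Accordion(...)
    print("accordion opened")
    yield
    print("accordion closed")

use_accordion = True
with ExitStack() as stack:
    if use_accordion:
        stack.enter_context(accordion())
    print("build the settings column here")  # stand-in for gr.Column(...)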
@@ -489,7 +494,11 @@ def create_ui():
extra_tabs.__enter__()
with gr.Tab("Generation", id="img2img_generation") as img2img_generation_tab, ResizeHandleRow(equal_height=False):
with gr.Column(variant='compact', elem_id="img2img_settings"):
with ExitStack() as stack:
if shared.opts.img2img_settings_accordion:
stack.enter_context(gr.Accordion("Open for Settings", open=False))
stack.enter_context(gr.Column(variant='compact', elem_id="img2img_settings"))
copy_image_buttons = []
copy_image_destinations = {}
@@ -626,12 +635,6 @@ def create_ui():
scale_by.release(**on_change_args)
button_update_resize_to.click(**on_change_args)
# the code below is meant to update the resolution label after the image in the image selection UI has changed.
# as it is now the event keeps firing continuously for inpaint edits, which ruins the page with constant requests.
# I assume this must be a gradio bug and for now we'll just do it for non-inpaint inputs.
for component in [init_img, sketch]:
component.change(fn=lambda: None, _js="updateImg2imgResizeToTextAfterChangingImage", inputs=[], outputs=[], show_progress=False)
tab_scale_to.select(fn=lambda: 0, inputs=[], outputs=[selected_scale_tab])
tab_scale_by.select(fn=lambda: 1, inputs=[], outputs=[selected_scale_tab])
@@ -692,6 +695,12 @@ def create_ui():
if category not in {"accordions"}:
scripts.scripts_img2img.setup_ui_for_section(category)
# the code below is meant to update the resolution label after the image in the image selection UI has changed.
# as it is now the event keeps firing continuously for inpaint edits, which ruins the page with constant requests.
# I assume this must be a gradio bug and for now we'll just do it for non-inpaint inputs.
for component in [init_img, sketch]:
component.change(fn=lambda: None, _js="updateImg2imgResizeToTextAfterChangingImage", inputs=[], outputs=[], show_progress=False)
def select_img2img_tab(tab):
return gr.update(visible=tab in [2, 3, 4]), gr.update(visible=tab == 3),
@@ -1299,7 +1308,7 @@ def setup_ui_api(app):
from fastapi.responses import PlainTextResponse
text = sysinfo.get()
filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.txt"
filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.json"
return PlainTextResponse(text, headers={'Content-Disposition': f'{"attachment" if attachment else "inline"}; filename="{filename}"'})
+1 -1
@@ -65,7 +65,7 @@ def save_config_state(name):
filename = os.path.join(config_states_dir, f"{timestamp}_{name}.json")
print(f"Saving backup of webui/extension state to {filename}.")
with open(filename, "w", encoding="utf-8") as f:
json.dump(current_config_state, f, indent=4)
json.dump(current_config_state, f, indent=4, ensure_ascii=False)
config_states.list_config_states()
new_value = next(iter(config_states.all_config_states.keys()), "Current")
new_choices = ["Current"] + list(config_states.all_config_states.keys())
+6 -2
@@ -279,6 +279,7 @@ class ExtraNetworksPage:
"date_created": int(stat.st_ctime or 0),
"date_modified": int(stat.st_mtime or 0),
"name": pth.name.lower(),
"path": str(pth.parent).lower(),
}
def find_preview(self, path):
@@ -369,6 +370,9 @@ def create_ui(interface: gr.Blocks, unrelated_tabs, tabname):
for page in ui.stored_extra_pages:
with gr.Tab(page.title, elem_id=f"{tabname}_{page.id_page}", elem_classes=["extra-page"]) as tab:
with gr.Column(elem_id=f"{tabname}_{page.id_page}_prompts", elem_classes=["extra-page-prompts"]):
pass
elem_id = f"{tabname}_{page.id_page}_cards_html"
page_elem = gr.HTML('Loading...', elem_id=elem_id)
ui.pages.append(page_elem)
@@ -382,7 +386,7 @@ def create_ui(interface: gr.Blocks, unrelated_tabs, tabname):
related_tabs.append(tab)
edit_search = gr.Textbox('', show_label=False, elem_id=tabname+"_extra_search", elem_classes="search", placeholder="Search...", visible=False, interactive=True)
dropdown_sort = gr.Dropdown(choices=['Name', 'Date Created', 'Date Modified', ], value=shared.opts.extra_networks_card_order_field, elem_id=tabname+"_extra_sort", elem_classes="sort", multiselect=False, visible=False, show_label=False, interactive=True, label=tabname+"_extra_sort_order")
dropdown_sort = gr.Dropdown(choices=['Path', 'Name', 'Date Created', 'Date Modified', ], value=shared.opts.extra_networks_card_order_field, elem_id=tabname+"_extra_sort", elem_classes="sort", multiselect=False, visible=False, show_label=False, interactive=True, label=tabname+"_extra_sort_order")
button_sortorder = ToolButton(switch_values_symbol, elem_id=tabname+"_extra_sortorder", elem_classes=["sortorder"] + ([] if shared.opts.extra_networks_card_order == "Ascending" else ["sortReverse"]), visible=False, tooltip="Invert sort order")
button_refresh = gr.Button('Refresh', elem_id=tabname+"_extra_refresh", visible=False)
checkbox_show_dirs = gr.Checkbox(True, label='Show dirs', elem_id=tabname+"_extra_show_dirs", elem_classes="show-dirs", visible=False)
@@ -399,7 +403,7 @@ def create_ui(interface: gr.Blocks, unrelated_tabs, tabname):
allow_prompt = "true" if page.allow_prompt else "false"
allow_negative_prompt = "true" if page.allow_negative_prompt else "false"
jscode = 'extraNetworksTabSelected("' + tabname + '", "' + f"{tabname}_{page.id_page}" + '", ' + allow_prompt + ', ' + allow_negative_prompt + ');'
jscode = 'extraNetworksTabSelected("' + tabname + '", "' + f"{tabname}_{page.id_page}_prompts" + '", ' + allow_prompt + ', ' + allow_negative_prompt + ');'
tab.select(fn=lambda: [gr.update(visible=True) for _ in tab_controls], _js='function(){ ' + jscode + ' }', inputs=[], outputs=tab_controls, show_progress=False)
+7 -1
@@ -17,6 +17,9 @@ class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):
def create_item(self, name, index=None, enable_filter=True):
checkpoint: sd_models.CheckpointInfo = sd_models.checkpoint_aliases.get(name)
if checkpoint is None:
return
path, ext = os.path.splitext(checkpoint.filename)
return {
"name": checkpoint.name_for_extra,
@@ -32,9 +35,12 @@ class ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):
}
def list_items(self):
# instantiate a list to protect against concurrent modification
names = list(sd_models.checkpoints_list)
for index, name in enumerate(names):
yield self.create_item(name, index)
item = self.create_item(name, index)
if item is not None:
yield item
def allowed_directories_for_previews(self):
return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]
+10 -3
@@ -13,7 +13,10 @@ class ExtraNetworksPageHypernetworks(ui_extra_networks.ExtraNetworksPage):
shared.reload_hypernetworks()
def create_item(self, name, index=None, enable_filter=True):
full_path = shared.hypernetworks[name]
full_path = shared.hypernetworks.get(name)
if full_path is None:
return
path, ext = os.path.splitext(full_path)
sha256 = sha256_from_cache(full_path, f'hypernet/{name}')
shorthash = sha256[0:10] if sha256 else None
@@ -31,8 +34,12 @@ class ExtraNetworksPageHypernetworks(ui_extra_networks.ExtraNetworksPage):
}
def list_items(self):
for index, name in enumerate(shared.hypernetworks):
yield self.create_item(name, index)
# instantiate a list to protect against concurrent modification
names = list(shared.hypernetworks)
for index, name in enumerate(names):
item = self.create_item(name, index)
if item is not None:
yield item
def allowed_directories_for_previews(self):
return [shared.cmd_opts.hypernetwork_dir]
@@ -14,6 +14,8 @@ class ExtraNetworksPageTextualInversion(ui_extra_networks.ExtraNetworksPage):
def create_item(self, name, index=None, enable_filter=True):
embedding = sd_hijack.model_hijack.embedding_db.word_embeddings.get(name)
if embedding is None:
return
path, ext = os.path.splitext(embedding.filename)
return {
@@ -29,8 +31,12 @@ class ExtraNetworksPageTextualInversion(ui_extra_networks.ExtraNetworksPage):
}
def list_items(self):
for index, name in enumerate(sd_hijack.model_hijack.embedding_db.word_embeddings):
yield self.create_item(name, index)
# instantiate a list to protect against concurrent modification
names = list(sd_hijack.model_hijack.embedding_db.word_embeddings)
for index, name in enumerate(names):
item = self.create_item(name, index)
if item is not None:
yield item
def allowed_directories_for_previews(self):
return list(sd_hijack.model_hijack.embedding_db.embedding_dirs)
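The snapshot idiom repeated across these extra-networks pages avoids a RuntimeError when another thread mutates the underlying dict during iteration; a standalone sketch:

# standalone sketch of the snapshot-before-iterate idiom used in list_items
registry = {"a": 1, "b": 2}        # shared dict another thread may mutate

def list_items():
    for name in list(registry):    # snapshot the keys first
        item = registry.get(name)  # the entry may have been removed meanwhile
        if item is not None:
            yield item

print(list(list_items()))  # -> [1, 2]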
+1 -1
@@ -134,7 +134,7 @@ class UserMetadataEditor:
basename, ext = os.path.splitext(filename)
with open(basename + '.json', "w", encoding="utf8") as file:
json.dump(metadata, file, indent=4)
json.dump(metadata, file, indent=4, ensure_ascii=False)
def save_user_metadata(self, name, desc, notes):
user_metadata = self.get_user_metadata(name)
+1 -1
@@ -141,7 +141,7 @@ class UiLoadsave:
def write_to_file(self, current_ui_settings):
with open(self.filename, "w", encoding="utf8") as file:
json.dump(current_ui_settings, file, indent=4)
json.dump(current_ui_settings, file, indent=4, ensure_ascii=False)
def dump_defaults(self):
"""saves default values to a file unless tjhe file is present and there was an error loading default values at start"""
+2 -2
@@ -68,10 +68,10 @@ class UiPromptStyles:
self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f"{tabname}_style_copy", tooltip="Copy main UI prompt to style.")
with gr.Row():
self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3)
self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3, elem_classes=["prompt"])
with gr.Row():
self.neg_prompt = gr.Textbox(label="Negative prompt", show_label=True, elem_id=f"{tabname}_edit_style_neg_prompt", lines=3)
self.neg_prompt = gr.Textbox(label="Negative prompt", show_label=True, elem_id=f"{tabname}_edit_style_neg_prompt", lines=3, elem_classes=["prompt"])
with gr.Row():
self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)
+1 -1
@@ -61,7 +61,7 @@ def save_pil_to_file(self, pil_image, dir=None, format="png"):
def install_ui_tempdir_override():
"""override save to file function so that it also writes PNG info"""
gradio.components.IOComponent.pil_to_temp_file = save_pil_to_file
# gradio.components.IOComponent.pil_to_temp_file = save_pil_to_file
def on_tmpdir_changed():
+1
@@ -16,6 +16,7 @@ exclude = [
ignore = [
"E501", # Line too long
"E721", # Do not compare types, use `isinstance`
"E731", # Do not assign a `lambda` expression, use a `def`
"I001", # Import block is un-sorted or un-formatted
+1 -1
@@ -8,7 +8,7 @@ clean-fid
einops
fastapi>=0.90.1
gfpgan
gradio==3.41.2
gradio==4.7.1
inflection
jsonmerge
kornia
+2 -2
@@ -5,9 +5,9 @@ basicsr==1.4.2
blendmodes==2022
clean-fid==0.1.35
einops==0.4.1
fastapi==0.94.0
fastapi==0.104.1
gfpgan==1.3.8
gradio==3.41.2
gradio==4.7.1
httpcore==0.15
inflection==0.5.1
jsonmerge==1.8.0
+12 -3
@@ -133,9 +133,18 @@ document.addEventListener('keydown', function(e) {
if (isEnter && isModifierKey) {
if (interruptButton.style.display === 'block') {
interruptButton.click();
setTimeout(function() {
generateButton.click();
}, 500);
const callback = (mutationList) => {
for (const mutation of mutationList) {
if (mutation.type === 'attributes' && mutation.attributeName === 'style') {
if (interruptButton.style.display === 'none') {
generateButton.click();
observer.disconnect();
}
}
}
};
const observer = new MutationObserver(callback);
observer.observe(interruptButton, {attributes: true});
} else {
generateButton.click();
}
+19 -2
@@ -462,6 +462,15 @@ div.toprow-compact-tools{
padding: 4px;
}
#settings > div.tab-nav .settings-category{
display: block;
margin: 1em 0 0.25em 0;
font-weight: bold;
text-decoration: underline;
cursor: default;
user-select: none;
}
#settings_result{
height: 1.4em;
margin: 0 1.2em;
@@ -840,8 +849,16 @@ footer {
/* extra networks UI */
.extra-page .prompt{
margin: 0 0 0.5em 0;
.extra-page > div.gap{
gap: 0;
}
.extra-page-prompts{
margin-bottom: 0;
}
.extra-page-prompts.extra-page-prompts-active{
margin-bottom: 1em;
}
.extra-network-cards{
+2 -2
@@ -89,7 +89,7 @@ delimiter="################################################################"
printf "\n%s\n" "${delimiter}"
printf "\e[1m\e[32mInstall script for stable-diffusion + Web UI\n"
printf "\e[1m\e[34mTested on Debian 11 (Bullseye)\e[0m"
printf "\e[1m\e[34mTested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.\e[0m"
printf "\n%s\n" "${delimiter}"
# Do not run as root
@@ -223,7 +223,7 @@ fi
# Try using TCMalloc on Linux
prepare_tcmalloc() {
if [[ "${OSTYPE}" == "linux"* ]] && [[ -z "${NO_TCMALLOC}" ]] && [[ -z "${LD_PRELOAD}" ]]; then
TCMALLOC="$(PATH=/usr/sbin:$PATH ldconfig -p | grep -Po "libtcmalloc(_minimal|)\.so\.\d" | head -n 1)"
TCMALLOC="$(PATH=/sbin:$PATH ldconfig -p | grep -Po "libtcmalloc(_minimal|)\.so\.\d" | head -n 1)"
if [[ ! -z "${TCMALLOC}" ]]; then
echo "Using TCMalloc: ${TCMALLOC}"
export LD_PRELOAD="${TCMALLOC}"