Compare commits


79 Commits

Author SHA1 Message Date
missionfloyd 623ce9ef50 Downgrade Gradio to 4.38.1
ImageEditor color switching is broken on newer versions.
2024-09-08 22:52:10 -06:00
missionfloyd 3e94d2216c Bump Gradio to 4.42.0
Fixes the API docs
2024-09-04 19:46:13 -06:00
missionfloyd 5ad03e6586 Merge branch 'dev' into gradio4 2024-09-04 15:21:47 -06:00
missionfloyd 077d55545e Hide layers menu for inpaint 2024-09-04 15:14:44 -06:00
missionfloyd c6cc80eed7 Don't import Brush separately 2024-09-01 21:38:43 -06:00
missionfloyd d5de55f26c Fix image editor layer menu appearance 2024-04-27 20:58:40 -06:00
missionfloyd 953e12095c Fix extras task id
Fix restore extension config
2024-04-25 22:50:14 -06:00
missionfloyd 26e78a7ee2 Fix img2img parameters 2024-04-25 21:13:54 -06:00
missionfloyd 5e4cfb8bb1 Fix sending images from gallery 2024-04-25 19:39:53 -06:00
missionfloyd e6f46a94ad Merge branch 'gradio4' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into gradio4 2024-04-25 18:36:32 -06:00
missionfloyd ca59516fa1 Copy img2img layers 2024-04-25 18:36:29 -06:00
missionfloyd 2f75ae9f2c Change init_images default back to None 2024-04-25 00:15:13 -06:00
missionfloyd 43e893ce2a Get API working
Docs still doesn't work.
2024-04-24 23:53:01 -06:00
missionfloyd 4e8dfa3af5 Make gallery return images 2024-04-22 00:54:16 -06:00
missionfloyd dcb73d4373 Fix save button 2024-04-21 18:29:33 -06:00
missionfloyd e643abda93 Remove unused import 2024-04-21 18:21:11 -06:00
missionfloyd 8356e6beae Lint 2024-04-21 18:18:47 -06:00
missionfloyd 32281b272e Merge branch 'dev' into gradio4 2024-04-21 18:15:55 -06:00
missionfloyd cd9f740668 Update move_files_to_cache 2024-04-21 17:59:29 -06:00
missionfloyd f805f7384b Lint 2024-04-21 16:58:20 -06:00
missionfloyd 2d8d54def5 Fix empty override settings
Apparently empty dropdowns now return None
2024-04-21 16:55:27 -06:00
missionfloyd 77f9222599 Bump gradio to 4.27.0 2024-04-21 14:50:13 -06:00
catboxanon 492f902454 Fix merge errors 2024-03-24 16:38:02 -04:00
catboxanon 25f636cb3a Merge branch 'dev' into gradio4 2024-03-24 16:26:38 -04:00
AUTOMATIC1111 40e4ca99c5 Merge branch 'dev' into gradio4 2024-03-02 11:32:08 +03:00
AUTOMATIC1111 c167861d91 fix line endings for gradio.js 2024-03-02 08:48:34 +03:00
AUTOMATIC1111 b63dda3f45 linter 2024-03-02 08:43:46 +03:00
AUTOMATIC1111 50699ce112 more fixes for latest gradio 2024-03-02 08:40:06 +03:00
AUTOMATIC1111 f7a3067d2a Merge branch 'dev' into gradio4 2024-03-02 08:27:22 +03:00
AUTOMATIC1111 3ee79332b1 update to latest gradio 2024-03-02 08:25:10 +03:00
w-e-w 43850655d9 remove timestamp from path 2024-02-05 17:57:06 +09:00
AUTOMATIC1111 28899117da Merge pull request #14818 from daswer123/zoom-fix
Zoom & Pan: More fixes for gradio 4
2024-02-02 12:30:44 +03:00
Danil Boldyrev 7af009deb5 moved the part of code to a more suitable place, lint 2024-02-02 02:14:54 +03:00
Danil Boldyrev 9ccdbe2f84 fix caused unnecessary borders when the cursor leaves the drawing area 2024-02-02 01:40:25 +03:00
Danil Boldyrev 02e9c79ec5 Fixed a bug with the cursor size when scrolling
When scrolling there is a bug, a gradio bug, because of which a parameter breaks the cursor size and it becomes bigger, so I made a solution that gets rid of this problem
2024-02-02 01:32:13 +03:00
AUTOMATIC1111 8d2053a2e6 Merge pull request #14816 from daswer123/zoom-fix
Zoom & Pan: fix for gradio 4
2024-02-01 22:01:59 +03:00
Danil Boldyrev b389727e31 place the cursor next to the original 2024-02-01 16:00:23 +03:00
Danil Boldyrev 7eda3319de Remove unused code and lint 2024-02-01 15:43:01 +03:00
Danil Boldyrev c2ab058897 Made the zoom functionality work, both for drawing and erasing 2024-02-01 15:20:21 +03:00
Danil Boldyrev a08eff391e Temporary fix that returns functionality when sending via buttons 2024-02-01 12:13:25 +03:00
Danil Boldyrev 733f8c7c51 fix fitToScreen and adjustBrushSize funcs in zoom.js 2024-02-01 11:46:20 +03:00
Danil Boldyrev ca7ba7d394 Fix the startup zoom error 2024-02-01 11:29:06 +03:00
AUTOMATIC1111 5e37bf66c1 lint 2024-01-27 12:10:37 +03:00
AUTOMATIC1111 cee0bf8464 fix send to tab and hires upscale button 2024-01-27 12:05:26 +03:00
AUTOMATIC1111 91d1034d8d mark output gallery as non-editable 2024-01-27 11:36:00 +03:00
AUTOMATIC1111 816390938f fix js error at startup 2024-01-27 11:35:39 +03:00
AUTOMATIC1111 cf08f5b4d2 linter 2024-01-27 11:17:47 +03:00
AUTOMATIC1111 9dd3b2a10b solve some of issues with img2img copy to tab functionality 2024-01-27 11:12:59 +03:00
AUTOMATIC1111 67285e3478 remove gradio warning for refiner checkpoint 2024-01-27 10:37:17 +03:00
AUTOMATIC1111 983b58b897 Merge branch 'dev' into gradio4 2024-01-27 10:19:27 +03:00
AUTOMATIC1111 08f1926f30 repair resolution calculation for img2img 2024-01-07 20:45:24 +03:00
AUTOMATIC1111 174b71994f remove img2img editor height option, because it breaks gradio 4 image editors 2024-01-07 20:45:12 +03:00
AUTOMATIC1111 c43f7a874f align the image in the gallery 2024-01-07 20:24:28 +03:00
AUTOMATIC1111 cc6f27614b make it possible again to serve saved pictures without writing them to a temporary directory 2024-01-07 20:20:24 +03:00
AUTOMATIC1111 d51619e53b bump gradio version 2024-01-07 20:19:24 +03:00
missionfloyd 83e0eb094f Fix displaying images that haven't already been saved
Still copies already_saved_as images to temp.
2023-12-28 18:10:58 -07:00
missionfloyd 945cb97996 Merge branch 'gradio4' of https://github.com/automatic1111/stable-diffusion-webui into gradio4 2023-12-21 21:34:54 -07:00
missionfloyd 745efef08d Expand gr.Image() dropzone to fill component 2023-12-21 21:34:02 -07:00
w-e-w f604c29191 fix extras caption BLIP 2023-12-22 13:22:29 +09:00
missionfloyd 654ca97fe3 Fix extras BLIP caption 2023-12-21 21:06:30 -07:00
missionfloyd 6b4f147a07 Fix img2img interrogate 2023-12-21 19:03:25 -07:00
missionfloyd 92b33344bf Remove unused import 2023-12-17 22:09:19 -07:00
missionfloyd af71f64ad8 Fix saving images from gallery 2023-12-17 22:07:12 -07:00
missionfloyd eb41c73b96 Lint 2023-12-17 00:58:25 -07:00
missionfloyd 5b636b3105 Make extras work again
Not all postprocessing scripts work
2023-12-17 00:43:18 -07:00
missionfloyd a8e41f585e Fix "detect image size" button 2023-12-09 20:02:03 -07:00
missionfloyd 2daf98a5b6 Fix aspect ratio overlay
Make it work on inpaint upload tab
2023-12-08 23:06:59 -07:00
missionfloyd 5742836180 Simplify inpaint sketch mask 2023-12-06 22:01:11 -07:00
missionfloyd b5e7135ad8 Remove unused import 2023-12-05 20:47:01 -07:00
missionfloyd 9d1385de50 Fix sketch, inpaint sketch
Seems to work right, anyway.
Added webcam source.
Some img2img modes may now be redundant.
2023-12-05 20:45:19 -07:00
missionfloyd df14dc215c Change img2img_selected_tab back to gr.State 2023-12-05 06:56:40 -07:00
missionfloyd 10791e7d35 Fix inpaint 2023-12-04 22:40:40 -07:00
missionfloyd 0d9b431571 Fix img2img 2023-12-04 21:48:24 -07:00
missionfloyd d6271939d0 Fix popup CSS (mostly)
Center image buttons
2023-12-03 22:51:09 -07:00
missionfloyd 64fb3d16a9 Fix fullscreen image viewer 2023-12-03 21:48:11 -07:00
AUTOMATIC1111 b8040e4ab9 linter 2023-12-03 17:10:13 +03:00
AUTOMATIC1111 656c6a5f4d make extra networks work again 2023-12-03 16:56:47 +03:00
AUTOMATIC1111 8b2c562fb1 remove the code that breaks extra networks 2023-12-03 16:49:33 +03:00
AUTOMATIC1111 051375258c gradio4 2023-12-03 16:44:03 +03:00
65 changed files with 884 additions and 1035 deletions
+12 -1
@@ -1 +1,12 @@
-* @AUTOMATIC1111 @w-e-w @catboxanon
+* @AUTOMATIC1111
+
+# if you were managing a localization and were removed from this file, this is because
+# the intended way to do localizations now is via extensions. See:
+# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
+# Make a repo with your localization and since you are still listed as a collaborator
+# you can add it to the wiki page yourself. This change is because some people complained
+# the git commit log is cluttered with things unrelated to almost everyone and
+# because I believe this is the best overall for the project to handle localizations almost
+# entirely without my oversight.
-1
@@ -148,7 +148,6 @@ python_cmd="python3.11"
 2. Navigate to the directory you would like the webui to be installed and execute the following command:
 ```bash
 wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
-chmod +x webui.sh
 ```
 Or just clone the repo wherever you want:
 ```bash
-98
@@ -1,98 +0,0 @@
-model:
-  target: sgm.models.diffusion.DiffusionEngine
-  params:
-    scale_factor: 0.13025
-    disable_first_stage_autocast: True
-
-    denoiser_config:
-      target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
-      params:
-        num_idx: 1000
-
-        weighting_config:
-          target: sgm.modules.diffusionmodules.denoiser_weighting.VWeighting
-        scaling_config:
-          target: sgm.modules.diffusionmodules.denoiser_scaling.VScaling
-        discretization_config:
-          target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
-
-    network_config:
-      target: sgm.modules.diffusionmodules.openaimodel.UNetModel
-      params:
-        adm_in_channels: 2816
-        num_classes: sequential
-        use_checkpoint: False
-        in_channels: 4
-        out_channels: 4
-        model_channels: 320
-        attention_resolutions: [4, 2]
-        num_res_blocks: 2
-        channel_mult: [1, 2, 4]
-        num_head_channels: 64
-        use_spatial_transformer: True
-        use_linear_in_transformer: True
-        transformer_depth: [1, 2, 10]  # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
-        context_dim: 2048
-        spatial_transformer_attn_type: softmax-xformers
-        legacy: False
-
-    conditioner_config:
-      target: sgm.modules.GeneralConditioner
-      params:
-        emb_models:
-          # crossattn cond
-          - is_trainable: False
-            input_key: txt
-            target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
-            params:
-              layer: hidden
-              layer_idx: 11
-          # crossattn and vector cond
-          - is_trainable: False
-            input_key: txt
-            target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
-            params:
-              arch: ViT-bigG-14
-              version: laion2b_s39b_b160k
-              freeze: True
-              layer: penultimate
-              always_return_pooled: True
-              legacy: False
-          # vector cond
-          - is_trainable: False
-            input_key: original_size_as_tuple
-            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
-            params:
-              outdim: 256  # multiplied by two
-          # vector cond
-          - is_trainable: False
-            input_key: crop_coords_top_left
-            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
-            params:
-              outdim: 256  # multiplied by two
-          # vector cond
-          - is_trainable: False
-            input_key: target_size_as_tuple
-            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
-            params:
-              outdim: 256  # multiplied by two
-
-    first_stage_config:
-      target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
-      params:
-        embed_dim: 4
-        monitor: val/rec_loss
-        ddconfig:
-          attn_type: vanilla-xformers
-          double_z: true
-          z_channels: 4
-          resolution: 256
-          in_channels: 3
-          out_ch: 3
-          ch: 128
-          ch_mult: [1, 2, 4, 4]
-          num_res_blocks: 2
-          attn_resolutions: []
-          dropout: 0.0
-        lossconfig:
-          target: torch.nn.Identity
@@ -16,6 +16,20 @@ onUiLoaded(async() => {
     // Helper functions
     // Get active tab
 
+    function debounce(func, wait) {
+        let timeout;
+
+        return function executedFunction(...args) {
+            const later = () => {
+                clearTimeout(timeout);
+                func(...args);
+            };
+
+            clearTimeout(timeout);
+            timeout = setTimeout(later, wait);
+        };
+    }
+
     /**
      * Waits for an element to be present in the DOM.
      */
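The debounce helper added in this hunk is the standard trailing-edge pattern: only the last call in a burst runs, after `wait` ms of quiet. A standalone, runnable sketch (the function body matches the diff; the demo names are illustrative):

```javascript
// Trailing-edge debounce: rapid repeated calls collapse into one invocation.
function debounce(func, wait) {
    let timeout;

    return function executedFunction(...args) {
        const later = () => {
            clearTimeout(timeout);
            func(...args);
        };

        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
    };
}

// Demo: three rapid calls produce a single invocation once the timer fires.
let calls = 0;
const bump = debounce(() => { calls += 1; }, 10);
bump();
bump();
bump();
setTimeout(() => console.log(calls), 50); // logs 1
```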
@@ -58,6 +72,30 @@ onUiLoaded(async() => {
         }
     }
 
+    // Hack to make the cursor always be the same size
+    function fixCursorSize() {
+        window.scrollBy(0, 1);
+    }
+
+    function copySpecificStyles(sourceElement, targetElement, zoomLevel = 1) {
+        const stylesToCopy = ['top', 'left', 'width', 'height'];
+
+        stylesToCopy.forEach(styleName => {
+            if (sourceElement.style[styleName]) {
+                // Convert style value to number and multiply by zoomLevel.
+                let adjustedStyleValue = parseFloat(sourceElement.style[styleName]) / zoomLevel;
+
+                // Set the adjusted style value back to target element's style.
+                // Important: this will work fine for top and left styles as they are usually in px.
+                // But be careful with other units like em or % that might need different handling.
+                targetElement.style[styleName] = `${adjustedStyleValue}px`;
+            }
+        });
+
+        targetElement.style["opacity"] = sourceElement.style["opacity"];
+    }
+
     // Detect whether the element has a horizontal scroll bar
     function hasHorizontalScrollbar(element) {
         return element.scrollWidth > element.clientWidth;
@@ -167,48 +205,6 @@ onUiLoaded(async() => {
         return config;
     }
 
-    /**
-     * The restoreImgRedMask function displays a red mask around an image to indicate the aspect ratio.
-     * If the image display property is set to 'none', the mask breaks. To fix this, the function
-     * temporarily sets the display property to 'block' and then hides the mask again after 300 milliseconds
-     * to avoid breaking the canvas. Additionally, the function adjusts the mask to work correctly on
-     * very long images.
-     */
-    function restoreImgRedMask(elements) {
-        const mainTabId = getTabId(elements);
-
-        if (!mainTabId) return;
-
-        const mainTab = gradioApp().querySelector(mainTabId);
-        const img = mainTab.querySelector("img");
-        const imageARPreview = gradioApp().querySelector("#imageARPreview");
-
-        if (!img || !imageARPreview) return;
-
-        imageARPreview.style.transform = "";
-        if (parseFloat(mainTab.style.width) > 865) {
-            const transformString = mainTab.style.transform;
-            const scaleMatch = transformString.match(
-                /scale\(([-+]?[0-9]*\.?[0-9]+)\)/
-            );
-            let zoom = 1; // default zoom
-
-            if (scaleMatch && scaleMatch[1]) {
-                zoom = Number(scaleMatch[1]);
-            }
-
-            imageARPreview.style.transformOrigin = "0 0";
-            imageARPreview.style.transform = `scale(${zoom})`;
-        }
-
-        if (img.style.display !== "none") return;
-
-        img.style.display = "block";
-
-        setTimeout(() => {
-            img.style.display = "none";
-        }, 400);
-    }
 
     const hotkeysConfigOpts = await waitForOpts();
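The removed `restoreImgRedMask` recovered the zoom factor by parsing it out of the element's inline CSS transform string, defaulting to 1 when no `scale(...)` is present. That parsing step in isolation (the helper name is ours):

```javascript
// Extract the scale factor from an inline CSS transform string, defaulting to 1.
// Uses the same regex as the removed restoreImgRedMask.
function extractZoom(transformString) {
    const scaleMatch = transformString.match(/scale\(([-+]?[0-9]*\.?[0-9]+)\)/);
    return (scaleMatch && scaleMatch[1]) ? Number(scaleMatch[1]) : 1;
}

console.log(extractZoom("translate(10px, 0px) scale(0.75)")); // 0.75
console.log(extractZoom("translate(10px, 0px)"));             // 1
```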
@@ -224,7 +220,6 @@ onUiLoaded(async() => {
         canvas_hotkey_grow_brush: "KeyW",
         canvas_disabled_functions: [],
         canvas_show_tooltip: true,
-        canvas_auto_expand: true,
         canvas_blur_prompt: false,
     };
@@ -264,18 +259,6 @@ onUiLoaded(async() => {
     );
 
     const elemData = {};
 
-    // Apply functionality to the range inputs. Restore redmask and correct for long images.
-    const rangeInputs = elements.rangeGroup ?
-        Array.from(elements.rangeGroup.querySelectorAll("input")) :
-        [
-            gradioApp().querySelector("#img2img_width input[type='range']"),
-            gradioApp().querySelector("#img2img_height input[type='range']")
-        ];
-
-    for (const input of rangeInputs) {
-        input?.addEventListener("input", () => restoreImgRedMask(elements));
-    }
-
     function applyZoomAndPan(elemId, isExtension = true) {
         const targetElement = gradioApp().querySelector(elemId);
@@ -289,14 +272,118 @@ onUiLoaded(async() => {
         elemData[elemId] = {
             zoom: 1,
             panX: 0,
-            panY: 0
+            panY: 0,
         };
 
         let fullScreenMode = false;
 
+        // Cursor manipulation script for a painting application.
+        // The purpose of this code is to create custom cursors (for painting and erasing)
+        // that can change depending on which button the user presses.
+        // When the mouse moves over the canvas, the appropriate custom cursor also moves,
+        // replicating its appearance dynamically based on various CSS properties.
+        // This is done because the original cursor is tied to the size of the canvas and
+        // cannot be changed, so this hack creates an exact copy that works properly.
+        const eraseButton = targetElement.querySelector(`button[aria-label='Erase button']`);
+        const paintButton = targetElement.querySelector(`button[aria-label='Draw button']`);
+
+        const canvasCursors = targetElement.querySelectorAll("span.svelte-btgkrd");
+        const paintCursorCopy = canvasCursors[0].cloneNode(true);
+        const eraserCursorCopy = canvasCursors[1].cloneNode(true);
+
+        canvasCursors.forEach(cursor => cursor.style.display = "none");
+
+        canvasCursors[0].parentNode.insertBefore(paintCursorCopy, canvasCursors[0].nextSibling);
+        canvasCursors[1].parentNode.insertBefore(eraserCursorCopy, canvasCursors[1].nextSibling);
+
+        // targetElement.appendChild(paintCursorCopy);
+        // paintCursorCopy.style.display = "none";
+
+        // targetElement.appendChild(eraserCursorCopy);
+        // eraserCursorCopy.style.display = "none";
+
+        let activeCursor;
+
+        paintButton.addEventListener('click', () => {
+            activateTool(paintButton, eraseButton, paintCursorCopy);
+        });
+
+        eraseButton.addEventListener('click', () => {
+            activateTool(eraseButton, paintButton, eraserCursorCopy);
+        });
+
+        function activateTool(activeButton, inactiveButton, activeCursorCopy) {
+            activeButton.classList.add("active");
+            inactiveButton.classList.remove("active");
+
+            // canvasCursors.forEach(cursor => cursor.style.display = "none");
+
+            if (activeCursor) {
+                activeCursor.style.display = "none";
+            }
+
+            activeCursor = activeCursorCopy;
+            // activeCursor.style.display = "none";
+            activeCursor.style.position = "absolute";
+        }
+
+        const canvasAreaEventsHandler = e => {
+            canvasCursors.forEach(cursor => cursor.style.display = "none");
+
+            if (!activeCursor) return;
+
+            const cursorNum = eraseButton.classList.contains("active") ? 1 : 0;
+
+            if (elemData[elemId].zoomLevel != 1) {
+                copySpecificStyles(canvasCursors[cursorNum], activeCursor, elemData[elemId].zoomLevel);
+            } else {
+                // Update the styles of the currently active cursor
+                copySpecificStyles(canvasCursors[cursorNum], activeCursor);
+            }
+
+            let offsetXAdjusted = e.offsetX;
+            let offsetYAdjusted = e.offsetY;
+
+            // Position the cursor based on the current mouse coordinates within target element.
+            activeCursor.style.transform =
+                `translate(${offsetXAdjusted}px, ${offsetYAdjusted}px)`;
+        };
+
+        const canvasAreaLeaveHandler = () => {
+            if (activeCursor) {
+                // activeCursor.style.opacity = 0
+                activeCursor.style.display = "none";
+            }
+        };
+
+        const canvasAreaEnterHandler = () => {
+            if (activeCursor) {
+                // activeCursor.style.opacity = 1
+                activeCursor.style.display = "block";
+            }
+        };
+
+        const canvasArea = targetElement.querySelector("canvas");
+
+        // Attach event listeners to the target element and canvas area
+        targetElement.addEventListener("mousemove", canvasAreaEventsHandler);
+        canvasArea.addEventListener("mouseout", canvasAreaLeaveHandler);
+        canvasArea.addEventListener("mouseenter", canvasAreaEnterHandler);
+
+        // Additional listener for handling zoom or other transformations which might affect visual representation
+        targetElement.addEventListener("wheel", canvasAreaEventsHandler);
+
+        // Remove the border, it causes bugs
+        const canvasBorder = targetElement.querySelector(".border");
+        canvasBorder.style.display = "none";
+
         // Create tooltip
         function createTooltip() {
-            const toolTipElement =
-                targetElement.querySelector(".image-container");
+            const toolTipElement = targetElement.querySelector(".image-container");
             const tooltip = document.createElement("div");
             tooltip.className = "canvas-tooltip";
@@ -359,25 +446,15 @@ onUiLoaded(async() => {
             // Add a hint element to the target element
             toolTipElement.appendChild(tooltip);
+
+            return tooltip;
         }
 
         //Show tool tip if setting enable
-        if (hotkeysConfig.canvas_show_tooltip) {
-            createTooltip();
-        }
-
-        // In the course of research, it was found that the tag img is very harmful when zooming and creates white canvases. This hack allows you to almost never think about this problem, it has no effect on webui.
-        function fixCanvas() {
-            const activeTab = getActiveTab(elements)?.textContent.trim();
-
-            if (activeTab && activeTab !== "img2img") {
-                const img = targetElement.querySelector(`${elemId} img`);
-
-                if (img && img.style.display !== "none") {
-                    img.style.display = "none";
-                    img.style.visibility = "hidden";
-                }
-            }
+        const canvasTooltip = createTooltip();
+
+        if (!hotkeysConfig.canvas_show_tooltip) {
+            canvasTooltip.style.display = "none";
         }
 
         // Reset the zoom level and pan position of the target element to their initial values
@@ -385,7 +462,7 @@ onUiLoaded(async() => {
             elemData[elemId] = {
                 zoomLevel: 1,
                 panX: 0,
-                panY: 0
+                panY: 0,
             };
 
             if (isExtension) {
@@ -394,45 +471,22 @@ onUiLoaded(async() => {
             targetElement.isZoomed = false;
 
-            fixCanvas();
             targetElement.style.transform = `scale(${elemData[elemId].zoomLevel}) translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px)`;
 
             const canvas = gradioApp().querySelector(
-                `${elemId} canvas[key="interface"]`
+                `${elemId} canvas`
             );
 
             toggleOverlap("off");
             fullScreenMode = false;
 
-            const closeBtn = targetElement.querySelector("button[aria-label='Remove Image']");
+            const closeBtn = targetElement.querySelector("button[aria-label='Clear canvas']");
             if (closeBtn) {
                 closeBtn.addEventListener("click", resetZoom);
             }
 
-            if (canvas && isExtension) {
-                const parentElement = targetElement.closest('[id^="component-"]');
-                if (
-                    canvas &&
-                    parseFloat(canvas.style.width) > parentElement.offsetWidth &&
-                    parseFloat(targetElement.style.width) > parentElement.offsetWidth
-                ) {
-                    fitToElement();
-                    return;
-                }
-            }
-
-            if (
-                canvas &&
-                !isExtension &&
-                parseFloat(canvas.style.width) > 865 &&
-                parseFloat(targetElement.style.width) > 865
-            ) {
-                fitToElement();
-                return;
-            }
-
             targetElement.style.width = "";
+
+            fixCursorSize();
         }
// Toggle the zIndex of the target element between two values, allowing it to overlap or be overlapped by other elements // Toggle the zIndex of the target element between two values, allowing it to overlap or be overlapped by other elements
@@ -459,10 +513,10 @@ onUiLoaded(async() => {
) { ) {
const input = const input =
gradioApp().querySelector( gradioApp().querySelector(
`${elemId} input[aria-label='Brush radius']` `${elemId} input[type='range']`
) || ) ||
gradioApp().querySelector( gradioApp().querySelector(
`${elemId} button[aria-label="Use brush"]` `${elemId} button[aria-label="Size button"]`
); );
if (input) { if (input) {
@@ -482,10 +536,15 @@ onUiLoaded(async() => {
         // Reset zoom when uploading a new image
         const fileInput = gradioApp().querySelector(
-            `${elemId} input[type="file"][accept="image/*"].svelte-116rqfv`
+            `${elemId} .upload-container input[type="file"][accept="image/*"]`
         );
         fileInput.addEventListener("click", resetZoom);
 
+        // Create clickable area
+        const inputCanvas = targetElement.querySelector("canvas");
+
         // Update the zoom level and pan position of the target element based on the values of the zoomLevel, panX and panY variables
         function updateZoom(newZoomLevel, mouseX, mouseY) {
             newZoomLevel = Math.max(0.1, Math.min(newZoomLevel, 15));
@@ -503,6 +562,9 @@ onUiLoaded(async() => {
                 targetElement.style.overflow = "visible";
             }
 
+            // Hack to make the cursor always be the same size
+            fixCursorSize();
+
             return newZoomLevel;
         }
@@ -538,67 +600,6 @@ onUiLoaded(async() => {
             }
         }
 
-        /**
-         * This function fits the target element to the screen by calculating
-         * the required scale and offsets. It also updates the global variables
-         * zoomLevel, panX, and panY to reflect the new state.
-         */
-        function fitToElement() {
-            //Reset Zoom
-            targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
-
-            let parentElement;
-
-            if (isExtension) {
-                parentElement = targetElement.closest('[id^="component-"]');
-            } else {
-                parentElement = targetElement.parentElement;
-            }
-
-            // Get element and screen dimensions
-            const elementWidth = targetElement.offsetWidth;
-            const elementHeight = targetElement.offsetHeight;
-            const screenWidth = parentElement.clientWidth;
-            const screenHeight = parentElement.clientHeight;
-
-            // Get element's coordinates relative to the parent element
-            const elementRect = targetElement.getBoundingClientRect();
-            const parentRect = parentElement.getBoundingClientRect();
-            const elementX = elementRect.x - parentRect.x;
-
-            // Calculate scale and offsets
-            const scaleX = screenWidth / elementWidth;
-            const scaleY = screenHeight / elementHeight;
-            const scale = Math.min(scaleX, scaleY);
-
-            const transformOrigin =
-                window.getComputedStyle(targetElement).transformOrigin;
-            const [originX, originY] = transformOrigin.split(" ");
-            const originXValue = parseFloat(originX);
-            const originYValue = parseFloat(originY);
-
-            const offsetX =
-                (screenWidth - elementWidth * scale) / 2 -
-                originXValue * (1 - scale);
-            const offsetY =
-                (screenHeight - elementHeight * scale) / 2.5 -
-                originYValue * (1 - scale);
-
-            // Apply scale and offsets to the element
-            targetElement.style.transform = `translate(${offsetX}px, ${offsetY}px) scale(${scale})`;
-
-            // Update global variables
-            elemData[elemId].zoomLevel = scale;
-            elemData[elemId].panX = offsetX;
-            elemData[elemId].panY = offsetY;
-
-            fullScreenMode = false;
-            toggleOverlap("off");
-        }
-
         /**
          * This function fits the target element to the screen by calculating
          * the required scale and offsets. It also updates the global variables
@@ -608,9 +609,11 @@ onUiLoaded(async() => {
         // Fullscreen mode
         function fitToScreen() {
             const canvas = gradioApp().querySelector(
-                `${elemId} canvas[key="interface"]`
+                `${elemId} canvas`
             );
 
+            // print(canvas)
+
             if (!canvas) return;
 
             if (canvas.offsetWidth > 862 || isExtension) {
@@ -621,6 +624,7 @@ onUiLoaded(async() => {
                 targetElement.style.overflow = "visible";
             }
 
+            fixCursorSize();
 
             if (fullScreenMode) {
                 resetZoom();
                 fullScreenMode = false;
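The removed `fitToElement` (deleted a few hunks above) picked a uniform scale as the minimum of the width and height ratios, then centered the element, dividing the vertical offset by 2.5 rather than 2. Its arithmetic, assuming a `transform-origin` of `0 0` (so the origin-correction terms vanish) and hypothetical dimensions:

```javascript
// Uniform fit-to-parent scale plus the centering offsets used by the removed
// fitToElement. Note the 2.5 divisor on the vertical offset, taken from the diff.
function fitTransform(elementWidth, elementHeight, screenWidth, screenHeight) {
    const scale = Math.min(screenWidth / elementWidth, screenHeight / elementHeight);
    const offsetX = (screenWidth - elementWidth * scale) / 2;
    const offsetY = (screenHeight - elementHeight * scale) / 2.5;
    return {scale, offsetX, offsetY};
}

// An 800x400 canvas fitted into a 400x400 parent.
const fit = fitTransform(800, 400, 400, 400);
console.log(fit.scale, fit.offsetX, fit.offsetY); // 0.5 0 80
```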
@@ -728,7 +732,7 @@ onUiLoaded(async() => {
         targetElement.isExpanded = false;
         function autoExpand() {
-            const canvas = document.querySelector(`${elemId} canvas[key="interface"]`);
+            const canvas = document.querySelector(`${elemId} canvas`);
             if (canvas) {
                 if (hasHorizontalScrollbar(targetElement) && targetElement.isExpanded === false) {
                     targetElement.style.visibility = "hidden";
@@ -744,26 +748,6 @@ onUiLoaded(async() => {
         targetElement.addEventListener("mousemove", getMousePosition);
 
-        //observers
-        // Creating an observer with a callback function to handle DOM changes
-        const observer = new MutationObserver((mutationsList, observer) => {
-            for (let mutation of mutationsList) {
-                // If the style attribute of the canvas has changed, by observation it happens only when the picture changes
-                if (mutation.type === 'attributes' && mutation.attributeName === 'style' &&
-                    mutation.target.tagName.toLowerCase() === 'canvas') {
-                    targetElement.isExpanded = false;
-                    setTimeout(resetZoom, 10);
-                }
-            }
-        });
-
-        // Apply auto expand if enabled
-        if (hotkeysConfig.canvas_auto_expand) {
-            targetElement.addEventListener("mousemove", autoExpand);
-            // Set up an observer to track attribute changes
-            observer.observe(targetElement, {attributes: true, childList: true, subtree: true});
-        }
-
         // Handle events only inside the targetElement
         let isKeyDownHandlerAttached = false;
@@ -790,15 +774,7 @@ onUiLoaded(async() => {
         targetElement.addEventListener("mouseleave", handleMouseLeave);
 
         // Reset zoom when click on another tab
-        if (elements.img2imgTabs) {
-            elements.img2imgTabs.addEventListener("click", resetZoom);
-            elements.img2imgTabs.addEventListener("click", () => {
-                // targetElement.style.width = "";
-                if (parseInt(targetElement.style.width) > 865) {
-                    setTimeout(fitToElement, 0);
-                }
-            });
-        }
+        elements.img2imgTabs.addEventListener("click", resetZoom);
 
         targetElement.addEventListener("wheel", e => {
             // change zoom level
@@ -816,7 +792,7 @@ onUiLoaded(async() => {
                 // Increase or decrease brush size based on scroll direction
                 adjustBrushSize(elemId, e.deltaY);
             }
-        }, {passive: false});
+        });
 
         // Handle the move event for pan functionality. Updates the panX and panY variables and applies the new transform to the target element.
         function handleMoveKeyDown(e) {
@@ -878,6 +854,7 @@ onUiLoaded(async() => {
             elemData[elemId].panY += movementY * panSpeed;
 
             // Delayed redraw of an element
+            const canvas = targetElement.querySelector("canvas");
             requestAnimationFrame(() => {
                 targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${elemData[elemId].zoomLevel})`;
                 toggleOverlap("on");
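The pan handler above rebuilds the element's CSS transform string on every animation frame from the stored pan and zoom state. The string construction on its own (the helper name is ours):

```javascript
// Compose the CSS transform string applied inside the requestAnimationFrame callback.
function panTransform(panX, panY, zoomLevel) {
    return `translate(${panX}px, ${panY}px) scale(${zoomLevel})`;
}

console.log(panTransform(12, -8, 1.5)); // "translate(12px, -8px) scale(1.5)"
```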
@@ -936,7 +913,6 @@ onUiLoaded(async() => {
             gradioApp().addEventListener("mousemove", handleMoveByKey);
         }
 
     applyZoomAndPan(elementIDs.sketch, false);
@@ -966,9 +942,30 @@ onUiLoaded(async() => {
}; };
window.applyZoomAndPan = applyZoomAndPan; // Only 1 elements, argument elementID, for example applyZoomAndPan("#txt2img_controlnet_ControlNet_input_image") window.applyZoomAndPan = applyZoomAndPan; // Only 1 elements, argument elementID, for example applyZoomAndPan("#txt2img_controlnet_ControlNet_input_image")
window.applyZoomAndPanIntegration = applyZoomAndPanIntegration; // for any extension window.applyZoomAndPanIntegration = applyZoomAndPanIntegration; // for any extension
// Return zoom functionality when send img via buttons
const img2imgArea = document.querySelector("#img2img_settings");
const checkForTooltip = (e) => {
const tabId = getTabId(elements); // Make sure that the item is passed correctly to determine the tabId
if (tabId === "#img2img_sketch" || tabId === "#inpaint_sketch" || tabId === "#img2maskimg") {
const zoomTooltip = document.querySelector(`${tabId} .canvas-tooltip`);
if (!zoomTooltip) {
applyZoomAndPan(tabId, false);
// resetZoom()
}
}
};
// Wrapping your function through debounce to reduce the number of calls
const debouncedCheckForTooltip = debounce(checkForTooltip, 20);
// Assigning an event handler
img2imgArea.addEventListener("mousemove", debouncedCheckForTooltip);
/*
The function `applyZoomAndPanIntegration` takes two arguments:
@@ -11,7 +11,6 @@ shared.options_templates.update(shared.options_section(('canvas_hotkey', "Canvas
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas position"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, needed for testing"),
"canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
-"canvas_auto_expand": shared.OptionInfo(True, "Automatically expands an image that does not fit completely in the canvas area, similar to manually pressing the S and R buttons"),
"canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),
"canvas_disabled_functions": shared.OptionInfo(["Overlap"], "Disable function that you don't use", gr.CheckboxGroup, {"choices": ["Zoom","Adjust brush size","Hotkey enlarge brush","Hotkey shrink brush","Moving canvas","Fullscreen","Reset Zoom","Overlap"]}),
}))
@@ -1,7 +1,7 @@
"""
Hypertile module for splitting attention layers in SD-1.5 U-Net and SD-1.5 VAE
Warn: The patch works well only if the input image has a width and height that are multiples of 128
-Original author: @tfernd GitHub: https://github.com/tfernd/HyperTile
+Original author: @tfernd Github: https://github.com/tfernd/HyperTile
"""
from __future__ import annotations
@@ -34,14 +34,14 @@ class ScriptPostprocessingAutosizedCrop(scripts_postprocessing.ScriptPostprocess
with ui_components.InputAccordion(False, label="Auto-sized crop") as enable:
    gr.Markdown('Each image is center-cropped with an automatically chosen width and height.')
    with gr.Row():
-        mindim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension lower bound", value=384, elem_id=self.elem_id_suffix("postprocess_multicrop_mindim"))
-        maxdim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension upper bound", value=768, elem_id=self.elem_id_suffix("postprocess_multicrop_maxdim"))
+        mindim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension lower bound", value=384, elem_id="postprocess_multicrop_mindim")
+        maxdim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension upper bound", value=768, elem_id="postprocess_multicrop_maxdim")
    with gr.Row():
-        minarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area lower bound", value=64 * 64, elem_id=self.elem_id_suffix("postprocess_multicrop_minarea"))
-        maxarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area upper bound", value=640 * 640, elem_id=self.elem_id_suffix("postprocess_multicrop_maxarea"))
+        minarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area lower bound", value=64 * 64, elem_id="postprocess_multicrop_minarea")
+        maxarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area upper bound", value=640 * 640, elem_id="postprocess_multicrop_maxarea")
    with gr.Row():
-        objective = gr.Radio(["Maximize area", "Minimize error"], value="Maximize area", label="Resizing objective", elem_id=self.elem_id_suffix("postprocess_multicrop_objective"))
-        threshold = gr.Slider(minimum=0, maximum=1, step=0.01, label="Error threshold", value=0.1, elem_id=self.elem_id_suffix("postprocess_multicrop_threshold"))
+        objective = gr.Radio(["Maximize area", "Minimize error"], value="Maximize area", label="Resizing objective", elem_id="postprocess_multicrop_objective")
+        threshold = gr.Slider(minimum=0, maximum=1, step=0.01, label="Error threshold", value=0.1, elem_id="postprocess_multicrop_threshold")
return {
    "enable": enable,
@@ -11,10 +11,10 @@ class ScriptPostprocessingFocalCrop(scripts_postprocessing.ScriptPostprocessing)
def ui(self):
    with ui_components.InputAccordion(False, label="Auto focal point crop") as enable:
-        face_weight = gr.Slider(label='Focal point face weight', value=0.9, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_face_weight"))
-        entropy_weight = gr.Slider(label='Focal point entropy weight', value=0.15, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_entropy_weight"))
-        edges_weight = gr.Slider(label='Focal point edges weight', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_edges_weight"))
-        debug = gr.Checkbox(label='Create debug image', elem_id=self.elem_id_suffix("train_process_focal_crop_debug"))
+        face_weight = gr.Slider(label='Focal point face weight', value=0.9, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_face_weight")
+        entropy_weight = gr.Slider(label='Focal point entropy weight', value=0.15, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_entropy_weight")
+        edges_weight = gr.Slider(label='Focal point edges weight', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_edges_weight")
+        debug = gr.Checkbox(label='Create debug image', elem_id="train_process_focal_crop_debug")
return {
    "enable": enable,
@@ -35,8 +35,8 @@ class ScriptPostprocessingSplitOversized(scripts_postprocessing.ScriptPostproces
def ui(self):
    with ui_components.InputAccordion(False, label="Split oversized images") as enable:
        with gr.Row():
-            split_threshold = gr.Slider(label='Threshold', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_split_threshold"))
-            overlap_ratio = gr.Slider(label='Overlap ratio', value=0.2, minimum=0.0, maximum=0.9, step=0.05, elem_id=self.elem_id_suffix("postprocess_overlap_ratio"))
+            split_threshold = gr.Slider(label='Threshold', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_split_threshold")
+            overlap_ratio = gr.Slider(label='Overlap ratio', value=0.2, minimum=0.0, maximum=0.9, step=0.05, elem_id="postprocess_overlap_ratio")
return {
    "enable": enable,
@@ -1,69 +1,36 @@
-// Stable Diffusion WebUI - Bracket Checker
-// By @Bwin4L, @akx, @w-e-w, @Haoming02
+// Stable Diffusion WebUI - Bracket checker
+// By Hingashi no Florin/Bwin4L & @akx
// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs.
-// If there's a mismatch, the keyword counter turns red, and if you hover on it, a tooltip tells you what's wrong.
+// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong.
-function checkBrackets(textArea, counterElem) {
-    const pairs = [
-        ['(', ')', 'round brackets'],
-        ['[', ']', 'square brackets'],
-        ['{', '}', 'curly brackets']
-    ];
-    const counts = {};
-    const errors = new Set();
-    let i = 0;
-    while (i < textArea.value.length) {
-        let char = textArea.value[i];
-        let escaped = false;
-        while (char === '\\' && i + 1 < textArea.value.length) {
-            escaped = !escaped;
-            i++;
-            char = textArea.value[i];
-        }
-        if (escaped) {
-            i++;
-            continue;
-        }
-        for (const [open, close, label] of pairs) {
-            if (char === open) {
-                counts[label] = (counts[label] || 0) + 1;
-            } else if (char === close) {
-                counts[label] = (counts[label] || 0) - 1;
-                if (counts[label] < 0) {
-                    errors.add(`Incorrect order of ${label}.`);
-                }
-            }
-        }
-        i++;
-    }
-    for (const [open, close, label] of pairs) {
-        if (counts[label] == undefined) {
-            continue;
-        }
-        if (counts[label] > 0) {
-            errors.add(`${open} ... ${close} - Detected ${counts[label]} more opening than closing ${label}.`);
-        } else if (counts[label] < 0) {
-            errors.add(`${open} ... ${close} - Detected ${-counts[label]} more closing than opening ${label}.`);
-        }
-    }
-    counterElem.title = [...errors].join('\n');
-    counterElem.classList.toggle('error', errors.size !== 0);
-}
+function checkBrackets(textArea, counterElt) {
+    var counts = {};
+    (textArea.value.match(/[(){}[\]]/g) || []).forEach(bracket => {
+        counts[bracket] = (counts[bracket] || 0) + 1;
+    });
+    var errors = [];
+    function checkPair(open, close, kind) {
+        if (counts[open] !== counts[close]) {
+            errors.push(
+                `${open}...${close} - Detected ${counts[open] || 0} opening and ${counts[close] || 0} closing ${kind}.`
+            );
+        }
+    }
+    checkPair('(', ')', 'round brackets');
+    checkPair('[', ']', 'square brackets');
+    checkPair('{', '}', 'curly brackets');
+    counterElt.title = errors.join('\n');
+    counterElt.classList.toggle('error', errors.length !== 0);
+}
function setupBracketChecking(id_prompt, id_counter) {
-    const textarea = gradioApp().querySelector(`#${id_prompt} > label > textarea`);
-    const counter = gradioApp().getElementById(id_counter);
+    var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea");
+    var counter = gradioApp().getElementById(id_counter);
    if (textarea && counter) {
-        onEdit(`${id_prompt}_BracketChecking`, textarea, 400, () => checkBrackets(textarea, counter));
+        textarea.addEventListener("input", () => checkBrackets(textarea, counter));
    }
}
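The two checker variants above differ in strictness: the `checkPair` version only compares total counts, while the longer version also flags out-of-order closings and skips backslash-escaped brackets. A minimal stack-based sketch of the stricter, order-aware idea (a hypothetical Python helper, not code from the repo):

```python
def check_brackets(text: str) -> list[str]:
    """Report bracket-balance problems, honoring backslash escapes."""
    pairs = {'(': ')', '[': ']', '{': '}'}
    opener_of = {close: open_ for open_, close in pairs.items()}
    stack, errors = [], []
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == '\\':              # skip the escaped character entirely
            i += 2
            continue
        if ch in pairs:
            stack.append(ch)
        elif ch in opener_of:
            if stack and stack[-1] == opener_of[ch]:
                stack.pop()
            else:
                errors.append(f"unexpected '{ch}' at position {i}")
        i += 1
    errors.extend(f"unclosed '{open_}'" for open_ in stack)
    return errors
```

A pure count comparison would accept `([)]`; the stack version rejects it, which is the extra safety the longer JS implementation buys.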
@@ -1,7 +1,7 @@
<div>
    <a href="{api_docs}">API</a>
     • 
-    <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">GitHub</a>
+    <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Github</a>
     • 
    <a href="https://gradio.app">Gradio</a>
     • 
@@ -1,10 +1,8 @@
-let currentWidth = null;
-let currentHeight = null;
-let arFrameTimeout = setTimeout(function() {}, 0);
+let currentWidth;
+let currentHeight;
+let arFrameTimeout;
function dimensionChange(e, is_width, is_height) {
    if (is_width) {
        currentWidth = e.target.value * 1.0;
    }
@@ -22,18 +20,18 @@ function dimensionChange(e, is_width, is_height) {
    var tabIndex = get_tab_index('mode_img2img');
    if (tabIndex == 0) { // img2img
-        targetElement = gradioApp().querySelector('#img2img_image div[data-testid=image] img');
+        targetElement = gradioApp().querySelector('#img2img_image div[data-testid=image] canvas');
    } else if (tabIndex == 1) { //Sketch
-        targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] img');
+        targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] canvas');
    } else if (tabIndex == 2) { // Inpaint
-        targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] img');
+        targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] canvas');
    } else if (tabIndex == 3) { // Inpaint sketch
-        targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] img');
+        targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] canvas');
-    } else if (tabIndex == 4) { // Inpaint upload
-        targetElement = gradioApp().querySelector('#img_inpaint_base div[data-testid=image] img');
    }
    if (targetElement) {
        var arPreviewRect = gradioApp().querySelector('#imageARPreview');
        if (!arPreviewRect) {
            arPreviewRect = document.createElement('div');
@@ -41,14 +39,11 @@ function dimensionChange(e, is_width, is_height) {
            gradioApp().appendChild(arPreviewRect);
        }
        var viewportOffset = targetElement.getBoundingClientRect();
-        var viewportscale = Math.min(targetElement.clientWidth / targetElement.naturalWidth, targetElement.clientHeight / targetElement.naturalHeight);
-        var scaledx = targetElement.naturalWidth * viewportscale;
-        var scaledy = targetElement.naturalHeight * viewportscale;
+        var viewportscale = Math.min(targetElement.clientWidth / targetElement.width, targetElement.clientHeight / targetElement.height);
+        var scaledx = targetElement.width * viewportscale;
+        var scaledy = targetElement.height * viewportscale;
        var clientRectTop = (viewportOffset.top + window.scrollY);
        var clientRectLeft = (viewportOffset.left + window.scrollX);
@@ -75,21 +70,18 @@ function dimensionChange(e, is_width, is_height) {
        }, 2000);
        arPreviewRect.style.display = 'block';
    }
}
onAfterUiUpdate(function() {
    var arPreviewRect = gradioApp().querySelector('#imageARPreview');
    if (arPreviewRect) {
        arPreviewRect.style.display = 'none';
    }
    var tabImg2img = gradioApp().querySelector("#tab_img2img");
    if (tabImg2img) {
-        var inImg2img = tabImg2img.style.display == "block";
-        if (inImg2img) {
+        if (tabImg2img.style.display == "block") {
            let inputs = gradioApp().querySelectorAll('input');
            inputs.forEach(function(e) {
                var is_width = e.parentElement.id == "img2img_width";
@@ -104,7 +104,7 @@ var contextMenuInit = function() {
            e.preventDefault();
        }
    });
-}, {passive: false});
+});
});
eventListenerApplied = true;
@@ -201,7 +201,7 @@ function setupExtraNetworks() {
    setupExtraNetworksForTab('img2img');
}
-var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/s;
+var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /<([^:^>]+:[^:]+):[\d.]+>/g;
var re_extranet_neg = /\(([^:^>]+:[\d.]+)\)/;
@@ -0,0 +1,7 @@
+// added to fix a weird error in gradio 4.19 at page load
+Object.defineProperty(Array.prototype, 'toLowerCase', {
+    value: function() {
+        return this;
+    }
+});
@@ -13,7 +13,6 @@ function showModal(event) {
    if (modalImage.style.display === 'none') {
        lb.style.setProperty('background-image', 'url(' + source.src + ')');
    }
-    updateModalImage();
    lb.style.display = "flex";
    lb.focus();
@@ -32,26 +31,21 @@ function negmod(n, m) {
    return ((n % m) + m) % m;
}
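The `negmod` helper kept as context above exists because JavaScript's `%` operator can return a negative remainder for negative operands; the double-mod trick forces the result into `[0, m)`. Python's `%` already behaves that way for a positive modulus, as a quick illustrative check shows:

```python
def negmod(n: int, m: int) -> int:
    # Same fix-up as the JS helper: wrap the remainder into [0, m).
    return ((n % m) + m) % m

# Python's % with a positive modulus is already non-negative, so negmod
# is a no-op here -- unlike JavaScript, where -1 % 4 evaluates to -1.
assert negmod(-1, 4) == 3 == -1 % 4
assert all(negmod(n, 5) == n % 5 for n in range(-10, 10))
```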
-function updateModalImage() {
-    const modalImage = gradioApp().getElementById("modalImage");
-    let currentButton = selected_gallery_button();
-    let preview = gradioApp().querySelectorAll('.livePreview > img');
-    if (opts.js_live_preview_in_modal_lightbox && preview.length > 0) {
-        // show preview image if available
-        modalImage.src = preview[preview.length - 1].src;
-    } else if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
-        modalImage.src = currentButton.children[0].src;
-        if (modalImage.style.display === 'none') {
-            const modal = gradioApp().getElementById("lightboxModal");
-            modal.style.setProperty('background-image', `url(${modalImage.src})`);
-        }
-    }
-}
function updateOnBackgroundChange() {
    const modalImage = gradioApp().getElementById("modalImage");
    if (modalImage && modalImage.offsetParent) {
-        updateModalImage();
+        let currentButton = selected_gallery_button();
+        let preview = gradioApp().querySelectorAll('.livePreview > img');
+        if (opts.js_live_preview_in_modal_lightbox && preview.length > 0) {
+            // show preview image if available
+            modalImage.src = preview[preview.length - 1].src;
+        } else if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
+            modalImage.src = currentButton.children[0].src;
+            if (modalImage.style.display === 'none') {
+                const modal = gradioApp().getElementById("lightboxModal");
+                modal.style.setProperty('background-image', `url(${modalImage.src})`);
+            }
+        }
    }
}
@@ -183,7 +177,7 @@ function modalTileImageToggle(event) {
}
onAfterUiUpdate(function() {
-    var fullImg_preview = gradioApp().querySelectorAll('.gradio-gallery > div > img');
+    var fullImg_preview = gradioApp().querySelectorAll('.gradio-gallery > button > button > img');
    if (fullImg_preview != null) {
        fullImg_preview.forEach(setupImageForLightbox);
    }
@@ -79,12 +79,11 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
var wakeLock = null;
var requestWakeLock = async function() {
-    if (!opts.prevent_screen_sleep_during_generation || wakeLock !== null) return;
+    if (!opts.prevent_screen_sleep_during_generation || wakeLock) return;
    try {
        wakeLock = await navigator.wakeLock.request('screen');
    } catch (err) {
        console.error('Wake Lock is not supported.');
-        wakeLock = false;
    }
};
@@ -124,7 +124,7 @@
    } else {
        R.screenX = evt.changedTouches[0].screenX;
    }
-}, {passive: false});
+});
});
resizeHandle.addEventListener('dblclick', onDoubleClick);
@@ -38,9 +38,6 @@ function extract_image_from_gallery(gallery) {
    if (gallery.length == 0) {
        return [null];
    }
-    if (gallery.length == 1) {
-        return [gallery[0]];
-    }
    var index = selected_gallery_index();
@@ -49,7 +46,7 @@ function extract_image_from_gallery(gallery) {
        index = 0;
    }
-    return [gallery[index]];
+    return [[gallery[index]]];
}
window.args_to_array = Array.from; // Compatibility with e.g. extensions that may expect this to be around
@@ -116,14 +113,6 @@ function get_img2img_tab_index() {
function create_submit_args(args) {
    var res = Array.from(args);
-    // As it is currently, txt2img and img2img send back the previous output args (txt2img_gallery, generation_info, html_info) whenever you generate a new image.
-    // This can lead to uploading a huge gallery of previously generated images, which leads to an unnecessary delay between submitting and beginning to generate.
-    // I don't know why gradio is sending outputs along with inputs, but we can prevent sending the image gallery here, which seems to be an issue for some.
-    // If gradio at some point stops sending outputs, this may break something
-    if (Array.isArray(res[res.length - 3])) {
-        res[res.length - 3] = null;
-    }
    return res;
}
@@ -189,7 +178,6 @@ function submit_img2img() {
    var res = create_submit_args(arguments);
    res[0] = id;
-    res[1] = get_tab_index('mode_img2img');
    return res;
}
@@ -207,7 +195,6 @@ function submit_extras() {
    res[0] = id;
-    console.log(res);
    return res;
}
@@ -376,9 +363,9 @@ function selectCheckpoint(name) {
    gradioApp().getElementById('change_checkpoint').click();
}
-function currentImg2imgSourceResolution(w, h, scaleBy) {
-    var img = gradioApp().querySelector('#mode_img2img > div[style="display: block;"] img');
-    return img ? [img.naturalWidth, img.naturalHeight, scaleBy] : [0, 0, scaleBy];
+function currentImg2imgSourceResolution(w, h, r) {
+    var img = gradioApp().querySelector('#mode_img2img > div[style="display: block;"] :is(img, canvas)');
+    return img ? [img.naturalWidth || img.width, img.naturalHeight || img.height, r] : [0, 0, r];
}
function updateImg2imgResizeToTextAfterChangingImage() {
@@ -14,10 +14,16 @@ onOptionsChanged(function() {
    if (!commentBefore && !commentAfter) return;
    var span = null;
-    if (div.classList.contains('gradio-checkbox')) span = div.querySelector('label span');
-    else if (div.classList.contains('gradio-checkboxgroup')) span = div.querySelector('span').firstChild;
-    else if (div.classList.contains('gradio-radio')) span = div.querySelector('span').firstChild;
-    else span = div.querySelector('label span').firstChild;
+    if (div.classList.contains('gradio-checkbox')) {
+        span = div.querySelector('label span');
+    } else if (div.classList.contains('gradio-checkboxgroup')) {
+        span = div.querySelector('span').firstChild;
+    } else if (div.classList.contains('gradio-radio')) {
+        span = div.querySelector('span').firstChild;
+    } else {
+        var elem = div.querySelector('label span');
+        if (elem) span = elem.firstChild;
+    }
    if (!span) return;
@@ -122,7 +122,7 @@ def encode_pil_to_base64(image):
        if opts.samples_format.lower() in ("jpg", "jpeg"):
            image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality)
        else:
-            image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality, lossless=opts.webp_lossless)
+            image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality)
    else:
        raise HTTPException(status_code=500, detail="Invalid image format")
@@ -207,7 +207,7 @@ class Api:
        self.router = APIRouter()
        self.app = app
        self.queue_lock = queue_lock
-        api_middleware(self.app)
+        #api_middleware(self.app) # XXX this will have to be fixed
        self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=models.TextToImageResponse)
        self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=models.ImageToImageResponse)
        self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=models.ExtrasSingleImageResponse)
@@ -249,8 +249,6 @@ class Api:
        self.add_api_route("/sdapi/v1/server-kill", self.kill_webui, methods=["POST"])
        self.add_api_route("/sdapi/v1/server-restart", self.restart_webui, methods=["POST"])
        self.add_api_route("/sdapi/v1/server-stop", self.stop_webui, methods=["POST"])
-        self.add_api_route("/sdapi/v1/server-reload-ui", self.reload_webui, methods=["POST"])
-        self.add_api_route("/sdapi/v1/server-reload-script-bodies", self.reload_script_bodies, methods=["POST"])
        self.default_script_arg_txt2img = []
        self.default_script_arg_img2img = []
@@ -928,10 +926,3 @@ class Api:
        shared.state.server_command = "stop"
        return Response("Stopping.")
-    def reload_webui(self):
-        shared.state.request_restart()
-        return Response("Reloading.")
-    def reload_script_bodies(self):
-        scripts.reload_script_body_only()
-        return Response("Reload script bodies.")
@@ -1,6 +1,6 @@
import inspect
-from pydantic import BaseModel, Field, create_model
+from pydantic import BaseModel, Field, create_model, ConfigDict
from typing import Any, Optional, Literal
from inflection import underscore
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img
@@ -92,9 +92,7 @@ class PydanticModelGenerator:
    fields = {
        d.field: (d.field_type, Field(default=d.field_value, alias=d.field_alias, exclude=d.field_exclude)) for d in self._model_def
    }
-    DynamicModel = create_model(self._model_name, **fields)
-    DynamicModel.__config__.allow_population_by_field_name = True
-    DynamicModel.__config__.allow_mutation = True
+    DynamicModel = create_model(self._model_name, __config__=ConfigDict(populate_by_name=True, frozen=False), **fields)
    return DynamicModel
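The `__config__=ConfigDict(...)` line above is the pydantic v2 migration: per-model settings move from mutating `Model.__config__` attributes (v1's `allow_population_by_field_name` / `allow_mutation`) into a `ConfigDict` passed directly to `create_model`. A minimal sketch of the v2 pattern (assumes pydantic v2 is installed; `Demo` and `style_list` are made-up names for illustration):

```python
from pydantic import ConfigDict, Field, create_model

# populate_by_name=True lets callers use either the alias or the field
# name, matching what allow_population_by_field_name=True did in v1.
Demo = create_model(
    "Demo",
    __config__=ConfigDict(populate_by_name=True, frozen=False),
    styles=(list[str], Field(default=[], alias="style_list")),
)

by_alias = Demo(style_list=["anime"])   # populate via the alias
by_name = Demo(styles=["photo"])        # populate via the field name
```

Note that `__config__` cannot be combined with `__base__` in `create_model`; the generator above relies on the default `BaseModel` base, so the direct swap works.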
StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
@@ -102,13 +100,13 @@ StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
    StableDiffusionProcessingTxt2Img,
    [
        {"key": "sampler_index", "type": str, "default": "Euler"},
-        {"key": "script_name", "type": str, "default": None},
+        {"key": "script_name", "type": str | None, "default": None},
        {"key": "script_args", "type": list, "default": []},
        {"key": "send_images", "type": bool, "default": True},
        {"key": "save_images", "type": bool, "default": False},
        {"key": "alwayson_scripts", "type": dict, "default": {}},
-        {"key": "force_task_id", "type": str, "default": None},
-        {"key": "infotext", "type": str, "default": None},
+        {"key": "force_task_id", "type": str | None, "default": None},
+        {"key": "infotext", "type": str | None, "default": None},
    ]
).generate_model()
@@ -117,27 +115,27 @@ StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
    StableDiffusionProcessingImg2Img,
    [
        {"key": "sampler_index", "type": str, "default": "Euler"},
-        {"key": "init_images", "type": list, "default": None},
+        {"key": "init_images", "type": list | None, "default": None},
        {"key": "denoising_strength", "type": float, "default": 0.75},
-        {"key": "mask", "type": str, "default": None},
+        {"key": "mask", "type": str | None, "default": None},
        {"key": "include_init_images", "type": bool, "default": False, "exclude" : True},
-        {"key": "script_name", "type": str, "default": None},
+        {"key": "script_name", "type": str | None, "default": None},
        {"key": "script_args", "type": list, "default": []},
        {"key": "send_images", "type": bool, "default": True},
        {"key": "save_images", "type": bool, "default": False},
        {"key": "alwayson_scripts", "type": dict, "default": {}},
-        {"key": "force_task_id", "type": str, "default": None},
-        {"key": "infotext", "type": str, "default": None},
+        {"key": "force_task_id", "type": str | None, "default": None},
+        {"key": "infotext", "type": str | None, "default": None},
    ]
).generate_model()
class TextToImageResponse(BaseModel): class TextToImageResponse(BaseModel):
images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.") images: list[str] | None = Field(default=None, title="Image", description="The generated image in base64 format.")
parameters: dict parameters: dict
info: str info: str
class ImageToImageResponse(BaseModel): class ImageToImageResponse(BaseModel):
-    images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
+    images: list[str] | None = Field(default=None, title="Image", description="The generated image in base64 format.")
     parameters: dict
     info: str

@@ -163,7 +161,7 @@ class ExtrasSingleImageRequest(ExtrasBaseRequest):
     image: str = Field(default="", title="Image", description="Image to work on, must be a Base64 string containing the image's data.")

 class ExtrasSingleImageResponse(ExtraBaseResponse):
-    image: str = Field(default=None, title="Image", description="The generated image in base64 format.")
+    image: str | None = Field(default=None, title="Image", description="The generated image in base64 format.")

 class FileData(BaseModel):
     data: str = Field(title="File data", description="Base64 representation of the file")

@@ -190,15 +188,15 @@ class ProgressResponse(BaseModel):
     progress: float = Field(title="Progress", description="The progress with a range of 0 to 1")
     eta_relative: float = Field(title="ETA in secs")
     state: dict = Field(title="State", description="The current state snapshot")
-    current_image: str = Field(default=None, title="Current image", description="The current image in base64 format. opts.show_progress_every_n_steps is required for this to work.")
-    textinfo: str = Field(default=None, title="Info text", description="Info text used by WebUI.")
+    current_image: str | None = Field(default=None, title="Current image", description="The current image in base64 format. opts.show_progress_every_n_steps is required for this to work.")
+    textinfo: str | None = Field(default=None, title="Info text", description="Info text used by WebUI.")

 class InterrogateRequest(BaseModel):
     image: str = Field(default="", title="Image", description="Image to work on, must be a Base64 string containing the image's data.")
     model: str = Field(default="clip", title="Model", description="The interrogate model used.")

 class InterrogateResponse(BaseModel):
-    caption: str = Field(default=None, title="Caption", description="The generated caption for the image.")
+    caption: str | None = Field(default=None, title="Caption", description="The generated caption for the image.")

 class TrainResponse(BaseModel):
     info: str = Field(title="Train info", description="Response string from train embedding or hypernetwork task.")
@@ -223,7 +221,7 @@ _options = vars(parser)['_option_string_actions']
 for key in _options:
     if(_options[key].dest != 'help'):
         flag = _options[key]
-        _type = str
+        _type = str | None
         if _options[key].default is not None:
             _type = type(_options[key].default)
         flags.update({flag.dest: (_type, Field(default=flag.default, description=flag.help))})

@@ -233,7 +231,7 @@ FlagsModel = create_model("Flags", **flags)
 class SamplerItem(BaseModel):
     name: str = Field(title="Name")
     aliases: list[str] = Field(title="Aliases")
-    options: dict[str, str] = Field(title="Options")
+    options: dict[str, Any] = Field(title="Options")

 class SchedulerItem(BaseModel):
     name: str = Field(title="Name")
@@ -243,6 +241,9 @@ class SchedulerItem(BaseModel):
     need_inner_model: Optional[bool] = Field(title="Needs Inner Model")

 class UpscalerItem(BaseModel):
+    class Config:
+        protected_namespaces = ()
+
     name: str = Field(title="Name")
     model_name: Optional[str] = Field(title="Model Name")
     model_path: Optional[str] = Field(title="Path")

@@ -253,6 +254,9 @@ class LatentUpscalerModeItem(BaseModel):
     name: str = Field(title="Name")

 class SDModelItem(BaseModel):
+    class Config:
+        protected_namespaces = ()
+
     title: str = Field(title="Title")
     model_name: str = Field(title="Model Name")
     hash: Optional[str] = Field(title="Short hash")

@@ -261,6 +265,9 @@ class SDModelItem(BaseModel):
     config: Optional[str] = Field(title="Config file")

 class SDVaeItem(BaseModel):
+    class Config:
+        protected_namespaces = ()
+
     model_name: str = Field(title="Model Name")
     filename: str = Field(title="Filename")
@@ -300,12 +307,12 @@ class MemoryResponse(BaseModel):
 class ScriptsList(BaseModel):
-    txt2img: list = Field(default=None, title="Txt2img", description="Titles of scripts (txt2img)")
-    img2img: list = Field(default=None, title="Img2img", description="Titles of scripts (img2img)")
+    txt2img: list | None = Field(default=None, title="Txt2img", description="Titles of scripts (txt2img)")
+    img2img: list | None = Field(default=None, title="Img2img", description="Titles of scripts (img2img)")

 class ScriptArg(BaseModel):
-    label: str = Field(default=None, title="Label", description="Name of the argument in UI")
+    label: str | None = Field(default=None, title="Label", description="Name of the argument in UI")
     value: Optional[Any] = Field(default=None, title="Value", description="Default value of the argument")
     minimum: Optional[Any] = Field(default=None, title="Minimum", description="Minimum allowed value for the argumentin UI")
     maximum: Optional[Any] = Field(default=None, title="Minimum", description="Maximum allowed value for the argumentin UI")

@@ -314,9 +321,9 @@ class ScriptArg(BaseModel):
 class ScriptInfo(BaseModel):
-    name: str = Field(default=None, title="Name", description="Script name")
-    is_alwayson: bool = Field(default=None, title="IsAlwayson", description="Flag specifying whether this script is an alwayson script")
-    is_img2img: bool = Field(default=None, title="IsImg2img", description="Flag specifying whether this script is an img2img script")
+    name: str | None = Field(default=None, title="Name", description="Script name")
+    is_alwayson: bool | None = Field(default=None, title="IsAlwayson", description="Flag specifying whether this script is an alwayson script")
+    is_img2img: bool | None = Field(default=None, title="IsImg2img", description="Flag specifying whether this script is an img2img script")
     args: list[ScriptArg] = Field(title="Arguments", description="List of script's arguments")

 class ExtensionItem(BaseModel):
+4 -18
@@ -1,7 +1,7 @@
 import os

 from modules import modelloader, errors
-from modules.shared import cmd_opts, opts, hf_endpoint
+from modules.shared import cmd_opts, opts
 from modules.upscaler import Upscaler, UpscalerData
 from modules.upscaler_utils import upscale_with_model

@@ -49,18 +49,7 @@ class UpscalerDAT(Upscaler):
             scaler.local_data_path = modelloader.load_file_from_url(
                 scaler.data_path,
                 model_dir=self.model_download_path,
-                hash_prefix=scaler.sha256,
             )
-            if os.path.getsize(scaler.local_data_path) < 200:
-                # Re-download if the file is too small, probably an LFS pointer
-                scaler.local_data_path = modelloader.load_file_from_url(
-                    scaler.data_path,
-                    model_dir=self.model_download_path,
-                    hash_prefix=scaler.sha256,
-                    re_download=True,
-                )
         if not os.path.exists(scaler.local_data_path):
             raise FileNotFoundError(f"DAT data missing: {scaler.local_data_path}")
         return scaler

@@ -71,23 +60,20 @@ def get_dat_models(scaler):
     return [
         UpscalerData(
             name="DAT x2",
-            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x2.pth",
+            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x2.pth",
             scale=2,
             upscaler=scaler,
-            sha256='7760aa96e4ee77e29d4f89c3a4486200042e019461fdb8aa286f49aa00b89b51',
         ),
         UpscalerData(
             name="DAT x3",
-            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x3.pth",
+            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x3.pth",
             scale=3,
             upscaler=scaler,
-            sha256='581973e02c06f90d4eb90acf743ec9604f56f3c2c6f9e1e2c2b38ded1f80d197',
         ),
         UpscalerData(
             name="DAT x4",
-            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x4.pth",
+            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x4.pth",
             scale=4,
             upscaler=scaler,
-            sha256='391a6ce69899dff5ea3214557e9d585608254579217169faf3d4c353caff049e',
         ),
     ]
+1 -1
@@ -109,7 +109,7 @@ def check_versions():
 expected_torch_version = "2.1.2"
 expected_xformers_version = "0.0.23.post1"
-expected_gradio_version = "3.41.2"
+expected_gradio_version = "4.38.1"

 if version.parse(torch.__version__) < version.parse(expected_torch_version):
     print_error_explanation(f"""
+1 -1
@@ -23,7 +23,7 @@ def run_pnginfo(image):
     info = ''
     for key, text in items.items():
         info += f"""
-<div class="infotext">
+<div>
 <p><b>{plaintext_to_html(str(key))}</b></p>
 <p>{plaintext_to_html(str(text))}</p>
 </div>
+166
@@ -0,0 +1,166 @@
+import inspect
+import warnings
+from functools import wraps
+
+import gradio as gr
+import gradio.component_meta
+
+from modules import scripts, ui_tempdir, patches
+
+
+class GradioDeprecationWarning(DeprecationWarning):
+    pass
+
+
+def add_classes_to_gradio_component(comp):
+    """
+    this adds gradio-* to the component for css styling (ie gradio-button to gr.Button), as well as some others
+    """
+
+    comp.elem_classes = [f"gradio-{comp.get_block_name()}", *(getattr(comp, 'elem_classes', None) or [])]
+
+    if getattr(comp, 'multiselect', False):
+        comp.elem_classes.append('multiselect')
+
+
+def IOComponent_init(self, *args, **kwargs):
+    self.webui_tooltip = kwargs.pop('tooltip', None)
+
+    if scripts.scripts_current is not None:
+        scripts.scripts_current.before_component(self, **kwargs)
+
+    scripts.script_callbacks.before_component_callback(self, **kwargs)
+
+    res = original_IOComponent_init(self, *args, **kwargs)
+
+    add_classes_to_gradio_component(self)
+
+    scripts.script_callbacks.after_component_callback(self, **kwargs)
+
+    if scripts.scripts_current is not None:
+        scripts.scripts_current.after_component(self, **kwargs)
+
+    return res
+
+
+def Block_get_config(self):
+    config = original_Block_get_config(self)
+
+    webui_tooltip = getattr(self, 'webui_tooltip', None)
+    if webui_tooltip:
+        config["webui_tooltip"] = webui_tooltip
+
+    config.pop('example_inputs', None)
+
+    return config
+
+
+def BlockContext_init(self, *args, **kwargs):
+    if scripts.scripts_current is not None:
+        scripts.scripts_current.before_component(self, **kwargs)
+
+    scripts.script_callbacks.before_component_callback(self, **kwargs)
+
+    res = original_BlockContext_init(self, *args, **kwargs)
+
+    add_classes_to_gradio_component(self)
+
+    scripts.script_callbacks.after_component_callback(self, **kwargs)
+
+    if scripts.scripts_current is not None:
+        scripts.scripts_current.after_component(self, **kwargs)
+
+    return res
+
+
+def Blocks_get_config_file(self, *args, **kwargs):
+    config = original_Blocks_get_config_file(self, *args, **kwargs)
+
+    for comp_config in config["components"]:
+        if "example_inputs" in comp_config:
+            comp_config["example_inputs"] = {"serialized": []}
+
+    return config
+
+
+original_IOComponent_init = patches.patch(__name__, obj=gr.components.Component, field="__init__", replacement=IOComponent_init)
+original_Block_get_config = patches.patch(__name__, obj=gr.blocks.Block, field="get_config", replacement=Block_get_config)
+original_BlockContext_init = patches.patch(__name__, obj=gr.blocks.BlockContext, field="__init__", replacement=BlockContext_init)
+original_Blocks_get_config_file = patches.patch(__name__, obj=gr.blocks.Blocks, field="get_config_file", replacement=Blocks_get_config_file)
+
+ui_tempdir.install_ui_tempdir_override()
+
+
+def gradio_component_meta_create_or_modify_pyi(component_class, class_name, events):
+    if hasattr(component_class, 'webui_do_not_create_gradio_pyi_thank_you'):
+        return
+
+    gradio_component_meta_create_or_modify_pyi_original(component_class, class_name, events)
+
+
+# this prevents creation of .pyi files in webui dir
+gradio_component_meta_create_or_modify_pyi_original = patches.patch(__file__, gradio.component_meta, 'create_or_modify_pyi', gradio_component_meta_create_or_modify_pyi)
+
+# this function is broken and does not seem to do anything useful
+gradio.component_meta.updateable = lambda x: x
+
+
+def repair(grclass):
+    if not getattr(grclass, 'EVENTS', None):
+        return
+
+    @wraps(grclass.__init__)
+    def __repaired_init__(self, *args, tooltip=None, source=None, original=grclass.__init__, **kwargs):
+        if source:
+            kwargs["sources"] = [source]
+
+        allowed_kwargs = inspect.signature(original).parameters
+        fixed_kwargs = {}
+        for k, v in kwargs.items():
+            if k in allowed_kwargs:
+                fixed_kwargs[k] = v
+            else:
+                warnings.warn(f"unexpected argument for {grclass.__name__}: {k}", GradioDeprecationWarning, stacklevel=2)
+
+        original(self, *args, **fixed_kwargs)
+
+        self.webui_tooltip = tooltip
+
+        for event in self.EVENTS:
+            replaced_event = getattr(self, str(event))
+
+            def fun(*xargs, _js=None, replaced_event=replaced_event, **xkwargs):
+                if _js:
+                    xkwargs['js'] = _js
+
+                return replaced_event(*xargs, **xkwargs)
+
+            setattr(self, str(event), fun)
+
+    grclass.__init__ = __repaired_init__
+    grclass.update = gr.update
+
+
+for component in set(gr.components.__all__ + gr.layouts.__all__):
+    repair(getattr(gr, component, None))
+
+
+class Dependency(gr.events.Dependency):
+    def __init__(self, *args, **kwargs):
+        super().__init__(*args, **kwargs)
+
+        def then(*xargs, _js=None, **xkwargs):
+            if _js:
+                xkwargs['js'] = _js
+
+            return original_then(*xargs, **xkwargs)
+
+        original_then = self.then
+        self.then = then
+
+
+gr.events.Dependency = Dependency
+
+gr.Box = gr.Group
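This new module leans entirely on `patches.patch`, which swaps in a replacement attribute and hands back the original so the wrapper can delegate to it. A minimal self-contained sketch of that pattern (not the webui implementation, just its shape):

```python
def patch(key, obj, field, replacement):
    """Replace obj.field with replacement; return the original so the wrapper can delegate."""
    original = getattr(obj, field)
    setattr(obj, field, replacement)
    return original


class Component:
    def __init__(self):
        self.classes = []


def patched_init(self, *args, **kwargs):
    original_init(self, *args, **kwargs)      # delegate to the saved original first
    self.classes.append("gradio-component")   # then decorate, like add_classes_to_gradio_component

original_init = patch(__name__, Component, "__init__", patched_init)

c = Component()
assert c.classes == ["gradio-component"]
```

Returning the original from `patch` is what lets `IOComponent_init` and friends run the script callbacks around the real Gradio constructor instead of replacing it outright.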
-83
@@ -1,83 +0,0 @@
-import gradio as gr
-
-from modules import scripts, ui_tempdir, patches
-
-
-def add_classes_to_gradio_component(comp):
-    """
-    this adds gradio-* to the component for css styling (ie gradio-button to gr.Button), as well as some others
-    """
-
-    comp.elem_classes = [f"gradio-{comp.get_block_name()}", *(comp.elem_classes or [])]
-
-    if getattr(comp, 'multiselect', False):
-        comp.elem_classes.append('multiselect')
-
-
-def IOComponent_init(self, *args, **kwargs):
-    self.webui_tooltip = kwargs.pop('tooltip', None)
-
-    if scripts.scripts_current is not None:
-        scripts.scripts_current.before_component(self, **kwargs)
-
-    scripts.script_callbacks.before_component_callback(self, **kwargs)
-
-    res = original_IOComponent_init(self, *args, **kwargs)
-
-    add_classes_to_gradio_component(self)
-
-    scripts.script_callbacks.after_component_callback(self, **kwargs)
-
-    if scripts.scripts_current is not None:
-        scripts.scripts_current.after_component(self, **kwargs)
-
-    return res
-
-
-def Block_get_config(self):
-    config = original_Block_get_config(self)
-
-    webui_tooltip = getattr(self, 'webui_tooltip', None)
-    if webui_tooltip:
-        config["webui_tooltip"] = webui_tooltip
-
-    config.pop('example_inputs', None)
-
-    return config
-
-
-def BlockContext_init(self, *args, **kwargs):
-    if scripts.scripts_current is not None:
-        scripts.scripts_current.before_component(self, **kwargs)
-
-    scripts.script_callbacks.before_component_callback(self, **kwargs)
-
-    res = original_BlockContext_init(self, *args, **kwargs)
-
-    add_classes_to_gradio_component(self)
-
-    scripts.script_callbacks.after_component_callback(self, **kwargs)
-
-    if scripts.scripts_current is not None:
-        scripts.scripts_current.after_component(self, **kwargs)
-
-    return res
-
-
-def Blocks_get_config_file(self, *args, **kwargs):
-    config = original_Blocks_get_config_file(self, *args, **kwargs)
-
-    for comp_config in config["components"]:
-        if "example_inputs" in comp_config:
-            comp_config["example_inputs"] = {"serialized": []}
-
-    return config
-
-
-original_IOComponent_init = patches.patch(__name__, obj=gr.components.IOComponent, field="__init__", replacement=IOComponent_init)
-original_Block_get_config = patches.patch(__name__, obj=gr.blocks.Block, field="get_config", replacement=Block_get_config)
-original_BlockContext_init = patches.patch(__name__, obj=gr.blocks.BlockContext, field="__init__", replacement=BlockContext_init)
-original_Blocks_get_config_file = patches.patch(__name__, obj=gr.blocks.Blocks, field="get_config_file", replacement=Blocks_get_config_file)
-
-ui_tempdir.install_ui_tempdir_override()
+7 -9
@@ -2,7 +2,6 @@ import os
 from contextlib import closing
 from pathlib import Path

-import numpy as np
 from PIL import Image, ImageOps, ImageFilter, ImageEnhance, UnidentifiedImageError

 import gradio as gr

@@ -149,25 +148,24 @@ def process_batch(p, input, output_dir, inpaint_mask_dir, args, to_scale=False,
     return batch_results

-def img2img(id_task: str, request: gr.Request, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, img2img_batch_source_type: str, img2img_batch_upload: list, *args):
+def img2img(id_task: str, request: gr.Request, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, init_img_inpaint, init_mask_inpaint, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, img2img_batch_source_type: str, img2img_batch_upload: list, *args):
     override_settings = create_override_settings_dict(override_settings_texts)

     is_batch = mode == 5

     if mode == 0:  # img2img
-        image = init_img
+        image = init_img["composite"]
         mask = None
     elif mode == 1:  # img2img sketch
-        image = sketch
+        image = sketch["composite"]
         mask = None
     elif mode == 2:  # inpaint
-        image, mask = init_img_with_mask["image"], init_img_with_mask["mask"]
+        image, mask = init_img_with_mask["background"], init_img_with_mask["layers"][0]
         mask = processing.create_binary_mask(mask)
     elif mode == 3:  # inpaint sketch
-        image = inpaint_color_sketch
-        orig = inpaint_color_sketch_orig or inpaint_color_sketch
-        pred = np.any(np.array(image) != np.array(orig), axis=-1)
-        mask = Image.fromarray(pred.astype(np.uint8) * 255, "L")
+        image = inpaint_color_sketch["composite"]
+        orig = inpaint_color_sketch["background"]
+        mask = inpaint_color_sketch["layers"][0].getchannel("A")
         mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100)
         blur = ImageFilter.GaussianBlur(mask_blur)
         image = Image.composite(image.filter(blur), orig, mask.filter(blur))
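The reason every branch above now indexes into a dict is that Gradio 4's `ImageEditor` returns an `EditorValue` with `background`, `layers`, and `composite` keys instead of a bare image (or an `image`/`mask` pair). The inpaint mask comes straight from the painted layer, and the inpaint-sketch mask from that layer's alpha channel, replacing the old numpy pixel-diff against the original. A sketch of the selection logic with placeholder values in place of real PIL images:

```python
def image_and_mask(mode: int, editor_value: dict):
    """Pick (image, mask) from a Gradio-4-style EditorValue, mirroring the modes above."""
    if mode in (0, 1):   # img2img / img2img sketch: flattened result, no mask
        return editor_value["composite"], None
    if mode == 2:        # inpaint: untouched background, painted layer becomes the mask
        return editor_value["background"], editor_value["layers"][0]
    if mode == 3:        # inpaint sketch: composite image; mask is the layer's alpha channel
        return editor_value["composite"], ("alpha of", editor_value["layers"][0])
    raise ValueError(f"unsupported mode: {mode}")

value = {"background": "bg", "layers": ["layer0"], "composite": "flat"}
assert image_and_mask(2, value) == ("bg", "layer0")
```

Deriving the mask from layer alpha is also why `inpaint_color_sketch_orig` disappears from the signature: the editor keeps the pristine background itself, so no separate "original" copy needs to round-trip through the UI.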
+25 -11
@@ -74,29 +74,38 @@ def image_from_url_text(filedata):
     if filedata is None:
         return None

-    if type(filedata) == list and filedata and type(filedata[0]) == dict and filedata[0].get("is_file", False):
+    if isinstance(filedata, list):
+        if len(filedata) == 0:
+            return None
+
         filedata = filedata[0]

+    if isinstance(filedata, dict) and filedata.get("is_file", False):
+        filedata = filedata
+
+    filename = None
     if type(filedata) == dict and filedata.get("is_file", False):
         filename = filedata["name"]
+    elif isinstance(filedata, tuple) and len(filedata) == 2:  # gradio 4.16 sends images from gallery as a list of tuples
+        return filedata[0]

+    if filename:
         is_in_right_dir = ui_tempdir.check_tmp_file(shared.demo, filename)
         assert is_in_right_dir, 'trying to open image file outside of allowed directories'
         filename = filename.rsplit('?', 1)[0]
         return images.read(filename)

-    if type(filedata) == list:
-        if len(filedata) == 0:
-            return None
-
-        filedata = filedata[0]
-
-    if filedata.startswith("data:image/png;base64,"):
-        filedata = filedata[len("data:image/png;base64,"):]
-
-    filedata = base64.decodebytes(filedata.encode('utf-8'))
-    image = images.read(io.BytesIO(filedata))
-    return image
+    if isinstance(filedata, str):
+        if filedata.startswith("data:image/png;base64,"):
+            filedata = filedata[len("data:image/png;base64,"):]
+
+        filedata = base64.decodebytes(filedata.encode('utf-8'))
+        image = images.read(io.BytesIO(filedata))
+        return image
+
+    return None
def add_paste_fields(tabname, init_img, fields, override_settings_component=None): def add_paste_fields(tabname, init_img, fields, override_settings_component=None):
@@ -186,6 +195,8 @@ def connect_paste_params_buttons():
 def send_image_and_dimensions(x):
     if isinstance(x, Image.Image):
         img = x
+    elif isinstance(x, list) and isinstance(x[0], tuple):
+        img = x[0][0]
     else:
         img = image_from_url_text(x)
@@ -413,6 +424,9 @@ def create_override_settings_dict(text_pairs):
     res = {}

+    if not text_pairs:
+        return res
+
     params = {}
     for pair in text_pairs:
         k, v = pair.split(":", maxsplit=1)
+1 -1
@@ -36,7 +36,7 @@ def imports():
     shared_init.initialize()
     startup_timer.record("initialize shared")

-    from modules import processing, gradio_extensons, ui  # noqa: F401
+    from modules import processing, gradio_extensions, ui  # noqa: F401
     startup_timer.record("other imports")
+5 -3
@@ -4,6 +4,8 @@ import signal
 import sys
 import re

+import starlette

 from modules.timer import startup_timer

@@ -192,8 +194,7 @@ def configure_opts_onchange():
 def setup_middleware(app):
     from starlette.middleware.gzip import GZipMiddleware

-    app.middleware_stack = None  # reset current middleware to allow modifying user provided list
-    app.add_middleware(GZipMiddleware, minimum_size=1000)
+    app.user_middleware.insert(0, starlette.middleware.Middleware(GZipMiddleware, minimum_size=1000))
     configure_cors_middleware(app)
     app.build_middleware_stack()  # rebuild middleware stack on-the-fly
@@ -211,5 +212,6 @@ def configure_cors_middleware(app):
         cors_options["allow_origins"] = cmd_opts.cors_allow_origins.split(',')
     if cmd_opts.cors_allow_origins_regex:
         cors_options["allow_origin_regex"] = cmd_opts.cors_allow_origins_regex
-    app.add_middleware(CORSMiddleware, **cors_options)
+    app.user_middleware.insert(0, starlette.middleware.Middleware(CORSMiddleware, **cors_options))
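Newer Starlette versions raise if `add_middleware` is called after the middleware stack has been built, which is why both call sites now insert a `Middleware` entry into `app.user_middleware` directly and then rebuild. Inserting at index 0 makes that middleware the outermost wrapper. A toy model of the stack-building order (assumption: Starlette wraps the app with list entries in reverse, so entry 0 ends up outermost):

```python
def build_stack(app, middleware):
    """Starlette-style: wrap app with each middleware, last list entry innermost."""
    for wrap in reversed(middleware):
        app = wrap(app)
    return app

trace = []

def make(name):
    def factory(inner):
        def call():
            trace.append(name)  # record the order the wrappers run in
            inner()
        return call
    return factory

middleware = [make("cors")]
middleware.insert(0, make("gzip"))   # insert(0, ...) -> gzip wraps everything else
stack = build_stack(lambda: trace.append("app"), middleware)
stack()
assert trace == ["gzip", "cors", "app"]
```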
+24 -1
@@ -10,7 +10,6 @@ import torch
 from modules import shared
 from modules.upscaler import Upscaler, UpscalerLanczos, UpscalerNearest, UpscalerNone
-from modules.util import load_file_from_url  # noqa, backwards compatibility

 if TYPE_CHECKING:
     import spandrel

@@ -18,6 +17,30 @@ if TYPE_CHECKING:
 logger = logging.getLogger(__name__)

+def load_file_from_url(
+    url: str,
+    *,
+    model_dir: str,
+    progress: bool = True,
+    file_name: str | None = None,
+    hash_prefix: str | None = None,
+) -> str:
+    """Download a file from `url` into `model_dir`, using the file present if possible.
+
+    Returns the path to the downloaded file.
+    """
+    os.makedirs(model_dir, exist_ok=True)
+    if not file_name:
+        parts = urlparse(url)
+        file_name = os.path.basename(parts.path)
+    cached_file = os.path.abspath(os.path.join(model_dir, file_name))
+    if not os.path.exists(cached_file):
+        print(f'Downloading: "{url}" to {cached_file}\n')
+        from torch.hub import download_url_to_file
+        download_url_to_file(url, cached_file, progress=progress, hash_prefix=hash_prefix)
+    return cached_file

 def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None, ext_blacklist=None, hash_prefix=None) -> list:
     """
     A one-and done loader to try finding the desired models in specified directories.
+3 -3
@@ -24,7 +24,7 @@ class SafetensorsMapping(typing.Mapping):
         return self.file.get_tensor(key)

-CLIPL_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_l.safetensors"
+CLIPL_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_l.safetensors"
 CLIPL_CONFIG = {
     "hidden_act": "quick_gelu",
     "hidden_size": 768,

@@ -33,7 +33,7 @@ CLIPL_CONFIG = {
     "num_hidden_layers": 12,
 }

-CLIPG_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_g.safetensors"
+CLIPG_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_g.safetensors"
 CLIPG_CONFIG = {
     "hidden_act": "gelu",
     "hidden_size": 1280,

@@ -43,7 +43,7 @@ CLIPG_CONFIG = {
     "textual_inversion_key": "clip_g",
 }

-T5_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/t5xxl_fp16.safetensors"
+T5_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/t5xxl_fp16.safetensors"
 T5_CONFIG = {
     "d_ff": 10240,
     "d_model": 4096,
+3
@@ -13,6 +13,9 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
     outputs = []

+    if isinstance(image, dict):
+        image = image["composite"]
+
     def get_images(extras_mode, image, image_folder, input_dir):
         if extras_mode == 1:
             for img in image_folder:
+1 -4
@@ -1259,10 +1259,7 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
             if self.hr_checkpoint_info is None:
                 raise Exception(f'Could not find checkpoint with name {self.hr_checkpoint_name}')

-            if shared.sd_model.sd_checkpoint_info == self.hr_checkpoint_info:
-                self.hr_checkpoint_info = None
-            else:
-                self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title
+            self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title

         if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
             self.extra_generation_params["Hires sampler"] = self.hr_sampler_name
+1 -1
@@ -22,7 +22,7 @@ class ScriptRefiner(scripts.ScriptBuiltinUI):
     def ui(self, is_img2img):
         with InputAccordion(False, label="Refiner", elem_id=self.elem_id("enable")) as enable_refiner:
             with gr.Row():
-                refiner_checkpoint = gr.Dropdown(label='Checkpoint', elem_id=self.elem_id("checkpoint"), choices=sd_models.checkpoint_tiles(), value='', tooltip="switch to another model in the middle of generation")
+                refiner_checkpoint = gr.Dropdown(label='Checkpoint', elem_id=self.elem_id("checkpoint"), choices=["", *sd_models.checkpoint_tiles()], value='', tooltip="switch to another model in the middle of generation")
                 create_refresh_button(refiner_checkpoint, sd_models.list_models, lambda: {"choices": sd_models.checkpoint_tiles()}, self.elem_id("checkpoint_refresh"))
                 refiner_switch_at = gr.Slider(value=0.8, label="Switch at", minimum=0.01, maximum=1.0, step=0.01, elem_id=self.elem_id("switch_at"), tooltip="fraction of sampling steps when the switch to refiner model should happen; 1=never, 0.5=switch in the middle of generation")
+1 -1
@@ -34,7 +34,7 @@ class ScriptSeed(scripts.ScriptBuiltinUI):
             random_seed = ToolButton(ui.random_symbol, elem_id=self.elem_id("random_seed"), tooltip="Set seed to -1, which will cause a new random number to be used every time")
             reuse_seed = ToolButton(ui.reuse_symbol, elem_id=self.elem_id("reuse_seed"), tooltip="Reuse seed from last generation, mostly useful if it was randomized")
-            seed_checkbox = gr.Checkbox(label='Extra', elem_id=self.elem_id("subseed_show"), value=False)
+            seed_checkbox = gr.Checkbox(label='Extra', elem_id=self.elem_id("subseed_show"), value=False, scale=0, min_width=60)

         with gr.Group(visible=False, elem_id=self.elem_id("seed_extras")) as seed_extras:
             with gr.Row(elem_id=self.elem_id("subseed_row")):
+6 -5
@@ -1,3 +1,4 @@
+from __future__ import annotations
+
 import base64
 import io
 import time

@@ -66,11 +67,11 @@ class ProgressResponse(BaseModel):
     active: bool = Field(title="Whether the task is being worked on right now")
     queued: bool = Field(title="Whether the task is in queue")
     completed: bool = Field(title="Whether the task has already finished")
-    progress: float = Field(default=None, title="Progress", description="The progress with a range of 0 to 1")
-    eta: float = Field(default=None, title="ETA in secs")
-    live_preview: str = Field(default=None, title="Live preview image", description="Current live preview; a data: uri")
-    id_live_preview: int = Field(default=None, title="Live preview image ID", description="Send this together with next request to prevent receiving same image")
-    textinfo: str = Field(default=None, title="Info text", description="Info text used by WebUI.")
+    progress: float | None = Field(default=None, title="Progress", description="The progress with a range of 0 to 1")
+    eta: float | None = Field(default=None, title="ETA in secs")
+    live_preview: str | None = Field(default=None, title="Live preview image", description="Current live preview; a data: uri")
+    id_live_preview: int | None = Field(default=None, title="Live preview image ID", description="Send this together with next request to prevent receiving same image")
+    textinfo: str | None = Field(default=None, title="Info text", description="Info text used by WebUI.")

 def setup_progress_api(app):
+1 -2
@@ -13,7 +13,6 @@ class ScriptPostprocessingForMainUI(scripts.Script):
 return scripts.AlwaysVisible
 def ui(self, is_img2img):
-self.script.tab_name = '_img2img' if is_img2img else '_txt2img'
 self.postprocessing_controls = self.script.ui()
 return self.postprocessing_controls.values()
@@ -34,7 +33,7 @@ def create_auto_preprocessing_script_data():
 for name in shared.opts.postprocessing_enable_in_main_ui:
 script = next(iter([x for x in scripts.postprocessing_scripts_data if x.script_class.name == name]), None)
-if script is None or script.script_class.extra_only:
+if script is None:
 continue
 constructor = lambda s=script: ScriptPostprocessingForMainUI(s.script_class())
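The lookup above uses `next(iter([x for x in ... if ...]), None)`, which builds a full list just to take its first element; the same first-match-or-default idiom works with a plain generator expression. A standalone sketch (helper name and sample data are illustrative, not from the codebase):

```python
def first_or_none(iterable, predicate):
    # return the first item satisfying predicate, or None if there is none;
    # the generator expression stops at the first match instead of building a list
    return next((x for x in iterable if predicate(x)), None)

scripts = [{"name": "GFPGAN"}, {"name": "CodeFormer"}]
assert first_or_none(scripts, lambda s: s["name"] == "CodeFormer") == {"name": "CodeFormer"}
assert first_or_none(scripts, lambda s: s["name"] == "missing") is None
```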
+5 -31
@@ -1,4 +1,3 @@
-import re
 import dataclasses
 import os
 import gradio as gr
@@ -60,10 +59,6 @@ class ScriptPostprocessing:
 args_from = None
 args_to = None
-# define if the script should be used only in extras or main UI
-extra_only = None
-main_ui_only = None
 order = 1000
 """scripts will be ordred by this value in postprocessing UI"""
@@ -102,31 +97,6 @@
 def image_changed(self):
 pass
-tab_name = ''  # used by ScriptPostprocessingForMainUI
-replace_pattern = re.compile(r'\s')
-rm_pattern = re.compile(r'[^a-z_0-9]')
-def elem_id(self, item_id):
-"""
-Helper function to generate id for a HTML element
-constructs final id out of script name and user-supplied item_id
-'script_extras_{self.name.lower()}_{item_id}'
-{tab_name} will append to the end of the id if set
-tab_name will be set to '_img2img' or '_txt2img' if use by ScriptPostprocessingForMainUI
-Extensions should use this function to generate element IDs
-"""
-return self.elem_id_suffix(f'extras_{self.name.lower()}_{item_id}')
-def elem_id_suffix(self, base_id):
-"""
-Append tab_name to the base_id
-Extensions that already have specific there element IDs and wish to keep their IDs the same when possible should use this function
-"""
-base_id = self.rm_pattern.sub('', self.replace_pattern.sub('_', base_id))
-return f'{base_id}{self.tab_name}'
 def wrap_call(func, filename, funcname, *args, default=None, **kwargs):
 try:
@@ -149,6 +119,10 @@ class ScriptPostprocessingRunner:
 for script_data in scripts_data:
 script: ScriptPostprocessing = script_data.script_class()
 script.filename = script_data.path
+if script.name == "Simple Upscale":
+continue
 self.scripts.append(script)
 def create_script_ui(self, script, inputs):
@@ -178,7 +152,7 @@
 return len(self.scripts)
-filtered_scripts = [script for script in self.scripts if script.name not in scripts_filter_out and not script.main_ui_only]
+filtered_scripts = [script for script in self.scripts if script.name not in scripts_filter_out]
 script_scores = {script.name: (script_score(script.name), script.order, script.name, original_index) for original_index, script in enumerate(filtered_scripts)}
 return sorted(filtered_scripts, key=lambda x: script_scores[x.name])
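`script_scores` above keys the sort on a tuple (user-preference score, script order, name, original index), so scripts the user ordered explicitly come first and everything else keeps a stable fallback order. A minimal sketch of the same tuple-key idea with plain strings (helper name is hypothetical):

```python
def sort_with_user_order(names, user_order):
    # items named in user_order sort first, in that order;
    # everything else follows, keeping its original relative order
    def key(pair):
        index, name = pair
        rank = user_order.index(name) if name in user_order else len(user_order)
        return (rank, index)  # tuple key: user rank first, original position as tiebreaker
    return [name for _, name in sorted(enumerate(names), key=key)]

assert sort_with_user_order(["a", "b", "c", "d"], ["c", "a"]) == ["c", "a", "b", "d"]
```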
+1 -1
@@ -76,7 +76,7 @@ class DisableInitialization(ReplaceHelper):
 def transformers_utils_hub_get_file_from_cache(original, url, *args, **kwargs):
 # this file is always 404, prevent making request
-if url == f'{shared.hf_endpoint}/openai/clip-vit-large-patch14/resolve/main/added_tokens.json' or url == 'openai/clip-vit-large-patch14' and args[0] == 'added_tokens.json':
+if url == 'https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/added_tokens.json' or url == 'openai/clip-vit-large-patch14' and args[0] == 'added_tokens.json':
 return None
 try:
+5 -10
@@ -159,7 +159,7 @@ def list_models():
 model_url = None
 expected_sha256 = None
 else:
-model_url = f"{shared.hf_endpoint}/stable-diffusion-v1-5/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
+model_url = f"{shared.hf_endpoint}/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
 expected_sha256 = '6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa'
 model_list = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"], hash_prefix=expected_sha256)
@@ -423,10 +423,6 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer
 set_model_type(model, state_dict)
 set_model_fields(model)
-if 'ztsnr' in state_dict:
-model.ztsnr = True
-else:
-model.ztsnr = False
 if model.is_sdxl:
 sd_models_xl.extend_sdxl(model)
@@ -665,7 +661,7 @@ def apply_alpha_schedule_override(sd_model, p=None):
 p.extra_generation_params['Downcast alphas_cumprod'] = opts.use_downcasted_alpha_bar
 sd_model.alphas_cumprod = sd_model.alphas_cumprod.half().to(shared.device)
-if opts.sd_noise_schedule == "Zero Terminal SNR" or (hasattr(sd_model, 'ztsnr') and sd_model.ztsnr):
+if opts.sd_noise_schedule == "Zero Terminal SNR":
 if p is not None:
 p.extra_generation_params['Noise Schedule'] = opts.sd_noise_schedule
 sd_model.alphas_cumprod = rescale_zero_terminal_snr_abar(sd_model.alphas_cumprod).to(shared.device)
@@ -787,7 +783,7 @@ def get_obj_from_str(string, reload=False):
 return getattr(importlib.import_module(module, package=None), cls)
-def load_model(checkpoint_info=None, already_loaded_state_dict=None, checkpoint_config=None):
+def load_model(checkpoint_info=None, already_loaded_state_dict=None):
 from modules import sd_hijack
 checkpoint_info = checkpoint_info or select_checkpoint()
@@ -805,8 +801,7 @@
 else:
 state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
-if not checkpoint_config:
-checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
+checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
 clip_is_included_into_sd = any(x for x in [sd1_clip_weight, sd2_clip_weight, sdxl_clip_weight, sdxl_refiner_clip_weight] if x in state_dict)
 timer.record("find config")
@@ -979,7 +974,7 @@ def reload_model_weights(sd_model=None, info=None, forced_reload=False):
 if sd_model is not None:
 send_model_to_trash(sd_model)
-load_model(checkpoint_info, already_loaded_state_dict=state_dict, checkpoint_config=checkpoint_config)
+load_model(checkpoint_info, already_loaded_state_dict=state_dict)
 return model_data.sd_model
 try:
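The comparison above drops the per-checkpoint `ztsnr` flag, leaving only the explicit "Zero Terminal SNR" option to trigger `rescale_zero_terminal_snr_abar`. That rescaling (from "Common Diffusion Noise Schedules and Sample Steps are Flawed", Lin et al. 2023) shifts and scales sqrt(alpha_bar) so the final timestep has exactly zero terminal SNR while the first value is preserved. A list-based sketch of the standard formula, assuming the same semantics as the webui function of that name (which operates on tensors):

```python
import math

def rescale_zero_terminal_snr_abar(alphas_cumprod):
    # work on sqrt(alpha_bar): shift so the last value becomes 0,
    # then rescale so the first value is unchanged
    s = [math.sqrt(a) for a in alphas_cumprod]
    s0, sT = s[0], s[-1]
    s = [(v - sT) * s0 / (s0 - sT) for v in s]
    return [v * v for v in s]

abar = rescale_zero_terminal_snr_abar([0.999, 0.5, 0.01])
assert abs(abar[0] - 0.999) < 1e-9  # first step preserved
assert abar[-1] == 0.0              # terminal SNR is now exactly zero
```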
-4
@@ -14,7 +14,6 @@ config_sd2 = os.path.join(sd_repo_configs_path, "v2-inference.yaml")
 config_sd2v = os.path.join(sd_repo_configs_path, "v2-inference-v.yaml")
 config_sd2_inpainting = os.path.join(sd_repo_configs_path, "v2-inpainting-inference.yaml")
 config_sdxl = os.path.join(sd_xl_repo_configs_path, "sd_xl_base.yaml")
-config_sdxlv = os.path.join(sd_configs_path, "sd_xl_v.yaml")
 config_sdxl_refiner = os.path.join(sd_xl_repo_configs_path, "sd_xl_refiner.yaml")
 config_sdxl_inpainting = os.path.join(sd_configs_path, "sd_xl_inpaint.yaml")
 config_depth_model = os.path.join(sd_repo_configs_path, "v2-midas-inference.yaml")
@@ -82,9 +81,6 @@ def guess_model_config_from_state_dict(sd, filename):
 if diffusion_model_input.shape[1] == 9:
 return config_sdxl_inpainting
 else:
-if ('v_pred' in sd):
-del sd['v_pred']
-return config_sdxlv
 return config_sdxl
 if sd.get('conditioner.embedders.0.model.ln_final.weight', None) is not None:
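The removed `config_sdxlv` branch above sits inside `guess_model_config_from_state_dict`, which picks a config purely from telltale keys and shapes in the checkpoint's state dict; for example, a UNet input convolution with 9 input channels indicates an inpainting model. A deliberately simplified toy sketch of that heuristic, using shape tuples in place of tensors (function and return values are illustrative):

```python
def guess_config(state_dict):
    # inspect the UNet input conv's weight shape (out_ch, in_ch, kh, kw)
    # to classify the checkpoint; 9 = 4 latent + 4 masked-image + 1 mask channels
    shape = state_dict.get("model.diffusion_model.input_blocks.0.0.weight")
    if shape is None:
        return "unknown"
    if shape[1] == 9:
        return "inpainting"
    return "base"

assert guess_config({"model.diffusion_model.input_blocks.0.0.weight": (320, 9, 3, 3)}) == "inpainting"
assert guess_config({"model.diffusion_model.input_blocks.0.0.weight": (320, 4, 3, 3)}) == "base"
```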
+3 -5
@@ -16,12 +16,10 @@ def dat_models_names():
 return [x.name for x in modules.dat_model.get_dat_models(None)]
-def postprocessing_scripts(filter_out_extra_only=False, filter_out_main_ui_only=False):
+def postprocessing_scripts():
 import modules.scripts
-return list(filter(
-lambda s: (not filter_out_extra_only or not s.extra_only) and (not filter_out_main_ui_only or not s.main_ui_only),
-modules.scripts.scripts_postproc.scripts,
-))
+return modules.scripts.scripts_postproc.scripts
 def sd_vae_items():
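The helper being reverted above filtered scripts with optional keyword flags. In isolation the pattern looks like this (field and function names are illustrative, using dicts instead of script objects):

```python
def filter_scripts(scripts, *, skip_extra_only=False, skip_main_ui_only=False):
    # drop items whose flag is set when the corresponding skip_* option is on
    return [
        s for s in scripts
        if not (skip_extra_only and s.get("extra_only"))
        and not (skip_main_ui_only and s.get("main_ui_only"))
    ]

scripts = [{"name": "a"}, {"name": "b", "extra_only": True}]
assert filter_scripts(scripts, skip_extra_only=True) == [{"name": "a"}]
assert filter_scripts(scripts) == scripts  # no flags set: nothing is filtered
```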
+7 -10
@@ -33,12 +33,12 @@ categories.register_category("training", "Training")
 options_templates.update(options_section(('saving-images', "Saving images/grids", "saving"), {
 "samples_save": OptionInfo(True, "Always save all generated images"),
-"samples_format": OptionInfo('png', 'File format for images', ui_components.DropdownEditable, {"choices": ("png", "jpg", "jpeg", "webp", "avif")}).info("manual input of <a href='https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html' target='_blank'>other formats</a> is possible, but compatibility is not guaranteed"),
+"samples_format": OptionInfo('png', 'File format for images'),
 "samples_filename_pattern": OptionInfo("", "Images filename pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
 "save_images_add_number": OptionInfo(True, "Add number to filename when saving", component_args=hide_dirs),
 "save_images_replace_action": OptionInfo("Replace", "Saving the image to an existing file", gr.Radio, {"choices": ["Replace", "Add number suffix"], **hide_dirs}),
 "grid_save": OptionInfo(True, "Always save all generated image grids"),
-"grid_format": OptionInfo('png', 'File format for grids', ui_components.DropdownEditable, {"choices": ("png", "jpg", "jpeg", "webp", "avif")}).info("manual input of <a href='https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html' target='_blank'>other formats</a> is possible, but compatibility is not guaranteed"),
+"grid_format": OptionInfo('png', 'File format for grids'),
 "grid_extended_filename": OptionInfo(False, "Add extended info (seed, prompt) to filename when saving grid"),
 "grid_only_if_multiple": OptionInfo(True, "Do not save grids consisting of one picture"),
 "grid_prevent_empty_spots": OptionInfo(False, "Prevent empty spots in grid (when set to autodetect)"),
@@ -128,7 +128,6 @@ options_templates.update(options_section(('system', "System", "system"), {
 "disable_mmap_load_safetensors": OptionInfo(False, "Disable memmapping for loading .safetensors files.").info("fixes very slow loading speed in some cases"),
 "hide_ldm_prints": OptionInfo(True, "Prevent Stability-AI's ldm/sgm modules from printing noise to console."),
 "dump_stacks_on_signal": OptionInfo(False, "Print stack traces before exiting the program with ctrl+c."),
-"concurrent_git_fetch_limit": OptionInfo(16, "Number of simultaneous extension update checks ", gr.Slider, {"step": 1, "minimum": 1, "maximum": 100}).info("reduce extension update check time"),
 }))
 options_templates.update(options_section(('profiler', "Profiler", "system"), {
@@ -220,7 +219,6 @@ options_templates.update(options_section(('img2img', "img2img", "sd"), {
 "img2img_color_correction": OptionInfo(False, "Apply color correction to img2img results to match original colors."),
 "img2img_fix_steps": OptionInfo(False, "With img2img, do exactly the amount of steps the slider specifies.").info("normally you'd do less with less denoising"),
 "img2img_background_color": OptionInfo("#ffffff", "With img2img, fill transparent parts of the input image with this color.", ui_components.FormColorPicker, {}),
-"img2img_editor_height": OptionInfo(720, "Height of the image editor", gr.Slider, {"minimum": 80, "maximum": 1600, "step": 1}).info("in pixels").needs_reload_ui(),
 "img2img_sketch_default_brush_color": OptionInfo("#ffffff", "Sketch initial brush color", ui_components.FormColorPicker, {}).info("default brush color of img2img sketch").needs_reload_ui(),
 "img2img_inpaint_mask_brush_color": OptionInfo("#ffffff", "Inpaint mask brush color", ui_components.FormColorPicker, {}).info("brush color of inpaint mask").needs_reload_ui(),
 "img2img_inpaint_sketch_default_brush_color": OptionInfo("#ffffff", "Inpaint sketch initial brush color", ui_components.FormColorPicker, {}).info("default brush color of img2img inpaint sketch").needs_reload_ui(),
@@ -232,7 +230,7 @@ options_templates.update(options_section(('optimizations', "Optimizations", "sd"), {
 options_templates.update(options_section(('optimizations', "Optimizations", "sd"), {
 "cross_attention_optimization": OptionInfo("Automatic", "Cross attention optimization", gr.Dropdown, lambda: {"choices": shared_items.cross_attention_optimizations()}),
 "s_min_uncond": OptionInfo(0.0, "Negative Guidance minimum sigma", gr.Slider, {"minimum": 0.0, "maximum": 15.0, "step": 0.01}, infotext='NGMS').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9177").info("skip negative prompt for some steps when the image is almost ready; 0=disable, higher=faster"),
 "s_min_uncond_all": OptionInfo(False, "Negative Guidance minimum sigma all steps", infotext='NGMS all steps').info("By default, NGMS above skips every other step; this makes it skip all steps"),
 "token_merging_ratio": OptionInfo(0.0, "Token merging ratio", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}, infotext='Token merging ratio').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9256").info("0=disable, higher=faster"),
 "token_merging_ratio_img2img": OptionInfo(0.0, "Token merging ratio for img2img", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}).info("only applies if non-zero and overrides above"),
@@ -292,7 +290,6 @@ options_templates.update(options_section(('extra_networks', "Extra Networks", "s
 "textual_inversion_print_at_load": OptionInfo(False, "Print a list of Textual Inversion embeddings when loading model"),
 "textual_inversion_add_hashes_to_infotext": OptionInfo(True, "Add Textual Inversion hashes to infotext"),
 "sd_hypernetwork": OptionInfo("None", "Add hypernetwork to prompt", gr.Dropdown, lambda: {"choices": ["None", *shared.hypernetworks]}, refresh=shared_items.reload_hypernetworks),
-"textual_inversion_image_embedding_data_cache": OptionInfo(False, 'Cache the data of image embeddings').info('potentially increase TI load time at the cost some disk space'),
 }))
 options_templates.update(options_section(('ui_prompt_editing', "Prompt editing", "ui"), {
@@ -406,15 +403,15 @@ options_templates.update(options_section(('sampler-params', "Sampler parameters"
 'uni_pc_order': OptionInfo(3, "UniPC order", gr.Slider, {"minimum": 1, "maximum": 50, "step": 1}, infotext='UniPC order').info("must be < sampling steps"),
 'uni_pc_lower_order_final': OptionInfo(True, "UniPC lower order final", infotext='UniPC lower order final'),
 'sd_noise_schedule': OptionInfo("Default", "Noise schedule for sampling", gr.Radio, {"choices": ["Default", "Zero Terminal SNR"]}, infotext="Noise Schedule").info("for use with zero terminal SNR trained models"),
-'skip_early_cond': OptionInfo(0.0, "Ignore negative prompt during early sampling", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext="Skip Early CFG").info("disables CFG on a proportion of steps at the beginning of generation; 0=skip none; 1=skip all; can both improve sample diversity/quality and speed up sampling; XYZ plot: Skip Early CFG"),
+'skip_early_cond': OptionInfo(0.0, "Ignore negative prompt during early sampling", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext="Skip Early CFG").info("disables CFG on a proportion of steps at the beginning of generation; 0=skip none; 1=skip all; can both improve sample diversity/quality and speed up sampling"),
 'beta_dist_alpha': OptionInfo(0.6, "Beta scheduler - alpha", gr.Slider, {"minimum": 0.01, "maximum": 1.0, "step": 0.01}, infotext='Beta scheduler alpha').info('Default = 0.6; the alpha parameter of the beta distribution used in Beta sampling'),
 'beta_dist_beta': OptionInfo(0.6, "Beta scheduler - beta", gr.Slider, {"minimum": 0.01, "maximum": 1.0, "step": 0.01}, infotext='Beta scheduler beta').info('Default = 0.6; the beta parameter of the beta distribution used in Beta sampling'),
 }))
 options_templates.update(options_section(('postprocessing', "Postprocessing", "postprocessing"), {
-'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_extra_only=True)]}),
+'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
-'postprocessing_disable_in_extras': OptionInfo([], "Disable postprocessing operations in extras tab", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_main_ui_only=True)]}),
+'postprocessing_disable_in_extras': OptionInfo([], "Disable postprocessing operations in extras tab", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
-'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_main_ui_only=True)]}),
+'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
 'upscaling_max_images_in_cache': OptionInfo(5, "Maximum number of images in upscaling cache", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
 'postprocessing_existing_caption_action': OptionInfo("Ignore", "Action for existing captions", gr.Radio, {"choices": ["Ignore", "Keep", "Prepend", "Append"]}).info("when generating captions using postprocessing; Ignore = use generated; Keep = use original; Prepend/Append = combine both"),
 }))
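Several options above pass `lambda: {"choices": ...}` instead of a plain dict so the dropdown choices are re-evaluated whenever the component refreshes, picking up scripts or models registered after the options were defined. A minimal demonstration of why the lambda matters (names are illustrative):

```python
available = ["GFPGAN"]

static_args = {"choices": list(available)}           # snapshot taken at definition time
dynamic_args = lambda: {"choices": list(available)}  # re-evaluated on every call

available.append("CodeFormer")                       # something registers later

assert static_args["choices"] == ["GFPGAN"]                       # stale
assert dynamic_args()["choices"] == ["GFPGAN", "CodeFormer"]      # fresh
```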
+13 -31
@@ -12,7 +12,7 @@ import safetensors.torch
 import numpy as np
 from PIL import Image, PngImagePlugin
-from modules import shared, devices, sd_hijack, sd_models, images, sd_samplers, sd_hijack_checkpoint, errors, hashes, cache
+from modules import shared, devices, sd_hijack, sd_models, images, sd_samplers, sd_hijack_checkpoint, errors, hashes
 import modules.textual_inversion.dataset
 from modules.textual_inversion.learn_schedule import LearnRateScheduler
@@ -116,7 +116,6 @@ class EmbeddingDatabase:
 self.expected_shape = -1
 self.embedding_dirs = {}
 self.previously_displayed_embeddings = ()
-self.image_embedding_cache = cache.cache('image-embedding')
 def add_embedding_dir(self, path):
 self.embedding_dirs[path] = DirWithTextualInversionEmbeddings(path)
@@ -155,31 +154,6 @@ class EmbeddingDatabase:
 vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
 return vec.shape[1]
-def read_embedding_from_image(self, path, name):
-try:
-ondisk_mtime = os.path.getmtime(path)
-if (cache_embedding := self.image_embedding_cache.get(path)) and ondisk_mtime == cache_embedding.get('mtime', 0):
-# cache will only be used if the file has not been modified time matches
-return cache_embedding.get('data', None), cache_embedding.get('name', None)
-embed_image = Image.open(path)
-if hasattr(embed_image, 'text') and 'sd-ti-embedding' in embed_image.text:
-data = embedding_from_b64(embed_image.text['sd-ti-embedding'])
-name = data.get('name', name)
-elif data := extract_image_data_embed(embed_image):
-name = data.get('name', name)
-if data is None or shared.opts.textual_inversion_image_embedding_data_cache:
-# data of image embeddings only will be cached if the option textual_inversion_image_embedding_data_cache is enabled
-# results of images that are not embeddings will allways be cached to reduce unnecessary future disk reads
-self.image_embedding_cache[path] = {'data': data, 'name': None if data is None else name, 'mtime': ondisk_mtime}
-return data, name
-except Exception:
-errors.report(f"Error loading embedding {path}", exc_info=True)
-return None, None
 def load_from_file(self, path, filename):
 name, ext = os.path.splitext(filename)
 ext = ext.upper()
@@ -189,10 +163,17 @@
 if second_ext.upper() == '.PREVIEW':
 return
-data, name = self.read_embedding_from_image(path, name)
-if data is None:
-return
+embed_image = Image.open(path)
+if hasattr(embed_image, 'text') and 'sd-ti-embedding' in embed_image.text:
+data = embedding_from_b64(embed_image.text['sd-ti-embedding'])
+name = data.get('name', name)
+else:
+data = extract_image_data_embed(embed_image)
+if data:
+name = data.get('name', name)
+else:
+# if data is None, means this is not an embedding, just a preview image
+return
 elif ext in ['.BIN', '.PT']:
 data = torch.load(path, map_location="cpu")
 elif ext in ['.SAFETENSORS']:
@@ -210,6 +191,7 @@
 else:
 print(f"Unable to load Textual inversion embedding due to data issue: '{name}'.")
 def load_from_dir(self, embdir):
 if not os.path.isdir(embdir.path):
 return
+21 -46
@@ -8,11 +8,10 @@ from contextlib import ExitStack
 import gradio as gr
 import gradio.utils
-import numpy as np
 from PIL import Image, PngImagePlugin  # noqa: F401
 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, wrap_gradio_call, wrap_gradio_call_no_job  # noqa: F401
-from modules import gradio_extensons, sd_schedulers  # noqa: F401
+from modules import gradio_extensions, sd_schedulers  # noqa: F401
 from modules import sd_hijack, sd_models, script_callbacks, ui_extensions, deepbooru, extra_networks, ui_common, ui_postprocessing, progress, ui_loadsave, shared_items, ui_settings, timer, sysinfo, ui_checkpoint_merger, scripts, sd_samplers, processing, ui_extra_networks, ui_toprow, launch_utils
 from modules.ui_components import FormRow, FormGroup, ToolButton, FormHTML, InputAccordion, ResizeHandleRow
 from modules.paths import script_path
@@ -33,7 +32,7 @@ from modules.infotext_utils import image_from_url_text, PasteField
 create_setting_component = ui_settings.create_setting_component

 warnings.filterwarnings("default" if opts.show_warnings else "ignore", category=UserWarning)
-warnings.filterwarnings("default" if opts.show_gradio_deprecation_warnings else "ignore", category=gr.deprecation.GradioDeprecationWarning)
+warnings.filterwarnings("default" if opts.show_gradio_deprecation_warnings else "ignore", category=gradio_extensions.GradioDeprecationWarning)

 # this is a fix for Windows users. Without it, javascript files will be served with text/html content-type and the browser will not show any UI
 mimetypes.init()
@@ -44,9 +43,6 @@ mimetypes.add_type('application/javascript', '.mjs')
 mimetypes.add_type('image/webp', '.webp')
 mimetypes.add_type('image/avif', '.avif')

-# override potentially incorrect mimetypes
-mimetypes.add_type('text/css', '.css')
-
 if not cmd_opts.share and not cmd_opts.listen:
     # fix gradio phoning home
     gradio.utils.version_check = lambda: None
@@ -104,8 +100,8 @@ def calc_resolution_hires(enable, width, height, hr_scale, hr_resize_x, hr_resiz
 def resize_from_to_html(width, height, scale_by):
-    target_width = int(width * scale_by)
-    target_height = int(height * scale_by)
+    target_width = int(float(width) * scale_by)
+    target_height = int(float(height) * scale_by)

     if not target_width or not target_height:
         return "no image selected"
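Wrapping the dimensions in `float()` before `int()` guards against values arriving as floats or numeric strings from the newer components, where a bare `int(width * scale_by)` would raise. A quick standalone illustration (hypothetical helper, not the webui function):

```python
def scale_dim(value, scale_by):
    # value may arrive as an int, a float, or a numeric string depending on the component;
    # float() normalizes all three before the final truncation to int
    return int(float(value) * scale_by)
```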
@@ -114,10 +110,11 @@ def resize_from_to_html(width, height, scale_by):
 def process_interrogate(interrogation_function, mode, ii_input_dir, ii_output_dir, *ii_singles):
-    if mode in {0, 1, 3, 4}:
-        return [interrogation_function(ii_singles[mode]), None]
+    mode = int(mode)
+    if mode in (0, 1, 3, 4):
+        return [interrogation_function(ii_singles[mode]["composite"]), None]
     elif mode == 2:
-        return [interrogation_function(ii_singles[mode]["image"]), None]
+        return [interrogation_function(ii_singles[mode]["composite"]), None]
     elif mode == 5:
         assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
         images = shared.listfiles(ii_input_dir)
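The switch to `ii_singles[mode]["composite"]` reflects Gradio 4's `gr.ImageEditor`, whose value is a dict with `background`, `layers`, and `composite` entries rather than a bare image. A hedged sketch of unwrapping such a value (key names per Gradio 4's EditorValue; treat them as an assumption if your version differs):

```python
def editor_composite(value):
    """Return the flattened image from a Gradio 4 ImageEditor value, or the value
    itself when it is already a plain image (e.g. from a gr.Image component)."""
    if isinstance(value, dict) and "composite" in value:
        return value["composite"]
    return value
```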
@@ -270,7 +267,8 @@ def create_ui():
     with gr.Blocks(analytics_enabled=False) as txt2img_interface:
         toprow = ui_toprow.Toprow(is_img2img=False, is_compact=shared.opts.compact_prompt_box)

-        dummy_component = gr.Label(visible=False)
+        dummy_component = gr.Textbox(visible=False)
+        dummy_component_number = gr.Number(visible=False)

         extra_tabs = gr.Tabs(elem_id="txt2img_extra_tabs", elem_classes=["extra-networks"])
         extra_tabs.__enter__()
@@ -313,7 +311,7 @@ def create_ui():
                     with gr.Row(elem_id="txt2img_accordions", elem_classes="accordions"):
                         with InputAccordion(False, label="Hires. fix", elem_id="txt2img_hr") as enable_hr:
                             with enable_hr.extra():
-                                hr_final_resolution = FormHTML(value="", elem_id="txtimg_hr_finalres", label="Upscaled resolution", interactive=False, min_width=0)
+                                hr_final_resolution = FormHTML(value="", elem_id="txtimg_hr_finalres", label="Upscaled resolution")

                             with FormRow(elem_id="txt2img_hires_fix_row1", variant="compact"):
                                 hr_upscaler = gr.Dropdown(label="Upscaler", elem_id="txt2img_hr_upscaler", choices=[*shared.latent_upscale_modes, *[x.name for x in shared.sd_upscalers]], value=shared.latent_upscale_default_mode)
@@ -427,7 +425,7 @@ def create_ui():
         output_panel.button_upscale.click(
             fn=wrap_gradio_gpu_call(modules.txt2img.txt2img_upscale, extra_outputs=[None, '', '']),
             _js="submit_txt2img_upscale",
-            inputs=txt2img_inputs[0:1] + [output_panel.gallery, dummy_component, output_panel.generation_info] + txt2img_inputs[1:],
+            inputs=txt2img_inputs[0:1] + [output_panel.gallery, dummy_component_number, output_panel.generation_info] + txt2img_inputs[1:],
             outputs=txt2img_outputs,
             show_progress=False,
         )
@@ -541,31 +539,21 @@ def create_ui():
         img2img_selected_tab = gr.Number(value=0, visible=False)

         with gr.TabItem('img2img', id='img2img', elem_id="img2img_img2img_tab") as tab_img2img:
-            init_img = gr.Image(label="Image for img2img", elem_id="img2img_image", show_label=False, source="upload", interactive=True, type="pil", tool="editor", image_mode="RGBA", height=opts.img2img_editor_height)
+            init_img = gr.ImageEditor(label="Image for img2img", elem_id="img2img_image", show_label=False, interactive=True, type="pil", image_mode="RGBA")
             add_copy_image_controls('img2img', init_img)

         with gr.TabItem('Sketch', id='img2img_sketch', elem_id="img2img_img2img_sketch_tab") as tab_sketch:
-            sketch = gr.Image(label="Image for img2img", elem_id="img2img_sketch", show_label=False, source="upload", interactive=True, type="pil", tool="color-sketch", image_mode="RGB", height=opts.img2img_editor_height, brush_color=opts.img2img_sketch_default_brush_color)
+            sketch = gr.ImageEditor(label="Image for img2img", elem_id="img2img_sketch", show_label=False, interactive=True, type="pil", image_mode="RGBA", brush=gr.Brush(default_color=opts.img2img_sketch_default_brush_color))
             add_copy_image_controls('sketch', sketch)

         with gr.TabItem('Inpaint', id='inpaint', elem_id="img2img_inpaint_tab") as tab_inpaint:
-            init_img_with_mask = gr.Image(label="Image for inpainting with mask", show_label=False, elem_id="img2maskimg", source="upload", interactive=True, type="pil", tool="sketch", image_mode="RGBA", height=opts.img2img_editor_height, brush_color=opts.img2img_inpaint_mask_brush_color)
+            init_img_with_mask = gr.ImageEditor(label="Image for inpainting with mask", show_label=False, elem_id="img2maskimg", brush=gr.Brush(colors=[opts.img2img_inpaint_mask_brush_color], color_mode="fixed"), interactive=True, type="pil", image_mode="RGBA", layers=False)
             add_copy_image_controls('inpaint', init_img_with_mask)

         with gr.TabItem('Inpaint sketch', id='inpaint_sketch', elem_id="img2img_inpaint_sketch_tab") as tab_inpaint_color:
-            inpaint_color_sketch = gr.Image(label="Color sketch inpainting", show_label=False, elem_id="inpaint_sketch", source="upload", interactive=True, type="pil", tool="color-sketch", image_mode="RGB", height=opts.img2img_editor_height, brush_color=opts.img2img_inpaint_sketch_default_brush_color)
-            inpaint_color_sketch_orig = gr.State(None)
+            inpaint_color_sketch = gr.ImageEditor(label="Color sketch inpainting", show_label=False, elem_id="inpaint_sketch", brush=gr.Brush(default_color=opts.img2img_inpaint_sketch_default_brush_color), interactive=True, type="pil", image_mode="RGBA", layers=False)
             add_copy_image_controls('inpaint_sketch', inpaint_color_sketch)

-            def update_orig(image, state):
-                if image is not None:
-                    same_size = state is not None and state.size == image.size
-                    has_exact_match = np.any(np.all(np.array(image) == np.array(state), axis=-1))
-                    edited = same_size and has_exact_match
-                    return image if not edited or state is None else state
-
-            inpaint_color_sketch.change(update_orig, [inpaint_color_sketch, inpaint_color_sketch_orig], inpaint_color_sketch_orig)
-
         with gr.TabItem('Inpaint upload', id='inpaint_upload', elem_id="img2img_inpaint_upload_tab") as tab_inpaint_upload:
             init_img_inpaint = gr.Image(label="Image for img2img", show_label=False, source="upload", interactive=True, type="pil", elem_id="img_inpaint_base")
             init_mask_inpaint = gr.Image(label="Mask", source="upload", interactive=True, type="pil", image_mode="RGBA", elem_id="img_inpaint_mask")
@@ -598,20 +586,14 @@ def create_ui():
         for i, tab in enumerate(img2img_tabs):
             tab.select(fn=lambda tabnum=i: tabnum, inputs=[], outputs=[img2img_selected_tab])

-        def copy_image(img):
-            if isinstance(img, dict) and 'image' in img:
-                return img['image']
-            return img
-
         for button, name, elem in copy_image_buttons:
             button.click(
-                fn=copy_image,
+                fn=lambda img: img,
                 inputs=[elem],
                 outputs=[copy_image_destinations[name]],
             )
             button.click(
-                fn=lambda: None,
+                fn=None,
                 _js=f"switch_to_{name.replace(' ', '_')}",
                 inputs=[],
                 outputs=[],
@@ -714,12 +696,6 @@ def create_ui():
                 if category not in {"accordions"}:
                     scripts.scripts_img2img.setup_ui_for_section(category)

-            # the code below is meant to update the resolution label after the image in the image selection UI has changed.
-            # as it is now the event keeps firing continuously for inpaint edits, which ruins the page with constant requests.
-            # I assume this must be a gradio bug and for now we'll just do it for non-inpaint inputs.
-            for component in [init_img, sketch]:
-                component.change(fn=lambda: None, _js="updateImg2imgResizeToTextAfterChangingImage", inputs=[], outputs=[], show_progress=False)
-
             def select_img2img_tab(tab):
                 return gr.update(visible=tab in [2, 3, 4]), gr.update(visible=tab == 3),
@@ -737,7 +713,7 @@ def create_ui():
             _js="submit_img2img",
             inputs=[
                 dummy_component,
-                dummy_component,
+                img2img_selected_tab,
                 toprow.prompt,
                 toprow.negative_prompt,
                 toprow.ui_styles.dropdown,
@@ -745,7 +721,6 @@ def create_ui():
                 sketch,
                 init_img_with_mask,
                 inpaint_color_sketch,
-                inpaint_color_sketch_orig,
                 init_img_inpaint,
                 init_mask_inpaint,
                 mask_blur,
@@ -804,9 +779,9 @@ def create_ui():
         res_switch_btn.click(fn=None, _js="function(){switchWidthHeight('img2img')}", inputs=None, outputs=None, show_progress=False)

         detect_image_size_btn.click(
-            fn=lambda w, h, _: (w or gr.update(), h or gr.update()),
+            fn=lambda w, h: (w or gr.update(), h or gr.update()),
             _js="currentImg2imgSourceResolution",
-            inputs=[dummy_component, dummy_component, dummy_component],
+            inputs=[dummy_component, dummy_component],
             outputs=[width, height],
             show_progress=False,
         )
+2 -5
@@ -8,7 +8,6 @@ from contextlib import nullcontext
 import gradio as gr

 from modules import call_queue, shared, ui_tempdir, util
-from modules.infotext_utils import image_from_url_text
 import modules.images
 from modules.ui_components import ToolButton
 import modules.infotext_utils as parameters_copypaste
@@ -115,10 +114,8 @@ def save_files(js_data, images, do_make_zip, index):
             writer.writerow(fields)

         for image_index, filedata in enumerate(images, start_index):
-            image = image_from_url_text(filedata)
+            image = filedata[0]
             is_grid = image_index < p.index_of_first_image
             p.batch_index = image_index-1

             parameters = parameters_copypaste.parse_generation_parameters(data["infotexts"][image_index], [])
@@ -184,7 +181,7 @@ def create_output_panel(tabname, outdir, toprow=None):
     with gr.Column(variant='panel', elem_id=f"{tabname}_results_panel"):
         with gr.Group(elem_id=f"{tabname}_gallery_container"):
-            res.gallery = gr.Gallery(label='Output', show_label=False, elem_id=f"{tabname}_gallery", columns=4, preview=True, height=shared.opts.gallery_height or None)
+            res.gallery = gr.Gallery(label='Output', show_label=False, elem_id=f"{tabname}_gallery", columns=4, preview=True, height=shared.opts.gallery_height or None, interactive=False, type="pil")

         with gr.Row(elem_id=f"image_buttons_{tabname}", elem_classes="image-buttons"):
             open_folder_button = ToolButton(folder_symbol, elem_id=f'{tabname}_open_folder', visible=not shared.cmd_opts.hide_ui_dir_config, tooltip="Open images output directory.")
+43 -35
@@ -1,7 +1,12 @@
+from functools import wraps
+
 import gradio as gr

+from modules import gradio_extensions  # noqa: F401
+

 class FormComponent:
+    webui_do_not_create_gradio_pyi_thank_you = True
+
     def get_expected_parent(self):
         return gr.components.Form
@@ -9,12 +14,13 @@ class FormComponent:
 gr.Dropdown.get_expected_parent = FormComponent.get_expected_parent


-class ToolButton(FormComponent, gr.Button):
+class ToolButton(gr.Button, FormComponent):
     """Small button with single emoji as text, fits inside gradio forms"""

-    def __init__(self, *args, **kwargs):
-        classes = kwargs.pop("elem_classes", [])
-        super().__init__(*args, elem_classes=["tool", *classes], **kwargs)
+    @wraps(gr.Button.__init__)
+    def __init__(self, value="", *args, elem_classes=None, **kwargs):
+        elem_classes = elem_classes or []
+        super().__init__(*args, elem_classes=["tool", *elem_classes], value=value, **kwargs)

     def get_block_name(self):
         return "button"
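Swapping the base order to `(gr.Button, FormComponent)` puts the concrete Gradio component ahead of the mixin in the method resolution order, so lookups hit the real component first while `get_expected_parent` still resolves from the mixin. The effect of base ordering in miniature (toy classes, not the actual Gradio ones):

```python
class Mixin:
    def get_expected_parent(self):
        return "Form"


class Widget:
    def __init__(self, value=""):
        self.value = value


class Tool(Widget, Mixin):  # concrete class first, mixin second
    pass


# Method resolution order: Tool -> Widget -> Mixin -> object
names = [c.__name__ for c in Tool.__mro__]
```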
@@ -22,7 +28,9 @@ class ToolButton(FormComponent, gr.Button):
 class ResizeHandleRow(gr.Row):
     """Same as gr.Row but fits inside gradio forms"""

+    webui_do_not_create_gradio_pyi_thank_you = True
+
+    @wraps(gr.Row.__init__)
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
@@ -32,92 +40,92 @@ class ResizeHandleRow(gr.Row):
         return "row"


-class FormRow(FormComponent, gr.Row):
+class FormRow(gr.Row, FormComponent):
     """Same as gr.Row but fits inside gradio forms"""

     def get_block_name(self):
         return "row"


-class FormColumn(FormComponent, gr.Column):
+class FormColumn(gr.Column, FormComponent):
     """Same as gr.Column but fits inside gradio forms"""

     def get_block_name(self):
         return "column"


-class FormGroup(FormComponent, gr.Group):
+class FormGroup(gr.Group, FormComponent):
     """Same as gr.Group but fits inside gradio forms"""

     def get_block_name(self):
         return "group"


-class FormHTML(FormComponent, gr.HTML):
+class FormHTML(gr.HTML, FormComponent):
     """Same as gr.HTML but fits inside gradio forms"""

     def get_block_name(self):
         return "html"


-class FormColorPicker(FormComponent, gr.ColorPicker):
+class FormColorPicker(gr.ColorPicker, FormComponent):
     """Same as gr.ColorPicker but fits inside gradio forms"""

     def get_block_name(self):
         return "colorpicker"


-class DropdownMulti(FormComponent, gr.Dropdown):
+class DropdownMulti(gr.Dropdown, FormComponent):
     """Same as gr.Dropdown but always multiselect"""

+    @wraps(gr.Dropdown.__init__)
     def __init__(self, **kwargs):
-        super().__init__(multiselect=True, **kwargs)
+        kwargs['multiselect'] = True
+        super().__init__(**kwargs)

     def get_block_name(self):
         return "dropdown"


-class DropdownEditable(FormComponent, gr.Dropdown):
+class DropdownEditable(gr.Dropdown, FormComponent):
     """Same as gr.Dropdown but allows editing value"""

+    @wraps(gr.Dropdown.__init__)
     def __init__(self, **kwargs):
-        super().__init__(allow_custom_value=True, **kwargs)
+        kwargs['allow_custom_value'] = True
+        super().__init__(**kwargs)

     def get_block_name(self):
         return "dropdown"
-class InputAccordion(gr.Checkbox):
+class InputAccordionImpl(gr.Checkbox):
     """A gr.Accordion that can be used as an input - returns True if open, False if closed.

     Actually just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
     """

-    accordion_id_set = set()
+    webui_do_not_create_gradio_pyi_thank_you = True
     global_index = 0

-    def __init__(self, value, **kwargs):
+    @wraps(gr.Checkbox.__init__)
+    def __init__(self, value=None, setup=False, **kwargs):
+        if not setup:
+            super().__init__(value=value, **kwargs)
+            return
+
         self.accordion_id = kwargs.get('elem_id')
         if self.accordion_id is None:
-            self.accordion_id = f"input-accordion-{InputAccordion.global_index}"
-            InputAccordion.global_index += 1
+            self.accordion_id = f"input-accordion-{InputAccordionImpl.global_index}"
+            InputAccordionImpl.global_index += 1

-        if not InputAccordion.accordion_id_set:
-            from modules import script_callbacks
-            script_callbacks.on_script_unloaded(InputAccordion.reset)
-
-        if self.accordion_id in InputAccordion.accordion_id_set:
-            count = 1
-            while (unique_id := f'{self.accordion_id}-{count}') in InputAccordion.accordion_id_set:
-                count += 1
-            self.accordion_id = unique_id
-
-        InputAccordion.accordion_id_set.add(self.accordion_id)
-
         kwargs_checkbox = {
             **kwargs,
             "elem_id": f"{self.accordion_id}-checkbox",
             "visible": False,
         }
-        super().__init__(value, **kwargs_checkbox)
+        super().__init__(value=value, **kwargs_checkbox)

         self.change(fn=None, _js='function(checked){ inputAccordionChecked("' + self.accordion_id + '", checked); }', inputs=[self])
@@ -128,6 +136,7 @@ class InputAccordion(gr.Checkbox):
             "elem_classes": ['input-accordion'],
             "open": value,
         }
         self.accordion = gr.Accordion(**kwargs_accordion)

     def extra(self):
@@ -156,7 +165,6 @@ class InputAccordion(gr.Checkbox):
     def get_block_name(self):
         return "checkbox"

-    @classmethod
-    def reset(cls):
-        cls.global_index = 0
-        cls.accordion_id_set.clear()
+
+def InputAccordion(value=None, **kwargs):
+    return InputAccordionImpl(value=value, setup=True, **kwargs)
+5 -12
@@ -1,6 +1,5 @@
 import json
 import os
-from concurrent.futures import ThreadPoolExecutor
 import threading
 import time
 from datetime import datetime, timezone
@@ -107,24 +106,18 @@ def check_updates(id_task, disable_list):
     exts = [ext for ext in extensions.extensions if ext.remote is not None and ext.name not in disabled]
     shared.state.job_count = len(exts)

-    lock = threading.Lock()
-
-    def _check_update(ext):
+    for ext in exts:
+        shared.state.textinfo = ext.name
+
         try:
             ext.check_updates()
         except FileNotFoundError as e:
             if 'FETCH_HEAD' not in str(e):
                 raise
         except Exception:
-            with lock:
-                errors.report(f"Error checking updates for {ext.name}", exc_info=True)
-
-        with lock:
-            shared.state.textinfo = ext.name
-            shared.state.nextjob()
-
-    with ThreadPoolExecutor(max_workers=max(1, int(shared.opts.concurrent_git_fetch_limit))) as executor:
-        for ext in exts:
-            executor.submit(_check_update, ext)
+            errors.report(f"Error checking updates for {ext.name}", exc_info=True)
+
+        shared.state.nextjob()

     return extension_table(), ""
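The dropped concurrent path followed a common shape: fan the per-extension work out over a `ThreadPoolExecutor` while serializing writes to shared state behind a lock. A self-contained sketch of that shape (plain lists stand in for `shared.state`; not the webui code):

```python
import threading
from concurrent.futures import ThreadPoolExecutor


def check_all(items, check, max_workers=4):
    """Run check(item) for every item concurrently; collect results under a lock."""
    lock = threading.Lock()
    done = []
    failures = []

    def worker(item):
        try:
            check(item)
        except Exception as e:
            with lock:
                failures.append((item, e))
        with lock:
            done.append(item)

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        for item in items:
            executor.submit(worker, item)
    # the with-block waits for all submitted tasks before returning
    return done, failures
```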
@@ -695,7 +688,7 @@ def create_ui():
     config_save_button.click(fn=save_config_state, inputs=[config_save_name], outputs=[config_states_list, config_states_info])

-    dummy_component = gr.Label(visible=False)
+    dummy_component = gr.State()
     config_restore_button.click(fn=restore_config_state, _js="config_state_confirm_restore", inputs=[dummy_component, config_states_list, config_restore_type], outputs=[config_states_info])

     config_states_list.change(
+6 -2
@@ -177,8 +177,10 @@ def add_pages_to_demo(app):
     app.add_api_route("/sd_extra_networks/get-single-card", get_single_card, methods=["GET"])


-def quote_js(s: str):
-    return json.dumps(s, ensure_ascii=False)
+def quote_js(s):
+    s = s.replace('\\', '\\\\')
+    s = s.replace('"', '\\"')
+    return f'"{s}"'


 class ExtraNetworksPage:
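Both variants aim at producing a JavaScript string literal safe to splice into generated HTML: `json.dumps` handles the full escape set, while the manual version covers only backslash and double quote (leaving, e.g., embedded newlines unescaped). Comparing the two in a standalone sketch:

```python
import json


def quote_js_manual(s):
    # escape backslash first, then double quote, then wrap in quotes
    s = s.replace('\\', '\\\\')
    s = s.replace('"', '\\"')
    return f'"{s}"'


def quote_js_json(s):
    # json.dumps escapes quotes, backslashes, and control characters
    return json.dumps(s, ensure_ascii=False)
```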
@@ -748,9 +750,11 @@ def create_ui(interface: gr.Blocks, unrelated_tabs, tabname):
             elem_id = f"{tabname}_{page.extra_networks_tabname}_cards_html"
             page_elem = gr.HTML(page.create_html(tabname, empty=True), elem_id=elem_id)
             ui.pages.append(page_elem)

             editor = page.create_user_metadata_editor(ui, tabname)
             editor.create_ui()
             ui.user_metadata_editors.append(editor)

             related_tabs.append(tab)

     ui.button_save_preview = gr.Button('Save preview', elem_id=f"{tabname}_save_preview", visible=False)
+1 -1
@@ -176,7 +176,7 @@ class UiLoadsave:
             if new_value == old_value:
                 continue

-            if old_value is None and (new_value == '' or new_value == []):
+            if old_value is None and new_value == '' or new_value == []:
                 continue

             yield path, old_value, new_value
+3 -3
@@ -5,14 +5,14 @@ from modules.ui_components import ResizeHandleRow
 def create_ui():
-    dummy_component = gr.Label(visible=False)
-    tab_index = gr.Number(value=0, visible=False)
+    dummy_component = gr.Textbox(visible=False)
+    tab_index = gr.State(0)

     with ResizeHandleRow(equal_height=False, variant='compact'):
         with gr.Column(variant='compact'):
             with gr.Tabs(elem_id="mode_extras"):
                 with gr.TabItem('Single Image', id="single_image", elem_id="extras_single_tab") as tab_single:
-                    extras_image = gr.Image(label="Source", source="upload", interactive=True, type="pil", elem_id="extras_image", image_mode="RGBA")
+                    extras_image = gr.ImageEditor(label="Source", interactive=True, type="pil", elem_id="extras_image", image_mode="RGBA")

                 with gr.TabItem('Batch Process', id="batch_process", elem_id="extras_batch_process_tab") as tab_batch:
                     image_batch = gr.Files(label="Batch Process", interactive=True, elem_id="extras_image_batch")
+113 -14
@@ -4,6 +4,7 @@ from collections import namedtuple
 from pathlib import Path

 import gradio.components
+import gradio as gr

 from PIL import PngImagePlugin
@@ -13,25 +14,35 @@ from modules import shared
 Savedfile = namedtuple("Savedfile", ["name"])


-def register_tmp_file(gradio, filename):
-    if hasattr(gradio, 'temp_file_sets'):  # gradio 3.15
-        gradio.temp_file_sets[0] = gradio.temp_file_sets[0] | {os.path.abspath(filename)}
+def register_tmp_file(gradio_app, filename):
+    if hasattr(gradio_app, 'temp_file_sets'):  # gradio 3.15
+        if hasattr(gr.utils, 'abspath'):  # gradio 4.19
+            filename = gr.utils.abspath(filename)
+        else:
+            filename = os.path.abspath(filename)

-    if hasattr(gradio, 'temp_dirs'):  # gradio 3.9
-        gradio.temp_dirs = gradio.temp_dirs | {os.path.abspath(os.path.dirname(filename))}
+        gradio_app.temp_file_sets[0] = gradio_app.temp_file_sets[0] | {filename}
+
+    if hasattr(gradio_app, 'temp_dirs'):  # gradio 3.9
+        gradio_app.temp_dirs = gradio_app.temp_dirs | {os.path.abspath(os.path.dirname(filename))}


-def check_tmp_file(gradio, filename):
-    if hasattr(gradio, 'temp_file_sets'):
-        return any(filename in fileset for fileset in gradio.temp_file_sets)
+def check_tmp_file(gradio_app, filename):
+    if hasattr(gradio_app, 'temp_file_sets'):
+        if hasattr(gr.utils, 'abspath'):  # gradio 4.19
+            filename = gr.utils.abspath(filename)
+        else:
+            filename = os.path.abspath(filename)

-    if hasattr(gradio, 'temp_dirs'):
-        return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio.temp_dirs)
+        return any(filename in fileset for fileset in gradio_app.temp_file_sets)
+
+    if hasattr(gradio_app, 'temp_dirs'):
+        return any(Path(temp_dir).resolve() in Path(filename).resolve().parents for temp_dir in gradio_app.temp_dirs)

     return False
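`check_tmp_file` falls back to testing whether the file lives under any registered temp directory by comparing resolved parent paths. That parent check, isolated into a small standalone helper (same `pathlib` idiom as above; the function name is illustrative):

```python
import os
from pathlib import Path


def in_any_dir(filename, dirs):
    """True if `filename` resolves to a path inside any of `dirs`."""
    resolved_parents = Path(filename).resolve().parents
    return any(Path(d).resolve() in resolved_parents for d in dirs)
```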
-def save_pil_to_file(self, pil_image, dir=None, format="png"):
+def save_pil_to_file(pil_image, cache_dir=None, format="png"):
     already_saved_as = getattr(pil_image, 'already_saved_as', None)
     if already_saved_as and os.path.isfile(already_saved_as):
         register_tmp_file(shared.demo, already_saved_as)
@@ -39,9 +50,10 @@ def save_pil_to_file(self, pil_image, dir=None, format="png"):
         register_tmp_file(shared.demo, filename_with_mtime)
         return filename_with_mtime

-    if shared.opts.temp_dir != "":
+    if shared.opts.temp_dir:
         dir = shared.opts.temp_dir
     else:
+        dir = cache_dir

     os.makedirs(dir, exist_ok=True)

     use_metadata = False
@@ -56,9 +68,96 @@ def save_pil_to_file(self, pil_image, dir=None, format="png"):
return file_obj.name return file_obj.name
async def async_move_files_to_cache(data, block, postprocess=False, check_in_upload_folder=False, keep_in_cache=False):
    """Move any files in `data` to cache and (optionally) add URL prefixes (/file=...) needed to access the cached file.
    Also handles the case where the file is on an external Gradio app (/proxy=...).

    Runs after .postprocess() and before .preprocess().

    Copied from gradio's processing_utils.py

    Args:
        data: The input or output data for a component. Can be a dictionary or a dataclass
        block: The component whose data is being processed
        postprocess: Whether it's running from postprocessing
        check_in_upload_folder: If True, instead of moving the file to cache, checks if the file is already in cache (exception if not).
        keep_in_cache: If True, the file will not be deleted from cache when the server is shut down.
    """
    from gradio import FileData
    from gradio.data_classes import GradioRootModel
    from gradio.data_classes import GradioModel
    from gradio_client import utils as client_utils
    from gradio.utils import get_upload_folder, is_in_or_equal, is_static_file

    async def _move_to_cache(d: dict):
        payload = FileData(**d)

        # EDITED
        payload.path = payload.path.rsplit('?', 1)[0]

        # If the gradio app developer is returning a URL from
        # postprocess, it means the component can display a URL
        # without it being served from the gradio server
        # This makes it so that the URL is not downloaded and speeds up event processing
        if payload.url and postprocess and client_utils.is_http_url_like(payload.url):
            payload.path = payload.url
        elif is_static_file(payload):
            pass
        elif not block.proxy_url:
            # EDITED
            if check_tmp_file(shared.demo, payload.path):
                temp_file_path = payload.path
            else:
                # If the file is on a remote server, do not move it to cache.
                if check_in_upload_folder and not client_utils.is_http_url_like(
                    payload.path
                ):
                    path = os.path.abspath(payload.path)
                    if not is_in_or_equal(path, get_upload_folder()):
                        raise ValueError(
                            f"File {path} is not in the upload folder and cannot be accessed."
                        )
                if not payload.is_stream:
                    temp_file_path = await block.async_move_resource_to_block_cache(
                        payload.path
                    )
                    if temp_file_path is None:
                        raise ValueError("Did not determine a file path for the resource.")
                    payload.path = temp_file_path
                    if keep_in_cache:
                        block.keep_in_cache.add(payload.path)

        url_prefix = "/stream/" if payload.is_stream else "/file="
        if block.proxy_url:
            proxy_url = block.proxy_url.rstrip("/")
            url = f"/proxy={proxy_url}{url_prefix}{payload.path}"
        elif client_utils.is_http_url_like(payload.path) or payload.path.startswith(
            f"{url_prefix}"
        ):
            url = payload.path
        else:
            url = f"{url_prefix}{payload.path}"
        payload.url = url

        return payload.model_dump()

    if isinstance(data, (GradioRootModel, GradioModel)):
        data = data.model_dump()

    return await client_utils.async_traverse(
        data, _move_to_cache, client_utils.is_file_obj
    )
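The replacement function above ultimately hands every file-like payload to `client_utils.async_traverse`. A simplified, self-contained sketch of that traversal plus the final `/file=` prefixing step (the helper names and sample paths here are illustrative, not gradio's actual API):

```python
import asyncio

def is_file_obj(d):
    return isinstance(d, dict) and "path" in d

async def traverse(data, func, is_file):
    # Recursively walk dicts/lists, applying `func` to file-like payloads —
    # roughly what gradio_client's async_traverse does.
    if is_file(data):
        return await func(data)
    if isinstance(data, dict):
        return {k: await traverse(v, func, is_file) for k, v in data.items()}
    if isinstance(data, list):
        return [await traverse(v, func, is_file) for v in data]
    return data

async def add_url(payload):
    # Mirrors the last step of _move_to_cache: expose the path via /file=
    payload["url"] = f"/file={payload['path']}"
    return payload

data = {"image": {"path": "/tmp/a.png"}, "meta": [1, {"path": "/tmp/b.png"}]}
out = asyncio.run(traverse(data, add_url, is_file_obj))
print(out["image"]["url"])  # /file=/tmp/a.png
```

Because the traversal visits every nested payload, patching the single cache function is enough to change how all components serve files.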
 def install_ui_tempdir_override():
-    """override save to file function so that it also writes PNG info"""
+    """
+    override save to file function so that it also writes PNG info.
+    override gradio4's move_files_to_cache function to prevent it from writing a copy into a temporary directory.
+    """

-    gradio.components.IOComponent.pil_to_temp_file = save_pil_to_file
+    gradio.processing_utils.save_pil_to_cache = save_pil_to_file
+    gradio.processing_utils.async_move_files_to_cache = async_move_files_to_cache


 def on_tmpdir_changed():
+1 -2
@@ -93,14 +93,13 @@ class UpscalerData:
     scaler: Upscaler = None
     model: None

-    def __init__(self, name: str, path: str, upscaler: Upscaler = None, scale: int = 4, model=None, sha256: str = None):
+    def __init__(self, name: str, path: str, upscaler: Upscaler = None, scale: int = 4, model=None):
         self.name = name
         self.data_path = path
         self.local_data_path = path
         self.scaler = upscaler
         self.scale = scale
         self.model = model
-        self.sha256 = sha256

     def __repr__(self):
         return f"<UpscalerData name={self.name} path={self.data_path} scale={self.scale}>"
-77
@@ -211,80 +211,3 @@ Requested path was: {path}
         subprocess.Popen(["explorer.exe", subprocess.check_output(["wslpath", "-w", path])])
     else:
         subprocess.Popen(["xdg-open", path])
def load_file_from_url(
        url: str,
        *,
        model_dir: str,
        progress: bool = True,
        file_name: str | None = None,
        hash_prefix: str | None = None,
        re_download: bool = False,
) -> str:
    """Download a file from `url` into `model_dir`, using the file present if possible.
    Returns the path to the downloaded file.

    file_name: if specified, it will be used as the filename, otherwise the filename will be extracted from the url.
        file is downloaded to {file_name}.tmp then moved to the final location after download is complete.
    hash_prefix: sha256 hex string, if provided, the hash of the downloaded file will be checked against this prefix.
        if the hash does not match, the temporary file is deleted and a ValueError is raised.
    re_download: forcibly re-download the file even if it already exists.
    """
    from urllib.parse import urlparse
    import requests
    try:
        from tqdm import tqdm
    except ImportError:
        class tqdm:
            def __init__(self, *args, **kwargs):
                pass

            def update(self, n=1, *args, **kwargs):
                pass

            def __enter__(self):
                return self

            def __exit__(self, exc_type, exc_val, exc_tb):
                pass

    if not file_name:
        parts = urlparse(url)
        file_name = os.path.basename(parts.path)

    cached_file = os.path.abspath(os.path.join(model_dir, file_name))
    if re_download or not os.path.exists(cached_file):
        os.makedirs(model_dir, exist_ok=True)
        temp_file = os.path.join(model_dir, f"{file_name}.tmp")
        print(f'\nDownloading: "{url}" to {cached_file}')
        response = requests.get(url, stream=True)
        response.raise_for_status()
        total_size = int(response.headers.get('content-length', 0))
        with tqdm(total=total_size, unit='B', unit_scale=True, desc=file_name, disable=not progress) as progress_bar:
            with open(temp_file, 'wb') as file:
                for chunk in response.iter_content(chunk_size=1024):
                    if chunk:
                        file.write(chunk)
                        progress_bar.update(len(chunk))

        if hash_prefix and not compare_sha256(temp_file, hash_prefix):
            print(f"Hash mismatch for {temp_file}. Deleting the temporary file.")
            os.remove(temp_file)
            raise ValueError(f"File hash does not match the expected hash prefix {hash_prefix}!")

        os.rename(temp_file, cached_file)
    return cached_file


def compare_sha256(file_path: str, hash_prefix: str) -> bool:
    """Check if the SHA256 hash of the file matches the given prefix."""
    import hashlib
    hash_sha256 = hashlib.sha256()
    blksize = 1024 * 1024
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(blksize), b""):
            hash_sha256.update(chunk)
    return hash_sha256.hexdigest().startswith(hash_prefix.strip().lower())
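`compare_sha256` lets callers verify a download against just a hash prefix rather than the full digest, and hashes in chunks so large model files never load into memory at once. A small runnable sketch of the same check:

```python
import hashlib
import os
import tempfile

def sha256_prefix_matches(file_path, hash_prefix, blksize=1024 * 1024):
    # Hash in fixed-size chunks so large files don't load into memory.
    h = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(blksize), b""):
            h.update(chunk)
    return h.hexdigest().startswith(hash_prefix.strip().lower())

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello")

# sha256(b"hello") starts with 2cf24dba...; lowercasing makes the
# comparison case-insensitive, so an uppercase prefix still matches.
print(sha256_prefix_matches(path, "2CF24DBA"))  # True
os.remove(path)
```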
+1 -1
@@ -8,7 +8,7 @@ diskcache
 einops
 facexlib
 fastapi>=0.90.1
-gradio==3.41.2
+gradio==4.38.1
 inflection
 jsonmerge
 kornia
+3 -3
@@ -7,8 +7,8 @@ clean-fid==0.1.35
 diskcache==5.6.3
 einops==0.4.1
 facexlib==0.3.0
-fastapi==0.94.0
-gradio==3.41.2
+fastapi==0.104.1
+gradio==4.38.1
 httpcore==0.15
 inflection==0.5.1
 jsonmerge==1.8.0
@@ -22,7 +22,7 @@ protobuf==3.20.0
 psutil==5.9.5
 pytorch_lightning==1.9.4
 resize-right==0.0.2
-safetensors==0.4.5
+safetensors==0.4.2
 scikit-image==0.21.0
 spandrel==0.3.4
 spandrel-extra-arches==0.1.1
+2 -2
@@ -12,8 +12,8 @@ class ScriptPostprocessingCodeFormer(scripts_postprocessing.ScriptPostprocessing
     def ui(self):
         with ui_components.InputAccordion(False, label="CodeFormer") as enable:
             with gr.Row():
-                codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id=self.elem_id_suffix("extras_codeformer_visibility"))
-                codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Weight (0 = maximum effect, 1 = minimum effect)", value=0, elem_id=self.elem_id_suffix("extras_codeformer_weight"))
+                codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id="extras_codeformer_visibility")
+                codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Weight (0 = maximum effect, 1 = minimum effect)", value=0, elem_id="extras_codeformer_weight")

         return {
             "enable": enable,
+1 -1
@@ -11,7 +11,7 @@ class ScriptPostprocessingGfpGan(scripts_postprocessing.ScriptPostprocessing):
     def ui(self):
         with ui_components.InputAccordion(False, label="GFPGAN") as enable:
-            gfpgan_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id=self.elem_id_suffix("extras_gfpgan_visibility"))
+            gfpgan_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id="extras_gfpgan_visibility")

         return {
             "enable": enable,
+15 -16
@@ -30,31 +30,31 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
     def ui(self):
         selected_tab = gr.Number(value=0, visible=False)

-        with InputAccordion(True, label="Upscale", elem_id=self.elem_id_suffix("extras_upscale")) as upscale_enabled:
+        with InputAccordion(True, label="Upscale", elem_id="extras_upscale") as upscale_enabled:
             with FormRow():
-                extras_upscaler_1 = gr.Dropdown(label='Upscaler 1', elem_id=self.elem_id_suffix("extras_upscaler_1"), choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
+                extras_upscaler_1 = gr.Dropdown(label='Upscaler 1', elem_id="extras_upscaler_1", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)

             with FormRow():
-                extras_upscaler_2 = gr.Dropdown(label='Upscaler 2', elem_id=self.elem_id_suffix("extras_upscaler_2"), choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
-                extras_upscaler_2_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Upscaler 2 visibility", value=0.0, elem_id=self.elem_id_suffix("extras_upscaler_2_visibility"))
+                extras_upscaler_2 = gr.Dropdown(label='Upscaler 2', elem_id="extras_upscaler_2", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
+                extras_upscaler_2_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Upscaler 2 visibility", value=0.0, elem_id="extras_upscaler_2_visibility")

             with FormRow():
-                with gr.Tabs(elem_id=self.elem_id_suffix("extras_resize_mode")):
-                    with gr.TabItem('Scale by', elem_id=self.elem_id_suffix("extras_scale_by_tab")) as tab_scale_by:
+                with gr.Tabs(elem_id="extras_resize_mode"):
+                    with gr.TabItem('Scale by', elem_id="extras_scale_by_tab") as tab_scale_by:
                         with gr.Row():
                             with gr.Column(scale=4):
-                                upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id=self.elem_id_suffix("extras_upscaling_resize"))
+                                upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id="extras_upscaling_resize")
                             with gr.Column(scale=1, min_width=160):
-                                max_side_length = gr.Number(label="Max side length", value=0, elem_id=self.elem_id_suffix("extras_upscale_max_side_length"), tooltip="If any of two sides of the image ends up larger than specified, will downscale it to fit. 0 = no limit.", min_width=160, step=8, minimum=0)
+                                max_side_length = gr.Number(label="Max side length", value=0, elem_id="extras_upscale_max_side_length", tooltip="If any of two sides of the image ends up larger than specified, will downscale it to fit. 0 = no limit.", min_width=160, step=8, minimum=0)

-                    with gr.TabItem('Scale to', elem_id=self.elem_id_suffix("extras_scale_to_tab")) as tab_scale_to:
+                    with gr.TabItem('Scale to', elem_id="extras_scale_to_tab") as tab_scale_to:
                         with FormRow():
-                            with gr.Column(elem_id=self.elem_id_suffix("upscaling_column_size"), scale=4):
-                                upscaling_resize_w = gr.Slider(minimum=64, maximum=8192, step=8, label="Width", value=512, elem_id=self.elem_id_suffix("extras_upscaling_resize_w"))
-                                upscaling_resize_h = gr.Slider(minimum=64, maximum=8192, step=8, label="Height", value=512, elem_id=self.elem_id_suffix("extras_upscaling_resize_h"))
+                            with gr.Column(elem_id="upscaling_column_size", scale=4):
+                                upscaling_resize_w = gr.Slider(minimum=64, maximum=8192, step=8, label="Width", value=512, elem_id="extras_upscaling_resize_w")
+                                upscaling_resize_h = gr.Slider(minimum=64, maximum=8192, step=8, label="Height", value=512, elem_id="extras_upscaling_resize_h")

-                            with gr.Column(elem_id=self.elem_id_suffix("upscaling_dimensions_row"), scale=1, elem_classes="dimensions-tools"):
-                                upscaling_res_switch_btn = ToolButton(value=switch_values_symbol, elem_id=self.elem_id_suffix("upscaling_res_switch_btn"), tooltip="Switch width/height")
-                                upscaling_crop = gr.Checkbox(label='Crop to fit', value=True, elem_id=self.elem_id_suffix("extras_upscaling_crop"))
+                            with gr.Column(elem_id="upscaling_dimensions_row", scale=1, elem_classes="dimensions-tools"):
+                                upscaling_res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="upscaling_res_switch_btn", tooltip="Switch width/height")
+                                upscaling_crop = gr.Checkbox(label='Crop to fit', value=True, elem_id="extras_upscaling_crop")

         def on_selected_upscale_method(upscale_method):
             if not shared.opts.set_scale_by_when_changing_upscaler:
@@ -169,7 +169,6 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
 class ScriptPostprocessingUpscaleSimple(ScriptPostprocessingUpscale):
     name = "Simple Upscale"
     order = 900
-    main_ui_only = True

     def ui(self):
         with FormRow():
+34 -40
@@ -20,7 +20,7 @@ import modules.sd_models
 import modules.sd_vae
 import re

-from modules.ui_components import ToolButton, InputAccordion
+from modules.ui_components import ToolButton


 fill_values_symbol = "\U0001f4d2"  # 📒
@@ -259,7 +259,6 @@ axis_options = [
     AxisOption("Schedule min sigma", float, apply_override("sigma_min")),
     AxisOption("Schedule max sigma", float, apply_override("sigma_max")),
     AxisOption("Schedule rho", float, apply_override("rho")),
-    AxisOption("Skip Early CFG", float, apply_override('skip_early_cond')),
     AxisOption("Beta schedule alpha", float, apply_override("beta_dist_alpha")),
     AxisOption("Beta schedule beta", float, apply_override("beta_dist_beta")),
     AxisOption("Eta", float, apply_field("eta")),
@@ -285,7 +284,7 @@ axis_options = [
 ]

-def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend, include_lone_images, include_sub_grids, first_axes_processed, second_axes_processed, margin_size, draw_grid):
+def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend, include_lone_images, include_sub_grids, first_axes_processed, second_axes_processed, margin_size):
     hor_texts = [[images.GridAnnotation(x)] for x in x_labels]
     ver_texts = [[images.GridAnnotation(y)] for y in y_labels]
     title_texts = [[images.GridAnnotation(z)] for z in z_labels]
@@ -370,30 +369,29 @@ def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend
         print("Unexpected error: draw_xyz_grid failed to return even a single processed image")
         return Processed(p, [])

-    if draw_grid:
-        z_count = len(zs)
-
-        for i in range(z_count):
-            start_index = (i * len(xs) * len(ys)) + i
-            end_index = start_index + len(xs) * len(ys)
-            grid = images.image_grid(processed_result.images[start_index:end_index], rows=len(ys))
-            if draw_legend:
-                grid_max_w, grid_max_h = map(max, zip(*(img.size for img in processed_result.images[start_index:end_index])))
-                grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
-            processed_result.images.insert(i, grid)
-            processed_result.all_prompts.insert(i, processed_result.all_prompts[start_index])
-            processed_result.all_seeds.insert(i, processed_result.all_seeds[start_index])
-            processed_result.infotexts.insert(i, processed_result.infotexts[start_index])
-
-        z_grid = images.image_grid(processed_result.images[:z_count], rows=1)
-        z_sub_grid_max_w, z_sub_grid_max_h = map(max, zip(*(img.size for img in processed_result.images[:z_count])))
-        if draw_legend:
-            z_grid = images.draw_grid_annotations(z_grid, z_sub_grid_max_w, z_sub_grid_max_h, title_texts, [[images.GridAnnotation()]])
-        processed_result.images.insert(0, z_grid)
-        # TODO: Deeper aspects of the program rely on grid info being misaligned between metadata arrays, which is not ideal.
-        # processed_result.all_prompts.insert(0, processed_result.all_prompts[0])
-        # processed_result.all_seeds.insert(0, processed_result.all_seeds[0])
-        processed_result.infotexts.insert(0, processed_result.infotexts[0])
+    z_count = len(zs)
+
+    for i in range(z_count):
+        start_index = (i * len(xs) * len(ys)) + i
+        end_index = start_index + len(xs) * len(ys)
+        grid = images.image_grid(processed_result.images[start_index:end_index], rows=len(ys))
+        if draw_legend:
+            grid_max_w, grid_max_h = map(max, zip(*(img.size for img in processed_result.images[start_index:end_index])))
+            grid = images.draw_grid_annotations(grid, grid_max_w, grid_max_h, hor_texts, ver_texts, margin_size)
+        processed_result.images.insert(i, grid)
+        processed_result.all_prompts.insert(i, processed_result.all_prompts[start_index])
+        processed_result.all_seeds.insert(i, processed_result.all_seeds[start_index])
+        processed_result.infotexts.insert(i, processed_result.infotexts[start_index])
+
+    z_grid = images.image_grid(processed_result.images[:z_count], rows=1)
+    z_sub_grid_max_w, z_sub_grid_max_h = map(max, zip(*(img.size for img in processed_result.images[:z_count])))
+    if draw_legend:
+        z_grid = images.draw_grid_annotations(z_grid, z_sub_grid_max_w, z_sub_grid_max_h, title_texts, [[images.GridAnnotation()]])
+    processed_result.images.insert(0, z_grid)
+    # TODO: Deeper aspects of the program rely on grid info being misaligned between metadata arrays, which is not ideal.
+    # processed_result.all_prompts.insert(0, processed_result.all_prompts[0])
+    # processed_result.all_seeds.insert(0, processed_result.all_seeds[0])
+    processed_result.infotexts.insert(0, processed_result.infotexts[0])

     return processed_result
@@ -443,6 +441,7 @@ class Script(scripts.Script):
         with gr.Row(variant="compact", elem_id="axis_options"):
             with gr.Column():
+                draw_legend = gr.Checkbox(label='Draw legend', value=True, elem_id=self.elem_id("draw_legend"))
                 no_fixed_seeds = gr.Checkbox(label='Keep -1 for seeds', value=False, elem_id=self.elem_id("no_fixed_seeds"))
                 with gr.Row():
                     vary_seeds_x = gr.Checkbox(label='Vary seeds for X', value=False, min_width=80, elem_id=self.elem_id("vary_seeds_x"), tooltip="Use different seeds for images along X axis.")
@@ -450,12 +449,9 @@ class Script(scripts.Script):
                     vary_seeds_z = gr.Checkbox(label='Vary seeds for Z', value=False, min_width=80, elem_id=self.elem_id("vary_seeds_z"), tooltip="Use different seeds for images along Z axis.")
             with gr.Column():
                 include_lone_images = gr.Checkbox(label='Include Sub Images', value=False, elem_id=self.elem_id("include_lone_images"))
-                csv_mode = gr.Checkbox(label='Use text inputs instead of dropdowns', value=False, elem_id=self.elem_id("csv_mode"))
-            with InputAccordion(True, label='Draw grid', elem_id=self.elem_id('draw_grid')) as draw_grid:
-                with gr.Row():
-                    include_sub_grids = gr.Checkbox(label='Include Sub Grids', value=False, elem_id=self.elem_id("include_sub_grids"))
-                    draw_legend = gr.Checkbox(label='Draw legend', value=True, elem_id=self.elem_id("draw_legend"))
+                include_sub_grids = gr.Checkbox(label='Include Sub Grids', value=False, elem_id=self.elem_id("include_sub_grids"))
+                csv_mode = gr.Checkbox(label='Use text inputs instead of dropdowns', value=False, elem_id=self.elem_id("csv_mode"))
             with gr.Column():
                 margin_size = gr.Slider(label="Grid margins (px)", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id("margin_size"))

         with gr.Row(variant="compact", elem_id="swap_axes"):
@@ -537,9 +533,9 @@ class Script(scripts.Script):
             (z_values_dropdown, lambda params: get_dropdown_update_from_params("Z", params)),
         )

-        return [x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode, draw_grid]
+        return [x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode]

-    def run(self, p, x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode, draw_grid):
+    def run(self, p, x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode):
         x_type, y_type, z_type = x_type or 0, y_type or 0, z_type or 0  # if axle type is None set to 0

         if not no_fixed_seeds:
@@ -784,8 +780,7 @@ class Script(scripts.Script):
             include_sub_grids=include_sub_grids,
             first_axes_processed=first_axes_processed,
             second_axes_processed=second_axes_processed,
-            margin_size=margin_size,
-            draw_grid=draw_grid,
+            margin_size=margin_size
         )

         if not processed.images:
@@ -794,15 +789,14 @@ class Script(scripts.Script):
         z_count = len(zs)

-        if draw_grid:
-            # Set the grid infotexts to the real ones with extra_generation_params (1 main grid + z_count sub-grids)
-            processed.infotexts[:1 + z_count] = grid_infotext[:1 + z_count]
+        # Set the grid infotexts to the real ones with extra_generation_params (1 main grid + z_count sub-grids)
+        processed.infotexts[:1 + z_count] = grid_infotext[:1 + z_count]

         if not include_lone_images:
             # Don't need sub-images anymore, drop from list:
-            processed.images = processed.images[:z_count + 1] if draw_grid else []
+            processed.images = processed.images[:z_count + 1]

-        if draw_grid and opts.grid_save:
+        if opts.grid_save:
             # Auto-save main and sub-grids:
             grid_count = z_count + 1 if z_count > 1 else 1
             for g in range(grid_count):
@@ -812,7 +806,7 @@ class Script(scripts.Script):
                 if not include_sub_grids:  # if not include_sub_grids then skip saving after the first grid
                     break

-        if draw_grid and not include_sub_grids:
+        if not include_sub_grids:
             # Done with sub-grids, drop all related information:
             for _ in range(z_count):
                 del processed.images[1]
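The grid-insertion loop these hunks touch uses `start_index = (i * len(xs) * len(ys)) + i`: each z-slice starts after i full slices of cell images plus the i grid images already inserted in front of earlier slices. A toy trace of that arithmetic with string stand-ins for images:

```python
# Each z-slice holds len(xs) * len(ys) cell images; a grid image is inserted
# at position i for slice i, so every later slice shifts right by i.
xs, ys, zs = [0, 1], [0, 1], [0, 1]  # hypothetical 2x2x2 axes
images = [f"cell{z}{y}{x}" for z in zs for y in ys for x in xs]

for i in range(len(zs)):
    start_index = (i * len(xs) * len(ys)) + i
    end_index = start_index + len(xs) * len(ys)
    # the slice picks exactly the cells belonging to z-slice i
    assert all(name.startswith(f"cell{i}") for name in images[start_index:end_index])
    images.insert(i, f"grid{i}")  # grid for z-slice i goes in front

assert images[:2] == ["grid0", "grid1"]
assert images[2] == "cell000"
```

Without the trailing `+ i`, the second slice would start one element too early and pick up the grid inserted for the first slice.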
+18 -17
@@ -2,14 +2,6 @@
 @import url('webui-assets/css/sourcesanspro.css');

-/* temporary fix to hide gradio crop tool until it's fixed https://github.com/gradio-app/gradio/issues/3810 */
-div.gradio-image button[aria-label="Edit"] {
-    display: none;
-}
-
 /* general gradio fixes */
 :root, .dark{
@@ -137,6 +129,10 @@ div.gradio-html.min{
     background: var(--input-background-fill);
 }

+.gradio-gallery > button.preview{
+    width: 100%;
+}
+
 .gradio-container .prose a, .gradio-container .prose a:visited{
     color: unset;
     text-decoration: none;
@@ -147,6 +143,15 @@ a{
     cursor: pointer;
 }

+.upload-container {
+    width: 100%;
+    max-width: 100%;
+}
+
+.layer-wrap > ul {
+    background: var(--background-fill-primary) !important;
+}
+
 /* gradio 3.39 puts a lot of overflow: hidden all over the place for an unknown reason. */
 div.gradio-container, .block.gradio-textbox, div.gradio-group, div.gradio-dropdown{
     overflow: visible !important;
@@ -398,7 +403,7 @@ div#extras_scale_to_tab div.form{
     z-index: 5;
 }

-.image-buttons > .form{
+.image-buttons{
     justify-content: center;
 }
@@ -1098,9 +1103,9 @@ footer {
     height:100%;
 }

-div.block.gradio-box.edit-user-metadata {
+.edit-user-metadata {
     width: 56em;
-    background: var(--body-background-fill);
+    background: var(--body-background-fill) !important;
     padding: 2em !important;
 }
@@ -1134,16 +1139,12 @@ div.block.gradio-box.edit-user-metadata {
     margin-top: 1.5em;
 }

-div.block.gradio-box.popup-dialog, .popup-dialog {
+.popup-dialog {
     width: 56em;
-    background: var(--body-background-fill);
+    background: var(--body-background-fill) !important;
     padding: 2em !important;
 }

-div.block.gradio-box.popup-dialog > div:last-child, .popup-dialog > div:last-child{
-    margin-top: 1em;
-}
-
 div.block.input-accordion{
 }
+1 -10
@@ -4,16 +4,7 @@ if exist webui.settings.bat (
     call webui.settings.bat
 )

-if not defined PYTHON (
-    for /f "delims=" %%A in ('where python ^| findstr /n . ^| findstr ^^1:') do (
-        if /i "%%~xA" == ".exe" (
-            set PYTHON=python
-        ) else (
-            set PYTHON=call python
-        )
-    )
-)
+if not defined PYTHON (set PYTHON=python)
 if defined GIT (set "GIT_PYTHON_GIT_EXECUTABLE=%GIT%")
 if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")
-40
@@ -45,44 +45,6 @@ def api_only():
    )
def warning_if_invalid_install_dir():
    """
    Shows a warning if the webui is installed under a path that contains a leading dot in any of its parent directories.

    Gradio's '/file=' route will block access to files that have a leading dot in the path segments.
    We use this route to serve files such as JavaScript and CSS to the webpage;
    if those files are blocked, the webpage will not function properly.
    See https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13292

    This is a security feature that was added to Gradio 3.32.0 and is removed in later versions;
    this function replicates Gradio's file access blocking logic.

    This check should be removed when it's no longer applicable.
    """
    from packaging.version import parse
    from pathlib import Path
    import gradio

    if parse('3.32.0') <= parse(gradio.__version__) < parse('4'):
        def abspath(path):
            """modified from Gradio 3.41.2 gradio.utils.abspath()"""
            if path.is_absolute():
                return path

            is_symlink = path.is_symlink() or any(parent.is_symlink() for parent in path.parents)
            return Path.cwd() / path if (is_symlink or path == path.resolve()) else path.resolve()

        webui_root = Path(__file__).parent
        if any(part.startswith(".") for part in abspath(webui_root).parts):
            print(f'''{"!"*25} Warning {"!"*25}
WebUI is installed in a directory that has a leading dot (.) in one of its parent directories.
This will prevent WebUI from functioning properly.
Please move the installation to a different directory.
Current path: "{webui_root}"
For more information see: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13292
{"!"*25} Warning {"!"*25}''')
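The removed helper boils down to one test: does any segment of the install path start with a dot? A standalone sketch of that check (the example paths are hypothetical):

```python
from pathlib import Path

def has_hidden_parent(path):
    # Gradio 3.32's /file= route refuses to serve any path whose
    # segments include a leading-dot ("hidden") directory.
    return any(part.startswith(".") for part in Path(path).parts)

print(has_hidden_parent("/home/user/.apps/stable-diffusion-webui"))  # True
print(has_hidden_parent("/home/user/apps/stable-diffusion-webui"))   # False
```

With the Gradio 4 bump this restriction no longer applies, which is why the warning could be dropped.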
 def webui():
     from modules.shared_cmd_options import cmd_opts

@@ -91,8 +53,6 @@ def webui():
     from modules import shared, ui_tempdir, script_callbacks, ui, progress, ui_extra_networks

-    warning_if_invalid_install_dir()
-
     while 1:
         if shared.opts.clean_temp_dir_at_start:
             ui_tempdir.cleanup_tmpdr()