Compare commits

2571 Commits

Author SHA1 Message Date
Mrq
db4cac5d1f DirectML kludge 2023-02-08 23:18:41 -06:00
AUTOMATIC1111
ea9bd9fc74
Merge pull request #7556 from EllangoK/master
Adds options for grid margins to XYZ Plot and Prompt Matrix
2023-02-05 13:34:36 +03:00
EllangoK
0ca1a64cfc adds grid margins to xyz plot and prompt matrix 2023-02-05 03:44:56 -05:00
AUTOMATIC1111
3993aa43e9
Merge pull request #7535 from mcmonkey4eva/fix-symlink-extra-network
fix symlinks in extra networks ui
2023-02-05 11:28:30 +03:00
AUTOMATIC1111
27a50d4b38
Merge pull request #7554 from techneconn/feature/prompt_hash_option
Add prompt_hash option for file/dir name pattern
2023-02-05 11:27:05 +03:00
AUTOMATIC1111
475095f50a
Merge pull request #7528 from spezialspezial/patch-1
Catch broken model symlinks early | Quickfix modelloader.py
2023-02-05 11:24:32 +03:00
AUTOMATIC
668d7e9b9a make it possible to load SD1 checkpoints without CLIP 2023-02-05 11:21:00 +03:00
techneconn
5a1b62e9f8 Add prompt_hash option for file/dir name pattern 2023-02-05 15:48:51 +09:00
Alex "mcmonkey" Goodwin
88a46e8427 fix symlinks in extra networks ui
'absolute' and 'resolve' are equivalent, but 'resolve' resolves symlinks (which is an obscure specialty behavior usually not wanted) whereas 'absolute' treats symlinks as folders (which is the expected behavior). This commit allows you to symlink folders within your models/embeddings/etc. dirs and have preview images load as expected without issue.
2023-02-04 09:10:00 -08:00
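
A quick note on the pathlib distinction the commit above relies on (illustrative snippet with a hypothetical directory layout; Path.absolute() and Path.resolve() are standard-library calls):

    from pathlib import Path

    # Suppose models/Lora/shared is a symlink to /mnt/storage/loras (hypothetical layout).
    p = Path("models/Lora/shared/preview.png")

    print(p.absolute())  # <cwd>/models/Lora/shared/preview.png -- symlink kept as a folder
    print(p.resolve())   # /mnt/storage/loras/preview.png       -- symlink expanded away

    # Preview lookups compare paths against the models directory; resolve() moves the
    # path outside that directory, so absolute() is the behavior the UI wants here.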
spezialspezial
6524478850
Update modelloader.py
os.path.getmtime(filename) throws an exception later in the code path when it encounters a broken symlink. For now, catch it early here; more checks could be added for robustness.
2023-02-04 16:52:15 +01:00
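
A sketch of the kind of early guard the quickfix describes (illustrative, not the repo's exact code): os.path.exists() returns False for a broken symlink, while os.path.getmtime() raises OSError on one.

    import os

    def list_model_files(dirname):
        """Collect model files, skipping broken symlinks before getmtime() can throw."""
        output = []
        for name in os.listdir(dirname):
            full = os.path.join(dirname, name)
            # A broken symlink is a link whose target is missing: islink() is True
            # but exists() is False; getmtime() on such a path raises OSError.
            if os.path.islink(full) and not os.path.exists(full):
                print(f"Skipping broken symlink: {full}")
                continue
            output.append(full)
        return output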
AUTOMATIC
3e0f9a7543 fix issue with switching back to checkpoint that had its checksum calculated during runtime mentioned in #7506 2023-02-04 15:23:16 +03:00
AUTOMATIC
40e51fd6ef add margin parameter to draw_grid_annotations 2023-02-04 13:29:04 +03:00
AUTOMATIC1111
21593c8082
Merge pull request #7466 from ctwrs/master
Add .jpg to allowed thumb formats
2023-02-04 12:07:45 +03:00
AUTOMATIC1111
c0e0b5844d
Merge pull request #7470 from cbrownstein-lambda/update-error-message-no-checkpoint
Update error message WRT missing checkpoint file
2023-02-04 12:07:12 +03:00
AUTOMATIC1111
dca632ab90
Merge pull request #7509 from mezotaken/fix-img2imgalt
Fix img2imgalt after samplers separation
2023-02-04 11:41:29 +03:00
AUTOMATIC
81823407d9 add --no-hashing 2023-02-04 11:38:56 +03:00
AUTOMATIC1111
30228c67ca
Merge pull request #7461 from brkirch/mac-fixes
Move Mac related code to separate file
2023-02-04 11:22:52 +03:00
AUTOMATIC
c4b9ed1a27 make Image CFG Scale only show if an instruct-pix2pix model is loaded 2023-02-04 11:18:44 +03:00
AUTOMATIC
72dd5785d9 merge CFGDenoiserEdit and CFGDenoiser into single object 2023-02-04 11:06:17 +03:00
brkirch
4306659c4d Remove unused code 2023-02-04 01:22:06 -05:00
AUTOMATIC1111
127bfb6c41
Merge pull request #7481 from Klace/master
img2img instruct-pix2pix support
2023-02-04 09:05:21 +03:00
Kyle
ba6a4e7e94 Use original CFGDenoiser if image_cfg_scale = 1
If image_cfg_scale = 1, then the original image is not used for the output. We can then use the original CFGDenoiser to get the same result and keep AND functionality working.

Maybe in the future AND can be supported with "Image CFG Scale"
2023-02-03 19:46:13 -05:00
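
For context, the instruct-pix2pix paper combines two guidance scales; this sketch of that formula (illustrative names, not the repo's denoiser code) shows why image_cfg_scale = 1 collapses to ordinary CFG:

    def ip2p_guidance(e_uncond, e_img, e_full, text_cfg, image_cfg):
        # e_uncond: prediction with no conditioning
        # e_img:    prediction conditioned on the input image only
        # e_full:   prediction conditioned on both image and text
        return (e_uncond
                + image_cfg * (e_img - e_uncond)
                + text_cfg * (e_full - e_img))

    # With image_cfg == 1 the e_uncond terms cancel, leaving
    #   e_img + text_cfg * (e_full - e_img)
    # i.e. plain classifier-free guidance, so the original CFGDenoiser
    # (with its AND support) produces the same result.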
Kyle
c27c0de0f7 txt2img Hires Fix 2023-02-03 19:15:32 -05:00
Kyle
6c6c6636bb Image CFG Added (Full Implementation)
Uses separate denoiser for edit (instruct-pix2pix) models

No impact to txt2img or regular img2img

"Image CFG Scale" will only apply to instruct-pix2pix models and metadata will only be added if using such model
2023-02-03 18:19:56 -05:00
Vladimir Repin
982295aee5 Fix img2imgalt after samplers separation 2023-02-04 01:50:38 +03:00
Kyle
3b2ad20ac1 Processing only, no CFGDenoiser change
Allows instruct-pix2pix
2023-02-02 19:19:45 -05:00
Kyle
cf0cfefe91 Revert "instruct-pix2pix support"
This reverts commit 269833067d.
2023-02-02 19:15:38 -05:00
Kyle
269833067d instruct-pix2pix support 2023-02-02 09:37:01 -05:00
Cody Brownstein
fb97acef63 Update error message WRT missing checkpoint file
The Safetensors format is also supported.
2023-02-01 14:51:06 -08:00
ctwrs
92bae77b88 Add .jpg to allowed thumb formats 2023-02-01 22:28:39 +01:00
brkirch
1b8af15f13 Refactor Mac specific code to a separate file
Move most Mac related code to a separate file, don't even load it unless web UI is run under macOS.
2023-02-01 14:05:56 -05:00
AUTOMATIC1111
226d840e84
Merge pull request #7334 from EllangoK/master
X/Y/Z plot now saves sub grids if opts.grid_save and honors draw_legend
2023-02-01 16:30:28 +03:00
AUTOMATIC1111
07edf57409
Merge pull request #7357 from EllangoK/btn-fix
Fixes switch height/width btn unbound error
2023-02-01 16:29:58 +03:00
AUTOMATIC1111
fa4fe45403
Merge pull request #7371 from hoblin/master
[Prompt Matrix] Support for negative prompt + delimiter selector
2023-02-01 16:28:27 +03:00
AUTOMATIC1111
814600f298
Merge pull request #7412 from Pomierski/master
Fix missing tooltip for 'Clear prompt' button
2023-02-01 16:22:36 +03:00
AUTOMATIC1111
30a64504b1
Merge pull request #7414 from joecodecreations/master
Changes use_original_name_batch to default to True
2023-02-01 16:22:16 +03:00
AUTOMATIC1111
b1873dbb77
Merge pull request #7455 from brkirch/put-fix-back
Refactor MPS PyTorch fixes, add fix still required for PyTorch nightly builds back
2023-02-01 16:11:40 +03:00
brkirch
2217331cd1 Refactor MPS fixes to CondFunc 2023-02-01 06:36:22 -05:00
brkirch
7738c057ce MPS fix is still needed :(
Apparently I did not test with large enough images to trigger the bug with torch.narrow on MPS
2023-02-01 05:23:58 -05:00
Joey Sanchez
0426b34789 Adding default true to use_original_name_batch, as images should by default keep the same name to help keep sequenced images in their correct order 2023-01-30 21:46:52 -05:00
Piotr Pomierski
bfe7e7f15f Fix missing tooltip for 'Clear prompt' button 2023-01-31 01:51:07 +01:00
AUTOMATIC
2c1bb46c7a amend the error in previous commit 2023-01-30 18:48:10 +03:00
AUTOMATIC
19de2a626b make linux launch.py use XFORMERS_PACKAGE var too; thanks, acncagua 2023-01-30 15:48:09 +03:00
AUTOMATIC
ee9fdf7f62 Add --skip-version-check to disable messages asking users to upgrade torch. 2023-01-30 14:56:28 +03:00
AUTOMATIC
aa4688eb83 disable EMA weights for instructpix2pix model, which should return memory usage and image quality to what they were before d2ac95fa7b 2023-01-30 13:29:44 +03:00
AUTOMATIC
ab059b6e48 make the program read Discard penultimate sigma from generation parameters 2023-01-30 10:52:15 +03:00
AUTOMATIC
040ec7a80e make the program read Eta and Eta DDIM from generation parameters 2023-01-30 10:47:09 +03:00
AUTOMATIC
4df63d2d19 split samplers into one more file for k-diffusion 2023-01-30 10:11:30 +03:00
Andrey
274474105a Split history sd_samplers.py to sd_samplers_kdiffusion.py 2023-01-30 09:51:23 +03:00
Andrey
95916e3777 Split history sd_samplers.py to sd_samplers_kdiffusion.py 2023-01-30 09:51:23 +03:00
Andrey
2db8ed32cd Split history sd_samplers.py to sd_samplers_kdiffusion.py 2023-01-30 09:51:23 +03:00
Andrey
f4d0538bf2 Split history sd_samplers.py to sd_samplers_kdiffusion.py 2023-01-30 09:51:23 +03:00
AUTOMATIC
aa54a9d416 split compvis sampler and shared sampler stuff into their own files 2023-01-30 09:51:06 +03:00
Andrey
f8fcad502e Split history sd_samplers.py to sd_samplers_common.py 2023-01-30 09:37:51 +03:00
Andrey
58ae93b954 Split history sd_samplers.py to sd_samplers_common.py 2023-01-30 09:37:50 +03:00
Andrey
6e78f6a896 Split history sd_samplers.py to sd_samplers_common.py 2023-01-30 09:37:50 +03:00
Andrey
5feae71dd2 Split history sd_samplers.py to sd_samplers_common.py 2023-01-30 09:37:50 +03:00
Andrey
449531a6c5 Split history sd_samplers.py to sd_samplers_compvis.py 2023-01-30 09:35:53 +03:00
Andrey
9b8ed7f8ec Split history sd_samplers.py to sd_samplers_compvis.py 2023-01-30 09:35:53 +03:00
Andrey
9118b08606 Split history sd_samplers.py to sd_samplers_compvis.py 2023-01-30 09:35:52 +03:00
Andrey
0c7c36a6c6 Split history sd_samplers.py to sd_samplers_compvis.py 2023-01-30 09:35:52 +03:00
AUTOMATIC
cbd6329488 add an environment variable for selecting xformers package 2023-01-30 09:12:43 +03:00
AUTOMATIC
c81b52ffbd add override settings component to img2img 2023-01-30 02:40:26 +03:00
AUTOMATIC
847ceae1f7 make it possible to search checkpoint by its hash 2023-01-30 01:41:23 +03:00
AUTOMATIC
399720dac2 update prompt token counts after using the paste params button 2023-01-30 01:03:31 +03:00
AUTOMATIC
f91068f426 change disable_weights_auto_swap to true by default 2023-01-30 00:37:26 +03:00
AUTOMATIC
938578e8a9 make it so that setting options in pasted infotext (like Clip Skip and ENSD) do not get applied directly and instead are added as temporary overrides 2023-01-30 00:25:30 +03:00
Yevhenii Hurin
1e2b10d2dc Cleanup changes made by formatter 2023-01-29 17:14:46 +02:00
Yevhenii Hurin
5997457fd4 Compact options UI for Prompt Matrix 2023-01-29 16:23:29 +02:00
Yevhenii Hurin
edabd92729 Add delimiter selector to the Prompt Matrix script 2023-01-29 16:05:59 +02:00
Yevhenii Hurin
c46f3ad98b Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2023-01-29 15:47:14 +02:00
Yevhenii Hurin
7c53f81caf Prompt selector for Prompt Matrix script 2023-01-29 15:29:03 +02:00
AUTOMATIC
00dab8f10d remove Batch size and Batch pos from textinfo (goodbye) 2023-01-29 11:53:24 +03:00
AUTOMATIC
aa6e55e001 do not display the message for TI unless the list of loaded embeddings changed 2023-01-29 11:53:05 +03:00
EllangoK
920fe8057c fixes #7284 btn unbound error 2023-01-29 03:36:16 -05:00
AUTOMATIC
8d7382ab24 add buttons for auto-search in subdirectories for extra tabs 2023-01-29 11:34:58 +03:00
AUTOMATIC1111
e8efd2ec47
Merge pull request #7353 from EllangoK/preview-fix
Fixes thumbnail cards not loading the preview image
2023-01-29 10:41:36 +03:00
EllangoK
659d602dce only returns ckpt directories if they are not none 2023-01-29 02:32:53 -05:00
AUTOMATIC
f6b7768f84 support for searching subdirectory names for extra networks 2023-01-29 10:20:19 +03:00
AUTOMATIC1111
1d24665229
Merge pull request #7344 from glop102/master
Reduce grid rows if larger than number of images available
2023-01-29 09:29:23 +03:00
glop102
09a142a05a Reduce grid rows if larger than number of images available
When a fixed number of grid rows is specified in settings, it leads to situations where an entire row in the grid is empty. The most noticeable example is the processing preview when the row count is set to 2: it shows the preview fine, but with a black rectangle under it.
2023-01-28 19:25:52 -05:00
EllangoK
fb58fa6240 xyz plot now saves sub grids if opts.grid_save
also fixed the legend not being drawn for the z grid
2023-01-28 15:37:01 -05:00
AUTOMATIC
0a8515085e make it so that clicking on hypernet/lora card one more time removes the related from the prompt 2023-01-28 23:31:48 +03:00
AUTOMATIC
1d8e06d542 add checkpoints tab for extra networks UI 2023-01-28 22:52:27 +03:00
AUTOMATIC1111
91c8d0dcfc
Merge pull request #7231 from EllangoK/master
Fixes X/Y/Z Plot parameters not being restored from images
2023-01-28 18:45:38 +03:00
AUTOMATIC1111
fecb990deb
Merge pull request #7309 from brkirch/fix-embeddings
Fix embeddings, upscalers, and refactor `--upcast-sampling`
2023-01-28 18:44:36 +03:00
AUTOMATIC1111
41e76d1209
Merge pull request #7258 from ItsOlegDm/master
Css fixes
2023-01-28 18:41:58 +03:00
ItsOlegDm
29d2d6a094 Train tab fix 2023-01-28 17:21:59 +02:00
AUTOMATIC
e2c71a4bd4 prevent the browser from using cached versions of scripts when they change 2023-01-28 18:13:03 +03:00
ItsOlegDm
1e22f48f4d img2img styled padding fix 2023-01-28 17:08:38 +02:00
ItsOlegDm
f4eeff659e Removed buttons centering 2023-01-28 17:05:08 +02:00
EllangoK
591b68e56c uses auto's new regex, checks len of re_param 2023-01-28 10:04:09 -05:00
AUTOMATIC1111
cd7e8fb42b
Merge pull request #7319 from Thurion/img2img_batch_fix
Fix error when using img2img batch without masks
2023-01-28 17:31:39 +03:00
AUTOMATIC
b7d2af8c7f add dropdowns in settings for hypernets and loras 2023-01-28 17:18:47 +03:00
Thurion
1421e95960
allow empty mask dir 2023-01-28 14:42:24 +01:00
AUTOMATIC
5d14f282c2 fixed a bug where after switching to a checkpoint with unknown hash, you'd get empty space instead of checkpoint name in UI
fixed a bug where if you update a selected checkpoint on disk and then restart the program, a different checkpoint loads, but the name shown is for the old one.
2023-01-28 16:23:49 +03:00
AUTOMATIC
f8feeaaedb add progressbar to extension update check; do not check for updates for disabled extensions 2023-01-28 15:57:56 +03:00
AUTOMATIC
d04e3e921e automatically detect v-parameterization for SD2 checkpoints 2023-01-28 15:24:41 +03:00
AUTOMATIC
4aa7f5b5b9 update image parameters regex for #7231 2023-01-28 15:24:40 +03:00
brkirch
f9edd578e9 Remove MPS fix no longer needed for PyTorch
The torch.narrow fix was required for nightly PyTorch builds for a while to prevent a hard crash, but newer nightly builds don't have this issue.
2023-01-28 04:16:27 -05:00
brkirch
02b8b957d7 Add --no-half-vae to default macOS arguments
Apparently the version of PyTorch macOS users are currently at doesn't always handle half precision VAEs correctly. We will probably want to update the default PyTorch version to 2.0 when it comes out which should fix that, and at this point nightly builds of PyTorch 2.0 are going to be recommended for most Mac users. Unfortunately someone has already reported that their M2 Mac doesn't work with the nightly PyTorch 2.0 build currently, so we can add --no-half-vae for now and give users that can install nightly PyTorch 2.0 builds a webui-user.sh configuration that overrides the default.
2023-01-28 04:16:27 -05:00
brkirch
ada17dbd7c Refactor conditional casting, fix upscalers 2023-01-28 04:16:25 -05:00
AUTOMATIC1111
e8a41df49f
Merge pull request #7217 from mezotaken/master
Ask user to clarify conditions
2023-01-28 10:52:53 +03:00
AUTOMATIC1111
bea31e849a
Merge pull request #7240 from Unstackd/master
Allow users to convert models to Instruct-pix2pix models by supporting merging Instruct-pix2pix models with regular ones in the merge tab
2023-01-28 10:52:28 +03:00
AUTOMATIC1111
60061eb8d4
Merge pull request #7303 from szhublox/pathshelp
don't replace regular --help with new paths.py parser help
2023-01-28 10:48:33 +03:00
AUTOMATIC
bd52a6d899 some more changes for python version warning; add a commandline flag to disable 2023-01-28 10:48:08 +03:00
Mackerel
3752aad23d don't replace regular --help with new paths.py parser help 2023-01-28 02:44:12 -05:00
AUTOMATIC
7d1f2a3a49 remove waiting for input on version mismatch warning, change supported versions 2023-01-28 10:21:31 +03:00
AUTOMATIC1111
28c4c9b907
Merge pull request #7200 from Spaceginner/master
Add a Python version check
2023-01-28 10:13:56 +03:00
AUTOMATIC1111
ce72af87d3
Merge pull request #7199 from maxaudron/feature/configurable-data-dir
Add flag to store user data separate from source code
2023-01-28 09:24:40 +03:00
AUTOMATIC
0834d4ce37 simplify #7284 2023-01-28 08:41:15 +03:00
AUTOMATIC1111
c99d705e57
Merge pull request #7284 from Gazzoo-byte/patch-1
Add button to switch width and height
2023-01-28 08:33:43 +03:00
AUTOMATIC1111
38d83665d9
Merge pull request #7285 from EllangoK/xyz-fixes
Allows for multiple Styles axes in X/Y/Z Plot
2023-01-28 08:31:23 +03:00
AUTOMATIC
4c52dfe4ac make the detection for -v models less broad 2023-01-28 08:30:17 +03:00
AUTOMATIC1111
41975c375c
Merge pull request #7294 from MrCheeze/model-detection
add v2-inpainting model detection, and broaden v-model detection to include anything with 768 in the name
2023-01-28 08:29:01 +03:00
AUTOMATIC1111
8ce0ccf336
Merge pull request #7295 from askaliuk/askaliuk-inpaint-batch-support
Basic inpainting batch support
2023-01-28 08:27:37 +03:00
Andrii Skaliuk
2aac1d9778 Basic inpainting batch support
Modifies batch UI to add optional inpainting support
2023-01-27 17:32:31 -08:00
MrCheeze
6b82efd737 add v2-inpainting model detection, and broaden v-model detection to include anything with 768 in the name 2023-01-27 20:06:19 -05:00
AUTOMATIC
cc8c9b7474 fix broken calls to find_checkpoint_config 2023-01-27 22:43:08 +03:00
EllangoK
32d389ef0f changes remaining text from X/Y -> X/Y/Z 2023-01-27 14:04:23 -05:00
EllangoK
a6a5bfb155 deepcopy pc.styles, allows for multiple style axis 2023-01-27 13:48:39 -05:00
Gazzoo-byte
eafaf14167
Add button to switch width and height
Adds a button to switch width and height, allowing quick and easy switching between landscape and portrait.
2023-01-27 18:34:41 +00:00
Max Audron
23a9d5e273 create user extensions directory if not exists 2023-01-27 14:44:34 +01:00
Max Audron
6b3981c068 clean up unused script_path imports 2023-01-27 14:44:34 +01:00
Max Audron
14c0884fd0 use python importlib to load and execute extension modules
Previously, module attributes like __file__ were not set correctly, leading to scripts getting the directory of the stable-diffusion repo location instead of their own script.

This caused problems when loading user data from an external location using the --data-dir flag, as extensions would look for their own code in the stable-diffusion repo location instead of the data-dir location.

Using Python's importlib functions sets the module specs correctly and executes them. But this will break extensions if they build paths based on the previously incorrect __file__ attribute.
2023-01-27 14:44:34 +01:00
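
A sketch of the importlib sequence the commit describes (hypothetical helper name; spec_from_file_location/module_from_spec/exec_module are the standard calls):

    import importlib.util
    import os

    def load_extension_module(path):
        """Load a script as a module with correctly populated attributes."""
        name = os.path.splitext(os.path.basename(path))[0]
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # sets module.__file__ to `path`
        return module

    # Unlike exec(open(path).read()), the loaded module's __file__ now points at the
    # extension's own script, so os.path.dirname(__file__) works under --data-dir.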
Max Audron
5eee2ac398 add data-dir flag and set all user data directories based on it 2023-01-27 14:44:30 +01:00
Spaceginner
56c83e453a
Merge remote-tracking branch 'origin/master' 2023-01-27 17:35:54 +05:00
Spaceginner
9ecf1e827c
Made it only a warning 2023-01-27 17:35:24 +05:00
Ivan
63391419c1
Merge branch 'AUTOMATIC1111:master' into master 2023-01-27 17:21:48 +05:00
AUTOMATIC
9beb794e0b clarify the option to disable NaN check. 2023-01-27 13:08:00 +03:00
AUTOMATIC
6f31d2210c support detecting midas model
fix broken api for checkpoint list
2023-01-27 11:54:19 +03:00
AUTOMATIC
d2ac95fa7b remove the need to place configs near models 2023-01-27 11:28:12 +03:00
ItsOlegDm
a43fafb481 css fixes 2023-01-26 23:25:48 +02:00
AUTOMATIC
7a14c8ab45 add an option to enable sections from extras tab in txt2img/img2img
fix some style inconsistencies
2023-01-26 23:31:32 +03:00
ULTRANOX\Chris
cdc2fa209a Changed filename addition from "instrpix2pix" to the more readable ".instruct-pix2pix" for newly generated instruct pix2pix models. 2023-01-26 11:27:07 -05:00
brkirch
c4b9b07db6 Fix embeddings dtype mismatch 2023-01-26 09:00:15 -05:00
AUTOMATIC1111
645f4e7ef8
Merge pull request #7234 from brkirch/fix-full-previews
Fix full previews and --no-half-vae to work correctly with --upcast-sampling
2023-01-26 14:48:43 +03:00
ULTRANOX\Chris
9e72dc7434 Changed all references to "pix2pix" to the more precise name "instruct pix2pix". Also changed extension to instrpix2pix at least for now. 2023-01-26 06:05:40 -05:00
ULTRANOX\Chris
f90798c6b6 Added error check for the rare case a user merges a pix2pix model with a normal model using weighted sum. Also removed bad print message that interfered with merging progress bar. 2023-01-26 04:38:04 -05:00
ULTRANOX\Chris
f4ec411f2c Allow checkpoint merger to merge pix2pix models in the same way that it currently supports inpainting models. 2023-01-26 03:45:16 -05:00
Spaceginner
1619233a74
Only Linux will have max 3.11 2023-01-26 12:52:44 +05:00
brkirch
10421f93c3 Fix full previews, --no-half-vae 2023-01-26 01:43:35 -05:00
EllangoK
4d634dc592 adds components to infotext_fields
allows for loading script params
2023-01-26 00:18:41 -05:00
EllangoK
e57b5f7c55 re_param captures quotes with commas properly
and removes unnecessary regex
2023-01-25 22:36:14 -05:00
Vladimir Repin
d82d471bf7 Ask user to clarify conditions 2023-01-26 02:52:33 +03:00
AUTOMATIC
6cff440182 fix prompt editing break after first batch in img2img 2023-01-25 23:25:40 +03:00
AUTOMATIC
d1d6ce2983 add edit_image_conditioning from my earlier edits in case there's an attempt to integrate pix2pix properly
this allows using the pix2pix model in img2img, though it won't work well this way
2023-01-25 23:25:25 +03:00
AUTOMATIC1111
3cead6983e
Merge pull request #7197 from mcmonkey4eva/fix-ti-symlinks
allow symlinks in the textual inversion embeddings folder
2023-01-25 22:59:12 +03:00
AUTOMATIC1111
a85e22a127
Merge pull request #7201 from brkirch/update-macos-defaults
Update default Mac command line arguments to use --upcast-sampling instead of --no-half
2023-01-25 22:57:17 +03:00
brkirch
e0df864b8c Update arguments to use --upcast-sampling 2023-01-25 13:19:06 -05:00
Spaceginner
f5d73b6a66
Fixed typo 2023-01-25 22:56:09 +05:00
Spaceginner
0cc5f380d5
even more clarifications(?)
i have no idea what commit message should be
2023-01-25 22:41:51 +05:00
Spaceginner
2de99d62dd
some clarification 2023-01-25 22:38:28 +05:00
Ivan
dc0f05c57c
Merge branch 'AUTOMATIC1111:master' into master 2023-01-25 22:34:19 +05:00
Spaceginner
57096823fa
Remove a stacktrace from an assertion to not scare people 2023-01-25 22:33:35 +05:00
AUTOMATIC
15e89ef0f6 fix for unet hijack breaking the train tab 2023-01-25 20:11:01 +03:00
Ivan
2d92d05ca2
Merge branch 'AUTOMATIC1111:master' into master 2023-01-25 22:10:34 +05:00
Spaceginner
e425b9812b
Added Python version check 2023-01-25 22:07:48 +05:00
AUTOMATIC
789d47f832 make clicking extra networks button one more time close the extra networks UI 2023-01-25 19:55:31 +03:00
Alex "mcmonkey" Goodwin
e179b6098a allow symlinks in the textual inversion embeddings folder 2023-01-25 08:48:40 -08:00
AUTOMATIC
635499e832 add pix2pix credits 2023-01-25 19:42:26 +03:00
AUTOMATIC1111
1574e96729
Merge pull request #6510 from brkirch/unet16-upcast-precision
Add upcast options, full precision sampling from float16 UNet and upcasting attention for inference using SD 2.1 models without --no-half
2023-01-25 19:12:29 +03:00
AUTOMATIC1111
1982ef6890
Merge pull request #7138 from mykeehu/patch-4
Fix extra network thumbs label color
2023-01-25 18:59:11 +03:00
AUTOMATIC
57c1baa774 change the code for the live preview fix on OSX to be a bit more obvious 2023-01-25 18:56:23 +03:00
AUTOMATIC1111
23dafe6d86
Merge pull request #7151 from brkirch/fix-approx-nn
Fix Approx NN previews changing first generation result
2023-01-25 18:48:25 +03:00
AUTOMATIC1111
11485659dc
Merge pull request #7195 from Klace/instruct-pix2pix_model_load
Add instruct-pix2pix hijack
2023-01-25 18:33:15 +03:00
Kyle
bd9b55ee90 Update requirements transformers==4.25.1
Update requirement for transformers to version 4.25.1 to allow instruct-pix2pix demo code to work
2023-01-25 09:41:41 -05:00
Kyle
ee0a0da324 Add instruct-pix2pix hijack
Allows loading instruct-pix2pix models via same method as inpainting models in sd_models.py and sd_hijack_ip2p.py

Adds ddpm_edit.py necessary for instruct-pix2pix
2023-01-25 08:53:23 -05:00
AUTOMATIC1111
d5ce044bcd
Merge pull request #7146 from EllangoK/master
Adds X/Y/Z Grid Script
2023-01-25 11:56:26 +03:00
AUTOMATIC
1bfec873fa add an experimental option to apply loras to outputs rather than inputs 2023-01-25 11:29:46 +03:00
brkirch
e3b53fd295 Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers.

In order to make upcasting cross attention layer optimizations possible it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, unfortunately my findings were that most of the cross attention layer optimizations could not function unless v is upcast also.
2023-01-25 01:13:04 -05:00
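
A rough sketch of the upcasting described above (illustrative; the real code in sd_hijack_optimizations.py hooks the existing optimizations): leave the autocast region and compute attention with q, k, and v in float32.

    import torch

    def attention_float32(q, k, v):
        """Compute attention with q/k/v upcast to float32, then cast back."""
        dtype = q.dtype
        # Disable autocast so the matmuls really run in float32.
        with torch.autocast(device_type=q.device.type, enabled=False):
            q, k, v = q.float(), k.float(), v.float()
            scale = q.shape[-1] ** -0.5
            attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
            out = attn @ v
        return out.to(dtype)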
brkirch
84d9ce30cb Add option for float32 sampling with float16 UNet
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS) so in sd_models.py model.depth_model is removed for model.half().
2023-01-25 01:13:02 -05:00
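
A minimal sketch of the idea (an assumed wrapper, not the repo's actual hooks): the sampler keeps working in float32 while the UNet itself runs in float16, with casts at the boundary.

    import torch

    class HalfUNetWrapper(torch.nn.Module):
        """Illustrative: run a float16 UNet inside a float32 sampling loop."""
        def __init__(self, unet):
            super().__init__()
            self.unet = unet.half()

        def forward(self, x, timesteps, context):
            # Cast sampler inputs down to the UNet's dtype...
            out = self.unet(x.half(), timesteps, context.half())
            # ...and the prediction back up, so sigmas and schedules stay float32.
            return out.float()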
AUTOMATIC
48a15821de remove the pip install stuff because it does not work as i hoped it would 2023-01-25 00:49:16 +03:00
AUTOMATIC
bef1931895 add fastapi to requirements 2023-01-24 23:50:04 +03:00
AUTOMATIC
93fad28a97 print progress when installing torch
add PIP_INSTALLER_LOCATION env var to install pip if it's not installed
remove accidental call to accelerate when venv is disabled
add another env var to skip venv - SKIP_VENV
2023-01-24 21:13:05 +03:00
AUTOMATIC
5228ec8bda remove fairscale requirement, add fake fairscale to make BLIP not complain about it mk2 2023-01-24 20:30:43 +03:00
AUTOMATIC
28189985e6 remove fairscale requirement, add fake fairscale to make BLIP not complain about it 2023-01-24 20:24:27 +03:00
AUTOMATIC
dac45299dd make git commands not fail for extensions when you have spaces in webui directory 2023-01-24 20:22:19 +03:00
EllangoK
ec8774729e swaps xyz axes internally if one costs more 2023-01-24 02:53:35 -05:00
EllangoK
e46bfa5a9e handling sub grids and merging into one 2023-01-24 02:24:32 -05:00
EllangoK
9fc354e130 implements most of xyz grid script 2023-01-24 02:22:40 -05:00
EllangoK
d30ac02f28 renamed xy to xyz grid
this is mostly just so git can detect it properly
2023-01-24 02:21:32 -05:00
AUTOMATIC
602a1864b0 also return the removed field to sdapi/v1/upscalers because someone might have relied on it existing 2023-01-24 10:09:30 +03:00
AUTOMATIC
42a70d7477 repair sdapi/v1/upscalers returning bogus results 2023-01-24 10:05:45 +03:00
AUTOMATIC1111
8b903322e6
Merge pull request #7140 from vladmandic/api-decode-image
Add exception handling to API image decode
2023-01-24 09:54:20 +03:00
AUTOMATIC1111
848ef919b3
Merge pull request #7144 from catboxanon/swinir-interrupt
Make SwinIR upscaler interruptible and skippable
2023-01-24 09:51:53 +03:00
AUTOMATIC1111
393e09c1c3
Merge pull request #7148 from acncagua/improvement_launch.py
Set Linux xformers 0.0.16RC425
2023-01-24 09:38:10 +03:00
brkirch
f64af77adc Fix different first gen with Approx NN previews
The loading of the model for approx nn live previews can change the internal state of PyTorch, resulting in a different image. This can be avoided by preloading the approx nn model in advance.
2023-01-23 22:49:20 -05:00
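
The fix amounts to moving the model load out of the generation path; a hedged sketch with hypothetical names:

    class ApproxNNPreview:
        """Illustrative: load the approx NN decoder once at startup, not mid-generation."""
        def __init__(self, load_model):
            # Loading here, before any sampling starts, keeps PyTorch's internal
            # state (and thus the first generation's output) unaffected.
            self.model = load_model()

        def decode(self, latent):
            return self.model(latent)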
acncagua
078e16e4d3 Set Linux xformers 0.0.16RC425 2023-01-24 12:21:07 +09:00
catboxanon
3c47b05036
Also make SwinIR skippable 2023-01-23 22:00:27 -05:00
catboxanon
f993525820
Make SwinIR interruptible 2023-01-23 21:50:59 -05:00
Vladimir Mandic
45e270dfc8
add image decode exception handling 2023-01-23 17:11:22 -05:00
Mykeehu
82a28bfe35
Fix extra network thumbs label color
Added white color for labels.
2023-01-23 22:36:27 +01:00
AUTOMATIC
5c1cb9263f fix BLIP failing to import depending on configuration 2023-01-24 00:24:17 +03:00
AUTOMATIC1111
7ba7f4ed6e
Merge pull request #7113 from vladmandic/interrogate
Add selector to interrogate categories
2023-01-24 00:09:14 +03:00
AUTOMATIC
7b1c7ba87b add support for apostrophe in extra network names 2023-01-23 23:11:34 +03:00
AUTOMATIC
865af20d8a suppress the "A matching Triton is not available" message
you can all now stop worrying about it
2023-01-23 21:28:59 +03:00
Vladimir Mandic
04a561c11c
add option to skip interrogate categories 2023-01-23 12:29:23 -05:00
Vladimir Mandic
efa7287be0
Merge branch 'AUTOMATIC1111:master' into interrogate 2023-01-23 12:25:07 -05:00
AUTOMATIC
c6f20f7262 make loras before 0.4.0 ALSO work 2023-01-23 18:52:55 +03:00
AUTOMATIC1111
171a5b3bb9
Merge pull request #7032 from gmq/extra-network-styles
Extra network view style
2023-01-23 18:46:37 +03:00
AUTOMATIC1111
756a2c3c0f
Merge pull request #7116 from vladmandic/api-image-format
API should use same image format as specified in WebUI settings
2023-01-23 18:37:48 +03:00
Guillermo Moreno
dbcb6fac77 feat(extra-networks): replace icon background with border 2023-01-23 12:14:01 -03:00
AUTOMATIC
e407d1af89 add support for loras trained on kohya's scripts 0.4.0 (alphas) 2023-01-23 18:12:51 +03:00
Vladimir Mandic
6e1b296baf
api-image-format 2023-01-23 10:10:59 -05:00
AUTOMATIC
e8c3d03f7d a possible fix for broken image upscaling 2023-01-23 17:59:58 +03:00
AUTOMATIC
7ff1ef77dd add a message about new torch/xformers version and a way to upgrade by specifying a commandline flag 2023-01-23 17:17:31 +03:00
AUTOMATIC1111
56f63cd498
Merge pull request #5939 from petalas/petalas/torch-upgrade
upgrading torch, torchvision, xformers (windows) to use cu117
2023-01-23 17:15:51 +03:00
Vladimir Mandic
925dd09c91
improve interrogate 2023-01-23 09:03:17 -05:00
AUTOMATIC
59146621e2 better support for xformers flash attention on older versions of torch 2023-01-23 16:40:20 +03:00
AUTOMATIC
3fa482076a Merge remote-tracking branch 'takuma104/xformers-flash-attention' 2023-01-23 16:01:53 +03:00
AUTOMATIC
194cbd065e fix open directory button failing 2023-01-23 15:50:32 +03:00
AUTOMATIC1111
97ba01a213
Merge pull request #7081 from EllangoK/xy-hires
Adds Hires Steps to X/Y Plot, and updates step calculation
2023-01-23 15:24:11 +03:00
AUTOMATIC1111
663353098e
Merge pull request #7031 from EllangoK/master
Fixes various button overflowing UI and compact checkbox
2023-01-23 15:22:06 +03:00
AUTOMATIC1111
74608300d1
Merge pull request #7093 from Shondoit/fix-dark-mode
Fix dark mode
2023-01-23 15:09:26 +03:00
AUTOMATIC
41265a026d third time's the charm 2023-01-23 14:50:20 +03:00
AUTOMATIC
fabdae089e add missing import to previous commit 2023-01-23 14:42:49 +03:00
Shondoit
669dbd9725 Fix dark mode
Fixes #7048

Co-Authored-By: J.J. Tolton <jjtolton@gmail.com>
2023-01-23 09:54:42 +01:00
AUTOMATIC
b5230197a6 rework extras tab to use script system 2023-01-23 09:24:43 +03:00
Guillermo Moreno
f80ff3c1e4 feat(extra-networks): remove view dropdown 2023-01-22 22:01:24 -03:00
EllangoK
8a3f85c4cc adds hires steps to x/y plot and fixes total_steps calculation 2023-01-22 17:08:08 -05:00
Guillermo Moreno
66eef11ce7 feat(extra-networks): add default view setting 2023-01-22 12:18:21 -03:00
Guillermo Moreno
985c0b8e9a feat(extra-networks): add thumbs view style 2023-01-22 12:18:21 -03:00
AUTOMATIC
68303c96e5 split oversize extras.py to postprocessing.py 2023-01-22 15:38:39 +03:00
Andrey
c56b367122 Split history extras.py to postprocessing.py 2023-01-22 15:26:41 +03:00
Andrey
d63340a485 Split history extras.py to postprocessing.py 2023-01-22 15:26:40 +03:00
Andrey
b238b14ee4 Split history extras.py to postprocessing.py 2023-01-22 15:26:40 +03:00
Andrey
43ac9ff205 Split history extras.py to postprocessing.py 2023-01-22 15:26:40 +03:00
AUTOMATIC
c98cb0f8ec amend previous commit to work in a proper fashion when saving previews 2023-01-22 11:04:02 +03:00
AUTOMATIC
35419b2746 add an option to reorder tabs for extra networks 2023-01-22 11:00:05 +03:00
AUTOMATIC
159f05314d make extra networks search case-insensitive 2023-01-22 10:30:55 +03:00
AUTOMATIC
837ec11828 hint for discarding layers 2023-01-22 10:17:26 +03:00
AUTOMATIC
112416d041 add option to discard weights in checkpoint merger UI 2023-01-22 10:17:12 +03:00
AUTOMATIC
0792fae078 fix missing field for aesthetic embedding extension 2023-01-22 08:20:48 +03:00
AUTOMATIC
2621566153 attention ctrl+up/down enhancements 2023-01-22 08:07:18 +03:00
AUTOMATIC1111
fbb25fabf6
Merge pull request #7024 from mezotaken/master
Fix followup to #7022
2023-01-22 07:19:38 +03:00
EllangoK
5560150fda aligns the axis buttons in x/y plot 2023-01-21 16:58:45 -05:00
EllangoK
bf457b30fb compact checkbox and fix copy image btn overflow
also fixes type for #tab_extensions in style.css
2023-01-21 16:21:33 -05:00
AUTOMATIC
f2eae6127d fix broken textual inversion extras tab 2023-01-22 00:16:26 +03:00
Vladimir Repin
e5520232e8 make current_axis_options class variable 2023-01-22 00:06:06 +03:00
AUTOMATIC
fe7a623e6b add a slider for default value of added extra networks 2023-01-22 00:02:52 +03:00
AUTOMATIC
78f59a4e01 enable compact view for train tab
prevent previews from ruining hypernetwork training
2023-01-22 00:02:51 +03:00
AUTOMATIC1111
216f67ec7c
Merge pull request #6938 from DaniAndTheWeb/AMD_automation-patch1
AMD environment variable and experimental APU support (Renoir)
2023-01-21 23:28:48 +03:00
AUTOMATIC1111
abf11215e0
Merge pull request #6955 from EllangoK/master
Adds descriptions for merging methods in UI
2023-01-21 23:17:06 +03:00
AUTOMATIC1111
1ceca5c726
Merge pull request #7020 from EllangoK/ui-fix
Fixes ui issues with checkbox and hires. sections
2023-01-21 23:15:41 +03:00
AUTOMATIC1111
bd4a24e0f9
Merge pull request #7022 from mezotaken/master
X/Y plot: Fix auto fill and repair separate axis options
2023-01-21 23:12:52 +03:00
AUTOMATIC
500d9a32c7 add --lora-dir commandline option 2023-01-21 23:11:37 +03:00
AUTOMATIC
4a8fe09652 remove the double loading text 2023-01-21 23:06:18 +03:00
AUTOMATIC
e4e0918f58 remove timestamp for js files, reformat code 2023-01-21 22:57:19 +03:00
Vladimir Repin
ac2eb97db9 fix auto fill and repair separate axisoptions 2023-01-21 22:43:37 +03:00
AUTOMATIC1111
7c8852b8e7
Merge pull request #7015 from jjtolton/serve-static-js
Compile and serve js files via `src` instead of embedded inline scripts
2023-01-21 22:43:18 +03:00
EllangoK
861fe750b0 fixes ui issues with checkbox and hires. sections 2023-01-21 14:26:07 -05:00
James Tolton
035459c9a2 remove dead import 2023-01-21 14:11:13 -05:00
James Tolton
50059ea661 serve individually listed javascript files vs single compiled file 2023-01-21 14:07:48 -05:00
James Tolton
17af0fb955 remove commented out lines 2023-01-21 13:27:05 -05:00
James Tolton
f726df8a2f Compile and serve js from /statica instead of inline in html 2023-01-21 12:59:05 -05:00
AUTOMATIC
f53527f778 make it run on gradio < 3.16.2 2023-01-21 20:07:14 +03:00
AUTOMATIC
3deea34135 extract extra network data from prompt earlier 2023-01-21 19:36:08 +03:00
AUTOMATIC
a2749ec655 load Lora from .ckpt also 2023-01-21 18:52:45 +03:00
AUTOMATIC
63b824376c add --gradio-queue option to enable gradio queue 2023-01-21 18:47:54 +03:00
AUTOMATIC
424cefe118 add search box to extra networks 2023-01-21 17:20:24 +03:00
AUTOMATIC
92fb1096db make it so that extra networks are not removed from infotext 2023-01-21 16:41:25 +03:00
AUTOMATIC
855b9e3d1c Lora support!
update readme to reflect some recent changes
2023-01-21 16:15:53 +03:00
Takuma Mori
3262e825cc add --xformers-flash-attention option & impl 2023-01-21 17:42:04 +09:00
AUTOMATIC
cbfb463258 fix failing tests by removing them :^) 2023-01-21 11:22:16 +03:00
AUTOMATIC
184e23eb89 relocate tool buttons next to generate button
prevent extra network tabs from putting images into wrong prompts
prevent settings leaking into prompt
2023-01-21 09:58:57 +03:00
AUTOMATIC
6d805b669e make CLIP interrogator download original text files if the directory does not exist
remove random artist built-in extension (to be re-added as a normal extension on demand)
remove artists.csv (but what does it mean????????????????????)
make interrogate buttons show Loading... when you click them
2023-01-21 09:14:27 +03:00
AUTOMATIC
40ff6db532 extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
DaniAndTheWeb
e0b6092bc9
Update webui.sh 2023-01-20 15:31:27 +01:00
AUTOMATIC
e33cace2c2 fix ctrl+up/down that stopped working 2023-01-20 12:19:30 +03:00
AUTOMATIC
7d3fb5cb03 add margin to interrogate column in img2img UI 2023-01-20 12:12:02 +03:00
AUTOMATIC
20a59ab3b1 move token counter to the location of the prompt, add token counting for the negative prompt 2023-01-20 10:18:41 +03:00
EllangoK
98466da4bc adds descriptions for merging methods in ui 2023-01-20 00:48:15 -05:00
AUTOMATIC
6c7a50d783 remove some unnecessary logging to javascript console 2023-01-20 08:36:37 +03:00
DaniAndTheWeb
fd651bd0bc
Update webui.sh 2023-01-20 00:21:51 +01:00
DaniAndTheWeb
0684a6819d
Usage explanation for Renoir users 2023-01-20 00:21:05 +01:00
DaniAndTheWeb
bc9442d1ec
Merge pull request #2 from DaniAndTheWeb/DaniAndTheWeb-sd-Renoir
Experimental support for AMD Renoir
2023-01-19 23:55:56 +01:00
DaniAndTheWeb
912285ae64
Experimental support for Renoir
This adds GFX version 9.0.0 in order to use Renoir GPUs with at least 4 GB of VRAM (it's possible to increase the virtual VRAM from the BIOS settings of some vendors). This will only work if the remaining RAM is at least 12 GB, to avoid the system becoming unresponsive on launch.
This change also switches the GPU check to a case statement so more GPUs can be added efficiently.
2023-01-19 23:42:12 +01:00
DaniAndTheWeb
36364bd76c
GFX env just for RDNA 1 and 2
This commit specifies which GPUs should use the GFX variable; RDNA 3 is excluded since it uses a newer GFX version
2023-01-19 20:05:49 +01:00
DaniAndTheWeb
48045545d9
Small reformat of the GPU check 2023-01-19 19:23:40 +01:00
DaniAndTheWeb
c09fb3d8f1
Simplify GPU check 2023-01-19 19:21:02 +01:00
AUTOMATIC1111
b165e341e7
Merge pull request #6930 from poiuty/master
internal progress relative path
2023-01-19 20:41:25 +03:00
AUTOMATIC
6073456c83 write a comment for fix_checkpoint function 2023-01-19 20:39:10 +03:00
AUTOMATIC1111
51517f3ea6
Merge pull request #6936 from EllangoK/master
Fixes minor typos around run_modelmerger
2023-01-19 19:58:16 +03:00
DaniAndTheWeb
4599e8ad0a
Environment variable on launch just for Navi cards
Setting HSA_OVERRIDE_GFX_VERSION=10.3.0 for all AMD cards seems to break compatibility for Polaris and Vega cards, so it should just be enabled on Navi
2023-01-19 17:00:51 +01:00
AUTOMATIC
c1928cdd61 bring back short hashes to sd checkpoint selection 2023-01-19 18:58:08 +03:00
EllangoK
f2ae252987 fixes minor typos around run_modelmerger 2023-01-19 10:24:17 -05:00
AUTOMATIC
d1ea518dea remember the list of checkpoints after you press refresh button and reload the page 2023-01-19 18:07:37 +03:00
poiuty
81276cde90
internal progress relative path 2023-01-19 16:56:45 +03:00
AUTOMATIC
c12d7ddd72 add handling to some places in javascript that can potentially cause issues #6898 2023-01-19 16:08:09 +03:00
AUTOMATIC1111
79d802b48a
Merge pull request #6926 from vt-idiot/patch-2
Update shared.py
2023-01-19 14:15:59 +03:00
vt-idiot
b271e22f7a
Update shared.py
`Witdth/Height` was driving me insane. -> `Width/Height`
2023-01-19 06:12:19 -05:00
AUTOMATIC1111
aa60fc6660
Merge pull request #6922 from brkirch/cumsum-fix
Improve cumsum fix for MPS
2023-01-19 13:18:34 +03:00
AUTOMATIC1111
0f9cacaa0e
Merge pull request #6844 from guaneec/crop-ui
Add auto-sized cropping UI
2023-01-19 13:11:05 +03:00
dan
2985b317d7 Fix of fix 2023-01-19 17:39:30 +08:00
dan
18a09c7e00 Simplification and bugfix 2023-01-19 17:36:23 +08:00
AUTOMATIC
54674674b8 allow saving at half precision when there is only one checkpoint in merger tab 2023-01-19 12:12:09 +03:00
AUTOMATIC
0f5dbfffd0 allow baking in VAE in checkpoint merger tab
do not save config if it's the default for checkpoint merger tab
change file naming scheme for checkpoint merger tab
allow just saving A without any merging for checkpoint merger tab
some stylistic changes for UI in checkpoint merger tab
2023-01-19 10:39:51 +03:00
AUTOMATIC
c7e50425f6 add progress bar to modelmerger 2023-01-19 09:25:37 +03:00
AUTOMATIC
7cfc645030 eliminate repetition of code in #6910 2023-01-19 08:53:50 +03:00
AUTOMATIC1111
01b1061a0b
Merge pull request #6910 from EllangoK/master
Check model name values are set before merging
2023-01-19 08:48:27 +03:00
AUTOMATIC
308b51012a fix an unlikely division by 0 error 2023-01-19 08:41:37 +03:00
AUTOMATIC1111
6620acff8c
Merge pull request #6906 from koalazak/fix_missing_lspci
fixing error using lspci on macOS
2023-01-19 08:35:31 +03:00
EllangoK
26a6a78b16 only lookup tertiary model if theta_func1 is set 2023-01-18 21:21:52 -05:00
EllangoK
99207bc816 check model name values are set before merging 2023-01-18 19:13:15 -05:00
facu
956263b8a4 fixing error using lspci on macOS 2023-01-18 19:15:53 -03:00
AUTOMATIC
bb0978ecfd fix hires fix ui weirdness caused by gradio update 2023-01-19 00:44:51 +03:00
AUTOMATIC1111
a8322ad75b
Merge pull request #6854 from EllangoK/master
Saves Extra Generation Parameters to params.txt
2023-01-18 23:25:56 +03:00
AUTOMATIC1111
43fd6eaab8
Merge pull request #6851 from ddPn08/master
Add `--vae-dir` argument
2023-01-18 23:23:09 +03:00
AUTOMATIC
b186d44dcd use DDIM in hires fix if the sampler is PLMS 2023-01-18 23:20:23 +03:00
AUTOMATIC1111
3b61007a66
Merge pull request #6888 from vladmandic/localization-typo
Fix typo causing syntax error in JS
2023-01-18 23:13:30 +03:00
AUTOMATIC1111
c94abc8862
Merge pull request #6895 from mezotaken/interrogate-all-tabs
Process interrogation on all img2img subtabs
2023-01-18 23:07:41 +03:00
AUTOMATIC
924e222004 add option to show/hide warnings
removed hiding warnings from LDSR
fixed/reworked a few places that produced warnings
2023-01-18 23:04:24 +03:00
AUTOMATIC
889b851a52 make live previews not obscure multiselect dropdowns 2023-01-18 20:29:44 +03:00
Vladimir Repin
8683427bd9 Process interrogation on all img2img subtabs 2023-01-18 20:25:52 +03:00
Vladimir Mandic
05a779b0cd
fix syntax error 2023-01-18 09:47:38 -05:00
AUTOMATIC
6faae23239 repair broken quicksettings when some form-requiring options are added to it 2023-01-18 14:33:09 +03:00
AUTOMATIC
0c5913b9c2 re-enable image dragging on non-firefox browsers 2023-01-18 14:14:50 +03:00
AUTOMATIC
26fd444811 bump gradio to 3.16.2
change style selection to multiselect dropdown
2023-01-18 13:59:45 +03:00
AUTOMATIC
d8f8bcb821 enable progressbar without gallery 2023-01-18 13:20:47 +03:00
AUTOMATIC
dac59b9b07 return progress percentage to title bar 2023-01-18 06:13:45 +03:00
brkirch
a255dac4f8 Fix cumsum for MPS in newer torch
The prior fix assumed that testing int16 was enough to determine if a fix is needed, but a recent fix for cumsum has int16 working but not bool.
2023-01-17 20:54:18 -05:00
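
A sketch of the kind of probe the commit implies (assumed helper): compare cumsum results between CPU and MPS for both int16 and bool, since newer torch builds fixed one but not the other.

    import torch

    def mps_cumsum_is_broken():
        """Return True if torch.cumsum gives wrong results on MPS (illustrative probe)."""
        if not (hasattr(torch.backends, "mps") and torch.backends.mps.is_available()):
            return False
        for dtype in (torch.int16, torch.bool):
            t = torch.ones(10, dtype=dtype)
            expected = t.cumsum(0)                  # reference result on CPU
            actual = t.to("mps").cumsum(0).cpu()
            if not torch.equal(expected, actual):
                return True
        return False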
ddPn08
d906f87043
fix typo 2023-01-18 07:52:10 +09:00
AUTOMATIC
3a0d6b7729 make it so that PNG images with EXIF do not lose parameters in PNG info tab 2023-01-17 23:54:23 +03:00
EllangoK
5e15a0b422 Changed params.txt save to after manual init call 2023-01-17 11:42:44 -05:00
ddPn08
6e08da2c31
Add --vae-dir argument 2023-01-17 23:50:41 +09:00
AUTOMATIC
38b7186e6e update sending input event in JavaScript to not cause exception in browser https://github.com/gradio-app/gradio/issues/2981 2023-01-17 14:15:47 +03:00
AUTOMATIC
aede265f1d Fix unable to find Real-ESRGAN model info error (AttributeError: 'NoneType' object has no attribute 'data_path') #6841 #5170 2023-01-17 13:57:55 +03:00
dan
4688bfff55 Add auto-sized cropping UI 2023-01-17 17:16:43 +08:00
AUTOMATIC
c361b89026 disable the new NaN check for the CI 2023-01-17 11:05:01 +03:00
AUTOMATIC1111
93d3b820d0
Merge pull request #6839 from nonetrix/patch-1
Fix typo
2023-01-17 07:12:40 +03:00
fuggy
eb2223340c
Fix typo 2023-01-16 21:50:30 -06:00
Nick Petalas
c091cf1b4a upgrading torch, torchvision, xformers (windows) to use cu117 2023-01-16 20:35:11 +00:00
AUTOMATIC
e0e8005009 make StableDiffusionProcessing class not hold a reference to shared.sd_model object 2023-01-16 23:09:08 +03:00
AUTOMATIC
9991967f40 Add a check and explanation for tensor with all NaNs. 2023-01-16 22:59:46 +03:00
AUTOMATIC
52f6e94338 add --skip-install option to prevent running pip in launch.py and speedup launch a bit 2023-01-16 20:13:23 +03:00
AUTOMATIC
55947857f0 add a button for XY Plot to fill in available values for axes that support this 2023-01-16 17:36:56 +03:00
AUTOMATIC1111
d073637e10
Merge pull request #6803 from space-nuko/xy-grid-performance-improvement
Optimize XY grid to run slower axes fewer times
2023-01-16 16:14:41 +03:00
AUTOMATIC
064983c0ad return an option to hide progressbar 2023-01-16 12:56:30 +03:00
AUTOMATIC1111
fcbe0f35fb
Merge pull request #6802 from space-nuko/xy-grid-swap-axes-button
Add swap axes button for XY Grid
2023-01-16 11:50:49 +03:00
AUTOMATIC
972f578507 fix problems related to checkpoint/VAE switching in XY plot 2023-01-16 09:27:59 +03:00
space-nuko
2144c2eb7f Add swap axes button for XY Grid 2023-01-15 21:41:58 -08:00
space-nuko
029260b4ca Optimize XY grid to run slower axes fewer times 2023-01-15 21:40:57 -08:00
AUTOMATIC1111
dd292a925e
Merge pull request #6796 from space-nuko/faster-xy-grid-cancellation
Make XY grid cancellation much faster
2023-01-16 08:24:16 +03:00
space-nuko
f202ff1901 Make XY grid cancellation much faster 2023-01-15 19:43:34 -08:00
AUTOMATIC
ff6a5bcec1 bugfix for previous commit 2023-01-16 01:28:20 +03:00
AUTOMATIC
3f887f7f61 support old configs that say "auto" for sd_vae
change sd_vae_as_default to True by default as it's a more sensible setting
2023-01-16 00:44:52 +03:00
AUTOMATIC1111
300d4a80df
Merge pull request #6785 from mezotaken/master
Escape special symbols in paths
2023-01-15 23:39:58 +03:00
AUTOMATIC
9a43acf94e add background color for live previews in dark mode 2023-01-15 23:37:34 +03:00
AUTOMATIC
3db22e6ee4 rename masking to inpaint in UI
make inpaint go to the right place for users who don't have it in config string
2023-01-15 23:32:38 +03:00
AUTOMATIC1111
30cfe4ed9b
Merge pull request #6758 from Poktay/allow_reorder_masking_controls
allow reordering of inpaint masking controls (like the other sections can be reordered)
2023-01-15 23:24:09 +03:00
AUTOMATIC
89314e79da fix an error that happens when you send an empty image from txt2img to img2img 2023-01-15 23:23:24 +03:00
AUTOMATIC
fc25af3939 remove unneeded log from progressbar 2023-01-15 23:23:24 +03:00
Vladimir Repin
db9b116179 fix paths with parentheses 2023-01-15 23:13:58 +03:00
AUTOMATIC1111
f3167b10ce
Merge pull request #6780 from vladmandic/train-logging
add fields to settings file
2023-01-15 22:55:45 +03:00
AUTOMATIC1111
d6fa8e92ca
Merge pull request #6782 from aria1th/fix-hypernetwork-loss
Fix tensorboard-hypernetwork integration correctly
2023-01-15 22:55:06 +03:00
AUTOMATIC1111
4385449933
Merge pull request #6778 from pangbo13/master
Fix unexpected behavior when show_progress_every_n_steps is set to -1
2023-01-15 22:54:14 +03:00
AUTOMATIC
8e2aeee4a1 add BREAK keyword to end current text chunk and start the next 2023-01-15 22:29:53 +03:00
aria1th
13445738d9 Fix tensorboard related functions 2023-01-16 03:02:54 +09:00
aria1th
598f7fcd84 Fix loss_dict problem 2023-01-16 02:46:21 +09:00
Vladimir Mandic
110d1a2d59
add fields to settings file 2023-01-15 12:41:00 -05:00
AUTOMATIC
205991df78 Merge remote-tracking branch 'origin/fix-mean-loss' 2023-01-15 20:30:42 +03:00
AUTOMATIC
b6ce041cdf put interrupt and skip buttons back where they were 2023-01-15 20:29:55 +03:00
AUTOMATIC
a534bdfc80 add setting for progressbar update period 2023-01-15 20:29:55 +03:00
AUTOMATIC
f6aac4c65a eliminate flicker for live previews 2023-01-15 20:29:55 +03:00
AngelBottomless
16f410893e
fix missing 'mean loss' for tensorboard integration 2023-01-16 02:08:47 +09:00
pangbo13
388708f7b1 fix when show_progress_every_n_steps == -1 2023-01-16 00:56:24 +08:00
AUTOMATIC1111
ce13ced5dc
Merge pull request #6772 from vladmandic/sha-calc-optimization
increase block size
2023-01-15 19:09:46 +03:00
AUTOMATIC1111
006997d180
Merge pull request #6770 from brkirch/approx-nn-fix
Fix Approx NN not working on torch devices other than CUDA
2023-01-15 19:09:06 +03:00
AUTOMATIC
d8b90ac121 big rework of progressbar/preview system to allow multiple users to run prompts at the same time and not get previews of each other 2023-01-15 18:51:04 +03:00
Vladimir Mandic
f0312565e5
increase block size 2023-01-15 09:42:34 -05:00
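
For reference, checkpoint hashing boils down to chunked sha256; a minimal sketch (block size value assumed) showing where the block size enters:

    import hashlib

    def sha256_of_file(path, blksize=1024 * 1024):
        """Hash a file in chunks; larger blocks mean fewer Python-level iterations."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(blksize), b""):
                h.update(chunk)
        return h.hexdigest()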
brkirch
eef1990a5e Fix Approx NN on devices other than CUDA 2023-01-15 08:13:33 -05:00
AUTOMATIC1111
ebfdd7baeb
Merge pull request #6709 from DaniAndTheWeb/master
Automatic setup for AMD GPUs on Linux
2023-01-15 13:48:25 +03:00
AUTOMATIC
d97f467c0d add license file 2023-01-15 09:24:48 +03:00
AUTOMATIC1111
4d158c1879
Merge pull request #6756 from Poktay/ar_fix_for_inpaint
Fix Aspect Ratio Overlay / AROverlay to work with Inpaint & Sketch
2023-01-15 08:10:57 +03:00
Josh R
9ef41df6f9 add inpaint masking controls to orderable section that the settings can order 2023-01-14 15:26:45 -08:00
Josh R
cbbdfc3609 Fix Aspect Ratio Overlay / AROverlay to work with Inpaint & Sketch 2023-01-14 14:45:08 -08:00
DaniAndTheWeb
ba077e2110
Fix TORCH_COMMAND check 2023-01-14 23:19:52 +01:00
DaniAndTheWeb
2e172cf831
Only set TORCH_COMMAND if it wasn't set in webui-user 2023-01-14 22:25:32 +01:00
Vladimir Mandic
ce9827a7c5
Merge pull request #6731 from vladmandic/state_server_start
Add server start time to state info
2023-01-14 16:03:29 -05:00
AUTOMATIC1111
beeec2b598
Merge pull request #6728 from bbc-mc/exclude_clip_index_from_merge_target
Exclude clip index from merge
2023-01-14 23:12:31 +03:00
AUTOMATIC
86359535d6 add buttons to copy images between img2img tabs 2023-01-14 22:43:01 +03:00
AUTOMATIC
f8c5124785 typo? 2023-01-14 20:00:12 +03:00
AUTOMATIC
a5bbcd2153 fix bug with "Ignore selected VAE for..." option completely disabling VAE selection
rework VAE resolving code to be more simple
2023-01-14 19:56:09 +03:00
Vladimir Mandic
fad850fc3d
add server_start to shared.state 2023-01-14 11:18:05 -05:00
DaniAndTheWeb
c4ba34928e
Quick format fix 2023-01-14 15:58:50 +01:00
DaniAndTheWeb
6192a222bf
Export TORCH_COMMAND for AMD from the webui 2023-01-14 15:46:23 +01:00
DaniAndTheWeb
6296129353
Revert detection code 2023-01-14 15:45:07 +01:00
DaniAndTheWeb
934cba0f4c
Delete detection.py 2023-01-14 15:43:29 +01:00
AUTOMATIC
69781031e7 simplify expression in prompts from file script 2023-01-14 16:45:39 +03:00
AUTOMATIC1111
27c1b3b816
Merge pull request #6707 from zaprudin/bugfix/prompts-from-file-progress-bar
fix progress bar behavior for "Prompts from file or textbox" script
2023-01-14 16:41:04 +03:00
AUTOMATIC
f94a215abe add an option to choose what you want to see in live preview (Live preview subject) and moves live preview settings to its own tab 2023-01-14 16:29:23 +03:00
AUTOMATIC
08c6f009a5 load hashes from cache for checkpoints that have them
add checkpoint hash to footer
2023-01-14 15:55:40 +03:00
AUTOMATIC
865228a837 change style dropdowns to multiselect 2023-01-14 14:56:39 +03:00
DaniAndTheWeb
54fa77facc
Fix detection script on macos
This fixes the script on macos
2023-01-14 12:10:45 +01:00
bbc_mc
5f8685237e Exclude clip index from merge 2023-01-14 20:09:32 +09:00
AUTOMATIC
6eb72fd13f bump gradio to 3.16.1 2023-01-14 13:38:10 +03:00
AUTOMATIC
febd2b722e update key to use with checkpoints' sha256 in cache 2023-01-14 13:37:55 +03:00
AUTOMATIC
f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC
a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
DaniAndTheWeb
a407c9f014
Automatic torch install for amd on linux
This commit allows the launch script to automatically download ROCm's torch version for AMD GPUs using an external GPU detection script. It also prints the operating system and GPU in use.
2023-01-13 19:22:23 +01:00
DaniAndTheWeb
eaebcf6383
GPU detection script
This commit adds a script that detects which GPU is currently used in Windows and Linux
2023-01-13 19:20:18 +01:00
DaniAndTheWeb
cbf4b3472b
Automatic launch argument for AMD GPUs
This commit adds a few lines to detect if the system has an AMD GPU and adds an environment variable needed for torch to recognize the GPU.
2023-01-13 19:18:56 +01:00
Zaprudin Aleksey
d753a9df95 fix progress bar behavior for "Prompts from file or textbox" script 2023-01-13 22:25:33 +05:00
AUTOMATIC
82725f0ac4 fix a bug caused by merge 2023-01-13 15:04:37 +03:00
AUTOMATIC1111
1849f6eb80
Merge pull request #3264 from Melanpan/tensorboard
Add support for Tensorboard (training)
2023-01-13 14:58:03 +03:00
AUTOMATIC1111
9cd7716753
Merge branch 'master' into tensorboard 2023-01-13 14:57:38 +03:00
AUTOMATIC1111
544e7a233e
Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file
add gradient settings to training settings log files
2023-01-13 14:45:32 +03:00
AUTOMATIC1111
3ad1fdd99b
Merge pull request #6684 from space-nuko/save-extension-params-to-last-params
Fix extension parameters not being saved to last used parameters
2023-01-13 14:44:39 +03:00
AUTOMATIC1111
7bf3cfc427
Merge pull request #6685 from space-nuko/script-callback-fix-infotext
Add script callback for fixing infotext parameters
2023-01-13 14:43:42 +03:00
AUTOMATIC
a176d89487 print bucket sizes for training without resizing images #6620
fix an error when generating a picture with an embedding in it
2023-01-13 14:32:15 +03:00
AUTOMATIC1111
486bda9b33
Merge pull request #6620 from guaneec/varsize_batch
Enable batch_size>1 for mixed-sized training
2023-01-13 14:03:31 +03:00
Josh R
0b262802b8 add gradient settings to training settings log files 2023-01-12 17:31:05 -08:00
space-nuko
6c88eaed4f Add script callback for fixing infotext parameters 2023-01-12 13:50:09 -08:00
space-nuko
88416ab5ff Fix extension parameters not being saved to last used parameters 2023-01-12 13:46:59 -08:00
AUTOMATIC1111
d7aec59c4e
Merge pull request #6667 from Shondoit/zero-vector-ti
Allow creation of zero vectors for TI
2023-01-12 22:05:10 +03:00
AUTOMATIC
6ffefdcc9f fix js errors when restarting UI 2023-01-12 19:47:44 +03:00
AUTOMATIC
5623a3e7b1 fix send to inpaint sending you to wrong place 2023-01-12 19:47:33 +03:00
Shondoit
d48dcbd2b2 Add zero vector feature to hints.js
Also add the note that some tokens might be skipped. Not everyone is aware of this.
2023-01-12 09:53:35 +01:00
Shondoit
d52a80f7f7 Allow creation of zero vectors for TI 2023-01-12 09:22:29 +01:00
AUTOMATIC
0b8911d883 img2img UI rework: obsolete --gradio-img2img-tool --gradio-inpaint-tool and always show all tools each in own tab 2023-01-11 20:33:24 +03:00
AUTOMATIC1111
590ff5ce5b
Merge pull request #6647 from vladmandic/update-progress-api
add textinfo to progress api response
2023-01-11 19:06:08 +03:00
AUTOMATIC1111
6d7f3d1072
Merge pull request #6648 from vladmandic/progress-description
Set TQDM progress bar and state textinfo description
2023-01-11 19:04:54 +03:00
AUTOMATIC1111
97ff69eff3
Merge pull request #6628 from catboxanon/fix/alternating-words-emphasis
Fix prompt parser default step transformer
2023-01-11 18:56:24 +03:00
AUTOMATIC
4bd490727e fix for an error caused by skipping initialization, for realsies this time: TypeError: expected str, bytes or os.PathLike object, not NoneType 2023-01-11 18:54:13 +03:00
Vladimir Mandic
3f43d8a966
set descriptions 2023-01-11 10:28:55 -05:00
Vladimir Mandic
39ea251945
add textinfo to progress response 2023-01-11 10:23:51 -05:00
catboxanon
0b38b72d31
Remove compat option for prompt parser 2023-01-11 09:01:37 -05:00
catboxanon
ab388d6f8b
Remove compat option check for prompt parser 2023-01-11 08:59:47 -05:00
catboxanon
035f2af050
Merge branch 'AUTOMATIC1111:master' into fix/alternating-words-emphasis 2023-01-11 08:58:43 -05:00
AUTOMATIC1111
45a8b758a7
Merge pull request #6635 from demiurge-ash/patch-1
Fix keyboard navigation in modal image viewer
2023-01-11 12:00:15 +03:00
Alexey Shirokov
b202714b65
Fix keyboard navigation in modal image viewer 2023-01-11 11:41:50 +03:00
AUTOMATIC
1a23dc32ac possible fix for fallback for fast model creation from config, attempt 2 2023-01-11 10:34:36 +03:00
AUTOMATIC
4fdacd31e4 possible fix for fallback for fast model creation from config 2023-01-11 10:24:56 +03:00
AUTOMATIC
954091697f add an option to copy config from one of models in checkpoint merger 2023-01-11 09:10:07 +03:00
AUTOMATIC1111
3e20244b0f
Merge pull request #6625 from PlasmaPower/textual-inversion-safetensors
Support loading textual inversion embeddings from safetensors files
2023-01-11 08:21:22 +03:00
AUTOMATIC1111
9757c0b3b2
Merge pull request #6633 from space-nuko/expose-xy-grid-to-extensions
Expose XY Grid module to extensions
2023-01-11 08:06:41 +03:00
space-nuko
37a2301121 Expose the compiled class module of scripts to extensions 2023-01-10 20:30:09 -08:00
catboxanon
7e45fba55b
Fix prompt parser default step transformer w/ test 2023-01-10 21:47:03 -05:00
catboxanon
5830095b73
Add old prompt parser compat option 2023-01-10 21:43:24 -05:00
Lee Bousfield
f9706acf43
Support loading textual inversion embeddings from safetensors files 2023-01-10 18:40:34 -07:00
AUTOMATIC
9cfd10cdef repair #6612 2023-01-11 01:27:00 +03:00
dan
6be644fa04 Enable batch_size>1 for mixed-sized training 2023-01-11 05:31:58 +08:00
AUTOMATIC
29fb532764 change color selector in settings to be part of form 2023-01-10 23:47:07 +03:00
AUTOMATIC1111
2bd2b8b7cf
Merge pull request #6612 from mezotaken/master
Make VENV environment variable accept absolute path instead of relative path.
2023-01-10 23:45:34 +03:00
Vladimir Repin
e2c8584f75 make VENV envvar accept absolute path instead of relative 2023-01-10 22:26:49 +03:00
AUTOMATIC
50fb20cedc Merge branch 'disable_initialization' 2023-01-10 19:11:47 +03:00
AUTOMATIC
0f8603a559 add support for transformers==4.25.1
add fallback for when quick model creation fails
2023-01-10 17:46:59 +03:00
AUTOMATIC
ce3f639ec8 add more stuff to ignore when creating model from config
prevent .vae.safetensors files from being listed as stable diffusion models
2023-01-10 16:51:04 +03:00
AUTOMATIC1111
a0ef416aa7
Merge pull request #6596 from mezotaken/master
Fix issues with testing process
2023-01-10 15:14:03 +03:00
AUTOMATIC
0c3feb202c disable torch weight initialization and CLIP downloading/reading checkpoint to speedup creating sd model from config 2023-01-10 14:08:29 +03:00
Vladimir Repin
76a21b9626 clear envvar, add assertion 2023-01-10 12:47:52 +03:00
AUTOMATIC
ef75c98053 Split history ui.py to ui_progress.py 2023-01-10 12:29:45 +03:00
Andrey
54dd5d6634 Split history ui.py to ui_progress.py 2023-01-10 11:54:49 +03:00
Andrey
f9c2147dfb Split history ui.py to ui_progress.py 2023-01-10 11:54:49 +03:00
Andrey
27ea6949d3 Split history ui.py to ui_progress.py 2023-01-10 11:54:48 +03:00
Andrey
e9f8292a3a Split history ui.py to ui_progress.py 2023-01-10 11:54:48 +03:00
AUTOMATIC1111
7ec275fae7
Merge pull request #6590 from aria1th/varaible-dropout-rate-rework
Variable dropout rate
2023-01-10 11:29:26 +03:00
aria1th
a4a5475cfa Variable dropout rate
Implements variable dropout rate from #4549

Fixes the hypernetwork multiplier being able to be modified during training; also reduces user error by setting the multiplier to lower values for training.

Changes function name to match the torch.nn.Module standard

Fixes RNG reset issue when generating previews by restoring RNG state
2023-01-10 14:56:57 +09:00
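A minimal sketch of the RNG-restore idea from this commit, assuming illustrative names rather than the actual hypernetwork training code:

```python
import torch

def generate_preview_without_advancing_rng(sample_fn):
    # Snapshot the RNG state so rendering a preview image does not
    # perturb the training noise sequence (illustrative sketch).
    cpu_state = torch.get_rng_state()
    cuda_states = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None
    try:
        return sample_fn()
    finally:
        torch.set_rng_state(cpu_state)
        if cuda_states is not None:
            torch.cuda.set_rng_state_all(cuda_states)
```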
AUTOMATIC1111
bd4587d2f5
Merge pull request #6578 from vladmandic/fix-model-load
allow model load if previous model failed
2023-01-10 08:22:44 +03:00
AUTOMATIC1111
12dc8e09ca
Merge pull request #6584 from vladmandic/fix-get-memory
follow-up for pr #6466 to relax response type check enforcement in get_memory
2023-01-10 08:18:39 +03:00
Vladimir Mandic
2275f130bf
relax response type check enforcement 2023-01-09 21:23:58 -05:00
Vladimir Mandic
552d7b90bf
allow model load if previous model failed 2023-01-09 18:34:26 -05:00
AUTOMATIC
3fe9e9e54d fix broken resolution detection when pasting parameters with old hires fix enabled 2023-01-10 02:17:42 +03:00
AUTOMATIC1111
b1d976dca2
Merge pull request #6466 from vladmandic/api-get-memory
Implement API get-memory
2023-01-10 02:02:19 +03:00
Vladimir Mandic
95727312ca
remove bytes -> gb conversion 2023-01-09 16:54:12 -05:00
AUTOMATIC1111
50f91c2945
Merge pull request #6571 from mezotaken/master
Move PR template so github could use it
2023-01-09 23:45:49 +03:00
AUTOMATIC1111
9bec415bab
Merge pull request #6568 from mezotaken/fix-tests
Tests fixes and additions
2023-01-09 23:43:11 +03:00
AUTOMATIC
1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
Vladimir Repin
56ed085edf move template so github could use it 2023-01-09 23:19:08 +03:00
AUTOMATIC
43bb5190fc remove/simplify some changes from #6481 2023-01-09 22:52:23 +03:00
AUTOMATIC1111
bdd57ad073
Merge pull request #6481 from guaneec/varsize
Allow mixed image sizes in TI/HN training
2023-01-09 22:45:45 +03:00
AUTOMATIC1111
18c001792a
Merge branch 'master' into varsize 2023-01-09 22:45:39 +03:00
Vladimir Repin
00005ac9af add more tests 2023-01-09 21:01:28 +03:00
AUTOMATIC1111
2b94ec7886
Merge pull request #6463 from akx/patch-1
CI: Use native actions/setup-python caching
2023-01-09 20:14:03 +03:00
AUTOMATIC
cdfcbd9959 Remove the fallback for the Protocol import, the Protocol import itself, and the remaining uses of Protocol in code
add some whitespace between functions to be in line with other code in the repo
2023-01-09 20:08:48 +03:00
AUTOMATIC1111
89c3663080
Merge pull request #6482 from ProGamerGov/patch-6
Add fallback for Protocol import
2023-01-09 20:06:50 +03:00
AUTOMATIC
49c4509ce2 use existing function for loading VAE weights from file 2023-01-09 19:58:35 +03:00
AUTOMATIC1111
99da2c5af6
Merge pull request #6528 from PlasmaPower/vae-safetensors
Add support for loading VAEs from safetensors files
2023-01-09 19:40:28 +03:00
Vladimir Repin
7d2bb86cce combine tests together, return set options test 2023-01-09 19:39:06 +03:00
AUTOMATIC1111
dd21af06fa
Merge pull request #6462 from Taithrah/small-touch-up
Update hints.js
2023-01-09 19:38:08 +03:00
AUTOMATIC
d4fd2418ef add an option to use old hiresfix width/height behavior
add a visual effect to inactive hires fix elements
2023-01-09 14:57:47 +03:00
Vladimir Repin
3af488bdff try all tests 2023-01-09 14:29:28 +03:00
Vladimir Repin
1d663a04da make tests runnable without specifying subdirectory 2023-01-09 14:11:37 +03:00
Taithrah
e9d7eff70a
Merge branch 'AUTOMATIC1111:master' into small-touch-up 2023-01-08 15:58:53 -05:00
Lee Bousfield
cb255faec6
Add support for loading VAEs from safetensor files 2023-01-08 10:17:50 -07:00
AUTOMATIC1111
8850fc23b6
Merge pull request #6478 from mezotaken/master
Fix quicksetting name overlapping with its controls.
2023-01-08 16:17:34 +03:00
AUTOMATIC1111
0d194e4ecc
Merge pull request #6465 from brkirch/fix-training
Fix training for newer PyTorch builds
2023-01-08 16:16:31 +03:00
AUTOMATIC1111
a479e08be1
Merge pull request #6514 from mezotaken/fix-featreq-issue-label
Use an actual label for feature requests.
2023-01-08 16:15:34 +03:00
AUTOMATIC
137ce534b2 remove some code duplication
remove calls to locals()
add a test for img2img with script
2023-01-08 16:14:38 +03:00
AUTOMATIC1111
e7f2f1e1b6
Merge pull request #6469 from noodleanon/scripts-from-api
Run scripts from API
2023-01-08 15:47:19 +03:00
noodleanon
6d0cc1e239
Corrected is_img2img param 2023-01-08 11:03:48 +00:00
Vladimir Repin
1aca26816e use actual label for feature requests 2023-01-08 13:07:35 +03:00
Vladimir Repin
cf2f6f2004
Merge branch 'AUTOMATIC1111:master' into master 2023-01-08 12:51:38 +03:00
AUTOMATIC
085427de0e make it possible for extensions/scripts to add their own embedding directories 2023-01-08 09:37:33 +03:00
AUTOMATIC
a0c87f1fdf skip images in embeddings dir if they have a second .preview extension 2023-01-08 08:52:26 +03:00
ProGamerGov
984b86dd0a
Add fallback for Protocol import 2023-01-07 13:08:21 -07:00
dan
72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan
669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
dan
448b9cedab Allow variable img size 2023-01-08 02:14:36 +08:00
Vladimir Repin
cabd95015b fix quicksettings name overlap 2023-01-07 19:24:35 +03:00
noodleanon
d38ede71d5
Added script support in txt2img endpoint 2023-01-07 14:21:31 +00:00
noodleanon
50e2536279
Merge branch 'AUTOMATIC1111:master' into img2img-api-scripts 2023-01-07 14:18:09 +00:00
Vladimir Mandic
47534577ed
api-get-memory 2023-01-07 07:51:35 -05:00
brkirch
df3b31eb55 In-place operations can break gradient calculation 2023-01-07 07:04:59 -05:00
Taithrah
8a27730da5
Update hints.js
Partial revert
2023-01-07 06:11:57 -05:00
AUTOMATIC
151233399c new screenshot 2023-01-07 13:30:06 +03:00
AUTOMATIC
fdfce47110 add "from" resolution for hires fix to be less confusing. 2023-01-07 13:29:47 +03:00
AUTOMATIC1111
983167e621
Merge pull request #6448 from aednzxy/patch-2
increase upscale api validation limit
2023-01-07 12:34:42 +03:00
Aarni Koskela
a77873974b
... also for tests. 2023-01-07 11:34:02 +02:00
AUTOMATIC1111
c295e4a244
Merge pull request #6055 from brkirch/sub-quad_attn_opt
Add Birch-san's sub-quadratic attention implementation
2023-01-07 12:26:55 +03:00
Aarni Koskela
0fc1848e40
CI: Use native actions/setup-python caching 2023-01-07 11:25:41 +02:00
Taithrah
a36e2744e2
Update hints.js
Small touch up to hints
2023-01-07 04:09:02 -05:00
AUTOMATIC
1a5b86ad65 rework hires fix preview for #6437: move it to where it takes less space, make it actually account for all relevant sliders and calculate dimensions correctly 2023-01-07 09:56:37 +03:00
AUTOMATIC
de97380445 this breaks on default config because width, height, hr_scale are None at that point. 2023-01-07 08:53:53 +03:00
AUTOMATIC1111
01cc07b81a
Merge pull request #6437 from Mitchell1711/show-target-resolution
Show upscaled resolution on hires fix
2023-01-07 08:43:28 +03:00
AUTOMATIC
c4a221c405 Merge branch 'clip_hijack_rework' 2023-01-07 08:43:08 +03:00
AUTOMATIC
1740c33547 more comments 2023-01-07 07:48:44 +03:00
AUTOMATIC
08066676a4 make it not break on empty inputs; thank you tarded, we are 2023-01-07 07:22:07 +03:00
Mitchell Boot
f94cfc563b Changed HTML to textbox instead
Using HTML caused an issue where the row would expand for a frame when changing the sliders because of the loading animation. This solution also doesn't use any additional HTML padding
2023-01-07 01:15:22 +01:00
AUTOMATIC
79e39fae61 CLIP hijack rework 2023-01-07 01:46:13 +03:00
Dean Hopkins
82c1f10b14 increase upscale api validation limit 2023-01-06 22:10:03 +00:00
brkirch
c18add68ef Added license 2023-01-06 16:42:47 -05:00
Mitchell Boot
991368c8d5 remove camelcase 2023-01-06 18:24:29 +01:00
Mitchell Boot
3992ecbe6e Added UI elements
Added a new row to hires fix that shows the new resolution after scaling
2023-01-06 18:02:46 +01:00
AUTOMATIC1111
874b975bf8
Merge pull request #6432 from KumiIT/typo
typo UI fixes #6391
2023-01-06 18:29:55 +03:00
Kuma
50194de93f
typo UI fixes #6391 2023-01-06 16:12:45 +01:00
AUTOMATIC
3246a2d6b8 remove restriction for saving dropdowns to ui-config.json 2023-01-06 16:03:53 +03:00
AUTOMATIC1111
5265918b46
Merge pull request #6422 from brkirch/version-parse-fix
Fix version truncation for nightly PyTorch
2023-01-06 15:22:37 +03:00
brkirch
5e6566324b Always end version number with a digit 2023-01-06 07:06:26 -05:00
brkirch
848605fb65 Update README.md 2023-01-06 06:58:49 -05:00
AUTOMATIC
65ed4421e6 add callback for when the script is unloaded 2023-01-06 13:55:50 +03:00
AUTOMATIC
c9bded39ee sort extensions by date and add an option to sort by other columns 2023-01-06 12:32:44 +03:00
brkirch
5deb2a19cc Allow Doggettx's cross attention opt without CUDA 2023-01-06 01:33:15 -05:00
brkirch
b95a4c0ce5 Change sub-quad chunk threshold to use percentage 2023-01-06 01:01:51 -05:00
AUTOMATIC
683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
brkirch
b119815333 Use narrow instead of dynamic_slice 2023-01-06 00:15:24 -05:00
brkirch
3bfe2bb549 Merge remote-tracking branch 'upstream/master' into sub-quad_attn_opt 2023-01-06 00:15:22 -05:00
brkirch
f6ab5a39d7 Merge branch 'AUTOMATIC1111:master' into sub-quad_attn_opt 2023-01-06 00:14:20 -05:00
brkirch
d782a95967 Add Birch-san's sub-quadratic attention implementation 2023-01-06 00:14:13 -05:00
AUTOMATIC1111
88e01b237e
Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
AUTOMATIC1111
143ed5a42d
Merge pull request #6384 from faber6/loads-ti-from-subdirs
allow loading embeddings from subdirectories
2023-01-06 07:56:48 +03:00
AUTOMATIC1111
8a13afd216
Merge pull request #6401 from acncagua/wsl-open
wsl-open
2023-01-06 07:56:15 +03:00
AUTOMATIC1111
85fa4eacea
Merge pull request #6402 from brkirch/work-with-nightly-local-builds
Add support for using PyTorch nightly and local builds
2023-01-06 07:51:45 +03:00
AUTOMATIC1111
3ea354f274
Merge pull request #6364 from 0xb8/master
hires-fix: add "nearest-exact" latent upscale mode.
2023-01-06 07:49:11 +03:00
acncagua
d61a5aa4f6
Add files via upload 2023-01-06 10:58:22 +09:00
brkirch
8111b5569d Add support for PyTorch nightly and local builds 2023-01-05 20:54:52 -05:00
noodleanon
eadd1bf06a
allow sdupscale to accept upscaler name 2023-01-05 21:22:04 +00:00
noodleanon
b5253f0dab
allow img2img api to run scripts 2023-01-05 21:21:48 +00:00
Faber
81133d4168
allow loading embeddings from subdirectories 2023-01-06 03:38:37 +07:00
AUTOMATIC1111
310b71f669
Merge pull request #6376 from KumiIT/master
typo in TI
2023-01-05 22:10:07 +03:00
AUTOMATIC
847f869c67 experimental optimization 2023-01-05 21:00:52 +03:00
Kuma
fda04e620d
typo in TI 2023-01-05 18:44:19 +01:00
timntorres
b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4 Clean up ti, add same behavior to hypernetwork. 2023-01-05 08:14:38 -08:00
cat
19a81ac287 hires-fix: add "nearest-exact" latent upscale mode. 2023-01-05 20:25:02 +05:00
timntorres
eea8fc40e1 Add option to save ti settings to file. 2023-01-05 07:24:22 -08:00
AUTOMATIC
f8d0cf6a6e rework #6329 to remove duplicate code and prevent tab names from showing in ids for scripts that only exist on one tab 2023-01-05 12:08:11 +03:00
AUTOMATIC
997461d3dd add footer with versions 2023-01-05 11:57:14 +03:00
AUTOMATIC1111
01a1fee874
Merge pull request #6329 from Kryptortio/add_even_more_element_ids
Add additional elem_id/HTML ids (again)
2023-01-05 11:56:13 +03:00
me
f185baeb28 Refactor elem_prefix as function elem_id 2023-01-05 09:29:07 +01:00
AUTOMATIC
42fcc79bd3 add Discard penultimate sigma to infotext 2023-01-05 10:43:21 +03:00
AUTOMATIC1111
c53852e257
Merge pull request #6044 from hentailord85ez/discard-penultimate-sigma
Allow always discarding of penultimate sigma and fix doing 1 less step than specified
2023-01-05 10:33:51 +03:00
me
c3109fa18a Adjusted prefix from i2i/t2i to txt2img and img2img and removed those prefixes from img exclusive scripts 2023-01-05 08:27:09 +01:00
AUTOMATIC1111
24e21c0710
Merge pull request #6328 from lolsuffocate/fix-png-info-api
Make pnginfoapi return all image info
2023-01-05 10:23:59 +03:00
AUTOMATIC
2e30997450 move sd_model assignment to the place where we change the sd_model 2023-01-05 10:21:17 +03:00
AUTOMATIC1111
f3df261508
Merge pull request #6334 from jchook/exec
Fixes webui.sh to exec LAUNCH_SCRIPT
2023-01-05 10:12:25 +03:00
AUTOMATIC1111
6745e8c5f2
Merge pull request #6349 from philpax/fix-sd-arch-switch-in-override-settings
fix(api): assign sd_model after settings change (v2)
2023-01-05 10:11:52 +03:00
Philpax
83ca8dd0c9
Merge branch 'AUTOMATIC1111:master' into fix-sd-arch-switch-in-override-settings 2023-01-05 05:00:58 +01:00
AUTOMATIC
5f4fa942b8 do not show full window image preview when right mouse button is used 2023-01-05 02:38:52 +03:00
Wes Roberts
066390eb56 Fixes webui.sh to exec LAUNCH_SCRIPT 2023-01-04 17:58:16 -05:00
AUTOMATIC
99b67cff0b make hires fix not do anything if the user chooses the second pass resolution to be the same as first pass resolution 2023-01-05 01:25:52 +03:00
AUTOMATIC
b663ee2cff fix fullscreen view showing wrong image on firefox 2023-01-05 00:36:10 +03:00
me
5851bc839b Add element ids for script components and a few more in ui.py 2023-01-04 22:14:30 +01:00
AUTOMATIC
bc43293c64 fix incorrect display/calculation for number of steps for hires fix in progress bars 2023-01-04 23:56:43 +03:00
Suffocate
1288a3bb7d Use the read_info_from_image function directly 2023-01-04 20:36:30 +00:00
AUTOMATIC
8149078094 added the option to specify target resolution with possibility of truncating for hires fix; also sampling steps 2023-01-04 22:04:40 +03:00
AUTOMATIC
24d4a0841d train tab visual updates
allow setting train tab values from ui-config.json
2023-01-04 20:10:40 +03:00
AUTOMATIC1111
9092e1ca77
Merge pull request #3842 from R-N/gradient-clipping
Gradient clipping in train tab
2023-01-04 19:57:02 +03:00
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
AUTOMATIC1111
b7deea47ee
Merge pull request #5774 from AUTOMATIC1111/camenduru-patch-1
allow_credentials and allow_headers for api
2023-01-04 19:26:49 +03:00
AUTOMATIC1111
e9911391ca
Merge pull request #6305 from vladmandic/fix-jpeg
fix jpeg handling
2023-01-04 19:22:51 +03:00
AUTOMATIC
097a90b88b add XY plot parameters to grid image and do not add them to individual images 2023-01-04 19:19:11 +03:00
AUTOMATIC1111
3f23e6dabe
Merge pull request #1476 from RnDMonkey/xygrid_infotext_improvements
[xy_grid] script infotext improvements
2023-01-04 18:57:37 +03:00
AUTOMATIC1111
32547f2721
Merge branch 'master' into xygrid_infotext_improvements 2023-01-04 18:57:14 +03:00
AUTOMATIC
3dae545a03 rename weirdly named variables from #3176 2023-01-04 18:42:51 +03:00
AUTOMATIC1111
cf413d1fbe
Merge pull request #3176 from asimard1/master
Show PB texts at same time and earlier
2023-01-04 18:40:14 +03:00
AUTOMATIC1111
37aafdb059
Merge branch 'master' into master 2023-01-04 18:39:57 +03:00
AUTOMATIC
a8eb9e3bf8 Revert "Merge pull request #3791 from shirayu/fix/filename"
This reverts commit eed58279e7, reversing
changes made to 4ae960b01c.
2023-01-04 18:20:38 +03:00
AUTOMATIC1111
eed58279e7
Merge pull request #3791 from shirayu/fix/filename
Truncate too long filename (Fix #705)
2023-01-04 18:17:50 +03:00
AUTOMATIC1111
4ae960b01c
Merge pull request #4177 from eltociear/patch-2
Fix typo in ui.js
2023-01-04 17:59:31 +03:00
AUTOMATIC
525cea9245 use shared function from processing for creating dummy mask when training inpainting model 2023-01-04 17:58:07 +03:00
Vladimir Mandic
590c5ae016
update pillow 2023-01-04 09:48:54 -05:00
AUTOMATIC
184e670126 fix the merge 2023-01-04 17:45:01 +03:00
AUTOMATIC1111
8839b372bf
Merge pull request #3490 from Nerogar/inpaint_textual_inversion
Fix textual inversion training with inpainting models
2023-01-04 17:40:29 +03:00
AUTOMATIC1111
da5c1e8a73
Merge branch 'master' into inpaint_textual_inversion 2023-01-04 17:40:19 +03:00
AUTOMATIC1111
47df084901
Merge pull request #6304 from vladmandic/add-cross-attention-info
add cross-attention info
2023-01-04 17:30:43 +03:00
AUTOMATIC
4d66bf2c0d add infotext to "-before-highres-fix" images 2023-01-04 17:24:46 +03:00
Vladimir Mandic
79c682ad4f
fix jpeg 2023-01-04 08:20:42 -05:00
AUTOMATIC
1cfd8aec4a make it possible to work with opts.show_progress_every_n_steps = -1 with medvram 2023-01-04 16:05:42 +03:00
Vladimir Mandic
21ee77db31
add cross-attention info 2023-01-04 08:04:38 -05:00
AUTOMATIC1111
c923de0e05
Merge pull request #5969 from philpax/include-job-timestamp-in-progress-api
feat(api): include job_timestamp in progress
2023-01-04 15:28:51 +03:00
AUTOMATIC
642142556d use commandline-supplied cuda device name instead of cuda:0 for safetensors PR that doesn't fix anything 2023-01-04 15:09:53 +03:00
AUTOMATIC
68fbf4558f Merge remote-tracking branch 'Narsil/fix_safetensors_load_speed' 2023-01-04 14:53:03 +03:00
AUTOMATIC1111
c4796bcc67
Merge pull request #6302 from vladmandic/fix-api-logging
fix typo in api logging
2023-01-04 14:44:11 +03:00
Vladimir Mandic
11b8160a08
fix typo 2023-01-04 06:36:57 -05:00
AUTOMATIC
0cd6399b8b fix broken inpainting model 2023-01-04 14:29:13 +03:00
AUTOMATIC
3bd737767b disable broken API logging 2023-01-04 14:20:32 +03:00
AUTOMATIC1111
c6c56c807a
Merge pull request #6272 from stysmmaker/feat/image-paste-fallback
Add image paste fallback
2023-01-04 14:15:44 +03:00
AUTOMATIC1111
aa44f40cc1
Merge pull request #6251 from hithereai/master
update opencv package inside requirements.txt
2023-01-04 14:12:15 +03:00
AUTOMATIC1111
7bbd984dda
Merge pull request #6253 from Shondoit/ti-optim
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
AUTOMATIC1111
545ae8cb1c
Merge pull request #6264 from vladmandic/add-state-info
add missing state info
2023-01-04 14:04:50 +03:00
AUTOMATIC1111
a8ad8666cd
Merge pull request #6261 from vladmandic/api-logging
add api logging
2023-01-04 14:04:11 +03:00
AUTOMATIC1111
a2ff57cfb0
Merge pull request #6295 from Gerschel/master
better targeting, class tabs was autoassigned
2023-01-04 13:47:53 +03:00
AUTOMATIC1111
6281c1bdb4
Merge pull request #6299 from stysmmaker/feat/latent-upscale-modes
Add more latent upscale modes
2023-01-04 13:47:36 +03:00
AUTOMATIC1111
77c3bc7747
Merge pull request #6298 from stysmmaker/fix/intermediate-step-full-res
Save full resolution of intermediate step
2023-01-04 13:46:21 +03:00
MMaker
b2151b934f
Rename bicubic antialiased option
The comma was causing the value in PNG info to be quoted, which caused the upscaler dropdown option to be blank when sending to the UI
2023-01-04 05:36:18 -05:00
MMaker
f49f917cdd
Merge branch 'AUTOMATIC1111:master' into feat/latent-upscale-modes 2023-01-04 04:27:52 -06:00
AUTOMATIC
4ec6470a1a fix checkpoint list API 2023-01-04 13:26:23 +03:00
MMaker
15fd0b8bc4
Update processing.py 2023-01-04 05:12:54 -05:00
MMaker
96cf15bede
Add new latent upscale modes 2023-01-04 05:12:06 -05:00
AUTOMATIC
8d8a05a3bb find configs for models at runtime rather than when starting 2023-01-04 12:47:42 +03:00
AUTOMATIC
02d7abf514 helpful error message when trying to load 2.0 without config
failing to load model weights from settings won't break generation for currently loaded model anymore
2023-01-04 12:35:07 +03:00
MMaker
e5b7ee910e
fix: Save full res of intermediate step 2023-01-04 04:22:01 -05:00
Gerschel
4fc8154207 better targeting, class tabs was autoassigned
I moved a preset manager into quicksettings, and this function
was targeting my component instead of the tabs. The class tabs
is auto-assigned, while the element id #tabs is not; targeting
the id allows a tabbed component to live in the quicksettings.
2023-01-03 23:25:34 -08:00
AUTOMATIC1111
7e549468b3
Merge pull request #6285 from abextm/save-sampling-method
ui: save dropdown sampling method to the ui-config
2023-01-04 09:10:25 +03:00
Max Weber
917b5bd8d0
ui: save dropdown sampling method to the ui-config 2023-01-03 18:19:56 -07:00
AUTOMATIC
3e22e29413 fix broken send to extras button 2023-01-03 21:49:24 +03:00
MMaker
7c89f3718f
Add image paste fallback
Fixes Firefox pasting support
(and possibly other browsers)
2023-01-03 12:46:48 -05:00
AUTOMATIC
82cfc227d7 added licenses screen to settings
added footer
removed unused inpainting code
2023-01-03 20:23:17 +03:00
Vladimir Mandic
d8d206c168
add state to interrogate 2023-01-03 11:01:04 -05:00
Vladimir Mandic
cec209981e
log only sdapi 2023-01-03 10:58:52 -05:00
AUTOMATIC
8f96f92899 call script callbacks for reloaded model after loading embeddings 2023-01-03 18:39:14 +03:00
AUTOMATIC
2d5a5076bb Make it so that upscalers are not repeated when restarting UI. 2023-01-03 18:38:21 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules 2023-01-03 10:34:51 -05:00
Vladimir Mandic
1d9dc48efd
init job and add info to model merge 2023-01-03 10:21:51 -05:00
Vladimir Mandic
aaa4c2aacb
add api logging 2023-01-03 09:45:16 -05:00
AUTOMATIC
e9fb9bb0c2 fix hires fix not working in API when user does not specify upscaler 2023-01-03 17:40:20 +03:00
Shondoit
bddebe09ed Save Optimizer next to TI embedding
Also adds a check to load only .pt and .bin files as embeddings (since we add .optim files in the same directory)
2023-01-03 13:30:24 +01:00
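A minimal sketch of the extension check described above; `is_embedding_file` is an illustrative helper, not the actual function name:

```python
import os

def is_embedding_file(path: str) -> bool:
    # Only .pt and .bin files are treated as embeddings; the .optim files
    # saved next to them in the same directory are skipped.
    return os.path.splitext(path)[1].lower() in {".pt", ".bin"}
```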
AUTOMATIC
c0ee148870 add support for running with gradio 3.9 installed 2023-01-03 14:18:48 +03:00
hithereai
9a3b0ee960
update req.txt
The old 'opencv-python' package is very limiting in terms of optical flow, so I propose a package change to 'opencv-contrib-python', which has more cv2.optflow methods.

These are needed for optical flow trickery in auto1111 and its extensions, and they cannot be installed by an extension, since only a single opencv package can be installed for optical flow to work properly. Changing the main one is inevitable.
2023-01-03 11:22:06 +02:00
AUTOMATIC
fda1ed1843 some minor improvements for dark mode UI 2023-01-03 12:01:32 +03:00
AUTOMATIC
a1cf55a9d1 add option to reorder items in main UI 2023-01-03 10:39:21 +03:00
AUTOMATIC
9d4eff097d add a button to show all setting pages 2023-01-03 10:01:06 +03:00
AUTOMATIC
2bc86712ec make quicksettings UI elements appear in same order as they are listed in the setting 2023-01-03 09:13:35 +03:00
AUTOMATIC
18c03cdeac styling rework to make things more compact 2023-01-03 09:04:29 +03:00
AUTOMATIC
269f6e8676 change settings UI to use vertical tabs 2023-01-03 07:20:20 +03:00
AUTOMATIC
1d7a31def8 make edit fields for sliders not get hidden by slider's label when there's not enough space 2023-01-03 06:21:53 +03:00
AUTOMATIC
251ecee694 make "send to" buttons send the actual dimensions of the sent image rather than the fields 2023-01-02 22:44:46 +03:00
AUTOMATIC
8d12a729b8 fix possible error with accessing nonexistent setting 2023-01-02 20:46:51 +03:00
AUTOMATIC
84dd7e8e24 error out with a readable message in checkpoint merger for incompatible tensor shapes (i.e. when trying to merge SD1.5 with SD2.0) 2023-01-02 20:30:02 +03:00
AUTOMATIC
4dbde228ff make it possible to use fractional values for SD upscale. 2023-01-02 20:01:16 +03:00
AUTOMATIC
ef27a18b6b Hires fix rework 2023-01-02 19:42:10 +03:00
AUTOMATIC1111
fd4461d44c
Merge pull request #6196 from philpax/add-embeddings-api
feat(api): add /sdapi/v1/embeddings
2023-01-02 06:11:10 +03:00
AUTOMATIC1111
f39a79d143
Merge pull request #6183 from Kryptortio/add_more_element_ids
Add additional elem_id/HTML ids
2023-01-02 06:10:26 +03:00
Philpax
c65909ad16 feat(api): return more data for embeddings 2023-01-02 12:21:48 +11:00
Philpax
b5819d9bf1 feat(api): add /sdapi/v1/embeddings 2023-01-02 10:18:11 +11:00
AUTOMATIC
311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
me
a005fccddd Add a lot more elem_id/HTML id, modified some that were duplicates for seed section 2023-01-01 20:06:52 +01:00
AUTOMATIC
e672cfb074 rework of callback for #6094 2023-01-01 18:37:55 +03:00
AUTOMATIC1111
6062c85d4d
Merge pull request #6094 from AlUlkesh/master
Adding image numbers on grids
2023-01-01 18:31:01 +03:00
AUTOMATIC
524d532b38 moved roll artist to built-in extensions 2023-01-01 14:07:40 +03:00
AlUlkesh
5f12b23b8b Adding image numbers on grids
New grid option in settings enables adding of image numbers on grids. This makes identifying the images, especially in larger batches, much easier.

Revert "Adding image numbers on grids"

This reverts commit 3530c283b4b1d3a3cab40efbffe4cf2697938b6f.

Implements Callback for image grid loop

Necessary to make "Add image's number to its picture in the grid" extension possible.
2023-01-01 11:21:50 +01:00
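Roughly how an extension would hook the new grid loop; this assumes the `on_image_grid` callback and `ImageGridLoopParams` introduced around this change, so treat the names as approximate:

```python
from modules import script_callbacks  # webui import path, assumed

def annotate_grid(params):
    # params.imgs is assumed to hold the images about to be placed on
    # the grid; an extension could draw each image's number onto it here.
    for index, image in enumerate(params.imgs):
        print(f"grid slot {index}: {image.size}")  # placeholder for drawing

script_callbacks.on_image_grid(annotate_grid)
```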
AUTOMATIC
e5f1a37cb9 make refresh buttons look more nice 2023-01-01 13:08:40 +03:00
AUTOMATIC
b46b97fa29 more fixes for gradio update 2023-01-01 11:38:17 +03:00
AUTOMATIC
76f256fe8f Bump gradio version #YOLO 2023-01-01 11:08:39 +03:00
AUTOMATIC
11d432d92d add refresh buttons to checkpoint merger 2023-01-01 10:35:38 +03:00
AUTOMATIC
16b9661d27 change karras scheduler sigmas to values recommended by SD from old 0.1 to 10 with an option to revert to old 2023-01-01 09:51:37 +03:00
AUTOMATIC
a939e82a0b fix weird padding for sampler dropdown in chrome 2023-01-01 03:24:58 +03:00
AUTOMATIC
210449b374 fix 'RuntimeError: Expected all tensors to be on the same device' error preventing models from loading on lowvram/medvram. 2023-01-01 02:41:15 +03:00
AUTOMATIC
29a3a7eb13 show sampler selection in dropdown, add option selection to revert to old radio group 2023-01-01 01:19:10 +03:00
AUTOMATIC
360feed9b5 HAPPY NEW YEAR
make save to zip into its own button instead of a checkbox
2023-01-01 00:38:58 +03:00
AUTOMATIC
f4535f6e4f make it so that memory/embeddings info is displayed in a separate UI element from generation parameters, and is preserved when you change the displayed infotext by clicking on gallery images 2022-12-31 23:40:55 +03:00
AUTOMATIC
bdbe09827b changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 2022-12-31 22:49:09 +03:00
AUTOMATIC1111
c24a314c5e
Merge pull request #6149 from vladmandic/validate-embeddings
validate textual inversion embeddings
2022-12-31 22:33:12 +03:00
AUTOMATIC1111
f378b8d53a
Merge pull request #6143 from vladmandic/fix-interrogate
fix interrogate
2022-12-31 22:20:56 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
AUTOMATIC
f34c734172 alt-diffusion integration 2022-12-31 18:06:35 +03:00
Vladimir Mandic
65be1df7bb
initialize result so not to cause exception on empty results 2022-12-31 07:46:04 -05:00
AUTOMATIC
3f401cdb64 Merge remote-tracking branch 'baai-open-internal/master' into alt-diffusion 2022-12-31 13:02:28 +03:00
AUTOMATIC
fef98723b2 set sd_model for API later, inside the lock, to prevent multiple requests with different models ending up with incorrect results #5877 #6012 2022-12-31 12:44:26 +03:00
AUTOMATIC1111
26522c7dc8
Merge pull request #6015 from philpax/api-begin-end-in-queue
fix(api): only begin/end state in lock
2022-12-31 12:23:39 +03:00
AUTOMATIC1111
3d8256e40c
Merge pull request #6017 from hitomi/master
Add memory cache for VAE weights
2022-12-31 12:22:59 +03:00
AUTOMATIC1111
d81636a091
Merge pull request #6037 from vladmandic/master
fix rgba to rgb when using jpeg output
2022-12-31 12:14:41 +03:00
AUTOMATIC1111
03cb43c3c8
Merge pull request #6133 from vladmandic/memmon-stats
add additional memory states
2022-12-31 10:52:58 +03:00
AUTOMATIC1111
38f5787e67
Merge pull request #6134 from vladmandic/remove-console-message
remove unnecessary console message
2022-12-31 10:52:30 +03:00
AUTOMATIC1111
527886a33f
Merge pull request #6135 from vladmandic/shared-state
fix shared state dictionary
2022-12-31 10:52:05 +03:00
Vladimir Mandic
463048344f
fix shared state dictionary 2022-12-30 19:41:47 -05:00
Vladimir Mandic
d3aa2a48e1
remove unnecessary console message 2022-12-30 19:38:53 -05:00
Vladimir Mandic
5958bbd244
add additional memory states 2022-12-30 19:36:36 -05:00
Nicolas Patry
5a523d1305
Version 0.2.7 Fixes Windows SAFETENSORS_FAST_GPU path. 2022-12-27 11:27:40 +01:00
Nicolas Patry
5ba04f9ec0
Attempting to solve slow loads for safetensors.
Fixes #5893
2022-12-27 11:27:19 +01:00
hentailord85ez
03f486a239
Update shared.py 2022-12-26 20:49:33 +00:00
hentailord85ez
4df5009acb
Update sd_samplers.py 2022-12-26 20:49:13 +00:00
Vladimir Mandic
ae955b0146 fix rgba to rgb when using jpeg output 2022-12-26 09:56:19 -05:00
AUTOMATIC
4af3ca5393 make it so that blank ENSD does not break image generation 2022-12-26 10:11:28 +03:00
hitomi
893933e05a Add memory cache for VAE weights 2022-12-25 20:49:25 +08:00
Philpax
5be9387b23 fix(api): only begin/end state in lock 2022-12-25 21:45:44 +11:00
Philpax
fa931733f6 fix(api): assign sd_model after settings change 2022-12-25 20:17:49 +11:00
AUTOMATIC
c6f347b81f fix ruined SD upscale 2022-12-25 09:47:34 +03:00
AUTOMATIC1111
7b7f7e9361
Merge pull request #6003 from eaglgenes101/settings-css-classes
Add CSS classes for the settings panels
2022-12-25 09:17:34 +03:00
AUTOMATIC1111
b12de850ae
Merge pull request #5992 from yuvalabou/F541
Fix F541: f-string without any placeholders
2022-12-25 09:16:08 +03:00
AUTOMATIC1111
a66514e1a3
Merge pull request #6005 from allenbenz/patch-1
Fix clip interrogate from the webui
2022-12-25 09:12:29 +03:00
AUTOMATIC1111
c1512ef9ae
Merge pull request #5999 from vladmandic/trainapi
implement train api
2022-12-25 09:11:42 +03:00
AUTOMATIC
8eef9d8e78 a way to add an exception to unpickler without explicitly calling load_with_extra 2022-12-25 09:03:56 +03:00
Allen Benz
61a273236f
Fix clip interrogate from the webui
A recent change made the image RGBA, which makes the clip interrogator unhappy.
deepbooru and calling the interrogator from the API already do the conversion, so this is the only place that needed it.
2022-12-24 20:23:12 -08:00
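The conversion in question is a one-liner in PIL; a sketch of the idea (the helper name is illustrative):

```python
from PIL import Image

def ensure_rgb(image: Image.Image) -> Image.Image:
    # CLIP interrogation expects RGB input, so strip the alpha channel
    # before interrogating.
    return image.convert("RGB") if image.mode != "RGB" else image
```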
eaglgenes101
f60c24f812 Add CSS classes for the settings panels 2022-12-24 22:16:01 -05:00
Vladimir Mandic
5f1dfbbc95 implement train api 2022-12-24 18:02:22 -05:00
AUTOMATIC
c5bdba2089 change wording a bit 2022-12-24 22:41:35 +03:00
AUTOMATIC
56e557c6ff added cheap NN approximation for VAE 2022-12-24 22:39:10 +03:00
Yuval Aboulafia
3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
AUTOMATIC1111
5927d3fa95
Merge pull request #5977 from philpax/api-dont-save-extras-output
fix(api): don't save extras output to disk
2022-12-24 18:50:45 +03:00
AUTOMATIC1111
a6a54a7529
Merge pull request #5976 from AbstractQbit/fast_preview
Add an option for faster low quality previews
2022-12-24 18:38:42 +03:00
AUTOMATIC
0b8acce6a9 separate part of denoiser code into a function to make it easier for extensions to override it 2022-12-24 18:38:16 +03:00
AUTOMATIC
03d7b39453 added an option to filter out deepbooru tags 2022-12-24 16:22:47 +03:00
AUTOMATIC1111
ccaacb1891
Merge pull request #5979 from linuxmobile/master
Removed length check in sd_model at line 115
2022-12-24 16:22:18 +03:00
linuxmobile ( リナックス )
5a650055de
Removed length check in sd_model at line 115
Commit eba60a4 is what is causing this error; deleting the length check in sd_model starting at line 115 fixes it.

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5971#issuecomment-1364507379
2022-12-24 09:25:35 -03:00
Philpax
6247f21a63 fix(api): don't save extras output to disk 2022-12-24 22:04:53 +11:00
AbstractQbit
11dd79e346 Add an option for faster low quality previews 2022-12-24 14:00:17 +03:00
Philpax
f23a822f1c feat(api): include job_timestamp in progress 2022-12-24 20:45:16 +11:00
AUTOMATIC1111
ca16278188
Merge pull request #5608 from Bwin4L/master
Make the generated image count in desktop notifications only count new gens in the currently active tab
2022-12-24 12:23:56 +03:00
AUTOMATIC1111
1b66d52763
Merge pull request #5595 from wywywywy/ldsr-safetensors
Add SafeTensors support to LDSR
2022-12-24 12:22:09 +03:00
AUTOMATIC1111
eba60a42eb
Merge pull request #5627 from deanpress/patch-1
fix: fallback model_checkpoint if it's empty
2022-12-24 12:20:31 +03:00
AUTOMATIC1111
adab48cb1b
Merge pull request #5637 from aednzxy/patch-1
API endpoint to refresh checkpoints
2022-12-24 12:19:43 +03:00
AUTOMATIC1111
8c9e6d3c7d
Merge pull request #5131 from uservar/inpainting-detection
Better should_hijack_inpainting detection
2022-12-24 12:19:06 +03:00
AUTOMATIC1111
67ff058b8d
Merge pull request #5622 from repligator/patch-1
Update hints.js - DPM adaptive
2022-12-24 12:18:16 +03:00
AUTOMATIC1111
ebd36c62b3
Merge pull request #5669 from codefionn/master
Bugfix: Use /usr/bin/env bash instead of just /bin/bash
2022-12-24 11:50:52 +03:00
AUTOMATIC1111
e9bf62da8b
Merge pull request #5699 from DavidVorick/prompts-from-file
Improve prompts-from-file script to support negative prompts and sampler-by-name
2022-12-24 11:16:08 +03:00
AUTOMATIC1111
064f7b8fd2
Merge pull request #5718 from space-nuko/feature/save-hypernetwork-hash
Save hypernetwork hash and fix hypernetwork parameter restoring
2022-12-24 11:14:19 +03:00
AUTOMATIC
c0a8401b5a rename the option for img2img latent upscale 2022-12-24 11:12:17 +03:00
AUTOMATIC1111
b2dbd4d698
Merge pull request #5521 from AndrewRyanChama/ryan/img2imglatentscale
Add latent upscale option to img2img
2022-12-24 11:10:35 +03:00
AUTOMATIC1111
34bc3616ec
Merge pull request #5838 from aliencaocao/fix_gradio_pil
Dirty fix for missing PIL supported file extensions
2022-12-24 10:24:33 +03:00
AUTOMATIC1111
ee65237d69
Merge pull request #5747 from yuvalabou/singleton-comparison
Format singleton comparisons
2022-12-24 10:17:38 +03:00
AUTOMATIC1111
7578b50ba6
Merge pull request #5873 from philpax/override-settings-restore-afterwards
feat(api): add override_settings_restore_afterwards
2022-12-24 10:15:04 +03:00
AUTOMATIC1111
fac92610d2
Merge pull request #5753 from calvinballing/master
Fix various typos
2022-12-24 09:58:28 +03:00
AUTOMATIC1111
94450b8877
Merge pull request #5589 from MrCheeze/better-special-model-support
Better support for 2.0-inpainting and 2.0-depth special models
2022-12-24 09:53:44 +03:00
AUTOMATIC
9441c28c94 add an option for img2img background color 2022-12-24 09:46:35 +03:00
AUTOMATIC
1d9eaf94e3 add blendmodes to requirements_versions, remove aenum as it's already required by blendmodes 2022-12-24 09:27:41 +03:00
AUTOMATIC1111
b81fa1e7f1
Merge pull request #5644 from ThereforeGames/master
Improve img2img color correction by performing a luminosity blend
2022-12-24 09:17:40 +03:00
AUTOMATIC1111
684d7059bc
Merge pull request #5808 from stysmmaker/patch/fix-fnt-size
Prevent overlapping in X/Y plot by changing font size
2022-12-24 09:13:05 +03:00
AUTOMATIC1111
55f3ef876b
Merge pull request #5814 from timntorres/5802-save-upscaler-to-filename
Add option to save upscaler to filename suffix in extras tab.
2022-12-24 09:07:00 +03:00
AUTOMATIC1111
992a877a4a
Merge pull request #4684 from simcop2387/fix-extension-docker
Fix docker tmp/ and extensions/ handling for docker.
2022-12-24 09:06:10 +03:00
AUTOMATIC1111
72e81d5b6c
Merge pull request #5840 from stysmmaker/feat/xy-plot-new-axes
Add upscale latent, VAE, styles as new axis options to X/Y plot
2022-12-24 09:05:12 +03:00
AUTOMATIC
399b229783 eliminate duplicated code
add an option to samplers for skipping next to last sigma
2022-12-24 09:03:45 +03:00
AUTOMATIC1111
5667ec4ca7
Merge pull request #5797 from mcmonkey4eva/dpm2-a-fix
Add a workaround patch for DPM2 a issue
2022-12-24 08:51:43 +03:00
AUTOMATIC1111
0fdf368a54
Merge pull request #5894 from florianchrometz/patch-1
Set httpcore version in requirements - fixes #4833
2022-12-24 08:37:20 +03:00
AUTOMATIC1111
3bfc6c07ae
Merge pull request #5810 from brkirch/fix-training-mps
Training fixes for MPS
2022-12-24 08:34:46 +03:00
AUTOMATIC1111
f0dfed2a17
Merge pull request #5796 from brkirch/invoke-fix
Improve InvokeAI cross attention reliability and speed when using MPS for large images
2022-12-24 08:21:19 +03:00
AUTOMATIC1111
7115ab5d1e
Merge pull request #5617 from mcmonkey4eva/fix-hints-file
fix hints file typo
2022-12-24 08:16:04 +03:00
AUTOMATIC
0c747d4013 add a comment for disable xformers hack 2022-12-24 07:57:56 +03:00
AUTOMATIC1111
5ee75e3c1e
Merge pull request #5794 from Akegarasu/master
fix: xformers
2022-12-24 07:49:15 +03:00
Akiba
13e0295ab6
fix: xformers use importlib 2022-12-24 11:17:21 +08:00
brkirch
35b1775b32 Use other MPS optimization for large q.shape[0] * q.shape[1]
Check if q.shape[0] * q.shape[1] is 2**18 or larger and use the lower memory usage MPS optimization if it is. This should prevent most crashes that were occurring at certain resolutions (e.g. 1024x1024, 2048x512, 512x2048).

Also included is a change to check slice_size and prevent it from being divisible by 4096 which also results in a crash. Otherwise a crash can occur at 1024x512 or 512x1024 resolution.
2022-12-20 21:30:00 -05:00
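A sketch of the two checks described above, with illustrative names; the real cross-attention code is considerably more involved:

```python
def mps_attention_workarounds(q, slice_size):
    # Very large attention maps (e.g. 1024x1024, 2048x512) crash on MPS,
    # so fall back to the lower-memory optimization for those.
    use_low_memory = q.shape[0] * q.shape[1] >= 2 ** 18
    # Slice sizes divisible by 4096 also crash (e.g. at 1024x512),
    # so nudge the slice size off that boundary.
    if slice_size % 4096 == 0:
        slice_size -= 1
    return use_low_memory, slice_size
```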
Florian Chrometz
d050bb7863
Set httpcore version in requirements - fixes #4833 2022-12-20 20:19:50 +01:00
Philpax
22f1527fa7 feat(api): add override_settings_restore_afterwards 2022-12-20 20:36:49 +11:00
Alex "mcmonkey" Goodwin
7ba9bc2fdb fix dpm2 in img2img as well 2022-12-18 19:16:42 -08:00
MMaker
492052b5df
feat: Add upscale latent, VAE, styles to X/Y plot
Adds upscale latent space for hires., VAE, and Styles as new axis options to the X/Y plot.
2022-12-18 10:47:02 -05:00
Billy Cao
c02ef0f428 Fix PIL being imported before its installed (for new users only) 2022-12-18 20:51:59 +08:00
Billy Cao
35f0698ae8 Dirty fix for missing PIL supported file extensions 2022-12-18 20:45:30 +08:00
timntorres
6fd91c9179 Update OptionInfo to match preexisting option. 2022-12-17 08:59:02 -08:00
timntorres
a7a039d53a Add option to include upscaler name in filename. 2022-12-17 08:50:20 -08:00
timntorres
a26fe85056 Add upscaler name as a suffix. 2022-12-17 05:11:06 -08:00
brkirch
cca16373de Add attributes used by MPS 2022-12-17 04:23:08 -05:00
brkirch
16b4509fa6 Add numpy fix for MPS on PyTorch 1.12.1
When saving training results with torch.save(), an exception is thrown:
"RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead."

So for MPS, check if Tensor.requires_grad and detach() if necessary.
2022-12-17 04:22:58 -05:00
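The workaround amounts to detaching before saving; a minimal sketch:

```python
import torch

def detach_if_needed(tensor: torch.Tensor) -> torch.Tensor:
    # On MPS with PyTorch 1.12.1, torch.save() fails on tensors that
    # still require grad, so detach them first.
    return tensor.detach() if tensor.requires_grad else tensor
```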
MMaker
b7c478c3eb
fix: Modify font size when unable to fit in plot
This prevents scenarios where text without line breaks will start overlapping with each other when generating X/Y plots. This is most evident when generating X/Y plots with checkpoints, as most checkpoint names don't contain spaces and sometimes include extra information such as the epoch, making them extra long.
2022-12-17 00:45:43 -05:00
Alex "mcmonkey" Goodwin
180fdf7809 apply to DPM2 (non-ancestral) as well 2022-12-16 08:42:00 -08:00
Alex "mcmonkey" Goodwin
8b0703b8fc Add a workaround patch for DPM2 a issue
DPM2 a and DPM2 a Karras samplers are both affected by an issue described by https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3483 and can be resolved by a workaround suggested by the k-diffusion author at https://github.com/crowsonkb/k-diffusion/issues/43#issuecomment-1305195666
2022-12-16 08:18:29 -08:00
Akiba
35e1017e3e
fix: xformers 2022-12-16 20:43:09 +08:00
camenduru
9fd457e21d
allow_credentials and allow_headers for api
from https://fastapi.tiangolo.com/tutorial/cors/
2022-12-15 21:57:48 +03:00
Jim Hays
c0355caefe Fix various typos 2022-12-14 21:01:32 -05:00
Yuval Aboulafia
957e15c464 Correct singleton comparisons 2022-12-14 20:59:33 +02:00
space-nuko
5f407ebd61 Fix comment 2022-12-13 14:32:26 -08:00
space-nuko
1fcb959514 Correctly restore default hypernetwork strength 2022-12-13 14:30:54 -08:00
space-nuko
9d5948e5f7 Correctly restore hypernetwork from hash 2022-12-13 14:25:16 -08:00
space-nuko
7077428209 Save hypernetwork hash in infotext 2022-12-13 13:05:40 -08:00
David Vorick
27c0504bc4
add support for prompts, negative prompts, and sampler-by-name in text file script 2022-12-13 12:03:16 -05:00
Fionn Langhans
cb64439f41
Bugfix: Use /usr/bin/env bash instead of just /bin/bash
The problem: some Linux distributions, like NixOS, use a
non-standard filesystem layout. This causes the bash program not to
be at /bin/bash (though /usr/bin/env is always there).
2022-12-12 21:27:46 +01:00
ThereforeGames
9bcf4165f8
Update requirements.txt 2022-12-11 18:09:18 -05:00
ThereforeGames
9170224d23
Delete venv/Lib/site-packages directory 2022-12-11 18:06:22 -05:00
ThereforeGames
2e8b5418e3
Improve color correction with luminosity blend 2022-12-11 18:03:36 -05:00
ThereforeGames
00ca6a6db4
Add files via upload 2022-12-11 17:59:59 -05:00
Dean Hopkins
960293d6b2
API endpoint to refresh checkpoints
API endpoint to refresh checkpoints
2022-12-11 19:16:44 +00:00
MrCheeze
ec0a48826f unconditionally set use_ema=False if value not specified (True never worked, and all configs except v1-inpainting-inference.yaml already correctly set it to False) 2022-12-11 11:18:34 -05:00
Dean van Dugteren
59c6511494
fix: fallback model_checkpoint if it's empty
This fixes the following error when SD attempts to start with a deleted checkpoint:

```
Traceback (most recent call last):
  File "D:\Web\stable-diffusion-webui\launch.py", line 295, in <module>
    start()
  File "D:\Web\stable-diffusion-webui\launch.py", line 290, in start
    webui.webui()
  File "D:\Web\stable-diffusion-webui\webui.py", line 132, in webui
    initialize()
  File "D:\Web\stable-diffusion-webui\webui.py", line 62, in initialize
    modules.sd_models.load_model()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 283, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
  File "D:\Web\stable-diffusion-webui\modules\sd_models.py", line 117, in select_checkpoint
    checkpoint_info = checkpoints_list.get(model_checkpoint, None)
TypeError: unhashable type: 'list'
```
2022-12-11 17:08:51 +01:00
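A hedged sketch of the fallback: if the stored setting is not a usable dictionary key (here, an unhashable empty list left behind by the deleted checkpoint), fall back to the first known checkpoint. Names are illustrative:

```python
def select_checkpoint_safe(checkpoints_list: dict, model_checkpoint):
    # `model_checkpoint` may be an unhashable leftover (e.g. []) if the
    # configured checkpoint was deleted; fall back in that case.
    if isinstance(model_checkpoint, str) and model_checkpoint in checkpoints_list:
        return checkpoints_list[model_checkpoint]
    return next(iter(checkpoints_list.values()), None)
```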
repligator
d5a9de1662
Update hints.js - DPM adaptive
Added mouse-over for DPM adaptive
2022-12-11 09:39:56 -05:00
Alex "mcmonkey" Goodwin
7b0a28f8ee fix hints file typo 2022-12-11 02:17:14 -08:00
Bwin4L
303df25cc2
Make the generated image count only count new images in the currently active tab 2022-12-10 22:58:06 +01:00
wywywywy
8bcdd50461 Add safetensors support to LDSR 2022-12-10 18:57:18 +00:00
MrCheeze
bd81a09eac fix support for 2.0 inpainting model while maintaining support for 1.5 inpainting model 2022-12-10 11:29:26 -05:00
MrCheeze
a1c8ad8828 unload depth model if medvram/lowvram enabled 2022-12-10 11:02:47 -05:00
AUTOMATIC1111
685f9631b5
Merge pull request #5586 from wywywywy/ldsr-improvements
LDSR improvements - cache / optimization / opt_channelslast
2022-12-10 18:22:39 +03:00
AUTOMATIC1111
0a81dd5225
Merge pull request #5585 from Bwin4L/master
Fix token counter color on dark theme
2022-12-10 18:20:28 +03:00
wywywywy
1581d5a167 Made device agnostic 2022-12-10 14:07:27 +00:00
wywywywy
6df316c881 LDSR cache / optimization / opt_channelslast 2022-12-10 13:54:29 +00:00
Bwin4L
718dbe5e82
Fix token counter color on dark theme 2022-12-10 14:51:11 +01:00
AUTOMATIC1111
7cea280a8f
Merge pull request #5447 from apolinario/patch-1
Fix WebUI not working inside of iframes
2022-12-10 16:30:16 +03:00
AUTOMATIC1111
94a35ca9d6
Merge pull request #5191 from aliencaocao/enable_checkpoint_switching_in_override_settings
Support changing checkpoint and vae through override_settings
2022-12-10 16:29:40 +03:00
AUTOMATIC
713c48ddd7 add an 'installed' tag to extensions 2022-12-10 15:05:22 +03:00
AUTOMATIC
991e2dcee9 remove NSFW filter and its dependency; if you still want it, find it in the extensions section 2022-12-10 14:54:16 +03:00
apolinario
37139d8aac No code repeat 2022-12-10 12:51:40 +01:00
AUTOMATIC
d06592267c use less javascript for this non-js-only implementation of the clear prompt button. 2022-12-10 13:46:18 +03:00
AUTOMATIC1111
2028aa06c0
Merge pull request #3198 from papuSpartan/master
Add Clear Prompt button to roll_col
2022-12-10 13:35:51 +03:00
AUTOMATIC1111
854bb0b56c
Merge pull request #5179 from kaneda2004/master
Update SD Upscaler to include user selectable Scale Factor
2022-12-10 13:28:45 +03:00
AUTOMATIC1111
89237852f4
Merge pull request #5119 from 0xb8/master
Atomically rename saved image to avoid race condition with other processes
2022-12-10 13:26:07 +03:00
AUTOMATIC1111
59c2dfe1e6
Merge pull request #5361 from rick68/patch-1
Update launch.py
2022-12-10 11:25:10 +03:00
AUTOMATIC
7dab7c9759 repair #5438 2022-12-10 11:20:43 +03:00
AUTOMATIC1111
9763623610
Merge pull request #5438 from DavidVorick/prompt-matrix-keep-random
allow randomized seeds in prompt_matrix
2022-12-10 11:10:23 +03:00
AUTOMATIC1111
cce306cb67
Merge pull request #5441 from timntorres/add-5433-avoid-sending-size-option
Add option to avoid sending size between interfaces.
2022-12-10 11:07:16 +03:00
papuSpartan
6387043fd2
Merge branch 'AUTOMATIC1111:master' into master 2022-12-10 00:02:39 -08:00
AUTOMATIC1111
1d01404c56
Merge pull request #5422 from piEsposito/master
add gradio queueing on ui by default to avoid timeout on client side when share=True
2022-12-10 10:12:37 +03:00
AUTOMATIC1111
ec5e072124
Merge pull request #4841 from R-N/vae-fix-none
Fix None option of VAE selector
2022-12-10 09:58:20 +03:00
AUTOMATIC
e11d0d43b1 change color of the valid prompt to black - back to how it was 2022-12-10 09:53:54 +03:00
AUTOMATIC
bab91b1279 add Noise multiplier option to infotext 2022-12-10 09:51:26 +03:00
AUTOMATIC1111
8ee1acc1e4
Merge pull request #5373 from mezotaken/master
add noise strength parameter similar to NAI
2022-12-10 09:36:24 +03:00
AUTOMATIC1111
e5e557fa5d
Merge pull request #5404 from szhublox/merger-ram-usage
Merger ram usage
2022-12-10 09:33:39 +03:00
AUTOMATIC1111
22f916df79
Merge pull request #5502 from Bwin4L/master
Add bracket checking functionality to the txt2img/img2img prompts
2022-12-10 09:32:11 +03:00
AUTOMATIC1111
6edeabb700
Merge pull request #5489 from brkirch/set-python_cmd
macOS install improvements
2022-12-10 09:30:13 +03:00
AUTOMATIC1111
3896242e9e
Merge pull request #5415 from wywywywy/reinstate-ddpm-v1
Reinstate DDPM V1 to LDSR
2022-12-10 09:20:38 +03:00
AUTOMATIC
505ec7e4d9 cleanup some unneeded imports for hijack files 2022-12-10 09:17:39 +03:00
AUTOMATIC
7dbfd8a7d8 do not replace entire unet for the resolution hack 2022-12-10 09:14:45 +03:00
AUTOMATIC1111
2641d1b83b
Merge pull request #4978 from aliencaocao/support_any_resolution
Patch UNet Forward to support resolutions that are not multiples of 64
2022-12-10 08:45:41 +03:00
AUTOMATIC1111
4d5fe3bfc0
Merge pull request #5555 from ywx9/master
Bug fix (a few lines in modules/api/api.py)
2022-12-10 08:27:44 +03:00
AUTOMATIC1111
a42a8e9112
Merge pull request #5547 from Ju1-js/master
Make "# settings changed" grammatically correct
2022-12-10 08:20:22 +03:00
AUTOMATIC1111
feeca1954e
Merge pull request #5542 from JaySmithWpg/depth2img
Depth2Img model support: resolves #5372, partially addresses #5011
2022-12-10 07:52:20 +03:00
ywx9
9539c2045a Bug fix 2022-12-09 23:03:06 +09:00
Ju1-js
ce04ba71b8 Make # settings changed message grammatically correct
Make the ": " in the settings changed message not show if 0 settings were changed.
"0 settings changed: ." -> "0 settings changed."
2022-12-08 22:47:45 -08:00
Jay Smith
1ed4f0e228 Depth2img model support 2022-12-08 20:50:08 -06:00
Andrew Ryan
358a8628f6 Add latent upscale option to img2img
Recently, the option to do latent upscale was added to txt2img highres
fix. This feature runs by scaling the latent sample of the image, and
then running a second pass of img2img.

But, in this edition of highres fix, the image and parameters cannot be
changed between the first pass and second pass. We might want to do a
fixup in img2img before doing the second pass, or might want to run the
second pass at a different resolution.

This change adds the option for img2img to perform its upscale in latent
space, rather than image space, giving very similar results to highres
fix with latent upscale.  The result is not exactly the same because
there is an additional latent -> decoder -> image -> encoder -> latent
round trip that won't happen in highres fix, but this conversion has
relatively small losses.
2022-12-08 07:09:09 +00:00
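A rough sketch of the latent-space path described above (not the webui's actual code): resizing the latent sample directly skips the extra decode/encode round trip of the image-space path:

```python
import torch
import torch.nn.functional as F

def upscale_latent(latent: torch.Tensor, scale: float) -> torch.Tensor:
    # latent has shape (batch, channels, height, width); interpolating it
    # directly keeps the whole operation in latent space.
    return F.interpolate(latent, scale_factor=scale, mode="bilinear")
```

The latent upscale modes appearing elsewhere in this log (e.g. "nearest-exact") are essentially different `mode=` arguments to the same interpolation.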
Bwin4L
02f566f684
Add bracket checking functionality to the txt2img/img2img prompts and negative prompts. 2022-12-07 16:14:05 +01:00
brkirch
8bc16b7582 Add psutil if using macOS to requirements.txt 2022-12-06 19:54:01 -05:00
brkirch
a7b1ab28d5 Set install_dir (per Ephil012's suggestion)
Set install_dir to a correct value for macOS. Thanks to @Ephil012 for pointing out this is needed.
2022-12-06 19:49:30 -05:00
apolinario
8eb638cdd3
style change 2022-12-06 15:14:22 +01:00
apolinario
1075819b16
Use shadowRoot if inside of an iframe and don't use it if outside
This makes sure it will work everywhere
2022-12-06 15:13:41 +01:00
Zac Liu
9a5c689c49
Merge pull request #4 from 920232796/master
add hash and fix undo hijack bug
2022-12-06 16:16:29 +08:00
zhaohu xing
965fc5ac5a delete a file
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-06 16:15:15 +08:00
zhaohu xing
5dcc22606d add hash and fix undo hijack bug
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-06 16:04:50 +08:00
Zac Liu
a25dfebeed
Merge pull request #3 from 920232796/master
fix device support for mps
update the support for SD2.0
2022-12-06 09:17:57 +08:00
Zac Liu
3ebf977a6e
Merge branch 'AUTOMATIC1111:master' into master 2022-12-06 09:16:15 +08:00
zhaohu xing
4929503258 fix bugs
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-06 09:03:55 +08:00
brkirch
760a7836eb Use python3.10 for python_cmd if available 2022-12-05 18:36:31 -05:00
apolinario
2eb5f103ab
Fix WebUI not working inside of iframes 2022-12-05 16:30:15 +01:00
timntorres
7057c72ae3 Add opt. to avoid sending size between interfaces. 2022-12-05 03:41:36 -08:00
David Vorick
fa6478796a
allow randomized seeds in prompt_matrix 2022-12-05 00:21:37 -05:00
Pi Esposito
fcf372e5d0
set default to avoid breaking stuff 2022-12-04 14:13:31 -03:00
Pi Esposito
12ade469c8
add queuing by default to avoid timeout on client side when share=True 2022-12-04 12:33:15 -03:00
Mackerel
681c450ecd extras.py: use as little RAM as possible, misc fixes
maximum of 2 models loaded at once. delete unneeded model before next
step. fix 'teritary' -> 'tertiary'. gracefully fail when "add
difference" is selected without a tertiary model
2022-12-04 10:31:06 -05:00
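A sketch of the two-models-at-a-time strategy, with an illustrative `load_fn`; the real merger also handles dtypes, key mismatches, and the tertiary "add difference" model:

```python
import gc

def merge_two_at_a_time(load_fn, primary, secondary, alpha=0.5):
    # Keep at most two state dicts in RAM: merge into the first one
    # in place, then free the second before anything else is loaded.
    a = load_fn(primary)
    b = load_fn(secondary)
    for key in a.keys() & b.keys():
        a[key] = a[key] * (1 - alpha) + b[key] * alpha
    del b
    gc.collect()
    return a
```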
wywywywy
a8ae263c69 Reinstate DDPM V1 to LDSR 2022-12-04 13:42:19 +00:00
AUTOMATIC
44c46f0ed3 make it possible to merge inpainting model with non-inpainting one 2022-12-04 12:30:44 +03:00
AUTOMATIC
8504db5170 fix #4459 breaking inpainting when the option is not specified. 2022-12-04 01:04:24 +03:00
AUTOMATIC
cefb5d6d7d fix accessing options when they are not ready for SwinIR. 2022-12-03 20:40:29 +03:00
AUTOMATIC1111
8fba733c09
Merge pull request #5286 from brkirch/launch-py-mac
Add macOS (Darwin) installation defaults
2022-12-03 18:51:22 +03:00
AUTOMATIC
60bd4d52a6 fix incorrect file extension filter for deepdanbooru models 2022-12-03 18:46:09 +03:00
AUTOMATIC
4b0dc206ed use modelloader for #4956 2022-12-03 18:45:51 +03:00
AUTOMATIC1111
2a649154ec
Merge pull request #4956 from TiagoSantos81/offline_BLIP
[CLIP interrogator] use local file, if available
2022-12-03 18:17:56 +03:00
AUTOMATIC
0d21624cee move #5216 to the extension 2022-12-03 18:16:19 +03:00
AUTOMATIC
89e1df013b Merge remote-tracking branch 'wywywywy/autoencoder-hijack' 2022-12-03 18:08:10 +03:00
AUTOMATIC
b6e5edd746 add built-in extension system
add support for adding upscalers in extensions
move LDSR, ScuNET and SwinIR to built-in extensions
2022-12-03 18:06:33 +03:00
Vladimir Repin
cf3e844d1d add noise strength parameter similar to NAI 2022-12-03 18:05:47 +03:00
AUTOMATIC
46b0d230e7 add comment for #4407 and remove seemingly unnecessary cudnn.enabled 2022-12-03 16:01:23 +03:00
AUTOMATIC
2651267e3a fix #4407 breaking UI entirely for card other than ones related to the PR 2022-12-03 15:57:52 +03:00
brkirch
5ec8981df4 Revert most launch.py changes, add mac user script
Adds an additional file to read environment variables from when webui.sh is run on macOS.
2022-12-03 06:44:59 -05:00
Hsiang-Cheng Yang
b7ef99634c
Update launch.py
fix a typo
2022-12-03 17:35:17 +08:00
AUTOMATIC1111
ce049c471b
Merge pull request #4368 from byzod/master
fix #3451 scripts ignores file format settings for grids
2022-12-03 10:31:08 +03:00
AUTOMATIC1111
681c0003df
Merge pull request #4407 from yoinked-h/patch-1
Fix issue with 16xx cards
2022-12-03 10:30:34 +03:00
AUTOMATIC1111
37fc1fa401
Merge pull request #5229 from lolsuffocate/master
Slightly improve page scroll jumping with focus
2022-12-03 10:23:24 +03:00
AUTOMATIC1111
d2e5b4edfa
Merge pull request #5251 from adieyal/bug/negative-prompt-infotext
Fixed incorrect negative prompt text in infotext
2022-12-03 10:21:43 +03:00
AUTOMATIC1111
5267414319
Merge pull request #4271 from MarkovInequality/racecond_fix
Fixes #4137 caused by race condition in training when VAE is unloaded
2022-12-03 10:20:17 +03:00
AUTOMATIC1111
c9a2cfdf2a
Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
AUTOMATIC1111
5cd5a672f7
Merge pull request #4459 from kavorite/color-sketch-inpainting
add `--gradio-inpaint-tool` and option to specify `color-sketch`
2022-12-03 10:06:27 +03:00
AUTOMATIC1111
a2feaa95fc
Merge pull request #5194 from brkirch/autocast-and-mps-randn-fixes
Use devices.autocast() and fix MPS randn issues
2022-12-03 09:58:08 +03:00
AUTOMATIC
c7af672186 more simple config option name plus mouseover hint for clip skip 2022-12-03 09:41:39 +03:00
AUTOMATIC1111
c67d8bca4f
Merge pull request #5304 from space-nuko/fix/clip-skip-application
Fix clip skip of 1 not being restored from prompts
2022-12-03 09:37:10 +03:00
AUTOMATIC1111
28c79b8f05
Merge pull request #5328 from jcowens/fix-typo
fix typo
2022-12-03 09:20:39 +03:00
AUTOMATIC1111
eb0b8f92bc
Merge pull request #5331 from smirkingface/openaimodel_fix
Fixed AttributeError where openaimodel is not found
2022-12-03 09:18:36 +03:00
AUTOMATIC1111
bab6ea6b22
Merge pull request #5340 from PhytoEpidemic/master
Fix divide by 0 error
2022-12-03 09:17:54 +03:00
AUTOMATIC
b2f17dd367 prevent include_init_images from being passed to StableDiffusionProcessingImg2Img in API #4989 2022-12-03 09:15:24 +03:00
AUTOMATIC1111
ae81b377d4
Merge pull request #5165 from klimaleksus/fix-sequential-vae
Make VAE step sequential to prevent VRAM spikes, will fix #3059, #2082, #2561, #3462
2022-12-03 08:29:56 +03:00
AUTOMATIC1111
c3777777d0
Merge pull request #5327 from smirkingface/master
Fixed safety checker for ckpt files written with pytorch >=1.13
2022-12-03 08:28:30 +03:00
PhytoEpidemic
119a945ef7
Fix divide by 0 error
Fixes the edge case of a 0 weight that occasionally pops up in specific situations; this was crashing the script.
2022-12-02 12:16:29 -06:00
SmirkingFace
da698ca92e Fixed AttributeError where openaimodel is not found 2022-12-02 13:47:02 +01:00
jcowens
99b19b1a8f fix typo 2022-12-02 02:53:26 -08:00
SmirkingFace
e461477869 Fixed safe.py for pytorch 1.13 ckpt files 2022-12-02 11:12:13 +01:00
zhaohu xing
9c86fb8cac fix bug
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-12-02 16:08:46 +08:00
space-nuko
be2e6de94a Fix clip skip of 1 not being restored from prompts 2022-12-01 11:34:16 -08:00
brkirch
bef36597cc Fix run as root flag
Even though -f enables running webui.sh as root, the -f flag will also be passed to launch.py, causing it to exit with a usage message. This adds a line to launch.py to remove the -f flag if present.

In addition to the above, all the letters in the command line arguments after each '-' were being processed for 'f' and "illegal option" was displayed for each letter that didn't match. Instead, this commit silences those errors and stops processing if the first flag doesn't start with '-f'.
2022-12-01 04:49:49 -05:00
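A minimal sketch of the launch.py side of this fix, assuming the flag is forwarded verbatim as `-f`:

```python
import sys

# Hedged sketch: strip the -f flag that webui.sh forwards along with the
# other arguments, so launch.py's argument parser never sees it.
sys.argv = [arg for arg in sys.argv if arg != "-f"]
```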
brkirch
79953e9b8b Add support for macOS (Darwin) in launch.py 2022-12-01 04:49:46 -05:00
brkirch
0fddb4a1c0 Rework MPS randn fix, add randn_like fix
torch.manual_seed() already sets a CPU generator, so there is no reason to create a CPU generator manually. torch.randn_like also needs a MPS fix for k-diffusion, but a torch hijack with randn_like already exists so it can also be used for that.
2022-11-30 10:33:42 -05:00
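A minimal sketch of the reworked approach, assuming an `mps` device is available:

```python
import torch

# Hedged sketch: torch.manual_seed() already seeds the default CPU
# generator, so sample the noise on the CPU and move the result to MPS.
def randn(seed, shape, device="mps"):
    torch.manual_seed(seed)
    return torch.randn(shape).to(device)
```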
brkirch
4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
brkirch
21effd629d Add workaround for using MPS with torchsde 2022-11-30 10:33:39 -05:00
Adi Eyal
a44994e2c9 Fixed incorrect negative prompt text in infotext
Previously only the first negative prompt in all_negative_prompts was
being used for infotext. This fixes that by selecting the index-th
negative prompt
2022-11-30 15:23:53 +02:00
Billy Cao
3a724e91a2 Change to steps of 8 2022-11-30 20:52:32 +08:00
Zac Liu
231fb72872
Merge pull request #2 from 920232796/master
fix bugs
2022-11-30 15:02:02 +08:00
zhaohu xing
52cc83d36b fix bugs
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-30 14:56:12 +08:00
Zac Liu
a39a57cb1f
Merge pull request #1 from 920232796/master
Add AltDiffusion
2022-11-30 11:14:04 +08:00
zhaohu xing
0831ab476c
Merge branch 'master' into master 2022-11-30 10:13:17 +08:00
Suffocate
c4067c5626 Restore scroll position on page if giving active element focus changes it 2022-11-30 01:11:01 +00:00
wywywywy
7193814cf7
Added purpose of this hijack to comments 2022-11-29 19:22:53 +00:00
wywywywy
36c3613d16
Add autoencoder to sd_hijack 2022-11-29 17:40:02 +00:00
wywywywy
241cbc4d2f
Hijack VQModelInterface back to AutoEncoder 2022-11-29 17:38:16 +00:00
AUTOMATIC
4b3c5bc24b Merge remote-tracking branch 'pattontim/safetensors' 2022-11-29 17:06:15 +03:00
Billy Cao
9a8678f61e Support changing checkpoint and vae through override_settings 2022-11-29 11:11:29 +08:00
zhaohu xing
ee3f5ea3ee delete old config file
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-29 10:30:19 +08:00
zhaohu xing
75c4511e6b add AltDiffusion to webui
Signed-off-by: zhaohu xing <920232796@qq.com>
2022-11-29 10:28:41 +08:00
brkirch
98ca437edf Refactor and instead check if mps is being used, not availability 2022-11-28 21:18:51 -05:00
kaneda2004
950d9c739e Update SD Upscaler to include user selectable Scale Factor 2022-11-28 12:29:49 -08:00
kaneda2004
91226829f8 Update SD Upscaler to include user selectable Scale Factor 2022-11-28 12:28:22 -08:00
kaneda2004
0202547696 Update SD Upscaler to include user selectable Scale Factor 2022-11-28 12:24:53 -08:00
klimaleksus
67efee33a6
Make VAE step sequential to prevent VRAM spikes 2022-11-28 16:29:43 +05:00
AUTOMATIC
0b5dcb3d7c fix an error that happens when you type into prompt while switching model, put queue stuff into separate file 2022-11-28 09:00:10 +03:00
AUTOMATIC
0376da180c make it possible to save nai model using safetensors 2022-11-28 08:39:59 +03:00
AUTOMATIC
bb11bee22a if the image on disk was deleted between being generated and the request being completed, use the temporary dir to store it for the browser 2022-11-27 23:14:13 +03:00
AUTOMATIC
aa12dfada0 fix the bug that makes it impossible to send images to other tabs 2022-11-27 23:04:42 +03:00
AUTOMATIC1111
39827a3998
Merge pull request #4688 from parasi22/resolve-embedding-name-in-filewords
resolve [name] after resolving [filewords] in training
2022-11-27 22:46:49 +03:00
uservar
9146a5884c
Better should hijack inpainting detection 2022-11-27 19:11:50 +00:00
AUTOMATIC1111
9e78d2c419
Merge pull request #4416 from Keavon/cors-regex
Add CORS-allow policy launch argument using regex
2022-11-27 18:50:12 +03:00
AUTOMATIC1111
9e8c4e2c7f
Merge pull request #4894 from morgil/progress-first-in-title
Move progress info to beginning of site title
2022-11-27 18:48:29 +03:00
AUTOMATIC
89d8804768 only run install.py for enabled extensions 2022-11-27 18:48:08 +03:00
AUTOMATIC1111
ef567b083c
Merge pull request #4919 from brkirch/deepbooru-fix
Fix support for devices other than CUDA in DeepBooru
2022-11-27 16:59:22 +03:00
AUTOMATIC1111
554787231a
Merge pull request #5117 from aliencaocao/fix_api_sampler_name
Fix api ignoring sampler_name settings
2022-11-27 16:51:47 +03:00
AUTOMATIC1111
8c8ad93bb5
Merge pull request #4635 from mezotaken/master
CI tests with github-actions and some improvements to testing
2022-11-27 16:40:26 +03:00
AUTOMATIC1111
b24aed0b69
Merge pull request #4960 from Hugo-Matias/master
fix null negative_prompt on get requests
2022-11-27 16:36:29 +03:00
AUTOMATIC
8c13f3a2a5 cherrypick from #4971 2022-11-27 16:35:35 +03:00
AUTOMATIC1111
c33b9a6da7
Merge pull request #4583 from NoCrypt/patch-1
Forcing HTTPS instead of HTTP for ngrok
2022-11-27 16:30:23 +03:00
AUTOMATIC
506d529d19 rework #5012 to also work for pictures dragged into the prompt and also add Clip skip + ENSD to parameters 2022-11-27 16:28:32 +03:00
cat
185ab3cbd1 Atomically rename saved image to avoid race condition with other processes. 2022-11-27 18:23:08 +05:00
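A minimal sketch of the atomic-rename pattern this commit describes; `image` stands in for a PIL image:

```python
import os

# Hedged sketch: write to a temporary name in the same directory, then
# os.replace() it into place so other processes never see a partial file.
def atomic_save(image, path):
    tmp_path = path + ".tmp"
    image.save(tmp_path)
    os.replace(tmp_path, path)  # single atomic rename on POSIX and Windows
```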
Billy Cao
06ada734c7 Prevent warning on sampler_index if sampler_name is being used 2022-11-27 21:19:47 +08:00
Billy Cao
3cf93de24f Fix sampler_name being ignored for API requests 2022-11-27 21:12:37 +08:00
AUTOMATIC1111
488f831d52
Merge pull request #5012 from Nandaka/master
Support NAI style exif in PNG Info for Send... buttons
2022-11-27 15:57:50 +03:00
AUTOMATIC1111
9ec0a41a58
Merge pull request #4977 from sena-nana/master
Fix API img2img not accepting bare base64 strings
2022-11-27 15:54:39 +03:00
AUTOMATIC
dac9b6f15d add safetensors support for model merging #4869 2022-11-27 15:51:29 +03:00
AUTOMATIC
6074175faa add safetensors to requirements 2022-11-27 14:46:40 +03:00
AUTOMATIC1111
f108782e30
Merge pull request #4930 from Narsil/allow_to_load_safetensors_file
Supporting `*.safetensors` format.
2022-11-27 14:36:55 +03:00
AUTOMATIC1111
a89d7f4f38
Merge pull request #4913 from dtlnor/deprecated-deepdanbooru-patch
Remove cmd args requirement for deepdanbooru
2022-11-27 14:19:32 +03:00
AUTOMATIC1111
88a01f94a8
Merge pull request #1904 from EternalNooblet/dev
Added a flag to run as root if needed
2022-11-27 14:17:44 +03:00
AUTOMATIC1111
eb08550108
Merge pull request #4663 from xucj98/draft
fix the model name error of Real-ESRGAN in the opts default value
2022-11-27 14:16:45 +03:00
AUTOMATIC1111
ca8c764af8
Merge pull request #4986 from mcmonkey4eva/add-model-name
add model_name pattern for saving
2022-11-27 13:58:34 +03:00
AUTOMATIC1111
8de897b3da
Merge pull request #5085 from MrCheeze/sd-2.0-automatic-2
no-half support for SD 2.0
2022-11-27 13:54:08 +03:00
AUTOMATIC1111
d46d73c3c2
Merge pull request #4899 from liamkerr/fix_gallery_index
Fixed issue with selected_gallery_index()
2022-11-27 13:53:38 +03:00
AUTOMATIC1111
01f2ed6844
Merge pull request #5065 from JaySmithWpg/vram-leak
#3449 - VRAM leak when switching to/from inpainting checkpoint
2022-11-27 13:52:14 +03:00
AUTOMATIC1111
151e2cc627
Merge pull request #4461 from brkirch/face-restoration-device-fix
Fix setting device for GFPGAN and CodeFormer
2022-11-27 13:48:25 +03:00
AUTOMATIC1111
cc90dcc933
Merge pull request #4918 from brkirch/pytorch-fixes
Fixes for PyTorch 1.12.1 when using MPS
2022-11-27 13:47:01 +03:00
AUTOMATIC
10923f9b3a calculate dictionary for sampler names only once 2022-11-27 13:43:10 +03:00
AUTOMATIC
40ca34b837 fix for broken sampler selection in img2img and xy plot #4860 #4909 2022-11-27 13:17:39 +03:00
AUTOMATIC
5b2c316890 eliminate duplicated code from #5095 2022-11-27 13:08:54 +03:00
AUTOMATIC1111
997ac57020
Merge pull request #5095 from mlmcgoogan/master
torch.cuda.empty_cache() defaults to cuda:0 device unless explicitly …
2022-11-27 12:56:02 +03:00
AUTOMATIC1111
d860b56c21
Merge pull request #4961 from uservar/DPM++SDE
Add DPM++ SDE sampler
2022-11-27 12:55:03 +03:00
AUTOMATIC1111
6df4945718
Merge branch 'master' into DPM++SDE 2022-11-27 12:54:45 +03:00
AUTOMATIC
b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 2022-11-27 12:19:59 +03:00
AUTOMATIC
b006382784 serve images from where they are saved instead of a temporary directory
add an option to choose a different temporary directory in the UI
add an option to cleanup the selected temporary directory at startup
2022-11-27 11:52:53 +03:00
Billy Cao
349f0461ec
Merge branch 'master' into support_any_resolution 2022-11-27 12:39:31 +08:00
Matthew McGoogan
c67c40f983 torch.cuda.empty_cache() defaults to cuda:0 device unless explicitly set otherwise first. Updating torch_gc() to use the device set by --device-id if specified to avoid OOM edge cases on multi-GPU systems. 2022-11-26 23:25:16 +00:00
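A minimal sketch of a device-aware `torch_gc()`, assuming `device` carries the value of `--device-id`:

```python
import torch

# Hedged sketch: without the context manager, empty_cache() operates on
# cuda:0 no matter which GPU was selected with --device-id.
def torch_gc(device="cuda:0"):
    if torch.cuda.is_available():
        with torch.cuda.device(torch.device(device)):
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()
```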
MrCheeze
1e506657e1 no-half support for SD 2.0 2022-11-26 13:28:44 -05:00
AUTOMATIC
b5050ad207 make SD2 compatible with --medvram setting 2022-11-26 20:52:16 +03:00
flamelaw
755df94b2a set TI AdamW default weight decay to 0 2022-11-27 00:35:44 +09:00
AUTOMATIC
64c7b7975c restore hypernetworks to seemingly working state 2022-11-26 16:45:57 +03:00
AUTOMATIC
1123f52cad add 1024 module for hypernets for the new open clip 2022-11-26 16:37:37 +03:00
AUTOMATIC
ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
Jay Smith
c833d5bfaa fixes #3449 - VRAM leak when switching to/from inpainting model 2022-11-25 20:15:11 -06:00
xucj98
263b323de1
Merge branch 'AUTOMATIC1111:master' into draft 2022-11-25 17:07:00 +08:00
Tiago F. Santos
a2ae5a6555 [interrogator] mkdir check 2022-11-24 13:04:45 +00:00
Sena
fcd75bd874
Fix other apis 2022-11-24 13:10:40 +08:00
Nandaka
904121fecc Support NAI exif for PNG Info 2022-11-24 02:39:09 +00:00
Alex "mcmonkey" Goodwin
ffcbbcf385 add filename sanitization
Probably redundant, considering the model name *is* a filename, but I suppose better safe than sorry.
2022-11-23 06:44:20 -08:00
Alex "mcmonkey" Goodwin
6001684be3 add model_name pattern for saving 2022-11-23 06:35:44 -08:00
flamelaw
1bd57cc979 last_layer_dropout default to False 2022-11-23 20:21:52 +09:00
flamelaw
d2c97fc3fe fix dropout, implement train/eval mode 2022-11-23 20:00:00 +09:00
Billy Cao
adb6cb7619 Patch UNet Forward to support resolutions that are not multiples of 64
Also modified the UI to no longer step in increments of 64
2022-11-23 18:11:24 +08:00
Sena
75b67eebf2
Fix bare base64 strings not being accepted 2022-11-23 17:43:58 +08:00
flamelaw
89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
Tim Patton
ac90cf38c6 safetensors optional for now 2022-11-22 10:13:07 -05:00
uservar
45fd785436
Update launch.py 2022-11-22 14:52:16 +00:00
uservar
47ce73fbbf
Update requirements_versions.txt 2022-11-22 14:26:09 +00:00
uservar
3c3c46be5f
Update requirements.txt 2022-11-22 14:25:39 +00:00
uservar
0a01f50891
Add DPM++ SDE sampler 2022-11-22 14:24:50 +00:00
uservar
6ecf72b6f7
Update k-diffusion to Release 0.0.11 2022-11-22 14:24:10 +00:00
Rogerooo
c27a973c82 fix null negative_prompt on get requests
Small typo that causes a bug when returning negative prompts from the get request.
2022-11-22 14:02:59 +00:00
Tiago F. Santos
745f1e8f80 [CLIP interrogator] use local file, if available 2022-11-22 12:48:25 +00:00
Tim Patton
210cb4c128 Use GPU for loading safetensors, disable export 2022-11-21 16:40:18 -05:00
Tim Patton
e134b74ce9 Ignore safetensor files 2022-11-21 10:58:57 -05:00
Tim Patton
162fef394f Patch UI line endings 2022-11-21 10:50:57 -05:00
Nicolas Patry
0efffbb407 Supporting *.safetensors format.
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
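A minimal sketch of extension-based loading, assuming the `safetensors` package; `.safetensors` files hold raw tensors only, so they avoid the arbitrary-code-execution risk of unpickling:

```python
import torch
from safetensors.torch import load_file

# Hedged sketch of a loader that prefers the safer format when present.
def read_state_dict(path, device="cpu"):
    if path.endswith(".safetensors"):
        return load_file(path, device=device)
    return torch.load(path, map_location=device)
```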
brkirch
563ea3f6ff Change .cuda() to .to(devices.device) 2022-11-21 02:56:00 -05:00
brkirch
e247b7400a Add fixes for PyTorch 1.12.1
Fix typo "MasOS" -> "macOS"

If MPS is available and PyTorch is an earlier version than 1.13:
* Monkey patch torch.Tensor.to to ensure all tensors sent to MPS are contiguous
* Monkey patch torch.nn.functional.layer_norm to ensure input tensor is contiguous (required for this program to work with MPS on unmodified PyTorch 1.12.1)
2022-11-21 02:07:19 -05:00
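A minimal sketch of the layer_norm half of the patch, under the stated assumption of MPS on PyTorch earlier than 1.13:

```python
import torch

# Hedged sketch: wrap the original layer_norm and force the input
# contiguous before the call when it lives on an MPS device.
_orig_layer_norm = torch.nn.functional.layer_norm

def _patched_layer_norm(input, *args, **kwargs):
    if input.device.type == "mps":
        input = input.contiguous()
    return _orig_layer_norm(input, *args, **kwargs)

torch.nn.functional.layer_norm = _patched_layer_norm
```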
dtlnor
9ae30b3450 remove cmd args requirement for deepdanbooru 2022-11-21 12:53:55 +09:00
flamelaw
5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
Liam
927d24ef82 made selected_gallery_index query selectors more restrictive 2022-11-20 13:52:18 -05:00
Tim Patton
637815632f Generalize SD torch load/save to implement safetensor merging compat 2022-11-20 13:36:05 -05:00
Jonas Böer
471189743a
Move progress info to beginning of title
because who has so few tabs open that they can see the end of a tab name?
2022-11-20 15:57:43 +01:00
AUTOMATIC1111
828438b4a1
Merge pull request #4120 from aliencaocao/enable-override-hypernet
Enable override_settings to take effect for hypernetworks
2022-11-20 16:49:06 +03:00
AUTOMATIC
c81d440d87 moved deepdanbooru to pure pytorch implementation 2022-11-20 16:39:20 +03:00
flamelaw
2d22d72cda fix random sampling with pin_memory 2022-11-20 16:14:27 +09:00
flamelaw
a4a5735d0a remove unnecessary comment 2022-11-20 12:38:18 +09:00
flamelaw
bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
Tim Patton
ac7ecd2d84 Label and load SD .safetensors model files 2022-11-19 14:49:22 -05:00
Keavon Chambers
2f90496b19
Merge branch 'master' into cors-regex 2022-11-19 10:34:31 -08:00
AUTOMATIC
47a44c7e42 revert change to webui-user.bat 2022-11-19 21:06:58 +03:00
AUTOMATIC
3596af0749 Add API for scripts to add elements anywhere in UI. 2022-11-19 19:10:28 +03:00
AUTOMATIC1111
ccd73fc186
Merge pull request #4717 from papuSpartan/security
Add --server-name to the list of arguments considered insecure
2022-11-19 15:31:09 +03:00
AUTOMATIC1111
32718834f3
Merge pull request #4664 from piraka9011/fix-env-names
Fix env var name typos
2022-11-19 15:30:32 +03:00
AUTOMATIC1111
41e242b220
Merge pull request #4733 from MaikoTan/api-authorization
feat: add http basic authentication for api
2022-11-19 15:20:03 +03:00
AUTOMATIC
5a6387e189 make it possible to change models etc by editing options using API 2022-11-19 15:15:24 +03:00
Maiko Tan
336c341a7c
Merge branch 'master' into api-authorization 2022-11-19 20:13:07 +08:00
AUTOMATIC1111
84a6f211d4
Merge pull request #4358 from bamarillo/master
[API][Feature] Add Skip endpoint
2022-11-19 14:50:02 +03:00
Vladimir Repin
14dfede8dd
Minor fixes
Remove unused test completely
Change job name 
Don't use empty.pt as CLIP weights - it won't work.
Use latest version of actions/cache
2022-11-19 14:15:10 +03:00
AUTOMATIC1111
4b22ec4138
Merge pull request #4759 from dtlnor/kill-gradio-progress-bar
Hide Gradio progress again
2022-11-19 13:49:21 +03:00
AUTOMATIC
413c077969 prevent StableDiffusionProcessingImg2Img changing image_mask field as an alternative solution to #4765 2022-11-19 13:48:59 +03:00
AUTOMATIC1111
89daf778fb
Merge pull request #4812 from space-nuko/feature/interrupt-preprocessing
Add interrupt button to preprocessing
2022-11-19 13:26:33 +03:00
AUTOMATIC1111
fe03f9903c
Merge pull request #4819 from killfrenzy96/master
Cleanly undo circular hijack to fix tiling getting stuck on #4818
2022-11-19 13:26:03 +03:00
AUTOMATIC
617c5b486f make it possible for StableDiffusionProcessing to accept multiple different negative prompts in a batch 2022-11-19 13:23:25 +03:00
AUTOMATIC1111
e35d8b493f
Merge pull request #4778 from leppie/fix_unbounded_prompt_growth
Fix unbounded prompt growth/determinism in scripts that loop
2022-11-19 12:52:55 +03:00
AUTOMATIC
0d702930b0 renamed Inpainting strength infotext to Conditional mask weight, made it only appear if using inpainting model, made it possible to read the setting from it using the blue arrow button 2022-11-19 12:47:52 +03:00
Muhammad Rizqi Nur
8662b5e57f Merge branch 'a1111' into vae-fix-none 2022-11-19 16:38:21 +07:00
AUTOMATIC1111
ff35ae9abb
Merge pull request #4679 from Eugenii10/inpaint-strength-to-infotext
Add 'Inpainting strength' to the 'generation_params' of 'infotext' (params.txt or png chunks)
2022-11-19 12:24:44 +03:00
AUTOMATIC1111
aee611adb8
Merge pull request #4646 from mrauhu/force-update-extensions
Fix: `error: Your local changes to the following files would be overwritten by merge` when trying to update extensions in WSL2 Docker
2022-11-19 12:22:51 +03:00
AUTOMATIC1111
5bfef6e063
Merge pull request #4844 from R-N/vae-misc
Remove no longer necessary code from VAE selector, fix #4651
2022-11-19 12:21:22 +03:00
AUTOMATIC
cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur
45dca0562e Merge branch 'a1111' into vae-fix-none 2022-11-19 15:21:00 +07:00
Muhammad Rizqi Nur
f1bdf2b15f Merge branch 'a1111' into vae-misc 2022-11-19 15:20:07 +07:00
AUTOMATIC
d9fd4525a5 change text for sd_vae_as_default that makes more sense to me 2022-11-19 11:09:44 +03:00
AUTOMATIC1111
3951806058
Merge pull request #4842 from R-N/vae-as-default
Option to use selected VAE as default fallback instead of primary option
2022-11-19 10:59:42 +03:00
AUTOMATIC1111
0d098e4656
Merge pull request #4527 from d8ahazard/Accelerate
Optional Accelerate Launch
2022-11-19 10:49:44 +03:00
Muhammad Rizqi Nur
c8f7b5cdd7 Misc
Misc
2022-11-19 12:04:12 +07:00
Muhammad Rizqi Nur
271fd2d700 More verbose messages 2022-11-19 12:02:50 +07:00
Muhammad Rizqi Nur
2c5ca706a7 Remove no longer necessary parts and add vae_file safeguard 2022-11-19 12:01:41 +07:00
Muhammad Rizqi Nur
0663706d44 Option to use selected VAE as default fallback instead of primary option 2022-11-19 11:49:06 +07:00
Muhammad Rizqi Nur
028b67b635 Use underscore naming for "private" functions in sd_vae 2022-11-19 11:47:54 +07:00
Muhammad Rizqi Nur
9fdc343dca Fix model caching requiring deepcopy 2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur
c7be83bf02 Misc
Misc
2022-11-19 11:44:37 +07:00
Muhammad Rizqi Nur
abc1e79a5d Fix base VAE caching was done after loading VAE, also add safeguard 2022-11-19 11:41:41 +07:00
Muhammad Rizqi Nur
8ab4927452 Fix model wasn't restored even when choosing "None" 2022-11-19 11:41:21 +07:00
Vladimir Repin
694611cbd8
Apply suggestions from code review
Use last version of setup-python action
Remove unnecessary multicommand from run
Remove current directory from artifact paths

Co-authored-by: Margen67 <Margen67@users.noreply.github.com>
2022-11-19 00:56:08 +03:00
killfrenzy96
17e4432820 cleanly undo circular hijack #4818 2022-11-18 21:22:55 +11:00
space-nuko
c8c40c8a64 Add interrupt button to preprocessing 2022-11-17 18:05:29 -08:00
brkirch
a5106a7cdc Remove extra .to(device) 2022-11-17 00:08:45 -05:00
brkirch
abfa22c16f Revert "MPS Upscalers Fix"
This reverts commit 768b95394a.
2022-11-17 00:08:21 -05:00
Llewellyn Pritchard
9bbe1e3c2e Fix unbounded prompt growth in scripts that loop 2022-11-16 19:19:00 +02:00
dtlnor
72b52fbb77 add css override 2022-11-16 13:08:03 +09:00
Maiko Sinkyaet Tan
8f2ff861d3
feat: add http basic authentication for api 2022-11-15 16:12:34 +08:00
papuSpartan
3405acc6a4 Give --server-name priority over --listen and add check for --server-name in addition to --share and --listen 2022-11-14 14:07:13 -06:00
Vladimir Repin
a071079000 Use empty model as CLIP weights 2022-11-14 19:22:06 +03:00
Vladimir Repin
4a35c3744c remove test requiring codeformers 2022-11-14 18:57:14 +03:00
Vladimir Repin
9e4f68acad
Stop exporting cl args and upload stdout and stderr as artifacts 2022-11-14 18:40:15 +03:00
Vladimir Repin
5808241dd7 Use 80 port on launch 2022-11-14 15:14:52 +03:00
Vladimir Repin
7416ac8d3c Use localhost with 80 port, count errors as well 2022-11-14 14:55:39 +03:00
Vladimir Repin
0646040667 Propagate test error and try it without localhost 2022-11-14 14:36:07 +03:00
Vladimir Repin
3ffc1c6cee skip cuda test 2022-11-14 13:45:21 +03:00
Vladimir Repin
93d6c0209a Tests separated for github-actions CI 2022-11-14 13:39:22 +03:00
KEV
40ae95d532 Fix retrieving value for 'x/y plot' script. 2022-11-14 18:05:59 +10:00
parasi
9a1aff645a resolve [name] after resolving [filewords] in training 2022-11-13 13:49:28 -06:00
Ryan Voots
671c0e42b4 Fix docker tmp/ and extensions/ handling for docker. might also work for symlinks 2022-11-13 13:39:41 -05:00
KEV
6fa891b934 Add 'Inpainting strength' to the 'generation_params' dictionary of 'infotext', which is saved into 'params.txt' or the PNG chunks.
The value appears only if 'Denoising strength' appears too.
2022-11-14 00:25:38 +10:00
Anas Abou Allaban
084cf04390 Fix env var names 2022-11-12 22:41:22 -05:00
Xu Cuijie
d20dbe47e0 fix the model name error of Real-ESRGAN in the opts default value 2022-11-13 10:31:03 +08:00
Mrau Hu
d671d1d45d Fix: `error: Your local changes to the following files would be overwritten by merge` when running the pull() method,
because WSL2 Docker sets 755 file permissions instead of 644, which results in the error.

Updated `Extension` class: replaced `pull()` with `fetch_and_reset_hard()` method.

Updated `apply_and_restart()` function: replaced `ext.pull()` with `ext.fetch_and_reset_hard()` function.
2022-11-12 21:44:42 +03:00
Vladimir Repin
007f4f7314 Tests cleaned up 2022-11-12 15:12:15 +03:00
brkirch
f4a488f585 Set device for facelib/facexlib and gfpgan
* FaceXLib/FaceLib doesn't pass the device argument to RetinaFace but instead chooses one itself and sets it to a global - in order to use a device other than its internally chosen default it is necessary to manually replace the default value
* The GFPGAN constructor needs the device argument to work with MPS or a CUDA device ID that differs from the default
2022-11-12 03:34:13 -05:00
AUTOMATIC
98947d173e run installers for newly installed extensions 2022-11-12 11:11:47 +03:00
AUTOMATIC
a1a376331c make existing script loading and new preload code use same code for loading modules
limit extension preload scripts to just one file named preload.py
2022-11-12 10:56:06 +03:00
AUTOMATIC1111
e5690d0bf2
Merge pull request #4488 from d8ahazard/ExtensionPreload
Add option to preload extensions
2022-11-12 10:29:15 +03:00
AUTOMATIC
0ab0a50f9a change formatting to match the main program in devices.py 2022-11-12 10:00:49 +03:00
AUTOMATIC
c62d17aee3 use the new devices.has_mps() function in register_buffer for DDIM/PLMS fix for OSX 2022-11-12 10:00:22 +03:00
AUTOMATIC1111
526f0aa556
Merge pull request #4623 from fumiama/mps
Fix wrong mps fallback
2022-11-12 09:51:33 +03:00
源文雨
1130d5df66
Update devices.py 2022-11-12 11:09:28 +08:00
源文雨
76ab31e188 Fix wrong mps selection below macOS 12.3 2022-11-12 11:02:40 +08:00
AUTOMATIC
7ba3923d5b move DDIM/PLMS fix for OSX out of the file with inpainting code. 2022-11-11 18:20:18 +03:00
AUTOMATIC1111
bb2e2c82ce
Merge pull request #4233 from thesved/patch-1
Make DDIM and PLMS work on Mac OS
2022-11-11 18:01:58 +03:00
AUTOMATIC1111
b8a2e38758
Merge pull request #4543 from tong-zeng/master
Fix a bug in list_files_with_name
2022-11-11 18:00:13 +03:00
NoCrypt
6165f07e74
Merge branch 'master' into patch-1 2022-11-11 21:14:10 +07:00
AUTOMATIC1111
e666220ee4
Merge pull request #4514 from cluder/4448_fix_ckpt_cache
#4448 fix checkpoint cache usage
2022-11-11 16:04:17 +03:00
AUTOMATIC1111
6a2044f566
Merge pull request #4563 from JingShing/master
Add username and password in ngrok
2022-11-11 15:57:24 +03:00
AUTOMATIC1111
ec95ced6fb
Merge pull request #4573 from liamkerr/4415-update-generation-info
4415 update generation info
2022-11-11 15:51:14 +03:00
AUTOMATIC1111
73776907ec
Merge pull request #4117 from TinkTheBoush/master
Adding optional tag shuffling for training
2022-11-11 15:46:20 +03:00
AUTOMATIC1111
6585cba200
Merge pull request #4395 from snowmeow2/master
Add DeepDanbooru to the interrogate API
2022-11-11 15:41:30 +03:00
KyuSeok Jung
a1e271207d
Update dataset.py 2022-11-11 10:56:53 +09:00
NoCrypt
c556d34523
Forcing HTTPS instead of HTTP for ngrok
For security reasons.
2022-11-11 08:54:51 +07:00
KyuSeok Jung
b19af67d29
Update dataset.py 2022-11-11 10:54:19 +09:00
KyuSeok Jung
0959907f87
adding tag dropout option 2022-11-11 10:31:14 +09:00
KyuSeok Jung
13a2f1dca3
adding tag drop out option 2022-11-11 10:29:55 +09:00
KyuSeok Jung
6f8a807fe4
Update shared.py 2022-11-11 09:22:49 +09:00
Liam
b98740129c added event listener for the image gallery modal; moved js to separate file 2022-11-10 13:14:04 -05:00
d8ahazard
ac6fd2a5d9 Fix accelerate check when spaces in path 2022-11-10 09:39:43 -06:00
JingShing
1a01191e27
Add username and password in ngrok. 2022-11-10 20:42:41 +08:00
JingShing
2505f39e28
Add username and password in ngrok. 2022-11-10 20:39:20 +08:00
Tong Zeng
893191cab2 fix a bug in list_files_with_name 2022-11-10 10:34:03 +08:00
Liam
81f2575df9 updating the displayed generation info when user clicks images in the gallery. feature request 4415 2022-11-09 15:24:31 -05:00
d8ahazard
0a54bd3395 Optional Accelerate Launch 2022-11-09 11:15:17 -06:00
Muhammad Rizqi Nur
d85c2cb2d5 Merge branch 'master' into gradient-clipping 2022-11-09 16:29:37 +07:00
cluder
eebf49592a restore #4035 behavior
- if checkpoint cache is set to 1, keep 2 models in cache (current +1 more)
2022-11-09 07:17:09 +01:00
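A hypothetical sketch of the restored cache behavior; the names below are illustrative, not necessarily the repository's actual ones:

```python
import collections

# With the cache option set to N, keep the current model plus N more,
# evicting the oldest entry once the cache overflows.
checkpoints_loaded = collections.OrderedDict()

def cache_checkpoint(key, state_dict, cache_size=1):
    checkpoints_loaded[key] = state_dict
    while len(checkpoints_loaded) > cache_size + 1:  # current + N cached
        checkpoints_loaded.popitem(last=False)       # drop the oldest
```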
cluder
f37cce0e3d Merge branch 'master' of https://github.com/cluder/stable-diffusion-webui into 4448_fix_ckpt_cache 2022-11-09 05:50:43 +01:00
cluder
3b51d239ac - do not use ckpt cache if disabled
- cache model after it has been loaded from file
2022-11-09 05:43:57 +01:00
kavorite
59bb1d36ea blur mask with color-sketch + add paint transparency slider 2022-11-08 22:06:29 -05:00
pepe10-gpu
62e9fec3df
actual better fix
thanks C43H66N12O12S2
2022-11-08 15:19:09 -08:00
d8ahazard
cfcadeae9a Add option to preload extensions
By creating a file called "preload.py" in an extension folder and declaring a preload(parser) method, we can add extra command-line args for an extension.
2022-11-08 10:03:56 -06:00
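The mechanism invites a tiny example; the extension folder and option name here are hypothetical:

```python
# extensions/my-extension/preload.py (hypothetical extension folder)
def preload(parser):
    # `parser` is the argparse.ArgumentParser used for the webui command line
    parser.add_argument(
        "--my-extension-option",
        type=str,
        default=None,
        help="extra command-line option contributed by this extension",
    )
```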
kavorite
c34542a483 add new color-sketch state to img2img invocation 2022-11-08 03:25:59 -05:00
AUTOMATIC
ac08562854 Remove old localizations from the main repo.
Where are they now? Here: https://github.com/AUTOMATIC1111/stable-diffusion-webui-old-localizations
2022-11-08 10:01:27 +03:00
AUTOMATIC
1610b32584 add callback for creating a tab in train UI 2022-11-08 08:38:10 +03:00
AUTOMATIC
8011be33c3 move functions out of main body for image preprocessing for easier hijacking 2022-11-08 08:37:05 +03:00
AUTOMATIC
c5334fc56b fix javascript duplication bug after pressing the restart UI button 2022-11-08 08:35:01 +03:00
pepe10-gpu
29eff4a194
terrible hack 2022-11-07 18:06:48 -08:00
kavorite
9ed4a126bd add gradio-inpaint-tool; color-sketch 2022-11-07 19:58:49 -05:00
Muhammad Rizqi Nur
cabd4e3b3b Merge branch 'master' into gradient-clipping 2022-11-07 22:43:38 +07:00
Keavon Chambers
a258fd60db Add CORS-allow policy launch argument using regex 2022-11-07 00:13:58 -08:00
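A minimal sketch of a regex-based CORS-allow policy, assuming `app` is the FastAPI instance the webui mounts Gradio on; the pattern is a placeholder:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origin_regex=r"https://.*\.example\.com",  # hypothetical pattern
    allow_methods=["*"],
    allow_headers=["*"],
)
```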
papuSpartan
00ebc26c4e
Merge branch 'AUTOMATIC1111:master' into master 2022-11-06 21:05:28 -06:00
pepe10-gpu
cd6c55c1ab
16xx card fix
cudnn
2022-11-06 17:05:51 -08:00
AUTOMATIC
804d9fb83d bump gradio to 3.9 2022-11-06 23:22:36 +03:00
snowmeow2
67c8e11be7 Adding DeepDanbooru to the interrogation API 2022-11-07 02:32:06 +08:00
AUTOMATIC
32c0eab895 load all settings in one call instead of one by one when the page loads 2022-11-06 14:39:41 +03:00
Billy Cao
c13e234444
Merge branch 'master' into enable-override-hypernet 2022-11-06 16:33:08 +08:00
Billy Cao
55ca040958 Resolve conflict 2022-11-06 16:31:44 +08:00
AUTOMATIC1111
5302e2cdd4
Merge pull request #3810 from royshil/roy.add_simple_interrogate_api
Add a barebones CLIP interrogate API endpoint
2022-11-06 11:28:00 +03:00
AUTOMATIC1111
07d1bd4267
Merge branch 'master' into roy.add_simple_interrogate_api 2022-11-06 11:27:54 +03:00
AUTOMATIC
6e4de5b442 add load_with_extra function for modules to load checkpoints with extended whitelist 2022-11-06 11:20:23 +03:00
AUTOMATIC
9cd1a66648 remove localization people from CODEOWNERS, add a note 2022-11-06 10:37:08 +03:00
AUTOMATIC
e5b4e3f820 add tags to extensions, and ability to filter out tags
list changed Settings keys in UI
do not print VRAM/etc stats everywhere but in calls that use GPU
2022-11-06 10:12:53 +03:00
AUTOMATIC
a2a1a2f727 add ability to create extensions that add localizations 2022-11-06 09:02:35 +03:00
AUTOMATIC1111
ea5b90b3b3
Merge pull request #4371 from hotdogee/master
Fixes #800 #1562 #2075 #2304 #2931 LDSR upscaler producing black bars
2022-11-06 08:18:16 +03:00
Han Lin
6603f63b7b Fixes LDSR upscaler producing black bars 2022-11-06 11:08:20 +08:00
byzod
9cc48fee48
fix scripts ignoring file format settings for grids 2022-11-06 10:15:02 +08:00
byzod
4796db85b5
ignores file format settings for grids 2022-11-06 10:12:57 +08:00
Eugenio Buffo
2f47724b73
Merge pull request #4342 from Harvester62/Italian
Italian localization - Additions and updates
2022-11-06 00:54:09 +01:00
Bruno Seoane
7f63980e47 Remove unnecessary return 2022-11-05 19:09:13 -03:00
Bruno Seoane
3c72055c22 Add skip endpoint 2022-11-05 19:05:15 -03:00
Bruno Seoane
0ebf66b575 Fix set config endpoint 2022-11-05 19:00:47 -03:00
Bruno Seoane
99b05addb1 Fix options endpoint not showing the full list of options 2022-11-05 18:46:47 -03:00
Bruno Seoane
59ec427dff Merge remote-tracking branch 'upstream/master' 2022-11-05 15:56:41 -03:00
KyuSeok Jung
9b7289c349
Merge branch 'master' into master 2022-11-06 03:08:45 +09:00
Riccardo Giovanetti
0b028c84ab
Italian localization - Additions and updates
Updated localization with the latest version of these Scripts/Extensions:

SD-Chad - Stable Diffusion Aesthetic Scorer (added)
AlphaCanvas
StylePile
Alternate Sampler Noise Schedules
SD Latent Mirroring (new)
SD Dynamic Prompts
Dataset Tag Editor
Depth Maps (new)
Improved prompt matrix (new)

New Extensions:

Auto-sd-paint-ext (new)
training-picker (new)
Unprompted
NovelAI-2-local-prompt (new)
tokenizer (new)

Hope there aren't too many mistakes or wrong translations; if there are, let me know.
2022-11-05 18:12:06 +01:00
Dynamic
b08698a09a
Merge pull request #4337 from 36DB/kr-localization
Update ko_KR.json
2022-11-06 00:38:00 +09:00
Dynamic
29f48b7803
Update ko_KR.json
New setting option and some additional extension index strings
2022-11-06 00:37:37 +09:00
AUTOMATIC
159475e072 tweak names a bit for new samplers 2022-11-05 18:32:22 +03:00
AUTOMATIC1111
bbfdfa52c5
Merge pull request #4304 from hentailord85ez/k-diffusion-update
Add support for the new DPM-Solver++ samplers added to k-diffusion
2022-11-05 18:28:25 +03:00
AUTOMATIC1111
2e604233fd
Merge pull request #4329 from Blucknote/patch-1
Python 3.8 typing compatibility
2022-11-05 17:23:22 +03:00
AUTOMATIC1111
a546e2a8c9
Merge pull request #4173 from evshiron/fix/encode-pnginfo
add back png info in image api
2022-11-05 17:20:54 +03:00
evshiron
b6cfaaa20b Merge branch 'master' into fix/encode-pnginfo 2022-11-05 22:12:20 +08:00
AUTOMATIC
62e3d71aa7 rework the code to not use the walrus operator because colab's 3.7 does not support it 2022-11-05 17:09:42 +03:00
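The shape of such a rewrite, for illustration:

```python
import re

pattern = re.compile(r"\d+")

# Python 3.8 walrus form, unsupported on Colab's 3.7:
#     if (m := pattern.search(line)) is not None:
#         return m.group()
def first_number(line):
    m = pattern.search(line)
    if m is not None:
        return m.group()
    return None
```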
Evgeniy
a170e3d222
Python 3.8 typing compatibility
Solves problems with

```
Traceback (most recent call last):
  File "webui.py", line 201, in <module>
    webui()
  File "webui.py", line 178, in webui
    create_api(app)
  File "webui.py", line 117, in create_api
    from modules.api.api import Api
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\api.py", line 9, in <module>
    from modules.api.models import *
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\models.py", line 194, in <module>
    class SamplerItem(BaseModel):
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\models.py", line 196, in SamplerItem
    aliases: list[str] = Field(title="Aliases")
TypeError: 'type' object is not subscriptable
```

and

```
Traceback (most recent call last):
  File "webui.py", line 201, in <module>
    webui()
  File "webui.py", line 178, in webui
    create_api(app)
  File "webui.py", line 117, in create_api
    from modules.api.api import Api
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\api.py", line 9, in <module>
    from modules.api.models import *
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\models.py", line 194, in <module>
    class SamplerItem(BaseModel):
  File "H:\AIart\stable-diffusion\stable-diffusion-webui\modules\api\models.py", line 197, in SamplerItem
    options: dict[str, str] = Field(title="Options")
TypeError: 'type' object is not subscriptable
```
2022-11-05 17:06:56 +03:00
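The tracebacks point straight at the fix; a sketch of the 3.8-compatible form (field set abridged):

```python
from typing import Dict, List
from pydantic import BaseModel, Field

# Built-in generics like list[str] are only subscriptable from Python 3.9,
# so the typing aliases are used instead.
class SamplerItem(BaseModel):
    aliases: List[str] = Field(title="Aliases")
    options: Dict[str, str] = Field(title="Options")
```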
AUTOMATIC1111
b8f2dfed3c
Merge pull request #4297 from AUTOMATIC1111/aria1th-patch-1
Fix errors from commit f2b697 with --hide-ui-dir-config
2022-11-05 16:22:50 +03:00
AUTOMATIC1111
994136b9c2
Merge pull request #4294 from evshiron/feat/allow-origins
add --cors-allow-origins cmd opt
2022-11-05 16:20:45 +03:00
AUTOMATIC1111
37ba0070ec
Merge branch 'master' into feat/allow-origins 2022-11-05 16:20:40 +03:00
AUTOMATIC1111
c9b2eef6a3
Merge pull request #4293 from AUTOMATIC1111/innovaciones-patch-1
Open extensions links in new tab
2022-11-05 16:18:29 +03:00
AUTOMATIC1111
cb84a304f0
Merge pull request #4273 from Omegastick/ordered_hypernetworks
Sort hypernetworks list
2022-11-05 16:16:18 +03:00
AUTOMATIC1111
e96c434495
Merge pull request #3975 from aria1th/force-push-patch-13
Saving/loading AdamW optimizer (for hypernetworks)
2022-11-05 16:15:00 +03:00
AUTOMATIC1111
477c09f4e7
Merge pull request #4311 from aliencaocao/fix_typing_compat_for_brlow_python3.10
Use typing.Optional instead of | to add support for Python 3.9 and below
2022-11-05 16:06:22 +03:00
AUTOMATIC1111
c71691933c
Merge pull request #4320 from papuSpartan/tls
Add support for SSL/TLS (provide Gradio TLS options)
2022-11-05 16:05:51 +03:00
AUTOMATIC
03b08c4a6b do not die when an extension's repo has no remote 2022-11-05 15:04:48 +03:00
papuSpartan
a02bad570e rm dbg 2022-11-05 04:14:21 -05:00
papuSpartan
e9a5562b9b add support for tls (gradio tls options) 2022-11-05 04:06:51 -05:00
Muhammad Rizqi Nur
bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
Billy Cao
ebce0c57c7 Use typing.Optional instead of | to add support for Python 3.9 and below. 2022-11-05 11:38:24 +08:00
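For illustration, the equivalent spelling on Python 3.9 and below:

```python
from typing import Optional

# Python 3.10 union syntax:  def head(xs: list[str]) -> str | None: ...
def head(xs) -> Optional[str]:
    return xs[0] if xs else None
```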
hentailord85ez
1b6c2fc749
Reorder samplers 2022-11-04 23:28:13 +00:00
hentailord85ez
f92dc505a0
Fix name 2022-11-04 23:12:48 +00:00
hentailord85ez
6008c0773e
Add support for new DPM-Solver++ samplers 2022-11-04 23:03:05 +00:00
hentailord85ez
c0f7dbda33
Update k-diffusion to release 0.0.10 2022-11-04 23:01:58 +00:00
AUTOMATIC
30b1bcc64e fix upscale loop erroneously applied multiple times 2022-11-04 22:56:18 +03:00
AUTOMATIC
822210bae5 Merge remote-tracking branch 'origin/master' 2022-11-04 22:47:59 +03:00
Bruno Seoane
fd66f669ea Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2022-11-04 16:40:13 -03:00
AngelBottomless
467d8b967b
Fix errors from commit f2b697 with --hide-ui-dir-config
f2b69709ea
2022-11-05 04:24:42 +09:00
evshiron
b8435e632f add --cors-allow-origins cmd opt 2022-11-05 02:36:47 +08:00
innovaciones
0d7e01d995
Open extensions links in new tab
Fixed for "Available" tab
2022-11-04 12:14:32 -06:00
evshiron
73e1cd6f53 Merge branch 'master' into fix/encode-pnginfo 2022-11-05 01:43:02 +08:00
benlisquare
89722fb5e4
Merge pull request #4285 from benlisquare/master
Minor fix to Traditional Chinese (zh_TW) JSON
2022-11-05 03:51:21 +11:00
benlisquare
d469186fb0 Minor fix to Traditional Chinese (zh_TW) JSON 2022-11-05 03:42:50 +11:00
AUTOMATIC1111
8eb0a97278
Merge pull request #4179 from AUTOMATIC1111/callback-structure
Convert callbacks into a private map, add utility functions
2022-11-04 19:27:54 +03:00
DepFA
5844ef8a9a
remove private underscore indicator 2022-11-04 16:02:25 +00:00
benlisquare
17087e306d
Merge pull request #4282 from benlisquare/master
General fixes to Traditional Chinese (zh_TW) localisation JSON
2022-11-05 02:59:24 +11:00
benlisquare
4a73d433b1 General fixes to Traditional Chinese (zh_TW) localisation JSON 2022-11-05 02:48:36 +11:00
Isaac Poulton
08feb4c364
Sort straight out of the glob 2022-11-04 20:53:11 +07:00
AUTOMATIC
116bcf730a disable setting options via API until it is fixed by the author 2022-11-04 16:49:05 +03:00
AUTOMATIC
f316280ad3 fix the error that prevents from setting some options 2022-11-04 16:49:04 +03:00
dtlnor
d61f0ded24
Merge pull request #4270 from shwang95/master
Rename confusing translation
2022-11-04 21:07:19 +08:00
DepFA
c3cd0d7a86
Should be one underscore for module privates not two 2022-11-04 12:19:16 +00:00
Muhammad Rizqi Nur
3277f90e93 Merge branch 'master' into gradient-clipping 2022-11-04 18:47:28 +07:00
Isaac Poulton
fd62727893
Sort hypernetworks 2022-11-04 18:34:35 +07:00
TinkTheBoush
45b65e87e0 remove ui option 2022-11-04 19:48:28 +09:00
TinkTheBoush
821e2b883d change option position to Training setting 2022-11-04 19:39:03 +09:00
Sihan Wang
5359fd30f2
Rename confusing translation
"Denoising strength" in UI was translated as "重绘幅度" while "denoising" in the X/Y plot is translated as "去噪", totally confusing.
2022-11-04 17:52:46 +08:00
Dynamic
81973091bc
Merge pull request #4269 from 36DB/kr-localization
Update ko_KR.json
2022-11-04 18:20:59 +09:00
Dynamic
db50a9ab6c
Update ko_KR.json
Added new strings/Revamped edited strings
2022-11-04 18:12:27 +09:00
Fampai
39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE when it is unloaded to
the CPU while training
2022-11-04 04:50:22 -04:00
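A hypothetical sketch of one way to serialize access in such a race; the webui's actual fix may differ:

```python
import threading

# The preview path must not decode with the VAE while the training loop
# is moving it to the CPU; a shared lock serializes the two.
vae_lock = threading.Lock()

def set_current_image(decode_fn, latent):
    with vae_lock:
        return decode_fn(latent)

def unload_vae(vae):
    with vae_lock:
        vae.to("cpu")
```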
AUTOMATIC
eeb0733013 change process_one virtual function for script to process_batch, add extra args and docs 2022-11-04 11:21:40 +03:00
AUTOMATIC
99043f3360 fix one of previous merges breaking the program 2022-11-04 11:20:42 +03:00
AngelBottomless
7278897982
Update shared.py 2022-11-04 17:12:28 +09:00
AUTOMATIC1111
c250d2a08f
Merge pull request #4182 from macrosoft/process_one
Process one
2022-11-04 11:07:05 +03:00
AUTOMATIC1111
1b9faaa1c3
Merge pull request #4196 from cobryan05/extras_no_rehash
extras - skip unnecessary second hash of image
2022-11-04 11:04:01 +03:00
AUTOMATIC1111
faa79b0850
Merge pull request #4199 from Ju1-js/patch-1
Make extension manager remote links open a new tab
2022-11-04 11:03:19 +03:00
AUTOMATIC1111
91c7659dc2
Merge pull request #4201 from aliencaocao/fix_nowebui_arg
Fix nowebui arg being ignored
2022-11-04 11:03:02 +03:00
AUTOMATIC1111
9b2ca18a5d
Merge pull request #4245 from 7flash/7flash/fix-api-compatibility
fixed api compatibility with python 3.8
2022-11-04 11:00:37 +03:00
AUTOMATIC1111
e9c767d8db
Merge branch 'master' into 7flash/fix-api-compatibility 2022-11-04 11:00:32 +03:00
AUTOMATIC1111
2913b9f025
Merge pull request #4178 from HeyImKyu/PreviewOnBatchCompletion
Added option to preview Created images on batch completion.
2022-11-04 10:59:27 +03:00
AUTOMATIC1111
17f9e55667
Merge pull request #4036 from R-N/fix-ckpt-cache
Fix 1 checkpoint cache count being useless #4035
2022-11-04 10:54:23 +03:00
AUTOMATIC1111
24fc05cf57
Merge branch 'master' into fix-ckpt-cache 2022-11-04 10:54:17 +03:00
AUTOMATIC1111
352b33106a
Merge pull request #4210 from byzod/patch-1
Fix #3904 (hotkey to edit weight of tag is not working in (most) localizations)
2022-11-04 10:47:49 +03:00
AUTOMATIC1111
371c4b990e
Merge pull request #4218 from bamarillo/utils-endpoints
[API][Feature] Utils endpoints
2022-11-04 10:46:51 +03:00
AUTOMATIC
f674c488d9 bugfix: save image for hires fix BEFORE upscaling latent space 2022-11-04 10:45:34 +03:00
AUTOMATIC
321e13ca17 produce a readable error message when setting an option fails on the settings screen 2022-11-04 10:35:30 +03:00
AUTOMATIC
ccf1a15412 add an option to enable installing extensions with --listen or --share 2022-11-04 10:16:19 +03:00
aria1th
1ca0bcd3a7 only save if option is enabled 2022-11-04 16:09:19 +09:00
AUTOMATIC
5f01171543 shut down gradio's "everything allowed" CORS policy; I checked the main functionality to work with this, but if this breaks some exotic workflow, I'm sorry. 2022-11-04 10:07:29 +03:00
aria1th
f5d394214d split before declaring file name 2022-11-04 16:04:03 +09:00
aria1th
283249d239 apply 2022-11-04 15:57:17 +09:00
AngelBottomless
179702adc4
Merge branch 'AUTOMATIC1111:master' into force-push-patch-13 2022-11-04 15:51:09 +09:00
AngelBottomless
0d07cbfa15
I blame code autocomplete 2022-11-04 15:50:54 +09:00
aria1th
0abb39f461 resolve conflict - first revert 2022-11-04 15:47:19 +09:00
AUTOMATIC
f2b69709ea move option access checking to options class out of various places scattered through code 2022-11-04 09:42:25 +03:00
AUTOMATIC1111
26108a7f1c
Merge pull request #3698 from guaneec/hn-activation
Remove activation from final layer of Hypernetworks
2022-11-04 09:02:21 +03:00
AUTOMATIC1111
4918eb6ce4
Merge branch 'master' into hn-activation 2022-11-04 09:02:15 +03:00
AUTOMATIC1111
2cf3d2ac15
Merge pull request #3923 from random-thoughtss/master
Fix weighted mask for highres fix
2022-11-04 08:59:12 +03:00
AUTOMATIC1111
3f0f3284b6
Merge pull request #4249 from digburn/fix-cache-vae
Fix loading a model without a VAE from the cache
2022-11-04 08:57:18 +03:00
AUTOMATIC1111
1ca4dd44c9
Merge pull request #4191 from digburn/fix-api-upscaling
Fix API Upscaling: Add required parameter to API extras route
2022-11-04 08:56:34 +03:00
AUTOMATIC1111
f12576fd6d
Merge pull request #4260 from timntorres/4246-lift-extras-generate-button
Lift extras "Generate" button.
2022-11-04 08:41:29 +03:00
AUTOMATIC
4dd898b8c1 do not mess with components' visibility for scripts; instead create group components and show/hide those; this will break scripts that create invisible components and rely on UI but the earlier I make this change the better 2022-11-04 08:38:19 +03:00
timntorres
e533ff61c1 Lift extras generate button a la #4246. 2022-11-03 22:28:22 -07:00
dtlnor
59a21a67d2
Merge pull request #4248 from dtlnor/master
Update zh_CN.json
2022-11-04 08:47:28 +08:00
benlisquare
91f53cf265
Merge pull request #4250 from benlisquare/master
Apply missing translations to Traditional Chinese (zh_TW) localisation JSON
2022-11-04 11:46:31 +11:00
digburn
3780ad3ad8 fix: loading models without vae from cache 2022-11-04 00:43:00 +00:00
benlisquare
f59855dce3 Apply missing translations to Traditional Chinese (zh_TW) localisation JSON 2022-11-04 11:42:53 +11:00
digburn
8eb64dab3e
fix: correct default val of upscale_first to False 2022-11-04 00:35:18 +00:00
random-thoughtss
243253ff4a
Merge branch 'AUTOMATIC1111:master' into master 2022-11-03 15:55:54 -07:00
Gur
b2c48091db fixed api compatibility with python 3.8 2022-11-04 06:55:03 +08:00
dtlnor
459e05c2bd Update zh_CN.json
- update new content
- polish some translation
2022-11-04 07:25:12 +09:00
dtlnor
20a860b525
Merge pull request #3915 from batvbs/localizations
[Finish] Update zh_CN.json localizations. Simplified Chinese
2022-11-04 05:44:11 +08:00
thesved
86b7fc6e5e
Make DDIM and PLMS work on Mac OS
Fix register_buffer error on Mac OS
2022-11-03 19:44:47 +01:00
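A hedged sketch of the kind of register_buffer guard involved: the stock DDIM/PLMS samplers move buffers to CUDA unconditionally, which raises on Macs.

```python
import torch

def register_buffer(self, name, attr):
    # Only relocate tensors to CUDA when CUDA actually exists; otherwise
    # leave them where they are so macOS machines can run the samplers.
    if isinstance(attr, torch.Tensor) and torch.cuda.is_available():
        attr = attr.to(torch.device("cuda"))
    setattr(self, name, attr)
```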
benlisquare
c2465f67db
Merge pull request #4228 from benlisquare/master
Update Traditional Chinese (zh_TW) localisation JSON
2022-11-04 04:34:03 +11:00
benlisquare
8b913ea03a Update Traditional Chinese (zh_TW) localisation JSON 2022-11-04 04:30:29 +11:00
batvbs
3bf8da4659 Update zh_CN.json 2022-11-03 21:58:46 +08:00
Bruno Seoane
17bd3f4ea7 Add tests 2022-11-03 10:08:18 -03:00
Eugenio Buffo
212fba3139
Merge pull request #4193 from Harvester62/Italian
Italian localization - Additions and Updates (fix typos)
2022-11-03 13:34:27 +01:00
Eugenio Buffo
8bc003c9bb
Fixed misspelled word 2022-11-03 13:28:56 +01:00
batvbs
b81fad071d Workaround for some content that cannot be localized 2022-11-03 20:28:06 +08:00
Eugenio Buffo
8b6a9035d5
Delete it_IT.json 2022-11-03 13:26:59 +01:00
Muhammad Rizqi Nur
a613fbc05e Merge branch 'master' into fix-ckpt-cache 2022-11-03 19:25:35 +07:00
Muhammad Rizqi Nur
31a98d0dc0 Merge branch 'master' into gradient-clipping 2022-11-03 19:25:23 +07:00
batvbs
70714be430 Move content that cannot be localized to the bottom 2022-11-03 19:28:25 +08:00
byzod
7e5f1562ec
Update edit-attention.js
Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3904
(Something of a workaround; the best fix would be to add a unique id or class name to those prompt boxes)
2022-11-03 18:54:25 +08:00
batvbs
8db85d597e
Update localizations/zh_CN.json
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-11-03 16:47:45 +08:00
batvbs
5bbef814ad
Update localizations/zh_CN.json
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-11-03 16:47:37 +08:00
batvbs
6f830f9346
Merge pull request #4 from dtlnor/add-updated-content-3
Add updated content 3
2022-11-03 15:02:59 +08:00
dtlnor
fd785eab48 polish translation content 2022-11-03 15:51:33 +09:00
aria1th
1764ac3c8b use hash to check valid optim 2022-11-03 14:49:26 +09:00
dtlnor
934fdd0408 Merge branch 'pr/3915' into add-updated-content-3 2022-11-03 14:33:17 +09:00
aria1th
0b143c1163 Separate .optim file from model 2022-11-03 14:30:53 +09:00
Billy Cao
688aa2c9c1
Merge branch 'AUTOMATIC1111:master' into fix_nowebui_arg 2022-11-03 13:08:26 +08:00
Billy Cao
fb1374791b Fix --nowebui argument being ineffective 2022-11-03 13:08:11 +08:00
batvbs
792b72fd6b Update zh_CN.json 2022-11-03 13:03:08 +08:00
Ju1-js
e33d6cbddd
Make extension manager Remote links open a new tab 2022-11-02 21:04:49 -07:00
Bruno Seoane
743fffa3d6 Remove unused endpoint 2022-11-03 00:52:01 -03:00
Bruno Seoane
7a2e36b583 Add config and lists endpoints 2022-11-03 00:51:22 -03:00
dtlnor
53e72e15f0 polish translation content 2022-11-03 12:27:32 +09:00
dtlnor
dcf73cf779 Update zh_CN.json
- re-order some element
- update new content
2022-11-03 11:45:24 +09:00
Chris OBryan
313e14de04 extras - skip unnecessary second hash of image
There is no need to re-hash the input image each iteration of the loop.
This also reverts PR #4026 as it was determined the cache hits it avoids
were actually valid.
2022-11-02 21:37:43 -05:00
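A minimal sketch of hoisting the hash out of the loop; the cache-key layout is hypothetical:

```python
import hashlib

def run_extras(image_bytes, upscalers):
    # Hash the input once, up front, rather than once per upscaler.
    image_hash = hashlib.sha256(image_bytes).hexdigest()
    results = []
    for upscaler in upscalers:
        cache_key = (image_hash, upscaler)
        results.append(cache_key)
    return results
```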
dtlnor
29c43935fb unify translation style 2022-11-03 11:17:44 +09:00
dtlnor
0ce0705565 Merge branch 'pr/3915' into add-updated-content-3 2022-11-03 11:16:31 +09:00
batvbs
de11709479
Inpaint at full resolution 2022-11-03 07:34:23 +08:00
innovaciones
d98eacea40
Merge pull request #4192 from AUTOMATIC1111/innovaciones-spanish-updates
Spanish localization, new strings, some tweaks and fixes
2022-11-02 16:31:26 -06:00
Riccardo Giovanetti
17315499ea
Italian localization - Additions and Updates (fix typos)
Updated localization with the latest version of these Scripts/Extensions:

unprompted (new)
img2tiles
random
random grid

Some new options in the Extras and Settings have been translated too.

P.S.: I fixed a couple of typos. By mistake I uploaded this file also in the main branch of my fork and didn't know how to revert the commit. Sorry for the mess.
2022-11-02 23:15:17 +01:00
innovaciones
b2c08283bc
New strings, some tweaks and fixes 2022-11-02 16:14:44 -06:00
digburn
2ac25ea64f fix: Add required parameter to API extras route 2022-11-02 21:52:23 +00:00
Riccardo Giovanetti
32d95c1129
Italian localization - Additions and Updates
Updated localization with the latest version of these Scripts/Extensions:

unprompted (new script)
img2tiles
random
random grid

Some new options in the Extras and Settings have been translated too.
2022-11-02 22:42:40 +01:00
Artem Zagidulin
de64146ad2 add number of iterations 2022-11-02 21:30:50 +03:00
Martucci
cd5eafaf03
Merge pull request #4180 from AUTOMATIC1111/M-art-ucci-patch-2
pt_BR minor issue with last comma
2022-11-02 14:35:47 -03:00
Martucci
1c9db534bd
pt_BR minor issue with last comma
and a few translation tweaks
2022-11-02 14:35:37 -03:00
DepFA
c07f1d0d78
Convert callbacks into a private map, add utility functions for removing callbacks 2022-11-02 16:59:10 +00:00
Kyu♥
f1b6ac64e4 Added option to preview Created images on batch completion. 2022-11-02 17:24:42 +01:00
Artem Zagidulin
a9e979977a process_one 2022-11-02 19:05:01 +03:00
Ikko Ashimine
d624cb82a7
Fix typo in ui.js
interation -> interaction
2022-11-03 01:05:00 +09:00
evshiron
e21fcd72fc add back png info in image api 2022-11-02 22:37:45 +08:00
Muhammad Rizqi Nur
fb3b564801 Merge branch 'master' into fix-ckpt-cache 2022-11-02 20:53:41 +07:00
Muhammad Rizqi Nur
237e79c77d Merge branch 'master' into gradient-clipping 2022-11-02 20:48:58 +07:00
AngelBottomless
7ea5956ad5
now add 2022-11-02 22:18:55 +09:00
AngelBottomless
10b280e9a2
Merge branch 'AUTOMATIC1111:master' into force-push-patch-13 2022-11-02 22:18:31 +09:00
AngelBottomless
9b5f85ac83
first revert 2022-11-02 22:18:04 +09:00
AngelBottomless
3178c35224
resolve conflicts 2022-11-02 22:16:32 +09:00
Dynamic
172c4bc09f
Merge pull request #4166 from 36DB/kr-localization
Update ko_KR.json
2022-11-02 21:01:09 +09:00
Dynamic
13cbc1622e
Update ko_KR.json
Fix some changed setting strings and added new ones
2022-11-02 21:00:24 +09:00
AUTOMATIC
f2a5cbe6f5 fix #3986 breaking --no-half-vae 2022-11-02 14:41:29 +03:00
AUTOMATIC1111
675b51ebd3
Merge pull request #3986 from R-N/vae-picker
VAE Selector
2022-11-02 14:12:27 +03:00
AUTOMATIC1111
e359268be9
Merge pull request #3976 from victorca25/esrgan_fea
multiple trivial changes for "extras" models
2022-11-02 14:09:38 +03:00
AUTOMATIC1111
bb21a4cb35
Merge pull request #3715 from shwang95/master
Fix error caused by EXIF transpose when using custom scripts
2022-11-02 13:41:35 +03:00
AUTOMATIC1111
e6060a7e6b
Merge pull request #4155 from MaikoTan/fix/register-api-in-api-only-mode
fix: should invoke callback as well in api only mode
2022-11-02 13:04:55 +03:00
AUTOMATIC
eb5e82c7dd do not unnecessarily run VAE one more time when saving intermediate image with hires fix 2022-11-02 12:45:03 +03:00
timntorres
9c67408004
Allow saving "before-highres-fix". (#4150)
* Save image/s before doing highres fix.
2022-11-02 12:18:21 +03:00
AUTOMATIC
4a8cf01f6f remove duplicate code from #3970 2022-11-02 12:12:32 +03:00
AUTOMATIC1111
e526f6b378
Merge pull request #3970 from evshiron/fix/progress-api
fix broken progress api and current image compatibility
2022-11-02 12:06:12 +03:00
Dynamic
905304243c
Merge pull request #4157 from 36DB/kr-localization
Update ko_KR.json
2022-11-02 17:17:17 +09:00
Dynamic
b421c5ee60
Update ko_KR.json
New options in scripts
2022-11-02 17:16:47 +09:00
KyuSeok Jung
af6fba2475
Merge branch 'master' into master 2022-11-02 17:10:56 +09:00
Muhammad Rizqi Nur
a5409a6e4b Save VAE provided by cmd_opts.vae_path 2022-11-02 14:37:22 +07:00
Maiko Tan
dd2108fdac
fix: should invoke callback as well in api only mode 2022-11-02 15:04:35 +08:00
AUTOMATIC
95c6308ccd switch to gradio 3.8 2022-11-02 09:48:02 +03:00
Sihan Wang
5c864be010
Merge branch 'AUTOMATIC1111:master' into master 2022-11-02 14:09:33 +08:00
Muhammad Rizqi Nur
056f06d373 Reload VAE without reloading sd checkpoint 2022-11-02 12:51:46 +07:00
evshiron
51e0a83969 Merge branch 'master' into fix/progress-api 2022-11-02 12:31:33 +08:00
AUTOMATIC1111
65522ff157
Merge pull request #4142 from jn-jairo/processing-close
Release processing resources after it finishes
2022-11-02 07:30:03 +03:00
AUTOMATIC1111
10f62546d3
Merge pull request #4021 from AUTOMATIC1111/add-kdiff-cfgdenoiser-callback
Add mid-kdiffusion cfgdenoiser script callback - access latents, conditionings and sigmas mid-sampling
2022-11-02 07:29:16 +03:00
AUTOMATIC
5510c282b1 fix for extensions' javascript not loading 2022-11-02 07:26:31 +03:00
AUTOMATIC
55688c4880 rename the seed option from #4146 2022-11-02 07:02:45 +03:00
AUTOMATIC1111
1a058ca578
Merge pull request #4146 from kdreibel/feature/prompts-from-file-seed-preservation
prompts_from_file: allow random seeds to be preserved for the list of prompts
2022-11-02 06:59:18 +03:00
Keith Dreibelbis
315bd7c9e8 prompts_from_file: allow random seeds to be preserved for the list of prompts 2022-11-01 19:45:35 -07:00
Martucci
2192b64c34
Merge pull request #4144 from AUTOMATIC1111/M-art-ucci-patch-1
Update for extensions tab and other minor fixes
2022-11-01 22:45:22 -03:00
Martucci
ba13643bdd
Update for extensions tab and other minor fixes
First commit of November.
Extensions tab localized and other minor translation fixes.

To other Portuguese users: if you have suggestions for better translations based on usage, please get in touch or send PRs related to localization.
Lusophones from other countries would also be welcome, should they want to discuss a convergence between the different variations of Portuguese.
2022-11-01 22:44:01 -03:00
Jairo Correa
c9148b2312 Release processing resources after it finishes 2022-11-01 21:56:47 -03:00
DepFA
5b6bedf6f2
Update class name and assign back to vars 2022-11-02 00:38:17 +00:00
DepFA
cd88e21dc5
Class Name typo and add descriptions to fields. 2022-11-02 00:34:58 +00:00
Eugenio Buffo
61af311f29
Merge pull request #4112 from Harvester62/Italian
Italian localization - Additions and updates
2022-11-02 00:14:23 +01:00
Eugenio Buffo
7b652c5df7
Fixed misspelled word on it_IT 2022-11-02 00:07:37 +01:00
Lunix
b59b238349
Merge pull request #4127 from dhwz/patch-1
Update de_DE.json
2022-11-01 23:32:13 +01:00
Nerogar
cffc240a73 fixed textual inversion training with inpainting models 2022-11-01 21:02:07 +01:00
papuSpartan
86d35526a1 make line evil again 2022-11-01 14:53:40 -05:00
papuSpartan
1dd5d6bafa clean py func defs 2022-11-01 14:33:55 -05:00
papuSpartan
d0d74e459d
Merge branch 'AUTOMATIC1111:master' into js 2022-11-01 14:05:57 -05:00
papuSpartan
401350cd59 clear on the client-side again 2022-11-01 14:03:56 -05:00
Muhammad Rizqi Nur
f8c6468d42
Merge branch 'master' into vae-picker 2022-11-02 00:25:08 +07:00
Riccardo Giovanetti
5637ef33b9
Merge branch 'AUTOMATIC1111:master' into Italian 2022-11-01 17:45:08 +01:00
dhwz
9e28f09735
Update de_DE.json 2022-11-01 17:39:09 +01:00
AUTOMATIC
198a1ffcfc fix API returning extra stuff in base64 encoded images for #3972 2022-11-01 19:14:10 +03:00
Billy Cao
b11713ec2a
Merge branch 'AUTOMATIC1111:master' into enable-override-hypernet 2022-11-01 23:37:03 +08:00
Riccardo Giovanetti
a3eab2f71e
Merge branch 'AUTOMATIC1111:master' into Italian 2022-11-01 16:36:27 +01:00
Dynamic
fb39314006
Merge pull request #4121 from 36DB/kr-localization
Update KR translation
2022-11-02 00:33:46 +09:00
Dynamic
e0695019ac
Update KR translation
More strings in the Extensions tab
2022-11-02 00:33:11 +09:00
Billy Cao
bc60768606 Enable override_settings to take effect for hypernetworks 2022-11-01 23:26:55 +08:00
AUTOMATIC1111
d51a5d6336
Merge pull request #4025 from evshiron/feat/interrupt-api-master
prototype interrupt api
2022-11-01 18:22:16 +03:00
AUTOMATIC1111
efd20a4519
Merge pull request #4026 from AUTOMATIC1111/extras-cache-key-extension
Extend extras image cache key with upscale_first arg
2022-11-01 18:21:54 +03:00
AUTOMATIC1111
d7622d97f2
Merge pull request #4004 from mamawr/master
Added "--clip-models-path" switch
2022-11-01 18:19:12 +03:00
AUTOMATIC1111
f071a1d25a
Merge pull request #4056 from MarkovInequality/TI_optimizations
Allow TI training using 6GB VRAM when xformers is available
2022-11-01 18:17:56 +03:00
AUTOMATIC1111
0e5d239f06
Merge pull request #4086 from timntorres/3875-allow-disabling-png-info
Add PNG info to pngs only if option is enabled.
2022-11-01 18:06:11 +03:00
Bruno Seoane
31db25ecc8 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2022-11-01 12:06:10 -03:00
Riccardo Giovanetti
7b6a412709
Merge branch 'AUTOMATIC1111:master' into Italian 2022-11-01 16:04:11 +01:00
AUTOMATIC1111
458cca0391
Merge pull request #4110 from laksjdjf/master
Fix error caused by latest pytorch-lightning==1.8.0
2022-11-01 18:02:52 +03:00
batvbs
534bcfbac8
Update zh_CN.json 2022-11-01 22:54:21 +08:00
batvbs
82ba978d85
Update zh_CN.json 2022-11-01 22:47:28 +08:00
dtlnor
8a62d6431d re-order json content 2022-11-01 23:39:02 +09:00
dtlnor
72ea78cf64 Update zh_CN.json
include new changes
2022-11-01 23:38:19 +09:00
TinkTheBoush
467cae167a append_tag_shuffle 2022-11-01 23:29:12 +09:00
Riccardo Giovanetti
20c3d556f6
Italian localization - Additions and updates
Added translations for these Extensions/Scripts:

Dynamic Prompts
Alpha Canvas
Artists to study
Aesthetic Score

Added a few missing translations and corrected some others. Updated to the latest Extension management tool version.

While I was able to translate the text "Time taken:" in the statistics, because the timing itself is a separate label, I couldn't do the same with the labels "Torch active/reserved: 3881/3892 MiB" and "Sys VRAM: 4859/8192 MiB (59.31%)", because the values (memory, percentage) are embedded in the labels (or perhaps I don't know enough to be sure that simply translating the text is the right approach).
2022-11-01 15:13:01 +01:00
laksjdjf
42b5c73352
Update requirements.txt 2022-11-01 22:50:47 +09:00
AUTOMATIC1111
c28de154b0
Merge pull request #4087 from ikasumi/feat/batch-img2img-improve
make save dir if it does not exist
2022-11-01 15:08:27 +03:00
AUTOMATIC1111
d79b93084c
Merge pull request #3723 from stysmmaker/patch/image-save-callback
Correct before image saved callback
2022-11-01 14:59:10 +03:00
AUTOMATIC
b85e83c3bd add PYTHONPATH for extension's install.py 2022-11-01 14:48:53 +03:00
AUTOMATIC
d35bf64945 make launch.py run installers for extensions that have them
add some more classes to the safety module for an extension
2022-11-01 14:20:15 +03:00
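A minimal sketch of what running per-extension installers could look like (the directory layout and helper name are assumptions, not the actual launch.py code):

```
import os
import subprocess
import sys

# Hedged sketch: for each extension that ships an install.py, run it with the
# current interpreter, exposing the repo root on PYTHONPATH (assumed layout).
def run_extension_installers(extensions_dir="extensions"):
    if not os.path.isdir(extensions_dir):
        return
    env = dict(os.environ, PYTHONPATH=os.path.abspath("."))
    for ext in sorted(os.listdir(extensions_dir)):
        installer = os.path.join(extensions_dir, ext, "install.py")
        if os.path.isfile(installer):
            subprocess.run([sys.executable, installer], env=env, check=False)
```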
AUTOMATIC1111
f126986b76
Merge pull request #4098 from jn-jairo/load-model
Unload sd_model before loading the other to solve the issue #3449
2022-11-01 13:54:00 +03:00
AUTOMATIC1111
0874404040
Merge pull request #3982 from MaikoTan/on-started-callback
feat: add app started callback
2022-11-01 13:47:47 +03:00
batvbs
6bab858095
Update localizations/zh_CN.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-11-01 18:46:20 +08:00
AUTOMATIC1111
40b3a7e8a5
Merge pull request #3917 from MartinCairnsSQL/adjust-ddim-uniform-steps
Certain step counts for DDIM cause out of bounds error
2022-11-01 13:45:31 +03:00
Martin Cairns
b88505925b
Merge branch 'AUTOMATIC1111:master' into adjust-ddim-uniform-steps 2022-11-01 08:34:39 +00:00
Eugenio Buffo
dd02889124
Merge pull request #3993 from Harvester62/Italian
Localization Italian - Updates and additions
2022-11-01 08:52:52 +01:00
Eugenio Buffo
2c02fc8ba6
Removed duplicated string from it_IT localization
Removed "Peso dei bordi del punto focale" duplicated string from "Focal point edges weight"
2022-11-01 08:43:02 +01:00
Eugenio Buffo
5029d159b3
Deleted extra it_IT localization
Deleted it_IT localization file outside localization folder as it was not intentionally committed
2022-11-01 08:36:03 +01:00
Riccardo Giovanetti
68c486be61
Merge branch 'AUTOMATIC1111:master' into Italian 2022-11-01 08:27:34 +01:00
Jairo Correa
af758e97fa Unload sd_model before loading the other 2022-11-01 04:01:49 -03:00
AUTOMATIC
5b0f624bdc Added Available tab to extensions UI. 2022-11-01 09:59:10 +03:00
batvbs
19b59d320c
Update localizations/zh_CN.json
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-11-01 14:33:20 +08:00
batvbs
bef1d0e836
Update localizations/zh_CN.json
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-11-01 14:32:12 +08:00
batvbs
24a76340ba
Update zh_CN.json 2022-11-01 14:30:59 +08:00
batvbs
56660f0946
Update zh_CN.json 2022-11-01 11:17:30 +08:00
k_sugawara
525c1edf43 make save dir if it does not exist 2022-11-01 09:40:54 +09:00
timntorres
8792be5007 Add PNG info to pngs only if option is enabled. 2022-10-31 17:29:04 -07:00
Riccardo Giovanetti
8cc20f90f4
Italian localization - Updates Extensions
Added translations of the new Extensions tab, and a few corrections to some previously translated descriptions/terms.
2022-10-31 23:29:43 +01:00
papuSpartan
25de9df364
Merge branch 'AUTOMATIC1111:master' into master 2022-10-31 15:08:54 -05:00
Riccardo Giovanetti
44ae3504b0
Merge branch 'AUTOMATIC1111:master' into Italian 2022-10-31 17:44:58 +01:00
Dynamic
5c9b3625fa
Merge pull request #4063 from 36DB/kr-localization
Add Extension Manager strings
2022-11-01 01:30:58 +09:00
Dynamic
8954a6e706
Add Extension Manager strings
Since it's fixed and working I'm updating the translations
2022-11-01 01:29:46 +09:00
Roy Shilkrot
3f3d14afd5 nix unused thing 2022-10-31 11:51:21 -04:00
Roy Shilkrot
df6a7ebfe8 revert things to master 2022-10-31 11:50:33 -04:00
Roy Shilkrot
509fd1459b Merge remote-tracking branch 'upstream/master' into roy.add_simple_interrogate_api 2022-10-31 11:45:52 -04:00
AUTOMATIC
9e22a35754 fix the error with extension tab not working because of the previous commit 2022-10-31 18:45:50 +03:00
AUTOMATIC
58cc03edd0 fix scripts I broke with the extension tab changes 2022-10-31 18:40:47 +03:00
AUTOMATIC
dc7425a56e disable access to extension stuff for non-local servers 2022-10-31 18:33:44 +03:00
AUTOMATIC
f17769cfbc add requirements for GitPython 2022-10-31 17:57:16 +03:00
AUTOMATIC
910a097ae2 add initial version of the extensions tab
fix broken Restart Gradio button
2022-10-31 17:37:02 +03:00
Fampai
890e68aaf7 Fixed minor bug
where unloading the VAE during TI training caused image generation after
training to error out
2022-10-31 10:07:12 -04:00
Fampai
3b0127e698 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations 2022-10-31 09:54:51 -04:00
Riccardo Giovanetti
e0c1cd147d
Merge branch 'AUTOMATIC1111:master' into Italian 2022-10-31 14:33:38 +01:00
Dynamic
9b384dfb5c
Merge pull request #4051 from 36DB/kr-localization
Added some extension KR support
2022-10-31 22:26:44 +09:00
batvbs
965ed08e31 Update zh_CN.json 2022-10-31 21:25:48 +08:00
Dynamic
81624f4dfb
Added some extension KR support
Supported extensions
https://github.com/adieyal/sd-dynamic-prompts
https://github.com/yfszzx/stable-diffusion-webui-images-browser
https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
https://github.com/lilly1987/AI-WEBUI-scripts-Random
2022-10-31 22:25:05 +09:00
Muhammad Rizqi Nur
7c8c3715f5 Fix VAE refresh button stretching out
From https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3986#issuecomment-1296990601
2022-10-31 20:15:33 +07:00
batvbs
3ac05d38eb Bug fix
The text was too long and covered the tooltip box
2022-10-31 21:10:27 +08:00
batvbs
6cffcf6b6d Update zh_CN.json 2022-10-31 20:41:18 +08:00
batvbs
48787dc3d1 inpaint 2022-10-31 20:26:52 +08:00
batvbs
faa3639bf1 Move content that cannot be localized to the bottom 2022-10-31 20:15:36 +08:00
batvbs
f65bfd74ea outpainting 2022-10-31 19:28:31 +08:00
batvbs
49f0dd4300 Revise the disputed parts 2022-10-31 19:26:38 +08:00
Fampai
006756f9cd Added TI training optimizations
option to use xattention optimizations when training
option to unload vae when training
2022-10-31 07:26:08 -04:00
Riccardo Giovanetti
f1a110a593
Italian localization - Updated a few terms
I've updated a few terms and descriptions. Waiting for review and, hopefully, approval.
2022-10-31 11:39:13 +01:00
Muhammad Rizqi Nur
bf7a699845 Fix #4035 for real now 2022-10-31 16:27:27 +07:00
mamawr
095931afa4
Merge branch 'AUTOMATIC1111:master' into master 2022-10-31 11:48:10 +03:00
Muhammad Rizqi Nur
36966e3200 Fix #4035 2022-10-31 15:38:58 +07:00
Muhammad Rizqi Nur
726769da35 Checkpoint cache by combination key of checkpoint and vae 2022-10-31 15:22:03 +07:00
batvbs
70841438a1
Merge pull request #3 from dtlnor/add-updated-content-2
Add updated content 2
2022-10-31 16:14:45 +08:00
Riccardo Giovanetti
43e6e282e2
Merge branch 'AUTOMATIC1111:master' into Italian 2022-10-31 09:02:15 +01:00
Dynamic
d4790fa6db
Update ko_KR.json
Added KR support for Dynamic Prompts extension - https://github.com/mkco5162/sd-dynamic-prompts
2022-10-31 16:57:51 +09:00
Dynamic
e19edbbf72
Update ko_KR.json
Added KR support for Dynamic Prompts extension  - https://github.com/mkco5162/sd-dynamic-prompts
2022-10-31 16:56:41 +09:00
Muhammad Rizqi Nur
b96d0c4e9e Fix typo from previous commit 2022-10-31 14:42:28 +07:00
dtlnor
f719b7d012 Update zh_CN.json
update tag complete.
2022-10-31 16:17:48 +09:00
Muhammad Rizqi Nur
d5ea878b2a Fix merge conflicts 2022-10-31 13:54:40 +07:00
Muhammad Rizqi Nur
4123be632a Fix merge conflicts 2022-10-31 13:53:22 +07:00
Muhammad Rizqi Nur
840307f237 Change default clip grad value to 0.1
It still defaults to disabled.

Ref for value: 732b15820a
2022-10-31 13:49:24 +07:00
batvbs
7581091ffb Denoising strength 2022-10-31 12:52:03 +08:00
batvbs
0018f3ed62 Update zh_CN.json 2022-10-31 12:32:51 +08:00
DepFA
29f758afe9
Extend extras image cache with upscale_first arg 2022-10-31 02:39:55 +00:00
evshiron
adaa699e38 prototype interrupt api 2022-10-31 10:31:06 +08:00
Maiko Sinkyaet Tan
081df45da4
docs: add python doc (?)
not sure if this is available...
2022-10-31 08:47:43 +08:00
DepFA
8ae0ea9dea
Add callback to sd_samplers 2022-10-30 23:48:33 +00:00
DepFA
8906be85ac
add callback cleardown 2022-10-30 23:47:08 +00:00
DepFA
21fba39c60
Add callbacks and param objects 2022-10-30 23:45:52 +00:00
random_thoughtss
d9e4e4d7a0 Fix non-square full resolution inpainting. 2022-10-30 15:33:02 -07:00
mawr
d587586d3b Added "--clip-models-path" switch to avoid using the default "~/.cache/clip" and enable running under an unprivileged user without a home dir 2022-10-31 00:14:07 +03:00
Riccardo Giovanetti
f87dcf1ad8
Localization Italian - Updates and additions
Updated localization with the latest version of these Scripts/Extensions:

animator v6
StylePile
Alpha Canvas

Changed the description for the script "To Infinity and Beyond" (Verso l'infinito e oltre) from 'n' to 'Esegui n volte' (Run n times).
Added localization for the 'Random' and 'Random Grid' scripts; I changed both the title and the Steps and CFG parameter labels to make it clearer that the min/max order can be inverted.
Added a few missing translations and character corrections here and there. Still a work in progress, but I will slowly fix wrong things as I find them.
2022-10-30 18:09:13 +01:00
Martin Cairns
6c9e427c0e
Merge branch 'AUTOMATIC1111:master' into adjust-ddim-uniform-steps 2022-10-30 17:03:25 +00:00
dtlnor
32ffc32416 Update zh_CN.json
putting back the sign
2022-10-31 00:29:43 +09:00
Muhammad Rizqi Nur
e1b2ea6e00 Change VAE search order and thus priority 2022-10-30 22:11:45 +07:00
dtlnor
9d7b665d3b update new content 2022-10-31 00:09:58 +09:00
dtlnor
0ccc982f09 re-order content to match the dump
Deprecated content was moved to the bottom to keep (some) backwards compatibility.
2022-10-31 00:03:58 +09:00
Muhammad Rizqi Nur
2468039df2 Forgot to add this folder 2022-10-30 21:58:31 +07:00
Muhammad Rizqi Nur
cb31abcf58 Settings to select VAE 2022-10-30 21:54:31 +07:00
Maiko Tan
423f222283
feat: add app started callback 2022-10-30 22:46:43 +08:00
batvbs
5d69f75e5b Update zh_CN.json 2022-10-30 21:24:28 +08:00
Muhammad Rizqi Nur
cd4d59c0de Merge master 2022-10-30 18:57:51 +07:00
victorca25
c9bb33dd43 add resrgan 8x, allow use 1x and up to 8x extra models, move BSRGAN model, add nearest 2022-10-30 12:54:06 +01:00
aria1th
9d96d7d0a0 resolve conflicts 2022-10-30 20:40:59 +09:00
AngelBottomless
20194fd975 We have duplicate linear now 2022-10-30 20:40:59 +09:00
AngelBottomless
4b8a192f68 add optimizer save option to shared.opts 2022-10-30 20:40:59 +09:00
batvbs
99c4e8d653 Add commas to long text 2022-10-30 19:36:01 +08:00
Martin Cairns
34c86c12b0 Include PLMS in adjust steps, as it can also fail in the same way 2022-10-30 11:04:27 +00:00
batvbs
b5e21e3348
Update zh_CN.json 2022-10-30 17:49:17 +08:00
evshiron
1a4ff2de6a fix current image in progress api when parallel processing enabled 2022-10-30 17:02:47 +08:00
evshiron
be27fd4690 fix broken progress api by previous rework 2022-10-30 17:01:10 +08:00
random_thoughtss
71571e3f05 Replaced master branch fix with updated fix. 2022-10-30 00:35:40 -07:00
random-thoughtss
15468c9939
Merge branch 'AUTOMATIC1111:master' into master 2022-10-30 00:30:18 -07:00
AUTOMATIC1111
17a2076f72
Merge pull request #3928 from R-N/validate-before-load
Optimize training a little
2022-10-30 09:51:36 +03:00
AUTOMATIC1111
3dc9a43f7e
Merge pull request #3898 from R-N/lr-comma
Allow trailing comma in learning rate
2022-10-30 09:29:29 +03:00
blackneoo
5612d03016
Merge pull request #3955 from AUTOMATIC1111/localization-arabic
Update ar_AR.json
2022-10-30 10:26:25 +04:00
AUTOMATIC
149784202c rework #3722 to not introduce duplicate code 2022-10-30 09:10:22 +03:00
AUTOMATIC1111
060ee5d3a7
Merge pull request #3722 from evshiron/feat/progress-api
prototype progress api
2022-10-30 09:02:01 +03:00
AUTOMATIC
61836bd544 shorten Hypernetwork strength in infotext and omit it when it's the default value. 2022-10-30 08:48:53 +03:00
AUTOMATIC1111
470f184176
Merge pull request #3831 from timntorres/3825-save-hypernet-strength-to-info
Save Hypernetwork strength to infotext.
2022-10-30 08:47:18 +03:00
AUTOMATIC
5a6e0cfba6 always add --api when running tests 2022-10-30 08:28:36 +03:00
AUTOMATIC
59dfe0845d launch tests from launch.py with --tests commandline argument 2022-10-30 08:22:44 +03:00
batvbs
2f125b0a97 Update zh_CN.json 2022-10-30 13:07:25 +08:00
AUTOMATIC
05a657dd35 fix broken hires fix 2022-10-30 07:41:56 +03:00
AUTOMATIC1111
ded08fc1dc
Merge pull request #3941 from mezotaken/master
Automated testing through API [WIP, feedback required]
2022-10-30 07:41:39 +03:00
Dynamic
94840301f8
Merge pull request #3914 from 36DB/kr-localization
Update KR localization and fix hotkeys
2022-10-30 13:03:36 +09:00
batvbs
ec9a95c8ad
Merge pull request #2 from dtlnor/add-updated-content
Add new translated content
2022-10-30 10:50:00 +08:00
Modar M. Alfadly
ccf95b0e98
Merge pull request #3952 from Harvester62/Italian
Italian localization (extended) [Requires Feedback]
2022-10-30 03:07:43 +03:00
Modar M. Alfadly
3d9dd6c184
Update ar_AR.json 2022-10-30 02:03:15 +03:00
Riccardo Giovanetti
36c21723b5
Merge branch 'AUTOMATIC1111:master' into Italian 2022-10-30 00:27:20 +02:00
Martucci
700162a603
Merge pull request #3953 from AUTOMATIC1111/M-art-ucci-patch-1
Final commit for october (19:22) / 22
2022-10-29 19:24:42 -03:00
Martucci
4ae575853f
Final commit for october (19:22) / 22 2022-10-29 19:23:05 -03:00
Riccardo Giovanetti
35e95f574a
Italian localization (extended) [Requires Feedback]
This is my first version of an alternative Italian localization, a follow-up to the current localization file made by @EugenioBuffo (#3725), whom I thank, and to my discussion "Italian localization (git newbie)" (#3633). It covers the main user interface and all the current Extensions and Scripts, with the following exceptions:

    txt2img2img (I got errors, therefore I removed it from my local installation of SD Web UI)
    Parameter Sequencer (not installed locally)
    Booru tag autocompletion (not installed locally)
    Saving steps of the sampling process (not installed locally)

I do not plan to translate the above scripts in the short term, unless I install them locally on my machine.

I beg your pardon if I am brutally overwriting the originally submitted file, but I find it quite exhausting to edit and append over a thousand lines to the original file. If this is mandatory, I will delete this commit and start a new one amending the original it_IT.json file.

It is certainly not perfect and some translations can be improved, so I invite @EugenioBuffo and any other native Italian speaker to give advice and help review this extensive translation. I look forward to reading feedback from the community and developers. Thank you.
2022-10-30 00:13:13 +02:00
Lunix
10fa7b6abd
Merge pull request #3950 from Strothis/fix-german-localization
Update German Localization
2022-10-30 00:03:46 +02:00
evshiron
9f4f894d74 allow skip current image in progress api 2022-10-30 06:03:32 +08:00
timntorres
66d038f6a4 Read hypernet strength from PNG info. 2022-10-29 15:00:08 -07:00
timntorres
e709afb0f7 Merge commit 'e7254746bbfbff45099db44a8d4d25dd6181877d' into 3825-save-hypernet-strength-to-info 2022-10-29 14:55:30 -07:00
Strothis
22a54b0582 Fix German Localization 2022-10-29 23:43:30 +02:00
evshiron
9f104b53c4 preview current image when opts.show_progress_every_n_steps is enabled 2022-10-30 05:19:17 +08:00
random_thoughtss
39f55c3c35 Re-add explicit device move 2022-10-29 14:13:02 -07:00
evshiron
88f46a5bec update progress response model 2022-10-30 05:04:29 +08:00
AUTOMATIC1111
e7254746bb
Merge pull request #3571 from szhublox/noautoupdate
webui.sh: no automatic git pull
2022-10-29 23:04:26 +03:00
evshiron
e9c6c2a51f add description for state field 2022-10-30 04:02:56 +08:00
evshiron
f62db4d5c7 fix progress response model 2022-10-30 03:56:44 +08:00
evshiron
7f5212fb5f Merge branch 'master' into feat/progress-api 2022-10-30 03:49:00 +08:00
evshiron
6b719c49b1 Merge branch 'master' into feat/progress-api 2022-10-30 03:45:29 +08:00
AUTOMATIC
d699720254 add translators to codeowners with their respective translation files 2022-10-29 22:39:10 +03:00
AUTOMATIC
4cb5983c30 rename french translation to be in line with others 2022-10-29 22:38:50 +03:00
AUTOMATIC1111
58fb4bc08c
Merge pull request #3783 from 36DB/management
CODEOWNERS update for localization management
2022-10-29 22:22:00 +03:00
AUTOMATIC1111
c328deb5f1
Merge pull request #3934 from bamarillo/api-add-png-info-endpoint
[API][Feature] Add png info endpoint
2022-10-29 22:20:50 +03:00
AUTOMATIC
9bb6b6509a add postprocess call for scripts 2022-10-29 22:20:02 +03:00
Bruno Seoane
83a1f44ae2 Fix space 2022-10-29 16:10:00 -03:00
Bruno Seoane
4609b83cd4 Add PNG Info endpoint 2022-10-29 16:09:19 -03:00
Vladimir Repin
ffc5b700c4 extras test template added 2022-10-29 21:50:06 +03:00
Vladimir Repin
2f3d8172c3 img2img test template and setUp added 2022-10-29 21:43:32 +03:00
Muhammad Rizqi Nur
3d58510f21 Fix dataset still being loaded even when training will be skipped 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur
a07f054c86 Add missing info on hypernetwork/embedding model log
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513

Also group the saving into one
2022-10-30 00:49:29 +07:00
random_thoughtss
6e2ce4e735 Added image conditioning to latent upscale.
Only computed if the mask weight is not 1.0, to avoid extra memory use.
Also includes some code cleanup.
2022-10-29 10:35:51 -07:00
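A rough sketch of the conditional computation described above (the blend formula is purely illustrative, not the actual implementation):

```
# Hedged sketch: only build the conditioning image when it can affect the
# result; a mask weight of exactly 1.0 means it would be unused.
def maybe_image_conditioning(image, mask, mask_weight):
    if mask_weight == 1.0:
        return None  # skip the extra allocation entirely
    # Illustrative blend of image and mask, not the real formula.
    return image * (1.0 - mask * mask_weight)
```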
Muhammad Rizqi Nur
ab05a74ead Revert "Add cleanup after training"
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
random_thoughtss
44ab954fab Fix latent upscale highres fix #3888 2022-10-29 10:02:56 -07:00
Mackerel
6515dedf57 webui.sh: no automatic git pull 2022-10-29 11:59:50 -04:00
dtlnor
f512b0828b Update zh_CN.json
update translation content to 35c45df28b
2022-10-30 00:45:30 +09:00
Vladimir Repin
af45b5a11a Testing with API added 2022-10-29 18:26:28 +03:00
Bruno Seoane
952ff32a5f Merge branch 'master' of https://github.com/bamarillo/stable-diffusion-webui 2022-10-29 12:17:37 -03:00
Martin Cairns
de1dc0d279 Add adjust_steps_if_invalid to find next valid step for ddim uniform sampler 2022-10-29 15:23:19 +01:00
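The uniform DDIM schedule only works for step counts that divide the model's timesteps evenly; a simplified sketch of the adjustment idea (the actual arithmetic in the PR may differ):

```
# Hedged sketch: bump the requested step count up to the next value for
# which the uniform schedule divides the timesteps without overflowing.
def adjust_steps_if_invalid(steps, num_timesteps=1000):
    while steps < num_timesteps and num_timesteps % steps != 0:
        steps += 1
    return steps

# adjust_steps_if_invalid(24) -> 25, since 1000 % 25 == 0
```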
Dynamic
cbdb5ced76
Add new translations
New settings option
New extras tab option
2022-10-29 22:33:51 +09:00
Dynamic
3d36d62d61
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-29 22:33:06 +09:00
batvbs
0089fa5ceb Update zh_CN.json 2022-10-29 21:09:05 +08:00
Muhammad Rizqi Nur
a27d19de2e Additional assert on dataset 2022-10-29 19:44:05 +07:00
Muhammad Rizqi Nur
3ce2bfdf95 Add cleanup after training 2022-10-29 19:43:21 +07:00
Muhammad Rizqi Nur
ab27c111d0 Add input validations before loading dataset for training 2022-10-29 18:09:17 +07:00
Muhammad Rizqi Nur
ef4c94e1cf Improve lr schedule error message 2022-10-29 15:42:51 +07:00
Muhammad Rizqi Nur
a5f3adbdd7 Allow trailing comma in learning rate 2022-10-29 15:37:24 +07:00
Muhammad Rizqi Nur
05e2e40537 Merge branch 'master' into gradient-clipping 2022-10-29 15:04:21 +07:00
AUTOMATIC
35c45df28b fix broken ↙ button, fix field paste ignoring most of useful fields for for #3768 2022-10-29 10:56:19 +03:00
timntorres
2c4d203884 Revert "Explicitly state when Hypernet is none." 2022-10-29 00:36:51 -07:00
timntorres
e98f72be33
Merge branch 'AUTOMATIC1111:master' into 3825-save-hypernet-strength-to-info 2022-10-29 00:31:23 -07:00
AUTOMATIC
beb6fc2979 move send seed option to UI section and make it false by default 2022-10-29 09:57:22 +03:00
AUTOMATIC1111
9553a7e071
Merge pull request #3818 from jwatzman/master
Reduce peak memory usage when changing models
2022-10-29 09:16:00 +03:00
AUTOMATIC
28e6d4a54e add element ids for save buttons for #3798 2022-10-29 09:13:36 +03:00
AUTOMATIC1111
1233bec13e
Merge pull request #3798 from aurror/modal-save-button-and-shortcut
added save image button and a hotkey to Modal Image View
2022-10-29 09:11:06 +03:00
AUTOMATIC1111
76086f6668
Merge branch 'master' into modal-save-button-and-shortcut 2022-10-29 09:11:00 +03:00
AUTOMATIC1111
02b547861e
Merge pull request #3757 from LunixWasTaken/master
Add german Localization file.
2022-10-29 09:04:23 +03:00
AUTOMATIC1111
09d00e71dd
Merge pull request #3725 from EugenioBuffo/master
Included localization file for IT language
2022-10-29 09:04:08 +03:00
AUTOMATIC1111
f3454b8a6b
Merge pull request #3691 from xmodar/arabic
Revamped Arabic localization
2022-10-29 09:03:35 +03:00
AUTOMATIC
78b879b442 delete the submodule dir (why do you keep doing this) 2022-10-29 09:02:02 +03:00
AUTOMATIC
68581bfc4e Merge remote-tracking branch 'origin/master' 2022-10-29 09:01:08 +03:00
AUTOMATIC
2922d8144f make existing image browser extension not break 2022-10-29 09:01:04 +03:00
AUTOMATIC
af547f63c3 Merge branch 'Inspiron' 2022-10-29 08:48:11 +03:00
AUTOMATIC
3c207ca684 add needed imports for new code in copypaste.py 2022-10-29 08:42:34 +03:00
AUTOMATIC
45ca67f35a remove repeated gitignore entries 2022-10-29 08:29:50 +03:00
AUTOMATIC
a33d0a9a65 remove weird spaces added to ui.py over time 2022-10-29 08:28:48 +03:00
AUTOMATIC
2d220afb24 fix open folder button not working 2022-10-29 08:26:12 +03:00
AUTOMATIC1111
d885a4a57b
Merge pull request #3711 from benlisquare/master
Add localisation for Traditional Chinese 正體中文化
2022-10-29 08:13:14 +03:00
AUTOMATIC
a1e5e0d766 skip filenames starting with . for img2img and extras batch modes 2022-10-29 08:11:03 +03:00
AUTOMATIC1111
cf8da8e1b0
Merge pull request #3826 from ANTONIOPSD/patch-1
Natural sorting for dropdown checkpoint list
2022-10-29 08:02:03 +03:00
AUTOMATIC1111
810e6a407d
Merge pull request #3858 from R-N/log-csv
Fix log off by 1 #3847
2022-10-29 07:55:20 +03:00
AUTOMATIC1111
3019452927
Merge pull request #3803 from FlameLaw/master
Fixed proper dataset shuffling
2022-10-29 07:52:51 +03:00
AUTOMATIC1111
86e19fe873
Merge pull request #3669 from random-thoughtss/master
Added option to use unmasked conditioning image for inpainting model.
2022-10-29 07:49:48 +03:00
AUTOMATIC1111
1fba573d24
Merge pull request #3874 from cobryan05/extra_tweak
Extras Tab - Option to upscale before face fix, caching improvements
2022-10-29 07:44:17 +03:00
AUTOMATIC1111
2338ed9554
Merge pull request #3755 from M-art-ucci/master
Adding pt_BR (portuguese - Brazil) to localizations folder
2022-10-29 07:38:49 +03:00
AUTOMATIC
bce5adcd6d change default hypernet activation function to linear 2022-10-29 07:37:06 +03:00
AUTOMATIC1111
f3685281e2
Merge pull request #3877 from Yaiol/master
Filename tags wrongly reference process size instead of image size
2022-10-29 07:32:11 +03:00
AUTOMATIC1111
d3b4b9d7ec
Merge pull request #3717 from benkyoujouzu/master
Add missing support for linear activation in hypernetwork
2022-10-29 07:30:14 +03:00
AUTOMATIC1111
fc89495df3
Merge pull request #3771 from aria1th/patch-12
Disable unavailable or duplicate options for Activation functions
2022-10-29 07:29:02 +03:00
AUTOMATIC1111
d5f31f1e14
Merge pull request #3511 from bamarillo/master
[API][Feature] Add extras endpoints
2022-10-29 07:24:37 +03:00
Bruno Seoane
0edf100d83
Merge branch 'AUTOMATIC1111:master' into master 2022-10-28 22:03:49 -03:00
AngelBottomless
f361e804eb
Re enable linear 2022-10-29 08:36:50 +09:00
Yaiol
539c0f51e4 Update images.py
Filename tags [height] and [width] wrongly reference the process size instead of the resulting image size, making all upscaled files named incorrectly.
2022-10-29 01:07:01 +02:00
Chris OBryan
d8b3661467 extras: upscaler blending should not be considered in cache key 2022-10-28 16:55:02 -05:00
Chris OBryan
5732c0282d extras-tweaks: autoformat changed lines 2022-10-28 16:36:25 -05:00
Chris OBryan
1f1b327959 extras: Make image cache LRU
This changes the extras image cache into a Least-Recently-Used
cache. This allows more experimentation with different upscalers
without missing the cache.

Max cache size is increased to 5 and is cleared on source image
update.
2022-10-28 16:14:21 -05:00
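A minimal sketch of the LRU behavior described above, using an OrderedDict (class and method names are assumptions, not the actual webui code):

```
from collections import OrderedDict

# Hedged sketch of a small LRU cache for upscaled results, cleared whenever
# the source image changes (max size 5, matching the description above).
class ImageLRUCache:
    def __init__(self, max_size=5):
        self.max_size = max_size
        self._cache = OrderedDict()

    def get(self, key):
        if key not in self._cache:
            return None
        self._cache.move_to_end(key)  # mark as most recently used
        return self._cache[key]

    def put(self, key, value):
        self._cache[key] = value
        self._cache.move_to_end(key)
        if len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # evict least recently used

    def clear(self):  # call on source image update
        self._cache.clear()
```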
Chris OBryan
bde4731f1d extras: Rework image cache
Bit of a refactor to the image cache to make it easier to extend.
Also takes into account the entire image instead of just a cropped portion.
2022-10-28 14:44:25 -05:00
Chris OBryan
26d0819384 extras: Add option to run upscaling before face fixing
Face restoration can look much better if run after upscaling, as it
allows the restoration to fix upscaling artifacts. This patch adds
an option to choose which order to run upscaling/face fixing in.
2022-10-28 13:33:49 -05:00
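A tiny sketch of the ordering option (the stage callables are placeholders for the real upscale and face-restoration functions):

```
# Hedged sketch: run the two stages in the order chosen by the user option.
def process(image, upscale, restore_faces, upscale_first):
    stages = [upscale, restore_faces] if upscale_first else [restore_faces, upscale]
    for stage in stages:
        image = stage(image)
    return image
```

With upscale_first enabled, face restoration also gets a chance to clean up artifacts introduced by the upscaler.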
Muhammad Rizqi Nur
9ceef81f77 Fix log off by 1 2022-10-28 20:48:08 +07:00
Muhammad Rizqi Nur
16451ca573 Learning rate sched syntax support for grad clipping 2022-10-28 17:16:23 +07:00
timntorres
db5a354c48 Always ignore "None.pt" in the hypernet directory. 2022-10-28 01:41:57 -07:00
timntorres
c0677b3316 Explicitly state when Hypernet is none. 2022-10-27 23:31:45 -07:00
timntorres
d4a069a23c Read hypernet strength from PNG info. 2022-10-27 23:16:27 -07:00
timntorres
9e465c8aa5 Add strength to textinfo. 2022-10-27 23:03:34 -07:00
benkyoujouzu
b2a8b263b2 Add missing support for linear activation in hypernetwork 2022-10-28 12:54:59 +08:00
Antonio
5d5dc64064
Natural sorting for dropdown checkpoint list
Example:

Before                        After
11.ckpt                       11.ckpt
ab.ckpt                       ab.ckpt
ade_pablo_step_1000.ckpt      ade_pablo_step_500.ckpt
ade_pablo_step_500.ckpt       ade_pablo_step_1000.ckpt
ade_step_1000.ckpt            ade_step_500.ckpt
ade_step_1500.ckpt            ade_step_1000.ckpt
ade_step_2000.ckpt            ade_step_1500.ckpt
ade_step_2500.ckpt            ade_step_2000.ckpt
ade_step_3000.ckpt            ade_step_2500.ckpt
ade_step_500.ckpt             ade_step_3000.ckpt
atp_step_5500.ckpt            atp_step_5500.ckpt
model1.ckpt                   model1.ckpt
model10.ckpt                  model10.ckpt
model1000.ckpt                model33.ckpt
model33.ckpt                  model50.ckpt
model400.ckpt                 model400.ckpt
model50.ckpt                  model1000.ckpt
moo44.ckpt                    moo44.ckpt
v1-4-pruned-emaonly.ckpt      v1-4-pruned-emaonly.ckpt
v1-5-pruned-emaonly.ckpt      v1-5-pruned-emaonly.ckpt
v1-5-pruned.ckpt              v1-5-pruned.ckpt
v1-5-vae.ckpt                 v1-5-vae.ckpt
2022-10-28 05:49:39 +02:00
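Natural sorting splits names into digit and non-digit runs so that numeric parts compare numerically; a minimal sketch of the idea (the PR's implementation may differ):

```
import re

# Hedged sketch: a natural-sort key so "model50.ckpt" sorts before "model400.ckpt".
def natural_key(name):
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

checkpoints = ["model1000.ckpt", "model50.ckpt", "model1.ckpt", "model400.ckpt"]
print(sorted(checkpoints, key=natural_key))
# ['model1.ckpt', 'model50.ckpt', 'model400.ckpt', 'model1000.ckpt']
```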
Muhammad Rizqi Nur
1618df41ba Gradient clipping for textual embedding 2022-10-28 10:31:27 +07:00
Muhammad Rizqi Nur
a133042c66 Forgot to remove this from train_embedding 2022-10-28 10:01:46 +07:00
benlisquare
ccde874974 adjustments to zh_TW localisation per suggestions by snowmeow2 2022-10-28 13:51:54 +11:00
Muhammad Rizqi Nur
2a25729623 Gradient clipping in train tab 2022-10-28 09:44:56 +07:00
Bruno Seoane
21cbba34f5 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2022-10-27 22:06:17 -03:00
Florian Horn
403c5dba86 hide save btn for other tabs than txt2img and img2img 2022-10-28 00:58:18 +02:00
Martucci
d814db1c25
Update pt_BR.json 2022-10-27 18:09:45 -03:00
Martucci
4ca4900bd4
Update pt_BR.json 2022-10-27 18:04:08 -03:00
Josh Watzman
b50ff4f4e4 Reduce peak memory usage when changing models
A few tweaks to reduce peak memory usage, the biggest being that if we
aren't using the checkpoint cache, we shouldn't duplicate the model
state dict just to immediately throw it away.

On my machine with 16GB of RAM, this change means I can typically change
models, whereas before it would typically OOM.
2022-10-27 22:01:06 +01:00
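A sketch of the peak-memory idea: only duplicate the state dict when a cached copy must survive the load (names are assumptions, not the actual webui code):

```
import copy

# Hedged sketch: when the checkpoint cache is disabled, hand the live state
# dict to the loader instead of deep-copying it just to throw the copy away.
def get_state_dict_for_load(state_dict, cache, cache_key, use_cache):
    if use_cache:
        cache[cache_key] = copy.deepcopy(state_dict)  # keep a pristine copy
    return state_dict  # consumed by the loader either way
```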
Martucci
5e6344261d
Compromise with other PR for this fork 2022-10-27 17:24:20 -03:00
Roy Shilkrot
bdc9083798 Add a barebones interrogate API 2022-10-27 15:20:15 -04:00
random_thoughtss
b68c7c437e Updated name and hover text. 2022-10-27 11:45:35 -07:00
random_thoughtss
a38496c1de Moved mask weight config to SD section 2022-10-27 11:31:31 -07:00
random_thoughtss
26a3fd2fe9 Highres fix works with unmasked latent.
Also refactor the mask creation to make it more accessible.
2022-10-27 11:27:59 -07:00
random-thoughtss
f3f2ffd448
Merge branch 'AUTOMATIC1111:master' into master 2022-10-27 11:19:12 -07:00
FlameLaw
a0a7024c67
Fix random dataset shuffle on TI 2022-10-28 02:13:48 +09:00
xmodar
68760a48cb Add forced LTR for training progress 2022-10-27 17:46:00 +03:00
Florian Horn
bf25b51c31 fixed position to be in line with the other icons 2022-10-27 16:38:55 +02:00
Florian Horn
268159cfe3 fixed indentation 2022-10-27 16:32:10 +02:00
Florian Horn
0995e879ce added save button and shortcut (s) to Modal View 2022-10-27 16:20:01 +02:00
Martucci
3d38416352
More translation adjustments 2022-10-27 10:25:54 -03:00
Dynamic
a668444110
Attention editing hotkey fix part 2 2022-10-27 22:24:29 +09:00
Dynamic
9358a421cf
Remove files that shouldn't be here 2022-10-27 22:24:05 +09:00
Dynamic
6e10078b2b
Attention editing with hotkeys should work with KR now
Added the word "Prompt" in the placeholders to pass the check from edit-attention.js
2022-10-27 22:21:56 +09:00
Dynamic
96da2e0c33
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-27 22:19:55 +09:00
Yuta Hayashibe
c4b5ca5778 Truncate too long filename 2022-10-27 22:00:28 +09:00
xmodar
e64ccd18e1 Add minor edits to Arabic localization 2022-10-27 14:50:55 +03:00
Dynamic
34c9533811
Apparently brackets don't work, gitlab docs fooled me 2022-10-27 20:23:33 +09:00
Dynamic
6a0eebfbee
Update 2 cause I'm an idiot 2022-10-27 20:21:15 +09:00
Dynamic
9b50d9bbbf
Update CODEOWNERS file 2022-10-27 20:16:51 +09:00
Dynamic
035b875e1a
Edit CODEOWNERS for ko_KR.json permissions 2022-10-27 20:14:51 +09:00
yfszzx
e0cbf53f45 create send to buttons by extensions 2022-10-27 18:00:51 +08:00
AngelBottomless
462e6ba667
Disable unavailable or duplicate options 2022-10-27 15:40:24 +09:00
guaneec
80844ac861
Merge pull request #1 from aria1th/patch-11
fix dropouts for future hypernetworks
2022-10-27 14:14:03 +08:00
AngelBottomless
029d7c7543
Revert unresolved changes in Bias initialization
it should be zeros_ or properly parameterized in the future.
2022-10-27 14:44:53 +09:00
yfszzx
ed0821de21 create send to buttons in one module 2022-10-27 13:43:16 +08:00
yfszzx
300e5774e7 create send to buttons in one module 2022-10-27 13:42:20 +08:00
guaneec
cc56df996e Fix dropout logic 2022-10-27 14:38:21 +09:00
AngelBottomless
85fcccc105 Squashed commit of fixing dropout silently
fix dropouts for future hypernetworks

add kwargs for Hypernetwork class

hypernet UI for gradio input

add recommended options

remove as options

revert adding options in ui
2022-10-27 14:38:21 +09:00
yfszzx
4a4647e0df create send to buttons in one module 2022-10-27 13:36:11 +08:00
LunixWasTaken
2ce6b646a7 Add german Localization file. 2022-10-27 03:03:14 +02:00
benlisquare
54e7bfd1ae
Update localizations/zh_TW.json per dtlnor's suggestion
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-10-27 10:42:57 +11:00
benlisquare
84956d2377
Update localizations/zh_TW.json per dtlnor's suggestion
Co-authored-by: dtlnor <dtlnor@hotmail.com>
2022-10-27 10:42:44 +11:00
benlisquare
c40994a7b2 adjustments to zh_TW localisation per suggestions by dtlnor 2022-10-27 10:41:19 +11:00
Martucci
72f55bd325
Merge pull request #1 from M-art-ucci/pt_BR-localization
Localization file for portuguese (brazil)
2022-10-26 20:11:48 -03:00
Martucci
95b8e49e5b
Localization file for portuguese (brazil) 2022-10-26 20:04:48 -03:00
xmodar
bf7cbcdeef Add Arabic localization feedback revisions 2022-10-27 01:30:09 +03:00
xmodar
3de0365141 Add id access to scripts list in the css 2022-10-26 23:57:19 +03:00
xmodar
54cdd4e1f4 Add LTR checkpoint lists and updated Arabic localization 2022-10-26 22:49:45 +03:00
Eugenio Buffo
1a4216fabf Finalised IT localization file 2022-10-26 19:45:36 +02:00
Eugenio Buffo
859f3b359d Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2022-10-26 17:32:42 +02:00
Eugenio Buffo
57450003a9 Added localizations/it_IT.json 2022-10-26 17:32:35 +02:00
MMaker
0dd8480281
fix: Correct before image saved callback 2022-10-26 11:08:44 -04:00
evshiron
fddb4883f4 prototype progress api 2022-10-26 22:39:08 +08:00
DepFA
737eb28fac typo: cmd_opts.embedding_dir to cmd_opts.embeddings_dir 2022-10-26 17:38:08 +03:00
Bruno Seoane
b2e0d8ba78 Remove folder endpoint 2022-10-26 09:54:26 -03:00
Bruno Seoane
8320963dcb Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui 2022-10-26 09:50:26 -03:00
Sihan Wang
7bd8581e46
Fix error caused by EXIF transpose when using custom scripts
Some custom scripts read the image directly and don't require selecting an image in the UI; this caused an error.
2022-10-26 20:32:55 +08:00
benlisquare
eba38de38d create new localisation JSON for zh_TW (Traditional Chinese, Taiwan locale) 2022-10-26 22:10:32 +11:00
Dynamic
dc6779f6f3
Update new strings
Translated new strings in PFF UI
2022-10-26 19:52:34 +09:00
Tony Beeman
99d728b5b1 Add Iterate Button and Improve PFF UI 2022-10-26 13:26:35 +03:00
dtlnor
af73cf50fd change file name 2022-10-26 13:17:21 +03:00
dtlnor
dde8c43598 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
68bdeb5b84 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
2439f4c32c Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
0533580534 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
6cf030abfc re-order some element. update to latest 2022-10-26 13:17:21 +03:00
dtlnor
5394d0e696 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
bf7b50e6bc Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
0655797c53 Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
e53474d31c Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
dd15526e6b Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
f1103b6750 Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
f9e31c6bdd Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
57b90a8b19 Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
f9b20482fc Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
c7e000bedd Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
168aa1cf67 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
e34653eac0 Update zh-hans.json
- fix according to https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3348#issuecomment-1287923525
2022-10-26 13:17:21 +03:00
dtlnor
e125f1cd2f Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
db23be8bcd Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
e05a4a90f2 Update localizations/zh-hans.json
Co-authored-by: liggest <43201720+liggest@users.noreply.github.com>
2022-10-26 13:17:21 +03:00
dtlnor
7e6a6266ad Update zh-hans.json
- update to latest
2022-10-26 13:17:21 +03:00
dtlnor
69576d3398 Update zh-hans.json
- unify the translation of upscale / upscaler
- unify the translation of seed / random seed
- correct images embedding's translation
2022-10-26 13:17:21 +03:00
dtlnor
046bb3b16a Update zh-hans.json
- fix translation of Tile / Tiling
- fix translation of Negative prompt
2022-10-26 13:17:21 +03:00
dtlnor
5784aae925 Update zh-hans.json 2022-10-26 13:17:21 +03:00
dtlnor
366fe8dbd0 Create zh-hans.json 2022-10-26 13:17:21 +03:00
Leo Mozoloa
7e0e4d21d4 Prevent people from just saying "latest version"
in the bug report form
2022-10-26 13:16:06 +03:00
AUTOMATIC1111
4ca4b9f28f
Merge pull request #3562 from 36DB/kr-localization
Update KR localization
2022-10-26 13:15:40 +03:00
camenduru
ef53f7ffcf Fix Turkish Localization
it's good enough to merge 🎉
2022-10-26 13:13:27 +03:00
camenduru
0c38baeb20 🧿 Turkish localization
translated with DeepL; it needs some tweaks, which I will fix 🔧
2022-10-26 13:13:27 +03:00
AUTOMATIC
0cd7460253 add script callback for before image save and change callback for after image save to use a class with parameters 2022-10-26 13:12:44 +03:00
Dynamic
d3fc0f7fc4
Update new strings 2022-10-26 17:59:10 +09:00
Dynamic
9442de2aeb
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-26 17:58:39 +09:00
AUTOMATIC
1e428238db add override_settings to API as an alternative to #3629 2022-10-26 11:47:17 +03:00
guaneec
b6a8bb123b
Fix merge 2022-10-26 15:15:19 +08:00
timntorres
f4e1464217 Implement PR #3625 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres
4875a6c217 Implement PR #3309 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres
c2dc9bfa89 Implement PR #3189 but for embeddings. 2022-10-26 10:14:35 +03:00
timntorres
a524d137d0 patch bug (SeverianVoid's comment on 5245c7a) 2022-10-26 10:12:46 +03:00
timntorres
cb49800c08 img2img, use smartphone photos' EXIF orientation 2022-10-26 10:10:57 +03:00
guaneec
91bb35b1e6
Merge fix 2022-10-26 15:00:03 +08:00
guaneec
649d79a8ec
Merge branch 'master' into hn-activation 2022-10-26 14:58:04 +08:00
AUTOMATIC
9d82c351ac fix typo in on_save_imaged/on_image_saved; hope no extension is using it yet 2022-10-26 09:56:32 +03:00
w-e-w
757264c453 default_time_format if format is blank 2022-10-26 09:51:32 +03:00
guaneec
877d94f97c
Back compatibility 2022-10-26 14:50:58 +08:00
Milly
146856f66d images: allow nested bracket in filename pattern 2022-10-26 09:50:24 +03:00
tumbly
c7af69f893 Add missed fixes from salco's comments, tweak some 2022-10-26 09:47:40 +03:00
tumbly
1ecfa5977e Fix typos and proper wording per salco's comments 2022-10-26 09:47:40 +03:00
tumbly
e86e2f7181 Add French localization 2022-10-26 09:47:40 +03:00
Stephen
b46c64c6e5 clean 2022-10-26 09:46:17 +03:00
Stephen
db9ab1a46b [Bugfix][API] - Fix API response for colab users 2022-10-26 09:46:17 +03:00
AUTOMATIC
cbb857b675 enable creating embedding with --medvram 2022-10-26 09:44:02 +03:00
AUTOMATIC1111
ee73341f04
Merge pull request #3139 from captin411/focal-point-cropping
[Preprocess image] New option to auto crop based on complexity, edges, faces
2022-10-26 09:24:21 +03:00
AngelBottomless
7207e3bf49 remove duplicate keys and lowercase 2022-10-26 09:17:01 +03:00
AngelBottomless
de096d0ce7 Weight initialization and More activation func
add weight init

add weight init option in create_hypernetwork

fstringify hypernet info

save weight initialization info for further debugging

fill bias with zero for He/Xavier

initialize LayerNorm with Normal

fix loading weight_init
2022-10-26 09:17:01 +03:00
guaneec
c702d4d0df
Fix off-by-one 2022-10-26 13:43:04 +08:00
guaneec
2f4c91894d
Remove activation from final layer of HNs 2022-10-26 12:10:30 +08:00
captin411
df0c5ea29d update default weights 2022-10-25 17:06:59 -07:00
captin411
54f0c14824 download better face detection module dynamically 2022-10-25 16:14:13 -07:00
xmodar
53b48d93fb Add complete retranslation for the Arabic localization
Used the following resources:
  - https://www.almaany.com/
  - https://translate.google.com/
  - https://techtionary.thinktech.sa/
  - https://sdaia.gov.sa/files/Dictionary.pdf

The translations are ordered by the way they appear in the UI.
This version abandons literal translation and adds explanations where possible.
2022-10-26 02:12:53 +03:00
captin411
db8ed5fe5c Focal crop UI elements 2022-10-25 15:22:29 -07:00
captin411
6629446a2f Merge branch 'master' into focal-point-cropping 2022-10-25 13:22:27 -07:00
random_thoughtss
8b4f32779f Switch to a continuous blend for cond. image. 2022-10-25 13:15:08 -07:00
captin411
3e6c2420c1 improve debug markers, fix algo weighting 2022-10-25 13:13:12 -07:00
random_thoughtss
605d27687f Added conditioning image masking to xy_grid.
Use `True` and `False` to select values.
2022-10-25 12:20:54 -07:00
random_thoughtss
f9549d1cbb Added option to use unmasked conditioning image. 2022-10-25 11:14:12 -07:00
不会画画的中医不是好程序员
4ff4730d82
Merge branch 'AUTOMATIC1111:master' into Inspiron 2022-10-25 19:09:38 +08:00
yfszzx
f300d0f2b4 Merge branch 'Inspiron' of https://github.com/yfszzx/stable-diffusion-webui-plus into Inspiron 2022-10-25 18:48:24 +08:00
yfszzx
9ba439b533 need some rights for extensions 2022-10-25 18:48:07 +08:00
Dynamic
46cc0b3bc6
Update strings for some custom script/extensions 2022-10-25 18:28:09 +09:00
Dynamic
563fb0aa39
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-25 18:27:32 +09:00
AUTOMATIC
3e15f8e0f5 update callbacks code for #3549 2022-10-25 12:16:25 +03:00
不会画画的中医不是好程序员
5bfa2b23ca
Merge branch 'AUTOMATIC1111:master' into Inspiron 2022-10-25 15:38:33 +08:00
yfszzx
ff305acd51 some rights for extensions 2022-10-25 15:33:43 +08:00
w-e-w
91c1e1e6a9 fix default filename pattern 2022-10-25 09:44:54 +03:00
brkirch
faed465a0b MPS Upscalers Fix
Get ESRGAN, SCUNet, and SwinIR working correctly on MPS by ensuring memory is contiguous for tensor views before sending to MPS device.
2022-10-25 09:42:53 +03:00
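A sketch of the contiguity fix described above, assuming a recent PyTorch with the MPS backend:

```
import torch

# Hedged sketch: tensor views may be non-contiguous, which breaks on the MPS
# backend, so force contiguous memory before moving the tensor to the device.
def to_mps_safe(t):
    if torch.backends.mps.is_available():
        return t.contiguous().to("mps")
    return t
```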
brkirch
4c24347e45 Remove BSRGAN from --use-cpu, add SwinIR 2022-10-25 09:42:53 +03:00
AUTOMATIC1111
f53ca51638
Merge pull request #3549 from tsngo/on-image-saved-callback
add callback after image is saved
2022-10-25 08:40:19 +03:00
AUTOMATIC1111
16416e42b5
Merge branch 'master' into on-image-saved-callback 2022-10-25 08:40:12 +03:00
AUTOMATIC
77a320f406 do not stop execution when script's callback misbehaves and report which script it was 2022-10-25 08:32:47 +03:00
ritosonn
b383702752 fix #3145 #3093 2022-10-25 08:32:33 +03:00
innovaciones
73c4a8138c Fixes 2022-10-25 08:17:41 +03:00
innovaciones
bdc4c203f2 Add Spanish localization 2022-10-25 08:17:41 +03:00
xmodar
d3eef0aa0c Remove rtl: false from localizations 2022-10-25 08:15:44 +03:00
xmodar
ca2ebc89c2 Add RTL languages support and improved Arabic localization 2022-10-25 08:15:44 +03:00
Melan
18f86e41f6 Removed two unused imports 2022-10-24 17:21:18 +02:00
Dynamic
e595b41c9d
Update translations for renewed tooltip texts 2022-10-25 00:17:46 +09:00
Dynamic
0b990d1d34
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-25 00:16:26 +09:00
w-e-w
df0a1f8381 add hints, [datetime<Format><Time Zone>] 2022-10-24 16:01:48 +03:00
w-e-w
0c0028a9d3 UnknownTimeZoneError 2022-10-24 16:01:48 +03:00
yfszzx
cb9d2f8705 move to img component to public 2022-10-24 20:06:53 +08:00
Dynamic
8d8d4d8a1b
Update new strings in Settings tab 2022-10-24 20:55:21 +09:00
Dynamic
ab7af93e1e
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-24 20:53:51 +09:00
AUTOMATIC
9f79e59a95 added info about History/Image browser to readme 2022-10-24 14:42:36 +03:00
Bruno Seoane
2267498a8c Merge remote-tracking branch 'upstream/master' 2022-10-24 08:37:37 -03:00
Bruno Seoane
595dca85af Reverse run_extras change
Update serialization on the batch images endpoint
2022-10-24 08:32:18 -03:00
blackneoo
5587ab7ea8 Add Arabic localization
Arabic translation for the UI. However, Arabic is a right-to-left language, so for proper display (especially when text contains both Arabic and English) it is required to add dir="rtl" in the style. I suggest adding a checkbox in settings to trigger this for right-to-left languages.
2022-10-24 14:16:32 +03:00
AUTOMATIC
2c05e06ea7 rename api/processing to api/models for #3511 2022-10-24 14:11:14 +03:00
AUTOMATIC
8da1bd48bf add an option to skip adding a number to filenames when saving.
rework the filename pattern function to go through the pattern once and not compute any replacement until it is actually encountered in the pattern.
2022-10-24 14:03:58 +03:00
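A minimal sketch of single-pass, lazy tag expansion (the handler table is hypothetical; the real pattern function supports more syntax):

```
import re

# Hedged sketch: each [tag] is computed only when it actually appears in the
# pattern; unknown tags are left untouched.
def apply_filename_pattern(pattern, handlers):
    def expand(match):
        handler = handlers.get(match.group(1).lower())
        return handler() if handler else match.group(0)
    return re.sub(r"\[([^\[\]<>]+)\]", expand, pattern)

# apply_filename_pattern("[seed]-[prompt]", {"seed": lambda: "123"})
# -> "123-[prompt]"
```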
yfszzx
994aaadf08 a strange bug 2022-10-24 16:44:36 +08:00
w-e-w
eb007e5884 use the same datetime object for [date] and [datetime] 2022-10-24 10:28:42 +03:00
w-e-w
5a981310e6 replace_datetime() can now accept a datetime parameter 2022-10-24 10:28:42 +03:00
w-e-w
8f6af4ed65 remove lowercasing file_decoration as it is not needed anymore 2022-10-24 10:28:42 +03:00
w-e-w
00952fb4a8 add sanitize_filename() to datetime 2022-10-24 10:28:42 +03:00
w-e-w
480d8e7646 replace "str.replace()" in apply_filename_pattern() with an equivalent re.sub()
the file_decoration passed into apply_filename_pattern() was lowercased to increase compatibility
with the case-sensitive str.replace()

but because the newly implemented "time format" is case sensitive,
lowercasing the file_decoration would break the time format

in order to resolve this issue,
I decided to replace every str.replace() and every `if "str" in x` check with a case-insensitive regular-expression equivalent
2022-10-24 10:28:42 +03:00
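A minimal sketch of the case-insensitive replacement (tag names are examples):

```
import re

# Hedged sketch: replace a literal tag case-insensitively, without lowercasing
# the whole decoration string (which would break case-sensitive time formats).
def replace_tag(decoration, tag, value):
    return re.sub(re.escape(tag), value, decoration, flags=re.IGNORECASE)

print(replace_tag("[SEED]-[datetime<%Y%m%d_%H%M%S>]", "[seed]", "12345"))
# 12345-[datetime<%Y%m%d_%H%M%S>]
```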
w-e-w
37dd6deafb filename pattern [datetime], extended customizable Format and Time Zone
format:
[datetime]
[datetime<Format>]
[datetime<Format><Time Zone>]
2022-10-24 10:28:42 +03:00
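A sketch of what the extended pattern could expand to, using the standard-library zoneinfo (the webui may use a different timezone library):

```
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

# Hedged sketch of expanding [datetime<Format><Time Zone>].
def format_datetime(fmt="%Y%m%d%H%M%S", tz_name=None):
    now = datetime.now(ZoneInfo(tz_name)) if tz_name else datetime.now()
    return now.strftime(fmt)

# format_datetime("%Y-%m-%d %H.%M", "Asia/Tokyo") -> e.g. "2022-10-24 10.28"
```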
w-e-w
7d4a4db9ea modify unnecessary string assignment as it's going to get overwritten 2022-10-24 10:28:42 +03:00
w-e-w
c5d90628a4 move the "file_decoration" initialization section
into "if forced_filename is None:"
no need to initialize it if it's not going to be used
2022-10-24 10:28:42 +03:00
w-e-w
3be6b29d81 indent=4 config.json 2022-10-24 10:22:48 +03:00
AUTOMATIC
c623fa1f0b add extensions dir 2022-10-24 09:51:17 +03:00
AUTOMATIC
876a96f0f9 remove erroneous dir in the extension directory
remove loading .js files from scripts dir (they go into javascript)
load scripts after models, for scripts that depend on loaded models
2022-10-24 09:39:46 +03:00
AUTOMATIC1111
999929bea4
Merge pull request #3537 from yfszzx/Inspiron
Move out images browser from project
2022-10-24 09:28:37 +03:00
Trung Ngo
734986dde3 add callback after image is saved 2022-10-24 01:25:31 -05:00
Kris57
edc0c907fa fix ja translation 2022-10-24 09:20:05 +03:00
Kris57
71d14a4c40 cleanup ja translation 2022-10-24 09:20:05 +03:00
Kris57
e33a05f263 update ja translation 2022-10-24 09:20:05 +03:00
Kris57
a921badac3 update ja translation 2022-10-24 09:20:05 +03:00
Kris57
c6459986cb update ja translation 2022-10-24 09:20:05 +03:00
AUTOMATIC
6cbb04f7a5 fix #3517 breaking txt2img 2022-10-24 09:15:26 +03:00
不会画画的中医不是好程序员
68931242cf
Merge branch 'AUTOMATIC1111:master' into Inspiron 2022-10-24 14:09:27 +08:00
AngelBottomless
e9a410b535 check length for variance 2022-10-24 09:07:39 +03:00
AngelBottomless
0d2e1dac40 convert deque -> list
I don't think this is efficient
2022-10-24 09:07:39 +03:00
AngelBottomless
348f89c8d4 statistics for pbar 2022-10-24 09:07:39 +03:00
AngelBottomless
40b56c9289 cleanup some code 2022-10-24 09:07:39 +03:00
AngelBottomless
b297cc3324 Hypernetworks - fix KeyError in statistics caching
Statistics logging has changed to {filename : list[losses]}, so it has to use loss_info[key].pop()
2022-10-24 09:07:39 +03:00
Vladimir Repin
f2cc3f32d5 fix whitespaces 2022-10-24 08:58:56 +03:00
Vladimir Repin
9741969325 Save properly processed image before color correction 2022-10-24 08:58:56 +03:00
Dynamic
dd25722d6c Finalize ko_KR.json 2022-10-24 08:55:33 +03:00
Dynamic
ae7c830c3a Translation complete 2022-10-24 08:55:33 +03:00
Dynamic
016712fc4c Update ko_KR.json
Updated translation for everything except the Settings tab
2022-10-24 08:55:33 +03:00
Dynamic
6cfe23a6f1 Rename ko-KR.json to ko_KR.json 2022-10-24 08:55:33 +03:00
Dynamic
499713c546 Updated file with basic template and added new translations
Translation done in txt2img-img2img windows and following scripts
2022-10-24 08:55:33 +03:00
Dynamic
e210b61d6a update ko-KR.json 2022-10-24 08:55:33 +03:00
Dynamic
1a96f856c4 update ko-KR.json
Translated all text on txt2img window, plus some extra
2022-10-24 08:55:33 +03:00
Dynamic
021b02751e Move ko-KR.json 2022-10-24 08:55:33 +03:00
Dynamic
e7eea55571 Update ko-KR.json 2022-10-24 08:55:33 +03:00
Dynamic
68e9e97899 Initial KR support - WIP
Localization WIP
2022-10-24 08:55:33 +03:00
judgeou
fe9740d2f5 update deepdanbooru version 2022-10-24 08:46:31 +03:00
yfszzx
f132923d5f merged 2022-10-24 11:35:51 +08:00
yfszzx
394c498621 test 2022-10-24 11:29:45 +08:00
不会画画的中医不是好程序员
9dd17b8601
fix add git add mistake 2022-10-24 11:19:49 +08:00
yfszzx
a889c93f23 paste_fields add to public 2022-10-24 11:13:16 +08:00
yfszzx
d7987ef9da add paste_fields to global 2022-10-24 11:06:58 +08:00
yfszzx
cef1b89aa2 remove browser to extension 2022-10-24 10:10:33 +08:00
yfszzx
124e44cf1e remove browser to extension 2022-10-24 09:51:56 +08:00
Dynamic
2ce44fc48e
Finalize ko_KR.json 2022-10-24 04:38:16 +09:00
Dynamic
6124575e18
Translation complete 2022-10-24 04:29:19 +09:00
Bruno Seoane
90f02c7522 Remove unused field and class 2022-10-23 16:05:54 -03:00
Bruno Seoane
1e625624ba Add folder processing endpoint
Also minor refactor
2022-10-23 16:01:16 -03:00
Bruno Seoane
866b36d705 Move processing's models into models.py
It didn't make sense to have two different files for the same thing, and
"models" is a more descriptive name.
2022-10-23 15:35:49 -03:00
Bruno Seoane
e0ca4dfbc1 Update endpoints to use gradio's own utils functions 2022-10-23 15:13:37 -03:00
Bruno Seoane
e3f0e34cd6 Merge branch 'master' of https://github.com/bamarillo/stable-diffusion-webui 2022-10-23 13:14:54 -03:00
Bruno Seoane
4ff852ffb5 Add batch processing "extras" endpoint 2022-10-23 13:07:59 -03:00
Bruno Seoane
0523704dad Update run_extras to use the temp filename
In batch mode run_extras tries to preserve the original file name of the
images. The problem is that this makes no sense, since the user only gets
a list of images in the UI; trying to manually save them shows that these
images have random temp names. Also, trying to keep "orig_name" in the
API is a hassle that adds complexity to the consuming UI since the
client has to use (or emulate) an input (type=file) element in a form.
Using the normal file name not only doesn't change the output and
functionality in the original UI but also helps keep the API simple.
2022-10-23 12:27:50 -03:00
Dynamic
c729cd4130
Update ko_KR.json
Updated translation for everything except the Settings tab
2022-10-23 22:38:49 +09:00
Dynamic
705bbf327f
Rename ko-KR.json to ko_KR.json 2022-10-23 22:37:40 +09:00
Dynamic
660ae690bd
Merge branch 'AUTOMATIC1111:master' into kr-localization 2022-10-23 22:36:56 +09:00
captin411
1be5933ba2
auto cropping now works with non-square crops 2022-10-23 04:11:07 -07:00
AUTOMATIC1111
6bd6154a92
Merge pull request #2067 from victorca25/esrgan_mod
update ESRGAN architecture and model to support all ESRGAN models
2022-10-23 13:43:41 +03:00
w-e-w
696cb33e50 after initial launch, disable --autolaunch for subsequent restarts 2022-10-23 12:34:16 +03:00
yfszzx
6a9ea40d7f Move browser and Inspiration into extension 2022-10-23 16:17:37 +08:00
kabachuha
1ef32c8b8f Add ru_RU localization 2022-10-23 09:32:24 +03:00
Stephen
5dc0739ecd working mask 2022-10-23 09:26:56 +03:00
Stephen
9e1a8b7734 non-implemented mask with any type 2022-10-23 09:26:56 +03:00
Stephen
a7c213d0f5 [API][Feature] - Add img2img API endpoint 2022-10-23 09:26:56 +03:00
DepFA
1fbfc052eb Update hypernetwork.py 2022-10-23 08:34:33 +03:00
Bruno Seoane
28e26c2bef Add "extra" single image operation
- Separate extra modes into 3 endpoints so the user doesn't have to
handle so many unused parameters.
- Add response model for documentation
2022-10-22 23:17:27 -03:00
Bruno Seoane
b02926df13 Moved models to their own file and extracted base64 conversion to its own function 2022-10-22 20:24:04 -03:00
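A minimal sketch of such a base64 conversion helper (assuming PIL images, which the webui API works with):

```
import base64
import io

from PIL import Image

# Hedged sketch of base64 <-> PIL helpers like the ones described above.
def encode_pil_to_base64(image):
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

def decode_base64_to_image(data):
    return Image.open(io.BytesIO(base64.b64decode(data)))
```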
Bruno Seoane
1b4d04737a Remove unused imports 2022-10-22 20:13:16 -03:00
papuSpartan
ce42879438 fix js func signature and not forget to initialize confirmation var to prevent exception upon cancelling confirmation 2022-10-22 14:53:37 -05:00
AngelBottomless
48dbf99e84 Allow tracking real-time loss
Someone had 6000 images in their dataset, and it was shown as 0, which was confusing.
This will allow tracking the real-time dataset-average loss for registered objects.
2022-10-22 22:24:19 +03:00
AUTOMATIC
ca5a9e79dc fix for img2img color correction in a batch #3218 2022-10-22 22:06:54 +03:00
AUTOMATIC
be748e8b08 add --freeze-settings commandline argument to disable changing settings 2022-10-22 22:05:22 +03:00
AUTOMATIC
d213d6ca6f removed the option to use 2x more memory when generating previews
added an option to always only show one image in previews
removed duplicate code
2022-10-22 20:48:13 +03:00
Unnoen
4fdb53c1e9 Generate grid preview for progress image 2022-10-22 20:36:04 +03:00
AngelBottomless
24694e5983 Update hypernetwork.py 2022-10-22 20:25:32 +03:00
AUTOMATIC
321bacc6a9 call model_loaded_callback after setting shared.sd_model in case scripts refer to it using that 2022-10-22 20:15:12 +03:00
MrCheeze
0df94d3fcf fix aesthetic gradients doing nothing after loading a different model 2022-10-22 20:14:18 +03:00
AUTOMATIC
324c7c732d record First pass size as 0x0 for #3328 2022-10-22 20:09:51 +03:00
Kris57
774be6d2f2 improve ja translation 2022-10-22 19:38:34 +03:00
Kris57
f613c6b8c5 improve ja translation 2022-10-22 19:38:34 +03:00
Kris57
0262bf64dd improve ja translation 2022-10-22 19:38:34 +03:00
Kris57
7043f4eff3 improve ja translation 2022-10-22 19:38:34 +03:00
Kris57
070fda592b add ja translation 2022-10-22 19:38:34 +03:00
Kris57
eb2dae196e add ja translation 2022-10-22 19:38:34 +03:00
Kris57
96ee7d7707 add ja localization 2022-10-22 19:38:34 +03:00
random_thoughtss
7613ea12f2 Fixed img2imgalt after inpainting update 2022-10-22 19:36:57 +03:00
AUTOMATIC1111
ffea9b1509
Merge pull request #3414 from discus0434/master
[Hypernetworks] Add a feature to use dropout / more activation functions
2022-10-22 19:32:13 +03:00
Greendayle
e38625011c fix part2 2022-10-22 19:27:16 +03:00
Greendayle
72383abacd Deepdanbooru linux fix 2022-10-22 19:27:16 +03:00
AUTOMATIC
dbc8ab65f6 typo 2022-10-22 19:19:17 +03:00
AUTOMATIC
d37cfffd53 added callback for creating new settings in extensions 2022-10-22 19:18:56 +03:00
discus0434
6a4fa73a38 small fix 2022-10-22 13:44:39 +00:00
discus0434
97749b7c7d
Merge branch 'AUTOMATIC1111:master' into master 2022-10-22 22:00:59 +09:00
discus0434
7912acef72 small fix 2022-10-22 13:00:44 +00:00
discus0434
fccba4729d add an option to avoid dying relu 2022-10-22 12:02:41 +00:00
AUTOMATIC
7fd90128eb added a guard for hypernet training that will stop early if weights are getting no gradients 2022-10-22 14:48:43 +03:00
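A rough sketch of such a guard, assuming PyTorch training (the commit's actual check may differ):

```
import torch

# Hedged sketch: after loss.backward(), stop early if no trainable weight
# actually received a gradient (e.g. the graph was detached somewhere).
def check_gradients_or_stop(model):
    got_grad = any(
        p.grad is not None and p.grad.abs().sum().item() > 0
        for p in model.parameters() if p.requires_grad
    )
    if not got_grad:
        raise RuntimeError("Training stopped: weights are receiving no gradients")
```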
AUTOMATIC
1cd3ed7def fix for extensions without style.css 2022-10-22 14:28:56 +03:00
discus0434
dcb45dfecf Merge branch 'master' of upstream 2022-10-22 11:14:46 +00:00
discus0434
0e8ca8e7af add dropout 2022-10-22 11:07:00 +00:00
AUTOMATIC
50b5504401 remove parsing command line from devices.py 2022-10-22 14:04:14 +03:00
AUTOMATIC1111
e80bdcab91
Merge pull request #3377 from Extraltodeus/cuda-device-id-selection
Implementation of CUDA device id selection (--device-id 0/1/2)
2022-10-22 13:58:00 +03:00
AUTOMATIC1111
1fa53dab2c
Merge branch 'master' into cuda-device-id-selection 2022-10-22 13:57:20 +03:00
AUTOMATIC
5aa9525046 updated readme with info about Aesthetic Gradients 2022-10-22 13:40:07 +03:00
discus0434
6a02841fff
Merge pull request #2 from aria1th/patch-6
generalized some functions and option for ignoring first layer
2022-10-22 19:35:56 +09:00
AUTOMATIC
6398dc9b10 further support for extensions 2022-10-22 13:34:49 +03:00
AUTOMATIC
2b91251637 removed aesthetic gradients as built-in
added support for extensions
2022-10-22 12:23:58 +03:00
yfszzx
67b78f0ea6 inspiration perfected 2022-10-22 10:29:23 +08:00
yfszzx
d93ea5cdeb inspiration perfected 2022-10-22 10:21:21 +08:00
yfszzx
40ddb6df61 inspiration perfected 2022-10-22 10:16:22 +08:00
papuSpartan
700340448b forgot to clear neg prompt after moving to back. Add tooltip to hints 2022-10-21 17:24:04 -05:00
Extraltodeus
29bfacd63c
implement CUDA device selection, --device-id arg 2022-10-22 00:12:46 +02:00
Extraltodeus
57eb54b838
implement CUDA device selection by ID 2022-10-22 00:11:07 +02:00
papuSpartan
0c7cf08b3d some doc and formatting 2022-10-21 15:32:26 -05:00
papuSpartan
9e40520f00 refactor internal terminology to use 'clear' instead of 'trash' like #2728 2022-10-21 15:13:12 -05:00
papuSpartan
de70ddaf58 update token counter when clearing prompt 2022-10-21 15:00:35 -05:00
papuSpartan
ee0505dd00 only delete prompt on back end and remove client-side deletion 2022-10-21 14:24:14 -05:00
papuSpartan
9ba372de90 initial work on getting prompts cleared on the backend and synchronizing token counter 2022-10-21 13:55:48 -05:00
papuSpartan
4a9ff0891a
Merge branch 'AUTOMATIC1111:master' into master 2022-10-21 13:53:32 -05:00
yfszzx
58ee008f0f inspiration finished 2022-10-22 01:30:12 +08:00
yfszzx
2797b2cbf2 inspiration finished 2022-10-22 01:28:02 +08:00
yfszzx
bb0f1a2cda inspiration finished 2022-10-22 01:23:00 +08:00
AUTOMATIC
26d1073745 Merge remote-tracking branch 'historytab/master' 2022-10-21 18:49:56 +03:00
AUTOMATIC
f49c08ea56 prevent error spam when processing images without txt files for captions 2022-10-21 18:46:02 +03:00
AUTOMATIC1111
7464f367c3
Merge pull request #3246 from Milly/fix/train-preprocess-keep-ratio
Preprocess: fixed keep ratio and changed split behavior
2022-10-21 18:36:55 +03:00
AUTOMATIC1111
5e9afa5c8a
Merge branch 'master' into fix/train-preprocess-keep-ratio 2022-10-21 18:36:29 +03:00
AUTOMATIC
24ce67a13b make aspect ratio overlay work regardless of selected localization, pt2 2022-10-21 17:41:47 +03:00
AUTOMATIC
ac0aa2b18e loading SD VAE, see PR #3303 2022-10-21 17:35:51 +03:00
AUTOMATIC
3d898044e5 batch_size does not affect job count 2022-10-21 17:26:30 +03:00
AUTOMATIC
a7aa00d46a Merge remote-tracking branch 'mk2/outpainting-mk2-batch-out' 2022-10-21 17:22:47 +03:00
AUTOMATIC
704036ff07 make aspect ratio overlay work regardless of selected localization 2022-10-21 17:11:42 +03:00
Rcmcpe
02e4d4694d Change option description of unload_models_when_training 2022-10-21 16:53:06 +03:00
timntorres
272fa527bb Remove unused variable. 2022-10-21 16:52:24 +03:00
timntorres
fccad18a59 Refer to Hypernet's name, sensibly, by its name variable. 2022-10-21 16:52:24 +03:00
timntorres
19818f023c Match hypernet name with filename in all cases. 2022-10-21 16:52:24 +03:00
timntorres
51e3dc9cca Sanitize hypernet name input. 2022-10-21 16:52:24 +03:00
AUTOMATIC1111
3e12b5295c
Merge pull request #3321 from AUTOMATIC1111/features-to-readme
Features to readme
2022-10-21 16:51:08 +03:00
AUTOMATIC1111
ec37f8a45f
Merge branch 'master' into features-to-readme 2022-10-21 16:51:01 +03:00
parsec501
85cb5918ee Make commit hash mandatory field 2022-10-21 16:48:13 +03:00
DepFA
306e2ff6ab Update image_embedding.py 2022-10-21 16:47:37 +03:00
DepFA
d0ea471b0c Use opts in textual_inversion image_embedding.py for dynamic fonts 2022-10-21 16:47:37 +03:00
AUTOMATIC
9286fe53de make aesthetic embedding compatible with prompts longer than 75 tokens 2022-10-21 16:38:06 +03:00
AUTOMATIC
e89e2f7c2c Merge remote-tracking branch 'origin/master' 2022-10-21 16:16:34 +03:00
AUTOMATIC
df57064093 do not load aesthetic clip model until it's needed
add refresh button for aesthetic embeddings
add aesthetic params to images' infotext
2022-10-21 16:10:51 +03:00
ClashSAN
003d2c7fe4
Update README.md 2022-10-21 11:40:37 +00:00
AUTOMATIC
7d6b388d71 Merge branch 'ae' 2022-10-21 13:35:01 +03:00
Leo Mozoloa
1ed227b3b5 wtf is happening 2022-10-21 12:16:48 +03:00
AUTOMATIC
bf30673f51 Fix Hypernet infotext string split bug for PR #3283 2022-10-21 10:19:25 +03:00
AUTOMATIC
03a1e288c4 turns out LayerNorm also has weight and bias and needs to be pre-multiplied and trained for hypernets 2022-10-21 10:13:24 +03:00
AUTOMATIC1111
e4877722e3
Merge pull request #3197 from AUTOMATIC1111/training-help-text
Training UI Changes
2022-10-21 09:58:16 +03:00
AUTOMATIC1111
0c5522ea21
Merge branch 'master' into training-help-text 2022-10-21 09:57:55 +03:00
timntorres
2273e752fb Remove redundant try/except. 2022-10-21 09:55:00 +03:00
timntorres
4ff274e1e3 Revise comments. 2022-10-21 09:55:00 +03:00
timntorres
6014fb8afb Do nothing if image file already exists. 2022-10-21 09:55:00 +03:00
timntorres
5245c7a493 Issue #2921-Give PNG info to Hypernet previews. 2022-10-21 09:55:00 +03:00
guaneec
b69c37d25e Allow datasets with only 1 image in TI 2022-10-21 09:54:09 +03:00
Patryk Wychowaniec
7157e5d064 interrogate: Fix CLIP-interrogation on CPU
Currently, trying to perform CLIP interrogation on a CPU fails, saying:

```
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
```

This merge request fixes this issue by detecting whether the target
device is CPU and, if so, force-enabling `--no-half` and passing
`device="cpu"` to `clip.load()` (which then does some extra tricks to
ensure it works correctly on CPU).
2022-10-21 09:52:12 +03:00
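A minimal sketch of the approach this message describes, assuming the OpenAI `clip` package; the wrapper name is illustrative, not the repo's actual code:

```python
import clip
import torch

def load_clip_for_interrogation(model_name: str, device: torch.device):
    # Half-precision convolutions are unimplemented on CPU
    # ("slow_conv2d_cpu" not implemented for 'Half'), so force fp32
    # and pass device="cpu" so clip.load() applies its CPU handling.
    if device == torch.device("cpu"):
        model, preprocess = clip.load(model_name, device="cpu")
        model = model.float()  # same effect as --no-half for this model
    else:
        model, preprocess = clip.load(model_name, device=device)
    return model, preprocess
```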
Stephen
5f4fec307c [Bugfix][API] - Fix API arg in launch script 2022-10-21 09:50:57 +03:00
AUTOMATIC1111
6c54532a34
Merge pull request #3263 from mezotaken/master
Fix skip and interrupt buttons for Highres. fix option enabled
2022-10-21 09:49:40 +03:00
AUTOMATIC1111
d6bd6a425d
Merge branch 'master' into master 2022-10-21 09:49:32 +03:00
AUTOMATIC
c23f666dba a more strict check for activation type and a more reasonable check for type of layer in hypernets 2022-10-21 09:47:43 +03:00
AUTOMATIC1111
a26fc2834c
Merge pull request #3199 from discus0434/master
Add features to insert activation functions to hypernetworks
2022-10-21 09:34:45 +03:00
AUTOMATIC
12a97c5368 Merge remote-tracking branch 'origin/master' 2022-10-21 09:02:07 +03:00
winterspringsummer
9d71eef02e sort file list in alphabetical ordering in extras 2022-10-21 09:00:46 +03:00
winterspringsummer
a13c3bed3c Fixed path issue while extras batch processing 2022-10-21 09:00:46 +03:00
winterspringsummer
fb5a8cf0d9 Added try except to extras batch from directory 2022-10-21 09:00:45 +03:00
winterspringsummer
60872c5b40 Fixed path issue while extras batch processing 2022-10-21 09:00:45 +03:00
AUTOMATIC
74088c2a06 allow float sizes for hypernet's layer_structure 2022-10-21 09:00:45 +03:00
AUTOMATIC
4587218190 updated readme and some small stylistic changes to code 2022-10-21 09:00:39 +03:00
winterspringsummer
991a595686 sort file list in alphabetical ordering in extras 2022-10-21 07:59:00 +03:00
winterspringsummer
bc16b135b5 Fixed path issue while extras batch processing 2022-10-21 07:59:00 +03:00
winterspringsummer
aacc4c1ecb Added try except to extras batch from directory 2022-10-21 07:59:00 +03:00
winterspringsummer
0110429dc4 Fixed path issue while extras batch processing 2022-10-21 07:59:00 +03:00
wywywywy
1fc278bcc6
Fixed job count & single-output grid 2022-10-21 02:38:24 +01:00
papuSpartan
a3b047b7c7 add settings option to toggle button visibility 2022-10-20 19:28:58 -05:00
random_thoughtss
49533eed9e XY grid correctly re-assignes model when config changes 2022-10-20 16:01:27 -07:00
papuSpartan
a816514980 remove unnecessary assignment 2022-10-20 17:33:33 -05:00
papuSpartan
9cc4974d23 add confirmation dialogue 2022-10-20 17:03:51 -05:00
Vladimir Repin
d23a46ceaa Different approach to skip/interrupt with highres fix 2022-10-20 23:49:14 +03:00
Melan
7543cf5e3b Fixed some typos in the code 2022-10-20 22:43:08 +02:00
Melan
8f59129847 Some changes to the tensorboard code and hypernetwork support 2022-10-20 22:37:16 +02:00
random_thoughtss
708c3a7bd8 Added PLMS hijack and made sure to always replace methods 2022-10-20 13:28:43 -07:00
Vladimir Repin
d1cb08bfb2 fix skip and interrupt for highres. fix option 2022-10-20 22:49:06 +03:00
Melan
a6d593a6b5 Fixed a typo in a variable 2022-10-20 19:43:21 +02:00
random_thoughtss
92a17a7a4a Made dummy latents smaller. Minor code cleanups 2022-10-20 09:45:03 -07:00
aria1th
f89829ec3a Revert "fix bugs and optimizations"
This reverts commit 108be15500.
2022-10-21 01:37:11 +09:00
不会画画的中医不是好程序员
dc66540629
Merge branch 'AUTOMATIC1111:master' into Inspiron 2022-10-21 00:07:31 +08:00
AngelBottomless
108be15500
fix bugs and optimizations 2022-10-21 01:00:41 +09:00
yfszzx
d07cb46f34 inspiration pull request 2022-10-20 23:58:52 +08:00
wywywywy
18df060c3e
Fixed outpainting_mk2 output cropping 2022-10-20 16:16:09 +01:00
wywywywy
91efe138b3
Implemented batch_size logic in outpainting_mk2 2022-10-20 16:02:32 +01:00
AngelBottomless
a71e021236
only linear 2022-10-20 23:48:52 +09:00
AngelBottomless
d8acd34f66
generalized some functions and option for ignoring first layer 2022-10-20 23:43:03 +09:00
Milly
85dd62c4c7 train: ui: added Split image threshold and Split image overlap ratio to preprocess 2022-10-20 23:35:01 +09:00
Milly
9681419e42 train: fixed preprocess image ratio 2022-10-20 23:32:41 +09:00
wywywywy
4281f255d5
Implemented batch count logic to Outpainting mk2 2022-10-20 15:31:09 +01:00
Melan
29e74d6e71 Add support for Tensorboard for training embeddings 2022-10-20 16:26:16 +02:00
discus0434
f8733ad08b add linear as an act func (option for doing nothing) 2022-10-20 11:07:37 +00:00
Dynamic
21364c5c39
Updated file with basic template and added new translations
Translation done in txt2img-img2img windows and following scripts
2022-10-20 19:20:39 +09:00
discus0434
6b38c2c19c
Merge branch 'AUTOMATIC1111:master' into master 2022-10-20 18:51:12 +09:00
captin411
0ddaf8d202
improve face detection a lot 2022-10-20 00:34:55 -07:00
papuSpartan
8931a825f4
Merge branch 'AUTOMATIC1111:master' into master 2022-10-20 01:59:49 -05:00
papuSpartan
158d678f59 clear prompt button now works on both relevant tabs. Device detection stuff will be added later. 2022-10-20 01:08:24 -05:00
AUTOMATIC
7f8ab1ee8f Merge remote-tracking branch 'origin/master' 2022-10-20 08:18:19 +03:00
AUTOMATIC
930b4c64f7 allow float sizes for hypernet's layer_structure 2022-10-20 08:18:02 +03:00
random_thoughtss
aa7ff2a197 Fixed non-square highres fix generation 2022-10-19 21:46:13 -07:00
papuSpartan
c6345bd445 nerf line length 2022-10-19 21:23:57 -05:00
DepFA
858462f719
do caption copy for both flips 2022-10-20 02:57:18 +01:00
captin411
59ed744383
face detection algo, configurability, reusability
Try to move the crop in the direction of a face if it is present

More internal configuration options for choosing weights of each of the algorithm's findings

Move logic into its own module
2022-10-19 17:19:02 -07:00
discus0434
ba469343e6 align ui.py imports with upstream 2022-10-20 00:17:04 +00:00
discus0434
ecb433b220 update 2022-10-20 00:14:16 +00:00
discus0434
6f98e89486 update 2022-10-20 00:10:45 +00:00
discus0434
4574eea589
Merge branch 'AUTOMATIC1111:master' into master 2022-10-20 09:08:47 +09:00
papuSpartan
8b74b9aa9a add symbol for clear button and simplify roll_col css selector 2022-10-19 19:06:14 -05:00
DepFA
55d8c6cce6
default to ignore existing captions 2022-10-20 00:53:29 +01:00
DepFA
9b65c4ecf4
pass preprocess_txt_action param 2022-10-20 00:49:23 +01:00
DepFA
ab353b141d
link existing txt option 2022-10-20 00:48:07 +01:00
DepFA
fbcce66601
add existing caption file handling 2022-10-20 00:46:54 +01:00
DepFA
4d6b9f76a5
reorder create_hypernetwork params 2022-10-20 00:27:16 +01:00
DepFA
c3835ec85c
pass overwrite old flag 2022-10-20 00:24:24 +01:00
DepFA
632e8d6602
split learn rates 2022-10-20 00:19:40 +01:00
DepFA
0087079c2d
allow overwrite old embedding 2022-10-20 00:10:59 +01:00
DepFA
166be3919b
allow overwrite old hn 2022-10-20 00:09:40 +01:00
DepFA
d6ea584137
change html output 2022-10-20 00:07:57 +01:00
random_thoughtss
c418467c03 Don't compute latent mask if we're not using it. Also added support for fixed highres_fix generation. 2022-10-19 15:09:43 -07:00
random_thoughtss
dde9f96072 added support for ddim img2img 2022-10-19 14:14:24 -07:00
random_thoughtss
0719c10bf1 Fixed copying mistake 2022-10-19 13:56:26 -07:00
random_thoughtss
8e7097d06a Added support for RunwayML inpainting model 2022-10-19 13:47:45 -07:00
captin411
41e3877be2
fix entropy point calculation 2022-10-19 13:44:59 -07:00
DepFA
4d663055de
update ui with extra training options 2022-10-19 20:33:18 +01:00
Alexandre Simard
4fbdbddc18
Remove pad spaces from progress bar text 2022-10-19 15:21:36 -04:00
DepFA
eb7ba4b713
update training header text 2022-10-19 19:50:46 +01:00
ふぁ
604620a7f0 Add xformers message. 2022-10-19 21:31:16 +03:00
Mackerel
b748b583c0 generation_parameters_copypaste.py: fix indent 2022-10-19 21:30:32 +03:00
Leo Mozoloa
5d9e3acd4e Fixed additional typo, sorry 2022-10-19 21:30:02 +03:00
Leo Mozoloa
a0e50d5e70 Improved the OS/Platforms field 2022-10-19 21:30:02 +03:00
Leo Mozoloa
3e2a035ffa Removed obsolete legacy Hlky description 2022-10-19 21:30:02 +03:00
Leo Mozoloa
5292d1f092 Formatting the top description 2022-10-19 21:30:02 +03:00
Leo Mozoloa
62a1a97fe3 Fixed labels and created a brand new Feature Request yaml 2022-10-19 21:30:02 +03:00
Leo Mozoloa
8400e85474 Adding a confirmation checkbox that the user has checked the issues & commits before
Also small fixes
2022-10-19 21:30:02 +03:00
Leo Mozoloa
ca30e67289 removing the required tag as it obviously doesn't work, adding a top description 2022-10-19 21:30:02 +03:00
Leo Mozoloa
03cf7cf327 Fixes and trying to make dropdown required 2022-10-19 21:30:02 +03:00
Leo Mozoloa
45f188e0d3 fixing linebreak issue 2022-10-19 21:30:02 +03:00
Leo Mozoloa
dd66530a63 Fixes and adding step by step 2022-10-19 21:30:02 +03:00
Leo Mozoloa
d0042587ad Cleaning & improvements 2022-10-19 21:30:02 +03:00
Leo Mozoloa
57c48093a9 Delete .vscode directory 2022-10-19 21:30:02 +03:00
Leo Mozoloa
fd1008f1e0 Better Bug report form 2022-10-19 21:30:02 +03:00
Greg Fuller
13ed73beda Update Learning Rate tooltip 2022-10-19 21:29:32 +03:00
Alexandre Simard
14c1c2b935 Show PB texts at same time and earlier
For big tasks (1000+ steps), waiting a minute just to see the ETA is too long, so the number of steps already done now also plays a role in deciding when to show the text.
2022-10-19 13:53:52 -04:00
Vladimir Repin
46122c4ff6 Send empty prompts as valid generation parameter 2022-10-19 20:31:16 +03:00
timntorres
5e012e4dfa Infotext saves more specific hypernet name. 2022-10-19 20:20:25 +03:00
Alexandre Simard
1e4809b251 Added a bit of padding to the left 2022-10-19 20:06:41 +03:00
Alexandre Simard
57eb1a64c8 Update ui.py 2022-10-19 20:06:41 +03:00
discus0434
634acdd954
Merge branch 'AUTOMATIC1111:master' into master 2022-10-20 01:37:33 +09:00
discus0434
2ce52d32e4 fix for #3086 failing to load any previous hypernet 2022-10-19 16:31:12 +00:00
AUTOMATIC
c6e9fed500 fix for #3086 failing to load any previous hypernet 2022-10-19 19:21:16 +03:00
DepFA
019a3a88f0
Update ui.py 2022-10-19 17:15:47 +01:00
AUTOMATIC1111
c664b231a8
Merge pull request #3162 from discus0434/master
Added what I forgot to commit in already merged PR: #3086
2022-10-19 18:58:49 +03:00
discus0434
365d4b1650
Merge branch 'AUTOMATIC1111:master' into master 2022-10-20 00:48:13 +09:00
AUTOMATIC1111
f510a2277e
Merge pull request #3086 from discus0434/master
Add settings for multi-layer structure hypernetworks
2022-10-19 18:40:53 +03:00
discus0434
3770b8d2fa allow users to write the hypernetwork layer structure themselves 2022-10-19 15:28:42 +00:00
discus0434
42fbda83bb layer options moved into the create-hypernetwork UI 2022-10-19 14:30:33 +00:00
captin411
087609ee18
UI changes for focal point image cropping 2022-10-19 03:19:35 -07:00
captin411
abeec4b630
Add auto focal point cropping to Preprocess images
This algorithm plots a bunch of points of interest on the source
image and averages their locations to find a center.

Most points come from OpenCV. One point comes from an
entropy model. OpenCV points account for 50% of the weight and the
entropy based point is the other 50%.

The center of all weighted points is calculated and a bounding box
is drawn as close to centered over that point as possible.
2022-10-19 03:18:26 -07:00
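A rough sketch of the weighting this message describes (OpenCV points at 50%, the entropy-based point at 50%); all names are illustrative and the entropy point is left as an input:

```python
import cv2
import numpy as np

def focal_point(gray, entropy_point, w_cv=0.5, w_entropy=0.5):
    # gray: 8-bit single-channel image. Collect corner-like points of
    # interest from OpenCV and average them into one candidate center.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.04, minDistance=7)
    if corners is None:
        return entropy_point
    cv_center = corners.reshape(-1, 2).mean(axis=0)
    # Weight the OpenCV center 50/50 against the entropy-based point;
    # the crop's bounding box is then drawn as centered over this
    # point as possible.
    fx, fy = w_cv * cv_center + w_entropy * np.asarray(entropy_point, float)
    return int(fx), int(fy)
```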
AUTOMATIC
f894dd552f fix for broken checkpoint merger 2022-10-19 12:45:42 +03:00
AUTOMATIC
9931c0bd48 remove the unneeded line break introduced by #3092 2022-10-19 12:45:42 +03:00
Ikko Ashimine
bb0e7232b3 Fix typo in prompt_parser.py
assoicated -> associated
2022-10-19 11:52:12 +03:00
realryo1
83a517eb96 Fixed performance, vram style disorder 2022-10-19 11:50:25 +03:00
MalumaDev
2362d5f00e
Merge branch 'master' into test_resolve_conflicts 2022-10-19 10:22:39 +02:00
AUTOMATIC1111
1b91cbbc11
Merge pull request #2835 from zhengxiaoyao0716/hot-reload-javascript
hot-reload javascript files
2022-10-19 09:43:59 +03:00
AUTOMATIC1111
05315d8a23
Merge branch 'master' into hot-reload-javascript 2022-10-19 09:43:49 +03:00
Anastasius
1d4aa376e6 Predictable long operation check for time estimation 2022-10-19 09:39:28 +03:00
Anastasius
442dbedc15 Estimated time displayed if jobs take more 60 sec 2022-10-19 09:39:28 +03:00
Anastasius
bcfbb33e50 Added time left estimation 2022-10-19 09:39:28 +03:00
Cheka
2fd7935ef4 Remove wrong self reference in CUDA support for invokeai 2022-10-19 09:35:53 +03:00
discus0434
7f8670c4ef
Merge branch 'master' into master 2022-10-19 15:18:45 +09:00
Silent
da72becb13 Use training width/height when training hypernetworks. 2022-10-19 09:13:28 +03:00
discus0434
5d16f59794
Merge branch 'master' into master 2022-10-19 14:56:27 +09:00
AUTOMATIC
5daf9cbb98 Merge remote-tracking branch 'origin/api' 2022-10-19 08:44:51 +03:00
AUTOMATIC
10aca1ca3e more careful loading of model weights (eliminates some issues with checkpoints that have weird cond_stage_model layer names) 2022-10-19 08:42:22 +03:00
arcticfaded
0f0d6ab8e0 call sampler by name 2022-10-19 05:19:01 +00:00
yfszzx
538bc89c26 Image browser improved 2022-10-19 11:27:51 +08:00
arcticfaded
e7f4808505 provide sampler by name 2022-10-18 19:04:56 +00:00
discus0434
e40ba281f1 update 2022-10-19 01:03:58 +09:00
discus0434
7f2095c6c8 update 2022-10-19 01:01:22 +09:00
discus0434
a5611ea502 update 2022-10-19 01:00:01 +09:00
discus0434
6021f7a75f add options to customize hypernetwork layer structure 2022-10-19 00:51:36 +09:00
MalumaDev
c2765c9bcd
Merge branch 'master' into test_resolve_conflicts 2022-10-18 17:27:30 +02:00
supersteve3d
c1093b8051 Update artists.csv 2022-10-18 18:11:20 +03:00
supersteve3d
b76c9ded45 Update artists.csv 2022-10-18 18:11:20 +03:00
DepFA
82589c2d5e add windows equivalent 2022-10-18 17:24:21 +03:00
Matthew Clark
bdf1a8903a Pass arguments from bash to python 2022-10-18 17:24:21 +03:00
AUTOMATIC
cbf15edbf9 remove dependence on TQDM for sampler progress/interrupt functionality 2022-10-18 17:23:38 +03:00
yfszzx
b7e78ef692 Image browser improve 2022-10-18 22:21:54 +08:00
Dynamic
4f4e7fed7e
update ko-KR.json 2022-10-18 22:12:41 +09:00
AUTOMATIC
ec1924ee57 additional fix for difference model merging 2022-10-18 16:05:52 +03:00
Dynamic
684a31c4da
update ko-KR.json
Translated all text on txt2img window, plus some extra
2022-10-18 21:50:34 +09:00
AUTOMATIC
e20b7e30fe fix for add difference model merging 2022-10-18 15:33:32 +03:00
w-e-w
2f448d97a9 styles.csv encoding utf8 to utf-8-sig
utf-8 BOM for better compatibility with some programs
2022-10-18 15:18:51 +03:00
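For reference, the change amounts to the encoding name passed to open(); the column names below are illustrative:

```python
import csv

# "utf-8-sig" writes a BOM at the start of the file, which helps
# programs like Excel detect the encoding; Python transparently
# strips the BOM again when reading with the same encoding.
with open("styles.csv", "w", encoding="utf-8-sig", newline="") as file:
    csv.writer(file).writerow(["name", "prompt", "negative_prompt"])
```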
AUTOMATIC
433a7525c1 remove shared option for update check (because it is not an argument of webui)
have launch.py examine both COMMANDLINE_ARGS as well as argv for its arguments
2022-10-18 15:18:02 +03:00
yfszzx
c6f778d9b1 Image browser 2022-10-18 20:15:08 +08:00
yfszzx
eb299527b1 Image browser 2022-10-18 20:14:11 +08:00
DepFA
4c605c5174 add shared option for update check 2022-10-18 15:10:09 +03:00
DepFA
e511b867a9 Make update check commandline option, give response on all paths. 2022-10-18 15:10:09 +03:00
DepFA
a647cbc618 move update check to after dep installation 2022-10-18 15:10:09 +03:00
DepFA
68e83f40bf add update warning to launch.py 2022-10-18 15:10:09 +03:00
ふぁ
02622b1919 update scripts.py 2022-10-18 15:08:23 +03:00
ふぁ
3003438088 Add visible for dropdown 2022-10-18 15:08:23 +03:00
ふぁ
de29ec0743 Remove exception handling 2022-10-18 15:08:23 +03:00
ふぁ
97d3ba3941 Add scripts to ui-config,json 2022-10-18 15:08:23 +03:00
camenduru
428080d469 Remove duplicate artists. 2022-10-18 14:39:27 +03:00
Justin Maier
7543787a0a Auto select attention block for editing 2022-10-18 14:24:01 +03:00
AUTOMATIC
d2f459c5cf clarify the comment for the new option from #2959 and move it to UI section. 2022-10-18 14:22:52 +03:00
trufty
8b02662215 Disable auto weights swap with config option 2022-10-18 14:19:06 +03:00
AUTOMATIC1111
cd9c6e0edf
Merge pull request #2984 from guaneec/D
Don't eat colons in booru tags
2022-10-18 14:18:05 +03:00
Dynamic
0530f07da3
Move ko-KR.json 2022-10-18 20:12:54 +09:00
Dynamic
50e34cf194
Update ko-KR.json 2022-10-18 20:11:17 +09:00
AUTOMATIC1111
c07dbd4cf9
Merge pull request #2939 from Michoko92/dark-mode
Added dark mode switch
2022-10-18 14:04:26 +03:00
AUTOMATIC1111
f6c758d055
Merge branch 'master' into dark-mode 2022-10-18 14:04:17 +03:00
Dynamic
7651b84968
Initial KR support - WIP
Localization WIP
2022-10-18 19:07:17 +09:00
ClashSAN
ca023f8a45
Update README.md 2022-10-18 08:57:05 +00:00
C43H66N12O12S2
c71008c741 Update sd_hijack_optimizations.py 2022-10-18 11:53:04 +03:00
C43H66N12O12S2
73b5dbf72a Update sd_hijack.py 2022-10-18 11:53:04 +03:00
C43H66N12O12S2
84823275e8 readd xformers attnblock 2022-10-18 11:53:04 +03:00
C43H66N12O12S2
2043c4a231 delete xformers attnblock 2022-10-18 11:53:04 +03:00
C43H66N12O12S2
786ed49922 use legacy attnblock 2022-10-18 11:53:04 +03:00
MalumaDev
1997ccff13
Merge branch 'master' into test_resolve_conflicts 2022-10-18 08:55:08 +02:00
arcticfaded
8d5d863a9d gradio and FastAPI 2022-10-18 06:51:53 +00:00
Mykeehu
7432b6f4d2 Fix typo "celem_id" to "elem_id" 2022-10-18 08:59:14 +03:00
Ryan Voots
1df3ff25e6 Add --nowebui as a means of disabling the webui and run on the other port 2022-10-18 08:44:50 +03:00
Ryan Voots
247aeb3aaa Put API under /sdapi/ so that routing is simpler in the future. This means that one could allow access to /sdapi/ but not the webui. 2022-10-18 08:44:50 +03:00
Ryan Voots
c3851a853d Re-use webui fastapi application rather than requiring one or the other, not both. 2022-10-18 08:44:50 +03:00
DepFA
d3338bdef1 extras extend cache key with new upscale to options 2022-10-18 08:29:52 +03:00
Adam Snodgrass
43cb1ddad2 prevent highlighting/selecting image 2022-10-18 07:58:51 +03:00
Jordan Hall
ab3f997c0c Fix typo in 'choices' when loading upscaler 2 config 2022-10-18 00:27:16 +03:00
arcticfaded
f29b16bad1 prevent API from saving 2022-10-17 20:36:14 +00:00
guaneec
2e28c841f4
Oops 2022-10-18 03:15:41 +08:00
arcticfaded
f80e914ac4 example API working with gradio 2022-10-17 19:10:36 +00:00
guaneec
d62ef76614
Don't eat colons in booru tags 2022-10-18 03:09:50 +08:00
AUTOMATIC
cf47d13c1e localization support 2022-10-17 21:15:32 +03:00
AUTOMATIC
695377a8b9 make modelmerger work with ui-config.json 2022-10-17 19:56:23 +03:00
Michoko
665beebc08 Use of a --theme argument for more flexibility
Added possibility to set the theme (light or dark)
2022-10-17 18:24:24 +02:00
yfszzx
2b5b62e768 fix two bugs 2022-10-17 23:14:03 +08:00
yfszzx
2272cf2f35 fix two bugs 2022-10-17 23:04:42 +08:00
yfszzx
de179cf8fd fix two bugs 2022-10-17 22:38:46 +08:00
yfszzx
c408a0b41c fix two bugs 2022-10-17 22:28:43 +08:00
AUTOMATIC
af3f6489d3 possibly fix the prompt losing focus when generating images with the gallery open 2022-10-17 16:57:19 +03:00
Michoko
8c6a981d5d Added dark mode switch
Launch the UI in dark mode with the --dark-mode switch
2022-10-17 11:05:05 +02:00
AUTOMATIC
d42125baf6 add missing requirement for api and fix some typos 2022-10-17 11:50:20 +03:00
AUTOMATIC
964b63c042 add api() function to return webui() to how it was 2022-10-17 11:38:32 +03:00
Jonathan
71d42bb44b Update api.py 2022-10-17 11:34:22 +03:00
Jonathan
99013ba68a Update processing.py 2022-10-17 11:34:22 +03:00
Jonathan
832b490e51 Update processing.py 2022-10-17 11:34:22 +03:00
Jonathan
f3fe487e63 Update webui.py 2022-10-17 11:34:22 +03:00
arcticfaded
9e02812afd pydantic instrumentation 2022-10-17 11:34:22 +03:00
arcticfaded
60251c9456 initial prototype by borrowing contracts 2022-10-17 11:34:22 +03:00
yfszzx
9d702b16f0 fix two little bugs 2022-10-17 16:11:03 +08:00
yfszzx
2a3e7ed872 Merge branch 'master' of https://github.com/yfszzx/stable-diffusion-webui-plus 2022-10-17 15:23:32 +08:00
yfszzx
5b1394bead speed-up of images history perfected 2022-10-17 15:20:16 +08:00
Greg Fuller
cccc5a20fc Safeguard setting restore logic against exceptions
also useful for keeping settings cache and restore logic together, and nice for code reuse (other third party scripts can import this class)
2022-10-17 08:43:41 +03:00
DepFA
62edfae257 print list of embeddings on reload 2022-10-17 08:42:17 +03:00
AUTOMATIC
b99d3cf6dd make CLIP interrogate ranks output sane values 2022-10-17 08:41:02 +03:00
AUTOMATIC
5c94aaf290 fix bug for latest model merge RAM improvement 2022-10-17 08:28:18 +03:00
DenkingOfficial
58f3ef7733 Fix CLIP Interrogator and disable ranks for it 2022-10-17 08:01:59 +03:00
DancingSnow
8aaadf56b3 add cache for workflow 2022-10-17 07:57:17 +03:00
AUTOMATIC
6f7b7a3dcd only read files with .py extension from the scripts dir 2022-10-17 07:56:23 +03:00
MrCheeze
0fd1307671 improve performance of 3-way merge on machines with not enough ram, by only accessing two of the models at a time 2022-10-17 07:54:36 +03:00
fortypercnt
a1d3cbf92c Fix #2750
left / top alignment was necessary with gradio 3.4.1. In gradio 3.5 the parent div of the image mask is centered, so the left / top alignment put the mask in the wrong place as described in #2750 #2795 #2805. This fix was tested on Windows 10 / Chrome.
2022-10-17 07:48:28 +03:00
MalumaDev
589215df22
Merge branch 'master' into test_resolve_conflicts 2022-10-16 21:06:21 +02:00
SGKoishi
c8045c5ad4 The hide_ui_dir_config flag also restricts write attempts to path settings 2022-10-16 20:59:06 +03:00
dvsilch
26a11776e4 fix: add null check; when the project starts running, currentButton is null 2022-10-16 20:58:36 +03:00
MalumaDev
ae0fdad64a
Merge branch 'master' into test_resolve_conflicts 2022-10-16 17:55:58 +02:00
MalumaDev
9324cdaa31 ui fix, reorganization of the code 2022-10-16 17:53:56 +02:00
yfszzx
a4de699e3c Images history speed up 2022-10-16 22:37:12 +08:00
AUTOMATIC
c57919ea2a keep focus on current element when updating gallery 2022-10-16 17:22:56 +03:00
DancingSnow
fc220a51cf fix dir_path for paths like D:/Pic/outputs 2022-10-16 16:40:04 +03:00
CookieHCl
adc0ea74e1 Better readability of logs 2022-10-16 16:36:06 +03:00
CookieHCl
c9836279f5 Only make output dir when creating output 2022-10-16 16:36:06 +03:00
CookieHCl
91235d8008 Fix FileNotFoundError in history tab
Now only traverse images when directory exists
2022-10-16 16:36:06 +03:00
yfszzx
f62905fdf9 images history speed up 2022-10-16 21:22:38 +08:00
不会画画的中医不是好程序员
272d979d1c
Merge branch 'AUTOMATIC1111:master' into master 2022-10-16 21:16:08 +08:00
MalumaDev
e4f8b5f00d ui fix 2022-10-16 10:28:21 +02:00
MalumaDev
523140d780 ui fix 2022-10-16 10:23:30 +02:00
Junpeng Qiu
36a0ba357a Added Refresh Button to embedding and hypernetwork names in Train Tab
Problem
every time I modified .pt files in embedding_dir or hypernetwork_dir, I
needed to restart webui to have the new files shown in the dropdowns of
the Train Tab

Solution
refactored create_refresh_button out of create_setting_component so we
can use this method to create a button next to the gr.Dropdowns for
embedding names and hypernetworks

Extra Modification
hypernetwork .pt files are now sorted in alphabetical order
2022-10-16 10:51:06 +03:00
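A sketch of what such a factored-out helper could look like in Gradio; the exact signature in the repo may differ:

```python
import gradio as gr

def create_refresh_button(dropdown, refresh_method, refreshed_args_fn):
    # On click, rescan (e.g. re-list .pt files) and push the fresh
    # choice list into the neighboring dropdown without a restart.
    def refresh():
        refresh_method()
        return gr.update(**refreshed_args_fn())  # e.g. {"choices": names}

    button = gr.Button("Refresh")
    button.click(fn=refresh, inputs=[], outputs=[dropdown])
    return button
```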
AUTOMATIC
bd4f0fb9d9 revert changes to two bat files that I asked to have reverted in 863e9efc19 but the author couldn't. 2022-10-16 10:14:27 +03:00
Zeithrold
863e9efc19
Pull some URLs out to env variables (#2578)
* moved repository url to changeable environment variable

* move stable diffusion repo itself to env

* added missing env

* Remove default URL

Co-authored-by: AUTOMATIC1111 <16777216c@gmail.com>
2022-10-16 10:13:18 +03:00
CookieHCl
9258a33e37 Warn when user uses bad ui setting 2022-10-16 10:04:14 +03:00
CookieHCl
b65a3101ce Use default value when dropdown ui setting is bad
The default value is the first of the selectable values.
In particular, None for styles.
2022-10-16 10:04:14 +03:00
CookieHCl
20bf99052a Make style configurable in ui-config.json 2022-10-16 10:04:14 +03:00
ddPn08
3395ba493f Allow specifying the region of ngrok. 2022-10-16 09:56:33 +03:00
AUTOMATIC
179e3ca752 honor --hide-ui-dir-config option for #2807 2022-10-16 09:51:01 +03:00
winterspringsummer
2ce27728f6 added extras batch work from directory 2022-10-16 09:47:31 +03:00
AUTOMATIC
0c5fa9a681 do not reload embeddings from disk when doing textual inversion 2022-10-16 09:09:04 +03:00
yfszzx
5d8c59eee5 Merge branch 'master' of https://github.com/yfszzx/stable-diffusion-webui-plus 2022-10-16 12:34:05 +08:00
不会画画的中医不是好程序员
d41ac174e2
Merge branch 'AUTOMATIC1111:master' into master 2022-10-16 10:04:05 +08:00
yfszzx
763b893f31 images history sorting files by date 2022-10-16 10:03:09 +08:00
MalumaDev
b694bba39a Merge remote-tracking branch 'origin/test_resolve_conflicts' into test_resolve_conflicts 2022-10-16 00:24:05 +02:00
MalumaDev
9325c85f78 fixed dropbox update 2022-10-16 00:23:47 +02:00
MalumaDev
97ceaa23d0
Merge branch 'master' into test_resolve_conflicts 2022-10-16 00:06:36 +02:00
MalumaDev
3d21684ee3 Add support to other img format, fixed dropbox update 2022-10-16 00:01:00 +02:00
zhengxiaoyao0716
9a33292ce4 reload javascript files when custom script bodies 2022-10-16 01:41:37 +08:00
C43H66N12O12S2
be1596ce30 fix typo 2022-10-15 20:25:27 +03:00
C43H66N12O12S2
8fb0b99152 Update launch.py 2022-10-15 20:25:27 +03:00
C43H66N12O12S2
529afbf4d7 Update sd_hijack.py 2022-10-15 20:25:27 +03:00
C43H66N12O12S2
09814e3cf3 Update launch.py 2022-10-15 20:25:27 +03:00
AUTOMATIC
74a9ee7002 fix saving images compatibility with gradio update 2022-10-15 20:09:45 +03:00
MalumaDev
3f5c3b981e
Update modules/ui.py
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:41:46 +02:00
MalumaDev
ad9bc604a8
Update modules/ui.py
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:41:18 +02:00
MalumaDev
0d4f5db235
Update modules/ui.py
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:40:58 +02:00
MalumaDev
9b7705e057
Update modules/aesthetic_clip.py
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:40:34 +02:00
MalumaDev
f7df06a981
Update README.md
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:40:06 +02:00
MalumaDev
4387e4fe64
Update modules/ui.py
Co-authored-by: Víctor Gallego <vicgalle@ucm.es>
2022-10-15 18:39:29 +02:00
yfszzx
6e4f5566b5 sorting files 2022-10-15 23:53:49 +08:00
guaneec
606519813d Prevent modal content from being selected 2022-10-15 17:24:02 +03:00
DepFA
b6e3b96dab Change vector size footer label 2022-10-15 17:23:39 +03:00
DepFA
ddf6899df0 generalise to popular lossless formats 2022-10-15 17:23:39 +03:00
DepFA
9a1dcd78ed add webp for embed load 2022-10-15 17:23:39 +03:00
DepFA
939f16529a only save 1 image per embedding 2022-10-15 17:23:39 +03:00
DepFA
9e846083b7 add vector size to embed text 2022-10-15 17:23:39 +03:00
MalumaDev
7b7561f6e4
Merge branch 'master' into test_resolve_conflicts 2022-10-15 16:20:17 +02:00
AngelBottomless
703e6d9e4e check NaN for hypernetwork tuning 2022-10-15 17:15:26 +03:00
ruocaled
5fd638f14d fix download section layout 2022-10-15 17:14:58 +03:00
MalumaDev
37d7ffb415 fix for token length, added embedding generator, add new features to edit the embedding before generation using text 2022-10-15 15:59:37 +02:00
Robert Smieja
d3ffc962dd Add basic Pylint to catch syntax errors on PRs 2022-10-15 16:26:07 +03:00
NO_ob
eef3bc6490 typo 2022-10-15 16:13:13 +03:00
AUTOMATIC
73901c3f01 make attention edit only work with ctrl as was initially intended 2022-10-15 15:51:57 +03:00
AUTOMATIC
97f0727489 always add First pass size, regardless of whether it was auto-chosen or specified 2022-10-15 15:47:02 +03:00
AUTOMATIC
20a1f68c75 fix gradio issue with sending files between tabs 2022-10-15 15:44:46 +03:00
AUTOMATIC
d3463bc59a change styling for top right corner UI
made save style button not die when you cancel
2022-10-15 14:22:30 +03:00
AUTOMATIC
f7ca63937a bring back scale latent option in settings 2022-10-15 13:23:12 +03:00
AUTOMATIC
5967d07d1a fix new gradio failing to preserve image params 2022-10-15 13:11:28 +03:00
AUTOMATIC
3631adfe96 make dragging to prompt work again 2022-10-15 12:58:53 +03:00
AUTOMATIC
e8729dd051 re-apply height hacks to work with new gradio 2022-10-15 12:54:23 +03:00
AUTOMATIC
4ed99d5996 bump gradio to 3.5 2022-10-15 12:10:52 +03:00
AUTOMATIC
7d6042b908 update commandline args for batch prompts to parse the string properly 2022-10-15 12:00:31 +03:00
AUTOMATIC1111
58e62312c3
Merge pull request #1446 from shirase-0/master
Tag/Option Parsing in Prompts From File
2022-10-15 10:47:50 +03:00
AUTOMATIC1111
f42e0aae6d
Merge branch 'master' into master 2022-10-15 10:47:26 +03:00
AUTOMATIC1111
d13ce89e20
Merge pull request #2573 from raefu/ckpt-cache
add --ckpt-cache option for faster model switching
2022-10-15 10:35:26 +03:00
AUTOMATIC1111
af144ebdc7
Merge branch 'master' into ckpt-cache 2022-10-15 10:35:18 +03:00
Daniel M
6a4e846710 Fix prerequisites check in webui.sh
- Check the actually used `$python_cmd` and `$GIT` executables instead
  of the hardcoded ones
- Fix typo in comment
2022-10-15 10:29:41 +03:00
AUTOMATIC
f756bc540a fix #2588 breaking launch.py (. . .) 2022-10-15 10:28:26 +03:00
CookieHCl
c24df4b486 Disable compiling deepbooru model
This is only necessary when you have to train,
and compiling the model produces a warning.
2022-10-15 10:21:22 +03:00
AUTOMATIC1111
9563636489
Merge pull request #2663 from space-nuko/fix-xyplot-steps
Fix XY-plot steps if highres fix is enabled
2022-10-15 10:20:29 +03:00
AUTOMATIC1111
5207cba56f
Merge pull request #2661 from Melanpan/master
Raise an assertion error if no training images have been found.
2022-10-15 10:13:23 +03:00
AUTOMATIC1111
ea8aa1701a
Merge branch 'master' into master 2022-10-15 10:13:16 +03:00
githublsx
a13af34b90 Set to -1 when seed input is none 2022-10-15 10:12:16 +03:00
Cassy-Lee
7855993bef Move index_url args into run_pip. 2022-10-15 10:10:22 +03:00
Cassy-Lee
77bf3525f8 Update launch.py
Allow change set --index-url for pip.
2022-10-15 10:10:22 +03:00
ddPn08
0da6c18099 use "outdir_samples" if specified 2022-10-15 10:07:45 +03:00
ddPn08
cd28465bf8 do not force relative paths in image history 2022-10-15 10:07:45 +03:00
aoirusann
db27b987a9 Add hint for ctrl/alt enter
And duplicate implementations are removed
2022-10-15 09:59:40 +03:00
ruocaled
661a61985c remove extra 100ms timeout 2022-10-15 09:32:01 +03:00
ruocaled
c7cd2fda5a re-attach full screen zoom listeners 2022-10-15 09:32:01 +03:00
ruocaled
b26efff8c4 allow re-open for multiple images gallery 2022-10-15 09:32:01 +03:00
ruocaled
c84eef8195 fix observer disconnect logic 2022-10-15 09:32:01 +03:00
ruocaled
6b5c54c187 remove console.log 2022-10-15 09:32:01 +03:00
ruocaled
3bd40bb77f auto re-open selected image after re-generation
attach an observer to the gallery while generation is in progress; if an image was selected in the gallery and the gallery has only 1 image, auto re-select/open that image.

This matches the behavior prior to the Gradio 3.4.1 version bump, a quality-of-life feature many people enjoyed.
2022-10-15 09:32:01 +03:00
AUTOMATIC
c7a86f7fe9 add option to use batch size for training 2022-10-15 09:24:59 +03:00
AUTOMATIC
acedbe67d2 bring history tab back, make it behave; it's still slow but won't fuck anything up until you use it 2022-10-15 00:43:15 +03:00
space-nuko
a8f7722e4e Fix XY-plot steps if highres fix is enabled 2022-10-14 14:26:38 -07:00
AUTOMATIC
4bbe5d62e0 reformat lines in images_history.py 2022-10-15 00:25:09 +03:00
AUTOMATIC
4dc4265099 rename firstpass w/h to discard old user settings 2022-10-15 00:21:48 +03:00
Melan
4d19f3b7d4 Raise an assertion error if no training images have been found. 2022-10-14 22:45:26 +02:00
AUTOMATIC
368f4cc4c7 set firstpass w/h to 0 by default and revert to old behavior when any are 0 2022-10-14 23:19:05 +03:00
AUTOMATIC
cd58e44051 disabling history - i knew it was slow as fuck but i didn't realize it would also show galleries on launch 2022-10-14 23:17:28 +03:00
Rae Fu
e21f01f645 add checkpoint cache option to UI for faster model switching
switching time reduced from ~1500ms to ~280ms
2022-10-14 14:09:23 -06:00
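A minimal sketch of such a cache; the structure and eviction policy here are assumptions, not the PR's exact code:

```python
import collections

checkpoints_loaded = collections.OrderedDict()  # filename -> state_dict

def get_state_dict(filename, load_fn, cache_size=1):
    # Keep recently used checkpoints' state dicts in RAM so switching
    # back skips the full reload from disk (~1500ms down to ~280ms).
    if filename in checkpoints_loaded:
        checkpoints_loaded.move_to_end(filename)
        return checkpoints_loaded[filename]
    state_dict = load_fn(filename)
    checkpoints_loaded[filename] = state_dict
    while len(checkpoints_loaded) > cache_size:
        checkpoints_loaded.popitem(last=False)  # drop least recently used
    return state_dict
```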
AUTOMATIC
03d62538ae remove duplicate code for log loss, add step, make it read from options rather than gradio input 2022-10-14 22:43:55 +03:00
AUTOMATIC
326fe7d44b Merge remote-tracking branch 'Melanpan/master' 2022-10-14 22:14:50 +03:00
AUTOMATIC
989a552de3 remove the other Denoising 2022-10-14 22:04:08 +03:00
Naeaeaeaeae
4cc37e4cdf [xy_grid.py] add option denoising_strength 2022-10-14 22:03:33 +03:00
AUTOMATIC
c250cb289c change checkpoint merger to work in a more obvious way
remove sigmoid and inverse sigmoid because they just did the same thing as weighted sum, only with a changed multiplier
2022-10-14 22:02:32 +03:00
RnDMonkey
02382f7ce4 regression in xy_grid Var. seed fixing 2022-10-14 22:02:21 +03:00
ChucklesTheBeard
9b75ab144f fix typo 2022-10-14 21:26:54 +03:00
AUTOMATIC
2f0e089c7c should fix the issue with missing layers in checkpoint merger 2022-10-14 21:20:28 +03:00
AUTOMATIC
6cdf55627c restore borders for prompts 2022-10-14 21:12:52 +03:00
AUTOMATIC
c344ba3b32 add option to read generation params for learning previews from txt2img 2022-10-14 20:31:49 +03:00
AUTOMATIC
bb295f5478 rework the code for lowram a bit 2022-10-14 20:03:41 +03:00
Ljzd-PRO
4a216ded43 load models to VRAM when using --lowram param
load models to VRAM instead of RAM (for machines which have bigger VRAM than RAM, such as free Google Colab servers)
2022-10-14 19:57:23 +03:00
Ljzd-PRO
a8eeb2b7ad add --lowram parameter
load models to VRAM instead of RAM (for machines which have bigger VRAM than RAM, such as free Google Colab servers)
2022-10-14 19:57:23 +03:00
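The mechanism amounts to choosing torch.load's map_location; a sketch under that assumption, with an illustrative wrapper name:

```python
import torch

def load_model_weights(checkpoint_path, lowram=False):
    # With --lowram, deserialize tensors directly onto the GPU rather
    # than staging them in system RAM first (useful when VRAM exceeds
    # RAM, e.g. on free Google Colab instances).
    map_location = "cuda" if lowram else "cpu"
    return torch.load(checkpoint_path, map_location=map_location)
```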
Gugubo
5f87dd1ee0 Add option to prevent empty spots in grid (2/2) 2022-10-14 19:54:24 +03:00
Gugubo
43f926aad1 Add option to prevent empty spots in grid (1/2) 2022-10-14 19:54:24 +03:00
Gugubo
2fb9891af3 Change grid row count autodetect to prevent empty spots
Instead of just rounding (sometimes resulting in grids with "empty" spots), find a divisor.
For example: 8 images will now result in a 4x2 grid instead of a 3x3 with one empty spot.
2022-10-14 19:54:24 +03:00
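The divisor logic can be sketched as picking the divisor of n closest to sqrt(n); illustrative, not the script's exact code:

```python
import math

def grid_rows(n):
    # Choose the divisor of n nearest sqrt(n) so rows * cols == n and
    # no cell is empty; e.g. 8 images -> 2 rows of 4 (a 4x2 grid).
    best = 1
    for rows in range(1, n + 1):
        if n % rows == 0 and abs(rows - math.sqrt(n)) < abs(best - math.sqrt(n)):
            best = rows
    return best
```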
AUTOMATIC
6c64279460 remove user's lines from .gitignore - those go into .git/info/exclude 2022-10-14 19:33:49 +03:00
AUTOMATIC1111
6b77af7a43
Merge pull request #2396 from yfszzx/master
Add a "History" tab
2022-10-14 19:32:19 +03:00
AUTOMATIC
67f447ddcc possibility to load checkpoint, clip skip, and hypernet from infotext 2022-10-14 19:30:28 +03:00
AUTOMATIC
0aec19d783 make pasting into img2img prompt work
make image params request not use temp files
2022-10-14 18:15:03 +03:00
AUTOMATIC
33ae6be55e fix paste not working in firefox
fix paste always going into txt2img field
2022-10-14 17:53:34 +03:00
AUTOMATIC
a156c097ab Merge branch 'param-loading' 2022-10-14 17:14:24 +03:00
AUTOMATIC
e644b5a80b remove scale latent and no-crop options from hires fix
support copy-pasting new parameters for hires fix
2022-10-14 17:03:03 +03:00
Buckzor
b382de2d77 Fixed Scale ratio problem 2022-10-14 16:47:16 +03:00
Buckzor
40d1c6e423 Option between stretch and crop for Highres. fix 2022-10-14 16:47:16 +03:00
Buckzor
b2261b53ae Added first_pass_width and height as adjustable inputs to "High Res Fix" 2022-10-14 16:47:16 +03:00
AUTOMATIC
9e5ca5077f extra message for unpicking fails 2022-10-14 16:37:36 +03:00
brkirch
fdef8253a4 Add 'interrogate' and 'all' choices to --use-cpu
* Add 'interrogate' and 'all' choices to --use-cpu
* Change type for --use-cpu argument to str.lower, so that choices are case insensitive
2022-10-14 16:31:39 +03:00
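The case-insensitivity trick is argparse's type= hook running before choices are validated; the choice list below is an illustrative subset:

```python
import argparse

parser = argparse.ArgumentParser()
# type=str.lower normalizes each value before argparse checks it
# against choices, making the flag effectively case insensitive.
parser.add_argument("--use-cpu", nargs="+", type=str.lower, default=[],
                    choices=["all", "sd", "interrogate", "gfpgan"])

args = parser.parse_args(["--use-cpu", "Interrogate", "ALL"])
assert args.use_cpu == ["interrogate", "all"]
```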
MalumaDev
bb57f30c2d init 2022-10-14 10:56:41 +02:00
不会画画的中医不是好程序员
f7712e28e5 Merge branch 'AUTOMATIC1111:master' into master 2022-10-14 14:43:44 +08:00
AUTOMATIC
fdecb63685 add an ability to merge three checkpoints 2022-10-14 09:20:24 +03:00
crackfoo
494afccbc1 Update hints.js
typo
2022-10-14 07:22:53 +03:00
yfszzx
d48f3470c8 Merge branch 'master' of https://github.com/yfszzx/stable-diffusion-webui-plus 2022-10-14 11:51:26 +08:00
yfszzx
4a37c7eede fix deep nesting directories problem 2022-10-14 11:48:28 +08:00
不会画画的中医不是好程序员
7c8903367c
Merge branch 'AUTOMATIC1111:master' into master 2022-10-14 07:35:07 +08:00
yfszzx
a1489f9428 images history fix all known bug 2022-10-14 07:13:38 +08:00
AUTOMATIC
08b3f7aef1 emergency fix for broken send to buttons 2022-10-13 20:42:27 +03:00
AUTOMATIC
354ef0da3b add hypernetwork multipliers 2022-10-13 20:12:37 +03:00
AUTOMATIC
a10b0e11fc options to refresh list of models and hypernetworks 2022-10-13 19:22:49 +03:00
Taithrah
dccc181b55 Update hints.js
typo
2022-10-13 18:03:56 +03:00
aoirusann
e548fc4aca [img2imgalt] Make sampler's override be optional 2022-10-13 18:03:17 +03:00
aoirusann
a4170875b0 [img2imgalt] Add override in UI for convenience.
Some params in img2imgalt are fixed,
such as `Sampling method` and `Denoising Strength`.
And some params should be matched with those in decode, such as `steps`.
2022-10-13 18:03:17 +03:00
Kalle
cf1e8fcb30 Correct img gen count in notification
Display correct count of images generated in browser notification regardless of "Show grid in results for web" setting.
2022-10-13 17:36:49 +03:00
AUTOMATIC
bb7baf6b9c add option to change what's shown in quicksettings bar 2022-10-13 16:22:25 +03:00
Melan
8636b50aea Add learn_rate to csv and removed a left-over debug statement 2022-10-13 12:37:58 +02:00
Greg Fuller
fed7f0e281 Revert "fix prompt in log.csv"
This reverts commit e4b5d16964.
2022-10-13 13:25:29 +03:00
Greg Fuller
a3f02e4690 fix prompt in log.csv 2022-10-13 13:25:29 +03:00
Greg Fuller
8711c2fe01 Fix metadata contents 2022-10-13 13:25:29 +03:00
Greg Fuller
aeacbac218 Fix save error 2022-10-13 13:25:29 +03:00
Greg Fuller
94c01aa356 draw_xy_grid provides the option to also return lone images 2022-10-13 13:25:29 +03:00
AUTOMATIC
fde7fefa2e update #2336 to prevent reading params.txt when --hide-ui-dir-config option is enabled (for servers, since this will let some users access others' params) 2022-10-13 12:26:34 +03:00
Trung Ngo
e72adc999b Restore last generation params 2022-10-13 12:21:20 +03:00
Greg Fuller
04c0e643f2 Merge branch 'master' of https://github.com/HunterVacui/stable-diffusion-webui 2022-10-13 08:21:01 +03:00
AUTOMATIC1111
4f73e057a9
Merge pull request #2324 from HunterVacui/interrogate_include_ranks_in_output
Interrogate: add option to include ranks in output
2022-10-13 08:05:41 +03:00
DepFA
490494320e add missing id property 2022-10-13 07:47:41 +03:00
AUTOMATIC
78592d404a remove interrogate option I accidentally deleted 2022-10-13 07:40:03 +03:00
不会画画的中医不是好程序员
0186db178e
Merge branch 'AUTOMATIC1111:master' into master 2022-10-13 12:35:39 +08:00
yfszzx
716a9e034f images history delete a number of images consecutively next 2022-10-13 12:19:50 +08:00
d8ahazard
54e0051bdd Add drag/drop param loading.
Drop an image or generation-parameters text onto the prompt bar and it loads the info for parsing.
2022-10-12 18:17:26 -05:00
Melan
1cfc2a1898 Save a csv containing the loss while training 2022-10-12 23:36:29 +02:00
Greg Fuller
514456101b [3/?] [wip] fix incorrect variable reference
still needs testing
2022-10-12 13:14:13 -07:00
Greg Fuller
f776254b12 [2/?] [wip] ignore OPT_INCLUDE_RANKS for training filenames 2022-10-12 13:12:18 -07:00
Greg Fuller
efefa4862c [1/?] [wip] Reintroduce opts.interrogate_return_ranks
looks functionally correct, needs testing

Needs particular testing care around whether the colon usage (:) will break anything in whatever new use cases were introduced by https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2143
2022-10-12 13:03:00 -07:00
Greg Fuller
fb3cefb348 Merge remote-tracking branch 'upstream/master' into interrogate_include_ranks_in_output 2022-10-12 12:44:41 -07:00
AUTOMATIC
698d303b04 deepbooru: added option to use spaces or underscores
deepbooru: added option to quote (\) in tags
deepbooru/BLIP: write caption to file instead of image filename
deepbooru/BLIP: now possible to use both for captions
deepbooru: process is stopped even if an exception occurs
2022-10-12 21:55:43 +03:00
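The two tag options amount to a small formatting step per tag; a sketch under that assumption, with an illustrative function name:

```python
import re

def format_deepbooru_tag(tag, use_spaces=True, escape=True):
    if use_spaces:
        tag = tag.replace("_", " ")  # "long_hair" -> "long hair"
    if escape:
        # Quote parentheses so they are not parsed as prompt
        # attention syntax, e.g. "(album)" -> "\(album\)".
        tag = re.sub(r"([()])", r"\\\1", tag)
    return tag
```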
AUTOMATIC
c3c8eef9fd train: change filename processing to be more simple and configurable
train: make it possible to make text files with prompts
train: rework scheduler so that there's less repeating code in textual inversion and hypernets
train: move epochs setting to options
2022-10-12 20:49:47 +03:00
yfszzx
df97947b21 Merge branch 'master' of https://github.com/yfszzx/stable-diffusion-webui-plus 2022-10-13 00:28:37 +08:00
yfszzx
717ba4c71c images history improvement 2022-10-13 00:27:45 +08:00
不会画画的中医不是好程序员
324e6ed5d1
Merge branch 'AUTOMATIC1111:master' into master 2022-10-13 00:21:57 +08:00
yfszzx
a2aa2a68bc images history improvement 2022-10-13 00:21:16 +08:00
yfszzx
a1a94b8b5f images history improvement 2022-10-13 00:19:34 +08:00
yfszzx
c87c3b9c11 test 2022-10-12 21:24:40 +08:00
AUTOMATIC1111
cc5803603b
Merge pull request #2037 from AUTOMATIC1111/embed-embeddings-in-images
Add option to store TI embeddings in png chunks, and load from same.
2022-10-12 15:59:24 +03:00
yfszzx
511ca57e37 Merge branch 'master' of https://github.com/yfszzx/stable-diffusion-webui-plus 2022-10-12 20:48:03 +08:00
yfszzx
e05573e1ad images history improvement 2022-10-12 20:47:55 +08:00
DepFA
10a2de644f
formatting 2022-10-12 13:15:35 +01:00
DepFA
50be33e953
formatting 2022-10-12 13:13:25 +01:00
AUTOMATIC
429442f4a6 fix iterator bug for #2295 2022-10-12 13:38:03 +03:00
Kalle
8561d5762b Remove duplicate artist from file 2022-10-12 12:50:37 +03:00
hentailord85ez
80f3cf2bb2 Account when lines are mismatched 2022-10-12 11:38:41 +03:00
AUTOMATIC
ee015a1af6 change textual inversion tab to train
remake train interface to use tabs
2022-10-12 11:05:57 +03:00
Milly
2fffd4bddc xy_grid: Refactor confirm functions 2022-10-12 10:40:10 +03:00
Milly
7dba1c07cb xy_grid: Confirm that hypernetwork options are valid before starting 2022-10-12 10:40:10 +03:00
Milly
2d006ce16c xy_grid: Find hypernetwork by closest name 2022-10-12 10:40:10 +03:00
AUTOMATIC1111
4aeacaefbf
Merge pull request #2110 from JustMaier/feature/scale_to
Add "Scale to" option to Extras
2022-10-12 10:36:21 +03:00
AUTOMATIC1111
dc1432e0dd
Merge branch 'master' into feature/scale_to 2022-10-12 10:35:42 +03:00
LunixWasTaken
ca5efc316b Typo fix in watermark hint. 2022-10-12 10:11:06 +03:00
aoirusann
f421f2af2d [img2imgalt] Fix seed & Allow batch. 2022-10-12 10:03:46 +03:00
brkirch
57e03cdd24 Ensure the directory exists before saving to it
The directory for the images saved with the Save button may still not exist, so it needs to be created prior to opening the log.csv file.
2022-10-12 09:55:56 +03:00
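The fix boils down to creating the directory before opening the file; a sketch with an illustrative directory argument:

```python
import os

def append_log_row(save_dir, row_text):
    # The Save button's target directory may not exist yet, so create
    # it (idempotently) before opening log.csv for append.
    os.makedirs(save_dir, exist_ok=True)
    with open(os.path.join(save_dir, "log.csv"), "a", encoding="utf8", newline="") as file:
        file.write(row_text + "\n")
```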
AUTOMATIC
8aead63f1a emergency fix 2022-10-12 09:32:14 +03:00
James Noeckel
7edd58d90d update environment-wsl2.yaml 2022-10-12 09:08:44 +03:00
AUTOMATIC1111
c15c3b08df
Merge pull request #2312 from AUTOMATIC1111/edit-attention-nan-fix
edit attention key handler: return early when weight parse returns NaN
2022-10-12 09:07:49 +03:00
AUTOMATIC
fd07b103ae prevent SD model from loading when running in deepdanbooru process 2022-10-12 09:00:39 +03:00
AUTOMATIC
336bd8703c just add the deepdanbooru settings unconditionally 2022-10-12 09:00:07 +03:00
AUTOMATIC
ee10c41e2a Merge remote-tracking branch 'origin/steve3d' 2022-10-12 08:35:52 +03:00
AUTOMATIC1111
2e2d45b281
Merge pull request #2143 from JC-Array/deepdanbooru_pre_process
deepbooru tags for textual inversion preprocessing
2022-10-12 08:35:27 +03:00
Greg Fuller
fec2221eea Truncate error text to fix service lockup / stall
What:
* Update wrap_gradio_call to add a limit to the maximum amount of text output

Why:
* wrap_gradio_call currently prints out a list of the arguments provided to the failing function.
  * If that function is save_image, this causes the entire image to be printed to stderr.
  * If the image is large, this can cause the service to lock up while attempting to print all the text.
* It is easy to generate large images using the x/y plot script.
* It is easy to encounter image save exceptions, including if the output directory does not exist / cannot be written to, or if the file is too big.
* The huge amount of log spam is confusing and not particularly helpful.
2022-10-12 08:30:06 +03:00
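A sketch of the kind of guard described; the length cap is an illustrative constant:

```python
def truncated_repr(value, max_len=1024):
    # Cap how much of each argument is echoed into the error output so
    # a failing save_image call cannot dump a whole image to stderr.
    text = repr(value)
    if len(text) > max_len:
        return text[:max_len] + f"... ({len(text) - max_len} more characters)"
    return text
```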
AUTOMATIC
6ac2ec2b78 create dir for hypernetworks 2022-10-12 07:01:20 +03:00
Greg Fuller
d717eb079c Interrogate: add option to include ranks in output
Since the UI also allows users to specify ranks, it can be useful to show people what ranks are being returned by interrogate

This can also give much better results when feeding the interrogate results back into either img2img or txt2img, especially when trying to generate a specific character or scene for which you have a similar concept image

Testing Steps:

Launch Webui with command line arg: --deepdanbooru
Navigate to img2img tab, use interrogate DeepBooru, verify tags appear as before. Use "Interrogate CLIP", verify prompt appears as before
Navigate to Settings tab, enable new option, click "apply settings"
Navigate to img2img, Interrogate DeepBooru again, verify that weights appear and are properly formatted. Note that "Interrogate CLIP" prompt is still unchanged
In my testing, this change has no effect on "Interrogate CLIP", as it seems to generate a sentence-structured caption, and not a set of tags.

(reproduce changes from 6ed4faac46)
2022-10-11 18:02:41 -07:00
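The weighted output can be sketched as formatting each (tag, confidence) pair in the prompt's attention syntax; illustrative, not the PR's exact code:

```python
def format_interrogation(tags, include_ranks=False):
    # tags: (tag, confidence) pairs from the interrogator.
    if include_ranks:
        return ", ".join(f"({tag}:{conf:.3f})" for tag, conf in tags)
    return ", ".join(tag for tag, _ in tags)

# e.g. format_interrogation([("1girl", 0.99), ("solo", 0.95)], True)
# -> "(1girl:0.990), (solo:0.950)"
```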
supersteve3d
65b973ac4e
Update shared.py
Correct typo to "Unload VAE and CLIP from VRAM when training" in settings tab.
2022-10-12 08:21:52 +08:00
DepFA
6d408c06c6
Prevent NaNs from failed float parsing from overwriting weights 2022-10-12 00:19:28 +01:00
JC_Array
f53f703aeb resolved conflicts, moved settings under interrogate section, settings only show if deepbooru flag is enabled 2022-10-11 18:12:12 -05:00
JC-Array
963d986396
Merge branch 'AUTOMATIC1111:master' into deepdanbooru_pre_process 2022-10-11 17:33:15 -05:00
AUTOMATIC
6be32b31d1 there are reports that training with medvram is possible. 2022-10-11 23:07:09 +03:00
DepFA
66ec505975
add file based test 2022-10-11 20:21:30 +01:00
DepFA
7e6a6e00ad
Add files via upload 2022-10-11 20:20:46 +01:00
DepFA
5f3317376b
spacing 2022-10-11 20:09:49 +01:00
DepFA
91d7ee0d09
update imports 2022-10-11 20:09:10 +01:00
DepFA
aa75d5cfe8
correct conflict resolution typo 2022-10-11 20:06:13 +01:00
AUTOMATIC
2f6ea2fbca Merge remote-tracking branch 'origin/master' 2022-10-11 22:03:57 +03:00
AUTOMATIC
d6fcc6b87b apply lr schedule to hypernets 2022-10-11 22:03:05 +03:00
DepFA
db71290d26
remove old caption method 2022-10-11 19:55:54 +01:00
DepFA
61788c0538
shift embedding logic out of textual_inversion 2022-10-11 19:50:50 +01:00
AUTOMATIC1111
12f4f4761b
Merge pull request #1795 from MarkovInequality/learnschedule
Added learning_rate scheduling for TI
2022-10-11 21:50:30 +03:00
AUTOMATIC1111
419e539fe3
Merge branch 'learning_rate-scheduling' into learnschedule 2022-10-11 21:50:19 +03:00
DepFA
e5fbf5c755
remove embedding related image functions from images 2022-10-11 19:46:33 +01:00
DepFA
c080f52cea
move embedding logic to separate file 2022-10-11 19:37:58 +01:00
nai-degen
9e5f6b5580 triggers 'input' event when using arrow keys to edit attention 2022-10-11 21:19:30 +03:00
AUTOMATIC
d7474a5185 bump gradio to 3.4.1 2022-10-11 21:10:55 +03:00
AUTOMATIC
6a9ea5b41c prevent extra modules from being saved/loaded with hypernet 2022-10-11 19:22:30 +03:00
AUTOMATIC
d4ea5f4d86 add an option to unload models during hypernetwork training to save VRAM 2022-10-11 19:03:08 +03:00
AUTOMATIC
6d09b8d1df produce error when training with medvram/lowvram enabled 2022-10-11 18:33:57 +03:00
JC_Array
ff4ef13dd5 removed unneeded print 2022-10-11 10:24:27 -05:00
AUTOMATIC
d682444ecc add option to select hypernetwork modules when creating 2022-10-11 18:04:47 +03:00
AUTOMATIC
5ba23cb41f change default for XY plot's Y to Nothing. 2022-10-11 17:28:17 +03:00
AUTOMATIC1111
4f96ffd0b5
Merge pull request #2201 from alg-wiki/textual__inversion
Textual Inversion: Preprocess and Training will only pick-up image files instead
2022-10-11 17:25:36 +03:00
brkirch
861db783c7 Use apply_hypernetwork function 2022-10-11 17:24:00 +03:00
brkirch
574c8e554a Add InvokeAI and lstein to credits, add back CUDA support 2022-10-11 17:24:00 +03:00
brkirch
98fd5cde72 Add check for psutil 2022-10-11 17:24:00 +03:00
brkirch
c0484f1b98 Add cross-attention optimization from InvokeAI
* Add cross-attention optimization from InvokeAI (~30% speed improvement on MPS)
* Add command line option for it
* Make it default when CUDA is unavailable
2022-10-11 17:24:00 +03:00
AUTOMATIC1111
f7e86aa420
Merge pull request #2227 from papuSpartan/master
Refresh list of models/ckpts upon hitting restart gradio in the setti…
2022-10-11 17:15:19 +03:00
DepFA
1eaad95533
Merge branch 'master' into embed-embeddings-in-images 2022-10-11 15:15:09 +01:00
AUTOMATIC
e0ee5bf703 add codeowners file to stop the great guys who are collaborating on the project from merging in PRs. 2022-10-11 17:08:03 +03:00
AUTOMATIC
66b7d7584f become even stricter with pickles
no pickle shall pass
thank you again, RyotaK
2022-10-11 17:03:16 +03:00
papuSpartan
d01a2d0156 move list refresh to webui.py and add stdout indicating it's doing so 2022-10-11 08:31:28 -05:00
C43H66N12O12S2
a05c824384
Merge pull request #2218 from AUTOMATIC1111/update-readme
add features, credit for composable diffusion
2022-10-11 16:25:38 +03:00
ClashSAN
5766ce21ab
Update README.md 2022-10-11 13:20:03 +00:00
不会画画的中医不是好程序员
a36dea9596
Merge branch 'master' into master 2022-10-11 21:03:41 +08:00
AUTOMATIC
b0583be088 more renames 2022-10-11 15:54:34 +03:00
AUTOMATIC
873efeed49 rename hypernetwork dir to hypernetworks to prevent clash with an old filename that people who use zip instead of git clone will have 2022-10-11 15:51:30 +03:00
JamnedZ
a004d1a855 Added new line at the end of ngrok.py 2022-10-11 15:38:53 +03:00
JamnedZ
5992564448 Cleaned ngrok integration 2022-10-11 15:38:53 +03:00
JamnedZ
4e485b7923 Added installation of pyngrok if needed 2022-10-11 15:38:53 +03:00
parsec501
210fd72bab Added 'suggestion' flair to suggestion template 2022-10-11 15:38:00 +03:00
Ben
54c519943a Update style.css 2022-10-11 15:37:04 +03:00
Ben
031dc8cd7f space holder 2022-10-11 15:37:04 +03:00
Ben
861297cefe add a space holder 2022-10-11 15:37:04 +03:00
Ben
87b77cad5f Layout fix 2022-10-11 15:37:04 +03:00
Ben
b372f5538b Save some space 2022-10-11 15:37:04 +03:00
yfszzx
87d63bbab5 images history improvement 2022-10-11 20:37:03 +08:00
Martin Cairns
eacc03b167 Fix typo in comments 2022-10-11 15:36:29 +03:00
Martin Cairns
1eae307607 Remove debug code for checking that first sigma value is same after code cleanup 2022-10-11 15:36:29 +03:00
Martin Cairns
92d7a13885 Handle different parameters for DPM fast & adaptive 2022-10-11 15:36:29 +03:00
DepFA
9b8faefde0 context menus closure 2022-10-11 15:34:48 +03:00
DepFA
45ada1c910 Correct list style, apply gen forever to both tabs, roll3 on both tabs 2022-10-11 15:34:48 +03:00
yfszzx
594ab4ba53 images history improvement 2022-10-11 20:23:41 +08:00
yfszzx
7b1db45e1f images history improvement 2022-10-11 20:17:27 +08:00
AUTOMATIC
dce7fc902a Merge remote-tracking branch 'origin/master' 2022-10-11 15:00:16 +03:00
AUTOMATIC
530103b586 fixes related to merge 2022-10-11 14:53:02 +03:00
DepFA
1a0a6a84c3
add incorrect start word guard to xy_grid (#2259) 2022-10-11 11:59:56 +01:00
DepFA
a8490e4019
revert sr warning 2022-10-11 11:42:41 +01:00
Rory Grieve
4b460fcb1a
Reset init img in loopback at start of each batch (#2214)
Before a new batch would use the last image from the previous batch. Now
each batch will use the original image for the init image at the start of the
batch.
2022-10-11 11:23:47 +01:00
aperullo
255be75d30
Error if prompt missing SR token to prevent mis-gens (#2209) 2022-10-11 11:16:57 +01:00
alg-wiki
8bacbca0a1
Removed my local edits to checkpoint image generation 2022-10-11 17:35:09 +09:00
alg-wiki
b2368a3bce
Switched to exception handling 2022-10-11 17:32:46 +09:00
AUTOMATIC
5de806184f Merge branch 'master' into hypernetwork-training 2022-10-11 11:14:36 +03:00
AUTOMATIC
948533950c replace duplicate code with a function 2022-10-11 11:10:17 +03:00
hentailord85ez
5e2627a1a6
Comma backtrack padding (#2192)
Comma backtrack padding
2022-10-11 09:55:28 +03:00
Kenneth
8617396c6d Added slider for deepbooru score threshold in settings 2022-10-11 09:43:16 +03:00
Jairo Correa
8b7d3f1bef Make the ctrl+enter shortcut use the generate button on the current tab 2022-10-11 09:32:03 +03:00
DepFA
7aa8fcac1e
use simple lcg in xor 2022-10-11 04:17:36 +01:00
papuSpartan
1add3cff84 Refresh list of models/ckpts upon hitting restart gradio in the settings pane 2022-10-10 19:57:43 -05:00
JC_Array
bb932dbf9f added alpha sort and threshold variables to create process method in preprocessing 2022-10-10 18:37:52 -05:00
ClashSAN
70b50b1dfc
add features, credit for Composable Diffusion
to readme

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2171
2022-10-10 23:23:12 +00:00
JC-Array
d66bc86159
Merge pull request #2 from JC-Array/master
resolve merge conflicts
2022-10-10 18:11:02 -05:00
JC-Array
47f5e216da
Merge branch 'deepdanbooru_pre_process' into master 2022-10-10 18:10:49 -05:00
JC-Array
aca1553bde
Merge pull request #1 from AUTOMATIC1111/master
updating files to resolve merge conflicts
2022-10-10 18:06:07 -05:00
JC_Array
76ef3d75f6 added deepbooru settings (threshold and sort by alpha or likelihood) 2022-10-10 18:01:49 -05:00
DepFA
e0fbe6d27e
colour depth conversion fix 2022-10-10 23:26:24 +01:00
DepFA
767202a4c3
add dependency 2022-10-10 23:20:52 +01:00
DepFA
315d5a8ed9
update data dis[play style 2022-10-10 23:14:44 +01:00
JC_Array
b980e7188c corrected tag return in get_deepbooru_tags 2022-10-10 16:52:54 -05:00
JC_Array
a1a05ad2d1 import time missing, added to deepbooru fixxing error on get_deepbooru_tags 2022-10-10 16:47:58 -05:00
alg-wiki
907a88b2d0 Added .webp .bmp 2022-10-11 06:35:07 +09:00
Fampai
2536ecbb17 Refactored learning rate code 2022-10-10 17:10:29 -04:00
DepFA
42bf5fa325
Make cancel generate forever let the current gen complete (#2206) 2022-10-10 21:54:21 +01:00
AUTOMATIC
f98338faa8 add an option to not add watermark to created images 2022-10-10 23:15:48 +03:00
alg-wiki
f0ab972f85
Merge branch 'master' into textual__inversion 2022-10-11 03:35:28 +08:00
alg-wiki
bc3e183b73
Textual Inversion: Preprocess and Training will only pick-up image files 2022-10-11 04:30:13 +09:00
AUTOMATIC
5da1ba0e91 remove batch size restriction from X/Y plot 2022-10-10 21:24:11 +03:00
Justin Maier
1d64976dbc Simplify crop logic 2022-10-10 12:04:21 -06:00
AUTOMATIC
727e4d1086 no to different messages, plus fix using != to compare to None 2022-10-10 20:46:55 +03:00
AUTOMATIC1111
b3d3b335cf
Merge pull request #2131 from ssysm/upstream-master
Add VAE Path Arguments
2022-10-10 20:45:14 +03:00
AUTOMATIC
39919c40dd add eta noise seed delta option 2022-10-10 20:32:44 +03:00
ssysm
af62ad4d25 change vae loading method 2022-10-10 13:25:28 -04:00
C43H66N12O12S2
ed769977f0 add swinir v2 support 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
ece27fe989 Add files via upload 2022-10-10 19:54:57 +03:00
C43H66N12O12S2
e37d0cdd06 Update requirements_versions.txt 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
5c3254b3ee Update requirements.txt 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
3e7a981194 remove functorch 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
623251ce2b allow pascal onwards 2022-10-10 19:54:07 +03:00
C43H66N12O12S2
b8c38f2bbf change prebuilt wheel 2022-10-10 19:54:07 +03:00
Vladimir Repin
9d33baba58 Always show previous mask and fix extras_send dest 2022-10-10 19:39:24 +03:00
Melan
6c36fe5719 Add ctrl+enter as a shortcut to quickly start a generation. 2022-10-10 19:32:30 +03:00
hentailord85ez
d5c14365fd Add back in output hidden states parameter 2022-10-10 18:54:48 +03:00
hentailord85ez
460bbae587 Pad beginning of textual inversion embedding 2022-10-10 18:54:48 +03:00
hentailord85ez
b340439586 Unlimited Token Works
Unlimited tokens actually work now. Works with textual inversion too. Replaces the previous not-so-much-working implementation.
2022-10-10 18:54:48 +03:00
RW21
f347ddfd80 Remove max_batch_count from ui.py 2022-10-10 18:53:40 +03:00
DepFA
df6d0d9286
convert back to rgb as some hosts add alpha 2022-10-10 15:43:09 +01:00
DepFA
707a431100
add pixel data footer 2022-10-10 15:34:49 +01:00
DepFA
ce2d7f7eac
Merge branch 'master' into embed-embeddings-in-images 2022-10-10 15:13:48 +01:00
Ben
ce37fdd30e maximize the view 2022-10-10 17:11:24 +03:00
alg-wiki
7a20f914ed Custom Width and Height 2022-10-10 17:05:12 +03:00
alg-wiki
6ad3a53e36 Fixed progress bar output for epoch 2022-10-10 17:05:12 +03:00
alg-wiki
ea00c1624b Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:05:12 +03:00
AUTOMATIC
8f1efdc130 --no-half-vae pt2 2022-10-10 17:03:45 +03:00
alg-wiki
04c745ea4f
Custom Width and Height 2022-10-10 22:35:35 +09:00
AUTOMATIC
7349088d32 --no-half-vae 2022-10-10 16:16:29 +03:00
不会画画的中医不是好程序员
1e18a5ffcc
Merge branch 'AUTOMATIC1111:master' into master 2022-10-10 20:21:25 +08:00
Bepis
a357823339 Add a pull request template 2022-10-10 15:15:58 +03:00
yfszzx
23f2989799 images history over 2022-10-10 18:33:49 +08:00
JC_Array
2f94331df2 removed the change in the last commit; simplified to adding the visible argument to process_caption_deepbooru, set to False if the deepdanbooru argument is not set 2022-10-10 03:34:00 -05:00
alg-wiki
4ee7519fc2
Fixed progress bar output for epoch 2022-10-10 17:31:33 +09:00
JC_Array
8ec069e64d removed duplicate run_preprocess.click by creating run_preprocess_inputs list and appending deepbooru variable to input list if in scope 2022-10-10 03:23:24 -05:00
alg-wiki
3110f895b2
Textual Inversion: Added custom training image size and number of repeats per input image in a single epoch 2022-10-10 17:07:46 +09:00
yfszzx
8a7c07a214 show image history 2022-10-10 15:39:39 +08:00
brkirch
8acc901ba3 Newer versions of PyTorch use TypedStorage instead
PyTorch 1.13 and later will rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
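A sketch of the compatibility shim this describes:

```python
import torch

# Newer PyTorch builds expose TypedStorage publicly; older ones only have
# the underscored _TypedStorage. Prefer the new name, fall back to the old.
try:
    TypedStorage = torch.storage.TypedStorage
except AttributeError:
    TypedStorage = torch.storage._TypedStorage
```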
JC_Array
1f92336be7 refactored the deepbooru module to improve speed when running multiple interrogations in a row. Added the option to generate deepbooru tags for textual inversion preprocessing. 2022-10-09 23:58:18 -05:00
ssysm
6fdad291bd Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master 2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d add vae path args 2022-10-09 23:17:29 -04:00
Justin Maier
6435691bb1 Add "Scale to" option to Extras 2022-10-09 19:26:52 -06:00
DepFA
4117afff11
Merge branch 'master' into embed-embeddings-in-images 2022-10-10 00:38:54 +01:00
DepFA
e2c2925eb4
remove braces from steps 2022-10-10 00:12:53 +01:00
DepFA
d6a599ef9b
change caption method 2022-10-10 00:07:52 +01:00
DepFA
0ac3a07eec
add caption image with overlay 2022-10-10 00:05:36 +01:00
AUTOMATIC
45fbd1c5fe remove background for quicksettings row (for dark theme) 2022-10-10 00:42:18 +03:00
DepFA
01fd9cf0d2
change source of step count 2022-10-09 22:17:02 +01:00
DepFA
96f1e6be59
source checkpoint hash from current checkpoint 2022-10-09 22:14:50 +01:00
DepFA
6684610510
correct case on embeddingFromB64 2022-10-09 22:06:42 +01:00
DepFA
d0184b8f76
change json tensor key name 2022-10-09 22:06:12 +01:00
DepFA
5d12ec82d3
add encoder and decoder classes 2022-10-09 22:05:09 +01:00
DepFA
969bd8256e
add alternate checkpoint hash source 2022-10-09 22:02:28 +01:00
DepFA
03694e1f99
add embedding load and save from b64 json 2022-10-09 21:58:14 +01:00
AUTOMATIC
a65476718f add DoubleStorage to list of allowed classes for pickle 2022-10-09 23:38:49 +03:00
DepFA
fa0c5eb81b
Add pretty image captioning functions 2022-10-09 20:41:22 +01:00
AUTOMATIC
8d340cfb88 do not add clip skip to parameters if it's 1 or 0 2022-10-09 22:31:35 +03:00
Fampai
84ddd44113 Clip skip variable name change breaks x/y plot script. This fixes that 2022-10-09 22:31:23 +03:00
Fampai
1824e9ee3a Removed unnecessary tmp variable 2022-10-09 22:31:23 +03:00
Fampai
ad3ae44108 Updated code for legibility 2022-10-09 22:31:23 +03:00
Fampai
ec2bd9be75 Fix issues with CLIP ignore option name change 2022-10-09 22:31:23 +03:00
Fampai
a14f7bf113 Corrected CLIP Layer Ignore description and updated its range to the max possible 2022-10-09 22:31:23 +03:00
Fampai
e59c66c008 Optimized code for Ignoring last CLIP layers 2022-10-09 22:31:23 +03:00
AUTOMATIC
6c383d2e82 show model selection setting on top of page 2022-10-09 22:24:07 +03:00
AUTOMATIC
45bf9a6264 added clip skip to XY plot 2022-10-09 18:58:55 +03:00
supersteve3d
a2d70f25bf Add files via upload
Updated txt2img screenshot (UI as of Oct 9th) for github webui / README.md
2022-10-09 18:45:37 +03:00
Artem Zagidulin
9ecea0a8d6 fix missing png info when Extras Batch Process 2022-10-09 18:35:25 +03:00
DepFA
d3cd46b038
Update lightbox to change displayed image as soon as generation is complete (#1933)
* add updateOnBackgroundChange
* typo fixes.
* reindent to 4 spaces
2022-10-09 16:19:33 +01:00
AUTOMATIC
875ddfeecf added guard for torch.load to prevent loading pickles with unknown content 2022-10-09 17:58:43 +03:00
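The guard follows the classic restricted-unpickler pattern. A minimal sketch, with an illustrative whitelist rather than the webui's exact one (the DoubleStorage commit above extends such a list):

```python
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Only these (module, name) globals may be resolved; anything else in a
    # pickled checkpoint is rejected before it can execute code.
    ALLOWED = {
        ("collections", "OrderedDict"),
        ("torch._utils", "_rebuild_tensor_v2"),
        ("torch", "FloatStorage"),
        ("torch", "HalfStorage"),
        ("torch", "DoubleStorage"),
    }

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")
        return super().find_class(module, name)
```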
AUTOMATIC
bba2ac8324 reshuffle the code a bit in launcher to keep functions in one place for #2069 2022-10-09 15:22:51 +03:00
Greendayle
f8197976ef Shielded launch environment creation stuff from multiprocessing 2022-10-09 15:17:36 +03:00
victorca25
53154ba10a
Merge branch 'master' into esrgan_mod 2022-10-09 14:11:22 +02:00
AUTOMATIC
9d1138e294 fix typo in filename for ESRGAN arch 2022-10-09 15:08:27 +03:00
AUTOMATIC
2c52f4da7f fix broken samplers in XY plot 2022-10-09 15:01:42 +03:00
AUTOMATIC
e6e8cabe0c change up #2056 to make it work how i want it to plus make xy plot write correct values to images 2022-10-09 14:57:48 +03:00
William Moorehouse
594cbfd8fb Sanitize infotext output (for now) 2022-10-09 14:49:15 +03:00
William Moorehouse
006791c13d Fix grabbing the model name for infotext 2022-10-09 14:49:15 +03:00
William Moorehouse
d6d10a37bf Added extended model details to infotext 2022-10-09 14:49:15 +03:00
AUTOMATIC
542a3d3a4a fix broken hypernetworks in XY plot 2022-10-09 14:33:22 +03:00
victorca25
ad4de819c4 update ESRGAN architecture and model to support all ESRGAN models in the DB, BSRGAN and real-ESRGAN models 2022-10-09 13:07:50 +02:00
AUTOMATIC
77a719648d fix logic error in #1832 2022-10-09 13:48:04 +03:00
AUTOMATIC
f4578b343d fix model switching not working properly if there is a different yaml config 2022-10-09 13:23:30 +03:00
AUTOMATIC
bd833409ac additional changes for saving pnginfo for #1803 2022-10-09 13:10:15 +03:00
Milly
0609ce06c0 Removed duplicate definition model_path 2022-10-09 12:46:07 +03:00
Brendan Byrd
a65a45272e Don't change the seed initially if "Keep -1 for seeds" is checked
Fixes #1049
2022-10-09 12:43:56 +03:00
Jesse Williams
d74c38108f Confirm that options are valid before starting
When using the 'Sampler' or 'Checkpoint' options, if one of the entered
names has a typo, an error will only be thrown once the `draw_xy_grid`
loop reaches that name. This can waste a lot of time for large grids
with a typo near the end of a list, since the script needs to start over
and re-generate any earlier images to finish making the grid.

Also fixing typo in variable name in `draw_xy_grid`.
2022-10-09 12:39:18 +03:00
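A sketch of that fail-fast validation (names are illustrative):

```python
def confirm_entries(names, known, kind):
    # Resolve every axis value up front so a typo aborts immediately,
    # not after half the grid has already been rendered.
    for name in names:
        if name.lower() not in known:
            raise RuntimeError(f"Unknown {kind}: {name}")

known_samplers = {s.name.lower() for s in samplers}  # 'samplers' as defined by the webui
confirm_entries(x_values, known_samplers, "sampler")
confirm_entries(y_values, known_samplers, "sampler")
```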
AUTOMATIC
6f6798ddab prevent a possible code execution error (thanks, RyotaK) 2022-10-09 12:33:37 +03:00
AUTOMATIC
0241d811d2 Revert "Fix for Prompts_from_file showing extra textbox."
This reverts commit e2930f9821.
2022-10-09 12:04:44 +03:00
AUTOMATIC
ab4fe4f44c hide filenames for save button by default 2022-10-09 11:59:41 +03:00
Tony Beeman
cbf6dad02d Handle case where on_show returns the wrong number of arguments 2022-10-09 11:16:38 +03:00
Tony Beeman
86cb16886f Pull Request Code Review Fixes 2022-10-09 11:16:38 +03:00
Tony Beeman
e2930f9821 Fix for Prompts_from_file showing extra textbox. 2022-10-09 11:16:38 +03:00
Nicolas Noullet
1ffeb42d38 Fix typo 2022-10-09 11:10:13 +03:00
frostydad
ef93acdc73 remove line break 2022-10-09 11:09:17 +03:00
frostydad
03e570886f Fix incorrect sampler name in output 2022-10-09 11:09:17 +03:00
Fampai
122d42687b Fix VRAM Issue by only loading in hypernetwork when selected in settings 2022-10-09 11:08:11 +03:00
AUTOMATIC1111
e00b4df7c6
Merge pull request #1752 from Greendayle/dev/deepdanbooru
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann
14192c5b20 Support Download for txt files. 2022-10-09 10:49:11 +03:00
aoirusann
5ab7e88d9b Add Download & Download as zip 2022-10-09 10:49:11 +03:00
AUTOMATIC
4e569fd888 fixed incorrect message about loading config; thanks anon! 2022-10-09 10:31:47 +03:00
AUTOMATIC
c77c89cc83 make main model loading and model merger use the same code 2022-10-09 10:23:31 +03:00
DepFA
cd8673bd9b
add embed embedding to ui 2022-10-09 05:40:57 +01:00
DepFA
5841990b0d
Update textual_inversion.py 2022-10-09 05:38:38 +01:00
AUTOMATIC
050a6a798c support loading .yaml config with same name as model
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
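The .yaml lookup amounts to something like this (a sketch, not the exact code):

```python
import os

def find_checkpoint_config(checkpoint_path: str, default_config: str) -> str:
    # If "model.ckpt" has a sibling "model.yaml", prefer it over the default config.
    candidate = os.path.splitext(checkpoint_path)[0] + ".yaml"
    return candidate if os.path.exists(candidate) else default_config
```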
Aidan Holland
432782163a chore: Fix typos 2022-10-08 22:42:30 +03:00
Edouard Leurent
610a7f4e14 Break after finding the local directory of stable diffusion
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.

Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
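In sketch form (paths follow the webui's repository layout, but treat the details as illustrative):

```python
import os

script_path = os.path.dirname(os.path.abspath(__file__))

possible_sd_paths = [
    os.path.join(script_path, "repositories/stable-diffusion"),
    ".",
    "..",
]

sd_path = None
for candidate in possible_sd_paths:
    if os.path.exists(os.path.join(candidate, "ldm/models/diffusion/ddpm.py")):
        sd_path = os.path.abspath(candidate)
        break  # the fix: stop at the first match so '.' or '..' cannot override it
```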
AUTOMATIC
3b2141c5fb add 'Ignore last layers of CLIP model' option as a parameter to the infotext 2022-10-08 22:21:15 +03:00
AUTOMATIC
e6e42f98df make --force-enable-xformers work without needing --xformers 2022-10-08 22:12:23 +03:00
Fampai
1371d7608b Added ability to ignore last n layers in FrozenCLIPEmbedder 2022-10-08 22:10:37 +03:00
DepFA
b458fa48fe Update ui.py 2022-10-08 20:38:35 +03:00
DepFA
15c4278f1a TI preprocess wording
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle
0ec80f0125
Merge branch 'master' into dev/deepdanbooru 2022-10-08 18:28:22 +02:00
AUTOMATIC
3061cdb7b6 add --force-enable-xformers option and also add messages to console regarding cross attention optimizations 2022-10-08 19:22:15 +03:00
AUTOMATIC
f9c5da1592 add fallback for xformers_attnblock_forward 2022-10-08 19:05:19 +03:00
Greendayle
01f8cb4447 made deepdanbooru optional, added to readme, automatic download of deepbooru model 2022-10-08 18:02:56 +02:00
Artem Zagidulin
a5550f0213 alternate prompt 2022-10-08 18:12:19 +03:00
DepFA
34acad1628 Add GZipMiddleware to root demo 2022-10-08 18:03:16 +03:00
C43H66N12O12S2
cc0258aea7 check for ampere without destroying the optimizations. again. 2022-10-08 17:54:16 +03:00
C43H66N12O12S2
017b6b8744 check for ampere 2022-10-08 17:54:16 +03:00
C43H66N12O12S2
7e639cd498 check for 3.10 2022-10-08 17:54:16 +03:00
Greendayle
5329d0aba0 Merge branch 'master' into dev/deepdanbooru 2022-10-08 16:30:28 +02:00
AUTOMATIC
cfc33f99d4 why did you do this 2022-10-08 17:29:06 +03:00
Greendayle
2e8ba0fa47 fix conflicts 2022-10-08 16:27:48 +02:00
Milly
4f33289d0f Fixed typo 2022-10-08 17:15:30 +03:00
AUTOMATIC
27032c47df restore old opt_split_attention/disable_opt_split_attention logic 2022-10-08 17:10:05 +03:00
AUTOMATIC
dc1117233e simplify xformers options: --xformers to enable and that's it 2022-10-08 17:02:18 +03:00
AUTOMATIC
7ff1170a2e emergency fix for xformers (continue + shared) 2022-10-08 16:33:39 +03:00
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2
970de9ee68
Update sd_hijack.py 2022-10-08 16:29:43 +03:00
C43H66N12O12S2
7ffea15078
Update requirements_versions.txt 2022-10-08 16:24:06 +03:00
C43H66N12O12S2
ca5f0f149c
Update launch.py 2022-10-08 16:22:38 +03:00
C43H66N12O12S2
69d0053583
update sd_hijack_opt to respect new env variables 2022-10-08 16:21:40 +03:00
C43H66N12O12S2
ddfa9a9786
add xformers_available shared variable 2022-10-08 16:20:41 +03:00
C43H66N12O12S2
26b459a379
default to split attention if cuda is available and xformers is not 2022-10-08 16:20:04 +03:00
C43H66N12O12S2
d0e85873ac
check for OS and env variable 2022-10-08 16:13:26 +03:00
MrCheeze
5f85a74b00 fix bug where when using prompt composition, hijack_comments generated before the final AND will be dropped 2022-10-08 15:48:04 +03:00
guaneec
32e428ff19 Remove duplicate event listeners 2022-10-08 15:47:24 +03:00
ddPn08
772db721a5 fix glob path in hypernetwork.py 2022-10-08 15:46:54 +03:00
AUTOMATIC
7001bffe02 fix AND broken for long prompts 2022-10-08 15:43:25 +03:00
AUTOMATIC
77f4237d1c fix bugs related to variable prompt lengths 2022-10-08 15:25:59 +03:00
C43H66N12O12S2
3f166be1b6
Update requirements.txt 2022-10-08 14:42:50 +03:00
C43H66N12O12S2
4201fd14f5
install xformers 2022-10-08 14:42:34 +03:00
AUTOMATIC
4999eb2ef9 do not let user choose his own prompt token count limit 2022-10-08 14:25:47 +03:00
Trung Ngo
00117a07ef check specifically for skipped 2022-10-08 13:40:39 +03:00
Trung Ngo
786d9f63aa Add button to skip the current iteration 2022-10-08 13:40:39 +03:00
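The skip/interrupt distinction, sketched (attribute names follow the webui's shared state convention, but treat the snippet as illustrative): skip abandons only the current iteration, interrupt stops the whole job.

```python
for i in range(n_iter):
    if state.interrupted:
        break                  # stop the whole job
    if state.skipped:
        state.skipped = False  # consume the flag...
        continue               # ...and abandon only this iteration
    render_iteration(i)        # hypothetical per-iteration work
```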
AUTOMATIC
45cc0ce3c4 Merge remote-tracking branch 'origin/master' 2022-10-08 13:39:08 +03:00
AUTOMATIC
706d5944a0 let user choose his own prompt token count limit 2022-10-08 13:38:57 +03:00
leko
616b7218f7 fix: handles when state_dict does not exist 2022-10-08 12:38:50 +03:00
C43H66N12O12S2
91d66f5520
use new attnblock for xformers path 2022-10-08 11:56:01 +03:00
C43H66N12O12S2
76a616fa6b
Update sd_hijack_optimizations.py 2022-10-08 11:55:38 +03:00
C43H66N12O12S2
5d54f35c58
add xformers attnblock and hypernetwork support 2022-10-08 11:55:02 +03:00
AUTOMATIC
87db6f01cc add info about cross attention javascript shortcut code 2022-10-08 10:15:29 +03:00
DepFA
21679435e5 implement removal 2022-10-08 09:43:31 +03:00
DepFA
83749bfc72 context menu styling 2022-10-08 09:43:31 +03:00
DepFA
e21e473253 Context Menus 2022-10-08 09:43:31 +03:00
brkirch
f2055cb1d4 Add hypernetwork support to split cross attention v1
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
Jairo Correa
a958f9b3fd edit-attention browser compatibility and readme typo 2022-10-08 09:38:44 +03:00
C43H66N12O12S2
b70eaeb200
delete broken and unnecessary aliases 2022-10-08 04:10:35 +03:00
C43H66N12O12S2
c9cc65b201
switch to the proper way of calling xformers 2022-10-08 04:09:18 +03:00
AUTOMATIC
12c4d5c6b5 hypernetwork training mk1 2022-10-07 23:22:22 +03:00
EternalNooblet
065364445d added a flag to run as root if needed 2022-10-07 15:25:01 -04:00
Greendayle
5f12e7efd9 linux test 2022-10-07 20:58:30 +02:00
Greendayle
fa2ea648db even more powerful fix 2022-10-07 20:46:38 +02:00
Greendayle
54fa613c83 loading tf only in interrogation process 2022-10-07 20:37:43 +02:00
Greendayle
537da7a304 Merge branch 'master' into dev/deepdanbooru 2022-10-07 18:31:49 +02:00
AUTOMATIC
f7c787eb7c make it possible to use hypernetworks without opt split attention 2022-10-07 16:39:51 +03:00
AUTOMATIC
97bc0b9504 do not stop working on failed hypernetwork load 2022-10-07 13:22:50 +03:00
AUTOMATIC
d15b3ec001 support loading VAE 2022-10-07 10:40:22 +03:00
AUTOMATIC
bad7cb29ce added support for hypernetworks (???) 2022-10-07 10:17:52 +03:00
C43H66N12O12S2
5e3ff846c5
Update sd_hijack.py 2022-10-07 06:38:01 +03:00
C43H66N12O12S2
5303df2428
Update sd_hijack.py 2022-10-07 06:01:14 +03:00
C43H66N12O12S2
35d6b23162
Update sd_hijack.py 2022-10-07 05:31:53 +03:00
C43H66N12O12S2
cd8bb597c6
Update requirements.txt 2022-10-07 05:23:25 +03:00
C43H66N12O12S2
da4ab2707b
Update shared.py 2022-10-07 05:23:06 +03:00
C43H66N12O12S2
2eb911b056
Update sd_hijack.py 2022-10-07 05:22:28 +03:00
C43H66N12O12S2
f174fb2922
add xformers attention 2022-10-07 05:21:49 +03:00
AUTOMATIC
2995107fa2 added ctrl+up or ctrl+down hotkeys for attention 2022-10-06 23:44:54 +03:00
AUTOMATIC
b34b25b4c9 karras samplers for img2img? 2022-10-06 23:27:01 +03:00
Milly
405c8171d1 Prefer using the Processed.sd_model_hash attribute in filename patterns 2022-10-06 20:41:23 +03:00
Milly
1cc36d170a Added job_timestamp to Processed
So the `[job_timestamp]` pattern can be used when saving images from the UI.
2022-10-06 20:41:23 +03:00
Milly
070b7d60cf Added styles to Processed
So the `[styles]` pattern can be used when saving images from the UI.
2022-10-06 20:41:23 +03:00
AUTOMATIC1111
a5a08b0bee
Merge pull request #1372 from fuzzytent/gallery-styling
Improve styling of gallery items
2022-10-06 20:30:36 +03:00
AUTOMATIC1111
ab4ddbf333
Merge branch 'master' into gallery-styling 2022-10-06 20:30:29 +03:00
Milly
cf7c784fcc Removed duplicate definition of models_path
Use `modules.paths.models_path` instead of `modules.shared.model_path`.
2022-10-06 20:29:12 +03:00
AUTOMATIC
dbc8a4d351 add generation parameters to images shown in web ui 2022-10-06 20:27:50 +03:00
AUTOMATIC
1069ec49a3 revert back to using list comprehension rather than list and map 2022-10-06 20:16:21 +03:00
Milly
0bb458f0ca Removed duplicate image saving code
Use `modules.images.save_image()` instead.
2022-10-06 20:15:39 +03:00
AUTOMATIC1111
2cfcb23c16
Merge pull request #1283 from jn-jairo/fix-vram
Fix memory leak and reduce memory usage
2022-10-06 20:10:11 +03:00
Jairo Correa
b66aa334a9 Merge branch 'master' into fix-vram 2022-10-06 13:41:37 -03:00
DepFA
82eb8ea452 Update xy_grid.py
split vals not 's' from tests
2022-10-06 18:09:49 +03:00
DepFA
fd9e049168 strip() split comma delimited lines 2022-10-06 18:09:49 +03:00
DepFA
efa61d3168 use csv.reader 2022-10-06 18:09:49 +03:00
DepFA
5d0e6ab856 Allow escaping of commas in xy_grid 2022-10-06 18:09:49 +03:00
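Together these commits replace naive splitting with a real CSV parse, e.g. (a sketch):

```python
import csv
from io import StringIO

def parse_axis_values(line: str) -> list:
    # csv.reader understands quoting, so a comma inside quotes no longer
    # splits the value list; strip() cleans up padding around entries.
    return [v.strip() for v in next(csv.reader(StringIO(line), skipinitialspace=True))]

parse_axis_values('a, "b, with comma", c')  # -> ['a', 'b, with comma', 'c']
```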
DepFA
fec71e4de2 Default window title progress updates on 2022-10-06 17:58:52 +03:00
DepFA
c06298d1d0 add check for progress in title setting 2022-10-06 17:58:52 +03:00
DepFA
be71115b1a Update shared.py 2022-10-06 17:58:52 +03:00
AUTOMATIC
f5490674a8 fix bad output for error when updating a git repo 2022-10-06 17:41:49 +03:00
Greendayle
4320f386d9 removing underscores and colons 2022-10-05 22:39:32 +02:00
Greendayle
17a99baf0c better model search 2022-10-05 22:07:28 +02:00
Greendayle
1506fab29a removing problematic tag 2022-10-05 21:15:08 +02:00
Greendayle
59a2b9e5af deepdanbooru interrogator 2022-10-05 20:55:26 +02:00
Jairo Correa
82380d9ac1 Removing parts no longer needed to fix vram 2022-10-04 22:31:40 -03:00
Jairo Correa
1f50971fb8 Merge branch 'master' into fix-vram 2022-10-04 19:53:52 -03:00
fuzzytent
2a7f48cdb8 Improve styling of gallery items, particularly in dark mode 2022-10-03 18:33:58 +02:00
RnDMonkey
fe6e2362e8
Update xy_grid.py
Changed XY Plot infotext value keys to not be so generic.
2022-10-02 22:04:28 -07:00
Jairo Correa
ad0cc85d1f Merge branch 'master' into stable 2022-10-02 18:31:19 -03:00
RnDMonkey
37c9073f58
Merge branch 'AUTOMATIC1111:master' into trunk 2022-10-01 15:52:21 -07:00
RnDMonkey
f6a97868e5 fix to allow empty {} values 2022-10-01 14:36:09 -07:00
RnDMonkey
b99a4f769f fixed expression error in condition 2022-10-01 14:26:12 -07:00
RnDMonkey
eba0c29dbc Updated xy_grid infotext formatting, parser regex 2022-10-01 13:56:29 -07:00
shirase-0
0e77ee24b0 Removed unnecessary library call and added some comments 2022-10-02 00:57:29 +10:00
shirase-0
27fbf3de4a Added tag parsing for prompts from file 2022-10-02 00:43:24 +10:00
RnDMonkey
cf141157e7 Added X/Y plot parameters to extra_generation_params 2022-09-30 22:02:29 -07:00
RnDMonkey
70931652a4 [xy_grid] made -1 seed fixing apply to Var. seed too 2022-09-30 18:02:46 -07:00
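A sketch of the seed fixing this extends (the `p.seed`/`p.subseed` attribute names follow the webui's processing object, but treat the snippet as illustrative):

```python
import random

def fix_seed(seed: int) -> int:
    # -1 means "pick a random seed"; resolve it once up front so every
    # grid cell (and its infotext) shares the same value.
    return random.randrange(4294967294) if seed == -1 else seed

p.seed = fix_seed(p.seed)
p.subseed = fix_seed(p.subseed)  # the change: the variation seed gets the same treatment
```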
Jairo Correa
ad1fbbae93 Merge branch 'master' into fix-vram 2022-09-30 18:58:51 -03:00
Jairo Correa
c2d5b29040 Move silu to sd_hijack 2022-09-29 01:16:25 -03:00
Jairo Correa
c938679de7 Fix memory leak and reduce memory usage 2022-09-28 22:14:13 -03:00
199 changed files with 24878 additions and 7039 deletions


@@ -1,32 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug-report
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Windows, Linux]
- Browser [e.g. chrome, safari]
- Commit revision [looks like this: e68484500f76a33ba477d5a99340ab30451e557b; can be seen when launching webui.bat, or obtained manually by running `git rev-parse HEAD`]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,100 @@
name: Bug Report
description: You think something is broken in the UI
title: "[Bug]: "
labels: ["bug-report"]
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the bug you encountered, and check that it hasn't been fixed in a recent build/commit.
options:
- label: I have searched the existing issues and checked the recent builds/commits
required: true
- type: markdown
attributes:
value: |
*Please fill this form with as much information as possible; don't forget to fill in "What OS..." and "What browsers", and provide screenshots if possible*
- type: textarea
id: what-did
attributes:
label: What happened?
description: Tell us what happened in a very clear and simple way
validations:
required: true
- type: textarea
id: steps
attributes:
label: Steps to reproduce the problem
description: Please provide us with precise step by step information on how to reproduce the bug
value: |
1. Go to ....
2. Press ....
3. ...
validations:
required: true
- type: textarea
id: what-should
attributes:
label: What should have happened?
description: Tell us what you think the normal behavior should be
validations:
required: true
- type: input
id: commit
attributes:
label: Commit where the problem happens
description: Which commit are you running? (Do not write *Latest version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Commit** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)
validations:
required: true
- type: dropdown
id: platforms
attributes:
label: What platforms do you use to access the UI?
multiple: true
options:
- Windows
- Linux
- MacOS
- iOS
- Android
- Other/Cloud
- type: dropdown
id: browsers
attributes:
label: What browsers do you use to access the UI?
multiple: true
options:
- Mozilla Firefox
- Google Chrome
- Brave
- Apple Safari
- Microsoft Edge
- type: textarea
id: cmdargs
attributes:
label: Command Line Arguments
description: Are you using any launching parameters/command line arguments (modified webui-user.bat/.sh)? If yes, please write them below. Write "No" otherwise.
render: Shell
validations:
required: true
- type: textarea
id: extensions
attributes:
label: List of extensions
description: Are you using any extensions other than built-ins? If yes, provide a list; you can copy it from the "Extensions" tab. Write "No" otherwise.
validations:
required: true
- type: textarea
id: logs
attributes:
label: Console logs
description: Please provide **full** cmd/terminal logs from the moment you started the UI to the end of it, after your bug happened. If it's very long, provide a link to pastebin or similar service.
render: Shell
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: Please provide us with any relevant additional info or context.

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: WebUI Community Support
url: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions
about: Please ask and answer questions here.


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -0,0 +1,40 @@
name: Feature request
description: Suggest an idea for this project
title: "[Feature Request]: "
labels: ["enhancement"]
body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the feature you want, and check that it hasn't been implemented in a recent build/commit.
options:
- label: I have searched the existing issues and checked the recent builds/commits
required: true
- type: markdown
attributes:
value: |
*Please fill this form with as much information as possible, provide screenshots and/or illustrations of the feature if possible*
- type: textarea
id: feature
attributes:
label: What would your feature do?
description: Tell us about your feature in a very clear and simple way, and what problem it would solve
validations:
required: true
- type: textarea
id: workflow
attributes:
label: Proposed workflow
description: Please provide us with step by step information on how you'd like the feature to be accessed and used
value: |
1. Go to ....
2. Press ....
3. ...
validations:
required: true
- type: textarea
id: misc
attributes:
label: Additional information
description: Add any other context or screenshots about the feature request here.


@@ -18,8 +18,8 @@ More technical discussion about your changes go here, plus anything that a maint
List the environment you have developed / tested this on. As per the contributing page, changes should be able to work on Windows out of the box.
- OS: [e.g. Windows, Linux]
- Browser [e.g. chrome, safari]
- Graphics card [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB]
- Browser: [e.g. chrome, safari]
- Graphics card: [e.g. NVIDIA RTX 2080 8GB, AMD RX 6600 8GB]
**Screenshots or videos of your changes**

.github/workflows/on_pull_request.yaml vendored Normal file

@@ -0,0 +1,39 @@
# See https://github.com/actions/starter-workflows/blob/1067f16ad8a1eac328834e4b0ae24f7d206f810d/ci/pylint.yml for original reference file
name: Run Linting/Formatting on Pull Requests
on:
- push
- pull_request
# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpull_requestpull_request_targetbranchesbranches-ignore for syntax docs
# if you want to filter out branches, delete the `- pull_request` and uncomment these lines:
# pull_request:
# branches:
# - master
# branches-ignore:
# - development
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
- name: Set up Python 3.10
uses: actions/setup-python@v4
with:
python-version: 3.10.6
cache: pip
cache-dependency-path: |
**/requirements*txt
- name: Install PyLint
run: |
python -m pip install --upgrade pip
pip install pylint
# This lets PyLint check to see if it can resolve imports
- name: Install dependencies
run: |
export COMMANDLINE_ARGS="--skip-torch-cuda-test --exit"
python launch.py
- name: Analysing the code with pylint
run: |
pylint $(git ls-files '*.py')

.github/workflows/run_tests.yaml vendored Normal file

@@ -0,0 +1,29 @@
name: Run basic features tests on CPU with empty SD model
on:
- push
- pull_request
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout Code
uses: actions/checkout@v3
- name: Set up Python 3.10
uses: actions/setup-python@v4
with:
python-version: 3.10.6
cache: pip
cache-dependency-path: |
**/requirements*txt
- name: Run tests
run: python launch.py --tests --no-half --disable-opt-split-attention --use-cpu all --skip-torch-cuda-test
- name: Upload main app stdout-stderr
uses: actions/upload-artifact@v3
if: always()
with:
name: stdout-stderr
path: |
test/stdout.txt
test/stderr.txt

.gitignore vendored

@@ -1,5 +1,6 @@
__pycache__
*.ckpt
*.safetensors
*.pth
/ESRGAN/*
/SwinIR/*
@@ -17,6 +18,7 @@ __pycache__
/webui.settings.bat
/embeddings
/styles.csv
/params.txt
/styles.csv.bak
/webui-user.bat
/webui-user.sh
@@ -26,3 +28,8 @@ __pycache__
notification.mp3
/SwinIR
/textual_inversion
.vscode
/extensions
/test/stdout.txt
/test/stderr.txt
/cache.json

.pylintrc Normal file

@@ -0,0 +1,3 @@
# See https://pylint.pycqa.org/en/latest/user_guide/messages/message_control.html
[MESSAGES CONTROL]
disable=C,R,W,E,I

CODEOWNERS Normal file

@@ -0,0 +1,12 @@
* @AUTOMATIC1111
# if you were managing a localization and were removed from this file, this is because
# the intended way to do localizations now is via extensions. See:
# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
# Make a repo with your localization and since you are still listed as a collaborator
# you can add it to the wiki page yourself. This change is because some people complained
# the git commit log is cluttered with things unrelated to almost everyone and
# because I believe this is the best overall for the project to handle localizations almost
# entirely without my oversight.

LICENSE.txt Normal file

@@ -0,0 +1,663 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (c) 2023 AUTOMATIC1111
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.


@ -1,9 +1,7 @@
# Stable Diffusion web UI
A browser interface based on Gradio library for Stable Diffusion.
![](txt2img_Screenshot.png)
Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) wiki page for extra scripts developed by users.
![](screenshot.png)
## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
@ -11,44 +9,49 @@ Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-web
- One click install and run script (but you still must install python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention, specify parts of text that the model should pay more attention to
- a man in a ((tuxedo)) - will pay more attention to tuxedo
- a man in a (tuxedo:1.21) - alternative syntax
- select text and press ctrl+up or ctrl+down to automatically adjust attention to selected text (code contributed by anonymous user)
- Loopback, run img2img processing multiple times
- X/Y plot, a way to draw a 2 dimensional plot of images with different parameters
- X/Y/Z plot, a way to draw a 3 dimensional plot of images with different parameters
- Textual Inversion
- have as many embeddings as you want and use any names you like for them
- use multiple embeddings with different numbers of vectors per token
- works with half precision floating point numbers
- train embeddings on 8GB (also reports of 6GB working)
- Extras tab with:
- GFPGAN, neural network that fixes faces
- CodeFormer, face restoration tool as an alternative to GFPGAN
- RealESRGAN, neural network upscaler
- ESRGAN, neural network upscaler with a lot of third party models
- SwinIR, neural network upscaler
- SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
- LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
- Adjust sampler eta values (noise multiplier)
- More advanced noise setting options
- Interrupt processing at any time
- 4GB video card support (also reports of 2GB working)
- Correct seeds for batches
- Prompt length validation
- get length of prompt in tokens as you type
- get a warning after generation if some text was truncated
- Live prompt token length validation
- Generation parameters
- parameters you used to generate images are saved with that image
- in PNG chunks for PNG, in EXIF for JPEG
- can drag the image to PNG info tab to restore generation parameters and automatically copy them into UI
- can be disabled in settings
- drag and drop an image/text-parameters to promptbox
- Read Generation Parameters Button, loads parameters in promptbox to UI
- Settings page
- Running arbitrary python code from UI (must run with --allow-code to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Random artist button
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
- Can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of prompt and easily apply them via dropdown later
- Variations, a way to generate same image but with tiny differences
@ -56,27 +59,53 @@ Check the [custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-web
- CLIP interrogator, a button that tries to guess prompt from an image
- Prompt Editing, a way to change prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge two checkpoints into one
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
- separate prompts using uppercase `AND`
- also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates danbooru style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards: (add --xformers to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
- hypernetworks and embeddings options
- Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but prettier)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt.
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
- Support for dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML.
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using clip images embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's dimensions must be multiples of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Alternatively, use Google Colab:
Alternatively, use online services (like Google Colab):
- [Colab, maintained by Akaibu](https://colab.research.google.com/drive/1kw3egmSn-KgWsikYvOMjJkVDsPLjEMzl)
- [Colab, original by me, outdated](https://colab.research.google.com/drive/1Iy-xW9t1-OQWhb0hNxueGij8phCyluOh).
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH"
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Place `model.ckpt` in the `models` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
5. _*(Optional)*_ Place `GFPGANv1.4.pth` in the base directory, alongside `webui.py` (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
6. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
4. Place stable diffusion checkpoint (`model.ckpt`) in the `models/Stable-diffusion` directory (see [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) for where to get it).
5. Run `webui-user.bat` from Windows Explorer as normal, non-administrator, user.
### Automatic Installation on Linux
1. Install the dependencies:
@ -104,18 +133,30 @@ Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).
## Credits
Licenses for borrowed code can be found in `Settings -> Licenses` screen, and also in `html/licenses.html` file.
- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Doggettx - Cross Attention layer optimization - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Rinon Gal - Textual Inversion - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)

File diff suppressed because it is too large


@ -0,0 +1,72 @@
model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: modules.xlmr.BertSeriesModelWithTransformation
      params:
        name: "XLMR-Large"


@ -0,0 +1,98 @@
# File modified by authors of InstructPix2Pix from original (https://github.com/CompVis/stable-diffusion).
# See more details in LICENSE.

model:
  base_learning_rate: 1.0e-04
  target: modules.models.diffusion.ddpm_edit.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: edited
    cond_stage_key: edit
    # image_size: 64
    # image_size: 32
    image_size: 16
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: hybrid
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: false

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 0 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 8
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 128
    num_workers: 1
    wrap: false
    validation:
      target: edit_dataset.EditDataset
      params:
        path: data/clip-filtered-dataset
        cache_dir: data/
        cache_name: data_10k
        split: val
        min_text_sim: 0.2
        min_image_sim: 0.75
        min_direction_sim: 0.2
        max_samples_per_prompt: 1
        min_resize_res: 512
        max_resize_res: 512
        crop_res: 512
        output_as_edit: False
        real_input: True

configs/v1-inference.yaml (new file, 70 lines)

@ -0,0 +1,70 @@
model:
  base_learning_rate: 1.0e-04
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 10000 ]
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
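
Note: configs like the one above are consumed with OmegaConf plus ldm's instantiate_from_config, the same pattern the LDSR loader elsewhere in this diff uses. A minimal sketch, assuming an ldm checkout on PYTHONPATH; the paths are placeholders:

# Minimal sketch: build a LatentDiffusion model from v1-inference.yaml.
import torch
from omegaconf import OmegaConf
from ldm.util import instantiate_from_config

config = OmegaConf.load("configs/v1-inference.yaml")
model = instantiate_from_config(config.model)  # reads model.target / model.params

pl_sd = torch.load("models/Stable-diffusion/model.ckpt", map_location="cpu")
sd = pl_sd["state_dict"] if "state_dict" in pl_sd else pl_sd  # same fallback as the LDSR loader
missing, unexpected = model.load_state_dict(sd, strict=False)
print(f"{len(missing)} missing / {len(unexpected)} unexpected keys")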


@ -0,0 +1,70 @@
model:
  base_learning_rate: 7.5e-05
  target: ldm.models.diffusion.ddpm.LatentInpaintDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: "jpg"
    cond_stage_key: "txt"
    image_size: 64
    channels: 4
    cond_stage_trainable: false   # Note: different from the one we trained before
    conditioning_key: hybrid   # important
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    finetune_keys: null

    scheduler_config: # 10000 warmup steps
      target: ldm.lr_scheduler.LambdaLinearScheduler
      params:
        warm_up_steps: [ 2500 ] # NOTE for resuming. use 10000 if starting from scratch
        cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
        f_start: [ 1.e-6 ]
        f_max: [ 1. ]
        f_min: [ 1. ]

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 9  # 4 data + 4 downscaled image + 1 mask
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
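
Note: both hybrid-conditioned configs above change the UNet's in_channels because extra latents are concatenated channel-wise with the noised latent: the instruct-pix2pix config uses 8 (4 noise + 4 edited-image latent) and the inpainting config uses 9 (4 data + 4 downscaled image + 1 mask, per its own comment). A rough illustration of that concatenation, with made-up tensor sizes:

# Illustrative only: shows why in_channels is 8 or 9 for the hybrid models.
import torch

b, h, w = 1, 64, 64
noisy_latent = torch.randn(b, 4, h, w)        # the 4 "data" channels being denoised

# inpainting: 4 + 4 + 1 = 9 input channels
masked_image_latent = torch.randn(b, 4, h, w)
mask = torch.rand(b, 1, h, w)
unet_in = torch.cat([noisy_latent, masked_image_latent, mask], dim=1)
assert unet_in.shape[1] == 9

# instruct-pix2pix: 4 + 4 = 8 input channels
edit_image_latent = torch.randn(b, 4, h, w)
unet_in = torch.cat([noisy_latent, edit_image_latent], dim=1)
assert unet_in.shape[1] == 8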


@ -3,9 +3,9 @@ channels:
- pytorch
- defaults
dependencies:
- python=3.8.5
- pip=20.3
- python=3.10
- pip=22.2.2
- cudatoolkit=11.3
- pytorch=1.11.0
- torchvision=0.12.0
- numpy=1.19.2
- pytorch=1.12.1
- torchvision=0.13.1
- numpy=1.23.1


@ -1,6 +1,6 @@
import os
import gc
import time
import warnings
import numpy as np
import torch
@ -8,27 +8,47 @@ import torchvision
from PIL import Image
from einops import rearrange, repeat
from omegaconf import OmegaConf
import safetensors.torch
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.util import instantiate_from_config, ismap
from modules import shared, sd_hijack
warnings.filterwarnings("ignore", category=UserWarning)
cached_ldsr_model: torch.nn.Module = None
# Create LDSR Class
class LDSR:
def load_model_from_config(self, half_attention):
global cached_ldsr_model
if shared.opts.ldsr_cached and cached_ldsr_model is not None:
print("Loading model from cache")
model: torch.nn.Module = cached_ldsr_model
else:
print(f"Loading model from {self.modelPath}")
_, extension = os.path.splitext(self.modelPath)
if extension.lower() == ".safetensors":
pl_sd = safetensors.torch.load_file(self.modelPath, device="cpu")
else:
pl_sd = torch.load(self.modelPath, map_location="cpu")
sd = pl_sd["state_dict"]
sd = pl_sd["state_dict"] if "state_dict" in pl_sd else pl_sd
config = OmegaConf.load(self.yamlPath)
model = instantiate_from_config(config.model)
config.model.target = "ldm.models.diffusion.ddpm.LatentDiffusionV1"
model: torch.nn.Module = instantiate_from_config(config.model)
model.load_state_dict(sd, strict=False)
model.cuda()
model = model.to(shared.device)
if half_attention:
model = model.half()
if shared.cmd_opts.opt_channelslast:
model = model.to(memory_format=torch.channels_last)
sd_hijack.model_hijack.hijack(model) # apply optimization
model.eval()
if shared.opts.ldsr_cached:
cached_ldsr_model = model
return {"model": model}
def __init__(self, model_path, yaml_path):
@ -93,6 +113,7 @@ class LDSR:
down_sample_method = 'Lanczos'
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
im_og = image
@ -101,8 +122,8 @@ class LDSR:
down_sample_rate = target_scale / 4
wd = width_og * down_sample_rate
hd = height_og * down_sample_rate
width_downsampled_pre = int(wd)
height_downsampled_pre = int(hd)
width_downsampled_pre = int(np.ceil(wd))
height_downsampled_pre = int(np.ceil(hd))
if down_sample_rate != 1:
print(
@ -110,7 +131,12 @@ class LDSR:
im_og = im_og.resize((width_downsampled_pre, height_downsampled_pre), Image.LANCZOS)
else:
print(f"Down sample rate is 1 from {target_scale} / 4 (Not downsampling)")
logs = self.run(model["model"], im_og, diffusion_steps, eta)
# pad width and height to multiples of 64, pads with the edge values of image to avoid artifacts
pad_w, pad_h = np.max(((2, 2), np.ceil(np.array(im_og.size) / 64).astype(int)), axis=0) * 64 - im_og.size
im_padded = Image.fromarray(np.pad(np.array(im_og), ((0, pad_h), (0, pad_w), (0, 0)), mode='edge'))
logs = self.run(model["model"], im_padded, diffusion_steps, eta)
sample = logs["sample"]
sample = sample.detach().cpu()
@ -120,9 +146,14 @@ class LDSR:
sample = np.transpose(sample, (0, 2, 3, 1))
a = Image.fromarray(sample[0])
# remove padding
a = a.crop((0, 0) + tuple(np.array(im_og.size) * 4))
del model
gc.collect()
if torch.cuda.is_available():
torch.cuda.empty_cache()
return a
@ -137,7 +168,7 @@ def get_cond(selected_path):
c = rearrange(c, '1 c h w -> 1 h w c')
c = 2. * c - 1.
c = c.to(torch.device("cuda"))
c = c.to(shared.device)
example["LR_image"] = c
example["image"] = c_up


@ -0,0 +1,6 @@
import os
from modules import paths


def preload(parser):
    parser.add_argument("--ldsr-models-path", type=str, help="Path to directory with LDSR model file(s).", default=os.path.join(paths.models_path, 'LDSR'))


@ -5,15 +5,14 @@ import traceback
from basicsr.utils.download_util import load_file_from_url
from modules.upscaler import Upscaler, UpscalerData
from modules.ldsr_model_arch import LDSR
from modules import shared
from modules.paths import models_path
from ldsr_model_arch import LDSR
from modules import shared, script_callbacks
import sd_hijack_autoencoder, sd_hijack_ddpm_v1
class UpscalerLDSR(Upscaler):
def __init__(self, user_path):
self.name = "LDSR"
self.model_path = os.path.join(models_path, self.name)
self.user_path = user_path
self.model_url = "https://heibox.uni-heidelberg.de/f/578df07c8fc04ffbadf3/?dl=1"
self.yaml_url = "https://heibox.uni-heidelberg.de/f/31a76b13ea27482981b4/?dl=1"
@ -26,6 +25,7 @@ class UpscalerLDSR(Upscaler):
yaml_path = os.path.join(self.model_path, "project.yaml")
old_model_path = os.path.join(self.model_path, "model.pth")
new_model_path = os.path.join(self.model_path, "model.ckpt")
safetensors_model_path = os.path.join(self.model_path, "model.safetensors")
if os.path.exists(yaml_path):
statinfo = os.stat(yaml_path)
if statinfo.st_size >= 10485760:
@ -34,6 +34,9 @@ class UpscalerLDSR(Upscaler):
if os.path.exists(old_model_path):
print("Renaming model from model.pth to model.ckpt")
os.rename(old_model_path, new_model_path)
if os.path.exists(safetensors_model_path):
model = safetensors_model_path
else:
model = load_file_from_url(url=self.model_url, model_dir=self.model_path,
file_name="model.ckpt", progress=True)
yaml = load_file_from_url(url=self.yaml_url, model_dir=self.model_path,
@ -54,3 +57,13 @@ class UpscalerLDSR(Upscaler):
return img
ddim_steps = shared.opts.ldsr_steps
return ldsr.super_resolution(img, ddim_steps, self.scale)
def on_ui_settings():
import gradio as gr
shared.opts.add_option("ldsr_steps", shared.OptionInfo(100, "LDSR processing steps. Lower = faster", gr.Slider, {"minimum": 1, "maximum": 200, "step": 1}, section=('upscaling', "Upscaling")))
shared.opts.add_option("ldsr_cached", shared.OptionInfo(False, "Cache LDSR model in memory", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling")))
script_callbacks.on_ui_settings(on_ui_settings)


@ -0,0 +1,286 @@
# The content of this file comes from the ldm/models/autoencoder.py file of the compvis/stable-diffusion repo
# The VQModel & VQModelInterface were subsequently removed from ldm/models/autoencoder.py when we moved to the stability-ai/stablediffusion repo
# As the LDSR upscaler relies on VQModel & VQModelInterface, the hijack aims to put them back into the ldm.models.autoencoder
import numpy as np
import torch
import pytorch_lightning as pl
import torch.nn.functional as F
from contextlib import contextmanager
from packaging import version
from torch.optim.lr_scheduler import LambdaLR
from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
from ldm.modules.diffusionmodules.model import Encoder, Decoder
from ldm.modules.ema import LitEma
from ldm.util import instantiate_from_config

import ldm.models.autoencoder


class VQModel(pl.LightningModule):
    def __init__(self,
                 ddconfig,
                 lossconfig,
                 n_embed,
                 embed_dim,
                 ckpt_path=None,
                 ignore_keys=[],
                 image_key="image",
                 colorize_nlabels=None,
                 monitor=None,
                 batch_resize_range=None,
                 scheduler_config=None,
                 lr_g_factor=1.0,
                 remap=None,
                 sane_index_shape=False,  # tell vector quantizer to return indices as bhw
                 use_ema=False
                 ):
        super().__init__()
        self.embed_dim = embed_dim
        self.n_embed = n_embed
        self.image_key = image_key
        self.encoder = Encoder(**ddconfig)
        self.decoder = Decoder(**ddconfig)
        self.loss = instantiate_from_config(lossconfig)
        self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25,
                                        remap=remap,
                                        sane_index_shape=sane_index_shape)
        self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1)
        self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1)
        if colorize_nlabels is not None:
            assert type(colorize_nlabels) == int
            self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1))
        if monitor is not None:
            self.monitor = monitor
        self.batch_resize_range = batch_resize_range
        if self.batch_resize_range is not None:
            print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.")

        self.use_ema = use_ema
        if self.use_ema:
            self.model_ema = LitEma(self)
            print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.")

        if ckpt_path is not None:
            self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys)
        self.scheduler_config = scheduler_config
        self.lr_g_factor = lr_g_factor

    @contextmanager
    def ema_scope(self, context=None):
        if self.use_ema:
            self.model_ema.store(self.parameters())
            self.model_ema.copy_to(self)
            if context is not None:
                print(f"{context}: Switched to EMA weights")
        try:
            yield None
        finally:
            if self.use_ema:
                self.model_ema.restore(self.parameters())
                if context is not None:
                    print(f"{context}: Restored training weights")

    def init_from_ckpt(self, path, ignore_keys=list()):
        sd = torch.load(path, map_location="cpu")["state_dict"]
        keys = list(sd.keys())
        for k in keys:
            for ik in ignore_keys:
                if k.startswith(ik):
                    print("Deleting key {} from state_dict.".format(k))
                    del sd[k]
        missing, unexpected = self.load_state_dict(sd, strict=False)
        print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys")
        if len(missing) > 0:
            print(f"Missing Keys: {missing}")
            print(f"Unexpected Keys: {unexpected}")

    def on_train_batch_end(self, *args, **kwargs):
        if self.use_ema:
            self.model_ema(self)

    def encode(self, x):
        h = self.encoder(x)
        h = self.quant_conv(h)
        quant, emb_loss, info = self.quantize(h)
        return quant, emb_loss, info

    def encode_to_prequant(self, x):
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, quant):
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec

    def decode_code(self, code_b):
        quant_b = self.quantize.embed_code(code_b)
        dec = self.decode(quant_b)
        return dec

    def forward(self, input, return_pred_indices=False):
        quant, diff, (_, _, ind) = self.encode(input)
        dec = self.decode(quant)
        if return_pred_indices:
            return dec, diff, ind
        return dec, diff

    def get_input(self, batch, k):
        x = batch[k]
        if len(x.shape) == 3:
            x = x[..., None]
        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
        if self.batch_resize_range is not None:
            lower_size = self.batch_resize_range[0]
            upper_size = self.batch_resize_range[1]
            if self.global_step <= 4:
                # do the first few batches with max size to avoid later oom
                new_resize = upper_size
            else:
                new_resize = np.random.choice(np.arange(lower_size, upper_size + 16, 16))
            if new_resize != x.shape[2]:
                x = F.interpolate(x, size=new_resize, mode="bicubic")
            x = x.detach()
        return x

    def training_step(self, batch, batch_idx, optimizer_idx):
        # https://github.com/pytorch/pytorch/issues/37142
        # try not to fool the heuristics
        x = self.get_input(batch, self.image_key)
        xrec, qloss, ind = self(x, return_pred_indices=True)

        if optimizer_idx == 0:
            # autoencode
            aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
                                            last_layer=self.get_last_layer(), split="train",
                                            predicted_indices=ind)
            self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True)
            return aeloss

        if optimizer_idx == 1:
            # discriminator
            discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step,
                                                last_layer=self.get_last_layer(), split="train")
            self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True)
            return discloss

    def validation_step(self, batch, batch_idx):
        log_dict = self._validation_step(batch, batch_idx)
        with self.ema_scope():
            log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema")
        return log_dict

    def _validation_step(self, batch, batch_idx, suffix=""):
        x = self.get_input(batch, self.image_key)
        xrec, qloss, ind = self(x, return_pred_indices=True)
        aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0,
                                        self.global_step,
                                        last_layer=self.get_last_layer(),
                                        split="val" + suffix,
                                        predicted_indices=ind
                                        )
        discloss, log_dict_disc = self.loss(qloss, x, xrec, 1,
                                            self.global_step,
                                            last_layer=self.get_last_layer(),
                                            split="val" + suffix,
                                            predicted_indices=ind
                                            )
        rec_loss = log_dict_ae[f"val{suffix}/rec_loss"]
        self.log(f"val{suffix}/rec_loss", rec_loss,
                 prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
        self.log(f"val{suffix}/aeloss", aeloss,
                 prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True)
        if version.parse(pl.__version__) >= version.parse('1.4.0'):
            del log_dict_ae[f"val{suffix}/rec_loss"]
        self.log_dict(log_dict_ae)
        self.log_dict(log_dict_disc)
        return self.log_dict

    def configure_optimizers(self):
        lr_d = self.learning_rate
        lr_g = self.lr_g_factor * self.learning_rate
        print("lr_d", lr_d)
        print("lr_g", lr_g)
        opt_ae = torch.optim.Adam(list(self.encoder.parameters()) +
                                  list(self.decoder.parameters()) +
                                  list(self.quantize.parameters()) +
                                  list(self.quant_conv.parameters()) +
                                  list(self.post_quant_conv.parameters()),
                                  lr=lr_g, betas=(0.5, 0.9))
        opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(),
                                    lr=lr_d, betas=(0.5, 0.9))

        if self.scheduler_config is not None:
            scheduler = instantiate_from_config(self.scheduler_config)

            print("Setting up LambdaLR scheduler...")
            scheduler = [
                {
                    'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule),
                    'interval': 'step',
                    'frequency': 1
                },
                {
                    'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule),
                    'interval': 'step',
                    'frequency': 1
                },
            ]
            return [opt_ae, opt_disc], scheduler
        return [opt_ae, opt_disc], []

    def get_last_layer(self):
        return self.decoder.conv_out.weight

    def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs):
        log = dict()
        x = self.get_input(batch, self.image_key)
        x = x.to(self.device)
        if only_inputs:
            log["inputs"] = x
            return log
        xrec, _ = self(x)
        if x.shape[1] > 3:
            # colorize with random projection
            assert xrec.shape[1] > 3
            x = self.to_rgb(x)
            xrec = self.to_rgb(xrec)
        log["inputs"] = x
        log["reconstructions"] = xrec
        if plot_ema:
            with self.ema_scope():
                xrec_ema, _ = self(x)
                if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema)
                log["reconstructions_ema"] = xrec_ema
        return log

    def to_rgb(self, x):
        assert self.image_key == "segmentation"
        if not hasattr(self, "colorize"):
            self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x))
        x = F.conv2d(x, weight=self.colorize)
        x = 2. * (x - x.min()) / (x.max() - x.min()) - 1.
        return x


class VQModelInterface(VQModel):
    def __init__(self, embed_dim, *args, **kwargs):
        super().__init__(embed_dim=embed_dim, *args, **kwargs)
        self.embed_dim = embed_dim

    def encode(self, x):
        h = self.encoder(x)
        h = self.quant_conv(h)
        return h

    def decode(self, h, force_not_quantize=False):
        # also go through quantization layer
        if not force_not_quantize:
            quant, emb_loss, info = self.quantize(h)
        else:
            quant = h
        quant = self.post_quant_conv(quant)
        dec = self.decoder(quant)
        return dec


# put the classes back where the rest of the codebase expects to find them
setattr(ldm.models.autoencoder, "VQModel", VQModel)
setattr(ldm.models.autoencoder, "VQModelInterface", VQModelInterface)

File diff suppressed because it is too large


@ -0,0 +1,26 @@
from modules import extra_networks, shared
import lora


class ExtraNetworkLora(extra_networks.ExtraNetwork):
    def __init__(self):
        super().__init__('lora')

    def activate(self, p, params_list):
        additional = shared.opts.sd_lora

        if additional != "" and additional in lora.available_loras and len([x for x in params_list if x.items[0] == additional]) == 0:
            p.all_prompts = [x + f"<lora:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
            params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))

        names = []
        multipliers = []
        for params in params_list:
            assert len(params.items) > 0

            names.append(params.items[0])
            multipliers.append(float(params.items[1]) if len(params.items) > 1 else 1.0)

        lora.load_loras(names, multipliers)

    def deactivate(self, p):
        pass
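
Note: activate() receives one ExtraNetworkParams per `<lora:...>` tag found in the prompt; items[0] is the Lora name and items[1] the optional multiplier. A hedged sketch of how one tag string maps onto that structure (the actual tag parsing lives outside this diff, in the webui's extra_networks module):

# Illustrative parse of a prompt tag into the items list activate() consumes.
tag = "lora:myStyle:0.8"              # from "<lora:myStyle:0.8>" in the prompt
name, *rest = tag.split(":")[1:]      # -> items ["myStyle", "0.8"]
multiplier = float(rest[0]) if rest else 1.0
print(name, multiplier)               # myStyle 0.8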


@ -0,0 +1,207 @@
import glob
import os
import re
import torch

from modules import shared, devices, sd_models

re_digits = re.compile(r"\d+")
re_unet_down_blocks = re.compile(r"lora_unet_down_blocks_(\d+)_attentions_(\d+)_(.+)")
re_unet_mid_blocks = re.compile(r"lora_unet_mid_block_attentions_(\d+)_(.+)")
re_unet_up_blocks = re.compile(r"lora_unet_up_blocks_(\d+)_attentions_(\d+)_(.+)")
re_text_block = re.compile(r"lora_te_text_model_encoder_layers_(\d+)_(.+)")


def convert_diffusers_name_to_compvis(key):
    def match(match_list, regex):
        r = re.match(regex, key)
        if not r:
            return False

        match_list.clear()
        match_list.extend([int(x) if re.match(re_digits, x) else x for x in r.groups()])
        return True

    m = []

    if match(m, re_unet_down_blocks):
        return f"diffusion_model_input_blocks_{1 + m[0] * 3 + m[1]}_1_{m[2]}"

    if match(m, re_unet_mid_blocks):
        return f"diffusion_model_middle_block_1_{m[1]}"

    if match(m, re_unet_up_blocks):
        return f"diffusion_model_output_blocks_{m[0] * 3 + m[1]}_1_{m[2]}"

    if match(m, re_text_block):
        return f"transformer_text_model_encoder_layers_{m[0]}_{m[1]}"

    return key


class LoraOnDisk:
    def __init__(self, name, filename):
        self.name = name
        self.filename = filename


class LoraModule:
    def __init__(self, name):
        self.name = name
        self.multiplier = 1.0
        self.modules = {}
        self.mtime = None


class LoraUpDownModule:
    def __init__(self):
        self.up = None
        self.down = None
        self.alpha = None


def assign_lora_names_to_compvis_modules(sd_model):
    lora_layer_mapping = {}

    for name, module in shared.sd_model.cond_stage_model.wrapped.named_modules():
        lora_name = name.replace(".", "_")
        lora_layer_mapping[lora_name] = module
        module.lora_layer_name = lora_name

    for name, module in shared.sd_model.model.named_modules():
        lora_name = name.replace(".", "_")
        lora_layer_mapping[lora_name] = module
        module.lora_layer_name = lora_name

    sd_model.lora_layer_mapping = lora_layer_mapping


def load_lora(name, filename):
    lora = LoraModule(name)
    lora.mtime = os.path.getmtime(filename)

    sd = sd_models.read_state_dict(filename)

    keys_failed_to_match = []

    for key_diffusers, weight in sd.items():
        fullkey = convert_diffusers_name_to_compvis(key_diffusers)
        key, lora_key = fullkey.split(".", 1)

        sd_module = shared.sd_model.lora_layer_mapping.get(key, None)
        if sd_module is None:
            keys_failed_to_match.append(key_diffusers)
            continue

        lora_module = lora.modules.get(key, None)
        if lora_module is None:
            lora_module = LoraUpDownModule()
            lora.modules[key] = lora_module

        if lora_key == "alpha":
            lora_module.alpha = weight.item()
            continue

        if type(sd_module) == torch.nn.Linear:
            module = torch.nn.Linear(weight.shape[1], weight.shape[0], bias=False)
        elif type(sd_module) == torch.nn.Conv2d:
            module = torch.nn.Conv2d(weight.shape[1], weight.shape[0], (1, 1), bias=False)
        else:
            assert False, f'Lora layer {key_diffusers} matched a layer with unsupported type: {type(sd_module).__name__}'

        with torch.no_grad():
            module.weight.copy_(weight)

        module.to(device=devices.device, dtype=devices.dtype)

        if lora_key == "lora_up.weight":
            lora_module.up = module
        elif lora_key == "lora_down.weight":
            lora_module.down = module
        else:
            assert False, f'Bad Lora layer name: {key_diffusers} - must end in lora_up.weight, lora_down.weight or alpha'

    if len(keys_failed_to_match) > 0:
        print(f"Failed to match keys when loading Lora {filename}: {keys_failed_to_match}")

    return lora


def load_loras(names, multipliers=None):
    already_loaded = {}

    for lora in loaded_loras:
        if lora.name in names:
            already_loaded[lora.name] = lora

    loaded_loras.clear()

    loras_on_disk = [available_loras.get(name, None) for name in names]
    if any([x is None for x in loras_on_disk]):
        list_available_loras()

        loras_on_disk = [available_loras.get(name, None) for name in names]

    for i, name in enumerate(names):
        lora = already_loaded.get(name, None)
        lora_on_disk = loras_on_disk[i]

        if lora_on_disk is not None:
            if lora is None or os.path.getmtime(lora_on_disk.filename) > lora.mtime:
                lora = load_lora(name, lora_on_disk.filename)

        if lora is None:
            print(f"Couldn't find Lora with name {name}")
            continue

        lora.multiplier = multipliers[i] if multipliers else 1.0
        loaded_loras.append(lora)


def lora_forward(module, input, res):
    if len(loaded_loras) == 0:
        return res

    lora_layer_name = getattr(module, 'lora_layer_name', None)
    for lora in loaded_loras:
        module = lora.modules.get(lora_layer_name, None)
        if module is not None:
            if shared.opts.lora_apply_to_outputs and res.shape == input.shape:
                res = res + module.up(module.down(res)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)
            else:
                res = res + module.up(module.down(input)) * lora.multiplier * (module.alpha / module.up.weight.shape[1] if module.alpha else 1.0)

    return res


def lora_Linear_forward(self, input):
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))


def lora_Conv2d_forward(self, input):
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))


def list_available_loras():
    available_loras.clear()

    os.makedirs(shared.cmd_opts.lora_dir, exist_ok=True)

    candidates = \
        glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.pt'), recursive=True) + \
        glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.safetensors'), recursive=True) + \
        glob.glob(os.path.join(shared.cmd_opts.lora_dir, '**/*.ckpt'), recursive=True)

    for filename in sorted(candidates):
        if os.path.isdir(filename):
            continue

        name = os.path.splitext(os.path.basename(filename))[0]
        available_loras[name] = LoraOnDisk(name, filename)


available_loras = {}
loaded_loras = []

list_available_loras()
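
Note: lora_forward above implements the usual low-rank update: the patched layer returns res + up(down(x)) * multiplier * (alpha / rank), where rank is up.weight.shape[1]. A minimal self-contained check (made-up dimensions, no webui imports) that this is the same as adding up.weight @ down.weight to the base weight:

# Minimal check of the low-rank update used in lora_forward.
import torch

d, rank, mult, alpha = 16, 4, 0.8, 4.0
base = torch.nn.Linear(d, d, bias=False)
down = torch.nn.Linear(d, rank, bias=False)   # A: d -> rank
up = torch.nn.Linear(rank, d, bias=False)     # B: rank -> d
scale = mult * (alpha / up.weight.shape[1])   # alpha / rank, as in lora_forward

x = torch.randn(2, d)
res = base(x) + up(down(x)) * scale           # what the hijacked forward computes

merged = torch.nn.Linear(d, d, bias=False)
with torch.no_grad():
    merged.weight.copy_(base.weight + up.weight @ down.weight * scale)
assert torch.allclose(res, merged(x), atol=1e-5)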


@ -0,0 +1,6 @@
import os
from modules import paths


def preload(parser):
    parser.add_argument("--lora-dir", type=str, help="Path to directory with Lora networks.", default=os.path.join(paths.models_path, 'Lora'))


@ -0,0 +1,38 @@
import torch
import gradio as gr

import lora
import extra_networks_lora
import ui_extra_networks_lora
from modules import script_callbacks, ui_extra_networks, extra_networks, shared


def unload():
    torch.nn.Linear.forward = torch.nn.Linear_forward_before_lora
    torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_lora


def before_ui():
    ui_extra_networks.register_page(ui_extra_networks_lora.ExtraNetworksPageLora())
    extra_networks.register_extra_network(extra_networks_lora.ExtraNetworkLora())


if not hasattr(torch.nn, 'Linear_forward_before_lora'):
    torch.nn.Linear_forward_before_lora = torch.nn.Linear.forward

if not hasattr(torch.nn, 'Conv2d_forward_before_lora'):
    torch.nn.Conv2d_forward_before_lora = torch.nn.Conv2d.forward

torch.nn.Linear.forward = lora.lora_Linear_forward
torch.nn.Conv2d.forward = lora.lora_Conv2d_forward

script_callbacks.on_model_loaded(lora.assign_lora_names_to_compvis_modules)
script_callbacks.on_script_unloaded(unload)
script_callbacks.on_before_ui(before_ui)


shared.options_templates.update(shared.options_section(('extra_networks', "Extra Networks"), {
    "sd_lora": shared.OptionInfo("None", "Add Lora to prompt", gr.Dropdown, lambda: {"choices": [""] + [x for x in lora.available_loras]}, refresh=lora.list_available_loras),
    "lora_apply_to_outputs": shared.OptionInfo(False, "Apply Lora to outputs rather than inputs when possible (experimental)"),
}))
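
Note: the hasattr guard above matters: it saves the pristine forward exactly once, so reloading the script never wraps the wrapper, and unload() can always restore the original. The same save-once / wrap / restore idiom in isolation, on a toy class:

# Toy illustration of the patching idiom used by the script above.
class Layer:
    def forward(self, x):
        return x + 1

if not hasattr(Layer, 'forward_before_patch'):   # save the original only once
    Layer.forward_before_patch = Layer.forward

def patched_forward(self, x):
    return Layer.forward_before_patch(self, x) * 2

Layer.forward = patched_forward                  # install the wrapper
assert Layer().forward(1) == 4

Layer.forward = Layer.forward_before_patch       # unload(): restore the original
assert Layer().forward(1) == 2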


@ -0,0 +1,37 @@
import json
import os
import lora

from modules import shared, ui_extra_networks


class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
    def __init__(self):
        super().__init__('Lora')

    def refresh(self):
        lora.list_available_loras()

    def list_items(self):
        for name, lora_on_disk in lora.available_loras.items():
            path, ext = os.path.splitext(lora_on_disk.filename)
            previews = [path + ".png", path + ".preview.png"]

            preview = None
            for file in previews:
                if os.path.isfile(file):
                    preview = self.link_preview(file)
                    break

            yield {
                "name": name,
                "filename": path,
                "preview": preview,
                "search_term": self.search_terms_from_path(lora_on_disk.filename),
                "prompt": json.dumps(f"<lora:{name}:") + " + opts.extra_networks_default_multiplier + " + json.dumps(">"),
                "local_preview": path + ".png",
            }

    def allowed_directories_for_previews(self):
        return [shared.cmd_opts.lora_dir]


@ -0,0 +1,6 @@
import os
from modules import paths


def preload(parser):
    parser.add_argument("--scunet-models-path", type=str, help="Path to directory with ScuNET model file(s).", default=os.path.join(paths.models_path, 'ScuNET'))


@ -9,14 +9,12 @@ from basicsr.utils.download_util import load_file_from_url
import modules.upscaler
from modules import devices, modelloader
from modules.paths import models_path
from modules.scunet_model_arch import SCUNet as net
from scunet_model_arch import SCUNet as net
class UpscalerScuNET(modules.upscaler.Upscaler):
def __init__(self, dirname):
self.name = "ScuNET"
self.model_path = os.path.join(models_path, self.name)
self.model_name = "ScuNET GAN"
self.model_name2 = "ScuNET PSNR"
self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_color_real_gan.pth"
@ -51,14 +49,13 @@ class UpscalerScuNET(modules.upscaler.Upscaler):
if model is None:
return img
device = devices.device_scunet
device = devices.get_device_for('scunet')
img = np.array(img)
img = img[:, :, ::-1]
img = np.moveaxis(img, 2, 0) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(device)
img = img.to(device)
with torch.no_grad():
output = model(img)
output = output.squeeze().float().cpu().clamp_(0, 1).numpy()
@ -69,7 +66,7 @@ class UpscalerScuNET(modules.upscaler.Upscaler):
return PIL.Image.fromarray(output, 'RGB')
def load_model(self, path: str):
device = devices.device_scunet
device = devices.get_device_for('scunet')
if "http" in path:
filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name,
progress=True)


@ -40,7 +40,7 @@ class WMSA(nn.Module):
Returns:
attn_mask: should be (1 1 w p p),
"""
# supporting sqaure.
# supporting square.
attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device)
if self.type == 'W':
return attn_mask
@ -65,7 +65,7 @@ class WMSA(nn.Module):
x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size)
h_windows = x.size(1)
w_windows = x.size(2)
# sqaure validation
# square validation
# assert h_windows == w_windows
x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size)


@ -0,0 +1,6 @@
import os
from modules import paths


def preload(parser):
    parser.add_argument("--swinir-models-path", type=str, help="Path to directory with SwinIR model file(s).", default=os.path.join(paths.models_path, 'SwinIR'))


@ -7,15 +7,14 @@ from PIL import Image
from basicsr.utils.download_util import load_file_from_url
from tqdm import tqdm
from modules import modelloader
from modules.paths import models_path
from modules.shared import cmd_opts, opts, device
from modules.swinir_model_arch import SwinIR as net
from modules import modelloader, devices, script_callbacks, shared
from modules.shared import cmd_opts, opts, state
from swinir_model_arch import SwinIR as net
from swinir_model_arch_v2 import Swin2SR as net2
from modules.upscaler import Upscaler, UpscalerData
precision_scope = (
torch.autocast if cmd_opts.precision == "autocast" else contextlib.nullcontext
)
device_swinir = devices.get_device_for('swinir')
class UpscalerSwinIR(Upscaler):
@ -25,7 +24,6 @@ class UpscalerSwinIR(Upscaler):
"/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR" \
"-L_x4_GAN.pth "
self.model_name = "SwinIR 4x"
self.model_path = os.path.join(models_path, self.name)
self.user_path = dirname
super().__init__()
scalers = []
@ -43,7 +41,7 @@ class UpscalerSwinIR(Upscaler):
model = self.load_model(model_file)
if model is None:
return img
model = model.to(device)
model = model.to(device_swinir, dtype=devices.dtype)
img = upscale(img, model)
try:
torch.cuda.empty_cache()
@ -59,6 +57,22 @@ class UpscalerSwinIR(Upscaler):
filename = path
if filename is None or not os.path.exists(filename):
return None
if filename.endswith(".v2.pth"):
model = net2(
upscale=scale,
in_chans=3,
img_size=64,
window_size=8,
img_range=1.0,
depths=[6, 6, 6, 6, 6, 6],
embed_dim=180,
num_heads=[6, 6, 6, 6, 6, 6],
mlp_ratio=2,
upsampler="nearest+conv",
resi_connection="1conv",
)
params = None
else:
model = net(
upscale=scale,
in_chans=3,
@ -72,28 +86,34 @@ class UpscalerSwinIR(Upscaler):
upsampler="nearest+conv",
resi_connection="3conv",
)
params = "params_ema"
pretrained_model = torch.load(filename)
model.load_state_dict(pretrained_model["params_ema"], strict=True)
if not cmd_opts.no_half:
model = model.half()
if params is not None:
model.load_state_dict(pretrained_model[params], strict=True)
else:
model.load_state_dict(pretrained_model, strict=True)
return model
def upscale(
img,
model,
tile=opts.SWIN_tile,
tile_overlap=opts.SWIN_tile_overlap,
tile=None,
tile_overlap=None,
window_size=8,
scale=4,
):
tile = tile or opts.SWIN_tile
tile_overlap = tile_overlap or opts.SWIN_tile_overlap
img = np.array(img)
img = img[:, :, ::-1]
img = np.moveaxis(img, 2, 0) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(device)
with torch.no_grad(), precision_scope("cuda"):
img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype)
with torch.no_grad(), devices.autocast():
_, _, h_old, w_old = img.size()
h_pad = (h_old // window_size + 1) * window_size - h_old
w_pad = (w_old // window_size + 1) * window_size - w_old
@ -120,12 +140,18 @@ def inference(img, model, tile, tile_overlap, window_size, scale):
stride = tile - tile_overlap
h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
E = torch.zeros(b, c, h * sf, w * sf, dtype=torch.half, device=device).type_as(img)
W = torch.zeros_like(E, dtype=torch.half, device=device)
E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img)
W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir)
with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar:
for h_idx in h_idx_list:
if state.interrupted or state.skipped:
break
for w_idx in w_idx_list:
if state.interrupted or state.skipped:
break
in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
out_patch = model(in_patch)
out_patch_mask = torch.ones_like(out_patch)
@ -140,3 +166,13 @@ def inference(img, model, tile, tile_overlap, window_size, scale):
output = E.div_(W)
return output
def on_ui_settings():
import gradio as gr
shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")))
shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling")))
script_callbacks.on_ui_settings(on_ui_settings)
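
Note: the tiled inference above covers the image with fixed-size windows whose starting offsets advance by tile - tile_overlap, with a final window pinned to the far edge; overlapping outputs are summed into E and normalized by the mask accumulator W. A worked example of the index lists, assuming h = 512, tile = 192, tile_overlap = 8:

# Worked example of the tile start indices used by the SwinIR inference loop.
h, tile, tile_overlap = 512, 192, 8
stride = tile - tile_overlap                          # 184
h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
print(h_idx_list)   # [0, 184, 320] -> tiles [0,192), [184,376), [320,512) cover everything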


@ -166,7 +166,7 @@ class SwinTransformerBlock(nn.Module):
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resulotion.
input_resolution (tuple[int]): Input resolution.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.

File diff suppressed because it is too large


@ -0,0 +1,110 @@
// Stable Diffusion WebUI - Bracket checker
// Version 1.0
// By Hingashi no Florin/Bwin4L
// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs.
// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong.
function checkBrackets(evt, textArea, counterElt) {
errorStringParen = '(...) - Different number of opening and closing parentheses detected.\n';
errorStringSquare = '[...] - Different number of opening and closing square brackets detected.\n';
errorStringCurly = '{...} - Different number of opening and closing curly brackets detected.\n';
openBracketRegExp = /\(/g;
closeBracketRegExp = /\)/g;
openSquareBracketRegExp = /\[/g;
closeSquareBracketRegExp = /\]/g;
openCurlyBracketRegExp = /\{/g;
closeCurlyBracketRegExp = /\}/g;
totalOpenBracketMatches = 0;
totalCloseBracketMatches = 0;
totalOpenSquareBracketMatches = 0;
totalCloseSquareBracketMatches = 0;
totalOpenCurlyBracketMatches = 0;
totalCloseCurlyBracketMatches = 0;
openBracketMatches = textArea.value.match(openBracketRegExp);
if(openBracketMatches) {
totalOpenBracketMatches = openBracketMatches.length;
}
closeBracketMatches = textArea.value.match(closeBracketRegExp);
if(closeBracketMatches) {
totalCloseBracketMatches = closeBracketMatches.length;
}
openSquareBracketMatches = textArea.value.match(openSquareBracketRegExp);
if(openSquareBracketMatches) {
totalOpenSquareBracketMatches = openSquareBracketMatches.length;
}
closeSquareBracketMatches = textArea.value.match(closeSquareBracketRegExp);
if(closeSquareBracketMatches) {
totalCloseSquareBracketMatches = closeSquareBracketMatches.length;
}
openCurlyBracketMatches = textArea.value.match(openCurlyBracketRegExp);
if(openCurlyBracketMatches) {
totalOpenCurlyBracketMatches = openCurlyBracketMatches.length;
}
closeCurlyBracketMatches = textArea.value.match(closeCurlyBracketRegExp);
if(closeCurlyBracketMatches) {
totalCloseCurlyBracketMatches = closeCurlyBracketMatches.length;
}
if(totalOpenBracketMatches != totalCloseBracketMatches) {
if(!counterElt.title.includes(errorStringParen)) {
counterElt.title += errorStringParen;
}
} else {
counterElt.title = counterElt.title.replace(errorStringParen, '');
}
if(totalOpenSquareBracketMatches != totalCloseSquareBracketMatches) {
if(!counterElt.title.includes(errorStringSquare)) {
counterElt.title += errorStringSquare;
}
} else {
counterElt.title = counterElt.title.replace(errorStringSquare, '');
}
if(totalOpenCurlyBracketMatches != totalCloseCurlyBracketMatches) {
if(!counterElt.title.includes(errorStringCurly)) {
counterElt.title += errorStringCurly;
}
} else {
counterElt.title = counterElt.title.replace(errorStringCurly, '');
}
if(counterElt.title != '') {
counterElt.classList.add('error');
} else {
counterElt.classList.remove('error');
}
}
function setupBracketChecking(id_prompt, id_counter){
var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea");
var counter = gradioApp().getElementById(id_counter)
textarea.addEventListener("input", function(evt){
checkBrackets(evt, textarea, counter)
});
}
var shadowRootLoaded = setInterval(function() {
var shadowRoot = document.querySelector('gradio-app').shadowRoot;
if(! shadowRoot) return false;
var shadowTextArea = shadowRoot.querySelectorAll('#txt2img_prompt > label > textarea');
if(shadowTextArea.length < 1) return false;
clearInterval(shadowRootLoaded);
setupBracketChecking('txt2img_prompt', 'txt2img_token_counter')
setupBracketChecking('txt2img_neg_prompt', 'txt2img_negative_token_counter')
setupBracketChecking('img2img_prompt', 'img2img_token_counter')
setupBracketChecking('img2img_neg_prompt', 'img2img_negative_token_counter')
}, 1000);
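
As the header comment says, the checker compares the count of each opening and closing character independently; it does not track nesting, so a prompt like "(a] [b)" passes even though the brackets are interleaved incorrectly. A compact Python sketch of the same rule (helper name hypothetical):

# Count-based bracket check as in the script above: each pair is
# compared on its own, so balanced counts with bad nesting still pass.
PAIRS = [("(", ")"), ("[", "]"), ("{", "}")]

def bracket_mismatches(text):
    errors = []
    for open_ch, close_ch in PAIRS:
        if text.count(open_ch) != text.count(close_ch):
            errors.append(f"{open_ch}...{close_ch} - different number of opening and closing brackets detected")
    return errors

print(bracket_mismatches("a (b [c] d"))  # flags the unclosed parenthesis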

html/card-no-preview.png Normal file (binary image, 82 KiB; not shown)

@@ -0,0 +1,12 @@
<div class='card' {preview_html} onclick={card_clicked}>
<div class='actions'>
<div class='additional'>
<ul>
<a href="#" title="replace preview image with currently selected in gallery" onclick={save_card_preview}>replace preview</a>
</ul>
<span style="display:none" class='search_term'>{search_term}</span>
</div>
<span class='name'>{name}</span>
</div>
</div>


@@ -0,0 +1,8 @@
<div class='nocards'>
<h1>Nothing here. Add some content to the following directories:</h1>
<ul>
{dirs}
</ul>
</div>

html/footer.html Normal file

@@ -0,0 +1,13 @@
<div>
<a href="/docs">API</a>
 • 
<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Github</a>
 • 
<a href="https://gradio.app">Gradio</a>
 • 
<a href="/" onclick="javascript:gradioApp().getElementById('settings_restart_gradio').click(); return false">Reload UI</a>
</div>
<br />
<div class="versions">
{versions}
</div>

html/image-update.svg Normal file (989 B)

@@ -0,0 +1,7 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
<filter id='shadow' color-interpolation-filters="sRGB">
<feDropShadow flood-color="black" dx="0" dy="0" flood-opacity="0.9" stdDeviation="0.5"/>
<feDropShadow flood-color="black" dx="0" dy="0" flood-opacity="0.9" stdDeviation="0.5"/>
</filter>
<path style="filter:url(#shadow);" fill="#FFFFFF" d="M13.18 19C13.35 19.72 13.64 20.39 14.03 21H5C3.9 21 3 20.11 3 19V5C3 3.9 3.9 3 5 3H19C20.11 3 21 3.9 21 5V11.18C20.5 11.07 20 11 19.5 11C19.33 11 19.17 11 19 11.03V5H5V19H13.18M11.21 15.83L9.25 13.47L6.5 17H13.03C13.14 15.54 13.73 14.22 14.64 13.19L13.96 12.29L11.21 15.83M19 13.5V12L16.75 14.25L19 16.5V15C20.38 15 21.5 16.12 21.5 17.5C21.5 17.9 21.41 18.28 21.24 18.62L22.33 19.71C22.75 19.08 23 18.32 23 17.5C23 15.29 21.21 13.5 19 13.5M19 20C17.62 20 16.5 18.88 16.5 17.5C16.5 17.1 16.59 16.72 16.76 16.38L15.67 15.29C15.25 15.92 15 16.68 15 17.5C15 19.71 16.79 21.5 19 21.5V23L21.25 20.75L19 18.5V20Z" />
</svg>


html/licenses.html Normal file

@@ -0,0 +1,419 @@
<style>
#licenses h2 {font-size: 1.2em; font-weight: bold; margin-bottom: 0.2em;}
#licenses small {font-size: 0.95em; opacity: 0.85;}
#licenses pre { margin: 1em 0 2em 0;}
</style>
<h2><a href="https://github.com/sczhou/CodeFormer/blob/master/LICENSE">CodeFormer</a></h2>
<small>Parts of CodeFormer code had to be copied to be compatible with GFPGAN.</small>
<pre>
S-Lab License 1.0
Copyright 2022 S-Lab
Redistribution and use for non-commercial purpose in source and
binary forms, with or without modification, are permitted provided
that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
In the event that redistribution and/or use for commercial purpose in
source or binary forms, with or without modification is required,
please contact the contributor(s) of the work.
</pre>
<h2><a href="https://github.com/victorca25/iNNfer/blob/main/LICENSE">ESRGAN</a></h2>
<small>Code for architecture and reading models copied.</small>
<pre>
MIT License
Copyright (c) 2021 victorca25
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE">Real-ESRGAN</a></h2>
<small>Some code is copied to support ESRGAN models.</small>
<pre>
BSD 3-Clause License
Copyright (c) 2021, Xintao Wang
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</pre>
<h2><a href="https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE">InvokeAI</a></h2>
<small>Some code for compatibility with OSX is taken from lstein's repository.</small>
<pre>
MIT License
Copyright (c) 2022 InvokeAI Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/Hafiidz/latent-diffusion/blob/main/LICENSE">LDSR</a></h2>
<small>Code added by contributors, most likely copied from this repository.</small>
<pre>
MIT License
Copyright (c) 2022 Machine Vision and Learning Group, LMU Munich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/pharmapsychotic/clip-interrogator/blob/main/LICENSE">CLIP Interrogator</a></h2>
<small>Some small amounts of code borrowed and reworked.</small>
<pre>
MIT License
Copyright (c) 2022 pharmapsychotic
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/JingyunLiang/SwinIR/blob/main/LICENSE">SwinIR</a></h2>
<small>Code added by contributors, most likely copied from this repository.</small>
<pre>
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2021] [SwinIR Authors]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
</pre>
<h2><a href="https://github.com/AminRezaei0x443/memory-efficient-attention/blob/main/LICENSE">Memory Efficient Attention</a></h2>
<small>The sub-quadratic cross attention optimization uses modified code from the Memory Efficient Attention package that Alex Birch optimized for 3D tensors. This license is updated to reflect that.</small>
<pre>
MIT License
Copyright (c) 2023 Alex Birch
Copyright (c) 2023 Amin Rezaei
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>


@@ -3,12 +3,12 @@ let currentWidth = null;
let currentHeight = null;
let arFrameTimeout = setTimeout(function(){},0);
function dimensionChange(e,dimname){
function dimensionChange(e, is_width, is_height){
if(dimname == 'Width'){
if(is_width){
currentWidth = e.target.value*1.0
}
if(dimname == 'Height'){
if(is_height){
currentHeight = e.target.value*1.0
}
@@ -18,24 +18,20 @@ function dimensionChange(e,dimname){
return;
}
var img2imgMode = gradioApp().querySelector('#mode_img2img.tabs > div > button.rounded-t-lg.border-gray-200')
if(img2imgMode){
img2imgMode=img2imgMode.innerText
}else{
return;
}
var redrawImage = gradioApp().querySelector('div[data-testid=image] img');
var inpaintImage = gradioApp().querySelector('#img2maskimg div[data-testid=image] img')
var targetElement = null;
if(img2imgMode=='img2img' && redrawImage){
targetElement = redrawImage;
}else if(img2imgMode=='Inpaint' && inpaintImage){
targetElement = inpaintImage;
var tabIndex = get_tab_index('mode_img2img')
if(tabIndex == 0){ // img2img
targetElement = gradioApp().querySelector('div[data-testid=image] img');
} else if(tabIndex == 1){ //Sketch
targetElement = gradioApp().querySelector('#img2img_sketch div[data-testid=image] img');
} else if(tabIndex == 2){ // Inpaint
targetElement = gradioApp().querySelector('#img2maskimg div[data-testid=image] img');
} else if(tabIndex == 3){ // Inpaint sketch
targetElement = gradioApp().querySelector('#inpaint_sketch div[data-testid=image] img');
}
if(targetElement){
var arPreviewRect = gradioApp().querySelector('#imageARPreview');
@@ -99,21 +95,19 @@ onUiUpdate(function(){
if(inImg2img){
let inputs = gradioApp().querySelectorAll('input');
inputs.forEach(function(e){
let parentLabel = e.parentElement.querySelector('label')
if(parentLabel && parentLabel.innerText){
if(!e.classList.contains('scrollwatch')){
if(parentLabel.innerText == 'Width' || parentLabel.innerText == 'Height'){
e.addEventListener('input', function(e){dimensionChange(e,parentLabel.innerText)} )
var is_width = e.parentElement.id == "img2img_width"
var is_height = e.parentElement.id == "img2img_height"
if((is_width || is_height) && !e.classList.contains('scrollwatch')){
e.addEventListener('input', function(e){dimensionChange(e, is_width, is_height)} )
e.classList.add('scrollwatch')
}
if(parentLabel.innerText == 'Width'){
if(is_width){
currentWidth = e.value*1.0
}
if(parentLabel.innerText == 'Height'){
if(is_height){
currentHeight = e.value*1.0
}
}
}
})
}
});

javascript/contextMenus.js Normal file

@@ -0,0 +1,177 @@
contextMenuInit = function(){
let eventListenerApplied=false;
let menuSpecs = new Map();
const uid = function(){
return Date.now().toString(36) + Math.random().toString(36).substr(2);
}
function showContextMenu(event,element,menuEntries){
let posx = event.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
let posy = event.clientY + document.body.scrollTop + document.documentElement.scrollTop;
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
let tabButton = uiCurrentTab
let baseStyle = window.getComputedStyle(tabButton)
const contextMenu = document.createElement('nav')
contextMenu.id = "context-menu"
contextMenu.style.background = baseStyle.background
contextMenu.style.color = baseStyle.color
contextMenu.style.fontFamily = baseStyle.fontFamily
contextMenu.style.top = posy+'px'
contextMenu.style.left = posx+'px'
const contextMenuList = document.createElement('ul')
contextMenuList.className = 'context-menu-items';
contextMenu.append(contextMenuList);
menuEntries.forEach(function(entry){
let contextMenuEntry = document.createElement('a')
contextMenuEntry.innerHTML = entry['name']
contextMenuEntry.addEventListener("click", function(e) {
entry['func']();
})
contextMenuList.append(contextMenuEntry);
})
gradioApp().getRootNode().appendChild(contextMenu)
let menuWidth = contextMenu.offsetWidth + 4;
let menuHeight = contextMenu.offsetHeight + 4;
let windowWidth = window.innerWidth;
let windowHeight = window.innerHeight;
if ( (windowWidth - posx) < menuWidth ) {
contextMenu.style.left = windowWidth - menuWidth + "px";
}
if ( (windowHeight - posy) < menuHeight ) {
contextMenu.style.top = windowHeight - menuHeight + "px";
}
}
function appendContextMenuOption(targetElementSelector,entryName,entryFunction){
currentItems = menuSpecs.get(targetElementSelector)
if(!currentItems){
currentItems = []
menuSpecs.set(targetElementSelector,currentItems);
}
let newItem = {'id':targetElementSelector+'_'+uid(),
'name':entryName,
'func':entryFunction,
'isNew':true}
currentItems.push(newItem)
return newItem['id']
}
function removeContextMenuOption(uid){
menuSpecs.forEach(function(v,k) {
let index = -1
v.forEach(function(e,ei){if(e['id']==uid){index=ei}})
if(index>=0){
v.splice(index, 1);
}
})
}
function addContextMenuEventListener(){
if(eventListenerApplied){
return;
}
gradioApp().addEventListener("click", function(e) {
let source = e.composedPath()[0]
if(source.id && source.id.indexOf('check_progress')>-1){
return
}
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
});
gradioApp().addEventListener("contextmenu", function(e) {
let oldMenu = gradioApp().querySelector('#context-menu')
if(oldMenu){
oldMenu.remove()
}
menuSpecs.forEach(function(v,k) {
if(e.composedPath()[0].matches(k)){
showContextMenu(e,e.composedPath()[0],v)
e.preventDefault()
return
}
})
});
eventListenerApplied=true
}
return [appendContextMenuOption, removeContextMenuOption, addContextMenuEventListener]
}
initResponse = contextMenuInit();
appendContextMenuOption = initResponse[0];
removeContextMenuOption = initResponse[1];
addContextMenuEventListener = initResponse[2];
(function(){
//Start example Context Menu Items
let generateOnRepeat = function(genbuttonid,interruptbuttonid){
let genbutton = gradioApp().querySelector(genbuttonid);
let interruptbutton = gradioApp().querySelector(interruptbuttonid);
if(!interruptbutton.offsetParent){
genbutton.click();
}
clearInterval(window.generateOnRepeatInterval)
window.generateOnRepeatInterval = setInterval(function(){
if(!interruptbutton.offsetParent){
genbutton.click();
}
},
500)
}
appendContextMenuOption('#txt2img_generate','Generate forever',function(){
generateOnRepeat('#txt2img_generate','#txt2img_interrupt');
})
appendContextMenuOption('#img2img_generate','Generate forever',function(){
generateOnRepeat('#img2img_generate','#img2img_interrupt');
})
let cancelGenerateForever = function(){
clearInterval(window.generateOnRepeatInterval)
}
appendContextMenuOption('#txt2img_interrupt','Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#txt2img_generate', 'Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#img2img_interrupt','Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#img2img_generate', 'Cancel generate forever',cancelGenerateForever)
appendContextMenuOption('#roll','Roll three',
function(){
let rollbutton = get_uiCurrentTabContent().querySelector('#roll');
setTimeout(function(){rollbutton.click()},100)
setTimeout(function(){rollbutton.click()},200)
setTimeout(function(){rollbutton.click()},300)
}
)
})();
//End example Context Menu Items
onUiUpdate(function(){
addContextMenuEventListener()
});


@@ -9,11 +9,19 @@ function dropReplaceImage( imgWrap, files ) {
return;
}
const tmpFile = files[0];
imgWrap.querySelector('.modify-upload button + button, .touch-none + div button + button')?.click();
const callback = () => {
const fileInput = imgWrap.querySelector('input[type="file"]');
if ( fileInput ) {
if ( files.length === 0 ) {
files = new DataTransfer();
files.items.add(tmpFile);
fileInput.files = files.files;
} else {
fileInput.files = files;
}
fileInput.dispatchEvent(new Event('change'));
}
};
@@ -43,7 +51,7 @@ function dropReplaceImage( imgWrap, files ) {
window.document.addEventListener('dragover', e => {
const target = e.composedPath()[0];
const imgWrap = target.closest('[data-testid="image"]');
if ( !imgWrap ) {
if ( !imgWrap && target.placeholder && target.placeholder.indexOf("Prompt") == -1) {
return;
}
e.stopPropagation();
@@ -53,6 +61,9 @@ window.document.addEventListener('dragover', e => {
window.document.addEventListener('drop', e => {
const target = e.composedPath()[0];
if (target.placeholder.indexOf("Prompt") == -1) {
return;
}
const imgWrap = target.closest('[data-testid="image"]');
if ( !imgWrap ) {
return;


@@ -0,0 +1,96 @@
function keyupEditAttention(event){
let target = event.originalTarget || event.composedPath()[0];
if (!target.matches("[id*='_toprow'] textarea.gr-text-input[placeholder]")) return;
if (! (event.metaKey || event.ctrlKey)) return;
let isPlus = event.key == "ArrowUp"
let isMinus = event.key == "ArrowDown"
if (!isPlus && !isMinus) return;
let selectionStart = target.selectionStart;
let selectionEnd = target.selectionEnd;
let text = target.value;
function selectCurrentParenthesisBlock(OPEN, CLOSE){
if (selectionStart !== selectionEnd) return false;
// Find opening parenthesis around current cursor
const before = text.substring(0, selectionStart);
let beforeParen = before.lastIndexOf(OPEN);
if (beforeParen == -1) return false;
let beforeParenClose = before.lastIndexOf(CLOSE);
while (beforeParenClose !== -1 && beforeParenClose > beforeParen) {
beforeParen = before.lastIndexOf(OPEN, beforeParen - 1);
beforeParenClose = before.lastIndexOf(CLOSE, beforeParenClose - 1);
}
// Find closing parenthesis around current cursor
const after = text.substring(selectionStart);
let afterParen = after.indexOf(CLOSE);
if (afterParen == -1) return false;
let afterParenOpen = after.indexOf(OPEN);
while (afterParenOpen !== -1 && afterParen > afterParenOpen) {
afterParen = after.indexOf(CLOSE, afterParen + 1);
afterParenOpen = after.indexOf(OPEN, afterParenOpen + 1);
}
if (beforeParen === -1 || afterParen === -1) return false;
// Set the selection to the text between the parenthesis
const parenContent = text.substring(beforeParen + 1, selectionStart + afterParen);
const lastColon = parenContent.lastIndexOf(":");
selectionStart = beforeParen + 1;
selectionEnd = selectionStart + lastColon;
target.setSelectionRange(selectionStart, selectionEnd);
return true;
}
// If the user hasn't selected anything, let's select their current parenthesis block
if(! selectCurrentParenthesisBlock('<', '>')){
selectCurrentParenthesisBlock('(', ')')
}
event.preventDefault();
closeCharacter = ')'
delta = opts.keyedit_precision_attention
if (selectionStart > 0 && text[selectionStart - 1] == '<'){
closeCharacter = '>'
delta = opts.keyedit_precision_extra
} else if (selectionStart == 0 || text[selectionStart - 1] != "(") {
// do not include spaces at the end
while(selectionEnd > selectionStart && text[selectionEnd-1] == ' '){
selectionEnd -= 1;
}
if(selectionStart == selectionEnd){
return
}
text = text.slice(0, selectionStart) + "(" + text.slice(selectionStart, selectionEnd) + ":1.0)" + text.slice(selectionEnd);
selectionStart += 1;
selectionEnd += 1;
}
end = text.slice(selectionEnd + 1).indexOf(closeCharacter) + 1;
weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + 1 + end));
if (isNaN(weight)) return;
weight += isPlus ? delta : -delta;
weight = parseFloat(weight.toPrecision(12));
if(String(weight).length == 1) weight += ".0"
text = text.slice(0, selectionEnd + 1) + weight + text.slice(selectionEnd + 1 + end - 1);
target.focus();
target.value = text;
target.selectionStart = selectionStart;
target.selectionEnd = selectionEnd;
updateInput(target)
}
addEventListener('keydown', (event) => {
keyupEditAttention(event);
});
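
The handler above wraps the current selection as "(text:1.0)" (or targets an existing <...> block), then steps the number after the colon by the configured precision option and rounds it through toPrecision(12). A rough Python rendering of the numeric part; the 0.1 step is an assumption standing in for opts.keyedit_precision_attention:

# Sketch of the weight arithmetic above; delta stands in for the keyedit
# precision option, and the ".0" suffix mirrors the JS padding of
# single-character results such as 2 -> "2.0".
def step_weight(weight, up, delta=0.1):
    weight += delta if up else -delta
    text = f"{weight:.12g}"  # roughly parseFloat(weight.toPrecision(12))
    return text + ".0" if len(text) == 1 else text

print(step_weight(1.0, up=True))   # '1.1'
print(step_weight(1.9, up=True))   # '2.0'
print(step_weight(1.0, up=False))  # '0.9'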

javascript/extensions.js Normal file

@@ -0,0 +1,49 @@
function extensions_apply(_, _){
var disable = []
var update = []
gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x){
if(x.name.startsWith("enable_") && ! x.checked)
disable.push(x.name.substr(7))
if(x.name.startsWith("update_") && x.checked)
update.push(x.name.substr(7))
})
restart_reload()
return [JSON.stringify(disable), JSON.stringify(update)]
}
function extensions_check(){
var disable = []
gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x){
if(x.name.startsWith("enable_") && ! x.checked)
disable.push(x.name.substr(7))
})
gradioApp().querySelectorAll('#extensions .extension_status').forEach(function(x){
x.innerHTML = "Loading..."
})
var id = randomId()
requestProgress(id, gradioApp().getElementById('extensions_installed_top'), null, function(){
})
return [id, JSON.stringify(disable)]
}
function install_extension_from_index(button, url){
button.disabled = "disabled"
button.value = "Installing..."
textarea = gradioApp().querySelector('#extension_to_install textarea')
textarea.value = url
updateInput(textarea)
gradioApp().querySelector('#install_extension_button').click()
}

javascript/extraNetworks.js Normal file

@@ -0,0 +1,107 @@
function setupExtraNetworksForTab(tabname){
gradioApp().querySelector('#'+tabname+'_extra_tabs').classList.add('extra-networks')
var tabs = gradioApp().querySelector('#'+tabname+'_extra_tabs > div')
var search = gradioApp().querySelector('#'+tabname+'_extra_search textarea')
var refresh = gradioApp().getElementById(tabname+'_extra_refresh')
var close = gradioApp().getElementById(tabname+'_extra_close')
search.classList.add('search')
tabs.appendChild(search)
tabs.appendChild(refresh)
tabs.appendChild(close)
search.addEventListener("input", function(evt){
searchTerm = search.value.toLowerCase()
gradioApp().querySelectorAll('#'+tabname+'_extra_tabs div.card').forEach(function(elem){
text = elem.querySelector('.name').textContent.toLowerCase() + " " + elem.querySelector('.search_term').textContent.toLowerCase()
elem.style.display = text.indexOf(searchTerm) == -1 ? "none" : ""
})
});
}
var activePromptTextarea = {};
function setupExtraNetworks(){
setupExtraNetworksForTab('txt2img')
setupExtraNetworksForTab('img2img')
function registerPrompt(tabname, id){
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
if (! activePromptTextarea[tabname]){
activePromptTextarea[tabname] = textarea
}
textarea.addEventListener("focus", function(){
activePromptTextarea[tabname] = textarea;
});
}
registerPrompt('txt2img', 'txt2img_prompt')
registerPrompt('txt2img', 'txt2img_neg_prompt')
registerPrompt('img2img', 'img2img_prompt')
registerPrompt('img2img', 'img2img_neg_prompt')
}
onUiLoaded(setupExtraNetworks)
var re_extranet = /<([^:]+:[^:]+):[\d\.]+>/;
var re_extranet_g = /\s+<([^:]+:[^:]+):[\d\.]+>/g;
function tryToRemoveExtraNetworkFromPrompt(textarea, text){
var m = text.match(re_extranet)
if(! m) return false
var partToSearch = m[1]
var replaced = false
var newTextareaText = textarea.value.replaceAll(re_extranet_g, function(found, index){
m = found.match(re_extranet);
if(m[1] == partToSearch){
replaced = true;
return ""
}
return found;
})
if(replaced){
textarea.value = newTextareaText
return true;
}
return false
}
function cardClicked(tabname, textToAdd, allowNegativePrompt){
var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea")
if(! tryToRemoveExtraNetworkFromPrompt(textarea, textToAdd)){
textarea.value = textarea.value + " " + textToAdd
}
updateInput(textarea)
}
function saveCardPreview(event, tabname, filename){
var textarea = gradioApp().querySelector("#" + tabname + '_preview_filename > label > textarea')
var button = gradioApp().getElementById(tabname + '_save_preview')
textarea.value = filename
updateInput(textarea)
button.click()
event.stopPropagation()
event.preventDefault()
}
function extraNetworksSearchButton(tabs_id, event){
searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > div > textarea')
button = event.target
text = button.classList.contains("search-all") ? "" : button.textContent.trim()
searchTextarea.value = text
updateInput(searchTextarea)
}
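
cardClicked() above toggles a <type:name:weight> tag in the prompt: the first regex identifies the tag, the global variant strips any existing occurrence, and if nothing was removed the tag is appended instead. A hedged Python equivalent of that toggle (names hypothetical):

# Illustrative Python version of the prompt toggle implemented above.
import re

RE_EXTRANET = re.compile(r"<([^:]+:[^:]+):[\d.]+>")

def toggle_extra_network(prompt, tag):
    m = RE_EXTRANET.match(tag)
    if m:
        pattern = re.compile(r"\s+<" + re.escape(m.group(1)) + r":[\d.]+>")
        new_prompt, removed = pattern.subn("", prompt)
        if removed:
            return new_prompt
    return prompt + " " + tag

p = toggle_extra_network("a castle", "<lora:style1:0.8>")
print(p)                                             # 'a castle <lora:style1:0.8>'
print(toggle_extra_network(p, "<lora:style1:0.8>"))  # 'a castle'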


@@ -0,0 +1,33 @@
// attaches listeners to the txt2img and img2img galleries to update displayed generation param text when the image changes
let txt2img_gallery, img2img_gallery, modal = undefined;
onUiUpdate(function(){
if (!txt2img_gallery) {
txt2img_gallery = attachGalleryListeners("txt2img")
}
if (!img2img_gallery) {
img2img_gallery = attachGalleryListeners("img2img")
}
if (!modal) {
modal = gradioApp().getElementById('lightboxModal')
modalObserver.observe(modal, { attributes : true, attributeFilter : ['style'] });
}
});
let modalObserver = new MutationObserver(function(mutations) {
mutations.forEach(function(mutationRecord) {
let selectedTab = gradioApp().querySelector('#tabs div button.bg-white')?.innerText
if (mutationRecord.target.style.display === 'none' && (selectedTab === 'txt2img' || selectedTab === 'img2img'))
gradioApp().getElementById(selectedTab+"_generation_info_button").click()
});
});
function attachGalleryListeners(tab_name) {
gallery = gradioApp().querySelector('#'+tab_name+'_gallery')
gallery?.addEventListener('click', () => gradioApp().getElementById(tab_name+"_generation_info_button").click());
gallery?.addEventListener('keydown', (e) => {
if (e.keyCode == 37 || e.keyCode == 39) // left or right arrow
gradioApp().getElementById(tab_name+"_generation_info_button").click()
});
return gallery;
}


@@ -4,8 +4,9 @@ titles = {
"Sampling steps": "How many times to improve the generated image iteratively; higher values take longer; very low values can produce bad results",
"Sampling method": "Which algorithm to use to produce the image",
"GFPGAN": "Restore low quality faces using GFPGAN neural network",
"Euler a": "Euler Ancestral - very creative, each can get a completely different picture depending on step count, setting steps to higher than 30-40 does not help",
"Euler a": "Euler Ancestral - very creative, each can get a completely different picture depending on step count, setting steps higher than 30-40 does not help",
"DDIM": "Denoising Diffusion Implicit Models - best at inpainting",
"DPM adaptive": "Ignores step count - uses a number of steps determined by the CFG and resolution",
"Batch count": "How many batches of images to create",
"Batch size": "How many image to create in a single batch",
@@ -13,9 +14,14 @@ titles = {
"Seed": "A value that determines the output of random number generator - if you create an image with same parameters and seed as another image, you'll get the same result",
"\u{1f3b2}\ufe0f": "Set seed to -1, which will cause a new random number to be used every time",
"\u267b\ufe0f": "Reuse seed from last generation, mostly useful if it was randomed",
"\u{1f3a8}": "Add a random artist to the prompt.",
"\u2199\ufe0f": "Read generation parameters from prompt into user interface.",
"\u2199\ufe0f": "Read generation parameters from prompt or last generation if prompt is empty into user interface.",
"\u{1f4c2}": "Open images output directory",
"\u{1f4be}": "Save style",
"\u{1f5d1}": "Clear prompt",
"\u{1f4cb}": "Apply selected styles to current prompt",
"\u{1f4d2}": "Paste available values into the field",
"\u{1f3b4}": "Show extra networks",
"Inpaint a part of image": "Draw a mask over an image, and the script will regenerate the masked area with content according to prompt",
"SD upscale": "Upscale image normally, split result into tiles, improve each tile using img2img, merge whole image back",
@@ -35,6 +41,7 @@ titles = {
"Denoising strength": "Determines how little respect the algorithm should have for image's content. At 0, nothing will change, and at 1 you'll get an unrelated image. With values below 1.0, processing will take less steps than the Sampling Steps slider specifies.",
"Denoising strength change factor": "In loopback mode, on each loop the denoising strength is multiplied by this value. <1 means decreasing variety so your sequence will converge on a fixed picture. >1 means increasing variety so your sequence will become more and more chaotic.",
"Skip": "Stop processing current image and continue processing.",
"Interrupt": "Stop processing images and return any results accumulated so far.",
"Save": "Write image to a directory (default - log/images) and generation parameters into csv file.",
@@ -43,7 +50,7 @@ titles = {
"None": "Do not do anything special",
"Prompt matrix": "Separate prompts into parts using vertical pipe character (|) and the script will create a picture for every combination of them (except for the first part, which will be present in all combinations)",
"X/Y plot": "Create a grid where images will have different parameters. Use inputs below to specify which parameters will be shared by columns and rows",
"X/Y/Z plot": "Create grid(s) where images will have different parameters. Use inputs below to specify which parameters will be shared by columns and rows",
"Custom code": "Run Python code. Advanced user only. Must run program with --allow-code for this to work",
"Prompt S/R": "Separate a list of words with commas, and the first word will be used as a keyword: script will search for this word in the prompt, and replace it with others",
@@ -59,8 +66,8 @@ titles = {
"Interrogate": "Reconstruct prompt from existing image and put it into the prompt field.",
"Images filename pattern": "Use following tags to define how filenames for images are chosen: [steps], [cfg], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [prompt_words], [date], [datetime], [job_timestamp]; leave empty for default.",
"Directory name pattern": "Use following tags to define how subdirectories for images and grids are chosen: [steps], [cfg], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [prompt_words], [date], [datetime], [job_timestamp]; leave empty for default.",
"Images filename pattern": "Use following tags to define how filenames for images are chosen: [steps], [cfg], [prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime<Format>], [datetime<Format><Time Zone>], [job_timestamp]; leave empty for default.",
"Directory name pattern": "Use following tags to define how subdirectories for images and grids are chosen: [steps], [cfg],[prompt_hash], [prompt], [prompt_no_styles], [prompt_spaces], [width], [height], [styles], [sampler], [seed], [model_hash], [model_name], [prompt_words], [date], [datetime], [datetime<Format>], [datetime<Format><Time Zone>], [job_timestamp]; leave empty for default.",
"Max prompt words": "Set the maximum number of words to be used in the [prompt_words] option; ATTENTION: If the words are too long, they may exceed the maximum length of the file path that the system can handle",
"Loopback": "Process an image, use it as an input, repeat.",
@@ -69,15 +76,41 @@ titles = {
"Style 1": "Style to apply; styles have components for both positive and negative prompts and apply to both",
"Style 2": "Style to apply; styles have components for both positive and negative prompts and apply to both",
"Apply style": "Insert selected styles into prompt fields",
"Create style": "Save current prompts as a style. If you add the token {prompt} to the text, the style use that as placeholder for your prompt when you use the style in the future.",
"Create style": "Save current prompts as a style. If you add the token {prompt} to the text, the style uses that as a placeholder for your prompt when you use the style in the future.",
"Checkpoint name": "Loads weights from checkpoint before making images. You can either use hash or a part of filename (as seen in settings) for checkpoint name. Recommended to use with Y axis for less switching.",
"Inpainting conditioning mask strength": "Only applies to inpainting models. Determines how strongly to mask off the original image for inpainting and img2img. 1.0 means fully masked, which is the default behaviour. 0.0 means a fully unmasked conditioning. Lower values will help preserve the overall composition of the image, but will struggle with large changes.",
"vram": "Torch active: Peak amount of VRAM used by Torch during generation, excluding cached data.\nTorch reserved: Peak amount of VRAM allocated by Torch, including all active and cached data.\nSys VRAM: Peak amount of VRAM allocation across all applications / total GPU VRAM (peak utilization%).",
"Highres. fix": "Use a two step process to partially create an image at smaller resolution, upscale, and then improve details in it without changing composition",
"Scale latent": "Uscale the image in latent space. Alternative is to produce the full image from latent representation, upscale that, and then move it back to latent space.",
"Eta noise seed delta": "If this values is non-zero, it will be added to seed and used to initialize RNG for noises when using samplers with Eta. You can use this to produce even more variation of images, or you can use this to match images of other software if you know what you are doing.",
"Do not add watermark to images": "If this option is enabled, watermark will not be added to created images. Warning: if you do not add watermark, you may be behaving in an unethical manner.",
"Filename word regex": "This regular expression will be used extract words from filename, and they will be joined using the option below into label text used for training. Leave empty to keep filename text as it is.",
"Filename join string": "This string will be used to join split words into a single line if the option above is enabled.",
"Quicksettings list": "List of setting names, separated by commas, for settings that should go to the quick access bar at the top, rather than the usual setting tab. See modules/shared.py for setting names. Requires restarting to apply.",
"Weighted sum": "Result = A * (1 - M) + B * M",
"Add difference": "Result = A + (B - C) * M",
"No interpolation": "Result = A",
"Initialization text": "If the number of tokens is more than the number of vectors, some may be skipped.\nLeave the textbox empty to start with zeroed out vectors",
"Learning rate": "How fast should training go. Low values will take longer to train, high values may fail to converge (not generate accurate results) and/or may break the embedding (This has happened if you see Loss: nan in the training info textbox. If this happens, you need to manually restore your embedding from an older not-broken backup).\n\nYou can set a single numeric value, or multiple learning rates using the syntax:\n\n rate_1:max_steps_1, rate_2:max_steps_2, ...\n\nEG: 0.005:100, 1e-3:1000, 1e-5\n\nWill train with rate of 0.005 for first 100 steps, then 1e-3 until 1000 steps, then 1e-5 for all remaining steps.",
"Clip skip": "Early stopping parameter for CLIP model; 1 is stop at last layer as usual, 2 is stop at penultimate layer, etc.",
"Approx NN": "Cheap neural network approximation. Very fast compared to VAE, but produces pictures with 4 times smaller horizontal/vertical resolution and lower quality.",
"Approx cheap": "Very cheap approximation. Very fast compared to VAE, but produces pictures with 8 times smaller horizontal/vertical resolution and extremely low quality.",
"Hires. fix": "Use a two step process to partially create an image at smaller resolution, upscale, and then improve details in it without changing composition",
"Hires steps": "Number of sampling steps for upscaled picture. If 0, uses same as for original.",
"Upscale by": "Adjusts the size of the image by multiplying the original width and height by the selected value. Ignored if either Resize width to or Resize height to are non-zero.",
"Resize width to": "Resizes image to this width. If 0, width is inferred from either of two nearby sliders.",
"Resize height to": "Resizes image to this height. If 0, height is inferred from either of two nearby sliders.",
"Multiplier for extra networks": "When adding extra network such as Hypernetwork or Lora to prompt, use this multiplier for it.",
"Discard weights with matching name": "Regular expression; if weights's name matches it, the weights is not written to the resulting checkpoint. Use ^model_ema to discard EMA weights.",
"Extra networks tab order": "Comma-separated list of tab names; tabs listed here will appear in the extra networks UI first and in order lsited."
}
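
The "Upscale by" / "Resize width to" / "Resize height to" hints above describe a precedence rule: explicit resize targets override the multiplier, and a zero dimension is inferred from the other to keep the aspect ratio. A small sketch under that reading of the hints (illustrative, not the repository's implementation):

# Hypothetical resolution of the hires-fix target size as the hints
# describe it: resize targets win over the multiplier, and a zero
# dimension is inferred from the other one.
def hires_target(width, height, hr_scale, hr_resize_x, hr_resize_y):
    if hr_resize_x == 0 and hr_resize_y == 0:
        return round(width * hr_scale), round(height * hr_scale)
    if hr_resize_x == 0:
        hr_resize_x = round(hr_resize_y * width / height)
    elif hr_resize_y == 0:
        hr_resize_y = round(hr_resize_x * height / width)
    return hr_resize_x, hr_resize_y

print(hires_target(512, 512, 2.0, 0, 0))    # (1024, 1024)
print(hires_target(512, 768, 2.0, 640, 0))  # (640, 960)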

javascript/hires_fix.js Normal file

@@ -0,0 +1,22 @@
function setInactive(elem, inactive){
if(inactive){
elem.classList.add('inactive')
} else{
elem.classList.remove('inactive')
}
}
function onCalcResolutionHires(enable, width, height, hr_scale, hr_resize_x, hr_resize_y){
hrUpscaleBy = gradioApp().getElementById('txt2img_hr_scale')
hrResizeX = gradioApp().getElementById('txt2img_hr_resize_x')
hrResizeY = gradioApp().getElementById('txt2img_hr_resize_y')
gradioApp().getElementById('txt2img_hires_fix_row2').style.display = opts.use_old_hires_fix_width_height ? "none" : ""
setInactive(hrUpscaleBy, opts.use_old_hires_fix_width_height || hr_resize_x > 0 || hr_resize_y > 0)
setInactive(hrResizeX, opts.use_old_hires_fix_width_height || hr_resize_x == 0)
setInactive(hrResizeY, opts.use_old_hires_fix_width_height || hr_resize_y == 0)
return [enable, width, height, hr_scale, hr_resize_x, hr_resize_y]
}


@@ -31,8 +31,8 @@ function imageMaskResize() {
wrapper.style.width = `${wW}px`;
wrapper.style.height = `${wH}px`;
wrapper.style.left = `${(w-wW)/2}px`;
wrapper.style.top = `${(h-wH)/2}px`;
wrapper.style.left = `0px`;
wrapper.style.top = `0px`;
canvases.forEach( c => {
c.style.width = c.style.height = '';

javascript/imageParams.js Normal file

@@ -0,0 +1,19 @@
window.onload = (function(){
window.addEventListener('drop', e => {
const target = e.composedPath()[0];
const idx = selected_gallery_index();
if (target.placeholder.indexOf("Prompt") == -1) return;
let prompt_target = get_tab_index('tabs') == 1 ? "img2img_prompt_image" : "txt2img_prompt_image";
e.stopPropagation();
e.preventDefault();
const imgParent = gradioApp().getElementById(prompt_target);
const files = e.dataTransfer.files;
const fileInput = imgParent.querySelector('input[type="file"]');
if ( fileInput ) {
fileInput.files = files;
fileInput.dispatchEvent(new Event('change'));
}
});
});


@@ -1,5 +1,4 @@
// A full size 'lightbox' preview modal shown when left clicking on gallery previews
function closeModal() {
gradioApp().getElementById("lightboxModal").style.display = "none";
}
@@ -14,6 +13,15 @@ function showModal(event) {
}
lb.style.display = "block";
lb.focus()
const tabTxt2Img = gradioApp().getElementById("tab_txt2img")
const tabImg2Img = gradioApp().getElementById("tab_img2img")
// show the save button in modal only on txt2img or img2img tabs
if (tabTxt2Img.style.display != "none" || tabImg2Img.style.display != "none") {
gradioApp().getElementById("modal_save").style.display = "inline"
} else {
gradioApp().getElementById("modal_save").style.display = "none"
}
event.stopPropagation()
}
@ -21,6 +29,26 @@ function negmod(n, m) {
return ((n % m) + m) % m;
}
function updateOnBackgroundChange() {
const modalImage = gradioApp().getElementById("modalImage")
if (modalImage && modalImage.offsetParent) {
let allcurrentButtons = gradioApp().querySelectorAll(".gallery-item.transition-all.\\!ring-2")
let currentButton = null
allcurrentButtons.forEach(function(elem) {
if (elem.parentElement.offsetParent) {
currentButton = elem;
}
})
if (currentButton?.children?.length > 0 && modalImage.src != currentButton.children[0].src) {
modalImage.src = currentButton.children[0].src;
if (modalImage.style.display === 'none') {
const modal = gradioApp().getElementById("lightboxModal")
modal.style.setProperty('background-image', `url(${modalImage.src})`)
}
}
}
}
function modalImageSwitch(offset) {
var allgalleryButtons = gradioApp().querySelectorAll(".gallery-item.transition-all")
var galleryButtons = []
@ -40,7 +68,11 @@ function modalImageSwitch(offset){
})
var result = -1
galleryButtons.forEach(function(v, i){ if(v==currentButton) { result = i } })
galleryButtons.forEach(function(v, i) {
if (v == currentButton) {
result = i
}
})
if (result != -1) {
nextButton = galleryButtons[negmod((result + offset), galleryButtons.length)]
@ -51,11 +83,32 @@ function modalImageSwitch(offset){
if (modalImage.style.display === 'none') {
modal.style.setProperty('background-image', `url(${modalImage.src})`)
}
setTimeout( function(){modal.focus()},10)
setTimeout(function() {
modal.focus()
}, 10)
}
}
}
function saveImage(){
const tabTxt2Img = gradioApp().getElementById("tab_txt2img")
const tabImg2Img = gradioApp().getElementById("tab_img2img")
const saveTxt2Img = "save_txt2img"
const saveImg2Img = "save_img2img"
if (tabTxt2Img.style.display != "none") {
gradioApp().getElementById(saveTxt2Img).click()
} else if (tabImg2Img.style.display != "none") {
gradioApp().getElementById(saveImg2Img).click()
} else {
console.error("missing implementation for saving modal of this type")
}
}
function modalSaveImage(event) {
saveImage()
event.stopPropagation()
}
function modalNextImage(event) {
modalImageSwitch(1)
event.stopPropagation()
@ -68,6 +121,9 @@ function modalPrevImage(event){
function modalKeyHandler(event) {
switch (event.key) {
case "s":
saveImage()
break;
case "ArrowLeft":
modalPrevImage(event)
break;
@ -86,13 +142,24 @@ function showGalleryImage(){
if (fullImg_preview != null) {
fullImg_preview.forEach(function function_name(e) {
if (e.dataset.modded)
return;
e.dataset.modded = true;
if(e && e.parentElement.tagName == 'DIV'){
e.style.cursor='pointer'
e.style.userSelect='none'
e.addEventListener('click', function (evt) {
if(!opts.js_modal_lightbox) return;
modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initialy_zoomed)
var isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1
// For Firefox, listening on click first switches to the next image and then shows the lightbox.
// If you know how to fix this without switching to the mousedown event, please do.
// For other browsers the event is click, to make it possible to drag the picture.
var event = isFirefox ? 'mousedown' : 'click'
e.addEventListener(event, function (evt) {
if(!opts.js_modal_lightbox || evt.button != 0) return;
modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed)
evt.preventDefault()
showModal(evt)
}, true);
}
@ -142,6 +209,7 @@ onUiUpdate(function(){
if (fullImg_preview != null) {
fullImg_preview.forEach(galleryImageHandler);
}
updateOnBackgroundChange();
})
document.addEventListener("DOMContentLoaded", function() {
@ -170,6 +238,14 @@ document.addEventListener("DOMContentLoaded", function() {
modalTileImage.title = "Preview tiling";
modalControls.appendChild(modalTileImage)
const modalSave = document.createElement("span")
modalSave.className = "modalSave cursor"
modalSave.id = "modal_save"
modalSave.innerHTML = "&#x1F5AB;"
modalSave.addEventListener("click", modalSaveImage, true)
modalSave.title = "Save Image(s)"
modalControls.appendChild(modalSave)
const modalClose = document.createElement('span')
modalClose.className = 'modalClose cursor';
modalClose.innerHTML = '&times;'

165
javascript/localization.js Normal file
View File

@ -0,0 +1,165 @@
// localization = {} -- the dict with translations is created by the backend
ignore_ids_for_localization={
setting_sd_hypernetwork: 'OPTION',
setting_sd_model_checkpoint: 'OPTION',
modelmerger_primary_model_name: 'OPTION',
modelmerger_secondary_model_name: 'OPTION',
modelmerger_tertiary_model_name: 'OPTION',
train_embedding: 'OPTION',
train_hypernetwork: 'OPTION',
txt2img_styles: 'OPTION',
img2img_styles: 'OPTION',
setting_random_artist_categories: 'SPAN',
setting_face_restoration_model: 'SPAN',
setting_realesrgan_enabled_models: 'SPAN',
extras_upscaler_1: 'SPAN',
extras_upscaler_2: 'SPAN',
}
re_num = /^[\.\d]+$/
re_emoji = /[\p{Extended_Pictographic}\u{1F3FB}-\u{1F3FF}\u{1F9B0}-\u{1F9B3}]/u
original_lines = {}
translated_lines = {}
function textNodesUnder(el){
var n, a=[], walk=document.createTreeWalker(el,NodeFilter.SHOW_TEXT,null,false);
while(n=walk.nextNode()) a.push(n);
return a;
}
function canBeTranslated(node, text){
if(! text) return false;
if(! node.parentElement) return false;
parentType = node.parentElement.nodeName
if(parentType=='SCRIPT' || parentType=='STYLE' || parentType=='TEXTAREA') return false;
if (parentType=='OPTION' || parentType=='SPAN'){
pnode = node
for(var level=0; level<4; level++){
pnode = pnode.parentElement
if(! pnode) break;
if(ignore_ids_for_localization[pnode.id] == parentType) return false;
}
}
if(re_num.test(text)) return false;
if(re_emoji.test(text)) return false;
return true
}
function getTranslation(text){
if(! text) return undefined
if(translated_lines[text] === undefined){
original_lines[text] = 1
}
tl = localization[text]
if(tl !== undefined){
translated_lines[tl] = 1
}
return tl
}
function processTextNode(node){
text = node.textContent.trim()
if(! canBeTranslated(node, text)) return
tl = getTranslation(text)
if(tl !== undefined){
node.textContent = tl
}
}
function processNode(node){
if(node.nodeType == 3){
processTextNode(node)
return
}
if(node.title){
tl = getTranslation(node.title)
if(tl !== undefined){
node.title = tl
}
}
if(node.placeholder){
tl = getTranslation(node.placeholder)
if(tl !== undefined){
node.placeholder = tl
}
}
textNodesUnder(node).forEach(function(node){
processTextNode(node)
})
}
function dumpTranslations(){
dumped = {}
if (localization.rtl) {
dumped.rtl = true
}
Object.keys(original_lines).forEach(function(text){
if(dumped[text] !== undefined) return
dumped[text] = localization[text] || text
})
return dumped
}
onUiUpdate(function(m){
m.forEach(function(mutation){
mutation.addedNodes.forEach(function(node){
processNode(node)
})
});
})
document.addEventListener("DOMContentLoaded", function() {
processNode(gradioApp())
if (localization.rtl) { // if the language is from right to left,
(new MutationObserver((mutations, observer) => { // wait for the style to load
mutations.forEach(mutation => {
mutation.addedNodes.forEach(node => {
if (node.tagName === 'STYLE') {
observer.disconnect();
for (const x of node.sheet.rules) { // find all rtl media rules
if (Array.from(x.media || []).includes('rtl')) {
x.media.appendMedium('all'); // enable them
}
}
}
})
});
})).observe(gradioApp(), { childList: true });
}
})
function download_localization() {
text = JSON.stringify(dumpTranslations(), null, 4)
var element = document.createElement('a');
element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
element.setAttribute('download', "localization.json");
element.style.display = 'none';
document.body.appendChild(element);
element.click();
document.body.removeChild(element);
}
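For reference, the file that download_localization() saves (and that the backend later feeds back in as the localization dict) is a flat JSON mapping of on-page strings to translations, with an optional top-level "rtl" flag; a minimal sketch of writing one by hand, with made-up strings:

import json

# Hypothetical translations; keys are the exact on-page strings, values are
# their replacements. Untranslated strings are dumped mapping to themselves.
localization = {
    "Generate": "Generieren",
    "Interrupt": "Abbrechen",
    "Save": "Speichern",
}

with open("localization.json", "w", encoding="utf8") as file:
    json.dump(localization, file, ensure_ascii=False, indent=4)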

View File

@ -15,7 +15,7 @@ onUiUpdate(function(){
}
}
const galleryPreviews = gradioApp().querySelectorAll('img.h-full.w-full.overflow-hidden');
const galleryPreviews = gradioApp().querySelectorAll('div[id^="tab_"][style*="display: block"] img.h-full.w-full.overflow-hidden');
if (galleryPreviews == null) return;
@ -36,7 +36,7 @@ onUiUpdate(function(){
const notification = new Notification(
'Stable Diffusion',
{
body: `Generated ${imgs.size > 1 ? imgs.size - 1 : 1} image${imgs.size > 1 ? 's' : ''}`,
body: `Generated ${imgs.size > 1 ? imgs.size - opts.return_grid : 1} image${imgs.size > 1 ? 's' : ''}`,
icon: headImg,
image: headImg,
}

View File

@ -1,68 +1,243 @@
// code related to showing and updating progressbar shown as the image is being made
global_progressbars = {}
function check_progressbar(id_part, id_progressbar, id_progressbar_span, id_interrupt, id_preview, id_gallery){
var progressbar = gradioApp().getElementById(id_progressbar)
var interrupt = gradioApp().getElementById(id_interrupt)
if(progressbar && progressbar.offsetParent){
if(progressbar.innerText){
let newtitle = 'Stable Diffusion - ' + progressbar.innerText
if(document.title != newtitle){
document.title = newtitle;
}
}else{
let newtitle = 'Stable Diffusion'
if(document.title != newtitle){
document.title = newtitle;
}
}
galleries = {}
storedGallerySelections = {}
galleryObservers = {}
function rememberGallerySelection(id_gallery){
storedGallerySelections[id_gallery] = getGallerySelectedIndex(id_gallery)
}
if(progressbar!= null && progressbar != global_progressbars[id_progressbar]){
global_progressbars[id_progressbar] = progressbar
function getGallerySelectedIndex(id_gallery){
let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item')
let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2')
var mutationObserver = new MutationObserver(function(m){
preview = gradioApp().getElementById(id_preview)
gallery = gradioApp().getElementById(id_gallery)
let currentlySelectedIndex = -1
galleryButtons.forEach(function(v, i){ if(v==galleryBtnSelected) { currentlySelectedIndex = i } })
if(preview != null && gallery != null){
preview.style.width = gallery.clientWidth + "px"
preview.style.height = gallery.clientHeight + "px"
var progressDiv = gradioApp().querySelectorAll('#' + id_progressbar_span).length > 0;
if(!progressDiv){
interrupt.style.display = "none"
}
return currentlySelectedIndex
}
window.setTimeout(function(){ requestMoreProgress(id_part, id_progressbar_span, id_interrupt) }, 500)
});
mutationObserver.observe( progressbar, { childList:true, subtree:true })
// this is a workaround for https://github.com/gradio-app/gradio/issues/2984
function check_gallery(id_gallery){
let gallery = gradioApp().getElementById(id_gallery)
// if the gallery hasn't changed, there's no need to set up the observer again.
if (gallery && galleries[id_gallery] !== gallery){
galleries[id_gallery] = gallery;
if(galleryObservers[id_gallery]){
galleryObservers[id_gallery].disconnect();
}
storedGallerySelections[id_gallery] = -1
galleryObservers[id_gallery] = new MutationObserver(function (){
let galleryButtons = gradioApp().querySelectorAll('#'+id_gallery+' .gallery-item')
let galleryBtnSelected = gradioApp().querySelector('#'+id_gallery+' .gallery-item.\\!ring-2')
let currentlySelectedIndex = getGallerySelectedIndex(id_gallery)
prevSelectedIndex = storedGallerySelections[id_gallery]
storedGallerySelections[id_gallery] = -1
if (prevSelectedIndex !== -1 && galleryButtons.length>prevSelectedIndex && !galleryBtnSelected) {
// automatically re-open the previously selected index (if it exists)
activeElement = gradioApp().activeElement;
let scrollX = window.scrollX;
let scrollY = window.scrollY;
galleryButtons[prevSelectedIndex].click();
showGalleryImage();
// When the gallery button is clicked, it gains focus and scrolls itself into view
// We need to scroll back to the previous position
setTimeout(function (){
window.scrollTo(scrollX, scrollY);
}, 50);
if(activeElement){
// I fought this for about an hour; I don't know why the focus is lost or why this helps recover it.
// If someone has a better solution, please share it.
setTimeout(function (){
activeElement.focus({
preventScroll: true // Refocus the element that was focused before the gallery was opened without scrolling to it
})
}, 1);
}
}
})
galleryObservers[id_gallery].observe( gallery, { childList:true, subtree:false })
}
}
onUiUpdate(function(){
check_progressbar('txt2img', 'txt2img_progressbar', 'txt2img_progress_span', 'txt2img_interrupt', 'txt2img_preview', 'txt2img_gallery')
check_progressbar('img2img', 'img2img_progressbar', 'img2img_progress_span', 'img2img_interrupt', 'img2img_preview', 'img2img_gallery')
check_progressbar('ti', 'ti_progressbar', 'ti_progress_span', 'ti_interrupt', 'ti_preview', 'ti_gallery')
check_gallery('txt2img_gallery')
check_gallery('img2img_gallery')
})
function requestMoreProgress(id_part, id_progressbar_span, id_interrupt){
btn = gradioApp().getElementById(id_part+"_check_progress");
if(btn==null) return;
function request(url, data, handler, errorHandler){
var xhr = new XMLHttpRequest();
xhr.open("POST", url, true);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function () {
if (xhr.readyState === 4) {
if (xhr.status === 200) {
try {
var js = JSON.parse(xhr.responseText);
handler(js)
} catch (error) {
console.error(error);
errorHandler()
}
} else{
errorHandler()
}
}
};
var js = JSON.stringify(data);
xhr.send(js);
}
btn.click();
var progressDiv = gradioApp().querySelectorAll('#' + id_progressbar_span).length > 0;
var interrupt = gradioApp().getElementById(id_interrupt)
if(progressDiv && interrupt){
interrupt.style.display = "block"
function pad2(x){
return x<10 ? '0'+x : x
}
function formatTime(secs){
if(secs > 3600){
return pad2(Math.floor(secs/60/60)) + ":" + pad2(Math.floor(secs/60)%60) + ":" + pad2(Math.floor(secs)%60)
} else if(secs > 60){
return pad2(Math.floor(secs/60)) + ":" + pad2(Math.floor(secs)%60)
} else{
return Math.floor(secs) + "s"
}
}
function requestProgress(id_part){
btn = gradioApp().getElementById(id_part+"_check_progress_initial");
if(btn==null) return;
function setTitle(progress){
var title = 'Stable Diffusion'
btn.click();
if(opts.show_progress_in_title && progress){
title = '[' + progress.trim() + '] ' + title;
}
if(document.title != title){
document.title = title;
}
}
function randomId(){
return "task(" + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7) + Math.random().toString(36).slice(2, 7)+")"
}
// Starts sending progress requests to the "/internal/progress" URI, creating a progressbar above the progressbarContainer
// element and a live preview inside the gallery element. Cleans up everything it created when the task is over and calls atEnd.
// Calls onProgress every time there is a progress update.
function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgress){
var dateStart = new Date()
var wasEverActive = false
var parentProgressbar = progressbarContainer.parentNode
var parentGallery = gallery ? gallery.parentNode : null
var divProgress = document.createElement('div')
divProgress.className='progressDiv'
divProgress.style.display = opts.show_progressbar ? "" : "none"
var divInner = document.createElement('div')
divInner.className='progress'
divProgress.appendChild(divInner)
parentProgressbar.insertBefore(divProgress, progressbarContainer)
if(parentGallery){
var livePreview = document.createElement('div')
livePreview.className='livePreview'
parentGallery.insertBefore(livePreview, gallery)
}
var removeProgressBar = function(){
setTitle("")
parentProgressbar.removeChild(divProgress)
if(parentGallery) parentGallery.removeChild(livePreview)
atEnd()
}
var fun = function(id_task, id_live_preview){
request("./internal/progress", {"id_task": id_task, "id_live_preview": id_live_preview}, function(res){
if(res.completed){
removeProgressBar()
return
}
var rect = progressbarContainer.getBoundingClientRect()
if(rect.width){
divProgress.style.width = rect.width + "px";
}
progressText = ""
divInner.style.width = ((res.progress || 0) * 100.0) + '%'
divInner.style.background = res.progress ? "" : "transparent"
if(res.progress > 0){
progressText = ((res.progress || 0) * 100.0).toFixed(0) + '%'
}
if(res.eta){
progressText += " ETA: " + formatTime(res.eta)
}
setTitle(progressText)
if(res.textinfo && res.textinfo.indexOf("\n") == -1){
progressText = res.textinfo + " " + progressText
}
divInner.textContent = progressText
var elapsedFromStart = (new Date() - dateStart) / 1000
if(res.active) wasEverActive = true;
if(! res.active && wasEverActive){
removeProgressBar()
return
}
if(elapsedFromStart > 5 && !res.queued && !res.active){
removeProgressBar()
return
}
if(res.live_preview && gallery){
var rect = gallery.getBoundingClientRect()
if(rect.width){
livePreview.style.width = rect.width + "px"
livePreview.style.height = rect.height + "px"
}
var img = new Image();
img.onload = function() {
livePreview.appendChild(img)
if(livePreview.childElementCount > 2){
livePreview.removeChild(livePreview.firstElementChild)
}
}
img.src = res.live_preview;
}
if(onProgress){
onProgress(res)
}
setTimeout(() => {
fun(id_task, res.id_live_preview);
}, opts.live_preview_refresh_period || 500)
}, function(){
removeProgressBar()
})
}
fun(id_task, 0)
}
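The comment above summarizes the polling contract; the same loop can be driven from outside the UI. A minimal sketch as an external Python client, assuming a local instance on the default port (the /internal/progress route is internal, so its fields are treated as best-effort here):

import time
import requests

def poll_progress(id_task, base_url="http://127.0.0.1:7860"):
    id_live_preview = 0
    was_ever_active = False
    while True:
        res = requests.post(base_url + "/internal/progress",
                            json={"id_task": id_task, "id_live_preview": id_live_preview}).json()
        if res.get("active"):
            was_ever_active = True
        # mirror the JS: stop when done, or once a previously active task goes inactive
        if res.get("completed") or (was_ever_active and not res.get("active")):
            return
        print("{:.0f}%  ETA: {}".format((res.get("progress") or 0) * 100, res.get("eta")))
        id_live_preview = res.get("id_live_preview", id_live_preview)
        time.sleep(0.5)  # the UI uses opts.live_preview_refresh_period, defaulting to 500 ms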

View File

@ -1,8 +1,17 @@
function start_training_textual_inversion(){
requestProgress('ti')
gradioApp().querySelector('#ti_error').innerHTML=''
return args_to_array(arguments)
var id = randomId()
requestProgress(id, gradioApp().getElementById('ti_output'), gradioApp().getElementById('ti_gallery'), function(){}, function(progress){
gradioApp().getElementById('ti_progress').innerHTML = progress.textinfo
})
var res = args_to_array(arguments)
res[0] = id
return res
}

View File

@ -1,8 +1,15 @@
// various functions for interation with ui.py not large enough to warrant putting them in separate files
// various functions for interaction with ui.py not large enough to warrant putting them in separate files
function set_theme(theme){
gradioURL = window.location.href
if (!gradioURL.includes('?__theme=')) {
window.location.replace(gradioURL + '?__theme=' + theme);
}
}
function selected_gallery_index(){
var buttons = gradioApp().querySelectorAll('[style="display: block;"].tabitem .gallery-item')
var button = gradioApp().querySelector('[style="display: block;"].tabitem .gallery-item.\\!ring-2')
var buttons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery] .gallery-item')
var button = gradioApp().querySelector('[style="display: block;"].tabitem div[id$=_gallery] .gallery-item.\\!ring-2')
var result = -1
buttons.forEach(function(v, i){ if(v==button) { result = i } })
@ -12,7 +19,7 @@ function selected_gallery_index(){
function extract_image_from_gallery(gallery){
if(gallery.length == 1){
return gallery[0]
return [gallery[0]]
}
index = selected_gallery_index()
@ -21,7 +28,7 @@ function extract_image_from_gallery(gallery){
return [null]
}
return gallery[index];
return [gallery[index]];
}
function args_to_array(args){
@ -33,51 +40,48 @@ function args_to_array(args){
}
function switch_to_txt2img(){
gradioApp().querySelectorAll('button')[0].click();
gradioApp().querySelector('#tabs').querySelectorAll('button')[0].click();
return args_to_array(arguments);
}
function switch_to_img2img_img2img(){
gradioApp().querySelectorAll('button')[1].click();
gradioApp().getElementById('mode_img2img').querySelectorAll('button')[0].click();
function switch_to_img2img_tab(no){
gradioApp().querySelector('#tabs').querySelectorAll('button')[1].click();
gradioApp().getElementById('mode_img2img').querySelectorAll('button')[no].click();
}
function switch_to_img2img(){
switch_to_img2img_tab(0);
return args_to_array(arguments);
}
function switch_to_img2img_inpaint(){
gradioApp().querySelectorAll('button')[1].click();
gradioApp().getElementById('mode_img2img').querySelectorAll('button')[1].click();
function switch_to_sketch(){
switch_to_img2img_tab(1);
return args_to_array(arguments);
}
function switch_to_inpaint(){
switch_to_img2img_tab(2);
return args_to_array(arguments);
}
function switch_to_inpaint_sketch(){
switch_to_img2img_tab(3);
return args_to_array(arguments);
}
function switch_to_inpaint(){
gradioApp().querySelector('#tabs').querySelectorAll('button')[1].click();
gradioApp().getElementById('mode_img2img').querySelectorAll('button')[2].click();
return args_to_array(arguments);
}
function switch_to_extras(){
gradioApp().querySelectorAll('button')[2].click();
gradioApp().querySelector('#tabs').querySelectorAll('button')[2].click();
return args_to_array(arguments);
}
function extract_image_from_gallery_txt2img(gallery){
switch_to_txt2img()
return extract_image_from_gallery(gallery);
}
function extract_image_from_gallery_img2img(gallery){
switch_to_img2img_img2img()
return extract_image_from_gallery(gallery);
}
function extract_image_from_gallery_inpaint(gallery){
switch_to_img2img_inpaint()
return extract_image_from_gallery(gallery);
}
function extract_image_from_gallery_extras(gallery){
switch_to_extras()
return extract_image_from_gallery(gallery);
}
function get_tab_index(tabId){
var res = 0
@ -100,8 +104,11 @@ function create_tab_index_args(tabId, args){
return res
}
function get_extras_tab_index(){
return create_tab_index_args('mode_extras', arguments)
function get_img2img_tab_index() {
let res = args_to_array(arguments)
res.splice(-2)
res[0] = get_tab_index('mode_img2img')
return res
}
function create_submit_args(args){
@ -112,7 +119,7 @@ function create_submit_args(args){
// As it is currently, txt2img and img2img send back the previous output args (txt2img_gallery, generation_info, html_info) whenever you generate a new image.
// This can lead to uploading a huge gallery of previously generated images, which leads to an unnecessary delay between submitting and beginning to generate.
// I don't know why gradio is seding outputs along with inputs, but we can prevent sending the image gallery here, which seems to be an issue for some.
// I don't know why gradio is sending outputs along with inputs, but we can prevent sending the image gallery here, which seems to be an issue for some.
// If gradio at some point stops sending outputs, this may break something
if(Array.isArray(res[res.length - 3])){
res[res.length - 3] = null
@ -121,49 +128,102 @@ function create_submit_args(args){
return res
}
function submit(){
requestProgress('txt2img')
function showSubmitButtons(tabname, show){
gradioApp().getElementById(tabname+'_interrupt').style.display = show ? "none" : "block"
gradioApp().getElementById(tabname+'_skip').style.display = show ? "none" : "block"
}
return create_submit_args(arguments)
function submit(){
rememberGallerySelection('txt2img_gallery')
showSubmitButtons('txt2img', false)
var id = randomId()
requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function(){
showSubmitButtons('txt2img', true)
})
var res = create_submit_args(arguments)
res[0] = id
return res
}
function submit_img2img(){
requestProgress('img2img')
rememberGallerySelection('img2img_gallery')
showSubmitButtons('img2img', false)
res = create_submit_args(arguments)
var id = randomId()
requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function(){
showSubmitButtons('img2img', true)
})
res[0] = get_tab_index('mode_img2img')
var res = create_submit_args(arguments)
res[0] = id
res[1] = get_tab_index('mode_img2img')
return res
}
function modelmerger(){
var id = randomId()
requestProgress(id, gradioApp().getElementById('modelmerger_results_panel'), null, function(){})
var res = create_submit_args(arguments)
res[0] = id
return res
}
function ask_for_style_name(_, prompt_text, negative_prompt_text) {
name_ = prompt('Style name:')
return name_ === null ? [null, null, null]: [name_, prompt_text, negative_prompt_text]
return [name_, prompt_text, negative_prompt_text]
}
function confirm_clear_prompt(prompt, negative_prompt) {
if(confirm("Delete prompt?")) {
prompt = ""
negative_prompt = ""
}
return [prompt, negative_prompt]
}
promptTokecountUpdateFuncs = {}
function recalculatePromptTokens(name){
if(promptTokecountUpdateFuncs[name]){
promptTokecountUpdateFuncs[name]()
}
}
function recalculate_prompts_txt2img(){
recalculatePromptTokens('txt2img_prompt')
recalculatePromptTokens('txt2img_neg_prompt')
return args_to_array(arguments);
}
function recalculate_prompts_img2img(){
recalculatePromptTokens('img2img_prompt')
recalculatePromptTokens('img2img_neg_prompt')
return args_to_array(arguments);
}
opts = {}
function apply_settings(jsdata){
console.log(jsdata)
opts = JSON.parse(jsdata)
return jsdata
}
onUiUpdate(function(){
if(Object.keys(opts).length != 0) return;
json_elem = gradioApp().getElementById('settings_json')
if(json_elem == null) return;
textarea = json_elem.querySelector('textarea')
jsdata = textarea.value
var textarea = json_elem.querySelector('textarea')
var jsdata = textarea.value
opts = JSON.parse(jsdata)
executeCallbacks(optionsChangedCallbacks);
Object.defineProperty(textarea, 'value', {
set: function(newValue) {
@ -174,6 +234,8 @@ onUiUpdate(function(){
if (oldValue != newValue) {
opts = JSON.parse(textarea.value)
}
executeCallbacks(optionsChangedCallbacks);
},
get: function() {
var valueProp = Object.getOwnPropertyDescriptor(HTMLTextAreaElement.prototype, 'value');
@ -183,21 +245,55 @@ onUiUpdate(function(){
json_elem.parentElement.style.display="none"
if (!txt2img_textarea) {
txt2img_textarea = gradioApp().querySelector("#txt2img_prompt > label > textarea");
txt2img_textarea?.addEventListener("input", () => update_token_counter("txt2img_token_button"));
txt2img_textarea?.addEventListener("keyup", (event) => submit_prompt(event, "txt2img_generate"));
function registerTextarea(id, id_counter, id_button){
var prompt = gradioApp().getElementById(id)
var counter = gradioApp().getElementById(id_counter)
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
if(counter.parentElement == prompt.parentElement){
return
}
if (!img2img_textarea) {
img2img_textarea = gradioApp().querySelector("#img2img_prompt > label > textarea");
img2img_textarea?.addEventListener("input", () => update_token_counter("img2img_token_button"));
img2img_textarea?.addEventListener("keyup", (event) => submit_prompt(event, "img2img_generate"));
prompt.parentElement.insertBefore(counter, prompt)
counter.classList.add("token-counter")
prompt.parentElement.style.position = "relative"
promptTokecountUpdateFuncs[id] = function(){ update_token_counter(id_button); }
textarea.addEventListener("input", promptTokecountUpdateFuncs[id]);
}
registerTextarea('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button')
registerTextarea('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button')
registerTextarea('img2img_prompt', 'img2img_token_counter', 'img2img_token_button')
registerTextarea('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button')
show_all_pages = gradioApp().getElementById('settings_show_all_pages')
settings_tabs = gradioApp().querySelector('#settings div')
if(show_all_pages && settings_tabs){
settings_tabs.appendChild(show_all_pages)
show_all_pages.onclick = function(){
gradioApp().querySelectorAll('#settings > div').forEach(function(elem){
elem.style.display = "block";
})
}
}
})
onOptionsChanged(function(){
elem = gradioApp().getElementById('sd_checkpoint_hash')
sd_checkpoint_hash = opts.sd_checkpoint_hash || ""
shorthash = sd_checkpoint_hash.substr(0,10)
if(elem && elem.textContent != shorthash){
elem.textContent = shorthash
elem.title = sd_checkpoint_hash
elem.href = "https://google.com/search?q=" + sd_checkpoint_hash
}
})
let txt2img_textarea, img2img_textarea = undefined;
let wait_time = 800
let token_timeout;
let token_timeouts = {};
function update_txt2img_tokens(...args) {
update_token_counter("txt2img_token_button")
@ -214,20 +310,29 @@ function update_img2img_tokens(...args) {
}
function update_token_counter(button_id) {
if (token_timeout)
clearTimeout(token_timeout);
token_timeout = setTimeout(() => gradioApp().getElementById(button_id)?.click(), wait_time);
}
function submit_prompt(event, generate_button_id) {
if (event.altKey && event.keyCode === 13) {
event.preventDefault();
gradioApp().getElementById(generate_button_id).click();
return;
}
if (token_timeouts[button_id])
clearTimeout(token_timeouts[button_id]);
token_timeouts[button_id] = setTimeout(() => gradioApp().getElementById(button_id)?.click(), wait_time);
}
function restart_reload(){
document.body.innerHTML='<h1 style="font-family:monospace;margin-top:20%;color:lightgray;text-align:center;">Reloading...</h1>';
setTimeout(function(){location.reload()},2000)
return []
}
// Simulate an `input` DOM event for a Gradio Textbox component. Needed after you edit its contents in javascript; otherwise your edits
// will only be visible on the web page and will not be sent to python.
function updateInput(target){
let e = new Event("input", { bubbles: true })
Object.defineProperty(e, "target", {value: target})
target.dispatchEvent(e);
}
var desiredCheckpointName = null;
function selectCheckpoint(name){
desiredCheckpointName = name;
gradioApp().getElementById('change_checkpoint').click()
}

313
launch.py
View File

@ -4,44 +4,95 @@ import os
import sys
import importlib.util
import shlex
import platform
import argparse
import json
dir_repos = "repositories"
dir_tmp = "tmp"
dir_extensions = "extensions"
python = sys.executable
git = os.environ.get('GIT', "git")
torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
index_url = os.environ.get('INDEX_URL', "")
stored_commit_hash = None
skip_install = False
gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "f4e99857772fc3a126ba886aadf795a332774878")
codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
def check_python_version():
is_windows = platform.system() == "Windows"
major = sys.version_info.major
minor = sys.version_info.minor
micro = sys.version_info.micro
args = shlex.split(commandline_args)
if is_windows:
supported_minors = [10]
else:
supported_minors = [7, 8, 9, 10, 11]
if not (major == 3 and minor in supported_minors):
import modules.errors
modules.errors.print_error_explanation(f"""
INCOMPATIBLE PYTHON VERSION
This program is tested with Python 3.10.6, but you have {major}.{minor}.{micro}.
If you encounter an error with the message "RuntimeError: Couldn't install torch.",
or any other error regarding an unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of Python 3.10,
and delete the current Python "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3109/
{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases" if is_windows else ""}
Use --skip-python-version-check to suppress this warning.
""")
def commit_hash():
global stored_commit_hash
if stored_commit_hash is not None:
return stored_commit_hash
try:
stored_commit_hash = run(f"{git} rev-parse HEAD").strip()
except Exception:
stored_commit_hash = "<none>"
return stored_commit_hash
def extract_arg(args, name):
return [x for x in args if x != name], name in args
args, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')
def extract_opt(args, name):
opt = None
is_present = False
if name in args:
is_present = True
idx = args.index(name)
del args[idx]
if idx < len(args) and args[idx][0] != "-":
opt = args[idx]
del args[idx]
return args, is_present, opt
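# Illustrative behaviour of the two helpers above (examples, not part of launch.py):
#   extract_arg(['a', '--update-check', 'b'], '--update-check')  ->  (['a', 'b'], True)
#   extract_opt(['--tests', 'testdir', 'b'], '--tests')          ->  (['b'], True, 'testdir')
#   extract_opt(['--tests', '--exit'], '--tests')                ->  (['--exit'], True, None)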
def repo_dir(name):
return os.path.join(dir_repos, name)
def run(command, desc=None, errdesc=None):
def run(command, desc=None, errdesc=None, custom_env=None, live=False):
if desc is not None:
print(desc)
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
if live:
result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env)
if result.returncode != 0:
raise RuntimeError(f"""{errdesc or 'Error running command'}.
Command: {command}
Error code: {result.returncode}""")
return ""
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env)
if result.returncode != 0:
@ -56,23 +107,11 @@ stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.st
return result.stdout.decode(encoding="utf8", errors="ignore")
def run_python(code, desc=None, errdesc=None):
return run(f'"{python}" -c "{code}"', desc, errdesc)
def run_pip(args, desc=None):
return run(f'"{python}" -m pip {args} --prefer-binary', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
def check_run(command):
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
return result.returncode == 0
def check_run_python(code):
return check_run(f'"{python}" -c "{code}"')
def is_installed(package):
try:
spec = importlib.util.find_spec(package)
@ -82,6 +121,26 @@ def is_installed(package):
return spec is not None
def repo_dir(name):
return os.path.join(dir_repos, name)
def run_python(code, desc=None, errdesc=None):
return run(f'"{python}" -c "{code}"', desc, errdesc)
def run_pip(args, desc=None):
if skip_install:
return
index_url_line = f' --index-url {index_url}' if index_url != '' else ''
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
def check_run_python(code):
return check_run(f'"{python}" -c "{code}"')
def git_clone(url, dir, name, commithash=None):
# TODO clone into temporary dir and move if successful
@ -89,31 +148,125 @@ def git_clone(url, dir, name, commithash=None):
if commithash is None:
return
current_hash = run(f'"{git}" -C {dir} rev-parse HEAD', None, "Couldn't determine {name}'s hash: {commithash}").strip()
current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}").strip()
if current_hash == commithash:
return
run(f'"{git}" -C {dir} fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}")
run(f'"{git}" -C {dir} checkout {commithash}', f"Checking out commint for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}")
run(f'"{git}" -C "{dir}" fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}")
run(f'"{git}" -C "{dir}" checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}")
return
run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
if commithash is not None:
run(f'"{git}" -C {dir} checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
def version_check(commit):
try:
import requests
commits = requests.get('https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/branches/master').json()
if commit != "<none>" and commits['commit']['sha'] != commit:
print("--------------------------------------------------------")
print("| You are not up to date with the most recent release. |")
print("| Consider running `git pull` to update. |")
print("--------------------------------------------------------")
elif commits['commit']['sha'] == commit:
print("You are up to date with the most recent release.")
else:
print("Not a git clone, can't perform version check.")
except Exception as e:
print("version check failed", e)
def run_extension_installer(extension_dir):
path_installer = os.path.join(extension_dir, "install.py")
if not os.path.isfile(path_installer):
return
try:
commit = run(f"{git} rev-parse HEAD").strip()
except Exception:
commit = "<none>"
env = os.environ.copy()
env['PYTHONPATH'] = os.path.abspath(".")
print(run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env))
except Exception as e:
print(e, file=sys.stderr)
def list_extensions(settings_file):
settings = {}
try:
if os.path.isfile(settings_file):
with open(settings_file, "r", encoding="utf8") as file:
settings = json.load(file)
except Exception as e:
print(e, file=sys.stderr)
disabled_extensions = set(settings.get('disabled_extensions', []))
return [x for x in os.listdir(dir_extensions) if x not in disabled_extensions]
def run_extensions_installers(settings_file):
if not os.path.isdir(dir_extensions):
return
for dirname_extension in list_extensions(settings_file):
run_extension_installer(os.path.join(dir_extensions, dirname_extension))
def prepare_environment():
global skip_install
torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.16rc425')
gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
openclip_package = os.environ.get('OPENCLIP_PACKAGE', "git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b")
stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
taming_transformers_repo = os.environ.get('TAMING_TRANSFORMERS_REPO', "https://github.com/CompVis/taming-transformers.git")
k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git')
blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "47b6b607fdd31875c9279cd2f4f16b92e4ea958e")
taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "5b3af030dd83e0297272d861c19477735d0317ec")
codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
sys.argv += shlex.split(commandline_args)
parser = argparse.ArgumentParser()
parser.add_argument("--ui-settings-file", type=str, help="filename to use for ui settings", default='config.json')
args, _ = parser.parse_known_args(sys.argv)
sys.argv, _ = extract_arg(sys.argv, '-f')
sys.argv, skip_torch_cuda_test = extract_arg(sys.argv, '--skip-torch-cuda-test')
sys.argv, skip_python_version_check = extract_arg(sys.argv, '--skip-python-version-check')
sys.argv, reinstall_xformers = extract_arg(sys.argv, '--reinstall-xformers')
sys.argv, reinstall_torch = extract_arg(sys.argv, '--reinstall-torch')
sys.argv, update_check = extract_arg(sys.argv, '--update-check')
sys.argv, run_tests, test_dir = extract_opt(sys.argv, '--tests')
sys.argv, skip_install = extract_arg(sys.argv, '--skip-install')
xformers = '--xformers' in sys.argv
ngrok = '--ngrok' in sys.argv
if not skip_python_version_check:
check_python_version()
commit = commit_hash()
print(f"Python {sys.version}")
print(f"Commit hash: {commit}")
if not is_installed("torch") or not is_installed("torchvision"):
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch")
if reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
if not skip_torch_cuda_test:
run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
@ -124,29 +277,85 @@ if not is_installed("gfpgan"):
if not is_installed("clip"):
run_pip(f"install {clip_package}", "clip")
if not is_installed("open_clip"):
run_pip(f"install {openclip_package}", "open_clip")
if (not is_installed("xformers") or reinstall_xformers) and xformers:
if platform.system() == "Windows":
if platform.python_version().startswith("3.10"):
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
else:
print("Installation of xformers is not supported in this version of Python.")
print("You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness")
if not is_installed("xformers"):
exit(0)
elif platform.system() == "Linux":
run_pip(f"install {xformers_package}", "xformers")
if not is_installed("pyngrok") and ngrok:
run_pip("install pyngrok", "ngrok")
os.makedirs(dir_repos, exist_ok=True)
git_clone("https://github.com/CompVis/stable-diffusion.git", repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
git_clone("https://github.com/CompVis/taming-transformers.git", repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
git_clone("https://github.com/crowsonkb/k-diffusion.git", repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
git_clone("https://github.com/sczhou/CodeFormer.git", repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
git_clone("https://github.com/salesforce/BLIP.git", repo_dir('BLIP'), "BLIP", blip_commit_hash)
git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)
if not is_installed("lpips"):
run_pip(f"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}", "requirements for CodeFormer")
run_pip(f"install -r {requirements_file}", "requirements for Web UI")
sys.argv += args
run_extensions_installers(settings_file=args.ui_settings_file)
if "--exit" in args:
if update_check:
version_check(commit)
if "--exit" in sys.argv:
print("Exiting because of --exit argument")
exit(0)
def start_webui():
print(f"Launching Web UI with arguments: {' '.join(sys.argv[1:])}")
if run_tests:
exitcode = tests(test_dir)
exit(exitcode)
def tests(test_dir):
if "--api" not in sys.argv:
sys.argv.append("--api")
if "--ckpt" not in sys.argv:
sys.argv.append("--ckpt")
sys.argv.append("./test/test_files/empty.pt")
if "--skip-torch-cuda-test" not in sys.argv:
sys.argv.append("--skip-torch-cuda-test")
if "--disable-nan-check" not in sys.argv:
sys.argv.append("--disable-nan-check")
print(f"Launching Web UI in another process for testing with arguments: {' '.join(sys.argv[1:])}")
os.environ['COMMANDLINE_ARGS'] = ""
with open('test/stdout.txt', "w", encoding="utf8") as stdout, open('test/stderr.txt', "w", encoding="utf8") as stderr:
proc = subprocess.Popen([sys.executable, *sys.argv], stdout=stdout, stderr=stderr)
import test.server_poll
exitcode = test.server_poll.run_tests(proc, test_dir)
print(f"Stopping Web UI process with id {proc.pid}")
proc.kill()
return exitcode
def start():
print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {' '.join(sys.argv[1:])}")
import webui
if '--nowebui' in sys.argv:
webui.api_only()
else:
webui.webui()
if __name__ == "__main__":
start_webui()
prepare_environment()
start()
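Everything prepare_environment() installs is driven by the environment variables read above, so a deployment can pin or swap packages without editing launch.py; a minimal sketch of a wrapper doing so (the hash value is copied from the defaults above, the wrapper itself is illustrative):

import os
import subprocess
import sys

env = os.environ.copy()
# Any variable read in prepare_environment() can be overridden this way.
env["COMMANDLINE_ARGS"] = "--xformers --api"
env["K_DIFFUSION_COMMIT_HASH"] = "5b3af030dd83e0297272d861c19477735d0317ec"
env["INDEX_URL"] = ""  # set to a pip mirror to redirect all run_pip() installs

subprocess.run([sys.executable, "launch.py"], env=env)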

BIN
models/VAE-approx/model.pt Normal file

Binary file not shown.

View File

551
modules/api/api.py Normal file
View File

@ -0,0 +1,551 @@
import base64
import io
import time
import datetime
import uvicorn
from threading import Lock
from io import BytesIO
from gradio.processing_utils import decode_base64_to_file
from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request, Response
from fastapi.security import HTTPBasic, HTTPBasicCredentials
from secrets import compare_digest
import modules.shared as shared
from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing
from modules.api.models import *
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
from modules.textual_inversion.textual_inversion import create_embedding, train_embedding
from modules.textual_inversion.preprocess import preprocess
from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork
from PIL import PngImagePlugin,Image
from modules.sd_models import checkpoints_list
from modules.sd_models_config import find_checkpoint_config_near_filename
from modules.realesrgan_model import get_realesrgan_models
from modules import devices
from typing import List
import piexif
import piexif.helper
def upscaler_to_index(name: str):
try:
return [x.name.lower() for x in shared.sd_upscalers].index(name.lower())
except:
raise HTTPException(status_code=400, detail=f"Invalid upscaler, needs to be one of these: {', '.join([x.name for x in shared.sd_upscalers])}")
def script_name_to_index(name, scripts):
try:
return [script.title().lower() for script in scripts].index(name.lower())
except:
raise HTTPException(status_code=422, detail=f"Script '{name}' not found")
def validate_sampler_name(name):
config = sd_samplers.all_samplers_map.get(name, None)
if config is None:
raise HTTPException(status_code=404, detail="Sampler not found")
return name
def setUpscalers(req: dict):
reqDict = vars(req)
reqDict['extras_upscaler_1'] = reqDict.pop('upscaler_1', None)
reqDict['extras_upscaler_2'] = reqDict.pop('upscaler_2', None)
return reqDict
def decode_base64_to_image(encoding):
if encoding.startswith("data:image/"):
encoding = encoding.split(";")[1].split(",")[1]
try:
image = Image.open(BytesIO(base64.b64decode(encoding)))
return image
except Exception as err:
raise HTTPException(status_code=500, detail="Invalid encoded image")
def encode_pil_to_base64(image):
with io.BytesIO() as output_bytes:
if opts.samples_format.lower() == 'png':
use_metadata = False
metadata = PngImagePlugin.PngInfo()
for key, value in image.info.items():
if isinstance(key, str) and isinstance(value, str):
metadata.add_text(key, value)
use_metadata = True
image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)
elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
parameters = image.info.get('parameters', None)
exif_bytes = piexif.dump({
"Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") }
})
if opts.samples_format.lower() in ("jpg", "jpeg"):
image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality)
else:
image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality)
else:
raise HTTPException(status_code=500, detail="Invalid image format")
bytes_data = output_bytes.getvalue()
return base64.b64encode(bytes_data)
def api_middleware(app: FastAPI):
@app.middleware("http")
async def log_and_time(req: Request, call_next):
ts = time.time()
res: Response = await call_next(req)
duration = str(round(time.time() - ts, 4))
res.headers["X-Process-Time"] = duration
endpoint = req.scope.get('path', 'err')
if shared.cmd_opts.api_log and endpoint.startswith('/sdapi'):
print('API {t} {code} {prot}/{ver} {method} {endpoint} {cli} {duration}'.format(
t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"),
code = res.status_code,
ver = req.scope.get('http_version', '0.0'),
cli = req.scope.get('client', ('0:0.0.0', 0))[0],
prot = req.scope.get('scheme', 'err'),
method = req.scope.get('method', 'err'),
endpoint = endpoint,
duration = duration,
))
return res
class Api:
def __init__(self, app: FastAPI, queue_lock: Lock):
if shared.cmd_opts.api_auth:
self.credentials = dict()
for auth in shared.cmd_opts.api_auth.split(","):
user, password = auth.split(":")
self.credentials[user] = password
self.router = APIRouter()
self.app = app
self.queue_lock = queue_lock
api_middleware(self.app)
self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=TextToImageResponse)
self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=ImageToImageResponse)
self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=ExtrasSingleImageResponse)
self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=ExtrasBatchImagesResponse)
self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=PNGInfoResponse)
self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=ProgressResponse)
self.add_api_route("/sdapi/v1/interrogate", self.interrogateapi, methods=["POST"])
self.add_api_route("/sdapi/v1/interrupt", self.interruptapi, methods=["POST"])
self.add_api_route("/sdapi/v1/skip", self.skip, methods=["POST"])
self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=OptionsModel)
self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=FlagsModel)
self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[SamplerItem])
self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[UpscalerItem])
self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[SDModelItem])
self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[HypernetworkItem])
self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[FaceRestorerItem])
self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[RealesrganItem])
self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[PromptStyleItem])
self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=EmbeddingsResponse)
self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=CreateResponse)
self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=CreateResponse)
self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=PreprocessResponse)
self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=TrainResponse)
self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=TrainResponse)
self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=MemoryResponse)
def add_api_route(self, path: str, endpoint, **kwargs):
if shared.cmd_opts.api_auth:
return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs)
return self.app.add_api_route(path, endpoint, **kwargs)
def auth(self, credentials: HTTPBasicCredentials = Depends(HTTPBasic())):
if credentials.username in self.credentials:
if compare_digest(credentials.password, self.credentials[credentials.username]):
return True
raise HTTPException(status_code=401, detail="Incorrect username or password", headers={"WWW-Authenticate": "Basic"})
def get_script(self, script_name, script_runner):
if script_name is None:
return None, None
if not script_runner.scripts:
script_runner.initialize_scripts(False)
ui.create_ui()
script_idx = script_name_to_index(script_name, script_runner.selectable_scripts)
script = script_runner.selectable_scripts[script_idx]
return script, script_idx
def text2imgapi(self, txt2imgreq: StableDiffusionTxt2ImgProcessingAPI):
script, script_idx = self.get_script(txt2imgreq.script_name, scripts.scripts_txt2img)
populate = txt2imgreq.copy(update={ # Override __init__ params
"sampler_name": validate_sampler_name(txt2imgreq.sampler_name or txt2imgreq.sampler_index),
"do_not_save_samples": True,
"do_not_save_grid": True
}
)
if populate.sampler_name:
populate.sampler_index = None # prevent a warning later on
args = vars(populate)
args.pop('script_name', None)
with self.queue_lock:
p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
shared.state.begin()
if script is not None:
p.outpath_grids = opts.outdir_txt2img_grids
p.outpath_samples = opts.outdir_txt2img_samples
p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args
processed = scripts.scripts_txt2img.run(p, *p.script_args)
else:
processed = process_images(p)
shared.state.end()
b64images = list(map(encode_pil_to_base64, processed.images))
return TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js())
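Put together, text2imgapi accepts the StableDiffusionProcessingTxt2Img fields as a JSON body and returns base64-encoded images; a minimal sketch of a client round-trip, assuming a local instance started with --api (and no --api-auth):

import base64
import io

import requests
from PIL import Image

payload = {"prompt": "a red apple on a table", "steps": 20, "width": 512, "height": 512}
res = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload).json()

# images mirror encode_pil_to_base64() above: one base64 string per image
for i, b64 in enumerate(res["images"]):
    Image.open(io.BytesIO(base64.b64decode(b64))).save(f"txt2img-{i}.png")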
def img2imgapi(self, img2imgreq: StableDiffusionImg2ImgProcessingAPI):
init_images = img2imgreq.init_images
if init_images is None:
raise HTTPException(status_code=404, detail="Init image not found")
script, script_idx = self.get_script(img2imgreq.script_name, scripts.scripts_img2img)
mask = img2imgreq.mask
if mask:
mask = decode_base64_to_image(mask)
populate = img2imgreq.copy(update={ # Override __init__ params
"sampler_name": validate_sampler_name(img2imgreq.sampler_name or img2imgreq.sampler_index),
"do_not_save_samples": True,
"do_not_save_grid": True,
"mask": mask
}
)
if populate.sampler_name:
populate.sampler_index = None # prevent a warning later on
args = vars(populate)
args.pop('include_init_images', None) # this is meant to be handled by "exclude": True in the model, but that does not work for a reason I cannot determine
args.pop('script_name', None)
with self.queue_lock:
p = StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)
p.init_images = [decode_base64_to_image(x) for x in init_images]
shared.state.begin()
if script is not None:
p.outpath_grids = opts.outdir_img2img_grids
p.outpath_samples = opts.outdir_img2img_samples
p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args
processed = scripts.scripts_img2img.run(p, *p.script_args)
else:
processed = process_images(p)
shared.state.end()
b64images = list(map(encode_pil_to_base64, processed.images))
if not img2imgreq.include_init_images:
img2imgreq.init_images = None
img2imgreq.mask = None
return ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js())
def extras_single_image_api(self, req: ExtrasSingleImageRequest):
reqDict = setUpscalers(req)
reqDict['image'] = decode_base64_to_image(reqDict['image'])
with self.queue_lock:
result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict)
return ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1])
def extras_batch_images_api(self, req: ExtrasBatchImagesRequest):
reqDict = setUpscalers(req)
def prepareFiles(file):
file = decode_base64_to_file(file.data, file_path=file.name)
file.orig_name = file.name
return file
reqDict['image_folder'] = list(map(prepareFiles, reqDict['imageList']))
reqDict.pop('imageList')
with self.queue_lock:
result = postprocessing.run_extras(extras_mode=1, image="", input_dir="", output_dir="", save_output=False, **reqDict)
return ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1])
def pnginfoapi(self, req: PNGInfoRequest):
if(not req.image.strip()):
return PNGInfoResponse(info="")
image = decode_base64_to_image(req.image.strip())
if image is None:
return PNGInfoResponse(info="")
geninfo, items = images.read_info_from_image(image)
if geninfo is None:
geninfo = ""
items = {**{'parameters': geninfo}, **items}
return PNGInfoResponse(info=geninfo, items=items)
def progressapi(self, req: ProgressRequest = Depends()):
# copied from check_progress_call in ui.py
if shared.state.job_count == 0:
return ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), textinfo=shared.state.textinfo)
# avoid division by zero
progress = 0.01
if shared.state.job_count > 0:
progress += shared.state.job_no / shared.state.job_count
if shared.state.sampling_steps > 0:
progress += 1 / shared.state.job_count * shared.state.sampling_step / shared.state.sampling_steps
time_since_start = time.time() - shared.state.time_start
eta = (time_since_start/progress)
eta_relative = eta-time_since_start
progress = min(progress, 1)
shared.state.set_current_image()
current_image = None
if shared.state.current_image and not req.skip_current_image:
current_image = encode_pil_to_base64(shared.state.current_image)
return ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo)
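The handler above extrapolates total runtime from the fraction of work completed; a worked example of that arithmetic with hypothetical numbers (3 jobs, currently on job index 1, step 10 of 20, 30 s elapsed):

job_no, job_count = 1, 3
sampling_step, sampling_steps = 10, 20
time_since_start = 30.0

progress = 0.01                                                 # floor, avoids division by zero
progress += job_no / job_count                                  # finished jobs: +0.333
progress += (1 / job_count) * (sampling_step / sampling_steps)  # current job: +0.167
eta = time_since_start / progress                               # projected total: ~58.8 s
eta_relative = eta - time_since_start                           # remaining: ~28.8 s
print(round(progress, 3), round(eta_relative, 1))               # 0.51 28.8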
def interrogateapi(self, interrogatereq: InterrogateRequest):
image_b64 = interrogatereq.image
if image_b64 is None:
raise HTTPException(status_code=404, detail="Image not found")
img = decode_base64_to_image(image_b64)
img = img.convert('RGB')
# Override object param
with self.queue_lock:
if interrogatereq.model == "clip":
processed = shared.interrogator.interrogate(img)
elif interrogatereq.model == "deepdanbooru":
processed = deepbooru.model.tag(img)
else:
raise HTTPException(status_code=404, detail="Model not found")
return InterrogateResponse(caption=processed)
def interruptapi(self):
shared.state.interrupt()
return {}
def skip(self):
shared.state.skip()
def get_config(self):
options = {}
for key in shared.opts.data.keys():
metadata = shared.opts.data_labels.get(key)
if(metadata is not None):
options.update({key: shared.opts.data.get(key, shared.opts.data_labels.get(key).default)})
else:
options.update({key: shared.opts.data.get(key, None)})
return options
def set_config(self, req: Dict[str, Any]):
for k, v in req.items():
shared.opts.set(k, v)
shared.opts.save(shared.config_filename)
return
def get_cmd_flags(self):
return vars(shared.cmd_opts)
def get_samplers(self):
return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers]
def get_upscalers(self):
return [
{
"name": upscaler.name,
"model_name": upscaler.scaler.model_name,
"model_path": upscaler.data_path,
"model_url": None,
"scale": upscaler.scale,
}
for upscaler in shared.sd_upscalers
]
def get_sd_models(self):
return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()]
def get_hypernetworks(self):
return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks]
def get_face_restorers(self):
return [{"name":x.name(), "cmd_dir": getattr(x, "cmd_dir", None)} for x in shared.face_restorers]
def get_realesrgan_models(self):
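# note: the call below resolves to the module-level get_realesrgan_models imported at the top of api.py, not to this method (bare names inside a method body do not see the class namespace)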
return [{"name":x.name,"path":x.data_path, "scale":x.scale} for x in get_realesrgan_models(None)]
def get_prompt_styles(self):
styleList = []
for k in shared.prompt_styles.styles:
style = shared.prompt_styles.styles[k]
styleList.append({"name":style[0], "prompt": style[1], "negative_prompt": style[2]})
return styleList
def get_embeddings(self):
db = sd_hijack.model_hijack.embedding_db
def convert_embedding(embedding):
return {
"step": embedding.step,
"sd_checkpoint": embedding.sd_checkpoint,
"sd_checkpoint_name": embedding.sd_checkpoint_name,
"shape": embedding.shape,
"vectors": embedding.vectors,
}
def convert_embeddings(embeddings):
return {embedding.name: convert_embedding(embedding) for embedding in embeddings.values()}
return {
"loaded": convert_embeddings(db.word_embeddings),
"skipped": convert_embeddings(db.skipped_embeddings),
}
def refresh_checkpoints(self):
shared.refresh_checkpoints()
def create_embedding(self, args: dict):
try:
shared.state.begin()
filename = create_embedding(**args) # create empty embedding
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings() # reload embeddings so new one can be immediately used
shared.state.end()
return CreateResponse(info = "create embedding filename: {filename}".format(filename = filename))
except AssertionError as e:
shared.state.end()
return TrainResponse(info = "create embedding error: {error}".format(error = e))
def create_hypernetwork(self, args: dict):
try:
shared.state.begin()
filename = create_hypernetwork(**args) # create empty hypernetwork
shared.state.end()
return CreateResponse(info = "create hypernetwork filename: {filename}".format(filename = filename))
except AssertionError as e:
shared.state.end()
return TrainResponse(info = "create hypernetwork error: {error}".format(error = e))
def preprocess(self, args: dict):
try:
shared.state.begin()
preprocess(**args) # quick operation unless blip/booru interrogation is enabled
shared.state.end()
return PreprocessResponse(info = 'preprocess complete')
except KeyError as e:
shared.state.end()
return PreprocessResponse(info = "preprocess error: invalid token: {error}".format(error = e))
except AssertionError as e:
shared.state.end()
return PreprocessResponse(info = "preprocess error: {error}".format(error = e))
except FileNotFoundError as e:
shared.state.end()
return PreprocessResponse(info = 'preprocess error: {error}'.format(error = e))
def train_embedding(self, args: dict):
try:
shared.state.begin()
apply_optimizations = shared.opts.training_xattention_optimizations
error = None
filename = ''
if not apply_optimizations:
sd_hijack.undo_optimizations()
try:
embedding, filename = train_embedding(**args) # can take a long time to complete
except Exception as e:
error = e
finally:
if not apply_optimizations:
sd_hijack.apply_optimizations()
shared.state.end()
return TrainResponse(info = "train embedding complete: filename: {filename} error: {error}".format(filename = filename, error = error))
except AssertionError as msg:
shared.state.end()
return TrainResponse(info = "train embedding error: {msg}".format(msg = msg))
def train_hypernetwork(self, args: dict):
try:
shared.state.begin()
shared.loaded_hypernetworks = []
apply_optimizations = shared.opts.training_xattention_optimizations
error = None
filename = ''
if not apply_optimizations:
sd_hijack.undo_optimizations()
try:
hypernetwork, filename = train_hypernetwork(**args)
except Exception as e:
error = e
finally:
shared.sd_model.cond_stage_model.to(devices.device)
shared.sd_model.first_stage_model.to(devices.device)
if not apply_optimizations:
sd_hijack.apply_optimizations()
shared.state.end()
return TrainResponse(info="train embedding complete: filename: {filename} error: {error}".format(filename=filename, error=error))
except AssertionError as msg:
shared.state.end()
return TrainResponse(info="train embedding error: {error}".format(error=error))
def get_memory(self):
try:
import os, psutil
process = psutil.Process(os.getpid())
res = process.memory_info() # only rss is guaranteed to be cross-platform, so we don't rely on the other values
ram_total = 100 * res.rss / process.memory_percent() # total memory is derived from rss and memory_percent(), since reading it directly is not cross-platform safe
ram = { 'free': ram_total - res.rss, 'used': res.rss, 'total': ram_total }
except Exception as err:
ram = { 'error': f'{err}' }
try:
import torch
if torch.cuda.is_available():
s = torch.cuda.mem_get_info()
system = { 'free': s[0], 'used': s[1] - s[0], 'total': s[1] }
s = dict(torch.cuda.memory_stats(shared.device))
allocated = { 'current': s['allocated_bytes.all.current'], 'peak': s['allocated_bytes.all.peak'] }
reserved = { 'current': s['reserved_bytes.all.current'], 'peak': s['reserved_bytes.all.peak'] }
active = { 'current': s['active_bytes.all.current'], 'peak': s['active_bytes.all.peak'] }
inactive = { 'current': s['inactive_split_bytes.all.current'], 'peak': s['inactive_split_bytes.all.peak'] }
warnings = { 'retries': s['num_alloc_retries'], 'oom': s['num_ooms'] }
cuda = {
'system': system,
'active': active,
'allocated': allocated,
'reserved': reserved,
'inactive': inactive,
'events': warnings,
}
else:
cuda = { 'error': 'unavailable' }
except Exception as err:
cuda = { 'error': f'{err}' }
return MemoryResponse(ram = ram, cuda = cuda)
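The ram_total line above inverts a psutil identity rather than reading a platform-specific field; a self-contained sketch of the same derivation:

# psutil defines memory_percent() as 100 * rss / total_physical_ram,
# so total RAM can be recovered from rss alone.
import os
import psutil

process = psutil.Process(os.getpid())
rss = process.memory_info().rss
ram_total = 100 * rss / process.memory_percent()
print(f"rss={rss / 2**20:.1f} MiB, total={ram_total / 2**30:.2f} GiB")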
def launch(self, server_name, port):
self.app.include_router(self.router)
uvicorn.run(self.app, host=server_name, port=port)

269
modules/api/models.py Normal file

@@ -0,0 +1,269 @@
import inspect
from pydantic import BaseModel, Field, create_model
from typing import Any, Optional
from typing_extensions import Literal
from inflection import underscore
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img
from modules.shared import sd_upscalers, opts, parser
from typing import Dict, List
API_NOT_ALLOWED = [
"self",
"kwargs",
"sd_model",
"outpath_samples",
"outpath_grids",
"sampler_index",
"do_not_save_samples",
"do_not_save_grid",
"extra_generation_params",
"overlay_images",
"do_not_reload_embeddings",
"seed_enable_extras",
"prompt_for_display",
"sampler_noise_scheduler_override",
"ddim_discretize"
]
class ModelDef(BaseModel):
"""Assistance Class for Pydantic Dynamic Model Generation"""
field: str
field_alias: str
field_type: Any
field_value: Any
field_exclude: bool = False
class PydanticModelGenerator:
"""
Takes in created classes and stubs them out in a way FastAPI/Pydantic is happy about:
source_data is a snapshot of the default values produced by the class
params are the names of the actual keys required by __init__
"""
def __init__(
self,
model_name: str = None,
class_instance = None,
additional_fields = None,
):
def field_type_generator(k, v):
# field_type = str if not overrides.get(k) else overrides[k]["type"]
# print(k, v.annotation, v.default)
field_type = v.annotation
return Optional[field_type]
def merge_class_params(class_):
all_classes = list(filter(lambda x: x is not object, inspect.getmro(class_)))
parameters = {}
for classes in all_classes:
parameters = {**parameters, **inspect.signature(classes.__init__).parameters}
return parameters
self._model_name = model_name
self._class_data = merge_class_params(class_instance)
self._model_def = [
ModelDef(
field=underscore(k),
field_alias=k,
field_type=field_type_generator(k, v),
field_value=v.default
)
for (k,v) in self._class_data.items() if k not in API_NOT_ALLOWED
]
for fields in additional_fields:
self._model_def.append(ModelDef(
field=underscore(fields["key"]),
field_alias=fields["key"],
field_type=fields["type"],
field_value=fields["default"],
field_exclude=fields["exclude"] if "exclude" in fields else False))
def generate_model(self):
"""
Creates a pydantic BaseModel
from the json and overrides provided at initialization
"""
fields = {
d.field: (d.field_type, Field(default=d.field_value, alias=d.field_alias, exclude=d.field_exclude)) for d in self._model_def
}
DynamicModel = create_model(self._model_name, **fields)
DynamicModel.__config__.allow_population_by_field_name = True
DynamicModel.__config__.allow_mutation = True
return DynamicModel
StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
"StableDiffusionProcessingTxt2Img",
StableDiffusionProcessingTxt2Img,
[{"key": "sampler_index", "type": str, "default": "Euler"}, {"key": "script_name", "type": str, "default": None}, {"key": "script_args", "type": list, "default": []}]
).generate_model()
StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
"StableDiffusionProcessingImg2Img",
StableDiffusionProcessingImg2Img,
[{"key": "sampler_index", "type": str, "default": "Euler"}, {"key": "init_images", "type": list, "default": None}, {"key": "denoising_strength", "type": float, "default": 0.75}, {"key": "mask", "type": str, "default": None}, {"key": "include_init_images", "type": bool, "default": False, "exclude" : True}, {"key": "script_name", "type": str, "default": None}, {"key": "script_args", "type": list, "default": []}]
).generate_model()
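To make the generator concrete, a minimal self-contained sketch of the same pattern applied to a toy class (assuming pydantic v1, which the __config__ assignments above target; merge_class_params additionally walks the full MRO):

import inspect
from typing import Optional
from pydantic import Field, create_model

class Toy:
    def __init__(self, prompt: str = "", steps: int = 20):
        self.prompt, self.steps = prompt, steps

# harvest __init__ annotations and defaults into pydantic field definitions
fields = {
    name: (Optional[p.annotation], Field(default=p.default, alias=name))
    for name, p in inspect.signature(Toy.__init__).parameters.items()
    if name != "self"
}
ToyAPI = create_model("ToyAPI", **fields)
ToyAPI.__config__.allow_population_by_field_name = True

req = ToyAPI(prompt="a cat", steps=30)
print(vars(req))  # {'prompt': 'a cat', 'steps': 30} -> usable as Toy(**vars(req))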
class TextToImageResponse(BaseModel):
images: List[str] = Field(default=None, title="Images", description="The generated images in base64 format.")
parameters: dict
info: str
class ImageToImageResponse(BaseModel):
images: List[str] = Field(default=None, title="Images", description="The generated images in base64 format.")
parameters: dict
info: str
class ExtrasBaseRequest(BaseModel):
resize_mode: Literal[0, 1] = Field(default=0, title="Resize Mode", description="Sets the resize mode: 0 to upscale by upscaling_resize amount, 1 to upscale up to upscaling_resize_h x upscaling_resize_w.")
show_extras_results: bool = Field(default=True, title="Show results", description="Should the backend return the generated image?")
gfpgan_visibility: float = Field(default=0, title="GFPGAN Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of GFPGAN, values should be between 0 and 1.")
codeformer_visibility: float = Field(default=0, title="CodeFormer Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of CodeFormer, values should be between 0 and 1.")
codeformer_weight: float = Field(default=0, title="CodeFormer Weight", ge=0, le=1, allow_inf_nan=False, description="Sets the weight of CodeFormer, values should be between 0 and 1.")
upscaling_resize: float = Field(default=2, title="Upscaling Factor", ge=1, le=8, description="By how much to upscale the image, only used when resize_mode=0.")
upscaling_resize_w: int = Field(default=512, title="Target Width", ge=1, description="Target width for the upscaler to hit. Only used when resize_mode=1.")
upscaling_resize_h: int = Field(default=512, title="Target Height", ge=1, description="Target height for the upscaler to hit. Only used when resize_mode=1.")
upscaling_crop: bool = Field(default=True, title="Crop to fit", description="Should the upscaler crop the image to fit in the chosen size?")
upscaler_1: str = Field(default="None", title="Main upscaler", description=f"The name of the main upscaler to use, it has to be one of this list: {' , '.join([x.name for x in sd_upscalers])}")
upscaler_2: str = Field(default="None", title="Secondary upscaler", description=f"The name of the secondary upscaler to use, it has to be one of this list: {' , '.join([x.name for x in sd_upscalers])}")
extras_upscaler_2_visibility: float = Field(default=0, title="Secondary upscaler visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of secondary upscaler, values should be between 0 and 1.")
upscale_first: bool = Field(default=False, title="Upscale first", description="Should the upscaler run before restoring faces?")
class ExtraBaseResponse(BaseModel):
html_info: str = Field(title="HTML info", description="A series of HTML tags containing the process info.")
class ExtrasSingleImageRequest(ExtrasBaseRequest):
image: str = Field(default="", title="Image", description="Image to work on, must be a Base64 string containing the image's data.")
class ExtrasSingleImageResponse(ExtraBaseResponse):
image: str = Field(default=None, title="Image", description="The generated image in base64 format.")
class FileData(BaseModel):
data: str = Field(title="File data", description="Base64 representation of the file")
name: str = Field(title="File name")
class ExtrasBatchImagesRequest(ExtrasBaseRequest):
imageList: List[FileData] = Field(title="Images", description="List of images to work on. Must be Base64 strings")
class ExtrasBatchImagesResponse(ExtraBaseResponse):
images: List[str] = Field(title="Images", description="The generated images in base64 format.")
class PNGInfoRequest(BaseModel):
image: str = Field(title="Image", description="The base64 encoded PNG image")
class PNGInfoResponse(BaseModel):
info: str = Field(title="Image info", description="A string with the parameters used to generate the image")
items: dict = Field(title="Items", description="An object containing all the info the image had")
class ProgressRequest(BaseModel):
skip_current_image: bool = Field(default=False, title="Skip current image", description="Skip current image serialization")
class ProgressResponse(BaseModel):
progress: float = Field(title="Progress", description="The progress with a range of 0 to 1")
eta_relative: float = Field(title="ETA in secs")
state: dict = Field(title="State", description="The current state snapshot")
current_image: str = Field(default=None, title="Current image", description="The current image in base64 format. opts.show_progress_every_n_steps is required for this to work.")
textinfo: str = Field(default=None, title="Info text", description="Info text used by WebUI.")
class InterrogateRequest(BaseModel):
image: str = Field(default="", title="Image", description="Image to work on, must be a Base64 string containing the image's data.")
model: str = Field(default="clip", title="Model", description="The interrogate model used.")
class InterrogateResponse(BaseModel):
caption: str = Field(default=None, title="Caption", description="The generated caption for the image.")
class TrainResponse(BaseModel):
info: str = Field(title="Train info", description="Response string from train embedding or hypernetwork task.")
class CreateResponse(BaseModel):
info: str = Field(title="Create info", description="Response string from create embedding or hypernetwork task.")
class PreprocessResponse(BaseModel):
info: str = Field(title="Preprocess info", description="Response string from preprocessing task.")
fields = {}
for key, metadata in opts.data_labels.items():
value = opts.data.get(key)
optType = opts.typemap.get(type(metadata.default), type(value))
if (metadata is not None):
fields.update({key: (Optional[optType], Field(
default=metadata.default ,description=metadata.label))})
else:
fields.update({key: (Optional[optType], Field())})
OptionsModel = create_model("Options", **fields)
flags = {}
_options = vars(parser)['_option_string_actions']
for key in _options:
if(_options[key].dest != 'help'):
flag = _options[key]
_type = str
if _options[key].default is not None: _type = type(_options[key].default)
flags.update({flag.dest: (_type,Field(default=flag.default, description=flag.help))})
FlagsModel = create_model("Flags", **flags)
class SamplerItem(BaseModel):
name: str = Field(title="Name")
aliases: List[str] = Field(title="Aliases")
options: Dict[str, str] = Field(title="Options")
class UpscalerItem(BaseModel):
name: str = Field(title="Name")
model_name: Optional[str] = Field(title="Model Name")
model_path: Optional[str] = Field(title="Path")
model_url: Optional[str] = Field(title="URL")
scale: Optional[float] = Field(title="Scale")
class SDModelItem(BaseModel):
title: str = Field(title="Title")
model_name: str = Field(title="Model Name")
hash: Optional[str] = Field(title="Short hash")
sha256: Optional[str] = Field(title="sha256 hash")
filename: str = Field(title="Filename")
config: Optional[str] = Field(title="Config file")
class HypernetworkItem(BaseModel):
name: str = Field(title="Name")
path: Optional[str] = Field(title="Path")
class FaceRestorerItem(BaseModel):
name: str = Field(title="Name")
cmd_dir: Optional[str] = Field(title="Path")
class RealesrganItem(BaseModel):
name: str = Field(title="Name")
path: Optional[str] = Field(title="Path")
scale: Optional[int] = Field(title="Scale")
class PromptStyleItem(BaseModel):
name: str = Field(title="Name")
prompt: Optional[str] = Field(title="Prompt")
negative_prompt: Optional[str] = Field(title="Negative Prompt")
class ArtistItem(BaseModel):
name: str = Field(title="Name")
score: float = Field(title="Score")
category: str = Field(title="Category")
class EmbeddingItem(BaseModel):
step: Optional[int] = Field(title="Step", description="The number of steps that were used to train this embedding, if available")
sd_checkpoint: Optional[str] = Field(title="SD Checkpoint", description="The hash of the checkpoint this embedding was trained on, if available")
sd_checkpoint_name: Optional[str] = Field(title="SD Checkpoint Name", description="The name of the checkpoint this embedding was trained on, if available. Note that this is the name that was used by the trainer; for a stable identifier, use `sd_checkpoint` instead")
shape: int = Field(title="Shape", description="The length of each individual vector in the embedding")
vectors: int = Field(title="Vectors", description="The number of vectors in the embedding")
class EmbeddingsResponse(BaseModel):
loaded: Dict[str, EmbeddingItem] = Field(title="Loaded", description="Embeddings loaded for the current model")
skipped: Dict[str, EmbeddingItem] = Field(title="Skipped", description="Embeddings skipped for the current model (likely due to architecture incompatibility)")
class MemoryResponse(BaseModel):
ram: dict = Field(title="RAM", description="System memory stats")
cuda: dict = Field(title="CUDA", description="nVidia CUDA memory stats")

modules/artists.py

@@ -1,25 +0,0 @@
import os.path
import csv
from collections import namedtuple
Artist = namedtuple("Artist", ['name', 'weight', 'category'])
class ArtistsDatabase:
def __init__(self, filename):
self.cats = set()
self.artists = []
if not os.path.exists(filename):
return
with open(filename, "r", newline='', encoding="utf8") as file:
reader = csv.DictReader(file)
for row in reader:
artist = Artist(row["artist"], float(row["score"]), row["category"])
self.artists.append(artist)
self.cats.add(artist.category)
def categories(self):
return sorted(self.cats)

modules/bsrgan_model.py

@@ -1,78 +0,0 @@
import os.path
import sys
import traceback
import PIL.Image
import numpy as np
import torch
from basicsr.utils.download_util import load_file_from_url
import modules.upscaler
from modules import devices, modelloader
from modules.bsrgan_model_arch import RRDBNet
from modules.paths import models_path
class UpscalerBSRGAN(modules.upscaler.Upscaler):
def __init__(self, dirname):
self.name = "BSRGAN"
self.model_path = os.path.join(models_path, self.name)
self.model_name = "BSRGAN 4x"
self.model_url = "https://github.com/cszn/KAIR/releases/download/v1.0/BSRGAN.pth"
self.user_path = dirname
super().__init__()
model_paths = self.find_models(ext_filter=[".pt", ".pth"])
scalers = []
if len(model_paths) == 0:
scaler_data = modules.upscaler.UpscalerData(self.model_name, self.model_url, self, 4)
scalers.append(scaler_data)
for file in model_paths:
if "http" in file:
name = self.model_name
else:
name = modelloader.friendly_name(file)
try:
scaler_data = modules.upscaler.UpscalerData(name, file, self, 4)
scalers.append(scaler_data)
except Exception:
print(f"Error loading BSRGAN model: {file}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
self.scalers = scalers
def do_upscale(self, img: PIL.Image, selected_file):
torch.cuda.empty_cache()
model = self.load_model(selected_file)
if model is None:
return img
model.to(devices.device_bsrgan)
torch.cuda.empty_cache()
img = np.array(img)
img = img[:, :, ::-1]
img = np.moveaxis(img, 2, 0) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(devices.device_bsrgan)
with torch.no_grad():
output = model(img)
output = output.squeeze().float().cpu().clamp_(0, 1).numpy()
output = 255. * np.moveaxis(output, 0, 2)
output = output.astype(np.uint8)
output = output[:, :, ::-1]
torch.cuda.empty_cache()
return PIL.Image.fromarray(output, 'RGB')
def load_model(self, path: str):
if "http" in path:
filename = load_file_from_url(url=self.model_url, model_dir=self.model_path, file_name="%s.pth" % self.name,
progress=True)
else:
filename = path
if filename is None or not os.path.exists(filename):
print(f"BSRGAN: Unable to load model from {filename}", file=sys.stderr)
return None
model = RRDBNet(in_nc=3, out_nc=3, nf=64, nb=23, gc=32, sf=4) # define network
model.load_state_dict(torch.load(filename), strict=True)
model.eval()
for k, v in model.named_parameters():
v.requires_grad = False
return model

modules/bsrgan_model_arch.py

@@ -1,102 +0,0 @@
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
def initialize_weights(net_l, scale=1):
if not isinstance(net_l, list):
net_l = [net_l]
for net in net_l:
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight, a=0, mode='fan_in')
m.weight.data *= scale # for residual block
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
init.kaiming_normal_(m.weight, a=0, mode='fan_in')
m.weight.data *= scale
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
init.constant_(m.weight, 1)
init.constant_(m.bias.data, 0.0)
def make_layer(block, n_layers):
layers = []
for _ in range(n_layers):
layers.append(block())
return nn.Sequential(*layers)
class ResidualDenseBlock_5C(nn.Module):
def __init__(self, nf=64, gc=32, bias=True):
super(ResidualDenseBlock_5C, self).__init__()
# gc: growth channel, i.e. intermediate channels
self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
# initialization
initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)
def forward(self, x):
x1 = self.lrelu(self.conv1(x))
x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
return x5 * 0.2 + x
class RRDB(nn.Module):
'''Residual in Residual Dense Block'''
def __init__(self, nf, gc=32):
super(RRDB, self).__init__()
self.RDB1 = ResidualDenseBlock_5C(nf, gc)
self.RDB2 = ResidualDenseBlock_5C(nf, gc)
self.RDB3 = ResidualDenseBlock_5C(nf, gc)
def forward(self, x):
out = self.RDB1(x)
out = self.RDB2(out)
out = self.RDB3(out)
return out * 0.2 + x
class RRDBNet(nn.Module):
def __init__(self, in_nc=3, out_nc=3, nf=64, nb=23, gc=32, sf=4):
super(RRDBNet, self).__init__()
RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)
self.sf = sf
self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
self.RRDB_trunk = make_layer(RRDB_block_f, nb)
self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
#### upsampling
self.upconv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
if self.sf==4:
self.upconv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
fea = self.conv_first(x)
trunk = self.trunk_conv(self.RRDB_trunk(fea))
fea = fea + trunk
fea = self.lrelu(self.upconv1(F.interpolate(fea, scale_factor=2, mode='nearest')))
if self.sf==4:
fea = self.lrelu(self.upconv2(F.interpolate(fea, scale_factor=2, mode='nearest')))
out = self.conv_last(self.lrelu(self.HRconv(fea)))
return out

109
modules/call_queue.py Normal file

@@ -0,0 +1,109 @@
import html
import sys
import threading
import traceback
import time
from modules import shared, progress
queue_lock = threading.Lock()
def wrap_queued_call(func):
def f(*args, **kwargs):
with queue_lock:
res = func(*args, **kwargs)
return res
return f
def wrap_gradio_gpu_call(func, extra_outputs=None):
def f(*args, **kwargs):
# if the first argument is a string that says "task(...)", it is treated as a job id
if len(args) > 0 and type(args[0]) == str and args[0][0:5] == "task(" and args[0][-1] == ")":
id_task = args[0]
progress.add_task_to_queue(id_task)
else:
id_task = None
with queue_lock:
shared.state.begin()
progress.start_task(id_task)
try:
res = func(*args, **kwargs)
finally:
progress.finish_task(id_task)
shared.state.end()
return res
return wrap_gradio_call(f, extra_outputs=extra_outputs, add_stats=True)
def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
def f(*args, extra_outputs_array=extra_outputs, **kwargs):
run_memmon = shared.opts.memmon_poll_rate > 0 and not shared.mem_mon.disabled and add_stats
if run_memmon:
shared.mem_mon.monitor()
t = time.perf_counter()
try:
res = list(func(*args, **kwargs))
except Exception as e:
# When printing out our debug argument list, do not print out more than a MB of text
max_debug_str_len = 131072 # (1024*1024)/8
print("Error completing request", file=sys.stderr)
argStr = f"Arguments: {str(args)} {str(kwargs)}"
print(argStr[:max_debug_str_len], file=sys.stderr)
if len(argStr) > max_debug_str_len:
print(f"(Argument list truncated at {max_debug_str_len}/{len(argStr)} characters)", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
shared.state.job = ""
shared.state.job_count = 0
if extra_outputs_array is None:
extra_outputs_array = [None, '']
res = extra_outputs_array + [f"<div class='error'>{html.escape(type(e).__name__+': '+str(e))}</div>"]
shared.state.skipped = False
shared.state.interrupted = False
shared.state.job_count = 0
if not add_stats:
return tuple(res)
elapsed = time.perf_counter() - t
elapsed_m = int(elapsed // 60)
elapsed_s = elapsed % 60
elapsed_text = f"{elapsed_s:.2f}s"
if elapsed_m > 0:
elapsed_text = f"{elapsed_m}m "+elapsed_text
if run_memmon:
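# -(v // -(1024 * 1024)) below is ceiling division: byte counts rounded up to whole MiB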
mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}
active_peak = mem_stats['active_peak']
reserved_peak = mem_stats['reserved_peak']
sys_peak = mem_stats['system_peak']
sys_total = mem_stats['total']
sys_pct = round(sys_peak/max(sys_total, 1) * 100, 2)
vram_html = f"<p class='vram'>Torch active/reserved: {active_peak}/{reserved_peak} MiB, <wbr>Sys VRAM: {sys_peak}/{sys_total} MiB ({sys_pct}%)</p>"
else:
vram_html = ''
# last item is always HTML
res[-1] += f"<div class='performance'><p class='time'>Time taken: <wbr>{elapsed_text}</p>{vram_html}</div>"
return tuple(res)
return f
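For reference, a self-contained sketch of the serialization guarantee wrap_queued_call gives the handlers above: every wrapped call takes the same lock, so GPU-bound work from concurrent requests runs one at a time.

import threading
import time

lock = threading.Lock()

def wrap_queued(func):
    def f(*args, **kwargs):
        with lock:  # same single-lock pattern as queue_lock above
            return func(*args, **kwargs)
    return f

@wrap_queued
def fake_gpu_job(n):
    time.sleep(0.1)
    return n * n

threads = [threading.Thread(target=fake_gpu_job, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # the four jobs ran sequentially, not in parallel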

modules/codeformer/vqgan_arch.py

@@ -382,7 +382,7 @@ class VQAutoEncoder(nn.Module):
self.load_state_dict(torch.load(model_path, map_location='cpu')['params'])
logger.info(f'vqgan is loaded from: {model_path} [params]')
else:
raise ValueError(f'Wrong params!')
raise ValueError('Wrong params!')
def forward(self, x):
@@ -431,7 +431,7 @@ class VQGANDiscriminator(nn.Module):
elif 'params' in chkpt:
self.load_state_dict(torch.load(model_path, map_location='cpu')['params'])
else:
raise ValueError(f'Wrong params!')
raise ValueError('Wrong params!')
def forward(self, x):
return self.main(x)

modules/codeformer_model.py

@@ -8,7 +8,7 @@ import torch
import modules.face_restoration
import modules.shared
from modules import shared, devices, modelloader
from modules.paths import script_path, models_path
from modules.paths import models_path
# codeformer people made a choice to include a modified basicsr library in their project, which makes
# it utterly impossible to use alongside other libraries that also use basicsr, like GFPGAN.
@@ -36,6 +36,7 @@ def setup_model(dirname):
from basicsr.utils.download_util import load_file_from_url
from basicsr.utils import imwrite, img2tensor, tensor2img
from facelib.utils.face_restoration_helper import FaceRestoreHelper
from facelib.detection.retinaface import retinaface
from modules.shared import cmd_opts
net_class = CodeFormer
@@ -65,6 +66,8 @@ def setup_model(dirname):
net.load_state_dict(checkpoint)
net.eval()
if hasattr(retinaface, 'device'):
retinaface.device = devices.device_codeformer
face_helper = FaceRestoreHelper(1, face_size=512, crop_ratio=(1, 1), det_model='retinaface_resnet50', save_ext='png', use_parse=True, device=devices.device_codeformer)
self.net = net

99
modules/deepbooru.py Normal file

@@ -0,0 +1,99 @@
import os
import re
import torch
from PIL import Image
import numpy as np
from modules import modelloader, paths, deepbooru_model, devices, images, shared
re_special = re.compile(r'([\\()])')
class DeepDanbooru:
def __init__(self):
self.model = None
def load(self):
if self.model is not None:
return
files = modelloader.load_models(
model_path=os.path.join(paths.models_path, "torch_deepdanbooru"),
model_url='https://github.com/AUTOMATIC1111/TorchDeepDanbooru/releases/download/v1/model-resnet_custom_v3.pt',
ext_filter=[".pt"],
download_name='model-resnet_custom_v3.pt',
)
self.model = deepbooru_model.DeepDanbooruModel()
self.model.load_state_dict(torch.load(files[0], map_location="cpu"))
self.model.eval()
self.model.to(devices.cpu, devices.dtype)
def start(self):
self.load()
self.model.to(devices.device)
def stop(self):
if not shared.opts.interrogate_keep_models_in_memory:
self.model.to(devices.cpu)
devices.torch_gc()
def tag(self, pil_image):
self.start()
res = self.tag_multi(pil_image)
self.stop()
return res
def tag_multi(self, pil_image, force_disable_ranks=False):
threshold = shared.opts.interrogate_deepbooru_score_threshold
use_spaces = shared.opts.deepbooru_use_spaces
use_escape = shared.opts.deepbooru_escape
alpha_sort = shared.opts.deepbooru_sort_alpha
include_ranks = shared.opts.interrogate_return_ranks and not force_disable_ranks
pic = images.resize_image(2, pil_image.convert("RGB"), 512, 512)
a = np.expand_dims(np.array(pic, dtype=np.float32), 0) / 255
with torch.no_grad(), devices.autocast():
x = torch.from_numpy(a).to(devices.device)
y = self.model(x)[0].detach().cpu().numpy()
probability_dict = {}
for tag, probability in zip(self.model.tags, y):
if probability < threshold:
continue
if tag.startswith("rating:"):
continue
probability_dict[tag] = probability
if alpha_sort:
tags = sorted(probability_dict)
else:
tags = [tag for tag, _ in sorted(probability_dict.items(), key=lambda x: -x[1])]
res = []
filtertags = set([x.strip().replace(' ', '_') for x in shared.opts.deepbooru_filter_tags.split(",")])
for tag in [x for x in tags if x not in filtertags]:
probability = probability_dict[tag]
tag_outformat = tag
if use_spaces:
tag_outformat = tag_outformat.replace('_', ' ')
if use_escape:
tag_outformat = re.sub(re_special, r'\\\1', tag_outformat)
if include_ranks:
tag_outformat = f"({tag_outformat}:{probability:.3f})"
res.append(tag_outformat)
return ", ".join(res)
model = DeepDanbooru()
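To illustrate the formatting branch at the end of tag_multi, a small sketch with a hypothetical tag and probability, showing the underscore, escape, and rank options in effect:

import re

re_special = re.compile(r'([\\()])')

tag, probability = "fox_(animal)", 0.913
tag_outformat = tag.replace('_', ' ')                       # deepbooru_use_spaces
tag_outformat = re.sub(re_special, r'\\\1', tag_outformat)  # deepbooru_escape
tag_outformat = f"({tag_outformat}:{probability:.3f})"      # interrogate_return_ranks
print(tag_outformat)  # (fox \(animal\):0.913)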

678
modules/deepbooru_model.py Normal file

@@ -0,0 +1,678 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from modules import devices
# see https://github.com/AUTOMATIC1111/TorchDeepDanbooru for more
class DeepDanbooruModel(nn.Module):
def __init__(self):
super(DeepDanbooruModel, self).__init__()
self.tags = []
self.n_Conv_0 = nn.Conv2d(kernel_size=(7, 7), in_channels=3, out_channels=64, stride=(2, 2))
self.n_MaxPool_0 = nn.MaxPool2d(kernel_size=(3, 3), stride=(2, 2))
self.n_Conv_1 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
self.n_Conv_2 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=64)
self.n_Conv_3 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
self.n_Conv_4 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
self.n_Conv_5 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=64)
self.n_Conv_6 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
self.n_Conv_7 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
self.n_Conv_8 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=64)
self.n_Conv_9 = nn.Conv2d(kernel_size=(3, 3), in_channels=64, out_channels=64)
self.n_Conv_10 = nn.Conv2d(kernel_size=(1, 1), in_channels=64, out_channels=256)
self.n_Conv_11 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=512, stride=(2, 2))
self.n_Conv_12 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=128)
self.n_Conv_13 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128, stride=(2, 2))
self.n_Conv_14 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_15 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_16 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_17 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_18 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_19 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_20 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_21 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_22 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_23 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_24 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_25 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_26 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_27 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_28 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_29 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_30 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_31 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_32 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_33 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=128)
self.n_Conv_34 = nn.Conv2d(kernel_size=(3, 3), in_channels=128, out_channels=128)
self.n_Conv_35 = nn.Conv2d(kernel_size=(1, 1), in_channels=128, out_channels=512)
self.n_Conv_36 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=1024, stride=(2, 2))
self.n_Conv_37 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=256)
self.n_Conv_38 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256, stride=(2, 2))
self.n_Conv_39 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_40 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_41 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_42 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_43 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_44 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_45 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_46 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_47 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_48 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_49 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_50 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_51 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_52 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_53 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_54 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_55 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_56 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_57 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_58 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_59 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_60 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_61 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_62 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_63 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_64 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_65 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_66 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_67 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_68 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_69 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_70 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_71 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_72 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_73 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_74 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_75 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_76 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_77 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_78 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_79 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_80 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_81 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_82 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_83 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_84 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_85 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_86 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_87 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_88 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_89 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_90 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_91 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_92 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_93 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_94 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_95 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_96 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_97 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_98 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256, stride=(2, 2))
self.n_Conv_99 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_100 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=1024, stride=(2, 2))
self.n_Conv_101 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_102 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_103 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_104 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_105 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_106 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_107 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_108 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_109 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_110 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_111 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_112 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_113 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_114 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_115 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_116 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_117 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_118 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_119 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_120 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_121 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_122 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_123 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_124 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_125 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_126 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_127 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_128 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_129 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_130 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_131 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_132 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_133 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_134 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_135 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_136 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_137 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_138 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_139 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_140 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_141 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_142 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_143 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_144 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_145 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_146 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_147 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_148 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_149 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_150 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_151 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_152 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_153 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_154 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_155 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=256)
self.n_Conv_156 = nn.Conv2d(kernel_size=(3, 3), in_channels=256, out_channels=256)
self.n_Conv_157 = nn.Conv2d(kernel_size=(1, 1), in_channels=256, out_channels=1024)
self.n_Conv_158 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=2048, stride=(2, 2))
self.n_Conv_159 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=512)
self.n_Conv_160 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512, stride=(2, 2))
self.n_Conv_161 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
self.n_Conv_162 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=512)
self.n_Conv_163 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512)
self.n_Conv_164 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
self.n_Conv_165 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=512)
self.n_Conv_166 = nn.Conv2d(kernel_size=(3, 3), in_channels=512, out_channels=512)
self.n_Conv_167 = nn.Conv2d(kernel_size=(1, 1), in_channels=512, out_channels=2048)
self.n_Conv_168 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=4096, stride=(2, 2))
self.n_Conv_169 = nn.Conv2d(kernel_size=(1, 1), in_channels=2048, out_channels=1024)
self.n_Conv_170 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024, stride=(2, 2))
self.n_Conv_171 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
self.n_Conv_172 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=1024)
self.n_Conv_173 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024)
self.n_Conv_174 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
self.n_Conv_175 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=1024)
self.n_Conv_176 = nn.Conv2d(kernel_size=(3, 3), in_channels=1024, out_channels=1024)
self.n_Conv_177 = nn.Conv2d(kernel_size=(1, 1), in_channels=1024, out_channels=4096)
self.n_Conv_178 = nn.Conv2d(kernel_size=(1, 1), in_channels=4096, out_channels=9176, bias=False)
def forward(self, *inputs):
t_358, = inputs
t_359 = t_358.permute(*[0, 3, 1, 2])
t_359_padded = F.pad(t_359, [2, 3, 2, 3], value=0)
t_360 = self.n_Conv_0(t_359_padded.to(self.n_Conv_0.bias.dtype) if devices.unet_needs_upcast else t_359_padded)
t_361 = F.relu(t_360)
t_361 = F.pad(t_361, [0, 1, 0, 1], value=float('-inf'))
t_362 = self.n_MaxPool_0(t_361)
t_363 = self.n_Conv_1(t_362)
t_364 = self.n_Conv_2(t_362)
t_365 = F.relu(t_364)
t_365_padded = F.pad(t_365, [1, 1, 1, 1], value=0)
t_366 = self.n_Conv_3(t_365_padded)
t_367 = F.relu(t_366)
t_368 = self.n_Conv_4(t_367)
t_369 = torch.add(t_368, t_363)
t_370 = F.relu(t_369)
t_371 = self.n_Conv_5(t_370)
t_372 = F.relu(t_371)
t_372_padded = F.pad(t_372, [1, 1, 1, 1], value=0)
t_373 = self.n_Conv_6(t_372_padded)
t_374 = F.relu(t_373)
t_375 = self.n_Conv_7(t_374)
t_376 = torch.add(t_375, t_370)
t_377 = F.relu(t_376)
t_378 = self.n_Conv_8(t_377)
t_379 = F.relu(t_378)
t_379_padded = F.pad(t_379, [1, 1, 1, 1], value=0)
t_380 = self.n_Conv_9(t_379_padded)
t_381 = F.relu(t_380)
t_382 = self.n_Conv_10(t_381)
t_383 = torch.add(t_382, t_377)
t_384 = F.relu(t_383)
t_385 = self.n_Conv_11(t_384)
t_386 = self.n_Conv_12(t_384)
t_387 = F.relu(t_386)
t_387_padded = F.pad(t_387, [0, 1, 0, 1], value=0)
t_388 = self.n_Conv_13(t_387_padded)
t_389 = F.relu(t_388)
t_390 = self.n_Conv_14(t_389)
t_391 = torch.add(t_390, t_385)
t_392 = F.relu(t_391)
t_393 = self.n_Conv_15(t_392)
t_394 = F.relu(t_393)
t_394_padded = F.pad(t_394, [1, 1, 1, 1], value=0)
t_395 = self.n_Conv_16(t_394_padded)
t_396 = F.relu(t_395)
t_397 = self.n_Conv_17(t_396)
t_398 = torch.add(t_397, t_392)
t_399 = F.relu(t_398)
t_400 = self.n_Conv_18(t_399)
t_401 = F.relu(t_400)
t_401_padded = F.pad(t_401, [1, 1, 1, 1], value=0)
t_402 = self.n_Conv_19(t_401_padded)
t_403 = F.relu(t_402)
t_404 = self.n_Conv_20(t_403)
t_405 = torch.add(t_404, t_399)
t_406 = F.relu(t_405)
t_407 = self.n_Conv_21(t_406)
t_408 = F.relu(t_407)
t_408_padded = F.pad(t_408, [1, 1, 1, 1], value=0)
t_409 = self.n_Conv_22(t_408_padded)
t_410 = F.relu(t_409)
t_411 = self.n_Conv_23(t_410)
t_412 = torch.add(t_411, t_406)
t_413 = F.relu(t_412)
t_414 = self.n_Conv_24(t_413)
t_415 = F.relu(t_414)
t_415_padded = F.pad(t_415, [1, 1, 1, 1], value=0)
t_416 = self.n_Conv_25(t_415_padded)
t_417 = F.relu(t_416)
t_418 = self.n_Conv_26(t_417)
t_419 = torch.add(t_418, t_413)
t_420 = F.relu(t_419)
t_421 = self.n_Conv_27(t_420)
t_422 = F.relu(t_421)
t_422_padded = F.pad(t_422, [1, 1, 1, 1], value=0)
t_423 = self.n_Conv_28(t_422_padded)
t_424 = F.relu(t_423)
t_425 = self.n_Conv_29(t_424)
t_426 = torch.add(t_425, t_420)
t_427 = F.relu(t_426)
t_428 = self.n_Conv_30(t_427)
t_429 = F.relu(t_428)
t_429_padded = F.pad(t_429, [1, 1, 1, 1], value=0)
t_430 = self.n_Conv_31(t_429_padded)
t_431 = F.relu(t_430)
t_432 = self.n_Conv_32(t_431)
t_433 = torch.add(t_432, t_427)
t_434 = F.relu(t_433)
t_435 = self.n_Conv_33(t_434)
t_436 = F.relu(t_435)
t_436_padded = F.pad(t_436, [1, 1, 1, 1], value=0)
t_437 = self.n_Conv_34(t_436_padded)
t_438 = F.relu(t_437)
t_439 = self.n_Conv_35(t_438)
t_440 = torch.add(t_439, t_434)
t_441 = F.relu(t_440)
t_442 = self.n_Conv_36(t_441)
t_443 = self.n_Conv_37(t_441)
t_444 = F.relu(t_443)
t_444_padded = F.pad(t_444, [0, 1, 0, 1], value=0)
t_445 = self.n_Conv_38(t_444_padded)
t_446 = F.relu(t_445)
t_447 = self.n_Conv_39(t_446)
t_448 = torch.add(t_447, t_442)
t_449 = F.relu(t_448)
t_450 = self.n_Conv_40(t_449)
t_451 = F.relu(t_450)
t_451_padded = F.pad(t_451, [1, 1, 1, 1], value=0)
t_452 = self.n_Conv_41(t_451_padded)
t_453 = F.relu(t_452)
t_454 = self.n_Conv_42(t_453)
t_455 = torch.add(t_454, t_449)
t_456 = F.relu(t_455)
t_457 = self.n_Conv_43(t_456)
t_458 = F.relu(t_457)
t_458_padded = F.pad(t_458, [1, 1, 1, 1], value=0)
t_459 = self.n_Conv_44(t_458_padded)
t_460 = F.relu(t_459)
t_461 = self.n_Conv_45(t_460)
t_462 = torch.add(t_461, t_456)
t_463 = F.relu(t_462)
t_464 = self.n_Conv_46(t_463)
t_465 = F.relu(t_464)
t_465_padded = F.pad(t_465, [1, 1, 1, 1], value=0)
t_466 = self.n_Conv_47(t_465_padded)
t_467 = F.relu(t_466)
t_468 = self.n_Conv_48(t_467)
t_469 = torch.add(t_468, t_463)
t_470 = F.relu(t_469)
t_471 = self.n_Conv_49(t_470)
t_472 = F.relu(t_471)
t_472_padded = F.pad(t_472, [1, 1, 1, 1], value=0)
t_473 = self.n_Conv_50(t_472_padded)
t_474 = F.relu(t_473)
t_475 = self.n_Conv_51(t_474)
t_476 = torch.add(t_475, t_470)
t_477 = F.relu(t_476)
t_478 = self.n_Conv_52(t_477)
t_479 = F.relu(t_478)
t_479_padded = F.pad(t_479, [1, 1, 1, 1], value=0)
t_480 = self.n_Conv_53(t_479_padded)
t_481 = F.relu(t_480)
t_482 = self.n_Conv_54(t_481)
t_483 = torch.add(t_482, t_477)
t_484 = F.relu(t_483)
t_485 = self.n_Conv_55(t_484)
t_486 = F.relu(t_485)
t_486_padded = F.pad(t_486, [1, 1, 1, 1], value=0)
t_487 = self.n_Conv_56(t_486_padded)
t_488 = F.relu(t_487)
t_489 = self.n_Conv_57(t_488)
t_490 = torch.add(t_489, t_484)
t_491 = F.relu(t_490)
t_492 = self.n_Conv_58(t_491)
t_493 = F.relu(t_492)
t_493_padded = F.pad(t_493, [1, 1, 1, 1], value=0)
t_494 = self.n_Conv_59(t_493_padded)
t_495 = F.relu(t_494)
t_496 = self.n_Conv_60(t_495)
t_497 = torch.add(t_496, t_491)
t_498 = F.relu(t_497)
t_499 = self.n_Conv_61(t_498)
t_500 = F.relu(t_499)
t_500_padded = F.pad(t_500, [1, 1, 1, 1], value=0)
t_501 = self.n_Conv_62(t_500_padded)
t_502 = F.relu(t_501)
t_503 = self.n_Conv_63(t_502)
t_504 = torch.add(t_503, t_498)
t_505 = F.relu(t_504)
t_506 = self.n_Conv_64(t_505)
t_507 = F.relu(t_506)
t_507_padded = F.pad(t_507, [1, 1, 1, 1], value=0)
t_508 = self.n_Conv_65(t_507_padded)
t_509 = F.relu(t_508)
t_510 = self.n_Conv_66(t_509)
t_511 = torch.add(t_510, t_505)
t_512 = F.relu(t_511)
t_513 = self.n_Conv_67(t_512)
t_514 = F.relu(t_513)
t_514_padded = F.pad(t_514, [1, 1, 1, 1], value=0)
t_515 = self.n_Conv_68(t_514_padded)
t_516 = F.relu(t_515)
t_517 = self.n_Conv_69(t_516)
t_518 = torch.add(t_517, t_512)
t_519 = F.relu(t_518)
t_520 = self.n_Conv_70(t_519)
t_521 = F.relu(t_520)
t_521_padded = F.pad(t_521, [1, 1, 1, 1], value=0)
t_522 = self.n_Conv_71(t_521_padded)
t_523 = F.relu(t_522)
t_524 = self.n_Conv_72(t_523)
t_525 = torch.add(t_524, t_519)
t_526 = F.relu(t_525)
t_527 = self.n_Conv_73(t_526)
t_528 = F.relu(t_527)
t_528_padded = F.pad(t_528, [1, 1, 1, 1], value=0)
t_529 = self.n_Conv_74(t_528_padded)
t_530 = F.relu(t_529)
t_531 = self.n_Conv_75(t_530)
t_532 = torch.add(t_531, t_526)
t_533 = F.relu(t_532)
t_534 = self.n_Conv_76(t_533)
t_535 = F.relu(t_534)
t_535_padded = F.pad(t_535, [1, 1, 1, 1], value=0)
t_536 = self.n_Conv_77(t_535_padded)
t_537 = F.relu(t_536)
t_538 = self.n_Conv_78(t_537)
t_539 = torch.add(t_538, t_533)
t_540 = F.relu(t_539)
t_541 = self.n_Conv_79(t_540)
t_542 = F.relu(t_541)
t_542_padded = F.pad(t_542, [1, 1, 1, 1], value=0)
t_543 = self.n_Conv_80(t_542_padded)
t_544 = F.relu(t_543)
t_545 = self.n_Conv_81(t_544)
t_546 = torch.add(t_545, t_540)
t_547 = F.relu(t_546)
t_548 = self.n_Conv_82(t_547)
t_549 = F.relu(t_548)
t_549_padded = F.pad(t_549, [1, 1, 1, 1], value=0)
t_550 = self.n_Conv_83(t_549_padded)
t_551 = F.relu(t_550)
t_552 = self.n_Conv_84(t_551)
t_553 = torch.add(t_552, t_547)
t_554 = F.relu(t_553)
t_555 = self.n_Conv_85(t_554)
t_556 = F.relu(t_555)
t_556_padded = F.pad(t_556, [1, 1, 1, 1], value=0)
t_557 = self.n_Conv_86(t_556_padded)
t_558 = F.relu(t_557)
t_559 = self.n_Conv_87(t_558)
t_560 = torch.add(t_559, t_554)
t_561 = F.relu(t_560)
t_562 = self.n_Conv_88(t_561)
t_563 = F.relu(t_562)
t_563_padded = F.pad(t_563, [1, 1, 1, 1], value=0)
t_564 = self.n_Conv_89(t_563_padded)
t_565 = F.relu(t_564)
t_566 = self.n_Conv_90(t_565)
t_567 = torch.add(t_566, t_561)
t_568 = F.relu(t_567)
t_569 = self.n_Conv_91(t_568)
t_570 = F.relu(t_569)
t_570_padded = F.pad(t_570, [1, 1, 1, 1], value=0)
t_571 = self.n_Conv_92(t_570_padded)
t_572 = F.relu(t_571)
t_573 = self.n_Conv_93(t_572)
t_574 = torch.add(t_573, t_568)
t_575 = F.relu(t_574)
t_576 = self.n_Conv_94(t_575)
t_577 = F.relu(t_576)
t_577_padded = F.pad(t_577, [1, 1, 1, 1], value=0)
t_578 = self.n_Conv_95(t_577_padded)
t_579 = F.relu(t_578)
t_580 = self.n_Conv_96(t_579)
t_581 = torch.add(t_580, t_575)
t_582 = F.relu(t_581)
t_583 = self.n_Conv_97(t_582)
t_584 = F.relu(t_583)
t_584_padded = F.pad(t_584, [0, 1, 0, 1], value=0)
t_585 = self.n_Conv_98(t_584_padded)
t_586 = F.relu(t_585)
t_587 = self.n_Conv_99(t_586)
t_588 = self.n_Conv_100(t_582)
t_589 = torch.add(t_587, t_588)
t_590 = F.relu(t_589)
t_591 = self.n_Conv_101(t_590)
t_592 = F.relu(t_591)
t_592_padded = F.pad(t_592, [1, 1, 1, 1], value=0)
t_593 = self.n_Conv_102(t_592_padded)
t_594 = F.relu(t_593)
t_595 = self.n_Conv_103(t_594)
t_596 = torch.add(t_595, t_590)
t_597 = F.relu(t_596)
t_598 = self.n_Conv_104(t_597)
t_599 = F.relu(t_598)
t_599_padded = F.pad(t_599, [1, 1, 1, 1], value=0)
t_600 = self.n_Conv_105(t_599_padded)
t_601 = F.relu(t_600)
t_602 = self.n_Conv_106(t_601)
t_603 = torch.add(t_602, t_597)
t_604 = F.relu(t_603)
t_605 = self.n_Conv_107(t_604)
t_606 = F.relu(t_605)
t_606_padded = F.pad(t_606, [1, 1, 1, 1], value=0)
t_607 = self.n_Conv_108(t_606_padded)
t_608 = F.relu(t_607)
t_609 = self.n_Conv_109(t_608)
t_610 = torch.add(t_609, t_604)
t_611 = F.relu(t_610)
t_612 = self.n_Conv_110(t_611)
t_613 = F.relu(t_612)
t_613_padded = F.pad(t_613, [1, 1, 1, 1], value=0)
t_614 = self.n_Conv_111(t_613_padded)
t_615 = F.relu(t_614)
t_616 = self.n_Conv_112(t_615)
t_617 = torch.add(t_616, t_611)
t_618 = F.relu(t_617)
t_619 = self.n_Conv_113(t_618)
t_620 = F.relu(t_619)
t_620_padded = F.pad(t_620, [1, 1, 1, 1], value=0)
t_621 = self.n_Conv_114(t_620_padded)
t_622 = F.relu(t_621)
t_623 = self.n_Conv_115(t_622)
t_624 = torch.add(t_623, t_618)
t_625 = F.relu(t_624)
t_626 = self.n_Conv_116(t_625)
t_627 = F.relu(t_626)
t_627_padded = F.pad(t_627, [1, 1, 1, 1], value=0)
t_628 = self.n_Conv_117(t_627_padded)
t_629 = F.relu(t_628)
t_630 = self.n_Conv_118(t_629)
t_631 = torch.add(t_630, t_625)
t_632 = F.relu(t_631)
t_633 = self.n_Conv_119(t_632)
t_634 = F.relu(t_633)
t_634_padded = F.pad(t_634, [1, 1, 1, 1], value=0)
t_635 = self.n_Conv_120(t_634_padded)
t_636 = F.relu(t_635)
t_637 = self.n_Conv_121(t_636)
t_638 = torch.add(t_637, t_632)
t_639 = F.relu(t_638)
t_640 = self.n_Conv_122(t_639)
t_641 = F.relu(t_640)
t_641_padded = F.pad(t_641, [1, 1, 1, 1], value=0)
t_642 = self.n_Conv_123(t_641_padded)
t_643 = F.relu(t_642)
t_644 = self.n_Conv_124(t_643)
t_645 = torch.add(t_644, t_639)
t_646 = F.relu(t_645)
t_647 = self.n_Conv_125(t_646)
t_648 = F.relu(t_647)
t_648_padded = F.pad(t_648, [1, 1, 1, 1], value=0)
t_649 = self.n_Conv_126(t_648_padded)
t_650 = F.relu(t_649)
t_651 = self.n_Conv_127(t_650)
t_652 = torch.add(t_651, t_646)
t_653 = F.relu(t_652)
t_654 = self.n_Conv_128(t_653)
t_655 = F.relu(t_654)
t_655_padded = F.pad(t_655, [1, 1, 1, 1], value=0)
t_656 = self.n_Conv_129(t_655_padded)
t_657 = F.relu(t_656)
t_658 = self.n_Conv_130(t_657)
t_659 = torch.add(t_658, t_653)
t_660 = F.relu(t_659)
t_661 = self.n_Conv_131(t_660)
t_662 = F.relu(t_661)
t_662_padded = F.pad(t_662, [1, 1, 1, 1], value=0)
t_663 = self.n_Conv_132(t_662_padded)
t_664 = F.relu(t_663)
t_665 = self.n_Conv_133(t_664)
t_666 = torch.add(t_665, t_660)
t_667 = F.relu(t_666)
t_668 = self.n_Conv_134(t_667)
t_669 = F.relu(t_668)
t_669_padded = F.pad(t_669, [1, 1, 1, 1], value=0)
t_670 = self.n_Conv_135(t_669_padded)
t_671 = F.relu(t_670)
t_672 = self.n_Conv_136(t_671)
t_673 = torch.add(t_672, t_667)
t_674 = F.relu(t_673)
t_675 = self.n_Conv_137(t_674)
t_676 = F.relu(t_675)
t_676_padded = F.pad(t_676, [1, 1, 1, 1], value=0)
t_677 = self.n_Conv_138(t_676_padded)
t_678 = F.relu(t_677)
t_679 = self.n_Conv_139(t_678)
t_680 = torch.add(t_679, t_674)
t_681 = F.relu(t_680)
t_682 = self.n_Conv_140(t_681)
t_683 = F.relu(t_682)
t_683_padded = F.pad(t_683, [1, 1, 1, 1], value=0)
t_684 = self.n_Conv_141(t_683_padded)
t_685 = F.relu(t_684)
t_686 = self.n_Conv_142(t_685)
t_687 = torch.add(t_686, t_681)
t_688 = F.relu(t_687)
t_689 = self.n_Conv_143(t_688)
t_690 = F.relu(t_689)
t_690_padded = F.pad(t_690, [1, 1, 1, 1], value=0)
t_691 = self.n_Conv_144(t_690_padded)
t_692 = F.relu(t_691)
t_693 = self.n_Conv_145(t_692)
t_694 = torch.add(t_693, t_688)
t_695 = F.relu(t_694)
t_696 = self.n_Conv_146(t_695)
t_697 = F.relu(t_696)
t_697_padded = F.pad(t_697, [1, 1, 1, 1], value=0)
t_698 = self.n_Conv_147(t_697_padded)
t_699 = F.relu(t_698)
t_700 = self.n_Conv_148(t_699)
t_701 = torch.add(t_700, t_695)
t_702 = F.relu(t_701)
t_703 = self.n_Conv_149(t_702)
t_704 = F.relu(t_703)
t_704_padded = F.pad(t_704, [1, 1, 1, 1], value=0)
t_705 = self.n_Conv_150(t_704_padded)
t_706 = F.relu(t_705)
t_707 = self.n_Conv_151(t_706)
t_708 = torch.add(t_707, t_702)
t_709 = F.relu(t_708)
t_710 = self.n_Conv_152(t_709)
t_711 = F.relu(t_710)
t_711_padded = F.pad(t_711, [1, 1, 1, 1], value=0)
t_712 = self.n_Conv_153(t_711_padded)
t_713 = F.relu(t_712)
t_714 = self.n_Conv_154(t_713)
t_715 = torch.add(t_714, t_709)
t_716 = F.relu(t_715)
t_717 = self.n_Conv_155(t_716)
t_718 = F.relu(t_717)
t_718_padded = F.pad(t_718, [1, 1, 1, 1], value=0)
t_719 = self.n_Conv_156(t_718_padded)
t_720 = F.relu(t_719)
t_721 = self.n_Conv_157(t_720)
t_722 = torch.add(t_721, t_716)
t_723 = F.relu(t_722)
t_724 = self.n_Conv_158(t_723)
t_725 = self.n_Conv_159(t_723)
t_726 = F.relu(t_725)
t_726_padded = F.pad(t_726, [0, 1, 0, 1], value=0)
t_727 = self.n_Conv_160(t_726_padded)
t_728 = F.relu(t_727)
t_729 = self.n_Conv_161(t_728)
t_730 = torch.add(t_729, t_724)
t_731 = F.relu(t_730)
t_732 = self.n_Conv_162(t_731)
t_733 = F.relu(t_732)
t_733_padded = F.pad(t_733, [1, 1, 1, 1], value=0)
t_734 = self.n_Conv_163(t_733_padded)
t_735 = F.relu(t_734)
t_736 = self.n_Conv_164(t_735)
t_737 = torch.add(t_736, t_731)
t_738 = F.relu(t_737)
t_739 = self.n_Conv_165(t_738)
t_740 = F.relu(t_739)
t_740_padded = F.pad(t_740, [1, 1, 1, 1], value=0)
t_741 = self.n_Conv_166(t_740_padded)
t_742 = F.relu(t_741)
t_743 = self.n_Conv_167(t_742)
t_744 = torch.add(t_743, t_738)
t_745 = F.relu(t_744)
t_746 = self.n_Conv_168(t_745)
t_747 = self.n_Conv_169(t_745)
t_748 = F.relu(t_747)
t_748_padded = F.pad(t_748, [0, 1, 0, 1], value=0)
t_749 = self.n_Conv_170(t_748_padded)
t_750 = F.relu(t_749)
t_751 = self.n_Conv_171(t_750)
t_752 = torch.add(t_751, t_746)
t_753 = F.relu(t_752)
t_754 = self.n_Conv_172(t_753)
t_755 = F.relu(t_754)
t_755_padded = F.pad(t_755, [1, 1, 1, 1], value=0)
t_756 = self.n_Conv_173(t_755_padded)
t_757 = F.relu(t_756)
t_758 = self.n_Conv_174(t_757)
t_759 = torch.add(t_758, t_753)
t_760 = F.relu(t_759)
t_761 = self.n_Conv_175(t_760)
t_762 = F.relu(t_761)
t_762_padded = F.pad(t_762, [1, 1, 1, 1], value=0)
t_763 = self.n_Conv_176(t_762_padded)
t_764 = F.relu(t_763)
t_765 = self.n_Conv_177(t_764)
t_766 = torch.add(t_765, t_760)
t_767 = F.relu(t_766)
t_768 = self.n_Conv_178(t_767)
t_769 = F.avg_pool2d(t_768, kernel_size=t_768.shape[-2:])
t_770 = torch.squeeze(t_769, 3)
t_770 = torch.squeeze(t_770, 2)
t_771 = torch.sigmoid(t_770)
return t_771
def load_state_dict(self, state_dict, **kwargs):
self.tags = state_dict.get('tags', [])
super(DeepDanbooruModel, self).load_state_dict({k: v for k, v in state_dict.items() if k != 'tags'})
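# A minimal usage sketch for the generated model above (the checkpoint name and
# the 512x512 NHWC input layout are assumptions, not taken from this diff):
#
#     model = DeepDanbooruModel()
#     model.load_state_dict(torch.load("model-resnet_custom_v3.pt", map_location="cpu"))
#     model.eval()
#     image = torch.zeros((1, 512, 512, 3), dtype=torch.float32)  # NHWC in [0, 1]
#     with torch.no_grad():
#         probs = model(image)[0]  # t_771: one sigmoid probability per tag
#     found = [tag for tag, p in zip(model.tags, probs) if p > 0.5]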


@@ -1,68 +1,239 @@
import sys, os, shlex
import contextlib
import torch
from modules import errors
from packaging import version
# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
# check `getattr` and try it for compatibility
def has_mps() -> bool:
    if not getattr(torch, 'has_mps', False):
        return False
    try:
        torch.zeros(1).to(torch.device("mps"))
        return True
    except Exception:
        return False
def has_dml():
import importlib
loader = importlib.find_loader('torch_directml')
return loader is not None
def extract_device_id(args, name):
for x in range(len(args)):
if name in args[x]:
return args[x + 1]
return None
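# Sketch: given sys.argv-style arguments,
#     extract_device_id(["--precision", "full", "--device-id", "1"], "--device-id")
# returns "1" (note: if the flag is the last token, args[x + 1] raises IndexError).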
def get_cuda_device_string():
from modules import shared
if shared.cmd_opts.device_id is not None:
return f"cuda:{shared.cmd_opts.device_id}"
return "cuda"
def get_optimal_device_name():
if has_dml():
return "dml"
if has_mps():
return "mps"
if torch.cuda.is_available():
return get_cuda_device_string()
return "cpu"
def get_optimal_device():
    if get_optimal_device_name() == "dml":
        import torch_directml
        return torch_directml.device()
    return torch.device(get_optimal_device_name())
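# Sketch of the cascade (assuming the usual webui context):
#     device = get_optimal_device()  # dml if torch_directml is importable, else mps, cuda, or cpu
#     x = torch.zeros(4, device=device)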
def get_device_for(task):
from modules import shared
if task in shared.cmd_opts.use_cpu:
return cpu
return get_optimal_device()
def torch_gc():
if torch.cuda.is_available():
with torch.cuda.device(get_cuda_device_string()):
torch.cuda.empty_cache()
torch.cuda.ipc_collect()
def enable_tf32():
if torch.cuda.is_available():
# enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't
# see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
if any([torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())]):
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
errors.run(enable_tf32, "Enabling TF32")
cpu = torch.device("cpu")
device = device_interrogate = device_gfpgan = device_esrgan = device_codeformer = None
dtype = torch.float16
dtype_vae = torch.float16
dtype_unet = torch.float16
unet_needs_upcast = False
def cond_cast_unet(input):
return input.to(dtype_unet) if unet_needs_upcast else input
def cond_cast_float(input):
return input.float() if unet_needs_upcast else input
def randn(seed, shape):
    torch.manual_seed(seed)
    # Pytorch currently doesn't handle setting randomness correctly when the metal backend is used.
    if device.type == 'mps':
        return torch.randn(shape, device=cpu).to(device)
    return torch.randn(shape, device=device)
def randn_without_seed(shape):
    # Pytorch currently doesn't handle setting randomness correctly when the metal backend is used.
    if device.type == 'mps':
        return torch.randn(shape, device=cpu).to(device)
    return torch.randn(shape, device=device)
def autocast(disable=False):
    from modules import shared
    if disable:
        return contextlib.nullcontext()
    if dtype == torch.float32 or shared.cmd_opts.precision == "full":
        return contextlib.nullcontext()
    return torch.autocast("cuda")
def without_autocast(disable=False):
return torch.autocast("cuda", enabled=False) if torch.is_autocast_enabled() and not disable else contextlib.nullcontext()
class NansException(Exception):
pass
def test_for_nans(x, where):
from modules import shared
if shared.cmd_opts.disable_nan_check:
return
if not torch.all(torch.isnan(x)).item():
return
if where == "unet":
message = "A tensor with all NaNs was produced in Unet."
if not shared.cmd_opts.no_half:
message += " This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the \"Upcast cross attention layer to float32\" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this."
elif where == "vae":
message = "A tensor with all NaNs was produced in VAE."
if not shared.cmd_opts.no_half and not shared.cmd_opts.no_half_vae:
message += " This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this."
else:
message = "A tensor with all NaNs was produced."
message += " Use --disable-nan-check commandline argument to disable this check."
raise NansException(message)
# MPS workaround for https://github.com/pytorch/pytorch/issues/79383
orig_tensor_to = torch.Tensor.to
def tensor_to_fix(self, *args, **kwargs):
if self.device.type != 'mps' and \
((len(args) > 0 and isinstance(args[0], torch.device) and args[0].type == 'mps') or \
(isinstance(kwargs.get('device'), torch.device) and kwargs['device'].type == 'mps')):
self = self.contiguous()
return orig_tensor_to(self, *args, **kwargs)
# MPS workaround for https://github.com/pytorch/pytorch/issues/80800
orig_layer_norm = torch.nn.functional.layer_norm
def layer_norm_fix(*args, **kwargs):
if len(args) > 0 and isinstance(args[0], torch.Tensor) and args[0].device.type == 'mps':
args = list(args)
args[0] = args[0].contiguous()
return orig_layer_norm(*args, **kwargs)
# MPS workaround for https://github.com/pytorch/pytorch/issues/90532
orig_tensor_numpy = torch.Tensor.numpy
def numpy_fix(self, *args, **kwargs):
if self.requires_grad:
self = self.detach()
return orig_tensor_numpy(self, *args, **kwargs)
# MPS workaround for https://github.com/pytorch/pytorch/issues/89784
orig_cumsum = torch.cumsum
orig_Tensor_cumsum = torch.Tensor.cumsum
def cumsum_fix(input, cumsum_func, *args, **kwargs):
if input.device.type == 'mps':
output_dtype = kwargs.get('dtype', input.dtype)
if output_dtype == torch.int64:
return cumsum_func(input.cpu(), *args, **kwargs).to(input.device)
elif cumsum_needs_bool_fix and output_dtype == torch.bool or cumsum_needs_int_fix and (output_dtype == torch.int8 or output_dtype == torch.int16):
return cumsum_func(input.to(torch.int32), *args, **kwargs).to(torch.int64)
return cumsum_func(input, *args, **kwargs)
if has_mps():
if version.parse(torch.__version__) < version.parse("1.13"):
# PyTorch 1.13 doesn't need these fixes but unfortunately is slower and has regressions that prevent training from working
torch.Tensor.to = tensor_to_fix
torch.nn.functional.layer_norm = layer_norm_fix
torch.Tensor.numpy = numpy_fix
elif version.parse(torch.__version__) > version.parse("1.13.1"):
cumsum_needs_int_fix = not torch.Tensor([1,2]).to(torch.device("mps")).equal(torch.ShortTensor([1,1]).to(torch.device("mps")).cumsum(0))
cumsum_needs_bool_fix = not torch.BoolTensor([True,True]).to(device=torch.device("mps"), dtype=torch.int64).equal(torch.BoolTensor([True,False]).to(torch.device("mps")).cumsum(0))
torch.cumsum = lambda input, *args, **kwargs: ( cumsum_fix(input, orig_cumsum, *args, **kwargs) )
torch.Tensor.cumsum = lambda self, *args, **kwargs: ( cumsum_fix(self, orig_Tensor_cumsum, *args, **kwargs) )
if has_dml():
_cumsum = torch.cumsum
_repeat_interleave = torch.repeat_interleave
_multinomial = torch.multinomial
_Tensor_new = torch.Tensor.new
_Tensor_cumsum = torch.Tensor.cumsum
_Tensor_repeat_interleave = torch.Tensor.repeat_interleave
_Tensor_multinomial = torch.Tensor.multinomial
torch.cumsum = lambda input, *args, **kwargs: ( _cumsum(input.to("cpu"), *args, **kwargs).to(input.device) )
torch.repeat_interleave = lambda input, *args, **kwargs: ( _repeat_interleave(input.to("cpu"), *args, **kwargs).to(input.device) )
torch.multinomial = lambda input, *args, **kwargs: ( _multinomial(input.to("cpu"), *args, **kwargs).to(input.device) )
torch.Tensor.new = lambda self, *args, **kwargs: ( _Tensor_new(self.to("cpu"), *args, **kwargs).to(self.device) )
torch.Tensor.cumsum = lambda self, *args, **kwargs: ( _Tensor_cumsum(self.to("cpu"), *args, **kwargs).to(self.device) )
torch.Tensor.repeat_interleave = lambda self, *args, **kwargs: ( _Tensor_repeat_interleave(self.to("cpu"), *args, **kwargs).to(self.device) )
torch.Tensor.multinomial = lambda self, *args, **kwargs: ( _Tensor_multinomial(self.to("cpu"), *args, **kwargs).to(self.device) )
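# All seven wrappers above are the same CPU round-trip; written once as a generic
# helper, the pattern would read (a sketch, not code from this diff):
#
#     def fallback_to_cpu(orig_fn):
#         def wrapped(tensor, *args, **kwargs):
#             # run the op on CPU, then move the result back to the original device
#             return orig_fn(tensor.to("cpu"), *args, **kwargs).to(tensor.device)
#         return wrapped
#
#     torch.cumsum = fallback_to_cpu(torch.cumsum)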


@@ -2,9 +2,42 @@ import sys
import traceback
def print_error_explanation(message):
lines = message.strip().split("\n")
max_len = max([len(x) for x in lines])
print('=' * max_len, file=sys.stderr)
for line in lines:
print(line, file=sys.stderr)
print('=' * max_len, file=sys.stderr)
def display(e: Exception, task):
print(f"{task or 'error'}: {type(e).__name__}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
message = str(e)
if "copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768])" in message:
print_error_explanation("""
The most likely cause of this is you are trying to load Stable Diffusion 2.0 model without specifying its config file.
See https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20 for how to solve this.
""")
already_displayed = {}
def display_once(e: Exception, task):
if task in already_displayed:
return
display(e, task)
already_displayed[task] = 1
def run(code, task):
    try:
        code()
    except Exception as e:
        display(e, task)
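# Usage sketch (hypothetical task name); display_once() deduplicates per task,
# so repeated failures of the same task print only once:
#
#     def enable_feature():
#         raise RuntimeError("not supported")
#
#     run(enable_feature, "Enabling feature")
#     # stderr: "Enabling feature: RuntimeError" followed by the traceback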


@ -1,80 +0,0 @@
# this file is taken from https://github.com/xinntao/ESRGAN
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
def make_layer(block, n_layers):
layers = []
for _ in range(n_layers):
layers.append(block())
return nn.Sequential(*layers)
class ResidualDenseBlock_5C(nn.Module):
def __init__(self, nf=64, gc=32, bias=True):
super(ResidualDenseBlock_5C, self).__init__()
# gc: growth channel, i.e. intermediate channels
self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1, bias=bias)
self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1, bias=bias)
self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1, bias=bias)
self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1, bias=bias)
self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1, bias=bias)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
# initialization
# mutil.initialize_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)
def forward(self, x):
x1 = self.lrelu(self.conv1(x))
x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
return x5 * 0.2 + x
class RRDB(nn.Module):
'''Residual in Residual Dense Block'''
def __init__(self, nf, gc=32):
super(RRDB, self).__init__()
self.RDB1 = ResidualDenseBlock_5C(nf, gc)
self.RDB2 = ResidualDenseBlock_5C(nf, gc)
self.RDB3 = ResidualDenseBlock_5C(nf, gc)
def forward(self, x):
out = self.RDB1(x)
out = self.RDB2(out)
out = self.RDB3(out)
return out * 0.2 + x
class RRDBNet(nn.Module):
def __init__(self, in_nc, out_nc, nf, nb, gc=32):
super(RRDBNet, self).__init__()
RRDB_block_f = functools.partial(RRDB, nf=nf, gc=gc)
self.conv_first = nn.Conv2d(in_nc, nf, 3, 1, 1, bias=True)
self.RRDB_trunk = make_layer(RRDB_block_f, nb)
self.trunk_conv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
#### upsampling
self.upconv1 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.upconv2 = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.HRconv = nn.Conv2d(nf, nf, 3, 1, 1, bias=True)
self.conv_last = nn.Conv2d(nf, out_nc, 3, 1, 1, bias=True)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x):
fea = self.conv_first(x)
trunk = self.trunk_conv(self.RRDB_trunk(fea))
fea = fea + trunk
fea = self.lrelu(self.upconv1(F.interpolate(fea, scale_factor=2, mode='nearest')))
fea = self.lrelu(self.upconv2(F.interpolate(fea, scale_factor=2, mode='nearest')))
out = self.conv_last(self.lrelu(self.HRconv(fea)))
return out


@@ -5,69 +5,124 @@ import torch
from PIL import Image
from basicsr.utils.download_util import load_file_from_url
import modules.esrgan_model_arch as arch
from modules import shared, modelloader, images, devices
from modules.paths import models_path
from modules.upscaler import Upscaler, UpscalerData
from modules.shared import opts
def mod2normal(state_dict):
    # this code is copied from https://github.com/victorca25/iNNfer
    if 'conv_first.weight' in state_dict:
        crt_net = {}
        items = []
        for k, v in state_dict.items():
            items.append(k)

        crt_net['model.0.weight'] = state_dict['conv_first.weight']
        crt_net['model.0.bias'] = state_dict['conv_first.bias']

        for k in items.copy():
            if 'RDB' in k:
                ori_k = k.replace('RRDB_trunk.', 'model.1.sub.')
                if '.weight' in k:
                    ori_k = ori_k.replace('.weight', '.0.weight')
                elif '.bias' in k:
                    ori_k = ori_k.replace('.bias', '.0.bias')
                crt_net[ori_k] = state_dict[k]
                items.remove(k)

        crt_net['model.1.sub.23.weight'] = state_dict['trunk_conv.weight']
        crt_net['model.1.sub.23.bias'] = state_dict['trunk_conv.bias']
        crt_net['model.3.weight'] = state_dict['upconv1.weight']
        crt_net['model.3.bias'] = state_dict['upconv1.bias']
        crt_net['model.6.weight'] = state_dict['upconv2.weight']
        crt_net['model.6.bias'] = state_dict['upconv2.bias']
        crt_net['model.8.weight'] = state_dict['HRconv.weight']
        crt_net['model.8.bias'] = state_dict['HRconv.bias']
        crt_net['model.10.weight'] = state_dict['conv_last.weight']
        crt_net['model.10.bias'] = state_dict['conv_last.bias']
        state_dict = crt_net
    return state_dict
def resrgan2normal(state_dict, nb=23):
# this code is copied from https://github.com/victorca25/iNNfer
if "conv_first.weight" in state_dict and "body.0.rdb1.conv1.weight" in state_dict:
re8x = 0
crt_net = {}
items = []
for k, v in state_dict.items():
items.append(k)
crt_net['model.0.weight'] = state_dict['conv_first.weight']
crt_net['model.0.bias'] = state_dict['conv_first.bias']
for k in items.copy():
if "rdb" in k:
ori_k = k.replace('body.', 'model.1.sub.')
ori_k = ori_k.replace('.rdb', '.RDB')
if '.weight' in k:
ori_k = ori_k.replace('.weight', '.0.weight')
elif '.bias' in k:
ori_k = ori_k.replace('.bias', '.0.bias')
crt_net[ori_k] = state_dict[k]
items.remove(k)
crt_net[f'model.1.sub.{nb}.weight'] = state_dict['conv_body.weight']
crt_net[f'model.1.sub.{nb}.bias'] = state_dict['conv_body.bias']
crt_net['model.3.weight'] = state_dict['conv_up1.weight']
crt_net['model.3.bias'] = state_dict['conv_up1.bias']
crt_net['model.6.weight'] = state_dict['conv_up2.weight']
crt_net['model.6.bias'] = state_dict['conv_up2.bias']
if 'conv_up3.weight' in state_dict:
# modification supporting: https://github.com/ai-forever/Real-ESRGAN/blob/main/RealESRGAN/rrdbnet_arch.py
re8x = 3
crt_net['model.9.weight'] = state_dict['conv_up3.weight']
crt_net['model.9.bias'] = state_dict['conv_up3.bias']
crt_net[f'model.{8+re8x}.weight'] = state_dict['conv_hr.weight']
crt_net[f'model.{8+re8x}.bias'] = state_dict['conv_hr.bias']
crt_net[f'model.{10+re8x}.weight'] = state_dict['conv_last.weight']
crt_net[f'model.{10+re8x}.bias'] = state_dict['conv_last.bias']
state_dict = crt_net
return state_dict
def infer_params(state_dict):
# this code is copied from https://github.com/victorca25/iNNfer
scale2x = 0
scalemin = 6
n_uplayer = 0
plus = False
for block in list(state_dict):
parts = block.split(".")
n_parts = len(parts)
if n_parts == 5 and parts[2] == "sub":
nb = int(parts[3])
elif n_parts == 3:
part_num = int(parts[1])
if (part_num > scalemin
and parts[0] == "model"
and parts[2] == "weight"):
scale2x += 1
if part_num > n_uplayer:
n_uplayer = part_num
out_nc = state_dict[block].shape[0]
if not plus and "conv1x1" in block:
plus = True
nf = state_dict["model.0.weight"].shape[0]
in_nc = state_dict["model.0.weight"].shape[1]
out_nc = out_nc
scale = 2 ** scale2x
return in_nc, out_nc, nf, nb, plus, scale
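# Round-trip sketch: the new arch.RRDBNet below emits exactly the 'model.N...'
# key layout this function walks, so its parameters can be recovered from a
# state dict alone (an illustration, not part of the original file):
#
#     net = arch.RRDBNet(in_nc=3, out_nc=3, nf=64, nb=23, upscale=4)
#     in_nc, out_nc, nf, nb, plus, scale = infer_params(net.state_dict())
#     # -> (3, 3, 64, 23, False, 4)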
class UpscalerESRGAN(Upscaler):
def __init__(self, dirname):
@@ -76,7 +131,6 @@ class UpscalerESRGAN(Upscaler):
self.model_name = "ESRGAN_4x"
self.scalers = []
self.user_path = dirname
super().__init__()
model_paths = self.find_models(ext_filter=[".pt", ".pth"])
scalers = []
@@ -111,20 +165,39 @@ class UpscalerESRGAN(Upscaler):
print("Unable to load %s from %s" % (self.model_path, filename))
return None
    state_dict = torch.load(filename, map_location='cpu' if devices.device_esrgan.type == 'mps' else None)

    if "params_ema" in state_dict:
        state_dict = state_dict["params_ema"]
    elif "params" in state_dict:
        state_dict = state_dict["params"]
        num_conv = 16 if "realesr-animevideov3" in filename else 32
        model = arch.SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=num_conv, upscale=4, act_type='prelu')
        model.load_state_dict(state_dict)
        model.eval()
        return model

    if "body.0.rdb1.conv1.weight" in state_dict and "conv_first.weight" in state_dict:
        nb = 6 if "RealESRGAN_x4plus_anime_6B" in filename else 23
        state_dict = resrgan2normal(state_dict, nb)
    elif "conv_first.weight" in state_dict:
        state_dict = mod2normal(state_dict)
    elif "model.0.weight" not in state_dict:
        raise Exception("The file is not a recognized ESRGAN model.")

    in_nc, out_nc, nf, nb, plus, mscale = infer_params(state_dict)
    model = arch.RRDBNet(in_nc=in_nc, out_nc=out_nc, nf=nf, nb=nb, upscale=mscale, plus=plus)
    model.load_state_dict(state_dict)
    model.eval()

    return model
def upscale_without_tiling(model, img):
img = np.array(img)
img = img[:, :, ::-1]
img = np.ascontiguousarray(np.transpose(img, (2, 0, 1))) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(devices.device_esrgan)
with torch.no_grad():


@@ -0,0 +1,463 @@
# this file is adapted from https://github.com/victorca25/iNNfer
from collections import OrderedDict
import math
import functools
import torch
import torch.nn as nn
import torch.nn.functional as F
####################
# RRDBNet Generator
####################
class RRDBNet(nn.Module):
def __init__(self, in_nc, out_nc, nf, nb, nr=3, gc=32, upscale=4, norm_type=None,
act_type='leakyrelu', mode='CNA', upsample_mode='upconv', convtype='Conv2D',
finalact=None, gaussian_noise=False, plus=False):
super(RRDBNet, self).__init__()
n_upscale = int(math.log(upscale, 2))
if upscale == 3:
n_upscale = 1
self.resrgan_scale = 0
if in_nc % 16 == 0:
self.resrgan_scale = 1
elif in_nc != 4 and in_nc % 4 == 0:
self.resrgan_scale = 2
fea_conv = conv_block(in_nc, nf, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
rb_blocks = [RRDB(nf, nr, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=norm_type, act_type=act_type, mode='CNA', convtype=convtype,
gaussian_noise=gaussian_noise, plus=plus) for _ in range(nb)]
LR_conv = conv_block(nf, nf, kernel_size=3, norm_type=norm_type, act_type=None, mode=mode, convtype=convtype)
if upsample_mode == 'upconv':
upsample_block = upconv_block
elif upsample_mode == 'pixelshuffle':
upsample_block = pixelshuffle_block
else:
raise NotImplementedError('upsample mode [{:s}] is not found'.format(upsample_mode))
if upscale == 3:
upsampler = upsample_block(nf, nf, 3, act_type=act_type, convtype=convtype)
else:
upsampler = [upsample_block(nf, nf, act_type=act_type, convtype=convtype) for _ in range(n_upscale)]
HR_conv0 = conv_block(nf, nf, kernel_size=3, norm_type=None, act_type=act_type, convtype=convtype)
HR_conv1 = conv_block(nf, out_nc, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
outact = act(finalact) if finalact else None
self.model = sequential(fea_conv, ShortcutBlock(sequential(*rb_blocks, LR_conv)),
*upsampler, HR_conv0, HR_conv1, outact)
def forward(self, x, outm=None):
if self.resrgan_scale == 1:
feat = pixel_unshuffle(x, scale=4)
elif self.resrgan_scale == 2:
feat = pixel_unshuffle(x, scale=2)
else:
feat = x
return self.model(feat)
class RRDB(nn.Module):
"""
Residual in Residual Dense Block
(ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks)
"""
def __init__(self, nf, nr=3, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
spectral_norm=False, gaussian_noise=False, plus=False):
super(RRDB, self).__init__()
# This is for backwards compatibility with existing models
if nr == 3:
self.RDB1 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
self.RDB2 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
self.RDB3 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
else:
RDB_list = [ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus) for _ in range(nr)]
self.RDBs = nn.Sequential(*RDB_list)
def forward(self, x):
if hasattr(self, 'RDB1'):
out = self.RDB1(x)
out = self.RDB2(out)
out = self.RDB3(out)
else:
out = self.RDBs(x)
return out * 0.2 + x
class ResidualDenseBlock_5C(nn.Module):
"""
Residual Dense Block
The core module of paper: (Residual Dense Network for Image Super-Resolution, CVPR 18)
Modified options that can be used:
- "Partial Convolution based Padding" arXiv:1811.11718
- "Spectral normalization" arXiv:1802.05957
- "ICASSP 2020 - ESRGAN+ : Further Improving ESRGAN" N. C.
{Rakotonirina} and A. {Rasoanaivo}
"""
def __init__(self, nf=64, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
spectral_norm=False, gaussian_noise=False, plus=False):
super(ResidualDenseBlock_5C, self).__init__()
self.noise = GaussianNoise() if gaussian_noise else None
self.conv1x1 = conv1x1(nf, gc) if plus else None
self.conv1 = conv_block(nf, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv2 = conv_block(nf+gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv3 = conv_block(nf+2*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv4 = conv_block(nf+3*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
if mode == 'CNA':
last_act = None
else:
last_act = act_type
self.conv5 = conv_block(nf+4*gc, nf, 3, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=last_act, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
def forward(self, x):
x1 = self.conv1(x)
x2 = self.conv2(torch.cat((x, x1), 1))
if self.conv1x1:
x2 = x2 + self.conv1x1(x)
x3 = self.conv3(torch.cat((x, x1, x2), 1))
x4 = self.conv4(torch.cat((x, x1, x2, x3), 1))
if self.conv1x1:
x4 = x4 + x2
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
if self.noise:
return self.noise(x5.mul(0.2) + x)
else:
return x5 * 0.2 + x
####################
# ESRGANplus
####################
class GaussianNoise(nn.Module):
def __init__(self, sigma=0.1, is_relative_detach=False):
super().__init__()
self.sigma = sigma
self.is_relative_detach = is_relative_detach
self.noise = torch.tensor(0, dtype=torch.float)
def forward(self, x):
if self.training and self.sigma != 0:
self.noise = self.noise.to(x.device)
scale = self.sigma * x.detach() if self.is_relative_detach else self.sigma * x
sampled_noise = self.noise.repeat(*x.size()).normal_() * scale
x = x + sampled_noise
return x
def conv1x1(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
####################
# SRVGGNetCompact
####################
class SRVGGNetCompact(nn.Module):
"""A compact VGG-style network structure for super-resolution.
This class is copied from https://github.com/xinntao/Real-ESRGAN
"""
def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
super(SRVGGNetCompact, self).__init__()
self.num_in_ch = num_in_ch
self.num_out_ch = num_out_ch
self.num_feat = num_feat
self.num_conv = num_conv
self.upscale = upscale
self.act_type = act_type
self.body = nn.ModuleList()
# the first conv
self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
# the first activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the body structure
for _ in range(num_conv):
self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
# activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the last conv
self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
# upsample
self.upsampler = nn.PixelShuffle(upscale)
def forward(self, x):
out = x
for i in range(0, len(self.body)):
out = self.body[i](out)
out = self.upsampler(out)
# add the nearest upsampled image, so that the network learns the residual
base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
out += base
return out
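# Shape sketch (toy input): the last conv emits num_out_ch * upscale**2 = 48
# channels, which PixelShuffle(4) folds back to 3 channels at 4x resolution:
#
#     net = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4)
#     assert net(torch.randn(1, 3, 64, 64)).shape == (1, 3, 256, 256)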
####################
# Upsampler
####################
class Upsample(nn.Module):
r"""Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form
`minibatch x channels x [optional depth] x [optional height] x width`.
"""
def __init__(self, size=None, scale_factor=None, mode="nearest", align_corners=None):
super(Upsample, self).__init__()
if isinstance(scale_factor, tuple):
self.scale_factor = tuple(float(factor) for factor in scale_factor)
else:
self.scale_factor = float(scale_factor) if scale_factor else None
self.mode = mode
self.size = size
self.align_corners = align_corners
def forward(self, x):
return nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners)
def extra_repr(self):
if self.scale_factor is not None:
info = 'scale_factor=' + str(self.scale_factor)
else:
info = 'size=' + str(self.size)
info += ', mode=' + self.mode
return info
def pixel_unshuffle(x, scale):
""" Pixel unshuffle.
Args:
x (Tensor): Input feature with shape (b, c, hh, hw).
scale (int): Downsample ratio.
Returns:
Tensor: the pixel unshuffled feature.
"""
b, c, hh, hw = x.size()
out_channel = c * (scale**2)
assert hh % scale == 0 and hw % scale == 0
h = hh // scale
w = hw // scale
x_view = x.view(b, c, h, scale, w, scale)
return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
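# e.g. a (1, 3, 8, 8) input with scale=2 becomes (1, 12, 4, 4): spatial detail
# is folded into channels (illustrative toy tensor):
#
#     assert pixel_unshuffle(torch.randn(1, 3, 8, 8), scale=2).shape == (1, 12, 4, 4)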
def pixelshuffle_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', convtype='Conv2D'):
"""
Pixel shuffle layer
(Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional
Neural Network, CVPR17)
"""
conv = conv_block(in_nc, out_nc * (upscale_factor ** 2), kernel_size, stride, bias=bias,
pad_type=pad_type, norm_type=None, act_type=None, convtype=convtype)
pixel_shuffle = nn.PixelShuffle(upscale_factor)
n = norm(norm_type, out_nc) if norm_type else None
a = act(act_type) if act_type else None
return sequential(conv, pixel_shuffle, n, a)
def upconv_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', mode='nearest', convtype='Conv2D'):
""" Upconv layer """
upscale_factor = (1, upscale_factor, upscale_factor) if convtype == 'Conv3D' else upscale_factor
upsample = Upsample(scale_factor=upscale_factor, mode=mode)
conv = conv_block(in_nc, out_nc, kernel_size, stride, bias=bias,
pad_type=pad_type, norm_type=norm_type, act_type=act_type, convtype=convtype)
return sequential(upsample, conv)
####################
# Basic blocks
####################
def make_layer(basic_block, num_basic_block, **kwarg):
"""Make layers by stacking the same blocks.
Args:
basic_block (nn.module): nn.module class for basic block. (block)
num_basic_block (int): number of blocks. (n_layers)
Returns:
nn.Sequential: Stacked blocks in nn.Sequential.
"""
layers = []
for _ in range(num_basic_block):
layers.append(basic_block(**kwarg))
return nn.Sequential(*layers)
def act(act_type, inplace=True, neg_slope=0.2, n_prelu=1, beta=1.0):
""" activation helper """
act_type = act_type.lower()
if act_type == 'relu':
layer = nn.ReLU(inplace)
elif act_type in ('leakyrelu', 'lrelu'):
layer = nn.LeakyReLU(neg_slope, inplace)
elif act_type == 'prelu':
layer = nn.PReLU(num_parameters=n_prelu, init=neg_slope)
elif act_type == 'tanh': # [-1, 1] range output
layer = nn.Tanh()
elif act_type == 'sigmoid': # [0, 1] range output
layer = nn.Sigmoid()
else:
raise NotImplementedError('activation layer [{:s}] is not found'.format(act_type))
return layer
class Identity(nn.Module):
def __init__(self, *kwargs):
super(Identity, self).__init__()
def forward(self, x, *kwargs):
return x
def norm(norm_type, nc):
""" Return a normalization layer """
norm_type = norm_type.lower()
if norm_type == 'batch':
layer = nn.BatchNorm2d(nc, affine=True)
elif norm_type == 'instance':
layer = nn.InstanceNorm2d(nc, affine=False)
elif norm_type == 'none':
layer = Identity()
else:
raise NotImplementedError('normalization layer [{:s}] is not found'.format(norm_type))
return layer
def pad(pad_type, padding):
""" padding layer helper """
pad_type = pad_type.lower()
if padding == 0:
return None
if pad_type == 'reflect':
layer = nn.ReflectionPad2d(padding)
elif pad_type == 'replicate':
layer = nn.ReplicationPad2d(padding)
elif pad_type == 'zero':
layer = nn.ZeroPad2d(padding)
else:
raise NotImplementedError('padding layer [{:s}] is not implemented'.format(pad_type))
return layer
def get_valid_padding(kernel_size, dilation):
kernel_size = kernel_size + (kernel_size - 1) * (dilation - 1)
padding = (kernel_size - 1) // 2
return padding
class ShortcutBlock(nn.Module):
""" Elementwise sum the output of a submodule to its input """
def __init__(self, submodule):
super(ShortcutBlock, self).__init__()
self.sub = submodule
def forward(self, x):
output = x + self.sub(x)
return output
def __repr__(self):
return 'Identity + \n|' + self.sub.__repr__().replace('\n', '\n|')
def sequential(*args):
""" Flatten Sequential. It unwraps nn.Sequential. """
if len(args) == 1:
if isinstance(args[0], OrderedDict):
raise NotImplementedError('sequential does not support OrderedDict input.')
return args[0] # No sequential is needed.
modules = []
for module in args:
if isinstance(module, nn.Sequential):
for submodule in module.children():
modules.append(submodule)
elif isinstance(module, nn.Module):
modules.append(module)
return nn.Sequential(*modules)
def conv_block(in_nc, out_nc, kernel_size, stride=1, dilation=1, groups=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', mode='CNA', convtype='Conv2D',
spectral_norm=False):
""" Conv layer with padding, normalization, activation """
assert mode in ['CNA', 'NAC', 'CNAC'], 'Wrong conv mode [{:s}]'.format(mode)
padding = get_valid_padding(kernel_size, dilation)
p = pad(pad_type, padding) if pad_type and pad_type != 'zero' else None
padding = padding if pad_type == 'zero' else 0
if convtype=='PartialConv2D':
c = PartialConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
elif convtype=='DeformConv2D':
c = DeformConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
elif convtype=='Conv3D':
c = nn.Conv3d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
else:
c = nn.Conv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
if spectral_norm:
c = nn.utils.spectral_norm(c)
a = act(act_type) if act_type else None
if 'CNA' in mode:
n = norm(norm_type, out_nc) if norm_type else None
return sequential(p, c, n, a)
elif mode == 'NAC':
if norm_type is None and act_type is not None:
a = act(act_type, inplace=False)
n = norm(norm_type, in_nc) if norm_type else None
return sequential(n, a, p, c)
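# Quick sketch of the default CNA path (zero padding, no norm, toy input):
#
#     block = conv_block(3, 64, kernel_size=3, act_type='leakyrelu')  # Conv2d -> LeakyReLU
#     assert block(torch.randn(1, 3, 32, 32)).shape == (1, 64, 32, 32)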

modules/extensions.py

@@ -0,0 +1,101 @@
import os
import sys
import traceback
import git
from modules import paths, shared
extensions = []
extensions_dir = os.path.join(paths.data_path, "extensions")
extensions_builtin_dir = os.path.join(paths.script_path, "extensions-builtin")
if not os.path.exists(extensions_dir):
os.makedirs(extensions_dir)
def active():
return [x for x in extensions if x.enabled]
class Extension:
def __init__(self, name, path, enabled=True, is_builtin=False):
self.name = name
self.path = path
self.enabled = enabled
self.status = ''
self.can_update = False
self.is_builtin = is_builtin
repo = None
try:
if os.path.exists(os.path.join(path, ".git")):
repo = git.Repo(path)
except Exception:
print(f"Error reading github repository info from {path}:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
if repo is None or repo.bare:
self.remote = None
else:
try:
self.remote = next(repo.remote().urls, None)
self.status = 'unknown'
except Exception:
self.remote = None
def list_files(self, subdir, extension):
from modules import scripts
dirpath = os.path.join(self.path, subdir)
if not os.path.isdir(dirpath):
return []
res = []
for filename in sorted(os.listdir(dirpath)):
res.append(scripts.ScriptFile(self.path, filename, os.path.join(dirpath, filename)))
res = [x for x in res if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
return res
def check_updates(self):
repo = git.Repo(self.path)
for fetch in repo.remote().fetch("--dry-run"):
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
self.status = "behind"
return
self.can_update = False
self.status = "latest"
def fetch_and_reset_hard(self):
repo = git.Repo(self.path)
# Fix: `error: Your local changes to the following files would be overwritten by merge`,
# because WSL2 Docker sets 755 file permissions instead of 644, which results in this error.
repo.git.fetch('--all')
repo.git.reset('--hard', 'origin')
def list_extensions():
extensions.clear()
if not os.path.isdir(extensions_dir):
return
paths = []
for dirname in [extensions_dir, extensions_builtin_dir]:
if not os.path.isdir(dirname):
return
for extension_dirname in sorted(os.listdir(dirname)):
path = os.path.join(dirname, extension_dirname)
if not os.path.isdir(path):
continue
paths.append((extension_dirname, path, dirname == extensions_builtin_dir))
for dirname, path, is_builtin in paths:
extension = Extension(name=dirname, path=path, enabled=dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin)
extensions.append(extension)

modules/extra_networks.py

@@ -0,0 +1,147 @@
import re
from collections import defaultdict
from modules import errors
extra_network_registry = {}
def initialize():
extra_network_registry.clear()
def register_extra_network(extra_network):
extra_network_registry[extra_network.name] = extra_network
class ExtraNetworkParams:
def __init__(self, items=None):
self.items = items or []
class ExtraNetwork:
def __init__(self, name):
self.name = name
def activate(self, p, params_list):
"""
Called by processing on every run. Whatever the extra network is meant to do should be activated here.
Passes arguments related to this extra network in params_list.
User passes arguments by specifying this in his prompt:
<name:arg1:arg2:arg3>
Where name matches the name of this ExtraNetwork object, and arg1:arg2:arg3 are any natural number of text arguments
separated by colon.
Even if the user does not mention this ExtraNetwork in his prompt, the call will still be made, with empty params_list -
in this case, all effects of this extra network should be disabled.
Can be called multiple times before deactivate() - each new call should override the previous call completely.
For example, if this ExtraNetwork's name is 'hypernet' and user's prompt is:
> "1girl, <hypernet:agm:1.1> <extrasupernet:master:12:13:14> <hypernet:ray>"
params_list will be:
[
ExtraNetworkParams(items=["agm", "1.1"]),
ExtraNetworkParams(items=["ray"])
]
"""
raise NotImplementedError
def deactivate(self, p):
"""
Called at the end of processing for housekeeping. No need to do anything here.
"""
raise NotImplementedError
def activate(p, extra_network_data):
"""call activate for extra networks in extra_network_data in specified order, then call
activate for all remaining registered networks with an empty argument list"""
for extra_network_name, extra_network_args in extra_network_data.items():
extra_network = extra_network_registry.get(extra_network_name, None)
if extra_network is None:
print(f"Skipping unknown extra network: {extra_network_name}")
continue
try:
extra_network.activate(p, extra_network_args)
except Exception as e:
errors.display(e, f"activating extra network {extra_network_name} with arguments {extra_network_args}")
for extra_network_name, extra_network in extra_network_registry.items():
args = extra_network_data.get(extra_network_name, None)
if args is not None:
continue
try:
extra_network.activate(p, [])
except Exception as e:
errors.display(e, f"activating extra network {extra_network_name}")
def deactivate(p, extra_network_data):
"""call deactivate for extra networks in extra_network_data in specified order, then call
deactivate for all remaining registered networks"""
for extra_network_name, extra_network_args in extra_network_data.items():
extra_network = extra_network_registry.get(extra_network_name, None)
if extra_network is None:
continue
try:
extra_network.deactivate(p)
except Exception as e:
errors.display(e, f"deactivating extra network {extra_network_name}")
for extra_network_name, extra_network in extra_network_registry.items():
args = extra_network_data.get(extra_network_name, None)
if args is not None:
continue
try:
extra_network.deactivate(p)
except Exception as e:
errors.display(e, f"deactivating unmentioned extra network {extra_network_name}")
re_extra_net = re.compile(r"<(\w+):([^>]+)>")
def parse_prompt(prompt):
res = defaultdict(list)
def found(m):
name = m.group(1)
args = m.group(2)
res[name].append(ExtraNetworkParams(items=args.split(":")))
return ""
prompt = re.sub(re_extra_net, found, prompt)
return prompt, res
def parse_prompts(prompts):
res = []
extra_data = None
for prompt in prompts:
updated_prompt, parsed_extra_data = parse_prompt(prompt)
if extra_data is None:
extra_data = parsed_extra_data
res.append(updated_prompt)
return res, extra_data
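# Round-trip sketch of the parser (illustrative values):
#
#     prompt, data = parse_prompt("1girl, <hypernet:agm:1.1> masterpiece")
#     # prompt == "1girl,  masterpiece"  (the tag is removed, surrounding spaces remain)
#     # data["hypernet"][0].items == ["agm", "1.1"]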


@@ -0,0 +1,27 @@
from modules import extra_networks, shared
from modules.hypernetworks import hypernetwork
class ExtraNetworkHypernet(extra_networks.ExtraNetwork):
def __init__(self):
super().__init__('hypernet')
def activate(self, p, params_list):
additional = shared.opts.sd_hypernetwork
if additional != "" and additional in shared.hypernetworks and len([x for x in params_list if x.items[0] == additional]) == 0:
p.all_prompts = [x + f"<hypernet:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))
names = []
multipliers = []
for params in params_list:
assert len(params.items) > 0
names.append(params.items[0])
multipliers.append(float(params.items[1]) if len(params.items) > 1 else 1.0)
hypernetwork.load_hypernetworks(names, multipliers)
def deactivate(self, p):
pass


@@ -1,131 +1,23 @@
import os
import re
import shutil
import torch
import tqdm

from modules import shared, images, sd_models, sd_vae, sd_models_config
from modules.ui_common import plaintext_to_html
import gradio as gr
import safetensors.torch
def run_pnginfo(image):
if image is None:
return '', '', ''
    geninfo, items = images.read_info_from_image(image)
    items = {**{'parameters': geninfo}, **items}
info = ''
for key, text in items.items():
@@ -143,64 +35,224 @@ def run_pnginfo(image):
return '', geninfo, info
def create_config(ckpt_result, config_source, a, b, c):
    def config(x):
        res = sd_models_config.find_checkpoint_config_near_filename(x) if x else None
        return res if res != shared.sd_default_config else None

    if config_source == 0:
        cfg = config(a) or config(b) or config(c)
    elif config_source == 1:
        cfg = config(b)
    elif config_source == 2:
        cfg = config(c)
    else:
        cfg = None

    if cfg is None:
        return

    filename, _ = os.path.splitext(ckpt_result)
    checkpoint_filename = filename + ".yaml"

    print("Copying config:")
    print("   from:", cfg)
    print("     to:", checkpoint_filename)
    shutil.copyfile(cfg, checkpoint_filename)

checkpoint_dict_skip_on_merge = ["cond_stage_model.transformer.text_model.embeddings.position_ids"]

def to_half(tensor, enable):
    if enable and tensor.dtype == torch.float:
        return tensor.half()
    return tensor

def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights):
    shared.state.begin()
    shared.state.job = 'model-merge'

    def fail(message):
        shared.state.textinfo = message
        shared.state.end()
        return [*[gr.update() for _ in range(4)], message]

    def weighted_sum(theta0, theta1, alpha):
        return ((1 - alpha) * theta0) + (alpha * theta1)

    def get_difference(theta1, theta2):
        return theta1 - theta2

    def add_difference(theta0, theta1_2_diff, alpha):
        return theta0 + (alpha * theta1_2_diff)
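    # With toy scalars and multiplier = 0.25 the two modes above give (illustrative):
    #     weighted_sum(1.0, 3.0, 0.25)                        -> 0.75*1 + 0.25*3 = 1.5
    #     add_difference(1.0, get_difference(3.0, 2.0), 0.25) -> 1 + 0.25*(3 - 2) = 1.25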
primary_model_info = sd_models.checkpoints_list[primary_model_name]
secondary_model_info = sd_models.checkpoints_list[secondary_model_name]
print(f"Loading {primary_model_info.filename}...")
primary_model = torch.load(primary_model_info.filename, map_location='cpu')
print(f"Loading {secondary_model_info.filename}...")
secondary_model = torch.load(secondary_model_info.filename, map_location='cpu')
theta_0 = primary_model['state_dict']
theta_1 = secondary_model['state_dict']
theta_funcs = {
"Weighted Sum": weighted_sum,
"Sigmoid": sigmoid,
"Inverse Sigmoid": inv_sigmoid,
}
theta_func = theta_funcs[interp_method]
print("Merging...")
def filename_weighted_sum():
a = primary_model_info.model_name
b = secondary_model_info.model_name
Ma = round(1 - multiplier, 2)
Mb = round(multiplier, 2)
return f"{Ma}({a}) + {Mb}({b})"
def filename_add_difference():
a = primary_model_info.model_name
b = secondary_model_info.model_name
c = tertiary_model_info.model_name
M = round(multiplier, 2)
return f"{a} + {M}({b} - {c})"
def filename_nothing():
return primary_model_info.model_name
theta_funcs = {
"Weighted sum": (filename_weighted_sum, None, weighted_sum),
"Add difference": (filename_add_difference, get_difference, add_difference),
"No interpolation": (filename_nothing, None, None),
}
filename_generator, theta_func1, theta_func2 = theta_funcs[interp_method]
shared.state.job_count = (1 if theta_func1 else 0) + (1 if theta_func2 else 0)
if not primary_model_name:
return fail("Failed: Merging requires a primary model.")
primary_model_info = sd_models.checkpoints_list[primary_model_name]
if theta_func2 and not secondary_model_name:
return fail("Failed: Merging requires a secondary model.")
secondary_model_info = sd_models.checkpoints_list[secondary_model_name] if theta_func2 else None
if theta_func1 and not tertiary_model_name:
return fail(f"Failed: Interpolation method ({interp_method}) requires a tertiary model.")
tertiary_model_info = sd_models.checkpoints_list[tertiary_model_name] if theta_func1 else None
result_is_inpainting_model = False
result_is_instruct_pix2pix_model = False
if theta_func2:
shared.state.textinfo = f"Loading B"
print(f"Loading {secondary_model_info.filename}...")
theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu')
else:
theta_1 = None
if theta_func1:
shared.state.textinfo = f"Loading C"
print(f"Loading {tertiary_model_info.filename}...")
theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu')
shared.state.textinfo = 'Merging B and C'
shared.state.sampling_steps = len(theta_1.keys())
for key in tqdm.tqdm(theta_1.keys()):
if key in checkpoint_dict_skip_on_merge:
continue
if 'model' in key:
if key in theta_2:
t2 = theta_2.get(key, torch.zeros_like(theta_1[key]))
theta_1[key] = theta_func1(theta_1[key], t2)
else:
theta_1[key] = torch.zeros_like(theta_1[key])
shared.state.sampling_step += 1
del theta_2
shared.state.nextjob()
shared.state.textinfo = f"Loading {primary_model_info.filename}..."
print(f"Loading {primary_model_info.filename}...")
theta_0 = sd_models.read_state_dict(primary_model_info.filename, map_location='cpu')
print("Merging...")
shared.state.textinfo = 'Merging A and B'
shared.state.sampling_steps = len(theta_0.keys())
for key in tqdm.tqdm(theta_0.keys()):
if 'model' in key and key in theta_1:
theta_0[key] = theta_func(theta_0[key], theta_1[key], (float(1.0) - interp_amount))  # Need to reverse the interp_amount to match the desired mix ratio in the merged checkpoint
if save_as_half:
theta_0[key] = theta_0[key].half()
for key in theta_1.keys():
if 'model' in key and key not in theta_0:
theta_0[key] = theta_1[key]
if save_as_half:
theta_0[key] = theta_0[key].half()
if theta_1 and 'model' in key and key in theta_1:
if key in checkpoint_dict_skip_on_merge:
continue
a = theta_0[key]
b = theta_1[key]
# this enables merging an inpainting model (A) with another one (B);
# where a normal model has 4 channels for the latent space, an inpainting model
# has another 4 channels for the unmasked picture's latent space, plus one channel for the mask, for a total of 9
if a.shape != b.shape and a.shape[0:1] + a.shape[2:] == b.shape[0:1] + b.shape[2:]:
if a.shape[1] == 4 and b.shape[1] == 9:
raise RuntimeError("When merging inpainting model with a normal one, A must be the inpainting model.")
if a.shape[1] == 4 and b.shape[1] == 8:
raise RuntimeError("When merging instruct-pix2pix model with a normal one, A must be the instruct-pix2pix model.")
if a.shape[1] == 8 and b.shape[1] == 4:  # if we have an instruct-pix2pix model...
theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)  # merge only the channels the models have in common; otherwise we get an error due to dimension mismatch
result_is_instruct_pix2pix_model = True
else:
assert a.shape[1] == 9 and b.shape[1] == 4, f"Bad dimensions for merged layer {key}: A={a.shape}, B={b.shape}"
theta_0[key][:, 0:4, :, :] = theta_func2(a[:, 0:4, :, :], b, multiplier)
result_is_inpainting_model = True
else:
theta_0[key] = theta_func2(a, b, multiplier)
theta_0[key] = to_half(theta_0[key], save_as_half)
shared.state.sampling_step += 1
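# Illustrative sketch (shapes assumed from typical SD inpainting UNets, not from
# the source): merging an inpainting A with a normal B touches only the shared channels:
#   a = torch.zeros(320, 9, 3, 3)   # A's input conv: 4 latent + 4 masked-image + 1 mask channels
#   b = torch.ones(320, 4, 3, 3)    # B's input conv: 4 latent channels
#   a[:, 0:4, :, :] = weighted_sum(a[:, 0:4, :, :], b, 0.5)  # shared channels become 0.5
#   # channels 4:9 keep A's weights untouched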
del theta_1
bake_in_vae_filename = sd_vae.vae_dict.get(bake_in_vae, None)
if bake_in_vae_filename is not None:
print(f"Baking in VAE from {bake_in_vae_filename}")
shared.state.textinfo = 'Baking in VAE'
vae_dict = sd_vae.load_vae_dict(bake_in_vae_filename, map_location='cpu')
for key in vae_dict.keys():
theta_0_key = 'first_stage_model.' + key
if theta_0_key in theta_0:
theta_0[theta_0_key] = to_half(vae_dict[key], save_as_half)
del vae_dict
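# Illustrative note (key names are typical SD VAE keys, used here as an assumed
# example): baking simply overwrites the checkpoint's "first_stage_model." entries,
# e.g. the VAE key "encoder.conv_in.weight" replaces
# theta_0["first_stage_model.encoder.conv_in.weight"]; checkpoint VAE keys absent
# from the baked VAE are left as they were.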
if save_as_half and not theta_func2:
for key in theta_0.keys():
theta_0[key] = to_half(theta_0[key], save_as_half)
if discard_weights:
regex = re.compile(discard_weights)
for key in list(theta_0):
if re.search(regex, key):
theta_0.pop(key, None)
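# Illustrative example (the pattern is hypothetical): a discard_weights regex
# such as r"model_ema\..*" would strip EMA copies like "model_ema.decay" from
# theta_0 while keeping the "model.diffusion_model..." weights that are actually
# used for inference.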
ckpt_dir = shared.cmd_opts.ckpt_dir or sd_models.model_path
filename = primary_model_info.model_name + '_' + str(round(interp_amount, 2)) + '-' + secondary_model_info.model_name + '_' + str(round((float(1.0) - interp_amount), 2)) + '-' + interp_method.replace(" ", "_") + '-merged.ckpt'
filename = filename if custom_name == '' else (custom_name + '.ckpt')
filename = filename_generator() if custom_name == '' else custom_name
filename += ".inpainting" if result_is_inpainting_model else ""
filename += ".instruct-pix2pix" if result_is_instruct_pix2pix_model else ""
filename += "." + checkpoint_format
output_modelname = os.path.join(ckpt_dir, filename)
shared.state.nextjob()
shared.state.textinfo = "Saving"
print(f"Saving to {output_modelname}...")
torch.save(primary_model, output_modelname)
_, extension = os.path.splitext(output_modelname)
if extension.lower() == ".safetensors":
safetensors.torch.save_file(theta_0, output_modelname, metadata={"format": "pt"})
else:
torch.save(theta_0, output_modelname)
sd_models.list_models()
print(f"Checkpoint saved.")
return ["Checkpoint saved to " + output_modelname] + [gr.Dropdown.update(choices=sd_models.checkpoint_tiles()) for _ in range(3)]
create_config(output_modelname, config_source, primary_model_info, secondary_model_info, tertiary_model_info)
print(f"Checkpoint saved to {output_modelname}.")
shared.state.textinfo = "Checkpoint saved"
shared.state.end()
return [*[gr.Dropdown.update(choices=sd_models.checkpoint_tiles()) for _ in range(4)], "Checkpoint saved to " + output_modelname]


@@ -1,12 +1,226 @@
import base64
import html
import io
import math
import os
import re
import gradio as gr
from pathlib import Path
re_param_code = r"\s*([\w ]+):\s*([^,]+)(?:,|$)"
import gradio as gr
from modules.paths import data_path
from modules import shared, ui_tempdir, script_callbacks
import tempfile
from PIL import Image
re_param_code = r'\s*([\w ]+):\s*("(?:\\"[^,]|\\"|\\|[^\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_params = re.compile(r"^(?:" + re_param_code + "){3,}$")
re_imagesize = re.compile(r"^(\d+)x(\d+)$")
re_hypernet_hash = re.compile(r"\(([0-9a-f]+)\)$")
type_of_gr_update = type(gr.update())
paste_fields = {}
registered_param_bindings = []
class ParamBinding:
def __init__(self, paste_button, tabname, source_text_component=None, source_image_component=None, source_tabname=None, override_settings_component=None):
self.paste_button = paste_button
self.tabname = tabname
self.source_text_component = source_text_component
self.source_image_component = source_image_component
self.source_tabname = source_tabname
self.override_settings_component = override_settings_component
def reset():
paste_fields.clear()
def quote(text):
if ',' not in str(text):
return text
text = str(text)
text = text.replace('\\', '\\\\')
text = text.replace('"', '\\"')
return f'"{text}"'
def image_from_url_text(filedata):
if filedata is None:
return None
if type(filedata) == list and len(filedata) > 0 and type(filedata[0]) == dict and filedata[0].get("is_file", False):
filedata = filedata[0]
if type(filedata) == dict and filedata.get("is_file", False):
filename = filedata["name"]
is_in_right_dir = ui_tempdir.check_tmp_file(shared.demo, filename)
assert is_in_right_dir, 'trying to open image file outside of allowed directories'
return Image.open(filename)
if type(filedata) == list:
if len(filedata) == 0:
return None
filedata = filedata[0]
if filedata.startswith("data:image/png;base64,"):
filedata = filedata[len("data:image/png;base64,"):]
filedata = base64.decodebytes(filedata.encode('utf-8'))
image = Image.open(io.BytesIO(filedata))
return image
def add_paste_fields(tabname, init_img, fields):
paste_fields[tabname] = {"init_img": init_img, "fields": fields}
# backwards compatibility for existing extensions
import modules.ui
if tabname == 'txt2img':
modules.ui.txt2img_paste_fields = fields
elif tabname == 'img2img':
modules.ui.img2img_paste_fields = fields
def create_buttons(tabs_list):
buttons = {}
for tab in tabs_list:
buttons[tab] = gr.Button(f"Send to {tab}", elem_id=f"{tab}_tab")
return buttons
def bind_buttons(buttons, send_image, send_generate_info):
"""old function for backwards compatibility; do not use this, use register_paste_params_button"""
for tabname, button in buttons.items():
source_text_component = send_generate_info if isinstance(send_generate_info, gr.components.Component) else None
source_tabname = send_generate_info if isinstance(send_generate_info, str) else None
register_paste_params_button(ParamBinding(paste_button=button, tabname=tabname, source_text_component=source_text_component, source_image_component=send_image, source_tabname=source_tabname))
def register_paste_params_button(binding: ParamBinding):
registered_param_bindings.append(binding)
def connect_paste_params_buttons():
binding: ParamBinding
for binding in registered_param_bindings:
destination_image_component = paste_fields[binding.tabname]["init_img"]
fields = paste_fields[binding.tabname]["fields"]
destination_width_component = next(iter([field for field, name in fields if name == "Size-1"] if fields else []), None)
destination_height_component = next(iter([field for field, name in fields if name == "Size-2"] if fields else []), None)
if binding.source_image_component and destination_image_component:
if isinstance(binding.source_image_component, gr.Gallery):
func = send_image_and_dimensions if destination_width_component else image_from_url_text
jsfunc = "extract_image_from_gallery"
else:
func = send_image_and_dimensions if destination_width_component else lambda x: x
jsfunc = None
binding.paste_button.click(
fn=func,
_js=jsfunc,
inputs=[binding.source_image_component],
outputs=[destination_image_component, destination_width_component, destination_height_component] if destination_width_component else [destination_image_component],
)
if binding.source_text_component is not None and fields is not None:
connect_paste(binding.paste_button, fields, binding.source_text_component, binding.override_settings_component, binding.tabname)
if binding.source_tabname is not None and fields is not None:
paste_field_names = ['Prompt', 'Negative prompt', 'Steps', 'Face restoration'] + (["Seed"] if shared.opts.send_seed else [])
binding.paste_button.click(
fn=lambda *x: x,
inputs=[field for field, name in paste_fields[binding.source_tabname]["fields"] if name in paste_field_names],
outputs=[field for field, name in fields if name in paste_field_names],
)
binding.paste_button.click(
fn=None,
_js=f"switch_to_{binding.tabname}",
inputs=None,
outputs=None,
)
def send_image_and_dimensions(x):
if isinstance(x, Image.Image):
img = x
else:
img = image_from_url_text(x)
if shared.opts.send_size and isinstance(img, Image.Image):
w = img.width
h = img.height
else:
w = gr.update()
h = gr.update()
return img, w, h
def find_hypernetwork_key(hypernet_name, hypernet_hash=None):
"""Determines the config parameter name to use for the hypernet based on the parameters in the infotext.
Example: an infotext provides "Hypernet: ke-ta" and "Hypernet hash: 1234abcd". For the "Hypernet" config
parameter this means there should be an entry that looks like "ke-ta-10000(1234abcd)" to set it to.
If the infotext has no hash, then a hypernet with the same name will be selected instead.
"""
hypernet_name = hypernet_name.lower()
if hypernet_hash is not None:
# Try to match the hash in the name
for hypernet_key in shared.hypernetworks.keys():
result = re_hypernet_hash.search(hypernet_key)
if result is not None and result[1] == hypernet_hash:
return hypernet_key
else:
# Fall back to a hypernet with the same name
for hypernet_key in shared.hypernetworks.keys():
if hypernet_key.lower().startswith(hypernet_name):
return hypernet_key
return None
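# Illustrative example (the dictionary entry is hypothetical): with
# shared.hypernetworks == {"ke-ta-10000(1234abcd)": "/path/to/ke-ta-10000.pt"}:
#   find_hypernetwork_key("ke-ta", "1234abcd")  -> "ke-ta-10000(1234abcd)"  (hash match)
#   find_hypernetwork_key("ke-ta")              -> "ke-ta-10000(1234abcd)"  (name prefix fallback)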
def restore_old_hires_fix_params(res):
"""for infotexts that specify old First pass size parameter, convert it into
width, height, and hr scale"""
firstpass_width = res.get('First pass size-1', None)
firstpass_height = res.get('First pass size-2', None)
if shared.opts.use_old_hires_fix_width_height:
hires_width = int(res.get("Hires resize-1", 0))
hires_height = int(res.get("Hires resize-2", 0))
if hires_width and hires_height:
res['Size-1'] = hires_width
res['Size-2'] = hires_height
return
if firstpass_width is None or firstpass_height is None:
return
firstpass_width, firstpass_height = int(firstpass_width), int(firstpass_height)
width = int(res.get("Size-1", 512))
height = int(res.get("Size-2", 512))
if firstpass_width == 0 or firstpass_height == 0:
from modules import processing
firstpass_width, firstpass_height = processing.old_hires_fix_first_pass_dimensions(width, height)
res['Size-1'] = firstpass_width
res['Size-2'] = firstpass_height
res['Hires resize-1'] = width
res['Hires resize-2'] = height
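# Illustrative before/after sketch (hypothetical values), with
# use_old_hires_fix_width_height disabled:
#   in:  {"First pass size-1": "512", "First pass size-2": "512", "Size-1": "1024", "Size-2": "1024"}
#   out: Size-1/Size-2 become 512 (the actual first-pass render size) and
#        Hires resize-1/Hires resize-2 become 1024 (the final upscaled size).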
def parse_generation_parameters(x: str):
"""parses generation parameters string, the one you see in text field under the picture in UI:
@@ -27,7 +241,7 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
done_with_prompt = False
*lines, lastline = x.strip().split("\n")
if not re_params.match(lastline):
if len(re_param.findall(lastline)) < 3:
lines.append(lastline)
lastline = ''
@@ -42,13 +256,11 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
else:
prompt += ("" if prompt == "" else "\n") + line
if len(prompt) > 0:
res["Prompt"] = prompt
if len(negative_prompt) > 0:
res["Negative prompt"] = negative_prompt
for k, v in re_param.findall(lastline):
v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
m = re_imagesize.match(v)
if m is not None:
res[k+"-1"] = m.group(1)
@@ -56,12 +268,76 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
else:
res[k] = v
# Missing CLIP skip means it was set to 1 (the default)
if "Clip skip" not in res:
res["Clip skip"] = "1"
hypernet = res.get("Hypernet", None)
if hypernet is not None:
res["Prompt"] += f"""<hypernet:{hypernet}:{res.get("Hypernet strength", "1.0")}>"""
if "Hires resize-1" not in res:
res["Hires resize-1"] = 0
res["Hires resize-2"] = 0
restore_old_hires_fix_params(res)
return res
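# Illustrative usage sketch (hypothetical input, approximate output):
#   parse_generation_parameters("a photo\nNegative prompt: blurry\nSteps: 20, Size: 512x768, Clip skip: 2")
# would give roughly {"Prompt": "a photo", "Negative prompt": "blurry",
# "Steps": "20", "Size-1": "512", "Size-2": "768", "Clip skip": "2",
# "Hires resize-1": 0, "Hires resize-2": 0}.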
def connect_paste(button, paste_fields, input_comp, js=None):
settings_map = {}
infotext_to_setting_name_mapping = [
('Clip skip', 'CLIP_stop_at_last_layers', ),
('Conditional mask weight', 'inpainting_mask_weight'),
('Model hash', 'sd_model_checkpoint'),
('ENSD', 'eta_noise_seed_delta'),
('Noise multiplier', 'initial_noise_multiplier'),
('Eta', 'eta_ancestral'),
('Eta DDIM', 'eta_ddim'),
('Discard penultimate sigma', 'always_discard_next_to_last_sigma')
]
def create_override_settings_dict(text_pairs):
"""creates processing's override_settings parameters from gradio's multiselect
Example input:
['Clip skip: 2', 'Model hash: e6e99610c4', 'ENSD: 31337']
Example output:
{'CLIP_stop_at_last_layers': 2, 'sd_model_checkpoint': 'e6e99610c4', 'eta_noise_seed_delta': 31337}
"""
res = {}
params = {}
for pair in text_pairs:
k, v = pair.split(":", maxsplit=1)
params[k] = v.strip()
for param_name, setting_name in infotext_to_setting_name_mapping:
value = params.get(param_name, None)
if value is None:
continue
res[setting_name] = shared.opts.cast_value(setting_name, value)
return res
def connect_paste(button, paste_fields, input_comp, override_settings_component, tabname):
def paste_func(prompt):
if not prompt and not shared.cmd_opts.hide_ui_dir_config:
filename = os.path.join(data_path, "params.txt")
if os.path.exists(filename):
with open(filename, "r", encoding="utf8") as file:
prompt = file.read()
params = parse_generation_parameters(prompt)
script_callbacks.infotext_pasted_callback(prompt, params)
res = []
for output, key in paste_fields:
@@ -77,16 +353,49 @@ def connect_paste(button, paste_fields, input_comp, js=None):
else:
try:
valtype = type(output.value)
if valtype == bool and v == "False":
val = False
else:
val = valtype(v)
res.append(gr.update(value=val))
except Exception:
res.append(gr.update())
return res
if override_settings_component is not None:
def paste_settings(params):
vals = {}
for param_name, setting_name in infotext_to_setting_name_mapping:
v = params.get(param_name, None)
if v is None:
continue
if setting_name == "sd_model_checkpoint" and shared.opts.disable_weights_auto_swap:
continue
v = shared.opts.cast_value(setting_name, v)
current_value = getattr(shared.opts, setting_name, None)
if v == current_value:
continue
vals[param_name] = v
vals_pairs = [f"{k}: {v}" for k, v in vals.items()]
return gr.Dropdown.update(value=vals_pairs, choices=vals_pairs, visible=len(vals_pairs) > 0)
paste_fields = paste_fields + [(override_settings_component, paste_settings)]
button.click(
fn=paste_func,
_js=js,
_js=f"recalculate_prompts_{tabname}",
inputs=[input_comp],
outputs=[x[0] for x in paste_fields],
)


@@ -6,12 +6,11 @@ import facexlib
import gfpgan
import modules.face_restoration
from modules import shared, devices, modelloader
from modules.paths import models_path
from modules import paths, shared, devices, modelloader
model_dir = "GFPGAN"
user_path = None
model_path = os.path.join(models_path, model_dir)
model_path = os.path.join(paths.models_path, model_dir)
model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"
have_gfpgan = False
loaded_gfpgan_model = None
@@ -36,7 +35,9 @@ def gfpgann():
else:
print("Unable to load gfpgan model!")
return None
model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None)
if hasattr(facexlib.detection.retinaface, 'device'):
facexlib.detection.retinaface.device = devices.device_gfpgan
model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan)
loaded_gfpgan_model = model
return model

modules/hashes.py Normal file

@@ -0,0 +1,91 @@
import hashlib
import json
import os.path
import filelock
from modules import shared
from modules.paths import data_path
cache_filename = os.path.join(data_path, "cache.json")
cache_data = None
def dump_cache():
with filelock.FileLock(cache_filename+".lock"):
with open(cache_filename, "w", encoding="utf8") as file:
json.dump(cache_data, file, indent=4)
def cache(subsection):
global cache_data
if cache_data is None:
with filelock.FileLock(cache_filename+".lock"):
if not os.path.isfile(cache_filename):
cache_data = {}
else:
with open(cache_filename, "r", encoding="utf8") as file:
cache_data = json.load(file)
s = cache_data.get(subsection, {})
cache_data[subsection] = s
return s
def calculate_sha256(filename):
hash_sha256 = hashlib.sha256()
blksize = 1024 * 1024
with open(filename, "rb") as f:
for chunk in iter(lambda: f.read(blksize), b""):
hash_sha256.update(chunk)
return hash_sha256.hexdigest()
def sha256_from_cache(filename, title):
hashes = cache("hashes")
ondisk_mtime = os.path.getmtime(filename)
if title not in hashes:
return None
cached_sha256 = hashes[title].get("sha256", None)
cached_mtime = hashes[title].get("mtime", 0)
if ondisk_mtime > cached_mtime or cached_sha256 is None:
return None
return cached_sha256
def sha256(filename, title):
hashes = cache("hashes")
sha256_value = sha256_from_cache(filename, title)
if sha256_value is not None:
return sha256_value
if shared.cmd_opts.no_hashing:
return None
print(f"Calculating sha256 for {filename}: ", end='')
sha256_value = calculate_sha256(filename)
print(f"{sha256_value}")
hashes[title] = {
"mtime": os.path.getmtime(filename),
"sha256": sha256_value,
}
dump_cache()
return sha256_value
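# Illustrative usage sketch (the path and title are hypothetical):
#   sha256("models/Stable-diffusion/v1-5.safetensors", "checkpoint/v1-5")
# hashes the whole file once, caches {"mtime": ..., "sha256": ...} under the
# "hashes" subsection of cache.json, and returns the cached digest on later
# calls unless the file's mtime is newer than the cached mtime (or --no-hashing
# is set, in which case uncached files yield None).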


@@ -0,0 +1,805 @@
import csv
import datetime
import glob
import html
import os
import sys
import traceback
import inspect
import modules.textual_inversion.dataset
import torch
import tqdm
from einops import rearrange, repeat
from ldm.util import default
from modules import devices, processing, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint
from modules.textual_inversion import textual_inversion, logging
from modules.textual_inversion.learn_schedule import LearnRateScheduler
from torch import einsum
from torch.nn.init import normal_, xavier_normal_, xavier_uniform_, kaiming_normal_, kaiming_uniform_, zeros_
from collections import defaultdict, deque
from statistics import stdev, mean
optimizer_dict = {optim_name : cls_obj for optim_name, cls_obj in inspect.getmembers(torch.optim, inspect.isclass) if optim_name != "Optimizer"}
class HypernetworkModule(torch.nn.Module):
activation_dict = {
"linear": torch.nn.Identity,
"relu": torch.nn.ReLU,
"leakyrelu": torch.nn.LeakyReLU,
"elu": torch.nn.ELU,
"swish": torch.nn.Hardswish,
"tanh": torch.nn.Tanh,
"sigmoid": torch.nn.Sigmoid,
}
activation_dict.update({cls_name.lower(): cls_obj for cls_name, cls_obj in inspect.getmembers(torch.nn.modules.activation) if inspect.isclass(cls_obj) and cls_obj.__module__ == 'torch.nn.modules.activation'})
def __init__(self, dim, state_dict=None, layer_structure=None, activation_func=None, weight_init='Normal',
add_layer_norm=False, activate_output=False, dropout_structure=None):
super().__init__()
self.multiplier = 1.0
assert layer_structure is not None, "layer_structure must not be None"
assert layer_structure[0] == 1, "Multiplier Sequence should start with size 1!"
assert layer_structure[-1] == 1, "Multiplier Sequence should end with size 1!"
linears = []
for i in range(len(layer_structure) - 1):
# Add a fully-connected layer
linears.append(torch.nn.Linear(int(dim * layer_structure[i]), int(dim * layer_structure[i+1])))
# Add an activation func except last layer
if activation_func == "linear" or activation_func is None or (i >= len(layer_structure) - 2 and not activate_output):
pass
elif activation_func in self.activation_dict:
linears.append(self.activation_dict[activation_func]())
else:
raise RuntimeError(f'hypernetwork uses an unsupported activation function: {activation_func}')
# Add layer normalization
if add_layer_norm:
linears.append(torch.nn.LayerNorm(int(dim * layer_structure[i+1])))
# Everything should be now parsed into dropout structure, and applied here.
# Since we only have dropouts after layers, dropout structure should start with 0 and end with 0.
if dropout_structure is not None and dropout_structure[i+1] > 0:
assert 0 < dropout_structure[i+1] < 1, "Dropout probability should be 0 or float between 0 and 1!"
linears.append(torch.nn.Dropout(p=dropout_structure[i+1]))
# Code explanation: for [1, 2, 1], dropout is omitted when last_layer_dropout is False. For [1, 2, 2, 1], the structure is [0, 0.3, 0, 0] when it's False and [0, 0.3, 0.3, 0] when it's True.
self.linear = torch.nn.Sequential(*linears)
if state_dict is not None:
self.fix_old_state_dict(state_dict)
self.load_state_dict(state_dict)
else:
for layer in self.linear:
if type(layer) == torch.nn.Linear or type(layer) == torch.nn.LayerNorm:
w, b = layer.weight.data, layer.bias.data
if weight_init == "Normal" or type(layer) == torch.nn.LayerNorm:
normal_(w, mean=0.0, std=0.01)
normal_(b, mean=0.0, std=0)
elif weight_init == 'XavierUniform':
xavier_uniform_(w)
zeros_(b)
elif weight_init == 'XavierNormal':
xavier_normal_(w)
zeros_(b)
elif weight_init == 'KaimingUniform':
kaiming_uniform_(w, nonlinearity='leaky_relu' if 'leakyrelu' == activation_func else 'relu')
zeros_(b)
elif weight_init == 'KaimingNormal':
kaiming_normal_(w, nonlinearity='leaky_relu' if 'leakyrelu' == activation_func else 'relu')
zeros_(b)
else:
raise KeyError(f"Key {weight_init} is not defined as initialization!")
self.to(devices.device)
def fix_old_state_dict(self, state_dict):
changes = {
'linear1.bias': 'linear.0.bias',
'linear1.weight': 'linear.0.weight',
'linear2.bias': 'linear.1.bias',
'linear2.weight': 'linear.1.weight',
}
for fr, to in changes.items():
x = state_dict.get(fr, None)
if x is None:
continue
del state_dict[fr]
state_dict[to] = x
def forward(self, x):
return x + self.linear(x) * (self.multiplier if not self.training else 1)
def trainables(self):
layer_structure = []
for layer in self.linear:
if type(layer) == torch.nn.Linear or type(layer) == torch.nn.LayerNorm:
layer_structure += [layer.weight, layer.bias]
return layer_structure
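# Illustrative note: each HypernetworkModule is a residual MLP. Outside training,
# forward(x) returns x + linear(x) * multiplier, so with multiplier == 0.0 the
# module is an identity; a strength-style multiplier can therefore fade the
# hypernetwork's effect in and out. During training the multiplier is pinned to 1.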
# params: layer_structure is used only for its length; use_dropout toggles dropout; last_layer_dropout exists for compatibility with older hypernetworks.
def parse_dropout_structure(layer_structure, use_dropout, last_layer_dropout):
if layer_structure is None:
layer_structure = [1, 2, 1]
if not use_dropout:
return [0] * len(layer_structure)
dropout_values = [0]
dropout_values.extend([0.3] * (len(layer_structure) - 3))
if last_layer_dropout:
dropout_values.append(0.3)
else:
dropout_values.append(0)
dropout_values.append(0)
return dropout_values
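# Illustrative examples (values computed from the function above):
#   parse_dropout_structure([1, 2, 2, 1], use_dropout=True,  last_layer_dropout=True)   -> [0, 0.3, 0.3, 0]
#   parse_dropout_structure([1, 2, 2, 1], use_dropout=True,  last_layer_dropout=False)  -> [0, 0.3, 0, 0]
#   parse_dropout_structure([1, 2, 2, 1], use_dropout=False, last_layer_dropout=True)   -> [0, 0, 0, 0]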
class Hypernetwork:
filename = None
name = None
def __init__(self, name=None, enable_sizes=None, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, activate_output=False, **kwargs):
self.filename = None
self.name = name
self.layers = {}
self.step = 0
self.sd_checkpoint = None
self.sd_checkpoint_name = None
self.layer_structure = layer_structure
self.activation_func = activation_func
self.weight_init = weight_init
self.add_layer_norm = add_layer_norm
self.use_dropout = use_dropout
self.activate_output = activate_output
self.last_layer_dropout = kwargs.get('last_layer_dropout', True)
self.dropout_structure = kwargs.get('dropout_structure', None)
if self.dropout_structure is None:
self.dropout_structure = parse_dropout_structure(self.layer_structure, self.use_dropout, self.last_layer_dropout)
self.optimizer_name = None
self.optimizer_state_dict = None
self.optional_info = None
for size in enable_sizes or []:
self.layers[size] = (
HypernetworkModule(size, None, self.layer_structure, self.activation_func, self.weight_init,
self.add_layer_norm, self.activate_output, dropout_structure=self.dropout_structure),
HypernetworkModule(size, None, self.layer_structure, self.activation_func, self.weight_init,
self.add_layer_norm, self.activate_output, dropout_structure=self.dropout_structure),
)
self.eval()
def weights(self):
res = []
for k, layers in self.layers.items():
for layer in layers:
res += layer.parameters()
return res
def train(self, mode=True):
for k, layers in self.layers.items():
for layer in layers:
layer.train(mode=mode)
for param in layer.parameters():
param.requires_grad = mode
def to(self, device):
for k, layers in self.layers.items():
for layer in layers:
layer.to(device)
return self
def set_multiplier(self, multiplier):
for k, layers in self.layers.items():
for layer in layers:
layer.multiplier = multiplier
return self
def eval(self):
for k, layers in self.layers.items():
for layer in layers:
layer.eval()
for param in layer.parameters():
param.requires_grad = False
def save(self, filename):
state_dict = {}
optimizer_saved_dict = {}
for k, v in self.layers.items():
state_dict[k] = (v[0].state_dict(), v[1].state_dict())
state_dict['step'] = self.step
state_dict['name'] = self.name
state_dict['layer_structure'] = self.layer_structure
state_dict['activation_func'] = self.activation_func
state_dict['is_layer_norm'] = self.add_layer_norm
state_dict['weight_initialization'] = self.weight_init
state_dict['sd_checkpoint'] = self.sd_checkpoint
state_dict['sd_checkpoint_name'] = self.sd_checkpoint_name
state_dict['activate_output'] = self.activate_output
state_dict['use_dropout'] = self.use_dropout
state_dict['dropout_structure'] = self.dropout_structure
state_dict['last_layer_dropout'] = (self.dropout_structure[-2] != 0) if self.dropout_structure is not None else self.last_layer_dropout
state_dict['optional_info'] = self.optional_info if self.optional_info else None
if self.optimizer_name is not None:
optimizer_saved_dict['optimizer_name'] = self.optimizer_name
torch.save(state_dict, filename)
if shared.opts.save_optimizer_state and self.optimizer_state_dict:
optimizer_saved_dict['hash'] = self.shorthash()
optimizer_saved_dict['optimizer_state_dict'] = self.optimizer_state_dict
torch.save(optimizer_saved_dict, filename + '.optim')
def load(self, filename):
self.filename = filename
if self.name is None:
self.name = os.path.splitext(os.path.basename(filename))[0]
state_dict = torch.load(filename, map_location='cpu')
self.layer_structure = state_dict.get('layer_structure', [1, 2, 1])
self.optional_info = state_dict.get('optional_info', None)
self.activation_func = state_dict.get('activation_func', None)
self.weight_init = state_dict.get('weight_initialization', 'Normal')
self.add_layer_norm = state_dict.get('is_layer_norm', False)
self.dropout_structure = state_dict.get('dropout_structure', None)
self.use_dropout = True if self.dropout_structure is not None and any(self.dropout_structure) else state_dict.get('use_dropout', False)
self.activate_output = state_dict.get('activate_output', True)
self.last_layer_dropout = state_dict.get('last_layer_dropout', False)
# The dropout structure should have the same length as the layer structure; every entry should be in [0, 1), and the last entry must be 0.
if self.dropout_structure is None:
self.dropout_structure = parse_dropout_structure(self.layer_structure, self.use_dropout, self.last_layer_dropout)
if shared.opts.print_hypernet_extra:
if self.optional_info is not None:
print(f" INFO:\n {self.optional_info}\n")
print(f" Layer structure: {self.layer_structure}")
print(f" Activation function: {self.activation_func}")
print(f" Weight initialization: {self.weight_init}")
print(f" Layer norm: {self.add_layer_norm}")
print(f" Dropout usage: {self.use_dropout}" )
print(f" Activate last layer: {self.activate_output}")
print(f" Dropout structure: {self.dropout_structure}")
optimizer_saved_dict = torch.load(self.filename + '.optim', map_location='cpu') if os.path.exists(self.filename + '.optim') else {}
if self.shorthash() == optimizer_saved_dict.get('hash', None):
self.optimizer_state_dict = optimizer_saved_dict.get('optimizer_state_dict', None)
else:
self.optimizer_state_dict = None
if self.optimizer_state_dict:
self.optimizer_name = optimizer_saved_dict.get('optimizer_name', 'AdamW')
if shared.opts.print_hypernet_extra:
print("Loaded existing optimizer from checkpoint")
print(f"Optimizer name is {self.optimizer_name}")
else:
self.optimizer_name = "AdamW"
if shared.opts.print_hypernet_extra:
print("No saved optimizer exists in checkpoint")
for size, sd in state_dict.items():
if type(size) == int:
self.layers[size] = (
HypernetworkModule(size, sd[0], self.layer_structure, self.activation_func, self.weight_init,
self.add_layer_norm, self.activate_output, self.dropout_structure),
HypernetworkModule(size, sd[1], self.layer_structure, self.activation_func, self.weight_init,
self.add_layer_norm, self.activate_output, self.dropout_structure),
)
self.name = state_dict.get('name', self.name)
self.step = state_dict.get('step', 0)
self.sd_checkpoint = state_dict.get('sd_checkpoint', None)
self.sd_checkpoint_name = state_dict.get('sd_checkpoint_name', None)
self.eval()
def shorthash(self):
sha256 = hashes.sha256(self.filename, f'hypernet/{self.name}')
return sha256[0:10] if sha256 else None
def list_hypernetworks(path):
res = {}
for filename in sorted(glob.iglob(os.path.join(path, '**/*.pt'), recursive=True)):
name = os.path.splitext(os.path.basename(filename))[0]
# Prevent a hypothetical "None.pt" from being listed.
if name != "None":
res[name] = filename
return res
def load_hypernetwork(name):
path = shared.hypernetworks.get(name, None)
if path is None:
return None
hypernetwork = Hypernetwork()
try:
hypernetwork.load(path)
except Exception:
print(f"Error loading hypernetwork {path}", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return None
return hypernetwork
def load_hypernetworks(names, multipliers=None):
already_loaded = {}
for hypernetwork in shared.loaded_hypernetworks:
if hypernetwork.name in names:
already_loaded[hypernetwork.name] = hypernetwork
shared.loaded_hypernetworks.clear()
for i, name in enumerate(names):
hypernetwork = already_loaded.get(name, None)
if hypernetwork is None:
hypernetwork = load_hypernetwork(name)
if hypernetwork is None:
continue
hypernetwork.set_multiplier(multipliers[i] if multipliers else 1.0)
shared.loaded_hypernetworks.append(hypernetwork)
def find_closest_hypernetwork_name(search: str):
if not search:
return None
search = search.lower()
applicable = [name for name in shared.hypernetworks if search in name.lower()]
if not applicable:
return None
applicable = sorted(applicable, key=lambda name: len(name))
return applicable[0]
def apply_single_hypernetwork(hypernetwork, context_k, context_v, layer=None):
hypernetwork_layers = (hypernetwork.layers if hypernetwork is not None else {}).get(context_k.shape[2], None)
if hypernetwork_layers is None:
return context_k, context_v
if layer is not None:
layer.hyper_k = hypernetwork_layers[0]
layer.hyper_v = hypernetwork_layers[1]
context_k = hypernetwork_layers[0](context_k)
context_v = hypernetwork_layers[1](context_v)
return context_k, context_v
def apply_hypernetworks(hypernetworks, context, layer=None):
context_k = context
context_v = context
for hypernetwork in hypernetworks:
context_k, context_v = apply_single_hypernetwork(hypernetwork, context_k, context_v, layer)
return context_k, context_v
def attention_CrossAttention_forward(self, x, context=None, mask=None):
h = self.heads
q = self.to_q(x)
context = default(context, x)
context_k, context_v = apply_hypernetworks(shared.loaded_hypernetworks, context, self)
k = self.to_k(context_k)
v = self.to_v(context_v)
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
sim = einsum('b i d, b j d -> b i j', q, k) * self.scale
if mask is not None:
mask = rearrange(mask, 'b ... -> b (...)')
max_neg_value = -torch.finfo(sim.dtype).max
mask = repeat(mask, 'b j -> (b h) () j', h=h)
sim.masked_fill_(~mask, max_neg_value)
# attention, what we cannot get enough of
attn = sim.softmax(dim=-1)
out = einsum('b i j, b j d -> b i d', attn, v)
out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
return self.to_out(out)
def stack_conds(conds):
if len(conds) == 1:
return torch.stack(conds)
# same as in reconstruct_multicond_batch
token_count = max([x.shape[0] for x in conds])
for i in range(len(conds)):
if conds[i].shape[0] != token_count:
last_vector = conds[i][-1:]
last_vector_repeated = last_vector.repeat([token_count - conds[i].shape[0], 1])
conds[i] = torch.vstack([conds[i], last_vector_repeated])
return torch.stack(conds)
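# Illustrative example (shapes are the usual 77-token CLIP chunks, assumed here):
# given conds of shapes (77, 768) and (154, 768), stack_conds repeats the shorter
# tensor's last row 77 times so both reach 154 rows, then returns one
# (2, 154, 768) batch tensor.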
def statistics(data):
if len(data) < 2:
std = 0
else:
std = stdev(data)
total_information = f"loss:{mean(data):.3f}" + u"\u00B1" + f"({std/ (len(data) ** 0.5):.3f})"
recent_data = data[-32:]
if len(recent_data) < 2:
std = 0
else:
std = stdev(recent_data)
recent_information = f"recent 32 loss:{mean(recent_data):.3f}" + u"\u00B1" + f"({std / (len(recent_data) ** 0.5):.3f})"
return total_information, recent_information
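# Illustrative example (hypothetical data): statistics([0.10, 0.20, 0.30]) yields
# ("loss:0.200±(0.058)", "recent 32 loss:0.200±(0.058)"): the mean is 0.200 and
# stdev / sqrt(3) == 0.1 / 1.732 ≈ 0.058.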
def report_statistics(loss_info: dict):
keys = sorted(loss_info.keys(), key=lambda x: sum(loss_info[x]) / len(loss_info[x]))
for key in keys:
try:
print("Loss statistics for file " + key)
info, recent = statistics(list(loss_info[key]))
print(info)
print(recent)
except Exception as e:
print(e)
def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, dropout_structure=None):
# Remove illegal characters from name.
name = "".join( x for x in name if (x.isalnum() or x in "._- "))
assert name, "Name cannot be empty!"
fn = os.path.join(shared.cmd_opts.hypernetwork_dir, f"{name}.pt")
if not overwrite_old:
assert not os.path.exists(fn), f"file {fn} already exists"
if type(layer_structure) == str:
layer_structure = [float(x.strip()) for x in layer_structure.split(",")]
if use_dropout and dropout_structure and type(dropout_structure) == str:
dropout_structure = [float(x.strip()) for x in dropout_structure.split(",")]
else:
dropout_structure = [0] * len(layer_structure)
hypernet = modules.hypernetworks.hypernetwork.Hypernetwork(
name=name,
enable_sizes=[int(x) for x in enable_sizes],
layer_structure=layer_structure,
activation_func=activation_func,
weight_init=weight_init,
add_layer_norm=add_layer_norm,
use_dropout=use_dropout,
dropout_structure=dropout_structure
)
hypernet.save(fn)
shared.reload_hypernetworks()
def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradient_step, data_root, log_directory, training_width, training_height, varsize, steps, clip_grad_mode, clip_grad_value, shuffle_tags, tag_drop_out, latent_sampling_method, create_image_every, save_hypernetwork_every, template_filename, preview_from_txt2img, preview_prompt, preview_negative_prompt, preview_steps, preview_sampler_index, preview_cfg_scale, preview_seed, preview_width, preview_height):
# images allows training previews to have infotext. Importing it at the top causes a circular import problem.
from modules import images
save_hypernetwork_every = save_hypernetwork_every or 0
create_image_every = create_image_every or 0
template_file = textual_inversion.textual_inversion_templates.get(template_filename, None)
textual_inversion.validate_train_inputs(hypernetwork_name, learn_rate, batch_size, gradient_step, data_root, template_file, template_filename, steps, save_hypernetwork_every, create_image_every, log_directory, name="hypernetwork")
template_file = template_file.path
path = shared.hypernetworks.get(hypernetwork_name, None)
hypernetwork = Hypernetwork()
hypernetwork.load(path)
shared.loaded_hypernetworks = [hypernetwork]
shared.state.job = "train-hypernetwork"
shared.state.textinfo = "Initializing hypernetwork training..."
shared.state.job_count = steps
hypernetwork_name = hypernetwork_name.rsplit('(', 1)[0]
filename = os.path.join(shared.cmd_opts.hypernetwork_dir, f'{hypernetwork_name}.pt')
log_directory = os.path.join(log_directory, datetime.datetime.now().strftime("%Y-%m-%d"), hypernetwork_name)
unload = shared.opts.unload_models_when_training
if save_hypernetwork_every > 0:
hypernetwork_dir = os.path.join(log_directory, "hypernetworks")
os.makedirs(hypernetwork_dir, exist_ok=True)
else:
hypernetwork_dir = None
if create_image_every > 0:
images_dir = os.path.join(log_directory, "images")
os.makedirs(images_dir, exist_ok=True)
else:
images_dir = None
checkpoint = sd_models.select_checkpoint()
initial_step = hypernetwork.step or 0
if initial_step >= steps:
shared.state.textinfo = "Model has already been trained beyond specified max steps"
return hypernetwork, filename
scheduler = LearnRateScheduler(learn_rate, steps, initial_step)
clip_grad = torch.nn.utils.clip_grad_value_ if clip_grad_mode == "value" else torch.nn.utils.clip_grad_norm_ if clip_grad_mode == "norm" else None
if clip_grad:
clip_grad_sched = LearnRateScheduler(clip_grad_value, steps, initial_step, verbose=False)
if shared.opts.training_enable_tensorboard:
tensorboard_writer = textual_inversion.tensorboard_setup(log_directory)
# dataset loading may take a while, so input validations and early returns should be done before this
shared.state.textinfo = f"Preparing dataset from {html.escape(data_root)}..."
pin_memory = shared.opts.pin_memory
ds = modules.textual_inversion.dataset.PersonalizedBase(data_root=data_root, width=training_width, height=training_height, repeats=shared.opts.training_image_repeats_per_epoch, placeholder_token=hypernetwork_name, model=shared.sd_model, cond_model=shared.sd_model.cond_stage_model, device=devices.device, template_file=template_file, include_cond=True, batch_size=batch_size, gradient_step=gradient_step, shuffle_tags=shuffle_tags, tag_drop_out=tag_drop_out, latent_sampling_method=latent_sampling_method, varsize=varsize)
if shared.opts.save_training_settings_to_txt:
saved_params = dict(
model_name=checkpoint.model_name, model_hash=checkpoint.shorthash, num_of_dataset_images=len(ds),
**{field: getattr(hypernetwork, field) for field in ['layer_structure', 'activation_func', 'weight_init', 'add_layer_norm', 'use_dropout', ]}
)
logging.save_settings_to_file(log_directory, {**saved_params, **locals()})
latent_sampling_method = ds.latent_sampling_method
dl = modules.textual_inversion.dataset.PersonalizedDataLoader(ds, latent_sampling_method=latent_sampling_method, batch_size=ds.batch_size, pin_memory=pin_memory)
old_parallel_processing_allowed = shared.parallel_processing_allowed
if unload:
shared.parallel_processing_allowed = False
shared.sd_model.cond_stage_model.to(devices.cpu)
shared.sd_model.first_stage_model.to(devices.cpu)
weights = hypernetwork.weights()
hypernetwork.train()
# Use the optimizer type saved with the hypernetwork; this could also be exposed as a UI option.
if hypernetwork.optimizer_name in optimizer_dict:
optimizer = optimizer_dict[hypernetwork.optimizer_name](params=weights, lr=scheduler.learn_rate)
optimizer_name = hypernetwork.optimizer_name
else:
print(f"Optimizer type {hypernetwork.optimizer_name} is not defined!")
optimizer = torch.optim.AdamW(params=weights, lr=scheduler.learn_rate)
optimizer_name = 'AdamW'
if hypernetwork.optimizer_state_dict:  # this check must be changed if the optimizer type can differ from the saved one
try:
optimizer.load_state_dict(hypernetwork.optimizer_state_dict)
except RuntimeError as e:
print("Cannot resume from saved optimizer!")
print(e)
scaler = torch.cuda.amp.GradScaler()
batch_size = ds.batch_size
gradient_step = ds.gradient_step
# images processed per optimizer step = batch_size * gradient_step
steps_per_epoch = len(ds) // batch_size // gradient_step
max_steps_per_epoch = len(ds) // batch_size - (len(ds) // batch_size) % gradient_step
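# Illustrative arithmetic (hypothetical sizes): with len(ds) == 100, batch_size == 2
# and gradient_step == 4, one optimizer step consumes 8 images,
# steps_per_epoch == 100 // 2 // 4 == 12, and max_steps_per_epoch == 50 - 50 % 4 == 48
# dataloader batches, so the last 2 batches that cannot fill an accumulation window are dropped.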
loss_step = 0
_loss_step = 0  # internal
# size = len(ds.indexes)
# loss_dict = defaultdict(lambda : deque(maxlen = 1024))
loss_logging = deque(maxlen=len(ds) * 3)  # this should be a configurable parameter; it keeps 3 epochs' worth of losses (3 * dataset size)
# losses = torch.zeros((size,))
# previous_mean_losses = [0]
# previous_mean_loss = 0
# print("Mean loss of {} elements".format(size))
steps_without_grad = 0
last_saved_file = "<none>"
last_saved_image = "<none>"
forced_filename = "<none>"
pbar = tqdm.tqdm(total=steps - initial_step)
try:
sd_hijack_checkpoint.add()
for i in range((steps-initial_step) * gradient_step):
if scheduler.finished:
break
if shared.state.interrupted:
break
for j, batch in enumerate(dl):
# works as a drop_last=True for gradient accumulation
if j == max_steps_per_epoch:
break
scheduler.apply(optimizer, hypernetwork.step)
if scheduler.finished:
break
if shared.state.interrupted:
break
if clip_grad:
clip_grad_sched.step(hypernetwork.step)
with devices.autocast():
x = batch.latent_sample.to(devices.device, non_blocking=pin_memory)
if tag_drop_out != 0 or shuffle_tags:
shared.sd_model.cond_stage_model.to(devices.device)
c = shared.sd_model.cond_stage_model(batch.cond_text).to(devices.device, non_blocking=pin_memory)
shared.sd_model.cond_stage_model.to(devices.cpu)
else:
c = stack_conds(batch.cond).to(devices.device, non_blocking=pin_memory)
loss = shared.sd_model(x, c)[0] / gradient_step
del x
del c
_loss_step += loss.item()
scaler.scale(loss).backward()
# skip the optimizer step until a full gradient accumulation cycle is complete
if (j + 1) % gradient_step != 0:
continue
loss_logging.append(_loss_step)
if clip_grad:
clip_grad(weights, clip_grad_sched.learn_rate)
scaler.step(optimizer)
scaler.update()
hypernetwork.step += 1
pbar.update()
optimizer.zero_grad(set_to_none=True)
loss_step = _loss_step
_loss_step = 0
steps_done = hypernetwork.step + 1
epoch_num = hypernetwork.step // steps_per_epoch
epoch_step = hypernetwork.step % steps_per_epoch
description = f"Training hypernetwork [Epoch {epoch_num}: {epoch_step+1}/{steps_per_epoch}]loss: {loss_step:.7f}"
pbar.set_description(description)
if hypernetwork_dir is not None and steps_done % save_hypernetwork_every == 0:
# Before saving, change name to match current checkpoint.
hypernetwork_name_every = f'{hypernetwork_name}-{steps_done}'
last_saved_file = os.path.join(hypernetwork_dir, f'{hypernetwork_name_every}.pt')
hypernetwork.optimizer_name = optimizer_name
if shared.opts.save_optimizer_state:
hypernetwork.optimizer_state_dict = optimizer.state_dict()
save_hypernetwork(hypernetwork, checkpoint, hypernetwork_name, last_saved_file)
hypernetwork.optimizer_state_dict = None # dereference it after saving, to save memory.
if shared.opts.training_enable_tensorboard:
epoch_num = hypernetwork.step // len(ds)
epoch_step = hypernetwork.step - (epoch_num * len(ds)) + 1
mean_loss = sum(loss_logging) / len(loss_logging)
textual_inversion.tensorboard_add(tensorboard_writer, loss=mean_loss, global_step=hypernetwork.step, step=epoch_step, learn_rate=scheduler.learn_rate, epoch_num=epoch_num)
textual_inversion.write_loss(log_directory, "hypernetwork_loss.csv", hypernetwork.step, steps_per_epoch, {
"loss": f"{loss_step:.7f}",
"learn_rate": scheduler.learn_rate
})
if images_dir is not None and steps_done % create_image_every == 0:
forced_filename = f'{hypernetwork_name}-{steps_done}'
last_saved_image = os.path.join(images_dir, forced_filename)
hypernetwork.eval()
rng_state = torch.get_rng_state()
cuda_rng_state = None
if torch.cuda.is_available():
cuda_rng_state = torch.cuda.get_rng_state_all()
shared.sd_model.cond_stage_model.to(devices.device)
shared.sd_model.first_stage_model.to(devices.device)
p = processing.StableDiffusionProcessingTxt2Img(
sd_model=shared.sd_model,
do_not_save_grid=True,
do_not_save_samples=True,
)
p.disable_extra_networks = True
if preview_from_txt2img:
p.prompt = preview_prompt
p.negative_prompt = preview_negative_prompt
p.steps = preview_steps
p.sampler_name = sd_samplers.samplers[preview_sampler_index].name
p.cfg_scale = preview_cfg_scale
p.seed = preview_seed
p.width = preview_width
p.height = preview_height
else:
p.prompt = batch.cond_text[0]
p.steps = 20
p.width = training_width
p.height = training_height
preview_text = p.prompt
processed = processing.process_images(p)
image = processed.images[0] if len(processed.images) > 0 else None
if unload:
shared.sd_model.cond_stage_model.to(devices.cpu)
shared.sd_model.first_stage_model.to(devices.cpu)
torch.set_rng_state(rng_state)
if torch.cuda.is_available():
torch.cuda.set_rng_state_all(cuda_rng_state)
hypernetwork.train()
if image is not None:
shared.state.assign_current_image(image)
if shared.opts.training_enable_tensorboard and shared.opts.training_tensorboard_save_images:
textual_inversion.tensorboard_add_image(tensorboard_writer,
f"Validation at epoch {epoch_num}", image,
hypernetwork.step)
last_saved_image, last_text_info = images.save_image(image, images_dir, "", p.seed, p.prompt, shared.opts.samples_format, processed.infotexts[0], p=p, forced_filename=forced_filename, save_to_dirs=False)
last_saved_image += f", prompt: {preview_text}"
shared.state.job_no = hypernetwork.step
shared.state.textinfo = f"""
<p>
Loss: {loss_step:.7f}<br/>
Step: {steps_done}<br/>
Last prompt: {html.escape(batch.cond_text[0])}<br/>
Last saved hypernetwork: {html.escape(last_saved_file)}<br/>
Last saved image: {html.escape(last_saved_image)}<br/>
</p>
"""
except Exception:
print(traceback.format_exc(), file=sys.stderr)
finally:
pbar.leave = False
pbar.close()
hypernetwork.eval()
#report_statistics(loss_dict)
sd_hijack_checkpoint.remove()
filename = os.path.join(shared.cmd_opts.hypernetwork_dir, f'{hypernetwork_name}.pt')
hypernetwork.optimizer_name = optimizer_name
if shared.opts.save_optimizer_state:
hypernetwork.optimizer_state_dict = optimizer.state_dict()
save_hypernetwork(hypernetwork, checkpoint, hypernetwork_name, filename)
del optimizer
hypernetwork.optimizer_state_dict = None # dereference it after saving, to save memory.
shared.sd_model.cond_stage_model.to(devices.device)
shared.sd_model.first_stage_model.to(devices.device)
shared.parallel_processing_allowed = old_parallel_processing_allowed
return hypernetwork, filename
def save_hypernetwork(hypernetwork, checkpoint, hypernetwork_name, filename):
old_hypernetwork_name = hypernetwork.name
old_sd_checkpoint = hypernetwork.sd_checkpoint if hasattr(hypernetwork, "sd_checkpoint") else None
old_sd_checkpoint_name = hypernetwork.sd_checkpoint_name if hasattr(hypernetwork, "sd_checkpoint_name") else None
try:
hypernetwork.sd_checkpoint = checkpoint.shorthash
hypernetwork.sd_checkpoint_name = checkpoint.model_name
hypernetwork.name = hypernetwork_name
hypernetwork.save(filename)
except:
hypernetwork.sd_checkpoint = old_sd_checkpoint
hypernetwork.sd_checkpoint_name = old_sd_checkpoint_name
hypernetwork.name = old_hypernetwork_name
raise


@@ -0,0 +1,40 @@
import html
import os
import re
import gradio as gr
import modules.hypernetworks.hypernetwork
from modules import devices, sd_hijack, shared
not_available = ["hardswish", "multiheadattention"]
keys = list(x for x in modules.hypernetworks.hypernetwork.HypernetworkModule.activation_dict.keys() if x not in not_available)
def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None, activation_func=None, weight_init=None, add_layer_norm=False, use_dropout=False, dropout_structure=None):
filename = modules.hypernetworks.hypernetwork.create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure, activation_func, weight_init, add_layer_norm, use_dropout, dropout_structure)
return gr.Dropdown.update(choices=sorted([x for x in shared.hypernetworks.keys()])), f"Created: {filename}", ""
def train_hypernetwork(*args):
shared.loaded_hypernetworks = []
assert not shared.cmd_opts.lowvram, 'Training models with lowvram is not possible'
try:
sd_hijack.undo_optimizations()
hypernetwork, filename = modules.hypernetworks.hypernetwork.train_hypernetwork(*args)
res = f"""
Training {'interrupted' if shared.state.interrupted else 'finished'} at {hypernetwork.step} steps.
Hypernetwork saved to {html.escape(filename)}
"""
return res, ""
except Exception:
raise
finally:
shared.sd_model.cond_stage_model.to(devices.device)
shared.sd_model.first_stage_model.to(devices.device)
sd_hijack.apply_optimizations()


@@ -1,4 +1,9 @@
import datetime
import sys
import traceback
import pytz
import io
import math
import os
from collections import namedtuple
@@ -10,8 +15,10 @@ import piexif.helper
from PIL import Image, ImageFont, ImageDraw, PngImagePlugin
from fonts.ttf import Roboto
import string
import json
import hashlib
from modules import sd_samplers, shared
from modules import sd_samplers, shared, script_callbacks
from modules.shared import opts, cmd_opts
LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
@@ -23,17 +30,26 @@ def image_grid(imgs, batch_size=1, rows=None):
rows = opts.n_rows
elif opts.n_rows == 0:
rows = batch_size
elif opts.grid_prevent_empty_spots:
rows = math.floor(math.sqrt(len(imgs)))
while len(imgs) % rows != 0:
rows -= 1
else:
rows = math.sqrt(len(imgs))
rows = round(rows)
if rows > len(imgs):
rows = len(imgs)
cols = math.ceil(len(imgs) / rows)
w, h = imgs[0].size
grid = Image.new('RGB', size=(cols * w, rows * h), color='black')
for i, img in enumerate(imgs):
grid.paste(img, box=(i % cols * w, i // cols * h))
params = script_callbacks.ImageGridLoopParams(imgs, cols, rows)
script_callbacks.image_grid_callback(params)
w, h = imgs[0].size
grid = Image.new('RGB', size=(params.cols * w, params.rows * h), color='black')
for i, img in enumerate(params.imgs):
grid.paste(img, box=(i % params.cols * w, i // params.cols * h))
return grid
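# Illustrative example (hypothetical count): with 10 images and
# grid_prevent_empty_spots enabled, rows starts at floor(sqrt(10)) == 3 and is
# decremented to 2 (10 % 2 == 0), giving a full 2x5 grid; with the option off,
# rows == round(sqrt(10)) == 3 and cols == ceil(10 / 3) == 4, leaving two empty cells.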
@@ -115,7 +131,7 @@ class GridAnnotation:
self.size = None
def draw_grid_annotations(im, width, height, hor_texts, ver_texts):
def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0):
def wrap(drawing, text, font, line_length):
lines = ['']
for word in text.split():
@@ -126,8 +142,19 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts):
lines.append(word)
return lines
def draw_texts(drawing, draw_x, draw_y, lines):
def get_font(fontsize):
try:
return ImageFont.truetype(opts.font or Roboto, fontsize)
except Exception:
return ImageFont.truetype(Roboto, fontsize)
def draw_texts(drawing, draw_x, draw_y, lines, initial_fnt, initial_fontsize):
for i, line in enumerate(lines):
fnt = initial_fnt
fontsize = initial_fontsize
while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0:
fontsize -= 1
fnt = get_font(fontsize)
drawing.multiline_text((draw_x, draw_y + line.size[1] / 2), line.text, font=fnt, fill=color_active if line.is_active else color_inactive, anchor="mm", align="center")
if not line.is_active:
@@ -138,10 +165,7 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts):
fontsize = (width + height) // 25
line_spacing = fontsize // 2
try:
fnt = ImageFont.truetype(opts.font or Roboto, fontsize)
except Exception:
fnt = ImageFont.truetype(Roboto, fontsize)
fnt = get_font(fontsize)
color_active = (0, 0, 0)
color_inactive = (153, 153, 153)
@@ -168,34 +192,38 @@ def draw_grid_annotations(im, width, height, hor_texts, ver_texts):
for line in texts:
bbox = calc_d.multiline_textbbox((0, 0), line.text, font=fnt)
line.size = (bbox[2] - bbox[0], bbox[3] - bbox[1])
line.allowed_width = allowed_width
hor_text_heights = [sum([line.size[1] + line_spacing for line in lines]) - line_spacing for lines in hor_texts]
ver_text_heights = [sum([line.size[1] + line_spacing for line in lines]) - line_spacing * len(lines) for lines in
ver_texts]
ver_text_heights = [sum([line.size[1] + line_spacing for line in lines]) - line_spacing * len(lines) for lines in ver_texts]
pad_top = max(hor_text_heights) + line_spacing * 2
pad_top = 0 if sum(hor_text_heights) == 0 else max(hor_text_heights) + line_spacing * 2
result = Image.new("RGB", (im.width + pad_left, im.height + pad_top), "white")
result.paste(im, (pad_left, pad_top))
result = Image.new("RGB", (im.width + pad_left + margin * (cols-1), im.height + pad_top + margin * (rows-1)), "white")
for row in range(rows):
for col in range(cols):
cell = im.crop((width * col, height * row, width * (col+1), height * (row+1)))
result.paste(cell, (pad_left + (width + margin) * col, pad_top + (height + margin) * row))
d = ImageDraw.Draw(result)
for col in range(cols):
x = pad_left + width * col + width / 2
x = pad_left + (width + margin) * col + width / 2
y = pad_top / 2 - hor_text_heights[col] / 2
draw_texts(d, x, y, hor_texts[col])
draw_texts(d, x, y, hor_texts[col], fnt, fontsize)
for row in range(rows):
x = pad_left / 2
y = pad_top + height * row + height / 2 - ver_text_heights[row] / 2
y = pad_top + (height + margin) * row + height / 2 - ver_text_heights[row] / 2
draw_texts(d, x, y, ver_texts[row])
draw_texts(d, x, y, ver_texts[row], fnt, fontsize)
return result
def draw_prompt_matrix(im, width, height, all_prompts):
def draw_prompt_matrix(im, width, height, all_prompts, margin=0):
prompts = all_prompts[1:]
boundary = math.ceil(len(prompts) / 2)
@@ -205,19 +233,35 @@ def draw_prompt_matrix(im, width, height, all_prompts):
hor_texts = [[GridAnnotation(x, is_active=pos & (1 << i) != 0) for i, x in enumerate(prompts_horiz)] for pos in range(1 << len(prompts_horiz))]
ver_texts = [[GridAnnotation(x, is_active=pos & (1 << i) != 0) for i, x in enumerate(prompts_vert)] for pos in range(1 << len(prompts_vert))]
return draw_grid_annotations(im, width, height, hor_texts, ver_texts)
return draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin)
def resize_image(resize_mode, im, width, height):
def resize_image(resize_mode, im, width, height, upscaler_name=None):
"""
Resizes an image with the specified resize_mode, width, and height.
Args:
resize_mode: The mode to use when resizing the image.
0: Resize the image to the specified width and height.
1: Resize the image to fill the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, cropping the excess.
2: Resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, filling the empty space with data from the image.
im: The image to resize.
width: The width to resize the image to.
height: The height to resize the image to.
upscaler_name: The name of the upscaler to use. If not provided, defaults to opts.upscaler_for_img2img.
"""
upscaler_name = upscaler_name or opts.upscaler_for_img2img
def resize(im, w, h):
if opts.upscaler_for_img2img is None or opts.upscaler_for_img2img == "None" or im.mode == 'L':
if upscaler_name is None or upscaler_name == "None" or im.mode == 'L':
return im.resize((w, h), resample=LANCZOS)
scale = max(w / im.width, h / im.height)
if scale > 1.0:
upscalers = [x for x in shared.sd_upscalers if x.name == opts.upscaler_for_img2img]
assert len(upscalers) > 0, f"could not find upscaler named {opts.upscaler_for_img2img}"
upscalers = [x for x in shared.sd_upscalers if x.name == upscaler_name]
assert len(upscalers) > 0, f"could not find upscaler named {upscaler_name}"
upscaler = upscalers[0]
im = upscaler.scaler.upscale(im, scale, upscaler.data_path)
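For resize mode 1 ("fill and crop"), the same max-scale idea determines how much of the source overflows the target box. A minimal standalone sketch using plain PIL and no upscaler (the function name is ours, not the module's):

from PIL import Image

def fill_and_crop(im, w, h):
    # scale so the image covers the target box, then center-crop the overflow
    scale = max(w / im.width, h / im.height)
    resized = im.resize((round(im.width * scale), round(im.height * scale)), Image.LANCZOS)
    left = (resized.width - w) // 2
    top = (resized.height - h) // 2
    return resized.crop((left, top, left + w, top + h))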
@@ -268,10 +312,15 @@ invalid_filename_chars = '<>:"/\\|?*\n'
invalid_filename_prefix = ' '
invalid_filename_postfix = ' .'
re_nonletters = re.compile(r'[\s' + string.punctuation + ']+')
re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")
re_pattern_arg = re.compile(r"(.*)<([^>]*)>$")
max_filename_part_length = 128
def sanitize_filename_part(text, replace_spaces=True):
if text is None:
return None
if replace_spaces:
text = text.replace(' ', '_')
@@ -281,54 +330,106 @@ def sanitize_filename_part(text, replace_spaces=True):
return text
def apply_filename_pattern(x, p, seed, prompt):
max_prompt_words = opts.directories_max_prompt_words
class FilenameGenerator:
replacements = {
'seed': lambda self: self.seed if self.seed is not None else '',
'steps': lambda self: self.p and self.p.steps,
'cfg': lambda self: self.p and self.p.cfg_scale,
'width': lambda self: self.image.width,
'height': lambda self: self.image.height,
'styles': lambda self: self.p and sanitize_filename_part(", ".join([style for style in self.p.styles if not style == "None"]) or "None", replace_spaces=False),
'sampler': lambda self: self.p and sanitize_filename_part(self.p.sampler_name, replace_spaces=False),
'model_hash': lambda self: getattr(self.p, "sd_model_hash", shared.sd_model.sd_model_hash),
'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.model_name, replace_spaces=False),
'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'),
'datetime': lambda self, *args: self.datetime(*args), # accepts formats: [datetime], [datetime<Format>], [datetime<Format><Time Zone>]
'job_timestamp': lambda self: getattr(self.p, "job_timestamp", shared.state.job_timestamp),
'prompt_hash': lambda self: hashlib.sha256(self.prompt.encode()).hexdigest()[0:8],
'prompt': lambda self: sanitize_filename_part(self.prompt),
'prompt_no_styles': lambda self: self.prompt_no_style(),
'prompt_spaces': lambda self: sanitize_filename_part(self.prompt, replace_spaces=False),
'prompt_words': lambda self: self.prompt_words(),
}
default_time_format = '%Y%m%d%H%M%S'
if seed is not None:
x = x.replace("[seed]", str(seed))
def __init__(self, p, seed, prompt, image):
self.p = p
self.seed = seed
self.prompt = prompt
self.image = image
if p is not None:
x = x.replace("[steps]", str(p.steps))
x = x.replace("[cfg]", str(p.cfg_scale))
x = x.replace("[width]", str(p.width))
x = x.replace("[height]", str(p.height))
def prompt_no_style(self):
if self.p is None or self.prompt is None:
return None
# currently disabled if using the save button, will work otherwise
# if enabled, it will cause a bug because styles is not included in the save_files data dictionary
if hasattr(p, "styles"):
x = x.replace("[styles]", sanitize_filename_part(", ".join([x for x in p.styles if not x == "None"]) or "None", replace_spaces=False))
x = x.replace("[sampler]", sanitize_filename_part(sd_samplers.samplers[p.sampler_index].name, replace_spaces=False))
x = x.replace("[model_hash]", shared.sd_model.sd_model_hash)
x = x.replace("[date]", datetime.date.today().isoformat())
x = x.replace("[datetime]", datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
x = x.replace("[job_timestamp]", shared.state.job_timestamp)
# Apply [prompt] last, because it may contain any replacement word.
if prompt is not None:
x = x.replace("[prompt]", sanitize_filename_part(prompt))
if "[prompt_no_styles]" in x:
prompt_no_style = prompt
for style in shared.prompt_styles.get_style_prompts(p.styles):
prompt_no_style = self.prompt
for style in shared.prompt_styles.get_style_prompts(self.p.styles):
if len(style) > 0:
style_parts = [y for y in style.split("{prompt}")]
for part in style_parts:
for part in style.split("{prompt}"):
prompt_no_style = prompt_no_style.replace(part, "").replace(", ,", ",").strip().strip(',')
prompt_no_style = prompt_no_style.replace(style, "").strip().strip(',').strip()
x = x.replace("[prompt_no_styles]", sanitize_filename_part(prompt_no_style, replace_spaces=False))
x = x.replace("[prompt_spaces]", sanitize_filename_part(prompt, replace_spaces=False))
if "[prompt_words]" in x:
words = [x for x in re_nonletters.split(prompt or "") if len(x) > 0]
prompt_no_style = prompt_no_style.replace(style, "").strip().strip(',').strip()
return sanitize_filename_part(prompt_no_style, replace_spaces=False)
def prompt_words(self):
words = [x for x in re_nonletters.split(self.prompt or "") if len(x) > 0]
if len(words) == 0:
words = ["empty"]
x = x.replace("[prompt_words]", sanitize_filename_part(" ".join(words[0:max_prompt_words]), replace_spaces=False))
return sanitize_filename_part(" ".join(words[0:opts.directories_max_prompt_words]), replace_spaces=False)
if cmd_opts.hide_ui_dir_config:
x = re.sub(r'^[\\/]+|\.{2,}[\\/]+|[\\/]+\.{2,}', '', x)
def datetime(self, *args):
time_datetime = datetime.datetime.now()
return x
time_format = args[0] if len(args) > 0 and args[0] != "" else self.default_time_format
try:
time_zone = pytz.timezone(args[1]) if len(args) > 1 else None
except pytz.exceptions.UnknownTimeZoneError as _:
time_zone = None
time_zone_time = time_datetime.astimezone(time_zone)
try:
formatted_time = time_zone_time.strftime(time_format)
except (ValueError, TypeError) as _:
formatted_time = time_zone_time.strftime(self.default_time_format)
return sanitize_filename_part(formatted_time, replace_spaces=False)
def apply(self, x):
res = ''
for m in re_pattern.finditer(x):
text, pattern = m.groups()
res += text
if pattern is None:
continue
pattern_args = []
while True:
m = re_pattern_arg.match(pattern)
if m is None:
break
pattern, arg = m.groups()
pattern_args.insert(0, arg)
fun = self.replacements.get(pattern.lower())
if fun is not None:
try:
replacement = fun(self, *pattern_args)
except Exception:
replacement = None
print(f"Error adding [{pattern}] to filename", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
if replacement is not None:
res += str(replacement)
continue
res += f'[{pattern}]'
return res
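apply() tokenizes the template with the two regexes defined earlier: re_pattern splits literal text from [pattern] tokens, and re_pattern_arg peels <arg> suffixes off a token from the right. A standalone trace of that parsing:

import re

re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")
re_pattern_arg = re.compile(r"(.*)<([^>]*)>$")

for m in re_pattern.finditer("x/[seed]-[datetime<%Y%m%d>]"):
    text, pattern = m.groups()
    args = []
    while pattern:
        am = re_pattern_arg.match(pattern)
        if am is None:
            break
        pattern, arg = am.groups()
        args.insert(0, arg)
    print(repr(text), pattern, args)
# 'x/' seed []
# '-' datetime ['%Y%m%d']
# '' None []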
def get_next_sequence_number(path, basename):
@@ -353,65 +454,126 @@ def get_next_sequence_number(path, basename):
return result + 1
def save_image(image, path, basename, seed=None, prompt=None, extension='png', info=None, short_filename=False, no_prompt=False, grid=False, pnginfo_section_name='parameters', p=None, existing_info=None, forced_filename=None, suffix=""):
if short_filename or prompt is None or seed is None:
def save_image(image, path, basename, seed=None, prompt=None, extension='png', info=None, short_filename=False, no_prompt=False, grid=False, pnginfo_section_name='parameters', p=None, existing_info=None, forced_filename=None, suffix="", save_to_dirs=None):
"""Save an image.
Args:
image (`PIL.Image`):
The image to be saved.
path (`str`):
The directory to save the image in. Note: the `save_to_dirs` option will cause the image to be saved in a subdirectory.
basename (`str`):
The base filename which will be applied to `filename pattern`.
seed, prompt, short_filename,
extension (`str`):
Image file extension, default is `png`.
pnginfo_section_name (`str`):
Specifies the name of the section in which `info` will be saved.
info (`str` or `PngImagePlugin.iTXt`):
PNG info chunks.
existing_info (`dict`):
Additional PNG info. `existing_info == {pnginfo_section_name: info, ...}`
no_prompt:
If true, `opts.save_to_dirs` is ignored for this image (see the `save_to_dirs` computation below).
p (`StableDiffusionProcessing`)
forced_filename (`str`):
If specified, `basename` and filename pattern will be ignored.
save_to_dirs (bool):
If true, the image will be saved into a subdirectory of `path`.
Returns: (fullfn, txt_fullfn)
fullfn (`str`):
The full path of the saved image.
txt_fullfn (`str` or None):
If a text file is saved for this image, this will be its full path. Otherwise None.
"""
namegen = FilenameGenerator(p, seed, prompt, image)
if save_to_dirs is None:
save_to_dirs = (grid and opts.grid_save_to_dirs) or (not grid and opts.save_to_dirs and not no_prompt)
if save_to_dirs:
dirname = namegen.apply(opts.directories_filename_pattern or "[prompt_words]").lstrip(' ').rstrip('\\ /')
path = os.path.join(path, dirname)
os.makedirs(path, exist_ok=True)
if forced_filename is None:
if short_filename or seed is None:
file_decoration = ""
elif opts.save_to_dirs:
file_decoration = opts.samples_filename_pattern or "[seed]"
else:
file_decoration = opts.samples_filename_pattern or "[seed]-[prompt_spaces]"
if file_decoration != "":
file_decoration = "-" + file_decoration.lower()
add_number = opts.save_images_add_number or file_decoration == ''
file_decoration = apply_filename_pattern(file_decoration, p, seed, prompt) + suffix
if file_decoration != "" and add_number:
file_decoration = "-" + file_decoration
if extension == 'png' and opts.enable_pnginfo and info is not None:
pnginfo = PngImagePlugin.PngInfo()
file_decoration = namegen.apply(file_decoration) + suffix
if existing_info is not None:
for k, v in existing_info.items():
pnginfo.add_text(k, str(v))
pnginfo.add_text(pnginfo_section_name, info)
else:
pnginfo = None
save_to_dirs = (grid and opts.grid_save_to_dirs) or (not grid and opts.save_to_dirs and not no_prompt)
if save_to_dirs:
dirname = apply_filename_pattern(opts.directories_filename_pattern or "[prompt_words]", p, seed, prompt).strip('\\ /')
path = os.path.join(path, dirname)
os.makedirs(path, exist_ok=True)
if forced_filename is None:
if add_number:
basecount = get_next_sequence_number(path, basename)
fullfn = "a.png"
fullfn_without_extension = "a"
fullfn = None
for i in range(500):
fn = f"{basecount + i:05}" if basename == '' else f"{basename}-{basecount + i:04}"
fullfn = os.path.join(path, f"{fn}{file_decoration}.{extension}")
fullfn_without_extension = os.path.join(path, f"{fn}{file_decoration}")
if not os.path.exists(fullfn):
break
else:
fullfn = os.path.join(path, f"{file_decoration}.{extension}")
else:
fullfn = os.path.join(path, f"{forced_filename}.{extension}")
fullfn_without_extension = os.path.join(path, forced_filename)
def exif_bytes():
return piexif.dump({
pnginfo = existing_info or {}
if info is not None:
pnginfo[pnginfo_section_name] = info
params = script_callbacks.ImageSaveParams(image, p, fullfn, pnginfo)
script_callbacks.before_image_saved_callback(params)
image = params.image
fullfn = params.filename
info = params.pnginfo.get(pnginfo_section_name, None)
def _atomically_save_image(image_to_save, filename_without_extension, extension):
# save the image with a .tmp extension to avoid a race condition when another process detects a new image in the directory
temp_file_path = filename_without_extension + ".tmp"
image_format = Image.registered_extensions()[extension]
if extension.lower() == '.png':
pnginfo_data = PngImagePlugin.PngInfo()
if opts.enable_pnginfo:
for k, v in params.pnginfo.items():
pnginfo_data.add_text(k, str(v))
image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality, pnginfo=pnginfo_data)
elif extension.lower() in (".jpg", ".jpeg", ".webp"):
if image_to_save.mode == 'RGBA':
image_to_save = image_to_save.convert("RGB")
image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality)
if opts.enable_pnginfo and info is not None:
exif_bytes = piexif.dump({
"Exif": {
piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(info or "", encoding="unicode")
},
})
if extension.lower() in ("jpg", "jpeg", "webp"):
image.save(fullfn, quality=opts.jpeg_quality)
if opts.enable_pnginfo and info is not None:
piexif.insert(exif_bytes(), fullfn)
piexif.insert(exif_bytes, temp_file_path)
else:
image.save(fullfn, quality=opts.jpeg_quality, pnginfo=pnginfo)
image_to_save.save(temp_file_path, format=image_format, quality=opts.jpeg_quality)
# atomically rename the file with correct extension
os.replace(temp_file_path, filename_without_extension + extension)
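The .tmp-then-os.replace sequence is the standard atomic-write pattern: the rename is atomic on the same filesystem, so a directory watcher never sees a half-written image. The same idea in isolation (a generic sketch, not the webui helper itself):

import os

def atomic_write_bytes(data: bytes, path: str):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)          # a crash here leaves `path` intact
    os.replace(tmp, path)      # atomic swap into place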
fullfn_without_extension, extension = os.path.splitext(params.filename)
_atomically_save_image(image, fullfn_without_extension, extension)
image.already_saved_as = fullfn
target_side_length = 4000
oversize = image.width > target_side_length or image.height > target_side_length
@@ -423,12 +585,81 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
elif oversize:
image = image.resize((image.width * target_side_length // image.height, target_side_length), LANCZOS)
image.save(fullfn_without_extension + ".jpg", quality=opts.jpeg_quality)
if opts.enable_pnginfo and info is not None:
piexif.insert(exif_bytes(), fullfn_without_extension + ".jpg")
_atomically_save_image(image, fullfn_without_extension, ".jpg")
if opts.save_txt and info is not None:
with open(f"{fullfn_without_extension}.txt", "w", encoding="utf8") as file:
txt_fullfn = f"{fullfn_without_extension}.txt"
with open(txt_fullfn, "w", encoding="utf8") as file:
file.write(info + "\n")
else:
txt_fullfn = None
script_callbacks.image_saved_callback(params)
return fullfn, txt_fullfn
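With the extended signature, a typical call from a script would look roughly like this (the path, option values, and the p object are illustrative; `images` is the `modules.images` alias used elsewhere in the codebase):

fullfn, txt_fullfn = images.save_image(
    image, "outputs/txt2img-images", "",
    seed=12345, prompt="a photo of a cat",
    info="a photo of a cat\nSteps: 20", p=p,
)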
def read_info_from_image(image):
items = image.info or {}
geninfo = items.pop('parameters', None)
if "exif" in items:
exif = piexif.load(items["exif"])
exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'')
try:
exif_comment = piexif.helper.UserComment.load(exif_comment)
except ValueError:
exif_comment = exif_comment.decode('utf8', errors="ignore")
if exif_comment:
items['exif comment'] = exif_comment
geninfo = exif_comment
for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif',
'loop', 'background', 'timestamp', 'duration']:
items.pop(field, None)
if items.get("Software", None) == "NovelAI":
try:
json_info = json.loads(items["Comment"])
sampler = sd_samplers.samplers_map.get(json_info["sampler"], "Euler a")
geninfo = f"""{items["Description"]}
Negative prompt: {json_info["uc"]}
Steps: {json_info["steps"]}, Sampler: {sampler}, CFG scale: {json_info["scale"]}, Seed: {json_info["seed"]}, Size: {image.width}x{image.height}, Clip skip: 2, ENSD: 31337"""
except Exception:
print("Error parsing NovelAI image generation parameters:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return geninfo, items
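The EXIF branch mirrors what save_image writes: parameters are packed into the UserComment tag with piexif and unpacked the same way. The round trip in isolation (the filename is illustrative and must be an existing JPEG):

import piexif
import piexif.helper

comment = piexif.helper.UserComment.dump("a photo of a cat\nSteps: 20", encoding="unicode")
exif_bytes = piexif.dump({"Exif": {piexif.ExifIFD.UserComment: comment}})
piexif.insert(exif_bytes, "image.jpg")                      # write into the JPEG

decoded = piexif.helper.UserComment.load(
    piexif.load("image.jpg")["Exif"][piexif.ExifIFD.UserComment])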
def image_data(data):
try:
image = Image.open(io.BytesIO(data))
textinfo, _ = read_info_from_image(image)
return textinfo, None
except Exception:
pass
try:
text = data.decode('utf8')
assert len(text) < 10000
return text, None
except Exception:
pass
return '', None
def flatten(img, bgcolor):
"""replaces transparency with bgcolor (example: "#ffffff"), returning an RGB mode image with no transparency"""
if img.mode == "RGBA":
background = Image.new('RGBA', img.size, bgcolor)
background.paste(img, mask=img)
img = background
return img.convert('RGB')

modules/img2img.py

@@ -4,9 +4,10 @@ import sys
import traceback
import numpy as np
from PIL import Image, ImageOps, ImageChops
from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops
from modules import devices
from modules import devices, sd_samplers
from modules.generation_parameters_copypaste import create_override_settings_dict
from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
from modules.shared import opts, state
import modules.shared as shared
@@ -16,10 +17,17 @@ import modules.images as images
import modules.scripts
def process_batch(p, input_dir, output_dir, args):
def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args):
processing.fix_seed(p)
images = [file for file in [os.path.join(input_dir, x) for x in os.listdir(input_dir)] if os.path.isfile(file)]
images = shared.listfiles(input_dir)
is_inpaint_batch = False
if inpaint_mask_dir:
inpaint_masks = shared.listfiles(inpaint_mask_dir)
is_inpaint_batch = len(inpaint_masks) > 0
if is_inpaint_batch:
print(f"\nInpaint batch is enabled. {len(inpaint_masks)} masks found.")
print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.")
@@ -32,13 +40,26 @@ def process_batch(p, input_dir, output_dir, args):
for i, image in enumerate(images):
state.job = f"{i+1} out of {len(images)}"
if state.skipped:
state.skipped = False
if state.interrupted:
break
img = Image.open(image)
# Use the EXIF orientation of photos taken by smartphones.
img = ImageOps.exif_transpose(img)
p.init_images = [img] * p.batch_size
if is_inpaint_batch:
# try to find corresponding mask for an image using simple filename matching
mask_image_path = os.path.join(inpaint_mask_dir, os.path.basename(image))
# if not found use first one ("same mask for all images" use-case)
if mask_image_path not in inpaint_masks:
mask_image_path = inpaint_masks[0]
mask_image = Image.open(mask_image_path)
p.image_mask = mask_image
proc = modules.scripts.scripts_img2img.run(p, *args)
if proc is None:
proc = process_images(p)
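The mask pairing rule above, restated as a standalone helper (the names are ours): a mask with the same basename as the image wins, otherwise the first mask is shared by every image.

import os

def pick_mask(image_path, inpaint_masks, inpaint_mask_dir):
    candidate = os.path.join(inpaint_mask_dir, os.path.basename(image_path))
    return candidate if candidate in inpaint_masks else inpaint_masks[0]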
@@ -51,27 +72,46 @@ def process_batch(p, input_dir, output_dir, args):
filename = f"{left}-{n}{right}"
if not save_normally:
os.makedirs(output_dir, exist_ok=True)
processed_image.save(os.path.join(output_dir, filename))
def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, prompt_style2: str, init_img, init_img_with_mask, init_img_inpaint, init_mask_inpaint, mask_mode, steps: int, sampler_index: int, mask_blur: int, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, *args):
is_inpaint = mode == 1
is_batch = mode == 2
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, *args):
override_settings = create_override_settings_dict(override_settings_texts)
if is_inpaint:
if mask_mode == 0:
image = init_img_with_mask['image']
mask = init_img_with_mask['mask']
is_batch = mode == 5
if mode == 0: # img2img
image = init_img.convert("RGB")
mask = None
elif mode == 1: # img2img sketch
image = sketch.convert("RGB")
mask = None
elif mode == 2: # inpaint
image, mask = init_img_with_mask["image"], init_img_with_mask["mask"]
alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
mask = ImageChops.lighter(alpha_mask, mask.convert('L')).convert('L')
image = image.convert('RGB')
else:
image = image.convert("RGB")
elif mode == 3: # inpaint sketch
image = inpaint_color_sketch
orig = inpaint_color_sketch_orig or inpaint_color_sketch
pred = np.any(np.array(image) != np.array(orig), axis=-1)
mask = Image.fromarray(pred.astype(np.uint8) * 255, "L")
mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100)
blur = ImageFilter.GaussianBlur(mask_blur)
image = Image.composite(image.filter(blur), orig, mask.filter(blur))
image = image.convert("RGB")
elif mode == 4: # inpaint upload mask
image = init_img_inpaint
mask = init_mask_inpaint
else:
image = init_img
image = None
mask = None
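The inpaint-sketch branch (mode 3) is the most involved: the mask is derived from whichever pixels the user painted over, then both image and mask are feathered. A condensed standalone restatement of that branch (assumes sketch and orig are same-size RGB images; the default parameter values are illustrative):

import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def derive_sketch_inpaint(sketch, orig, mask_blur=4, mask_alpha=0.0):
    changed = np.any(np.array(sketch) != np.array(orig), axis=-1)   # painted pixels
    mask = Image.fromarray(changed.astype(np.uint8) * 255, "L")
    mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100)
    blur = ImageFilter.GaussianBlur(mask_blur)
    image = Image.composite(sketch.filter(blur), orig, mask.filter(blur))
    return image.convert("RGB"), mask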
# Use the EXIF orientation of photos taken by smartphones.
if image is not None:
image = ImageOps.exif_transpose(image)
assert 0. <= denoising_strength <= 1., 'can only work with strength in [0.0, 1.0]'
p = StableDiffusionProcessingImg2Img(
@@ -80,14 +120,14 @@ def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, pro
outpath_grids=opts.outdir_grids or opts.outdir_img2img_grids,
prompt=prompt,
negative_prompt=negative_prompt,
styles=[prompt_style, prompt_style2],
styles=prompt_styles,
seed=seed,
subseed=subseed,
subseed_strength=subseed_strength,
seed_resize_from_h=seed_resize_from_h,
seed_resize_from_w=seed_resize_from_w,
seed_enable_extras=seed_enable_extras,
sampler_index=sampler_index,
sampler_name=sd_samplers.samplers_for_img2img[sampler_index].name,
batch_size=batch_size,
n_iter=n_iter,
steps=steps,
@@ -102,11 +142,16 @@ def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, pro
inpainting_fill=inpainting_fill,
resize_mode=resize_mode,
denoising_strength=denoising_strength,
image_cfg_scale=image_cfg_scale,
inpaint_full_res=inpaint_full_res,
inpaint_full_res_padding=inpaint_full_res_padding,
inpainting_mask_invert=inpainting_mask_invert,
override_settings=override_settings,
)
p.scripts = modules.scripts.scripts_txt2img
p.script_args = args
if shared.cmd_opts.enable_console_prompts:
print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
@@ -115,7 +160,7 @@ def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, pro
if is_batch:
assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, args)
process_batch(p, img2img_batch_input_dir, img2img_batch_output_dir, img2img_batch_inpaint_mask_dir, args)
processed = Processed(p, [], p.seed, "")
else:
@@ -123,6 +168,8 @@ def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, pro
if processed is None:
processed = process_images(p)
p.close()
shared.total_tqdm.clear()
generation_info_js = processed.js()
@@ -132,4 +179,4 @@ def img2img(mode: int, prompt: str, negative_prompt: str, prompt_style: str, pro
if opts.do_not_show_images:
processed.images = []
return processed.images, generation_info_js, plaintext_to_html(processed.info)
return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments)

modules/import_hook.py Normal file

@@ -0,0 +1,5 @@
import sys
# this will break any attempt to import xformers, which prevents the stable diffusion repo from trying to use it
if "--xformers" not in "".join(sys.argv):
sys.modules["xformers"] = None

modules/interrogate.py

@@ -1,51 +1,106 @@
import contextlib
import os
import shutil
import sys
import traceback
from collections import namedtuple
from pathlib import Path
import re
import torch
import torch.hub
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
import modules.shared as shared
from modules import devices, paths, lowvram
from modules import devices, paths, shared, lowvram, modelloader, errors
blip_image_eval_size = 384
blip_model_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth'
clip_model_name = 'ViT-L/14'
Category = namedtuple("Category", ["name", "topn", "items"])
re_topn = re.compile(r"\.top(\d+)\.")
def category_types():
return [f.stem for f in Path(shared.interrogator.content_dir).glob('*.txt')]
def download_default_clip_interrogate_categories(content_dir):
print("Downloading CLIP categories...")
tmpdir = content_dir + "_tmp"
category_types = ["artists", "flavors", "mediums", "movements"]
try:
os.makedirs(tmpdir)
for category_type in category_types:
torch.hub.download_url_to_file(f"https://raw.githubusercontent.com/pharmapsychotic/clip-interrogator/main/clip_interrogator/data/{category_type}.txt", os.path.join(tmpdir, f"{category_type}.txt"))
os.rename(tmpdir, content_dir)
except Exception as e:
errors.display(e, "downloading default CLIP interrogate categories")
finally:
if os.path.exists(tmpdir):
shutil.rmtree(tmpdir)  # os.remove() cannot delete a directory
class InterrogateModels:
blip_model = None
clip_model = None
clip_preprocess = None
categories = None
dtype = None
running_on_cpu = None
def __init__(self, content_dir):
self.categories = []
self.loaded_categories = None
self.skip_categories = []
self.content_dir = content_dir
self.running_on_cpu = devices.device_interrogate == torch.device("cpu")
if os.path.exists(content_dir):
for filename in os.listdir(content_dir):
m = re_topn.search(filename)
def categories(self):
if not os.path.exists(self.content_dir):
download_default_clip_interrogate_categories(self.content_dir)
if self.loaded_categories is not None and self.skip_categories == shared.opts.interrogate_clip_skip_categories:
return self.loaded_categories
self.loaded_categories = []
if os.path.exists(self.content_dir):
self.skip_categories = shared.opts.interrogate_clip_skip_categories
category_types = []
for filename in Path(self.content_dir).glob('*.txt'):
category_types.append(filename.stem)
if filename.stem in self.skip_categories:
continue
m = re_topn.search(filename.name)  # search the full name; the pattern needs the dot before ".txt" to match
topn = 1 if m is None else int(m.group(1))
with open(os.path.join(content_dir, filename), "r", encoding="utf8") as file:
with open(filename, "r", encoding="utf8") as file:
lines = [x.strip() for x in file.readlines()]
self.categories.append(Category(name=filename, topn=topn, items=lines))
self.loaded_categories.append(Category(name=filename.stem, topn=topn, items=lines))
return self.loaded_categories
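The .topN. filename convention controls how many matches a category contributes; re_topn pulls N out of the filename, defaulting to 1. In isolation:

import re

re_topn = re.compile(r"\.top(\d+)\.")
for name in ["artists.txt", "flavors.top3.txt"]:
    m = re_topn.search(name)
    print(name, 1 if m is None else int(m.group(1)))
# artists.txt 1
# flavors.top3.txt 3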
def create_fake_fairscale(self):
class FakeFairscale:
def checkpoint_wrapper(self):
pass
sys.modules["fairscale.nn.checkpoint.checkpoint_activations"] = FakeFairscale
def load_blip_model(self):
self.create_fake_fairscale()
import models.blip
blip_model = models.blip.blip_decoder(pretrained=blip_model_url, image_size=blip_image_eval_size, vit='base', med_config=os.path.join(paths.paths["BLIP"], "configs", "med_config.json"))
files = modelloader.load_models(
model_path=os.path.join(paths.models_path, "BLIP"),
model_url='https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_caption_capfilt_large.pth',
ext_filter=[".pth"],
download_name='model_base_caption_capfilt_large.pth',
)
blip_model = models.blip.blip_decoder(pretrained=files[0], image_size=blip_image_eval_size, vit='base', med_config=os.path.join(paths.paths["BLIP"], "configs", "med_config.json"))
blip_model.eval()
return blip_model
@@ -53,26 +108,30 @@ class InterrogateModels:
def load_clip_model(self):
import clip
model, preprocess = clip.load(clip_model_name)
if self.running_on_cpu:
model, preprocess = clip.load(clip_model_name, device="cpu", download_root=shared.cmd_opts.clip_models_path)
else:
model, preprocess = clip.load(clip_model_name, download_root=shared.cmd_opts.clip_models_path)
model.eval()
model = model.to(shared.device)
model = model.to(devices.device_interrogate)
return model, preprocess
def load(self):
if self.blip_model is None:
self.blip_model = self.load_blip_model()
if not shared.cmd_opts.no_half:
if not shared.cmd_opts.no_half and not self.running_on_cpu:
self.blip_model = self.blip_model.half()
self.blip_model = self.blip_model.to(shared.device)
self.blip_model = self.blip_model.to(devices.device_interrogate)
if self.clip_model is None:
self.clip_model, self.clip_preprocess = self.load_clip_model()
if not shared.cmd_opts.no_half:
if not shared.cmd_opts.no_half and not self.running_on_cpu:
self.clip_model = self.clip_model.half()
self.clip_model = self.clip_model.to(shared.device)
self.clip_model = self.clip_model.to(devices.device_interrogate)
self.dtype = next(self.clip_model.parameters()).dtype
@@ -95,15 +154,17 @@ class InterrogateModels:
def rank(self, image_features, text_array, top_count=1):
import clip
devices.torch_gc()
if shared.opts.interrogate_clip_dict_limit != 0:
text_array = text_array[0:int(shared.opts.interrogate_clip_dict_limit)]
top_count = min(top_count, len(text_array))
text_tokens = clip.tokenize([text for text in text_array], truncate=True).to(shared.device)
text_tokens = clip.tokenize([text for text in text_array], truncate=True).to(devices.device_interrogate)
text_features = self.clip_model.encode_text(text_tokens).type(self.dtype)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = torch.zeros((1, len(text_array))).to(shared.device)
similarity = torch.zeros((1, len(text_array))).to(devices.device_interrogate)
for i in range(image_features.shape[0]):
similarity += (100.0 * image_features[i].unsqueeze(0) @ text_features.T).softmax(dim=-1)
similarity /= image_features.shape[0]
@@ -116,7 +177,7 @@ class InterrogateModels:
transforms.Resize((blip_image_eval_size, blip_image_eval_size), interpolation=InterpolationMode.BICUBIC),
transforms.ToTensor(),
transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
])(pil_image).unsqueeze(0).type(self.dtype).to(shared.device)
])(pil_image).unsqueeze(0).type(self.dtype).to(devices.device_interrogate)
with torch.no_grad():
caption = self.blip_model.generate(gpu_image, sample=False, num_beams=shared.opts.interrogate_clip_num_beams, min_length=shared.opts.interrogate_clip_min_length, max_length=shared.opts.interrogate_clip_max_length)
@@ -124,10 +185,10 @@ class InterrogateModels:
return caption[0]
def interrogate(self, pil_image):
res = None
res = ""
shared.state.begin()
shared.state.job = 'interrogate'
try:
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.send_everything_to_cpu()
devices.torch_gc()
@@ -140,29 +201,27 @@ class InterrogateModels:
res = caption
cilp_image = self.clip_preprocess(pil_image).unsqueeze(0).type(self.dtype).to(shared.device)
clip_image = self.clip_preprocess(pil_image).unsqueeze(0).type(self.dtype).to(devices.device_interrogate)
precision_scope = torch.autocast if shared.cmd_opts.precision == "autocast" else contextlib.nullcontext
with torch.no_grad(), precision_scope("cuda"):
image_features = self.clip_model.encode_image(cilp_image).type(self.dtype)
with torch.no_grad(), devices.autocast():
image_features = self.clip_model.encode_image(clip_image).type(self.dtype)
image_features /= image_features.norm(dim=-1, keepdim=True)
if shared.opts.interrogate_use_builtin_artists:
artist = self.rank(image_features, ["by " + artist.name for artist in shared.artist_db.artists])[0]
res += ", " + artist[0]
for name, topn, items in self.categories:
for name, topn, items in self.categories():
matches = self.rank(image_features, items, top_count=topn)
for match, score in matches:
if shared.opts.interrogate_return_ranks:
res += f", ({match}:{score/100:.3f})"
else:
res += ", " + match
except Exception:
print(f"Error interrogating", file=sys.stderr)
print("Error interrogating", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
res += "<error>"
self.unload()
shared.state.end()
return res

modules/localization.py Normal file

@@ -0,0 +1,37 @@
import json
import os
import sys
import traceback
localizations = {}
def list_localizations(dirname):
localizations.clear()
for file in os.listdir(dirname):
fn, ext = os.path.splitext(file)
if ext.lower() != ".json":
continue
localizations[fn] = os.path.join(dirname, file)
from modules import scripts
for file in scripts.list_scripts("localizations", ".json"):
fn, ext = os.path.splitext(file.filename)
localizations[fn] = file.path
def localization_js(current_localization_name):
fn = localizations.get(current_localization_name, None)
data = {}
if fn is not None:
try:
with open(fn, "r", encoding="utf8") as file:
data = json.load(file)
except Exception:
print(f"Error loading localization from {fn}:", file=sys.stderr)
print(traceback.format_exc(), file=sys.stderr)
return f"var localization = {json.dumps(data)}\n"

modules/lowvram.py

@@ -1,9 +1,8 @@
import torch
from modules.devices import get_optimal_device
from modules import devices
module_in_gpu = None
cpu = torch.device("cpu")
device = gpu = get_optimal_device()
def send_everything_to_cpu():
@@ -33,34 +32,49 @@ def setup_for_low_vram(sd_model, use_medvram):
if module_in_gpu is not None:
module_in_gpu.to(cpu)
module.to(gpu)
module.to(devices.device)
module_in_gpu = module
# see below for register_forward_pre_hook;
# first_stage_model does not use forward(), it uses encode/decode, so register_forward_pre_hook is
# useless here, and we just replace those methods
def first_stage_model_encode_wrap(self, encoder, x):
send_me_to_gpu(self, None)
return encoder(x)
def first_stage_model_decode_wrap(self, decoder, z):
send_me_to_gpu(self, None)
return decoder(z)
first_stage_model = sd_model.first_stage_model
first_stage_model_encode = sd_model.first_stage_model.encode
first_stage_model_decode = sd_model.first_stage_model.decode
# remove three big modules, cond, first_stage, and unet from the model and then
def first_stage_model_encode_wrap(x):
send_me_to_gpu(first_stage_model, None)
return first_stage_model_encode(x)
def first_stage_model_decode_wrap(z):
send_me_to_gpu(first_stage_model, None)
return first_stage_model_decode(z)
# for SD1, cond_stage_model is CLIP and its NN is in the transformer field, but for SD2 it's OpenCLIP, and it's in the model field
if hasattr(sd_model.cond_stage_model, 'model'):
sd_model.cond_stage_model.transformer = sd_model.cond_stage_model.model
# remove four big modules, cond, first_stage, depth (if applicable), and unet from the model and then
# send the model to GPU. Then put modules back. The modules will stay on the CPU.
stored = sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model
sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model = None, None, None
sd_model.to(device)
sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.model = stored
stored = sd_model.cond_stage_model.transformer, sd_model.first_stage_model, getattr(sd_model, 'depth_model', None), sd_model.model
sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.depth_model, sd_model.model = None, None, None, None
sd_model.to(devices.device)
sd_model.cond_stage_model.transformer, sd_model.first_stage_model, sd_model.depth_model, sd_model.model = stored
# register hooks for the first two models
# register hooks for the first three models
sd_model.cond_stage_model.transformer.register_forward_pre_hook(send_me_to_gpu)
sd_model.first_stage_model.register_forward_pre_hook(send_me_to_gpu)
sd_model.first_stage_model.encode = lambda x, en=sd_model.first_stage_model.encode: first_stage_model_encode_wrap(sd_model.first_stage_model, en, x)
sd_model.first_stage_model.decode = lambda z, de=sd_model.first_stage_model.decode: first_stage_model_decode_wrap(sd_model.first_stage_model, de, z)
sd_model.first_stage_model.encode = first_stage_model_encode_wrap
sd_model.first_stage_model.decode = first_stage_model_decode_wrap
if sd_model.depth_model:
sd_model.depth_model.register_forward_pre_hook(send_me_to_gpu)
parents[sd_model.cond_stage_model.transformer] = sd_model.cond_stage_model
if hasattr(sd_model.cond_stage_model, 'model'):
sd_model.cond_stage_model.model = sd_model.cond_stage_model.transformer
del sd_model.cond_stage_model.transformer
if use_medvram:
sd_model.model.register_forward_pre_hook(send_me_to_gpu)
else:
@@ -70,7 +84,7 @@ def setup_for_low_vram(sd_model, use_medvram):
# so that only one of them is in GPU at a time
stored = diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed
diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = None, None, None, None
sd_model.model.to(device)
sd_model.model.to(devices.device)
diff_model.input_blocks, diff_model.middle_block, diff_model.output_blocks, diff_model.time_embed = stored
# install hooks for bits of third model
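The whole lowvram scheme rests on register_forward_pre_hook: each big module is pulled onto the GPU just before its forward() runs, evicting whichever module was there before, so only one of them occupies VRAM at a time. A toy version of the mechanism (assumes a CUDA device; the Linear layers stand in for cond/first_stage/unet):

import torch
import torch.nn as nn

module_in_gpu = None
gpu, cpu = torch.device("cuda"), torch.device("cpu")

def send_me_to_gpu(module, _inputs):
    global module_in_gpu
    if module_in_gpu is module:
        return
    if module_in_gpu is not None:
        module_in_gpu.to(cpu)      # evict the current tenant
    module.to(gpu)
    module_in_gpu = module

a, b = nn.Linear(8, 8), nn.Linear(8, 8)
for module in (a, b):
    module.register_forward_pre_hook(send_me_to_gpu)

x = torch.randn(1, 8, device=gpu)
y = b(a(x))    # each forward() first pulls its own module onto the GPU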

modules/mac_specific.py Normal file

@@ -0,0 +1,53 @@
import torch
from modules import paths
from modules.sd_hijack_utils import CondFunc
from packaging import version
# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
# check `getattr` first, then actually try allocating on MPS, for compatibility
def check_for_mps() -> bool:
if not getattr(torch, 'has_mps', False):
return False
try:
torch.zeros(1).to(torch.device("mps"))
return True
except Exception:
return False
has_mps = check_for_mps()
# MPS workaround for https://github.com/pytorch/pytorch/issues/89784
def cumsum_fix(input, cumsum_func, *args, **kwargs):
if input.device.type == 'mps':
output_dtype = kwargs.get('dtype', input.dtype)
if output_dtype == torch.int64:
return cumsum_func(input.cpu(), *args, **kwargs).to(input.device)
elif cumsum_needs_bool_fix and output_dtype == torch.bool or cumsum_needs_int_fix and (output_dtype == torch.int8 or output_dtype == torch.int16):
return cumsum_func(input.to(torch.int32), *args, **kwargs).to(torch.int64)
return cumsum_func(input, *args, **kwargs)
if has_mps:
# MPS fix for randn in torchsde
CondFunc('torchsde._brownian.brownian_interval._randn', lambda _, size, dtype, device, seed: torch.randn(size, dtype=dtype, device=torch.device("cpu"), generator=torch.Generator(torch.device("cpu")).manual_seed(int(seed))).to(device), lambda _, size, dtype, device, seed: device.type == 'mps')
if version.parse(torch.__version__) < version.parse("1.13"):
# PyTorch 1.13 doesn't need these fixes but unfortunately is slower and has regressions that prevent training from working
# MPS workaround for https://github.com/pytorch/pytorch/issues/79383
CondFunc('torch.Tensor.to', lambda orig_func, self, *args, **kwargs: orig_func(self.contiguous(), *args, **kwargs),
lambda _, self, *args, **kwargs: self.device.type != 'mps' and (args and isinstance(args[0], torch.device) and args[0].type == 'mps' or isinstance(kwargs.get('device'), torch.device) and kwargs['device'].type == 'mps'))
# MPS workaround for https://github.com/pytorch/pytorch/issues/80800
CondFunc('torch.nn.functional.layer_norm', lambda orig_func, *args, **kwargs: orig_func(*([args[0].contiguous()] + list(args[1:])), **kwargs),
lambda _, *args, **kwargs: args and isinstance(args[0], torch.Tensor) and args[0].device.type == 'mps')
# MPS workaround for https://github.com/pytorch/pytorch/issues/90532
CondFunc('torch.Tensor.numpy', lambda orig_func, self, *args, **kwargs: orig_func(self.detach(), *args, **kwargs), lambda _, self, *args, **kwargs: self.requires_grad)
elif version.parse(torch.__version__) > version.parse("1.13.1"):
cumsum_needs_int_fix = not torch.Tensor([1,2]).to(torch.device("mps")).equal(torch.ShortTensor([1,1]).to(torch.device("mps")).cumsum(0))
cumsum_needs_bool_fix = not torch.BoolTensor([True,True]).to(device=torch.device("mps"), dtype=torch.int64).equal(torch.BoolTensor([True,False]).to(torch.device("mps")).cumsum(0))
cumsum_fix_func = lambda orig_func, input, *args, **kwargs: cumsum_fix(input, orig_func, *args, **kwargs)
CondFunc('torch.cumsum', cumsum_fix_func, None)
CondFunc('torch.Tensor.cumsum', cumsum_fix_func, None)
CondFunc('torch.narrow', lambda orig_func, *args, **kwargs: orig_func(*args, **kwargs).clone(), None)

modules/masking.py

@@ -49,7 +49,7 @@ def expand_crop_region(crop_region, processing_width, processing_height, image_w
ratio_processing = processing_width / processing_height
if ratio_crop_region > ratio_processing:
desired_height = (x2 - x1) * ratio_processing
desired_height = (x2 - x1) / ratio_processing
desired_height_diff = int(desired_height - (y2-y1))
y1 -= desired_height_diff//2
y2 += desired_height_diff - desired_height_diff//2
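The one-character fix above is easy to verify numerically: when ratio_crop_region > ratio_processing, the crop is too wide for the target aspect, so its height must grow by width / ratio, not width * ratio.

# target processing size 512x768, so ratio_processing = 512/768
ratio_processing = 512 / 768
crop_width = 600
print(crop_width * ratio_processing)   # old code: 400.0 -- shorter than the crop is wide
print(crop_width / ratio_processing)   # fixed:    900.0 -- matches the 512:768 aspect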

modules/memmon.py

@@ -71,10 +71,13 @@ class MemUsageMonitor(threading.Thread):
def read(self):
if not self.disabled:
free, total = torch.cuda.mem_get_info()
self.data["free"] = free
self.data["total"] = total
torch_stats = torch.cuda.memory_stats(self.device)
self.data["active"] = torch_stats["active.all.current"]
self.data["active_peak"] = torch_stats["active_bytes.all.peak"]
self.data["reserved"] = torch_stats["reserved_bytes.all.current"]
self.data["reserved_peak"] = torch_stats["reserved_bytes.all.peak"]
self.data["system_peak"] = total - self.data["min_free"]

modules/modelloader.py

@@ -10,7 +10,7 @@ from modules.upscaler import Upscaler
from modules.paths import script_path, models_path
def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None) -> list:
def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None, ext_blacklist=None) -> list:
"""
A one-and done loader to try finding the desired models in specified directories.
@@ -45,6 +45,11 @@ def load_models(model_path: str, model_url: str = None, command_path: str = None
full_path = file
if os.path.isdir(full_path):
continue
if os.path.islink(full_path) and not os.path.exists(full_path):
print(f"Skipping broken symlink: {full_path}")
continue
if ext_blacklist is not None and any([full_path.endswith(x) for x in ext_blacklist]):
continue
if len(ext_filter) != 0:
model_name, extension = os.path.splitext(file)
if extension not in ext_filter:
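The broken-symlink guard pairs islink() with exists(): exists() follows the link, so a dangling symlink reports True and False respectively, exactly the combination being skipped. On a POSIX system:

import os

os.symlink("/nonexistent/model.pth", "broken.pth")   # target does not exist
print(os.path.islink("broken.pth"))    # True
print(os.path.exists("broken.pth"))    # False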
@@ -82,9 +87,13 @@ def cleanup_models():
src_path = models_path
dest_path = os.path.join(models_path, "Stable-diffusion")
move_files(src_path, dest_path, ".ckpt")
move_files(src_path, dest_path, ".safetensors")
src_path = os.path.join(root_path, "ESRGAN")
dest_path = os.path.join(models_path, "ESRGAN")
move_files(src_path, dest_path)
src_path = os.path.join(models_path, "BSRGAN")
dest_path = os.path.join(models_path, "ESRGAN")
move_files(src_path, dest_path, ".pth")
src_path = os.path.join(root_path, "gfpgan")
dest_path = os.path.join(models_path, "GFPGAN")
move_files(src_path, dest_path)
@@ -119,11 +128,27 @@ def move_files(src_path: str, dest_path: str, ext_filter: str = None):
pass
builtin_upscaler_classes = []
forbidden_upscaler_classes = set()
def list_builtin_upscalers():
load_upscalers()
builtin_upscaler_classes.clear()
builtin_upscaler_classes.extend(Upscaler.__subclasses__())
def forbid_loaded_nonbuiltin_upscalers():
for cls in Upscaler.__subclasses__():
if cls not in builtin_upscaler_classes:
forbidden_upscaler_classes.add(cls)
def load_upscalers():
sd = shared.script_path
# We can only discover upscalers dynamically this way if their classes have been referenced,
# so we'll try to import any _model.py files before looking in __subclasses__
modules_dir = os.path.join(sd, "modules")
modules_dir = os.path.join(shared.script_path, "modules")
for file in os.listdir(modules_dir):
if "_model.py" in file:
model_name = file.replace("_model.py", "")
@@ -132,22 +157,16 @@ def load_upscalers():
importlib.import_module(full_model)
except:
pass
datas = []
c_o = vars(shared.cmd_opts)
commandline_options = vars(shared.cmd_opts)
for cls in Upscaler.__subclasses__():
if cls in forbidden_upscaler_classes:
continue
name = cls.__name__
module_name = cls.__module__
module = importlib.import_module(module_name)
class_ = getattr(module, name)
cmd_name = f"{name.lower().replace('upscaler', '')}_models_path"
opt_string = None
try:
if cmd_name in c_o:
opt_string = c_o[cmd_name]
except:
pass
scaler = class_(opt_string)
for child in scaler.scalers:
datas.append(child)
scaler = cls(commandline_options.get(cmd_name, None))
datas += scaler.scalers
shared.sd_upscalers = datas
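The simplified loop still depends on Python's subclass registry: Upscaler.__subclasses__() only lists classes whose defining modules have been imported, which is why the *_model.py imports happen first. A self-contained illustration (the classes here are stand-ins):

class Upscaler:
    pass

class LanczosUpscaler(Upscaler):
    pass

class ESRGANUpscaler(Upscaler):
    pass

print([cls.__name__ for cls in Upscaler.__subclasses__()])
# ['LanczosUpscaler', 'ESRGANUpscaler']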

Some files were not shown because too many files have changed in this diff.