Stable Diffusion web UI

A browser interface for Stable Diffusion based on the Gradio library.

Features

Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One-click install and run script (but you still must install Python and git)
  • Outpainting
  • Inpainting
  • Prompt matrix
  • Stable Diffusion upscale
  • Attention
  • Loopback
  • X/Y plot
  • Textual Inversion
  • Extras tab with:
    • GFPGAN, a neural network that fixes faces
    • CodeFormer, a face restoration tool as an alternative to GFPGAN
    • RealESRGAN, a neural network upscaler
    • ESRGAN, a neural network upscaler with many third-party models
  • Resizing aspect ratio options
  • Sampling method selection
  • Interrupt processing at any time
  • 4GB video card support
  • Correct seeds for batches
  • Prompt length validation
  • Generation parameters added as text to PNG (see the example after this list)
  • Tab to view an existing picture's generation parameters
  • Settings page
  • Running custom code from UI
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Random artist button
  • Tiling support: UI checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
  • Negative prompt
  • Styles
  • Variations
  • Seed resizing
  • CLIP interrogator
  • Prompt Editing
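
Because the generation parameters are written into the PNG itself, they can also be read back outside the web UI. A minimal sketch, assuming the parameters are stored in a PNG text chunk named "parameters" and that Pillow is available in the Python environment (the file name below is only an example):
python3 -c "from PIL import Image; print(Image.open('00001-12345.png').info.get('parameters'))"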

Installation and Running

Make sure the required dependencies are met and follow the instructions available for both NVidia (recommended) and AMD GPUs.

Alternatively, use Google Colab.

Automatic Installation on Windows

  1. Install Python 3.10.6, checking "Add Python to PATH"
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.
  4. Place model.ckpt in the models directory (see dependencies for where to get it).
  5. (Optional) Place GFPGANv1.4.pth in the base directory, alongside webui.py (see dependencies for where to get it).
  6. Run webui-user.bat from Windows Explorer as a normal, non-administrator user.
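
Steps 3-6, expressed as commands (a sketch only, assuming Git Bash or a similar shell is available after installing git; the model and optional GFPGAN weights still have to be downloaded separately):
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
# place model.ckpt in the models directory (and, optionally, GFPGANv1.4.pth next to webui.py),
# then run webui-user.bat from Windows Explorer as a normal, non-administrator user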

Automatic Installation on Linux

  1. Install the dependencies:
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
  2. To install in /home/$(whoami)/stable-diffusion-webui/, run:
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
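
Roughly equivalent manual steps, for those who prefer not to run a script piped from the network (a sketch; it clones into the same directory as above and lets webui.sh set up the environment):
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git /home/$(whoami)/stable-diffusion-webui
cd /home/$(whoami)/stable-diffusion-webui
bash webui.sh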

Documentation

The documentation was moved from this README over to the project's wiki.

Credits