yay, comma separated tags for training prompts; YUCK, the epoch setting did nothing after all

master
mrq 2022-10-12 19:37:23 +07:00
parent c0249cb3f8
commit f000aab55f
1 changed file with 5 additions and 4 deletions

@@ -28,9 +28,7 @@ Below is a list of terms clarified. I notice I'll use some terms interchangeably
* `embedding`: the trained "model" of the subject or style in question. "Model" would be wrong to call the trained output, as Textual Inversion isn't true training
* `hypernetwork`: a different way to train custom content against a model, almost all of the same principles here apply to hypernetworks
* `loss rate`: a calculated value determining how close the actual output is to the expected output. Typically, a value between `0.1` and `0.15` seems to be a good sign
* `epoch`: a term derived from typical neural network training
  - normally, it's referred to as a full training cycle over your source material
  - in this context, it's the above times the number of repeats per single image.
* `epoch`: a term derived from typical neural network training; normally it refers to a full training cycle over your source material, but the web UI doesn't actually do anything substantial with it (the sketch below walks through the arithmetic anyway)
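If you want a feel for the arithmetic regardless, here's a minimal sketch; the counts are made up for illustration and none of this is exposed by the web UI directly:

```python
# Minimal sketch of the epoch arithmetic described above; all values
# here are hypothetical, just to show the relationship.
num_images = 40        # images in the training dataset
repeats_per_image = 1  # repeats per single image
max_steps = 100000     # the `max steps` value from the Train tab

# One "epoch" in this context: a full pass over the dataset, times repeats.
steps_per_epoch = num_images * repeats_per_image
total_epochs = max_steps / steps_per_epoch

print(f"{steps_per_epoch} steps per epoch, ~{total_epochs:.0f} epochs at {max_steps} steps")
# -> 40 steps per epoch, ~2500 epochs at 100000 steps
```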
## Preface
@@ -206,10 +204,13 @@ I'm not quite clear on the differences by including the `by`, but the yiffy mode
## Training
Now that everything is set up, it's time to start training. For systems with "enough" VRAM (I don't have a number on what "adequate" entails), you're free to run the web UI with `--no-half --precision full`. You'll take a very slight performance hit, but quality improves just barely enough for me to notice. The Xformers feature seems to get disabled during training, but appears to make preview generations faster, so don't worry about getting xformers configured.
Make sure you're using the correct model you want to train against, as training uses the currently selected model.
**!**OPTIONAL**!** Make sure to go into the Settings tab, find the `Training` section, then under `Filename join string`, set it to `, `, as this will keep your training prompts comma separated. This doesn't make *too* big of a difference, but it's another step for correctness.
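For what it's worth, here's a rough sketch of what that setting changes, as I understand it; the regex and filename are invented for illustration, and this is not the web UI's actual code:

```python
import re

# Hypothetical filename text for a preprocessed training image, with
# tags separated by spaces (how the preprocessor writes them out).
filename_text = "blue sky mountain landscape"

# The UI splits the filename into words (per `Filename word regex`),
# then rejoins them with `Filename join string`.
words = re.findall(r"[\w-]+", filename_text)

space_joined = " ".join(words)   # join string left as a plain space
comma_joined = ", ".join(words)  # join string set to `, ` as suggested

print(space_joined)  # blue sky mountain landscape
print(comma_joined)  # blue, sky, mountain, landscape
```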
Run the web UI, and click the `Training` sub-tab.
Create your embedding to train on by providing the following under the `Create embedding` section:
@ -237,7 +238,7 @@ Next, under the `Train` sub-tab:
* `prompt template file`: put in the path to the prompt file you created earlier. if you put it in the same folder as the web UI's default prompts, you can just swap your filename into the path already there
* `width` and `height`: I assume this determines the size of the image to generate when requested. Or it could actually work for training at different aspect ratios. I'd leave it at the default 512x512 for now.
* `max steps`: adjust how many steps you want training to run before terminating. Paperspace seems to let me do ~70000 on an A6000 before shutting down after 6 hours. An 80GB A100 will let me get shy of the full 100000 before auto-shutting down after 6 hours.
* `epoch length`: this value (*allegedly*) governs the learning rate correction when training based on defining how long an epoch is. for larger training sets, you would want to decrease this. I don't see any differences with this in the meantime.
* `epoch length`: this value is only cosmetic; it doesn't actually fulfill the dream of correcting the learning rate per epoch. don't even bother with this.
* `save an image/copy`: these two values are creature comforts and have no real effect on training; values are up to player preference.
* `preview prompt`: the prompt to use for the preview training image. if left empty, it'll use the last prompt used for training. it's useful for accurately measuring coherence between generations. I highly recommend using this with a prompt you want to use later. takes the same `[name]` and `[filewords]` keywords passed through to the template (see the sketch below)
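To make the `[name]`/`[filewords]` substitution concrete, here's a toy version of how a template line gets filled in; this is a sketch, not the web UI's actual implementation, and the names are invented:

```python
# Toy version of the prompt template substitution described above.
# `[name]` becomes the embedding's name, `[filewords]` becomes the
# (now comma separated) tags recovered from the image's filename.
embedding_name = "my-character"               # hypothetical embedding name
filewords = "blue, sky, mountain, landscape"  # hypothetical tags

template_line = "a photo of [name], [filewords]"

prompt = template_line.replace("[name]", embedding_name)
prompt = prompt.replace("[filewords]", filewords)

print(prompt)
# -> a photo of my-character, blue, sky, mountain, landscape
```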