|
1b18b3e335
|
forgot to save the simplified training input json first before touching any of the settings that dump to the yaml
|
2023-03-09 02:27:20 +00:00 |
|
|
221ac38b32
|
forgot to update to finetune subdir
|
2023-03-09 02:25:32 +00:00 |
|
|
0e80e311b0
|
added VRAM validation for a given batch size:gradient accumulation ratio (based empirically on 6GiB, 16GiB, and 16x2GiB; would be nice to have more data on what's safe)
|
2023-03-09 02:08:06 +00:00 |
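A rough illustration of what such an empirical check can look like; the tier thresholds, safe ratios, and names below are hypothetical placeholders, not the repo's actual measured values:

```python
import math

# Hypothetical VRAM tiers: GiB -> max safe batch:grad-accum ratio (illustrative only)
EMPIRICAL_SAFE_RATIOS = {6: 4, 16: 16, 32: 32}

def validate_batch_ratio(vram_gib: int, batch_size: int, grad_accum: int) -> list:
    tiers = sorted(t for t in EMPIRICAL_SAFE_RATIOS if t <= vram_gib)
    if not tiers:
        return [f"{vram_gib}GiB is below any tested tier; expect OOMs"]
    max_ratio = EMPIRICAL_SAFE_RATIOS[tiers[-1]]
    if batch_size / grad_accum > max_ratio:
        suggested = math.ceil(batch_size / max_ratio)
        return [
            f"batch:grad-accum ratio {batch_size}:{grad_accum} exceeds the "
            f"empirically safe {max_ratio}:1 for {vram_gib}GiB; raise the "
            f"gradient accumulation factor to at least {suggested}"
        ]
    return []
```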
|
|
3f321fe664
|
big cleanup to make my life easier when i add more parameters
|
2023-03-09 00:26:47 +00:00 |
|
|
8494628f3c
|
normalize validation batch size because i oom'd without it getting scaled
|
2023-03-08 05:27:20 +00:00 |
|
|
ff07f707cb
|
disable validation if the validation dataset is not found, clamp the validation batch size to the validation dataset size instead of simply reusing the batch size, and switch to the adamw_zero optimizer when training with multiple GPUs (because the yaml comment said to, and I think it might be why I'm absolutely having garbage luck training this japanese dataset)
|
2023-03-08 04:47:05 +00:00 |
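A minimal sketch of that logic; the config key names here are assumptions, not the actual DLAS YAML paths:

```python
import os

def configure_validation(cfg: dict, val_path: str, batch_size: int, gpus: int) -> dict:
    """Hypothetical helper mirroring the commit: skip validation without a
    dataset, clamp its batch size, and pick the optimizer by GPU count."""
    if not os.path.exists(val_path):
        cfg["eval_enabled"] = False  # no validation dataset, no validation
    else:
        with open(val_path, encoding="utf-8") as f:
            val_size = sum(1 for line in f if line.strip())
        cfg["val_batch_size"] = min(batch_size, val_size)
    # the stock YAML comments recommend adamw_zero for distributed training
    cfg["optimizer"] = "adamw_zero" if gpus > 1 else "adamw"
    return cfg
```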
|
|
b4098dca73
|
got validation working (will document later)
|
2023-03-08 02:58:00 +00:00 |
|
|
e862169e7f
|
set the validation rate to the save rate, and use the validation file if it exists (need to test later)
|
2023-03-07 20:38:31 +00:00 |
|
|
fe8bf7a9d1
|
added helper script to cull short-enough lines from the training set into a validation set (if doing validation during training yields good results, i'll add it to the web ui)
|
2023-03-07 20:16:49 +00:00 |
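Something along these lines; the paths, the `audio_path|transcription` line format, and the character cutoff are assumptions about the dataset layout:

```python
# Hypothetical culling script: move short lines out of train.txt into validation.txt
MAX_CHARS = 80  # "short enough" threshold, assumed

def is_short(line: str) -> bool:
    return len(line.split("|")[-1]) <= MAX_CHARS

with open("training/train.txt", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f if line.strip()]

validation = [line for line in lines if is_short(line)]
training = [line for line in lines if not is_short(line)]

with open("training/train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(training) + "\n")
with open("training/validation.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(validation) + "\n")
```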
|
|
3899f9b4e3
|
added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size and then computing the latents, which should help avoid splitting mid-phoneme)
|
2023-03-07 03:55:35 +00:00 |
|
|
0f0b394445
|
moved the (actually not working) setting to use BigVGAN into a dropdown to select between vocoders (for slotting in future ones), and added the ability to load a new vocoder while TTS is loaded
|
2023-03-07 02:45:22 +00:00 |
|
|
14779a5020
|
Added option to skip transcribing if it already exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm, since it doesn't close some file descriptors when batch-transcribing or something. So poor little me, who's retranscribing his japanese dataset for the 305823042th time, woke up to it partially done; I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done
|
2023-03-06 10:47:06 +00:00 |
|
|
0e3bbc55f8
|
added api_name for generation, added whisperx backend, relocated use whispercpp option to whisper backend list
|
2023-03-06 05:21:33 +00:00 |
|
|
5be14abc21
|
UI cleanup; actually fix syncing the epoch counter (i hope); setting the auto-suggested voice chunk size to 0 will just split based on the average duration length; signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model)
|
2023-03-05 23:55:27 +00:00 |
|
|
3e220ed306
|
added option to set worker size in training config generator (because the default is overkill), for whisper transcriptions, load a specialized language model if it exists (for now, only english), output transcription to web UI when done transcribing
|
2023-03-05 05:17:19 +00:00 |
|
|
5026d93ecd
|
sloppy fix to actually kill child processes when using multi-GPU distributed training, and set the GPU training count automatically based on what CUDA exposes so I don't have to keep setting it to 2
|
2023-03-04 20:42:54 +00:00 |
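In practice this kind of fix usually pairs psutil's recursive child enumeration with torch's device count; a sketch, with the function name being hypothetical:

```python
import psutil
import torch

def kill_training(pid: int) -> None:
    # Kill the spawned distributed workers before the parent, since
    # terminating only the parent leaves children holding GPU memory.
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.kill()
    parent.kill()

# auto-detect the GPU count instead of hardcoding 2
gpu_count = torch.cuda.device_count() if torch.cuda.is_available() else 1
```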
|
|
1a9d159b2a
|
forgot to add the 'bs / gradient accum < 2' validation clamp logic
|
2023-03-04 17:37:08 +00:00 |
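Reading the message literally, the clamp presumably looks something like this; the exact rule is an assumption:

```python
def clamp_grad_accum(batch_size: int, grad_accum: int) -> int:
    # If batch_size / grad_accum would drop below 2, shrink the
    # accumulation factor so each step still sees at least 2 samples.
    if batch_size / grad_accum < 2:
        grad_accum = max(1, batch_size // 2)
    return grad_accum
```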
|
|
df24827b9a
|
renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details)
|
2023-03-04 15:55:06 +00:00 |
|
|
6d5e1e1a80
|
fixed user inputted LR schedule not actually getting used (oops)
|
2023-03-04 04:41:56 +00:00 |
|
|
6d8c2dd459
|
auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider
|
2023-03-03 21:13:48 +00:00 |
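The arithmetic is simple enough to show; function and parameter names are hypothetical:

```python
def suggest_chunk_count(durations_sec: list, divisor: float = 10.0) -> int:
    # chunks = total voice duration / ~10 seconds; the divisor is the
    # slider mentioned above, so users can trade VRAM for chunk size.
    total = sum(durations_sec)
    return max(1, round(total / divisor))
```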
|
|
740b5587df
|
added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts
|
2023-03-03 06:39:37 +00:00 |
|
|
e859a7c01d
|
experimental multi-gpu training (Linux only, because I can't into batch files)
|
2023-03-03 04:37:18 +00:00 |
|
|
c956d81baf
|
added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh
|
2023-03-02 01:35:12 +00:00 |
|
|
b989123bd4
|
leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training)
|
2023-03-01 19:32:11 +00:00 |
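Reading tb_logger event files directly is straightforward with tensorboard's EventAccumulator; the log path and scalar tag below are assumptions:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("./training/finetune/tb_logger")  # hypothetical path
acc.Reload()  # parse the event files on disk
for scalar in acc.Scalars("loss_gpt_total"):  # tag name assumed
    print(scalar.step, scalar.value)
```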
|
|
c2726fa0d4
|
added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier
|
2023-03-01 01:17:38 +00:00 |
|
|
5037752059
|
oops
|
2023-02-28 22:13:21 +00:00 |
|
|
787b44807a
|
added to embedded metadata: datetime, model path, model hash
|
2023-02-28 15:36:06 +00:00 |
|
|
81eb58f0d6
|
show different losses, rewordings
|
2023-02-28 06:18:18 +00:00 |
|
|
bc0d9ab3ed
|
added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else
|
2023-02-28 01:01:50 +00:00 |
|
|
6925ec731b
|
I don't remember.
|
2023-02-27 19:20:06 +00:00 |
|
|
92553973be
|
Added option to disable bitsandbytes optimizations for systems that do not support them (systems without a Turing-onward Nvidia card); saves the use of float16 and bitsandbytes for training into the config json
|
2023-02-26 01:57:56 +00:00 |
|
|
d5d8821a9d
|
fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generation and training, and some other thing that I forgot since it's been slowly worked on in my small bits of free time
|
2023-02-24 23:13:13 +00:00 |
|
|
f31ea9d5bc
|
oops
|
2023-02-24 16:23:30 +00:00 |
|
|
f6d0b66e10
|
finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them
|
2023-02-24 12:58:41 +00:00 |
|
|
1e0fec4358
|
god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), plus some other cleanups; i genuinely don't remember what I did when I spaced out for 20 minutes
|
2023-02-23 23:22:23 +00:00 |
|
|
1cbcf14cff
|
oops
|
2023-02-23 13:18:51 +00:00 |
|
|
225dee22d4
|
huge success
|
2023-02-23 06:24:54 +00:00 |
|
|
2aa70532e8
|
added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length like it should)
|
2023-02-22 03:31:46 +00:00 |
|
|
cc47ed7242
|
kmsing
|
2023-02-22 03:27:28 +00:00 |
|
|
93b061fb4d
|
oops
|
2023-02-22 03:21:03 +00:00 |
|
|
fefc7aba03
|
oops
|
2023-02-21 22:13:30 +00:00 |
|
|
9e64dad785
|
clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already
|
2023-02-21 21:50:05 +00:00 |
|
|
8a1a48f31e
|
Added very experimental float16 training for cards without enough VRAM (10GiB and below, maybe). !NOTE! this is VERY EXPERIMENTAL; I have zero free time to validate it right now, I'll do it later
|
2023-02-21 19:31:57 +00:00 |
|
|
bbc2d26289
|
I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and now make sense
|
2023-02-21 03:00:45 +00:00 |
|
|
ee95616dfd
|
optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs)
|
2023-02-19 21:06:14 +00:00 |
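One way to do that normalization; a sketch, not necessarily the repo's exact approach:

```python
def nearest_divisible_batch_size(dataset_size: int, requested: int) -> int:
    # Walk down from the requested batch size to the largest value that
    # divides the dataset evenly, so no ragged final batch skews the
    # iteration/epoch math.
    for bs in range(min(requested, dataset_size), 0, -1):
        if dataset_size % bs == 0:
            return bs
    return 1
```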
|
|
6260594a1e
|
Forgot to base print/save frequencies in terms of epochs in the UI; they will get converted when saving the YAML
|
2023-02-19 20:38:00 +00:00 |
|
|
4694d622f4
|
doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there
|
2023-02-19 20:22:03 +00:00 |
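The conversion itself is one line of arithmetic; a sketch with hypothetical names:

```python
import math

def epochs_to_iterations(epochs: int, dataset_size: int, batch_size: int) -> int:
    # One epoch is a full pass over the dataset, so the trainer's
    # iteration target is epochs * steps-per-pass.
    return epochs * math.ceil(dataset_size / batch_size)
```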
|
|
ec76676b16
|
i hate gradio, I hate having to specify step=1
|
2023-02-19 17:12:39 +00:00 |
|
|
092dd7b2d7
|
added more safeties and parameters to training yaml generator, I think I tested it extensively enough
|
2023-02-19 16:16:44 +00:00 |
|
|
e7d0cfaa82
|
added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training
|
2023-02-19 05:05:30 +00:00 |
|