ccbf2e6aff
blame mrq/ai-voice-cloning#122
2023-03-12 17:51:52 +00:00
9238df0b03
fixed last generation settings not actually loading because of brain worms
2023-03-12 15:49:50 +00:00
9594a960b0
Disable loss ETA for now until I fix it
2023-03-12 15:39:54 +00:00
mrq
be8b290a1a
Merge branch 'master' into save_more_user_config
2023-03-12 15:38:08 +00:00
098d7ad635
uh I don't remember, small things
2023-03-12 14:47:48 +00:00
233baa4e45
updated several default configurations to not cause null/empty errors; also defaulted samples/iterations to 16/30 ("Ultra Fast"), which is typically suggested
2023-03-12 16:08:02 +02:00
9e320a34c8
Fixed Keep X Previous States
2023-03-12 08:00:03 +02:00
ede9804b76
added option to trim silence using torchaudio's VAD
2023-03-11 21:41:35 +00:00
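A minimal sketch of what this trimming might look like (the file paths are hypothetical; note torchaudio's `functional.vad` only trims leading silence, so it has to be applied twice):

```python
import torchaudio

def trim_silence(waveform, sample_rate):
    # vad() only trims silence from the front, so run it once forward
    # and once on the time-reversed signal to trim both ends.
    trimmed = torchaudio.functional.vad(waveform, sample_rate)
    return torchaudio.functional.vad(trimmed.flip(-1), sample_rate).flip(-1)

waveform, sr = torchaudio.load("voices/sample.wav")  # hypothetical path
torchaudio.save("voices/sample-trimmed.wav", trim_silence(waveform, sr), sr)
```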
dea2fa9caf
added fields to offset start/end slices to apply in bulk when slicing
2023-03-11 21:34:29 +00:00
89bb3d4419
rename transcribe button since it does more than transcribe
2023-03-11 21:18:04 +00:00
382a3e4104
rely on the whisper.json for handling a lot more things
2023-03-11 21:17:11 +00:00
94551fb9ac
split slicing dataset routine so it can be done after the fact
2023-03-11 17:27:01 +00:00
2424c455cb
added option to not slice audio when transcribing; added option to prepare the validation dataset based on audio duration; added a warning if you're using whisperx and slicing audio
2023-03-11 16:32:35 +00:00
tigi6346
dcdcf8516c
master (#112)
Fixes Gradio bugging out when attempting to load a missing train.json.
Reviewed-on: mrq/ai-voice-cloning#112
Co-authored-by: tigi6346 <tigi6346@noreply.localhost>
Co-committed-by: tigi6346 <tigi6346@noreply.localhost>
2023-03-11 03:28:04 +00:00
7f2da0f5fb
rewrote how AIVC gets training metrics (need to clean up later)
2023-03-10 22:35:32 +00:00
8e890d3023
forgot to fix reset settings to use the new arg-agnostic way
2023-03-10 13:49:39 +00:00
cb273b8428
cleanup
2023-03-09 18:34:52 +00:00
7c71f7239c
expose options for CosineAnnealingLR_Restart (it seems to be able to train very quickly due to the restarts)
2023-03-09 14:17:01 +00:00
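The repo exposes DLAS's CosineAnnealingLR_Restart; as a rough analogue, PyTorch's built-in warm-restarts scheduler behaves similarly (the step counts below are assumptions, not the repo's defaults):

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = torch.nn.Linear(16, 16)  # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# Restart the cosine cycle every 500 steps, doubling the period each time;
# the LR snapping back up at each restart is what speeds training along.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=500, T_mult=2, eta_min=1e-6)

for step in range(2000):
    # ... forward/backward and optimizer.step() would go here ...
    scheduler.step()
```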
2f6dd9c076
some cleanup
2023-03-09 06:20:05 +00:00
5460e191b0
added loss graph, because I'm going to experiment with cosine annealing LR and I need to view my loss
2023-03-09 05:54:08 +00:00
1b18b3e335
forgot to save the simplified training input json first before touching any of the settings that dump to the yaml
2023-03-09 02:27:20 +00:00
221ac38b32
forgot to update to finetune subdir
2023-03-09 02:25:32 +00:00
0e80e311b0
added VRAM validation for a given batch:gradient-accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB; would be nice to have more data on what's safe)
2023-03-09 02:08:06 +00:00
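A sketch of the shape such a check might take; the constant below is a made-up placeholder, not the ratio actually fitted from those cards:

```python
VRAM_GIB_PER_RESIDENT_SAMPLE = 0.75  # hypothetical, not the repo's fitted value

def validate_vram(batch_size, grad_accum, vram_gib):
    # Only batch_size / grad_accum samples are resident per backward pass,
    # so that ratio, not the raw batch size, is what must fit in VRAM.
    resident = batch_size / grad_accum
    return resident * VRAM_GIB_PER_RESIDENT_SAMPLE <= vram_gib
```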
3f321fe664
big cleanup to make my life easier when i add more parameters
2023-03-09 00:26:47 +00:00
8494628f3c
normalize validation batch size because I OOM'd without it getting scaled
2023-03-08 05:27:20 +00:00
ff07f707cb
disable validation if the validation dataset is not found; clamp validation batch size to the validation dataset size instead of simply reusing the batch size; switch to the adamw_zero optimizer when training with multiple GPUs (because the YAML comment said to, and I think it might be why I'm having absolutely garbage luck training this Japanese dataset)
2023-03-08 04:47:05 +00:00
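A minimal sketch of the disable-or-clamp logic described here (the function name is an assumption):

```python
def resolve_validation_batch_size(batch_size, val_dataset_size):
    if val_dataset_size <= 0:
        return None  # no validation dataset found: disable validation
    # Clamp to the dataset size instead of reusing the training batch
    # size, which can exceed the validation set.
    return min(batch_size, val_dataset_size)
```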
b4098dca73
got validation working (will document later)
2023-03-08 02:58:00 +00:00
e862169e7f
set validation to the save rate, and use the validation file if it exists (need to test later)
2023-03-07 20:38:31 +00:00
fe8bf7a9d1
added helper script to cull short-enough lines from the training set as a validation set (if doing validation during training yields good results, I'll add it to the web UI)
2023-03-07 20:16:49 +00:00
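A sketch of what such a culling script might do, assuming LJSpeech-style `audio_path|transcription` rows and a made-up length threshold:

```python
MAX_CHARS = 80  # hypothetical threshold for "short enough"

def cull_validation(train_path="train.txt", val_path="validation.txt"):
    with open(train_path, encoding="utf-8") as f:
        lines = f.readlines()
    keep, culled = [], []
    for line in lines:
        text = line.split("|", 1)[-1].strip()  # "audio_path|transcription"
        (culled if len(text) <= MAX_CHARS else keep).append(line)
    with open(train_path, "w", encoding="utf-8") as f:
        f.writelines(keep)    # training set minus the short lines
    with open(val_path, "w", encoding="utf-8") as f:
        f.writelines(culled)  # the short lines become the validation set
```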
3899f9b4e3
added (yet another) experimental voice latent calculation mode: when chunk size is 0 and there's a dataset generated, it'll leverage it by padding the clips to a common size then computing the latents, which should help avoid splitting mid-phoneme
2023-03-07 03:55:35 +00:00
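The padding-to-a-common-size step might look roughly like this (a sketch, not the repo's actual routine):

```python
import torch
import torch.nn.functional as F

def pad_to_common_size(clips):
    # Right-pad every clip to the longest one so no clip gets chunked
    # mid-phoneme, then stack them into one batch for latent computation.
    longest = max(clip.shape[-1] for clip in clips)
    return torch.stack([F.pad(clip, (0, longest - clip.shape[-1])) for clip in clips])
```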
0f0b394445
moved the (actually not working) setting to use BigVGAN into a dropdown for selecting between vocoders (for when future ones get slotted in), and added the ability to load a new vocoder while TTS is loaded
2023-03-07 02:45:22 +00:00
14779a5020
Added option to skip transcribing if an entry already exists in the output text file, because whisperx will apparently throw a "max files opened" error under ROCm (it doesn't close some file descriptors when batch-transcribing or something); so poor little me, retranscribing his Japanese dataset for the 305823042th time, woke up to it only partially done, and I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done
2023-03-06 10:47:06 +00:00
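A resume-friendly sketch of the skip logic, keyed off the whisper.json mentioned above (`transcribe` is a stand-in stub, not the repo's function):

```python
import json
import os

def transcribe(path):
    raise NotImplementedError("stand-in for the actual whisper backend call")

def transcribe_all(files, out_path="whisper.json"):
    results = {}
    if os.path.exists(out_path):
        with open(out_path, encoding="utf-8") as f:
            results = json.load(f)  # reuse whatever already finished
    for path in files:
        if path in results:
            continue  # skip: already transcribed on a previous (crashed) run
        results[path] = transcribe(path)
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(results, f, indent=4)  # save as we go so a crash loses little
```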
0e3bbc55f8
added api_name for generation; added whisperx backend; relocated the 'use whispercpp' option into the whisper backend list
2023-03-06 05:21:33 +00:00
5be14abc21
UI cleanup; actually fixed syncing the epoch counter (I hope); setting the auto-suggested voice chunk size to 0 will just split based on the average duration length; signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model)
2023-03-05 23:55:27 +00:00
3e220ed306
added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if one exists (for now, English only); output the transcription to the web UI when done transcribing
2023-03-05 05:17:19 +00:00
5026d93ecd
sloppy fix to actually kill child processes when using multi-GPU distributed training; set the GPU training count automatically based on what CUDA exposes so I don't have to keep setting it to 2
2023-03-04 20:42:54 +00:00
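A Linux-only sketch of both halves (the train.py invocation is hypothetical): launch the trainer in its own process group so the whole tree, children included, dies with one signal, and let CUDA report the GPU count:

```python
import os
import signal
import subprocess
import torch

gpu_count = torch.cuda.device_count()  # no more hardcoding it to 2

# Put the trainer in its own session/process group so killing it also
# kills the per-GPU worker processes it spawns.
proc = subprocess.Popen(["python", "train.py"], start_new_session=True)
# ... later, when training is halted:
os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
```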
1a9d159b2a
forgot to add the 'bs / gradient accum < 2' clamp validation logic
2023-03-04 17:37:08 +00:00
df24827b9a
renamed mega batch factor to an actual real term: gradient accumulation factor; fixed halting training not actually killing the training process and freeing up resources; some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes; read the training section in the wiki for more details)
2023-03-04 15:55:06 +00:00
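For reference, gradient accumulation in its generic form: step the optimizer once every `grad_accum` micro-batches, so the gradients match a batch `grad_accum` times larger than what is resident in VRAM (a minimal sketch, not the repo's trainer):

```python
def train_epoch(model, loader, optimizer, criterion, grad_accum):
    optimizer.zero_grad()
    for i, (inputs, targets) in enumerate(loader):
        # Average across micro-batches so the summed gradients match
        # one large batch of size micro_batch * grad_accum.
        loss = criterion(model(inputs), targets) / grad_accum
        loss.backward()  # gradients accumulate until the step below
        if (i + 1) % grad_accum == 0:
            optimizer.step()
            optimizer.zero_grad()
```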
6d5e1e1a80
fixed user inputted LR schedule not actually getting used (oops)
2023-03-04 04:41:56 +00:00
6d8c2dd459
the auto-suggested voice chunk size is now based on the total duration of the voice files divided by 10 seconds; added a setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM when blindly generating without adjusting this slider
2023-03-03 21:13:48 +00:00
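The suggestion boils down to total duration over the divisor; a sketch under that reading (the function name is an assumption):

```python
import torchaudio

def suggest_chunk_size(voice_files, divisor=10.0):
    # Total duration of the voice files divided by the (user-adjustable)
    # divisor, which defaults to 10 seconds per chunk.
    total_seconds = 0.0
    for path in voice_files:
        info = torchaudio.info(path)
        total_seconds += info.num_frames / info.sample_rate
    return max(1, round(total_seconds / divisor))
```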
740b5587df
added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts
2023-03-03 06:39:37 +00:00
e859a7c01d
experimental multi-GPU training (Linux only, because I can't into batch files)
2023-03-03 04:37:18 +00:00
c956d81baf
added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh
2023-03-02 01:35:12 +00:00
b989123bd4
leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training)
2023-03-01 19:32:11 +00:00
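Parsing tb_logger event files can be done with tensorboard's EventAccumulator; a minimal sketch (the tag matches the loss_gpt_total graph mentioned elsewhere in this log, but is still an assumption here):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def load_scalars(log_dir, tag="loss_gpt_total"):
    acc = EventAccumulator(log_dir)
    acc.Reload()  # parse the tb_logger event files on disk
    if tag not in acc.Tags().get("scalars", []):
        return []
    return [(event.step, event.value) for event in acc.Scalars(tag)]
```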
c2726fa0d4
added new training tunable: loss_text_ce_loss weight; added option to specify a source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice; need to truly validate whether it produces usable output); some bug fixes that came up for some reason now and not earlier
2023-03-01 01:17:38 +00:00
5037752059
oops
2023-02-28 22:13:21 +00:00
787b44807a
added to embedded metadata: datetime, model path, model hash
2023-02-28 15:36:06 +00:00
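A sketch of fingerprinting a checkpoint for the embedded metadata; SHA-256 here is an assumption, and the repo may use a different digest:

```python
import hashlib

def model_hash(path):
    # Hash the checkpoint file so a generation can be traced back to the
    # exact weights that produced it.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()
```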
81eb58f0d6
show different losses, rewordings
2023-02-28 06:18:18 +00:00
bc0d9ab3ed
added graph to chart the loss_gpt_total rate; added option to keep only the X most recent models/states (pruning older ones); something else
2023-02-28 01:01:50 +00:00
6925ec731b
I don't remember.
2023-02-27 19:20:06 +00:00