|
51339671ec
|
typo
|
2023-03-09 02:29:08 +00:00 |
|
|
1b18b3e335
|
forgot to save the simplified training input json first before touching any of the settings that dump to the yaml
|
2023-03-09 02:27:20 +00:00 |
|
|
221ac38b32
|
forgot to update to finetune subdir
|
2023-03-09 02:25:32 +00:00 |
|
|
0e80e311b0
|
added VRAM validation for a given batch:gradient accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB; would be nice to have more data on what's safe)
|
2023-03-09 02:08:06 +00:00 |
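A minimal sketch of what a VRAM sanity check like the one in the commit above might look like; the threshold table and function name here are assumptions for illustration, not the values actually derived from the 6GiB/16GiB/16x2GiB data points.

```python
import torch

# Hypothetical, empirically-derived table: detected VRAM (GiB) -> largest
# per-step batch size (batch size / gradient accumulation factor) seen to be safe.
EMPIRICAL_SAFE_RATIO = {6: 8, 16: 16, 32: 32}

def validate_batch_ratio(batch_size, grad_accum):
    """Return warning strings if batch_size / grad_accum looks unsafe for this GPU."""
    warnings = []
    if batch_size % grad_accum != 0:
        warnings.append("batch size should divide evenly by the gradient accumulation factor")
    if torch.cuda.is_available():
        vram_gib = torch.cuda.get_device_properties(0).total_memory / (1024 ** 3)
        # use the closest empirical data point at or below the detected VRAM
        known = [k for k in sorted(EMPIRICAL_SAFE_RATIO) if k <= vram_gib]
        if known:
            safe = EMPIRICAL_SAFE_RATIO[known[-1]]
            per_step = batch_size // grad_accum
            if per_step > safe:
                warnings.append(
                    f"batch:gradient-accumulation ratio {per_step} may OOM on "
                    f"~{vram_gib:.0f}GiB; try {safe} or lower"
                )
    return warnings
```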
|
|
ef7b957fff
|
oops
|
2023-03-09 00:53:00 +00:00 |
|
|
b0baa1909a
|
forgot template
|
2023-03-09 00:32:35 +00:00 |
|
|
3f321fe664
|
big cleanup to make my life easier when i add more parameters
|
2023-03-09 00:26:47 +00:00 |
|
|
0ab091e7ff
|
oops
|
2023-03-08 16:09:29 +00:00 |
|
|
34dcb845b5
|
actually make using adamw_zero optimizer for multi-gpus work
|
2023-03-08 15:31:33 +00:00 |
|
|
8494628f3c
|
normalize validation batch size because i oom'd without it getting scaled
|
2023-03-08 05:27:20 +00:00 |
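One way to read "normalizing" the validation batch size: scale it by the same batch/gradient-accumulation split used for training so a validation batch never exceeds what a single training step fits in VRAM, then clamp it to the validation set size (per the earlier ff07f707cb commit). A sketch, with names assumed:

```python
def normalize_validation_batch(batch_size, grad_accum, val_dataset_size):
    """Validation runs at the per-step size the training batch is split into,
    clamped to the validation dataset size so it never exceeds either bound."""
    per_step = max(1, batch_size // grad_accum)  # what actually fits in one step
    return max(1, min(per_step, val_dataset_size))
```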
|
|
d7e75a51cf
|
I forgot about the changelog and never kept up with it, so I'll just not use a changelog
|
2023-03-08 05:14:50 +00:00 |
|
|
ff07f707cb
|
disable validation if validation dataset not found, clamp validation batch size to validation dataset size instead of simply reusing batch size, switch to adamw_zero optimizer when training with multi-gpus (because the yaml comment said to, and I think it might be why I'm absolutely having garbage luck training this japanese dataset)
|
2023-03-08 04:47:05 +00:00 |
|
|
f1788a5639
|
lazy wrap around the voicefixer block because sometimes it just an heros itself despite having a specific block to load it beforehand
|
2023-03-08 04:12:22 +00:00 |
|
|
83b5125854
|
fixed notebooks, provided paperspace notebook
|
2023-03-08 03:29:12 +00:00 |
|
|
b4098dca73
|
made validation work (will document later)
|
2023-03-08 02:58:00 +00:00 |
|
|
a7e0dc9127
|
oops
|
2023-03-08 00:51:51 +00:00 |
|
|
e862169e7f
|
set validation rate to the save rate, and use the validation file if it exists (need to test later)
|
2023-03-07 20:38:31 +00:00 |
|
|
fe8bf7a9d1
|
added helper script to cull short enough lines from training set as a validation set (if it yields good results doing validation during training, i'll add it to the web ui)
|
2023-03-07 20:16:49 +00:00 |
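A sketch of what such a culling script might do, assuming the LJSpeech-style `audio_path|transcription` line format used for training sets in this project; the file paths and length threshold are illustrative.

```python
# Move training lines whose transcription is short enough into a validation set.
THRESHOLD = 12  # illustrative: cull lines with fewer than 12 characters of text

with open("./training/train.txt", "r", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

train, validation = [], []
for line in lines:
    _, text = line.split("|", 1)
    (validation if len(text) < THRESHOLD else train).append(line)

with open("./training/train.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(train))
with open("./training/validation.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(validation))
```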
|
|
7f89e8058a
|
fixed update checker for dlas+tortoise-tts
|
2023-03-07 19:33:56 +00:00 |
|
|
6d7e143f53
|
added override for large training plots
|
2023-03-07 19:29:09 +00:00 |
|
|
3718e9d0fb
|
set NaN alarm to show the iteration it happened at
|
2023-03-07 19:22:11 +00:00 |
|
|
c27ee3ce95
|
added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same
|
2023-03-07 17:04:45 +00:00 |
|
|
166d491a98
|
fixes
|
2023-03-07 13:40:41 +00:00 |
|
|
df5ba634c0
|
brain dead
|
2023-03-07 05:43:26 +00:00 |
|
|
2726d98ee1
|
fried my brain trying to nail down bugs involving using solely ar model=auto
|
2023-03-07 05:35:21 +00:00 |
|
|
d7a5ad9fd9
|
cleaned up some model loading logic, added 'auto' mode for AR model (deduced by current voice)
|
2023-03-07 04:34:39 +00:00 |
|
|
3899f9b4e3
|
added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size then computing them, should help avoid splitting mid-phoneme)
|
2023-03-07 03:55:35 +00:00 |
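Roughly what "padding to a common size then computing them" could look like; the tensor handling and the conditioning call are placeholders rather than the actual mrq/tortoise-tts API.

```python
import torch
import torch.nn.functional as F

def compute_latents_from_dataset(clips, tts):
    """Pad every dataset clip to the longest clip's length (instead of slicing
    audio into fixed chunks, which can cut mid-phoneme), then compute latents.
    `clips` is assumed to be a list of 1-D waveform tensors."""
    longest = max(clip.shape[-1] for clip in clips)
    padded = torch.stack([F.pad(clip, (0, longest - clip.shape[-1])) for clip in clips])
    # Placeholder call: the real project computes conditioning latents through
    # its TTS wrapper; the exact method name/signature is not shown here.
    return tts.get_conditioning_latents(padded)
```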
|
|
5063728bb0
|
brain worms and headaches
|
2023-03-07 03:01:02 +00:00 |
|
|
0f31c34120
|
download dvae.pth for the people who somehow managed to put the web UI into a state where it never initializes TTS at all
|
2023-03-07 02:47:10 +00:00 |
|
|
0f0b394445
|
moved (actually not working) setting to use BigVGAN to a dropdown to select between vocoders (for when slotting in future ones), and ability to load a new vocoder while TTS is loaded
|
2023-03-07 02:45:22 +00:00 |
|
|
e731b9ba84
|
reworked generating metadata to embed, should now store overridden settings
|
2023-03-06 23:07:16 +00:00 |
|
|
7798767fc6
|
added settings editing (will add a guide on what to do later, and an example)
|
2023-03-06 21:48:34 +00:00 |
|
|
119ac50c58
|
forgot to re-append the existing transcription when skipping existing (have to go back again and do the first 10% of my giant dataset)
|
2023-03-06 16:50:55 +00:00 |
|
|
12c51b6057
|
I'm not too sure if manually invoking gc actually closes all the open files from whisperx (or ROCm), but the issue seems to have gone away alongside setting 'ulimit -Sn' to half the output of 'ulimit -Hn'
|
2023-03-06 16:39:37 +00:00 |
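The same workaround expressed in Python rather than the shell, assuming it runs before transcription starts; the `resource` module is Unix-only, and whether `gc.collect()` actually releases the whisperx file handles is, as the commit says, unverified.

```python
import gc
import resource  # Unix-only

def raise_open_file_limit():
    """Set the soft open-file limit to half the hard limit, mirroring
    `ulimit -Sn $(( $(ulimit -Hn) / 2 ))`, then nudge the garbage collector
    in case whisperx/ROCm left file objects dangling."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, hard // 2), hard))
    gc.collect()
```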
|
|
999878d9c6
|
and it turned out I wasn't even using the aligned segments, kmsing now that I have to *redo* my dataset again
|
2023-03-06 11:01:33 +00:00 |
|
|
14779a5020
|
Added option to skip transcribing if it already exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm, since it does not close some file descriptors if you're batch-transcribing or something. So poor little me, who's retranscribing his japanese dataset for the 305823042th time, woke up to it partially done; I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done
|
2023-03-06 10:47:06 +00:00 |
|
|
0e3bbc55f8
|
added api_name for generation, added whisperx backend, relocated use whispercpp option to whisper backend list
|
2023-03-06 05:21:33 +00:00 |
|
|
788a957f79
|
stretch loss plot to target iteration just so it's not so misleading with the scale
|
2023-03-06 00:44:29 +00:00 |
|
|
5be14abc21
|
UI cleanup, actually fix syncing the epoch counter (I hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration length, signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model)
|
2023-03-05 23:55:27 +00:00 |
|
|
287738a338
|
(should) fix reported epoch metric desyncing from the de facto metric, fixed finding the next milestone from the wrong sign because of 2AM brain
|
2023-03-05 20:42:45 +00:00 |
|
|
206a14fdbe
|
brainworms
|
2023-03-05 20:30:27 +00:00 |
|
|
b82961ba8a
|
typo
|
2023-03-05 20:13:39 +00:00 |
|
|
b2e89d8da3
|
oops
|
2023-03-05 19:58:15 +00:00 |
|
|
8094401a6d
|
print in e-notation for LR
|
2023-03-05 19:48:24 +00:00 |
|
|
8b9c9e1bbf
|
remove redundant stats, add showing LR
|
2023-03-05 18:53:12 +00:00 |
|
|
0231550287
|
forgot to remove a debug print
|
2023-03-05 18:27:16 +00:00 |
|
|
d97639e138
|
whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio checks for silent segments and discards them
|
2023-03-05 17:54:36 +00:00 |
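Presumably the "divide time by 100" fix converts whispercpp's centisecond segment offsets into seconds before slicing the audio; a sketch with assumed segment fields.

```python
def slice_segment(waveform, sample_rate, segment):
    """whispercpp reports segment offsets in centiseconds, so divide by 100
    to get seconds before converting to sample indices (field names assumed)."""
    start_s = segment["start"] / 100.0
    end_s = segment["end"] / 100.0
    return waveform[..., int(start_s * sample_rate):int(end_s * sample_rate)]
```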
|
|
b8a620e8d7
|
actually accumulate derivatives when estimating milestones and final loss by using half of the log
|
2023-03-05 14:39:24 +00:00 |
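One plausible reading of "accumulate derivatives using half of the log": average the per-iteration loss slope over the most recent half of the loss history rather than using only the last instantaneous delta (as in the earlier ce3866d0cd commit), then extrapolate to each milestone. A sketch with assumed data shapes:

```python
def estimate_iterations_to_milestones(losses, iterations, milestones=(1.0, 0.5, 0.1)):
    """losses/iterations are parallel lists of logged values, oldest first.
    Average the slope over the latter half of the log, then linearly
    extrapolate how many more iterations each milestone needs."""
    if len(losses) < 4:
        return {}
    half = len(losses) // 2
    deltas = [
        (losses[i] - losses[i - 1]) / (iterations[i] - iterations[i - 1])
        for i in range(half, len(losses))
    ]
    slope = sum(deltas) / len(deltas)  # average loss change per iteration
    if slope >= 0:
        return {}  # not improving; no sensible estimate
    current = losses[-1]
    return {m: (m - current) / slope for m in milestones if m < current}
```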
|
|
35225a35da
|
oops v2
|
2023-03-05 14:19:41 +00:00 |
|
|
b5e9899bbf
|
5 hour sleep brained
|
2023-03-05 13:37:05 +00:00 |
|
|
cd8702ab0d
|
oops
|
2023-03-05 13:24:07 +00:00 |
|
|
d312019d05
|
reordered things so it uses fresh data and not last-updated data
|
2023-03-05 07:37:27 +00:00 |
|
|
ce3866d0cd
|
added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr); very, very inaccurate because it uses instantaneous delta lr, I'll need to do a Riemann sum later
|
2023-03-05 06:45:07 +00:00 |
|
|
1316331be3
|
forgot to have it try to auto-detect the language for openai/whisper when no language is specified
|
2023-03-05 05:22:35 +00:00 |
|
|
3e220ed306
|
added option to set worker size in training config generator (because the default is overkill), for whisper transcriptions, load a specialized language model if it exists (for now, only english), output transcription to web UI when done transcribing
|
2023-03-05 05:17:19 +00:00 |
|
|
37cab14272
|
use torchrun instead for multigpu
|
2023-03-04 20:53:00 +00:00 |
|
|
5026d93ecd
|
sloppy fix to actually kill children when using multi-GPU distributed training, set GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2
|
2023-03-04 20:42:54 +00:00 |
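Together, the two commits above likely amount to a launcher along these lines: the GPU count comes from whatever CUDA exposes, torchrun spawns the per-GPU workers, and killing the whole process group is what actually stops the children. The script path and the `-opt` flag are assumptions for illustration.

```python
import os
import signal
import subprocess

import torch

def spawn_training(config_yaml="./training/train.yaml"):
    """Launch training across every GPU CUDA exposes (no more hardcoding 2)."""
    gpus = torch.cuda.device_count()
    if gpus > 1:
        cmd = ["torchrun", f"--nproc_per_node={gpus}", "./src/train.py", "-opt", config_yaml]
    else:
        cmd = ["python3", "./src/train.py", "-opt", config_yaml]
    # start_new_session puts the workers in their own process group, so halting
    # training from the UI can kill the whole group instead of orphaning children
    return subprocess.Popen(cmd, start_new_session=True)

def halt_training(process):
    os.killpg(os.getpgid(process.pid), signal.SIGKILL)
```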
|
|
1a9d159b2a
|
forgot to add the 'bs / gradient accum < 2' clamp validation logic
|
2023-03-04 17:37:08 +00:00 |
|
|
df24827b9a
|
renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details)
|
2023-03-04 15:55:06 +00:00 |
|
|
6d5e1e1a80
|
fixed user inputted LR schedule not actually getting used (oops)
|
2023-03-04 04:41:56 +00:00 |
|
|
6d8c2dd459
|
auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider
|
2023-03-03 21:13:48 +00:00 |
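A sketch of the suggested chunk-size calculation described above: total voice duration divided by 10 seconds, with the divisor exposed as the adjustable setting. Probing durations through torchaudio is an assumption here, not necessarily how the web UI does it.

```python
import math
from glob import glob

import torchaudio

def suggest_voice_chunks(voice_dir, seconds_per_chunk=10.0):
    """Suggest a chunk count of (total voice duration / N seconds); N is the
    user-adjustable division factor, so it can be raised to avoid OOM."""
    total = 0.0
    for path in glob(f"{voice_dir}/*.wav"):
        info = torchaudio.info(path)
        total += info.num_frames / info.sample_rate
    return max(1, math.ceil(total / seconds_per_chunk))
```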
|
|
e1f3ffa08c
|
oops
|
2023-03-03 18:51:33 +00:00 |
|
|
9fb4aa7917
|
validated whispercpp working, fixed args.listen not being saved due to brainworms
|
2023-03-03 07:23:10 +00:00 |
|
|
740b5587df
|
added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts
|
2023-03-03 06:39:37 +00:00 |
|
|
68f4858ce9
|
oops
|
2023-03-03 05:51:17 +00:00 |
|
|
e859a7c01d
|
experimental multi-gpu training (Linux only, because I can't into batch files)
|
2023-03-03 04:37:18 +00:00 |
|
|
c956d81baf
|
added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh
|
2023-03-02 01:35:12 +00:00 |
|
|
534a761e49
|
added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models
|
2023-03-02 00:46:52 +00:00 |
|
|
5a41db978e
|
oops
|
2023-03-01 19:39:43 +00:00 |
|
|
b989123bd4
|
leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training)
|
2023-03-01 19:32:11 +00:00 |
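Parsing the tb_logger event files is most likely done through tensorboard's `EventAccumulator`; a sketch, with the scalar tag name assumed from the loss graph commits.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def load_loss_curve(tb_logger_dir, tag="loss_gpt_total"):
    """Read scalar events written to the tb_logger directory and return
    (step, value) pairs for plotting."""
    accumulator = EventAccumulator(tb_logger_dir)
    accumulator.Reload()  # actually read the event files from disk
    if tag not in accumulator.Tags().get("scalars", []):
        return []
    return [(event.step, event.value) for event in accumulator.Scalars(tag)]
```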
|
|
c2726fa0d4
|
added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier
|
2023-03-01 01:17:38 +00:00 |
|
|
5037752059
|
oops
|
2023-02-28 22:13:21 +00:00 |
|
|
787b44807a
|
added to embedded metadata: datetime, model path, model hash
|
2023-02-28 15:36:06 +00:00 |
|
|
81eb58f0d6
|
show different losses, rewordings
|
2023-02-28 06:18:18 +00:00 |
|
|
fda47156ec
|
oops
|
2023-02-28 01:08:07 +00:00 |
|
|
bc0d9ab3ed
|
added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else
|
2023-02-28 01:01:50 +00:00 |
|
|
6925ec731b
|
I don't remember.
|
2023-02-27 19:20:06 +00:00 |
|
|
92553973be
|
Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card), saves use of float16 and bitsandbytes for training into the config json
|
2023-02-26 01:57:56 +00:00 |
|
|
aafeb9f96a
|
actually fixed the training output text parser
|
2023-02-25 16:44:25 +00:00 |
|
|
65329dba31
|
oops, epoch increments twice
|
2023-02-25 15:31:18 +00:00 |
|
|
8b4da29d5f
|
some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420)
|
2023-02-25 13:55:25 +00:00 |
|
|
d5d8821a9d
|
fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small bits of free time
|
2023-02-24 23:13:13 +00:00 |
|
|
f31ea9d5bc
|
oops
|
2023-02-24 16:23:30 +00:00 |
|
|
2104dbdbc5
|
oops
|
2023-02-24 13:05:08 +00:00 |
|
|
f6d0b66e10
|
finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them
|
2023-02-24 12:58:41 +00:00 |
|
|
1e0fec4358
|
god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups I genuinely don't remember because I spaced out for 20 minutes
|
2023-02-23 23:22:23 +00:00 |
|
|
7d1220e83e
|
forgot to mult by batch size
|
2023-02-23 15:38:04 +00:00 |
|
|
487f2ebf32
|
fixed the brain worm discrepancy between epochs, iterations, and steps
|
2023-02-23 15:31:43 +00:00 |
|
|
1cbcf14cff
|
oops
|
2023-02-23 13:18:51 +00:00 |
|
|
41fca1a101
|
ugh
|
2023-02-23 07:20:40 +00:00 |
|
|
941a27d2b3
|
removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module
|
2023-02-23 07:05:39 +00:00 |
|
|
225dee22d4
|
huge success
|
2023-02-23 06:24:54 +00:00 |
|
|
526a430c2a
|
how did this revert...
|
2023-02-22 13:24:03 +00:00 |
|
|
2aa70532e8
|
added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length, like it should)
|
2023-02-22 03:31:46 +00:00 |
|
|
cc47ed7242
|
kmsing
|
2023-02-22 03:27:28 +00:00 |
|
|
93b061fb4d
|
oops
|
2023-02-22 03:21:03 +00:00 |
|
|
c4b41e07fa
|
properly placed the line to extract starting iteration
|
2023-02-22 01:17:09 +00:00 |
|
|
fefc7aba03
|
oops
|
2023-02-21 22:13:30 +00:00 |
|
|
9e64dad785
|
clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already
|
2023-02-21 21:50:05 +00:00 |
|
|
f119993fb5
|
explicitly use python3 because some OSs will not have python alias to python3, allow batch size 1
|
2023-02-21 20:20:52 +00:00 |
|
|
8a1a48f31e
|
Added very experimental float16 training for cards with not enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later
|
2023-02-21 19:31:57 +00:00 |
|
|
ed2cf9f5ee
|
wrap checking for metadata when adding a voice in case it throws an error
|
2023-02-21 17:35:30 +00:00 |
|
|
b6f7aa6264
|
fixes
|
2023-02-21 04:22:11 +00:00 |
|
|
bbc2d26289
|
I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and makes sense
|
2023-02-21 03:00:45 +00:00 |
|
|
1fd88afcca
|
updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet)
|
2023-02-20 22:56:39 +00:00 |
|
|
bacac6daea
|
handled paths that contain spaces because python for whatever god forsaken reason will always split on spaces even if wrapping an argument in quotes
|
2023-02-20 20:23:22 +00:00 |
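The usual fix for the space-splitting problem is to hand subprocess an argument list rather than a single shell string, so no word-splitting happens at all; a sketch (command and flags illustrative, not the project's actual invocation).

```python
import subprocess

def transcribe(audio_path, model="base"):
    # Passing a list with shell=False means each element is delivered to the
    # child process as one argv entry, so "my voice clips/clip 1.wav" survives
    # intact instead of being split on its spaces.
    subprocess.run(["whisper", audio_path, "--model", model], check=True)
```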
|
|
37ffa60d14
|
brain worms forgot a global, hate global semantics
|
2023-02-20 15:31:38 +00:00 |
|
|
d17f6fafb0
|
clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it
|
2023-02-20 00:21:16 +00:00 |
|
|
c99cacec2e
|
oops
|
2023-02-19 23:29:12 +00:00 |
|
|
ee95616dfd
|
optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs)
|
2023-02-19 21:06:14 +00:00 |
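A sketch of the "as evenly divisible as possible" adjustment: nudge the batch size down to the nearest divisor of the dataset size so the computed epoch count matches the requested one (the downward search is an assumption about the approach).

```python
def nearest_divisible_batch_size(dataset_size, requested):
    """Walk downward from the requested batch size until one divides the
    dataset evenly, so no epoch ends on a ragged partial batch."""
    for candidate in range(min(requested, dataset_size), 0, -1):
        if dataset_size % candidate == 0:
            return candidate
    return 1
```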
|
|
6260594a1e
|
Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML
|
2023-02-19 20:38:00 +00:00 |
|
|
4694d622f4
|
doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there
|
2023-02-19 20:22:03 +00:00 |
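The epoch-to-iteration conversion this commit settles on is straightforward; a sketch using the quantities the UI exposes (names assumed).

```python
import math

def epochs_to_iterations(epochs, dataset_size, batch_size):
    """One epoch is one full pass over the dataset, so the iteration count is
    epochs * steps-per-epoch; print/save frequencies given in epochs get
    converted the same way when the YAML is written."""
    steps_per_epoch = int(math.ceil(dataset_size / batch_size))
    return epochs * steps_per_epoch
```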
|
|
ec76676b16
|
i hate gradio, I hate having to specify step=1
|
2023-02-19 17:12:39 +00:00 |
|
|
4f79b3724b
|
Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm)
|
2023-02-19 16:24:06 +00:00 |
|
|
092dd7b2d7
|
added more safeties and parameters to training yaml generator, I think I tested it extensively enough
|
2023-02-19 16:16:44 +00:00 |
|
|
d89b7d60e0
|
forgot to divide checkpoint freq by iterations to get checkpoint counts
|
2023-02-19 07:05:11 +00:00 |
|
|
485319c2bb
|
don't know what brain worms had me throw printing training output under verbose
|
2023-02-19 06:28:53 +00:00 |
|
|
debdf6049a
|
forgot to copy again from dev folder to git folder
|
2023-02-19 06:04:46 +00:00 |
|
|
ae5d4023aa
|
fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch)
|
2023-02-19 06:02:47 +00:00 |
|
|
57060190af
|
absolutely detest global semantics
|
2023-02-19 05:12:09 +00:00 |
|
|
f44239a85a
|
added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update
|
2023-02-19 05:10:08 +00:00 |
|
|
e7d0cfaa82
|
added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training
|
2023-02-19 05:05:30 +00:00 |
|
|
5fcdb19f8b
|
I forgot to make it update the whisper model at runtime
|
2023-02-19 01:47:06 +00:00 |
|
|
47058db67f
|
oops
|
2023-02-18 20:56:34 +00:00 |
|
|
fc5b303319
|
we do a little garbage collection
|
2023-02-18 20:37:37 +00:00 |
|
|
58c981d714
|
Fix killing a voice generation because I must have broken it during migration
|
2023-02-18 19:54:21 +00:00 |
|
|
cd8919e65c
|
fix sloppy copy paste job when looking for new models
|
2023-02-18 19:46:26 +00:00 |
|
|
ebbc85fb6a
|
finetuned => finetunes
|
2023-02-18 19:41:21 +00:00 |
|
lightmare
|
4807072894
|
Using zfill in utils.pad
|
2023-02-18 19:09:25 +00:00 |
|
|
1f4cdcb8a9
|
rude
|
2023-02-18 17:23:44 +00:00 |
|
|
cf758f4732
|
oops
|
2023-02-18 15:50:51 +00:00 |
|
|
843bfbfb96
|
Simplified generating training YAML, cleaned it up, training output is cleaned up and will "autoscroll" (only show the last 8 lines, refer to console for a full trace if needed)
|
2023-02-18 14:51:00 +00:00 |
|
|
0dd5640a89
|
forgot that call only worked if shell=True
|
2023-02-18 14:14:42 +00:00 |
|
|
2615cafd75
|
added dropdown to select autoregressive model for TTS, fixed a bug where the settings saver constantly fires; I hate gradio so much, why is dropdown.change broken to continuously fire and send an empty array
|
2023-02-18 14:10:26 +00:00 |
|
|
a9bd17c353
|
fixes #2
|
2023-02-18 13:07:23 +00:00 |
|
|
809012c84d
|
debugging in colab is pure cock and ball torture because sometimes the files don't actually update when edited, and sometimes they update after I restart the runtime, notebook can't use venv because I can't source it in a subprocess shell call
|
2023-02-18 03:31:44 +00:00 |
|
|
915ab5f65d
|
fixes
|
2023-02-18 03:17:46 +00:00 |
|
|
650eada8d5
|
fix spawning training subprocess for unixes
|
2023-02-18 02:40:30 +00:00 |
|
|
d5c1433268
|
a bit of UI cleanup, import multiple audio files at once, actually shows progress when importing voices, hides audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows to web UI, provided notebook is cleaned up and uses a venv, etc.
|
2023-02-18 02:07:22 +00:00 |
|
|
c75d0bc5da
|
pulls DLAS for any updates since I might be actually updating it, added option to not load TTS on initialization to save VRAM when training
|
2023-02-17 20:43:12 +00:00 |
|
|
ad4adc960f
|
small fixes
|
2023-02-17 20:10:27 +00:00 |
|
|
bcec64af0f
|
cleanup, "injected" dvae.pth to download through tortoise's model loader, so I don't need to keep copying it
|
2023-02-17 19:06:05 +00:00 |
|
|
13c9920b7f
|
caveats while I tighten some nuts
|
2023-02-17 17:44:52 +00:00 |
|
|
8d268bc7a3
|
training added, seems to work, need to test it more
|
2023-02-17 16:29:27 +00:00 |
|
|
f87764e7d0
|
Slight fix, getting close to being able to train from the web UI directly
|
2023-02-17 13:57:03 +00:00 |
|
|
8482131e10
|
oops x2
|
2023-02-17 06:25:00 +00:00 |
|
|
a16e6b150f
|
oops
|
2023-02-17 06:11:04 +00:00 |
|
|
59d0f08244
|
https://arch.b4k.co/v/search/text/%22TAKE%20YOUR%20DAMN%20CLOTHES%20OFF%22/type/op/
|
2023-02-17 06:06:50 +00:00 |
|
|
12933cfd60
|
added dropdown to select which whisper model to use for transcription, added note that FFMPEG is required
|
2023-02-17 06:01:14 +00:00 |
|
|
96e9acdeec
|
added preparation of LJSpeech-esque dataset
|
2023-02-17 05:42:55 +00:00 |
|
|
9c0e4666d2
|
updated notebooks to use the new "main" setup
|
2023-02-17 03:30:53 +00:00 |
|
|
f8249aa826
|
tab to generate the training YAML
|
2023-02-17 03:05:27 +00:00 |
|
|
3a078df95e
|
Initial refactor
|
2023-02-17 00:08:27 +00:00 |
|