|
c231d842aa
|
make dependencies after the one in this repo force reinstall to downgrade, I hope; I have other things to do than validate this works
|
2023-03-10 03:53:21 +00:00 |
|
|
c92b006129
|
I really hate YAML
|
2023-03-10 03:48:46 +00:00 |
|
|
d3184004fd
|
only God knows why the YAML spec lets you specify string values without quotes
|
2023-03-10 01:58:30 +00:00 |
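The complaint above is about YAML's implicit typing: unquoted scalars get coerced to booleans, floats, and so on. A minimal demonstration with PyYAML (which implements YAML 1.1 typing; the keys here are illustrative, not from this repo's config):

```python
import yaml

doc = """
quoted: "0.0001"
unquoted: 0.0001
surprise: no    # YAML 1.1 parses bare `no` as a boolean
version: 3.10   # parsed as the float 3.1, not the string "3.10"
"""
parsed = yaml.safe_load(doc)
print(type(parsed["quoted"]).__name__)    # str
print(type(parsed["unquoted"]).__name__)  # float
print(parsed["surprise"])                 # False
print(parsed["version"])                  # 3.1
```

Quoting the value (or using a YAML 1.2 loader with core-schema typing) avoids the coercion.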
|
|
eb1551ee92
|
what I thought was an override and not a ternary
|
2023-03-09 23:04:02 +00:00 |
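The commit above is the classic confusion between a ternary (conditional expression) and an "override" fallback. A hedged sketch of the difference, with hypothetical names (the actual setting in the repo may differ):

```python
default_lr = 1e-4  # illustrative default

def get_lr_ternary(override):
    # ternary: only falls back when the override is truly absent
    return override if override is not None else default_lr

def get_lr_or(override):
    # `or`-style "override": falls back on ANY falsy value, including 0
    return override or default_lr

print(get_lr_ternary(0))  # 0 — an explicit zero is respected
print(get_lr_or(0))       # 0.0001 — zero is silently replaced
```

The two read almost identically in config-plumbing code, which is why this bug is easy to ship.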
|
|
c3b43d2429
|
today I learned adamw_zero actually negates ANY LR scheme
|
2023-03-09 19:42:31 +00:00 |
|
|
cb273b8428
|
cleanup
|
2023-03-09 18:34:52 +00:00 |
|
|
7c71f7239c
|
expose options for CosineAnnealingLR_Restart (seems to be able to train very quickly due to the restarts)
|
2023-03-09 14:17:01 +00:00 |
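DLAS's CosineAnnealingLR_Restart scheduler isn't reproduced here, but the underlying schedule is simple enough to sketch (PyTorch ships a stock analogue, `CosineAnnealingWarmRestarts`). The period and LR bounds below are illustrative values, not the repo's defaults:

```python
import math

def cosine_restart_lr(step, period, eta_max=1e-4, eta_min=1e-7):
    """Cosine-anneal the LR from eta_max down toward eta_min,
    restarting back at eta_max every `period` steps."""
    t = step % period  # position within the current cycle
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / period))

print(cosine_restart_lr(0, 100))    # cycle start: eta_max
print(cosine_restart_lr(99, 100))   # near the end of a cycle: close to eta_min
print(cosine_restart_lr(100, 100))  # restart: back to eta_max
```

The periodic jumps back to a high LR are what let training escape plateaus quickly, which is presumably the fast-training effect the commit describes.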
|
|
2f6dd9c076
|
some cleanup
|
2023-03-09 06:20:05 +00:00 |
|
|
5460e191b0
|
added loss graph, because I'm going to experiment with cosine annealing LR and I need to view my loss
|
2023-03-09 05:54:08 +00:00 |
|
|
a182df8f4e
|
is
|
2023-03-09 04:33:12 +00:00 |
|
|
a01eb10960
|
(try to) unload voicefixer if it raises an error while loading
|
2023-03-09 04:28:14 +00:00 |
|
|
dc1902b91c
|
clean up the block that makes embedding latents for random/microphone happen, remove built-in voice options from the voice list to avoid duplicates
|
2023-03-09 04:23:36 +00:00 |
|
|
797882336b
|
maybe remedy an issue that (presumably) crops up if you have a non-wav, non-json file in a results folder
|
2023-03-09 04:06:07 +00:00 |
|
|
b64948d966
|
while I'm breaking things, migrating dependencies to modules folder for tidiness
|
2023-03-09 04:03:57 +00:00 |
|
|
b8867a5fb0
|
added the mysterious tortoise_compat flag mentioned in DLAS repo
|
2023-03-09 03:41:40 +00:00 |
|
|
3b4f4500d1
|
when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another
|
2023-03-09 03:26:18 +00:00 |
|
|
ef75dba995
|
I hate that commas make tuples
|
2023-03-09 02:43:05 +00:00 |
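The bug behind the commit above: in Python a trailing comma, not parentheses, is what creates a tuple, so one stray comma silently changes a scalar's type. A minimal illustration (the variable name is hypothetical):

```python
learning_rate = 1e-4,   # oops: the trailing comma makes this the tuple (0.0001,)
print(type(learning_rate).__name__)  # tuple

learning_rate = 1e-4    # what was intended
print(type(learning_rate).__name__)  # float
```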
|
|
f795dd5c20
|
you might be wondering why there are so many small commits instead of rolling the HEAD back one to combine them: I don't want to force push and roll back the Paperspace instance I'm testing on
|
2023-03-09 02:31:32 +00:00 |
|
|
51339671ec
|
typo
|
2023-03-09 02:29:08 +00:00 |
|
|
1b18b3e335
|
forgot to save the simplified training input json first before touching any of the settings that dump to the yaml
|
2023-03-09 02:27:20 +00:00 |
|
|
221ac38b32
|
forgot to update to finetune subdir
|
2023-03-09 02:25:32 +00:00 |
|
|
0e80e311b0
|
added VRAM validation for a given batch:gradient accumulation size ratio (based empirically on 6GiB, 16GiB, and 16x2GiB; it would be nice to have more data on what's safe)
|
2023-03-09 02:08:06 +00:00 |
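A hedged sketch of the kind of check the commit above describes: what actually has to fit in VRAM is the per-step chunk, i.e. batch size divided by the gradient accumulation factor. The function name, threshold constant, and messages here are hypothetical, not the repo's actual values:

```python
def validate_batch_ratio(batch_size, grad_accum, vram_gib, samples_per_gib=2):
    """Warn if the per-step chunk (batch_size / grad_accum) likely exceeds VRAM.
    `samples_per_gib` stands in for an empirically fitted constant."""
    if batch_size % grad_accum != 0:
        return False, "batch size should be divisible by the gradient accumulation size"
    per_step = batch_size // grad_accum
    budget = vram_gib * samples_per_gib
    if per_step > budget:
        return False, f"per-step batch of {per_step} will likely OOM on {vram_gib}GiB"
    return True, "ok"

print(validate_batch_ratio(128, 4, 6))   # fails: 32 samples/step > budget of 12
print(validate_batch_ratio(128, 16, 6))  # passes: 8 samples/step <= 12
```

Raising gradient accumulation keeps the effective batch size while shrinking the per-step footprint, which is why validating the *ratio* (rather than the batch size alone) is the useful check.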
|
|
ef7b957fff
|
oops
|
2023-03-09 00:53:00 +00:00 |
|
|
b0baa1909a
|
forgot template
|
2023-03-09 00:32:35 +00:00 |
|
|
3f321fe664
|
big cleanup to make my life easier when I add more parameters
|
2023-03-09 00:26:47 +00:00 |
|
|
0ab091e7ff
|
oops
|
2023-03-08 16:09:29 +00:00 |
|
|
40e8d0774e
|
share if you
|
2023-03-08 15:59:16 +00:00 |
|
|
d58b67004a
|
colab notebook uses venv and normal scripts to keep it on parity with a local install (and it literally just works; stop creating issues for something inconsistent with known solutions)
|
2023-03-08 15:51:13 +00:00 |
|
|
34dcb845b5
|
actually make the adamw_zero optimizer work for multi-GPU training
|
2023-03-08 15:31:33 +00:00 |
|
|
8494628f3c
|
normalize validation batch size because I OOM'd without it getting scaled
|
2023-03-08 05:27:20 +00:00 |
|
|
d7e75a51cf
|
I forgot about the changelog and never kept up with it, so I'll just not use a changelog
|
2023-03-08 05:14:50 +00:00 |
|
|
ff07f707cb
|
disable validation if the validation dataset is not found; clamp validation batch size to the validation dataset size instead of simply reusing the batch size; switch to the adamw_zero optimizer when training with multiple GPUs (because the YAML comment said to, and I think it might be why I'm having absolutely garbage luck training this Japanese dataset)
|
2023-03-08 04:47:05 +00:00 |
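The clamping described above can be sketched in a few lines; the function name is hypothetical and the repo's actual logic lives in its training-config code:

```python
def effective_val_batch_size(batch_size, val_dataset_size):
    """Clamp the validation batch size to the validation set size,
    returning 0 (validation disabled) when the dataset is missing or empty."""
    if val_dataset_size <= 0:
        return 0  # no validation dataset: skip validation entirely
    return min(batch_size, val_dataset_size)

print(effective_val_batch_size(128, 35))  # 35 — clamped to the small validation set
print(effective_val_batch_size(128, 0))   # 0  — validation disabled
```

Without the clamp, a validation set smaller than the training batch size yields an undersized (or empty) final batch, which is an easy way to crash or OOM mid-run.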
|
|
f1788a5639
|
lazy wrap around the voicefixer block because sometimes it just kills itself despite having a specific block to load it beforehand
|
2023-03-08 04:12:22 +00:00 |
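The "lazy wrap" above, sketched with a hypothetical module-level handle and method names; the repo's actual loader and the `restore` call differ:

```python
voicefixer = None  # hypothetical module-level handle

def load_voicefixer(loader):
    """Try to load voicefixer; on failure, drop the half-initialized handle
    so a later retry starts clean."""
    global voicefixer
    try:
        voicefixer = loader()
    except Exception as e:
        voicefixer = None
        print(f"voicefixer failed to load: {e}")

def fix(audio):
    # lazy guard: pass audio through unchanged rather than crash
    # when the model is unavailable
    return voicefixer.restore(audio) if voicefixer is not None else audio
```

The point of the pattern is that a flaky optional dependency degrades to a no-op instead of taking the whole pipeline down with it.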
|
|
83b5125854
|
fixed notebooks, provided paperspace notebook
|
2023-03-08 03:29:12 +00:00 |
|
|
b4098dca73
|
made validation work (will document later)
|
2023-03-08 02:58:00 +00:00 |
|
|
a7e0dc9127
|
oops
|
2023-03-08 00:51:51 +00:00 |
|
|
e862169e7f
|
set validation rate to the save rate, and use the validation file if it exists (need to test later)
|
2023-03-07 20:38:31 +00:00 |
|
|
fe8bf7a9d1
|
added a helper script to cull sufficiently short lines from the training set as a validation set (if doing validation during training yields good results, I'll add it to the web UI)
|
2023-03-07 20:16:49 +00:00 |
|
|
7f89e8058a
|
fixed update checker for dlas+tortoise-tts
|
2023-03-07 19:33:56 +00:00 |
|
|
6d7e143f53
|
added override for large training plots
|
2023-03-07 19:29:09 +00:00 |
|
|
3718e9d0fb
|
set NaN alarm to show the iteration it happened at
|
2023-03-07 19:22:11 +00:00 |
|
|
c27ee3ce95
|
added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same
|
2023-03-07 17:04:45 +00:00 |
|
|
166d491a98
|
fixes
|
2023-03-07 13:40:41 +00:00 |
|
|
df5ba634c0
|
brain dead
|
2023-03-07 05:43:26 +00:00 |
|
|
2726d98ee1
|
fried my brain trying to nail down bugs involving solely using AR model=auto
|
2023-03-07 05:35:21 +00:00 |
|
|
d7a5ad9fd9
|
cleaned up some model loading logic, added 'auto' mode for the AR model (deduced from the current voice)
|
2023-03-07 04:34:39 +00:00 |
|
|
3899f9b4e3
|
added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size then computing the latents; this should help avoid splitting mid-phoneme)
|
2023-03-07 03:55:35 +00:00 |
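The "pad to a common size" step above, sketched on plain lists standing in for audio clips (the repo pads tensors before computing latents; this is just the shape of the idea):

```python
def pad_to_common(clips, pad_value=0.0):
    """Right-pad every clip to the length of the longest clip,
    so they can be batched without splitting any clip mid-phoneme."""
    longest = max(len(c) for c in clips)
    return [c + [pad_value] * (longest - len(c)) for c in clips]

clips = [[0.1, 0.2], [0.3], [0.4, 0.5, 0.6]]
print(pad_to_common(clips))
# [[0.1, 0.2, 0.0], [0.3, 0.0, 0.0], [0.4, 0.5, 0.6]]
```

Padding whole clips to a common length, rather than slicing audio into fixed chunks, is what avoids cutting through the middle of a phoneme.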
|
|
5063728bb0
|
brain worms and headaches
|
2023-03-07 03:01:02 +00:00 |
|
|
0f31c34120
|
download dvae.pth for the people who somehow managed to put the web UI into a state where it never initializes TTS at all
|
2023-03-07 02:47:10 +00:00 |
|
|
0f0b394445
|
moved the (actually not working) setting to use BigVGAN into a dropdown to select between vocoders (for when future ones are slotted in), and added the ability to load a new vocoder while TTS is loaded
|
2023-03-07 02:45:22 +00:00 |
|