e3fdb79b49
rocm5.2 works for me desu so I bumped it back up
2023-03-11 17:02:56 +00:00
cf41492f76
fall back to normal behavior if there are actually no audio files loaded from the dataset when using it for computing latents
2023-03-11 16:46:03 +00:00
b90c164778
Farewell, parasite
2023-03-11 16:40:34 +00:00
2424c455cb
added option to not slice audio when transcribing, added option to prepare the validation dataset based on audio duration, added a warning if you're using whisperx and you're slicing audio
2023-03-11 16:32:35 +00:00
tigi6346
dcdcf8516c
master ( #112 )
...
Fixes Gradio bugging out when attempting to load a missing train.json.
Reviewed-on: mrq/ai-voice-cloning#112
Co-authored-by: tigi6346 <tigi6346@noreply.localhost>
Co-committed-by: tigi6346 <tigi6346@noreply.localhost>
2023-03-11 03:28:04 +00:00
008a1f5f8f
simplified spawning the training process by having it spawn the distributed training processes in the train.py script, so it should work on Windows too
2023-03-11 01:37:00 +00:00
2feb6da0c0
cleanups and fixes; fix DLAS throwing errors from sound files that are too short by just culling them during transcription
2023-03-11 01:19:49 +00:00
7f2da0f5fb
rewrote how AIVC gets training metrics (need to clean up later)
2023-03-10 22:35:32 +00:00
df0edacc60
fix the cleanup only doing 2 despite requesting more than 2; surprised no one has pointed it out
2023-03-10 14:04:07 +00:00
8e890d3023
forgot to fix reset settings to use the new arg-agnostic way
2023-03-10 13:49:39 +00:00
c92b006129
I really hate YAML
2023-03-10 03:48:46 +00:00
eb1551ee92
fix what I thought was an override but was actually a ternary
2023-03-09 23:04:02 +00:00
c3b43d2429
today I learned adamw_zero actually negates ANY LR schemes
2023-03-09 19:42:31 +00:00
cb273b8428
cleanup
2023-03-09 18:34:52 +00:00
7c71f7239c
expose options for CosineAnnealingLR_Restart (seems to be able to train very quickly due to the restarts)
2023-03-09 14:17:01 +00:00
2f6dd9c076
some cleanup
2023-03-09 06:20:05 +00:00
5460e191b0
added loss graph, because I'm going to experiment with cosine annealing LR and I need to view my loss
2023-03-09 05:54:08 +00:00
a182df8f4e
is
2023-03-09 04:33:12 +00:00
a01eb10960
(try to) unload voicefixer if it raises an error while loading
2023-03-09 04:28:14 +00:00
dc1902b91c
cleanup block that makes embedding latents for random/microphone happen, remove builtin voice options from voice list to avoid duplicates
2023-03-09 04:23:36 +00:00
797882336b
maybe remedy an issue that crops up if you have a non-wav and non-json file in a results folder (assuming)
2023-03-09 04:06:07 +00:00
b64948d966
while I'm breaking things, migrating dependencies to modules folder for tidiness
2023-03-09 04:03:57 +00:00
3b4f4500d1
when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another
2023-03-09 03:26:18 +00:00
ef75dba995
I hate that commas make tuples
2023-03-09 02:43:05 +00:00
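The commit above refers to a classic Python gotcha: it's the trailing comma, not the parentheses, that creates a tuple. A minimal illustration:

```python
# A trailing comma creates a tuple; parentheses alone do not.
x = (1)   # just the int 1
y = (1,)  # a one-element tuple
z = 1,    # also a one-element tuple, no parentheses needed

print(type(x).__name__)  # int
print(type(y).__name__)  # tuple
print(type(z).__name__)  # tuple
```

This bites hardest when a stray comma sneaks onto the end of an assignment, silently turning a value into a one-element tuple.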
f795dd5c20
you might be wondering why so many small commits instead of rolling the HEAD back one to just combine them, i don't want to force push and roll back the paperspace i'm testing in
2023-03-09 02:31:32 +00:00
51339671ec
typo
2023-03-09 02:29:08 +00:00
1b18b3e335
forgot to save the simplified training input json first before touching any of the settings that dump to the yaml
2023-03-09 02:27:20 +00:00
221ac38b32
forgot to update to finetune subdir
2023-03-09 02:25:32 +00:00
0e80e311b0
added VRAM validation for a given batch:gradient accumulation size ratio (based empirically on 6GiB, 16GiB, and 16x2GiB; would be nice to have more data on what's safe)
2023-03-09 02:08:06 +00:00
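The check described in this commit boils down to comparing the effective per-step batch (batch size divided by gradient accumulation) against what a given VRAM budget can hold. A rough sketch of that idea; the function names and the `samples_per_gib` heuristic are invented placeholders, not the repo's actual code or empirical values:

```python
# Hypothetical sketch of validating a batch:gradient-accumulation
# ratio against a VRAM budget. samples_per_gib is a made-up
# heuristic constant, NOT the empirical data from the commit.

def effective_batch(batch_size: int, grad_accum: int) -> float:
    """Samples resident in memory at once per forward/backward pass."""
    return batch_size / grad_accum

def fits_vram(batch_size: int, grad_accum: int, vram_gib: int,
              samples_per_gib: float = 0.5) -> bool:
    """Warn-or-pass check: does the effective batch fit the budget?"""
    return effective_batch(batch_size, grad_accum) <= vram_gib * samples_per_gib

print(fits_vram(128, 16, 16))  # 8 <= 8 -> True
```

In practice a UI would use a check like this to warn the user before launching training, rather than letting the run OOM partway through.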
ef7b957fff
oops
2023-03-09 00:53:00 +00:00
b0baa1909a
forgot template
2023-03-09 00:32:35 +00:00
3f321fe664
big cleanup to make my life easier when i add more parameters
2023-03-09 00:26:47 +00:00
0ab091e7ff
oops
2023-03-08 16:09:29 +00:00
34dcb845b5
actually make using adamw_zero optimizer for multi-gpus work
2023-03-08 15:31:33 +00:00
8494628f3c
normalize validation batch size because i oom'd without it getting scaled
2023-03-08 05:27:20 +00:00
d7e75a51cf
I forgot about the changelog and never kept up with it, so I'll just not use a changelog
2023-03-08 05:14:50 +00:00
ff07f707cb
disable validation if the validation dataset is not found, clamp validation batch size to the validation dataset size instead of simply reusing the batch size, switch to the adamw_zero optimizer when training with multiple GPUs (because the yaml comment said to, and I think it might be why I'm absolutely having garbage luck training this Japanese dataset)
2023-03-08 04:47:05 +00:00
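The clamping mentioned in this commit amounts to taking the minimum of the configured batch size and the validation set size, with validation disabled when the set is empty. A sketch under those assumptions; the function name is illustrative, not the repo's actual code:

```python
def clamp_validation_batch_size(batch_size: int, dataset_size: int) -> int:
    """Never request a larger batch than the validation set can supply."""
    if dataset_size == 0:
        return 0  # caller should disable validation entirely
    return min(batch_size, dataset_size)

print(clamp_validation_batch_size(128, 30))  # 30
print(clamp_validation_batch_size(16, 500))  # 16
```

Without the clamp, a validation set smaller than the training batch size produces an undersized (or empty) final batch, which is the OOM/scaling issue the surrounding commits describe.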
f1788a5639
lazy wrap around the voicefixer block because sometimes it just kills itself despite having a specific block to load it beforehand
2023-03-08 04:12:22 +00:00
83b5125854
fixed notebooks, provided paperspace notebook
2023-03-08 03:29:12 +00:00
b4098dca73
made validation work (will document later)
2023-03-08 02:58:00 +00:00
a7e0dc9127
oops
2023-03-08 00:51:51 +00:00
e862169e7f
set validation to the save rate, and use the validation file if it exists (need to test later)
2023-03-07 20:38:31 +00:00
fe8bf7a9d1
added helper script to cull sufficiently short lines from the training set into a validation set (if doing validation during training yields good results, I'll add it to the web UI)
2023-03-07 20:16:49 +00:00
7f89e8058a
fixed update checker for dlas+tortoise-tts
2023-03-07 19:33:56 +00:00
6d7e143f53
added override for large training plots
2023-03-07 19:29:09 +00:00
3718e9d0fb
set NaN alarm to show the iteration it happened at
2023-03-07 19:22:11 +00:00
c27ee3ce95
added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same
2023-03-07 17:04:45 +00:00
166d491a98
fixes
2023-03-07 13:40:41 +00:00
df5ba634c0
brain dead
2023-03-07 05:43:26 +00:00
2726d98ee1
fried my brain trying to nail down bugs involving using solely the AR model=auto
2023-03-07 05:35:21 +00:00