Commit Graph

266 Commits

mrq
dea2fa9caf added fields to offset start/end slices to apply in bulk when slicing 2023-03-11 21:34:29 +00:00
mrq
89bb3d4419 rename transcribe button since it does more than transcribe 2023-03-11 21:18:04 +00:00
mrq
382a3e4104 rely on the whisper.json for handling a lot more things 2023-03-11 21:17:11 +00:00
mrq
9b376c381f brain worm 2023-03-11 18:14:32 +00:00
mrq
94551fb9ac split slicing dataset routine so it can be done after the fact 2023-03-11 17:27:01 +00:00
mrq
e3fdb79b49 rocm5.2 works for me desu so I bumped it back up 2023-03-11 17:02:56 +00:00
mrq
e680d84a13 removed the hotfix pip installs that whisperx requires now that whisperx is gone 2023-03-11 16:55:19 +00:00
mrq
cf41492f76 fall back to normal behavior if there's actually no audio files loaded from the dataset when using it for computing latents 2023-03-11 16:46:03 +00:00
mrq
b90c164778 Farewell, parasite 2023-03-11 16:40:34 +00:00
mrq
2424c455cb added option to not slice audio when transcribing, added option to prepare validation dataset on audio duration, added a warning if you're using whisperx and you're slicing audio 2023-03-11 16:32:35 +00:00
tigi6346
dcdcf8516c master (#112)
Fixes Gradio bugging out when attempting to load a missing train.json.

Reviewed-on: #112
Co-authored-by: tigi6346 <tigi6346@noreply.localhost>
Co-committed-by: tigi6346 <tigi6346@noreply.localhost>
2023-03-11 03:28:04 +00:00
mrq
008a1f5f8f simplified spawning the training process by having it spawn the distributed training processes in the train.py script, so it should work on Windows too 2023-03-11 01:37:00 +00:00
mrq
2feb6da0c0 cleanups and fixes, fix DLAS throwing errors from '''too short of sound files''' by just culling them during transcription 2023-03-11 01:19:49 +00:00
mrq
7f2da0f5fb rewrote how AIVC gets training metrics (need to clean up later) 2023-03-10 22:35:32 +00:00
mrq
df0edacc60 fix the cleanup actually only doing 2 despite requesting more than 2, surprised no one has pointed it out 2023-03-10 14:04:07 +00:00
mrq
8e890d3023 forgot to fix reset settings to use the new arg-agnostic way 2023-03-10 13:49:39 +00:00
mrq
d250e0ec17 brain fried 2023-03-10 04:27:34 +00:00
mrq
0b364b590e maybe don't --force-reinstall to try and force downgrading, it just forces everything to uninstall then reinstall 2023-03-10 04:22:47 +00:00
mrq
c231d842aa make dependencies after the one in this repo force reinstall to downgrade, I hope; I have other things to do than validate this works 2023-03-10 03:53:21 +00:00
mrq
c92b006129 I really hate YAML 2023-03-10 03:48:46 +00:00
mrq
d3184004fd only God knows why the YAML spec lets you specify string values without quotes 2023-03-10 01:58:30 +00:00
mrq
eb1551ee92 what I thought was an override and not a ternary 2023-03-09 23:04:02 +00:00
mrq
c3b43d2429 today I learned adamw_zero actually negates ANY LR schemes 2023-03-09 19:42:31 +00:00
mrq
cb273b8428 cleanup 2023-03-09 18:34:52 +00:00
mrq
7c71f7239c expose options for CosineAnnealingLR_Restart (seems to be able to train very quickly due to the restarts) 2023-03-09 14:17:01 +00:00
mrq
2f6dd9c076 some cleanup 2023-03-09 06:20:05 +00:00
mrq
5460e191b0 added loss graph, because I'm going to experiment with cosine annealing LR and I need to view my loss 2023-03-09 05:54:08 +00:00
mrq
a182df8f4e is 2023-03-09 04:33:12 +00:00
mrq
a01eb10960 (try to) unload voicefixer if it raises an error while loading voicefixer 2023-03-09 04:28:14 +00:00
mrq
dc1902b91c cleanup block that makes embedding latents for random/microphone happen, remove builtin voice options from voice list to avoid duplicates 2023-03-09 04:23:36 +00:00
mrq
797882336b maybe remedy an issue that crops up if you have a non-wav and non-json file in a results folder (assuming) 2023-03-09 04:06:07 +00:00
mrq
b64948d966 while I'm breaking things, migrating dependencies to modules folder for tidiness 2023-03-09 04:03:57 +00:00
mrq
b8867a5fb0 added the mysterious tortoise_compat flag mentioned in DLAS repo 2023-03-09 03:41:40 +00:00
mrq
3b4f4500d1 when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another 2023-03-09 03:26:18 +00:00
mrq
ef75dba995 I hate commas make tuples 2023-03-09 02:43:05 +00:00
mrq
f795dd5c20 you might be wondering why so many small commits instead of rolling the HEAD back one to just combine them, i don't want to force push and roll back the paperspace i'm testing in 2023-03-09 02:31:32 +00:00
mrq
51339671ec typo 2023-03-09 02:29:08 +00:00
mrq
1b18b3e335 forgot to save the simplified training input json first before touching any of the settings that dump to the yaml 2023-03-09 02:27:20 +00:00
mrq
221ac38b32 forgot to update to finetune subdir 2023-03-09 02:25:32 +00:00
mrq
0e80e311b0 added VRAM validation for a given batch:gradient accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB, would be nice to have more data on what's safe) 2023-03-09 02:08:06 +00:00
mrq
ef7b957fff oops 2023-03-09 00:53:00 +00:00
mrq
b0baa1909a forgot template 2023-03-09 00:32:35 +00:00
mrq
3f321fe664 big cleanup to make my life easier when i add more parameters 2023-03-09 00:26:47 +00:00
mrq
0ab091e7ff oops 2023-03-08 16:09:29 +00:00
mrq
40e8d0774e share if you 2023-03-08 15:59:16 +00:00
mrq
d58b67004a colab notebook uses venv and normal scripts to keep it on parity with a local install (and it literally just works, stop creating issues for something inconsistent with known solutions) 2023-03-08 15:51:13 +00:00
mrq
34dcb845b5 actually make using adamw_zero optimizer for multi-gpus work 2023-03-08 15:31:33 +00:00
mrq
8494628f3c normalize validation batch size because I OOM'd without it getting scaled 2023-03-08 05:27:20 +00:00
mrq
d7e75a51cf I forgot about the changelog and never kept up with it, so I'll just not use a changelog 2023-03-08 05:14:50 +00:00
mrq
ff07f707cb disable validation if validation dataset not found, clamp validation batch size to validation dataset size instead of simply reusing batch size, switch to adamw_zero optimizer when training with multi-gpus (because the yaml comment said to and I think it might be why I'm absolutely having garbage luck training this japanese dataset) 2023-03-08 04:47:05 +00:00