Commit Graph

221 Commits

Author SHA1 Message Date
mrq
119ac50c58 forgot to re-append the existing transcription when skipping existing files (have to go back again and do the first 10% of my giant dataset) 2023-03-06 16:50:55 +00:00
mrq
12c51b6057 I'm not too sure if manually invoking gc actually closes all the open files from whisperx (or ROCm), but the error seems to have gone away alongside setting 'ulimit -Sn' to half the output of 'ulimit -Hn' 2023-03-06 16:39:37 +00:00
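As a minimal sketch of that workaround (Linux-only, and the helper names are mine, not the repo's actual code), the ulimit trick maps directly onto Python's resource module:

```python
# Raise the soft file-descriptor limit to half the hard limit ('ulimit -Sn'
# set to half of 'ulimit -Hn') and manually collect garbage between files,
# hoping that lingering handles from whisperx/ROCm get closed.
import gc
import resource

def raise_fd_limit():
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard // 2, hard))

def transcribe_batch(files, transcribe):
    raise_fd_limit()
    for path in files:
        transcribe(path)  # run whisperx on one file
        gc.collect()      # force-close anything left dangling
```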
mrq
999878d9c6 and it turned out I wasn't even using the aligned segments, kmsing now that I have to *redo* my dataset again 2023-03-06 11:01:33 +00:00
mrq
14779a5020 Added option to skip transcribing if it exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm, since it does not close some file descriptors if you're batch-transcribing or something. So poor little me, who's retranscribing his japanese dataset for the 305823042th time, woke up to it partially done; I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done 2023-03-06 10:47:06 +00:00
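A hypothetical sketch of the skip logic (the file layout and names are assumptions): keep accumulated transcriptions in one JSON results file, only transcribe what's missing from it, and save as you go so a crash keeps the progress:

```python
import json
import os

RESULTS_PATH = "./training/voice/whisper.json"  # assumed location

def transcribe_all(files, transcribe):
    results = {}
    if os.path.exists(RESULTS_PATH):
        with open(RESULTS_PATH, "r", encoding="utf-8") as f:
            results = json.load(f)
    for path in files:
        name = os.path.basename(path)
        if name in results:
            continue  # already transcribed; skip instead of reopening files
        results[name] = transcribe(path)
        with open(RESULTS_PATH, "w", encoding="utf-8") as f:
            json.dump(results, f)  # persist incrementally
```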
mrq
0e3bbc55f8 added api_name for generation, added whisperx backend, relocated use whispercpp option to whisper backend list 2023-03-06 05:21:33 +00:00
mrq
788a957f79 stretch loss plot to target iteration just so it's not so misleading with the scale 2023-03-06 00:44:29 +00:00
mrq
5be14abc21 UI cleanup, actually fix syncing the epoch counter (I hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration length, signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model) 2023-03-05 23:55:27 +00:00
mrq
287738a338 (should) fix reported epoch metric desyncing from the de facto metric, fixed finding the next milestone from the wrong sign because of 2AM brain 2023-03-05 20:42:45 +00:00
mrq
206a14fdbe brainworms 2023-03-05 20:30:27 +00:00
mrq
b82961ba8a typo 2023-03-05 20:13:39 +00:00
mrq
b2e89d8da3 oops 2023-03-05 19:58:15 +00:00
mrq
8094401a6d print in e-notation for LR 2023-03-05 19:48:24 +00:00
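The e-notation change is just a format specifier, e.g.:

```python
lr = 0.00001
print(f"lr: {lr:.3e}")  # -> lr: 1.000e-05, easier to read than 0.00001
```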
mrq
8b9c9e1bbf remove redundant stats, add showing LR 2023-03-05 18:53:12 +00:00
mrq
0231550287 forgot to remove a debug print 2023-03-05 18:27:16 +00:00
mrq
d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio now checks for silent segments and discards them 2023-03-05 17:54:36 +00:00
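A sketch of what those two fixes likely amount to (the segment key names are assumptions; whisper.cpp reports timestamps in centiseconds, hence the divide-by-100):

```python
import torchaudio

def slice_segments(audio_path, segments, silence_threshold=1e-4):
    waveform, sample_rate = torchaudio.load(audio_path)
    kept = []
    for seg in segments:
        start = int(seg["t0"] / 100 * sample_rate)  # centiseconds -> samples
        end = int(seg["t1"] / 100 * sample_rate)
        chunk = waveform[:, start:end]
        # discard empty or near-silent slices
        if chunk.numel() and chunk.abs().mean() > silence_threshold:
            kept.append(chunk)
    return kept
```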
mrq
b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log 2023-03-05 14:39:24 +00:00
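A hedged sketch of the estimator after this fix: average the loss derivative over the latter half of the log (rather than using one instantaneous delta) and extrapolate linearly to the target:

```python
def estimate_milestone(iterations, losses, target):
    if len(losses) < 4:
        return None  # not enough history to average over
    half = len(losses) // 2
    deltas = [
        (losses[i] - losses[i - 1]) / (iterations[i] - iterations[i - 1])
        for i in range(half, len(losses))
    ]
    slope = sum(deltas) / len(deltas)  # accumulated over half the log
    if slope >= 0:
        return None  # loss isn't trending down; no sane estimate
    return iterations[-1] + (target - losses[-1]) / slope
```

A proper Riemann sum over the whole logged curve, as the ce3866d0cd message below promises, would still be more faithful than this linear extrapolation.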
mrq
35225a35da oops v2 2023-03-05 14:19:41 +00:00
mrq
b5e9899bbf 5 hour sleep brained 2023-03-05 13:37:05 +00:00
mrq
cd8702ab0d oops 2023-03-05 13:24:07 +00:00
mrq
d312019d05 reordered things so it uses fresh data and not last-updated data 2023-03-05 07:37:27 +00:00
mrq
ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1]) and final lr; very, very inaccurate because it uses instantaneous delta lr, I'll need to do a Riemann sum later 2023-03-05 06:45:07 +00:00
mrq
1316331be3 forgot to have it try to auto-detect the language for openai/whisper when no language is specified 2023-03-05 05:22:35 +00:00
mrq
3e220ed306 added option to set worker size in training config generator (because the default is overkill), for whisper transcriptions, load a specialized language model if it exists (for now, only english), output transcription to web UI when done transcribing 2023-03-05 05:17:19 +00:00
mrq
37cab14272 use torchrun instead for multigpu 2023-03-04 20:53:00 +00:00
mrq
5026d93ecd sloppy fix to actually kill children when using multi-GPU distributed training, set GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2 2023-03-04 20:42:54 +00:00
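A rough sketch combining the two entries above (Linux only; the script path and helper names are assumptions): size the launch from what CUDA exposes, and start training in its own process group so the whole tree can be killed:

```python
import os
import signal
import subprocess
import torch

def spawn_training(yaml_path):
    gpus = torch.cuda.device_count()  # no more hardcoding it to 2
    cmd = ["torchrun", f"--nproc_per_node={gpus}", "./src/train.py", "--yaml", yaml_path]
    return subprocess.Popen(cmd, preexec_fn=os.setsid)  # own process group

def kill_training(proc):
    # signal the whole group, so worker children die with the launcher
    os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
```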
mrq
1a9d159b2a forgot to add the 'bs / gradient accum < 2' clamp validation logic 2023-03-04 17:37:08 +00:00
mrq
df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) 2023-03-04 15:55:06 +00:00
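The clamp logic from these two entries, as a sketch (names assumed): gradient accumulation splits each batch into micro-batches, so the batch size should stay at least twice the accumulation factor and remain evenly divisible by it:

```python
def validate_accumulation(batch_size, grad_accum):
    if grad_accum > 1 and batch_size // grad_accum < 2:
        grad_accum = max(1, batch_size // 2)  # enforce bs / accum >= 2
    batch_size -= batch_size % grad_accum     # keep it evenly divisible
    return batch_size, grad_accum
```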
mrq
6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) 2023-03-04 04:41:56 +00:00
mrq
6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider 2023-03-03 21:13:48 +00:00
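A hypothetical version of that auto-suggestion (the helper name and rounding are mine):

```python
import torchaudio

def suggest_chunk_count(voice_files, seconds_per_chunk=10):
    total = 0.0
    for path in voice_files:
        info = torchaudio.info(path)
        total += info.num_frames / info.sample_rate  # duration in seconds
    # one chunk per N seconds of combined audio; the UI slider adjusts N
    return max(1, int(total / seconds_per_chunk))
```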
mrq
e1f3ffa08c oops 2023-03-03 18:51:33 +00:00
mrq
9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms 2023-03-03 07:23:10 +00:00
mrq
740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts 2023-03-03 06:39:37 +00:00
mrq
68f4858ce9 oops 2023-03-03 05:51:17 +00:00
mrq
e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) 2023-03-03 04:37:18 +00:00
mrq
c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh 2023-03-02 01:35:12 +00:00
mrq
534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models 2023-03-02 00:46:52 +00:00
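A sketch of hash-keyed latents (paths and names assumed, not the repo's actual layout): fingerprint the current autoregressive model file and save/load conditioning latents under that fingerprint, so switching models never reuses stale latents:

```python
import hashlib
import os
import torch

def model_hash(model_path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()[:8]

def get_latents(voice_dir, model_path, compute_latents):
    path = os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
    if os.path.exists(path):
        return torch.load(path)  # already computed for this exact model
    latents = compute_latents()
    torch.save(latents, path)
    return latents
```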
mrq
5a41db978e oops 2023-03-01 19:39:43 +00:00
mrq
b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) 2023-03-01 19:32:11 +00:00
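Tensorboard ships a reader for its own event files; a minimal sketch (the scalar tag here is a guess):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_scalars(log_dir, tag="loss_gpt_total"):
    acc = EventAccumulator(log_dir)
    acc.Reload()  # parse the tb_logger event files on disk
    if tag not in acc.Tags()["scalars"]:
        return []
    return [(event.step, event.value) for event in acc.Scalars(tag)]
```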
mrq
c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier 2023-03-01 01:17:38 +00:00
mrq
5037752059 oops 2023-02-28 22:13:21 +00:00
mrq
787b44807a added to embedded metadata: datetime, model path, model hash 2023-02-28 15:36:06 +00:00
mrq
81eb58f0d6 show different losses, rewordings 2023-02-28 06:18:18 +00:00
mrq
fda47156ec oops 2023-02-28 01:08:07 +00:00
mrq
bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else 2023-02-28 01:01:50 +00:00
mrq
6925ec731b I don't remember. 2023-02-27 19:20:06 +00:00
mrq
92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card); saves use of float16 and bitsandbytes for training into the config json 2023-02-26 01:57:56 +00:00
mrq
aafeb9f96a actually fixed the training output text parser 2023-02-25 16:44:25 +00:00
mrq
65329dba31 oops, epoch increments twice 2023-02-25 15:31:18 +00:00
mrq
8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) 2023-02-25 13:55:25 +00:00
mrq
d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small bits of free time 2023-02-24 23:13:13 +00:00
mrq
f31ea9d5bc oops 2023-02-24 16:23:30 +00:00
mrq
2104dbdbc5 oops 2023-02-24 13:05:08 +00:00
mrq
f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them 2023-02-24 12:58:41 +00:00
mrq
1e0fec4358 god, I finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), plus some other cleanups; I genuinely don't remember what I did when I spaced out for 20 minutes 2023-02-23 23:22:23 +00:00
mrq
7d1220e83e forgot to mult by batch size 2023-02-23 15:38:04 +00:00
mrq
487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps 2023-02-23 15:31:43 +00:00
mrq
1cbcf14cff oops 2023-02-23 13:18:51 +00:00
mrq
41fca1a101 ugh 2023-02-23 07:20:40 +00:00
mrq
941a27d2b3 removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module 2023-02-23 07:05:39 +00:00
mrq
225dee22d4 huge success 2023-02-23 06:24:54 +00:00
mrq
526a430c2a how did this revert... 2023-02-22 13:24:03 +00:00
mrq
2aa70532e8 added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length like it should) 2023-02-22 03:31:46 +00:00
mrq
cc47ed7242 kmsing 2023-02-22 03:27:28 +00:00
mrq
93b061fb4d oops 2023-02-22 03:21:03 +00:00
mrq
c4b41e07fa properly placed the line to extract the starting iteration 2023-02-22 01:17:09 +00:00
mrq
fefc7aba03 oops 2023-02-21 22:13:30 +00:00
mrq
9e64dad785 clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already 2023-02-21 21:50:05 +00:00
mrq
f119993fb5 explicitly use python3 because some OSs will not have python aliased to python3, allow batch size 1 2023-02-21 20:20:52 +00:00
mrq
8a1a48f31e Added very experimental float16 training for cards with not enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later 2023-02-21 19:31:57 +00:00
mrq
ed2cf9f5ee wrap checking for metadata when adding a voice in case it throws an error 2023-02-21 17:35:30 +00:00
mrq
b6f7aa6264 fixes 2023-02-21 04:22:11 +00:00
mrq
bbc2d26289 I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and makes sense 2023-02-21 03:00:45 +00:00
mrq
1fd88afcca updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet) 2023-02-20 22:56:39 +00:00
mrq
bacac6daea handled paths that contain spaces, because python for whatever godforsaken reason will always split on spaces even if wrapping an argument in quotes 2023-02-20 20:23:22 +00:00
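What the fix amounts to, sketched (paths hypothetical): pass the command as an argument list so each element arrives as exactly one argv entry, instead of a shell string that gets re-split on spaces:

```python
import subprocess

yaml_path = "./training/my voice/train.yaml"  # note the space

# fragile: the shell re-splits the string on the space
# subprocess.run(f"python3 ./src/train.py --yaml {yaml_path}", shell=True)

# robust: no splitting, no quoting needed
subprocess.run(["python3", "./src/train.py", "--yaml", yaml_path])
```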
mrq
37ffa60d14 brain worms forgot a global, hate global semantics 2023-02-20 15:31:38 +00:00
mrq
d17f6fafb0 clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it 2023-02-20 00:21:16 +00:00
mrq
c99cacec2e oops 2023-02-19 23:29:12 +00:00
mrq
ee95616dfd optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) 2023-02-19 21:06:14 +00:00
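One plausible reading of that optimization, as a sketch (the search window is my invention): nudge the batch size down toward a value that divides the dataset size with the smallest remainder, so the computed epoch count matches the requested one:

```python
def optimize_batch_size(dataset_size, batch_size, window=16):
    candidates = range(max(1, batch_size - window), batch_size + 1)
    # smallest remainder wins; ties go to the size closest to the request
    return min(candidates, key=lambda b: (dataset_size % b, batch_size - b))

print(optimize_batch_size(19420, 128))  # -> 126 (19420 % 126 == 16)
```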
mrq
6260594a1e Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML 2023-02-19 20:38:00 +00:00
mrq
4694d622f4 doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there 2023-02-19 20:22:03 +00:00
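The epoch-first bookkeeping in miniature (a sketch; the UI stores epochs and converts when writing the YAML):

```python
def epochs_to_iterations(epochs, dataset_size, batch_size):
    steps_per_epoch = dataset_size // batch_size  # whole batches per epoch
    return epochs * steps_per_epoch

print(epochs_to_iterations(500, 19420, 126))  # -> 77000
```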
mrq
ec76676b16 I hate gradio, I hate having to specify step=1 2023-02-19 17:12:39 +00:00
mrq
4f79b3724b Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) 2023-02-19 16:24:06 +00:00
mrq
092dd7b2d7 added more safeties and parameters to training yaml generator, I think I tested it extensively enough 2023-02-19 16:16:44 +00:00
mrq
d89b7d60e0 forgot to divide checkpoint freq by iterations to get checkpoint counts 2023-02-19 07:05:11 +00:00
mrq
485319c2bb don't know what brain worms had me throw printing training output under verbose 2023-02-19 06:28:53 +00:00
mrq
debdf6049a forgot to copy again from dev folder to git folder 2023-02-19 06:04:46 +00:00
mrq
ae5d4023aa fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) 2023-02-19 06:02:47 +00:00
mrq
57060190af absolutely detest global semantics 2023-02-19 05:12:09 +00:00
mrq
f44239a85a added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update 2023-02-19 05:10:08 +00:00
mrq
e7d0cfaa82 added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training 2023-02-19 05:05:30 +00:00
mrq
5fcdb19f8b I forgot to make it update the whisper model at runtime 2023-02-19 01:47:06 +00:00
mrq
47058db67f oops 2023-02-18 20:56:34 +00:00
mrq
fc5b303319 we do a little garbage collection 2023-02-18 20:37:37 +00:00
mrq
58c981d714 Fix killing a voice generation because I must have broken it during migration 2023-02-18 19:54:21 +00:00
mrq
cd8919e65c fix sloppy copy paste job when looking for new models 2023-02-18 19:46:26 +00:00
mrq
ebbc85fb6a finetuned => finetunes 2023-02-18 19:41:21 +00:00
lightmare
4807072894 Using zfill in utils.pad 2023-02-18 19:09:25 +00:00
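What that change boils down to (the signature is assumed): str.zfill left-pads a number with zeros to a total width:

```python
def pad(num, zeroes):
    return str(num).zfill(zeroes + 1)

print(pad(5, 3))   # -> 0005
print(pad(42, 3))  # -> 0042
```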
mrq
1f4cdcb8a9 rude 2023-02-18 17:23:44 +00:00
mrq
cf758f4732 oops 2023-02-18 15:50:51 +00:00
mrq
843bfbfb96 Simplified generating training YAML, cleaned it up, training output is cleaned up and will "autoscroll" (only show the last 8 lines, refer to console for a full trace if needed) 2023-02-18 14:51:00 +00:00
mrq
0dd5640a89 forgot that call only worked if shell=True 2023-02-18 14:14:42 +00:00
mrq
2615cafd75 added dropdown to select autoregressive model for TTS, fixed a bug where the settings saver constantly fires; I hate gradio so much, why is dropdown.change broken to continuously fire and send an empty array 2023-02-18 14:10:26 +00:00
mrq
a9bd17c353 fixes #2 2023-02-18 13:07:23 +00:00
mrq
809012c84d debugging in colab is pure cock and ball torture because sometimes the files don't actually update when edited, and sometimes they update after I restart the runtime, notebook can't use venv because I can't source it in a subprocess shell call 2023-02-18 03:31:44 +00:00
mrq
915ab5f65d fixes 2023-02-18 03:17:46 +00:00
mrq
650eada8d5 fix spawning training subprocess for unixes 2023-02-18 02:40:30 +00:00
mrq
d5c1433268 a bit of UI cleanup, import multiple audio files at once, actually shows progress when importing voices, hides audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows to web UI, provided notebook is cleaned up and uses a venv, etc. 2023-02-18 02:07:22 +00:00
mrq
c75d0bc5da pulls DLAS for any updates since I might be actually updating it, added option to not load TTS on initialization to save VRAM when training 2023-02-17 20:43:12 +00:00
mrq
ad4adc960f small fixes 2023-02-17 20:10:27 +00:00
mrq
bcec64af0f cleanup, "injected" dvae.pth to download through tortoise's model loader, so I don't need to keep copying it 2023-02-17 19:06:05 +00:00
mrq
13c9920b7f caveats while I tighten some nuts 2023-02-17 17:44:52 +00:00
mrq
8d268bc7a3 training added, seems to work, need to test it more 2023-02-17 16:29:27 +00:00
mrq
f87764e7d0 Slight fix, getting close to be able to train from the web UI directly 2023-02-17 13:57:03 +00:00
mrq
8482131e10 oops x2 2023-02-17 06:25:00 +00:00
mrq
a16e6b150f oops 2023-02-17 06:11:04 +00:00
mrq
59d0f08244 https://arch.b4k.co/v/search/text/%22TAKE%20YOUR%20DAMN%20CLOTHES%20OFF%22/type/op/ 2023-02-17 06:06:50 +00:00
mrq
12933cfd60 added dropdown to select which whisper model to use for transcription, added note that FFMPEG is required 2023-02-17 06:01:14 +00:00
mrq
96e9acdeec added preparation of LJSpeech-esque dataset 2023-02-17 05:42:55 +00:00
mrq
9c0e4666d2 updated notebooks to use the new "main" setup 2023-02-17 03:30:53 +00:00
mrq
f8249aa826 tab to generate the training YAML 2023-02-17 03:05:27 +00:00
mrq
3a078df95e Initial refactor 2023-02-17 00:08:27 +00:00