Commit Graph

212 Commits (469dd47a44c92dd9fe0a4b1c9bca6f12ae75e786)

Author SHA1 Message Date
mrq 5460e191b0 added loss graph, because I'm going to experiment with cosine annealing LR and I need to view my loss 2023-03-09 05:54:08 +07:00
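
    A minimal sketch of cosine annealing LR in PyTorch, for context on what this commit is preparing to experiment with (the model and hyperparameters below are placeholders, not this repo's training setup):

        import torch

        # placeholder model/optimizer; the real training loop lives elsewhere (DLAS)
        model = torch.nn.Linear(16, 16)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
        # LR decays along a cosine curve from 1e-4 down to eta_min over T_max steps
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-6)

        for step in range(1000):
            optimizer.step()   # (loss/backward omitted for brevity)
            scheduler.step()
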
mrq a182df8f4e is 2023-03-09 04:33:12 +07:00
mrq a01eb10960 (try to) unload voicefixer if it raises an error while loading 2023-03-09 04:28:14 +07:00
mrq dc1902b91c clean up the block that computes embedding latents for random/microphone, remove built-in voice options from the voice list to avoid duplicates 2023-03-09 04:23:36 +07:00
mrq 797882336b maybe remedy an issue that crops up if you have a non-wav and non-json file in a results folder (assuming) 2023-03-09 04:06:07 +07:00
mrq b64948d966 while I'm breaking things, migrating dependencies to modules folder for tidiness 2023-03-09 04:03:57 +07:00
mrq 3b4f4500d1 when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another 2023-03-09 03:26:18 +07:00
mrq ef75dba995 I hate that trailing commas make tuples 2023-03-09 02:43:05 +07:00
mrq f795dd5c20 you might be wondering why so many small commits instead of rolling the HEAD back one to just combine them: i don't want to force push and roll back the paperspace i'm testing in 2023-03-09 02:31:32 +07:00
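
    The gotcha that commit is venting about: in Python, a stray trailing comma silently turns a value into a one-element tuple (values here are illustrative):

        path = "./voices/mine"     # str
        oops = "./voices/mine",    # tuple, not str, thanks to the trailing comma
        assert isinstance(oops, tuple)
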
mrq 51339671ec typo 2023-03-09 02:29:08 +07:00
mrq 1b18b3e335 forgot to save the simplified training input json first before touching any of the settings that dump to the yaml 2023-03-09 02:27:20 +07:00
mrq 221ac38b32 forgot to update to finetune subdir 2023-03-09 02:25:32 +07:00
mrq 0e80e311b0 added VRAM validation for a given batch:gradient accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB, would be nice to have more data on what's safe) 2023-03-09 02:08:06 +07:00
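
    A rough sketch of what such a validation could look like; the threshold constant and names are illustrative, not the repo's empirically derived values:

        def validate_vram_ratio(batch_size, grad_accum, vram_gib, safe_per_gib=2):
            """Reject a batch:gradient-accumulation ratio that likely won't fit in VRAM.
            safe_per_gib is an illustrative constant, not the real empirical one."""
            resident = batch_size / grad_accum  # samples actually held in VRAM per pass
            if resident > vram_gib * safe_per_gib:
                raise ValueError(
                    f"{resident:.0f} samples per pass likely exceeds {vram_gib}GiB; "
                    "raise the gradient accumulation factor or lower the batch size"
                )
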
mrq ef7b957fff oops 2023-03-09 00:53:00 +07:00
mrq b0baa1909a forgot template 2023-03-09 00:32:35 +07:00
mrq 3f321fe664 big cleanup to make my life easier when i add more parameters 2023-03-09 00:26:47 +07:00
mrq 0ab091e7ff oops 2023-03-08 16:09:29 +07:00
mrq 34dcb845b5 actually make using adamw_zero optimizer for multi-gpus work 2023-03-08 15:31:33 +07:00
mrq 8494628f3c normalize validation batch size because i oom'd without it getting scaled 2023-03-08 05:27:20 +07:00
mrq d7e75a51cf I forgot about the changelog and never kept up with it, so I'll just not use a changelog 2023-03-08 05:14:50 +07:00
mrq ff07f707cb disable validation if validation dataset not found, clamp validation batch size to validation dataset size instead of simply reusing batch size, switch to adamw_zero optimizer when training with multi-gpus (because the yaml comment said to and I think it might be why I'm absolutely having garbage luck training this japanese dataset) 2023-03-08 04:47:05 +07:00
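
    The clamping described here presumably reduces to something like this sketch (names hypothetical):

        def validation_batch_size(train_bs, val_dataset_size):
            if val_dataset_size == 0:
                return None  # no validation dataset found: disable validation
            # don't blindly reuse the training batch size; clamp to the dataset
            return min(train_bs, val_dataset_size)
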
mrq f1788a5639 lazy wrap around the voicefixer block because sometimes it just an heros itself despite having a specific block to load it beforehand 2023-03-08 04:12:22 +07:00
mrq 83b5125854 fixed notebooks, provided paperspace notebook 2023-03-08 03:29:12 +07:00
mrq b4098dca73 made validation work (will document later) 2023-03-08 02:58:00 +07:00
mrq a7e0dc9127 oops 2023-03-08 00:51:51 +07:00
mrq e862169e7f set validation rate to the save rate, and use the validation file if it exists (need to test later) 2023-03-07 20:38:31 +07:00
mrq fe8bf7a9d1 added helper script to cull short enough lines from training set as a validation set (if it yields good results doing validation during training, i'll add it to the web ui) 2023-03-07 20:16:49 +07:00
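
    A minimal sketch of such a culling script, assuming the usual path|text line format of a training list (filenames and threshold are illustrative):

        # move lines with short transcriptions out of the training list and
        # into a validation list
        THRESHOLD = 12  # characters of text; hypothetical cutoff

        with open("train.txt", encoding="utf-8") as f:
            lines = f.readlines()

        keep = [l for l in lines if len(l.split("|", 1)[-1].strip()) > THRESHOLD]
        cull = [l for l in lines if len(l.split("|", 1)[-1].strip()) <= THRESHOLD]

        with open("train.txt", "w", encoding="utf-8") as f:
            f.writelines(keep)
        with open("validation.txt", "w", encoding="utf-8") as f:
            f.writelines(cull)
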
mrq 7f89e8058a fixed update checker for dlas+tortoise-tts 2023-03-07 19:33:56 +07:00
mrq 6d7e143f53 added override for large training plots 2023-03-07 19:29:09 +07:00
mrq 3718e9d0fb set NaN alarm to show the iteration it happened at 2023-03-07 19:22:11 +07:00
mrq c27ee3ce95 added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same 2023-03-07 17:04:45 +07:00
mrq 166d491a98 fixes 2023-03-07 13:40:41 +07:00
mrq df5ba634c0 brain dead 2023-03-07 05:43:26 +07:00
mrq 2726d98ee1 fried my brain trying to nail down bugs involving solely using AR model=auto 2023-03-07 05:35:21 +07:00
mrq d7a5ad9fd9 cleaned up some model loading logic, added 'auto' mode for AR model (deduced by current voice) 2023-03-07 04:34:39 +07:00
mrq 3899f9b4e3 added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size then computing them, should help avoid splitting mid-phoneme) 2023-03-07 03:55:35 +07:00
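
    Padding clips to a common length before computing latents could look roughly like this (a sketch, not the repo's implementation):

        import torch
        import torch.nn.functional as F

        def pad_to_common_size(clips):
            """clips: list of 1-D audio tensors; right-pad each to the longest
            so whole files feed the latent computation, instead of fixed-size
            chunks that can split mid-phoneme."""
            longest = max(c.shape[-1] for c in clips)
            return torch.stack([F.pad(c, (0, longest - c.shape[-1])) for c in clips])
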
mrq 5063728bb0 brain worms and headaches 2023-03-07 03:01:02 +07:00
mrq 0f31c34120 download dvae.pth for the people who somehow managed to put the web UI into a state where it never initializes TTS at all 2023-03-07 02:47:10 +07:00
mrq 0f0b394445 moved the (actually not working) setting to use BigVGAN into a dropdown for selecting between vocoders (for when future ones get slotted in), and added the ability to load a new vocoder while TTS is loaded 2023-03-07 02:45:22 +07:00
mrq e731b9ba84 reworked generating metadata to embed; should now store overridden settings 2023-03-06 23:07:16 +07:00
mrq 7798767fc6 added settings editing (will add a guide on what to do later, and an example) 2023-03-06 21:48:34 +07:00
mrq 119ac50c58 forgot to re-append the existing transcription when skipping existing (have to go back again and do the first 10% of my giant dataset) 2023-03-06 16:50:55 +07:00
mrq 12c51b6057 I'm not too sure if manually invoking gc actually closes all the open files from whisperx (or ROCm), but it seems to have gone away alongside setting 'ulimit -Sn' to half the output of 'ulimit -Hn' 2023-03-06 16:39:37 +07:00
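
    The same limit bump can be done from Python itself; a sketch mirroring ulimit -Sn $(($(ulimit -Hn) / 2)):

        import resource

        # raise the soft open-file limit to half the hard limit, in-process
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard // 2, hard))
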
mrq 999878d9c6 and it turned out I wasn't even using the aligned segments, kmsing now that I have to *redo* my dataset again 2023-03-06 11:01:33 +07:00
mrq 14779a5020 Added option to skip transcribing if it exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm because it does not close some file descriptors if you're batch-transcribing or something; poor little me, who's retranscribing his japanese dataset for the 305823042th time, woke up to it partially done. i am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done 2023-03-06 10:47:06 +07:00
mrq 0e3bbc55f8 added api_name for generation, added whisperx backend, relocated use whispercpp option to whisper backend list 2023-03-06 05:21:33 +07:00
mrq 788a957f79 stretch loss plot to target iteration just so it's not so misleading with the scale 2023-03-06 00:44:29 +07:00
mrq 5be14abc21 UI cleanup, actually fix syncing the epoch counter (i hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration, signal when a NaN info value is detected (there's some safeties in the training, but it will inevitably fuck the model) 2023-03-05 23:55:27 +07:00
mrq 287738a338 (should) fix reported epoch metric desyncing from defacto metric, fixed finding next milestone from wrong sign because of 2AM brain 2023-03-05 20:42:45 +07:00
mrq 206a14fdbe brainworms 2023-03-05 20:30:27 +07:00
mrq b82961ba8a typo 2023-03-05 20:13:39 +07:00
mrq b2e89d8da3 oops 2023-03-05 19:58:15 +07:00
mrq 8094401a6d print in e-notation for LR 2023-03-05 19:48:24 +07:00
mrq 8b9c9e1bbf remove redundant stats, add showing LR 2023-03-05 18:53:12 +07:00
mrq 0231550287 forgot to remove a debug print 2023-03-05 18:27:16 +07:00
mrq d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio checks for silent segments and discards them 2023-03-05 17:54:36 +07:00
mrq b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log 2023-03-05 14:39:24 +07:00
mrq 35225a35da oops v2 2023-03-05 14:19:41 +07:00
mrq b5e9899bbf 5 hour sleep brained 2023-03-05 13:37:05 +07:00
mrq cd8702ab0d oops 2023-03-05 13:24:07 +07:00
mrq d312019d05 reordered things so it uses fresh data and not last-updated data 2023-03-05 07:37:27 +07:00
mrq ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr); very, very inaccurate because it uses the instantaneous delta lr, I'll need to do a Riemann sum later 2023-03-05 06:45:07 +07:00
mrq 1316331be3 forgot to have it try to auto-detect the language for openai/whisper when none is specified 2023-03-05 05:22:35 +07:00
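
    A sketch of the instantaneous-delta estimate being described, including why it is inaccurate (function and variable names are hypothetical):

        def iterations_until(target_lr, lr_log):
            """lr_log: list of (iteration, lr) samples from the training output.
            Extrapolates from the *instantaneous* delta between the last two
            samples, which is exactly why the estimate is so inaccurate;
            accumulating deltas across the whole log (Riemann-sum style)
            would behave better."""
            (i0, lr0), (i1, lr1) = lr_log[-2], lr_log[-1]
            delta = (lr1 - lr0) / (i1 - i0)  # LR change per iteration, right now
            if delta >= 0:
                return None  # LR isn't decaying, nothing to extrapolate
            return i1 + (target_lr - lr1) / delta
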
mrq 3e220ed306 added option to set worker size in training config generator (because the default is overkill), for whisper transcriptions, load a specialized language model if it exists (for now, only english), output transcription to web UI when done transcribing 2023-03-05 05:17:19 +07:00
mrq 37cab14272 use torchrun instead for multigpu 2023-03-04 20:53:00 +07:00
mrq 5026d93ecd sloppy fix to actually kill child processes when using multi-GPU distributed training, set GPU training count automatically based on what CUDA exposes so I don't have to keep setting it to 2 2023-03-04 20:42:54 +07:00
mrq 1a9d159b2a forgot to add the 'bs / gradient accum < 2' clamp validation logic 2023-03-04 17:37:08 +07:00
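
    The automatic GPU count detection mentioned here presumably boils down to something like:

        import torch

        # take the GPU count from what CUDA actually exposes instead of hardcoding 2
        gpus = torch.cuda.device_count() if torch.cuda.is_available() else 1
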
mrq df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) 2023-03-04 15:55:06 +07:00
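
    For clarity, the arithmetic behind gradient accumulation (numbers illustrative):

        # with batch_size = 128 and gradient_accumulation_factor = 4, each
        # forward/backward pass holds only 128 / 4 = 32 samples in VRAM;
        # gradients accumulate across the 4 micro-batches before one optimizer
        # step, so the effective batch size is still 128
        batch_size = 128
        grad_accum_factor = 4
        micro_batch = batch_size // grad_accum_factor  # 32
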
mrq 6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) 2023-03-04 04:41:56 +07:00
mrq 6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider 2023-03-03 21:13:48 +07:00
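
    The heuristic, as described, reduces to roughly this sketch (the function and parameter names are made up):

        import math

        def suggested_voice_chunks(total_duration_sec, division_factor=10.0):
            # one chunk per `division_factor` seconds of total voice audio;
            # the slider mentioned in the commit adjusts division_factor
            return max(1, math.ceil(total_duration_sec / division_factor))
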
mrq e1f3ffa08c oops 2023-03-03 18:51:33 +07:00
mrq 9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms 2023-03-03 07:23:10 +07:00
mrq 740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts 2023-03-03 06:39:37 +07:00
mrq 68f4858ce9 oops 2023-03-03 05:51:17 +07:00
mrq e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) 2023-03-03 04:37:18 +07:00
mrq c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh 2023-03-02 01:35:12 +07:00
mrq 534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models 2023-03-02 00:46:52 +07:00
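
    Keying a latents cache on a model hash could look like this sketch (the hashing scheme and filename are illustrative, not necessarily what the repo does):

        import hashlib
        import os

        def model_hash(model_path, chunk=1 << 20):
            # digest of the model file, used as the cache key
            h = hashlib.sha256()
            with open(model_path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()[:8]

        def latents_path(voice_dir, model_path):
            # e.g. ./voices/mine/cond_latents_1a2b3c4d.pth
            return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
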
mrq 5a41db978e oops 2023-03-01 19:39:43 +07:00
mrq b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) 2023-03-01 19:32:11 +07:00
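
    Tensorboard event files can be parsed offline with its EventAccumulator; a minimal sketch (the log directory path is illustrative):

        from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

        acc = EventAccumulator("./training/finetune/tb_logger")
        acc.Reload()
        for tag in acc.Tags()["scalars"]:
            events = acc.Scalars(tag)  # each event carries .step, .wall_time, .value
            print(tag, [(e.step, e.value) for e in events[:3]])
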
mrq c2726fa0d4 added new training tunable: loss_text_ce loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice; need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier 2023-03-01 01:17:38 +07:00
mrq 5037752059 oops 2023-02-28 22:13:21 +07:00
mrq 787b44807a added to embedded metadata: datetime, model path, model hash 2023-02-28 15:36:06 +07:00
mrq 81eb58f0d6 show different losses, rewordings 2023-02-28 06:18:18 +07:00
mrq fda47156ec oops 2023-02-28 01:08:07 +07:00
mrq bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else 2023-02-28 01:01:50 +07:00
mrq 6925ec731b I don't remember. 2023-02-27 19:20:06 +07:00
mrq 92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card), saves use of float16 and bitsandbytes for training into the config json 2023-02-26 01:57:56 +07:00
mrq aafeb9f96a actually fixed the training output text parser 2023-02-25 16:44:25 +07:00
mrq 65329dba31 oops, epoch increments twice 2023-02-25 15:31:18 +07:00
mrq 8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) 2023-02-25 13:55:25 +07:00
mrq d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small bits of free time 2023-02-24 23:13:13 +07:00
mrq f31ea9d5bc oops 2023-02-24 16:23:30 +07:00
mrq 2104dbdbc5 oops 2023-02-24 13:05:08 +07:00
mrq f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them 2023-02-24 12:58:41 +07:00
mrq 1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), and some other cleanups; i genuinely don't remember what I did when I spaced out for 20 minutes 2023-02-23 23:22:23 +07:00
mrq 7d1220e83e forgot to mult by batch size 2023-02-23 15:38:04 +07:00
mrq 487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps 2023-02-23 15:31:43 +07:00
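
    The arithmetic these two commits untangle, with illustrative numbers (the 19420 dataset size appears in a commit further down; batch size and epoch count are made up):

        import math

        dataset_size = 19420
        batch_size = 128
        steps_per_epoch = math.ceil(dataset_size / batch_size)  # 152 iterations per epoch
        epochs = 500
        total_iterations = epochs * steps_per_epoch             # 76000
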
mrq 1cbcf14cff oops 2023-02-23 13:18:51 +07:00
mrq 41fca1a101 ugh 2023-02-23 07:20:40 +07:00
mrq 941a27d2b3 removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module 2023-02-23 07:05:39 +07:00