Commit Graph

  • 3b4f4500d1 when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another mrq 2023-03-09 03:26:18 +0000
  • ef75dba995 I hate that commas make tuples mrq 2023-03-09 02:43:05 +0000
  • f795dd5c20 you might be wondering why there are so many small commits instead of rolling HEAD back one to combine them: I don't want to force push and roll back the Paperspace instance I'm testing on mrq 2023-03-09 02:31:32 +0000
  • 51339671ec typo mrq 2023-03-09 02:29:08 +0000
  • 1b18b3e335 forgot to save the simplified training input json first before touching any of the settings that dump to the yaml mrq 2023-03-09 02:27:20 +0000
  • 221ac38b32 forgot to update to finetune subdir mrq 2023-03-09 02:25:32 +0000
  • 0e80e311b0 added VRAM validation for a given batch:gradient accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB; it would be nice to have more data on what's safe) mrq 2023-03-09 02:08:06 +0000
  • ef7b957fff oops mrq 2023-03-09 00:53:00 +0000
  • b0baa1909a forgot template mrq 2023-03-09 00:32:35 +0000
  • 3f321fe664 big cleanup to make my life easier when i add more parameters mrq 2023-03-09 00:26:47 +0000
  • 0ab091e7ff oops mrq 2023-03-08 16:09:29 +0000
  • 40e8d0774e share if you mrq 2023-03-08 15:59:16 +0000
  • d58b67004a colab notebook uses venv and normal scripts to keep it on parity with a local install (and it literally just works; stop creating issues for something inconsistent with known solutions) mrq 2023-03-08 15:51:13 +0000
  • 34dcb845b5 actually make using the adamw_zero optimizer for multi-GPU work mrq 2023-03-08 15:31:33 +0000
  • 8494628f3c normalize validation batch size because I OOM'd without it getting scaled mrq 2023-03-08 05:27:20 +0000
  • d7e75a51cf I forgot about the changelog and never kept up with it, so I'll just not use a changelog mrq 2023-03-08 05:14:50 +0000
  • ff07f707cb disable validation if the validation dataset is not found, clamp validation batch size to the validation dataset size instead of simply reusing the batch size (see the sketch after this list), switch to the adamw_zero optimizer when training with multiple GPUs (because the YAML comment said to, and I think it might be why I'm having absolutely garbage luck training this Japanese dataset) mrq 2023-03-08 04:47:05 +0000
  • f1788a5639 lazy wrap around the voicefixer block because sometimes it just kills itself despite having a specific block to load it beforehand mrq 2023-03-08 04:12:22 +0000
  • 83b5125854 fixed notebooks, provided paperspace notebook mrq 2023-03-08 03:29:12 +0000
  • b4098dca73 got validation working (will document later) mrq 2023-03-08 02:58:00 +0000
  • a7e0dc9127 oops mrq 2023-03-08 00:51:51 +0000
  • e862169e7f set validation rate to the save rate, and use the validation file if it exists (need to test later) mrq 2023-03-07 20:38:31 +0000
  • fe8bf7a9d1 added helper script to cull sufficiently short lines from the training set as a validation set (see the sketch after this list); if it yields good results doing validation during training, I'll add it to the web UI mrq 2023-03-07 20:16:49 +0000
  • 7f89e8058a fixed update checker for dlas+tortoise-tts mrq 2023-03-07 19:33:56 +0000
  • 6d7e143f53 added override for large training plots mrq 2023-03-07 19:29:09 +0000
  • 3718e9d0fb set NaN alarm to show the iteration it happened at mrq 2023-03-07 19:22:11 +0000
  • c27ee3ce95 added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same mrq 2023-03-07 17:04:45 +0000
  • 166d491a98 fixes mrq 2023-03-07 13:40:41 +0000
  • df5ba634c0 brain dead mrq 2023-03-07 05:43:26 +0000
  • 2726d98ee1 fried my brain trying to nail down bugs involving using solely AR model=auto mrq 2023-03-07 05:35:21 +0000
  • d7a5ad9fd9 cleaned up some model loading logic, added 'auto' mode for AR model (deduced by current voice) mrq 2023-03-07 04:34:39 +0000
  • 3899f9b4e3 added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size then computing them; should help avoid splitting mid-phoneme) mrq 2023-03-07 03:55:35 +0000
  • 5063728bb0 brain worms and headaches mrq 2023-03-07 03:01:02 +0000
  • 0f31c34120 download dvae.pth for the people who somehow managed to put the web UI into a state where it never initializes TTS at all mrq 2023-03-07 02:47:10 +0000
  • 0f0b394445 moved the (actually not working) setting to use BigVGAN to a dropdown for selecting between vocoders (for when slotting in future ones), and added the ability to load a new vocoder while TTS is loaded mrq 2023-03-07 02:45:22 +0000
  • e731b9ba84 reworked generating metadata to embed, should now store overridden settings mrq 2023-03-06 23:07:16 +0000
  • 7798767fc6 added settings editing (will add a guide on what to do later, and an example) mrq 2023-03-06 21:48:34 +0000
  • 119ac50c58 forgot to re-append the existing transcription when skipping existing ones (have to go back again and do the first 10% of my giant dataset) mrq 2023-03-06 16:50:55 +0000
  • da0af4c498 one more mrq 2023-03-06 16:47:34 +0000
  • 11a1f6a00e forgot to reorder the dependency install because whisperx needs to be installed before DLAS mrq 2023-03-06 16:43:17 +0000
  • 12c51b6057 I'm not too sure if manually invoking gc actually closes all the open files from whisperx (or ROCm), but it seems to have gone away alongside setting 'ulimit -Sn' to half the output of 'ulimit -Hn' (see the sketch after this list) mrq 2023-03-06 16:39:37 +0000
  • 999878d9c6 and it turned out I wasn't even using the aligned segments, kmsing now that I have to *redo* my dataset again mrq 2023-03-06 11:01:33 +0000
  • 14779a5020 Added option to skip transcribing if it already exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm, since it does not close some file descriptors when batch-transcribing or something. So poor little me, who's retranscribing his Japanese dataset for the 305823042th time, woke up to it partially done; I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done mrq 2023-03-06 10:47:06 +0000
  • 0e3bbc55f8 added api_name for generation, added whisperx backend, relocated use whispercpp option to whisper backend list mrq 2023-03-06 05:21:33 +0000
  • 1e2436aac9 Update 'src/utils.py' yqxtqymn 2023-03-06 02:04:19 +0000
  • f657f30e2b Update 'src/utils.py' yqxtqymn 2023-03-06 01:59:58 +0000
  • 4f123910fb Update 'src/webui.py' yqxtqymn 2023-03-06 01:59:42 +0000
  • 9ca5192309 Update 'src/utils.py' yqxtqymn 2023-03-06 00:47:56 +0000
  • 079cd32074 Update 'requirements.txt' yqxtqymn 2023-03-06 00:47:03 +0000
  • 788a957f79 stretch loss plot to target iteration just so it's not so misleading with the scale mrq 2023-03-06 00:44:29 +0000
  • 5be14abc21 UI cleanup, actually fix syncing the epoch counter (I hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration length, signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model) mrq 2023-03-05 23:55:27 +0000
  • 287738a338 (should) fix reported epoch metric desyncing from the de facto metric, fixed finding the next milestone from the wrong sign because of 2AM brain mrq 2023-03-05 20:42:45 +0000
  • 206a14fdbe brainworms mrq 2023-03-05 20:30:27 +0000
  • b82961ba8a typo mrq 2023-03-05 20:13:39 +0000
  • b2e89d8da3 oops mrq 2023-03-05 19:58:15 +0000
  • 8094401a6d print LR in e-notation mrq 2023-03-05 19:48:24 +0000
  • 8b9c9e1bbf remove redundant stats, add showing LR mrq 2023-03-05 18:53:12 +0000
  • 0231550287 forgot to remove a debug print mrq 2023-03-05 18:27:16 +0000
  • d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio checks for silence and discards it mrq 2023-03-05 17:54:36 +0000
  • b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log mrq 2023-03-05 14:39:24 +0000
  • 35225a35da oops v2 mrq 2023-03-05 14:19:41 +0000
  • b5e9899bbf 5 hour sleep brained mrq 2023-03-05 13:37:05 +0000
  • cd8702ab0d oops mrq 2023-03-05 13:24:07 +0000
  • d312019d05 reordered things so it uses fresh data and not last-updated data mrq 2023-03-05 07:37:27 +0000
  • ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr); very, very inaccurate because it uses the instantaneous delta lr, I'll need to do a Riemann sum later (see the sketch after this list) mrq 2023-03-05 06:45:07 +0000
  • 1316331be3 forgot to have it try to auto-detect the language for openai/whisper when no language is specified mrq 2023-03-05 05:22:35 +0000
  • 3e220ed306 added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if it exists (for now, only English); output the transcription to the web UI when done transcribing mrq 2023-03-05 05:17:19 +0000
  • 37cab14272 use torchrun instead for multi-GPU mrq 2023-03-04 20:53:00 +0000
  • 5026d93ecd sloppy fix to actually kill child processes when using multi-GPU distributed training, set GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2 (see the sketch after this list) mrq 2023-03-04 20:42:54 +0000
  • 1a9d159b2a forgot to add the 'bs / gradient accum < 2' clamp validation logic mrq 2023-03-04 17:37:08 +0000
  • df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) mrq 2023-03-04 15:55:06 +0000
  • 6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) mrq 2023-03-04 04:41:56 +0000
  • 6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds (see the sketch after this list); added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider mrq 2023-03-03 21:13:48 +0000
  • 07163644dd Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master mrq 2023-03-03 19:32:38 +0000
  • e1f3ffa08c oops mrq 2023-03-03 18:51:33 +0000
  • 5487c28683 Added optional whispercpp update functionality lightmare 2023-03-03 18:34:49 +0000
  • 9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms mrq 2023-03-03 07:23:10 +0000
  • 740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts mrq 2023-03-03 06:39:37 +0000
  • 68f4858ce9 oops mrq 2023-03-03 05:51:17 +0000
  • e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) mrq 2023-03-03 04:37:18 +0000
  • e205322c8d added setup script for bitsandbytes-rocm (soon: multi-gpu testing, because I am finally making use of my mispurchased second 6800XT) mrq 2023-03-03 02:58:34 +0000
  • 59773a7637 just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow mrq 2023-03-02 03:04:11 +0000
  • c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh mrq 2023-03-02 01:35:12 +0000
  • 534a761e49 added loading/saving of voice latents by model hash, so there's no more needing to manually regenerate them every time you change models (see the sketch after this list) mrq 2023-03-02 00:46:52 +0000
  • 5a41db978e oops mrq 2023-03-01 19:39:43 +0000
  • b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) mrq 2023-03-01 19:32:11 +0000
  • c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier mrq 2023-03-01 01:17:38 +0000
  • 5037752059 oops mrq 2023-02-28 22:13:21 +0000
  • 787b44807a added to embedded metadata: datetime, model path, model hash mrq 2023-02-28 15:36:06 +0000
  • 81eb58f0d6 show different losses, rewordings mrq 2023-02-28 06:18:18 +0000
  • fda47156ec oops mrq 2023-02-28 01:08:07 +0000
  • bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else mrq 2023-02-28 01:01:50 +0000
  • 6925ec731b I don't remember. mrq 2023-02-27 19:20:06 +0000
  • 47abde224c compat with python3.10+ finally (and maybe a small perf uplift with using cu117) mrq 2023-02-26 17:46:57 +0000
  • 92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card; see the sketch after this list), saves use of float16 and bitsandbytes for training into the config JSON mrq 2023-02-26 01:57:56 +0000
  • aafeb9f96a actually fixed the training output text parser mrq 2023-02-25 16:44:25 +0000
  • 65329dba31 oops, epoch increments twice mrq 2023-02-25 15:31:18 +0000
  • 8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) mrq 2023-02-25 13:55:25 +0000
  • d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, and some other thing that I forgot since it's been slowly worked on in my small bits of free time mrq 2023-02-24 23:13:13 +0000
  • e5e16bc5b5 updating gitmodules to latest commits mrq 2023-02-24 19:32:18 +0000
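
A minimal sketch of the validation clamping from ff07f707cb, assuming a line-per-sample validation text file; the helper name and return shape are invented for illustration and are not the repo's actual code.

```python
import os

def get_validation_settings(train_batch_size: int, validation_path: str) -> dict:
    """Decide whether to run validation and with what batch size
    (hypothetical helper, illustrating ff07f707cb)."""
    if not os.path.exists(validation_path):
        # no validation dataset found: disable validation outright
        return {"enabled": False, "batch_size": 0}

    with open(validation_path, encoding="utf-8") as f:
        samples = [line for line in f if line.strip()]

    # clamp to the dataset size instead of blindly reusing the training batch size
    return {"enabled": True, "batch_size": max(1, min(train_batch_size, len(samples)))}
```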
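
The culling helper from fe8bf7a9d1 presumably works along these lines; this sketch assumes an LJSpeech-style `path|transcription` dataset file and a character-count threshold, both of which are placeholders rather than the script's real criteria.

```python
def cull_validation_lines(train_txt: str, val_txt: str, max_chars: int = 80) -> None:
    """Move lines whose transcription is short enough out of the training
    list and into a validation list (paths and threshold are placeholders)."""
    keep, culled = [], []
    with open(train_txt, encoding="utf-8") as f:
        for line in f:
            if "|" not in line:
                keep.append(line)  # leave malformed lines where they are
                continue
            _, text = line.rstrip("\n").split("|", 1)
            (culled if len(text) <= max_chars else keep).append(line)

    with open(train_txt, "w", encoding="utf-8") as f:
        f.writelines(keep)
    with open(val_txt, "w", encoding="utf-8") as f:
        f.writelines(culled)
```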
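
The `ulimit` workaround from 12c51b6057 can also be applied from inside a Python process on Linux via the standard-library `resource` module; this is a generic equivalent of the shell commands, not code from the repo, and it assumes the hard limit is finite.

```python
import resource

# Mirror `ulimit -Sn $(( $(ulimit -Hn) / 2 ))`: set the soft open-files
# limit to half of the hard limit, which works around whisperx/ROCm
# leaving file descriptors open during batch transcription.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard // 2, hard))
```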
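
A rough illustration of the milestone estimate from ce3866d0cd, with the refinement from b8a620e8d7 of averaging over the second half of the log instead of relying on the last instantaneous delta; the function signature and data layout are assumptions made for the example.

```python
def estimate_iterations_until(values, iterations, target):
    """Estimate how many more iterations until a decreasing metric
    (loss, LR, ...) reaches `target`, by extrapolating the average
    slope over the second half of the logged history.

    values, iterations: parallel lists, oldest entry first.
    Returns None when no sensible estimate can be made.
    """
    if values and values[-1] <= target:
        return 0  # already at or past the milestone
    if len(values) < 3:
        return None  # not enough history to extrapolate

    half = len(values) // 2
    span = iterations[-1] - iterations[half]
    if span <= 0:
        return None

    slope = (values[-1] - values[half]) / span  # change per iteration
    if slope >= 0:
        return None  # metric is not decreasing

    return int((target - values[-1]) / slope)
```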
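
For the multi-GPU launch changes in 37cab14272 and 5026d93ecd, the process count can be taken from whatever CUDA exposes and handed to torchrun; the training-script name and its `--yaml` flag below are placeholders, not necessarily the repo's actual invocation.

```python
import subprocess

import torch

def launch_training(config_path: str) -> subprocess.Popen:
    """Launch training with one process per visible GPU (sketch)."""
    gpus = torch.cuda.device_count()  # respects CUDA_VISIBLE_DEVICES
    if gpus > 1:
        cmd = ["torchrun", f"--nproc_per_node={gpus}", "train.py", "--yaml", config_path]
    else:
        cmd = ["python", "train.py", "--yaml", config_path]
    # start_new_session puts the workers in their own process group, so the
    # whole tree can be signalled when training is halted from the UI
    return subprocess.Popen(cmd, start_new_session=True)
```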
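
The auto-suggestion from 6d8c2dd459 boils down to dividing the combined duration of the voice clips by an adjustable factor (10 seconds by default); this sketch assumes the `soundfile` package and a flat directory of WAV files, which may not match the repo's layout.

```python
from pathlib import Path

import soundfile as sf

def suggest_chunk_count(voice_dir: str, divisor: float = 10.0) -> int:
    """Suggest how many chunks to split a voice's samples into:
    total duration in seconds divided by `divisor` (illustrative)."""
    total_seconds = sum(
        sf.info(str(path)).duration for path in Path(voice_dir).glob("*.wav")
    )
    return max(1, int(total_seconds / divisor))
```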
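
For the latent caching in 534a761e49, keying the saved latents by a hash of the current model file is enough to avoid regenerating them after switching models; the file naming and the choice of SHA-256 here are assumptions, not the repo's actual scheme.

```python
import hashlib
import os

import torch

def model_hash(model_path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the checkpoint incrementally so large files aren't read at once."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()[:8]

def load_or_compute_latents(voice_dir: str, model_path: str, compute_fn):
    """Reuse latents computed for this exact model, otherwise compute and cache."""
    path = os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
    if os.path.exists(path):
        return torch.load(path, map_location="cpu")
    latents = compute_fn()  # whatever produces the conditioning latents
    torch.save(latents, path)
    return latents
```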
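
The hardware check implied by 92553973be can be approximated with torch's device-capability query; treating compute capability 7.5 (Turing) as the cutoff is an assumption for this sketch, not necessarily what the repo tests.

```python
import torch

def bitsandbytes_supported() -> bool:
    """Heuristic: only allow bitsandbytes on Turing-or-newer NVIDIA GPUs."""
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability(0) >= (7, 5)
```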