Commit Graph

  • b82961ba8a typo mrq 2023-03-05 20:13:39 +0000
  • b2e89d8da3 oops mrq 2023-03-05 19:58:15 +0000
  • 8094401a6d print in e-notation for LR mrq 2023-03-05 19:48:24 +0000
  • 8b9c9e1bbf remove redundant stats, add showing LR mrq 2023-03-05 18:53:12 +0000
  • 0231550287 forgot to remove a debug print mrq 2023-03-05 18:27:16 +0000
  • d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio now checks for silence and discards silent segments (sketch below) mrq 2023-03-05 17:54:36 +0000
  • b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log mrq 2023-03-05 14:39:24 +0000
  • 35225a35da oops v2 mrq 2023-03-05 14:19:41 +0000
  • b5e9899bbf 5 hour sleep brained mrq 2023-03-05 13:37:05 +0000
  • cd8702ab0d oops mrq 2023-03-05 13:24:07 +0000
  • d312019d05 reordered things so it uses fresh data and not last-updated data mrq 2023-03-05 07:37:27 +0000
  • ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1]) and the final LR; very, very inaccurate because it uses the instantaneous delta LR, I'll need to do a Riemann sum later (sketch below) mrq 2023-03-05 06:45:07 +0000
  • 1316331be3 forgot to have it try to auto-detect the language for openai/whisper when no language is specified mrq 2023-03-05 05:22:35 +0000
  • 3e220ed306 added option to set worker size in training config generator (because the default is overkill), for whisper transcriptions, load a specialized language model if it exists (for now, only english), output transcription to web UI when done transcribing mrq 2023-03-05 05:17:19 +0000
  • 0e7598bb59 Update 'src/webui.py' ethanfel 2023-03-05 00:12:45 +0000
  • e13462b5c4 Update 'src/webui.py' ethanfel 2023-03-05 00:06:00 +0000
  • 37cab14272 use torchrun instead for multigpu mrq 2023-03-04 20:53:00 +0000
  • 5026d93ecd sloppy fix to actually kill children when using multi-GPU distributed training, set GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2 mrq 2023-03-04 20:42:54 +0000
  • 1a9d159b2a forgot to add the 'bs / gradient accum < 2' clamp validation logic mrq 2023-03-04 17:37:08 +0000
  • df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) mrq 2023-03-04 15:55:06 +0000
  • 6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) mrq 2023-03-04 04:41:56 +0000
  • 6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider (sketch below) mrq 2023-03-03 21:13:48 +0000
  • 07163644dd Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master mrq 2023-03-03 19:32:38 +0000
  • e1f3ffa08c oops mrq 2023-03-03 18:51:33 +0000
  • 5487c28683 Added optional whispercpp update functionality lightmare 2023-03-03 18:34:49 +0000
  • 9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms mrq 2023-03-03 07:23:10 +0000
  • 740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts mrq 2023-03-03 06:39:37 +0000
  • 68f4858ce9 oops mrq 2023-03-03 05:51:17 +0000
  • e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) mrq 2023-03-03 04:37:18 +0000
  • e205322c8d added setup script for bitsandbytes-rocm (soon: multi-gpu testing, because I am finally making use of my mispurchased second 6800XT) mrq 2023-03-03 02:58:34 +0000
  • 59773a7637 just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow mrq 2023-03-02 03:04:11 +0000
  • c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh mrq 2023-03-02 01:35:12 +0000
  • 534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models (sketch below) mrq 2023-03-02 00:46:52 +0000
  • 5a41db978e oops mrq 2023-03-01 19:39:43 +0000
  • b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) (sketch below) mrq 2023-03-01 19:32:11 +0000
  • c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier mrq 2023-03-01 01:17:38 +0000
  • 5037752059 oops mrq 2023-02-28 22:13:21 +0000
  • 787b44807a added to embedded metadata: datetime, model path, model hash mrq 2023-02-28 15:36:06 +0000
  • 81eb58f0d6 show different losses, rewordings mrq 2023-02-28 06:18:18 +0000
  • fda47156ec oops mrq 2023-02-28 01:08:07 +0000
  • bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else mrq 2023-02-28 01:01:50 +0000
  • 6925ec731b I don't remember. mrq 2023-02-27 19:20:06 +0000
  • 47abde224c compat with python3.10+ finally (and maybe a small perf uplift with using cu117) mrq 2023-02-26 17:46:57 +0000
  • 92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card); saves use of float16 and bitsandbytes for training into the config JSON mrq 2023-02-26 01:57:56 +0000
  • aafeb9f96a actually fixed the training output text parser mrq 2023-02-25 16:44:25 +0000
  • 65329dba31 oops, epoch increments twice mrq 2023-02-25 15:31:18 +0000
  • 8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) mrq 2023-02-25 13:55:25 +0000
  • d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small free times mrq 2023-02-24 23:13:13 +0000
  • e5e16bc5b5 updating gitmodules to latest commits mrq 2023-02-24 19:32:18 +0000
  • bedbb893ac clarified import dataset settings button mrq 2023-02-24 16:40:22 +0000
  • f31ea9d5bc oops mrq 2023-02-24 16:23:30 +0000
  • 2104dbdbc5 ops mrq 2023-02-24 13:05:08 +0000
  • f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them mrq 2023-02-24 12:58:41 +0000
  • 1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups I genuinely don't remember because I spaced out for 20 minutes mrq 2023-02-23 23:22:23 +0000
  • 7d1220e83e forgot to mult by batch size mrq 2023-02-23 15:38:04 +0000
  • 487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps mrq 2023-02-23 15:31:43 +0000
  • 1cbcf14cff oops mrq 2023-02-23 13:18:51 +0000
  • 41fca1a101 ugh mrq 2023-02-23 07:20:40 +0000
  • 941a27d2b3 removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module mrq 2023-02-23 07:05:39 +0000
  • 225dee22d4 huge success mrq 2023-02-23 06:24:54 +0000
  • aa96edde2f Updated notebook to put userdata under a dedicated folder (and some safeties to not nuke them if you double run the script like I did, thinking rm -r [symlink] would just remove the symlink) mrq 2023-02-22 15:45:41 +0000
  • 526a430c2a how did this revert... mrq 2023-02-22 13:24:03 +0000
  • 2aa70532e8 added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length, like it should) mrq 2023-02-22 03:31:46 +0000
  • cc47ed7242 kmsing mrq 2023-02-22 03:27:28 +0000
  • 93b061fb4d oops mrq 2023-02-22 03:21:03 +0000
  • c4b41e07fa properly placed the line to extract the starting iteration mrq 2023-02-22 01:17:09 +0000
  • fefc7aba03 oops mrq 2023-02-21 22:13:30 +0000
  • 9e64dad785 clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already mrq 2023-02-21 21:50:05 +0000
  • f119993fb5 explicitly use python3 because some OSs will not have python alias to python3, allow batch size 1 mrq 2023-02-21 20:20:52 +0000
  • 8a1a48f31e Added very experimental float16 training for cards with not enough VRAM (10GiB and below, maybe). !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later mrq 2023-02-21 19:31:57 +0000
  • ed2cf9f5ee wrap checking for metadata when adding a voice in case it throws an error mrq 2023-02-21 17:35:30 +0000
  • b6f7aa6264 fixes mrq 2023-02-21 04:22:11 +0000
  • bbc2d26289 I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and makes sense mrq 2023-02-21 03:00:45 +0000
  • 7d1936adad actually cleaned the notebook mrq 2023-02-20 23:12:53 +0000
  • 1fd88afcca updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet) mrq 2023-02-20 22:56:39 +0000
  • bacac6daea handled paths that contain spaces because python for whatever god forsaken reason will always split on spaces even if wrapping an argument in quotes (sketch below) mrq 2023-02-20 20:23:22 +0000
  • 37ffa60d14 brain worms forgot a global, hate global semantics mrq 2023-02-20 15:31:38 +0000
  • d17f6fafb0 clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it mrq 2023-02-20 00:21:16 +0000
  • c99cacec2e oops mrq 2023-02-19 23:29:12 +0000
  • 109757d56d I forgot submodules existed mrq 2023-02-19 21:41:51 +0000
  • ee95616dfd optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) (sketch below) mrq 2023-02-19 21:06:14 +0000
  • 6260594a1e Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML mrq 2023-02-19 20:38:00 +0000
  • 4694d622f4 doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there (sketch below) mrq 2023-02-19 20:22:03 +0000
  • ec76676b16 i hate gradio I hate having to specify step=1 mrq 2023-02-19 17:12:39 +0000
  • 4f79b3724b Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) mrq 2023-02-19 16:24:06 +0000
  • 092dd7b2d7 added more safeties and parameters to training yaml generator, I think I tested it extensively enough mrq 2023-02-19 16:16:44 +0000
  • f4e82fcf08 I swear I committed forwarding arguments from the start scripts mrq 2023-02-19 15:01:16 +0000
  • 3891870b5d Update notebook to follow the 'other' way of installing mrq/tortoise-tts mrq 2023-02-19 07:22:22 +0000
  • d89b7d60e0 forgot to divide checkpoint freq by iterations to get checkpoint counts mrq 2023-02-19 07:05:11 +0000
  • 485319c2bb don't know what brain worms had me throw printing training output under verbose mrq 2023-02-19 06:28:53 +0000
  • debdf6049a forgot to copy again from dev folder to git folder mrq 2023-02-19 06:04:46 +0000
  • ae5d4023aa fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot the UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) (sketch below) mrq 2023-02-19 06:02:47 +0000
  • ec550d74fd changed setup scripts to just clone mrq/tortoise-tts and install locally, instead of relying on pip's garbage git-integrations mrq 2023-02-19 05:29:01 +0000
  • 57060190af absolutely detest global semantics mrq 2023-02-19 05:12:09 +0000
  • f44239a85a added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update mrq 2023-02-19 05:10:08 +0000
  • e7d0cfaa82 added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training mrq 2023-02-19 05:05:30 +0000
  • 5fcdb19f8b I forgot to make it update the whisper model at runtime mrq 2023-02-19 01:47:06 +0000
  • 47058db67f oops mrq 2023-02-18 20:56:34 +0000
  • fc5b303319 we do a little garbage collection mrq 2023-02-18 20:37:37 +0000
  • 58c981d714 Fix killing a voice generation because I must have broken it during migration mrq 2023-02-18 19:54:21 +0000
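
Sketches

The notes below expand a few of the more mechanical commits into small code sketches. They illustrate the behavior each message describes; they are not the repository's actual code, and every function, variable, and default named here is made up unless the commit message itself names it.

d97639e138: whisper.cpp reports segment timestamps in units of 10 ms, which is why the slicing fix divides times by 100 to get seconds; the same commit discards silent (empty) transcriptions. A minimal sketch, assuming whispercpp-style segment dicts with t0/t1/text keys:

```python
def slice_segments(segments):
    """Convert whispercpp segment times to seconds and drop silent segments."""
    slices = []
    for seg in segments:
        start = seg["t0"] / 100.0  # 10 ms units -> seconds
        end = seg["t1"] / 100.0
        text = seg["text"].strip()
        if not text:  # empty transcription == silence: discard it
            continue
        slices.append((start, end, text))
    return slices
```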
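
ce3866d0cd extrapolates the iterations remaining until each LR milestone from the instantaneous per-iteration delta, which is exactly why the message calls it very inaccurate: a decaying schedule's slope keeps changing, so a Riemann sum over the schedule would track it better. A sketch of the instantaneous version, with hypothetical names:

```python
def estimate_iterations_until(lr_history, target_lr):
    """Extrapolate the iteration at which LR reaches target_lr using only
    the most recent (instantaneous) delta LR, as the commit describes."""
    (it_prev, lr_prev), (it_cur, lr_cur) = lr_history[-2], lr_history[-1]
    delta = (lr_cur - lr_prev) / (it_cur - it_prev)  # instantaneous dLR/dit
    if delta >= 0:
        return None  # LR is not decaying; no meaningful estimate
    return it_cur + (target_lr - lr_cur) / delta
```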
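
6d8c2dd459 computes the suggested voice chunk count as total voice duration over a target length per chunk (10 seconds by default, with the new slider adjusting the divisor). The arithmetic, as a sketch:

```python
import math

def suggest_chunk_count(durations_sec, seconds_per_chunk=10.0):
    """Suggest enough chunks that each covers ~seconds_per_chunk of audio."""
    return max(1, math.ceil(sum(durations_sec) / seconds_per_chunk))
```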
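
534a761e49 keys saved conditioning latents by the loaded model's hash, so switching models invalidates the cache instead of silently reusing latents computed against the wrong model. A sketch; the hash scheme and file layout are assumptions:

```python
import hashlib
import os

def model_hash(model_path, chunk_size=1 << 20):
    """Hash the model file in chunks to avoid loading it wholesale."""
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()[:8]

def latents_path(voice_dir, model_path):
    # one latents file per (voice, model) pair
    return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
```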
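
b989123bd4 parses tb_logger event files when training starts. tensorboard ships an EventAccumulator that reads those files directly; the scalar tag below is borrowed from the loss_gpt_total graph added in bc0d9ab3ed and may not match the actual log tags:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_scalars(logdir, tag="loss_gpt_total"):  # tag name is an assumption
    acc = EventAccumulator(logdir)
    acc.Reload()  # parse the event files on disk
    return [(event.step, event.value) for event in acc.Scalars(tag)]
```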
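
bacac6daea handles paths containing spaces. The standard remedy in Python (not necessarily the one the commit used) is to pass the argument vector as a list so nothing gets re-split by a shell:

```python
import subprocess

def run_tool(executable, input_path):
    # a list argv is handed to the process verbatim, so spaces in
    # input_path survive without any quoting gymnastics
    subprocess.run([executable, input_path], check=True)
```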
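
4694d622f4 bases the training settings on epochs and derives iterations from there, and ee95616dfd nudges the batch size toward a divisor of the dataset size so the derived epoch count matches the requested one. The conversion is just iterations = epochs x (dataset_size / batch_size); a combined sketch:

```python
def epochs_to_iterations(epochs, dataset_size, batch_size):
    """Derive total iterations from an epoch count, shrinking batch_size to
    the nearest divisor of dataset_size so epochs come out exact."""
    while dataset_size % batch_size:  # terminates: 1 divides everything
        batch_size -= 1
    return epochs * (dataset_size // batch_size), batch_size
```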
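
ae5d4023aa works around gradio sometimes handing back None and sometimes "" for an empty Textbox. A defensive normalization in the spirit of that patch, not the actual code:

```python
def coerce_text(value, default=""):
    # gradio (or its JSON round-trip) may yield None or "" for an empty
    # Textbox; treat both as the same empty value
    return default if value is None or value == "" else value
```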