Commit Graph

369 Commits

Author SHA1 Message Date
mrq
1a9d159b2a forgot to add 'bs / gradient accum < 2' clamp validation logic 2023-03-04 17:37:08 +00:00
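The clamp mentioned in this commit guards against a gradient accumulation factor that would leave fewer than two samples per micro-batch. A minimal sketch of that validation, assuming the check works on the batch size and accumulation factor directly (function name and exact clamping rule are illustrative, not the repo's actual code):

```python
def clamp_gradient_accumulation(batch_size: int, grad_accum: int) -> int:
    """Return an accumulation factor that keeps batch_size / grad_accum >= 2."""
    if grad_accum < 1:
        return 1
    if batch_size / grad_accum < 2:
        # largest factor that still leaves at least two samples per micro-batch
        return max(1, batch_size // 2)
    return grad_accum
```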
mrq
df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) 2023-03-04 15:55:06 +00:00
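For context on the rename: a gradient accumulation factor splits one optimizer step's batch into several smaller forward/backward passes whose gradients are summed before stepping, so the effective batch size stays the same while peak memory drops. A generic PyTorch-style sketch of the idea (this is not the project's DLAS trainer; names are illustrative):

```python
import torch

def train_step(model, optimizer, loss_fn, inputs, targets, grad_accum: int):
    """One optimizer step whose batch is split into `grad_accum` micro-batches."""
    optimizer.zero_grad()
    for x, y in zip(torch.chunk(inputs, grad_accum), torch.chunk(targets, grad_accum)):
        # scale each micro-batch loss so the accumulated gradient matches the full batch
        loss = loss_fn(model(x), y) / grad_accum
        loss.backward()
    optimizer.step()
```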
mrq
6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) 2023-03-04 04:41:56 +00:00
mrq
6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider 2023-03-03 21:13:48 +00:00
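The suggestion described in this commit reduces to the total duration of the voice files divided by a slider-controlled factor (10 seconds by default), so each chunk covers roughly that many seconds of audio. A small sketch of the arithmetic, with the helper name and rounding behavior assumed rather than taken from the repo:

```python
import math

def suggest_voice_chunks(durations_seconds, division_factor: float = 10.0) -> int:
    """Suggest a chunk count: total voice duration / division factor, at least 1."""
    total = sum(durations_seconds)
    return max(1, math.ceil(total / division_factor))

# e.g. 95 seconds of audio with the default factor suggests 10 chunks
print(suggest_voice_chunks([30.0, 45.0, 20.0]))
```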
mrq
07163644dd Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master
Reviewed-on: mrq/ai-voice-cloning#57
2023-03-03 19:32:38 +00:00
mrq
e1f3ffa08c oops 2023-03-03 18:51:33 +00:00
lightmare
5487c28683 Added optional whispercpp update functionality 2023-03-03 18:34:49 +00:00
mrq
9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms 2023-03-03 07:23:10 +00:00
mrq
740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts 2023-03-03 06:39:37 +00:00
mrq
68f4858ce9 oops 2023-03-03 05:51:17 +00:00
mrq
e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) 2023-03-03 04:37:18 +00:00
mrq
e205322c8d added setup script for bitsandbytes-rocm (soon: multi-gpu testing, because I am finally making use of my mispurchased second 6800XT) 2023-03-03 02:58:34 +00:00
mrq
59773a7637 just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow 2023-03-02 03:04:11 +00:00
mrq
c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh 2023-03-02 01:35:12 +00:00
mrq
534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models 2023-03-02 00:46:52 +00:00
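Keying the cached latents on a hash of the model file means switching models no longer silently reuses stale conditioning latents. A hedged sketch of that cache layout (file naming and helper names are invented for illustration):

```python
import hashlib
import os
import torch

def model_hash(model_path: str, block_size: int = 1 << 20) -> str:
    """Hash the model file so cached latents can be tied to the exact weights."""
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            h.update(chunk)
    return h.hexdigest()[:8]

def latents_path(voice_dir: str, model_path: str) -> str:
    return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")

def load_cached_latents(voice_dir: str, model_path: str):
    """Return the cached latents for this voice/model pair, or None if absent."""
    path = latents_path(voice_dir, model_path)
    return torch.load(path, map_location="cpu") if os.path.exists(path) else None
```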
mrq
5a41db978e oops 2023-03-01 19:39:43 +00:00
mrq
b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) 2023-03-01 19:32:11 +00:00
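Parsing the tb_logger event files can be done through tensorboard's own reader rather than scraping console output. A short sketch using `EventAccumulator`; the scalar tag name is an assumption, not necessarily what the trainer logs:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_scalars(log_dir: str, tag: str = "loss_gpt_total"):
    """Read one scalar series (e.g. a loss curve) from tensorboard event files."""
    acc = EventAccumulator(log_dir)
    acc.Reload()
    if tag not in acc.Tags().get("scalars", []):
        return []
    return [(event.step, event.value) for event in acc.Scalars(tag)]
```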
mrq
c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier 2023-03-01 01:17:38 +00:00
mrq
5037752059 oops 2023-02-28 22:13:21 +00:00
mrq
787b44807a added to embedded metadata: datetime, model path, model hash 2023-02-28 15:36:06 +00:00
mrq
81eb58f0d6 show different losses, rewordings 2023-02-28 06:18:18 +00:00
mrq
fda47156ec oops 2023-02-28 01:08:07 +00:00
mrq
bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else 2023-02-28 01:01:50 +00:00
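The pruning option amounts to keeping only the newest N checkpoint/state files in the training folder. A minimal sketch under that assumption (the file pattern is illustrative, not the trainer's actual naming):

```python
import glob
import os

def prune_checkpoints(folder: str, keep: int, pattern: str = "*.pth"):
    """Delete all but the `keep` most recently modified checkpoint files in `folder`."""
    files = sorted(glob.glob(os.path.join(folder, pattern)),
                   key=os.path.getmtime, reverse=True)
    for stale in files[keep:]:
        os.remove(stale)
```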
mrq
6925ec731b I don't remember. 2023-02-27 19:20:06 +00:00
mrq
47abde224c compat with python3.10+ finally (and maybe a small perf uplift with using cu117) 2023-02-26 17:46:57 +00:00
mrq
92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card), saves use of float16 and bitsandbytes for training into the config json 2023-02-26 01:57:56 +00:00
mrq
aafeb9f96a actually fixed the training output text parser 2023-02-25 16:44:25 +00:00
mrq
65329dba31 oops, epoch increments twice 2023-02-25 15:31:18 +00:00
mrq
8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) 2023-02-25 13:55:25 +00:00
mrq
d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small free time 2023-02-24 23:13:13 +00:00
mrq
e5e16bc5b5 updating gitmodules to latest commits 2023-02-24 19:32:18 +00:00
mrq
bedbb893ac clarified import dataset settings button 2023-02-24 16:40:22 +00:00
mrq
f31ea9d5bc oops 2023-02-24 16:23:30 +00:00
mrq
2104dbdbc5 oops 2023-02-24 13:05:08 +00:00
mrq
f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them 2023-02-24 12:58:41 +00:00
mrq
1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups i genuinely don't remember what I did when I spaced out for 20 minutes 2023-02-23 23:22:23 +00:00
mrq
7d1220e83e forgot to mult by batch size 2023-02-23 15:38:04 +00:00
mrq
487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps 2023-02-23 15:31:43 +00:00
mrq
1cbcf14cff oops 2023-02-23 13:18:51 +00:00
mrq
41fca1a101 ugh 2023-02-23 07:20:40 +00:00
mrq
941a27d2b3 removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module 2023-02-23 07:05:39 +00:00
mrq
225dee22d4 huge success 2023-02-23 06:24:54 +00:00
mrq
aa96edde2f Updated notebook to put userdata under a dedicated folder (and some safeties to not nuke them if you double run the script like I did thinking rm -r [symlink] would just remove the symlink) 2023-02-22 15:45:41 +00:00
mrq
526a430c2a how did this revert... 2023-02-22 13:24:03 +00:00
mrq
2aa70532e8 added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length, like it should) 2023-02-22 03:31:46 +00:00
mrq
cc47ed7242 kmsing 2023-02-22 03:27:28 +00:00
mrq
93b061fb4d oops 2023-02-22 03:21:03 +00:00
mrq
c4b41e07fa properly placed the line to extract starting iteration 2023-02-22 01:17:09 +00:00
mrq
fefc7aba03 oops 2023-02-21 22:13:30 +00:00
mrq
9e64dad785 clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already 2023-02-21 21:50:05 +00:00
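The clamp in this commit just prevents asking the sampler for a larger batch than the number of samples being generated. A one-line sketch of that logic (names are illustrative):

```python
def effective_batch_size(requested_batch_size: int, sample_count: int) -> int:
    """Never request a larger batch than the number of samples to generate."""
    return max(1, min(requested_batch_size, sample_count))
```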