Commit Graph

144 Commits (master)

Author SHA1 Message Date
yqxtqymn 1e2436aac9 Update 'src/utils.py' (removed some comments) 2023-03-06 02:04:19 +07:00
yqxtqymn f657f30e2b Update 'src/utils.py' (whisper->whisperx) 2023-03-06 01:59:58 +07:00
yqxtqymn 4f123910fb Update 'src/webui.py' (whisper->whisperx) 2023-03-06 01:59:42 +07:00
yqxtqymn 9ca5192309 Update 'src/utils.py' (whisper->whisperx) 2023-03-06 00:47:56 +07:00
yqxtqymn 079cd32074 Update 'requirements.txt' (whisper->whisperx) 2023-03-06 00:47:03 +07:00
mrq 788a957f79 stretch loss plot to target iteration just so it's not so misleading with the scale 2023-03-06 00:44:29 +07:00
mrq 5be14abc21 UI cleanup; actually fix syncing the epoch counter (I hope); setting the auto-suggested voice chunk size to 0 will just split based on the average duration length; signal when a NaN info value is detected (there are some safeties in the training, but it will inevitably fuck the model) 2023-03-05 23:55:27 +07:00
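
A rough sketch of what that NaN signalling could look like: scan each reported training stat for NaN before logging it (the helper name and dict layout are assumptions, not this repo's actual code).

```python
import math

def has_nan_stat(stats: dict) -> bool:
    """Return True (and warn) if any reported training stat is NaN."""
    for name, value in stats.items():
        if isinstance(value, float) and math.isnan(value):
            print(f"!! NaN detected in '{name}'; the model is likely ruined")
            return True
    return False

# has_nan_stat({"loss_gpt_total": float("nan")})  -> True, prints a warning
```
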
mrq 287738a338 (should) fix reported epoch metric desyncing from the de facto metric; fixed finding the next milestone from the wrong sign because of 2AM brain 2023-03-05 20:42:45 +07:00
mrq 206a14fdbe brainworms 2023-03-05 20:30:27 +07:00
mrq b82961ba8a typo 2023-03-05 20:13:39 +07:00
mrq b2e89d8da3 oops 2023-03-05 19:58:15 +07:00
mrq 8094401a6d print in e-notation for LR 2023-03-05 19:48:24 +07:00
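
Printing the LR in e-notation is just a format spec in Python; for example:

```python
lr = 0.00009765625
print(f"lr: {lr:.3e}")  # -> lr: 9.766e-05
```
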
mrq 8b9c9e1bbf remove redundant stats, add showing LR 2023-03-05 18:53:12 +07:00
mrq 0231550287 forgot to remove a debug print 2023-03-05 18:27:16 +07:00
mrq d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100); transcribing audio now checks for silent segments and discards them 2023-03-05 17:54:36 +07:00
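
For reference, whisper.cpp reports segment bounds in 10 ms ticks, hence the divide-by-100 to get seconds before slicing; a sketch (the function name is illustrative):

```python
def whispercpp_ticks_to_seconds(t0: int, t1: int) -> tuple[float, float]:
    """whisper.cpp segment offsets are in 10 ms ticks; convert to seconds."""
    return t0 / 100.0, t1 / 100.0

start, end = whispercpp_ticks_to_seconds(150, 420)  # (1.5, 4.2)
```
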
mrq b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log 2023-03-05 14:39:24 +07:00
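
Presumably "using half of the log" means averaging the per-iteration loss deltas over the newer half of the history rather than using only the latest delta; a hedged sketch of that idea, not the repo's code:

```python
def estimate_iters_to_target(losses: list[float], target: float) -> float | None:
    """Extrapolate iterations until `target` loss, using the mean delta
    over the newer half of the loss log."""
    half = losses[len(losses) // 2:]
    if len(half) < 2:
        return None
    mean_delta = (half[-1] - half[0]) / (len(half) - 1)
    if mean_delta >= 0:
        return None  # loss isn't decreasing; no sensible estimate
    return (target - losses[-1]) / mean_delta
```
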
mrq 35225a35da oops v2 2023-03-05 14:19:41 +07:00
mrq b5e9899bbf 5 hour sleep brained 2023-03-05 13:37:05 +07:00
mrq cd8702ab0d oops 2023-03-05 13:24:07 +07:00
mrq d312019d05 reordered things so it uses fresh data and not last-updated data 2023-03-05 07:37:27 +07:00
mrq ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr); very, very inaccurate because it uses the instantaneous delta lr, I'll need to do a Riemann sum later 2023-03-05 06:45:07 +07:00
mrq 1316331be3 forgot to have it try to auto-detect the language for openai/whisper when none is specified 2023-03-05 05:22:35 +07:00
mrq 3e220ed306 added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if it exists (for now, only English); output the transcription to the web UI when done transcribing 2023-03-05 05:17:19 +07:00
mrq 37cab14272 use torchrun instead for multigpu 2023-03-04 20:53:00 +07:00
mrq 5026d93ecd sloppy fix to actually kill child processes when using multi-GPU distributed training; set the GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2 2023-03-04 20:42:54 +07:00
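
Auto-detecting the GPU count is a one-liner with what CUDA exposes through PyTorch:

```python
import torch

gpu_count = torch.cuda.device_count() if torch.cuda.is_available() else 0
print(f"launching distributed training across {gpu_count} GPU(s)")
```
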
mrq 1a9d159b2a forgot to add the 'bs / gradient accum < 2' clamp validation logic 2023-03-04 17:37:08 +07:00
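
A sketch of that clamp: keep the effective micro-batch (batch size over gradient accumulation factor) at 2 or more (illustrative, not the repo's exact validation):

```python
def clamp_grad_accum(batch_size: int, grad_accum: int) -> int:
    """If batch_size / grad_accum would fall below 2, shrink grad_accum."""
    if batch_size / grad_accum < 2:
        grad_accum = max(1, batch_size // 2)
    return grad_accum

# clamp_grad_accum(8, 8) -> 4, so each micro-batch still holds 2 samples
```
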
mrq df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor; fixed halting training not actually killing the training process and freeing up resources; some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes); read the training section in the wiki for more details 2023-03-04 15:55:06 +07:00
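
For anyone unfamiliar with the now-correctly-named term: gradient accumulation runs several micro-batches, sums their gradients, and steps the optimizer once, emulating a larger batch. A generic PyTorch sketch, not this repo's trainer:

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
micro_batches = [torch.randn(2, 4) for _ in range(8)]
accum_factor = 4  # one optimizer step per 4 micro-batches

optimizer.zero_grad()
for i, batch in enumerate(micro_batches):
    loss = model(batch).pow(2).mean() / accum_factor  # scale so grads average
    loss.backward()                                   # accumulates into .grad
    if (i + 1) % accum_factor == 0:
        optimizer.step()
        optimizer.zero_grad()
```
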
mrq 6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) 2023-03-04 04:41:56 +07:00
mrq 6d8c2dd459 auto-suggested voice chunk size is now based on the total duration of the voice files divided by 10 seconds; added a setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider 2023-03-03 21:13:48 +07:00
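
The suggestion itself is simple arithmetic; a sketch with the division factor exposed as the tunable (names hypothetical):

```python
def suggest_chunk_count(total_duration_secs: float,
                        division_factor: float = 10.0) -> int:
    """Suggest chunk count as total voice duration / ~10 s per chunk."""
    return max(1, round(total_duration_secs / division_factor))

# suggest_chunk_count(95.0) -> 10; tune division_factor to trade
# chunk length against VRAM use
```
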
mrq 07163644dd Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master (Reviewed-on: mrq/ai-voice-cloning#57) 2023-03-03 19:32:38 +07:00
mrq e1f3ffa08c oops 2023-03-03 18:51:33 +07:00
lightmare 5487c28683 Added optional whispercpp update functionality 2023-03-03 18:34:49 +07:00
mrq 9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms 2023-03-03 07:23:10 +07:00
mrq 740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts 2023-03-03 06:39:37 +07:00
mrq 68f4858ce9 oops 2023-03-03 05:51:17 +07:00
mrq e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) 2023-03-03 04:37:18 +07:00
mrq e205322c8d added setup script for bitsandbytes-rocm (soon: multi-gpu testing, because I am finally making use of my mispurchased second 6800XT) 2023-03-03 02:58:34 +07:00
mrq 59773a7637 just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow 2023-03-02 03:04:11 +07:00
mrq c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh 2023-03-02 01:35:12 +07:00
mrq 534a761e49 added loading/saving of voice latents by model hash, so you no longer need to manually regenerate them every time you change models 2023-03-02 00:46:52 +07:00
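
Keying the latents file on a hash of the checkpoint means swapping models automatically misses the stale cache; a sketch of the idea (paths and names are made up):

```python
import hashlib
import os

def model_hash(model_path: str) -> str:
    """Hash the checkpoint file so latents can be keyed per-model."""
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        while chunk := f.read(1 << 20):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()[:8]

def latents_path(voice_dir: str, model_path: str) -> str:
    return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")

# torch.save(latents, latents_path("./voices/me", "./models/autoregressive.pth"))
```
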
mrq 5a41db978e oops 2023-03-01 19:39:43 +07:00
mrq b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) 2023-03-01 19:32:11 +07:00
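
TensorBoard's event files can be read programmatically with its EventAccumulator; a minimal sketch (the log path and scalar tag are guesses based on the losses named in these commits):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("./training/finetune/tb_logger")  # hypothetical path
acc.Reload()
print(acc.Tags()["scalars"])                 # list the available scalar tags
for event in acc.Scalars("loss_gpt_total"):  # tag name assumed
    print(event.step, event.value)
```
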
mrq c2726fa0d4 added new training tunable: loss_text_ce_loss weight; added option to specify a source model in case you want to finetune an already-finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice; still need to truly validate if it produces usable output); some bug fixes that came up for some reason now and not earlier 2023-03-01 01:17:38 +07:00
mrq 5037752059 oops 2023-02-28 22:13:21 +07:00
mrq 787b44807a added to embedded metadata: datetime, model path, model hash 2023-02-28 15:36:06 +07:00
mrq 81eb58f0d6 show different losses, rewordings 2023-02-28 06:18:18 +07:00
mrq fda47156ec oops 2023-02-28 01:08:07 +07:00
mrq bc0d9ab3ed added a graph to chart the loss_gpt_total rate, added an option to prune X previous models/states, something else 2023-02-28 01:01:50 +07:00
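
Pruning down to the newest X checkpoints is a sort-by-mtime and delete; a sketch (the glob pattern is assumed):

```python
import os
from glob import glob

def prune_checkpoints(directory: str, keep: int = 2) -> None:
    """Delete all but the `keep` most recently modified .pth files."""
    files = sorted(glob(os.path.join(directory, "*.pth")), key=os.path.getmtime)
    for path in files[:-keep]:
        os.remove(path)
```
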
mrq 6925ec731b I don't remember. 2023-02-27 19:20:06 +07:00
mrq 47abde224c compat with python3.10+ finally (and maybe a small perf uplift from using cu117) 2023-02-26 17:46:57 +07:00