3b4f4500d1 when you have three separate machines running and you test on one, but you accidentally revert changes because you then test on another (mrq, 2023-03-09 03:26:18 +0000)
ef75dba995 I hate that commas make tuples (mrq, 2023-03-09 02:43:05 +0000)
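For context, this is the classic Python gotcha the commit is grumbling about: a stray trailing comma silently turns an assignment into a one-element tuple. A minimal illustration (the variable name is just for the example):

```python
# A trailing comma builds a tuple, which then breaks anything expecting a
# plain value (e.g. a sample rate or a batch size).
sample_rate = 22050,          # -> (22050,), a one-element tuple
print(type(sample_rate))      # <class 'tuple'>

sample_rate = 22050           # -> 22050, the intended int
print(type(sample_rate))      # <class 'int'>
```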
f795dd5c20 you might be wondering why so many small commits instead of rolling the HEAD back one to just combine them, i don't want to force push and roll back the paperspace i'm testing in (mrq, 2023-03-09 02:31:32 +0000)
1b18b3e335 forgot to save the simplified training input json first before touching any of the settings that dump to the yaml (mrq, 2023-03-09 02:27:20 +0000)
221ac38b32 forgot to update to finetune subdir (mrq, 2023-03-09 02:25:32 +0000)
0e80e311b0 added VRAM validation for a given batch:gradient accumulation size ratio (based empirically off of 6GiB, 16GiB, and 16x2GiB, would be nice to have more data on what's safe) (mrq, 2023-03-09 02:08:06 +0000)
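A minimal sketch of the kind of sanity check commit 0e80e311b0 describes. The function name, the samples-per-GiB rule of thumb, and the thresholds below are illustrative assumptions, not the empirically derived 6GiB/16GiB/16x2GiB values the commit refers to:

```python
def validate_batch_settings(batch_size, gradient_accumulation, vram_gib):
    """Warn when batch_size / gradient_accumulation looks too large for the
    available VRAM. Thresholds here are placeholder guesses."""
    if gradient_accumulation <= 0 or batch_size % gradient_accumulation != 0:
        return ["batch size should be divisible by the gradient accumulation size"]

    messages = []
    effective_chunk = batch_size // gradient_accumulation  # samples resident per step
    samples_per_gib = 2  # hypothetical rule of thumb
    if effective_chunk > vram_gib * samples_per_gib:
        suggested = max(1, batch_size // (vram_gib * samples_per_gib))
        messages.append(
            f"batch {batch_size} / accumulation {gradient_accumulation} may OOM on "
            f"{vram_gib}GiB; consider a gradient accumulation size of at least {suggested}"
        )
    return messages

print(validate_batch_settings(batch_size=128, gradient_accumulation=4, vram_gib=6))
```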
d58b67004a colab notebook uses venv and normal scripts to keep it on parity with a local install (and it literally just works, stop creating issues for something inconsistent with known solutions) (mrq, 2023-03-08 15:51:13 +0000)
34dcb845b5 actually make using the adamw_zero optimizer for multi-GPU training work (mrq, 2023-03-08 15:31:33 +0000)
8494628f3c normalize validation batch size because i oom'd without it getting scaled (mrq, 2023-03-08 05:27:20 +0000)
d7e75a51cf I forgot about the changelog and never kept up with it, so I'll just not use a changelog (mrq, 2023-03-08 05:14:50 +0000)
ff07f707cb disable validation if validation dataset not found, clamp validation batch size to validation dataset size instead of simply reusing batch size, switch to adamw_zero optimizer when training with multiple GPUs (because the yaml comment said to and I think it might be why I'm absolutely having garbage luck training this Japanese dataset) (mrq, 2023-03-08 04:47:05 +0000)
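A sketch of the config logic ff07f707cb describes. The dictionary key names are assumptions for illustration, not the project's actual YAML schema:

```python
import os

def build_validation_and_optimizer_settings(config, gpu_count):
    """Hypothetical key names; mirrors the behaviour described in the commit."""
    val_path = config.get("validation_path", "")
    val_size = config.get("validation_size", 0)

    if not val_path or not os.path.exists(val_path) or val_size == 0:
        config["do_validation"] = False          # no dataset -> skip validation entirely
    else:
        config["do_validation"] = True
        # Clamp instead of blindly reusing the training batch size, otherwise a
        # tiny validation set can still OOM or error out.
        config["validation_batch_size"] = min(config["batch_size"], val_size)

    # The upstream YAML comment suggests adamw_zero (ZeRO-style sharded AdamW)
    # when training across multiple GPUs.
    config["optimizer"] = "adamw_zero" if gpu_count > 1 else "adamw"
    return config
```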
f1788a5639 lazy wrap around the voicefixer block because sometimes it just an heros itself despite having a specific block to load it beforehand (mrq, 2023-03-08 04:12:22 +0000)
e862169e7f set the validation rate to the save rate, and use the validation file if it exists (need to test later) (mrq, 2023-03-07 20:38:31 +0000)
fe8bf7a9d1 added helper script to cull short enough lines from the training set into a validation set (if it yields good results doing validation during training, I'll add it to the web UI) (mrq, 2023-03-07 20:16:49 +0000)
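A rough sketch of the culling idea in fe8bf7a9d1: move lines whose transcription is short enough out of the training list into a validation list. The pipe-delimited "audio_path|text" layout, the file names, and the 12-character threshold are assumptions, not the repo's actual format:

```python
def cull_validation_set(train_path="train.txt", val_path="validation.txt", max_len=12):
    kept, culled = [], []
    with open(train_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            text = line.split("|", 1)[-1]          # assumed "audio_path|text" layout
            (culled if len(text) <= max_len else kept).append(line)

    with open(train_path, "w", encoding="utf-8") as f:
        f.write("\n".join(kept) + "\n")
    with open(val_path, "w", encoding="utf-8") as f:
        f.write("\n".join(culled) + "\n")
    return len(kept), len(culled)
```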
7f89e8058a fixed update checker for dlas+tortoise-tts (mrq, 2023-03-07 19:33:56 +0000)
6d7e143f53 added override for large training plots (mrq, 2023-03-07 19:29:09 +0000)
3718e9d0fb set NaN alarm to show the iteration it happened at (mrq, 2023-03-07 19:22:11 +0000)
c27ee3ce95 added update checking for dlas and tortoise-tts, caching voices (for a given model and voice name) so random latents will remain the same (mrq, 2023-03-07 17:04:45 +0000)
2726d98ee1 fried my brain trying to nail down bugs involving using solely AR model=auto (mrq, 2023-03-07 05:35:21 +0000)
d7a5ad9fd9 cleaned up some model loading logic, added 'auto' mode for AR model (deduced by current voice) (mrq, 2023-03-07 04:34:39 +0000)
3899f9b4e3 added (yet another) experimental voice latent calculation mode (when chunk size is 0 and there's a dataset generated, it'll leverage it by padding to a common size then computing them, should help avoid splitting mid-phoneme) (mrq, 2023-03-07 03:55:35 +0000)
5063728bb0 brain worms and headaches (mrq, 2023-03-07 03:01:02 +0000)
0f31c34120 download dvae.pth for the people who managed to somehow put the web UI into a state where it never initializes TTS at all (mrq, 2023-03-07 02:47:10 +0000)
0f0b394445 moved the (actually not working) setting to use BigVGAN into a dropdown to select between vocoders (for when slotting in future ones), and added the ability to load a new vocoder while TTS is loaded (mrq, 2023-03-07 02:45:22 +0000)
e731b9ba84 reworked generating metadata to embed, should now store overridden settings (mrq, 2023-03-06 23:07:16 +0000)
7798767fc6 added settings editing (will add a guide on what to do later, and an example) (mrq, 2023-03-06 21:48:34 +0000)
119ac50c58 forgot to re-append the existing transcription when skipping existing (have to go back again and do the first 10% of my giant dataset) (mrq, 2023-03-06 16:50:55 +0000)
11a1f6a00e forgot to reorder the dependency install because whisperx needs to be installed before DLAS (mrq, 2023-03-06 16:43:17 +0000)
12c51b6057 I'm not too sure if manually invoking gc actually closes all the open files from whisperx (or ROCm), but it seems to have gone away alongside setting 'ulimit -Sn' to half the output of 'ulimit -Hn' (mrq, 2023-03-06 16:39:37 +0000)
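The commit does this at the shell level with ulimit; for reference, the same soft-limit bump can be done from inside Python with the standard resource module, which is a minimal equivalent rather than what the repo actually ships:

```python
import resource

# Equivalent of `ulimit -Sn $(( $(ulimit -Hn) / 2 ))`: raise the soft open-file
# limit toward half of the hard limit so batch transcription doesn't trip
# "too many open files".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = max(soft, hard // 2)          # never lower the existing soft limit
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print(f"open-file limit: soft {target}, hard {hard}")
```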
999878d9c6 and it turned out I wasn't even using the aligned segments, kmsing now that I have to *redo* my dataset again (mrq, 2023-03-06 11:01:33 +0000)
14779a5020 Added option to skip transcribing if it exists in the output text file, because apparently whisperx will throw a "max files opened" error when using ROCm because it does not close some file descriptors if you're batch-transcribing or something; so poor little me, who's retranscribing his Japanese dataset for the 305823042th time, woke up to it partially done, and I am so mad I have to wait another few hours for it to continue when I was hoping to wake up to it done (mrq, 2023-03-06 10:47:06 +0000)
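A sketch of the skip-existing behaviour 14779a5020 adds: load whatever was already written to the output transcription file and only run whisper on files not present. The "whisper.txt" name and the "file|text" layout are assumptions for illustration:

```python
import os

def transcribe_missing(audio_files, transcribe, output_path="whisper.txt"):
    """`transcribe` is whatever whisper backend callable is in use."""
    done = {}
    if os.path.exists(output_path):
        with open(output_path, encoding="utf-8") as f:
            for line in f:
                if "|" in line:
                    name, text = line.rstrip("\n").split("|", 1)
                    done[name] = text

    with open(output_path, "a", encoding="utf-8") as f:
        for path in audio_files:
            name = os.path.basename(path)
            if name in done:
                continue  # already transcribed on a previous (interrupted) run
            text = transcribe(path)
            f.write(f"{name}|{text}\n")
            done[name] = text
    return done
```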
0e3bbc55f8 added api_name for generation, added whisperx backend, relocated the 'use whispercpp' option to the whisper backend list (mrq, 2023-03-06 05:21:33 +0000)
788a957f79 stretch loss plot to target iteration just so it's not so misleading with the scale (mrq, 2023-03-06 00:44:29 +0000)
5be14abc21 UI cleanup, actually fix syncing the epoch counter (I hope), setting the auto-suggested voice chunk size to 0 will just split based on the average duration length, signal when a NaN info value is detected (there's some safeties in the training, but it will inevitably fuck the model) (mrq, 2023-03-05 23:55:27 +0000)
287738a338 (should) fix reported epoch metric desyncing from the de facto metric, fixed finding next milestone from wrong sign because of 2AM brain (mrq, 2023-03-05 20:42:45 +0000)
0231550287 forgot to remove a debug print (mrq, 2023-03-05 18:27:16 +0000)
d97639e138 whispercpp actually works now (language loading was weird, slicing needed to divide time by 100), transcribing audio checks for silence and discards it (mrq, 2023-03-05 17:54:36 +0000)
b8a620e8d7 actually accumulate derivatives when estimating milestones and final loss by using half of the log (mrq, 2023-03-05 14:39:24 +0000)
d312019d05 reordered things so it uses fresh data and not last-updated data (mrq, 2023-03-05 07:37:27 +0000)
ce3866d0cd added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr; very, very inaccurate because it uses instantaneous delta lr, I'll need to do a Riemann sum later) (mrq, 2023-03-05 06:45:07 +0000)
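A generic sketch of the two estimators these commits (ce3866d0cd and the later b8a620e8d7) are about: extrapolating when a decreasing metric hits a milestone, either from the instantaneous delta of the last step or from deltas averaged over the most recent half of the log. This is illustrative only and not the repo's actual estimator:

```python
def estimate_iterations_to(history, target):
    """history: list of (iteration, value) pairs, value trending downward.
    Returns (instantaneous estimate, averaged estimate) of iterations remaining."""
    if len(history) < 2 or history[-1][1] <= target:
        return 0.0, 0.0

    # Instantaneous estimate: slope from just the last two datapoints.
    (i0, v0), (i1, v1) = history[-2], history[-1]
    inst_slope = (v1 - v0) / (i1 - i0)

    # Smoother estimate: average slope over the most recent half of the log.
    half = history[len(history) // 2:]
    if len(half) < 2:
        half = history
    avg_slope = (half[-1][1] - half[0][1]) / (half[-1][0] - half[0][0])

    def remaining(slope):
        return float("inf") if slope >= 0 else (target - history[-1][1]) / slope

    return remaining(inst_slope), remaining(avg_slope)

# A flattening log: the instantaneous estimate predicts far more iterations left.
print(estimate_iterations_to(
    [(0, 3.0), (100, 2.5), (200, 2.2), (300, 2.0), (400, 1.9), (500, 1.85)],
    target=1.0,
))
```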
1316331be3 forgot to have it try and auto-detect the language for openai/whisper when no language is specified (mrq, 2023-03-05 05:22:35 +0000)
3e220ed306 added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if it exists (for now, only English); output the transcription to the web UI when done transcribing (mrq, 2023-03-05 05:17:19 +0000)
37cab14272 use torchrun instead for multi-GPU (mrq, 2023-03-04 20:53:00 +0000)
5026d93ecd sloppy fix to actually kill child processes when using multi-GPU distributed training, set GPU training count based on what CUDA exposes automatically so I don't have to keep setting it to 2 (mrq, 2023-03-04 20:42:54 +0000)
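A sketch combining what 5026d93ecd and 37cab14272 describe: derive the GPU count from what CUDA exposes and hand it to torchrun, spawning the training job in its own process group so it can actually be killed. The entry point and arguments are placeholders, not the project's exact launcher:

```python
import subprocess
import torch

# Derive the GPU count instead of hardcoding 2.
gpu_count = torch.cuda.device_count() if torch.cuda.is_available() else 1

cmd = ["python3", "./src/train.py", "--yaml", "training.yaml"]
if gpu_count > 1:
    cmd = ["torchrun", f"--nproc_per_node={gpu_count}",
           "./src/train.py", "--yaml", "training.yaml"]

# start_new_session puts the workers in their own process group, so killing the
# group later (os.killpg) takes the child processes down with it.
training_process = subprocess.Popen(cmd, start_new_session=True)
```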
df24827b9a renamed mega batch factor to an actual real term: gradient accumulation factor, fixed halting training not actually killing the training process and freeing up resources, some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) (mrq, 2023-03-04 15:55:06 +0000)
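For reference, a plain-PyTorch sketch of what "gradient accumulation factor" means: the batch is processed in smaller chunks and gradients are summed across them before a single optimizer step, trading speed for VRAM. This is a minimal illustration, not DLAS's actual training loop:

```python
import torch

def train_step(model, optimizer, loss_fn, batch_inputs, batch_targets, accum_factor):
    optimizer.zero_grad()
    chunk = len(batch_inputs) // accum_factor       # samples resident in VRAM at once
    for i in range(accum_factor):
        x = batch_inputs[i * chunk:(i + 1) * chunk]
        y = batch_targets[i * chunk:(i + 1) * chunk]
        loss = loss_fn(model(x), y) / accum_factor  # scale so the sum matches a full batch
        loss.backward()                             # gradients accumulate in .grad
    optimizer.step()
```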
6d5e1e1a80 fixed user inputted LR schedule not actually getting used (oops) (mrq, 2023-03-04 04:41:56 +0000)
6d8c2dd459 auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds, added setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider (mrq, 2023-03-03 21:13:48 +0000)
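The arithmetic behind the auto-suggestion in 6d8c2dd459 is simple enough to show directly; the function and parameter names here are illustrative, not the web UI's actual identifiers:

```python
import math

def suggest_voice_chunk_size(durations_seconds, division_factor=10.0):
    """Total duration of the voice files divided by the (adjustable) division
    factor, rounded up, as described in the commit."""
    total = sum(durations_seconds)
    return max(1, math.ceil(total / division_factor))

# e.g. four clips totalling ~67 s with the default 10 s factor -> 7 chunks
print(suggest_voice_chunk_size([12.5, 20.0, 30.0, 4.5]))
```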
07163644dd Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master (mrq, 2023-03-03 19:32:38 +0000)
e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) (mrq, 2023-03-03 04:37:18 +0000)
e205322c8d added setup script for bitsandbytes-rocm (soon: multi-gpu testing, because I am finally making use of my mispurchased second 6800XT) (mrq, 2023-03-03 02:58:34 +0000)
59773a7637 just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow (mrq, 2023-03-02 03:04:11 +0000)
c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh (mrq, 2023-03-02 01:35:12 +0000)
534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models (mrq, 2023-03-02 00:46:52 +0000)
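A sketch of the caching idea in 534a761e49: key the stored conditioning latents by a hash of the model file, so latents are only regenerated when the model actually changes. The file layout and names below are assumptions for illustration:

```python
import hashlib
import os
import torch

def model_hash(model_path, block_size=1 << 20):
    """Content hash of the autoregressive model file, truncated for a filename."""
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            h.update(block)
    return h.hexdigest()[:8]

def latents_path(voice_dir, model_path):
    # e.g. ./voices/<name>/cond_latents_1a2b3c4d.pth; the layout is an assumption.
    return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")

def load_or_compute_latents(voice_dir, model_path, compute_latents):
    path = latents_path(voice_dir, model_path)
    if os.path.exists(path):
        return torch.load(path)           # reuse latents made with this exact model
    latents = compute_latents(voice_dir)  # regenerate only when the model changed
    torch.save(latents, path)
    return latents
```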
b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) (mrq, 2023-03-01 19:32:11 +0000)
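Tensorboard's EventAccumulator is the usual way to read scalars back out of tb_logger event files; the directory path and the "loss_gpt_total" tag below are assumptions based on the surrounding commits, not confirmed repo paths:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

accumulator = EventAccumulator("./training/finetune/tb_logger")
accumulator.Reload()  # parse the event files on disk

if "loss_gpt_total" in accumulator.Tags().get("scalars", []):
    points = [(event.step, event.value) for event in accumulator.Scalars("loss_gpt_total")]
    print(f"{len(points)} datapoints, last: {points[-1]}")
```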
c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier (mrq, 2023-03-01 01:17:38 +0000)
bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else (mrq, 2023-02-28 01:01:50 +0000)
47abde224c compat with python3.10+ finally (and maybe a small perf uplift with using cu117) (mrq, 2023-02-26 17:46:57 +0000)
92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card), saves use of float16 and bitsandbytes for training into the config json (mrq, 2023-02-26 01:57:56 +0000)
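A sketch of the optimizer selection such a toggle implies: fall back to stock AdamW when bitsandbytes is disabled or unavailable. bitsandbytes.optim.AdamW8bit is the library's real 8-bit optimizer; the flag name and fallback policy are assumptions, not the repo's actual code:

```python
import torch

def get_optimizer(model, lr, use_bitsandbytes=True):
    """Use the 8-bit AdamW from bitsandbytes when allowed, else stock AdamW
    (e.g. on systems without a Turing-or-newer NVIDIA card)."""
    if use_bitsandbytes and torch.cuda.is_available():
        try:
            import bitsandbytes as bnb
            return bnb.optim.AdamW8bit(model.parameters(), lr=lr)
        except (ImportError, RuntimeError):
            pass  # library missing or hardware unsupported; fall through
    return torch.optim.AdamW(model.parameters(), lr=lr)
```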
aafeb9f96a actually fixed the training output text parser (mrq, 2023-02-25 16:44:25 +0000)
8b4da29d5fc some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) (mrq, 2023-02-25 13:55:25 +0000)
d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small bits of free time (mrq, 2023-02-24 23:13:13 +0000)
e5e16bc5b5 updating gitmodules to latest commits (mrq, 2023-02-24 19:32:18 +0000)
f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them (mrq, 2023-02-24 12:58:41 +0000)
1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups I genuinely don't remember because I spaced out for 20 minutes (mrq, 2023-02-23 23:22:23 +0000)