0231550287: forgot to remove a debug print (mrq, 2023-03-05 18:27:16 +0000)
d97639e138: whispercpp actually works now (language loading was weird, slicing needed to divide times by 100); transcribing audio now checks for silence and discards silent segments (mrq, 2023-03-05 17:54:36 +0000)
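For reference, a minimal sketch of the post-processing this entry describes, assuming the whisper.cpp binding hands back segments as (t0, t1, text) tuples with offsets in centiseconds (whisper.cpp's native 10 ms unit); the segment shape and the blank-audio marker are assumptions, not the repo's actual code:

```python
def clean_whispercpp_segments(segments):
    """Convert whisper.cpp centisecond offsets to seconds and drop silent segments."""
    cleaned = []
    for t0, t1, text in segments:  # assumed (start, end, text) tuples
        text = text.strip()
        # whisper.cpp reports timestamps in units of 10 ms, hence the divide-by-100
        if not text or text == "[BLANK_AUDIO]":
            continue  # treat empty / blank-audio transcriptions as silence and discard them
        cleaned.append({"start": t0 / 100.0, "end": t1 / 100.0, "text": text})
    return cleaned
```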
b8a620e8d7: actually accumulate derivatives when estimating milestones and final loss by using half of the log (mrq, 2023-03-05 14:39:24 +0000)
d312019d05: reordered things so it uses fresh data and not last-updated data (mrq, 2023-03-05 07:37:27 +0000)
ce3866d0cd: added '''estimating''' iterations until milestones (lr=[1, 0.5, 0.1] and final lr); very, very inaccurate because it uses the instantaneous delta lr, I'll need to do a Riemann sum later (mrq, 2023-03-05 06:45:07 +0000)
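A rough sketch of the kind of extrapolation described here, assuming lr_history is a list of (iteration, lr) pairs pulled from the training logs; as the entry itself admits, using only the slope between the last two points is very inaccurate compared to summing over the whole schedule:

```python
def estimate_iterations_to_lr(lr_history, target_lr):
    """Extrapolate how many iterations remain until the LR decays to target_lr,
    using only the instantaneous slope between the last two logged points."""
    if len(lr_history) < 2:
        return None
    (it_prev, lr_prev), (it_cur, lr_cur) = lr_history[-2], lr_history[-1]
    dlr = (lr_cur - lr_prev) / max(it_cur - it_prev, 1)  # instantaneous delta lr per iteration
    if dlr >= 0 or lr_cur <= target_lr:
        return 0  # not decaying, or milestone already reached
    return int((target_lr - lr_cur) / dlr)  # negative / negative => iterations remaining
```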
1316331be3: forgot to have it try to auto-detect the language for openai/whisper when no language is specified (mrq, 2023-03-05 05:22:35 +0000)
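With openai/whisper the fallback is straightforward: passing language=None to transcribe() makes it detect the language from the audio itself. A small sketch of that behaviour (the model name and wrapper function are illustrative):

```python
import whisper

def transcribe_with_autodetect(audio_path, language=None, model_name="base"):
    model = whisper.load_model(model_name)
    # language=None tells openai/whisper to run language detection on the
    # first 30 seconds of audio instead of assuming a fixed language
    result = model.transcribe(audio_path, language=language or None)
    return result["text"], result["language"]
```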
3e220ed306: added option to set worker size in the training config generator (because the default is overkill); for whisper transcriptions, load a specialized language model if it exists (for now, only English); output the transcription to the web UI when done transcribing (mrq, 2023-03-05 05:17:19 +0000)
37cab14272: use torchrun instead for multi-GPU (mrq, 2023-03-04 20:53:00 +0000)
5026d93ecd: sloppy fix to actually kill child processes when using multi-GPU distributed training; set the GPU training count automatically based on what CUDA exposes so I don't have to keep setting it to 2 (mrq, 2023-03-04 20:42:54 +0000)
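A hypothetical launcher sketch tying the two entries above together: the GPU count comes straight from what CUDA exposes, torchrun handles the multi-GPU case, and the trainer runs in its own process group so halting training can take down the spawned children too. The script path and flags are illustrative, not the repo's exact invocation:

```python
import os
import signal
import subprocess
import torch

def launch_training(yaml_path, script="./src/train.py"):
    gpus = torch.cuda.device_count()  # use whatever CUDA exposes instead of hardcoding 2
    if gpus > 1:
        cmd = ["torchrun", f"--nproc_per_node={gpus}", script, "--yaml", yaml_path]
    else:
        cmd = ["python3", script, "--yaml", yaml_path]
    # start_new_session puts the trainer in its own process group...
    return subprocess.Popen(cmd, start_new_session=True)

def halt_training(proc):
    # ...so killing that group also kills the distributed worker children
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
```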
df24827b9a: renamed mega batch factor to an actual real term: gradient accumulation factor; fixed halting training not actually killing the training process and freeing up resources; some logic cleanup for gradient accumulation (so many brain worms and wrong assumptions from testing on low batch sizes) (read the training section in the wiki for more details) (mrq, 2023-03-04 15:55:06 +0000)
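For context, what a gradient accumulation factor of N means in a generic PyTorch-style loop (an illustrative sketch, not the trainer's actual code): the batch is split into N micro-batches, each contributes a scaled backward pass, and one optimizer step is taken, so the effective batch size is unchanged while peak VRAM drops roughly by N:

```python
def accumulated_step(model, optimizer, micro_batches):
    """One optimizer step over a batch that has been split into micro-batches."""
    optimizer.zero_grad()
    for micro_batch in micro_batches:
        loss = model(micro_batch) / len(micro_batches)  # scale so summed grads match the full batch
        loss.backward()                                  # gradients accumulate across micro-batches
    optimizer.step()
```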
6d5e1e1a80: fixed user-inputted LR schedule not actually getting used (oops) (mrq, 2023-03-04 04:41:56 +0000)
6d8c2dd459: auto-suggested voice chunk size is based on the total duration of the voice files divided by 10 seconds; added a setting to adjust the auto-suggested division factor (a really oddly worded one), because I'm sure people will OOM blindly generating without adjusting this slider (mrq, 2023-03-03 21:13:48 +0000)
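The suggestion boils down to simple arithmetic; a sketch, with the names being illustrative:

```python
import math

def suggest_voice_chunks(total_duration_seconds, division_factor=10.0):
    # e.g. 95 seconds of voice clips with the default factor of 10 suggests 10 chunks
    return max(1, math.ceil(total_duration_seconds / division_factor))
```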
07163644dd: Merge pull request 'Added optional whispercpp update functionality' (#57) from lightmare/ai-voice-cloning:whispercpp-update into master (mrq, 2023-03-03 19:32:38 +0000)
e859a7c01d: experimental multi-GPU training (Linux only, because I can't into batch files) (mrq, 2023-03-03 04:37:18 +0000)
e205322c8d: added setup script for bitsandbytes-rocm (soon: multi-GPU testing, because I am finally making use of my mispurchased second 6800XT) (mrq, 2023-03-03 02:58:34 +0000)
59773a7637: just uninstall bitsandbytes on ROCm systems for now, I'll need to get it working tomorrow (mrq, 2023-03-02 03:04:11 +0000)
c956d81baf: added button to just load a training set's loss information; added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh (mrq, 2023-03-02 01:35:12 +0000)
534a761e49: added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models (mrq, 2023-03-02 00:46:52 +0000)
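A sketch of what keying latents to a model could look like: hash the autoregressive model file and bake the digest into the cached latents filename, so switching models never reuses stale latents. The hashing scheme and filename pattern here are illustrative assumptions, not necessarily what the repo does:

```python
import hashlib
import os

def model_hash(model_path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()[:8]

def latents_path(voice_dir, model_path):
    # one cached latents file per (voice, model) pair
    return os.path.join(voice_dir, f"cond_latents_{model_hash(model_path)}.pth")
```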
b989123bd4: leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data; need to see about reading it directly while training) (mrq, 2023-03-01 19:32:11 +0000)
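Parsing tb_logger event files can be done with the tensorboard package's own EventAccumulator; a minimal sketch (the scalar tag is just an example, the real tags depend on what the trainer logs):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def read_training_scalars(logdir, tag="loss_gpt_total"):
    acc = EventAccumulator(logdir)
    acc.Reload()  # parse all event files under logdir
    if tag not in acc.Tags().get("scalars", []):
        return []
    # each scalar event carries wall_time, step, and value
    return [(event.step, event.value) for event in acc.Scalars(tag)]
```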
c2726fa0d4: added new training tunable: loss_text_ce_loss weight; added option to specify a source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice; need to truly validate whether it produces usable output); some bug fixes that came up for some reason now and not earlier (mrq, 2023-03-01 01:17:38 +0000)
bc0d9ab3ed: added graph to chart loss_gpt_total rate; added option to prune X number of previous models/states; something else (mrq, 2023-02-28 01:01:50 +0000)
47abde224c: compat with python3.10+ finally (and maybe a small perf uplift with using cu117) (mrq, 2023-02-26 17:46:57 +0000)
92553973be: Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card); saves use of float16 and bitsandbytes for training into the config JSON (mrq, 2023-02-26 01:57:56 +0000)
aafeb9f96a: actually fixed the training output text parser (mrq, 2023-02-25 16:44:25 +0000)
8b4da29d5fc: some adjustments to the training output parser; now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) (mrq, 2023-02-25 13:55:25 +0000)
d5d8821a9d: fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too); fixed stopping generation and training; some other thing that I forgot since it's been slowly worked on in my small amounts of free time (mrq, 2023-02-24 23:13:13 +0000)
e5e16bc5b5: updating gitmodules to latest commits (mrq, 2023-02-24 19:32:18 +0000)
f6d0b66e10: finally added model refresh button; also searches the training folder for outputted models so you don't even need to copy them (mrq, 2023-02-24 12:58:41 +0000)
1e0fec4358: god I finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs); added import config button to reread the last used settings (will check the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available); some other cleanups I genuinely don't remember because I spaced out for 20 minutes (mrq, 2023-02-23 23:22:23 +0000)
7d1220e83e: forgot to multiply by batch size (mrq, 2023-02-23 15:38:04 +0000)
487f2ebf32: fixed the brain worm discrepancy between epochs, iterations, and steps (mrq, 2023-02-23 15:31:43 +0000)
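The relationships being untangled here are simple but easy to mix up; a sketch under a drop-last-batch assumption (whether the trainer floors or ceils the per-epoch step count is my assumption):

```python
def epochs_to_iterations(epochs, dataset_size, batch_size):
    # one epoch = one full pass over the dataset; each iteration consumes one batch
    steps_per_epoch = max(dataset_size // batch_size, 1)
    return epochs * steps_per_epoch

# e.g. 500 epochs over 19420 samples at batch size 128 => 151 steps/epoch, 75500 iterations
```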
aa96edde2f: Updated notebook to put userdata under a dedicated folder (and some safeties to not nuke them if you double-run the script like I did, thinking rm -r [symlink] would just remove the symlink) (mrq, 2023-02-22 15:45:41 +0000)
526a430c2a: how did this revert... (mrq, 2023-02-22 13:24:03 +0000)
2aa70532e8: added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length like it should) (mrq, 2023-02-22 03:31:46 +0000)
9e64dad785: clamp batch size to sample count when generating for the sickos that want that; added setting to remove non-final output after a generation; something else I forgot already (mrq, 2023-02-21 21:50:05 +0000)
f119993fb5: explicitly use python3 because some OSes will not have python aliased to python3; allow batch size 1 (mrq, 2023-02-21 20:20:52 +0000)
8a1a48f31e: Added very experimental float16 training for cards without enough VRAM (10GiB and below, maybe). !NOTE! this is VERY EXPERIMENTAL; I have zero free time to validate it right now, I'll do it later (mrq, 2023-02-21 19:31:57 +0000)
ed2cf9f5ee: wrap checking for metadata when adding a voice in case it throws an error (mrq, 2023-02-21 17:35:30 +0000)
bbc2d26289: I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and make sense (mrq, 2023-02-21 03:00:45 +0000)
7d1936adad: actually cleaned the notebook (mrq, 2023-02-20 23:12:53 +0000)
1fd88afcca: updated notebook for the newer setup structure; added formatting of the it/s and last loss rate readouts (have not tested the loss rate yet) (mrq, 2023-02-20 22:56:39 +0000)
bacac6daea: handled paths that contain spaces, because python for whatever godforsaken reason will always split on spaces even if wrapping an argument in quotes (mrq, 2023-02-20 20:23:22 +0000)
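The usual way around this in Python is to never build a single command string at all: pass the arguments as a list so subprocess hands each one to the program verbatim, spaces and all. A small sketch (the example path is made up):

```python
import subprocess

def run_tool(executable, *args):
    # a list of arguments is passed through as-is with no shell word-splitting,
    # so "./training/my voice/train.yaml" stays one argument
    return subprocess.run([executable, *args], check=True)

# run_tool("python3", "./src/train.py", "--yaml", "./training/my voice/train.yaml")
```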
37ffa60d14: brain worms, forgot a global; hate global semantics (mrq, 2023-02-20 15:31:38 +0000)
d17f6fafb0: clean up, reordered, added some rather liberal loading/unloading of auxiliary models; can't really focus right now to keep testing it, report any issues and I'll get around to it (mrq, 2023-02-20 00:21:16 +0000)
ee95616dfd: optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) (mrq, 2023-02-19 21:06:14 +0000)
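One way to make the batch size line up with the sample count, as a sketch (the exact strategy the repo uses may differ):

```python
def evenly_divisible_batch_size(sample_count, requested):
    """Walk the requested batch size down until it divides the sample count evenly,
    so no samples are lost to a ragged final batch and the calculated epoch count
    matches what the user entered."""
    requested = max(1, min(requested, sample_count))
    for candidate in range(requested, 0, -1):
        if sample_count % candidate == 0:
            return candidate
    return 1
```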
6260594a1e: Forgot to base print/save frequencies in terms of epochs in the UI; they will get converted when saving the YAML (mrq, 2023-02-19 20:38:00 +0000)
4694d622f4: doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs and calculate iterations from there (mrq, 2023-02-19 20:22:03 +0000)
ec76676b16: I hate gradio; I hate having to specify step=1 (mrq, 2023-02-19 17:12:39 +0000)
4f79b3724b: Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) (mrq, 2023-02-19 16:24:06 +0000)
092dd7b2d7: added more safeties and parameters to the training YAML generator; I think I tested it extensively enough (mrq, 2023-02-19 16:16:44 +0000)
f4e82fcf08: I swear I committed forwarding arguments from the start scripts (mrq, 2023-02-19 15:01:16 +0000)
3891870b5d: Update notebook to follow the 'other' way of installing mrq/tortoise-tts (mrq, 2023-02-19 07:22:22 +0000)
d89b7d60e0: forgot to divide checkpoint freq by iterations to get checkpoint counts (mrq, 2023-02-19 07:05:11 +0000)
485319c2bb: don't know what brain worms had me throw printing training output under verbose (mrq, 2023-02-19 06:28:53 +0000)
debdf6049a: forgot to copy again from dev folder to git folder (mrq, 2023-02-19 06:04:46 +0000)
ae5d4023aa: fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None; I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot the UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) (mrq, 2023-02-19 06:02:47 +0000)
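The patch presumably amounts to normalizing whatever gradio hands back; a hedged sketch of that kind of guard:

```python
def coerce_textbox(value, default=""):
    # gradio (or the JSON round-trip) may yield None or "" for an empty Textbox;
    # fold both, plus stray whitespace, into a single default value
    if value is None:
        return default
    value = str(value).strip()
    return value or default
```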
ec550d74fd: changed setup scripts to just clone mrq/tortoise-tts and install it locally, instead of relying on pip's garbage git integrations (mrq, 2023-02-19 05:29:01 +0000)
57060190af: absolutely detest global semantics (mrq, 2023-02-19 05:12:09 +0000)
f44239a85a: added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update (mrq, 2023-02-19 05:10:08 +0000)
e7d0cfaa82: added some output parsing during training (print current iteration step, and checkpoint save); added option for verbose output (for debugging); added buffer size for output; full console output gets dumped on terminating training (mrq, 2023-02-19 05:05:30 +0000)
5fcdb19f8b: I forgot to make it update the whisper model at runtime (mrq, 2023-02-19 01:47:06 +0000)