Commit Graph

283 Commits (master)

Author SHA1 Message Date
mrq e1f3ffa08c oops 2023-03-03 18:51:33 +07:00
mrq 9fb4aa7917 validated whispercpp working, fixed args.listen not being saved due to brainworms 2023-03-03 07:23:10 +07:00
mrq 740b5587df added option to specify using BigVGAN as the vocoder for mrq/tortoise-tts 2023-03-03 06:39:37 +07:00
mrq 68f4858ce9 oops 2023-03-03 05:51:17 +07:00
mrq e859a7c01d experimental multi-gpu training (Linux only, because I can't into batch files) 2023-03-03 04:37:18 +07:00
mrq c956d81baf added button to just load a training set's loss information, added installing broncotc/bitsandbytes-rocm when running setup-rocm.sh 2023-03-02 01:35:12 +07:00
mrq 534a761e49 added loading/saving of voice latents by model hash, so no more needing to manually regenerate every time you change models 2023-03-02 00:46:52 +07:00
mrq 5a41db978e oops 2023-03-01 19:39:43 +07:00
mrq b989123bd4 leverage tensorboard to parse tb_logger files when starting training (it seems to give a nicer resolution of training data, need to see about reading it directly while training) 2023-03-01 19:32:11 +07:00
mrq c2726fa0d4 added new training tunable: loss_text_ce_loss weight, added option to specify source model in case you want to finetune a finetuned model (for example, train a Japanese finetune on a large dataset, then finetune for a specific voice, need to truly validate if it produces usable output), some bug fixes that came up for some reason now and not earlier 2023-03-01 01:17:38 +07:00
mrq 5037752059 oops 2023-02-28 22:13:21 +07:00
mrq 787b44807a added to embedded metadata: datetime, model path, model hash 2023-02-28 15:36:06 +07:00
mrq 81eb58f0d6 show different losses, rewordings 2023-02-28 06:18:18 +07:00
mrq fda47156ec oops 2023-02-28 01:08:07 +07:00
mrq bc0d9ab3ed added graph to chart loss_gpt_total rate, added option to prune X number of previous models/states, something else 2023-02-28 01:01:50 +07:00
mrq 6925ec731b I don't remember. 2023-02-27 19:20:06 +07:00
mrq 92553973be Added option to disable bitsandbytes optimizations for systems that do not support it (systems without a Turing-onward Nvidia card), saves use of float16 and bitsandbytes for training into the config json 2023-02-26 01:57:56 +07:00
mrq aafeb9f96a actually fixed the training output text parser 2023-02-25 16:44:25 +07:00
mrq 65329dba31 oops, epoch increments twice 2023-02-25 15:31:18 +07:00
mrq 8b4da29d5f some adjustments to the training output parser, now updates per iteration for really large batches (like the one I'm doing for a dataset size of 19420) 2023-02-25 13:55:25 +07:00
mrq d5d8821a9d fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generating and training, some other thing that I forgot since it's been slowly worked on in my small free times 2023-02-24 23:13:13 +07:00
mrq 2104dbdbc5 ops 2023-02-24 13:05:08 +07:00
mrq f6d0b66e10 finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them 2023-02-24 12:58:41 +07:00
mrq 1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups i genuinely don't remember what I did when I spaced out for 20 minutes 2023-02-23 23:22:23 +07:00
mrq 7d1220e83e forgot to mult by batch size 2023-02-23 15:38:04 +07:00
mrq 487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps 2023-02-23 15:31:43 +07:00
mrq 1cbcf14cff oops 2023-02-23 13:18:51 +07:00
mrq 225dee22d4 huge success 2023-02-23 06:24:54 +07:00
mrq 526a430c2a how did this revert... 2023-02-22 13:24:03 +07:00
mrq 93b061fb4d oops 2023-02-22 03:21:03 +07:00
mrq c4b41e07fa properly placed the line to extract starting iteration 2023-02-22 01:17:09 +07:00
mrq fefc7aba03 oops 2023-02-21 22:13:30 +07:00
mrq 9e64dad785 clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already 2023-02-21 21:50:05 +07:00
mrq f119993fb5 explicitly use python3 because some OSs will not have python alias to python3, allow batch size 1 2023-02-21 20:20:52 +07:00
mrq 8a1a48f31e Added very experimental float16 training for cards without enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later 2023-02-21 19:31:57 +07:00
mrq ed2cf9f5ee wrap checking for metadata when adding a voice in case it throws an error 2023-02-21 17:35:30 +07:00
mrq b6f7aa6264 fixes 2023-02-21 04:22:11 +07:00
mrq bbc2d26289 I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and makes sense 2023-02-21 03:00:45 +07:00
mrq 1fd88afcca updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet) 2023-02-20 22:56:39 +07:00
mrq 37ffa60d14 brain worms forgot a global, hate global semantics 2023-02-20 15:31:38 +07:00
mrq d17f6fafb0 clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it 2023-02-20 00:21:16 +07:00
mrq c99cacec2e oops 2023-02-19 23:29:12 +07:00
mrq ee95616dfd optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) 2023-02-19 21:06:14 +07:00
mrq 6260594a1e Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML 2023-02-19 20:38:00 +07:00
mrq 4694d622f4 doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there 2023-02-19 20:22:03 +07:00
mrq 4f79b3724b Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) 2023-02-19 16:24:06 +07:00
mrq 092dd7b2d7 added more safeties and parameters to training yaml generator, I think I tested it extensively enough 2023-02-19 16:16:44 +07:00
mrq d89b7d60e0 forgot to divide checkpoint freq by iterations to get checkpoint counts 2023-02-19 07:05:11 +07:00
mrq 485319c2bb don't know what brain worms had me throw printing training output under verbose 2023-02-19 06:28:53 +07:00
mrq debdf6049a forgot to copy again from dev folder to git folder 2023-02-19 06:04:46 +07:00
mrq ae5d4023aa fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) 2023-02-19 06:02:47 +07:00
mrq 57060190af absolutely detest global semantics 2023-02-19 05:12:09 +07:00
mrq f44239a85a added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update 2023-02-19 05:10:08 +07:00
mrq e7d0cfaa82 added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training 2023-02-19 05:05:30 +07:00
mrq 5fcdb19f8b I forgot to make it update the whisper model at runtime 2023-02-19 01:47:06 +07:00
mrq fc5b303319 we do a little garbage collection 2023-02-18 20:37:37 +07:00
mrq 58c981d714 Fix killing a voice generation because I must have broken it during migration 2023-02-18 19:54:21 +07:00
mrq cd8919e65c fix sloppy copy paste job when looking for new models 2023-02-18 19:46:26 +07:00
mrq ebbc85fb6a finetuned => finetunes 2023-02-18 19:41:21 +07:00
lightmare 4807072894 Using zfill in utils.pad 2023-02-18 19:09:25 +07:00
mrq 1f4cdcb8a9 rude 2023-02-18 17:23:44 +07:00
mrq cf758f4732 oops 2023-02-18 15:50:51 +07:00
mrq 843bfbfb96 Simplified generating training YAML, cleaned it up, training output is cleaned up and will "autoscroll" (only show the last 8 lines, refer to console for a full trace if needed) 2023-02-18 14:51:00 +07:00
mrq 0dd5640a89 forgot that call only worked if shell=True 2023-02-18 14:14:42 +07:00
mrq 2615cafd75 added dropdown to select autoregressive model for TTS, fixed a bug where the settings saver constantly fires; I hate gradio so much, why is dropdown.change broken to continuously fire and send an empty array 2023-02-18 14:10:26 +07:00
mrq a9bd17c353 fixes #2 2023-02-18 13:07:23 +07:00
mrq 809012c84d debugging in colab is pure cock and ball torture because sometimes the files don't actually update when edited, and sometimes they update after I restart the runtime, notebook can't use venv because I can't source it in a subprocess shell call 2023-02-18 03:31:44 +07:00
mrq 915ab5f65d fixes 2023-02-18 03:17:46 +07:00
mrq 650eada8d5 fix spawning training subprocess for unixes 2023-02-18 02:40:30 +07:00
mrq d5c1433268 a bit of UI cleanup, import multiple audio files at once, actually shows progress when importing voices, hides audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows to web UI, provided notebook is cleaned up and uses a venv, etc. 2023-02-18 02:07:22 +07:00
mrq c75d0bc5da pulls DLAS for any updates since I might be actually updating it, added option to not load TTS on initialization to save VRAM when training 2023-02-17 20:43:12 +07:00
mrq ad4adc960f small fixes 2023-02-17 20:10:27 +07:00
mrq bcec64af0f cleanup, "injected" dvae.pth to download through tortoise's model loader, so I don't need to keep copying it 2023-02-17 19:06:05 +07:00
mrq 13c9920b7f caveats while I tighten some nuts 2023-02-17 17:44:52 +07:00
mrq f87764e7d0 Slight fix, getting close to be able to train from the web UI directly 2023-02-17 13:57:03 +07:00
mrq 8482131e10 oops x2 2023-02-17 06:25:00 +07:00
mrq a16e6b150f oops 2023-02-17 06:11:04 +07:00
mrq 59d0f08244 https://arch.b4k.co/v/search/text/%22TAKE%20YOUR%20DAMN%20CLOTHES%20OFF%22/type/op/ 2023-02-17 06:06:50 +07:00
mrq 12933cfd60 added dropdown to select which whisper model to use for transcription, added note that FFMPEG is required 2023-02-17 06:01:14 +07:00
mrq 96e9acdeec added preparation of LJSpeech-esque dataset 2023-02-17 05:42:55 +07:00
mrq 9c0e4666d2 updated notebooks to use the new "main" setup 2023-02-17 03:30:53 +07:00
mrq f8249aa826 tab to generate the training YAML 2023-02-17 03:05:27 +07:00
mrq 3a078df95e Initial refactor 2023-02-17 00:08:27 +07:00