Commit Graph

88 Commits

Author | SHA1 | Message | Date
mrq | bedbb893ac | clarified import dataset settings button | 2023-02-24 16:40:22 +00:00
mrq | f31ea9d5bc | oops | 2023-02-24 16:23:30 +00:00
mrq | 2104dbdbc5 | ops | 2023-02-24 13:05:08 +00:00
mrq | f6d0b66e10 | finally added model refresh button, also searches in the training folder for outputted models so you don't even need to copy them | 2023-02-24 12:58:41 +00:00
mrq | 1e0fec4358 | god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups i genuinely don't remember what I did when I spaced out for 20 minutes | 2023-02-23 23:22:23 +00:00
mrq | 7d1220e83e | forgot to mult by batch size | 2023-02-23 15:38:04 +00:00
mrq | 487f2ebf32 | fixed the brain worm discrepancy between epochs, iterations, and steps | 2023-02-23 15:31:43 +00:00
mrq | 1cbcf14cff | oops | 2023-02-23 13:18:51 +00:00
mrq | 41fca1a101 | ugh | 2023-02-23 07:20:40 +00:00
mrq | 941a27d2b3 | removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module | 2023-02-23 07:05:39 +00:00
mrq | 225dee22d4 | huge success | 2023-02-23 06:24:54 +00:00
mrq | aa96edde2f | Updated notebook to put userdata under a dedicated folder (and some safeties to not nuke them if you double run the script like I did, thinking rm -r [symlink] would just remove the symlink) | 2023-02-22 15:45:41 +00:00
mrq | 526a430c2a | how did this revert... | 2023-02-22 13:24:03 +00:00
mrq | 2aa70532e8 | added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length, like it should) | 2023-02-22 03:31:46 +00:00
mrq | cc47ed7242 | kmsing | 2023-02-22 03:27:28 +00:00
mrq | 93b061fb4d | oops | 2023-02-22 03:21:03 +00:00
mrq | c4b41e07fa | properly placed the line to extract starting iteration | 2023-02-22 01:17:09 +00:00
mrq | fefc7aba03 | oops | 2023-02-21 22:13:30 +00:00
mrq | 9e64dad785 | clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already | 2023-02-21 21:50:05 +00:00
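The batch-size clamp mentioned in 9e64dad785 can be sketched as a minimal Python snippet (the function name is illustrative, not the repository's actual code):

```python
def clamp_batch_size(batch_size: int, sample_count: int) -> int:
    """Clamp the requested batch size so it never exceeds the number
    of samples being generated (and never drops below 1)."""
    return max(1, min(batch_size, sample_count))
```

For example, requesting a batch of 64 when only 10 samples are being generated yields an effective batch size of 10.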
mrq | f119993fb5 | explicitly use python3 because some OSs will not have python aliased to python3, allow batch size 1 | 2023-02-21 20:20:52 +00:00
mrq | 8a1a48f31e | Added very experimental float16 training for cards with not enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later | 2023-02-21 19:31:57 +00:00
mrq | ed2cf9f5ee | wrap checking for metadata when adding a voice in case it throws an error | 2023-02-21 17:35:30 +00:00
mrq | b6f7aa6264 | fixes | 2023-02-21 04:22:11 +00:00
mrq | bbc2d26289 | I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and make sense | 2023-02-21 03:00:45 +00:00
mrq | 7d1936adad | actually cleaned the notebook | 2023-02-20 23:12:53 +00:00
mrq | 1fd88afcca | updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet) | 2023-02-20 22:56:39 +00:00
mrq | bacac6daea | handled paths that contain spaces, because python for whatever god-forsaken reason will always split on spaces even if wrapping an argument in quotes | 2023-02-20 20:23:22 +00:00
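One common way to keep a space-containing path intact as a single argument in Python is to quote it with `shlex` and pass argv-style token lists rather than a single command string (a sketch of the general technique, not necessarily what commit bacac6daea does; the script name and path are hypothetical):

```python
import shlex

# Hypothetical path containing a space, for illustration only.
path = "/data/voices/my voice/sample.wav"

# shlex.quote wraps the path so a shell would treat it as one token;
# shlex.split then recovers it as a single argv element instead of
# splitting on the embedded space.
command = f"python3 process.py {shlex.quote(path)}"
args = shlex.split(command)
```

Passing `args` as a list to `subprocess.run` (without `shell=True`) avoids any further word-splitting.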
mrq | 37ffa60d14 | brain worms forgot a global, hate global semantics | 2023-02-20 15:31:38 +00:00
mrq | d17f6fafb0 | clean up, reordered, added some rather liberal loading/unloading of auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it | 2023-02-20 00:21:16 +00:00
mrq | c99cacec2e | oops | 2023-02-19 23:29:12 +00:00
mrq | 109757d56d | I forgot submodules existed | 2023-02-19 21:41:51 +00:00
mrq | ee95616dfd | optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) | 2023-02-19 21:06:14 +00:00
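One way to pick an evenly divisible batch size, as ee95616dfd describes, is to take the largest size at or below the requested one that divides the sample count exactly (a minimal sketch with an illustrative function name, not the repository's actual implementation):

```python
def evenly_divisible_batch_size(sample_count: int, requested: int) -> int:
    """Return the largest batch size <= requested that divides
    sample_count evenly, so no epoch ends with a partial batch
    (a partial batch is what skews the iterations-per-epoch math)."""
    for size in range(min(requested, sample_count), 0, -1):
        if sample_count % size == 0:
            return size
    return 1
```

For example, with 100 samples and a requested batch size of 8, this picks 5, giving exactly 20 full batches per epoch.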
mrq | 6260594a1e | Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML | 2023-02-19 20:38:00 +00:00
mrq | 4694d622f4 | doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there | 2023-02-19 20:22:03 +00:00
mrq | ec76676b16 | i hate gradio, I hate having to specify step=1 | 2023-02-19 17:12:39 +00:00
mrq | 4f79b3724b | Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) | 2023-02-19 16:24:06 +00:00
mrq | 092dd7b2d7 | added more safeties and parameters to training yaml generator, I think I tested it extensively enough | 2023-02-19 16:16:44 +00:00
mrq | f4e82fcf08 | I swear I committed forwarding arguments from the start scripts | 2023-02-19 15:01:16 +00:00
mrq | 3891870b5d | Update notebook to follow the 'other' way of installing mrq/tortoise-tts | 2023-02-19 07:22:22 +00:00
mrq | d89b7d60e0 | forgot to divide checkpoint freq by iterations to get checkpoint counts | 2023-02-19 07:05:11 +00:00
mrq | 485319c2bb | don't know what brain worms had me throw printing training output under verbose | 2023-02-19 06:28:53 +00:00
mrq | debdf6049a | forgot to copy again from dev folder to git folder | 2023-02-19 06:04:46 +00:00
mrq | ae5d4023aa | fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) | 2023-02-19 06:02:47 +00:00
mrq | ec550d74fd | changed setup scripts to just clone mrq/tortoise-tts and install locally, instead of relying on pip's garbage git-integrations | 2023-02-19 05:29:01 +00:00
mrq | 57060190af | absolutely detest global semantics | 2023-02-19 05:12:09 +00:00
mrq | f44239a85a | added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update | 2023-02-19 05:10:08 +00:00
mrq | e7d0cfaa82 | added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training | 2023-02-19 05:05:30 +00:00
mrq | 5fcdb19f8b | I forgot to make it update the whisper model at runtime | 2023-02-19 01:47:06 +00:00
mrq | 47058db67f | oops | 2023-02-18 20:56:34 +00:00
mrq | fc5b303319 | we do a little garbage collection | 2023-02-18 20:37:37 +00:00