
Commit Graph

  • 1e0fec4358 god i finally found some time and focus: reworded print/save freq per epoch => print/save freq (in epochs), added import config button to reread the last used settings (will check for the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), some other cleanups i genuinely don't remember what I did when I spaced out for 20 minutes mrq 2023-02-23 23:22:23 +0000
  • 7d1220e83e forgot to mult by batch size mrq 2023-02-23 15:38:04 +0000
  • 487f2ebf32 fixed the brain worm discrepancy between epochs, iterations, and steps mrq 2023-02-23 15:31:43 +0000
  • 1cbcf14cff oops mrq 2023-02-23 13:18:51 +0000
  • 41fca1a101 ugh mrq 2023-02-23 07:20:40 +0000
  • 941a27d2b3 removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module mrq 2023-02-23 07:05:39 +0000
  • 225dee22d4 huge success mrq 2023-02-23 06:24:54 +0000
  • aa96edde2f Updated notebook to put userdata under a dedicated folder (and some safeties to not nuke them if you double run the script like I did thinking rm -r [symlink] would just remove the symlink) mrq 2023-02-22 15:45:41 +0000
  • 526a430c2a how did this revert... mrq 2023-02-22 13:24:03 +0000
  • 2aa70532e8 added '''suggested''' voice chunk size (it just updates it to how many files you have, not based on combined voice length, like it should) mrq 2023-02-22 03:31:46 +0000
  • cc47ed7242 kmsing mrq 2023-02-22 03:27:28 +0000
  • 93b061fb4d oops mrq 2023-02-22 03:21:03 +0000
  • c4b41e07fa properly placed the line to extract starting iteration mrq 2023-02-22 01:17:09 +0000
  • fefc7aba03 oops mrq 2023-02-21 22:13:30 +0000
  • 9e64dad785 clamp batch size to sample count when generating for the sickos that want that, added setting to remove non-final output after a generation, something else I forgot already mrq 2023-02-21 21:50:05 +0000
  • f119993fb5 explicitly use python3 because some OSs will not have python alias to python3, allow batch size 1 mrq 2023-02-21 20:20:52 +0000
  • 8a1a48f31e Added very experimental float16 training for cards with not enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later mrq 2023-02-21 19:31:57 +0000
  • ed2cf9f5ee wrap checking for metadata when adding a voice in case it throws an error mrq 2023-02-21 17:35:30 +0000
  • b6f7aa6264 fixes mrq 2023-02-21 04:22:11 +0000
  • bbc2d26289 I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and makes sense mrq 2023-02-21 03:00:45 +0000
  • 7d1936adad actually cleaned the notebook mrq 2023-02-20 23:12:53 +0000
  • 1fd88afcca updated notebook for newer setup structure, added formatting of getting it/s and last loss rate (have not tested loss rate yet) mrq 2023-02-20 22:56:39 +0000
  • bacac6daea handled paths that contain spaces because python for whatever god forsaken reason will always split on spaces even if wrapping an argument in quotes mrq 2023-02-20 20:23:22 +0000
  • 37ffa60d14 brain worms forgot a global, hate global semantics mrq 2023-02-20 15:31:38 +0000
  • d17f6fafb0 clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it mrq 2023-02-20 00:21:16 +0000
  • c99cacec2e oops mrq 2023-02-19 23:29:12 +0000
  • 109757d56d I forgot submodules existed mrq 2023-02-19 21:41:51 +0000
  • ee95616dfd optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs) mrq 2023-02-19 21:06:14 +0000
  • 6260594a1e Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML mrq 2023-02-19 20:38:00 +0000
  • 4694d622f4 doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there mrq 2023-02-19 20:22:03 +0000
  • ec76676b16 i hate gradio I hate having to specify step=1 mrq 2023-02-19 17:12:39 +0000
  • 4f79b3724b Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm) mrq 2023-02-19 16:24:06 +0000
  • 092dd7b2d7 added more safeties and parameters to training yaml generator, I think I tested it extensively enough mrq 2023-02-19 16:16:44 +0000
  • f4e82fcf08 I swear I committed forwarding arguments from the start scripts mrq 2023-02-19 15:01:16 +0000
  • 3891870b5d Update notebook to follow the 'other' way of installing mrq/tortoise-tts mrq 2023-02-19 07:22:22 +0000
  • d89b7d60e0 forgot to divide checkpoint freq by iterations to get checkpoint counts mrq 2023-02-19 07:05:11 +0000
  • 485319c2bb don't know what brain worms had me throw printing training output under verbose mrq 2023-02-19 06:28:53 +0000
  • debdf6049a forgot to copy again from dev folder to git folder mrq 2023-02-19 06:04:46 +0000
  • ae5d4023aa fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None, but I also assume that might be a deserialization issue from JSON (cannot be assed to ask people to screenshot UI or send their ./config/generation.json for analysis, so get this hot monkeyshit patch) mrq 2023-02-19 06:02:47 +0000
  • ec550d74fd changed setup scripts to just clone mrq/tortoise-tts and install locally, instead of relying on pip's garbage git-integrations mrq 2023-02-19 05:29:01 +0000
  • 57060190af absolutely detest global semantics mrq 2023-02-19 05:12:09 +0000
  • f44239a85a added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update mrq 2023-02-19 05:10:08 +0000
  • e7d0cfaa82 added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training mrq 2023-02-19 05:05:30 +0000
  • 5fcdb19f8b I forgot to make it update the whisper model at runtime mrq 2023-02-19 01:47:06 +0000
  • 47058db67f oops mrq 2023-02-18 20:56:34 +0000
  • fc5b303319 we do a little garbage collection mrq 2023-02-18 20:37:37 +0000
  • 58c981d714 Fix killing a voice generation because I must have broken it during migration mrq 2023-02-18 19:54:21 +0000
  • cd8919e65c fix sloppy copy paste job when looking for new models mrq 2023-02-18 19:46:26 +0000
  • ebbc85fb6a finetuned => finetunes mrq 2023-02-18 19:41:21 +0000
  • 8dddb560e1 Merge pull request 'Using zfill in utils.pad' (#5) from lightmare/ai-voice-cloning:zfill into master mrq 2023-02-18 19:29:57 +0000
  • 4807072894 Using zfill in utils.pad lightmare 2023-02-18 19:09:25 +0000
  • 1f4cdcb8a9 rude mrq 2023-02-18 17:23:44 +0000
  • 13d466baf5 notebook tweaked, drive mounts and symlinks folders so I can stop having to wait a gorillion years to import voices mrq 2023-02-18 16:30:05 +0000
  • 996e5217d2 apparently anything after deactivate does not get run, as it terminates the batch script. mrq 2023-02-18 16:20:26 +0000
  • cf758f4732 oops mrq 2023-02-18 15:50:51 +0000
  • 843bfbfb96 Simplified generating training YAML, cleaned it up, training output is cleaned up and will "autoscroll" (only show the last 8 lines, refer to console for a full trace if needed) mrq 2023-02-18 14:51:00 +0000
  • 0dd5640a89 forgot that call only worked if shell=True mrq 2023-02-18 14:14:42 +0000
  • 2615cafd75 added dropdown to select autoregressive model for TTS, fixed a bug where the settings saver constantly fires I hate gradio so much why are dropdown.change broken to continuously fire and send an empty array mrq 2023-02-18 14:10:26 +0000
  • a9bd17c353 fixes #2 mrq 2023-02-18 13:07:23 +0000
  • c26bda4d96 finally can get training to work under the web UI mrq 2023-02-18 03:36:08 +0000
  • 809012c84d debugging in colab is pure cock and ball torture because sometimes the files don't actually update when edited, and sometimes they update after I restart the runtime, notebook can't use venv because I can't source it in a subprocess shell call mrq 2023-02-18 03:31:44 +0000
  • 915ab5f65d fixes mrq 2023-02-18 03:17:46 +0000
  • 602d477935 crunchbangs mrq 2023-02-18 02:46:44 +0000
  • 650eada8d5 fix spawning training subprocess for unixes mrq 2023-02-18 02:40:30 +0000
  • d5c1433268 a bit of UI cleanup, import multiple audio files at once, actually shows progress when importing voices, hides audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows to web UI, provided notebook is cleaned up and uses a venv, etc. mrq 2023-02-18 02:07:22 +0000
  • c75d0bc5da pulls DLAS for any updates since I might be actually updating it, added option to not load TTS on initialization to save VRAM when training mrq 2023-02-17 20:43:12 +0000
  • a245dc43c0 small fixes mrq 2023-02-17 20:18:57 +0000
  • 67208be022 just in case mrq 2023-02-17 20:13:00 +0000
  • ad4adc960f small fixes mrq 2023-02-17 20:10:27 +0000
  • f708909687 Wiki'd mrq 2023-02-17 19:21:31 +0000
  • bcec64af0f cleanup, "injected" dvae.pth to download through tortoise's model loader, so I don't need to keep copying it mrq 2023-02-17 19:06:05 +0000
  • 13c9920b7f caveats while I tighten some nuts mrq 2023-02-17 17:44:52 +0000
  • 8d268bc7a3 training added, seems to work, need to test it more mrq 2023-02-17 16:29:27 +0000
  • 229be0bdb8 almost mrq 2023-02-17 15:53:50 +0000
  • f87764e7d0 Slight fix, getting close to be able to train from the web UI directly mrq 2023-02-17 13:57:03 +0000
  • 8482131e10 oops x2 mrq 2023-02-17 06:25:00 +0000
  • a16e6b150f oops mrq 2023-02-17 06:11:04 +0000
  • 59d0f08244 https://arch.b4k.co/v/search/text/%22TAKE%20YOUR%20DAMN%20CLOTHES%20OFF%22/type/op/ mrq 2023-02-17 06:06:50 +0000
  • 12933cfd60 added dropdown to select which whisper model to use for transcription, added note that FFMPEG is required mrq 2023-02-17 06:01:14 +0000
  • 96e9acdeec added preparation of LJSpeech-esque dataset mrq 2023-02-17 05:42:55 +0000
  • 9c0e4666d2 updated notebooks to use the new "main" setup mrq 2023-02-17 03:30:53 +0000
  • f8249aa826 tab to generate the training YAML mrq 2023-02-17 03:05:27 +0000
  • 3a078df95e Initial refactor mrq 2023-02-17 00:08:27 +0000
  • 0456f71ec3 Initial commit mrq 2023-02-16 19:38:15 +0000
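The zero-padding change merged in 8dddb560e1 / 4807072894 ("Using zfill in utils.pad") can be sketched as follows. The signature and name `pad(num, width)` are assumptions for illustration, not taken from the repository's actual `utils.pad`:

```python
def pad(num: int, width: int) -> str:
    """Hypothetical sketch of a utils.pad-style helper.

    str.zfill left-pads the decimal string with zeros up to `width`
    characters, replacing any manual '0' * n concatenation; numbers
    already wider than `width` are returned unchanged.
    """
    return str(num).zfill(width)
```

Zero-padding like this keeps numbered output files (e.g. generated audio clips) lexicographically sortable in a file listing.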