|
f119993fb5
|
explicitly use python3 because some OSes will not have python aliased to python3; allow batch size 1
|
2023-02-21 20:20:52 +00:00 |
|
|
8a1a48f31e
|
Added very experimental float16 training for cards without enough VRAM (10GiB and below, maybe) !NOTE! this is VERY EXPERIMENTAL, I have zero free time to validate it right now, I'll do it later
|
2023-02-21 19:31:57 +00:00 |
|
|
ed2cf9f5ee
|
wrap checking for metadata when adding a voice in case it throws an error
|
2023-02-21 17:35:30 +00:00 |
|
|
b6f7aa6264
|
fixes
|
2023-02-21 04:22:11 +00:00 |
|
|
bbc2d26289
|
I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and make sense
|
2023-02-21 03:00:45 +00:00 |
|
|
7d1936adad
|
actually cleaned the notebook
|
2023-02-20 23:12:53 +00:00 |
|
|
1fd88afcca
|
updated notebook for newer setup structure, added formatting for getting it/s and last loss rate (have not tested loss rate yet)
|
2023-02-20 22:56:39 +00:00 |
|
|
bacac6daea
|
handled paths that contain spaces, because python, for whatever god-forsaken reason, will always split on spaces even if the argument is wrapped in quotes
|
2023-02-20 20:23:22 +00:00 |
|
|
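The splitting only bites when the command is handed to the shell as a single string; a sketch of the usual workaround (an illustration, not the repo's actual fix) is to pass `subprocess` an argument list so each argument, spaces and all, reaches the child process intact:

```python
import subprocess
import sys

def run_with_path(path: str) -> str:
    # Passing arguments as a list (shell=False) hands each one to the
    # child process whole, so a path containing spaces is never split
    # and needs no quoting gymnastics.
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1])", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```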
37ffa60d14
|
brain worms forgot a global, hate global semantics
|
2023-02-20 15:31:38 +00:00 |
|
|
d17f6fafb0
|
clean up, reordered, added some rather liberal loading/unloading auxiliary models, can't really focus right now to keep testing it, report any issues and I'll get around to it
|
2023-02-20 00:21:16 +00:00 |
|
|
c99cacec2e
|
oops
|
2023-02-19 23:29:12 +00:00 |
|
|
109757d56d
|
I forgot submodules existed
|
2023-02-19 21:41:51 +00:00 |
|
|
ee95616dfd
|
optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the inputted epochs)
|
2023-02-19 21:06:14 +00:00 |
|
|
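A guess at the intent of the batch-size optimization (a sketch, not the repo's implementation): pick the largest batch size at or below the requested one that divides the dataset size evenly, so no ragged final batch skews the epoch count.

```python
def nearest_divisible_batch_size(dataset_size: int, requested: int) -> int:
    """Largest batch size <= requested that divides dataset_size evenly,
    so the iterations-per-epoch math comes out exact and the calculated
    epochs match the requested epochs."""
    for candidate in range(min(requested, dataset_size), 0, -1):
        if dataset_size % candidate == 0:
            return candidate
    return 1
```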
6260594a1e
|
Forgot to base print/save frequencies in terms of epochs in the UI, will get converted when saving the YAML
|
2023-02-19 20:38:00 +00:00 |
|
|
4694d622f4
|
doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs, and calculate iterations from there
|
2023-02-19 20:22:03 +00:00 |
|
|
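The epoch-based bookkeeping boils down to one conversion; a sketch of the arithmetic (assumed from the message, not lifted from the repo):

```python
import math

def epochs_to_iterations(epochs: int, dataset_size: int, batch_size: int) -> int:
    # One epoch is one full pass over the dataset; iterations are
    # derived from that, never entered directly by the user.
    steps_per_epoch = math.ceil(dataset_size / batch_size)
    return epochs * steps_per_epoch
```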
ec76676b16
|
I hate gradio; I hate having to specify step=1
|
2023-02-19 17:12:39 +00:00 |
|
|
4f79b3724b
|
Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm)
|
2023-02-19 16:24:06 +00:00 |
|
|
092dd7b2d7
|
added more safeties and parameters to training yaml generator, I think I tested it extensively enough
|
2023-02-19 16:16:44 +00:00 |
|
|
f4e82fcf08
|
I swear I committed forwarding arguments from the start scripts
|
2023-02-19 15:01:16 +00:00 |
|
|
3891870b5d
|
Update notebook to follow the 'other' way of installing mrq/tortoise-tts
|
2023-02-19 07:22:22 +00:00 |
|
|
d89b7d60e0
|
forgot to divide checkpoint freq by iterations to get checkpoint counts
|
2023-02-19 07:05:11 +00:00 |
|
|
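The fix is a one-line quotient: a checkpoint fires every `checkpoint_freq` iterations, so the number produced over a run is the integer division (hypothetical names, assumed from the message):

```python
def checkpoint_count(total_iterations: int, checkpoint_freq: int) -> int:
    # A checkpoint is saved every `checkpoint_freq` iterations, so the
    # count over the whole run is the integer quotient.
    return total_iterations // checkpoint_freq
```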
485319c2bb
|
don't know what brain worms had me throw printing training output under verbose
|
2023-02-19 06:28:53 +00:00 |
|
|
debdf6049a
|
forgot to copy again from dev folder to git folder
|
2023-02-19 06:04:46 +00:00 |
|
|
ae5d4023aa
|
fix for (I assume) some inconsistency with gradio sometimes-but-not-all-the-time coercing an empty Textbox into an empty string or sometimes None; I also assume it might be a deserialization issue from JSON (can't be assed to ask people to screenshot the UI or send their ./config/generation.json for analysis, so have this hot monkeyshit patch)
|
2023-02-19 06:02:47 +00:00 |
|
|
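The patch described above presumably amounts to a blunt normalization guard; a minimal sketch (hypothetical helper, not the repo's code) that coerces whatever the Textbox yields into a plain string:

```python
def normalize_text(value) -> str:
    """Coerce whatever gradio hands us (None or '') into a plain string,
    guarding against the sometimes-None, sometimes-'' Textbox value."""
    if value is None:
        return ""
    return str(value)
```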
ec550d74fd
|
changed setup scripts to just clone mrq/tortoise-tts and install locally, instead of relying on pip's garbage git-integrations
|
2023-02-19 05:29:01 +00:00 |
|
|
57060190af
|
absolutely detest global semantics
|
2023-02-19 05:12:09 +00:00 |
|
|
f44239a85a
|
added polyfill for loading autoregressive models in case mrq/tortoise-tts absolutely refuses to update
|
2023-02-19 05:10:08 +00:00 |
|
|
e7d0cfaa82
|
added some output parsing during training (print current iteration step, and checkpoint save), added option for verbose output (for debugging), added buffer size for output, full console output gets dumped on terminating training
|
2023-02-19 05:05:30 +00:00 |
|
|
5fcdb19f8b
|
I forgot to make it update the whisper model at runtime
|
2023-02-19 01:47:06 +00:00 |
|
|
47058db67f
|
oops
|
2023-02-18 20:56:34 +00:00 |
|
|
fc5b303319
|
we do a little garbage collection
|
2023-02-18 20:37:37 +00:00 |
|
|
58c981d714
|
Fix killing a voice generation because I must have broken it during migration
|
2023-02-18 19:54:21 +00:00 |
|
|
cd8919e65c
|
fix sloppy copy paste job when looking for new models
|
2023-02-18 19:46:26 +00:00 |
|
|
ebbc85fb6a
|
finetuned => finetunes
|
2023-02-18 19:41:21 +00:00 |
|
mrq
|
8dddb560e1
|
Merge pull request 'Using zfill in utils.pad' (#5) from lightmare/ai-voice-cloning:zfill into master
Reviewed-on: #5
|
2023-02-18 19:29:57 +00:00 |
|
lightmare
|
4807072894
|
Using zfill in utils.pad
|
2023-02-18 19:09:25 +00:00 |
|
|
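The PR above likely reduces the padding helper to a single `str.zfill` call; a sketch of what `utils.pad` plausibly looks like after the change (assumed, not quoted from the PR):

```python
def pad(value: int, width: int) -> str:
    # str.zfill left-pads with zeros in one call, replacing any manual
    # "prepend '0' until long enough" loop. It never truncates.
    return str(value).zfill(width)
```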
1f4cdcb8a9
|
rude
|
2023-02-18 17:23:44 +00:00 |
|
|
13d466baf5
|
notebook tweaked, drive mounts and symlinks folders so I can stop having to wait a gorillion years to import voices
|
2023-02-18 16:30:05 +00:00 |
|
|
996e5217d2
|
apparently anything after deactivate does not get run, as it terminates the batch script.
|
2023-02-18 16:20:26 +00:00 |
|
|
cf758f4732
|
oops
|
2023-02-18 15:50:51 +00:00 |
|
|
843bfbfb96
|
Simplified generating training YAML, cleaned it up, training output is cleaned up and will "autoscroll" (only show the last 8 lines, refer to console for a full trace if needed)
|
2023-02-18 14:51:00 +00:00 |
|
|
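The "autoscroll" behavior (show only the last 8 lines) maps neatly onto a bounded deque; a sketch under that assumption, not the repo's exact code:

```python
from collections import deque

def autoscroll(lines, keep: int = 8) -> str:
    # A deque with maxlen silently drops the oldest entries as new ones
    # arrive, so only the last `keep` lines survive for display.
    buffer = deque(lines, maxlen=keep)
    return "\n".join(buffer)
```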
0dd5640a89
|
forgot that call only worked if shell=True
|
2023-02-18 14:14:42 +00:00 |
|
|
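The `call`/`shell=True` gotcha: `call` is a cmd.exe builtin, not an executable on PATH, so `subprocess` can only resolve it when the command goes through a shell. A sketch of the platform split (hypothetical helper names; the unix branch assumes a bash script):

```python
import sys

def training_command(script: str):
    """Build (command, use_shell) for spawning a training script.
    `call` is a cmd.exe builtin, not an executable, so on Windows the
    command must be a single string run with shell=True; on unixes an
    argument list with shell=False is safer (and handles spaces)."""
    if sys.platform == "win32":
        return f'call "{script}"', True
    return ["bash", script], False
```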
2615cafd75
|
added dropdown to select the autoregressive model for TTS, fixed a bug where the settings saver constantly fires; I hate gradio so much, why is dropdown.change broken to continuously fire and send an empty array
|
2023-02-18 14:10:26 +00:00 |
|
|
a9bd17c353
|
fixes #2
|
2023-02-18 13:07:23 +00:00 |
|
|
c26bda4d96
|
finally can get training to work under the web UI
|
2023-02-18 03:36:08 +00:00 |
|
|
809012c84d
|
debugging in colab is pure cock and ball torture because sometimes the files don't actually update when edited, and sometimes they update after I restart the runtime, notebook can't use venv because I can't source it in a subprocess shell call
|
2023-02-18 03:31:44 +00:00 |
|
|
915ab5f65d
|
fixes
|
2023-02-18 03:17:46 +00:00 |
|
|
602d477935
|
crunchbangs
|
2023-02-18 02:46:44 +00:00 |
|
|
650eada8d5
|
fix spawning training subprocess for unixes
|
2023-02-18 02:40:30 +00:00 |
|
|
d5c1433268
|
a bit of UI cleanup, import multiple audio files at once, actually shows progress when importing voices, hides audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows to web UI, provided notebook is cleaned up and uses a venv, etc.
|
2023-02-18 02:07:22 +00:00 |
|