|
d5d8821a9d
|
fixed some files not copying for bitsandbytes (I was wrong to assume it copied folders too), fixed stopping generation and training, and some other thing I forgot, since this has been slowly worked on in my small bits of free time
|
2023-02-24 23:13:13 +00:00 |
|
|
f31ea9d5bc
|
oops
|
2023-02-24 16:23:30 +00:00 |
|
|
2104dbdbc5
|
oops
|
2023-02-24 13:05:08 +00:00 |
|
|
f6d0b66e10
|
finally added a model refresh button; it also searches the training folder for output models, so you don't even need to copy them
|
2023-02-24 12:58:41 +00:00 |
|
|
1e0fec4358
|
god, I finally found some time and focus: reworded "print/save freq per epoch" => "print/save freq (in epochs)", added an import config button to reread the last used settings (checks the output folder's configs first, then the generated ones) and auto-grab the last resume state (if available), plus some other cleanups; I genuinely don't remember what I did when I spaced out for 20 minutes
|
2023-02-23 23:22:23 +00:00 |
|
|
7d1220e83e
|
forgot to multiply by batch size
|
2023-02-23 15:38:04 +00:00 |
|
|
487f2ebf32
|
fixed the brain worm discrepancy between epochs, iterations, and steps
|
2023-02-23 15:31:43 +00:00 |
|
|
1cbcf14cff
|
oops
|
2023-02-23 13:18:51 +00:00 |
|
|
41fca1a101
|
ugh
|
2023-02-23 07:20:40 +00:00 |
|
|
941a27d2b3
|
removed the logic to toggle BNB capabilities, since I guess I can't do that from outside the module
|
2023-02-23 07:05:39 +00:00 |
|
|
225dee22d4
|
huge success
|
2023-02-23 06:24:54 +00:00 |
|
|
526a430c2a
|
how did this revert...
|
2023-02-22 13:24:03 +00:00 |
|
|
2aa70532e8
|
added '''suggested''' voice chunk size (it just sets it to how many files you have, not based on combined voice length like it should)
|
2023-02-22 03:31:46 +00:00 |
|
|
cc47ed7242
|
kmsing
|
2023-02-22 03:27:28 +00:00 |
|
|
93b061fb4d
|
oops
|
2023-02-22 03:21:03 +00:00 |
|
|
c4b41e07fa
|
properly placed the line to extract the starting iteration
|
2023-02-22 01:17:09 +00:00 |
|
|
fefc7aba03
|
oops
|
2023-02-21 22:13:30 +00:00 |
|
|
9e64dad785
|
clamp batch size to the sample count when generating, for the sickos that want that; added a setting to remove non-final output after a generation; something else I forgot already
|
2023-02-21 21:50:05 +00:00 |
|
|
f119993fb5
|
explicitly use python3, because some OSes will not have python aliased to python3 (sketch below); allow batch size 1
|
2023-02-21 20:20:52 +00:00 |
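A minimal sketch of the idea; the script path and flag below are placeholders, not the repo's actual ones. `sys.executable` is one common way to avoid depending on a bare `python` alias:

```python
# Hypothetical sketch: don't rely on a bare `python` alias existing.
import subprocess
import sys

# Prefer the interpreter running this script; fall back to an explicit
# `python3`, since some OSes ship no `python` -> `python3` alias.
interpreter = sys.executable or "python3"
# "train.py" and "--batch-size" are placeholder names for illustration
subprocess.Popen([interpreter, "train.py", "--batch-size", "1"])
```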
|
|
8a1a48f31e
|
Added very experimental float16 training for cards without enough VRAM (10GiB and below, maybe). !NOTE! this is VERY EXPERIMENTAL; I have zero free time to validate it right now, I'll do it later
|
2023-02-21 19:31:57 +00:00 |
|
|
ed2cf9f5ee
|
wrap the metadata check when adding a voice, in case it throws an error
|
2023-02-21 17:35:30 +00:00 |
|
|
b6f7aa6264
|
fixes
|
2023-02-21 04:22:11 +00:00 |
|
|
bbc2d26289
|
I finally figured out how to fix gr.Dropdown.change, so a lot of dumb UI decisions are fixed and now make sense
|
2023-02-21 03:00:45 +00:00 |
|
|
1fd88afcca
|
updated notebook for the newer setup structure, added formatting for getting it/s and the last loss rate (have not tested the loss rate yet)
|
2023-02-20 22:56:39 +00:00 |
|
|
bacac6daea
|
handled paths that contain spaces, because Python, for whatever god-forsaken reason, will always split on spaces even when an argument is wrapped in quotes (sketch below)
|
2023-02-20 20:23:22 +00:00 |
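The usual remedy, sketched here with hypothetical arguments: pass an argv list with `shell=False`, so nothing re-tokenizes on spaces:

```python
# Hypothetical sketch: each list element becomes exactly one argv entry,
# so a path with spaces survives without any quoting gymnastics.
import subprocess

# script name and flag are placeholders for illustration
args = ["python3", "transcribe.py", "--voice-dir", "./voices/speaker with spaces"]
subprocess.Popen(args, shell=False)
```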
|
|
37ffa60d14
|
brain worms forgot a global, hate global semantics
|
2023-02-20 15:31:38 +00:00 |
|
|
d17f6fafb0
|
cleaned up, reordered, added some rather liberal loading/unloading of auxiliary models; can't really focus right now to keep testing it, so report any issues and I'll get around to them
|
2023-02-20 00:21:16 +00:00 |
|
|
c99cacec2e
|
oops
|
2023-02-19 23:29:12 +00:00 |
|
|
ee95616dfd
|
optimize batch sizes to be as evenly divisible as possible (noticed the calculated epochs mismatched the input epochs); sketch below
|
2023-02-19 21:06:14 +00:00 |
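One way to do this (a sketch; `fit_batch_size` is a hypothetical name, not the repo's function): walk down from the requested size to the nearest divisor of the sample count, so every batch is full and the epoch math comes out exact:

```python
# Hypothetical sketch: nudge the batch size down to the nearest divisor
# of the sample count so calculated and requested epochs agree.
def fit_batch_size(num_samples: int, requested: int) -> int:
    for size in range(min(requested, num_samples), 0, -1):
        if num_samples % size == 0:
            return size
    return 1

assert fit_batch_size(100, 64) == 50   # 100 % 50 == 0, every batch is full
assert fit_batch_size(128, 64) == 64   # already divides evenly
```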
|
|
6260594a1e
|
Forgot to express print/save frequencies in terms of epochs in the UI; they get converted when saving the YAML
|
2023-02-19 20:38:00 +00:00 |
|
|
4694d622f4
|
doing something completely unrelated had me realize it's 1000x easier to just base things in terms of epochs and calculate iterations from there (sketch below)
|
2023-02-19 20:22:03 +00:00 |
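The conversion this implies is just a product (a sketch with hypothetical names):

```python
# Hypothetical sketch: one epoch = one full pass over the dataset, so
# iterations are epochs times the number of batches per epoch.
import math

def epochs_to_iterations(epochs: int, num_samples: int, batch_size: int) -> int:
    batches_per_epoch = math.ceil(num_samples / batch_size)
    return epochs * batches_per_epoch

# e.g. 500 epochs over 64 samples at batch size 16 => 500 * 4 = 2000 iterations
assert epochs_to_iterations(500, 64, 16) == 2000
```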
|
|
ec76676b16
|
I hate gradio; I hate having to specify step=1
|
2023-02-19 17:12:39 +00:00 |
|
|
4f79b3724b
|
Fixed model setting not getting updated when TTS is unloaded, for when you change it and then load TTS (sorry for that brain worm)
|
2023-02-19 16:24:06 +00:00 |
|
|
092dd7b2d7
|
added more safeties and parameters to the training YAML generator; I think I tested it extensively enough
|
2023-02-19 16:16:44 +00:00 |
|
|
d89b7d60e0
|
forgot to divide checkpoint freq by iterations to get checkpoint counts
|
2023-02-19 07:05:11 +00:00 |
|
|
485319c2bb
|
don't know what brain worms had me put printing of training output behind verbose
|
2023-02-19 06:28:53 +00:00 |
|
|
debdf6049a
|
forgot to copy again from dev folder to git folder
|
2023-02-19 06:04:46 +00:00 |
|
|
ae5d4023aa
|
fix for (I assume) some inconsistency where gradio sometimes-but-not-always coerces an empty Textbox into an empty string and sometimes into None; I also assume it might be a deserialization issue from JSON (can't be assed to ask people to screenshot the UI or send their ./config/generation.json for analysis, so have this hot monkeyshit patch; sketch below)
|
2023-02-19 06:02:47 +00:00 |
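The patch presumably boils down to normalizing the value regardless of which form gradio hands over (a sketch; `coerce_text` is a hypothetical name):

```python
# Hypothetical sketch: treat None and "" the same, whatever gradio sends.
def coerce_text(value):
    return value if isinstance(value, str) else ""

assert coerce_text(None) == ""
assert coerce_text("") == ""
assert coerce_text("hello") == "hello"
```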
|
|
57060190af
|
absolutely detest global semantics
|
2023-02-19 05:12:09 +00:00 |
|
|
f44239a85a
|
added a polyfill for loading autoregressive models, in case mrq/tortoise-tts absolutely refuses to update
|
2023-02-19 05:10:08 +00:00 |
|
|
e7d0cfaa82
|
added some output parsing during training (prints the current iteration step and checkpoint saves), added an option for verbose output (for debugging), added a buffer size for output; full console output gets dumped on terminating training (sketch below)
|
2023-02-19 05:05:30 +00:00 |
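A rough sketch of that kind of stdout plumbing; the script path and line markers here are assumptions, not the repo's actual ones:

```python
# Hypothetical sketch: stream the trainer's stdout, surface interesting
# lines as they appear, and keep everything for a final dump.
import subprocess

proc = subprocess.Popen(
    ["python3", "train.py"],  # placeholder script path
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
full_log = []
for line in proc.stdout:
    full_log.append(line)
    if "iter" in line or "Saving" in line:  # assumed markers for step/checkpoint lines
        print(line, end="")
# on terminating training, dump the whole buffered console output
print("".join(full_log))
```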
|
|
5fcdb19f8b
|
I forgot to make it update the whisper model at runtime
|
2023-02-19 01:47:06 +00:00 |
|
|
47058db67f
|
oops
|
2023-02-18 20:56:34 +00:00 |
|
|
fc5b303319
|
we do a little garbage collection
|
2023-02-18 20:37:37 +00:00 |
|
|
58c981d714
|
Fix killing a voice generation; I must have broken it during migration
|
2023-02-18 19:54:21 +00:00 |
|
|
cd8919e65c
|
fix sloppy copy-paste job when looking for new models
|
2023-02-18 19:46:26 +00:00 |
|
|
ebbc85fb6a
|
finetuned => finetunes
|
2023-02-18 19:41:21 +00:00 |
|
lightmare
|
4807072894
|
Using zfill in utils.pad
|
2023-02-18 19:09:25 +00:00 |
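For reference, `str.zfill` already does the whole job of left-padding with zeros; a sketch of the simplified helper (the signature here is assumed, not copied from the repo):

```python
# Hypothetical sketch of the simplified pad: zfill left-pads with '0'.
def pad(value, width: int) -> str:
    return str(value).zfill(width)

assert pad(7, 4) == "0007"
```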
|
|
1f4cdcb8a9
|
rude
|
2023-02-18 17:23:44 +00:00 |
|
|
cf758f4732
|
oops
|
2023-02-18 15:50:51 +00:00 |
|
|
843bfbfb96
|
Simplified generating the training YAML and cleaned it up; training output is cleaned up and will "autoscroll" (only shows the last 8 lines; refer to the console for a full trace if needed; sketch below)
|
2023-02-18 14:51:00 +00:00 |
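A bounded buffer is the natural fit for that "autoscroll"; a sketch, assuming hypothetical names:

```python
# Hypothetical sketch: a deque with maxlen keeps only the newest N lines,
# so the web UI "autoscrolls" while the console keeps the full trace.
from collections import deque

visible_lines = deque(maxlen=8)  # oldest lines fall off automatically

def on_training_line(line: str) -> str:
    visible_lines.append(line)
    return "\n".join(visible_lines)  # what the UI textbox would show
```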
|
|
0dd5640a89
|
forgot that the call only worked with shell=True
|
2023-02-18 14:14:42 +00:00 |
|
|
2615cafd75
|
added a dropdown to select the autoregressive model for TTS; fixed a bug where the settings saver constantly fires. I hate gradio so much; why is dropdown.change broken enough to continuously fire and send an empty array
|
2023-02-18 14:10:26 +00:00 |
|
|
a9bd17c353
|
fixes #2
|
2023-02-18 13:07:23 +00:00 |
|
|
809012c84d
|
debugging in colab is pure cock and ball torture: sometimes the files don't actually update when edited, sometimes they only update after I restart the runtime, and the notebook can't use a venv because I can't source it in a subprocess shell call
|
2023-02-18 03:31:44 +00:00 |
|
|
915ab5f65d
|
fixes
|
2023-02-18 03:17:46 +00:00 |
|
|
650eada8d5
|
fix spawning the training subprocess on Unixes
|
2023-02-18 02:40:30 +00:00 |
|
|
d5c1433268
|
a bit of UI cleanup: import multiple audio files at once, actually show progress when importing voices, hide audio metadata / latents if no generated settings are detected, preparing datasets shows its progress, saving a training YAML shows a message when done, training now works within the web UI, training output shows in the web UI, the provided notebook is cleaned up and uses a venv, etc.
|
2023-02-18 02:07:22 +00:00 |
|
|
c75d0bc5da
|
pulls DLAS for any updates, since I might actually be updating it; added an option to not load TTS on initialization to save VRAM when training
|
2023-02-17 20:43:12 +00:00 |
|
|
ad4adc960f
|
small fixes
|
2023-02-17 20:10:27 +00:00 |
|
|
bcec64af0f
|
cleanup, "injected" dvae.pth to download through tortoise's model loader, so I don't need to keep copying it
|
2023-02-17 19:06:05 +00:00 |
|
|
13c9920b7f
|
caveats while I tighten some nuts
|
2023-02-17 17:44:52 +00:00 |
|
|
8d268bc7a3
|
training added, seems to work, need to test it more
|
2023-02-17 16:29:27 +00:00 |
|
|
f87764e7d0
|
Slight fix; getting close to being able to train from the web UI directly
|
2023-02-17 13:57:03 +00:00 |
|
|
8482131e10
|
oops x2
|
2023-02-17 06:25:00 +00:00 |
|
|
a16e6b150f
|
oops
|
2023-02-17 06:11:04 +00:00 |
|
|
59d0f08244
|
https://arch.b4k.co/v/search/text/%22TAKE%20YOUR%20DAMN%20CLOTHES%20OFF%22/type/op/
|
2023-02-17 06:06:50 +00:00 |
|
|
12933cfd60
|
added dropdown to select which whisper model to use for transcription, added note that FFMPEG is required
|
2023-02-17 06:01:14 +00:00 |
|
|
96e9acdeec
|
added preparation of LJSpeech-esque dataset
|
2023-02-17 05:42:55 +00:00 |
|
|
9c0e4666d2
|
updated notebooks to use the new "main" setup
|
2023-02-17 03:30:53 +00:00 |
|
|
f8249aa826
|
added a tab to generate the training YAML
|
2023-02-17 03:05:27 +00:00 |
|
|
3a078df95e
|
Initial refactor
|
2023-02-17 00:08:27 +00:00 |
|