0b1a71430c
added BigVGAN and HiFiGAN (from https://git.ecker.tech/jarod/tortoise-tts), vocoder selectable in webUI
2024-06-19 21:43:29 -05:00

a5c21d65d2
added automatically loading the default YAML if --yaml is not provided (although I think it already does this by using defaults); the default YAML will use the local backend + deepspeed inferencing for speedups
2024-06-19 18:49:39 -05:00

f4fcc35aa8
fixed it breaking on subsequent utterances through the web UI from latents being on the CPU
2024-06-19 18:26:15 -05:00

96b74f38ef
sampler and cond_free selectable in webUI, re-enabled cond_free as default (somehow it's working again)
2024-06-19 17:12:28 -05:00

73f271fb8a
added automagic offloading of models to GPU, then back to CPU when they're done, during inference
2024-06-19 17:01:05 -05:00

5d24631bfb
don't pad output mel tokens to speed up diffusion (despite copying it exactly from tortoise)
2024-06-19 15:27:11 -05:00

99be487482
backported old fork features (kv_cache (which, looking back, seems like a spook), DDIM sampling, etc.)
2024-06-19 14:49:24 -05:00

e5136613f5
a semblance of documentation, automagic model downloading, and a slightly saner inference results folder
2024-06-19 10:08:14 -05:00

fb313d7ef4
working; the vocoder was just loading wrong
2024-06-18 20:55:50 -05:00

37ec9f1b79
initial "refractoring"
2024-06-17 22:48:34 -05:00