6ebdde58f0
(finally) added the CVVP model weight slider; latents now export more data for weighting against CVVP
2023-02-07 20:55:56 +00:00
793515772a
un-hardcoded input/output sampling rates (changing them "works" but, naturally, leads to wrong audio)
2023-02-07 18:34:29 +00:00
5f934c5feb
(maybe) fixed an issue where using prompt redactions (emotions) on CPU caused a crash, because wav2vec_alignment assumed CUDA was always available
2023-02-07 07:51:05 -06:00
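A minimal sketch of the kind of device fallback that avoids this class of crash; the helper name is hypothetical and this is not the project's actual wav2vec_alignment code:

    import torch

    def pick_device(preferred: str = "cuda") -> torch.device:
        # Fall back to CPU instead of assuming CUDA is always available.
        if preferred == "cuda" and torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")

    # hypothetical usage inside an alignment helper:
    # model = model.to(pick_device())
    # audio = audio.to(pick_device())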
d6b5d67f79
forgot to auto-compute batch size again if set to 0
2023-02-06 23:14:17 -06:00
be6fab9dcb
added setting to adjust autoregressive sample batch size
2023-02-06 22:31:06 +00:00
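Taken together with the fix above, the setting behaves like "0 means auto"; a rough sketch with hypothetical names:

    def resolve_sample_batch_size(requested: int, auto_batch_size) -> int:
        # Treat 0 (or any non-positive value) as "compute automatically".
        if requested and requested > 0:
            return requested
        return auto_batch_size()  # auto_batch_size is an assumed callable

    # resolve_sample_batch_size(0, lambda: 16) -> 16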
b441a84615
added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM
2023-02-06 05:10:07 +00:00
a1f3b6a4da
fixed up computing conditional latents
2023-02-06 03:44:34 +00:00
5bf21fdbe1
modified how conditional latents are computed (before, it only read the first 102400 samples, i.e. 102400/24000 ≈ 4.27 seconds, per audio input; now it chunks the whole clip to compute latents)
2023-02-05 23:25:41 +00:00
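A hedged sketch of the chunked approach described above, with a cap mirroring --cond-latent-max-chunk-size; the names and the averaging step are assumptions, not the project's actual code:

    import torch

    def compute_latents_chunked(audio: torch.Tensor, encoder, max_chunk: int = 102400) -> torch.Tensor:
        # audio: 1D mono waveform; encoder: callable returning a latent per chunk.
        # Split the whole clip into chunks of at most max_chunk samples and
        # average the per-chunk latents (a short trailing chunk may need padding).
        chunks = torch.split(audio, max_chunk)
        latents = [encoder(chunk.unsqueeze(0)) for chunk in chunks]
        return torch.stack(latents, dim=0).mean(dim=0)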
f66754b557
oops
2023-02-05 20:10:40 +00:00
1c582b5dc8
added button to refresh voice list; enabled KV caching for a bonerific speed increase (credit to https://github.com/152334H/tortoise-tts-fast/)
2023-02-05 17:59:13 +00:00
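For context, KV caching speeds up autoregressive decoding by reusing past attention keys/values instead of recomputing them at every step. A generic illustration with Hugging Face transformers, not tortoise's internal implementation:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    input_ids = tokenizer("Hello", return_tensors="pt").input_ids
    past = None
    with torch.no_grad():
        for _ in range(20):
            out = model(input_ids, past_key_values=past, use_cache=True)
            past = out.past_key_values                      # reuse cached keys/values
            next_id = out.logits[:, -1].argmax(-1, keepdim=True)
            input_ids = next_id                             # feed only the newest token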
8831522de9
New tunable: pause size/breathing room (governs pause at the end of clips)
2023-02-05 14:45:51 +00:00
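What such a tunable amounts to is padding the end of each generated clip with a configurable stretch of silence; a minimal sketch with illustrative names and defaults:

    import torch

    def add_breathing_room(audio: torch.Tensor, sample_rate: int, pause_seconds: float = 0.5) -> torch.Tensor:
        # Append pause_seconds of silence to the end of a 1D mono waveform.
        silence = torch.zeros(int(sample_rate * pause_seconds), dtype=audio.dtype, device=audio.device)
        return torch.cat([audio, silence])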
bf32efe503
Added multi-line parsing
2023-02-05 06:17:51 +00:00
84a9758ab9
Set transformer and model folder to local './models/' instead of the user profile, because I'm sick of more bloat polluting my C:\
2023-02-05 04:18:35 +00:00
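One hedged way to keep Hugging Face transformer downloads out of the user-profile cache is to point the cache environment variable at a local folder before the library loads anything; the exact mechanism used here may differ:

    import os
    os.environ["TRANSFORMERS_CACHE"] = "./models/"  # set before importing/using transformers
    import transformers  # downloads now land in ./models/ rather than the default user cache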
ed33e34fcc
Added option to choose between diffusion samplers (p, ddim)
2023-02-05 01:28:31 +00:00
5c876b81f3
Added small optimization with caching latents; dropped Anaconda for a plain py3.9 + pip + venv setup and added helper install scripts for it; cleaned up app.py; added flag '--low-vram' to disable minor optimizations
2023-02-04 01:50:57 +00:00
8f20afc18f
Reverted slight improvement patch, as it's just enough to OOM on GPUs with low VRAM
2023-02-03 21:45:06 +00:00
e8d4a4f89c
Added progress for transforming to audio; changed number inputs to sliders
2023-02-03 04:56:30 +00:00
ea751d7b6c
forgot to copy the alleged slight performance improvement patch; added detailed progress information by passing gr.Progress; saved a little more info with the output
2023-02-03 04:20:01 +00:00
Johan Nordberg
de7c5ddec3
Typofix
2022-06-11 21:19:07 +09:00
Johan Nordberg
b876a6b32c
Allow running on CPU
2022-06-11 20:03:14 +09:00
Johan Nordberg
9f6ae0f0b3
Add tortoise_cli.py
2022-05-28 05:25:23 +00:00
Johan Nordberg
b681fa9d11
Skip CLVP if cvvp_amount is 1
...
Also fixes formatting bug in log message
2022-05-25 11:12:53 +00:00
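Conceptually this is a short-circuit: when cvvp_amount is 1, CLVP contributes nothing to the re-ranking score, so computing it is wasted work. A hedged sketch with hypothetical names:

    def rerank_score(candidate, clvp_score, cvvp_score, cvvp_amount: float) -> float:
        # Skip the CLVP pass entirely when its weight would be zero.
        if cvvp_amount == 1:
            return cvvp_score(candidate)
        if cvvp_amount == 0:
            return clvp_score(candidate)
        return (1 - cvvp_amount) * clvp_score(candidate) + cvvp_amount * cvvp_score(candidate)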
Johan Nordberg
a52e3026ba
Revive CVVP model
2022-05-25 10:22:50 +00:00
James Betker
1a8c9f741a
Merge remote-tracking branch 'origin/main'
...
# Conflicts:
# tortoise/read.py
2022-05-19 10:34:54 -06:00
Johan Nordberg
20220893af
Allow setting models path from environment variable
2022-05-19 21:02:09 +09:00
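A minimal sketch of reading a models directory from an environment variable with a fallback default; the variable name and default path are assumptions:

    import os

    MODELS_DIR = os.environ.get(
        "TORTOISE_MODELS_DIR",                            # assumed variable name
        os.path.expanduser("~/.cache/tortoise/models"),   # assumed fallback location
    )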
James Betker
8139afd0e5
Remove CVVP
...
After training a similar model for a different purpose, I realized that
this model is faulty: the contrastive loss it uses only pays attention
to high-frequency details which do not contribute meaningfully to
output quality. I validated this by comparing a no-CVVP output with
a baseline using tts-scores and found no differences.
2022-05-17 12:21:25 -06:00
James Betker
aef86d21bf
Add a way to get deterministic behavior from tortoise and add debug states for reporting
2022-05-17 12:11:18 -06:00
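Deterministic output in PyTorch generally comes down to fixing every RNG seed and opting into deterministic kernels; a general-purpose sketch, not necessarily the exact switches tortoise exposes:

    import random
    import numpy as np
    import torch

    def seed_everything(seed: int = 0) -> None:
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Trade speed for reproducibility in cuDNN-backed ops.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False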
James Betker
50690e4465
Automatically pick batch size based on available GPU memory
2022-05-13 10:30:02 -06:00
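A hedged sketch of deriving a batch size from free GPU memory; the tiers are illustrative, not the values the project uses:

    import torch

    def pick_batch_size(default_cpu: int = 1) -> int:
        if not torch.cuda.is_available():
            return default_cpu
        free_bytes, _total = torch.cuda.mem_get_info()  # free/total VRAM in bytes
        free_gb = free_bytes / (1024 ** 3)
        # More free VRAM -> larger autoregressive sample batches (illustrative tiers).
        if free_gb > 16:
            return 16
        if free_gb > 8:
            return 8
        return 4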
James Betker
1a4f0fa350
update model paths (including clvp2!)
2022-05-12 20:18:11 -06:00
James Betker
7d5e7dbba8
CLVP2!
2022-05-12 13:23:03 -06:00
Mark Baushenko
cbccc5e953
Optimizing graphics card memory
...
During inference, gradients are no longer stored; they take up most of the video memory
2022-05-11 16:35:11 +03:00
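The standard way to avoid building the autograd graph (and the gradient/activation memory it implies) during inference is to wrap the forward pass in torch.inference_mode() or torch.no_grad(); a generic example:

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(256, 256).to(device).eval()
    x = torch.randn(8, 256, device=device)

    with torch.inference_mode():  # no autograd graph is built, saving GPU memory
        y = model(x)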
James Betker
317d55c252
re-enable redaction
2022-05-06 09:36:42 -06:00
James Betker
8672075914
temporarily disable redaction
2022-05-06 09:06:20 -06:00
James Betker
ddb19f6b0f
Enable redaction by default
2022-05-03 21:21:52 -06:00
James Betker
c1d004aeb0
change quality presets
2022-05-03 21:01:26 -06:00
James Betker
a4cda68ddf
getting ready for 2.1 release
2022-05-02 20:20:50 -06:00
James Betker
f499d66493
misc fixes
2022-05-02 18:00:57 -06:00
James Betker
2888ae0337
Fix bug with k>1
2022-05-02 18:00:22 -06:00
James Betker
cdf44d7506
more fixes
2022-05-02 16:44:47 -06:00
James Betker
39ec1b0db5
Support totally random voices (and make fixes to previous changes)
2022-05-02 15:40:03 -06:00
James Betker
9007955d88
Add redaction support
2022-05-02 14:57:29 -06:00
James Betker
cd2d4229bf
Better error messages when inputs are out of bounds.
2022-05-01 17:39:36 -06:00
James Betker
0ffc191408
Add support for extracting and feeding conditioning latents directly into the model
...
- Adds a new script and API endpoints for doing this
- Reworks autoregressive and diffusion models so that the conditioning is computed separately (which will actually provide a mild performance boost)
- Updates README
This is untested. Need to do the following manual tests (and someday write unit tests for this behemoth before
it becomes a problem...)
1) Does get_conditioning_latents.py work?
2) Can I feed those latents back into the model by creating a new voice?
3) Can I still mix and match voices (both with conditioning latents and normal voices) with read.py?
2022-05-01 17:25:18 -06:00
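A hedged sketch of the workflow this enables: compute a voice's conditioning latents once, cache them on disk, and reuse them instead of the raw reference clips. The method name on the tts object is an assumption, not necessarily the exact API added here:

    import os
    import torch

    def get_or_compute_latents(tts, voice_dir: str, voice_samples):
        # Cache conditioning latents so they are computed only once per voice.
        path = os.path.join(voice_dir, "cond_latents.pth")
        if os.path.exists(path):
            return torch.load(path)
        latents = tts.get_conditioning_latents(voice_samples)  # assumed API
        torch.save(latents, path)
        return latents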
James Betker
f7c8decfdb
Move everything into the tortoise/ subdirectory
...
For eventual packaging.
2022-05-01 16:24:24 -06:00