tortoise-tts/tortoise
Last commit: 2023-02-06 05:10:07 +00:00
data/
models/                      fixed up the computing conditional latents (2023-02-06 03:44:34 +00:00)
utils/
voices/
__init__.py
api.py                       added flag (--cond-latent-max-chunk-size) that should restrict the maximum chunk size when chunking for calculating conditional latents, to avoid OOMing on VRAM (2023-02-06 05:10:07 +00:00)
do_tts.py
eval.py
get_conditioning_latents.py  fixed up the computing conditional latents (2023-02-06 03:44:34 +00:00)
is_this_from_tortoise.py
read.py
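The commit message on api.py describes a common pattern: when computing conditioning latents from many voice samples, process them in bounded-size chunks so that peak memory (here, VRAM) is proportional to the chunk size rather than the total input. A minimal sketch of that strategy in plain Python follows; the names (`chunked_mean_latent`, `compute_latent`, `max_chunk_size`) are illustrative assumptions, not the actual tortoise-tts API.

```python
def chunked_mean_latent(samples, max_chunk_size, compute_latent):
    """Average per-sample latents, holding at most max_chunk_size
    samples' worth of intermediate results in memory at a time.

    samples:        list of inputs (e.g. audio clips)
    max_chunk_size: upper bound on how many samples are processed at once
    compute_latent: hypothetical per-sample latent function
    """
    total = 0.0
    count = 0
    for start in range(0, len(samples), max_chunk_size):
        chunk = samples[start:start + max_chunk_size]
        # Only this chunk's latents exist at once; earlier chunks are
        # already folded into the running sum, bounding peak memory.
        latents = [compute_latent(s) for s in chunk]
        total += sum(latents)
        count += len(latents)
    return total / count
```

Because the chunks are folded into a running sum, the result is identical to averaging all latents at once; only the peak memory footprint changes with `max_chunk_size`.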