tortoise-tts/tortoise
deviandice e650800447 Update 'tortoise/utils/device.py'
Noticed that the autoregressive batch size was being set off of VRAM size. Adjusted to scale for the VRAM capacity of 90 series GPUs. In this case, 16 -> 32 batches. 

Using the standard preset with ChungusVGAN, I went from 16 steps to 8.
Over an average of 3 runs, total time dropped from 294 seconds with 16 batches to 234 seconds with 32. Can't complain at a ~1.2x speed increase from functionally 2 lines of code.

I restarted tortoise for each run and executed ```torch.cuda.empty_cache()``` just before loading the autoregressive model to clear the memory cache each time.
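The scaling rule described above can be sketched as follows. This is a hypothetical illustration, not the actual contents of tortoise/utils/device.py: the function name, the 24 GB threshold, and the doubling rule are assumptions chosen to reproduce the 16 -> 32 behavior on 90-series cards.

```python
import torch

def pick_ar_batch_size(vram_gb: float, default: int = 16) -> int:
    """Scale the autoregressive batch size with VRAM capacity.

    Hypothetical rule: double the default batch size on cards with
    24 GB or more of VRAM (e.g. 3090/4090-class GPUs).
    """
    return default * 2 if vram_gb >= 24 else default

def autoregressive_batch_size(default: int = 16) -> int:
    # Fall back to the default when no CUDA device is present.
    if not torch.cuda.is_available():
        return default
    # total_memory is reported in bytes; convert to GiB.
    vram_gb = torch.cuda.get_device_properties(0).total_memory / (1024 ** 3)
    return pick_ar_batch_size(vram_gb, default)
```

With this rule, a 24 GB card gets 32 batches while an 11 GB card stays at 16, which halves the number of autoregressive steps for a fixed number of samples.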
2023-03-07 14:05:27 +00:00
data Add chapter 1 of GoT for read.py demos 2022-05-17 11:21:57 -06:00
models added BigVGAN in place of default vocoder (credit to https://github.com/deviandice/tortoise-tts-BigVGAN) 2023-03-03 06:30:58 +00:00
utils Update 'tortoise/utils/device.py' 2023-03-07 14:05:27 +00:00
__init__.py Move everything into the tortoise/ subdirectory 2022-05-01 16:24:24 -06:00
api.py do not reload AR/vocoder if already loaded 2023-03-07 04:33:49 +00:00
do_tts.py QoL fixes 2023-02-02 21:13:28 +00:00
eval.py add eval script for testing 2022-05-12 20:15:22 -06:00
get_conditioning_latents.py fixed up the computing conditional latents 2023-02-06 03:44:34 +00:00
is_this_from_tortoise.py misc fixes 2022-05-02 18:00:57 -06:00
read.py Fixed silly lack of EOF blank line, indentation 2022-06-06 15:13:29 -05:00