tortoise-tts/tortoise/utils
deviandice e650800447 Update 'tortoise/utils/device.py'
Noticed that the autoregressive batch size was being set based on VRAM size. Adjusted the scaling to account for the VRAM capacity of 90-series GPUs. In this case, 16 -> 32 batches.
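
For context, a minimal sketch of the kind of VRAM tiering this change implies; the helper name and the exact tier boundaries below are assumptions for illustration, not the actual `device.py` code:

```python
import torch

def get_autoregressive_batch_size() -> int:
    """Hypothetical sketch: pick the autoregressive batch size from free VRAM.

    The point of the change is that ~24 GB cards (90-series GPUs) get a
    32-batch tier instead of topping out at 16.
    """
    if not torch.cuda.is_available():
        return 1
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_gb = free_bytes / (1024 ** 3)
    if free_gb > 20:   # 90-series cards: double the previous ceiling
        return 32
    if free_gb > 14:   # previous top tier
        return 16
    if free_gb > 10:
        return 8
    if free_gb > 7:
        return 4
    return 1
```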

Using the standard preset with ChungusVGAN, generation went from 16 steps to 8.

Over an average of 3 runs, generation time dropped from 294 seconds with 16 batches to 234 seconds with 32. Can't complain about a ~1.25x speed increase from functionally 2 lines of code.

I restarted tortoise for each run and executed ```torch.cuda.empty_cache()``` just before loading the autoregressive model to clear the memory cache each time.
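
A minimal sketch of that measurement hygiene (the loading step is omitted here; only the cache clearing is shown):

```python
import torch

# Drop any cached allocations so each timed run starts from a clean slate
# before the autoregressive model is (re)loaded.
if torch.cuda.is_available():
    torch.cuda.empty_cache()

# ... load the autoregressive model here ...
```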
2023-03-07 14:05:27 +00:00
__init__.py
audio.py Added storing the loaded model's hash to the TTS object instead of relying on jerryrigged injection (although I still have to for the weirdos who refuse to update the right way); added a parameter when loading voices to load a latent tagged with a model's hash, so latents are per-model now 2023-03-02 00:44:42 +00:00
device.py Update 'tortoise/utils/device.py' 2023-03-07 14:05:27 +00:00
diffusion.py Added integration for "voicefixer", fixed an issue where candidates>1 and lines>1 only output the last combined candidate, numbered each step of a generation in progress, output time per generation step 2023-02-11 15:02:11 +00:00
stft.py
text.py Typofix 2022-05-28 01:29:34 +00:00
tokenizer.py Remove some assumptions about working directory 2022-05-29 01:10:19 +00:00
torch_intermediary.py applied the bitsandbytes wrapper to tortoise inference (not sure if it matters) 2023-02-28 01:42:10 +00:00
typical_sampling.py
wav2vec_alignment.py applied the bitsandbytes wrapper to tortoise inference (not sure if it matters) 2023-02-28 01:42:10 +00:00