Optimizing graphics card memory

During inference, gradients are no longer stored; they account for most of the video memory usage.
Mark Baushenko 2022-05-11 16:35:11 +03:00 committed by GitHub
parent ea8c825ee0
commit cc38333249


```diff
@@ -225,6 +225,7 @@ class TextToSpeech:
             properties.
         :param voice_samples: List of 2 or more ~10 second reference clips, which should be torch tensors containing 22.05kHz waveform data.
         """
+        with torch.no_grad():
             voice_samples = [v.to('cuda') for v in voice_samples]
             auto_conds = []
```
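
For context, here is a minimal sketch of the pattern this commit applies; the model and tensor names below are hypothetical placeholders, not code from this repository. Wrapping a forward pass in `torch.no_grad()` tells autograd not to record the computation graph, so the intermediate activations that would normally be kept for backpropagation are never retained and GPU memory usage drops.

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Hypothetical stand-in for the TTS models; the layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).to(device)
model.eval()  # disables dropout/batch-norm updates; separate from gradient tracking

x = torch.randn(64, 1024, device=device)

# Default: autograd records the graph, keeping intermediate activations in memory.
y = model(x)
print(y.requires_grad)  # True -- the graph is being tracked

# Inside no_grad, no graph is recorded, so those buffers are freed immediately.
with torch.no_grad():
    y = model(x)
print(y.requires_grad)  # False -- nothing is retained for a backward pass
```

`torch.no_grad()` changes only gradient bookkeeping, not the computed outputs, which is why it is safe to wrap a pure inference path like this one.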