forked from mrq/ai-voice-cloning
- Resample to 22.5K when creating training inputs, to avoid redundant downsampling when the clips are loaded for training (most of my inputs are already at 22.5K).
- Generalize the resampler function so that resamplers are cached and reused.
- Do not unload Whisper when done transcribing, since it gets unloaded anyway before any other non-transcription task.
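A minimal sketch of the resampler-caching idea: keep one resampler per (source rate, target rate) pair and hand back the cached one on repeated calls, instead of rebuilding it for every clip. The function name `get_resampler` and the naive nearest-neighbor kernel are illustrative stand-ins, not the repo's actual implementation (which would wrap a real resampling kernel such as `torchaudio.transforms.Resample`).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_resampler(orig_sr: int, target_sr: int):
    """Return a resampling function for this rate pair, cached across calls."""
    ratio = target_sr / orig_sr

    def resample(samples):
        # Naive nearest-neighbor resampling; a stand-in for a real kernel.
        n_out = int(len(samples) * ratio)
        return [samples[int(i / ratio)] for i in range(n_out)]

    return resample

# Repeated lookups for the same rate pair reuse the same cached resampler.
downsample = get_resampler(44100, 22050)
assert downsample is get_resampler(44100, 22050)
```

Caching by rate pair matters because building a real resampler (computing its filter kernel) is far more expensive than applying it, and a dataset typically contains only a handful of distinct source rates.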
parent 7c9c0dc584
commit 050bcefd73