CUDA out of memory - all of a sudden? #87
So today, whenever I try to generate a voice, it goes through a few minutes of the usual loading and then throws this OOM error. This never used to happen, and I'm on a 3090. Previously I'd only use about 16 GB of memory for generations like this.
Edit: It seems to succeed if I use the 'univnet' vocoder, but fails with either of the default bigvgan vocoders?
Go to Settings and set Sample Batch Size to 16. If you updated mrq/tortoise-tts recently, it added an additional automatic batch size tier that, I suppose, isn't actually safe to use for lengthier sentences. I'll revert it in a moment.
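For context, the automatic mode just picks a sample batch size from how much VRAM is free, roughly like the sketch below. The function name, tier thresholds, and batch values here are illustrative only, not the actual mrq/tortoise-tts code; the point is that a higher tier speeds up sampling but can OOM once sentence length (and therefore activation memory) grows.

```python
import torch

def pick_sample_batch_size(requested: int | None = None) -> int:
    """Illustrative sketch of VRAM-tiered auto batch sizing.
    Thresholds and values are hypothetical, not mrq/tortoise-tts's."""
    if requested:  # an explicit Sample Batch Size setting always wins
        return requested
    if not torch.cuda.is_available():
        return 1
    free_bytes, _total = torch.cuda.mem_get_info()
    free_gib = free_bytes / 1024**3
    # Higher tiers sample more candidates in parallel, but peak memory
    # also scales with sequence length, so long prompts can still OOM.
    if free_gib >= 20:
        return 32
    if free_gib >= 12:
        return 16
    if free_gib >= 8:
        return 8
    return 4
```

Setting the batch size explicitly (e.g. 16) sidesteps the top tier entirely, which is why it fixes the OOM on a 3090.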
Reverted in mrq/tortoise-tts commit 3dd5cad324. In the future, if it does happen again, you'll need to lower the Sample Batch Size under Settings.