Catch OOM and run whisper on cpu automatically. #117

Merged
mrq merged 1 commit from :vram into master 2023-03-12 05:09:54 +07:00

For users wishing to run a larger Whisper model but lacking the VRAM for it, run it on the CPU instead.

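The fallback this PR describes can be sketched as a try/retry around model loading. This is a minimal illustration of the pattern, not the actual diff (the real change is in commit 8ed09f9b87): PyTorch surfaces CUDA OOM as a `RuntimeError` whose message contains "out of memory", so the loader can catch that and retry on the CPU. The `load_whisper_with_fallback` helper and the `fake_loader` stub below are hypothetical names used only for the demonstration.

```python
def load_whisper_with_fallback(load_model, name, device="cuda"):
    """Try to load a Whisper model on the GPU; fall back to CPU on OOM.

    load_model is any callable with the shape of whisper.load_model(name, device=...).
    Returns (model, device_actually_used).
    """
    try:
        return load_model(name, device=device), device
    except RuntimeError as e:
        # PyTorch reports CUDA OOM as a RuntimeError mentioning "out of memory";
        # anything else is a genuine error and should propagate.
        if "out of memory" not in str(e).lower():
            raise
        print(f"Ran out of VRAM loading '{name}', retrying on CPU")
        return load_model(name, device="cpu"), "cpu"


# Stub loader simulating a GPU too small for the model, to exercise the fallback
# without needing torch or a GPU installed.
def fake_loader(name, device):
    if device == "cuda":
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")
    return f"{name}@{device}"


model, device = load_whisper_with_fallback(fake_loader, "large-v2")
print(model, device)  # falls back: large-v2@cpu cpu
```

In the real code path the callable would be `whisper.load_model` itself; the pattern is the same either way.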
zim33 added 1 commit 2023-03-12 04:50:50 +07:00
mrq merged commit 8ed09f9b87 into master 2023-03-12 05:09:54 +07:00
mrq deleted branch vram 2023-03-12 05:09:54 +07:00

Ah, I don't know why I didn't think to check whether there was a way to set openai/whisper to CPU only.

I suppose for Windows users with low GPU VRAM but enough system RAM, this will do fine, but I'd prefer Linux users use WhisperCPP, as the models are just as good but use less RAM.


Reference: mrq/ai-voice-cloning#117