Maybe dumb question; Could running multiple instances share the same models/resources? #273
Reference: mrq/ai-voice-cloning#273
Pretty much what the question says. I'm trying a funky experiment running two TorToiSe TTS instances at a time, but each one eats up about 5 GB of VRAM after loading the diffusion model and vocoder.
I don't know enough about ML to say whether sharing these between instances would be possible, but I figured I'd ask here before trying to write some custom setup to share the resources.
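For what it's worth, one way to avoid loading the weights twice is to keep a single model in memory and feed it from a shared work queue, rather than running two full instances. This is only a minimal sketch of the idea: `FakeTTS` is a stand-in for the real (large) model, and none of the actual TorToiSe loading code appears here.

```python
# Sketch: ONE model object shared by two worker threads via a queue.
# FakeTTS is a hypothetical placeholder for the real ~5 GB model.
import queue
import threading

class FakeTTS:
    """Stand-in for the TTS model that would be loaded once into VRAM."""
    def generate(self, text: str) -> str:
        return f"audio for: {text}"

def worker(model, lock, jobs, results):
    # Each "instance" pulls requests from the shared queue; the lock
    # serializes access so the single model is never used concurrently.
    while True:
        text = jobs.get()
        if text is None:
            break
        with lock:
            results.append(model.generate(text))
        jobs.task_done()

model = FakeTTS()        # loaded once, instead of once per instance
lock = threading.Lock()
jobs = queue.Queue()
results = []

threads = [threading.Thread(target=worker, args=(model, lock, jobs, results))
           for _ in range(2)]
for t in threads:
    t.start()
for line in ["hello", "world"]:
    jobs.put(line)
jobs.join()
for _ in threads:
    jobs.put(None)  # sentinel: tell each worker to exit
for t in threads:
    t.join()
print(sorted(results))
```

The catch is visible in the sketch: serializing on one model means the two "instances" take turns rather than generating in parallel, so you save VRAM but not wall-clock time, which is part of why a second GPU may be the more practical answer.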
Unless your time is worthless, the amount of development this would require (not just coding, but testing and debugging) would likely cost more than buying a second GPU.
I am able to run multiple instances on Google Colab, using the same model stored on Google Drive.
I originally was doing this, but for my purposes the speed wasn't adequate and running multiple Colabs for as long as I need was too costly. So I just went with an EC2 instance that can run two at a time, getting much faster times while piping the results to where I need them.
Is the $/hr better, though? A T4 with Google is $0.40/hr.
That's good to know, actually. I only opted for EC2 because I have AWS experience and already had the quota needed to run a GPU VM from other projects. I just looked, and unlike AWS, they also have a 2x T4 option! Guess I gotta wait for my quota increase from Google now lol
Actually, I just realized it's $0.20/hr: $10 buys 100 compute units, and a T4 uses 2 units per hour, so 100 / 2 = 50 hours, and $10 / 50 hours = $0.20/hr.
Ohh, I thought you meant a Google Cloud VM, which, as it happens, actually is $0.40 an hour lol.
While it would (probably) be cheaper on Colab, it came down to a speed requirement.
On Colab, voice generation was too slow: short lines took around 7-8 seconds, and longer lines sometimes up to 20 seconds. I'm generating a constant stream of 'scenes' of dialogue, so if a scene's playback time is less than its generation time, I get a delay/pause.
With the VM, I get short lines in about 3-4 seconds and longer lines in no more than 12 seconds. Plus, with Colab I had to package the output, wait for it to appear in Google Drive, and then download it, which usually ate up 30+ seconds. With the VM I can just SSH directly into my machine, and there's practically zero wait time.