[Question] 124GB Model from 8 minutes of audio #444
Reference: mrq/ai-voice-cloning#444
Hi,
I'm new to AI voice cloning and couldn't find any reference figures for how large a training/finetuning run should be.
I trained a model on 8-9 minutes of audio, and the resulting ./training/ folder is around 124 GB. Is this normal, or did I miss something?
BR
You probably set your 'Save Frequency (in epochs)' to a very low number, which means a new model and training state are saved every few epochs.
In your training/finetuning folder, open the "models" subfolder; you should see a lot of .pth files. Remove all of them except the one with the highest number in its name.
Then go back to the training/finetuning folder and do the same thing in the "training_state" folder.
The files with lower numbers are just earlier, less-trained checkpoints.
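If you'd rather not delete files by hand, the cleanup above can be sketched as a small script. This is a minimal sketch, not part of the project: it assumes the checkpoint filenames contain the epoch/step number (as described above), and `prune_checkpoints` is a hypothetical helper name. Run it once per folder ("models", then "training_state"), first with `dry_run=True` to check what it would remove.

```python
import re
from pathlib import Path

def prune_checkpoints(folder, suffix="*.pth", dry_run=True):
    """Delete every checkpoint in `folder` except the one whose filename
    contains the highest number (the most-trained snapshot).

    Returns the list of paths that were (or would be) deleted.
    """
    numbered = []
    for path in Path(folder).glob(suffix):
        match = re.search(r"\d+", path.stem)
        if match:  # skip files with no number in the name
            numbered.append((int(match.group()), path))
    numbered.sort()                               # lowest number first
    doomed = [path for _, path in numbered[:-1]]  # keep only the highest
    for path in doomed:
        if dry_run:
            print(f"would delete: {path}")
        else:
            path.unlink()
    return doomed
```

For the "training_state" folder, pass whatever suffix those files actually use (check the folder first; `"*.state"` is only a guess).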