Ahhhh... I do have a sound file that is 1 hour+ because it's a narrated audiobook file (with the bad bits snipped out, but still kept as one single file).
That'll do it. If it's in…
It might have to do with sound file length. If there's even one sound file that's much longer than the rest, all the other sounds get padded to the longest length so they match. It could…
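To illustrate what that padding looks like, here's a minimal sketch (assuming the loader pads every clip in a batch to the batch's longest clip; `pad_batch` is a made-up helper, not the repo's actual code):

```python
import torch
import torch.nn.functional as F

def pad_batch(clips: list[torch.Tensor]) -> torch.Tensor:
    """Zero-pad 1-D waveforms to the longest clip's length, then stack."""
    max_len = max(clip.shape[-1] for clip in clips)
    padded = [F.pad(clip, (0, max_len - clip.shape[-1])) for clip in clips]
    return torch.stack(padded)

# One 1h+ audiobook file drags everything else up with it:
# a 10s clip at 22.05kHz becomes a row of ~79M samples, mostly zeros.
batch = pad_batch([torch.randn(22050 * 10), torch.randn(22050 * 3600)])
print(batch.shape)  # torch.Size([2, 79380000])
```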
Had to put chunks up to 32 to avoid out-of-memory errors (seems even 24GB of VRAM isn't enough these days lol).
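Rough sketch of why raising the chunk count helps: slicing the long audio into more pieces bounds how much the model processes at once, so peak VRAM drops at the cost of some speed (`encode` here is just a stand-in for whatever per-chunk computation the repo actually runs):

```python
import torch

def process_in_chunks(wav: torch.Tensor, chunks: int, encode):
    # More chunks -> shorter slices -> smaller peak activation memory.
    return [encode(part) for part in torch.chunk(wav, chunks, dim=-1)]
```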
But I am now getting actual accented voices out of the generator!
I wonder…
That is also to say, I might need to have it use the default behavior too, if it works better in some cases, since I believe the fast repo by default uses the old behavior (use the first 4…
Yeah, it looks like you can just copy the
./voices/patrick/cond_latents_f90a07a1.pth
file and paste it into the fast repo's voice folder (I'd suggest a new voice folder, just to make sure…
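If it helps anyone, a minimal sketch of that copy step in Python; the destination path is an assumption about the fast repo's layout, and the voice folder name is just this thread's example:

```python
import shutil
from pathlib import Path
import torch

src = Path("./voices/patrick/cond_latents_f90a07a1.pth")
dst_dir = Path("../tortoise-tts-fast/tortoise/voices/patrick")  # assumed layout
dst_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(src, dst_dir / src.name)

# Quick sanity check that the latents still deserialize:
latents = torch.load(dst_dir / src.name, map_location="cpu")
```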
Thanks for looking into it. I would much prefer it if I could do the entire process within your repo version, as the GUI and everything is so much nicer to use (it took me ages of struggling to…
I have done training on various versions of the mrq repo over the last few days (I tend to update whenever you say you've fixed something, haha), but all of my tests have been garbled like this…
That's strange. Can you send over that model for me to test against?
I can't imagine I broke anything. The only thing I can think of would be it somehow not actually using the model for…
I'm having the same issue on Paperspace; not sure how to fix it.
No idea on Paperspace, I'm doing it locally, but the wiki has the fix in the "issues" section.
adamw
Do not use adamw_zero; it will keep your learning rate fixed with no decay, as I've learned the hard way. A lot of the "do this if you're distributing" (multi-GPU) comments don't…
Do I have to manually edit
optimizer: ${optimizer} # this should be adamw_zero if you're using distributed training
to
optimizer: adamw
or should it be adamw_zero (which the comment…
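For context on what those two options usually map to: a hedged sketch, assuming adamw resolves to plain torch.optim.AdamW and adamw_zero to PyTorch's ZeroRedundancyOptimizer wrapper, which shards optimizer state across ranks and only makes sense for distributed runs (this is not the trainer's actual code):

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer

def build_optimizer(name: str, params, lr: float = 1e-4):
    if name == "adamw":
        return torch.optim.AdamW(params, lr=lr)
    if name == "adamw_zero":
        # Requires an initialized process group; meant for multi-GPU training.
        return ZeroRedundancyOptimizer(params, optimizer_class=torch.optim.AdamW, lr=lr)
    raise ValueError(f"unknown optimizer: {name}")
```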
I am also having the same issue, on a fresh forced-update and with all new data and training config.
Can't help with your issue but... you were getting good results with only 32 epochs?
I'm currently on... 5990 epochs and it's still pretty much just a garbled mess.
This is the current output at around 4000 epochs, with the current graph in the image below. I'm not sure if I'm doing something wrong, or if I simply haven't done…
Good idea. I also have a Discord set up if any of you guys want to share ideas/tips: https://discord.com/channels/1041095964028575765/1078356417582481468
This seems to be a link to a channel,…