UnboundLocalError: local variable 'opt' referenced before assignment #102
Reference: mrq/ai-voice-cloning#102
Getting this error after pulling a new update today. This is when I click train.
I can't really tell exactly what the issue is, since you neglected to include the full training output (the provided configuration is very important), but it's safe to assume you need to remake your training configuration.
Sorry, I think this is what you're after:
Tbh I am using my usual settings that have worked, but I will look around
Yeah, you'll need to regenerate your training configuration. I'd say you can manually edit that as "adamw", but I'm certain I had other minor bugs that were around at the time the configuration was spitting that out.
And make extra sure you actually did update with the update script and not a simple git pull (although it looks like that's not necessary, since your DLAS is in the modules folder).
Hey.
I tried a new training config and deleted the previous one, but got the same error as above. One thing I noticed: there is a "Prepare Validation" button now; when I clicked that after transcribing, I got "0 culled" - not sure if that means anything.
Otherwise, I edited the training .yaml and put "adamw" as you said, and training now works :)
I am also having the same issue, on a fresh forced-update and with all new data and training config.
Do I have to manually edit the optimizer line to

optimizer: adamw

or should it be adamw_zero (which the comment seems to imply)?
adamw
Do not use adamw_zero; it will keep your learning rate fixed and not decay, as I've learned the hard way. A lot of the "do this if you're distributing" (multigpu) comments don't seem necessary desu.
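For anyone landing here later, the fix discussed above is a one-line edit in the generated training .yaml. A minimal sketch of the relevant fragment (the surrounding key names here are illustrative assumptions, not the exact layout your config generator emits):

```yaml
# Hypothetical excerpt of the generated training .yaml; only the
# `optimizer` key is the actual fix from this thread.
steps:
  gpt_train:
    optimizer: adamw       # use adamw; adamw_zero keeps the learning rate fixed (no decay)
```

After editing, re-run training; no other multi-GPU/distributed settings need to change for single-GPU use.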
Wonder what broke then, as I haven't had issues on my three machines. I'll check a fresh install then when I get the chance.
I didn't do a complete fresh install; I did the forced-update and then deleted the tortoise and dvae folders (as per the other issue message you posted about the new update).
If your fresh install has no issues let me know, and I'll just wipe everything and do a fresh install, no big deal.
Found it. I commented out what I thought was an override. Remedied in commit eb1551ee92.