- https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.
- XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG
- Joined on 2022-10-10
Just did a quick test of Deferring TTS load, changing the model, and then loading TTS and it worked. It actually does spit out a stack trace saying TTS isn't initialized, but I do the setting save…
Right, I forgot I needed to remedy this exact situation. I think if:
- TTS is not loaded
- the model is requested to change
it'll silently fail. You'll need to load the TTS first, and then…
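A minimal sketch of the kind of guard described above, assuming a module-level `tts` handle; the function and attribute names are illustrative, not the actual webui API:

```python
# Hypothetical guard: refuse to change models while TTS isn't initialized.
# `tts`, `set_current_model`, and `load_model` are illustrative names only.
tts = None  # populated once the TTS backend has actually been loaded

def set_current_model(model_path: str) -> None:
    if tts is None:
        # Fail loudly instead of silently ignoring the request.
        raise RuntimeError("TTS is not initialized; load TTS before changing the model.")
    tts.load_model(model_path)  # illustrative call on the loaded backend
```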
Pushed 092dd7b2d78b89abc0f1855aeb1f3bee83d3eb7f; you should be able to either adjust the mega_batch_factor yourself, or let it (try to) clamp it. I still OOM locally, but at least it doesn't throw…
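For illustration only, a clamp along those lines might look like the sketch below; the actual constraint in that commit isn't quoted here, so the divisor and message are assumptions:

```python
def clamp_mega_batch_factor(batch_size: int, mega_batch_factor: int) -> int:
    """Hypothetical clamp: keep the factor small enough that each chunk of the
    batch still holds at least one sample. Not the code from the commit above."""
    clamped = max(1, min(mega_batch_factor, batch_size // 2))
    if clamped != mega_batch_factor:
        print(f"mega_batch_factor={mega_batch_factor} is too large for batch_size={batch_size}; clamping to {clamped}")
    return clamped
```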
I set 50 iterations just to try to get it going on my GPU.
Ah OK. Good thing you did, or I probably wouldn't have gotten around to figuring it out for the future, as it was plaguing me…
Yeah, it's training fine after reducing train.mega_batch_factor to 2 in the YAML. I'll admit I'm not too sure exactly why that's the case, but I'll throw out a guess and assert `batch_size /…
Crashed out before and forgot to mention it: notebook updated in 3891870b5dbc0b1044b8aa843475c7334628faeb.
Or rather, it's actually just a config option I overlooked. I lowered train.mega_batch_factor at line 125 down to 2 and it doesn't throw that error for me, although I'm OOMing on my 2060. I'll…
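For reference, the edit being described amounts to roughly the following in the training YAML; the surrounding structure is assumed for illustration, and only train.mega_batch_factor itself comes from the discussion:

```yaml
train:
  # Lowering this avoids the error above, but the larger per-chunk batches
  # appear to cost more VRAM (hence the OOM on the 2060).
  mega_batch_factor: 2
```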
Doesn't seem to be. There are a handful of open issues requesting it, but I suppose that's what I get for subjugating my private repo into…
I suppose I'll need to fix the underlying problem in the training scripts. I usually seem to get that error when the batch size is too low, but I haven't hit it even at batch size 6.
Yeah, I just realized that while fucking about with my Colab. I'll need to update it.