https://git.ecker.tech/ aims to provide a place to share my efforts while maintaining true ownership of my code, as I do not trust GitHub.

XMR: 4B9TQdkAkBFYrbj5ztvTx89e5LpucPeTSPzemCihdDi9EBnx7btn8RDNZTBz2zihWsjMnDkzn5As1LU6gLv3KQy8BLsZ8SG

Joined on 2022-10-10
If this is for TorToiSe, then everything under the web UI already puts the data in the right place. You're free to make any adjustments to the `train.txt` before finetuning.
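For illustration only (the filenames and transcriptions below are invented, and the exact layout depends on your setup), a `train.txt` in the common LJSpeech-style format pairs an audio path with its transcription, one utterance per line:

```text
audio/0001.wav|The quick brown fox jumps over the lazy dog.
audio/0002.wav|She sells seashells by the seashore.
```

If you hand-edit transcriptions or prune bad clips, doing it here before finetuning is the simplest place to intervene.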
In your training YAML, under `trainer.weight_dtype`, set it to `float32` or `float16`.
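A minimal sketch of what that could look like in the YAML (surrounding keys elided; only `trainer.weight_dtype` and its two values come from the thread):

```yaml
trainer:
  # float32 is the safe default; float16 trades precision for lower VRAM use
  weight_dtype: float32
```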
In your training YAML, change `dataset.sample_type` to `path` instead of `speaker`.
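Again as a sketch, with surrounding keys elided (only `dataset.sample_type` and its values are from the thread):

```yaml
dataset:
  # was: speaker
  sample_type: path
```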
Additionally, ensure that your `./training/{voice}/` folder contains `ckpt/{,n}ar-retnet-4` to finetune, or…
Currently guided only by presets: `quarter`, `half`, and `full`, here and in the YAML [here](https://git.ecker.tech/mrq/v…
Nothing's being trained at all anyways. Either the dataloaders didn't load anything (you can validate this when it prints out the symmap / speakers / sample counts / duration) or your batch size…
It looks like it's overriding my PyTorch DataParallel settings, etc., with whatever's being set by DeepSpeed.
Most likely. DeepSpeed handles whatever distributed training initialization it calls…
I can take a look when I get a moment to do a clean install under Windows with my 6800XT, but I feel like I vaguely recall that `PS` (PowerShell) has some weird oddities. Try running it under…
mmm...
Training on the RunPod rentals is paused for the time being. The improvements seem very marginal now, and I think I'm starting to hit a wall with how much continued training with a low LR…