Commit 27024a7b38 (a-One-Fan):

- Add an argument to use oneAPI when training
- Use it in the oneAPI startup
- Set an env var when doing so
- Initialize distributed training with ccl when doing so (see the sketch below)

Intel does not and will not support non-distributed training. I think that's a good decision. The message that training will happen with oneAPI gets printed twice.
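For context, a minimal sketch of what that ccl initialization can look like with PyTorch and Intel's oneCCL bindings. The `USE_ONEAPI` flag, the address/port defaults, and the printed message are illustrative assumptions, not this repo's actual code:

```python
# Minimal sketch: initialize torch.distributed with the oneCCL ("ccl") backend.
# Assumes the oneCCL bindings package (oneccl_bind_pt) is installed.
import os

import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 -- importing registers the "ccl" backend

# "USE_ONEAPI" is a hypothetical env var for illustration; the repo's actual
# argument / variable name may differ.
if os.environ.get("USE_ONEAPI", "0") == "1":
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(
        backend="ccl",
        rank=int(os.environ.get("RANK", "0")),
        world_size=int(os.environ.get("WORLD_SIZE", "1")),
    )
    print("Training will use oneAPI (ccl distributed backend).")
```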
bin
models
modules
results
src
training
voices
.dockerignore
.gitignore
.gitmodules
Dockerfile
LICENSE
notebook_colab.ipynb
notebook_paperspace.ipynb
README.md
requirements.txt
setup-cuda-bnb.bat
setup-cuda.bat
setup-cuda.sh
setup-directml.bat
setup-docker.sh
setup-oneapi.sh
setup-rocm-bnb.sh
setup-rocm.sh
start-docker.sh
start-oneapi.sh
start.bat
start.sh
train-docker.sh
train.bat
train.sh
update-force.bat
update-force.sh
update.bat
update.sh
AI Voice Cloning
This repo/rentry aims to serve both as a foolproof guide for setting up AI voice cloning tools for legitimate, local use on Windows/Linux, and as a stepping stone for anons who genuinely want to play around with TorToiSe.
As with my notes on Stable Diffusion image generation, this rentry may appear a little disheveled as I jot down new findings with TorToiSe. Please keep this in mind if the guide seems to shift a bit or sound confusing.
>Ugh... why bother when I can just abuse 11.AI?
You're more than welcome to, but TorToiSe is shaping up to be a very promising tool, especially with finetuning now on the horizon.
This is not endorsed by neonbjb. I do not expect this to run into any ethical issues, as it seems that, like me, most people are mostly here to make funny haha vidya characters say funny lines.
Documentation
Please consult the wiki for the documentation.
Bug Reporting
If you run into any problems, please refer to the "issues you may encounter" wiki page first.