forked from mrq/ai-voice-cloning
Commit 27024a7b38 by a-One-Fan
- Add an argument to use oneAPI when training
- Use it in the oneAPI startup
- Set an env var when doing so
- Initialize distributed training with ccl when doing so

Intel does not and will not support non-distributed training; I think that's a good decision.

Known issue: the message that training will happen with oneAPI gets printed twice.
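The steps above (a CLI flag, an env var, and a ccl-backend `init_process_group` call) can be sketched roughly as follows. This is a hedged illustration, not the commit's actual code: the flag name `--use-oneapi`, the helper names, and the env-var defaults are assumptions, and the Intel-specific import and init call are shown commented out since they require oneAPI to be installed.

```python
import argparse
import os


def build_args(argv=None):
    # Hypothetical flag name; the real argument name in the commit may differ.
    parser = argparse.ArgumentParser()
    parser.add_argument("--use-oneapi", action="store_true",
                        help="train with Intel oneAPI (distributed, ccl backend)")
    return parser.parse_args(argv)


def setup_oneapi(args):
    """Set up env vars and (in a real run) distributed training for oneAPI.

    Returns True when oneAPI setup was requested and performed.
    """
    if not args.use_oneapi:
        return False
    # Intel's stack only supports *distributed* training, so even a
    # single-GPU run goes through init_process_group with world_size=1.
    # Rendezvous address/port values here are illustrative defaults.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    # import oneccl_bindings_for_pytorch  # registers the "ccl" backend
    # import torch.distributed as dist
    # dist.init_process_group(backend="ccl", rank=0, world_size=1)
    print("Training with oneAPI")  # the commit notes this prints twice
    return True
```

Because the oneAPI startup script and the training entry point may both call a setup routine like this, the "training with oneAPI" message ends up printed twice, matching the known issue noted above.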
Changed files:
- list_devices.py
- main.py
- train.py
- utils.py
- webui.py