forked from mrq/tortoise-tts

install python3.9; wrapped a try/catch when parsing `args.listen`, in case you somehow manage to insert garbage into that field and fuck up your config; removed a very redundant `setup.py install` call, since that is only required if you're going to install it for use outside of the tortoise-tts folder

This commit is contained in:
parent ddd0c4ccf8
commit 94757f5b41

README.md (24 changed lines)
```diff
@@ -53,7 +53,8 @@ Outside of the very small prerequisites, everything needed to get TorToiSe worki
 ### Pre-Requirements
 
 Windows:
-* Python 3.9: https://www.python.org/downloads/release/python-3913/
+* ***Python 3.9***: https://www.python.org/downloads/release/python-3913/
+	- I cannot stress this hard enough. PyTorch under Windows requires a very specific version.
 * Git (optional): https://git-scm.com/download/win
 * CUDA drivers, if NVIDIA
```
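The Python 3.9 pin above is easy to trip over, since other interpreters sit happily on PATH. As a generic illustration (this helper is mine, not part of the repo), a setup script could verify the interpreter before installing anything:

```python
import sys

def is_supported_python(version_info=sys.version_info):
    """True only for a 3.9.x interpreter, the version the Windows
    PyTorch wheels linked above are known to work with."""
    return tuple(version_info[:2]) == (3, 9)

# Warn instead of silently letting pip resolve no torch wheel at all.
if not is_supported_python():
    print(f"Expected Python 3.9, found {sys.version.split()[0]} -- install the linked 3.9.13.")
```

This is exactly the failure behind the `Could not find a version that satisfies the requirement torch` pitfall described further down.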
```diff
@@ -78,6 +79,8 @@ Afterwards, run the setup script, depending on your GPU, to automatically set th
 If you've done everything right, you shouldn't have any errors.
 
+#####
+
 ##### Note on DirectML Support
 
 PyTorch-DirectML is very, very experimental and is still not production quality. There's some headaches with the need for hairy kludgy patches.
```
```diff
@@ -112,16 +115,29 @@ And you should be done!
 To check for updates, simply run `update.bat` (or `update.sh`). It should pull from the repo, as well as fetch for any new dependencies.
 
+If, for some reason, you manage to be quick on the trigger to update when I reverse a commit and you get an error trying to run the update script, run `update-force.bat` (or `update-force.sh`) to force an update.
+
 ### Pitfalls You May Encounter
 
 I'll try and make a list of "common" (or what I feel may be common that I experience) issues with getting TorToiSe set up:
 * `CUDA is NOT available for use.`: If you're on Linux, you failed to set up CUDA (if NVIDIA) or ROCm (if AMD). Please make sure you have these installed on your system.
-If you're on Windows with an AMD card, you're stuck out of luck, as ROCm is not available on Windows (without major hoops to be jumped). If you're on an NVIDIA GPU, then I'm not sure what went wrong.
+	- If you're on Windows with an AMD card, you're stuck out of luck, as ROCm is not available on Windows (without major hoops to be jumped). If you're on an NVIDIA GPU, then I'm not sure what went wrong.
 * `failed reading zip archive: failed finding central directory`: You had a file fail to download completely during the model downloading initialization phase. Please open either `.\models\tortoise\` or `.\models\transformers\`, and delete the offending file.
-You can deduce what that file is by reading the stack trace. A few lines above the last line will be a line trying to read a model path.
+	- You can deduce what that file is by reading the stack trace. A few lines above the last line will be a line trying to read a model path.
 * `torch.cuda.OutOfMemoryError: CUDA out of memory.`: You most likely have a GPU with low VRAM (~4GiB), and the small optimizations with keeping data on the GPU are enough to OOM. Please open the `start.bat` file and add `--low-vram` to the command (for example: `py app.py --low-vram`) to disable those small optimizations.
 * `WavFileWarning: Chunk (non-data) not understood, skipping it.`: something about your WAVs is funny, and it's best to remux your audio files with FFMPEG (included batch file in `.\convert\`).
-Honestly, I don't know if this does impact output quality, as I feel it's placebo when I do try and correct this.
+	- Honestly, I don't know if this does impact output quality, as I feel it's placebo when I do try and correct this.
+
+#### Non-"""Issues"""
+
+I hate to be a hardass over it, but below are some errors that come from not following my instructions:
+* `Could not find a version that satisfies the requirement torch (from versions: none)`: you are using an incorrect version of python. Please install the linked python3.9.
+* `Failed to import soundfile. 'soundfile' backend is not available.`: you are most likely using conda (or miniconda), an environment I do not support anymore due to bloat. Please install the linked python3.9, or try [this](https://github.com/librosa/librosa/issues/1117#issuecomment-907853797).
+	- I used to have a setup script using conda as an environment, but it's bloat and a headache to use, so I can't keep it around.
+* `No hardware acceleration is available, falling back to CPU...`: you do not have a CUDA runtime/drivers installed. Please install them.
+	- I do not have a link for it, as it literally worked on my machine with the basic drivers for my 2060.
+* [a silent crash during generating samples with DirectML](https://git.ecker.tech/mrq/tortoise-tts/attachments/8d25ca63-d72b-4448-9483-d97cfe8eb677): install python3.9.
+	- I'm not too sure why this is so, but it works for me under 3.9, but not 3.10.
 
 ## Preparing Voice Samples
```
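On the `failed reading zip archive: failed finding central directory` pitfall: serialized PyTorch checkpoints are zip archives, so one way to spot the truncated download without reading the stack trace is to test each model file with the stdlib `zipfile` module. This is a sketch of my own, not repo code, and the folder layout is an assumption:

```python
import zipfile
from pathlib import Path

def find_truncated_models(folder):
    """Return names of files whose zip end-of-central-directory record
    can't be found, i.e. downloads that were cut off partway.
    Note: legitimate non-zip files (configs, vocab files) are also
    flagged, so only delete files an error actually points at."""
    bad = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file() and not zipfile.is_zipfile(path):
            bad.append(path.name)
    return bad
```

Run it against `.\models\tortoise\` and `.\models\transformers\`, delete the flagged checkpoint, and let the next launch re-download it.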
```diff
@@ -3,6 +3,5 @@ call .\tortoise-venv\Scripts\activate.bat
 python -m pip install --upgrade pip
 python -m pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
 python -m pip install -r ./requirements.txt
-python setup.py install
 deactivate
 pause
```
setup-cuda.sh (1 changed line; Normal file → Executable file)

```diff
@@ -4,5 +4,4 @@ python -m pip install --upgrade pip
 # CUDA
 pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
 python -m pip install -r ./requirements.txt
-python setup.py install
 deactivate
```
```diff
@@ -3,6 +3,5 @@ call .\tortoise-venv\Scripts\activate.bat
 python -m pip install --upgrade pip
 python -m pip install torch torchvision torchaudio torch-directml==0.1.13.1.dev230119
 python -m pip install -r ./requirements.txt
-python setup.py install
 deactivate
 pause
```
setup-rocm.sh (1 changed line; Normal file → Executable file)

```diff
@@ -4,5 +4,4 @@ python -m pip install --upgrade pip
 # ROCM
 pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.1.1 # 5.2 does not work for me desu
 python -m pip install -r ./requirements.txt
-python setup.py install
 deactivate
```
webui.py (3 changed lines)
```diff
@@ -526,11 +526,14 @@ def setup_args():
     args.listen_port = None
     args.listen_path = None
     if args.listen:
+        try:
             match = re.findall(r"^(?:(.+?):(\d+))?(\/.+?)?$", args.listen)[0]
 
             args.listen_host = match[0] if match[0] != "" else "127.0.0.1"
             args.listen_port = match[1] if match[1] != "" else None
             args.listen_path = match[2] if match[2] != "" else "/"
+        except Exception as e:
+            pass
 
     if args.listen_port is not None:
         args.listen_port = int(args.listen_port)
```
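The guarded parse can be sanity-checked in isolation. A small sketch (the `parse_listen` helper name is mine, not the repo's) of how `--listen` values decompose with that regex, and how the new try/catch absorbs garbage input:

```python
import re

# Same pattern the commit wraps in a try/catch: host, port, and root
# path are each optional in the --listen string.
LISTEN_RE = r"^(?:(.+?):(\d+))?(\/.+?)?$"

def parse_listen(listen):
    """Mimics webui.py's parsing: fall back to defaults rather than
    crash when the config field holds something unparsable."""
    host, port, path = "127.0.0.1", None, "/"
    try:
        match = re.findall(LISTEN_RE, listen)[0]
        host = match[0] if match[0] != "" else "127.0.0.1"
        port = match[1] if match[1] != "" else None
        path = match[2] if match[2] != "" else "/"
    except Exception:
        pass  # garbage (e.g. a non-string value) no longer aborts startup
    return host, port, path

print(parse_listen("0.0.0.0:8000/gradio"))  # ('0.0.0.0', '8000', '/gradio')
print(parse_listen("/gradio"))              # ('127.0.0.1', None, '/gradio')
print(parse_listen(None))                   # ('127.0.0.1', None, '/')
```

The last call is the case the commit fixes: a non-string value raises inside `re.findall`, and the except now turns that into the defaults instead of a crash.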