ImportError: DLL load failed while importing torch_directml_native: The specified process was not found #260
Hi guys, I am trying to run this project on Windows 11 22H2 with DirectML and an AMD RX 6700 XT.
(venv) C:\Users\NoeXVanitasXJunk\ai-voice-cloning>start.bat
(venv) C:\Users\NoeXVanitasXJunk\ai-voice-cloning>python --version
Python 3.9.13
Note after edit: I tried deleting the venv and reinstalling with Python 3.10.6, but I get the same error :/
Thank you very much! Please help me, I really want to use this.
Did you run setup-directml.bat?
Yes bro, I've temporarily solved it using Python 3.10.10 and installing:
And now I can download all the models, but then I hit an error when trying to run do_tts.py.
I tried searching about this problem but didn't find a solution, so what should I do, bro? I can't use any TTS with my AMD GPU, and I want to use it with my GPU. I tried to port "https://github.com/suno-ai/bark" to work with DirectML but couldn't get it working xD
There was either a problem with copy/paste or you're missing a quotation mark:
Oh no bro, sorry for the confusion, it's a visual bug in my terminal on Windows.
My input is:
python .\modules\tortoise-tts\tortoise\do_tts.py --text "Esto es una pruebita no me jodas please" --voice random --preset fast
Output:
Note: I've tested using CMD with the .bat and Terminal with the .ps1; don't worry about the visual bug, I really get the same error in both cases.
RuntimeError: new(): expected key in DispatchKeySet(CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, Lazy, Meta) but got: PrivateUse1
Have you also tried with the web UI? The discussion from #242 seems to imply that CLI generation isn't fully implemented.
Ohh nope, but I get the same result now: Uwu

I've never run into that error, but I use GeForce+CUDA so I can't be much help from here. All I can suggest is to try on Linux (even WSL through Windows); according to the Wiki, DirectML support is iffy at best.
Understood, thank you very much bro. If you need a tester to help fix it, I could help!!
You might need to use either an older version of torch-directml or transformers. Unfortunately I don't have my previous DirectML venv around anymore, but you can start with the following (sourced from the frozen requirements.txt of 152334H/DL-Art-School):
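(The actual pins from that requirements.txt aren't reproduced here. Purely as an illustrative sketch, downgrading the two packages inside the venv would look something like the lines below; the version bounds are placeholders, not the real pins.)
# placeholder bounds for illustration only, not the pins from DL-Art-School's requirements.txt
pip install "torch-directml<0.2" "transformers<4.26"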
It's a mess, but I fixed the error by changing the PyTorch source code and rebuilding from version 2.0. You just have to add "PrivateUse1" to the DispatchKeySet. But now I get another error. I feel like this was not really tested on non-CUDA hardware.
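(A quick smoke test of the DML device, assuming the torch-directml package is installed, can confirm the rebuilt wheel picked up the change before debugging the next error; this one-liner is just an illustration, not part of the fix described above.)
# hypothetical check: moves a tensor to the DirectML (PrivateUse1) device and reduces it there
python -c "import torch, torch_directml; d = torch_directml.device(); print(torch.ones(2, 3).to(d).sum().item())"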
IIRC it's been tested extensively on AMD graphics cards but with ROCm (Linux), not DirectML (Windows), as noted on the Wiki:
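(For reference, and not the Wiki text: on Linux the ROCm build of PyTorch is installed from the ROCm wheel index instead of the default one, along the lines of the command below; the rocm tag in the URL changes with each PyTorch release.)
# example only; check pytorch.org for the ROCm tag that matches the current release
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6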