Must use Low VRAM setting or python crashes #26
Computer specs:
OS: Windows 10 21H1
Python version: Python 3.9.13
CPU: AMD 5900x
GPU: AMD 6900 XT
RAM: 32 GB
I was messing with some settings, and for whatever reason, if I do not enable "Low VRAM" in the settings menu, Python crashes as soon as it enters "Generating autoregressive samples". My graphics card has quite a lot of VRAM, so I really don't think this should be a matter of running OOM.
It produces a blank "input_.json" in the .\results folder but nothing else.
Once one of these Python crashes happens, I can no longer generate audio even if I re-check "Low VRAM", unless I reboot the computer. Doing a display driver reset with the key combination CTRL+WIN+SHIFT+B does not fix the issue either.
On top of all this, when it crashes after a change has been made in the settings menu, there is a chance it will enable the "public share" setting. There are no errors in the console. It merely shows a Windows crash error, then gives the terminal prompt back to me.
Any ideas?
How odd, I didn't even realize I had the "Low VRAM" setting checked myself; unchecking it, it's doing the same odd crash again, like on python3.10. I think I need to rename that setting since it's a bit of a misnomer, as it originally existed to disable optimizations that would push GPUs with little VRAM over their capacity. I assume the problem is my naive approach of moving tensors back and forth between the CPU and GPU, since most of what "Low VRAM" does is dictate what gets moved where and when, trading speed for lower VRAM consumption. I'll see if it fixes it.
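The CPU-to-GPU shuffling described above can be sketched roughly as follows. This is a minimal illustration of the general technique, not the project's actual code; the helper name `low_vram_generate` and the tiny stand-in model are hypothetical, and it falls back to CPU when no CUDA device is present:

```python
import torch

def low_vram_generate(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Move the model onto the accelerator only for the forward pass,
    then immediately move it back to system RAM to free VRAM."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    with torch.no_grad():
        out = model(x.to(device)).cpu()
    model.to("cpu")            # evict weights from VRAM between stages
    if device == "cuda":
        torch.cuda.empty_cache()
    return out

# usage with a tiny stand-in model
net = torch.nn.Linear(4, 2)
y = low_vram_generate(net, torch.randn(3, 4))
print(tuple(y.shape))
```

The cost is the repeated host-to-device transfer on every stage, which is exactly the speed/VRAM trade-off mentioned above.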
Oh, it was actually the use of kv_cache-ing that would cause it to crash, which gets disabled when using "Low VRAM". I'll push a fix out once I validate that my (rather unnecessary) changes didn't break anything with CUDA as a backend. I wonder if that means you were fine to use python3.10 the entire time, since my py3.10 test environment used clean settings.
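For context, a key/value cache speeds up autoregressive sampling by storing each step's attention keys and values so the prefix never has to be recomputed, at the cost of extra (V)RAM. The toy class below illustrates the idea in plain NumPy; it is a conceptual sketch, not TorToiSe's or the DirectML backend's implementation:

```python
import numpy as np

class KVCache:
    """Toy single-head KV cache: each decode step appends its key/value
    and attends over the whole cache instead of re-running the prefix."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, k, v, q):
        self.keys.append(k)
        self.values.append(v)
        K = np.stack(self.keys)            # (t, d) cached keys
        V = np.stack(self.values)          # (t, d) cached values
        scores = K @ q / np.sqrt(q.size)   # scaled dot-product scores, (t,)
        w = np.exp(scores - scores.max())  # numerically stable softmax
        w /= w.sum()
        return w @ V                       # attention-weighted mix, (d,)

# usage: three decode steps, cache grows by one entry per step
cache = KVCache()
rng = np.random.default_rng(0)
for _ in range(3):
    k, v, q = rng.normal(size=(3, 8))
    out = cache.step(k, v, q)
print(len(cache.keys), out.shape)
```

Disabling the cache (as "Low VRAM" does) means recomputing attention over the full prefix at every step, which is slower but keeps memory flat.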
Alrighty, it should be fixed in commit a2d95fe208. Apologies for the inconvenience, as I could have sworn I didn't have "Low VRAM" checked when jamming DirectML in. You should be able to leave "Low VRAM" unchecked, although it doesn't really matter all that much, since at this point it's merely a hint for DirectML rather than a guarantee.
Yes, all my tests so far were done in a py3.10 env without problems (low vram enabled).
Right now I am experiencing a problem with creating conditioning latents. See #27.
I have just updated using the upgrade script, and I can confirm that this is now resolved, thank you for your quick support once again.