2f7d9ab932 (2023-04-29 00:38:18 +00:00)
disable BNB for inferencing by default because I'm pretty sure it makes zero difference (it can be force-enabled with env vars if you're relying on this for some reason)
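
The BNB change above implies an environment-variable escape hatch: off by default, opt back in explicitly. A minimal sketch of such a gate, assuming a hypothetical `FORCE_BNB` variable (the project's actual flag name is not shown in the log):

```python
import os

def use_bnb_for_inference() -> bool:
    """Return True only if the user explicitly opts back into BNB for inference.

    FORCE_BNB is a hypothetical variable name for illustration; anything other
    than an explicit truthy value leaves BNB disabled.
    """
    return os.environ.get("FORCE_BNB", "0").strip().lower() in ("1", "true", "yes")
```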

eea4c68edc (aJoe, 2023-04-12 05:33:30 +00:00)
Update tortoise/utils/devices.py to fix a VRAM issue: added line 85 to set the name variable, which was 'None' and caused VRAM to be reported incorrectly

97cd58e7eb (2023-03-12 12:48:29 -05:00)
maybe solved that odd VRAM spike when doing the CLVP pass

fec0685405 (2023-03-10 00:56:29 +00:00)
revert muh clean code

00be48670b (2023-03-09 02:06:44 +00:00)
i am very smart

bbeee40ab3 (2023-03-09 00:51:13 +00:00)
forgot to convert to gigabytes
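
The "forgot to convert to gigabytes" fix presumably boils down to dividing a raw byte count (as returned by a VRAM query) by 1024^3 before display. A minimal sketch; the function name is hypothetical:

```python
def bytes_to_gib(n_bytes: int) -> float:
    """Convert a raw byte count (e.g. total VRAM reported by the driver) to gibibytes."""
    return n_bytes / (1024 ** 3)
```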

6410df569b (2023-03-09 00:38:31 +00:00)
expose VRAM easily

3dd5cad324 (2023-03-07 19:38:02 +00:00)
reverting additional auto-suggested batch sizes, per mrq/ai-voice-cloning#87 proving it is, in fact, not a good idea

cc36c0997c (2023-03-07 15:43:09 +00:00)
didn't get a chance to commit this this morning

d159346572 (2023-02-16 13:23:07 +00:00)
oops

eca61af016 (2023-02-16 01:06:32 +00:00)
actually fixed incrementing filenames for real; the old regex only worked when candidates or lines > 1. CUDA now takes priority over DML if you're a nut with both of them installed, since you can just specify an override anyway
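
The filename bug above (a pattern that only matched with more than one candidate) is the kind a regex over the full name avoids. A sketch of incrementing output filenames that also works for the single-result case; the naming scheme and extension here are illustrative, not the project's actual convention:

```python
import re

def next_filename(existing: list[str], stem: str, ext: str = ".wav") -> str:
    """Pick the next free index among names like stem_0.wav, stem_1.wav, ...

    Matches the whole name, so a lone existing file increments correctly
    (the described bug was a regex that only worked for >1 candidates/lines).
    """
    pattern = re.compile(re.escape(stem) + r"_(\d+)" + re.escape(ext) + r"$")
    indices = [int(m.group(1)) for name in existing if (m := pattern.match(name))]
    return f"{stem}_{(max(indices) + 1) if indices else 0}{ext}"
```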

ec80ca632b (2023-02-15 21:51:22 +00:00)
added setting "device-override", less naively decide the number to use for results, and some other things

729be135ef (2023-02-09 20:42:38 +00:00)
Added option: listen path

3f8302a680 (2023-02-09 05:05:21 +00:00)
I didn't have to suck off a wizard for DirectML support (courtesy of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7600 for leading the way)

b23d6b4b4c (2023-02-09 01:53:25 +00:00)
owari da... ("it's over...")