mrq/vall-e: vall_e/models

Latest commit 0d706ec6a1 by mrq (2024-08-26 19:13:34 -05:00): "added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm"
| File | Last commit | Date |
| --- | --- | --- |
| `arch/` | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm | 2024-08-26 19:13:34 -05:00 |
| `__init__.py` | sanity cleanup: moved experimental features under its own thing | 2024-06-30 10:37:33 -05:00 |
| `ar_nar.py` | ughghghhhh | 2024-08-09 21:15:01 -05:00 |
| `ar.py` | fix issue with sft and shared tensors... | 2024-08-04 19:56:21 -05:00 |
| `base.py` | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm (availability sketch below) | 2024-08-26 19:13:34 -05:00 |
| `experimental.py` | ughghghhhh | 2024-08-09 21:15:01 -05:00 |
| `lora.py` | naive model offloading support (handles automatically splitting parts of the model to requested device per memory constraints, either inferred or requested in the yaml, input tensors are automatically migrated to the right device, it SEEMS to work for training under the test trainer when split between GPU and CPU) (this was specifically only because that Flux imagegen model released so I can test it there) (offloading sketch below) | 2024-08-01 20:12:06 -05:00 |
| `nar.py` | changed torch.Tensor().to(device, dtype) to just torch.tensor(..., device, dtype) because it's been bothering my autism that I'm creating tensors then converting rather than creating with the right device/dtype, some 'optimization' to compile the model but it doesnt seem to do anything useful (tensor-creation sketch below) | 2024-08-03 22:10:21 -05:00 |
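Two entries (`arch/` and `base.py`) share the fused_attn commit, whose message says the code should "simply just query for flash_attn under rocm". A minimal sketch of what such a backend query might look like; the helper name `pick_attention_backend` and the `"sdpa"` fallback are assumptions for illustration, not names from the repo:

```python
import importlib.util

import torch


def pick_attention_backend() -> str:
    """Hypothetical helper: choose an attention implementation.

    Prefers flash_attn when the package is importable; under ROCm
    (torch.version.hip is set) this is a plain availability query
    rather than any CUDA-specific version check.
    """
    has_flash_attn = importlib.util.find_spec("flash_attn") is not None
    if torch.version.hip is not None:
        # ROCm build of PyTorch: simply query whether flash_attn is installed.
        return "flash_attn" if has_flash_attn else "sdpa"
    if torch.cuda.is_available() and has_flash_attn:
        return "flash_attn"
    # Fall back to PyTorch's built-in scaled_dot_product_attention.
    return "sdpa"
```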
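The `lora.py` commit describes naive model offloading: splitting parts of the model across devices under memory constraints and automatically migrating input tensors to the right device. A rough sketch of that idea under stated assumptions; the function name `naive_offload` and the explicit `budgets` dict are hypothetical (per the commit message, the actual code infers constraints or reads them from the yaml):

```python
import torch
from torch import nn


def naive_offload(model: nn.Module, budgets: dict[str, float]) -> nn.Module:
    """Hypothetical sketch: assign each top-level child module to a device
    until that device's byte budget is exhausted, then spill to the next.
    A forward pre-hook migrates incoming tensors to each child's device.
    """
    devices = list(budgets.keys())  # e.g. ["cuda:0", "cpu"]
    idx, used = 0, 0.0
    for child in model.children():
        # Rough size estimate: parameter bytes of this submodule.
        size = sum(p.numel() * p.element_size() for p in child.parameters())
        if used + size > budgets[devices[idx]] and idx + 1 < len(devices):
            idx, used = idx + 1, 0.0  # spill to the next device
        device = torch.device(devices[idx])
        child.to(device)
        used += size

        def migrate(module, args, device=device):
            # Move incoming tensors to this submodule's device.
            return tuple(a.to(device) if torch.is_tensor(a) else a for a in args)

        child.register_forward_pre_hook(migrate)
    return model
```

This only walks top-level children and ignores activation memory, which is roughly what "naive" promises: the last device absorbs whatever does not fit earlier.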
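The `nar.py` commit message spells out its change exactly: tensors are now created with the right device and dtype up front instead of being created and then converted. A minimal illustration of the two forms (the values are placeholders):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Before: allocate with the default device/dtype, then convert in a second step.
t = torch.Tensor([0, 1, 2]).to(device, torch.int64)

# After: allocate once, directly with the right device and dtype.
t = torch.tensor([0, 1, 2], device=device, dtype=torch.int64)
```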