mrq/vall-e: vall_e/models

Latest commit 0d706ec6a1 by mrq (2024-08-26 19:13:34 -05:00):
"added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm"
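The latest commit concerns how the attention backend is picked. As a rough illustration only (none of the names below are taken from the repo), backend selection along the lines the commit message describes might look like this: probe for flash_attn by simply trying to import it (which also covers ROCm builds), fall back to a Triton-based fused kernel, and finally to PyTorch's built-in scaled_dot_product_attention.

```python
import torch

def pick_attention_backend() -> str:
    """Hypothetical helper, not the repo's actual code: choose an attention backend."""
    try:
        import flash_attn  # noqa: F401  # simply query for flash_attn; works under CUDA and ROCm builds
        return "flash_attn"
    except ImportError:
        pass
    try:
        import triton  # noqa: F401  # a Triton-based fused attention kernel could be used here
        return "fused_attn"
    except ImportError:
        pass
    return "sdpa"  # fall back to torch.nn.functional.scaled_dot_product_attention

print(pick_attention_backend())
```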
Files:

arch/
    added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm (2024-08-26 19:13:34 -05:00)
__init__.py
ar_nar.py
    ughghghhhh (2024-08-09 21:15:01 -05:00)
ar.py
    fix issue with sft and shared tensors... (2024-08-04 19:56:21 -05:00)
base.py
    added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm (2024-08-26 19:13:34 -05:00)
experimental.py
    ughghghhhh (2024-08-09 21:15:01 -05:00)
lora.py
nar.py
    changed torch.Tensor().to(device, dtype) to just torch.tensor(..., device, dtype) because it's been bothering my autism that I'm creating tensors then converting rather than creating with the right device/dtype, some 'optimization' to compile the model but it doesnt seem to do anything useful (2024-08-03 22:10:21 -05:00)
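The nar.py commit message describes replacing tensor creation followed by a device/dtype conversion with direct construction on the target device. A minimal sketch of that pattern (the variable names here are illustrative, not from the repo):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16

# before: build a float32 CPU tensor, then move it to the device and cast it
lengths = torch.Tensor([1, 2, 3]).to(device, dtype)

# after: construct directly with the right device and dtype,
# avoiding the intermediate allocation and copy
lengths = torch.tensor([1, 2, 3], device=device, dtype=dtype)
```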