Repository: ecker/vall-e
Path: vall-e / vall_e / models / arch (at commit 8b6095f681)
Latest commit: 84005c5b00 by mrq, "entropix apparently processes the entire sequence of logits but it falls apart when doing that" (2024-10-13 12:01:12 -05:00)
Directory listing (name · last commit message · date):

attention/ · ugh · 2024-08-30 14:39:07 -05:00
mamba_vasqu/ · mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (training with my 4xV100s are not happy with mamba2 because of triton) · 2024-06-14 19:42:17 -05:00
retnet_syncdoth/ · cleanup · 2024-06-05 20:30:43 -05:00
__init__.py · added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm · 2024-08-26 19:13:34 -05:00
bitnet.py · re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once) · 2024-06-06 09:48:43 -05:00
llama.py · entropix apparently processes the entire sequence of logits but it falls apart when doing that · 2024-10-13 12:01:12 -05:00
mamba.py · ughghghhhh · 2024-08-09 21:15:01 -05:00
mixtral.py · fixed attentions for MoE · 2024-08-27 17:02:42 -05:00
retnet.py · cleanup · 2024-06-05 20:30:43 -05:00
transformer.py · cleanup · 2024-06-05 20:30:43 -05:00