mrq/vall-e: vall_e/models/arch
Latest commit 40e1799adc by mrq: "fixed xformers and flash_attn to actually work now" (2024-08-19 01:03:35 -05:00)
mamba_vasqu/      "mamba2-hf using vasqu/mamba2-torch, because it lets me use mamba2 without triton ops (my 4xV100s are not happy training mamba2 because of triton)" (2024-06-14 19:42:17 -05:00); see the mamba2 backend sketch after this listing
retnet_syncdoth/
__init__.py       "add adapted MixtralAttention for when I make a bad decision to actually train a MoE" (2024-08-04 22:03:22 -05:00)
bitnet.py         "re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once)" (2024-06-06 09:48:43 -05:00)
llama.py          "fixed xformers and flash_attn to actually work now" (2024-08-19 01:03:35 -05:00); see the attention backend sketch after this listing
mamba.py          "ughghghhhh" (2024-08-09 21:15:01 -05:00)
mixtral.py        "added flash_attn LlamaAttention (including flash_attn==1.0.9)" (2024-08-18 20:51:14 -05:00)
retnet.py
transformer.py
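The mamba_vasqu entry explains the choice of vasqu/mamba2-torch: it provides mamba2 without triton kernels, which the 4xV100 training setup could not handle. A minimal sketch of that kind of backend selection, assuming the module names "triton", "mamba_ssm", and "mamba2_torch"; the helper is illustrative and not code from this repository:

```python
# Illustrative backend check (assumed helper, not from vall-e): prefer the
# official triton-backed mamba_ssm package, and fall back to a pure-PyTorch
# mamba2 implementation (e.g. vasqu/mamba2-torch) when triton is unusable.
import importlib.util

def pick_mamba2_backend() -> str:
    has_triton = importlib.util.find_spec("triton") is not None
    has_mamba_ssm = importlib.util.find_spec("mamba_ssm") is not None
    if has_triton and has_mamba_ssm:
        return "mamba_ssm"      # official kernels; require triton
    # "mamba2_torch" is an assumed module name for the pure-torch fallback
    return "mamba2_torch"
```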
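The llama.py and mixtral.py entries reference several attention backends (flash_attn, xformers). A minimal sketch of the usual fallback pattern they imply, not the repository's actual implementation; the `attend` helper, its shape convention, and the use of the flash_attn 2.x API are assumptions:

```python
# Illustrative attention dispatch (assumed helper, not vall-e's actual code):
# use flash_attn when installed, then xformers, then PyTorch SDPA.
# q, k, v are (batch, seq_len, n_heads, head_dim).
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # flash_attn >= 2.x API
    HAS_FLASH = True
except ImportError:
    HAS_FLASH = False

try:
    import xformers.ops as xops
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False

def attend(q, k, v, causal: bool = True):
    if HAS_FLASH and q.is_cuda:
        # flash_attn expects (batch, seq, heads, dim) CUDA tensors in fp16/bf16
        return flash_attn_func(q, k, v, causal=causal)
    if HAS_XFORMERS and q.is_cuda:
        bias = xops.LowerTriangularMask() if causal else None
        return xops.memory_efficient_attention(q, k, v, attn_bias=bias)
    # portable fallback: SDPA wants (batch, heads, seq, dim), so transpose in and out
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=causal
    )
    return out.transpose(1, 2)
```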