vall-e/vall_e/models/arch
Latest commit: 2024-08-06 20:42:39 -05:00
| Name | Last commit | Date |
| --- | --- | --- |
| mamba_vasqu/ | mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (my 4xV100s aren't happy training mamba2 because of triton) | 2024-06-14 19:42:17 -05:00 |
| retnet_syncdoth/ | cleanup | 2024-06-05 20:30:43 -05:00 |
| __init__.py | add adapted MixtralAttention for when I make a bad decision to actually train a MoE | 2024-08-04 22:03:22 -05:00 |
| bitnet.py | re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once) | 2024-06-06 09:48:43 -05:00 |
| llama.py | do not include SDPA attention if there are no available SDPA backends (see the sketch below) | 2024-08-06 20:42:39 -05:00 |
| mamba.py | re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once) | 2024-06-06 09:48:43 -05:00 |
| mixtral.py | add adapted MixtralAttention for when I make a bad decision to actually train a MoE | 2024-08-04 22:03:22 -05:00 |
| retnet.py | cleanup | 2024-06-05 20:30:43 -05:00 |
| transformer.py | cleanup | 2024-06-05 20:30:43 -05:00 |
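The llama.py entry describes gating the "sdpa" attention option on whether PyTorch actually has a usable SDPA backend. Below is a minimal sketch of such a check, assuming PyTorch 2.x; the `available_attentions` function name and the exact gating logic are illustrative assumptions, not the repo's actual implementation.

```python
# Hypothetical sketch of the availability check implied by llama.py's
# commit: only advertise "sdpa" as an attention option when torch exposes
# scaled_dot_product_attention and at least one SDPA backend is enabled.
import torch
import torch.nn.functional as F

def available_attentions() -> list[str]:
    attentions = ["eager"]  # plain matmul attention always works
    if hasattr(F, "scaled_dot_product_attention"):
        # On CUDA builds, torch exposes per-backend toggles; any enabled
        # backend (flash, memory-efficient, or math) makes SDPA usable.
        # On CPU-only builds, the math fallback is always available.
        if not torch.cuda.is_available() or any([
            torch.backends.cuda.flash_sdp_enabled(),
            torch.backends.cuda.mem_efficient_sdp_enabled(),
            torch.backends.cuda.math_sdp_enabled(),
        ]):
            attentions.append("sdpa")
    return attentions

print(available_attentions())
```

A check like this avoids registering an attention implementation that would only fail at dispatch time, which matters on older GPUs (such as the V100s mentioned above) where flash attention kernels are unavailable.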