vall-e/vall_e/models/arch
Last commit: 2024-11-03 19:19:15 -06:00
Name               Last commit                 Last commit message
attention/         2024-08-30 14:39:07 -05:00  ugh
mamba_vasqu/       2024-06-14 19:42:17 -05:00  mamba2-hf using vasqu/mamba2-torch because it lets me use mamba2 without triton ops (training with my 4xV100s are not happy with mamba2 because of triton)
retnet_syncdoth/   2024-06-05 20:30:43 -05:00  cleanup
__init__.py        2024-10-30 20:05:45 -05:00  layer skip training implemented (need to gut the inferencing from the repo, and to actually see if the model can benefit from this)
bitnet.py          2024-06-06 09:48:43 -05:00  re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once)
llama.py           2024-11-03 19:19:15 -06:00  Windows specific fixes (to-do: find libespeak-ng.dll automatically because it cannot be trusted to do it by default)
mamba.py           2024-08-09 21:15:01 -05:00  ughghghhhh
mixtral.py         2024-08-27 17:02:42 -05:00  fixed attentions for MoE
retnet.py          2024-06-05 20:30:43 -05:00  cleanup
transformer.py     2024-06-05 20:30:43 -05:00  cleanup