vall-e/vall_e/models/arch
attention       "ugh" (2024-08-30 14:39:07 -05:00)
__init__.py     "cleanup" (2024-11-21 23:08:43 -06:00)
bitnet.py       "re-added loading multiple models because I'm now entertaining having split AR/NAR models again (and need a way to load both at once)" (2024-06-06 09:48:43 -05:00)
llama.py        "huge oversight in the attention masking......... (i realized I have not been providing a non-causal mask to non-causal tasks)" (2024-11-22 13:44:43 -06:00)
mamba.py        "temporarily dropping support for xformers because it's breaking when using an attention mask (which i dont remember commenting it out when being passed), default to not use wandb because it's being a pain when doing tests and not actual sessions" (2024-11-22 11:29:12 -06:00)
mixtral.py      "fixed attentions for MoE" (2024-08-27 17:02:42 -05:00)
retnet.py       "cleanup" (2024-06-05 20:30:43 -05:00)
transformer.py  "cleanup" (2024-06-05 20:30:43 -05:00)