vall-e/vall_e/models/arch
attention/          ugh  (2024-08-30 14:39:07 -05:00)
mamba_vasqu/
retnet_syncdoth/
__init__.py         added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm  (2024-08-26 19:13:34 -05:00)
bitnet.py
llama.py            ugh  (2024-08-30 10:46:26 -05:00)
mamba.py
mixtral.py          fixed attentions for MoE  (2024-08-27 17:02:42 -05:00)
retnet.py
transformer.py
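
The commit note on __init__.py above (a triton-based fused_attn, plus "simply just query for flash_attn under rocm") implies some kind of runtime backend query. Below is a minimal, hypothetical sketch of such a check, not the repository's actual code; the function name available_attentions and the backend labels "sdpa" / "flash_attn" are assumptions made for illustration.

```python
# Hypothetical sketch only: not the repository's __init__.py.
import importlib.util

import torch


def available_attentions() -> list[str]:
    """Return attention backends that appear usable in this environment."""
    # torch.nn.functional.scaled_dot_product_attention ships with torch itself.
    backends = ["sdpa"]

    if torch.version.hip is not None:
        # ROCm build of torch: simply query whether flash_attn is importable,
        # rather than probing device compute capability.
        if importlib.util.find_spec("flash_attn") is not None:
            backends.append("flash_attn")
    elif importlib.util.find_spec("flash_attn") is not None:
        # CUDA (or CPU) build: the same import check as a first approximation.
        backends.append("flash_attn")

    return backends


if __name__ == "__main__":
    print("usable attention backends:", available_attentions())
```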