vall-e/vall_e/models/arch
Last commit: 2024-08-29 13:27:16 -05:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention/ | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm | 2024-08-26 19:13:34 -05:00 |
| mamba_vasqu/ | | |
| retnet_syncdoth/ | | |
| __init__.py | added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm | 2024-08-26 19:13:34 -05:00 |
| bitnet.py | | |
| llama.py | moved prints to use logger, edited readme (fused_attn doesnt seem stable for training) | 2024-08-29 13:27:16 -05:00 |
| mamba.py | ughghghhhh | 2024-08-09 21:15:01 -05:00 |
| mixtral.py | fixed attentions for MoE | 2024-08-27 17:02:42 -05:00 |
| retnet.py | | |
| transformer.py | | |
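The commit notes for attention/ and __init__.py mention querying for flash_attn under ROCm. As a rough illustration only (this is not the repo's code), here is a minimal sketch of how such a probe could look, assuming PyTorch is installed; the function names `flash_attn_available` and `is_rocm` are hypothetical:

```python
# Hypothetical sketch, not taken from vall-e: probe whether the
# flash_attn package is importable and whether the running PyTorch
# build targets ROCm (HIP) rather than CUDA.
import importlib.util

import torch


def flash_attn_available() -> bool:
    # find_spec returns None when the package is not installed,
    # without actually importing (and possibly crashing on) it.
    return importlib.util.find_spec("flash_attn") is not None


def is_rocm() -> bool:
    # torch.version.hip is a version string on ROCm builds of
    # PyTorch and None on CUDA builds.
    return getattr(torch.version, "hip", None) is not None


if __name__ == "__main__":
    print(f"ROCm build: {is_rocm()}")
    print(f"flash_attn importable: {flash_attn_available()}")
```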