vall-e/vall_e/models
arch             added fused_attn (triton-based fused attention) and simply just query for flash_attn under rocm       2024-08-26 19:13:34 -05:00
__init__.py      added ability to specify attention backend for CLI and webui (because im tired of editing the yaml)   2024-08-26 19:33:51 -05:00
ar_nar.py        ughghghhhh                                                                                            2024-08-09 21:15:01 -05:00
ar.py
base.py          added ability to specify attention backend for CLI and webui (because im tired of editing the yaml)   2024-08-26 19:33:51 -05:00
experimental.py  ughghghhhh                                                                                            2024-08-09 21:15:01 -05:00
lora.py
nar.py