vall-e/vall_e/models/arch
attention       wow that was fast  (2025-03-04 23:17:18 -06:00)
__init__.py     decoupled llama backend to avoid any funny changes from transformers, removed other backends since I don't think I'll ever bother using them  (2025-02-27 19:00:37 -06:00)
llama.py        add segmented sliding attention, also found a bug with prom-less segments in the attention mask generation  (2025-03-21 19:05:49 -05:00)
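The llama.py entry mentions "segmented sliding attention" and a mask-generation bug for prom-less segments. The repository's actual implementation is not reproduced here; as a rough illustration only (the helper name, the segment_lengths input, and the exact windowing rule are assumptions), a segment-restricted sliding-window causal mask could be built along these lines:

```python
import torch

def segmented_sliding_mask(segment_lengths: list[int], window: int) -> torch.Tensor:
    """Illustrative sketch, not the code from llama.py.

    Builds a boolean [T, T] mask that is True where attention is allowed:
    each token attends causally, within a sliding window of `window` tokens,
    and only to tokens in its own segment (e.g. text / prom / response).
    """
    T = sum(segment_lengths)
    pos = torch.arange(T)
    # assign every position its segment index; a zero-length segment
    # (e.g. a prom-less input) simply contributes no positions here
    seg_ids = torch.repeat_interleave(
        torch.arange(len(segment_lengths)),
        torch.tensor(segment_lengths),
    )
    causal = pos[None, :] <= pos[:, None]            # no attending to future tokens
    near = (pos[:, None] - pos[None, :]) < window    # sliding-window constraint
    same = seg_ids[:, None] == seg_ids[None, :]      # stay inside the segment
    return causal & near & same
```

For use with an additive attention mask, the boolean result can be converted with `torch.where(mask, 0.0, float("-inf"))`. Note that an empty prom segment falls out naturally in this formulation, whereas code that computes segment boundaries by hand can mis-index in exactly that case, which is plausibly the kind of edge case the commit message refers to.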