vall-e/vall_e/models/arch
Last commit: 2025-03-27 00:51:41 -05:00
attention: "wow that was fast" (2025-03-04 23:17:18 -06:00)
__init__.py: "decoupled llama backend to avoid any funny changes from transformers; removed other backends since I don't think I'll ever bother using them" (2025-02-27 19:00:37 -06:00)
llama.py: "cannot get segmented mask to actually work without gradients exploding (need to find a different way to do duration prediction...)" (2025-03-27 00:51:41 -05:00)