vall-e/vall_e/models/arch
| File | Last commit | Date |
| --- | --- | --- |
| `attention` | wow that was fast | 2025-03-04 23:17:18 -06:00 |
| `__init__.py` | decoupled the llama backend to avoid any funny changes from transformers; removed the other backends since I don't think I'll ever bother using them | 2025-02-27 19:00:37 -06:00 |
| `llama.py` | more tweaks to the new implementation (properly trim the len stuff to save some params, decoder to d_ffn expansion of 2 to maybe also make it faster, etc.) | 2025-03-18 19:34:37 -05:00 |