vall-e/vall_e/models/arch
attention/     2025-03-04 23:17:18 -06:00   wow that was fast
__init__.py    2025-02-27 19:00:37 -06:00   decoupled llama backend to avoid any funny changes from transformers, removed other backends since i dont think i'll ever bother using them
llama.py       2025-03-08 17:10:50 -06:00   another optimization (within the dataloader because the similar utterance sampler was mondo slow)