vall-e/vall_e (last commit: 2024-08-26 19:33:51 -05:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| emb | my DAC dataset again managed to only have some utterances with only 8 of 9 RVQ levels, this fixes an oversight from it | 2024-08-09 21:18:01 -05:00 |
| engines | added ability to specify attention backend for CLI and webui (because im tired of editing the yaml) | 2024-08-26 19:33:51 -05:00 |
| ext | | |
| models | added ability to specify attention backend for CLI and webui (because im tired of editing the yaml) | 2024-08-26 19:33:51 -05:00 |
| utils | fix issue with sft and shared tensors... | 2024-08-04 19:56:21 -05:00 |
| __init__.py | | |
| __main__.py | added ability to specify attention backend for CLI and webui (because im tired of editing the yaml) | 2024-08-26 19:33:51 -05:00 |
| config.py | maybe not | 2024-08-09 11:38:08 -05:00 |
| data.py | added flash_attn LlamaAttention (including flash_attn==1.0.9) | 2024-08-18 20:51:14 -05:00 |
| demo.py | fix issue with sft and shared tensors... | 2024-08-04 19:56:21 -05:00 |
| export.py | added export option to convert Llama to MixtralMoE for another dumb experiment | 2024-08-04 20:25:06 -05:00 |
| inference.py | added ability to specify attention backend for CLI and webui (because im tired of editing the yaml) | 2024-08-26 19:33:51 -05:00 |
| plot.py | | |
| samplers.py | possible speedup for samplers that require a list of previous tokens (the DRY sampler made me realize that I should copy the tolist() thing from the rep pen sampler for everything else) | 2024-07-29 20:23:26 -05:00 |
| train.py | add cap for NAR-len training, to avoid any weird cases in early training where it'll just mess up and generate long lengths | 2024-08-03 21:00:32 -05:00 |
| webui.py | added ability to specify attention backend for CLI and webui (because im tired of editing the yaml) | 2024-08-26 19:33:51 -05:00 |
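Several entries above reference the same change: exposing the attention backend as a runtime option for the CLI and web UI instead of requiring an edit to the YAML config. The sketch below is not vall_e's actual interface; the `--attention` flag, the backend list, and the `load_config`/`Config` helpers are all assumptions used purely to illustrate the general "CLI flag overrides config file" pattern.

```python
# Minimal sketch (hypothetical, not vall_e's real API): let a CLI flag override
# the attention backend that would otherwise only be settable in the YAML config.
import argparse
from dataclasses import dataclass

# Assumed backend names for illustration; a real project would query what is installed.
ATTENTION_BACKENDS = ["auto", "sdpa", "flash_attn", "eager"]

@dataclass
class Config:
    attention: str = "auto"  # value normally read from the YAML config

def load_config() -> Config:
    # Stand-in for parsing the YAML config file.
    return Config()

def main() -> None:
    parser = argparse.ArgumentParser(description="inference entry point (illustrative)")
    parser.add_argument(
        "--attention",
        choices=ATTENTION_BACKENDS,
        default=None,
        help="override the attention backend without editing the YAML",
    )
    args = parser.parse_args()

    cfg = load_config()
    if args.attention is not None:  # CLI value wins over the YAML value
        cfg.attention = args.attention

    print(f"using attention backend: {cfg.attention}")

if __name__ == "__main__":
    main()
```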