vall-e/vall_e (latest commit 2024-05-12 07:30:59 -05:00)

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| emb/ | ughh | 2024-05-12 07:30:59 -05:00 |
| engines/ | nan loss detection (should have added it earlier), loss scaling for local backend + fp16 | 2024-05-11 22:23:29 -05:00 |
| ext/ | backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNets torchscale retnet for shits and grins, etc. | 2024-04-16 10:02:31 -05:00 |
| models/ | ughh | 2024-05-12 07:30:59 -05:00 |
| utils/ | leverage between xformers and torch.backends.cuda.sdp_kernel for attention | 2024-05-11 17:14:05 -05:00 |
| __init__.py | | |
| __main__.py | deprecate sole AR/NAR model by only keeping the AR+NAR (the beauty of no one using this is that I can break compat as much as I want), add tone token for when I classify my dataset with tone/emotion in the future, some other things | 2024-04-15 19:54:32 -05:00 |
| config.py | ugh | 2024-05-11 22:58:38 -05:00 |
| data.py | ughh | 2024-05-12 07:30:59 -05:00 |
| export.py | cleanup, use deepspeed inferencing pathway if requested | 2023-10-09 15:24:04 -05:00 |
| inference.py | added option to specify frames per second for the given audio representation (Encodec is 75Hz, DAC is 41Hz (at 24K sources)) | 2024-05-04 12:05:41 -05:00 |
| plot.py | deprecate sole AR/NAR model by only keeping the AR+NAR (the beauty of no one using this is that I can break compat as much as I want), add tone token for when I classify my dataset with tone/emotion in the future, some other things | 2024-04-15 19:54:32 -05:00 |
| samplers.py | separated samplers into its own file, don't bother copying the logits back to the GPU after sampling, it's not necessary | 2023-10-11 12:25:31 -05:00 |
| train.py | simple DDP wrapper (for my NVlink test) | 2024-05-04 11:48:26 -05:00 |
| webui.py | backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNets torchscale retnet for shits and grins, etc. | 2024-04-16 10:02:31 -05:00 |
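The `utils/` commit above mentions leveraging either xformers or `torch.backends.cuda.sdp_kernel` for attention. As a rough illustration only (this is a hypothetical sketch, not the repository's actual code), such a dispatch could look like the following, assuming query/key/value tensors shaped `(batch, heads, seq_len, head_dim)` and treating xformers as an optional dependency:

```python
import torch
import torch.nn.functional as F

try:
    import xformers.ops as xops  # optional dependency; fall back to torch SDPA if absent
    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False

def attention(q, k, v, causal=False, backend="auto"):
    """q, k, v: (batch, heads, seq_len, head_dim). `backend` is a hypothetical knob."""
    if backend == "xformers" or (backend == "auto" and HAS_XFORMERS):
        # xformers expects (batch, seq_len, heads, head_dim), so transpose in and out
        bias = xops.LowerTriangularMask() if causal else None
        out = xops.memory_efficient_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), attn_bias=bias
        )
        return out.transpose(1, 2)
    # otherwise use torch's fused kernels, constraining which SDPA backends may run
    with torch.backends.cuda.sdp_kernel(
        enable_flash=True, enable_mem_efficient=True, enable_math=True
    ):
        return F.scaled_dot_product_attention(q, k, v, is_causal=causal)
```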