vall-e/vall_e
Latest commit: 2024-12-12 17:12:59 -06:00
emb/      store metrics and only recalculate them if the output file is newer than the metrics file (2024-12-11 20:55:43 -06:00)
engines/  actually save the optimizer for the local engine backend because safetensors doesn't save it (2024-12-12 17:12:59 -06:00)
models/   APOLLO cringe (doesn't want to work with deepspeed) (2024-12-12 00:31:58 -06:00)
utils/    more fixes for local engine backend (2024-12-12 14:38:42 -06:00)
__init__.py
__main__.py
config.py     lol (2024-12-11 19:10:32 -06:00)
data.py       APOLLO cringe (doesn't want to work with deepspeed) (2024-12-12 00:31:58 -06:00)
demo.py       more fixes for local engine backend (2024-12-12 14:38:42 -06:00)
export.py
inference.py  sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them) (2024-12-11 22:45:38 -06:00)
metrics.py    uplifting transformer's WavLM stuff to do speaker verification instead (2024-12-11 19:30:05 -06:00)
plot.py
samplers.py   sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them) (2024-12-11 22:45:38 -06:00)
train.py
webui.py
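The `emb/` commit above describes mtime-based caching: stored metrics are reused unless the output file is newer than its metrics file. A minimal sketch of that check, assuming a JSON sidecar file; `load_or_compute_metrics` and the `compute` callable are hypothetical names for illustration, not the repo's API:

```python
import json
import os

def load_or_compute_metrics(output_path: str, metrics_path: str, compute):
    """Return cached metrics unless the output file is newer than the cache.

    `compute` is a hypothetical callable taking the output path and
    returning a JSON-serializable dict of metrics.
    """
    if os.path.exists(metrics_path) and (
        os.path.getmtime(metrics_path) >= os.path.getmtime(output_path)
    ):
        # Metrics file is at least as new as the output file: reuse it.
        with open(metrics_path, "r", encoding="utf-8") as f:
            return json.load(f)
    # Output changed (or no cache yet): recompute and refresh the sidecar.
    metrics = compute(output_path)
    with open(metrics_path, "w", encoding="utf-8") as f:
        json.dump(metrics, f)
    return metrics
```

Because the comparison uses `>=` on modification times, rewriting the output file invalidates the cache on the next call.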
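The `engines/` commit notes that safetensors can't carry the optimizer: the format stores only flat maps of tensors, so optimizer state (step counters, moment buffers, nested dicts) must be serialized to a sidecar file. A rough sketch of that split, with `pickle` standing in for `torch.save` and `save_weights`/`load_weights` standing in for `safetensors.torch.save_file`/`load_file`; every name here is hypothetical, not the repo's checkpoint layout:

```python
import pickle
from pathlib import Path

def save_local_checkpoint(ckpt_dir, model_state, optimizer_state, save_weights):
    """Write model weights via a tensor-only serializer and optimizer
    state via a generic one, since safetensors only stores flat tensor maps.

    `save_weights` is a stand-in for safetensors.torch.save_file.
    """
    ckpt_dir = Path(ckpt_dir)
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    save_weights(model_state, ckpt_dir / "model.safetensors")
    # Optimizer state is arbitrarily nested, so it goes to a sidecar file.
    with open(ckpt_dir / "optimizer.pth", "wb") as f:
        pickle.dump(optimizer_state, f)

def load_local_checkpoint(ckpt_dir, load_weights):
    """Load weights, and optimizer state if the sidecar file exists."""
    ckpt_dir = Path(ckpt_dir)
    model_state = load_weights(ckpt_dir / "model.safetensors")
    optimizer_state = None
    optim_path = ckpt_dir / "optimizer.pth"
    if optim_path.exists():
        with open(optim_path, "rb") as f:
            optimizer_state = pickle.load(f)
    return model_state, optimizer_state
```

Restoring a `None` optimizer state then simply means the optimizer restarts fresh, which matches losing it entirely when only the safetensors file is saved.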
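The `inference.py`/`samplers.py` commit sorts inputs by length before batching, so each batch holds similarly sized sequences and pads fewer tokens to reach its batch max. A sketch of the idea under that assumption; `sorted_batches` and `padding_cost` are illustrative helpers, not the repo's functions:

```python
def sorted_batches(sequences, batch_size):
    """Group sequence indices into batches after sorting by length, so items
    in the same batch are similarly sized and padding to the batch max is small."""
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

def padding_cost(sequences, batches):
    """Total pad tokens needed when each batch pads every item to its max length."""
    return sum(
        max(len(sequences[i]) for i in batch) * len(batch)
        - sum(len(sequences[i]) for i in batch)
        for batch in batches
    )
```

Batches carry the original indices, so after decoding, outputs can be scattered back into the caller's input order.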