vall-e/data
Last commit (2024-12-11 22:45:38 -06:00): demo sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them)
config.yaml
harvard_sentences.txt
noise.enc
qnt.enc
tokenizer.json