vall-e/vall_e (last commit: 2024-11-19 10:30:05 -06:00)

| Name | Last commit message | Date |
|---|---|---|
| `emb` | fixes | 2024-11-10 20:37:50 -06:00 |
| `engines` | ugh | 2024-11-14 07:34:22 -06:00 |
| `ext` | maybe final tweaks, I really needed to unify my json read/write and orjson is proven to be fast enough for me to try and rely on it more | 2024-09-17 22:57:04 -05:00 |
| `models` | cleaned up classifier-free guidance logit processing (in order to try and cope with a bad nar-len model) | 2024-11-19 10:30:05 -06:00 |
| `utils` | default set cfg strength to 3.0 since the reference model is updated | 2024-11-17 10:23:40 -06:00 |
| `__init__.py` | Rewrite init | 2023-08-02 21:53:35 +00:00 |
| `__main__.py` | new meme sampler PogChamp (it sort of helps?) | 2024-11-12 22:30:09 -06:00 |
| `config.py` | normalize sampler index by batch size (if not using batched sampler), add option to cap out utterances for a speaker, some other things | 2024-11-18 12:46:50 -06:00 |
| `data.py` | oops | 2024-11-18 14:12:26 -06:00 |
| `demo.py` | set option to set training masking ratio (I don't think for tts a fixed masking ratio is beneficial since the magic of the AR+NAR is being able to still reference the prior sequence of tokens for predicting things) | 2024-11-17 17:04:07 -06:00 |
| `export.py` | two weeks of agony concludes | 2024-11-18 21:29:28 -06:00 |
| `inference.py` | two weeks of agony concludes | 2024-11-18 21:29:28 -06:00 |
| `plot.py` | very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough) | 2024-11-02 11:49:05 -05:00 |
| `samplers.py` | cleaned up classifier-free guidance logit processing (in order to try and cope with a bad nar-len model) | 2024-11-19 10:30:05 -06:00 |
| `train.py` | default set cfg strength to 3.0 since the reference model is updated | 2024-11-17 10:23:40 -06:00 |
| `webui.py` | default set cfg strength to 3.0 since the reference model is updated | 2024-11-17 10:23:40 -06:00 |
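
Several of the most recent commits above concern classifier-free guidance (CFG) over logits, with the default cfg strength raised to 3.0 for the updated reference model. As a rough, hedged illustration only, and not the actual code in `samplers.py`, CFG at sampling time is typically a blend of conditioned and unconditioned logits; the function and tensor names below are hypothetical.

```python
import torch

def apply_cfg(cond_logits: torch.Tensor,
              null_logits: torch.Tensor,
              cfg_strength: float = 3.0) -> torch.Tensor:
    """Illustrative classifier-free guidance over logits (not vall_e's implementation).

    Blends logits from a conditioned forward pass with logits from an
    unconditioned ("null prompt") pass; strengths above 1.0 push the
    distribution further toward the conditioned prediction.
    """
    return null_logits + (cond_logits - null_logits) * cfg_strength
```

With `cfg_strength = 1.0` this reduces to the plain conditional logits; the listing above notes the default being set to 3.0 after the reference model was updated.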