vall-e/vall_e (last commit: 2024-10-04 22:30:47 -05:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| emb/ | fixed oversight where input audio does not resample (lol...) | 2024-09-27 20:27:53 -05:00 |
| engines/ | ugh | 2024-09-18 21:40:57 -05:00 |
| ext/ | maybe final tweaks, I really needed to unify my json read/write and orjson is proven to be fast enough for me to try and rely on it more | 2024-09-17 22:57:04 -05:00 |
| models/ | faster | 2024-10-04 22:30:47 -05:00 |
| utils/ | sped up inferencing by not doing .tolist() for rep pen / length pen (and a bug fix in the web UI from prev commit) | 2024-10-04 22:18:20 -05:00 |
| __init__.py | Rewrite init | 2023-08-02 21:53:35 +00:00 |
| __main__.py | README tweaks, added --input-prompt-prefix as an experiment (its literally better to just not do this, but i'll retain it in case i have a revelation on how to improve it) | 2024-10-04 18:57:19 -05:00 |
| config.py | add top_k sampling / offset for prompt similar utterance sampling | 2024-09-26 16:26:40 -05:00 |
| data.py | coerce into path for other sampler_types (it's required for sampling for similar utterances) | 2024-09-26 18:37:56 -05:00 |
| demo.py | tweaked demo page script to sample speakers instead | 2024-09-28 10:50:26 -05:00 |
| export.py | tweaks and fixes for lora stuffs | 2024-09-08 18:05:21 -05:00 |
| inference.py | README tweaks, added --input-prompt-prefix as an experiment (its literally better to just not do this, but i'll retain it in case i have a revelation on how to improve it) | 2024-10-04 18:57:19 -05:00 |
| plot.py | vall_e.plot tweaks | 2024-09-24 20:05:10 -05:00 |
| samplers.py | possible speedup for samplers that require a list of previous tokens (the DRY sampler made me realize that I should copy the tolist() thing from the rep pen sampler for everything else) | 2024-07-29 20:23:26 -05:00 |
| train.py | don't do eval on stt because it's so slow and I don't even bother doing any metrics against it anyways (to-do: make this a flag) | 2024-09-26 18:56:57 -05:00 |
| webui.py | sped up inferencing by not doing .tolist() for rep pen / length pen (and a bug fix in the web UI from prev commit) | 2024-10-04 22:18:20 -05:00 |