vall-e/vall_e  (directory listing: each entry shows the file or subdirectory, its latest commit message, and the commit date)
emb/          ugh  [2024-11-05 11:50:05 -06:00]
engines/      new NAR-len training paradigm......  [2024-11-07 11:32:11 -06:00]
ext/          maybe final tweaks, I really needed to unify my json read/write and orjson is proven to be fast enough for me to try and rely on it more  [2024-09-17 22:57:04 -05:00]
models/       'borrowed' a sampling scheduler for NAR-len's RVQ level 0 (better than before, but still not good enough)  [2024-11-07 21:19:14 -06:00]
utils/        modified default arguments (ar temp = 0 and rep pen = 1.125 seems to be stable, at least given the few things i tested), do not pass top k/top p/min p to NAR even though technically none of those things should matter when greedy sampling  [2024-10-22 18:12:39 -05:00]  (see the greedy-sampling sketch below)
__init__.py   Rewrite init  [2023-08-02 21:53:35 +00:00]
__main__.py   more adjustments (adjustments of early-exit entropy/varentropy thresholds, default rep pen being 1.5, experimental refine-on-stop, etc.)  [2024-11-03 18:31:28 -06:00]
config.py     more notes  [2024-11-06 13:51:28 -06:00]
data.py       saner mask creation? (it doesnt matter, kv cache wont work)  [2024-11-02 21:00:21 -05:00]
demo.py       more windows specific fixes, limit gradio to <5.0.0 on linux (it works on windows, but not on my linux machine tm)  [2024-11-04 18:00:33 -06:00]
export.py     tweaks and fixes for lora stuffs  [2024-09-08 18:05:21 -05:00]
inference.py  repeat extend the prom to fill the initial tokens for nar-len (it somewhat works, the model just needs to train more)  [2024-11-06 23:29:53 -06:00]  (see the repeat-extend sketch below)
plot.py       very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough)  [2024-11-02 11:49:05 -05:00]  (see the early-exit sketch below)
samplers.py   'borrowed' a sampling scheduler for NAR-len's RVQ level 0 (better than before, but still not good enough)  [2024-11-07 21:19:14 -06:00]  (see the scheduler sketch below)
train.py      eval fix for nar-len  [2024-11-06 23:14:16 -06:00]
webui.py      ugh  [2024-11-06 23:16:28 -06:00]
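
The utils/ entry notes that top-k / top-p / min-p are no longer passed to the NAR because none of them should matter under greedy sampling. A minimal sketch of that reasoning (the `greedy_pick` helper is illustrative, not the repo's sampler API): every truncation filter keeps the highest-probability token, and argmax then selects exactly that token, so the filters cannot change the output.

```python
# Illustrative only: shows why top-k (and likewise top-p / min-p) is a no-op
# under greedy decoding. Not the repo's samplers.py.
import torch

def greedy_pick(logits: torch.Tensor, top_k: int | None = None) -> int:
    probs = torch.softmax(logits, dim=-1)
    if top_k is not None:
        # keep only the k most probable tokens, zero out the rest
        kth = torch.topk(probs, top_k).values[-1]
        probs = torch.where(probs >= kth, probs, torch.zeros_like(probs))
    # the filter never removes the most probable token, so the argmax
    # of the filtered distribution equals the argmax of the original one
    return int(torch.argmax(probs).item())

logits = torch.randn(1024)
assert greedy_pick(logits) == greedy_pick(logits, top_k=8)
```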
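The inference.py entry mentions repeat-extending the prompt ("prom") to fill the initial tokens for the NAR-len path. A minimal sketch of that idea, assuming the prompt is a (time, quantizer-level) tensor of codebook indices; the function name and shapes here are illustrative, not the repo's.

```python
# Illustrative sketch: tile the acoustic prompt's codebook tokens along the
# time axis until the target duration is filled, then truncate. These tiled
# tokens seed the NAR-len decoder's initial sequence.
import torch

def repeat_extend(prom: torch.Tensor, target_len: int) -> torch.Tensor:
    """prom: (T, n_levels) codebook indices; returns (target_len, n_levels)."""
    reps = (target_len + prom.shape[0] - 1) // prom.shape[0]  # ceiling division
    return prom.repeat(reps, 1)[:target_len]

prom = torch.randint(0, 1024, (75, 8))      # ~1 s of EnCodec codes at 75 Hz, 8 levels
seed = repeat_extend(prom, target_len=300)  # fill a ~4 s target utterance
assert seed.shape == (300, 8)
```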
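The plot.py entry describes the layerskip speculative sampling as "just checking if the current layer's state is good enough", and the __main__.py entry mentions early-exit entropy/varentropy thresholds. A sketch of one plausible reading of that check, assuming "good enough" means the intermediate layer's next-token distribution has low entropy and varentropy; names and thresholds are hypothetical.

```python
# Illustrative sketch: project an intermediate hidden state through the LM
# head and exit early when both the entropy and the varentropy of the
# resulting distribution fall below thresholds.
import torch

def should_exit_early(hidden: torch.Tensor, lm_head: torch.nn.Linear,
                      entropy_thresh: float = 0.1,
                      varentropy_thresh: float = 0.1) -> bool:
    logits = lm_head(hidden)                     # (vocab,)
    log_probs = torch.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum()         # how uncertain this layer is
    varentropy = (probs * (-log_probs - entropy) ** 2).sum()  # spread of surprisal
    return bool(entropy < entropy_thresh and varentropy < varentropy_thresh)

lm_head = torch.nn.Linear(1024, 8192)   # stand-in for the model's output head
hidden = torch.randn(1024)              # hidden state of some intermediate layer
_ = should_exit_early(hidden, lm_head)
```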
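The samplers.py and models/ entries mention a "borrowed" sampling scheduler for NAR-len's RVQ level 0 without naming it. Purely to illustrate what such a scheduler does, below is a MaskGIT-style cosine schedule that decides how many level-0 positions remain masked at each refinement step; the scheduler family is an assumption, not the repo's actual code.

```python
# Illustrative sketch of a cosine masking schedule for iterative NAR decoding:
# the fraction of still-masked positions shrinks from ~1 to 0 over the steps.
import math

def masked_count(step: int, total_steps: int, seq_len: int) -> int:
    """Number of positions still masked after `step` of `total_steps` refinement steps."""
    ratio = math.cos(0.5 * math.pi * (step + 1) / total_steps)  # ~1 -> 0
    return max(0, math.floor(seq_len * ratio))

print([masked_count(s, 8, 300) for s in range(8)])  # monotonically shrinking mask
```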