vall-e/vall_e  (latest commit: 2024-11-01 18:36:44 -05:00)

emb/
engines/       added option to load lora directly from the model file itself with --lora  (2024-10-26 00:13:10 -05:00)
ext/
models/        actually float16(+AMP) and layerskip is bad and will kill the model......  (2024-11-01 18:36:44 -05:00)
utils/
__init__.py
__main__.py    added option to load lora directly from the model file itself with --lora  (2024-10-26 00:13:10 -05:00)
config.py      actually float16(+AMP) and layerskip is bad and will kill the model......  (2024-11-01 18:36:44 -05:00)
data.py
demo.py        layer skip training implemented (need to gut the inferencing from the repo, and to actually see if the model can benefit from this)  (2024-10-30 20:05:45 -05:00)
export.py
inference.py   added option to load lora directly from the model file itself with --lora  (2024-10-26 00:13:10 -05:00)
plot.py        too brainlet to diagnose why low temp / greedy sampling is randomly unstable some of the time  (2024-10-22 20:13:54 -05:00)
samplers.py    actually have beam_width in the webUI work  (2024-10-22 22:06:22 -05:00)
train.py       actually float16(+AMP) and layerskip is bad and will kill the model......  (2024-11-01 18:36:44 -05:00)
webui.py       added option to load lora directly from the model file itself with --lora  (2024-10-26 00:13:10 -05:00)