vall-e/vall_e (latest commit: 2024-11-01 20:54:53 -05:00)

| Name | Last commit | Date |
| --- | --- | --- |
| emb/ | | |
| engines/ | skip the step on a NaN loss (ironically, I have not had a NaN loss since adding this); throw an exception on an invalid cfg.dataset.sample_type / sample_order combination (I was tripped up by this in my YAML and had inconsistent VRAM usage) (sketches below) | 2024-11-01 20:54:53 -05:00 |
| ext/ | | |
| models/ | float16(+AMP) combined with layerskip is bad and will kill the model | 2024-11-01 18:36:44 -05:00 |
| utils/ | | |
| __init__.py | | |
| __main__.py | added an option to load a LoRA directly from the model file itself with --lora (sketch below) | 2024-10-26 00:13:10 -05:00 |
| config.py | float16(+AMP) combined with layerskip is bad and will kill the model | 2024-11-01 18:36:44 -05:00 |
| data.py | skip the step on a NaN loss; throw an exception on an invalid cfg.dataset.sample_type / sample_order combination | 2024-11-01 20:54:53 -05:00 |
| demo.py | layer-skip training implemented (still need to gut the inferencing from the repo, and to see whether the model can actually benefit from this) (sketch below) | 2024-10-30 20:05:45 -05:00 |
| export.py | | |
| inference.py | added an option to load a LoRA directly from the model file itself with --lora | 2024-10-26 00:13:10 -05:00 |
| plot.py | too brainlet to diagnose why low-temp / greedy sampling is intermittently unstable | 2024-10-22 20:13:54 -05:00 |
| samplers.py | actually make beam_width in the webUI work (sketch below) | 2024-10-22 22:06:22 -05:00 |
| train.py | float16(+AMP) combined with layerskip is bad and will kill the model | 2024-11-01 18:36:44 -05:00 |
| webui.py | added an option to load a LoRA directly from the model file itself with --lora | 2024-10-26 00:13:10 -05:00 |
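
Several of these commit messages describe concrete techniques; hedged sketches of each follow. First, the NaN-loss guard from the engines/data.py commit: a minimal sketch in a plain PyTorch loop, where `model`, `optimizer`, and `batch` are generic placeholders rather than this repo's actual engine API.

```python
import torch

def training_step(model, optimizer, batch):
    """Run one step, but skip the update entirely if the loss is NaN/Inf."""
    optimizer.zero_grad()
    loss = model(**batch)  # placeholder: assumes the model returns a scalar loss
    # A single non-finite loss would poison the weights through the update,
    # so drop this step rather than backpropagating garbage.
    if not torch.isfinite(loss):
        return None
    loss.backward()
    optimizer.step()
    return loss.item()
```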
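The same commit adds a hard failure on an invalid cfg.dataset.sample_type / sample_order combination, rather than letting it silently change batching behavior (the source of the inconsistent VRAM usage). A sketch of that kind of startup guard; the set of valid combinations here is invented for illustration and is not taken from the repo.

```python
# Illustrative placeholders, not the repo's actual option values.
VALID_COMBINATIONS = {
    ("path", "shuffle"),
    ("speaker", "duration"),
}

def validate_dataset_cfg(sample_type: str, sample_order: str) -> None:
    """Fail loudly at startup instead of misbehaving mid-training."""
    if (sample_type, sample_order) not in VALID_COMBINATIONS:
        raise ValueError(
            f"invalid combination: cfg.dataset.sample_type={sample_type!r}, "
            f"cfg.dataset.sample_order={sample_order!r}"
        )
```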
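The --lora commit (touching __main__.py, inference.py, and webui.py) loads a LoRA straight from the model file instead of from a separate path. A sketch of one way that can work, assuming the checkpoint bundles LoRA tensors next to the base weights; the checkpoint key names and the (A, B, scale) layout are assumptions, not the repo's actual format.

```python
import torch

def load_with_embedded_lora(model: torch.nn.Module, path: str, use_lora: bool):
    """Load base weights, then fold in LoRA deltas stored in the same file."""
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["module"], strict=False)  # key name assumed
    if use_lora and "lora" in ckpt:  # layout assumed: {param_name: (A, B, scale)}
        params = dict(model.named_parameters())
        for name, (A, B, scale) in ckpt["lora"].items():
            # Merge the low-rank update into the base weight: W += scale * (B @ A)
            params[name].data += scale * (B @ A)
    return model
```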
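demo.py's commit references layer-skip training, presumably in the LayerSkip sense: supervise early exits so the model can stop at intermediate layers at inference time. A minimal sketch of the auxiliary loss under that reading; the uniform exit weighting and the shared-head assumption are mine, not the repo's recipe.

```python
import torch
import torch.nn.functional as F

def layerskip_loss(hidden_states, norm, lm_head, targets):
    """Average the LM loss over every layer's output, not just the last,
    so intermediate layers learn exits usable for early-exit inference.
    hidden_states: list of [B, T, D] activations, one per transformer layer."""
    losses = []
    for h in hidden_states:
        logits = lm_head(norm(h))  # shared final norm + head across all exits
        losses.append(F.cross_entropy(
            logits.view(-1, logits.size(-1)),
            targets.view(-1),
            ignore_index=-100,
        ))
    # Uniform average for simplicity; published recipes weight deeper exits more.
    return torch.stack(losses).mean()
```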
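Finally, samplers.py's commit wires beam_width from the webUI through to sampling. For reference, a minimal beam search over an autoregressive step function; `step_fn` is a placeholder for one forward pass of the model, and the fixed max_steps stands in for a real stop condition.

```python
import torch

@torch.no_grad()
def beam_search(step_fn, prompt, beam_width=4, max_steps=32):
    """Keep the beam_width highest-scoring sequences at every step.
    step_fn(tokens) -> log-probabilities [V] for the next token."""
    beams = [(list(prompt), 0.0)]  # (token list, cumulative log-prob)
    for _ in range(max_steps):
        candidates = []
        for seq, score in beams:
            logp = step_fn(seq)                 # [V] next-token log-probs
            topv, topi = logp.topk(beam_width)  # expand each beam
            for v, i in zip(topv.tolist(), topi.tolist()):
                candidates.append((seq + [i], score + v))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]         # prune back down
    return beams[0][0]  # best-scoring sequence
```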