vall-e/vall_e
| Name | Last commit message | Date |
| --- | --- | --- |
| `emb/` |  |  |
| `engines/` | added knowledge distillation in the trainer (sadly it is not agnostic because of the grave mistake of further processing the batch within the forward pass, so subsequent calls do not match…) | 2024-12-05 23:05:52 -06:00 |
| `models/` | agnostified KD | 2024-12-06 23:53:46 -06:00 |
| `utils/` | agnostified KD | 2024-12-06 23:53:46 -06:00 |
| `__init__.py` |  |  |
| `__main__.py` | actually fixed knowledge distillation because of errant -inf logits causing problems and needed to be filtered (and splitting text language / output audio language because it helps) | 2024-12-06 21:55:20 -06:00 |
| `config.py` | ACTUALLY do KD-loss because of an oversight with masked_select outputting 1D tensors that get softmax'd in total | 2024-12-07 09:52:51 -06:00 |
| `data.py` | oops | 2024-12-04 21:24:22 -06:00 |
| `demo.py` |  |  |
| `export.py` |  |  |
| `inference.py` | actually fixed knowledge distillation because of errant -inf logits causing problems and needed to be filtered (and splitting text language / output audio language because it helps) | 2024-12-06 21:55:20 -06:00 |
| `plot.py` |  |  |
| `samplers.py` |  |  |
| `train.py` | ACTUALLY do KD-loss because of an oversight with masked_select outputting 1D tensors that get softmax'd in total | 2024-12-07 09:52:51 -06:00 |
| `webui.py` | actually fixed knowledge distillation because of errant -inf logits causing problems and needed to be filtered (and splitting text language / output audio language because it helps) | 2024-12-06 21:55:20 -06:00 |
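Several of these commit messages describe concrete bugs that are easier to follow with code. First, the `engines/`, `models/`, and `utils/` commits: knowledge distillation was added at the trainer level but was initially not model-agnostic, because the model further processes the batch inside its forward pass, so a second (teacher) call does not see the same inputs. Below is a minimal sketch of the agnostic trainer-level pattern the later "agnostified KD" commits imply; `preprocess`, `kd_step`, and the loss weighting are hypothetical names for illustration, not the repo's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def preprocess(batch):
    # Hypothetical stand-in for the batch processing that, per the commit,
    # originally happened inside the model's forward pass; hoisting it out
    # guarantees the student and the teacher see identical inputs.
    return batch["inputs"], batch["targets"]

def kd_step(student, teacher, batch, alpha=0.5, T=2.0):
    inputs, targets = preprocess(batch)          # processed exactly once
    with torch.no_grad():                        # teacher is frozen
        t_logits = teacher(inputs)
    s_logits = student(inputs)
    hard = F.cross_entropy(s_logits, targets)    # ordinary task loss
    soft = F.kl_div(                             # distillation term
        (s_logits / T).log_softmax(dim=-1),
        (t_logits / T).softmax(dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # standard temperature scaling
    return (1 - alpha) * hard + alpha * soft

# Toy usage with linear "models" standing in for the real ones.
student, teacher = nn.Linear(16, 32), nn.Linear(16, 32)
batch = {"inputs": torch.randn(4, 16), "targets": torch.randint(0, 32, (4,))}
loss = kd_step(student, teacher, batch)
```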
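Second, the `__main__.py` / `inference.py` / `webui.py` commit: errant `-inf` logits had to be filtered before computing the distillation loss. Wherever the teacher emits `-inf`, its probability is 0 but its log-probability is `-inf`, and the resulting `0 * -inf` term turns the whole KL divergence into NaN. A small demonstration, where the `-1e4` floor is an assumed value rather than the repo's actual choice:

```python
import torch

def kd_loss(s_logits, t_logits):
    # Pointwise KL(teacher || student): sum over vocab of p * (log p - log q).
    p = t_logits.softmax(dim=-1)
    log_p = t_logits.log_softmax(dim=-1)
    log_q = s_logits.log_softmax(dim=-1)
    return (p * (log_p - log_q)).sum(dim=-1).mean()

def filter_inf(logits, floor=-1e4):
    # Clamp -inf up to a large finite negative so p stays ~0 without
    # producing the 0 * -inf = NaN term above.
    return logits.clamp(min=floor)

s = torch.randn(5, 8)
t = torch.randn(5, 8)
t[:, 3] = float("-inf")                           # e.g. tokens the teacher masked out

print(kd_loss(s, t))                              # nan
print(kd_loss(filter_inf(s), filter_inf(t)))      # finite
```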
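Third, the `config.py` / `train.py` commit ("masked_select outputting 1D tensors that get softmax'd in total") is a classic shape bug: `torch.masked_select` always flattens its result, so a softmax applied afterwards normalizes over every selected element at once instead of per token over the vocabulary. A minimal sketch with made-up shapes:

```python
import torch
import torch.nn.functional as F

batch, seq, vocab = 2, 4, 8
s_logits = torch.randn(batch, seq, vocab)
t_logits = torch.randn(batch, seq, vocab)
mask = torch.tensor([[True, False, True, True],   # positions that count
                     [False, True, False, True]])

# Buggy: masked_select flattens to 1D, so this softmax normalizes across
# all selected tokens *and* vocab entries in total.
flat = s_logits.masked_select(mask.unsqueeze(-1))  # shape [N * vocab]
bad = flat.log_softmax(dim=-1)                     # one softmax over the lot

# Fixed: boolean indexing keeps the vocab axis, giving a per-token softmax.
s_sel = s_logits[mask]                             # shape [N, vocab]
t_sel = t_logits[mask]
kd = F.kl_div(
    s_sel.log_softmax(dim=-1),
    t_sel.softmax(dim=-1),
    reduction="batchmean",
)
```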