vall-e/vall_e/models (last commit: 2024-12-06 21:55:20 -06:00)
arch | sageattn (forgot to bother with testing this the other day, seems fine) | 2024-12-03 15:14:57 -06:00
__init__.py
ar_nar.py | added knowledge distillation in the trainer (sadly it is not agnostic, because of the grave mistake of further processing the batch within the forward pass, so subsequent calls do not match) | 2024-12-05 23:05:52 -06:00
base.py | actually fixed knowledge distillation: errant -inf logits were causing problems and needed to be filtered (a hedged sketch of this filtering follows the listing); also split text language / output audio language, because it helps | 2024-12-06 21:55:20 -06:00
experimental.py
lora.py
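
The base.py entry above mentions filtering errant -inf logits before the distillation loss is computed. The snippet below is a minimal sketch of what such filtering can look like, assuming a standard soft-target (KL-divergence) distillation objective; it is not the repository's actual implementation, and the function name kd_loss, the temperature parameter, and the exact loss form are illustrative assumptions.

```python
# Hedged sketch: soft-target knowledge distillation with -inf logits filtered out.
# Not taken from vall_e/models/base.py; names and loss form are assumptions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 1.0) -> torch.Tensor:
    # Errant -inf entries (e.g. masked-out vocabulary positions) make
    # softmax / log-softmax produce NaNs, so replace them with a large finite
    # negative value that still contributes effectively zero probability.
    valid = torch.isfinite(student_logits) & torch.isfinite(teacher_logits)
    s = student_logits.masked_fill(~valid, -1e4) / temperature
    t = teacher_logits.masked_fill(~valid, -1e4) / temperature

    log_p_s = F.log_softmax(s, dim=-1)
    log_p_t = F.log_softmax(t, dim=-1)
    # KL(teacher || student) summed over the vocabulary, averaged over
    # positions, and scaled by T^2, the usual soft-target distillation loss.
    return (log_p_t.exp() * (log_p_t - log_p_s)).sum(dim=-1).mean() * temperature ** 2
```

Replacing -inf with a large finite negative value, rather than dropping those entries, keeps tensor shapes intact while still assigning them near-zero probability, which is one straightforward way to avoid the NaNs the commit message describes.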