vall-e/vall_e/models
Last commit: 2024-12-06 22:35:30 -06:00
arch/  sageattn (forgot to bother with testing this the other day, seems fine)  [2024-12-03 15:14:57 -06:00]  (see the attention-backend sketch after this listing)
__init__.py  unified nar.py into ar_nar.py  [2024-11-10 12:19:48 -06:00]
ar_nar.py  added knowledge distillation in the trainer (sadly it is not model-agnostic, because of the grave mistake of further processing the batch within the forward pass, so subsequent calls do not see matching inputs)  [2024-12-05 23:05:52 -06:00]  (see the distillation-loss sketch after this listing)
base.py  ugh  [2024-12-06 22:35:30 -06:00]
experimental.py  moved prints to use logger, edited readme (fused_attn doesn't seem stable for training)  [2024-08-29 13:27:16 -05:00]
lora.py  naive model offloading support: automatically splits parts of the model across the requested devices per memory constraints (either inferred or requested in the yaml), and input tensors are automatically migrated to the right device; it SEEMS to work for training under the test trainer when split between GPU and CPU (done specifically because that Flux imagegen model released, so I can test it there)  [2024-08-01 20:12:06 -05:00]  (see the offloading sketch after this listing)
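
The `arch/` entry wires SageAttention in as another attention backend. A minimal sketch of what such a backend swap typically looks like, assuming the published `sageattention` package API; the `attention` wrapper and `backend` switch here are illustrative, not vall-e's actual code:

```python
# Hedged sketch: SageAttention as a drop-in replacement for
# torch.nn.functional.scaled_dot_product_attention. The wrapper and
# backend switch are hypothetical, not vall-e's implementation.
import torch
import torch.nn.functional as F

try:
    from sageattention import sageattn  # pip install sageattention
    HAS_SAGEATTN = True
except ImportError:
    HAS_SAGEATTN = False

def attention(q, k, v, causal: bool, backend: str = "sageattn"):
    # q/k/v: [batch, heads, seq_len, head_dim]
    if backend == "sageattn" and HAS_SAGEATTN:
        # SageAttention quantizes QK^T internally; inputs stay fp16/bf16.
        return sageattn(q, k, v, tensor_layout="HND", is_causal=causal)
    # Fall back to PyTorch's fused SDPA kernel.
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)
```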
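
The `ar_nar.py` entry concerns knowledge distillation in the trainer; the complaint is that the batch is further processed inside the forward pass, so a second (teacher) call no longer sees the same inputs. For reference, a minimal temperature-scaled logit-distillation loss looks like the sketch below; this is the standard formulation, not vall-e's actual trainer code:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL divergence between student and teacher logits.

    Both tensors are [batch, seq_len, vocab]. This only gives a meaningful
    signal if teacher and student were fed the same processed batch, which
    is exactly what the commit message above says breaks when the batch is
    mutated inside forward().
    """
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * (temperature ** 2)
```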
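
The `lora.py` entry describes naive model offloading: layers are assigned to devices under per-device memory budgets, and input tensors are migrated to each layer's device on the fly. A hedged sketch of that pattern, assuming a greedy layer-by-layer split; the function name, budget format, and heuristic are illustrative, and the real version can also infer budgets or read them from the yaml:

```python
import torch
import torch.nn as nn

def offload_layers(layers: nn.ModuleList, budgets: dict[str, int]) -> None:
    """Greedily place layers on devices until each memory budget (bytes) is
    exhausted, then register pre-hooks that migrate inputs to the layer's
    device. Illustrative only; not vall-e's actual splitting logic."""
    devices = list(budgets)  # e.g. {"cuda:0": 6 * 2**30, "cpu": 2**62}
    idx, used = 0, 0
    for layer in layers:
        size = sum(p.numel() * p.element_size() for p in layer.parameters())
        if used + size > budgets[devices[idx]] and idx + 1 < len(devices):
            idx, used = idx + 1, 0  # budget exhausted: spill to next device
        device = torch.device(devices[idx])
        layer.to(device)
        used += size
        # Migrate incoming tensors to wherever this layer ended up.
        layer.register_forward_pre_hook(
            lambda module, args, dev=device: tuple(
                a.to(dev) if isinstance(a, torch.Tensor) else a for a in args
            )
        )
```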