vall-e/vall_e (directory listing; latest commit 2023-12-23 16:08:17 -06:00)
emb/          added sampling by speaker group name (might be better to de-emphasize the LibriVox/Audiobooks that are in large numbers, and emphasize the smaller pools), log cleanup    2023-10-16 19:30:38 -05:00
engines/      added torchscale XMOE integration (because Mixtral 8x7B seems very promising and I want to see if it works)    2023-12-20 18:45:58 -06:00
models/       experts weren't forwarded into the constructor (wasted a few days of training garbage)    2023-12-23 16:08:17 -06:00
utils/        added torchscale XMOE integration (because Mixtral 8x7B seems very promising and I want to see if it works)    2023-12-20 18:45:58 -06:00
__init__.py
__main__.py   exposed rolling resp context to the web UI, added passing in language to the inferencing command line    2023-10-12 23:21:01 -05:00
config.py     added torchscale XMOE integration (because Mixtral 8x7B seems very promising and I want to see if it works)    2023-12-20 18:45:58 -06:00
data.py       added torchscale XMOE integration (because Mixtral 8x7B seems very promising and I want to see if it works)    2023-12-20 18:45:58 -06:00
export.py
inference.py  exposed rolling resp context to the web UI, added passing in language to the inferencing command line    2023-10-12 23:21:01 -05:00
plot.py
samplers.py   separated samplers into their own file; don't bother copying the logits back to the GPU after sampling, it's not necessary    2023-10-11 12:25:31 -05:00
train.py      evaluation/validation passes language ID during training (oops)    2023-10-29 12:00:40 -05:00
webui.py      added torchscale XMOE integration (because Mixtral 8x7B seems very promising and I want to see if it works)    2023-12-20 18:45:58 -06:00