vall-e/vall_e (last commit 2024-12-16 18:28:01 -06:00)
emb            KO/ZH model soon (2024-12-15 17:01:14 -06:00)
engines        remove nan checks because they cause problems in distributed training since I'm not syncing between GPUs (and nan losses get ignored anyway with loss scaling) (2024-12-15 09:42:54 -06:00)
models         imagine my disappointment when the epoch finished just for it to throw an exception (2024-12-16 18:28:01 -06:00)
utils          imagine my disappointment when the epoch finished just for it to throw an exception (2024-12-16 18:28:01 -06:00)
__init__.py    Rewrite init (2023-08-02 21:53:35 +00:00)
__main__.py    doc update; added automatic deduction of the language from the given text; also checks whether the input is already phonemized text to allow direct control without being cringe (procrastinating on adding WER/SIM-O) (2024-12-07 22:34:25 -06:00)
config.py      APOLLO tweaks to make it work with deepspeed (2024-12-13 23:03:52 -06:00)
data.py        APOLLO cringe (doesn't want to work with deepspeed) (2024-12-12 00:31:58 -06:00)
demo.py        more fixes for the local engine backend (2024-12-12 14:38:42 -06:00)
export.py      cringe code to convert to LlamaForCausalLM-happy weights + tokenizer dict (still need to write the logic to actually use these weights for proper inferencing) (2024-12-03 10:18:58 -06:00)
inference.py   sort batches to try to reduce the number of padded tokens in batched inference (also commented out adding F5 samples to the demo page because I would have to regenerate them); see the batch-sorting sketch below (2024-12-11 22:45:38 -06:00)
metrics.py     uplifting transformers' WavLM stuff to do speaker verification instead (2024-12-11 19:30:05 -06:00)
plot.py        very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough); see the early-exit sketch below (2024-11-02 11:49:05 -05:00)
samplers.py    sort batches to try to reduce the number of padded tokens in batched inference (also commented out adding F5 samples to the demo page because I would have to regenerate them) (2024-12-11 22:45:38 -06:00)
train.py       remove nan checks because they cause problems in distributed training since I'm not syncing between GPUs (and nan losses get ignored anyway with loss scaling); see the AMP loss-scaling sketch below (2024-12-15 09:42:54 -06:00)
webui.py       logic fixes, I feel like output is better? (also the NAR can have a temperature; I imagine it couldn't before because it was having a causal mask passed to it for the longest time before I caught it a month ago) (2024-12-08 14:52:47 -06:00)
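
The inference.py / samplers.py entries mention sorting batches so that similarly sized samples land together, which reduces pad-to-longest waste. A minimal sketch of that idea, assuming each item carries a token list under a "text" key; the function and key names here are illustrative, not the repo's actual code:

```python
# Sketch of length-sorted batching to reduce padded tokens.
# Names ("text", sort_into_batches, pad_batch) are assumptions for illustration.
from typing import Any

def sort_into_batches(items: list[dict[str, Any]], batch_size: int) -> list[list[dict[str, Any]]]:
    # Sort by sequence length so each batch groups similarly sized samples,
    # keeping the per-batch pad-to-longest overhead small.
    ordered = sorted(items, key=lambda item: len(item["text"]))
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def pad_batch(batch: list[dict[str, Any]], pad_id: int = 0) -> list[list[int]]:
    # Pad every sequence only up to the batch's own maximum length.
    longest = max(len(item["text"]) for item in batch)
    return [item["text"] + [pad_id] * (longest - len(item["text"])) for item in batch]
```

Sorting first means the wasted pad tokens per batch are bounded by the length spread inside one bucket rather than across the whole request list.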
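The plot.py entry describes the "very, very naive" layerskip speculative sampling as just checking whether the current layer's state is good enough. A sketch of that kind of early-exit check, assuming a stack of decoder layers that each return the updated hidden state, plus a shared norm and output head; the entropy threshold and module names are assumptions, not the actual implementation:

```python
# Naive layer-skip early exit: after each layer, project the hidden state through
# the output head and stop once the last-token prediction looks confident enough.
import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_logits(layers, norm, lm_head, hidden, entropy_threshold: float = 1.0):
    for i, layer in enumerate(layers):
        hidden = layer(hidden)
        logits = lm_head(norm(hidden))                 # project the current state to the vocab
        probs = F.softmax(logits[:, -1, :], dim=-1)    # distribution for the last position
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
        if bool((entropy < entropy_threshold).all()):  # "good enough" => exit early
            return logits, i + 1                       # also report how many layers ran
    return logits, len(layers)
```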
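The engines / train.py entries lean on the fact that, with loss scaling, an explicit nan check is redundant: the scaler already skips the optimizer step when non-finite gradients appear. A generic PyTorch AMP sketch of that behavior (not the repo's engine code; the model is assumed to return a scalar loss):

```python
# Generic PyTorch AMP loop: GradScaler.step() skips the optimizer update when it
# finds inf/nan gradients, so a nan loss never reaches the weights.
import torch

def train_step(model, batch, optimizer, scaler: torch.cuda.amp.GradScaler):
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(**batch)         # assumes the model returns a scalar loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)            # silently skipped if grads are non-finite
    scaler.update()                   # shrinks the loss scale after a skipped step
    return loss.detach()
```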