Directory listing: vall-e/vall_e (latest commit 2024-12-12 13:37:38 -06:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| `emb/` | store metrics and only recalculate them if the output file is newer than the metrics file | 2024-12-11 20:55:43 -06:00 |
| `engines/` | tweaks for the local engine orchestrator (that I never caught since I always used the deepspeed backend) | 2024-12-12 13:37:38 -06:00 |
| `models/` | APOLLO cringe (doesn't want to work with deepspeed) | 2024-12-12 00:31:58 -06:00 |
| `utils/` | APOLLO cringe (doesn't want to work with deepspeed) | 2024-12-12 00:31:58 -06:00 |
| `__init__.py` | | |
| `__main__.py` | doc update, added automatically deducing language from a given text, also checks if the input is already phonemized text to allow direct control without being cringe (procrastinating adding WER/SIM-O) | 2024-12-07 22:34:25 -06:00 |
| `config.py` | lol | 2024-12-11 19:10:32 -06:00 |
| `data.py` | APOLLO cringe (doesn't want to work with deepspeed) | 2024-12-12 00:31:58 -06:00 |
| `demo.py` | sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them) | 2024-12-11 22:45:38 -06:00 |
| `export.py` | cringe code to convert to LlamaForCausalLM-happy weights + tokenizer dict (still need to write logic to actually use these weights for proper inferencing) | 2024-12-03 10:18:58 -06:00 |
| `inference.py` | sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them) | 2024-12-11 22:45:38 -06:00 |
| `metrics.py` | uplifting transformer's WavLM stuff to do speaker verification instead | 2024-12-11 19:30:05 -06:00 |
| `plot.py` | very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough) | 2024-11-02 11:49:05 -05:00 |
| `samplers.py` | sort batches to try and reduce number of padded tokens in batched inference (also commented out F5 samples getting added to the demo page because I would have to regenerate them) | 2024-12-11 22:45:38 -06:00 |
| `train.py` | ugh (batchmean actually expects batch=seq_len, and not the actual batch) | 2024-12-07 12:39:01 -06:00 |
| `webui.py` | logic fixes, I feel like output is better? (also NAR can have a temperature, I imagine it couldn't because it was having a causal masked passed to it for the longest time before I caught it a month ago) | 2024-12-08 14:52:47 -06:00 |
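The "sort batches to try and reduce number of padded tokens" commits (touching `demo.py`, `inference.py`, and `samplers.py`) refer to a standard batching trick: ordering samples by length before chunking them into batches keeps lengths within each batch similar, so less padding is wasted when every sample is padded to the batch's longest member. A minimal sketch of the idea, with all names hypothetical rather than taken from the repo's actual code:

```python
# Illustrative sketch of length-sorted batching; not the repo's implementation.

def sort_into_batches(samples, batch_size):
    """Group samples into batches of similar length (shortest first)."""
    ordered = sorted(samples, key=len)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

def padding_waste(batches):
    """Padding tokens needed if each batch is padded to its longest sample."""
    return sum(
        sum(max(len(s) for s in batch) - len(s) for s in batch)
        for batch in batches
    )

# Mixing short and long samples in one batch wastes padding; sorting avoids it.
samples = ["a" * n for n in (3, 50, 4, 48, 5, 47)]
naive_batches = [samples[i:i + 2] for i in range(0, len(samples), 2)]
sorted_batches = sort_into_batches(samples, batch_size=2)
assert padding_waste(sorted_batches) < padding_waste(naive_batches)
```

The tradeoff is that sorting reorders outputs relative to the inputs, so the results usually need to be mapped back to their original indices afterward.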