vall-e/vall_e (last updated 2024-12-10 21:00:51 -06:00)
Directories:
  emb           Added CER, transcription/similarity model args in demo  (2024-12-10 21:00:51 -06:00)
  engines       added WER/SIM-O metrics, added APOLLO but I need to test it  (2024-12-10 20:13:21 -06:00)
  models        oops  (2024-12-08 15:24:21 -06:00)
  utils         Added CER, transcription/similarity model args in demo  (2024-12-10 21:00:51 -06:00)

Files:
  __init__.py
  __main__.py   doc update, added automatically deducing language from a given text, also checks if the input is already phonemized text to allow direct control without being cringe (procrastinating adding WER/SIM-O)  (2024-12-07 22:34:25 -06:00)
  config.py     added WER/SIM-O metrics, added APOLLO but I need to test it  (2024-12-10 20:13:21 -06:00)
  data.py       added WER/SIM-O metrics, added APOLLO but I need to test it  (2024-12-10 20:13:21 -06:00)
  demo.py       Added CER, transcription/similarity model args in demo  (2024-12-10 21:00:51 -06:00)
  export.py     cringe code to convert to LlamaForCausalLM-happy weights + tokenizer dict (still need to write logic to actually use these weights for proper inferencing)  (2024-12-03 10:18:58 -06:00)
  inference.py  added WER/SIM-O metrics, added APOLLO but I need to test it  (2024-12-10 20:13:21 -06:00)
  metrics.py    Added CER, transcription/similarity model args in demo  (2024-12-10 21:00:51 -06:00)
  plot.py       very, very naive layerskip speculative sampling (it just checks if the current layer's state is good enough)  (2024-11-02 11:49:05 -05:00)
  samplers.py   cleaned up classifier-free guidance logit processing (in order to try and cope with a bad nar-len model)  (2024-11-19 10:30:05 -06:00)
  train.py      ugh (batchmean actually expects batch=seq_len, and not the actual batch)  (2024-12-07 12:39:01 -06:00)
  webui.py      logic fixes, I feel like output is better? (also NAR can have a temperature; I imagine it couldn't because it was having a causal mask passed to it for the longest time before I caught it a month ago)  (2024-12-08 14:52:47 -06:00)
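Several of the commit messages above reference the newly added WER/CER and SIM-O evaluation metrics. As background, word/character error rate is the Levenshtein (edit) distance between a transcription of the generated audio and the reference text, normalized by the reference length, while SIM-O is typically the cosine similarity between speaker embeddings of the output and the original prompt audio. The sketch below is a minimal, self-contained illustration of WER/CER only; it is not the repository's metrics.py implementation, and the function names are placeholders.

    # Minimal WER/CER sketch (illustrative only; not the repository's metrics.py).
    def _edit_distance(ref, hyp):
        """Levenshtein distance between two token sequences."""
        prev = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, start=1):
            curr = [i] + [0] * len(hyp)
            for j, h in enumerate(hyp, start=1):
                curr[j] = min(
                    prev[j] + 1,             # deletion
                    curr[j - 1] + 1,         # insertion
                    prev[j - 1] + (r != h),  # substitution
                )
            prev = curr
        return prev[-1]

    def wer(reference, hypothesis):
        """Word error rate: word-level edit distance over the reference word count."""
        ref_words, hyp_words = reference.split(), hypothesis.split()
        return _edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)

    def cer(reference, hypothesis):
        """Character error rate: character-level edit distance over the reference length."""
        return _edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)

    if __name__ == "__main__":
        ref = "the quick brown fox"
        hyp = "the quick brown dog"
        print(f"WER: {wer(ref, hyp):.3f}")  # 1 substituted word out of 4 -> 0.250
        print(f"CER: {cer(ref, hyp):.3f}")  # 2 substituted characters out of 19

The commit messages also indicate that the transcription model (used for WER/CER) and the speaker-similarity model (used for SIM-O) are configurable through demo arguments; the exact flags are not shown in this listing.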