vall-e/vall_e (latest commit: 2024-04-21 14:58:04 -05:00)

Directory listing, one entry per line as "name: last commit message (commit date)"; trailing slashes mark subdirectories.
emb/: it slipped my mind that technically DAC can be used at any sample rate, since it models waveforms; make it a config YAML option to allow this behavior (2024-04-19 18:36:54 -05:00)
engines/: deprecate sole AR/NAR model by only keeping the AR+NAR (the beauty of no one using this is that I can break compat as much as I want), add tone token for when I classify my dataset with tone/emotion in the future, some other things (2024-04-15 19:54:32 -05:00)
ext/: backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNets torchscale retnet for shits and grins, etc. (2024-04-16 10:02:31 -05:00)
models/: forgot to fix up the test trainer (2024-04-21 14:58:04 -05:00)
utils/: wrapper fixes (2024-04-16 10:19:02 -05:00)
__init__.py: Rewrite init (2023-08-02 21:53:35 +00:00)
__main__.py: deprecate sole AR/NAR model by only keeping the AR+NAR (the beauty of no one using this is that I can break compat as much as I want), add tone token for when I classify my dataset with tone/emotion in the future, some other things (2024-04-15 19:54:32 -05:00)
config.py: dataset preparation script updates, caved and am using HF tokenizer now (2024-04-21 14:49:18 -05:00)
data.py: dataset preparation script updates, caved and am using HF tokenizer now (2024-04-21 14:49:18 -05:00)
export.py: cleanup, use deepspeed inferencing pathway if requested (2023-10-09 15:24:04 -05:00)
inference.py: dataset preparation script updates, caved and am using HF tokenizer now (2024-04-21 14:49:18 -05:00)
plot.py: deprecate sole AR/NAR model by only keeping the AR+NAR (the beauty of no one using this is that I can break compat as much as I want), add tone token for when I classify my dataset with tone/emotion in the future, some other things (2024-04-15 19:54:32 -05:00)
samplers.py: separated samplers into its own file, don't bother copying the logits back to the GPU after sampling, it's not necessary (2023-10-11 12:25:31 -05:00) [see the sketch after this listing]
train.py: logger broke for some reason, added flag to just tqdm.write instead, make cfg.bitsandbytes.bitnet==True yamls denoted since I'm sure they're not interoperable (2024-03-01 10:32:35 -06:00)
webui.py: backwards compat for old YAMLs with models, option to set flash attention 2 for Llama (and derivatives), included syncdoth/RetNets torchscale retnet for shits and grins, etc. (2024-04-16 10:02:31 -05:00)
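The samplers.py note ("don't bother copying the logits back to the GPU after sampling, it's not necessary") refers to a common inference-time shortcut: once a token id has been sampled, only that id is consumed downstream, so the full logits tensor never needs to travel back to the device. The sketch below is not the repository's code; it is a minimal, generic PyTorch illustration of that idea, and the names `sample_token`, `model`, and `tokens` are hypothetical placeholders.

```python
import torch

def sample_token(logits: torch.Tensor, temperature: float = 1.0) -> int:
    """Hypothetical helper: sample one token id from a [vocab]-sized logits tensor.

    The logits may live on the GPU; sampling happens on a CPU copy, and only the
    resulting integer id is returned. Copying the logits back to the GPU after
    sampling would be wasted work, since later steps only need the sampled id.
    """
    # One-way trip: detach and move the logits to the host for sampling.
    logits = logits.detach().float().cpu() / max(temperature, 1e-6)
    probs = torch.softmax(logits, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())

# Usage sketch (placeholders, not the repo's API):
# logits = model(tokens)[-1]            # last-step logits, e.g. on CUDA
# next_id = sample_token(logits)        # plain Python int, stays on the host
# tokens = torch.cat([tokens, torch.tensor([next_id], device=tokens.device)])
```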