b0dba9db07 | this may bite me in the ass | 2025-03-17 21:46:50 -05:00
2053580838 | updated dataloader to hopefully reduce RAM usage | 2025-03-15 13:14:37 -05:00
9cfbf94b1c | config-ify the len_loss_factor | 2025-03-14 20:30:48 -05:00
ca8cc15271 | more tweaks (vall_e.webui --yaml still breaks things, --model needs to deduce what audio backend now that im supporting other ones again // added easy top-sampler settings back for new implementation) | 2025-03-14 20:18:25 -05:00
6ee505cffd | fixed dac | 2025-03-12 23:17:27 -05:00
2ccf1b5740 | actually do duration prediction | 2025-03-11 22:14:54 -05:00
5c512717a6 | len prediction for new model (and remove logit normalization since it kills inferencing) | 2025-03-11 20:33:09 -05:00
5670fcb23f | hopefully the final tweaks needed for this bastard of a model | 2025-03-10 20:59:11 -05:00
6cea840710 | oops | 2025-03-07 18:57:25 -06:00
dbd34b6430 | add specialized calc_loss because schizo | 2025-03-07 18:44:11 -06:00
8d848ed549 | handle case of dropping cond for segment mask | 2025-03-07 14:11:58 -06:00
6afc2b7526 | gut feeling to change the attention mask | 2025-03-07 13:51:59 -06:00
2dd80a03ff | stuff for interfacing with the loss scaler value (because I want to cap it) | 2025-03-06 17:07:29 -06:00
5cd71ef238 | QoL so I can stop having to manually inject different configs | 2025-03-06 14:48:14 -06:00
1d3290b023 | could have sworn this worked before, might have broke it when i decoupled from omegaconf | 2025-03-01 19:30:26 -06:00
ddc49c89c5 | the learning rate scheduler pill is a tough pill to swallow | 2025-02-28 22:12:19 -06:00
a174c33db6 | a gorillionth time's the charm (aka: the encoder/decoder pill is a tough pill to swallow) | 2025-02-28 17:56:50 -06:00
f4f435d7f5 | when you already had these ideas to stabilize training but you just ignored them | 2025-02-27 23:39:20 -06:00
2ea387c08a | segregated experimental changes into its own streamlined file to avoid breaking the existing model, and it can pivot to the cleaned up code if it actually works (nothing is working) | 2025-02-26 21:26:13 -06:00
8f5a3997bd | another experimental flag | 2025-02-24 13:50:41 -06:00
ab0abd2b12 | fixes fixes fixes (a quarter of my recently processed audio returned zero'd tensors......) | 2025-02-22 09:07:33 -06:00
a65c8144f4 | with the amount of tweaks I keep making I could have probably had the nvidia/audio-codec-44khz model realized already...... | 2025-02-13 18:38:40 -06:00
e8f182b634 | cleaned up loss calc code (it REALLY hates ignore_loss_for_inputs, but is fine with splitting with loss factors) | 2025-02-13 09:35:27 -06:00
04fef5dad5 | agony | 2025-02-12 00:18:24 -06:00
e5916ea519 | for my sanity it seems having extraneous tokens in the embedding/classifier has the loss/acc a little higher than it should | 2025-02-11 14:47:35 -06:00
7592befc53 | updated vall_e.emb.process to allow for batched processing, some typo fixes (it's painfully slow on my 7900XTX...) | 2025-02-05 21:13:20 -06:00
79c504c278 | cleaned up encode/decode functions to make them a little more coherent, added option to batch encode/decode (would have been very nice in the past, but this should speed things up for me when i fall for the latest meme codec) | 2025-02-05 20:54:31 -06:00
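The batched encode/decode mentioned in 79c504c278 generally amounts to padding variable-length waveforms to a common length before handing them to the codec in one call. A minimal sketch of that padding step, assuming plain 1D waveform tensors (the `batch_waveforms` helper is hypothetical, not the repo's API):

```python
import torch
import torch.nn.functional as F

def batch_waveforms(waveforms: list[torch.Tensor]) -> tuple[torch.Tensor, torch.Tensor]:
    """Zero-pad 1D waveforms to the longest clip and stack into (B, T_max)."""
    lengths = torch.tensor([wav.shape[-1] for wav in waveforms])
    max_len = int(lengths.max())
    padded = torch.stack([
        F.pad(wav, (0, max_len - wav.shape[-1]))  # pad the right side only
        for wav in waveforms
    ])
    # lengths are kept so the resulting codes can be trimmed per utterance
    return padded, lengths

wavs = [torch.randn(16000), torch.randn(24000), torch.randn(8000)]
batch, lengths = batch_waveforms(wavs)
# batch.shape == (3, 24000); lengths == [16000, 24000, 8000]
```

The batch can then be passed through the codec's encoder once instead of looping per clip, which is where the speedup comes from.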
bb2ebe1ca2 | fixed issues that may rise from updating transformers with attention, added nvidia/audio-codec-44khz backend support (by gutting everything necessary because I do NOT want to install more dependencies) | 2025-02-04 20:30:07 -06:00
b445f4abb6 | experimental | 2025-01-05 19:05:00 -06:00
2e6a7625e4 | experimental | 2025-01-05 12:47:03 -06:00
91caf00212 | ugh | 2024-12-20 17:13:37 -06:00
53230efd74 | changed prompt_inject_noise to prompt_inject_noise_p so I can have another reason to do this post-training | 2024-12-19 19:28:50 -06:00
09804ecc16 | APOLLO tweaks to make it work with deepspeed | 2024-12-13 23:03:52 -06:00
6468e5d124 | lol | 2024-12-11 19:10:32 -06:00
8568a93dad | added WER/SIM-O metrics, added APOLLO but I need to test it | 2024-12-10 20:13:21 -06:00
5d80a2d0d4 | fixed NAR-len issues with non-english maybe (langs weren't being passed), added interface to inference in batches through tts.batched_inference (no support for rolling context/prefixes because there's no way to do that), demo page uses batched inferencing now | 2024-12-07 19:21:05 -06:00
1f54bf5b40 | revert sageattn back to optional dependency because it's not on windows, force resize_modules on by default because I broke something | 2024-12-07 17:09:39 -06:00
f97e8b0c7f | ACTUALLY do KD-loss because of an oversight with masked_select outputting 1D tensors that get softmax'd in total | 2024-12-07 09:52:51 -06:00
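The oversight f97e8b0c7f describes is a known `torch.masked_select` footgun: it always returns a flat 1D tensor, so a softmax over its result normalizes across every selected logit of every token at once instead of per token. An illustrative reconstruction of the pitfall (assumed shapes, not the repo's actual code); boolean indexing with a `(B, T)` mask preserves the vocab axis:

```python
import torch
import torch.nn.functional as F

B, T, V = 2, 4, 8
logits = torch.randn(B, T, V)
# which positions contribute to the loss
mask = torch.tensor([[True, False, True, True],
                     [False, True, False, True]])

# Wrong: masked_select flattens to 1D, so softmax normalizes
# jointly over all 5 tokens * 8 vocab entries.
flat = torch.masked_select(logits, mask.unsqueeze(-1))  # shape (40,)
bad = F.softmax(flat, dim=-1)                           # sums to 1 over everything

# Right: boolean indexing keeps one row per selected token,
# so the softmax is taken along the vocab axis per token.
rows = logits[mask]                                     # shape (5, 8)
good = F.log_softmax(rows, dim=-1)                      # each row sums to 1 in prob space
```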
34a66e1052 | agnostified KD | 2024-12-06 23:53:46 -06:00
42fafbaaca | actually fixed knowledge distillation because of errant -inf logits causing problems and needing to be filtered (and splitting text language / output audio language because it helps) | 2024-12-06 21:55:20 -06:00
23d402bf01 | added knowledge distillation in the trainer (sadly it is not agnostic because of the grave mistake of further processing the batch within the forward pass, so subsequent calls do not match......) | 2024-12-05 23:05:52 -06:00
147219a5e0 | huge oversight in the attention masking......... (i realized I have not been providing a non-causal mask to non-causal tasks) | 2024-11-22 13:44:43 -06:00
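The distinction behind 147219a5e0: autoregressive tasks need a lower-triangular (causal) mask, while non-causal tasks (the NAR passes) should attend over the whole sequence, limited only by padding. A minimal sketch of the two mask shapes, assuming a boolean `(B, T, T)` convention where True means "may attend" (the `build_attention_mask` helper is illustrative, not the repo's function):

```python
import torch

def build_attention_mask(lengths: torch.Tensor, max_len: int, causal: bool) -> torch.Tensor:
    """Return a (B, T, T) boolean mask; True = query may attend to key."""
    positions = torch.arange(max_len)
    valid = positions.unsqueeze(0) < lengths.unsqueeze(1)  # (B, T) non-padding tokens
    mask = valid.unsqueeze(1) & valid.unsqueeze(2)         # (B, T, T) pad-aware, full attention
    if causal:
        # restrict queries to keys at or before their own position
        mask = mask & torch.tril(torch.ones(max_len, max_len, dtype=torch.bool))
    return mask

lengths = torch.tensor([3, 2])
ar_mask = build_attention_mask(lengths, 4, causal=True)
nar_mask = build_attention_mask(lengths, 4, causal=False)
# nar_mask lets valid positions attend to future valid positions; ar_mask does not.
```

Feeding the causal variant to a non-causal task silently starves each position of its rightward context, which matches the "huge oversight" described above.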
24d888c47c | temporarily dropping support for xformers because it's breaking when using an attention mask (which i dont remember commenting it out when being passed), default to not use wandb because it's being a pain when doing tests and not actual sessions | 2024-11-22 11:29:12 -06:00
3fc0540f49 | m | 2024-11-21 15:07:46 -06:00
1a73ac6a20 | I cannot believe it's not actually called Wand DB (added wandb logging support since I think it would have been a much better way to look at my metrics) | 2024-11-20 16:10:47 -06:00
db64e6cb59 | dependency updates (gradio 5.x now works on my machine) | 2024-11-20 12:33:01 -06:00
b1369e7824 | better modality selection (pick AR+NAR by default for the ar+nar model, pick NAR-len by default for the nar-len model), lowered default CFG because it makes the AR+NAR output sped up (but can't be too low since it's required for the NAR-len) | 2024-11-19 18:51:17 -06:00
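For context on the CFG knob tuned in b1369e7824: classifier-free guidance is typically applied by blending conditional and unconditional (null-prompt) predictions, with the scale controlling how strongly the condition is enforced. A generic sketch of the usual formulation (the repo's exact variant may differ):

```python
import torch

def apply_cfg(cond_logits: torch.Tensor, null_logits: torch.Tensor, scale: float) -> torch.Tensor:
    # scale == 1.0 recovers the conditional logits unchanged;
    # larger values push harder toward the conditioned distribution.
    return null_logits + scale * (cond_logits - null_logits)

cond = torch.tensor([1.0, 2.0, 3.0])
null = torch.tensor([1.0, 1.0, 1.0])
# apply_cfg(cond, null, 1.0) == cond; apply_cfg(cond, null, 2.0) == [1., 3., 5.]
```

Too high a scale exaggerates the conditioned prediction (the "sped up" AR+NAR output noted above); too low and the NAR-len pass loses the guidance it depends on.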
190a917b3e | I did it. | 2024-11-19 12:24:33 -06:00
4a71981456 | normalize sampler index by batch size (if not using batched sampler), add option to cap out utterances for a speaker, some other things | 2024-11-18 12:46:50 -06:00
6cfdf94bf9 | swap priority to use nar-len if available, added notes | 2024-11-18 09:40:04 -06:00