Commit Graph

94 Commits

Author SHA1 Message Date
mrq 8641c87611 nothing could go wrong part 2 (reverted and rewrote commits since there was a nasty regression) 2025-03-25 23:06:16 -05:00
mrq bee2688dea ugh 2025-03-15 16:50:21 -05:00
mrq 91ede71cf0 ugh 2025-03-06 17:19:27 -06:00
mrq a30dffcca7 wandb additions (to-do eventually, upload samples as artifacts) 2025-03-06 15:44:40 -06:00
mrq 06ef3daf3c require a minimum duration of 1 second for training because of my slop code auto-transposing that I don't wanna fix right now 2025-02-26 22:00:33 -06:00
mrq 2ea387c08a segregated experimental changes into their own streamlined file to avoid breaking the existing model; it can pivot to the cleaned-up code if it actually works (nothing is working) 2025-02-26 21:26:13 -06:00
mrq 3330b5bb00 maybe fix NaNs being thrown for immature models at fp16 for training evals 2025-02-24 18:25:54 -06:00
mrq 6e7b269147 ugh 2025-02-24 13:54:21 -06:00
mrq 8f5a3997bd another experimental flag 2025-02-24 13:50:41 -06:00
mrq 3ab11bdc7b oops 2025-01-05 23:53:17 -06:00
mrq b445f4abb6 experimental 2025-01-05 19:05:00 -06:00
mrq 2e6a7625e4 experimental 2025-01-05 12:47:03 -06:00
mrq 4800e7179a remove NaN checks because they cause problems in distributed training since I'm not syncing between GPUs (and NaN losses get ignored anyway with loss scaling; see note 2 below) 2024-12-15 09:42:54 -06:00
mrq 218d0e29fd ugh (batchmean actually expects batch=seq_len, not the actual batch size; see note 1 below) 2024-12-07 12:39:01 -06:00
mrq 61ed662856 ACTUALLY actually fix KD-loss (the -inf in the logits was caused by cringecode) 2024-12-07 12:31:54 -06:00
mrq f97e8b0c7f ACTUALLY do KD-loss properly, because of an oversight with masked_select outputting 1D tensors that get softmax'd as a whole (see note 1 below) 2024-12-07 09:52:51 -06:00
mrq 34a66e1052 agnostified KD 2024-12-06 23:53:46 -06:00
mrq 23d402bf01 added knowledge distillation to the trainer (sadly it is not agnostic because of the grave mistake of further processing the batch within the forward pass, so subsequent calls do not match...) 2024-12-05 23:05:52 -06:00
mrq 88d840218d set the default cfg strength to 3.0 since the reference model was updated (see note 3 below) 2024-11-17 10:23:40 -06:00
mrq 23fdba0c98 tweaks and changes 2024-11-16 15:49:06 -06:00
mrq f7b8b1e825 dropped the subtrain dataloader since it's useless to duplicate 2024-11-11 17:00:49 -06:00
mrq 48490757da fixes 2024-11-10 20:37:50 -06:00
mrq 9def34cd66 lol 2024-11-10 12:48:41 -06:00
mrq a9d2faf2d7 all I can do for now while I wait for the model to (re)train for pure NAR 2024-11-09 22:57:34 -06:00
mrq d606a693ff eval fix for nar-len 2024-11-06 23:14:16 -06:00
mrq aee08b7307 changed the layerskip float16 training warning (since it didn't seem to fry on my 4xV100 system) 2024-11-03 09:58:29 -06:00
mrq 62fe5b0943 ughh 2024-11-01 22:36:48 -05:00
mrq fb8faa295b actually, float16 (+AMP) and layerskip are bad and will kill the model... 2024-11-01 18:36:44 -05:00
mrq a96f5aee32 adjusted how I want to pass eval kwargs 2024-10-25 20:38:09 -05:00
mrq 8920e5e86b actually make beam_width in the webUI work 2024-10-22 22:06:22 -05:00
mrq 2ea978f318 added --eval-random-text-prompts to use random text prompts for the eval pass, added --random-prompts for the demo page and --lora to use a sample with the LoRA disabled, probably finally fixed the validation dataloader breaking on eval 2024-10-10 13:40:25 -05:00
mrq 039482a48e don't do eval on stt because it's so slow and I don't even bother doing any metrics against it anyway (to-do: make this a flag) 2024-09-26 18:56:57 -05:00
mrq fa93061b3e more fixes, moved sampler state dict to a better place, eval works again 2024-09-06 16:59:56 -05:00
mrq d33a906119 cleanup for AR_NAR inferencing to allow both TTS and STT tasks simultaneously (need to have training eval do this too) 2024-09-06 14:30:12 -05:00
mrq 32287710a2 moved prints to use logger, edited readme (fused_attn doesn't seem stable for training) 2024-08-29 13:27:16 -05:00
mrq ab673e0426 add cap for NAR-len training, to avoid any weird cases in early training where it'll just mess up and generate long lengths 2024-08-03 21:00:32 -05:00
mrq ce8bb1e4f7 sanity cleanups with weird off-by-one-ness, cleaned up and validated that vall_e.models.experimental works again 2024-07-27 15:36:05 -05:00
mrq 75b04686f8 added prom-less training / inferencing, some other things 2024-07-22 19:36:07 -05:00
mrq 692d09f9c1 eval/validation fix for SpeechX tasks 2024-07-19 09:16:37 -05:00
mrq 97e768601c re-introducing SpeechX tasks (need to validate them all; everything works with base tts anyway) 2024-07-18 16:16:14 -05:00
mrq f770467eb3 stuff 2024-07-01 18:13:29 -05:00
mrq bc2a6fa756 sanity cleanup: moved experimental features under their own thing 2024-06-30 10:37:33 -05:00
mrq a8718d35a4 nasty bandaid because some of my DAC dataset only has 8 RVQ levels instead of the full 9 2024-06-29 10:16:37 -05:00
mrq dd40463803 limit eval size because the training batch size seems to be used for the eval dataloader, somehow (bandaid) 2024-06-29 09:11:28 -05:00
mrq 1a392b69f6 local training backend should be a bit more aware of variable batch sizes, maybe 2024-06-28 22:39:05 -05:00
mrq 234f9efc6e ugh 2024-06-09 11:39:43 -05:00
mrq 132a02c48b sanity cleanup; back up the config YAML for each log file 2024-06-09 11:22:52 -05:00
mrq ead3e2f0cb ugh 2024-06-08 16:14:57 -05:00
mrq b072f9b96b fixes 2024-06-08 16:01:34 -05:00
mrq 7d6fff24f9 un-tensor'd the quant_level marker since it doesn't need to be one (I forgot why I had it as one, but everything that needs it as a tensor already converts it) 2024-06-07 20:46:22 -05:00