Commit Graph

183 Commits

Author SHA1 Message Date
mrq
b922f35b6b added documentation on how these new sampling parameters are very iffy, and how you really need to know what you are doing to use them, because this is audio generation and not text generation 2023-09-08 20:43:36 -05:00
mrq
14c78bae39 added lots of sampling options (top-k/top-p, repetition penalty, length penalty) 2023-09-08 20:30:54 -05:00
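For context, a minimal sketch of how these samplers typically compose over a next-token logits vector; this is illustrative, not the repo's actual code, and length penalty (usually a bias on the stop token as the sequence grows) is omitted:

```python
import torch

def sample_token(logits, prev_tokens, top_k=0, top_p=1.0, repetition_penalty=1.0):
    """logits: 1-D tensor of next-token logits; prev_tokens: list of emitted ids."""
    logits = logits.clone()
    # repetition penalty (CTRL-style): push already-emitted tokens' logits down
    if repetition_penalty != 1.0 and prev_tokens:
        prev = torch.tensor(sorted(set(prev_tokens)), device=logits.device)
        picked = logits[prev]
        logits[prev] = torch.where(picked < 0, picked * repetition_penalty,
                                   picked / repetition_penalty)
    # top-k: mask everything below the k-th largest logit
    if top_k > 0:
        kth = torch.topk(logits, min(top_k, logits.size(-1))).values[-1]
        logits[logits < kth] = float("-inf")
    # top-p / nucleus: keep the smallest sorted prefix whose mass exceeds p
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        to_remove = cum_probs > top_p
        to_remove[1:] = to_remove[:-1].clone()  # shift so the boundary token survives
        to_remove[0] = False                    # always keep the top token
        logits[sorted_idx[to_remove]] = float("-inf")
    return torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
```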
mrq
f69aad9c65 some day I'll get it right 2023-09-08 15:36:26 -05:00
mrq
b2907ae7e0 seems that my PromEmbedding/RespEmbedding doesn't actually work all that well; naively using dedicated MultiEmbeddings for AR/NAR in the monolithic model is the best way to go 2023-09-08 01:03:24 -05:00
mrq
c47fc3274e added backwards compat flag 2023-09-07 17:12:17 -05:00
mrq
ab5134f385 tweaks and fixes 2023-09-07 17:08:38 -05:00
mrq
b2c2dec291 added homebrewed per-RVQ-bin embedding solutions 2023-09-07 16:48:02 -05:00
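A hedged guess at what a per-RVQ-bin embedding amounts to, assuming the common pattern of one embedding table per quantizer level with the per-level embeddings summed; the class and names here are hypothetical, not the repo's:

```python
import torch
from torch import nn

class PerLevelEmbedding(nn.Module):
    """Hypothetical: one embedding table per RVQ level, summed into one vector."""
    def __init__(self, n_tokens: int, n_levels: int, d_model: int):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(n_tokens, d_model) for _ in range(n_levels))

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: [batch, seq_len, n_levels] integer RVQ codes
        return sum(emb(codes[..., level]) for level, emb in enumerate(self.embs))
```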
mrq
e7a67410d1 oops 2023-09-07 09:14:03 -05:00
mrq
712808494f added support for optional prodigy optimizer (https://github.com/konstmish/prodigy) although it consumes a lot more VRAM per parameter 2023-09-06 20:33:16 -05:00
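Prodigy drops into a training loop like any PyTorch optimizer; a minimal sketch, assuming `pip install prodigyopt` and an existing `model`:

```python
from prodigyopt import Prodigy  # https://github.com/konstmish/prodigy

# Prodigy estimates its own step size, so lr is typically left at 1.0; its extra
# per-parameter state is where the additional VRAM the message mentions goes
optimizer = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01)
```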
mrq
7ce06432fd fixed the AR+NAR dual model, the resp_emb has to be split up (classifier might too) 2023-09-06 19:33:39 -05:00
mrq
100ca6b7d0 added option to use SGD optimizer through the YAML, added option to pass in additional optimizer parameters through the YAML, added experimental unified AR+NAR model (does not seem fruitful in testing) 2023-09-06 18:58:35 -05:00
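A hypothetical sketch of how an optimizer choice plus passthrough parameters could be routed from the YAML; the keys and values are illustrative, not the repo's actual schema:

```python
import yaml
import torch

# illustrative config snippet; key names are made up, not the repo's schema
cfg = yaml.safe_load("""
optimizer: sgd
optimizer_params:
  momentum: 0.9
  nesterov: true
""")

OPTIMIZERS = {"sgd": torch.optim.SGD, "adamw": torch.optim.AdamW}
optimizer = OPTIMIZERS[cfg["optimizer"]](
    model.parameters(),                 # assumes an existing `model`
    lr=1.0e-4,
    **cfg.get("optimizer_params", {}),  # extra kwargs pass straight through
)
```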
mrq
451726fdd5 added ability to disable activation checkpointing through the YAML (it is very VRAM intensive at double layer size) 2023-09-05 15:38:21 -05:00
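A generic sketch of what such a flag toggles, assuming standard PyTorch activation checkpointing (recompute activations during backward instead of storing them, trading compute for VRAM):

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class Stack(nn.Module):
    """Illustrative layer stack with a switch for activation checkpointing."""
    def __init__(self, layers: nn.ModuleList, use_checkpointing: bool = True):
        super().__init__()
        self.layers = layers
        self.use_checkpointing = use_checkpointing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            if self.use_checkpointing and self.training:
                # recompute this layer's activations in backward instead of storing them
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x
```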
mrq
143aee7526 removed dedicated interleaved AR code 2023-09-03 22:47:03 -05:00
mrq
2f9cd0842f merged dedicated interleaved AR code with the normal AR code 2023-09-03 22:46:08 -05:00
mrq
3a6bd50322 haha 2023-09-03 21:36:58 -05:00
mrq
c56ce033d9 work on an interleaved AR (spoiler: it does not work) 2023-09-03 21:27:58 -05:00
mrq
8a6c203277 added per-speaker samplers 2023-09-03 21:27:13 -05:00
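A minimal sketch of one way a per-speaker sampler can work: draw a speaker uniformly first, then one of that speaker's utterances, so speakers with many clips don't dominate a batch (names here are hypothetical):

```python
import random

def per_speaker_sampler(paths_by_speaker: dict[str, list[str]]):
    """Yield utterance paths with every speaker weighted equally."""
    speakers = list(paths_by_speaker)
    while True:
        speaker = random.choice(speakers)               # uniform over speakers...
        yield random.choice(paths_by_speaker[speaker])  # ...then over their clips
```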
mrq
2f06166ddd cleanups 2023-09-01 21:33:51 -05:00
mrq
e40c0d34a0 somewhat got recurrent forward working (it's as accurate as chunkwise forward: it's not accurate at all), added option to use AMP instead of blanket-setting the weights' dtype 2023-09-01 20:58:29 -05:00
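To illustrate the contrast: blanket-casting makes every weight fp16, while AMP keeps weights in fp32 and autocasts per-op, with a grad scaler to avoid fp16 underflow. A generic PyTorch sketch, assuming an existing `model`, `loader`, and `optimizer`:

```python
import torch

# blanket approach: model = model.half()  (weights, activations, grads all fp16)

# AMP approach: weights stay fp32; eligible ops autocast to fp16 per-call
scaler = torch.cuda.amp.GradScaler()
for batch in loader:
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(batch)
    scaler.scale(loss).backward()  # scale the loss so fp16 grads don't underflow
    scaler.step(optimizer)
    scaler.update()
```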
mrq
2bc2d08b09 (need to verify) added ability to modify model size and a config bool to align with VALL-E continuous' methodology 2023-09-01 17:19:34 -05:00
mrq
165a1154e0 Undo naive=False test flag; this shouldn't have made its way in 2023-08-26 22:00:43 -05:00
mrq
78378ed1ce overhauled dataloading code to be marginally faster, mostly cleaned up, and can leverage a metadata json to help things out 2023-08-26 19:53:23 -05:00
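Purely as a guess at the shape of such a metadata JSON: precomputed per-utterance facts (the keys here are invented) that let the dataloader filter and bucket without touching every file:

```python
import json

# writing: cache per-utterance facts once, ahead of time
metadata = {
    "speaker_a/utt_0001": {"duration": 3.2, "text_len": 48},
    "speaker_a/utt_0002": {"duration": 1.1, "text_len": 15},
}
with open("metadata.json", "w") as f:
    json.dump(metadata, f)

# reading: filter to usable utterances without opening any audio
with open("metadata.json") as f:
    metadata = json.load(f)
usable = [k for k, m in metadata.items() if 1.0 <= m["duration"] <= 12.0]
```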
mrq
16e0020901 disabled chunkwise_recurrent for 2x speed gains (I suppose it has been working the entire time, but I have not been properly grabbing things, and this might explain why the output is bad) 2023-08-25 19:50:19 -05:00
mrq
2d1a9f10c0 nightmare of spaghetti that might break compat; mechanism to increase RVQ bins of an existing model without retraining, keeps sampled proms/resps at max RVQ level and trims off excess levels according to what the model receives, some other things I already forgot (I really hope no one else has weights being baked right now) 2023-08-19 15:06:33 -05:00
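A hedged sketch of the general trick for growing a trained table without retraining: keep the trained rows and append freshly initialized ones for the new levels (the function and shapes are illustrative, not the repo's code):

```python
import torch

def grow_embedding(weight: torch.Tensor, new_rows: int) -> torch.Tensor:
    """Widen a trained [old_rows, d_model] table to [new_rows, d_model]."""
    old_rows, d_model = weight.shape
    grown = torch.empty(new_rows, d_model, dtype=weight.dtype)
    torch.nn.init.normal_(grown, std=0.02)  # fresh init for the added rows only
    grown[:old_rows] = weight               # trained weights carry over untouched
    return grown
```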
mrq
2a71486cb6 preparing for SpeechX extensions 2023-08-18 20:58:07 -05:00
mrq
d7deaf6def distributed training works now (hopefully) 2023-08-13 22:07:45 -05:00
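The usual torch.distributed plumbing, for reference; a generic sketch assuming a `torchrun --nproc_per_node=N` launch and an existing `model`, not necessarily how this repo wires it:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")                 # env vars come from torchrun
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
model = DDP(model.cuda(), device_ids=[local_rank])
# each rank now trains on its own shard of data; grads all-reduce automatically
```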
mrq
2af09d0bef fixed that mysterious discrepancy between the reported losses (I am so freaking mad, my piss is boiling, I had to interrupt halfway through an epoch) 2023-08-05 15:25:41 -05:00
mrq
0a524f1d59 reticulating splines 2023-08-03 21:39:00 -05:00
mrq
608c1970eb oops 2023-08-03 20:36:19 -05:00
mrq
c85101403f big cleanup 2023-08-03 20:26:36 -05:00
mrq
2e03e5ac93 Fixed an issue where having fairseq installed at all would brick logging 2023-08-02 22:57:10 -05:00
mrq
f6597e2dfe adjustments 2023-08-02 18:36:26 -05:00
mrq
7a06b27a9c Tweaks 2023-08-02 22:06:39 +00:00