commit | message | date
5ac119a6e7 | added light web UI (need to port the telemetry-disabling bandaids from aivc) | 2023-09-09 16:17:20 -05:00
10c34c5b98 | added a length-based decay factor for repetition penalty | 2023-09-08 21:02:00 -05:00
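For context, a minimal sketch of what a length-based decay on repetition penalty can look like; the names, defaults, and exponential falloff here are illustrative assumptions, not this repo's exact implementation:

```python
import math
import torch

def repetition_penalize(logits: torch.Tensor, prev_tokens: list[int],
                        penalty: float = 1.25, decay: float = 0.0) -> torch.Tensor:
    """CTRL-style repetition penalty with an exponential length-based decay.

    decay == 0.0 reproduces the vanilla penalty; decay > 0.0 shrinks the
    penalty toward 1.0 (a no-op) the further back a token occurred, so
    immediate repeats are discouraged without flattening long sequences.
    """
    logits = logits.clone()
    seen: set[int] = set()
    for distance, token in enumerate(reversed(prev_tokens)):
        if token in seen:  # only the most recent occurrence counts
            continue
        seen.add(token)
        # interpolate the penalty toward 1.0 as distance grows
        p = 1.0 + (penalty - 1.0) * math.exp(-decay * distance)
        # usual asymmetric application: dampen positive logits,
        # push negative ones further down
        if logits[token] > 0:
            logits[token] /= p
        else:
            logits[token] *= p
    return logits
```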
b922f35b6b | added documentation noting these new sampling parameters are very iffy; you really need to know what you are doing to use them, because this is audio generation and not text generation | 2023-09-08 20:43:36 -05:00
14c78bae39 | added lots of sampling options (top-k/top-p, repetition penalty, length penalty) | 2023-09-08 20:30:54 -05:00
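The standard top-k/nucleus filtering recipe these options refer to, as a sketch over a 1D logits tensor (argument names are illustrative, not necessarily this repo's):

```python
import torch

def top_k_top_p(logits: torch.Tensor, top_k: int = 0, top_p: float = 1.0) -> torch.Tensor:
    """Mask logits outside the top-k / nucleus (top-p) candidate sets."""
    if top_k > 0:
        # drop everything below the k-th largest logit
        kth = torch.topk(logits, top_k).values[-1]
        logits = logits.masked_fill(logits < kth, float("-inf"))
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cumprobs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        remove = cumprobs > top_p
        remove[1:] = remove[:-1].clone()  # shift right so the token that
        remove[0] = False                 # crosses the threshold is kept
        sorted_logits[remove] = float("-inf")
        logits = torch.full_like(logits, float("-inf")).scatter(0, sorted_idx, sorted_logits)
    return logits
```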
f69aad9c65 | some day I'll get it right | 2023-09-08 15:36:26 -05:00
b2907ae7e0 | seems that my PromEmbedding/RespEmbedding doesn't actually work all that well; naively using dedicated MultiEmbeddings for AR/NAR in the monolithic model is the best way to go | 2023-09-08 01:03:24 -05:00
67617d7d69 | also cull frozen_params from the params the optimizer receives, to reduce the VRAM it consumes | 2023-09-07 18:27:02 -05:00
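The VRAM saving here comes from optimizer state: Adam-style optimizers allocate moments for every tensor they receive, frozen or not. A sketch of the culling, with hypothetical names (the freeze list would come from the YAML, per the commit below):

```python
import torch

def trainable_parameters(model: torch.nn.Module, frozen_params=frozenset()):
    """Freeze the listed parameters, then hand the optimizer only the rest.

    Passing frozen params to AdamW et al. still allocates optimizer state
    for them; culling them here is what avoids that VRAM cost.
    """
    params = []
    for name, param in model.named_parameters():
        if name in frozen_params:
            param.requires_grad_(False)
        if param.requires_grad:
            params.append(param)
    return params

# usage sketch:
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(trainable_parameters(model, frozen_params={"bias"}), lr=1.0e-4)
```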
8837bc34d7 | added option to specify parameters to freeze per-model in the YAML (because I need to see about committing atrocities with converting an AR into an AR+NAR) | 2023-09-07 18:19:51 -05:00
c47fc3274e | added backwards compat flag | 2023-09-07 17:12:17 -05:00
ab5134f385 | tweaks and fixes | 2023-09-07 17:08:38 -05:00
b2c2dec291 | added homebrewed per-RVQ-bin embedding solutions | 2023-09-07 16:48:02 -05:00
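For context: EnCodec-style codes carry one token stream per RVQ bin (quantizer level), so a per-bin embedding gives each level its own table and sums the results. A sketch of the general shape (class and argument names are illustrative):

```python
import torch
from torch import nn

class MultiEmbedding(nn.Module):
    """One embedding table per RVQ bin, summed into a single hidden vector."""
    def __init__(self, n_levels: int, n_tokens: int, d_model: int):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(n_tokens, d_model) for _ in range(n_levels)
        )

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: (batch, seq_len, n_levels) of RVQ token ids;
        # each level gets its own table, then contributions are summed
        return sum(emb(codes[..., i]) for i, emb in enumerate(self.embeddings))

# e.g. 8 RVQ levels, 1024 codes per level, 1024-dim model:
emb = MultiEmbedding(n_levels=8, n_tokens=1024, d_model=1024)
h = emb(torch.randint(0, 1024, (2, 75, 8)))  # -> (2, 75, 1024)
```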
e7a67410d1 | oops | 2023-09-07 09:14:03 -05:00
712808494f | added support for the optional prodigy optimizer (https://github.com/konstmish/prodigy), although it consumes a lot more VRAM per parameter | 2023-09-06 20:33:16 -05:00
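Usage sketch for Prodigy as a drop-in torch optimizer; it adapts the step size itself (hence the conventional lr=1.0), and the extra per-parameter state it tracks is where the added VRAM goes:

```python
import torch
from prodigyopt import Prodigy  # pip install prodigyopt

model = torch.nn.Linear(16, 16)  # stand-in model
# Prodigy estimates the learning rate on its own, so lr stays at 1.0;
# its additional per-parameter state is the VRAM cost noted above.
optimizer = Prodigy(model.parameters(), lr=1.0)
```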
7ce06432fd | fixed the AR+NAR dual model; the resp_emb has to be split up (the classifier might, too) | 2023-09-06 19:33:39 -05:00
100ca6b7d0 | added option to use the SGD optimizer through the YAML, added option to pass additional optimizer parameters through the YAML, and added an experimental unified AR+NAR model (does not seem fruitful in testing) | 2023-09-06 18:58:35 -05:00
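A sketch of selecting the optimizer and forwarding extra parameters from YAML; the config keys here are hypothetical, not necessarily this repo's schema:

```python
import torch
import yaml

# hypothetical YAML schema, for illustration only
cfg = yaml.safe_load("""
optimizer: sgd
learning_rate: 0.01
optimizer_params:   # forwarded to the optimizer verbatim
  momentum: 0.9
  nesterov: true
""")

OPTIMIZERS = {"adamw": torch.optim.AdamW, "sgd": torch.optim.SGD}

model = torch.nn.Linear(16, 16)  # stand-in model
optimizer = OPTIMIZERS[cfg["optimizer"]](
    model.parameters(),
    lr=cfg["learning_rate"],
    **(cfg.get("optimizer_params") or {}),
)
```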
451726fdd5 | added ability to disable activation checkpointing through the YAML (it is very VRAM-intensive at double layer size) | 2023-09-05 15:38:21 -05:00
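Activation checkpointing trades compute for VRAM by recomputing activations during backward instead of storing them. A sketch of gating it behind a config flag (flag name and block structure assumed):

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, d_model: int, activation_checkpointing: bool = True):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(d_model, d_model * 4),
            torch.nn.GELU(),
            torch.nn.Linear(d_model * 4, d_model),
        )
        # would be read from the YAML in practice
        self.activation_checkpointing = activation_checkpointing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.activation_checkpointing and self.training:
            # recompute activations in backward instead of storing them:
            # slower, but much lighter on VRAM
            return checkpoint(self.ff, x, use_reentrant=False)
        return self.ff(x)
```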
143aee7526 | removed dedicated interleaved AR code | 2023-09-03 22:47:03 -05:00
2f9cd0842f | merged dedicated interleaved AR code with the normal AR code | 2023-09-03 22:46:08 -05:00
3a6bd50322 | haha | 2023-09-03 21:36:58 -05:00
c56ce033d9 | work on an interleaved AR (spoiler: it does not work) | 2023-09-03 21:27:58 -05:00
8a6c203277 | added per-speaker samplers | 2023-09-03 21:27:13 -05:00
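A per-speaker sampler in the sense of: draw a speaker first, then one of their utterances, so prolific speakers don't dominate batches. A sketch under that assumption (data layout and names are illustrative):

```python
import random
from collections import defaultdict

class PerSpeakerSampler:
    """Samples a speaker uniformly, then an utterance from that speaker,
    balancing speakers regardless of how many utterances each has."""
    def __init__(self, paths_by_speaker: dict):
        self.paths_by_speaker = paths_by_speaker
        self.speakers = list(paths_by_speaker)

    def sample(self) -> str:
        speaker = random.choice(self.speakers)
        return random.choice(self.paths_by_speaker[speaker])

# usage sketch:
dataset = defaultdict(list)
dataset["speaker_a"] += ["a/1.wav", "a/2.wav", "a/3.wav"]
dataset["speaker_b"] += ["b/1.wav"]
sampler = PerSpeakerSampler(dataset)
path = sampler.sample()  # speaker_b drawn as often as speaker_a
```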
81b05dabb9 | accurate epoch metric is now reported (based on samples processed / length of the dataset's paths, rather than naive assumptions) | 2023-09-03 08:03:36 -05:00
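i.e. fractional epochs derived from actual throughput, roughly (names illustrative):

```python
def epoch_metric(samples_processed: int, dataset_paths: list) -> float:
    """Fractional epoch from real throughput rather than step-count guesses;
    survives resumes and batch-size changes."""
    return samples_processed / len(dataset_paths)

# e.g. 150k samples seen over a 100k-utterance dataset -> epoch 1.5
assert epoch_metric(150_000, ["path"] * 100_000) == 1.5
```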
922404285c | fixed segfault from the tts-c task token being too big (slotted it into the hypothetical svc task token's place, because in reality that is never ever going to be a feasible task to train against) | 2023-09-02 19:25:43 -05:00
4613781e23 | integrated plot script, added tts-c task token to help the model mix between normal VALL-E and VALL-E continuous | 2023-09-02 16:29:53 -05:00
71e68a8528 | tweaked tts-continuous task | 2023-09-02 13:39:17 -05:00
57db3ccfa8 | reworked VALL-E continuous into a tts-c task instead, with logic fixes for it | 2023-09-02 12:23:40 -05:00
2f06166ddd | cleanups | 2023-09-01 21:33:51 -05:00
e40c0d34a0 | somewhat got recurrent forward working (it's as accurate as chunkwise forward: it's not accurate at all); added option to use AMP instead of blanket-setting the weights' dtype | 2023-09-01 20:58:29 -05:00
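AMP here meaning torch's autocast plus gradient scaling: weights stay fp32 and eligible ops run in fp16, instead of blanket-casting the whole model. A minimal sketch:

```python
import torch

# weights stay fp32; autocast runs eligible ops in fp16 on the fly
model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1.0e-4)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 16, device="cuda")
y = torch.randn(8, 16, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(x), y)

scaler.scale(loss).backward()  # scale loss so fp16 grads don't underflow
scaler.step(optimizer)         # unscales grads, then steps
scaler.update()
optimizer.zero_grad()
```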
2bc2d08b09 | (need to verify) added modifying the model size and a config bool to align with VALL-E continuous's methodology | 2023-09-01 17:19:34 -05:00
5c8694db8e | nasty bandaid for when there's no validation dataset specified during training (for example, during finetunes) | 2023-08-30 18:23:05 -05:00
7f4388e591 | added total samples processed and tokens processed (len of text tokens + len of target response tokens) | 2023-08-28 11:02:45 -05:00
87c4bfedba | added ability to mark models as disabled for training, and to hotload them for eval/validation (useful if training only one model, or training a model per GPU) | 2023-08-27 12:26:12 -05:00
165a1154e0 | Undo naive=False test flag; this shouldn't have made its way in | 2023-08-26 22:00:43 -05:00
78378ed1ce | overhauled dataloading code to be marginally faster, mostly cleaned up, and able to leverage a metadata JSON to help things out | 2023-08-26 19:53:23 -05:00
16e0020901 | disabled chunkwise_recurrent for 2x speed gains (I suppose it has been working the entire time, but I have not been properly grabbing things, and this might explain why the output is bad) | 2023-08-25 19:50:19 -05:00
6455a2f9d7 | I think I fixed a bug? | 2023-08-24 23:33:36 -05:00
0517d620b8 | fixes for the local backend | 2023-08-24 17:05:56 -05:00
00ad4af651 | updated draconian requirement that espeak-ng be installed and, on Windows, the env var pointed at the DLL | 2023-08-24 14:57:01 -05:00
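For reference, assuming the phonemizer espeak backend is what's in play: the env var is PHONEMIZER_ESPEAK_LIBRARY, set before the backend loads (the DLL path below is an assumption; it varies per install):

```python
import os

# point phonemizer at the espeak-ng DLL on Windows, before the backend loads
os.environ["PHONEMIZER_ESPEAK_LIBRARY"] = r"C:\Program Files\eSpeak NG\libespeak-ng.dll"

from phonemizer import phonemize
print(phonemize("hello world", language="en-us", backend="espeak"))
```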
22904a8639 | more oversights fixed, because I've been using a cached dataloader forever now and didn't catch these problems | 2023-08-24 10:25:33 -05:00
5873c27f1a | oops | 2023-08-24 09:20:47 -05:00
501a857d5d | oops | 2023-08-23 17:03:25 -05:00
4585824cd3 | tweaks, including exporting on save/quit | 2023-08-23 16:43:03 -05:00
d106598403 | do not utilize diskcache if a config YAML is not loaded | 2023-08-23 11:02:15 -05:00
524d289c9c | Forgot to re-add setting the weights' dtype on model load | 2023-08-22 22:57:23 -05:00
9c5a33bfd2 | added repo with my weights so far | 2023-08-22 13:09:44 -05:00
7b1b82e0e5 | inferencing cleanup | 2023-08-20 21:36:02 -05:00
a47029065b | I don't know if the lack of start/stop tokens being added was causing my inference tests to fail, but it seems better now | 2023-08-20 19:21:54 -05:00
736c077282 | oops | 2023-08-20 13:42:18 -05:00
b105f6211e | added ability to export weights mid-training, to avoid the CBT of yanking the weights while the training script is running | 2023-08-20 13:39:58 -05:00
fc576010ce | wrapped saving the checkpoint in a try/catch so I can stop waking up to the damn trainer crashing because it ran out of disk space; I'd much rather it keep training, giving me time to eventually clear up disk space, than have it silently restart on its own | 2023-08-20 06:29:17 -05:00
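The rough shape of that fix: a full disk raises OSError inside the save, so catching it keeps the trainer alive until space is freed (a sketch with illustrative names, not the repo's trainer code):

```python
import torch

def save_checkpoint(state: dict, path: str) -> None:
    """Best-effort checkpoint save: a full disk (OSError, e.g. errno 28
    "No space left on device") logs a warning instead of killing the
    trainer, leaving time to free space before the next save attempt."""
    try:
        torch.save(state, path)
    except OSError as e:
        print(f"!! failed to save checkpoint to {path}: {e}; continuing anyway")
```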