57db3ccfa8
reworked VALL-E continuous into its own task (tts-c) instead; logic fixes for it
2023-09-02 12:23:40 -05:00

2f06166ddd
cleanups
2023-09-01 21:33:51 -05:00

e40c0d34a0
somewhat got recurrent forward working (it's as accurate as chunkwise forward, which is to say not accurate at all); added an option to use AMP instead of blanket-setting the weights' dtype
2023-09-01 20:58:29 -05:00
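
The AMP option mentioned above boils down to autocasting at runtime rather than converting every weight to half precision up front. A minimal PyTorch sketch of that idea, not the repo's actual trainer code; `model`, `optimizer`, and `batch` are assumed to exist:

    import torch

    # sketch: autocast keeps the weights in fp32 and only runs ops in reduced
    # precision, unlike model.to(torch.float16) which blanket-casts every parameter
    scaler = torch.cuda.amp.GradScaler()

    def train_step(model, optimizer, batch, use_amp=True):
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", enabled=use_amp):
            loss = model(**batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        return loss.item()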

2bc2d08b09
(need to verify) added options for modifying the model size and a config bool to align with VALL-E continuous' methodology
2023-09-01 17:19:34 -05:00

5c8694db8e
nasty band-aid for when no validation dataset is specified during training (for example, during finetunes)
2023-08-30 18:23:05 -05:00

7f4388e591
added total samples processed and tokens processed (len of text tokens + len of target response tokens)
2023-08-28 11:02:45 -05:00

87c4bfedba
added the ability to mark models as disabled for training and to hot-load them for eval/validation (useful if training only one model, or training a model per GPU)
2023-08-27 12:26:12 -05:00

165a1154e0
Undo the naive=False test flag; this shouldn't have made its way in
2023-08-26 22:00:43 -05:00

78378ed1ce
overhauled the dataloading code: marginally faster, mostly cleaned up, and able to leverage a metadata JSON to help things out
2023-08-26 19:53:23 -05:00

7b3be3d7bf
added helper scripts to process LibriTTS/LibriLight, detect duplicate speaker+book pairs between them, and directly phonemize and quantize LibriTTS
2023-08-26 10:21:12 -05:00

16e0020901
disabled chunkwise_recurrent for 2x speed gains (I suppose it has been working the entire time, but I have not been properly grabbing things, and this might explain why the output is bad)
2023-08-25 19:50:19 -05:00

6455a2f9d7
I think I fixed a bug?
2023-08-24 23:33:36 -05:00

f3fbed5ffd
updated notices tailored for Windows / low-VRAM cards
2023-08-24 17:19:10 -05:00

0517d620b8
fixes for the local backend
2023-08-24 17:05:56 -05:00

00ad4af651
updated the draconian requirement that espeak-ng be installed and the env var set to the DLL for Windows
2023-08-24 14:57:01 -05:00
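
On Windows that requirement usually amounts to pointing an environment variable at the installed espeak-ng DLL before anything tries to phonemize. A minimal sketch, assuming the phonemizer backend and its PHONEMIZER_ESPEAK_LIBRARY variable are what is in play, with a hypothetical install path:

    import os
    import sys

    # sketch: tell the phonemizer backend where the espeak-ng DLL lives on Windows
    if sys.platform == "win32":
        os.environ.setdefault(
            "PHONEMIZER_ESPEAK_LIBRARY",
            r"C:\Program Files\eSpeak NG\libespeak-ng.dll",  # adjust to your install
        )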

b6c9686f7d
Do not install DeepSpeed under Windows (to-do: default backend to use local if on Windows)
2023-08-24 14:27:36 -05:00
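
Skipping DeepSpeed on Windows can be expressed as a platform-gated dependency; a sketch of how that might look in a setup.py (not the repo's actual packaging; the plain requirements.txt equivalent would be a PEP 508 marker such as deepspeed; sys_platform != "win32"):

    import sys
    from setuptools import setup

    setup(
        name="vall_e",
        install_requires=[
            "torch",
            # DeepSpeed does not install cleanly on Windows, so only require it elsewhere
            *(["deepspeed"] if sys.platform != "win32" else []),
        ],
    )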

22904a8639
more oversights fixed because I've been using a cached dataloader forever now and didn't catch these problems
2023-08-24 10:25:33 -05:00

5873c27f1a
oops
2023-08-24 09:20:47 -05:00

501a857d5d
oops
2023-08-23 17:03:25 -05:00

4585824cd3
tweaks, including exporting on save/quit
2023-08-23 16:43:03 -05:00

d106598403
do not use diskcache if a config YAML is not loaded
2023-08-23 11:02:15 -05:00
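
A sketch of that guard, using the diskcache package; the `cfg` object and its attributes are stand-ins for whatever the real config exposes:

    import diskcache

    # sketch: only back results with an on-disk cache when a config YAML was loaded
    def get_cache(cfg):
        if cfg is None or getattr(cfg, "yaml_path", None) is None:
            return None
        return diskcache.Cache(str(cfg.yaml_path.parent / ".cache"))

    def cached_call(cache, key, fn):
        if cache is None:
            return fn()  # caching disabled: just recompute
        if key not in cache:
            cache[key] = fn()
        return cache[key]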

524d289c9c
Forgot to re-add setting the weights' dtype on model load
2023-08-22 22:57:23 -05:00

9c5a33bfd2
added a repo with my weights so far
2023-08-22 13:09:44 -05:00

7b1b82e0e5
inferencing cleanup
2023-08-20 21:36:02 -05:00

a47029065b
I don't know if the lack of start/stop tokens being added was causing my inference tests to fail, but it seems better now
2023-08-20 19:21:54 -05:00

736c077282
oops
2023-08-20 13:42:18 -05:00

b105f6211e
added the ability to export weights mid-training, to avoid having to yank the weights out while the training script is running
2023-08-20 13:39:58 -05:00

fc576010ce
wrapped saving the checkpoint in a try/catch so I can stop waking up to the damn trainer crashing because it ran out of disk space; I'd much rather it keep training, giving me time to eventually clear up disk space, than have it silently restart on its own
2023-08-20 06:29:17 -05:00
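
In Python terms that guard is a try/except around the save call, logging the disk-related OSError instead of letting the run die. A minimal sketch; `engine.save_checkpoint` is a stand-in name, not necessarily the trainer's real API:

    import logging

    _logger = logging.getLogger(__name__)

    def save_checkpoint(engine, save_dir):
        try:
            engine.save_checkpoint(save_dir)
        except OSError as e:
            # ENOSPC lands here: log and keep training so there is time to
            # free up disk space before the next save attempt
            _logger.error(f"failed to save checkpoint to {save_dir}: {e}")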

2d1a9f10c0
nightmare of spaghetti that might break compat; a mechanism to increase the RVQ bins of an existing model without retraining, keeping sampled proms/resps at the max RVQ level and trimming off excess levels according to which model receives them, plus some other things I already forgot (I really hope no one else has weights being baked right now)
2023-08-19 15:06:33 -05:00
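
Growing the RVQ bins of an already-trained model generally comes down to allocating larger embeddings (and heads) per quantizer level and copying the trained rows into them; the extra rows start out untrained. A rough sketch of that idea only, not the actual mechanism in this commit:

    import torch
    import torch.nn as nn

    # sketch: enlarge an embedding while preserving the already-trained rows
    def grow_embedding(old: nn.Embedding, new_num_embeddings: int) -> nn.Embedding:
        assert new_num_embeddings >= old.num_embeddings
        new = nn.Embedding(new_num_embeddings, old.embedding_dim)
        with torch.no_grad():
            new.weight[: old.num_embeddings] = old.weight
        return new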

f7f6d3bf6d
validated that the SpeechX tasks cse and nse work; added a way to test each task by invoking python3 -m vall_e.data --action=tasks --tasks='sr,se,cse,nse'
2023-08-19 09:50:07 -05:00

6ca347e1e1
literally had a urethra moment before going to bed with a way to implement cse/nse tasks
2023-08-19 01:16:46 -05:00

8f42c578c9
setting up to allow training on a partial subset of the SpeechX tasks (do NOT try this at home yet without a proper model, as performance is predicated on having a solid base VALL-E model for the tasks)
2023-08-19 00:16:08 -05:00

ae9d38aa31
forgot to have it pull the specified noise into the HDF5 dataset
2023-08-18 23:57:07 -05:00

77292c42f9
tested the training preparation for tasks ns, sr, and tse (I don't expect it to go well with only 2 RVQ bins)
2023-08-18 23:55:40 -05:00

bbb0563b3d
pseudocode / polyfill / stub / some other flavor of working on adding the tasks
2023-08-18 22:22:13 -05:00

0b46c1e312
god, I am inexperienced with retaining compat with previous weights; I hope no one actually has weights
2023-08-18 21:29:20 -05:00

508677fcd5
repaired auraloss loss calc during eval/val
2023-08-18 21:19:47 -05:00

fb4e816823
oops
2023-08-18 21:11:19 -05:00

2a71486cb6
preparing for SpeechX extensions
2023-08-18 20:58:07 -05:00

ced31fd9b7
removed the sampler as it's very misleading
2023-08-18 14:47:48 -05:00

8e7f900210
forgot the =
2023-08-17 19:07:59 -05:00

3ff7cf8341
maybe fixes the evaluation dataset not being capped to cfg.evaluation.size
2023-08-17 18:56:37 -05:00

ee58db746f
actually make the evaluation dataset shuffled for sample_type=speaker
2023-08-17 15:04:45 -05:00

18403a3523
maybe fixes the eval dataloader not shuffling under distributed training
2023-08-17 13:41:53 -05:00

03872b823f
why did I type rglob? another 10 bucks down the drain...
2023-08-17 00:11:29 -05:00

b5f247aa11
just nuked about 9 hours of progress because I didn't make sure it pruned only on the global leader
2023-08-16 23:37:52 -05:00

d7152fc7b9
added pruning of old checkpoints if specified (cfg.trainer.keep_last_checkpoints)
2023-08-16 20:12:12 -05:00
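
A sketch of what that pruning looks like, folding in the lesson from the entry above it: only the global leader should delete anything. The helper below keeps the newest cfg.trainer.keep_last_checkpoints checkpoint directories; the directory layout and the is_global_leader flag are assumptions:

    import shutil
    from pathlib import Path

    def prune_checkpoints(ckpt_dir: Path, keep_last: int, is_global_leader: bool):
        # only rank 0 across all nodes may prune, otherwise every rank races to delete
        if not is_global_leader or keep_last <= 0:
            return
        checkpoints = sorted(
            (p for p in ckpt_dir.iterdir() if p.is_dir()),
            key=lambda p: p.stat().st_mtime,
        )
        for stale in checkpoints[:-keep_last]:
            shutil.rmtree(stale)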

44c08d828e
added a sample_type that samples by speaker, to truly balance an epoch across speakers rather than across the entire dataset, and a sampler that tries to balance by speakers
2023-08-16 19:39:21 -05:00
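
The core of speaker-balanced sampling is a two-step draw: pick a speaker uniformly, then pick one of that speaker's utterances, so prolific speakers no longer dominate an epoch. A minimal sketch; the directory layout (one folder per speaker) is an assumption:

    import random
    from collections import defaultdict

    def build_speaker_index(paths):
        by_speaker = defaultdict(list)
        for path in paths:
            by_speaker[path.parent.name].append(path)  # assume .../speaker/utterance.ext
        return by_speaker

    def sample_balanced(by_speaker):
        speaker = random.choice(list(by_speaker))
        return random.choice(by_speaker[speaker])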

599e47a813
might fix user-inputted saving/quitting breaking when distributed
2023-08-15 23:52:20 -05:00

1e3e1d9315
tweaks
2023-08-15 21:58:16 -05:00