Commit Graph

1626 Commits

SHA1 Message Date (all commits below authored by James Betker)
e045fb0ad7 fix clip grad norm with scaler 2022-03-13 16:28:23 -06:00
22c67ce8d3 tts9 mods 2022-03-13 10:25:55 -06:00
08599b4c75 fix random_audio_crop injector 2022-03-12 20:42:29 -07:00
8f130e2b3f add scale_shift_norm back to tts9 2022-03-12 20:42:13 -07:00
9bbbe26012 update audio_with_noise 2022-03-12 20:41:47 -07:00
e754c4fbbc sweep update 2022-03-12 15:33:00 -07:00
73bfd4a86d another tts9 update 2022-03-12 15:17:06 -07:00
0523777ff7 add efficient config to tts9 2022-03-12 15:10:35 -07:00
896accb71f data and prep improvements 2022-03-12 15:10:11 -07:00
1e87b934db potentially average conditioning inputs 2022-03-10 20:37:41 -07:00
e6a95f7c11 Update tts9: Remove torchscript provisions and add mechanism to train solely on codes 2022-03-09 09:43:38 -07:00
726e30c4f7 Update noise augmentation dataset to include voices that are appended at the end of another clip. 2022-03-09 09:43:10 -07:00
c4e4cf91a0 add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric 2022-03-08 15:53:27 -07:00
3e5da71b16 add grad scaler scale to metrics 2022-03-08 15:52:42 -07:00
d2bdeb6f20 misc audio support 2022-03-08 15:52:26 -07:00
d553808d24 misc 2022-03-08 15:52:16 -07:00
7dabc17626 phase2 filter initial commit 2022-03-08 15:51:55 -07:00
f56edb2122 minicoder with classifier head: spread out probability mass for 0 predictions 2022-03-08 15:51:31 -07:00
29b2921222 move diffusion vocoder 2022-03-08 15:51:05 -07:00
94222b0216 tts9 initial commit 2022-03-08 15:50:45 -07:00
38fd9fc985 Improve efficiency of audio_with_noise_dataset 2022-03-08 15:50:13 -07:00
b3def182de move processing pipeline to "phase_1" 2022-03-08 15:49:51 -07:00
30ddac69aa lots of bad entries 2022-03-05 23:15:59 -07:00
dcf98df0c2 ++ 2022-03-05 23:12:34 -07:00
64d764ccd7 fml 2022-03-05 23:11:10 -07:00
ef63ff84e2 pvd2 2022-03-05 23:08:39 -07:00
1a05712764 pvd 2022-03-05 23:05:29 -07:00
d1dc8dbb35 Support tts9 2022-03-05 20:14:36 -07:00
93a3302819 Push training_state data to CPU memory before saving it
    For whatever reason, keeping this on GPU memory just doesn't work.
    When you load it, it consumes a large amount of GPU memory and that
    utilization doesn't go away. Saving to CPU should fix this.
    2022-03-04 17:57:33 -07:00
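Commit 93a3302819 describes a common checkpointing pattern: walk the training-state structure and move every tensor to CPU before serializing, so that restoring the checkpoint later does not pin stray allocations to the GPU. A minimal sketch of that traversal follows; it only assumes each leaf exposes a torch-style `.cpu()` method, and the names (`to_cpu`, `FakeTensor`) are illustrative, not the actual DLAS ones.

```python
def to_cpu(state):
    # Recursively move a (possibly nested) state structure to CPU memory.
    if isinstance(state, dict):
        return {k: to_cpu(v) for k, v in state.items()}
    if isinstance(state, (list, tuple)):
        return type(state)(to_cpu(v) for v in state)
    if hasattr(state, "cpu"):   # torch.Tensor and anything tensor-like
        return state.cpu()
    return state                # plain ints/floats pass through unchanged


# Minimal stand-in for a GPU tensor, just to exercise the traversal
# without requiring torch.
class FakeTensor:
    def __init__(self, device="cuda:0"):
        self.device = device

    def cpu(self):
        return FakeTensor(device="cpu")


training_state = {"step": 1200, "optimizer": {"exp_avg": FakeTensor()}}
cpu_state = to_cpu(training_state)
```

In real code the result would then go to `torch.save`; the same effect on load can also be had with `torch.load(path, map_location="cpu")`.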
6000580e2e df 2022-03-04 13:47:00 -07:00
382681a35d Load diffusion_fid DVAE into the correct cuda device 2022-03-04 13:42:14 -07:00
e1052a5e32 Move log consensus to train for efficiency 2022-03-04 13:41:32 -07:00
ce6dfdf255 Distributed "fixes" 2022-03-04 12:46:41 -07:00
3ff878ae85 Accumulate loss & grad_norm metrics from all entities within a distributed graph 2022-03-04 12:01:16 -07:00
79e5692388 Fix distributed bug 2022-03-04 11:58:53 -07:00
f87e10ffef Make deterministic sampler work with distributed training & microbatches 2022-03-04 11:50:50 -07:00
77c18b53b3 Cap grad booster 2022-03-04 10:40:24 -07:00
2d1cb83c1d Add a deterministic timestep sampler, with provisions to employ it every n steps 2022-03-04 10:40:14 -07:00
f490eaeba7 Shuffle optimizer states back and forth between cpu memory during steps 2022-03-04 10:38:51 -07:00
3c242403f5 adjust location of pre-optimizer step so I can visualize the new grad norms 2022-03-04 08:56:42 -07:00
58019a2ce3 audio diffusion fid updates 2022-03-03 21:53:32 -07:00
998c53ad4f w2v_matcher mods 2022-03-03 21:52:51 -07:00
9029e4f20c Add a base-wrapper 2022-03-03 21:52:28 -07:00
6873ad6660 Support functionality 2022-03-03 21:52:16 -07:00
6af5d129ce Add experimental gradient boosting into tts7 2022-03-03 21:51:40 -07:00
7ea84f1ac3 asdf 2022-03-03 13:43:44 -07:00
3cd6c7f428 Get rid of unused codes in vq 2022-03-03 13:41:38 -07:00
619da9ea28 Get rid of discretization loss 2022-03-03 13:36:25 -07:00
beb7c8a39d asdf 2022-03-01 21:41:31 -07:00
70fa780edb Add mechanism to export grad norms 2022-03-01 20:19:52 -07:00
d9f8f92840 Codified fp16 2022-03-01 15:46:04 -07:00
45ab444c04 Rework minicoder to always checkpoint 2022-03-01 14:09:18 -07:00
db0c3340ac Implement guidance-free diffusion in eval
    And a few other fixes
    2022-03-01 11:49:36 -07:00
2134f06516 Implement conditioning-free diffusion at the eval level 2022-02-27 15:11:42 -07:00
436fe24822 Add conditioning-free guidance 2022-02-27 15:00:06 -07:00
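Commits 436fe24822 and 2134f06516 add "conditioning-free guidance", which in the diffusion literature is usually the classifier-free guidance combination: the model is run once with conditioning and once without, and the final prediction extrapolates away from the unconditioned output. A minimal sketch, assuming the two model outputs are already available as flat lists of floats (the function name and signature are illustrative, not the repo's actual API):

```python
def guided_output(cond_out, uncond_out, cfg_scale):
    # Classifier-free guidance combination:
    #   result = uncond + scale * (cond - uncond)
    # cfg_scale == 1.0 recovers the conditioned prediction,
    # cfg_scale == 0.0 the unconditioned one; scales > 1 extrapolate past
    # the conditioned prediction, strengthening the conditioning signal.
    return [u + cfg_scale * (c - u) for c, u in zip(cond_out, uncond_out)]
```

In a real diffusion model the same arithmetic is applied elementwise to the predicted noise tensors at each sampling step.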
ac920798bb misc 2022-02-27 14:49:11 -07:00
ba155e4e2f script for uploading models to the HF hub 2022-02-27 14:48:38 -07:00
dbc74e96b2 w2v_matcher 2022-02-27 14:48:23 -07:00
42879d7296 w2v_wrapper ramping dropout mode
    this is an experimental feature that needs some testing
    2022-02-27 14:47:51 -07:00
c375287db9 Re-instate autocasting 2022-02-25 11:06:18 -07:00
34ee32a90e get rid of autocasting in tts7 2022-02-24 21:53:51 -07:00
f458f5d8f1 abort early if losses reach nan too much, and save the model 2022-02-24 20:55:30 -07:00
18dc62453f Don't step if NaN losses are encountered. 2022-02-24 17:45:08 -07:00
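Commits 18dc62453f and f458f5d8f1 together describe a NaN-handling policy: skip the optimizer step when the loss is NaN, and abort (after saving the model) once too many NaNs accumulate. A minimal sketch of that policy, assuming a mutable counter dict; the names (`step_or_skip`, `nan_streak`) are illustrative, not the repo's actual ones:

```python
import math

def step_or_skip(loss, state, max_consecutive_nans=10):
    """Return True if the optimizer should step on this batch.

    Skips the step on a NaN loss; raises after max_consecutive_nans NaNs
    in a row (the caller would save the model before re-raising/exiting).
    """
    if math.isnan(loss):
        state["nan_streak"] = state.get("nan_streak", 0) + 1
        if state["nan_streak"] >= max_consecutive_nans:
            raise RuntimeError("too many consecutive NaN losses; aborting")
        return False  # skip this optimizer step
    state["nan_streak"] = 0  # any finite loss resets the streak
    return True


state = {}
stepped = step_or_skip(1.0, state)       # normal batch: step
skipped = step_or_skip(float("nan"), state)  # NaN batch: skip
```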
ea500ad42a Use clustered masking in udtts7 2022-02-24 07:57:26 -07:00
7c17c8e674 gurgl 2022-02-23 21:28:24 -07:00
e6824e398f Load dvae to cpu 2022-02-23 21:21:45 -07:00
81017d9696 put frechet_distance on cuda 2022-02-23 21:21:13 -07:00
9a7bbf33df f 2022-02-23 18:03:38 -07:00
68726eac74 . 2022-02-23 17:58:07 -07:00
b7319ab518 Support vocoder type diffusion in audio_diffusion_fid 2022-02-23 17:25:16 -07:00
58f6c9805b adf 2022-02-22 23:12:58 -07:00
03752c1cd6 Report NaN 2022-02-22 23:09:37 -07:00
7201b4500c default text_to_sequence cleaners 2022-02-21 19:14:22 -07:00
ba7f54c162 w2v: new inference function 2022-02-21 19:13:03 -07:00
896ac029ae allow continuation of samples encountered 2022-02-21 19:12:50 -07:00
6313a94f96 eval: integrate a n-gram language model into decoding 2022-02-21 19:12:34 -07:00
af50afe222 pairedvoice: error out if clip is too short 2022-02-21 19:11:10 -07:00
38802a96c8 remove timesteps from cond calculation 2022-02-21 12:32:21 -07:00
668876799d unet_diffusion_tts7 2022-02-20 15:22:38 -07:00
0872e17e60 unified_voice mods 2022-02-19 20:37:35 -07:00
7b12799370 Reformat mel_text_clip for use in eval 2022-02-19 20:37:26 -07:00
bcba65c539 DataParallel Fix 2022-02-19 20:36:35 -07:00
34001ad765 et 2022-02-18 18:52:33 -07:00
baf7b65566 Attempt to make w2v play with DDP AND checkpointing 2022-02-18 18:47:11 -07:00
f3776f1992 reset ctc loss from "mean" to "sum" 2022-02-17 22:00:58 -07:00
2b20da679c make spec_augment a parameter 2022-02-17 20:22:05 -07:00
a813fbed9c Update to evaluator 2022-02-17 17:30:33 -07:00
e1d71e1bd5 w2v_wrapper: get rid of ctc attention mask 2022-02-15 20:54:40 -07:00
79e8f36d30 Convert CLIP models into new folder 2022-02-15 20:53:07 -07:00
8f767b8b4f ... 2022-02-15 07:08:17 -07:00
29e07913a8 Fix 2022-02-15 06:58:11 -07:00
dd585df772 LAMB optimizer 2022-02-15 06:48:13 -07:00
2bdb515068 A few mods to make wav2vec2 trainable with DDP on DLAS 2022-02-15 06:28:54 -07:00
52b61b9f77 Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes 2022-02-13 20:48:06 -07:00
a4f1641eea Add & refine WER evaluator for w2v 2022-02-13 20:47:29 -07:00
e16af944c0 BSO fix 2022-02-12 20:01:04 -07:00
29534180b2 w2v fine tuner 2022-02-12 20:00:59 -07:00
0c3cc5ebad use script updates to fix output size disparities 2022-02-12 20:00:46 -07:00
15fd60aad3 Allow EMA training to be disabled 2022-02-12 20:00:23 -07:00
3252972057 ctc_code_gen mods 2022-02-12 19:59:54 -07:00