Commit Graph

1132 Commits

Author SHA1 Message Date
James Betker
a15970dd97 disable checkpointing in conditioning encoder 2022-03-24 11:49:04 -06:00
James Betker
cc5fc91562 flat0 work 2022-03-24 11:46:53 -06:00
James Betker
b0d2827fad flat0 2022-03-24 11:30:40 -06:00
James Betker
8707a3e0c3 drop full layers in layerdrop, not half layers 2022-03-23 17:15:08 -06:00
James Betker
57da6d0ddf more simplifications 2022-03-22 11:46:03 -06:00
James Betker
f3f391b372 undo sandwich 2022-03-22 11:43:24 -06:00
James Betker
927731f3b4 tts9: fix position embeddings snafu 2022-03-22 11:41:32 -06:00
James Betker
536511fc4b unified_voice: relative position encodings 2022-03-22 11:41:13 -06:00
James Betker
5405ce4363 fix flat 2022-03-22 11:39:39 -06:00
James Betker
e47a759ed8 ....... 2022-03-21 17:22:35 -06:00
James Betker
cc4c9faf9a resolve more issues 2022-03-21 17:20:05 -06:00
James Betker
9e97cd800c take the conditioning mean rather than the first element 2022-03-21 16:58:03 -06:00
James Betker
9c7598dc9a fix conditioning_free signal 2022-03-21 15:29:17 -06:00
James Betker
2a65c982ca dont double nest checkpointing 2022-03-21 15:27:51 -06:00
James Betker
723f324eda Make it even better 2022-03-21 14:50:59 -06:00
James Betker
e735d8e1fa unified_voice fixes 2022-03-21 14:44:00 -06:00
James Betker
1ad18d29a8 Flat fixes 2022-03-21 14:43:52 -06:00
James Betker
26dcf7f1a2 r2 of the flat diffusion 2022-03-21 11:40:43 -06:00
James Betker
c14fc003ed flat diffusion 2022-03-17 17:45:27 -06:00
James Betker
428911cd4d flat diffusion network 2022-03-17 10:53:56 -06:00
James Betker
bf08519d71 fixes 2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f More cleaning 2022-03-16 12:05:56 -06:00
James Betker
d186414566 More spring cleaning 2022-03-16 12:04:00 -06:00
James Betker
8b376e63d9 More improvements 2022-03-16 10:16:34 -06:00
James Betker
0fc877cbc8 tts9 fix for alignment size 2022-03-15 21:43:14 -06:00
James Betker
f563a8dd41 fixes 2022-03-15 21:43:00 -06:00
James Betker
b754058018 Update wav2vec2 wrapper 2022-03-15 11:35:38 -06:00
James Betker
9c6f776980 Add univnet vocoder 2022-03-15 11:34:51 -06:00
James Betker
7929fd89de Refactor audio-style models into the audio folder 2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82 move waveglow to audio/vocoders 2022-03-15 11:03:07 -06:00
James Betker
bb03cbb9fc composable initial checkin 2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9 tts8 (incomplete, may be removed) 2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678 Use wider spectrogram when asked 2022-03-15 10:35:11 -06:00
James Betker
9767260c6c tacotron stft - loosen bounds restrictions and clip 2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7 Updates to support inputting MELs into the conditioning encoder 2022-03-14 17:31:42 -06:00
James Betker
22c67ce8d3 tts9 mods 2022-03-13 10:25:55 -06:00
James Betker
8f130e2b3f add scale_shift_norm back to tts9 2022-03-12 20:42:13 -07:00
James Betker
73bfd4a86d another tts9 update 2022-03-12 15:17:06 -07:00
James Betker
0523777ff7 add efficient config to tts9 2022-03-12 15:10:35 -07:00
James Betker
1e87b934db potentially average conditioning inputs 2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11 Update tts9: Remove torchscript provisions and add mechanism to train solely on codes 2022-03-09 09:43:38 -07:00
James Betker
f56edb2122 minicoder with classifier head: spread out probability mass for 0 predictions 2022-03-08 15:51:31 -07:00
James Betker
29b2921222 move diffusion vocoder 2022-03-08 15:51:05 -07:00
James Betker
94222b0216 tts9 initial commit 2022-03-08 15:50:45 -07:00
James Betker
d1dc8dbb35 Support tts9 2022-03-05 20:14:36 -07:00
James Betker
79e5692388 Fix distributed bug 2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef Make deterministic sampler work with distributed training & microbatches 2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3 Cap grad booster 2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d Add a deterministic timestep sampler, with provisions to employ it every n steps 2022-03-04 10:40:14 -07:00
James Betker
58019a2ce3 audio diffusion fid updates 2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f w2v_matcher mods 2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c Add a base-wrapper 2022-03-03 21:52:28 -07:00
James Betker
6873ad6660 Support functionality 2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce Add experimental gradient boosting into tts7 2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3 asdf 2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428 Get rid of unused codes in vq 2022-03-03 13:41:38 -07:00
James Betker
619da9ea28 Get rid of discretization loss 2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d asdf 2022-03-01 21:41:31 -07:00
James Betker
70fa780edb Add mechanism to export grad norms 2022-03-01 20:19:52 -07:00
James Betker
d9f8f92840 Codified fp16 2022-03-01 15:46:04 -07:00
James Betker
45ab444c04 Rework minicoder to always checkpoint 2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac Implement guidance-free diffusion in eval (and a few other fixes) 2022-03-01 11:49:36 -07:00
James Betker
2134f06516 Implement conditioning-free diffusion at the eval level 2022-02-27 15:11:42 -07:00
James Betker
436fe24822 Add conditioning-free guidance 2022-02-27 15:00:06 -07:00
James Betker
ac920798bb misc 2022-02-27 14:49:11 -07:00
James Betker
dbc74e96b2 w2v_matcher 2022-02-27 14:48:23 -07:00
James Betker
42879d7296 w2v_wrapper ramping dropout mode (this is an experimental feature that needs some testing) 2022-02-27 14:47:51 -07:00
James Betker
c375287db9 Re-instate autocasting 2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e get rid of autocasting in tts7 2022-02-24 21:53:51 -07:00
James Betker
ea500ad42a Use clustered masking in udtts7 2022-02-24 07:57:26 -07:00
James Betker
7201b4500c default text_to_sequence cleaners 2022-02-21 19:14:22 -07:00
James Betker
ba7f54c162 w2v: new inference function 2022-02-21 19:13:03 -07:00
James Betker
38802a96c8 remove timesteps from cond calculation 2022-02-21 12:32:21 -07:00
James Betker
668876799d unet_diffusion_tts7 2022-02-20 15:22:38 -07:00
James Betker
0872e17e60 unified_voice mods 2022-02-19 20:37:35 -07:00
James Betker
7b12799370 Reformat mel_text_clip for use in eval 2022-02-19 20:37:26 -07:00
James Betker
baf7b65566 Attempt to make w2v play with DDP AND checkpointing 2022-02-18 18:47:11 -07:00
James Betker
f3776f1992 reset ctc loss from "mean" to "sum" 2022-02-17 22:00:58 -07:00
James Betker
2b20da679c make spec_augment a parameter 2022-02-17 20:22:05 -07:00
James Betker
e1d71e1bd5 w2v_wrapper: get rid of ctc attention mask 2022-02-15 20:54:40 -07:00
James Betker
79e8f36d30 Convert CLIP models into new folder 2022-02-15 20:53:07 -07:00
James Betker
2bdb515068 A few mods to make wav2vec2 trainable with DDP on DLAS 2022-02-15 06:28:54 -07:00
James Betker
52b61b9f77 Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes 2022-02-13 20:48:06 -07:00
James Betker
a4f1641eea Add & refine WER evaluator for w2v 2022-02-13 20:47:29 -07:00
James Betker
29534180b2 w2v fine tuner 2022-02-12 20:00:59 -07:00
James Betker
3252972057 ctc_code_gen mods 2022-02-12 19:59:54 -07:00
James Betker
302ac8652d Undo mask during training 2022-02-11 09:35:12 -07:00
James Betker
618a20412a new rev of ctc_code_gen with surrogate LM loss 2022-02-10 23:09:57 -07:00
James Betker
820a29f81e ctc code gen mods 2022-02-10 09:44:01 -07:00
James Betker
ac9417b956 ctc_code_gen: mask out all padding tokens 2022-02-09 17:26:30 -07:00
James Betker
ddb77ef502 ctc_code_gen: use a mean() on the ConditioningEncoder 2022-02-09 14:26:44 -07:00
James Betker
9e9ae328f2 mild updates 2022-02-08 23:51:17 -07:00
James Betker
ff35d13b99 Use non-uniform noise in diffusion_tts6 2022-02-08 07:27:41 -07:00
James Betker
34fbb78671 Straight CtcCodeGenerator as an encoder 2022-02-07 15:46:46 -07:00
James Betker
65a546c4d7 Fix for tts6 2022-02-05 16:00:14 -07:00
James Betker
5ae816bead ctc gen checkin 2022-02-05 15:59:53 -07:00
James Betker
bb3d1ab03d More cleanup 2022-02-04 11:06:17 -07:00
James Betker
5cc342de66 Clean up 2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab add an autoregressive ctc code generator 2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344 Update SR model 2022-02-03 21:42:53 -07:00