Commit Graph

1596 Commits

Author | SHA1 | Message | Date
James Betker
d2bdeb6f20 misc audio support 2022-03-08 15:52:26 -07:00
James Betker
d553808d24 misc 2022-03-08 15:52:16 -07:00
James Betker
7dabc17626 phase2 filter initial commit 2022-03-08 15:51:55 -07:00
James Betker
f56edb2122 minicoder with classifier head: spread out probability mass for 0 predictions 2022-03-08 15:51:31 -07:00
James Betker
29b2921222 move diffusion vocoder 2022-03-08 15:51:05 -07:00
James Betker
94222b0216 tts9 initial commit 2022-03-08 15:50:45 -07:00
James Betker
38fd9fc985 Improve efficiency of audio_with_noise_dataset 2022-03-08 15:50:13 -07:00
James Betker
b3def182de move processing pipeline to "phase_1" 2022-03-08 15:49:51 -07:00
James Betker
30ddac69aa lots of bad entries 2022-03-05 23:15:59 -07:00
James Betker
dcf98df0c2 ++ 2022-03-05 23:12:34 -07:00
James Betker
64d764ccd7 fml 2022-03-05 23:11:10 -07:00
James Betker
ef63ff84e2 pvd2 2022-03-05 23:08:39 -07:00
James Betker
1a05712764 pvd 2022-03-05 23:05:29 -07:00
James Betker
d1dc8dbb35 Support tts9 2022-03-05 20:14:36 -07:00
James Betker
93a3302819 Push training_state data to CPU memory before saving it
For whatever reason, keeping this on GPU memory just doesn't work.
When you load it, it consumes a large amount of GPU memory and that
utilization doesn't go away. Saving to CPU should fix this.
2022-03-04 17:57:33 -07:00
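The fix here follows from how torch.save works: it records each tensor's device, so a checkpoint written from CUDA tensors is mapped back onto the GPU at load time and holds that memory. A minimal sketch of moving a nested training state to CPU before saving (the actual training_state layout is assumed, not taken from the repo):

```python
import torch

def state_to_cpu(obj):
    # Recursively copy every tensor in a nested dict/list/tuple to CPU so a
    # later torch.load does not allocate GPU memory for the checkpoint.
    if torch.is_tensor(obj):
        return obj.detach().cpu()
    if isinstance(obj, dict):
        return {k: state_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(state_to_cpu(v) for v in obj)
    return obj

# Hypothetical usage before checkpointing:
# torch.save(state_to_cpu(training_state), 'training_state.pth')
```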
James Betker
6000580e2e df 2022-03-04 13:47:00 -07:00
James Betker
382681a35d Load diffusion_fid DVAE into the correct cuda device 2022-03-04 13:42:14 -07:00
James Betker
e1052a5e32 Move log consensus to train for efficiency 2022-03-04 13:41:32 -07:00
James Betker
ce6dfdf255 Distributed "fixes" 2022-03-04 12:46:41 -07:00
James Betker
3ff878ae85 Accumulate loss & grad_norm metrics from all entities within a distributed graph 2022-03-04 12:01:16 -07:00
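Accumulating metrics across a distributed graph typically means an all-reduce average, so every process logs the same consensus value instead of only its own shard's. A minimal sketch with torch.distributed, not the repo's exact code:

```python
import torch
import torch.distributed as dist

def consensus_metric(value: float, device='cuda') -> float:
    # Average a scalar metric (loss, grad_norm, ...) over all ranks.
    # Falls back to the local value when not running distributed.
    if not (dist.is_available() and dist.is_initialized()):
        return value
    t = torch.tensor([value], dtype=torch.float32, device=device)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    return (t / dist.get_world_size()).item()
```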
James Betker
79e5692388 Fix distributed bug 2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef Make deterministic sampler work with distributed training & microbatches 2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3 Cap grad booster 2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d Add a deterministic timestep sampler, with provisions to employ it every n steps 2022-03-04 10:40:14 -07:00
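A deterministic timestep sampler makes diffusion losses comparable over training: uniform random timestep draws add a lot of variance, so periodically evaluating on a fixed, evenly spaced set gives a stable signal. A hypothetical sketch of the "every n steps" provision (names and interface assumed):

```python
import torch

class PeriodicDeterministicSampler:
    def __init__(self, num_timesteps, deterministic_every=100):
        self.num_timesteps = num_timesteps
        self.deterministic_every = deterministic_every
        self.calls = 0

    def sample(self, batch_size, device):
        self.calls += 1
        if self.calls % self.deterministic_every == 0:
            # Fixed, evenly spaced timesteps: identical every time they are
            # used, so the resulting losses can be compared across training.
            return torch.linspace(0, self.num_timesteps - 1, batch_size,
                                  device=device).long()
        # Normal behavior: uniform random timesteps.
        return torch.randint(0, self.num_timesteps, (batch_size,), device=device)
```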
James Betker
f490eaeba7 Shuffle optimizer states back and forth between cpu memory during steps 2022-03-04 10:38:51 -07:00
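Shuffling optimizer state to CPU is a memory/bandwidth trade: Adam-style optimizers keep one or two extra buffers per parameter, so parking them in CPU memory between steps frees significant GPU memory at the cost of PCIe transfers. A minimal sketch of the idea:

```python
import torch

def move_optimizer_state(optimizer, device):
    # Move every tensor in the optimizer state (e.g. Adam's exp_avg and
    # exp_avg_sq buffers) to `device`.
    for state in optimizer.state.values():
        for k, v in state.items():
            if torch.is_tensor(v):
                state[k] = v.to(device, non_blocking=True)

# Hypothetical loop usage:
#   move_optimizer_state(opt, 'cuda'); opt.step(); move_optimizer_state(opt, 'cpu')
```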
James Betker
3c242403f5 adjust location of pre-optimizer step so I can visualize the new grad norms 2022-03-04 08:56:42 -07:00
James Betker
58019a2ce3 audio diffusion fid updates 2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f w2v_matcher mods 2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c Add a base-wrapper 2022-03-03 21:52:28 -07:00
James Betker
6873ad6660 Support functionality 2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce Add experimental gradient boosting into tts7 2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3 asdf 2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428 Get rid of unused codes in vq 2022-03-03 13:41:38 -07:00
James Betker
619da9ea28 Get rid of discretization loss 2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d asdf 2022-03-01 21:41:31 -07:00
James Betker
70fa780edb Add mechanism to export grad norms 2022-03-01 20:19:52 -07:00
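Exporting gradient norms is useful for spotting exploding or vanishing layers before they show up in the loss. A sketch of one way to collect them (the repo's actual exporter may differ):

```python
import torch

def collect_grad_norms(model):
    # Per-parameter L2 gradient norms plus the global norm
    # (sqrt of the sum of squared per-parameter norms).
    norms = {name: p.grad.norm().item()
             for name, p in model.named_parameters() if p.grad is not None}
    norms['global'] = torch.tensor(list(norms.values())).norm().item()
    return norms
```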
James Betker
d9f8f92840 Codified fp16 2022-03-01 15:46:04 -07:00
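For context, the standard PyTorch fp16 recipe pairs autocast with a gradient scaler; a generic sketch, not necessarily how this repo codified it:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def fp16_step(model, batch, optimizer, loss_fn):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(batch))
    scaler.scale(loss).backward()   # scale to keep small grads out of fp16 underflow
    scaler.step(optimizer)          # unscales, skips the step if grads are inf/NaN
    scaler.update()
    return loss.detach()
```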
James Betker
45ab444c04 Rework minicoder to always checkpoint 2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac Implement guidance-free diffusion in eval
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516 Implement conditioning-free diffusion at the eval level 2022-02-27 15:11:42 -07:00
James Betker
436fe24822 Add conditioning-free guidance 2022-02-27 15:00:06 -07:00
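Conditioning-free guidance (elsewhere called classifier-free guidance) runs the diffusion model twice, with and without conditioning, and extrapolates toward the conditioned prediction. A sketch of the core formula; the model and conditioning interfaces are placeholders:

```python
def guided_eps(model, x_t, t, cond, uncond, guidance_scale=2.0):
    # eps = eps_uncond + w * (eps_cond - eps_uncond); w=1 recovers the
    # plain conditional prediction, w>1 strengthens the conditioning.
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, uncond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```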
James Betker
ac920798bb misc 2022-02-27 14:49:11 -07:00
James Betker
ba155e4e2f script for uploading models to the HF hub 2022-02-27 14:48:38 -07:00
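A minimal sketch of uploading a checkpoint with the huggingface_hub client; the repo_id and file names are placeholders, and the committed script may use a different API:

```python
from huggingface_hub import HfApi

api = HfApi()
api.create_repo(repo_id='your-user/your-model', exist_ok=True)
api.upload_file(
    path_or_fileobj='model.pth',
    path_in_repo='model.pth',
    repo_id='your-user/your-model',
)
```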
James Betker
dbc74e96b2 w2v_matcher 2022-02-27 14:48:23 -07:00
James Betker
42879d7296 w2v_wrapper ramping dropout mode
this is an experimental feature that needs some testing
2022-02-27 14:47:51 -07:00
James Betker
c375287db9 Re-instate autocasting 2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e get rid of autocasting in tts7 2022-02-24 21:53:51 -07:00
James Betker
f458f5d8f1 abort early if losses reach nan too much, and save the model 2022-02-24 20:55:30 -07:00
James Betker
18dc62453f Don't step if NaN losses are encountered. 2022-02-24 17:45:08 -07:00
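Taken together with "abort early if losses reach nan too much" above, the NaN handling amounts to: skip the optimizer step on a non-finite loss, and after too many in a row, checkpoint the model and bail. A hypothetical sketch combining both commits:

```python
import torch

class NanGuard:
    def __init__(self, model, optimizer, max_nans=10, save_path='rescue.pth'):
        self.model, self.optimizer = model, optimizer
        self.max_nans, self.save_path = max_nans, save_path
        self.nan_count = 0

    def step(self, loss):
        if not torch.isfinite(loss):
            # Don't step on NaN/inf losses; discard any gradients.
            self.nan_count += 1
            self.optimizer.zero_grad(set_to_none=True)
            if self.nan_count >= self.max_nans:
                # Losses reached NaN too often: save the model and abort.
                torch.save(self.model.state_dict(), self.save_path)
                raise RuntimeError('Aborting after repeated NaN losses.')
            return
        self.nan_count = 0
        loss.backward()
        self.optimizer.step()
        self.optimizer.zero_grad(set_to_none=True)
```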
James Betker
ea500ad42a Use clustered masking in udtts7 2022-02-24 07:57:26 -07:00