Commit Graph

219 Commits

Author | SHA1 | Message | Date
James Betker
c5000420f6 more arbitrary fixes 2022-03-17 17:45:44 -06:00
James Betker
c14fc003ed flat diffusion 2022-03-17 17:45:27 -06:00
James Betker
428911cd4d flat diffusion network 2022-03-17 10:53:56 -06:00
James Betker
bf08519d71 fixes 2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f More cleaning 2022-03-16 12:05:56 -06:00
James Betker
d186414566 More spring cleaning 2022-03-16 12:04:00 -06:00
James Betker
8b376e63d9 More improvements 2022-03-16 10:16:34 -06:00
James Betker
54202aa099 fix mel normalization 2022-03-16 09:26:55 -06:00
James Betker
8437bb0c53 fixes 2022-03-15 23:52:48 -06:00
James Betker
3f244f6a68 add mel_norm to std injector 2022-03-15 22:16:59 -06:00
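
The two mel-normalization commits above likely amount to dividing each mel bin by a precomputed norm before the spectrogram reaches the model. A minimal sketch of such a normalization injector; the class name, the `norms_path` file, and the tensor layout are illustrative assumptions, not this repo's actual injector API:

```python
import torch

class MelNormInjector:
    """Hypothetical sketch: divide each mel bin by a precomputed per-bin norm."""
    def __init__(self, norms_path: str):
        # Assumed: a saved 1-D tensor of shape (n_mel_bins,) holding the norms.
        self.mel_norms = torch.load(norms_path)

    def __call__(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, n_mel_bins, frames); broadcast norms over batch and frames.
        return mel / self.mel_norms.to(mel.device).view(1, -1, 1)
```
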
James Betker
f563a8dd41 fixes 2022-03-15 21:43:00 -06:00
James Betker
1e3a8554a1 updates to audio_diffusion_fid 2022-03-15 11:35:09 -06:00
James Betker
7929fd89de Refactor audio-style models into the audio folder 2022-03-15 11:06:25 -06:00
James Betker
e045fb0ad7 fix clip grad norm with scaler 2022-03-13 16:28:23 -06:00
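
This fix is almost certainly the standard PyTorch AMP pattern: gradients must be unscaled before clipping, or `clip_grad_norm_` measures loss-scale-inflated values and the threshold is meaningless. A minimal sketch with generic names, not the repo's trainer code:

```python
import torch
from torch.cuda.amp import GradScaler

def step_with_clipping(model, optimizer, scaler: GradScaler, loss, max_norm=1.0):
    scaler.scale(loss).backward()
    # Unscale first, otherwise the clip threshold applies to scaled gradients.
    scaler.unscale_(optimizer)
    grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    scaler.step(optimizer)   # internally skips the update if grads are inf/nan
    scaler.update()
    optimizer.zero_grad()
    return grad_norm
```
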
James Betker
08599b4c75 fix random_audio_crop injector 2022-03-12 20:42:29 -07:00
James Betker
c4e4cf91a0 add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric 2022-03-08 15:53:27 -07:00
James Betker
3e5da71b16 add grad scaler scale to metrics 2022-03-08 15:52:42 -07:00
James Betker
d1dc8dbb35 Support tts9 2022-03-05 20:14:36 -07:00
James Betker
93a3302819 Push training_state data to CPU memory before saving it
For whatever reason, keeping this on GPU memory just doesn't work.
When you load it, it consumes a large amount of GPU memory and that
utilization doesn't go away. Saving to CPU should fix this.
2022-03-04 17:57:33 -07:00
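
The underlying issue: `torch.save` pickles tensors together with their device, so a checkpoint written from GPU-resident state reloads onto the GPU and holds that memory. Moving everything to CPU first avoids it. A minimal sketch of the idea; the recursive helper is illustrative, not the repo's code:

```python
import torch

def to_cpu(obj):
    # Recursively move tensors to CPU so the checkpoint holds no CUDA storage.
    if torch.is_tensor(obj):
        return obj.detach().cpu()
    if isinstance(obj, dict):
        return {k: to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(to_cpu(v) for v in obj)
    return obj

def save_training_state(state, path):
    torch.save(to_cpu(state), path)
```

Loading with `torch.load(path, map_location='cpu')` addresses the same symptom from the read side.
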
James Betker
6000580e2e df 2022-03-04 13:47:00 -07:00
James Betker
382681a35d Load diffusion_fid DVAE into the correct cuda device 2022-03-04 13:42:14 -07:00
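
Loading into the correct device is usually a one-argument fix: `map_location` remaps every stored tensor onto the calling process's GPU instead of whichever device the checkpoint was saved from. Sketch; the path and rank variable are assumptions:

```python
import torch

local_rank = 0  # in distributed runs, this would be the process's device index
state = torch.load('dvae.pth', map_location=torch.device(f'cuda:{local_rank}'))
```
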
James Betker
e1052a5e32 Move log consensus to train for efficiency 2022-03-04 13:41:32 -07:00
James Betker
ce6dfdf255 Distributed "fixes" 2022-03-04 12:46:41 -07:00
James Betker
3ff878ae85 Accumulate loss & grad_norm metrics from all entities within a distributed graph 2022-03-04 12:01:16 -07:00
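
The standard way to do this accumulation is an all-reduce across ranks, so every process logs the same averaged value instead of only its own shard's. A minimal sketch, assuming `torch.distributed` is the mechanism in play:

```python
import torch
import torch.distributed as dist

def reduce_metric(value: torch.Tensor) -> torch.Tensor:
    # Average a scalar metric over all ranks; a no-op outside distributed runs.
    if dist.is_available() and dist.is_initialized():
        value = value.clone()
        dist.all_reduce(value, op=dist.ReduceOp.SUM)
        value /= dist.get_world_size()
    return value
```
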
James Betker
f87e10ffef Make deterministic sampler work with distributed training & microbatches 2022-03-04 11:50:50 -07:00
James Betker
2d1cb83c1d Add a deterministic timestep sampler, with provisions to employ it every n steps 2022-03-04 10:40:14 -07:00
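
Diffusion training normally draws timesteps uniformly at random, which makes the loss noisy from step to step; a deterministic sampler spreads timesteps evenly across the batch so losses from those steps are comparable over time. A sketch of the idea; the class and the wiring are illustrative:

```python
import torch

class DeterministicTimestepSampler:
    """Covers [0, num_timesteps) evenly across the batch, same spread every call."""
    def __init__(self, num_timesteps: int):
        self.num_timesteps = num_timesteps

    def sample(self, batch_size: int, device) -> torch.Tensor:
        return torch.linspace(0, self.num_timesteps - 1, batch_size,
                              device=device).long()

# "Employ it every n steps" would look something like this (assumed trainer loop):
#   t = det.sample(b, dev) if step % n == 0 else torch.randint(0, T, (b,), device=dev)
```
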
James Betker
f490eaeba7 Shuffle optimizer states back and forth between cpu memory during steps 2022-03-04 10:38:51 -07:00
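
Adam-style optimizers keep two FP32 state tensors per parameter, which for large models rivals the model itself in size; parking that state in CPU memory between steps frees GPU memory at the cost of PCIe traffic. A minimal sketch, not the repo's implementation:

```python
import torch

def move_optimizer_state(optimizer, device):
    # Relocate exp_avg / exp_avg_sq (and any other tensor state) per parameter.
    for state in optimizer.state.values():
        for k, v in state.items():
            if torch.is_tensor(v):
                state[k] = v.to(device)

# Assumed usage inside the training loop:
#   move_optimizer_state(opt, 'cuda'); opt.step(); move_optimizer_state(opt, 'cpu')
```
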
James Betker
3c242403f5 adjust location of pre-optimizer step so I can visualize the new grad norms 2022-03-04 08:56:42 -07:00
James Betker
58019a2ce3 audio diffusion fid updates 2022-03-03 21:53:32 -07:00
James Betker
6873ad6660 Support functionality 2022-03-03 21:52:16 -07:00
James Betker
70fa780edb Add mechanism to export grad norms 2022-03-01 20:19:52 -07:00
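
One plausible shape for such a mechanism: walk the named parameters after backward and collect per-parameter L2 norms for the metrics logger. Illustrative sketch:

```python
import torch

def collect_grad_norms(model: torch.nn.Module) -> dict:
    # Per-parameter gradient L2 norms, keyed for a metrics logger.
    return {f'grad_norm/{name}': p.grad.detach().norm(2).item()
            for name, p in model.named_parameters() if p.grad is not None}
```
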
James Betker
db0c3340ac Implement guidance-free diffusion in eval
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516 Implement conditioning-free diffusion at the eval level 2022-02-27 15:11:42 -07:00
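
"Guidance-free"/"conditioning-free" in the two commits above reads as classifier-free guidance at sampling time: run the denoiser with and without conditioning and extrapolate from the unconditional output toward the conditional one. The standard formulation, hedged in that this repo's signatures surely differ:

```python
import torch

def cfg_epsilon(model, x_t, t, cond, guidance_scale: float):
    # Classifier-free guidance; scale == 1.0 recovers plain conditional sampling.
    eps_cond = model(x_t, t, cond)
    eps_uncond = model(x_t, t, None)   # assumption: the model accepts cond=None
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```
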
James Betker
ac920798bb misc 2022-02-27 14:49:11 -07:00
James Betker
f458f5d8f1 abort early if losses reach nan too much, and save the model 2022-02-24 20:55:30 -07:00
James Betker
18dc62453f Don't step if NaN losses are encountered. 2022-02-24 17:45:08 -07:00
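
The two NaN-safety commits above compose naturally: drop the gradients and skip the step whenever the loss is non-finite, and if that keeps happening, checkpoint and bail. A sketch of that policy; the threshold and save hook are assumptions:

```python
import torch

class NanGuard:
    """Skip steps on non-finite losses; save and abort if they persist."""
    def __init__(self, max_consecutive: int = 10):   # assumed threshold
        self.max_consecutive = max_consecutive
        self.streak = 0

    def step(self, loss, optimizer, save_fn):
        if not torch.isfinite(loss):
            self.streak += 1
            optimizer.zero_grad()      # discard the poisoned gradients
            if self.streak >= self.max_consecutive:
                save_fn()              # keep a checkpoint for post-mortem debugging
                raise RuntimeError('Loss non-finite too many steps in a row.')
            return
        self.streak = 0
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```
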
James Betker
7c17c8e674 gurgl 2022-02-23 21:28:24 -07:00
James Betker
81017d9696 put frechet_distance on cuda 2022-02-23 21:21:13 -07:00
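
The Frechet distance being moved onto CUDA is, mathematically, the FID formula between Gaussians fitted to two feature sets: d² = |μ₁ − μ₂|² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). The matrix square root dominates the cost, and it can run on the GPU via symmetric eigendecompositions. A sketch that is numerically equivalent, though not this repo's exact code:

```python
import torch

def frechet_distance(mu1, sigma1, mu2, sigma2):
    def sqrtm_psd(mat):
        # Square root of a symmetric PSD matrix via eigendecomposition (GPU-friendly).
        vals, vecs = torch.linalg.eigh(mat)
        return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.T
    # Tr((S1 S2)^1/2) == Tr((S1^1/2 S2 S1^1/2)^1/2), and the latter is symmetric PSD.
    s1_half = sqrtm_psd(sigma1)
    covmean = sqrtm_psd(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return diff @ diff + torch.trace(sigma1 + sigma2 - 2.0 * covmean)
```
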
James Betker
9a7bbf33df f 2022-02-23 18:03:38 -07:00
James Betker
b7319ab518 Support vocoder type diffusion in audio_diffusion_fid 2022-02-23 17:25:16 -07:00
James Betker
58f6c9805b adf 2022-02-22 23:12:58 -07:00
James Betker
03752c1cd6 Report NaN 2022-02-22 23:09:37 -07:00
James Betker
6313a94f96 eval: integrate a n-gram language model into decoding 2022-02-21 19:12:34 -07:00
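
Folding an n-gram LM into decoding is typically shallow fusion: each hypothesis is scored as acoustic log-probability plus a weighted LM log-probability plus a length bonus that offsets the LM's bias toward short outputs. A sketch of the scoring; `lm.score` stands in for an n-gram query (KenLM's Python binding exposes a call of this shape), and the weights are assumed, not the repo's:

```python
ALPHA, BETA = 0.5, 1.0   # assumed LM weight and length-bonus weight

def hypothesis_score(acoustic_logprob: float, text: str, lm) -> float:
    # Shallow fusion: rescore each beam-search hypothesis with the n-gram LM.
    return acoustic_logprob + ALPHA * lm.score(text) + BETA * len(text.split())
```
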
James Betker
7b12799370 Reformat mel_text_clip for use in eval 2022-02-19 20:37:26 -07:00
James Betker
bcba65c539 DataParallel Fix 2022-02-19 20:36:35 -07:00
James Betker
34001ad765 et 2022-02-18 18:52:33 -07:00
James Betker
a813fbed9c Update to evaluator 2022-02-17 17:30:33 -07:00
James Betker
79e8f36d30 Convert CLIP models into new folder 2022-02-15 20:53:07 -07:00
James Betker
8f767b8b4f ... 2022-02-15 07:08:17 -07:00
James Betker
29e07913a8 Fix 2022-02-15 06:58:11 -07:00