Commit Graph

1943 Commits

Author SHA1 Message Date
James Betker  7c82e18c6c  darn mpi  2022-05-17 17:16:09 -06:00
James Betker  88ec0512f7  Scale losses  2022-05-17 17:12:20 -06:00
James Betker  a6397ce84a  Fix incorrect projections  2022-05-17 16:53:52 -06:00
James Betker  c37fc3b4ed  m2v grad norm groups  2022-05-17 16:29:36 -06:00
James Betker  c1bdb4f9a1  degrade gumbel softmax over time  2022-05-17 16:23:04 -06:00
James Betker  3853f37257  stable layernorm  2022-05-17 16:07:03 -06:00
James Betker  6a2c29f596  Fix inverted logic  2022-05-17 15:39:07 -06:00
James Betker  519151d83f  m2v  2022-05-17 15:37:59 -06:00
James Betker  d1de94d75c  Stash mel2vec work (gonna throw it all away..)  2022-05-17 12:35:01 -06:00
James Betker  8202b9f39c  some stuff  2022-05-15 21:50:54 -06:00
James Betker  ab5acead0e  add exp loss for diffusion models  2022-05-15 21:50:38 -06:00
James Betker  ee218ab9b7  uv3  2022-05-13 17:57:47 -06:00
James Betker  eb64d18075  Fix phoneme tokenizer  2022-05-13 17:56:26 -06:00
James Betker  51f8c1bced  phonetic dataset  2022-05-12 11:57:28 -06:00
James Betker  3d7e2a2846  fix collection  2022-05-11 21:50:05 -06:00
James Betker  ba2b71796a  k  2022-05-11 21:20:06 -06:00
James Betker  efa737b685  re-add distributed collect to clvp  2022-05-11 21:14:18 -06:00
James Betker  545453077e  uv3  2022-05-09 15:36:22 -06:00
James Betker  96a5cc66ee  uv3  2022-05-09 15:35:51 -06:00
James Betker  b42b4e18de  clean up unified voice  2022-05-09 14:45:49 -06:00
              - remove unused code
              - fix inference model to use the terms "prior" and "posterior" to properly define the modeling order (they were inverted before)
              - default some settings I never intend to change in the future
James Betker  9118f58849  uncomment music projector..  2022-05-09 09:19:26 -06:00
James Betker  74dd095326  a  2022-05-08 18:54:09 -06:00
James Betker  1177c35dec  music fid updates  2022-05-08 18:49:39 -06:00
James Betker  7812c23c7a  revert fill_gaps back to old masking behavior  2022-05-08 00:10:19 -06:00
James Betker  58ed27d7a8  new gap_filler  2022-05-07 12:44:23 -06:00
James Betker  6c8032b4be  more work  2022-05-06 21:56:49 -06:00
James Betker  f541610256  contrastive_audio  2022-05-06 16:37:22 -06:00
James Betker  79543e5488  Simpler form of the wavegen model  2022-05-06 16:37:04 -06:00
James Betker  d8925ccde5  few things with gap filling  2022-05-06 14:33:44 -06:00
James Betker  b83b53cf84  norm mel  2022-05-06 00:49:54 -06:00
James Betker  b13d983c24  and mel_head  2022-05-06 00:25:27 -06:00
James Betker  d5fb79564a  remove mel_pred  2022-05-06 00:24:05 -06:00
James Betker  e9bb692490  fixed aligned_latent  2022-05-06 00:20:21 -06:00
James Betker  1609101a42  musical gap filler  2022-05-05 16:47:08 -06:00
James Betker  d66ab2d28c  Remove unused waveform_gens  2022-05-04 21:06:54 -06:00
James Betker  47662b9ec5  some random crap  2022-05-04 20:29:23 -06:00
James Betker  6655f7845a  add pixel shuffling for 1d cases  2022-05-04 08:03:09 -06:00
James Betker  c42c53e75a  Add a trainable network for converting a normal distribution into a latent space  2022-05-02 09:47:30 -06:00
James Betker  e402089556  abstractify  2022-05-02 00:11:26 -06:00
James Betker  ab219fbefb  output variance  2022-05-02 00:10:33 -06:00
James Betker  3b074aac34  add checkpointing  2022-05-02 00:07:42 -06:00
James Betker  ae5f934ea1  diffwave  2022-05-02 00:05:04 -06:00
James Betker  f4254609c1  MDF  2022-05-01 23:04:56 -06:00
              around and around in circles........
James Betker  b712d3b72b  break out get_conditioning_latent from unified_voice  2022-05-01 23:04:44 -06:00
James Betker  afa2df57c9  gen3  2022-04-30 10:41:38 -06:00
James Betker  64c7582bf5  full pipeline  2022-04-28 22:47:26 -06:00
James Betker  8aa6651fc7  fix surrogate loss return in waveform_gen2  2022-04-28 10:10:11 -06:00
James Betker  e208d9fb80  gate augmentations with a flag  2022-04-28 10:09:22 -06:00
James Betker  3f67cb2023  music diffusion fid adjustments  2022-04-28 10:08:55 -06:00
James Betker  ab8176b217  audio prep misc  2022-04-28 10:08:38 -06:00
James Betker  f02b01bd9d  reverse univnet classifier  2022-04-20 21:37:55 -06:00
James Betker  9df85c902e  New gen2  2022-04-20 21:37:34 -06:00
              Which is basically a autoencoder with a giant diffusion appendage attached
James Betker  b1c2c48720  music diffusion fid  2022-04-20 00:28:03 -06:00
James Betker  084b1c1527  file splitter  2022-04-20 00:27:49 -06:00
James Betker  b4549eed9f  uv2 fix  2022-04-20 00:27:38 -06:00
James Betker  24fdafd855  fix2  2022-04-20 00:03:29 -06:00
James Betker  0af0051399  fix  2022-04-20 00:01:57 -06:00
James Betker  419f4d37bd  gen2 music  2022-04-19 23:38:37 -06:00
James Betker  c85ab738c5  paired fix  2022-04-16 23:41:57 -06:00
James Betker  8fe0dff33c  support tts typing  2022-04-16 23:36:57 -06:00
James Betker  48cb6a5abd  misc  2022-04-16 20:28:04 -06:00
James Betker  147478a148  cvvp  2022-04-16 20:27:46 -06:00
James Betker  546ecd5aeb  music!  2022-04-15 21:21:37 -06:00
James Betker  254357724d  gradprop  2022-04-15 09:37:20 -06:00
James Betker  fbf1f4f637  update  2022-04-15 09:34:44 -06:00
James Betker  82aad335ba  add distributued logic for loss  2022-04-15 09:31:48 -06:00
James Betker  efe12cb816  Update clvp to add masking probabilities in conditioning and to support code inputs  2022-04-15 09:11:23 -06:00
James Betker  3cad1b8114  more fixes  2022-04-11 15:18:44 -06:00
James Betker  6dea7da7a8  another fix  2022-04-11 12:29:43 -06:00
James Betker  f2c172291f  fix audio_diffusion_fid for autoregressive latent inputs  2022-04-11 12:08:15 -06:00
James Betker  8ea5c307fb  Fixes for training the diffusion model on autoregressive inputs  2022-04-11 11:02:44 -06:00
James Betker  a3622462c1  Change latent_conditioner back  2022-04-11 09:00:13 -06:00
James Betker  03d0b90bda  fixes  2022-04-10 21:02:12 -06:00
James Betker  19ca5b26c1  Remove flat0 and move it into flat  2022-04-10 21:01:59 -06:00
James Betker  81c952a00a  undo relative  2022-04-08 16:32:52 -06:00
James Betker  944b4c3335  more undos  2022-04-08 16:31:08 -06:00
James Betker  032983e2ed  fix bug and allow position encodings to be trained separately from the rest of the model  2022-04-08 16:26:01 -06:00
James Betker  09ab1aa9bc  revert rotary embeddings work  2022-04-08 16:18:35 -06:00
              I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained
James Betker  2fb9ffb0aa  Align autoregressive text using start and stop tokens  2022-04-08 09:41:59 -06:00
James Betker  628569af7b  Another fix  2022-04-08 09:41:18 -06:00
James Betker  423293e518  fix xtransformers bug  2022-04-08 09:12:46 -06:00
James Betker  048f6f729a  remove lightweight_gan  2022-04-07 23:12:08 -07:00
James Betker  e634996a9c  autoregressive_codegen: support key_value caching for faster inference  2022-04-07 23:08:46 -07:00
James Betker  d05e162f95  reformat x_transformers  2022-04-07 23:08:03 -07:00
James Betker  7c578eb59b  Fix inference in new autoregressive_codegen  2022-04-07 21:22:46 -06:00
James Betker  3f8d7955ef  unified_voice with rotary embeddings  2022-04-07 20:11:14 -06:00
James Betker  573e5552b9  CLVP v1  2022-04-07 20:10:57 -06:00
James Betker  71b73db044  clean up  2022-04-07 11:34:10 -06:00
James Betker  6fc4f49e86  some dumb stuff  2022-04-07 11:32:34 -06:00
James Betker  e6387c7613  Fix eval logic to not run immediately  2022-04-07 11:29:57 -06:00
James Betker  305dc95e4b  cg2  2022-04-06 21:24:36 -06:00
James Betker  e011166dd6  autoregressive_codegen r3  2022-04-06 21:04:23 -06:00
James Betker  33ef17e9e5  fix context  2022-04-06 00:45:42 -06:00
James Betker  37bdfe82b2  Modify x_transformers to do checkpointing and use relative positional biases  2022-04-06 00:35:29 -06:00
James Betker  09879b434d  bring in x_transformers  2022-04-06 00:21:58 -06:00
James Betker  3d916e7687  Fix evaluation when using multiple batch sizes  2022-04-05 07:51:09 -06:00
James Betker  572d137589  track iteration rate  2022-04-04 12:33:25 -06:00
James Betker  4cdb0169d0  update training data encountered when using force_start_step  2022-04-04 12:25:00 -06:00
James Betker  cdd12ff46c  Add code validation to autoregressive_codegen  2022-04-04 09:51:41 -06:00
James Betker  99de63a922  man I'm really on it tonight....  2022-04-02 22:01:33 -06:00
James Betker  a4bdc80933  moikmadsf  2022-04-02 21:59:50 -06:00
James Betker  1cf20b7337  sdfds  2022-04-02 21:58:09 -06:00
James Betker  b6afc4d542  dsfa  2022-04-02 21:57:00 -06:00
James Betker  4c6bdfc9e2  get rid of relative position embeddings, which do not work with DDP & checkpointing  2022-04-02 21:55:32 -06:00
James Betker  b6d62aca5d  add inference model on top of codegen  2022-04-02 21:25:10 -06:00
James Betker  2b6ff09225  autoregressive_codegen v1  2022-04-02 15:07:39 -06:00
James Betker  00767219fc  undo latent converter change  2022-04-01 20:46:27 -06:00
James Betker  55c86e02c7  Flat fix  2022-04-01 19:13:33 -06:00
James Betker  8623c51902  fix bug  2022-04-01 16:11:34 -06:00
James Betker  035bcd9f6c  fwd fix  2022-04-01 16:03:07 -06:00
James Betker  f6a8b0a5ca  prep flat0 for feeding from autoregressive_latent_converter  2022-04-01 15:53:45 -06:00
James Betker  3e97abc8a9  update flat0 to break out timestep-independent inference steps  2022-04-01 14:38:53 -06:00
James Betker  a6181a489b  Fix loss gapping caused by poor gradients into mel_pred  2022-03-26 22:49:14 -06:00
James Betker  0070867d0f  inference script for diffusion image models  2022-03-26 22:48:24 -06:00
James Betker  1feade23ff  support x-transformers in text_voice_clip and support relative positional embeddings  2022-03-26 22:48:10 -06:00
James Betker  9b90472e15  feed direct inputs into gd  2022-03-26 08:36:19 -06:00
James Betker  6909f196b4  make code pred returns optional  2022-03-26 08:33:30 -06:00
James Betker  2a29a71c37  attempt to force meaningful codes by adding a surrogate loss  2022-03-26 08:31:40 -06:00
James Betker  45804177b8  more stuff  2022-03-25 00:03:18 -06:00
James Betker  d4218d8443  mods  2022-03-24 23:31:20 -06:00
James Betker  9c79fec734  update adf  2022-03-24 21:20:29 -06:00
James Betker  07731d5491  Fix ET  2022-03-24 21:20:22 -06:00
James Betker  a15970dd97  disable checkpointing in conditioning encoder  2022-03-24 11:49:04 -06:00
James Betker  cc5fc91562  flat0 work  2022-03-24 11:46:53 -06:00
James Betker  b0d2827fad  flat0  2022-03-24 11:30:40 -06:00
James Betker  8707a3e0c3  drop full layers in layerdrop, not half layers  2022-03-23 17:15:08 -06:00
James Betker  57da6d0ddf  more simplifications  2022-03-22 11:46:03 -06:00
James Betker  f3f391b372  undo sandwich  2022-03-22 11:43:24 -06:00
James Betker  927731f3b4  tts9: fix position embeddings snafu  2022-03-22 11:41:32 -06:00
James Betker  536511fc4b  unified_voice: relative position encodings  2022-03-22 11:41:13 -06:00
James Betker  be5f052255  misc  2022-03-22 11:40:56 -06:00
James Betker  963f0e9cee  fix unscaler  2022-03-22 11:40:02 -06:00
James Betker  5405ce4363  fix flat  2022-03-22 11:39:39 -06:00
James Betker  e47a759ed8  .......  2022-03-21 17:22:35 -06:00
James Betker  cc4c9faf9a  resolve more issues  2022-03-21 17:20:05 -06:00
James Betker  3692c4cae3  map vocoder into cpu  2022-03-21 17:10:57 -06:00
James Betker  9e97cd800c  take the conditioning mean rather than the first element  2022-03-21 16:58:03 -06:00
James Betker  9c7598dc9a  fix conditioning_free signal  2022-03-21 15:29:17 -06:00
James Betker  2a65c982ca  dont double nest checkpointing  2022-03-21 15:27:51 -06:00
James Betker  723f324eda  Make it even better  2022-03-21 14:50:59 -06:00
James Betker  e735d8e1fa  unified_voice fixes  2022-03-21 14:44:00 -06:00
James Betker  1ad18d29a8  Flat fixes  2022-03-21 14:43:52 -06:00
James Betker  26dcf7f1a2  r2 of the flat diffusion  2022-03-21 11:40:43 -06:00
James Betker  c5000420f6  more arbitrary fixes  2022-03-17 17:45:44 -06:00
James Betker  c14fc003ed  flat diffusion  2022-03-17 17:45:27 -06:00
James Betker  428911cd4d  flat diffusion network  2022-03-17 10:53:56 -06:00
James Betker  bf08519d71  fixes  2022-03-17 10:53:39 -06:00
James Betker  95ea0a592f  More cleaning  2022-03-16 12:05:56 -06:00
James Betker  d186414566  More spring cleaning  2022-03-16 12:04:00 -06:00
James Betker  735f6e4640  Move gen_similarities and rename  2022-03-16 11:59:34 -06:00
James Betker  8b376e63d9  More improvements  2022-03-16 10:16:34 -06:00
James Betker  54202aa099  fix mel normalization  2022-03-16 09:26:55 -06:00
James Betker  8437bb0c53  fixes  2022-03-15 23:52:48 -06:00
James Betker  3f244f6a68  add mel_norm to std injector  2022-03-15 22:16:59 -06:00
James Betker  0fc877cbc8  tts9 fix for alignment size  2022-03-15 21:43:14 -06:00
James Betker  f563a8dd41  fixes  2022-03-15 21:43:00 -06:00
James Betker  b754058018  Update wav2vec2 wrapper  2022-03-15 11:35:38 -06:00
James Betker  1e3a8554a1  updates to audio_diffusion_fid  2022-03-15 11:35:09 -06:00
James Betker  9c6f776980  Add univnet vocoder  2022-03-15 11:34:51 -06:00
James Betker  7929fd89de  Refactor audio-style models into the audio folder  2022-03-15 11:06:25 -06:00
James Betker  f95d3d2b82  move waveglow to audio/vocoders  2022-03-15 11:03:07 -06:00
James Betker  0419a64107  misc  2022-03-15 10:36:34 -06:00
James Betker  bb03cbb9fc  composable initial checkin  2022-03-15 10:35:40 -06:00
James Betker  86b0d76fb9  tts8 (incomplete, may be removed)  2022-03-15 10:35:31 -06:00
James Betker  eecbc0e678  Use wider spectrogram when asked  2022-03-15 10:35:11 -06:00
James Betker  9767260c6c  tacotron stft - loosen bounds restrictions and clip  2022-03-15 10:31:26 -06:00
James Betker  f8631ad4f7  Updates to support inputting MELs into the conditioning encoder  2022-03-14 17:31:42 -06:00
James Betker  e045fb0ad7  fix clip grad norm with scaler  2022-03-13 16:28:23 -06:00
James Betker  22c67ce8d3  tts9 mods  2022-03-13 10:25:55 -06:00
James Betker  08599b4c75  fix random_audio_crop injector  2022-03-12 20:42:29 -07:00
James Betker  8f130e2b3f  add scale_shift_norm back to tts9  2022-03-12 20:42:13 -07:00
James Betker  9bbbe26012  update audio_with_noise  2022-03-12 20:41:47 -07:00
James Betker  e754c4fbbc  sweep update  2022-03-12 15:33:00 -07:00
James Betker  73bfd4a86d  another tts9 update  2022-03-12 15:17:06 -07:00
James Betker  0523777ff7  add efficient config to tts9  2022-03-12 15:10:35 -07:00
James Betker  896accb71f  data and prep improvements  2022-03-12 15:10:11 -07:00
James Betker  1e87b934db  potentially average conditioning inputs  2022-03-10 20:37:41 -07:00
James Betker  e6a95f7c11  Update tts9: Remove torchscript provisions and add mechanism to train solely on codes  2022-03-09 09:43:38 -07:00
James Betker  726e30c4f7  Update noise augmentation dataset to include voices that are appended at the end of another clip.  2022-03-09 09:43:10 -07:00
James Betker  c4e4cf91a0  add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric  2022-03-08 15:53:27 -07:00
James Betker  3e5da71b16  add grad scaler scale to metrics  2022-03-08 15:52:42 -07:00
James Betker  d2bdeb6f20  misc audio support  2022-03-08 15:52:26 -07:00
James Betker  d553808d24  misc  2022-03-08 15:52:16 -07:00
James Betker  7dabc17626  phase2 filter initial commit  2022-03-08 15:51:55 -07:00
James Betker  f56edb2122  minicoder with classifier head: spread out probability mass for 0 predictions  2022-03-08 15:51:31 -07:00
James Betker  29b2921222  move diffusion vocoder  2022-03-08 15:51:05 -07:00
James Betker  94222b0216  tts9 initial commit  2022-03-08 15:50:45 -07:00
James Betker  38fd9fc985  Improve efficiency of audio_with_noise_dataset  2022-03-08 15:50:13 -07:00
James Betker  b3def182de  move processing pipeline to "phase_1"  2022-03-08 15:49:51 -07:00
James Betker  30ddac69aa  lots of bad entries  2022-03-05 23:15:59 -07:00
James Betker  dcf98df0c2  ++  2022-03-05 23:12:34 -07:00
James Betker  64d764ccd7  fml  2022-03-05 23:11:10 -07:00
James Betker  ef63ff84e2  pvd2  2022-03-05 23:08:39 -07:00
James Betker  1a05712764  pvd  2022-03-05 23:05:29 -07:00
James Betker  d1dc8dbb35  Support tts9  2022-03-05 20:14:36 -07:00
James Betker  93a3302819  Push training_state data to CPU memory before saving it  2022-03-04 17:57:33 -07:00
              For whatever reason, keeping this on GPU memory just doesn't work.
              When you load it, it consumes a large amount of GPU memory and that
              utilization doesn't go away. Saving to CPU should fix this.
James Betker  6000580e2e  df  2022-03-04 13:47:00 -07:00
James Betker  382681a35d  Load diffusion_fid DVAE into the correct cuda device  2022-03-04 13:42:14 -07:00
James Betker  e1052a5e32  Move log consensus to train for efficiency  2022-03-04 13:41:32 -07:00
James Betker  ce6dfdf255  Distributed "fixes"  2022-03-04 12:46:41 -07:00
James Betker  3ff878ae85  Accumulate loss & grad_norm metrics from all entities within a distributed graph  2022-03-04 12:01:16 -07:00
James Betker  79e5692388  Fix distributed bug  2022-03-04 11:58:53 -07:00
James Betker  f87e10ffef  Make deterministic sampler work with distributed training & microbatches  2022-03-04 11:50:50 -07:00
James Betker  77c18b53b3  Cap grad booster  2022-03-04 10:40:24 -07:00
James Betker  2d1cb83c1d  Add a deterministic timestep sampler, with provisions to employ it every n steps  2022-03-04 10:40:14 -07:00
James Betker  f490eaeba7  Shuffle optimizer states back and forth between cpu memory during steps  2022-03-04 10:38:51 -07:00
James Betker  3c242403f5  adjust location of pre-optimizer step so I can visualize the new grad norms  2022-03-04 08:56:42 -07:00
James Betker  58019a2ce3  audio diffusion fid updates  2022-03-03 21:53:32 -07:00
James Betker  998c53ad4f  w2v_matcher mods  2022-03-03 21:52:51 -07:00
James Betker  9029e4f20c  Add a base-wrapper  2022-03-03 21:52:28 -07:00
James Betker  6873ad6660  Support functionality  2022-03-03 21:52:16 -07:00
James Betker  6af5d129ce  Add experimental gradient boosting into tts7  2022-03-03 21:51:40 -07:00
James Betker  7ea84f1ac3  asdf  2022-03-03 13:43:44 -07:00
James Betker  3cd6c7f428  Get rid of unused codes in vq  2022-03-03 13:41:38 -07:00
James Betker  619da9ea28  Get rid of discretization loss  2022-03-03 13:36:25 -07:00
James Betker  beb7c8a39d  asdf  2022-03-01 21:41:31 -07:00
James Betker  70fa780edb  Add mechanism to export grad norms  2022-03-01 20:19:52 -07:00
James Betker  d9f8f92840  Codified fp16  2022-03-01 15:46:04 -07:00
James Betker  45ab444c04  Rework minicoder to always checkpoint  2022-03-01 14:09:18 -07:00
James Betker  db0c3340ac  Implement guidance-free diffusion in eval  2022-03-01 11:49:36 -07:00
              And a few other fixes
James Betker  2134f06516  Implement conditioning-free diffusion at the eval level  2022-02-27 15:11:42 -07:00
James Betker  436fe24822  Add conditioning-free guidance  2022-02-27 15:00:06 -07:00
James Betker  ac920798bb  misc  2022-02-27 14:49:11 -07:00
James Betker  ba155e4e2f  script for uploading models to the HF hub  2022-02-27 14:48:38 -07:00
James Betker  dbc74e96b2  w2v_matcher  2022-02-27 14:48:23 -07:00
James Betker  42879d7296  w2v_wrapper ramping dropout mode  2022-02-27 14:47:51 -07:00
              this is an experimental feature that needs some testing
James Betker  c375287db9  Re-instate autocasting  2022-02-25 11:06:18 -07:00
James Betker  34ee32a90e  get rid of autocasting in tts7  2022-02-24 21:53:51 -07:00
James Betker  f458f5d8f1  abort early if losses reach nan too much, and save the model  2022-02-24 20:55:30 -07:00
James Betker  18dc62453f  Don't step if NaN losses are encountered.  2022-02-24 17:45:08 -07:00
James Betker  ea500ad42a  Use clustered masking in udtts7  2022-02-24 07:57:26 -07:00
James Betker  7c17c8e674  gurgl  2022-02-23 21:28:24 -07:00
James Betker  e6824e398f  Load dvae to cpu  2022-02-23 21:21:45 -07:00
James Betker  81017d9696  put frechet_distance on cuda  2022-02-23 21:21:13 -07:00
James Betker  9a7bbf33df  f  2022-02-23 18:03:38 -07:00
James Betker  68726eac74  .  2022-02-23 17:58:07 -07:00
James Betker  b7319ab518  Support vocoder type diffusion in audio_diffusion_fid  2022-02-23 17:25:16 -07:00
James Betker  58f6c9805b  adf  2022-02-22 23:12:58 -07:00
James Betker  03752c1cd6  Report NaN  2022-02-22 23:09:37 -07:00
James Betker  7201b4500c  default text_to_sequence cleaners  2022-02-21 19:14:22 -07:00
James Betker  ba7f54c162  w2v: new inference function  2022-02-21 19:13:03 -07:00
James Betker  896ac029ae  allow continuation of samples encountered  2022-02-21 19:12:50 -07:00
James Betker  6313a94f96  eval: integrate a n-gram language model into decoding  2022-02-21 19:12:34 -07:00
James Betker  af50afe222  pairedvoice: error out if clip is too short  2022-02-21 19:11:10 -07:00
James Betker  38802a96c8  remove timesteps from cond calculation  2022-02-21 12:32:21 -07:00
James Betker  668876799d  unet_diffusion_tts7  2022-02-20 15:22:38 -07:00
James Betker  0872e17e60  unified_voice mods  2022-02-19 20:37:35 -07:00
James Betker  7b12799370  Reformat mel_text_clip for use in eval  2022-02-19 20:37:26 -07:00
James Betker  bcba65c539  DataParallel Fix  2022-02-19 20:36:35 -07:00
James Betker  34001ad765  et  2022-02-18 18:52:33 -07:00