Commit history. All commits below are authored by James Betker.

e9bb692490 | 2022-05-06 00:20:21 -06:00 | fixed aligned_latent
1609101a42 | 2022-05-05 16:47:08 -06:00 | musical gap filler
d66ab2d28c | 2022-05-04 21:06:54 -06:00 | Remove unused waveform_gens
47662b9ec5 | 2022-05-04 20:29:23 -06:00 | some random crap
c42c53e75a | 2022-05-02 09:47:30 -06:00 | Add a trainable network for converting a normal distribution into a latent space
ab219fbefb | 2022-05-02 00:10:33 -06:00 | output variance
3b074aac34 | 2022-05-02 00:07:42 -06:00 | add checkpointing
ae5f934ea1 | 2022-05-02 00:05:04 -06:00 | diffwave
b712d3b72b | 2022-05-01 23:04:44 -06:00 | break out get_conditioning_latent from unified_voice
afa2df57c9 | 2022-04-30 10:41:38 -06:00 | gen3
8aa6651fc7 | 2022-04-28 10:10:11 -06:00 | fix surrogate loss return in waveform_gen2
f02b01bd9d | 2022-04-20 21:37:55 -06:00 | reverse univnet classifier
9df85c902e | 2022-04-20 21:37:34 -06:00 | New gen2
    Which is basically a autoencoder with a giant diffusion appendage attached
b4549eed9f | 2022-04-20 00:27:38 -06:00 | uv2 fix
24fdafd855 | 2022-04-20 00:03:29 -06:00 | fix2
0af0051399 | 2022-04-20 00:01:57 -06:00 | fix
419f4d37bd | 2022-04-19 23:38:37 -06:00 | gen2 music
8fe0dff33c | 2022-04-16 23:36:57 -06:00 | support tts typing
48cb6a5abd | 2022-04-16 20:28:04 -06:00 | misc
147478a148 | 2022-04-16 20:27:46 -06:00 | cvvp
546ecd5aeb | 2022-04-15 21:21:37 -06:00 | music!
254357724d | 2022-04-15 09:37:20 -06:00 | gradprop
fbf1f4f637 | 2022-04-15 09:34:44 -06:00 | update
82aad335ba | 2022-04-15 09:31:48 -06:00 | add distributued logic for loss
efe12cb816 | 2022-04-15 09:11:23 -06:00 | Update clvp to add masking probabilities in conditioning and to support code inputs
8ea5c307fb | 2022-04-11 11:02:44 -06:00 | Fixes for training the diffusion model on autoregressive inputs
a3622462c1 | 2022-04-11 09:00:13 -06:00 | Change latent_conditioner back
03d0b90bda | 2022-04-10 21:02:12 -06:00 | fixes
19ca5b26c1 | 2022-04-10 21:01:59 -06:00 | Remove flat0 and move it into flat
81c952a00a | 2022-04-08 16:32:52 -06:00 | undo relative
944b4c3335 | 2022-04-08 16:31:08 -06:00 | more undos
032983e2ed | 2022-04-08 16:26:01 -06:00 | fix bug and allow position encodings to be trained separately from the rest of the model
09ab1aa9bc | 2022-04-08 16:18:35 -06:00 | revert rotary embeddings work
    I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained
2fb9ffb0aa | 2022-04-08 09:41:59 -06:00 | Align autoregressive text using start and stop tokens
423293e518 | 2022-04-08 09:12:46 -06:00 | fix xtransformers bug
048f6f729a | 2022-04-07 23:12:08 -07:00 | remove lightweight_gan
e634996a9c | 2022-04-07 23:08:46 -07:00 | autoregressive_codegen: support key_value caching for faster inference
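Commit e634996a9c above adds key_value caching to autoregressive_codegen for faster inference. As background, here is a minimal numpy sketch of the general key/value caching idea in autoregressive attention; this is a hypothetical illustration with made-up names, not code from this repository:

```python
import numpy as np

def attend(q, k, v):
    # Scaled dot-product attention for a single head (softmax over keys).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

class KVCache:
    """Accumulates keys/values across decode steps so each step only
    attends with the newest query, instead of recomputing keys and
    values for the whole prefix."""
    def __init__(self):
        self.k = None
        self.v = None

    def step(self, q_new, k_new, v_new):
        # Append this step's key/value to the cache, then attend with
        # only the new query against the full cached history.
        self.k = k_new if self.k is None else np.vstack([self.k, k_new])
        self.v = v_new if self.v is None else np.vstack([self.v, v_new])
        return attend(q_new, self.k, self.v)
```

Stepping through a sequence one token at a time with the cache reproduces the last row of full causal attention at each position, which is why cached decoding gives identical outputs while skipping the redundant prefix work.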
d05e162f95 | 2022-04-07 23:08:03 -07:00 | reformat x_transformers
7c578eb59b | 2022-04-07 21:22:46 -06:00 | Fix inference in new autoregressive_codegen
3f8d7955ef | 2022-04-07 20:11:14 -06:00 | unified_voice with rotary embeddings
573e5552b9 | 2022-04-07 20:10:57 -06:00 | CLVP v1
71b73db044 | 2022-04-07 11:34:10 -06:00 | clean up
6fc4f49e86 | 2022-04-07 11:32:34 -06:00 | some dumb stuff
305dc95e4b | 2022-04-06 21:24:36 -06:00 | cg2
e011166dd6 | 2022-04-06 21:04:23 -06:00 | autoregressive_codegen r3
33ef17e9e5 | 2022-04-06 00:45:42 -06:00 | fix context
37bdfe82b2 | 2022-04-06 00:35:29 -06:00 | Modify x_transformers to do checkpointing and use relative positional biases
09879b434d | 2022-04-06 00:21:58 -06:00 | bring in x_transformers
cdd12ff46c | 2022-04-04 09:51:41 -06:00 | Add code validation to autoregressive_codegen
99de63a922 | 2022-04-02 22:01:33 -06:00 | man I'm really on it tonight....
a4bdc80933 | 2022-04-02 21:59:50 -06:00 | moikmadsf
1cf20b7337 | 2022-04-02 21:58:09 -06:00 | sdfds
b6afc4d542 | 2022-04-02 21:57:00 -06:00 | dsfa
4c6bdfc9e2 | 2022-04-02 21:55:32 -06:00 | get rid of relative position embeddings, which do not work with DDP & checkpointing
b6d62aca5d | 2022-04-02 21:25:10 -06:00 | add inference model on top of codegen
2b6ff09225 | 2022-04-02 15:07:39 -06:00 | autoregressive_codegen v1
00767219fc | 2022-04-01 20:46:27 -06:00 | undo latent converter change
55c86e02c7 | 2022-04-01 19:13:33 -06:00 | Flat fix
8623c51902 | 2022-04-01 16:11:34 -06:00 | fix bug
f6a8b0a5ca | 2022-04-01 15:53:45 -06:00 | prep flat0 for feeding from autoregressive_latent_converter
3e97abc8a9 | 2022-04-01 14:38:53 -06:00 | update flat0 to break out timestep-independent inference steps
a6181a489b | 2022-03-26 22:49:14 -06:00 | Fix loss gapping caused by poor gradients into mel_pred
1feade23ff | 2022-03-26 22:48:10 -06:00 | support x-transformers in text_voice_clip and support relative positional embeddings
6909f196b4 | 2022-03-26 08:33:30 -06:00 | make code pred returns optional
2a29a71c37 | 2022-03-26 08:31:40 -06:00 | attempt to force meaningful codes by adding a surrogate loss
45804177b8 | 2022-03-25 00:03:18 -06:00 | more stuff
d4218d8443 | 2022-03-24 23:31:20 -06:00 | mods
a15970dd97 | 2022-03-24 11:49:04 -06:00 | disable checkpointing in conditioning encoder
cc5fc91562 | 2022-03-24 11:46:53 -06:00 | flat0 work
b0d2827fad | 2022-03-24 11:30:40 -06:00 | flat0
8707a3e0c3 | 2022-03-23 17:15:08 -06:00 | drop full layers in layerdrop, not half layers
57da6d0ddf | 2022-03-22 11:46:03 -06:00 | more simplifications
f3f391b372 | 2022-03-22 11:43:24 -06:00 | undo sandwich
927731f3b4 | 2022-03-22 11:41:32 -06:00 | tts9: fix position embeddings snafu
536511fc4b | 2022-03-22 11:41:13 -06:00 | unified_voice: relative position encodings
5405ce4363 | 2022-03-22 11:39:39 -06:00 | fix flat
e47a759ed8 | 2022-03-21 17:22:35 -06:00 | .......
cc4c9faf9a | 2022-03-21 17:20:05 -06:00 | resolve more issues
9e97cd800c | 2022-03-21 16:58:03 -06:00 | take the conditioning mean rather than the first element
9c7598dc9a | 2022-03-21 15:29:17 -06:00 | fix conditioning_free signal
2a65c982ca | 2022-03-21 15:27:51 -06:00 | dont double nest checkpointing
723f324eda | 2022-03-21 14:50:59 -06:00 | Make it even better
e735d8e1fa | 2022-03-21 14:44:00 -06:00 | unified_voice fixes
1ad18d29a8 | 2022-03-21 14:43:52 -06:00 | Flat fixes
26dcf7f1a2 | 2022-03-21 11:40:43 -06:00 | r2 of the flat diffusion
c14fc003ed | 2022-03-17 17:45:27 -06:00 | flat diffusion
428911cd4d | 2022-03-17 10:53:56 -06:00 | flat diffusion network
bf08519d71 | 2022-03-17 10:53:39 -06:00 | fixes
95ea0a592f | 2022-03-16 12:05:56 -06:00 | More cleaning
d186414566 | 2022-03-16 12:04:00 -06:00 | More spring cleaning
8b376e63d9 | 2022-03-16 10:16:34 -06:00 | More improvements
0fc877cbc8 | 2022-03-15 21:43:14 -06:00 | tts9 fix for alignment size
f563a8dd41 | 2022-03-15 21:43:00 -06:00 | fixes
b754058018 | 2022-03-15 11:35:38 -06:00 | Update wav2vec2 wrapper
9c6f776980 | 2022-03-15 11:34:51 -06:00 | Add univnet vocoder
7929fd89de | 2022-03-15 11:06:25 -06:00 | Refactor audio-style models into the audio folder
f95d3d2b82 | 2022-03-15 11:03:07 -06:00 | move waveglow to audio/vocoders
bb03cbb9fc | 2022-03-15 10:35:40 -06:00 | composable initial checkin
86b0d76fb9 | 2022-03-15 10:35:31 -06:00 | tts8 (incomplete, may be removed)
eecbc0e678 | 2022-03-15 10:35:11 -06:00 | Use wider spectrogram when asked
9767260c6c | 2022-03-15 10:31:26 -06:00 | tacotron stft - loosen bounds restrictions and clip
f8631ad4f7 | 2022-03-14 17:31:42 -06:00 | Updates to support inputting MELs into the conditioning encoder
22c67ce8d3 | 2022-03-13 10:25:55 -06:00 | tts9 mods
8f130e2b3f | 2022-03-12 20:42:13 -07:00 | add scale_shift_norm back to tts9
73bfd4a86d | 2022-03-12 15:17:06 -07:00 | another tts9 update
0523777ff7 | 2022-03-12 15:10:35 -07:00 | add efficient config to tts9
1e87b934db | 2022-03-10 20:37:41 -07:00 | potentially average conditioning inputs
e6a95f7c11 | 2022-03-09 09:43:38 -07:00 | Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
f56edb2122 | 2022-03-08 15:51:31 -07:00 | minicoder with classifier head: spread out probability mass for 0 predictions
29b2921222 | 2022-03-08 15:51:05 -07:00 | move diffusion vocoder
94222b0216 | 2022-03-08 15:50:45 -07:00 | tts9 initial commit
d1dc8dbb35 | 2022-03-05 20:14:36 -07:00 | Support tts9
79e5692388 | 2022-03-04 11:58:53 -07:00 | Fix distributed bug
f87e10ffef | 2022-03-04 11:50:50 -07:00 | Make deterministic sampler work with distributed training & microbatches
77c18b53b3 | 2022-03-04 10:40:24 -07:00 | Cap grad booster
2d1cb83c1d | 2022-03-04 10:40:14 -07:00 | Add a deterministic timestep sampler, with provisions to employ it every n steps
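Commit 2d1cb83c1d above adds a deterministic timestep sampler that is employed every n steps. A minimal sketch of what such a sampler could look like (hypothetical names and behavior, not the repository's implementation): every n-th call, uniform random diffusion timesteps are replaced with evenly spaced ones, which reduces variance when periodically probing the training loss.

```python
import numpy as np

class DeterministicTimestepSampler:
    """On every n-th call, return evenly spaced diffusion timesteps
    covering [0, T); otherwise draw uniform random timesteps as usual.
    Hypothetical sketch; names are not from the actual repository."""
    def __init__(self, num_timesteps, every_n=10):
        self.num_timesteps = num_timesteps
        self.every_n = every_n
        self.calls = 0

    def sample(self, batch_size, rng=np.random):
        self.calls += 1
        if self.calls % self.every_n == 0:
            # Deterministic branch: spread the batch evenly over all timesteps.
            return np.linspace(0, self.num_timesteps - 1,
                               batch_size).round().astype(int)
        # Default branch: uniform random timesteps in [0, T).
        return rng.randint(0, self.num_timesteps, size=batch_size)
```

Because the deterministic batches always hit the same timesteps, loss values measured on them are comparable across training steps, which is the usual motivation for mixing such a sampler into an otherwise random schedule.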
58019a2ce3 | 2022-03-03 21:53:32 -07:00 | audio diffusion fid updates
998c53ad4f | 2022-03-03 21:52:51 -07:00 | w2v_matcher mods
9029e4f20c | 2022-03-03 21:52:28 -07:00 | Add a base-wrapper
6873ad6660 | 2022-03-03 21:52:16 -07:00 | Support functionality
6af5d129ce | 2022-03-03 21:51:40 -07:00 | Add experimental gradient boosting into tts7
7ea84f1ac3 | 2022-03-03 13:43:44 -07:00 | asdf
3cd6c7f428 | 2022-03-03 13:41:38 -07:00 | Get rid of unused codes in vq
619da9ea28 | 2022-03-03 13:36:25 -07:00 | Get rid of discretization loss
beb7c8a39d | 2022-03-01 21:41:31 -07:00 | asdf
70fa780edb | 2022-03-01 20:19:52 -07:00 | Add mechanism to export grad norms
d9f8f92840 | 2022-03-01 15:46:04 -07:00 | Codified fp16
45ab444c04 | 2022-03-01 14:09:18 -07:00 | Rework minicoder to always checkpoint
db0c3340ac | 2022-03-01 11:49:36 -07:00 | Implement guidance-free diffusion in eval
    And a few other fixes
2134f06516 | 2022-02-27 15:11:42 -07:00 | Implement conditioning-free diffusion at the eval level
436fe24822 | 2022-02-27 15:00:06 -07:00 | Add conditioning-free guidance
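Commit 436fe24822 above adds conditioning-free guidance, the technique more commonly known as classifier-free guidance in the diffusion literature. The core operation is a single extrapolation between the conditional and unconditional denoiser outputs; a minimal sketch, assuming epsilon-prediction outputs as numpy arrays (the function name here is illustrative, not from the repository):

```python
import numpy as np

def guided_prediction(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free ("conditioning-free") guidance: extrapolate the
    conditional denoiser output away from the unconditional one.
    A scale of 1 recovers the plain conditional prediction; a scale of 0
    recovers the unconditional one; scales above 1 push samples harder
    toward the conditioning at the cost of sample diversity."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

In practice the unconditional branch is obtained by occasionally dropping the conditioning input during training, so one model can produce both predictions at sampling time.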
ac920798bb | 2022-02-27 14:49:11 -07:00 | misc
dbc74e96b2 | 2022-02-27 14:48:23 -07:00 | w2v_matcher
42879d7296 | 2022-02-27 14:47:51 -07:00 | w2v_wrapper ramping dropout mode
    this is an experimental feature that needs some testing
c375287db9 | 2022-02-25 11:06:18 -07:00 | Re-instate autocasting
34ee32a90e | 2022-02-24 21:53:51 -07:00 | get rid of autocasting in tts7
ea500ad42a | 2022-02-24 07:57:26 -07:00 | Use clustered masking in udtts7
7201b4500c | 2022-02-21 19:14:22 -07:00 | default text_to_sequence cleaners
ba7f54c162 | 2022-02-21 19:13:03 -07:00 | w2v: new inference function
38802a96c8 | 2022-02-21 12:32:21 -07:00 | remove timesteps from cond calculation
668876799d | 2022-02-20 15:22:38 -07:00 | unet_diffusion_tts7
0872e17e60 | 2022-02-19 20:37:35 -07:00 | unified_voice mods
7b12799370 | 2022-02-19 20:37:26 -07:00 | Reformat mel_text_clip for use in eval
baf7b65566 | 2022-02-18 18:47:11 -07:00 | Attempt to make w2v play with DDP AND checkpointing
f3776f1992 | 2022-02-17 22:00:58 -07:00 | reset ctc loss from "mean" to "sum"
2b20da679c | 2022-02-17 20:22:05 -07:00 | make spec_augment a parameter
e1d71e1bd5 | 2022-02-15 20:54:40 -07:00 | w2v_wrapper: get rid of ctc attention mask
79e8f36d30 | 2022-02-15 20:53:07 -07:00 | Convert CLIP models into new folder
2bdb515068 | 2022-02-15 06:28:54 -07:00 | A few mods to make wav2vec2 trainable with DDP on DLAS
52b61b9f77 | 2022-02-13 20:48:06 -07:00 | Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes