James Betker
1609101a42
musical gap filler
2022-05-05 16:47:08 -06:00
James Betker
d66ab2d28c
Remove unused waveform_gens
2022-05-04 21:06:54 -06:00
James Betker
47662b9ec5
some random crap
2022-05-04 20:29:23 -06:00
James Betker
6655f7845a
add pixel shuffling for 1d cases
2022-05-04 08:03:09 -06:00
James Betker
c42c53e75a
Add a trainable network for converting a normal distribution into a latent space
2022-05-02 09:47:30 -06:00
James Betker
e402089556
abstractify
2022-05-02 00:11:26 -06:00
James Betker
ab219fbefb
output variance
2022-05-02 00:10:33 -06:00
James Betker
3b074aac34
add checkpointing
2022-05-02 00:07:42 -06:00
James Betker
ae5f934ea1
diffwave
2022-05-02 00:05:04 -06:00
James Betker
f4254609c1
MDF
...
around and around in circles........
2022-05-01 23:04:56 -06:00
James Betker
b712d3b72b
break out get_conditioning_latent from unified_voice
2022-05-01 23:04:44 -06:00
James Betker
afa2df57c9
gen3
2022-04-30 10:41:38 -06:00
James Betker
64c7582bf5
full pipeline
2022-04-28 22:47:26 -06:00
James Betker
8aa6651fc7
fix surrogate loss return in waveform_gen2
2022-04-28 10:10:11 -06:00
James Betker
e208d9fb80
gate augmentations with a flag
2022-04-28 10:09:22 -06:00
James Betker
3f67cb2023
music diffusion fid adjustments
2022-04-28 10:08:55 -06:00
James Betker
ab8176b217
audio prep misc
2022-04-28 10:08:38 -06:00
James Betker
f02b01bd9d
reverse univnet classifier
2022-04-20 21:37:55 -06:00
James Betker
9df85c902e
New gen2
...
Which is basically an autoencoder with a giant diffusion appendage attached
2022-04-20 21:37:34 -06:00
James Betker
b1c2c48720
music diffusion fid
2022-04-20 00:28:03 -06:00
James Betker
084b1c1527
file splitter
2022-04-20 00:27:49 -06:00
James Betker
b4549eed9f
uv2 fix
2022-04-20 00:27:38 -06:00
James Betker
24fdafd855
fix2
2022-04-20 00:03:29 -06:00
James Betker
0af0051399
fix
2022-04-20 00:01:57 -06:00
James Betker
419f4d37bd
gen2 music
2022-04-19 23:38:37 -06:00
James Betker
c85ab738c5
paired fix
2022-04-16 23:41:57 -06:00
James Betker
8fe0dff33c
support tts typing
2022-04-16 23:36:57 -06:00
James Betker
48cb6a5abd
misc
2022-04-16 20:28:04 -06:00
James Betker
147478a148
cvvp
2022-04-16 20:27:46 -06:00
James Betker
546ecd5aeb
music!
2022-04-15 21:21:37 -06:00
James Betker
254357724d
gradprop
2022-04-15 09:37:20 -06:00
James Betker
fbf1f4f637
update
2022-04-15 09:34:44 -06:00
James Betker
82aad335ba
add distributed logic for loss
2022-04-15 09:31:48 -06:00
James Betker
efe12cb816
Update clvp to add masking probabilities in conditioning and to support code inputs
2022-04-15 09:11:23 -06:00
James Betker
3cad1b8114
more fixes
2022-04-11 15:18:44 -06:00
James Betker
6dea7da7a8
another fix
2022-04-11 12:29:43 -06:00
James Betker
f2c172291f
fix audio_diffusion_fid for autoregressive latent inputs
2022-04-11 12:08:15 -06:00
James Betker
8ea5c307fb
Fixes for training the diffusion model on autoregressive inputs
2022-04-11 11:02:44 -06:00
James Betker
a3622462c1
Change latent_conditioner back
2022-04-11 09:00:13 -06:00
James Betker
03d0b90bda
fixes
2022-04-10 21:02:12 -06:00
James Betker
19ca5b26c1
Remove flat0 and move it into flat
2022-04-10 21:01:59 -06:00
James Betker
81c952a00a
undo relative
2022-04-08 16:32:52 -06:00
James Betker
944b4c3335
more undos
2022-04-08 16:31:08 -06:00
James Betker
032983e2ed
fix bug and allow position encodings to be trained separately from the rest of the model
2022-04-08 16:26:01 -06:00
James Betker
09ab1aa9bc
revert rotary embeddings work
...
I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained.
2022-04-08 16:18:35 -06:00
James Betker
2fb9ffb0aa
Align autoregressive text using start and stop tokens
2022-04-08 09:41:59 -06:00
James Betker
628569af7b
Another fix
2022-04-08 09:41:18 -06:00
James Betker
423293e518
fix xtransformers bug
2022-04-08 09:12:46 -06:00
James Betker
048f6f729a
remove lightweight_gan
2022-04-07 23:12:08 -07:00
James Betker
e634996a9c
autoregressive_codegen: support key_value caching for faster inference
2022-04-07 23:08:46 -07:00
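The key/value caching commit above (e634996a9c) relies on a standard trick: during autoregressive decoding, cache each past token's key/value projections so every new step only computes attention for the newly generated token instead of re-running the whole sequence. A minimal, framework-free sketch of that idea, assuming scalar list "vectors"; the names `KVCache` and `attend` are hypothetical and not taken from the repository:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    # Dot-product attention of one query vector over all cached keys/values.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

class KVCache:
    """Accumulates per-step keys/values so each decode step attends over the
    full history without recomputing projections for earlier tokens."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, new_key, new_value, query):
        # Only the new token's key/value are added; old ones are reused.
        self.keys.append(new_key)
        self.values.append(new_value)
        return attend(query, self.keys, self.values)
```

The real implementation caches tensors per layer and per head, but the cost structure is the same: O(1) new projections per step rather than O(sequence length).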
James Betker
d05e162f95
reformat x_transformers
2022-04-07 23:08:03 -07:00
James Betker
7c578eb59b
Fix inference in new autoregressive_codegen
2022-04-07 21:22:46 -06:00
James Betker
3f8d7955ef
unified_voice with rotary embeddings
2022-04-07 20:11:14 -06:00
James Betker
573e5552b9
CLVP v1
2022-04-07 20:10:57 -06:00
James Betker
71b73db044
clean up
2022-04-07 11:34:10 -06:00
James Betker
6fc4f49e86
some dumb stuff
2022-04-07 11:32:34 -06:00
James Betker
e6387c7613
Fix eval logic to not run immediately
2022-04-07 11:29:57 -06:00
James Betker
305dc95e4b
cg2
2022-04-06 21:24:36 -06:00
James Betker
e011166dd6
autoregressive_codegen r3
2022-04-06 21:04:23 -06:00
James Betker
33ef17e9e5
fix context
2022-04-06 00:45:42 -06:00
James Betker
37bdfe82b2
Modify x_transformers to do checkpointing and use relative positional biases
2022-04-06 00:35:29 -06:00
James Betker
09879b434d
bring in x_transformers
2022-04-06 00:21:58 -06:00
James Betker
3d916e7687
Fix evaluation when using multiple batch sizes
2022-04-05 07:51:09 -06:00
James Betker
572d137589
track iteration rate
2022-04-04 12:33:25 -06:00
James Betker
4cdb0169d0
update training data encountered when using force_start_step
2022-04-04 12:25:00 -06:00
James Betker
cdd12ff46c
Add code validation to autoregressive_codegen
2022-04-04 09:51:41 -06:00
James Betker
99de63a922
man I'm really on it tonight....
2022-04-02 22:01:33 -06:00
James Betker
a4bdc80933
moikmadsf
2022-04-02 21:59:50 -06:00
James Betker
1cf20b7337
sdfds
2022-04-02 21:58:09 -06:00
James Betker
b6afc4d542
dsfa
2022-04-02 21:57:00 -06:00
James Betker
4c6bdfc9e2
get rid of relative position embeddings, which do not work with DDP & checkpointing
2022-04-02 21:55:32 -06:00
James Betker
b6d62aca5d
add inference model on top of codegen
2022-04-02 21:25:10 -06:00
James Betker
2b6ff09225
autoregressive_codegen v1
2022-04-02 15:07:39 -06:00
James Betker
00767219fc
undo latent converter change
2022-04-01 20:46:27 -06:00
James Betker
55c86e02c7
Flat fix
2022-04-01 19:13:33 -06:00
James Betker
8623c51902
fix bug
2022-04-01 16:11:34 -06:00
James Betker
035bcd9f6c
fwd fix
2022-04-01 16:03:07 -06:00
James Betker
f6a8b0a5ca
prep flat0 for feeding from autoregressive_latent_converter
2022-04-01 15:53:45 -06:00
James Betker
3e97abc8a9
update flat0 to break out timestep-independent inference steps
2022-04-01 14:38:53 -06:00
James Betker
a6181a489b
Fix loss gapping caused by poor gradients into mel_pred
2022-03-26 22:49:14 -06:00
James Betker
0070867d0f
inference script for diffusion image models
2022-03-26 22:48:24 -06:00
James Betker
1feade23ff
support x-transformers in text_voice_clip and support relative positional embeddings
2022-03-26 22:48:10 -06:00
James Betker
9b90472e15
feed direct inputs into gd
2022-03-26 08:36:19 -06:00
James Betker
6909f196b4
make code pred returns optional
2022-03-26 08:33:30 -06:00
James Betker
2a29a71c37
attempt to force meaningful codes by adding a surrogate loss
2022-03-26 08:31:40 -06:00
James Betker
45804177b8
more stuff
2022-03-25 00:03:18 -06:00
James Betker
d4218d8443
mods
2022-03-24 23:31:20 -06:00
James Betker
9c79fec734
update adf
2022-03-24 21:20:29 -06:00
James Betker
07731d5491
Fix ET
2022-03-24 21:20:22 -06:00
James Betker
a15970dd97
disable checkpointing in conditioning encoder
2022-03-24 11:49:04 -06:00
James Betker
cc5fc91562
flat0 work
2022-03-24 11:46:53 -06:00
James Betker
b0d2827fad
flat0
2022-03-24 11:30:40 -06:00
James Betker
8707a3e0c3
drop full layers in layerdrop, not half layers
2022-03-23 17:15:08 -06:00
James Betker
57da6d0ddf
more simplifications
2022-03-22 11:46:03 -06:00
James Betker
f3f391b372
undo sandwich
2022-03-22 11:43:24 -06:00
James Betker
927731f3b4
tts9: fix position embeddings snafu
2022-03-22 11:41:32 -06:00
James Betker
536511fc4b
unified_voice: relative position encodings
2022-03-22 11:41:13 -06:00
James Betker
be5f052255
misc
2022-03-22 11:40:56 -06:00
James Betker
963f0e9cee
fix unscaler
2022-03-22 11:40:02 -06:00
James Betker
5405ce4363
fix flat
2022-03-22 11:39:39 -06:00
James Betker
e47a759ed8
.......
2022-03-21 17:22:35 -06:00
James Betker
cc4c9faf9a
resolve more issues
2022-03-21 17:20:05 -06:00
James Betker
3692c4cae3
map vocoder into cpu
2022-03-21 17:10:57 -06:00
James Betker
9e97cd800c
take the conditioning mean rather than the first element
2022-03-21 16:58:03 -06:00
James Betker
9c7598dc9a
fix conditioning_free signal
2022-03-21 15:29:17 -06:00
James Betker
2a65c982ca
dont double nest checkpointing
2022-03-21 15:27:51 -06:00
James Betker
723f324eda
Make it even better
2022-03-21 14:50:59 -06:00
James Betker
e735d8e1fa
unified_voice fixes
2022-03-21 14:44:00 -06:00
James Betker
1ad18d29a8
Flat fixes
2022-03-21 14:43:52 -06:00
James Betker
26dcf7f1a2
r2 of the flat diffusion
2022-03-21 11:40:43 -06:00
James Betker
c5000420f6
more arbitrary fixes
2022-03-17 17:45:44 -06:00
James Betker
c14fc003ed
flat diffusion
2022-03-17 17:45:27 -06:00
James Betker
428911cd4d
flat diffusion network
2022-03-17 10:53:56 -06:00
James Betker
bf08519d71
fixes
2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f
More cleaning
2022-03-16 12:05:56 -06:00
James Betker
d186414566
More spring cleaning
2022-03-16 12:04:00 -06:00
James Betker
735f6e4640
Move gen_similarities and rename
2022-03-16 11:59:34 -06:00
James Betker
8b376e63d9
More improvements
2022-03-16 10:16:34 -06:00
James Betker
54202aa099
fix mel normalization
2022-03-16 09:26:55 -06:00
James Betker
8437bb0c53
fixes
2022-03-15 23:52:48 -06:00
James Betker
3f244f6a68
add mel_norm to std injector
2022-03-15 22:16:59 -06:00
James Betker
0fc877cbc8
tts9 fix for alignment size
2022-03-15 21:43:14 -06:00
James Betker
f563a8dd41
fixes
2022-03-15 21:43:00 -06:00
James Betker
b754058018
Update wav2vec2 wrapper
2022-03-15 11:35:38 -06:00
James Betker
1e3a8554a1
updates to audio_diffusion_fid
2022-03-15 11:35:09 -06:00
James Betker
9c6f776980
Add univnet vocoder
2022-03-15 11:34:51 -06:00
James Betker
7929fd89de
Refactor audio-style models into the audio folder
2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82
move waveglow to audio/vocoders
2022-03-15 11:03:07 -06:00
James Betker
0419a64107
misc
2022-03-15 10:36:34 -06:00
James Betker
bb03cbb9fc
composable initial checkin
2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9
tts8 (incomplete, may be removed)
2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678
Use wider spectrogram when asked
2022-03-15 10:35:11 -06:00
James Betker
9767260c6c
tacotron stft - loosen bounds restrictions and clip
2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7
Updates to support inputting MELs into the conditioning encoder
2022-03-14 17:31:42 -06:00
James Betker
e045fb0ad7
fix clip grad norm with scaler
2022-03-13 16:28:23 -06:00
James Betker
22c67ce8d3
tts9 mods
2022-03-13 10:25:55 -06:00
James Betker
08599b4c75
fix random_audio_crop injector
2022-03-12 20:42:29 -07:00
James Betker
8f130e2b3f
add scale_shift_norm back to tts9
2022-03-12 20:42:13 -07:00
James Betker
9bbbe26012
update audio_with_noise
2022-03-12 20:41:47 -07:00
James Betker
e754c4fbbc
sweep update
2022-03-12 15:33:00 -07:00
James Betker
73bfd4a86d
another tts9 update
2022-03-12 15:17:06 -07:00
James Betker
0523777ff7
add efficient config to tts9
2022-03-12 15:10:35 -07:00
James Betker
896accb71f
data and prep improvements
2022-03-12 15:10:11 -07:00
James Betker
1e87b934db
potentially average conditioning inputs
2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11
Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
2022-03-09 09:43:38 -07:00
James Betker
726e30c4f7
Update noise augmentation dataset to include voices that are appended at the end of another clip.
2022-03-09 09:43:10 -07:00
James Betker
c4e4cf91a0
add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric
2022-03-08 15:53:27 -07:00
James Betker
3e5da71b16
add grad scaler scale to metrics
2022-03-08 15:52:42 -07:00
James Betker
d2bdeb6f20
misc audio support
2022-03-08 15:52:26 -07:00
James Betker
d553808d24
misc
2022-03-08 15:52:16 -07:00
James Betker
7dabc17626
phase2 filter initial commit
2022-03-08 15:51:55 -07:00
James Betker
f56edb2122
minicoder with classifier head: spread out probability mass for 0 predictions
2022-03-08 15:51:31 -07:00
James Betker
29b2921222
move diffusion vocoder
2022-03-08 15:51:05 -07:00
James Betker
94222b0216
tts9 initial commit
2022-03-08 15:50:45 -07:00
James Betker
38fd9fc985
Improve efficiency of audio_with_noise_dataset
2022-03-08 15:50:13 -07:00
James Betker
b3def182de
move processing pipeline to "phase_1"
2022-03-08 15:49:51 -07:00
James Betker
30ddac69aa
lots of bad entries
2022-03-05 23:15:59 -07:00
James Betker
dcf98df0c2
++
2022-03-05 23:12:34 -07:00
James Betker
64d764ccd7
fml
2022-03-05 23:11:10 -07:00
James Betker
ef63ff84e2
pvd2
2022-03-05 23:08:39 -07:00
James Betker
1a05712764
pvd
2022-03-05 23:05:29 -07:00
James Betker
d1dc8dbb35
Support tts9
2022-03-05 20:14:36 -07:00
James Betker
93a3302819
Push training_state data to CPU memory before saving it
...
For whatever reason, keeping this on GPU memory just doesn't work.
When you load it, it consumes a large amount of GPU memory and that
utilization doesn't go away. Saving to CPU should fix this.
2022-03-04 17:57:33 -07:00
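The fix in 93a3302819 — moving `training_state` to CPU memory before serialization so that loading it later doesn't pin GPU memory — can be sketched as a recursive "move to CPU" pass over a nested state structure. This helper is hypothetical (duck-typed on a `.cpu()` method, which torch tensors provide), not the repository's actual code:

```python
def state_to_cpu(obj):
    """Recursively copy anything exposing a .cpu() method (e.g. torch tensors)
    inside a nested state dict to CPU before it is serialized."""
    if hasattr(obj, "cpu"):
        return obj.cpu()
    if isinstance(obj, dict):
        return {k: state_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(state_to_cpu(v) for v in obj)
    return obj
```

With torch, one would call something like `torch.save(state_to_cpu(training_state), path)`; saving CUDA tensors directly records their device, so a later load re-allocates them on the GPU.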
James Betker
6000580e2e
df
2022-03-04 13:47:00 -07:00
James Betker
382681a35d
Load diffusion_fid DVAE into the correct cuda device
2022-03-04 13:42:14 -07:00
James Betker
e1052a5e32
Move log consensus to train for efficiency
2022-03-04 13:41:32 -07:00
James Betker
ce6dfdf255
Distributed "fixes"
2022-03-04 12:46:41 -07:00
James Betker
3ff878ae85
Accumulate loss & grad_norm metrics from all entities within a distributed graph
2022-03-04 12:01:16 -07:00
James Betker
79e5692388
Fix distributed bug
2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3
Cap grad booster
2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
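The deterministic timestep sampler from 2d1cb83c1d can be sketched as follows: every n-th training step, draw the same evenly spaced diffusion timesteps instead of uniform random ones, so the loss at those steps is comparable across training. This is an illustrative guess at the mechanism, not the repository's implementation:

```python
import random

def sample_timesteps(step, batch_size, num_timesteps,
                     deterministic_every=0, rng=random):
    """Draw diffusion timesteps for a batch.

    Every `deterministic_every` steps, return the same evenly spaced
    timesteps (a stable probe of the loss curve); otherwise sample
    uniformly at random as usual.
    """
    if deterministic_every and step % deterministic_every == 0:
        return [i * num_timesteps // batch_size for i in range(batch_size)]
    return [rng.randrange(num_timesteps) for _ in range(batch_size)]
```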
James Betker
f490eaeba7
Shuffle optimizer states back and forth between cpu memory during steps
2022-03-04 10:38:51 -07:00
James Betker
3c242403f5
adjust location of pre-optimizer step so I can visualize the new grad norms
2022-03-04 08:56:42 -07:00
James Betker
58019a2ce3
audio diffusion fid updates
2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f
w2v_matcher mods
2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c
Add a base-wrapper
2022-03-03 21:52:28 -07:00
James Betker
6873ad6660
Support functionality
2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce
Add experimental gradient boosting into tts7
2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3
asdf
2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428
Get rid of unused codes in vq
2022-03-03 13:41:38 -07:00
James Betker
619da9ea28
Get rid of discretization loss
2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d
asdf
2022-03-01 21:41:31 -07:00
James Betker
70fa780edb
Add mechanism to export grad norms
2022-03-01 20:19:52 -07:00
James Betker
d9f8f92840
Codified fp16
2022-03-01 15:46:04 -07:00
James Betker
45ab444c04
Rework minicoder to always checkpoint
2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac
Implement guidance-free diffusion in eval
...
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516
Implement conditioning-free diffusion at the eval level
2022-02-27 15:11:42 -07:00
James Betker
436fe24822
Add conditioning-free guidance
2022-02-27 15:00:06 -07:00
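The conditioning-free guidance added in 436fe24822 follows the classifier-free guidance recipe: run the diffusion model once with the conditioning input and once without, then extrapolate from the unconditional prediction toward (and past) the conditional one. The core arithmetic, as a minimal sketch with list "tensors" and a hypothetical function name:

```python
def guided_eps(cond_eps, uncond_eps, guidance_scale):
    """Combine conditional and unconditional diffusion predictions.

    scale = 0 -> purely unconditional
    scale = 1 -> purely conditional
    scale > 1 -> extrapolate beyond the conditional prediction,
                 strengthening adherence to the conditioning signal.
    """
    return [u + guidance_scale * (c - u)
            for c, u in zip(cond_eps, uncond_eps)]
```

In practice the unconditional pass is obtained by feeding a null/dropped conditioning input, which is why the model must occasionally be trained with the conditioning masked out.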
James Betker
ac920798bb
misc
2022-02-27 14:49:11 -07:00
James Betker
ba155e4e2f
script for uploading models to the HF hub
2022-02-27 14:48:38 -07:00
James Betker
dbc74e96b2
w2v_matcher
2022-02-27 14:48:23 -07:00
James Betker
42879d7296
w2v_wrapper ramping dropout mode
...
this is an experimental feature that needs some testing
2022-02-27 14:47:51 -07:00
James Betker
c375287db9
Re-instate autocasting
2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e
get rid of autocasting in tts7
2022-02-24 21:53:51 -07:00
James Betker
f458f5d8f1
abort early if losses reach nan too much, and save the model
2022-02-24 20:55:30 -07:00
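Commits 18dc62453f and f458f5d8f1 together describe a two-level NaN policy: skip the optimizer step when a loss is NaN, and abort (after saving the model) if NaNs persist. A hedged sketch of such a guard, with hypothetical names and an assumed threshold:

```python
import math

class NanLossGuard:
    """Skip optimizer steps on NaN losses; signal an abort (so a checkpoint
    can be saved first) once too many NaNs occur consecutively."""
    def __init__(self, max_consecutive=10):
        self.max_consecutive = max_consecutive
        self.streak = 0

    def observe(self, loss):
        """Returns (should_step, should_abort)."""
        if math.isnan(loss):
            self.streak += 1
            return False, self.streak >= self.max_consecutive
        self.streak = 0  # any finite loss resets the streak
        return True, False
```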
James Betker
18dc62453f
Don't step if NaN losses are encountered.
2022-02-24 17:45:08 -07:00
James Betker
ea500ad42a
Use clustered masking in udtts7
2022-02-24 07:57:26 -07:00
James Betker
7c17c8e674
gurgl
2022-02-23 21:28:24 -07:00
James Betker
e6824e398f
Load dvae to cpu
2022-02-23 21:21:45 -07:00
James Betker
81017d9696
put frechet_distance on cuda
2022-02-23 21:21:13 -07:00
James Betker
9a7bbf33df
f
2022-02-23 18:03:38 -07:00
James Betker
68726eac74
.
2022-02-23 17:58:07 -07:00
James Betker
b7319ab518
Support vocoder type diffusion in audio_diffusion_fid
2022-02-23 17:25:16 -07:00
James Betker
58f6c9805b
adf
2022-02-22 23:12:58 -07:00
James Betker
03752c1cd6
Report NaN
2022-02-22 23:09:37 -07:00
James Betker
7201b4500c
default text_to_sequence cleaners
2022-02-21 19:14:22 -07:00
James Betker
ba7f54c162
w2v: new inference function
2022-02-21 19:13:03 -07:00
James Betker
896ac029ae
allow continuation of samples encountered
2022-02-21 19:12:50 -07:00
James Betker
6313a94f96
eval: integrate an n-gram language model into decoding
2022-02-21 19:12:34 -07:00
James Betker
af50afe222
pairedvoice: error out if clip is too short
2022-02-21 19:11:10 -07:00
James Betker
38802a96c8
remove timesteps from cond calculation
2022-02-21 12:32:21 -07:00
James Betker
668876799d
unet_diffusion_tts7
2022-02-20 15:22:38 -07:00
James Betker
0872e17e60
unified_voice mods
2022-02-19 20:37:35 -07:00
James Betker
7b12799370
Reformat mel_text_clip for use in eval
2022-02-19 20:37:26 -07:00
James Betker
bcba65c539
DataParallel Fix
2022-02-19 20:36:35 -07:00
James Betker
34001ad765
et
2022-02-18 18:52:33 -07:00
James Betker
baf7b65566
Attempt to make w2v play with DDP AND checkpointing
2022-02-18 18:47:11 -07:00
James Betker
f3776f1992
reset ctc loss from "mean" to "sum"
2022-02-17 22:00:58 -07:00
James Betker
2b20da679c
make spec_augment a parameter
2022-02-17 20:22:05 -07:00
James Betker
a813fbed9c
Update to evaluator
2022-02-17 17:30:33 -07:00
James Betker
e1d71e1bd5
w2v_wrapper: get rid of ctc attention mask
2022-02-15 20:54:40 -07:00
James Betker
79e8f36d30
Convert CLIP models into new folder
2022-02-15 20:53:07 -07:00
James Betker
8f767b8b4f
...
2022-02-15 07:08:17 -07:00
James Betker
29e07913a8
Fix
2022-02-15 06:58:11 -07:00
James Betker
dd585df772
LAMB optimizer
2022-02-15 06:48:13 -07:00
James Betker
2bdb515068
A few mods to make wav2vec2 trainable with DDP on DLAS
2022-02-15 06:28:54 -07:00
James Betker
52b61b9f77
Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes
2022-02-13 20:48:06 -07:00
James Betker
a4f1641eea
Add & refine WER evaluator for w2v
2022-02-13 20:47:29 -07:00
James Betker
e16af944c0
BSO fix
2022-02-12 20:01:04 -07:00
James Betker
29534180b2
w2v fine tuner
2022-02-12 20:00:59 -07:00
James Betker
0c3cc5ebad
use script updates to fix output size disparities
2022-02-12 20:00:46 -07:00
James Betker
15fd60aad3
Allow EMA training to be disabled
2022-02-12 20:00:23 -07:00
James Betker
3252972057
ctc_code_gen mods
2022-02-12 19:59:54 -07:00
James Betker
35170c77b3
fix sweep
2022-02-11 11:43:11 -07:00
James Betker
c6b6d120fe
fix ranking
2022-02-11 11:34:57 -07:00
James Betker
095944569c
deep_update dicts
2022-02-11 11:32:25 -07:00
James Betker
ab1f6e8ac6
deepcopy map
2022-02-11 11:29:32 -07:00
James Betker
496fb81997
use fork instead
2022-02-11 11:22:25 -07:00
James Betker
4abc094b47
fix train bug
2022-02-11 11:18:15 -07:00
James Betker
006add64c5
sweep fix
2022-02-11 11:17:08 -07:00
James Betker
102142d1eb
f
2022-02-11 11:05:13 -07:00
James Betker
40b08a52d0
dafuk
2022-02-11 11:01:31 -07:00
James Betker
f6a7f12cad
Remove broken evaluator
2022-02-11 11:00:29 -07:00
James Betker
46b97049dc
Fix eval
2022-02-11 10:59:32 -07:00
James Betker
5175b7d91a
training sweeper checkin
2022-02-11 10:46:37 -07:00
James Betker
302ac8652d
Undo mask during training
2022-02-11 09:35:12 -07:00
James Betker
618a20412a
new rev of ctc_code_gen with surrogate LM loss
2022-02-10 23:09:57 -07:00
James Betker
d1d1ae32a1
audio diffusion frechet distance measurement!
2022-02-10 22:55:46 -07:00
James Betker
23a310b488
Fix BSO
2022-02-10 20:54:51 -07:00
James Betker
1e28e02f98
BSO improvement to make it work with distributed optimizers
2022-02-10 09:53:13 -07:00
James Betker
836eb08afb
Update BSO to use the proper step size
2022-02-10 09:44:15 -07:00
James Betker
820a29f81e
ctc code gen mods
2022-02-10 09:44:01 -07:00
James Betker
ac9417b956
ctc_code_gen: mask out all padding tokens
2022-02-09 17:26:30 -07:00
James Betker
a930f2576e
Begin a migration to specifying training rate on megasamples instead of arbitrary "steps"
...
This should help me greatly in tuning models. It's also necessary now that batch size isn't really
respected; we simply step once the gradient direction becomes unstable.
2022-02-09 17:25:05 -07:00
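The migration in a930f2576e makes sense once the batch size optimizer steps opportunistically: if the number of samples per optimizer step varies, "step count" stops being a meaningful clock, but "samples consumed" remains one. A sketch of scheduling against megasamples instead of steps; all names here are hypothetical:

```python
class SampleScheduler:
    """Track training progress in samples consumed rather than optimizer
    steps, so schedule milestones stay meaningful when the effective batch
    size varies from step to step."""
    MEGA = 1_000_000

    def __init__(self, milestones_msamples):
        # Milestones expressed in megasamples, e.g. [1, 2] = 1M and 2M samples.
        self.milestones = sorted(milestones_msamples)
        self.samples = 0

    def consume(self, batch_size):
        """Record one step's worth of samples; return how many milestones
        have been passed so far (e.g. to index into an LR schedule)."""
        self.samples += batch_size
        return sum(1 for m in self.milestones
                   if self.samples >= m * self.MEGA)
```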
James Betker
93ca619267
script updates
2022-02-09 14:26:52 -07:00
James Betker
ddb77ef502
ctc_code_gen: use a mean() on the ConditioningEncoder
2022-02-09 14:26:44 -07:00
James Betker
3d946356f8
batch_size_optimizer works. sweet! no more tuning batch sizes.
2022-02-09 14:26:23 -07:00
James Betker
18938248e4
Add batch_size_optimizer support
2022-02-08 23:51:31 -07:00
James Betker
9e9ae328f2
mild updates
2022-02-08 23:51:17 -07:00
James Betker
ff35d13b99
Use non-uniform noise in diffusion_tts6
2022-02-08 07:27:41 -07:00
James Betker
f44b064c5e
Update scripts
2022-02-07 19:43:18 -07:00
James Betker
34fbb78671
Straight CtcCodeGenerator as an encoder
2022-02-07 15:46:46 -07:00
James Betker
c24682c668
Record load times in fast_paired_dataset
2022-02-07 15:45:38 -07:00
James Betker
65a546c4d7
Fix for tts6
2022-02-05 16:00:14 -07:00
James Betker
5ae816bead
ctc gen checkin
2022-02-05 15:59:53 -07:00
James Betker
bb3d1ab03d
More cleanup
2022-02-04 11:06:17 -07:00
James Betker
5cc342de66
Clean up
2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab
add an autoregressive ctc code generator
2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344
Update SR model
2022-02-03 21:42:53 -07:00
James Betker
de1a1d501a
Move audio injectors into their own file
2022-02-03 21:42:37 -07:00
James Betker
687393de59
Add a better split_on_silence (processing_pipeline)
...
Going to extend this a bit more going forwards to support the entire pipeline.
2022-02-03 20:00:26 -07:00
James Betker
1d29999648
Updates to the TTS production scripts
2022-02-03 20:00:01 -07:00
James Betker
bc506d4bcd
Mods to unet_diffusion_tts6 to support super resolution mode
2022-02-03 19:59:39 -07:00
James Betker
4249681c4b
Mods to support an autoregressive CTC code generator
2022-02-03 19:58:54 -07:00
James Betker
8132766d38
tts6
2022-01-31 20:15:06 -07:00
James Betker
fbea6e8eac
Adjustments to diffusion networks
2022-01-30 16:14:06 -07:00
James Betker
e58dab14c3
new diffusion updates from testing
2022-01-29 11:01:01 -07:00
James Betker
935a4e853e
get rid of nil tokens in <2>
2022-01-27 22:45:57 -07:00
James Betker
0152174c0e
Add wandb_step_factor argument
2022-01-27 19:58:58 -07:00
James Betker
e0e36ed98c
Update use_diffuse_tts
2022-01-27 19:57:28 -07:00
James Betker
a77d376ad2
rename unet diffusion tts and add 3
2022-01-27 19:56:24 -07:00
James Betker
7badbf1b4d
update usage scripts
2022-01-25 17:57:26 -07:00
James Betker
8c255811ad
more fixes
2022-01-25 17:57:16 -07:00
James Betker
0f3ca28e39
Allow diffusion model to be trained with masking tokens
2022-01-25 14:26:21 -07:00
James Betker
798ed7730a
i like wasting time
2022-01-24 18:12:08 -07:00
James Betker
fc09cff4b3
angry
2022-01-24 18:09:29 -07:00
James Betker
cc0d9f7216
Fix
2022-01-24 18:05:45 -07:00
James Betker
3a9e3a9db3
consolidate state
2022-01-24 17:59:31 -07:00
James Betker
dfef34ba39
Load ema to cpu memory if specified
2022-01-24 15:08:29 -07:00
James Betker
49edffb6ad
Revise device mapping
2022-01-24 15:08:13 -07:00
James Betker
33511243d5
load model state dicts into the correct device
...
it's not clear to me that this will make a huge difference, but it's a good idea anyways
2022-01-24 14:40:09 -07:00
James Betker
3e16c509f6
Misc fixes
2022-01-24 14:31:43 -07:00
James Betker
e2ed0adbd8
use_diffuse_tts updates
2022-01-24 14:31:28 -07:00
James Betker
e420df479f
Allow steps to specify which state keys to carry forward (reducing memory utilization)
2022-01-24 11:01:27 -07:00
James Betker
62475005e4
Sort data items in descending order, which I suspect will improve performance because we will hit GC less
2022-01-23 19:05:32 -07:00
James Betker
d18aec793a
Revert "(re) attempt diffusion checkpointing logic"
...
This reverts commit b22eec8fe3.
2022-01-22 09:14:50 -07:00
James Betker
b22eec8fe3
(re) attempt diffusion checkpointing logic
2022-01-22 08:34:40 -07:00
James Betker
8f48848f91
misc
2022-01-22 08:23:29 -07:00
James Betker
851070075a
text<->cond clip
...
I need that universal clip..
2022-01-22 08:23:14 -07:00
James Betker
8ada52ccdc
Update LR layers to checkpoint better
2022-01-22 08:22:57 -07:00
James Betker
ce929a6b3f
Allow grad scaler to be enabled even in fp32 mode
2022-01-21 23:13:24 -07:00
James Betker
91b4b240ac
dont pickle unique files
2022-01-21 00:02:06 -07:00
James Betker
7fef7fb9ff
Update fast_paired_dataset to report how many audio files it is actually using
2022-01-20 21:49:38 -07:00
James Betker
ed35cfe393
Update inference scripts
2022-01-20 11:28:50 -07:00
James Betker
20312211e0
Fix bug in code alignment
2022-01-20 11:28:12 -07:00
James Betker
8e2439f50d
Decrease resolution requirements to 2048
2022-01-20 11:27:49 -07:00
James Betker
4af8525dc3
Adjust diffusion vocoder to allow training individual levels
2022-01-19 13:37:59 -07:00
James Betker
ac13bfefe8
use_diffuse_tts
2022-01-19 00:35:24 -07:00
James Betker
bcd8cc51e1
Enable collated data for diffusion purposes
2022-01-19 00:35:08 -07:00
James Betker
dc9cd8c206
Update use_gpt_tts to be usable with unified_voice2
2022-01-18 21:14:17 -07:00
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
b6190e96b2
fast_paired
2022-01-17 15:46:02 -07:00
James Betker
1d30d79e34
De-specify fast-paired-dataset
2022-01-16 21:20:00 -07:00
James Betker
2b36ca5f8e
Revert paired back
2022-01-16 21:10:46 -07:00
James Betker
ad3e7df086
Split the fast random into its own new dataset
2022-01-16 21:10:11 -07:00
James Betker
7331862755
Updated paired to randomly index data, offsetting memory costs and speeding up initialization
2022-01-16 21:09:22 -07:00
James Betker
37e4e737b5
a few fixes
2022-01-16 15:17:17 -07:00
James Betker
35db5ebf41
paired_voice_audio_dataset - aligned codes support
2022-01-15 17:38:26 -07:00
James Betker
3f177cd2b3
requirements
2022-01-15 17:28:59 -07:00
James Betker
b398ecca01
wer fix
2022-01-15 17:28:17 -07:00
James Betker
9100e7fa9b
Add a diffusion network that takes aligned text instead of MELs
2022-01-15 17:28:02 -07:00
James Betker
87c83e4957
update wer script
2022-01-13 17:08:49 -07:00
James Betker
009a1e8404
Add a new diffusion_vocoder that should be trainable faster
...
This new one has a "cheating" top layer, that does not feed down into the unet encoder,
but does consume the outputs of the unet. This cheater only operates on half of the input,
while the rest of the unet operates on the full input. This limits the dimensionality of this last
layer, on the assumption that these last layers consume by far the most computation and memory,
but do not require the full input context.
Losses are only computed on half of the aggregate input.
2022-01-11 17:26:07 -07:00
James Betker
d4e27ccf62
misc updates
2022-01-11 16:25:40 -07:00
James Betker
91f28580e2
fix unified_voice
2022-01-10 16:17:31 -07:00
James Betker
136744dc1d
Fixes
2022-01-10 14:32:04 -07:00
James Betker
ee3dfac2ae
unified_voice2: decouple positional embeddings and token embeddings from underlying gpt model
2022-01-10 08:14:41 -07:00
James Betker
f503d8d96b
Partially implement performers in transformer_builders
2022-01-09 22:35:03 -07:00
James Betker
ec456b6733
Revert unified_voice back to beginning
...
I'll be doing my work within unified_voice2
2022-01-09 22:34:30 -07:00
James Betker
432073c5ca
Make performer code functional
2022-01-09 22:32:50 -07:00
James Betker
f474a7ac65
unified_voice2
2022-01-09 22:32:34 -07:00
James Betker
c075fe72e2
import performer repo
2022-01-09 22:10:07 -07:00
James Betker
7de3874f15
Make dalle transformer checkpointable
2022-01-09 19:14:35 -07:00
James Betker
70b17da193
Alter unified_voice to use extensible transformer (still WIP)
2022-01-08 22:18:25 -07:00
James Betker
15d9517e26
Allow bi-directional clipping
2022-01-08 22:18:04 -07:00
James Betker
894d245062
More zero_grad fixes
2022-01-08 20:31:19 -07:00
James Betker
8bade38180
Add generic CLIP model based off of x_clip
2022-01-08 19:08:01 -07:00
James Betker
2a9a25e6e7
Fix likely defective nan grad recovery
2022-01-08 18:24:58 -07:00
James Betker
438dd9ed33
fix text-voice-clip bug
2022-01-08 08:55:00 -07:00
James Betker
34774f9948
unified_voice: begin decoupling from HF GPT
...
I'd like to try some different (newer) transformer variants. The way to get
there is softly decoupling the transformer portion of this architecture
from GPT. This actually should be fairly easy.
2022-01-07 22:51:24 -07:00
James Betker
1f6a5310b8
More fixes to use_gpt_tts
2022-01-07 22:30:55 -07:00
James Betker
68090ac3e9
Finish up the text->voice clip model
2022-01-07 22:28:45 -07:00
James Betker
65ffe38fce
misc
2022-01-06 22:16:17 -07:00
James Betker
6706591d3d
Fix dataset
2022-01-06 15:24:37 -07:00
James Betker
f4484fd155
Add "dataset_debugger" support
...
This allows the datasets themselves to compile statistics and report them
via tensorboard and wandb.
2022-01-06 12:38:20 -07:00
James Betker
f3cab45658
Revise audio datasets to include interesting statistics in batch
...
Stats include:
- How many indices were skipped to retrieve a given index
- Whether or not a conditioning input was actually the file itself
2022-01-06 11:15:16 -07:00
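The statistics described in f3cab45658 (indices skipped per fetch, and whether the conditioning input was the file itself) lend themselves to a small accumulator that the dataset attaches to each batch and the trainer flushes to tensorboard/wandb. A hypothetical sketch of such a collector, not the repository's `dataset_debugger`:

```python
class DatasetStats:
    """Accumulate per-fetch dataset statistics for periodic reporting."""
    def __init__(self):
        self.skipped_indices = 0
        self.self_conditioned = 0
        self.total_fetches = 0

    def record(self, skipped, conditioning_was_self):
        # Called once per __getitem__ with that fetch's stats.
        self.skipped_indices += skipped
        self.self_conditioned += int(conditioning_was_self)
        self.total_fetches += 1

    def report(self):
        # Scalars suitable for tensorboard/wandb logging.
        n = max(1, self.total_fetches)
        return {
            "avg_skipped_per_fetch": self.skipped_indices / n,
            "self_conditioning_frac": self.self_conditioned / n,
        }
```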
James Betker
06c1093090
Remove collating from paired_voice_audio_dataset
...
This will now be done at the model level, which is more efficient
2022-01-06 10:29:39 -07:00
James Betker
e7a705fe6e
Make gpt_asr_hf2 more efficient at inference
2022-01-06 10:27:10 -07:00
James Betker
5e1d1da2e9
Clean paired_voice
2022-01-06 10:26:53 -07:00