James Betker
d05e162f95
reformat x_transformers
2022-04-07 23:08:03 -07:00
James Betker
7c578eb59b
Fix inference in new autoregressive_codegen
2022-04-07 21:22:46 -06:00
James Betker
3f8d7955ef
unified_voice with rotary embeddings
2022-04-07 20:11:14 -06:00
James Betker
573e5552b9
CLVP v1
2022-04-07 20:10:57 -06:00
James Betker
71b73db044
clean up
2022-04-07 11:34:10 -06:00
James Betker
6fc4f49e86
some dumb stuff
2022-04-07 11:32:34 -06:00
James Betker
e6387c7613
Fix eval logic to not run immediately
2022-04-07 11:29:57 -06:00
James Betker
305dc95e4b
cg2
2022-04-06 21:24:36 -06:00
James Betker
e011166dd6
autoregressive_codegen r3
2022-04-06 21:04:23 -06:00
James Betker
33ef17e9e5
fix context
2022-04-06 00:45:42 -06:00
James Betker
37bdfe82b2
Modify x_transformers to do checkpointing and use relative positional biases
2022-04-06 00:35:29 -06:00
James Betker
09879b434d
bring in x_transformers
2022-04-06 00:21:58 -06:00
James Betker
3d916e7687
Fix evaluation when using multiple batch sizes
2022-04-05 07:51:09 -06:00
James Betker
572d137589
track iteration rate
2022-04-04 12:33:25 -06:00
James Betker
4cdb0169d0
update training data encountered when using force_start_step
2022-04-04 12:25:00 -06:00
James Betker
cdd12ff46c
Add code validation to autoregressive_codegen
2022-04-04 09:51:41 -06:00
James Betker
99de63a922
man I'm really on it tonight....
2022-04-02 22:01:33 -06:00
James Betker
a4bdc80933
moikmadsf
2022-04-02 21:59:50 -06:00
James Betker
1cf20b7337
sdfds
2022-04-02 21:58:09 -06:00
James Betker
b6afc4d542
dsfa
2022-04-02 21:57:00 -06:00
James Betker
4c6bdfc9e2
get rid of relative position embeddings, which do not work with DDP & checkpointing
2022-04-02 21:55:32 -06:00
James Betker
b6d62aca5d
add inference model on top of codegen
2022-04-02 21:25:10 -06:00
James Betker
2b6ff09225
autoregressive_codegen v1
2022-04-02 15:07:39 -06:00
James Betker
00767219fc
undo latent converter change
2022-04-01 20:46:27 -06:00
James Betker
55c86e02c7
Flat fix
2022-04-01 19:13:33 -06:00
James Betker
8623c51902
fix bug
2022-04-01 16:11:34 -06:00
James Betker
035bcd9f6c
fwd fix
2022-04-01 16:03:07 -06:00
James Betker
f6a8b0a5ca
prep flat0 for feeding from autoregressive_latent_converter
2022-04-01 15:53:45 -06:00
James Betker
3e97abc8a9
update flat0 to break out timestep-independent inference steps
2022-04-01 14:38:53 -06:00
James Betker
a6181a489b
Fix loss gapping caused by poor gradients into mel_pred
2022-03-26 22:49:14 -06:00
James Betker
0070867d0f
inference script for diffusion image models
2022-03-26 22:48:24 -06:00
James Betker
1feade23ff
support x-transformers in text_voice_clip and support relative positional embeddings
2022-03-26 22:48:10 -06:00
James Betker
9b90472e15
feed direct inputs into gd
2022-03-26 08:36:19 -06:00
James Betker
6909f196b4
make code pred returns optional
2022-03-26 08:33:30 -06:00
James Betker
2a29a71c37
attempt to force meaningful codes by adding a surrogate loss
2022-03-26 08:31:40 -06:00
James Betker
45804177b8
more stuff
2022-03-25 00:03:18 -06:00
James Betker
d4218d8443
mods
2022-03-24 23:31:20 -06:00
James Betker
9c79fec734
update adf
2022-03-24 21:20:29 -06:00
James Betker
07731d5491
Fix ET
2022-03-24 21:20:22 -06:00
James Betker
a15970dd97
disable checkpointing in conditioning encoder
2022-03-24 11:49:04 -06:00
James Betker
cc5fc91562
flat0 work
2022-03-24 11:46:53 -06:00
James Betker
b0d2827fad
flat0
2022-03-24 11:30:40 -06:00
James Betker
8707a3e0c3
drop full layers in layerdrop, not half layers
2022-03-23 17:15:08 -06:00
James Betker
57da6d0ddf
more simplifications
2022-03-22 11:46:03 -06:00
James Betker
f3f391b372
undo sandwich
2022-03-22 11:43:24 -06:00
James Betker
927731f3b4
tts9: fix position embeddings snafu
2022-03-22 11:41:32 -06:00
James Betker
536511fc4b
unified_voice: relative position encodings
2022-03-22 11:41:13 -06:00
James Betker
be5f052255
misc
2022-03-22 11:40:56 -06:00
James Betker
963f0e9cee
fix unscaler
2022-03-22 11:40:02 -06:00
James Betker
5405ce4363
fix flat
2022-03-22 11:39:39 -06:00
James Betker
e47a759ed8
.......
2022-03-21 17:22:35 -06:00
James Betker
cc4c9faf9a
resolve more issues
2022-03-21 17:20:05 -06:00
James Betker
3692c4cae3
map vocoder into cpu
2022-03-21 17:10:57 -06:00
James Betker
9e97cd800c
take the conditioning mean rather than the first element
2022-03-21 16:58:03 -06:00
James Betker
9c7598dc9a
fix conditioning_free signal
2022-03-21 15:29:17 -06:00
James Betker
2a65c982ca
dont double nest checkpointing
2022-03-21 15:27:51 -06:00
James Betker
723f324eda
Make it even better
2022-03-21 14:50:59 -06:00
James Betker
e735d8e1fa
unified_voice fixes
2022-03-21 14:44:00 -06:00
James Betker
1ad18d29a8
Flat fixes
2022-03-21 14:43:52 -06:00
James Betker
26dcf7f1a2
r2 of the flat diffusion
2022-03-21 11:40:43 -06:00
James Betker
c5000420f6
more arbitrary fixes
2022-03-17 17:45:44 -06:00
James Betker
c14fc003ed
flat diffusion
2022-03-17 17:45:27 -06:00
James Betker
428911cd4d
flat diffusion network
2022-03-17 10:53:56 -06:00
James Betker
bf08519d71
fixes
2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f
More cleaning
2022-03-16 12:05:56 -06:00
James Betker
d186414566
More spring cleaning
2022-03-16 12:04:00 -06:00
James Betker
735f6e4640
Move gen_similarities and rename
2022-03-16 11:59:34 -06:00
James Betker
8b376e63d9
More improvements
2022-03-16 10:16:34 -06:00
James Betker
54202aa099
fix mel normalization
2022-03-16 09:26:55 -06:00
James Betker
8437bb0c53
fixes
2022-03-15 23:52:48 -06:00
James Betker
3f244f6a68
add mel_norm to std injector
2022-03-15 22:16:59 -06:00
James Betker
0fc877cbc8
tts9 fix for alignment size
2022-03-15 21:43:14 -06:00
James Betker
f563a8dd41
fixes
2022-03-15 21:43:00 -06:00
James Betker
b754058018
Update wav2vec2 wrapper
2022-03-15 11:35:38 -06:00
James Betker
1e3a8554a1
updates to audio_diffusion_fid
2022-03-15 11:35:09 -06:00
James Betker
9c6f776980
Add univnet vocoder
2022-03-15 11:34:51 -06:00
James Betker
7929fd89de
Refactor audio-style models into the audio folder
2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82
move waveglow to audio/vocoders
2022-03-15 11:03:07 -06:00
James Betker
0419a64107
misc
2022-03-15 10:36:34 -06:00
James Betker
bb03cbb9fc
composable initial checkin
2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9
tts8 (incomplete, may be removed)
2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678
Use wider spectrogram when asked
2022-03-15 10:35:11 -06:00
James Betker
9767260c6c
tacotron stft - loosen bounds restrictions and clip
2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7
Updates to support inputting MELs into the conditioning encoder
2022-03-14 17:31:42 -06:00
James Betker
e045fb0ad7
fix clip grad norm with scaler
2022-03-13 16:28:23 -06:00
James Betker
22c67ce8d3
tts9 mods
2022-03-13 10:25:55 -06:00
James Betker
08599b4c75
fix random_audio_crop injector
2022-03-12 20:42:29 -07:00
James Betker
8f130e2b3f
add scale_shift_norm back to tts9
2022-03-12 20:42:13 -07:00
James Betker
9bbbe26012
update audio_with_noise
2022-03-12 20:41:47 -07:00
James Betker
e754c4fbbc
sweep update
2022-03-12 15:33:00 -07:00
James Betker
73bfd4a86d
another tts9 update
2022-03-12 15:17:06 -07:00
James Betker
0523777ff7
add efficient config to tts9
2022-03-12 15:10:35 -07:00
James Betker
896accb71f
data and prep improvements
2022-03-12 15:10:11 -07:00
James Betker
1e87b934db
potentially average conditioning inputs
2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11
Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
2022-03-09 09:43:38 -07:00
James Betker
726e30c4f7
Update noise augmentation dataset to include voices that are appended at the end of another clip.
2022-03-09 09:43:10 -07:00
James Betker
c4e4cf91a0
add support for the original vocoder to audio_diffusion_fid; also add a new "intelligibility" metric
2022-03-08 15:53:27 -07:00
James Betker
3e5da71b16
add grad scaler scale to metrics
2022-03-08 15:52:42 -07:00
James Betker
d2bdeb6f20
misc audio support
2022-03-08 15:52:26 -07:00
James Betker
d553808d24
misc
2022-03-08 15:52:16 -07:00
James Betker
7dabc17626
phase2 filter initial commit
2022-03-08 15:51:55 -07:00
James Betker
f56edb2122
minicoder with classifier head: spread out probability mass for 0 predictions
2022-03-08 15:51:31 -07:00
James Betker
29b2921222
move diffusion vocoder
2022-03-08 15:51:05 -07:00
James Betker
94222b0216
tts9 initial commit
2022-03-08 15:50:45 -07:00
James Betker
38fd9fc985
Improve efficiency of audio_with_noise_dataset
2022-03-08 15:50:13 -07:00
James Betker
b3def182de
move processing pipeline to "phase_1"
2022-03-08 15:49:51 -07:00
James Betker
30ddac69aa
lots of bad entries
2022-03-05 23:15:59 -07:00
James Betker
dcf98df0c2
++
2022-03-05 23:12:34 -07:00
James Betker
64d764ccd7
fml
2022-03-05 23:11:10 -07:00
James Betker
ef63ff84e2
pvd2
2022-03-05 23:08:39 -07:00
James Betker
1a05712764
pvd
2022-03-05 23:05:29 -07:00
James Betker
d1dc8dbb35
Support tts9
2022-03-05 20:14:36 -07:00
James Betker
93a3302819
Push training_state data to CPU memory before saving it
For whatever reason, keeping this on GPU memory just doesn't work.
When you load it, it consumes a large amount of GPU memory and that
utilization doesn't go away. Saving to CPU should fix this.
2022-03-04 17:57:33 -07:00
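The commit above describes copying training_state tensors to CPU before serializing, so that reloading the checkpoint later does not leave the restored state pinned in GPU memory. A minimal sketch of that pattern in PyTorch follows; it is not the repository's actual code, and the helper name and usage are illustrative assumptions.

```python
import torch

def state_to_cpu(obj):
    """Recursively copy any tensors in a nested state structure to CPU."""
    if torch.is_tensor(obj):
        return obj.detach().cpu()
    if isinstance(obj, dict):
        return {k: state_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [state_to_cpu(v) for v in obj]
    return obj

# Hypothetical usage: save CPU copies so a later torch.load() does not
# re-materialize the optimizer/training state on the GPU.
# torch.save(state_to_cpu({'optimizer': optimizer.state_dict(), 'step': step}),
#            'training_state.pth')
```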
James Betker
6000580e2e
df
2022-03-04 13:47:00 -07:00
James Betker
382681a35d
Load diffusion_fid DVAE into the correct cuda device
2022-03-04 13:42:14 -07:00
James Betker
e1052a5e32
Move log consensus to train for efficiency
2022-03-04 13:41:32 -07:00
James Betker
ce6dfdf255
Distributed "fixes"
2022-03-04 12:46:41 -07:00
James Betker
3ff878ae85
Accumulate loss & grad_norm metrics from all entities within a distributed graph
2022-03-04 12:01:16 -07:00
James Betker
79e5692388
Fix distributed bug
2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3
Cap grad booster
2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
James Betker
f490eaeba7
Shuffle optimizer states back and forth between cpu memory during steps
2022-03-04 10:38:51 -07:00
James Betker
3c242403f5
adjust location of pre-optimizer step so I can visualize the new grad norms
2022-03-04 08:56:42 -07:00
James Betker
58019a2ce3
audio diffusion fid updates
2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f
w2v_matcher mods
2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c
Add a base-wrapper
2022-03-03 21:52:28 -07:00
James Betker
6873ad6660
Support functionality
2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce
Add experimental gradient boosting into tts7
2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3
asdf
2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428
Get rid of unused codes in vq
2022-03-03 13:41:38 -07:00
James Betker
619da9ea28
Get rid of discretization loss
2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d
asdf
2022-03-01 21:41:31 -07:00
James Betker
70fa780edb
Add mechanism to export grad norms
2022-03-01 20:19:52 -07:00
James Betker
d9f8f92840
Codified fp16
2022-03-01 15:46:04 -07:00
James Betker
45ab444c04
Rework minicoder to always checkpoint
2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac
Implement guidance-free diffusion in eval
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516
Implement conditioning-free diffusion at the eval level
2022-02-27 15:11:42 -07:00
James Betker
436fe24822
Add conditioning-free guidance
2022-02-27 15:00:06 -07:00
James Betker
ac920798bb
misc
2022-02-27 14:49:11 -07:00
James Betker
ba155e4e2f
script for uploading models to the HF hub
2022-02-27 14:48:38 -07:00
James Betker
dbc74e96b2
w2v_matcher
2022-02-27 14:48:23 -07:00
James Betker
42879d7296
w2v_wrapper ramping dropout mode
this is an experimental feature that needs some testing
2022-02-27 14:47:51 -07:00
James Betker
c375287db9
Re-instate autocasting
2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e
get rid of autocasting in tts7
2022-02-24 21:53:51 -07:00
James Betker
f458f5d8f1
abort early if losses reach nan too much, and save the model
2022-02-24 20:55:30 -07:00
James Betker
18dc62453f
Don't step if NaN losses are encountered.
2022-02-24 17:45:08 -07:00
James Betker
ea500ad42a
Use clustered masking in udtts7
2022-02-24 07:57:26 -07:00
James Betker
7c17c8e674
gurgl
2022-02-23 21:28:24 -07:00
James Betker
e6824e398f
Load dvae to cpu
2022-02-23 21:21:45 -07:00
James Betker
81017d9696
put frechet_distance on cuda
2022-02-23 21:21:13 -07:00
James Betker
9a7bbf33df
f
2022-02-23 18:03:38 -07:00
James Betker
68726eac74
.
2022-02-23 17:58:07 -07:00
James Betker
b7319ab518
Support vocoder type diffusion in audio_diffusion_fid
2022-02-23 17:25:16 -07:00
James Betker
58f6c9805b
adf
2022-02-22 23:12:58 -07:00
James Betker
03752c1cd6
Report NaN
2022-02-22 23:09:37 -07:00
James Betker
7201b4500c
default text_to_sequence cleaners
2022-02-21 19:14:22 -07:00
James Betker
ba7f54c162
w2v: new inference function
2022-02-21 19:13:03 -07:00
James Betker
896ac029ae
allow continuation of samples encountered
2022-02-21 19:12:50 -07:00
James Betker
6313a94f96
eval: integrate a n-gram language model into decoding
2022-02-21 19:12:34 -07:00
James Betker
af50afe222
pairedvoice: error out if clip is too short
2022-02-21 19:11:10 -07:00
James Betker
38802a96c8
remove timesteps from cond calculation
2022-02-21 12:32:21 -07:00
James Betker
668876799d
unet_diffusion_tts7
2022-02-20 15:22:38 -07:00
James Betker
0872e17e60
unified_voice mods
2022-02-19 20:37:35 -07:00
James Betker
7b12799370
Reformat mel_text_clip for use in eval
2022-02-19 20:37:26 -07:00
James Betker
bcba65c539
DataParallel Fix
2022-02-19 20:36:35 -07:00
James Betker
34001ad765
et
2022-02-18 18:52:33 -07:00
James Betker
baf7b65566
Attempt to make w2v play with DDP AND checkpointing
2022-02-18 18:47:11 -07:00
James Betker
f3776f1992
reset ctc loss from "mean" to "sum"
2022-02-17 22:00:58 -07:00
James Betker
2b20da679c
make spec_augment a parameter
2022-02-17 20:22:05 -07:00
James Betker
a813fbed9c
Update to evaluator
2022-02-17 17:30:33 -07:00
James Betker
e1d71e1bd5
w2v_wrapper: get rid of ctc attention mask
2022-02-15 20:54:40 -07:00
James Betker
79e8f36d30
Convert CLIP models into new folder
2022-02-15 20:53:07 -07:00
James Betker
8f767b8b4f
...
2022-02-15 07:08:17 -07:00
James Betker
29e07913a8
Fix
2022-02-15 06:58:11 -07:00
James Betker
dd585df772
LAMB optimizer
2022-02-15 06:48:13 -07:00
James Betker
2bdb515068
A few mods to make wav2vec2 trainable with DDP on DLAS
2022-02-15 06:28:54 -07:00
James Betker
52b61b9f77
Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes
2022-02-13 20:48:06 -07:00
James Betker
a4f1641eea
Add & refine WER evaluator for w2v
2022-02-13 20:47:29 -07:00
James Betker
e16af944c0
BSO fix
2022-02-12 20:01:04 -07:00
James Betker
29534180b2
w2v fine tuner
2022-02-12 20:00:59 -07:00
James Betker
0c3cc5ebad
use script updates to fix output size disparities
2022-02-12 20:00:46 -07:00
James Betker
15fd60aad3
Allow EMA training to be disabled
2022-02-12 20:00:23 -07:00
James Betker
3252972057
ctc_code_gen mods
2022-02-12 19:59:54 -07:00
James Betker
35170c77b3
fix sweep
2022-02-11 11:43:11 -07:00
James Betker
c6b6d120fe
fix ranking
2022-02-11 11:34:57 -07:00
James Betker
095944569c
deep_update dicts
2022-02-11 11:32:25 -07:00
James Betker
ab1f6e8ac6
deepcopy map
2022-02-11 11:29:32 -07:00
James Betker
496fb81997
use fork instead
2022-02-11 11:22:25 -07:00
James Betker
4abc094b47
fix train bug
2022-02-11 11:18:15 -07:00
James Betker
006add64c5
sweep fix
2022-02-11 11:17:08 -07:00
James Betker
102142d1eb
f
2022-02-11 11:05:13 -07:00
James Betker
40b08a52d0
dafuk
2022-02-11 11:01:31 -07:00
James Betker
f6a7f12cad
Remove broken evaluator
2022-02-11 11:00:29 -07:00
James Betker
46b97049dc
Fix eval
2022-02-11 10:59:32 -07:00
James Betker
5175b7d91a
training sweeper checkin
2022-02-11 10:46:37 -07:00
James Betker
302ac8652d
Undo mask during training
2022-02-11 09:35:12 -07:00
James Betker
618a20412a
new rev of ctc_code_gen with surrogate LM loss
2022-02-10 23:09:57 -07:00
James Betker
d1d1ae32a1
audio diffusion frechet distance measurement!
2022-02-10 22:55:46 -07:00
James Betker
23a310b488
Fix BSO
2022-02-10 20:54:51 -07:00