James Betker | f56edb2122 | minicoder with classifier head: spread out probability mass for 0 predictions | 2022-03-08 15:51:31 -07:00
James Betker | 29b2921222 | move diffusion vocoder | 2022-03-08 15:51:05 -07:00
James Betker | 94222b0216 | tts9 initial commit | 2022-03-08 15:50:45 -07:00
James Betker | d1dc8dbb35 | Support tts9 | 2022-03-05 20:14:36 -07:00
James Betker | 79e5692388 | Fix distributed bug | 2022-03-04 11:58:53 -07:00
James Betker | f87e10ffef | Make deterministic sampler work with distributed training & microbatches | 2022-03-04 11:50:50 -07:00
James Betker | 77c18b53b3 | Cap grad booster | 2022-03-04 10:40:24 -07:00
James Betker | 2d1cb83c1d | Add a deterministic timestep sampler, with provisions to employ it every n steps | 2022-03-04 10:40:14 -07:00
James Betker | 58019a2ce3 | audio diffusion fid updates | 2022-03-03 21:53:32 -07:00
James Betker | 998c53ad4f | w2v_matcher mods | 2022-03-03 21:52:51 -07:00
James Betker | 9029e4f20c | Add a base-wrapper | 2022-03-03 21:52:28 -07:00
James Betker | 6873ad6660 | Support functionality | 2022-03-03 21:52:16 -07:00
James Betker | 6af5d129ce | Add experimental gradient boosting into tts7 | 2022-03-03 21:51:40 -07:00
James Betker | 7ea84f1ac3 | asdf | 2022-03-03 13:43:44 -07:00
James Betker | 3cd6c7f428 | Get rid of unused codes in vq | 2022-03-03 13:41:38 -07:00
James Betker | 619da9ea28 | Get rid of discretization loss | 2022-03-03 13:36:25 -07:00
James Betker | beb7c8a39d | asdf | 2022-03-01 21:41:31 -07:00
James Betker | 70fa780edb | Add mechanism to export grad norms | 2022-03-01 20:19:52 -07:00
James Betker | d9f8f92840 | Codified fp16 | 2022-03-01 15:46:04 -07:00
James Betker | 45ab444c04 | Rework minicoder to always checkpoint | 2022-03-01 14:09:18 -07:00
James Betker | db0c3340ac | Implement guidance-free diffusion in eval (and a few other fixes) | 2022-03-01 11:49:36 -07:00
James Betker | 2134f06516 | Implement conditioning-free diffusion at the eval level | 2022-02-27 15:11:42 -07:00
James Betker | 436fe24822 | Add conditioning-free guidance | 2022-02-27 15:00:06 -07:00
James Betker | ac920798bb | misc | 2022-02-27 14:49:11 -07:00
James Betker | dbc74e96b2 | w2v_matcher | 2022-02-27 14:48:23 -07:00
James Betker | 42879d7296 | w2v_wrapper ramping dropout mode (an experimental feature that needs some testing) | 2022-02-27 14:47:51 -07:00
James Betker | c375287db9 | Re-instate autocasting | 2022-02-25 11:06:18 -07:00
James Betker | 34ee32a90e | get rid of autocasting in tts7 | 2022-02-24 21:53:51 -07:00
James Betker | ea500ad42a | Use clustered masking in udtts7 | 2022-02-24 07:57:26 -07:00
James Betker | 7201b4500c | default text_to_sequence cleaners | 2022-02-21 19:14:22 -07:00
James Betker | ba7f54c162 | w2v: new inference function | 2022-02-21 19:13:03 -07:00
James Betker | 38802a96c8 | remove timesteps from cond calculation | 2022-02-21 12:32:21 -07:00
James Betker | 668876799d | unet_diffusion_tts7 | 2022-02-20 15:22:38 -07:00
James Betker | 0872e17e60 | unified_voice mods | 2022-02-19 20:37:35 -07:00
James Betker | 7b12799370 | Reformat mel_text_clip for use in eval | 2022-02-19 20:37:26 -07:00
James Betker | baf7b65566 | Attempt to make w2v play with DDP AND checkpointing | 2022-02-18 18:47:11 -07:00
James Betker | f3776f1992 | reset ctc loss from "mean" to "sum" | 2022-02-17 22:00:58 -07:00
James Betker | 2b20da679c | make spec_augment a parameter | 2022-02-17 20:22:05 -07:00
James Betker | e1d71e1bd5 | w2v_wrapper: get rid of ctc attention mask | 2022-02-15 20:54:40 -07:00
James Betker | 79e8f36d30 | Convert CLIP models into new folder | 2022-02-15 20:53:07 -07:00
James Betker | 2bdb515068 | A few mods to make wav2vec2 trainable with DDP on DLAS | 2022-02-15 06:28:54 -07:00
James Betker | 52b61b9f77 | Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes | 2022-02-13 20:48:06 -07:00
James Betker | a4f1641eea | Add & refine WER evaluator for w2v | 2022-02-13 20:47:29 -07:00
James Betker | 29534180b2 | w2v fine tuner | 2022-02-12 20:00:59 -07:00
James Betker | 3252972057 | ctc_code_gen mods | 2022-02-12 19:59:54 -07:00
James Betker | 302ac8652d | Undo mask during training | 2022-02-11 09:35:12 -07:00
James Betker | 618a20412a | new rev of ctc_code_gen with surrogate LM loss | 2022-02-10 23:09:57 -07:00
James Betker | 820a29f81e | ctc code gen mods | 2022-02-10 09:44:01 -07:00
James Betker | ac9417b956 | ctc_code_gen: mask out all padding tokens | 2022-02-09 17:26:30 -07:00
James Betker | ddb77ef502 | ctc_code_gen: use a mean() on the ConditioningEncoder | 2022-02-09 14:26:44 -07:00