James Betker
1e1bbe1a27
whoops
2022-05-23 12:28:36 -06:00
James Betker
560b83e770
default to residual encoder
2022-05-23 12:24:00 -06:00
James Betker
f432bdf7ae
deeper resblock encoder
2022-05-23 11:46:40 -06:00
James Betker
dc471f5c6d
residual features
2022-05-23 09:58:30 -06:00
James Betker
1f521d6a1d
add reconstruction loss to m2v
2022-05-23 09:28:41 -06:00
James Betker
2270c89fdc
.
2022-05-23 08:47:15 -06:00
James Betker
40f844657b
tolong
2022-05-23 08:27:54 -06:00
James Betker
10f4a742bd
reintroduce attention masks
2022-05-23 08:16:04 -06:00
James Betker
68c0afcbcc
m2v frequency masking
2022-05-23 07:04:12 -06:00
James Betker
4093e38717
revert flat diffusion back...
2022-05-22 23:10:58 -06:00
James Betker
8f28404645
another fix
2022-05-22 21:32:43 -06:00
James Betker
41809a6330
Add 8x dim reductor
2022-05-22 20:23:16 -06:00
James Betker
1095248caf
Revert "retest"
...
This reverts commit ed7768c73b.
2022-05-22 19:23:01 -06:00
James Betker
ed7768c73b
retest
2022-05-22 16:30:09 -06:00
James Betker
2dd0b9e6e9
mel_head should be optional
2022-05-22 12:25:45 -06:00
James Betker
0c60f22197
fix unused parameters
2022-05-22 08:16:31 -06:00
James Betker
57d6f6d366
Big rework of flat_diffusion
...
Back to the drawing board, boys. Time to waste some resources catching bugs....
2022-05-22 08:09:33 -06:00
James Betker
be937d202e
new attempt
2022-05-20 17:04:22 -06:00
James Betker
968660c248
another update
2022-05-20 11:25:00 -06:00
James Betker
28f950b7d3
fix
2022-05-20 11:18:52 -06:00
James Betker
b317c68ac9
fix
2022-05-20 11:12:53 -06:00
James Betker
3121bc4e43
flat diffusion
2022-05-20 11:01:48 -06:00
James Betker
e9fb2ead9a
m2v stuff
2022-05-20 11:01:17 -06:00
James Betker
c9c16e3b01
misc updates
2022-05-19 13:39:32 -06:00
James Betker
10378fc37f
make codebooks specifiable
2022-05-18 11:07:12 -06:00
James Betker
efc2657b48
fiddle with init
2022-05-18 10:56:01 -06:00
James Betker
208a703080
use gelu act
2022-05-18 09:34:01 -06:00
James Betker
b2b37453df
make the codebook bigger
2022-05-17 20:58:56 -06:00
James Betker
9a9c3cafba
Make feature encoder a bit more descriptive
2022-05-17 18:14:52 -06:00
James Betker
ee364f4eeb
just take the mean...
2022-05-17 18:09:23 -06:00
James Betker
6130391a85
fix div
2022-05-17 18:04:20 -06:00
James Betker
7213ad2b89
Do grad reduction
2022-05-17 17:59:40 -06:00
James Betker
7c82e18c6c
darn mpi
2022-05-17 17:16:09 -06:00
James Betker
88ec0512f7
Scale losses
2022-05-17 17:12:20 -06:00
James Betker
a6397ce84a
Fix incorrect projections
2022-05-17 16:53:52 -06:00
James Betker
c37fc3b4ed
m2v grad norm groups
2022-05-17 16:29:36 -06:00
James Betker
c1bdb4f9a1
degrade gumbel softmax over time
2022-05-17 16:23:04 -06:00
James Betker
3853f37257
stable layernorm
2022-05-17 16:07:03 -06:00
James Betker
519151d83f
m2v
2022-05-17 15:37:59 -06:00
James Betker
d1de94d75c
Stash mel2vec work (gonna throw it all away..)
2022-05-17 12:35:01 -06:00
James Betker
ee218ab9b7
uv3
2022-05-13 17:57:47 -06:00
James Betker
3d7e2a2846
fix collection
2022-05-11 21:50:05 -06:00
James Betker
ba2b71796a
k
2022-05-11 21:20:06 -06:00
James Betker
efa737b685
re-add distributed collect to clvp
2022-05-11 21:14:18 -06:00
James Betker
545453077e
uv3
2022-05-09 15:36:22 -06:00
James Betker
96a5cc66ee
uv3
2022-05-09 15:35:51 -06:00
James Betker
b42b4e18de
clean up unified voice
...
- remove unused code
- fix inference model to use the terms "prior" and "posterior" to properly define the modeling order (they were inverted before)
- default some settings I never intend to change in the future
2022-05-09 14:45:49 -06:00
James Betker
1177c35dec
music fid updates
2022-05-08 18:49:39 -06:00
James Betker
7812c23c7a
revert fill_gaps back to old masking behavior
2022-05-08 00:10:19 -06:00
James Betker
58ed27d7a8
new gap_filler
2022-05-07 12:44:23 -06:00
James Betker
6c8032b4be
more work
2022-05-06 21:56:49 -06:00
James Betker
f541610256
contrastive_audio
2022-05-06 16:37:22 -06:00
James Betker
79543e5488
Simpler form of the wavegen model
2022-05-06 16:37:04 -06:00
James Betker
d8925ccde5
few things with gap filling
2022-05-06 14:33:44 -06:00
James Betker
b13d983c24
and mel_head
2022-05-06 00:25:27 -06:00
James Betker
d5fb79564a
remove mel_pred
2022-05-06 00:24:05 -06:00
James Betker
e9bb692490
fixed aligned_latent
2022-05-06 00:20:21 -06:00
James Betker
1609101a42
musical gap filler
2022-05-05 16:47:08 -06:00
James Betker
d66ab2d28c
Remove unused waveform_gens
2022-05-04 21:06:54 -06:00
James Betker
47662b9ec5
some random crap
2022-05-04 20:29:23 -06:00
James Betker
c42c53e75a
Add a trainable network for converting a normal distribution into a latent space
2022-05-02 09:47:30 -06:00
James Betker
ab219fbefb
output variance
2022-05-02 00:10:33 -06:00
James Betker
3b074aac34
add checkpointing
2022-05-02 00:07:42 -06:00
James Betker
ae5f934ea1
diffwave
2022-05-02 00:05:04 -06:00
James Betker
b712d3b72b
break out get_conditioning_latent from unified_voice
2022-05-01 23:04:44 -06:00
James Betker
afa2df57c9
gen3
2022-04-30 10:41:38 -06:00
James Betker
8aa6651fc7
fix surrogate loss return in waveform_gen2
2022-04-28 10:10:11 -06:00
James Betker
f02b01bd9d
reverse univnet classifier
2022-04-20 21:37:55 -06:00
James Betker
9df85c902e
New gen2
...
Which is basically an autoencoder with a giant diffusion appendage attached
2022-04-20 21:37:34 -06:00
James Betker
b4549eed9f
uv2 fix
2022-04-20 00:27:38 -06:00
James Betker
24fdafd855
fix2
2022-04-20 00:03:29 -06:00
James Betker
0af0051399
fix
2022-04-20 00:01:57 -06:00
James Betker
419f4d37bd
gen2 music
2022-04-19 23:38:37 -06:00
James Betker
8fe0dff33c
support tts typing
2022-04-16 23:36:57 -06:00
James Betker
48cb6a5abd
misc
2022-04-16 20:28:04 -06:00
James Betker
147478a148
cvvp
2022-04-16 20:27:46 -06:00
James Betker
546ecd5aeb
music!
2022-04-15 21:21:37 -06:00
James Betker
254357724d
gradprop
2022-04-15 09:37:20 -06:00
James Betker
fbf1f4f637
update
2022-04-15 09:34:44 -06:00
James Betker
82aad335ba
add distributed logic for loss
2022-04-15 09:31:48 -06:00
James Betker
efe12cb816
Update clvp to add masking probabilities in conditioning and to support code inputs
2022-04-15 09:11:23 -06:00
James Betker
8ea5c307fb
Fixes for training the diffusion model on autoregressive inputs
2022-04-11 11:02:44 -06:00
James Betker
a3622462c1
Change latent_conditioner back
2022-04-11 09:00:13 -06:00
James Betker
03d0b90bda
fixes
2022-04-10 21:02:12 -06:00
James Betker
19ca5b26c1
Remove flat0 and move it into flat
2022-04-10 21:01:59 -06:00
James Betker
81c952a00a
undo relative
2022-04-08 16:32:52 -06:00
James Betker
944b4c3335
more undos
2022-04-08 16:31:08 -06:00
James Betker
032983e2ed
fix bug and allow position encodings to be trained separately from the rest of the model
2022-04-08 16:26:01 -06:00
James Betker
09ab1aa9bc
revert rotary embeddings work
...
I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained.
2022-04-08 16:18:35 -06:00
James Betker
2fb9ffb0aa
Align autoregressive text using start and stop tokens
2022-04-08 09:41:59 -06:00
James Betker
423293e518
fix xtransformers bug
2022-04-08 09:12:46 -06:00
James Betker
048f6f729a
remove lightweight_gan
2022-04-07 23:12:08 -07:00
James Betker
e634996a9c
autoregressive_codegen: support key_value caching for faster inference
2022-04-07 23:08:46 -07:00
James Betker
d05e162f95
reformat x_transformers
2022-04-07 23:08:03 -07:00
James Betker
7c578eb59b
Fix inference in new autoregressive_codegen
2022-04-07 21:22:46 -06:00
James Betker
3f8d7955ef
unified_voice with rotary embeddings
2022-04-07 20:11:14 -06:00
James Betker
573e5552b9
CLVP v1
2022-04-07 20:10:57 -06:00
James Betker
71b73db044
clean up
2022-04-07 11:34:10 -06:00
James Betker
6fc4f49e86
some dumb stuff
2022-04-07 11:32:34 -06:00
James Betker
305dc95e4b
cg2
2022-04-06 21:24:36 -06:00
James Betker
e011166dd6
autoregressive_codegen r3
2022-04-06 21:04:23 -06:00
James Betker
33ef17e9e5
fix context
2022-04-06 00:45:42 -06:00
James Betker
37bdfe82b2
Modify x_transformers to do checkpointing and use relative positional biases
2022-04-06 00:35:29 -06:00
James Betker
09879b434d
bring in x_transformers
2022-04-06 00:21:58 -06:00
James Betker
cdd12ff46c
Add code validation to autoregressive_codegen
2022-04-04 09:51:41 -06:00
James Betker
99de63a922
man I'm really on it tonight....
2022-04-02 22:01:33 -06:00
James Betker
a4bdc80933
moikmadsf
2022-04-02 21:59:50 -06:00
James Betker
1cf20b7337
sdfds
2022-04-02 21:58:09 -06:00
James Betker
b6afc4d542
dsfa
2022-04-02 21:57:00 -06:00
James Betker
4c6bdfc9e2
get rid of relative position embeddings, which do not work with DDP & checkpointing
2022-04-02 21:55:32 -06:00
James Betker
b6d62aca5d
add inference model on top of codegen
2022-04-02 21:25:10 -06:00
James Betker
2b6ff09225
autoregressive_codegen v1
2022-04-02 15:07:39 -06:00
James Betker
00767219fc
undo latent converter change
2022-04-01 20:46:27 -06:00
James Betker
55c86e02c7
Flat fix
2022-04-01 19:13:33 -06:00
James Betker
8623c51902
fix bug
2022-04-01 16:11:34 -06:00
James Betker
f6a8b0a5ca
prep flat0 for feeding from autoregressive_latent_converter
2022-04-01 15:53:45 -06:00
James Betker
3e97abc8a9
update flat0 to break out timestep-independent inference steps
2022-04-01 14:38:53 -06:00
James Betker
a6181a489b
Fix loss gapping caused by poor gradients into mel_pred
2022-03-26 22:49:14 -06:00
James Betker
1feade23ff
support x-transformers in text_voice_clip and support relative positional embeddings
2022-03-26 22:48:10 -06:00
James Betker
6909f196b4
make code pred returns optional
2022-03-26 08:33:30 -06:00
James Betker
2a29a71c37
attempt to force meaningful codes by adding a surrogate loss
2022-03-26 08:31:40 -06:00
James Betker
45804177b8
more stuff
2022-03-25 00:03:18 -06:00
James Betker
d4218d8443
mods
2022-03-24 23:31:20 -06:00
James Betker
a15970dd97
disable checkpointing in conditioning encoder
2022-03-24 11:49:04 -06:00
James Betker
cc5fc91562
flat0 work
2022-03-24 11:46:53 -06:00
James Betker
b0d2827fad
flat0
2022-03-24 11:30:40 -06:00
James Betker
8707a3e0c3
drop full layers in layerdrop, not half layers
2022-03-23 17:15:08 -06:00
James Betker
57da6d0ddf
more simplifications
2022-03-22 11:46:03 -06:00
James Betker
f3f391b372
undo sandwich
2022-03-22 11:43:24 -06:00
James Betker
927731f3b4
tts9: fix position embeddings snafu
2022-03-22 11:41:32 -06:00
James Betker
536511fc4b
unified_voice: relative position encodings
2022-03-22 11:41:13 -06:00
James Betker
5405ce4363
fix flat
2022-03-22 11:39:39 -06:00
James Betker
e47a759ed8
.......
2022-03-21 17:22:35 -06:00
James Betker
cc4c9faf9a
resolve more issues
2022-03-21 17:20:05 -06:00
James Betker
9e97cd800c
take the conditioning mean rather than the first element
2022-03-21 16:58:03 -06:00
James Betker
9c7598dc9a
fix conditioning_free signal
2022-03-21 15:29:17 -06:00
James Betker
2a65c982ca
dont double nest checkpointing
2022-03-21 15:27:51 -06:00
James Betker
723f324eda
Make it even better
2022-03-21 14:50:59 -06:00
James Betker
e735d8e1fa
unified_voice fixes
2022-03-21 14:44:00 -06:00
James Betker
1ad18d29a8
Flat fixes
2022-03-21 14:43:52 -06:00
James Betker
26dcf7f1a2
r2 of the flat diffusion
2022-03-21 11:40:43 -06:00
James Betker
c14fc003ed
flat diffusion
2022-03-17 17:45:27 -06:00
James Betker
428911cd4d
flat diffusion network
2022-03-17 10:53:56 -06:00
James Betker
bf08519d71
fixes
2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f
More cleaning
2022-03-16 12:05:56 -06:00
James Betker
d186414566
More spring cleaning
2022-03-16 12:04:00 -06:00
James Betker
8b376e63d9
More improvements
2022-03-16 10:16:34 -06:00
James Betker
0fc877cbc8
tts9 fix for alignment size
2022-03-15 21:43:14 -06:00
James Betker
f563a8dd41
fixes
2022-03-15 21:43:00 -06:00
James Betker
b754058018
Update wav2vec2 wrapper
2022-03-15 11:35:38 -06:00
James Betker
9c6f776980
Add univnet vocoder
2022-03-15 11:34:51 -06:00
James Betker
7929fd89de
Refactor audio-style models into the audio folder
2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82
move waveglow to audio/vocoders
2022-03-15 11:03:07 -06:00
James Betker
bb03cbb9fc
composable initial checkin
2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9
tts8 (incomplete, may be removed)
2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678
Use wider spectrogram when asked
2022-03-15 10:35:11 -06:00
James Betker
9767260c6c
tacotron stft - loosen bounds restrictions and clip
2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7
Updates to support inputting MELs into the conditioning encoder
2022-03-14 17:31:42 -06:00
James Betker
22c67ce8d3
tts9 mods
2022-03-13 10:25:55 -06:00
James Betker
8f130e2b3f
add scale_shift_norm back to tts9
2022-03-12 20:42:13 -07:00
James Betker
73bfd4a86d
another tts9 update
2022-03-12 15:17:06 -07:00
James Betker
0523777ff7
add efficient config to tts9
2022-03-12 15:10:35 -07:00
James Betker
1e87b934db
potentially average conditioning inputs
2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11
Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
2022-03-09 09:43:38 -07:00
James Betker
f56edb2122
minicoder with classifier head: spread out probability mass for 0 predictions
2022-03-08 15:51:31 -07:00
James Betker
29b2921222
move diffusion vocoder
2022-03-08 15:51:05 -07:00
James Betker
94222b0216
tts9 initial commit
2022-03-08 15:50:45 -07:00
James Betker
d1dc8dbb35
Support tts9
2022-03-05 20:14:36 -07:00
James Betker
79e5692388
Fix distributed bug
2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3
Cap grad booster
2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
James Betker
58019a2ce3
audio diffusion fid updates
2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f
w2v_matcher mods
2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c
Add a base-wrapper
2022-03-03 21:52:28 -07:00
James Betker
6873ad6660
Support functionality
2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce
Add experimental gradient boosting into tts7
2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3
asdf
2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428
Get rid of unused codes in vq
2022-03-03 13:41:38 -07:00
James Betker
619da9ea28
Get rid of discretization loss
2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d
asdf
2022-03-01 21:41:31 -07:00
James Betker
70fa780edb
Add mechanism to export grad norms
2022-03-01 20:19:52 -07:00
James Betker
d9f8f92840
Codified fp16
2022-03-01 15:46:04 -07:00
James Betker
45ab444c04
Rework minicoder to always checkpoint
2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac
Implement guidance-free diffusion in eval
...
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516
Implement conditioning-free diffusion at the eval level
2022-02-27 15:11:42 -07:00
James Betker
436fe24822
Add conditioning-free guidance
2022-02-27 15:00:06 -07:00
James Betker
ac920798bb
misc
2022-02-27 14:49:11 -07:00
James Betker
dbc74e96b2
w2v_matcher
2022-02-27 14:48:23 -07:00
James Betker
42879d7296
w2v_wrapper ramping dropout mode
...
this is an experimental feature that needs some testing
2022-02-27 14:47:51 -07:00
James Betker
c375287db9
Re-instate autocasting
2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e
get rid of autocasting in tts7
2022-02-24 21:53:51 -07:00
James Betker
ea500ad42a
Use clustered masking in udtts7
2022-02-24 07:57:26 -07:00
James Betker
7201b4500c
default text_to_sequence cleaners
2022-02-21 19:14:22 -07:00
James Betker
ba7f54c162
w2v: new inference function
2022-02-21 19:13:03 -07:00
James Betker
38802a96c8
remove timesteps from cond calculation
2022-02-21 12:32:21 -07:00
James Betker
668876799d
unet_diffusion_tts7
2022-02-20 15:22:38 -07:00
James Betker
0872e17e60
unified_voice mods
2022-02-19 20:37:35 -07:00
James Betker
7b12799370
Reformat mel_text_clip for use in eval
2022-02-19 20:37:26 -07:00
James Betker
baf7b65566
Attempt to make w2v play with DDP AND checkpointing
2022-02-18 18:47:11 -07:00
James Betker
f3776f1992
reset ctc loss from "mean" to "sum"
2022-02-17 22:00:58 -07:00
James Betker
2b20da679c
make spec_augment a parameter
2022-02-17 20:22:05 -07:00
James Betker
e1d71e1bd5
w2v_wrapper: get rid of ctc attention mask
2022-02-15 20:54:40 -07:00
James Betker
79e8f36d30
Convert CLIP models into new folder
2022-02-15 20:53:07 -07:00
James Betker
2bdb515068
A few mods to make wav2vec2 trainable with DDP on DLAS
2022-02-15 06:28:54 -07:00
James Betker
52b61b9f77
Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes
2022-02-13 20:48:06 -07:00
James Betker
a4f1641eea
Add & refine WER evaluator for w2v
2022-02-13 20:47:29 -07:00
James Betker
29534180b2
w2v fine tuner
2022-02-12 20:00:59 -07:00
James Betker
3252972057
ctc_code_gen mods
2022-02-12 19:59:54 -07:00
James Betker
302ac8652d
Undo mask during training
2022-02-11 09:35:12 -07:00
James Betker
618a20412a
new rev of ctc_code_gen with surrogate LM loss
2022-02-10 23:09:57 -07:00
James Betker
820a29f81e
ctc code gen mods
2022-02-10 09:44:01 -07:00
James Betker
ac9417b956
ctc_code_gen: mask out all padding tokens
2022-02-09 17:26:30 -07:00
James Betker
ddb77ef502
ctc_code_gen: use a mean() on the ConditioningEncoder
2022-02-09 14:26:44 -07:00
James Betker
9e9ae328f2
mild updates
2022-02-08 23:51:17 -07:00
James Betker
ff35d13b99
Use non-uniform noise in diffusion_tts6
2022-02-08 07:27:41 -07:00
James Betker
34fbb78671
Straight CtcCodeGenerator as an encoder
2022-02-07 15:46:46 -07:00
James Betker
65a546c4d7
Fix for tts6
2022-02-05 16:00:14 -07:00
James Betker
5ae816bead
ctc gen checkin
2022-02-05 15:59:53 -07:00
James Betker
bb3d1ab03d
More cleanup
2022-02-04 11:06:17 -07:00
James Betker
5cc342de66
Clean up
2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab
add an autoregressive ctc code generator
2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344
Update SR model
2022-02-03 21:42:53 -07:00
James Betker
bc506d4bcd
Mods to unet_diffusion_tts6 to support super resolution mode
2022-02-03 19:59:39 -07:00
James Betker
4249681c4b
Mods to support an autoregressive CTC code generator
2022-02-03 19:58:54 -07:00
James Betker
8132766d38
tts6
2022-01-31 20:15:06 -07:00
James Betker
fbea6e8eac
Adjustments to diffusion networks
2022-01-30 16:14:06 -07:00
James Betker
e58dab14c3
new diffusion updates from testing
2022-01-29 11:01:01 -07:00
James Betker
935a4e853e
get rid of nil tokens in <2>
2022-01-27 22:45:57 -07:00
James Betker
a77d376ad2
rename unet diffusion tts and add 3
2022-01-27 19:56:24 -07:00
James Betker
8c255811ad
more fixes
2022-01-25 17:57:16 -07:00
James Betker
0f3ca28e39
Allow diffusion model to be trained with masking tokens
2022-01-25 14:26:21 -07:00
James Betker
d18aec793a
Revert "(re) attempt diffusion checkpointing logic"
...
This reverts commit b22eec8fe3.
2022-01-22 09:14:50 -07:00
James Betker
b22eec8fe3
(re) attempt diffusion checkpointing logic
2022-01-22 08:34:40 -07:00
James Betker
8f48848f91
misc
2022-01-22 08:23:29 -07:00
James Betker
851070075a
text<->cond clip
...
I need that universal clip..
2022-01-22 08:23:14 -07:00
James Betker
8ada52ccdc
Update LR layers to checkpoint better
2022-01-22 08:22:57 -07:00
James Betker
8e2439f50d
Decrease resolution requirements to 2048
2022-01-20 11:27:49 -07:00
James Betker
4af8525dc3
Adjust diffusion vocoder to allow training individual levels
2022-01-19 13:37:59 -07:00
James Betker
ac13bfefe8
use_diffuse_tts
2022-01-19 00:35:24 -07:00
James Betker
bcd8cc51e1
Enable collated data for diffusion purposes
2022-01-19 00:35:08 -07:00
James Betker
dc9cd8c206
Update use_gpt_tts to be usable with unified_voice2
2022-01-18 21:14:17 -07:00
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
37e4e737b5
a few fixes
2022-01-16 15:17:17 -07:00
James Betker
9100e7fa9b
Add a diffusion network that takes aligned text instead of MELs
2022-01-15 17:28:02 -07:00
James Betker
009a1e8404
Add a new diffusion_vocoder that should be trainable faster
...
This new one has a "cheating" top layer, that does not feed down into the unet encoder,
but does consume the outputs of the unet. This cheater only operates on half of the input,
while the rest of the unet operates on the full input. This limits the dimensionality of this last
layer, on the assumption that these last layers consume by far the most computation and memory,
but do not require the full input context.
Losses are only computed on half of the aggregate input.
2022-01-11 17:26:07 -07:00
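The commit body above describes the design in prose: a "cheating" top layer that consumes the U-Net's outputs without feeding into its encoder, operates on only half the input, and has losses computed only on that half. A minimal NumPy sketch of that data flow follows; all function and variable names here are illustrative stand-ins, not identifiers from the repository, and the actual layers are of course learned networks rather than the placeholder arithmetic shown.

```python
# Hedged sketch of the "cheating" top-layer idea: the top layer consumes the
# U-Net's output but does not feed into its encoder, runs on only half the
# input, and the loss is computed on that half alone. Names are hypothetical.
import numpy as np

def unet(x):
    # Stand-in for the full U-Net, which sees the whole input.
    return x * 0.5  # placeholder computation

def cheater_top_layer(x_half, unet_out_half):
    # Top layer: consumes the U-Net output, but only over half the input,
    # limiting the dimensionality of this most expensive layer.
    return x_half + unet_out_half

rng = np.random.default_rng(0)
x = rng.standard_normal(16)       # full-resolution input
target = rng.standard_normal(16)

unet_out = unet(x)                # U-Net runs on the full input context
half = x.shape[0] // 2
pred_half = cheater_top_layer(x[:half], unet_out[:half])

# Loss is only computed on half of the aggregate input.
loss = float(np.mean((pred_half - target[:half]) ** 2))
```

The point of the split is that the highest-resolution layers dominate compute and memory, so restricting them to half the samples roughly halves their cost while the cheaper, lower-resolution U-Net still sees the full context.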
James Betker
91f28580e2
fix unified_voice
2022-01-10 16:17:31 -07:00
James Betker
136744dc1d
Fixes
2022-01-10 14:32:04 -07:00
James Betker
ee3dfac2ae
unified_voice2: decouple positional embeddings and token embeddings from underlying gpt model
2022-01-10 08:14:41 -07:00
James Betker
f503d8d96b
Partially implement performers in transformer_builders
2022-01-09 22:35:03 -07:00