James Betker
7812c23c7a
revert fill_gaps back to old masking behavior
2022-05-08 00:10:19 -06:00
James Betker
58ed27d7a8
new gap_filler
2022-05-07 12:44:23 -06:00
James Betker
6c8032b4be
more work
2022-05-06 21:56:49 -06:00
James Betker
f541610256
contrastive_audio
2022-05-06 16:37:22 -06:00
James Betker
79543e5488
Simpler form of the wavegen model
2022-05-06 16:37:04 -06:00
James Betker
d8925ccde5
few things with gap filling
2022-05-06 14:33:44 -06:00
James Betker
b13d983c24
and mel_head
2022-05-06 00:25:27 -06:00
James Betker
d5fb79564a
remove mel_pred
2022-05-06 00:24:05 -06:00
James Betker
e9bb692490
fixed aligned_latent
2022-05-06 00:20:21 -06:00
James Betker
1609101a42
musical gap filler
2022-05-05 16:47:08 -06:00
James Betker
d66ab2d28c
Remove unused waveform_gens
2022-05-04 21:06:54 -06:00
James Betker
47662b9ec5
some random crap
2022-05-04 20:29:23 -06:00
James Betker
c42c53e75a
Add a trainable network for converting a normal distribution into a latent space
2022-05-02 09:47:30 -06:00
James Betker
ab219fbefb
output variance
2022-05-02 00:10:33 -06:00
James Betker
3b074aac34
add checkpointing
2022-05-02 00:07:42 -06:00
James Betker
ae5f934ea1
diffwave
2022-05-02 00:05:04 -06:00
James Betker
b712d3b72b
break out get_conditioning_latent from unified_voice
2022-05-01 23:04:44 -06:00
James Betker
afa2df57c9
gen3
2022-04-30 10:41:38 -06:00
James Betker
8aa6651fc7
fix surrogate loss return in waveform_gen2
2022-04-28 10:10:11 -06:00
James Betker
f02b01bd9d
reverse univnet classifier
2022-04-20 21:37:55 -06:00
James Betker
9df85c902e
New gen2
Which is basically an autoencoder with a giant diffusion appendage attached
2022-04-20 21:37:34 -06:00
James Betker
b4549eed9f
uv2 fix
2022-04-20 00:27:38 -06:00
James Betker
24fdafd855
fix2
2022-04-20 00:03:29 -06:00
James Betker
0af0051399
fix
2022-04-20 00:01:57 -06:00
James Betker
419f4d37bd
gen2 music
2022-04-19 23:38:37 -06:00
James Betker
8fe0dff33c
support tts typing
2022-04-16 23:36:57 -06:00
James Betker
48cb6a5abd
misc
2022-04-16 20:28:04 -06:00
James Betker
147478a148
cvvp
2022-04-16 20:27:46 -06:00
James Betker
546ecd5aeb
music!
2022-04-15 21:21:37 -06:00
James Betker
254357724d
gradprop
2022-04-15 09:37:20 -06:00
James Betker
fbf1f4f637
update
2022-04-15 09:34:44 -06:00
James Betker
82aad335ba
add distributed logic for loss
2022-04-15 09:31:48 -06:00
James Betker
efe12cb816
Update clvp to add masking probabilities in conditioning and to support code inputs
2022-04-15 09:11:23 -06:00
James Betker
8ea5c307fb
Fixes for training the diffusion model on autoregressive inputs
2022-04-11 11:02:44 -06:00
James Betker
a3622462c1
Change latent_conditioner back
2022-04-11 09:00:13 -06:00
James Betker
03d0b90bda
fixes
2022-04-10 21:02:12 -06:00
James Betker
19ca5b26c1
Remove flat0 and move it into flat
2022-04-10 21:01:59 -06:00
James Betker
81c952a00a
undo relative
2022-04-08 16:32:52 -06:00
James Betker
944b4c3335
more undos
2022-04-08 16:31:08 -06:00
James Betker
032983e2ed
fix bug and allow position encodings to be trained separately from the rest of the model
2022-04-08 16:26:01 -06:00
James Betker
09ab1aa9bc
revert rotary embeddings work
I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained
2022-04-08 16:18:35 -06:00
James Betker
2fb9ffb0aa
Align autoregressive text using start and stop tokens
2022-04-08 09:41:59 -06:00
James Betker
423293e518
fix xtransformers bug
2022-04-08 09:12:46 -06:00
James Betker
048f6f729a
remove lightweight_gan
2022-04-07 23:12:08 -07:00
James Betker
e634996a9c
autoregressive_codegen: support key_value caching for faster inference
2022-04-07 23:08:46 -07:00
James Betker
d05e162f95
reformat x_transformers
2022-04-07 23:08:03 -07:00
James Betker
7c578eb59b
Fix inference in new autoregressive_codegen
2022-04-07 21:22:46 -06:00
James Betker
3f8d7955ef
unified_voice with rotary embeddings
2022-04-07 20:11:14 -06:00
James Betker
573e5552b9
CLVP v1
2022-04-07 20:10:57 -06:00
James Betker
71b73db044
clean up
2022-04-07 11:34:10 -06:00
James Betker
6fc4f49e86
some dumb stuff
2022-04-07 11:32:34 -06:00
James Betker
305dc95e4b
cg2
2022-04-06 21:24:36 -06:00
James Betker
e011166dd6
autoregressive_codegen r3
2022-04-06 21:04:23 -06:00
James Betker
33ef17e9e5
fix context
2022-04-06 00:45:42 -06:00
James Betker
37bdfe82b2
Modify x_transformers to do checkpointing and use relative positional biases
2022-04-06 00:35:29 -06:00
James Betker
09879b434d
bring in x_transformers
2022-04-06 00:21:58 -06:00
James Betker
cdd12ff46c
Add code validation to autoregressive_codegen
2022-04-04 09:51:41 -06:00
James Betker
99de63a922
man I'm really on it tonight....
2022-04-02 22:01:33 -06:00
James Betker
a4bdc80933
moikmadsf
2022-04-02 21:59:50 -06:00
James Betker
1cf20b7337
sdfds
2022-04-02 21:58:09 -06:00
James Betker
b6afc4d542
dsfa
2022-04-02 21:57:00 -06:00
James Betker
4c6bdfc9e2
get rid of relative position embeddings, which do not work with DDP & checkpointing
2022-04-02 21:55:32 -06:00
James Betker
b6d62aca5d
add inference model on top of codegen
2022-04-02 21:25:10 -06:00
James Betker
2b6ff09225
autoregressive_codegen v1
2022-04-02 15:07:39 -06:00
James Betker
00767219fc
undo latent converter change
2022-04-01 20:46:27 -06:00
James Betker
55c86e02c7
Flat fix
2022-04-01 19:13:33 -06:00
James Betker
8623c51902
fix bug
2022-04-01 16:11:34 -06:00
James Betker
f6a8b0a5ca
prep flat0 for feeding from autoregressive_latent_converter
2022-04-01 15:53:45 -06:00
James Betker
3e97abc8a9
update flat0 to break out timestep-independent inference steps
2022-04-01 14:38:53 -06:00
James Betker
a6181a489b
Fix loss gapping caused by poor gradients into mel_pred
2022-03-26 22:49:14 -06:00
James Betker
1feade23ff
support x-transformers in text_voice_clip and support relative positional embeddings
2022-03-26 22:48:10 -06:00
James Betker
6909f196b4
make code pred returns optional
2022-03-26 08:33:30 -06:00
James Betker
2a29a71c37
attempt to force meaningful codes by adding a surrogate loss
2022-03-26 08:31:40 -06:00
James Betker
45804177b8
more stuff
2022-03-25 00:03:18 -06:00
James Betker
d4218d8443
mods
2022-03-24 23:31:20 -06:00
James Betker
a15970dd97
disable checkpointing in conditioning encoder
2022-03-24 11:49:04 -06:00
James Betker
cc5fc91562
flat0 work
2022-03-24 11:46:53 -06:00
James Betker
b0d2827fad
flat0
2022-03-24 11:30:40 -06:00
James Betker
8707a3e0c3
drop full layers in layerdrop, not half layers
2022-03-23 17:15:08 -06:00
James Betker
57da6d0ddf
more simplifications
2022-03-22 11:46:03 -06:00
James Betker
f3f391b372
undo sandwich
2022-03-22 11:43:24 -06:00
James Betker
927731f3b4
tts9: fix position embeddings snafu
2022-03-22 11:41:32 -06:00
James Betker
536511fc4b
unified_voice: relative position encodings
2022-03-22 11:41:13 -06:00
James Betker
5405ce4363
fix flat
2022-03-22 11:39:39 -06:00
James Betker
e47a759ed8
.......
2022-03-21 17:22:35 -06:00
James Betker
cc4c9faf9a
resolve more issues
2022-03-21 17:20:05 -06:00
James Betker
9e97cd800c
take the conditioning mean rather than the first element
2022-03-21 16:58:03 -06:00
James Betker
9c7598dc9a
fix conditioning_free signal
2022-03-21 15:29:17 -06:00
James Betker
2a65c982ca
dont double nest checkpointing
2022-03-21 15:27:51 -06:00
James Betker
723f324eda
Make it even better
2022-03-21 14:50:59 -06:00
James Betker
e735d8e1fa
unified_voice fixes
2022-03-21 14:44:00 -06:00
James Betker
1ad18d29a8
Flat fixes
2022-03-21 14:43:52 -06:00
James Betker
26dcf7f1a2
r2 of the flat diffusion
2022-03-21 11:40:43 -06:00
James Betker
c14fc003ed
flat diffusion
2022-03-17 17:45:27 -06:00
James Betker
428911cd4d
flat diffusion network
2022-03-17 10:53:56 -06:00
James Betker
bf08519d71
fixes
2022-03-17 10:53:39 -06:00
James Betker
95ea0a592f
More cleaning
2022-03-16 12:05:56 -06:00
James Betker
d186414566
More spring cleaning
2022-03-16 12:04:00 -06:00
James Betker
8b376e63d9
More improvements
2022-03-16 10:16:34 -06:00
James Betker
0fc877cbc8
tts9 fix for alignment size
2022-03-15 21:43:14 -06:00
James Betker
f563a8dd41
fixes
2022-03-15 21:43:00 -06:00
James Betker
b754058018
Update wav2vec2 wrapper
2022-03-15 11:35:38 -06:00
James Betker
9c6f776980
Add univnet vocoder
2022-03-15 11:34:51 -06:00
James Betker
7929fd89de
Refactor audio-style models into the audio folder
2022-03-15 11:06:25 -06:00
James Betker
f95d3d2b82
move waveglow to audio/vocoders
2022-03-15 11:03:07 -06:00
James Betker
bb03cbb9fc
composable initial checkin
2022-03-15 10:35:40 -06:00
James Betker
86b0d76fb9
tts8 (incomplete, may be removed)
2022-03-15 10:35:31 -06:00
James Betker
eecbc0e678
Use wider spectrogram when asked
2022-03-15 10:35:11 -06:00
James Betker
9767260c6c
tacotron stft - loosen bounds restrictions and clip
2022-03-15 10:31:26 -06:00
James Betker
f8631ad4f7
Updates to support inputting MELs into the conditioning encoder
2022-03-14 17:31:42 -06:00
James Betker
22c67ce8d3
tts9 mods
2022-03-13 10:25:55 -06:00
James Betker
8f130e2b3f
add scale_shift_norm back to tts9
2022-03-12 20:42:13 -07:00
James Betker
73bfd4a86d
another tts9 update
2022-03-12 15:17:06 -07:00
James Betker
0523777ff7
add efficient config to tts9
2022-03-12 15:10:35 -07:00
James Betker
1e87b934db
potentially average conditioning inputs
2022-03-10 20:37:41 -07:00
James Betker
e6a95f7c11
Update tts9: Remove torchscript provisions and add mechanism to train solely on codes
2022-03-09 09:43:38 -07:00
James Betker
f56edb2122
minicoder with classifier head: spread out probability mass for 0 predictions
2022-03-08 15:51:31 -07:00
James Betker
29b2921222
move diffusion vocoder
2022-03-08 15:51:05 -07:00
James Betker
94222b0216
tts9 initial commit
2022-03-08 15:50:45 -07:00
James Betker
d1dc8dbb35
Support tts9
2022-03-05 20:14:36 -07:00
James Betker
79e5692388
Fix distributed bug
2022-03-04 11:58:53 -07:00
James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
77c18b53b3
Cap grad booster
2022-03-04 10:40:24 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
James Betker
58019a2ce3
audio diffusion fid updates
2022-03-03 21:53:32 -07:00
James Betker
998c53ad4f
w2v_matcher mods
2022-03-03 21:52:51 -07:00
James Betker
9029e4f20c
Add a base-wrapper
2022-03-03 21:52:28 -07:00
James Betker
6873ad6660
Support functionality
2022-03-03 21:52:16 -07:00
James Betker
6af5d129ce
Add experimental gradient boosting into tts7
2022-03-03 21:51:40 -07:00
James Betker
7ea84f1ac3
asdf
2022-03-03 13:43:44 -07:00
James Betker
3cd6c7f428
Get rid of unused codes in vq
2022-03-03 13:41:38 -07:00
James Betker
619da9ea28
Get rid of discretization loss
2022-03-03 13:36:25 -07:00
James Betker
beb7c8a39d
asdf
2022-03-01 21:41:31 -07:00
James Betker
70fa780edb
Add mechanism to export grad norms
2022-03-01 20:19:52 -07:00
James Betker
d9f8f92840
Codified fp16
2022-03-01 15:46:04 -07:00
James Betker
45ab444c04
Rework minicoder to always checkpoint
2022-03-01 14:09:18 -07:00
James Betker
db0c3340ac
Implement guidance-free diffusion in eval
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516
Implement conditioning-free diffusion at the eval level
2022-02-27 15:11:42 -07:00
James Betker
436fe24822
Add conditioning-free guidance
2022-02-27 15:00:06 -07:00
James Betker
ac920798bb
misc
2022-02-27 14:49:11 -07:00
James Betker
dbc74e96b2
w2v_matcher
2022-02-27 14:48:23 -07:00
James Betker
42879d7296
w2v_wrapper ramping dropout mode
this is an experimental feature that needs some testing
2022-02-27 14:47:51 -07:00
James Betker
c375287db9
Re-instate autocasting
2022-02-25 11:06:18 -07:00
James Betker
34ee32a90e
get rid of autocasting in tts7
2022-02-24 21:53:51 -07:00
James Betker
ea500ad42a
Use clustered masking in udtts7
2022-02-24 07:57:26 -07:00
James Betker
7201b4500c
default text_to_sequence cleaners
2022-02-21 19:14:22 -07:00
James Betker
ba7f54c162
w2v: new inference function
2022-02-21 19:13:03 -07:00
James Betker
38802a96c8
remove timesteps from cond calculation
2022-02-21 12:32:21 -07:00
James Betker
668876799d
unet_diffusion_tts7
2022-02-20 15:22:38 -07:00
James Betker
0872e17e60
unified_voice mods
2022-02-19 20:37:35 -07:00
James Betker
7b12799370
Reformat mel_text_clip for use in eval
2022-02-19 20:37:26 -07:00
James Betker
baf7b65566
Attempt to make w2v play with DDP AND checkpointing
2022-02-18 18:47:11 -07:00
James Betker
f3776f1992
reset ctc loss from "mean" to "sum"
2022-02-17 22:00:58 -07:00
James Betker
2b20da679c
make spec_augment a parameter
2022-02-17 20:22:05 -07:00
James Betker
e1d71e1bd5
w2v_wrapper: get rid of ctc attention mask
2022-02-15 20:54:40 -07:00
James Betker
79e8f36d30
Convert CLIP models into new folder
2022-02-15 20:53:07 -07:00
James Betker
2bdb515068
A few mods to make wav2vec2 trainable with DDP on DLAS
2022-02-15 06:28:54 -07:00
James Betker
52b61b9f77
Update scripts and attempt to figure out how UnifiedVoice could be used to produce CTC codes
2022-02-13 20:48:06 -07:00
James Betker
a4f1641eea
Add & refine WER evaluator for w2v
2022-02-13 20:47:29 -07:00
James Betker
29534180b2
w2v fine tuner
2022-02-12 20:00:59 -07:00
James Betker
3252972057
ctc_code_gen mods
2022-02-12 19:59:54 -07:00
James Betker
302ac8652d
Undo mask during training
2022-02-11 09:35:12 -07:00
James Betker
618a20412a
new rev of ctc_code_gen with surrogate LM loss
2022-02-10 23:09:57 -07:00
James Betker
820a29f81e
ctc code gen mods
2022-02-10 09:44:01 -07:00
James Betker
ac9417b956
ctc_code_gen: mask out all padding tokens
2022-02-09 17:26:30 -07:00
James Betker
ddb77ef502
ctc_code_gen: use a mean() on the ConditioningEncoder
2022-02-09 14:26:44 -07:00
James Betker
9e9ae328f2
mild updates
2022-02-08 23:51:17 -07:00
James Betker
ff35d13b99
Use non-uniform noise in diffusion_tts6
2022-02-08 07:27:41 -07:00
James Betker
34fbb78671
Straight CtcCodeGenerator as an encoder
2022-02-07 15:46:46 -07:00
James Betker
65a546c4d7
Fix for tts6
2022-02-05 16:00:14 -07:00
James Betker
5ae816bead
ctc gen checkin
2022-02-05 15:59:53 -07:00
James Betker
bb3d1ab03d
More cleanup
2022-02-04 11:06:17 -07:00
James Betker
5cc342de66
Clean up
2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab
add an autoregressive ctc code generator
2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344
Update SR model
2022-02-03 21:42:53 -07:00
James Betker
bc506d4bcd
Mods to unet_diffusion_tts6 to support super resolution mode
2022-02-03 19:59:39 -07:00
James Betker
4249681c4b
Mods to support an autoregressive CTC code generator
2022-02-03 19:58:54 -07:00
James Betker
8132766d38
tts6
2022-01-31 20:15:06 -07:00
James Betker
fbea6e8eac
Adjustments to diffusion networks
2022-01-30 16:14:06 -07:00
James Betker
e58dab14c3
new diffusion updates from testing
2022-01-29 11:01:01 -07:00
James Betker
935a4e853e
get rid of nil tokens in <2>
2022-01-27 22:45:57 -07:00
James Betker
a77d376ad2
rename unet diffusion tts and add 3
2022-01-27 19:56:24 -07:00
James Betker
8c255811ad
more fixes
2022-01-25 17:57:16 -07:00
James Betker
0f3ca28e39
Allow diffusion model to be trained with masking tokens
2022-01-25 14:26:21 -07:00
James Betker
d18aec793a
Revert "(re) attempt diffusion checkpointing logic"
This reverts commit b22eec8fe3.
2022-01-22 09:14:50 -07:00
James Betker
b22eec8fe3
(re) attempt diffusion checkpointing logic
2022-01-22 08:34:40 -07:00
James Betker
8f48848f91
misc
2022-01-22 08:23:29 -07:00
James Betker
851070075a
text<->cond clip
I need that universal clip..
2022-01-22 08:23:14 -07:00
James Betker
8ada52ccdc
Update LR layers to checkpoint better
2022-01-22 08:22:57 -07:00
James Betker
8e2439f50d
Decrease resolution requirements to 2048
2022-01-20 11:27:49 -07:00
James Betker
4af8525dc3
Adjust diffusion vocoder to allow training individual levels
2022-01-19 13:37:59 -07:00
James Betker
ac13bfefe8
use_diffuse_tts
2022-01-19 00:35:24 -07:00
James Betker
bcd8cc51e1
Enable collated data for diffusion purposes
2022-01-19 00:35:08 -07:00
James Betker
dc9cd8c206
Update use_gpt_tts to be usable with unified_voice2
2022-01-18 21:14:17 -07:00
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
37e4e737b5
a few fixes
2022-01-16 15:17:17 -07:00
James Betker
9100e7fa9b
Add a diffusion network that takes aligned text instead of MELs
2022-01-15 17:28:02 -07:00
James Betker
009a1e8404
Add a new diffusion_vocoder that should be trainable faster
...
This new one has a "cheating" top layer, that does not feed down into the unet encoder,
but does consume the outputs of the unet. This cheater only operates on half of the input,
while the rest of the unet operates on the full input. This limits the dimensionality of this last
layer, on the assumption that these last layers consume by far the most computation and memory,
but do not require the full input context.
Losses are only computed on half of the aggregate input.
2022-01-11 17:26:07 -07:00
James Betker
91f28580e2
fix unified_voice
2022-01-10 16:17:31 -07:00
James Betker
136744dc1d
Fixes
2022-01-10 14:32:04 -07:00
James Betker
ee3dfac2ae
unified_voice2: decouple positional embeddings and token embeddings from underlying gpt model
2022-01-10 08:14:41 -07:00
James Betker
f503d8d96b
Partially implement performers in transformer_builders
2022-01-09 22:35:03 -07:00
James Betker
ec456b6733
Revert unified_voice back to beginning
I'll be doing my work within unified_voice2
2022-01-09 22:34:30 -07:00
James Betker
432073c5ca
Make performer code functional
2022-01-09 22:32:50 -07:00
James Betker
f474a7ac65
unified_voice2
2022-01-09 22:32:34 -07:00
James Betker
c075fe72e2
import performer repo
2022-01-09 22:10:07 -07:00
James Betker
7de3874f15
Make dalle transformer checkpointable
2022-01-09 19:14:35 -07:00
James Betker
70b17da193
Alter unified_voice to use extensible transformer (still WIP)
2022-01-08 22:18:25 -07:00
James Betker
15d9517e26
Allow bi-directional clipping
2022-01-08 22:18:04 -07:00
James Betker
8bade38180
Add generic CLIP model based off of x_clip
2022-01-08 19:08:01 -07:00
James Betker
438dd9ed33
fix text-voice-clip bug
2022-01-08 08:55:00 -07:00
James Betker
34774f9948
unified_voice: begin decoupling from HF GPT
I'd like to try some different (newer) transformer variants. The way to get
there is softly decoupling the transformer portion of this architecture
from GPT. This actually should be fairly easy.
2022-01-07 22:51:24 -07:00
James Betker
68090ac3e9
Finish up the text->voice clip model
2022-01-07 22:28:45 -07:00
James Betker
65ffe38fce
misc
2022-01-06 22:16:17 -07:00
James Betker
e7a705fe6e
Make gpt_asr_hf2 more efficient at inference
2022-01-06 10:27:10 -07:00
James Betker
525addffab
Unified: automatically clip inputs according to specified max length to improve inference time
2022-01-06 10:13:45 -07:00
James Betker
61cd351b71
update unified
2022-01-06 09:48:11 -07:00
James Betker
10fd1110be
Fix (?) use_gpt_tts for unified_voice
2022-01-05 20:09:31 -07:00
James Betker
3c4301f085
Remove dvae_arch_playground
2022-01-05 17:06:45 -07:00
James Betker
a63a17e48f
Remove deepspeech models
2022-01-05 17:05:13 -07:00
James Betker
c584ba05ee
unified_voice improvements
- Rename max_symbols_per_phrase to max_text_tokens
- Remove max_total_tokens (no longer necessary)
- Fix integration with MelEncoder
2022-01-05 17:03:53 -07:00
James Betker
38aba6f88d
Another dumdum fix
2022-01-04 15:18:25 -07:00
James Betker
963c6072bb
Add mel_encoder and solo embeddings to unified_voice
2022-01-04 15:15:58 -07:00
James Betker
2165124f19
Add GPT documentation
2022-01-01 21:00:07 -07:00
James Betker
2635412291
doh
2022-01-01 14:29:59 -07:00
James Betker
d4a6298658
more debugging
2022-01-01 14:25:27 -07:00
James Betker
d8111e0477
misc
2022-01-01 14:05:33 -07:00
James Betker
dc535b5358
better bounds
2022-01-01 14:05:22 -07:00
James Betker
fe9ea4e01a
auto-fix text_inputs too big
2022-01-01 13:25:47 -07:00
James Betker
bbacffb790
dataset improvements and fix to unified_voice_Bilevel
2022-01-01 00:16:30 -07:00
James Betker
eda753e776
Allow conditioning shuffling to be disabled
2021-12-31 23:32:08 -07:00
James Betker
9aa06542cd
Further reduce the complexity of the MEL encoder in GptAsrHf
2021-12-30 09:10:40 -07:00
James Betker
5ae7e0d9b0
Fix gapping bug in voice2voice clip
2021-12-29 14:44:46 -07:00
James Betker
b12f47b36d
Add some noise to voice_voice_clip
2021-12-29 13:56:30 -07:00
James Betker
b24a51f0aa
Check in speech2speech CLIP inference tool
2021-12-29 00:19:44 -07:00
James Betker
c1bef01dfa
GptAsrHf2 checkin
2021-12-28 20:48:38 -07:00
James Betker
07c2b9907c
Add voice2voice clip model
2021-12-28 16:18:12 -07:00
James Betker
a9ee5b624f
Simplify and conform gpt_asr_hf2
2021-12-28 11:54:33 -07:00
James Betker
a5b4bee719
Improve asr_eval
2021-12-28 11:45:15 -07:00
James Betker
312f631c5b
gpt_asr_hf2: remove dual positional embeddings
2021-12-28 10:57:45 -07:00
James Betker
a12042ea99
Allow multi-embeddings to be disabled
2021-12-28 09:00:53 -07:00
James Betker
a698d3f525
unified_voice: introduce paired embeddings
2021-12-26 15:33:05 -07:00
James Betker
6996dfd9d5
asr_hf2: add independent position embedders
2021-12-26 15:17:24 -07:00
James Betker
5b5cbc057c
Work checkpoint for gpt asr hf2
2021-12-26 10:29:12 -07:00
James Betker
cd89e6b42e
Initialize our embeddings the same way GPT-2 initializes theirs.
2021-12-26 00:20:30 -07:00
James Betker
8d01f7685c
Get rid of absolute positional embeddings in unifiedvoice
2021-12-26 00:10:24 -07:00
James Betker
6700f8851d
moar verbosity
2021-12-25 23:23:21 -07:00
James Betker
8acf3b3097
Better dimensional asserting
2021-12-25 23:18:25 -07:00
James Betker
e959541494
Add position embeddings back into unified_voice
I think this may be the solution behind the day's problems.
2021-12-25 23:10:56 -07:00
James Betker
ab9cafa572
Make tokenization configs more configurable
2021-12-25 12:17:50 -07:00