James Betker | 546ecd5aeb | music! | 2022-04-15 21:21:37 -06:00
James Betker | 8ea5c307fb | Fixes for training the diffusion model on autoregressive inputs | 2022-04-11 11:02:44 -06:00
James Betker | a3622462c1 | Change latent_conditioner back | 2022-04-11 09:00:13 -06:00
James Betker | 19ca5b26c1 | Remove flat0 and move it into flat | 2022-04-10 21:01:59 -06:00
James Betker | 81c952a00a | undo relative | 2022-04-08 16:32:52 -06:00
James Betker | 944b4c3335 | more undos | 2022-04-08 16:31:08 -06:00
James Betker | 032983e2ed | fix bug and allow position encodings to be trained separately from the rest of the model | 2022-04-08 16:26:01 -06:00
James Betker | 09ab1aa9bc | revert rotary embeddings work | 2022-04-08 16:18:35 -06:00
    I'm not really sure that this is going to work. I'd rather explore re-using what I've already trained.
James Betker | 2fb9ffb0aa | Align autoregressive text using start and stop tokens | 2022-04-08 09:41:59 -06:00
James Betker | e634996a9c | autoregressive_codegen: support key_value caching for faster inference | 2022-04-07 23:08:46 -07:00
James Betker | 7c578eb59b | Fix inference in new autoregressive_codegen | 2022-04-07 21:22:46 -06:00
James Betker | 3f8d7955ef | unified_voice with rotary embeddings | 2022-04-07 20:11:14 -06:00
James Betker | 71b73db044 | clean up | 2022-04-07 11:34:10 -06:00
James Betker | 6fc4f49e86 | some dumb stuff | 2022-04-07 11:32:34 -06:00
James Betker | 305dc95e4b | cg2 | 2022-04-06 21:24:36 -06:00
James Betker | e011166dd6 | autoregressive_codegen r3 | 2022-04-06 21:04:23 -06:00
James Betker | 37bdfe82b2 | Modify x_transformers to do checkpointing and use relative positional biases | 2022-04-06 00:35:29 -06:00
James Betker | cdd12ff46c | Add code validation to autoregressive_codegen | 2022-04-04 09:51:41 -06:00
James Betker | 99de63a922 | man I'm really on it tonight.... | 2022-04-02 22:01:33 -06:00
James Betker | a4bdc80933 | moikmadsf | 2022-04-02 21:59:50 -06:00
James Betker | 1cf20b7337 | sdfds | 2022-04-02 21:58:09 -06:00
James Betker | b6afc4d542 | dsfa | 2022-04-02 21:57:00 -06:00
James Betker | 4c6bdfc9e2 | get rid of relative position embeddings, which do not work with DDP & checkpointing | 2022-04-02 21:55:32 -06:00
James Betker | b6d62aca5d | add inference model on top of codegen | 2022-04-02 21:25:10 -06:00
James Betker | 2b6ff09225 | autoregressive_codegen v1 | 2022-04-02 15:07:39 -06:00
James Betker | 00767219fc | undo latent converter change | 2022-04-01 20:46:27 -06:00
James Betker | 55c86e02c7 | Flat fix | 2022-04-01 19:13:33 -06:00
James Betker | 8623c51902 | fix bug | 2022-04-01 16:11:34 -06:00
James Betker | f6a8b0a5ca | prep flat0 for feeding from autoregressive_latent_converter | 2022-04-01 15:53:45 -06:00
James Betker | 3e97abc8a9 | update flat0 to break out timestep-independent inference steps | 2022-04-01 14:38:53 -06:00
James Betker | a6181a489b | Fix loss gapping caused by poor gradients into mel_pred | 2022-03-26 22:49:14 -06:00
James Betker | 1feade23ff | support x-transformers in text_voice_clip and support relative positional embeddings | 2022-03-26 22:48:10 -06:00
James Betker | 6909f196b4 | make code pred returns optional | 2022-03-26 08:33:30 -06:00
James Betker | 2a29a71c37 | attempt to force meaningful codes by adding a surrogate loss | 2022-03-26 08:31:40 -06:00
James Betker | 45804177b8 | more stuff | 2022-03-25 00:03:18 -06:00
James Betker | d4218d8443 | mods | 2022-03-24 23:31:20 -06:00
James Betker | a15970dd97 | disable checkpointing in conditioning encoder | 2022-03-24 11:49:04 -06:00
James Betker | cc5fc91562 | flat0 work | 2022-03-24 11:46:53 -06:00
James Betker | b0d2827fad | flat0 | 2022-03-24 11:30:40 -06:00
James Betker | 8707a3e0c3 | drop full layers in layerdrop, not half layers | 2022-03-23 17:15:08 -06:00
James Betker | 57da6d0ddf | more simplifications | 2022-03-22 11:46:03 -06:00
James Betker | f3f391b372 | undo sandwich | 2022-03-22 11:43:24 -06:00
James Betker | 927731f3b4 | tts9: fix position embeddings snafu | 2022-03-22 11:41:32 -06:00
James Betker | 536511fc4b | unified_voice: relative position encodings | 2022-03-22 11:41:13 -06:00
James Betker | 5405ce4363 | fix flat | 2022-03-22 11:39:39 -06:00
James Betker | e47a759ed8 | ....... | 2022-03-21 17:22:35 -06:00
James Betker | cc4c9faf9a | resolve more issues | 2022-03-21 17:20:05 -06:00
James Betker | 9e97cd800c | take the conditioning mean rather than the first element | 2022-03-21 16:58:03 -06:00
James Betker | 9c7598dc9a | fix conditioning_free signal | 2022-03-21 15:29:17 -06:00
James Betker | 2a65c982ca | dont double nest checkpointing | 2022-03-21 15:27:51 -06:00