Commit Graph

1052 Commits

Author SHA1 Message Date
James Betker c075fe72e2 import performer repo 2022-01-09 22:10:07 -07:00
James Betker 7de3874f15 Make dalle transformer checkpointable 2022-01-09 19:14:35 -07:00
James Betker 70b17da193 Alter unified_voice to use extensible transformer (still WIP) 2022-01-08 22:18:25 -07:00
James Betker 15d9517e26 Allow bi-directional clipping 2022-01-08 22:18:04 -07:00
James Betker 8bade38180 Add generic CLIP model based off of x_clip 2022-01-08 19:08:01 -07:00
James Betker 438dd9ed33 fix text-voice-clip bug 2022-01-08 08:55:00 -07:00
James Betker 34774f9948 unified_voice: begin decoupling from HF GPT 2022-01-07 22:51:24 -07:00
    I'd like to try some different (newer) transformer variants. The way to get there is softly decoupling the transformer portion of this architecture from GPT. This actually should be fairly easy.
James Betker 68090ac3e9 Finish up the text->voice clip model 2022-01-07 22:28:45 -07:00
James Betker 65ffe38fce misc 2022-01-06 22:16:17 -07:00
James Betker e7a705fe6e Make gpt_asr_hf2 more efficient at inference 2022-01-06 10:27:10 -07:00
James Betker 525addffab Unified: automatically clip inputs according to specified max length to improve inference time 2022-01-06 10:13:45 -07:00
James Betker 61cd351b71 update unified 2022-01-06 09:48:11 -07:00
James Betker 10fd1110be Fix (?) use_gpt_tts for unified_voice 2022-01-05 20:09:31 -07:00
James Betker 3c4301f085 Remove dvae_arch_playground 2022-01-05 17:06:45 -07:00
James Betker a63a17e48f Remove deepspeech models 2022-01-05 17:05:13 -07:00
James Betker c584ba05ee unified_voice improvements 2022-01-05 17:03:53 -07:00
    - Rename max_symbols_per_phrase to max_text_tokens
    - Remove max_total_tokens (no longer necessary)
    - Fix integration with MelEncoder
James Betker 38aba6f88d Another dumdum fix 2022-01-04 15:18:25 -07:00
James Betker 963c6072bb Add mel_encoder and solo embeddings to unified_voice 2022-01-04 15:15:58 -07:00
James Betker 2165124f19 Add GPT documentation 2022-01-01 21:00:07 -07:00
James Betker 2635412291 doh 2022-01-01 14:29:59 -07:00
James Betker d4a6298658 more debugging 2022-01-01 14:25:27 -07:00
James Betker d8111e0477 misc 2022-01-01 14:05:33 -07:00
James Betker dc535b5358 better bounds 2022-01-01 14:05:22 -07:00
James Betker fe9ea4e01a auto-fix text_inputs too big 2022-01-01 13:25:47 -07:00
James Betker bbacffb790 dataset improvements and fix to unified_voice_Bilevel 2022-01-01 00:16:30 -07:00
James Betker eda753e776 Allow conditioning shuffling to be disabled 2021-12-31 23:32:08 -07:00
James Betker 9aa06542cd Further reduce the complexity of the MEL encoder in GptAsrHf 2021-12-30 09:10:40 -07:00
James Betker 5ae7e0d9b0 Fix gapping bug in voice2voice clip 2021-12-29 14:44:46 -07:00
James Betker b12f47b36d Add some noise to voice_voice_clip 2021-12-29 13:56:30 -07:00
James Betker b24a51f0aa Check in speech2speech CLIP inference tool 2021-12-29 00:19:44 -07:00
James Betker c1bef01dfa GptAsrHf2 checkin 2021-12-28 20:48:38 -07:00
James Betker 07c2b9907c Add voice2voice clip model 2021-12-28 16:18:12 -07:00
James Betker a9ee5b624f Simplify and conform gpt_asr_hf2 2021-12-28 11:54:33 -07:00
James Betker a5b4bee719 Improve asr_eval 2021-12-28 11:45:15 -07:00
James Betker 312f631c5b gpt_asr_hf2: remove dual positional embeddings 2021-12-28 10:57:45 -07:00
James Betker a12042ea99 Allow multi-embeddings to be disabled 2021-12-28 09:00:53 -07:00
James Betker a698d3f525 unified_voice: introduce paired embeddings 2021-12-26 15:33:05 -07:00
James Betker 6996dfd9d5 asr_hf2: add independent position embedders 2021-12-26 15:17:24 -07:00
James Betker 5b5cbc057c Work checkpoint for gpt asr hf2 2021-12-26 10:29:12 -07:00
James Betker cd89e6b42e Initialize our embeddings the same way GPT-2 initializes theirs. 2021-12-26 00:20:30 -07:00
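For context on the commit above: GPT-2's reference implementation draws its embedding and linear weights from a zero-mean normal distribution with standard deviation 0.02. A minimal PyTorch sketch of that scheme, illustrative only and not taken from this repository:

    import torch.nn as nn

    def init_gpt2_style(module: nn.Module):
        # GPT-2 draws Embedding/Linear weights from N(0, 0.02) and zeroes Linear biases.
        if isinstance(module, (nn.Embedding, nn.Linear)):
            nn.init.normal_(module.weight, mean=0.0, std=0.02)
        if isinstance(module, nn.Linear) and module.bias is not None:
            nn.init.zeros_(module.bias)

    # Usage: model.apply(init_gpt2_style)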
James Betker 8d01f7685c Get rid of absolute positional embeddings in unifiedvoice 2021-12-26 00:10:24 -07:00
James Betker 6700f8851d moar verbosity 2021-12-25 23:23:21 -07:00
James Betker 8acf3b3097 Better dimensional asserting 2021-12-25 23:18:25 -07:00
James Betker e959541494 Add position embeddings back into unified_voice 2021-12-25 23:10:56 -07:00
    I think this may be the solution to the day's problems.
James Betker ab9cafa572 Make tokenization configs more configurable 2021-12-25 12:17:50 -07:00
James Betker 52410fd9d9 256-bpe tokenizer 2021-12-25 08:52:08 -07:00
James Betker 8e26400ce2 Add inference for unified gpt 2021-12-24 13:27:06 -07:00
James Betker 8b19c37409 UnifiedGptVoice! 2021-12-23 15:20:26 -07:00
James Betker e55d949855 GrandConjoinedDataset 2021-12-23 14:32:33 -07:00
James Betker c737632eae Train and use a bespoke tokenizer 2021-12-22 15:06:14 -07:00
James Betker 66bc60aeff Re-add start_text_token 2021-12-22 14:10:35 -07:00
James Betker a9629f7022 Try out using the GPT tokenizer rather than nv_tacotron 2021-12-22 14:03:18 -07:00
    This results in a significant compression of the text domain; I'm curious what the effect on speech quality will be.
James Betker 7ae7d423af VoiceCLIP model 2021-12-22 13:44:11 -07:00
James Betker 09f7f3e615 Remove obsolete lucidrains DALLE stuff, re-create it in a dedicated folder 2021-12-22 13:44:02 -07:00
James Betker a42b94ab72 gpt_tts_hf inference fixes 2021-12-22 13:22:15 -07:00
James Betker 48e3ee9a5b Shuffle conditioning inputs along the positional axis to reduce fitting on prosody and other positional information 2021-12-20 19:05:56 -07:00
    The mels should still retain some short-range positional information the model can use for tone and frequencies, for example.
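One plausible reading of the shuffling described above, as a minimal sketch with assumed shapes and names (not the repository's code): permute the conditioning mel along its time/positional axis, scrambling long-range prosodic structure while leaving each frame's spectral content intact.

    import torch

    def shuffle_conditioning(cond_mel: torch.Tensor) -> torch.Tensor:
        # cond_mel: (batch, n_mel_channels, time); permute the time axis
        # independently for each batch element.
        b, c, t = cond_mel.shape
        perm = torch.stack([torch.randperm(t, device=cond_mel.device) for _ in range(b)])
        return cond_mel.gather(2, perm.unsqueeze(1).expand(b, c, t))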
James Betker 53858b2055 Fix gpt_tts_hf inference 2021-12-20 17:45:26 -07:00
James Betker 712d746e9b gpt_tts: format conditioning inputs more for contextual voice clues and less for prosody 2021-12-19 17:42:29 -07:00
    Also support single conditioning inputs.
James Betker c813befd53 Remove dedicated positioning embeddings 2021-12-19 09:01:31 -07:00
James Betker b4ddcd7111 More inference improvements 2021-12-19 09:01:19 -07:00
James Betker f9c45d70f0 Fix mel terminator 2021-12-18 17:18:06 -07:00
James Betker 937045cb63 Fixes 2021-12-18 16:45:38 -07:00
James Betker 9b9f7ea61b GptTtsHf: Make the input/target placement easier to reason about 2021-12-17 10:24:14 -07:00
James Betker 2fb4213a3e More lossy fixes 2021-12-17 10:01:42 -07:00
James Betker 9e8a9bf6ca Various fixes to gpt_tts_hf 2021-12-16 23:28:44 -07:00
James Betker 62c8ed9a29 move speech utils 2021-12-16 20:47:37 -07:00
James Betker 4f8c4d130c gpt_tts_hf: pad mel tokens with an <end_of_sequence> token. 2021-12-12 20:04:50 -07:00
James Betker 8917c02a4d gpt_tts_hf inference first pass 2021-12-12 19:51:44 -07:00
James Betker 5a664aa56e misc 2021-12-11 08:17:26 -07:00
James Betker 6ccff3f49f Record codes more often 2021-12-07 09:22:45 -07:00
James Betker d0b2f931bf Add feature to diffusion vocoder where the spectrogram conditioning layers can be re-trained apart from the rest of the model 2021-12-07 09:22:30 -07:00
James Betker 662920bde3 Log codes when simply fetching codebook_indices 2021-12-06 09:21:43 -07:00
James Betker 380a5d5475 gdi.. 2021-12-03 08:53:09 -07:00
James Betker 101a01f744 Fix dvae codes issue 2021-12-02 23:28:36 -07:00
James Betker 07b0124712 GptTtsHf! 2021-12-02 21:48:42 -07:00
James Betker 85542ec547 One last fix for gpt_asr_hf2 2021-12-02 21:19:28 -07:00
James Betker 04454ee63a Add evaluation logic for gpt_asr_hf2 2021-12-02 21:04:36 -07:00
James Betker 5956eb757c ffffff 2021-11-24 00:19:47 -07:00
James Betker f1ed0588e3 another fix 2021-11-24 00:11:21 -07:00
James Betker 7a3c4a4fc6 Fix lr quantizer decode 2021-11-24 00:01:26 -07:00
James Betker 3f6ecfe0db q fix 2021-11-23 23:50:27 -07:00
James Betker d9747fe623 Integrate with lr_quantizer 2021-11-23 19:48:22 -07:00
James Betker 82d0e7720e Add choke to lucidrains_dvae 2021-11-23 18:53:37 -07:00
James Betker 934395d4b8 A few fixes for gpt_asr_hf2 2021-11-23 09:29:29 -07:00
James Betker 01e635168b whoops 2021-11-22 17:24:13 -07:00
James Betker 973f47c525 misc nonfunctional 2021-11-22 17:16:39 -07:00
James Betker 3125ca38f5 Further wandb logs 2021-11-22 16:40:19 -07:00
James Betker 0604060580 Finish up mods for next version of GptAsrHf 2021-11-20 21:33:49 -07:00
James Betker 14f3155ec4 misc 2021-11-20 17:45:14 -07:00
James Betker 555b7e52ad Add rev2 of GptAsrHf 2021-11-18 20:02:24 -07:00
James Betker 1287915f3c Fix dvae test failure 2021-11-18 00:58:36 -07:00
James Betker 019acfa4c5 Allow flat dvae 2021-11-18 00:53:42 -07:00
James Betker f3db41f125 Fix code logging 2021-11-18 00:34:37 -07:00
James Betker 79367f753d Fix error & add nonfinite warning 2021-11-09 23:58:41 -07:00
James Betker c584320cf3 Fix gpt_asr_hf distillation 2021-11-07 21:53:21 -07:00
James Betker a367ea3fda Add script for computing attention for gpt_asr 2021-11-07 18:42:06 -07:00
James Betker 756b4dad09 Working gpt_asr_hf inference - and it's a beast! 2021-11-06 21:47:15 -06:00
James Betker 596a62fe01 Apply fix to gpt_asr_hf and prep it for inference 2021-11-04 10:09:24 -06:00
    The fix: we were predicting two characters in advance rather than the next character.
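The off-by-one described above is the standard next-token alignment for autoregressive models: the logit at position i is scored against the token at position i+1, not i+2. A generic sketch with assumed names (not this repository's code):

    import torch
    import torch.nn.functional as F

    def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # logits: (batch, seq, vocab) computed from `tokens`; tokens: (batch, seq)
        shift_logits = logits[:, :-1, :]   # predictions for positions 1..seq-1
        shift_targets = tokens[:, 1:]      # the tokens those positions should predict
        return F.cross_entropy(shift_logits.reshape(-1, shift_logits.size(-1)),
                               shift_targets.reshape(-1))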
James Betker 993bd52d42 Add spec_augment injector 2021-11-01 18:43:11 -06:00
James Betker 4cff774b0e Reduce complexity of the encoder for gpt_asr_hf 2021-11-01 17:02:28 -06:00
James Betker da55ca0438 gpt_asr using the Hugging Face transformer 2021-11-01 17:00:22 -06:00
James Betker 83cccef9d8 Condition on full signal 2021-10-30 19:58:34 -06:00
James Betker df45a9dec2 Fix inference mode for lucidrains_gpt 2021-10-30 16:59:18 -06:00
James Betker 92fe8b4dd9 ffffpt2 2021-10-29 17:29:49 -06:00
James Betker 95ca88efce Fix feedforward 2021-10-29 17:27:51 -06:00
James Betker b476516340 Check in backing changes (which may have broken something?) 2021-10-29 17:22:33 -06:00
James Betker 986fc9628d Check in GPT with new inference methods (but not the backing code..) 2021-10-29 17:21:40 -06:00
James Betker 58494b0888 Add support for distilling gpt_asr 2021-10-27 13:10:07 -06:00
James Betker 5d714bc566 Add deepspeech model and support for decoding with it 2021-10-27 13:09:46 -06:00
James Betker 3a9d1c53ea Rework conditioning inputs provided 2021-10-26 10:46:33 -06:00
James Betker 43e389aac6 Add time_embed_dim_multiplier 2021-10-26 08:55:55 -06:00
James Betker ba6e46c02a Further simplify diffusion_vocoder and make noise_surfer work 2021-10-26 08:54:30 -06:00
James Betker 0ee1c67ce5 Rework how conditioning inputs are applied to DiffusionVocoder 2021-10-24 09:08:58 -06:00
James Betker 06ea6191a9 Initial implementation of audio_with_noise dataset 2021-10-21 16:45:19 -06:00
James Betker 0dee15f875 base DVAE & vector_quantizer 2021-10-20 21:19:38 -06:00
James Betker f2a31702b5 Clean stuff up, move more things into arch_util 2021-10-20 21:19:25 -06:00
James Betker a6f0f854b9 Fix codes when inferring from dvae 2021-10-17 22:51:17 -06:00
James Betker d016a2fbad Go back to vanilla flavor of diffusion 2021-10-17 17:32:46 -06:00
James Betker 23da073037 Norm decoder outputs now 2021-10-16 09:07:10 -06:00
James Betker 0edc98f6c4 Throw out the idea of conditioning on discrete codes. Oh well :( 2021-10-16 09:02:01 -06:00
James Betker 62c8c5d93e Zero out spectrogram code inputs initially. 2021-10-15 12:10:11 -06:00
James Betker 1d0b44ebc2 More tweaks to diffusion-vocoder 2021-10-15 11:51:17 -06:00
James Betker 3b19581f9a Allow num_resblocks to be specified per-level 2021-10-14 11:26:04 -06:00
James Betker 83798887a8 Mods to support unet diffusion vocoder with conditioning 2021-10-13 21:23:18 -06:00
James Betker 33120cb35c Add norming to discretization_loss 2021-10-06 17:10:50 -06:00
James Betker f2977d360c Allow attention_dim in channel attention to be specified, add converter 2021-10-05 17:29:38 -06:00
James Betker 9c0d7288ea Discretization loss attempt 2021-10-04 20:59:21 -06:00
James Betker 66f99a159c Rev2 2021-10-03 15:20:50 -06:00
James Betker 09f373e3b1 Add dvae with channel attention 2021-10-03 10:52:01 -06:00
James Betker 0396a9d2ca Increase baseline codes recording across all dvae models 2021-09-30 08:09:07 -06:00
James Betker f84ccbdfb2 Fix quantizer with balancing_heuristic 2021-09-29 14:46:05 -06:00
James Betker 4914c526dc More cleanup 2021-09-29 14:24:49 -06:00
James Betker 6e550edfe3 Attentive dvae 2021-09-29 14:17:29 -06:00
James Betker 55b58fb67f Clean up codebase 2021-09-29 09:21:44 -06:00
    Remove stuff that I'm likely not going to use again (or generally failed experiments)
James Betker 4d1a42e944 Add switchnorm to gumbel_quantizer 2021-09-24 18:49:25 -06:00
James Betker ac57cdc794 Add scheduling to quantizer, enable cudnn_benchmarking to be disabled 2021-09-24 17:01:36 -06:00
James Betker 3e64e847c2 Gumbel quantizer 2021-09-23 23:32:03 -06:00
James Betker c5297ccec6 Add dvae balancing heuristic 2021-09-23 21:19:36 -06:00
James Betker e24c619387 Fix 2021-09-23 16:07:58 -06:00
James Betker 6833048bf7 Alterations to diffusion_dvae so it can be used directly on spectrograms 2021-09-23 15:56:25 -06:00
James Betker 5c8d266d4f chk 2021-09-17 09:15:36 -06:00
James Betker a6544f1684 More checkpointing fixes 2021-09-16 23:12:43 -06:00
James Betker 94899d88f3 Fix overuse of checkpointing 2021-09-16 23:00:28 -06:00
James Betker f78ce9d924 Get diffusion_dvae ready for prime time! 2021-09-16 22:43:10 -06:00
James Betker 6f48674647 Support diffusion models with extra return values & inference in diffusion_dvae 2021-09-16 10:53:46 -06:00
James Betker 0382660159 Get diffusion_dvae functional 2021-09-14 17:43:31 -06:00
James Betker 76e2c497f7 Improvements to splitter 2021-09-09 23:34:56 -06:00
James Betker 742f9b4010 Batch spleeter cleaner using GPU 2021-09-09 23:14:32 -06:00
James Betker 73b930c0f6 Add diffusion_dvae 2021-09-09 16:22:05 -06:00
    Increase split_on_silence interval
James Betker b8f2e0f452 mydvae 2021-09-06 17:45:30 -06:00