Commit Graph

1446 Commits

Author SHA1 Message Date
James Betker
dc535b5358 better bounds 2022-01-01 14:05:22 -07:00
James Betker
fe9ea4e01a auto-fix text_inputs too big 2022-01-01 13:25:47 -07:00
James Betker
35abefd038 More fix 2022-01-01 10:31:03 -07:00
James Betker
d5a5111890 Fix collating on by default on grand_conjoined 2022-01-01 10:30:15 -07:00
James Betker
4d9ba4a48a can i has fix now 2022-01-01 00:48:27 -07:00
James Betker
56752f1dbc Fix collator bug 2022-01-01 00:33:31 -07:00
James Betker
c28d8770c7 fix tensor lengths 2022-01-01 00:23:46 -07:00
James Betker
bbacffb790 dataset improvements and fix to unified_voice_Bilevel 2022-01-01 00:16:30 -07:00
James Betker
eda753e776 Allow conditioning shuffling to be disabled 2021-12-31 23:32:08 -07:00
James Betker
17fb934575 wer update 2021-12-31 16:21:39 -07:00
James Betker
f0c4cd6317 Taking another stab at a BPE tokenizer 2021-12-30 13:41:24 -07:00
James Betker
9aa06542cd Further reduce the complexity of the MEL encoder in GptAsrHf 2021-12-30 09:10:40 -07:00
James Betker
f2cd6a7f08 For loading conditional clips, default to falling back to loading the clip itself 2021-12-30 09:10:14 -07:00
James Betker
5ae7e0d9b0 Fix gapping bug in voice2voice clip 2021-12-29 14:44:46 -07:00
James Betker
51ce1b5007 Add conditioning clips features to grand_conjoined 2021-12-29 14:44:32 -07:00
James Betker
b12f47b36d Add some noise to voice_voice_clip 2021-12-29 13:56:30 -07:00
James Betker
c6ef0eef0b asdf 2021-12-29 10:07:39 -07:00
James Betker
53784ec806 grand conjoined dataset: support collating 2021-12-29 09:44:37 -07:00
James Betker
8a02ba5935 Transit s2s clips back to CPU memory after processing 2021-12-29 08:54:07 -07:00
James Betker
af6d5cd526 Add resume into speech-speech 2021-12-29 08:50:49 -07:00
James Betker
0e4bcc33ab Additional debugging 2021-12-29 00:23:27 -07:00
James Betker
b24a51f0aa Check in speech2speech CLIP inference tool 2021-12-29 00:19:44 -07:00
James Betker
c1bef01dfa GptAsrHf2 checkin 2021-12-28 20:48:38 -07:00
James Betker
07c2b9907c Add voice2voice clip model 2021-12-28 16:18:12 -07:00
James Betker
a9ee5b624f Simplify and conform gpt_asr_hf2 2021-12-28 11:54:33 -07:00
James Betker
a5b4bee719 Improve asr_eval 2021-12-28 11:45:15 -07:00
James Betker
312f631c5b gpt_asr_hf2: remove dual positional embeddings 2021-12-28 10:57:45 -07:00
James Betker
93624fa4b2 Don't use tqdm in ranks!=0 2021-12-28 10:06:54 -07:00
James Betker
a12042ea99 Allow multi-embeddings to be disabled 2021-12-28 09:00:53 -07:00
James Betker
4a32949b0e update inference mode for unified 2021-12-26 15:33:21 -07:00
James Betker
a698d3f525 unified_voice: introduce paired embeddings 2021-12-26 15:33:05 -07:00
James Betker
6996dfd9d5 asr_hf2: add independent position embedders 2021-12-26 15:17:24 -07:00
James Betker
5b5cbc057c Work checkpoint for gpt asr hf2 2021-12-26 10:29:12 -07:00
James Betker
cd89e6b42e Initialize our embeddings the same way GPT-2 initializes theirs. 2021-12-26 00:20:30 -07:00
James Betker
8d01f7685c Get rid of absolute positional embeddings in unifiedvoice 2021-12-26 00:10:24 -07:00
James Betker
6700f8851d moar verbosity 2021-12-25 23:23:21 -07:00
James Betker
8acf3b3097 Better dimensional asserting 2021-12-25 23:18:25 -07:00
James Betker
e959541494 Add position embeddings back into unified_voice
I think this may be the solution to the day's problems.
2021-12-25 23:10:56 -07:00
James Betker
64cb4a92db Support adamw_zero 2021-12-25 21:32:01 -07:00
James Betker
776a7abfcc Support torch DDP _set_static_graph 2021-12-25 21:20:06 -07:00
James Betker
746392f35c Fix DS 2021-12-25 15:28:59 -07:00
James Betker
736c2626ee build in character tokenizer 2021-12-25 15:21:01 -07:00
James Betker
b595c62893 One way decoder for decoding from mel codes 2021-12-25 12:18:00 -07:00
James Betker
ab9cafa572 Make tokenization configs more configurable 2021-12-25 12:17:50 -07:00
James Betker
52410fd9d9 256-bpe tokenizer 2021-12-25 08:52:08 -07:00
James Betker
8e26400ce2 Add inference for unified gpt 2021-12-24 13:27:06 -07:00
James Betker
ead2a74bf0 Add debug_failures flag 2021-12-23 16:12:16 -07:00
James Betker
9677f7084c dataset mod 2021-12-23 15:21:30 -07:00
James Betker
8b19c37409 UnifiedGptVoice! 2021-12-23 15:20:26 -07:00
James Betker
5bc9772cb0 grand: support validation mode 2021-12-23 15:03:20 -07:00
James Betker
e55d949855 GrandConjoinedDataset 2021-12-23 14:32:33 -07:00
James Betker
b9de8a8eda More fixes 2021-12-22 19:21:29 -07:00
James Betker
191e0130ee Another fix 2021-12-22 18:30:50 -07:00
James Betker
6c6daa5795 Build a bigger, better tokenizer 2021-12-22 17:46:18 -07:00
James Betker
c737632eae Train and use a bespoke tokenizer 2021-12-22 15:06:14 -07:00
James Betker
66bc60aeff Re-add start_text_token 2021-12-22 14:10:35 -07:00
James Betker
a9629f7022 Try out using the GPT tokenizer rather than nv_tacotron
This results in a significant compression of the text domain; I'm curious what the
effect on speech quality will be.
2021-12-22 14:03:18 -07:00
James Betker
ced81a760b restore nv_tacotron 2021-12-22 13:48:53 -07:00
James Betker
7bf4f9f580 duplicate nvtacotron 2021-12-22 13:48:30 -07:00
James Betker
7ae7d423af VoiceCLIP model 2021-12-22 13:44:11 -07:00
James Betker
09f7f3e615 Remove obsolete lucidrains DALLE stuff, re-create it in a dedicated folder 2021-12-22 13:44:02 -07:00
James Betker
a42b94ab72 gpt_tts_hf inference fixes 2021-12-22 13:22:15 -07:00
James Betker
48e3ee9a5b Shuffle conditioning inputs along the positional axis to reduce fitting on prosody and other positional information
The mels should still retain some short-range positional information the model can use
for tone and frequencies, for example.
2021-12-20 19:05:56 -07:00
James Betker
53858b2055 Fix gpt_tts_hf inference 2021-12-20 17:45:26 -07:00
James Betker
712d746e9b gpt_tts: format conditioning inputs more for contextual voice clues and less for prosody
Also support single conditional inputs.
2021-12-19 17:42:29 -07:00
James Betker
c813befd53 Remove dedicated positioning embeddings 2021-12-19 09:01:31 -07:00
James Betker
b4ddcd7111 More inference improvements 2021-12-19 09:01:19 -07:00
James Betker
f9c45d70f0 Fix mel terminator 2021-12-18 17:18:06 -07:00
James Betker
937045cb63 Fixes 2021-12-18 16:45:38 -07:00
James Betker
9b9f7ea61b GptTtsHf: Make the input/target placement easier to reason about 2021-12-17 10:24:14 -07:00
James Betker
2fb4213a3e More lossy fixes 2021-12-17 10:01:42 -07:00
James Betker
dee34f096c Add use_gpt_tts script 2021-12-16 23:28:54 -07:00
James Betker
9e8a9bf6ca Various fixes to gpt_tts_hf 2021-12-16 23:28:44 -07:00
James Betker
62c8ed9a29 move speech utils 2021-12-16 20:47:37 -07:00
James Betker
e7957e4897 Make loss accumulator for logs accumulate better 2021-12-12 22:23:17 -07:00
James Betker
4f8c4d130c gpt_tts_hf: pad mel tokens with an <end_of_sequence> token. 2021-12-12 20:04:50 -07:00
James Betker
76f86c0e47 gaussian_diffusion: support fp16 2021-12-12 19:52:21 -07:00
James Betker
aa7cfd1edf Add support for mel norms across the channel dim 2021-12-12 19:52:08 -07:00
James Betker
8917c02a4d gpt_tts_hf inference first pass 2021-12-12 19:51:44 -07:00
James Betker
63bf135b93 Support norms 2021-12-11 08:30:49 -07:00
James Betker
959979086d fix 2021-12-11 08:18:00 -07:00
James Betker
5a664aa56e misc 2021-12-11 08:17:26 -07:00
James Betker
d610540ce5 mel norm computation script 2021-12-11 08:16:50 -07:00
James Betker
306274245b Also do dynamic range compression across mel 2021-12-10 20:06:24 -07:00
James Betker
faf55684b8 Use slaney norm in the mel filterbank computation 2021-12-10 20:04:52 -07:00
James Betker
b2d8fbcfc0 build a better speech synthesis toolset 2021-12-09 22:59:56 -07:00
James Betker
32cfcf3684 Turn off optimization in find_faulty_files 2021-12-09 09:02:09 -07:00
James Betker
a66a2bf91b Update find_faulty_files 2021-12-09 09:00:00 -07:00
James Betker
9191201f05 asd 2021-12-07 09:55:39 -07:00
James Betker
ef15a39841 fix gdi bug? 2021-12-07 09:53:48 -07:00
James Betker
6ccff3f49f Record codes more often 2021-12-07 09:22:45 -07:00
James Betker
d0b2f931bf Add feature to diffusion vocoder where the spectrogram conditioning layers can be re-trained apart from the rest of the model 2021-12-07 09:22:30 -07:00
James Betker
662920bde3 Log codes when simply fetching codebook_indices 2021-12-06 09:21:43 -07:00
James Betker
380a5d5475 gdi.. 2021-12-03 08:53:09 -07:00
James Betker
101a01f744 Fix dvae codes issue 2021-12-02 23:28:36 -07:00
James Betker
31fc693a8a dafsdf 2021-12-02 22:55:36 -07:00
James Betker
040d998922 maasd 2021-12-02 22:53:48 -07:00
James Betker
cc10e7e7e8 Add tsv loader 2021-12-02 22:43:07 -07:00
James Betker
702607556d nv_tacotron_dataset: allow it to load conditioning signals 2021-12-02 22:14:44 -07:00
James Betker
07b0124712 GptTtsHf! 2021-12-02 21:48:42 -07:00
James Betker
85542ec547 One last fix for gpt_asr_hf2 2021-12-02 21:19:28 -07:00
James Betker
68e9db12b5 Add interleaving and direct injectors 2021-12-02 21:04:49 -07:00
James Betker
04454ee63a Add evaluation logic for gpt_asr_hf2 2021-12-02 21:04:36 -07:00
James Betker
47fe032a3d Try to make diffusion validator more reproducible 2021-11-24 09:38:10 -07:00
James Betker
5956eb757c ffffff 2021-11-24 00:19:47 -07:00
James Betker
f1ed0588e3 another fix 2021-11-24 00:11:21 -07:00
James Betker
7a3c4a4fc6 Fix lr quantizer decode 2021-11-24 00:01:26 -07:00
James Betker
3f6ecfe0db q fix 2021-11-23 23:50:27 -07:00
James Betker
d9747fe623 Integrate with lr_quantizer 2021-11-23 19:48:22 -07:00
James Betker
82d0e7720e Add choke to lucidrains_dvae 2021-11-23 18:53:37 -07:00
James Betker
934395d4b8 A few fixes for gpt_asr_hf2 2021-11-23 09:29:29 -07:00
James Betker
3b5c3d85d8 Allow specification of wandb run name 2021-11-22 17:31:29 -07:00
James Betker
01e635168b whoops 2021-11-22 17:24:13 -07:00
James Betker
973f47c525 misc nonfunctional 2021-11-22 17:16:39 -07:00
James Betker
3125ca38f5 Further wandb logs 2021-11-22 16:40:19 -07:00
James Betker
19c80bf7a7 Improve wandb logging 2021-11-22 16:40:05 -07:00
James Betker
0604060580 Finish up mods for next version of GptAsrHf 2021-11-20 21:33:49 -07:00
James Betker
14f3155ec4 misc 2021-11-20 17:45:14 -07:00
James Betker
687e0746b3 Add Torch-derived MelSpectrogramInjector 2021-11-18 20:02:45 -07:00
James Betker
555b7e52ad Add rev2 of GptAsrHf 2021-11-18 20:02:24 -07:00
James Betker
c30a38cdf1 Undo baseline GDI changes 2021-11-18 20:02:09 -07:00
James Betker
1287915f3c Fix dvae test failure 2021-11-18 00:58:36 -07:00
James Betker
019acfa4c5 Allow flat dvae 2021-11-18 00:53:42 -07:00
James Betker
f3db41f125 Fix code logging 2021-11-18 00:34:37 -07:00
James Betker
f36bab95dd Audio resample injector 2021-11-10 20:06:33 -07:00
James Betker
79367f753d Fix error & add nonfinite warning 2021-11-09 23:58:41 -07:00
James Betker
5d5558893a Merge remote-tracking branch 'origin/master' 2021-11-08 20:10:49 -07:00
James Betker
d43f25cc20 Update losses 2021-11-08 20:10:07 -07:00
James Betker
c584320cf3 Fix gpt_asr_hf distillation 2021-11-07 21:53:21 -07:00
James Betker
9b3c3b1227 use sets instead of list ops 2021-11-07 20:45:57 -07:00
James Betker
722d3dbdc2 f 2021-11-07 18:52:05 -07:00
James Betker
18b1de9b2c Add exclusion_lists to unsupervised_audio_dataset 2021-11-07 18:46:47 -07:00
James Betker
9b693b0a54 Fixes to filter_clips_hifreq 2021-11-07 18:42:22 -07:00
James Betker
a367ea3fda Add script for computing attention for gpt_asr 2021-11-07 18:42:06 -07:00
James Betker
3c0f2fbb21 Add filtration script for finding resampled clips (or phone calls) 2021-11-07 14:16:11 -07:00
James Betker
756b4dad09 Working gpt_asr_hf inference - and it's a beast! 2021-11-06 21:47:15 -06:00
James Betker
596a62fe01 Apply fix to gpt_asr_hf and prep it for inference
The fix is that we were predicting two characters in advance, not the next character.
2021-11-04 10:09:24 -06:00
James Betker
fd14746bf8 badtimes 2021-11-03 00:33:38 -06:00
James Betker
2fa80486de tacotron_dataset: recover gracefully 2021-11-03 00:31:50 -06:00
James Betker
af51d00dee Load wav files from voxpopuli instead of oggs 2021-11-02 09:32:26 -06:00
James Betker
3b65241b6b Get rid of printing grad names (didn't work very well..) 2021-11-01 18:44:05 -06:00
James Betker
993bd52d42 Add spec_augment injector 2021-11-01 18:43:11 -06:00
James Betker
4cff774b0e Reduce complexity of the encoder for gpt_asr_hf 2021-11-01 17:02:28 -06:00
James Betker
da55ca0438 gpt_asr using the huggingfaces transformer 2021-11-01 17:00:22 -06:00
James Betker
ee9b199d2b Build in capacity to revert & resume networks that encounter a NaN
I'm increasingly seeing issues where something like this can be useful. In many (most?)
cases it's just a waste of compute, though. Still, better than a cold computer for a whole
night.
2021-11-01 16:14:59 -06:00
James Betker
87364b890f Add custom clip_grad_norm that prints out the param names in error. 2021-11-01 11:12:20 -06:00
James Betker
f7d0901ce6 Decouple MEL from nv_tacotron_dataset 2021-10-31 15:01:38 -06:00
James Betker
b8b268b5f6 Misc 2021-10-31 14:29:23 -06:00
James Betker
b404a3b747 Revert recent changes to extr 2021-10-30 20:48:06 -06:00
James Betker
83cccef9d8 Condition on full signal 2021-10-30 19:58:34 -06:00