Commit Graph

173 Commits

Author SHA1 Message Date
James Betker
f503d8d96b Partially implement performers in transformer_builders 2022-01-09 22:35:03 -07:00
James Betker
ec456b6733 Revert unified_voice back to the beginning
I'll be doing my work within unified_voice2
2022-01-09 22:34:30 -07:00
James Betker
f474a7ac65 unified_voice2 2022-01-09 22:32:34 -07:00
James Betker
70b17da193 Alter unified_voice to use extensible transformer (still WIP) 2022-01-08 22:18:25 -07:00
James Betker
15d9517e26 Allow bi-directional clipping 2022-01-08 22:18:04 -07:00
James Betker
438dd9ed33 fix text-voice-clip bug 2022-01-08 08:55:00 -07:00
James Betker
34774f9948 unified_voice: begin decoupling from HF GPT
I'd like to try some different (newer) transformer variants. The way to get
there is to softly decouple the transformer portion of this architecture
from GPT. This should actually be fairly easy.
2022-01-07 22:51:24 -07:00
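
A minimal sketch of what that soft decoupling might look like, assuming a hypothetical builder in transformer_builders (the helper names here are illustrative, not the repo's actual API): unified_voice would depend only on a module that maps hidden states of shape (batch, seq, dim) back to hidden states, so the HF GPT-2 stack becomes one interchangeable backend alongside newer variants such as Performers.

```python
import torch.nn as nn
from transformers import GPT2Config, GPT2Model

def build_hf_gpt2_stack(layers: int, dim: int, heads: int) -> nn.Module:
    """Wrap HF GPT-2 behind a plain (batch, seq, dim) -> (batch, seq, dim) module."""
    gpt = GPT2Model(GPT2Config(n_layer=layers, n_embd=dim, n_head=heads))

    class Stack(nn.Module):
        def __init__(self):
            super().__init__()
            self.gpt = gpt

        def forward(self, x):
            # Feed our own embeddings directly, bypassing GPT-2's token embedding.
            return self.gpt(inputs_embeds=x).last_hidden_state

    return Stack()
```

A Performer-backed builder with the same signature could then be swapped in without touching the rest of unified_voice.
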
James Betker
68090ac3e9 Finish up the text->voice clip model 2022-01-07 22:28:45 -07:00
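
A plausible sketch of the contrastive objective behind a text->voice CLIP model, assuming CLIP's standard symmetric cross-entropy over a batch-wise similarity matrix (the function and its arguments are illustrative, not the repo's code):

```python
import torch
import torch.nn.functional as F

def clip_loss(text_emb: torch.Tensor, voice_emb: torch.Tensor, temperature: float = 0.07):
    # Cosine-similarity logits between every text/voice pair in the batch.
    text_emb = F.normalize(text_emb, dim=-1)
    voice_emb = F.normalize(voice_emb, dim=-1)
    logits = text_emb @ voice_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric loss: each text should match its paired voice, and vice versa.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```
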
James Betker
65ffe38fce misc 2022-01-06 22:16:17 -07:00
James Betker
e7a705fe6e Make gpt_asr_hf2 more efficient at inference 2022-01-06 10:27:10 -07:00
James Betker
525addffab Unified: automatically clip inputs according to specified max length to improve inference time 2022-01-06 10:13:45 -07:00
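
A sketch of the clipping idea with hypothetical names: trim a padded batch along the sequence dimension to the shorter of the longest real sequence and the configured maximum, so the transformer never attends over pure padding.

```python
import torch

def clip_inputs(tokens: torch.LongTensor, lengths: torch.LongTensor, max_len: int) -> torch.LongTensor:
    """tokens is (batch, seq); lengths holds each row's unpadded length."""
    effective = min(int(lengths.max().item()), max_len)
    return tokens[:, :effective]
```
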
James Betker
61cd351b71 update unified 2022-01-06 09:48:11 -07:00
James Betker
10fd1110be Fix (?) use_gpt_tts for unified_voice 2022-01-05 20:09:31 -07:00
James Betker
3c4301f085 Remove dvae_arch_playground 2022-01-05 17:06:45 -07:00
James Betker
c584ba05ee unified_voice improvements
- Rename max_symbols_per_phrase to max_text_tokens
- Remove max_total_tokens (no longer necessary)
- Fix integration with MelEncoder
2022-01-05 17:03:53 -07:00
James Betker
38aba6f88d Another dumdum fix 2022-01-04 15:18:25 -07:00
James Betker
963c6072bb Add mel_encoder and solo embeddings to unified_voice 2022-01-04 15:15:58 -07:00
James Betker
2165124f19 Add GPT documentation 2022-01-01 21:00:07 -07:00
James Betker
2635412291 doh 2022-01-01 14:29:59 -07:00
James Betker
d4a6298658 more debugging 2022-01-01 14:25:27 -07:00
James Betker
d8111e0477 misc 2022-01-01 14:05:33 -07:00
James Betker
dc535b5358 better bounds 2022-01-01 14:05:22 -07:00
James Betker
fe9ea4e01a auto-fix text_inputs too big 2022-01-01 13:25:47 -07:00
James Betker
bbacffb790 dataset improvements and fix to unified_voice_Bilevel 2022-01-01 00:16:30 -07:00
James Betker
eda753e776 Allow conditioning shuffling to be disabled 2021-12-31 23:32:08 -07:00
James Betker
9aa06542cd Further reduce the complexity of the MEL encoder in GptAsrHf 2021-12-30 09:10:40 -07:00
James Betker
5ae7e0d9b0 Fix gapping bug in voice2voice clip 2021-12-29 14:44:46 -07:00
James Betker
b12f47b36d Add some noise to voice_voice_clip 2021-12-29 13:56:30 -07:00
James Betker
b24a51f0aa Check in speech2speech CLIP inference tool 2021-12-29 00:19:44 -07:00
James Betker
c1bef01dfa GptAsrHf2 checkin 2021-12-28 20:48:38 -07:00
James Betker
07c2b9907c Add voice2voice clip model 2021-12-28 16:18:12 -07:00
James Betker
a9ee5b624f Simplify and conform gpt_asr_hf2 2021-12-28 11:54:33 -07:00
James Betker
a5b4bee719 Improve asr_eval 2021-12-28 11:45:15 -07:00
James Betker
312f631c5b gpt_asr_hf2: remove dual positional embeddings 2021-12-28 10:57:45 -07:00
James Betker
a12042ea99 Allow multi-embeddings to be disabled 2021-12-28 09:00:53 -07:00
James Betker
a698d3f525 unified_voice: introduce paired embeddings 2021-12-26 15:33:05 -07:00
James Betker
6996dfd9d5 asr_hf2: add independent position embedders 2021-12-26 15:17:24 -07:00
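
One way to read "independent position embedders": each segment of the combined sequence (text tokens, then mel tokens) gets its own learned position table, with indices restarting at zero per segment. A sketch under that assumption, with hypothetical names:

```python
import torch
import torch.nn as nn

class SegmentPositionEmbeddings(nn.Module):
    def __init__(self, max_text: int, max_mel: int, dim: int):
        super().__init__()
        self.text_pos = nn.Embedding(max_text, dim)
        self.mel_pos = nn.Embedding(max_mel, dim)

    def forward(self, text_len: int, mel_len: int) -> torch.Tensor:
        # Each segment restarts its position index at zero.
        device = self.text_pos.weight.device
        t = self.text_pos(torch.arange(text_len, device=device))
        m = self.mel_pos(torch.arange(mel_len, device=device))
        return torch.cat([t, m], dim=0)  # (text_len + mel_len, dim)
```
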
James Betker
5b5cbc057c Work checkpoint for gpt asr hf2 2021-12-26 10:29:12 -07:00
James Betker
cd89e6b42e Initialize our embeddings the same way GPT-2 initializes theirs. 2021-12-26 00:20:30 -07:00
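
For reference, HF's GPT-2 initializes Linear and Embedding weights from N(0, 0.02²) and zeroes biases. A sketch of applying the same scheme (the helper name is hypothetical):

```python
import torch.nn as nn

def init_gpt2_style(module: nn.Module, std: float = 0.02) -> None:
    # Mirrors the weight portion of HF's GPT2PreTrainedModel._init_weights.
    if isinstance(module, (nn.Linear, nn.Embedding)):
        module.weight.data.normal_(mean=0.0, std=std)
        if isinstance(module, nn.Linear) and module.bias is not None:
            module.bias.data.zero_()

# Usage: model.apply(init_gpt2_style)
```
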
James Betker
8d01f7685c Get rid of absolute positional embeddings in unifiedvoice 2021-12-26 00:10:24 -07:00
James Betker
6700f8851d moar verbosity 2021-12-25 23:23:21 -07:00
James Betker
8acf3b3097 Better dimensional asserting 2021-12-25 23:18:25 -07:00
James Betker
e959541494 Add position embeddings back into unified_voice
I think this may be the solution to the day's problems.
2021-12-25 23:10:56 -07:00
James Betker
ab9cafa572 Make tokenization configs more configurable 2021-12-25 12:17:50 -07:00
James Betker
52410fd9d9 256-bpe tokenizer 2021-12-25 08:52:08 -07:00
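
A minimal sketch of training a 256-entry BPE vocabulary with the HuggingFace tokenizers library; the vocab size matches the commit title, while the corpus path and special tokens are placeholder assumptions:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=256, special_tokens=["[UNK]", "[STOP]"])
tokenizer.train(files=["transcripts.txt"], trainer=trainer)  # placeholder corpus
tokenizer.save("bpe_256.json")
```
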
James Betker
8e26400ce2 Add inference for unified gpt 2021-12-24 13:27:06 -07:00
James Betker
8b19c37409 UnifiedGptVoice! 2021-12-23 15:20:26 -07:00
James Betker
e55d949855 GrandConjoinedDataset 2021-12-23 14:32:33 -07:00
James Betker
c737632eae Train and use a bespoke tokenizer 2021-12-22 15:06:14 -07:00
James Betker
66bc60aeff Re-add start_text_token 2021-12-22 14:10:35 -07:00