17fb934575 | 2021-12-31 16:21:39 -07:00 | James Betker | wer update
f0c4cd6317 | 2021-12-30 13:41:24 -07:00 | James Betker | Taking another stab at a BPE tokenizer
9aa06542cd | 2021-12-30 09:10:40 -07:00 | James Betker | Further reduce the complexity of the MEL encoder in GptAsrHf
f2cd6a7f08 | 2021-12-30 09:10:14 -07:00 | James Betker | For loading conditional clips, default to falling back to loading the clip itself
5ae7e0d9b0 | 2021-12-29 14:44:46 -07:00 | James Betker | Fix gapping bug in voice2voice clip
51ce1b5007 | 2021-12-29 14:44:32 -07:00 | James Betker | Add conditioning clips features to grand_conjoined
b12f47b36d | 2021-12-29 13:56:30 -07:00 | James Betker | Add some noise to voice_voice_clip
c6ef0eef0b | 2021-12-29 10:07:39 -07:00 | James Betker | asdf
53784ec806 | 2021-12-29 09:44:37 -07:00 | James Betker | grand conjoined dataset: support collating
8a02ba5935 | 2021-12-29 08:54:07 -07:00 | James Betker | Transit s2s clips back to CPU memory after processing
af6d5cd526 | 2021-12-29 08:50:49 -07:00 | James Betker | Add resume into speech-speech
0e4bcc33ab | 2021-12-29 00:23:27 -07:00 | James Betker | Additional debugging
b24a51f0aa | 2021-12-29 00:19:44 -07:00 | James Betker | Check in speech2speech CLIP inference tool
c1bef01dfa | 2021-12-28 20:48:38 -07:00 | James Betker | GptAsrHf2 checkin
07c2b9907c | 2021-12-28 16:18:12 -07:00 | James Betker | Add voice2voice clip model
a9ee5b624f | 2021-12-28 11:54:33 -07:00 | James Betker | Simplify and conform gpt_asr_hf2
a5b4bee719 | 2021-12-28 11:45:15 -07:00 | James Betker | Improve asr_eval
312f631c5b | 2021-12-28 10:57:45 -07:00 | James Betker | gpt_asr_hf2: remove dual positional embeddings
93624fa4b2 | 2021-12-28 10:06:54 -07:00 | James Betker | Don't use tqdm in ranks!=0
a12042ea99 | 2021-12-28 09:00:53 -07:00 | James Betker | Allow multi-embeddings to be disabled
4a32949b0e | 2021-12-26 15:33:21 -07:00 | James Betker | update inference mode for unified
a698d3f525 | 2021-12-26 15:33:05 -07:00 | James Betker | unified_voice: introduce paired embeddings
6996dfd9d5 | 2021-12-26 15:17:24 -07:00 | James Betker | asr_hf2: add independent position embedders
5b5cbc057c | 2021-12-26 10:29:12 -07:00 | James Betker | Work checkpoint for gpt asr hf2
cd89e6b42e | 2021-12-26 00:20:30 -07:00 | James Betker | Initialize our embeddings the same way GPT-2 initializes theirs.
8d01f7685c | 2021-12-26 00:10:24 -07:00 | James Betker | Get rid of absolute positional embeddings in unifiedvoice
6700f8851d | 2021-12-25 23:23:21 -07:00 | James Betker | moar verbosity
8acf3b3097 | 2021-12-25 23:18:25 -07:00 | James Betker | Better dimensional asserting
e959541494 | 2021-12-25 23:10:56 -07:00 | James Betker | Add position embeddings back into unified_voice
    I think this may be the solution behind the days problems.
64cb4a92db | 2021-12-25 21:32:01 -07:00 | James Betker | Support adamw_zero
776a7abfcc | 2021-12-25 21:20:06 -07:00 | James Betker | Support torch DDP _set_static_graph
746392f35c | 2021-12-25 15:28:59 -07:00 | James Betker | Fix DS
736c2626ee | 2021-12-25 15:21:01 -07:00 | James Betker | build in character tokenizer
b595c62893 | 2021-12-25 12:18:00 -07:00 | James Betker | One way decoder for decoding from mel codes
ab9cafa572 | 2021-12-25 12:17:50 -07:00 | James Betker | Make tokenization configs more configurable
52410fd9d9 | 2021-12-25 08:52:08 -07:00 | James Betker | 256-bpe tokenizer
8e26400ce2 | 2021-12-24 13:27:06 -07:00 | James Betker | Add inference for unified gpt
ead2a74bf0 | 2021-12-23 16:12:16 -07:00 | James Betker | Add debug_failures flag
9677f7084c | 2021-12-23 15:21:30 -07:00 | James Betker | dataset mod
8b19c37409 | 2021-12-23 15:20:26 -07:00 | James Betker | UnifiedGptVoice!
5bc9772cb0 | 2021-12-23 15:03:20 -07:00 | James Betker | grand: support validation mode
e55d949855 | 2021-12-23 14:32:33 -07:00 | James Betker | GrandConjoinedDataset
b9de8a8eda | 2021-12-22 19:21:29 -07:00 | James Betker | More fixes
191e0130ee | 2021-12-22 18:30:50 -07:00 | James Betker | Another fix
6c6daa5795 | 2021-12-22 17:46:18 -07:00 | James Betker | Build a bigger, better tokenizer
c737632eae | 2021-12-22 15:06:14 -07:00 | James Betker | Train and use a bespoke tokenizer
66bc60aeff | 2021-12-22 14:10:35 -07:00 | James Betker | Re-add start_text_token
a9629f7022 | 2021-12-22 14:03:18 -07:00 | James Betker | Try out using the GPT tokenizer rather than nv_tacotron
    This results in a significant compression of the text domain, I'm curious what the effect on speech quality will be.
ced81a760b | 2021-12-22 13:48:53 -07:00 | James Betker | restore nv_tacotron
7bf4f9f580 | 2021-12-22 13:48:30 -07:00 | James Betker | duplicate nvtacotron