James Betker
ff35d13b99
Use non-uniform noise in diffusion_tts6
2022-02-08 07:27:41 -07:00
James Betker
f44b064c5e
Update scripts
2022-02-07 19:43:18 -07:00
James Betker
34fbb78671
Straight CtcCodeGenerator as an encoder
2022-02-07 15:46:46 -07:00
James Betker
c24682c668
Record load times in fast_paired_dataset
2022-02-07 15:45:38 -07:00
James Betker
65a546c4d7
Fix for tts6
2022-02-05 16:00:14 -07:00
James Betker
5ae816bead
ctc gen checkin
2022-02-05 15:59:53 -07:00
James Betker
bb3d1ab03d
More cleanup
2022-02-04 11:06:17 -07:00
James Betker
5cc342de66
Clean up
2022-02-04 11:00:42 -07:00
James Betker
8fb147e8ab
add an autoregressive ctc code generator
2022-02-04 11:00:15 -07:00
James Betker
7f4fc55344
Update SR model
2022-02-03 21:42:53 -07:00
James Betker
de1a1d501a
Move audio injectors into their own file
2022-02-03 21:42:37 -07:00
James Betker
687393de59
Add a better split_on_silence (processing_pipeline)
Going to extend this a bit more going forwards to support the entire pipeline.
2022-02-03 20:00:26 -07:00
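Editorial note: a silence-splitting pass like the one this commit names typically walks a windowed RMS over the waveform and cuts wherever energy stays below a floor. The sketch below is illustrative only; aside from the function name, nothing here is taken from processing_pipeline.

```python
import torch

def split_on_silence(wav: torch.Tensor, sample_rate: int,
                     min_silence_sec: float = 0.5, threshold: float = 0.01):
    """Split a mono waveform into chunks wherever RMS energy stays below
    `threshold` for at least `min_silence_sec` seconds (values assumed)."""
    window = int(min_silence_sec * sample_rate)
    # RMS over non-overlapping windows.
    frames = wav[: len(wav) // window * window].reshape(-1, window)
    rms = frames.pow(2).mean(dim=1).sqrt()
    chunks, start = [], 0
    for i, silent in enumerate(rms < threshold):
        if silent:
            if i * window > start:
                chunks.append(wav[start:i * window])
            start = (i + 1) * window
    if start < len(wav):
        chunks.append(wav[start:])
    return chunks
```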
James Betker
1d29999648
Updates to the TTS production scripts
2022-02-03 20:00:01 -07:00
James Betker
bc506d4bcd
Mods to unet_diffusion_tts6 to support super resolution mode
2022-02-03 19:59:39 -07:00
James Betker
4249681c4b
Mods to support an autoregressive CTC code generator
2022-02-03 19:58:54 -07:00
James Betker
8132766d38
tts6
2022-01-31 20:15:06 -07:00
James Betker
fbea6e8eac
Adjustments to diffusion networks
2022-01-30 16:14:06 -07:00
James Betker
e58dab14c3
new diffusion updates from testing
2022-01-29 11:01:01 -07:00
James Betker
935a4e853e
get rid of nil tokens in <2>
2022-01-27 22:45:57 -07:00
James Betker
0152174c0e
Add wandb_step_factor argument
2022-01-27 19:58:58 -07:00
James Betker
e0e36ed98c
Update use_diffuse_tts
2022-01-27 19:57:28 -07:00
James Betker
a77d376ad2
rename unet diffusion tts and add 3
2022-01-27 19:56:24 -07:00
James Betker
7badbf1b4d
update usage scripts
2022-01-25 17:57:26 -07:00
James Betker
8c255811ad
more fixes
2022-01-25 17:57:16 -07:00
James Betker
0f3ca28e39
Allow diffusion model to be trained with masking tokens
2022-01-25 14:26:21 -07:00
James Betker
798ed7730a
i like wasting time
2022-01-24 18:12:08 -07:00
James Betker
fc09cff4b3
angry
2022-01-24 18:09:29 -07:00
James Betker
cc0d9f7216
Fix
2022-01-24 18:05:45 -07:00
James Betker
3a9e3a9db3
consolidate state
2022-01-24 17:59:31 -07:00
James Betker
dfef34ba39
Load ema to cpu memory if specified
2022-01-24 15:08:29 -07:00
James Betker
49edffb6ad
Revise device mapping
2022-01-24 15:08:13 -07:00
James Betker
33511243d5
load model state dicts into the correct device
it's not clear to me that this will make a huge difference, but it's a good idea anyways
2022-01-24 14:40:09 -07:00
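Editorial note: the device-correct loading described here maps directly onto torch.load's map_location argument. A minimal sketch; the path and model object are placeholders:

```python
import torch

# map_location materializes checkpoint tensors directly on the target
# device instead of wherever they lived when the checkpoint was saved.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
state = torch.load('checkpoint.pth', map_location=device)  # placeholder path
model.load_state_dict(state)
```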
James Betker
3e16c509f6
Misc fixes
2022-01-24 14:31:43 -07:00
James Betker
e2ed0adbd8
use_diffuse_tts updates
2022-01-24 14:31:28 -07:00
James Betker
e420df479f
Allow steps to specify which state keys to carry forward (reducing memory utilization)
2022-01-24 11:01:27 -07:00
James Betker
62475005e4
Sort data items in descending order, which I suspect will improve performance because we will hit GC less
2022-01-23 19:05:32 -07:00
James Betker
d18aec793a
Revert "(re) attempt diffusion checkpointing logic"
This reverts commit b22eec8fe3.
2022-01-22 09:14:50 -07:00
James Betker
b22eec8fe3
(re) attempt diffusion checkpointing logic
2022-01-22 08:34:40 -07:00
James Betker
8f48848f91
misc
2022-01-22 08:23:29 -07:00
James Betker
851070075a
text<->cond clip
...
I need that universal clip..
2022-01-22 08:23:14 -07:00
James Betker
8ada52ccdc
Update LR layers to checkpoint better
2022-01-22 08:22:57 -07:00
James Betker
ce929a6b3f
Allow grad scaler to be enabled even in fp32 mode
2022-01-21 23:13:24 -07:00
James Betker
91b4b240ac
dont pickle unique files
2022-01-21 00:02:06 -07:00
James Betker
7fef7fb9ff
Update fast_paired_dataset to report how many audio files it is actually using
2022-01-20 21:49:38 -07:00
James Betker
ed35cfe393
Update inference scripts
2022-01-20 11:28:50 -07:00
James Betker
20312211e0
Fix bug in code alignment
2022-01-20 11:28:12 -07:00
James Betker
8e2439f50d
Decrease resolution requirements to 2048
2022-01-20 11:27:49 -07:00
James Betker
4af8525dc3
Adjust diffusion vocoder to allow training individual levels
2022-01-19 13:37:59 -07:00
James Betker
ac13bfefe8
use_diffuse_tts
2022-01-19 00:35:24 -07:00
James Betker
bcd8cc51e1
Enable collated data for diffusion purposes
2022-01-19 00:35:08 -07:00
James Betker
dc9cd8c206
Update use_gpt_tts to be usable with unified_voice2
2022-01-18 21:14:17 -07:00
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
b6190e96b2
fast_paired
2022-01-17 15:46:02 -07:00
James Betker
1d30d79e34
De-specify fast-paired-dataset
2022-01-16 21:20:00 -07:00
James Betker
2b36ca5f8e
Revert paired back
2022-01-16 21:10:46 -07:00
James Betker
ad3e7df086
Split the fast random into its own new dataset
2022-01-16 21:10:11 -07:00
James Betker
7331862755
Updated paired to randomly index data, offsetting memory costs and speeding up initialization
2022-01-16 21:09:22 -07:00
James Betker
37e4e737b5
a few fixes
2022-01-16 15:17:17 -07:00
James Betker
35db5ebf41
paired_voice_audio_dataset - aligned codes support
2022-01-15 17:38:26 -07:00
James Betker
3f177cd2b3
requirements
2022-01-15 17:28:59 -07:00
James Betker
b398ecca01
wer fix
2022-01-15 17:28:17 -07:00
James Betker
9100e7fa9b
Add a diffusion network that takes aligned text instead of MELs
2022-01-15 17:28:02 -07:00
James Betker
87c83e4957
update wer script
2022-01-13 17:08:49 -07:00
James Betker
009a1e8404
Add a new diffusion_vocoder that should be trainable faster
This new one has a "cheating" top layer that does not feed down into the unet encoder,
but does consume the outputs of the unet. This cheater only operates on half of the input,
while the rest of the unet operates on the full input. This limits the dimensionality of this last
layer, on the assumption that these last layers consume by far the most computation and memory,
but do not require the full input context.
Losses are only computed on half of the aggregate input.
2022-01-11 17:26:07 -07:00
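Editorial note: to make the geometry of that "cheating" layer concrete, here is a hypothetical sketch. The class name, channel counts, and half-split are illustrative and do not reproduce the repository's module:

```python
import torch
import torch.nn as nn

class CheaterTopLayer(nn.Module):
    """Consumes the unet's output but never feeds back into its encoder."""
    def __init__(self, unet_channels: int, top_channels: int):
        super().__init__()
        self.proj = nn.Conv1d(unet_channels, top_channels, 3, padding=1)
        self.out = nn.Conv1d(top_channels, 1, 3, padding=1)

    def forward(self, unet_out: torch.Tensor) -> torch.Tensor:
        # Operate on only half of the sequence, keeping the most expensive
        # (highest-resolution) layer's dimensionality down; as described
        # above, the loss would likewise be computed on that half only.
        half = unet_out.shape[-1] // 2
        return self.out(torch.relu(self.proj(unet_out[..., :half])))
```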
James Betker
d4e27ccf62
misc updates
2022-01-11 16:25:40 -07:00
James Betker
91f28580e2
fix unified_voice
2022-01-10 16:17:31 -07:00
James Betker
136744dc1d
Fixes
2022-01-10 14:32:04 -07:00
James Betker
ee3dfac2ae
unified_voice2: decouple positional embeddings and token embeddings from underlying gpt model
2022-01-10 08:14:41 -07:00
James Betker
f503d8d96b
Partially implement performers in transformer_builders
2022-01-09 22:35:03 -07:00
James Betker
ec456b6733
Revert unified_voice back to beginning
I'll be doing my work within unified_voice2
2022-01-09 22:34:30 -07:00
James Betker
432073c5ca
Make performer code functional
2022-01-09 22:32:50 -07:00
James Betker
f474a7ac65
unified_voice2
2022-01-09 22:32:34 -07:00
James Betker
c075fe72e2
import performer repo
2022-01-09 22:10:07 -07:00
James Betker
7de3874f15
Make dalle transformer checkpointable
2022-01-09 19:14:35 -07:00
James Betker
70b17da193
Alter unified_voice to use extensible transformer (still WIP)
2022-01-08 22:18:25 -07:00
James Betker
15d9517e26
Allow bi-directional clipping
2022-01-08 22:18:04 -07:00
James Betker
894d245062
More zero_grad fixes
2022-01-08 20:31:19 -07:00
James Betker
8bade38180
Add generic CLIP model based off of x_clip
2022-01-08 19:08:01 -07:00
James Betker
2a9a25e6e7
Fix likely defective nan grad recovery
2022-01-08 18:24:58 -07:00
James Betker
438dd9ed33
fix text-voice-clip bug
2022-01-08 08:55:00 -07:00
James Betker
34774f9948
unified_voice: begin decoupling from HF GPT
I'd like to try some different (newer) transformer variants. The way to get
there is softly decoupling the transformer portion of this architecture
from GPT. This actually should be fairly easy.
2022-01-07 22:51:24 -07:00
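Editorial note: one plausible shape for that soft decoupling is a thin backbone interface, so the HF GPT stack becomes just one interchangeable implementation. All names below are hypothetical:

```python
import torch.nn as nn

class TransformerBackbone(nn.Module):
    """Anything that maps embeddings (batch, seq, dim) -> (batch, seq, dim)."""
    def forward(self, x):
        raise NotImplementedError

class HfGptBackbone(TransformerBackbone):
    def __init__(self, gpt_model):  # e.g. a transformers.GPT2Model
        super().__init__()
        self.gpt = gpt_model

    def forward(self, x):
        # Bypass GPT's own token embeddings and feed pre-embedded inputs,
        # which is what decoupled embeddings (see unified_voice2) require.
        return self.gpt(inputs_embeds=x).last_hidden_state
```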
James Betker
1f6a5310b8
More fixes to use_gpt_tts
2022-01-07 22:30:55 -07:00
James Betker
68090ac3e9
Finish up the text->voice clip model
2022-01-07 22:28:45 -07:00
James Betker
65ffe38fce
misc
2022-01-06 22:16:17 -07:00
James Betker
6706591d3d
Fix dataset
2022-01-06 15:24:37 -07:00
James Betker
f4484fd155
Add "dataset_debugger" support
This allows the datasets themselves to compile statistics and report them
via tensorboard and wandb.
2022-01-06 12:38:20 -07:00
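Editorial note: a "dataset debugger" in this spirit only needs to accumulate per-key statistics inside the dataset and hand back means for the logger. A minimal hypothetical sketch:

```python
class DatasetDebugger:
    """Accumulates per-key running statistics inside a dataset."""
    def __init__(self):
        self.stats = {}

    def log(self, key, value):
        total, count = self.stats.get(key, (0.0, 0))
        self.stats[key] = (total + value, count + 1)

    def means(self):
        return {k: total / count for k, (total, count) in self.stats.items()}

# The training loop could then periodically forward means() to the
# experiment logger, e.g. wandb.log(debugger.means()).
```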
James Betker
f3cab45658
Revise audio datasets to include interesting statistics in batch
Stats include:
- How many indices were skipped to retrieve a given index
- Whether or not a conditioning input was actually the file itself
2022-01-06 11:15:16 -07:00
James Betker
06c1093090
Remove collating from paired_voice_audio_dataset
This will now be done at the model level, which is more efficient
2022-01-06 10:29:39 -07:00
James Betker
e7a705fe6e
Make gpt_asr_hf2 more efficient at inference
2022-01-06 10:27:10 -07:00
James Betker
5e1d1da2e9
Clean paired_voice
2022-01-06 10:26:53 -07:00
James Betker
525addffab
Unified: automatically clip inputs according to specified max length to improve inference time
2022-01-06 10:13:45 -07:00
James Betker
61cd351b71
update unified
2022-01-06 09:48:11 -07:00
James Betker
10fd1110be
Fix (?) use_gpt_tts for unified_voice
2022-01-05 20:09:31 -07:00
James Betker
3c4301f085
Remove dvae_arch_playground
2022-01-05 17:06:45 -07:00
James Betker
a63a17e48f
Remove deepspeech models
2022-01-05 17:05:13 -07:00
James Betker
c584ba05ee
unified_voice improvements
- Rename max_symbols_per_phrase to max_text_tokens
- Remove max_total_tokens (no longer necessary)
- Fix integration with MelEncoder
2022-01-05 17:03:53 -07:00
James Betker
50d267ab1a
misc
2022-01-05 17:01:22 -07:00
James Betker
0fe34f57d1
Use torch resampler
2022-01-05 15:47:22 -07:00
James Betker
38aba6f88d
Another dumdum fix
2022-01-04 15:18:25 -07:00
James Betker
963c6072bb
Add mel_encoder and solo embeddings to unified_voice
2022-01-04 15:15:58 -07:00
James Betker
2165124f19
Add GPT documentation
2022-01-01 21:00:07 -07:00
James Betker
2635412291
doh
2022-01-01 14:29:59 -07:00
James Betker
d4a6298658
more debugging
2022-01-01 14:25:27 -07:00
James Betker
d8111e0477
misc
2022-01-01 14:05:33 -07:00
James Betker
dc535b5358
better bounds
2022-01-01 14:05:22 -07:00
James Betker
fe9ea4e01a
auto-fix text_inputs too big
2022-01-01 13:25:47 -07:00
James Betker
35abefd038
More fix
2022-01-01 10:31:03 -07:00
James Betker
d5a5111890
Fix collating being on by default on grand_conjoined
2022-01-01 10:30:15 -07:00
James Betker
4d9ba4a48a
can i has fix now
2022-01-01 00:48:27 -07:00
James Betker
56752f1dbc
Fix collator bug
2022-01-01 00:33:31 -07:00
James Betker
c28d8770c7
fix tensor lengths
2022-01-01 00:23:46 -07:00
James Betker
bbacffb790
dataset improvements and fix to unified_voice_Bilevel
2022-01-01 00:16:30 -07:00
James Betker
eda753e776
Allow conditioning shuffling to be disabled
2021-12-31 23:32:08 -07:00
James Betker
17fb934575
wer update
2021-12-31 16:21:39 -07:00
James Betker
f0c4cd6317
Taking another stab at a BPE tokenizer
2021-12-30 13:41:24 -07:00
James Betker
9aa06542cd
Further reduce the complexity of the MEL encoder in GptAsrHf
2021-12-30 09:10:40 -07:00
James Betker
f2cd6a7f08
For loading conditional clips, default to falling back to loading the clip itself
2021-12-30 09:10:14 -07:00
James Betker
5ae7e0d9b0
Fix gapping bug in voice2voice clip
2021-12-29 14:44:46 -07:00
James Betker
51ce1b5007
Add conditioning clips features to grand_conjoined
2021-12-29 14:44:32 -07:00
James Betker
b12f47b36d
Add some noise to voice_voice_clip
2021-12-29 13:56:30 -07:00
James Betker
c6ef0eef0b
asdf
2021-12-29 10:07:39 -07:00
James Betker
53784ec806
grand conjoined dataset: support collating
2021-12-29 09:44:37 -07:00
James Betker
8a02ba5935
Transit s2s clips back to CPU memory after processing
2021-12-29 08:54:07 -07:00
James Betker
af6d5cd526
Add resume into speech-speech
2021-12-29 08:50:49 -07:00
James Betker
0e4bcc33ab
Additional debugging
2021-12-29 00:23:27 -07:00
James Betker
b24a51f0aa
Check in speech2speech CLIP inference tool
2021-12-29 00:19:44 -07:00
James Betker
c1bef01dfa
GptAsrHf2 checkin
2021-12-28 20:48:38 -07:00
James Betker
07c2b9907c
Add voice2voice clip model
2021-12-28 16:18:12 -07:00
James Betker
a9ee5b624f
Simplify and conform gpt_asr_hf2
2021-12-28 11:54:33 -07:00
James Betker
a5b4bee719
Improve asr_eval
2021-12-28 11:45:15 -07:00
James Betker
312f631c5b
gpt_asr_hf2: remove dual positional embeddings
2021-12-28 10:57:45 -07:00
James Betker
93624fa4b2
Don't use tqdm in ranks!=0
2021-12-28 10:06:54 -07:00
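Editorial note: gating the progress bar on rank is a one-liner with tqdm's disable flag. A sketch; the dataloader is a placeholder:

```python
import torch.distributed as dist
from tqdm import tqdm

# Only rank 0 should render a progress bar; every other rank would draw
# its own and garble the console.
is_rank0 = (not dist.is_initialized()) or dist.get_rank() == 0
for batch in tqdm(dataloader, disable=not is_rank0):
    ...
```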
James Betker
a12042ea99
Allow multi-embeddings to be disabled
2021-12-28 09:00:53 -07:00
James Betker
4a32949b0e
update inference mode for unified
2021-12-26 15:33:21 -07:00
James Betker
a698d3f525
unified_voice: introduce paired embeddings
2021-12-26 15:33:05 -07:00
James Betker
6996dfd9d5
asr_hf2: add independent position embedders
2021-12-26 15:17:24 -07:00
James Betker
5b5cbc057c
Work checkpoint for gpt asr hf2
2021-12-26 10:29:12 -07:00
James Betker
cd89e6b42e
Initialize our embeddings the same way GPT-2 initializes theirs.
2021-12-26 00:20:30 -07:00
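Editorial note: GPT-2 draws its embedding weights from a normal distribution with std 0.02 (the initializer_range default in the HF config). Applying the same scheme to a custom table; sizes are illustrative:

```python
import torch.nn as nn

emb = nn.Embedding(num_embeddings=8192, embedding_dim=1024)  # sizes assumed
# GPT-2-style init: N(0, 0.02^2), matching HF's initializer_range default.
nn.init.normal_(emb.weight, mean=0.0, std=0.02)
```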
James Betker
8d01f7685c
Get rid of absolute positional embeddings in unifiedvoice
2021-12-26 00:10:24 -07:00
James Betker
6700f8851d
moar verbosity
2021-12-25 23:23:21 -07:00
James Betker
8acf3b3097
Better dimensional asserting
2021-12-25 23:18:25 -07:00
James Betker
e959541494
Add position embeddings back into unified_voice
I think this may be the solution behind the day's problems.
2021-12-25 23:10:56 -07:00
James Betker
64cb4a92db
Support adamw_zero
2021-12-25 21:32:01 -07:00
James Betker
776a7abfcc
Support torch DDP _set_static_graph
2021-12-25 21:20:06 -07:00
James Betker
746392f35c
Fix DS
2021-12-25 15:28:59 -07:00
James Betker
736c2626ee
build in character tokenizer
2021-12-25 15:21:01 -07:00
James Betker
b595c62893
One way decoder for decoding from mel codes
2021-12-25 12:18:00 -07:00
James Betker
ab9cafa572
Make tokenization configs more configurable
2021-12-25 12:17:50 -07:00
James Betker
52410fd9d9
256-bpe tokenizer
2021-12-25 08:52:08 -07:00
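Editorial note: training a 256-entry BPE vocabulary with the HuggingFace tokenizers library is straightforward. The corpus file and special tokens below are assumptions:

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=256,
                              special_tokens=['[STOP]', '[UNK]'])  # assumed
tokenizer.train(files=['transcripts.txt'], trainer=trainer)  # assumed corpus
tokenizer.save('bpe_256.json')
```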
James Betker
8e26400ce2
Add inference for unified gpt
2021-12-24 13:27:06 -07:00
James Betker
ead2a74bf0
Add debug_failures flag
2021-12-23 16:12:16 -07:00
James Betker
9677f7084c
dataset mod
2021-12-23 15:21:30 -07:00
James Betker
8b19c37409
UnifiedGptVoice!
2021-12-23 15:20:26 -07:00
James Betker
5bc9772cb0
grand: support validation mode
2021-12-23 15:03:20 -07:00
James Betker
e55d949855
GrandConjoinedDataset
2021-12-23 14:32:33 -07:00
James Betker
b9de8a8eda
More fixes
2021-12-22 19:21:29 -07:00
James Betker
191e0130ee
Another fix
2021-12-22 18:30:50 -07:00
James Betker
6c6daa5795
Build a bigger, better tokenizer
2021-12-22 17:46:18 -07:00
James Betker
c737632eae
Train and use a bespoke tokenizer
2021-12-22 15:06:14 -07:00
James Betker
66bc60aeff
Re-add start_text_token
2021-12-22 14:10:35 -07:00
James Betker
a9629f7022
Try out using the GPT tokenizer rather than nv_tacotron
This results in a significant compression of the text domain; I'm curious what the
effect on speech quality will be.
2021-12-22 14:03:18 -07:00
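Editorial note: the "compression" here is easy to quantify. A BPE tokenizer emits roughly one token per common word where a character frontend emits one per letter; a quick check with the stock GPT-2 tokenizer:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained('gpt2')
text = 'the quick brown fox jumps over the lazy dog'
print(len(text))              # 43 characters
print(len(tok.encode(text)))  # far fewer BPE tokens
```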
James Betker
ced81a760b
restore nv_tacotron
2021-12-22 13:48:53 -07:00
James Betker
7bf4f9f580
duplicate nvtacotron
2021-12-22 13:48:30 -07:00
James Betker
7ae7d423af
VoiceCLIP model
2021-12-22 13:44:11 -07:00
James Betker
09f7f3e615
Remove obsolete lucidrains DALLE stuff, re-create it in a dedicated folder
2021-12-22 13:44:02 -07:00
James Betker
a42b94ab72
gpt_tts_hf inference fixes
2021-12-22 13:22:15 -07:00
James Betker
48e3ee9a5b
Shuffle conditioning inputs along the positional axis to reduce fitting on prosody and other positional information
The mels should still retain some short-range positional information the model can use
for tone and frequencies, for example.
2021-12-20 19:05:56 -07:00
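Editorial note: one hedged reading of that shuffle is permuting fixed-size chunks of the conditioning mel along time, destroying long-range prosodic order while the short-range cues inside each chunk survive. A hypothetical sketch:

```python
import torch

def shuffle_conditioning(mel: torch.Tensor, chunk: int = 32) -> torch.Tensor:
    """mel: (channels, time). The chunk size is an assumption."""
    t = mel.shape[-1] // chunk * chunk
    chunks = mel[..., :t].reshape(mel.shape[0], -1, chunk)
    perm = torch.randperm(chunks.shape[1])
    # Reorder chunks along the positional axis; intra-chunk content is kept.
    return chunks[:, perm].reshape(mel.shape[0], t)
```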
James Betker
53858b2055
Fix gpt_tts_hf inference
2021-12-20 17:45:26 -07:00
James Betker
712d746e9b
gpt_tts: format conditioning inputs more for contextual voice clues and less for prosody
also support single conditional inputs
2021-12-19 17:42:29 -07:00
James Betker
c813befd53
Remove dedicated positioning embeddings
2021-12-19 09:01:31 -07:00
James Betker
b4ddcd7111
More inference improvements
2021-12-19 09:01:19 -07:00
James Betker
f9c45d70f0
Fix mel terminator
2021-12-18 17:18:06 -07:00
James Betker
937045cb63
Fixes
2021-12-18 16:45:38 -07:00
James Betker
9b9f7ea61b
GptTtsHf: Make the input/target placement easier to reason about
2021-12-17 10:24:14 -07:00
James Betker
2fb4213a3e
More lossy fixes
2021-12-17 10:01:42 -07:00
James Betker
dee34f096c
Add use_gpt_tts script
2021-12-16 23:28:54 -07:00
James Betker
9e8a9bf6ca
Various fixes to gpt_tts_hf
2021-12-16 23:28:44 -07:00
James Betker
62c8ed9a29
move speech utils
2021-12-16 20:47:37 -07:00
James Betker
e7957e4897
Make loss accumulator for logs accumulate better
2021-12-12 22:23:17 -07:00
James Betker
4f8c4d130c
gpt_tts_hf: pad mel tokens with an <end_of_sequence> token.
2021-12-12 20:04:50 -07:00
James Betker
76f86c0e47
gaussian_diffusion: support fp16
2021-12-12 19:52:21 -07:00
James Betker
aa7cfd1edf
Add support for mel norms across the channel dim
2021-12-12 19:52:08 -07:00
James Betker
8917c02a4d
gpt_tts_hf inference first pass
2021-12-12 19:51:44 -07:00
James Betker
63bf135b93
Support norms
2021-12-11 08:30:49 -07:00
James Betker
959979086d
fix
2021-12-11 08:18:00 -07:00
James Betker
5a664aa56e
misc
2021-12-11 08:17:26 -07:00
James Betker
d610540ce5
mel norm computation script
2021-12-11 08:16:50 -07:00
James Betker
306274245b
Also do dynamic range compression across mel
2021-12-10 20:06:24 -07:00
James Betker
faf55684b8
Use slaney norm in the mel filterbank computation
2021-12-10 20:04:52 -07:00
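Editorial note: both of the mel tweaks above (dynamic range compression and Slaney normalization) map onto standard torch/torchaudio calls. Exact parameters, the compression floor, and the waveform are assumptions:

```python
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, hop_length=256, n_mels=80,
    norm='slaney', mel_scale='slaney')  # Slaney filterbank norm and scale
spec = mel(waveform)  # waveform is a placeholder (1, samples) tensor
# Dynamic range compression as commonly done for TTS mels: log of a
# clamped spectrogram (the 1e-5 floor is an assumption).
spec = torch.log(torch.clamp(spec, min=1e-5))
```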
James Betker
b2d8fbcfc0
build a better speech synthesis toolset
2021-12-09 22:59:56 -07:00
James Betker
32cfcf3684
Turn off optimization in find_faulty_files
2021-12-09 09:02:09 -07:00
James Betker
a66a2bf91b
Update find_faulty_files
2021-12-09 09:00:00 -07:00
James Betker
9191201f05
asd
2021-12-07 09:55:39 -07:00
James Betker
ef15a39841
fix gdi bug?
2021-12-07 09:53:48 -07:00
James Betker
6ccff3f49f
Record codes more often
2021-12-07 09:22:45 -07:00
James Betker
d0b2f931bf
Add feature to diffusion vocoder where the spectrogram conditioning layers can be re-trained apart from the rest of the model
2021-12-07 09:22:30 -07:00
James Betker
662920bde3
Log codes when simply fetching codebook_indices
2021-12-06 09:21:43 -07:00
James Betker
380a5d5475
gdi..
2021-12-03 08:53:09 -07:00
James Betker
101a01f744
Fix dvae codes issue
2021-12-02 23:28:36 -07:00
James Betker
31fc693a8a
dafsdf
2021-12-02 22:55:36 -07:00
James Betker
040d998922
maasd
2021-12-02 22:53:48 -07:00
James Betker
cc10e7e7e8
Add tsv loader
2021-12-02 22:43:07 -07:00
James Betker
702607556d
nv_tacotron_dataset: allow it to load conditioning signals
2021-12-02 22:14:44 -07:00
James Betker
07b0124712
GptTtsHf!
2021-12-02 21:48:42 -07:00
James Betker
85542ec547
One last fix for gpt_asr_hf2
2021-12-02 21:19:28 -07:00
James Betker
68e9db12b5
Add interleaving and direct injectors
2021-12-02 21:04:49 -07:00
James Betker
04454ee63a
Add evaluation logic for gpt_asr_hf2
2021-12-02 21:04:36 -07:00
James Betker
47fe032a3d
Try to make diffusion validator more reproducible
2021-11-24 09:38:10 -07:00
James Betker
5956eb757c
ffffff
2021-11-24 00:19:47 -07:00
James Betker
f1ed0588e3
another fix
2021-11-24 00:11:21 -07:00
James Betker
7a3c4a4fc6
Fix lr quantizer decode
2021-11-24 00:01:26 -07:00
James Betker
3f6ecfe0db
q fix
2021-11-23 23:50:27 -07:00
James Betker
d9747fe623
Integrate with lr_quantizer
2021-11-23 19:48:22 -07:00
James Betker
82d0e7720e
Add choke to lucidrains_dvae
2021-11-23 18:53:37 -07:00
James Betker
934395d4b8
A few fixes for gpt_asr_hf2
2021-11-23 09:29:29 -07:00
James Betker
3b5c3d85d8
Allow specification of wandb run name
2021-11-22 17:31:29 -07:00
James Betker
01e635168b
whoops
2021-11-22 17:24:13 -07:00
James Betker
973f47c525
misc nonfunctional
2021-11-22 17:16:39 -07:00
James Betker
3125ca38f5
Further wandb logs
2021-11-22 16:40:19 -07:00
James Betker
19c80bf7a7
Improve wandb logging
2021-11-22 16:40:05 -07:00
James Betker
0604060580
Finish up mods for next version of GptAsrHf
2021-11-20 21:33:49 -07:00
James Betker
14f3155ec4
misc
2021-11-20 17:45:14 -07:00
James Betker
687e0746b3
Add Torch-derived MelSpectrogramInjector
2021-11-18 20:02:45 -07:00
James Betker
555b7e52ad
Add rev2 of GptAsrHf
2021-11-18 20:02:24 -07:00
James Betker
c30a38cdf1
Undo baseline GDI changes
2021-11-18 20:02:09 -07:00
James Betker
1287915f3c
Fix dvae test failure
2021-11-18 00:58:36 -07:00
James Betker
019acfa4c5
Allow flat dvae
2021-11-18 00:53:42 -07:00
James Betker
f3db41f125
Fix code logging
2021-11-18 00:34:37 -07:00
James Betker
f36bab95dd
Audio resample injector
2021-11-10 20:06:33 -07:00
James Betker
79367f753d
Fix error & add nonfinite warning
2021-11-09 23:58:41 -07:00
James Betker
5d5558893a
Merge remote-tracking branch 'origin/master'
2021-11-08 20:10:49 -07:00
James Betker
d43f25cc20
Update losses
2021-11-08 20:10:07 -07:00
James Betker
c584320cf3
Fix gpt_asr_hf distillation
2021-11-07 21:53:21 -07:00
James Betker
9b3c3b1227
use sets instead of list ops
2021-11-07 20:45:57 -07:00
James Betker
722d3dbdc2
f
2021-11-07 18:52:05 -07:00
James Betker
18b1de9b2c
Add exclusion_lists to unsupervised_audio_dataset
2021-11-07 18:46:47 -07:00
James Betker
9b693b0a54
Fixes to filter_clips_hifreq
2021-11-07 18:42:22 -07:00
James Betker
a367ea3fda
Add script for computing attention for gpt_asr
2021-11-07 18:42:06 -07:00
James Betker
3c0f2fbb21
Add filtration script for finding resampled clips (or phone calls)
2021-11-07 14:16:11 -07:00
James Betker
756b4dad09
Working gpt_asr_hf inference - and it's a beast!
2021-11-06 21:47:15 -06:00
James Betker
596a62fe01
Apply fix to gpt_asr_hf and prep it for inference
Fix is that we were predicting two characters in advance, not the next character
2021-11-04 10:09:24 -06:00
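Editorial note: the fix described is the classic autoregressive alignment, where the logit at position t is scored against token t+1; any extra shift makes the model predict two steps ahead. A sketch of the correct pairing; model and tokens are placeholders:

```python
import torch.nn.functional as F

logits = model(tokens)  # (batch, seq, vocab); placeholders
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, logits.shape[-1]),  # prediction at position t
    tokens[:, 1:].reshape(-1))                     # target is token t+1
```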
James Betker
fd14746bf8
badtimes
2021-11-03 00:33:38 -06:00
James Betker
2fa80486de
tacotron_dataset: recover gracefully
2021-11-03 00:31:50 -06:00
James Betker
af51d00dee
Load wav files from voxpopuli instead of oggs
2021-11-02 09:32:26 -06:00
James Betker
3b65241b6b
Get rid of printing grad names (didn't work very well..)
2021-11-01 18:44:05 -06:00
James Betker
993bd52d42
Add spec_augment injector
2021-11-01 18:43:11 -06:00
James Betker
4cff774b0e
Reduce complexity of the encoder for gpt_asr_hf
2021-11-01 17:02:28 -06:00
James Betker
da55ca0438
gpt_asr using the huggingfaces transformer
2021-11-01 17:00:22 -06:00
James Betker
ee9b199d2b
Build in capacity to revert & resume networks that encounter a NaN
I'm increasingly seeing issues where something like this can be useful. In many (most?)
cases it's just a waste of compute, though. Still, better than a cold computer for a whole
night.
2021-11-01 16:14:59 -06:00
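Editorial note: a revert-and-resume guard of this kind can be as simple as snapshotting known-good weights and rolling back when the loss goes non-finite. Everything below (interval, helpers, objects) is illustrative, not the repository's mechanism:

```python
import torch

last_good = {k: v.detach().clone() for k, v in model.state_dict().items()}
for step, batch in enumerate(loader):      # model/loader are placeholders
    loss = train_step(model, batch)        # hypothetical helper
    if not torch.isfinite(loss):
        model.load_state_dict(last_good)   # revert to the last good weights
        continue                           # ...and resume training
    if step % 1000 == 0:                   # snapshot interval is assumed
        last_good = {k: v.detach().clone()
                     for k, v in model.state_dict().items()}
```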
James Betker
87364b890f
Add custom clip_grad_norm that prints out the param names in error.
2021-11-01 11:12:20 -06:00
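Editorial note: a name-aware variant of clip_grad_norm_ only needs the (name, param) pairs so it can report which gradients went non-finite. This sketch is a debugging aid in that spirit, not the repository's code:

```python
import torch

def clip_grad_norm_named(named_params, max_norm: float):
    named_params = [(n, p) for n, p in named_params if p.grad is not None]
    norms = torch.stack([p.grad.norm() for _, p in named_params])
    total = norms.norm()  # L2 norm over all per-parameter gradient norms
    if not torch.isfinite(total):
        bad = [n for (n, _), v in zip(named_params, norms)
               if not torch.isfinite(v)]
        raise RuntimeError(f'Non-finite gradient norm; offending params: {bad}')
    scale = (max_norm / (total + 1e-6)).clamp(max=1.0)
    for _, p in named_params:
        p.grad.mul_(scale)
    return total

# Usage: clip_grad_norm_named(model.named_parameters(), max_norm=1.0)
```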