James Betker
2d3372054d
Add support for voxpopuli to nv_tacotron_dataset
2021-08-16 17:13:40 -06:00
James Betker
729c1fd5a9
Fix up max lengths to save memory
2021-08-15 21:29:28 -06:00
James Betker
9e47e64d5a
Add gpt_segmentor model
...
The idea is to specifically train a model that extracts phrases from
audio clips.
2021-08-15 21:23:07 -06:00
James Betker
a826d5f658
Mods to dvae
...
- Add resblock to each layer
- Increase filter size for each layer
- Use SiLU
2021-08-15 20:54:10 -06:00
James Betker
b8bec22f1a
Fix gpt_asr inference bug
2021-08-15 20:53:42 -06:00
James Betker
3580c52eac
Fix up wavfile_dataset to be able to provide a full clip
2021-08-15 20:53:26 -06:00
James Betker
a523c4f932
Auto-normalize wav files by data type
2021-08-15 09:09:51 -06:00
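Normalizing by data type amounts to mapping each integer PCM range onto floats in [-1, 1]. A minimal sketch of that idea (illustrative only, not the dataset's actual code):

import numpy as np

def normalize_wav(data: np.ndarray) -> np.ndarray:
    # Map integer PCM samples to float32 in [-1, 1] based on their storage dtype.
    if data.dtype == np.int16:
        return data.astype(np.float32) / 32768.0
    if data.dtype == np.int32:
        return data.astype(np.float32) / 2147483648.0
    if data.dtype == np.uint8:  # 8-bit wav data is unsigned
        return (data.astype(np.float32) - 128.0) / 128.0
    return data.astype(np.float32)  # already floating point; assume it is in range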
James Betker
98057b6516
Make lrdvae use quantized mode in eval()
2021-08-14 23:43:01 -06:00
James Betker
c28f657ab8
Allow usage of pre-rendered mels saved to npy files
2021-08-14 23:38:15 -06:00
James Betker
ad3391bd96
Fix nan issue when interpolating audio
2021-08-14 20:42:01 -06:00
James Betker
769f0acc53
Moar fix
2021-08-14 17:23:15 -06:00
James Betker
3d2e724083
Fix audio ranging problem
2021-08-14 17:18:55 -06:00
James Betker
d6a73acaed
Allow processing of multiple audio sources at once from nv_tacotron_dataset
2021-08-14 16:04:05 -06:00
James Betker
007976082b
GPT_asr for inference
2021-08-14 14:37:17 -06:00
James Betker
e1bdd3f7c7
Fix gpt_asr bug. Initial implementation of beam search
2021-08-13 22:47:00 -06:00
James Betker
72622b4d61
Allow saving mel strips as files from the dataset implementation
2021-08-13 22:46:41 -06:00
James Betker
cfd284f425
Fix up some stuff that allows the MEL to be computed on-GPU
2021-08-13 18:35:55 -06:00
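Computing the mel on-GPU mostly means keeping the transform and the waveform on the same CUDA device. A hedged sketch with torchaudio; the n_fft/hop_length/n_mels values are typical tacotron settings, not necessarily the repo's:

import torch
import torchaudio

mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, hop_length=256, n_mels=80).cuda()
wav = torch.randn(1, 22050).cuda()  # placeholder one-second clip
mel = mel_fn(wav)                   # (1, 80, frames), computed entirely on the GPU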
James Betker
cdee31c60b
GPT_ASR
2021-08-13 15:02:18 -06:00
James Betker
81e91c99de
Misc
2021-08-13 13:58:59 -06:00
James Betker
fff1a59e08
Fix invalid max/min mel values
2021-08-13 09:36:31 -06:00
James Betker
4b2946e581
More fix
2021-08-12 15:51:23 -06:00
James Betker
4c76257c71
Don't require collation for nv_tacotron
2021-08-12 15:44:55 -06:00
James Betker
5b07d3b623
Found error that I was trying to fix with reload=True
2021-08-12 15:22:34 -06:00
James Betker
430b650a34
......
2021-08-12 10:31:10 -06:00
James Betker
b35d6ae028
Print some metrics from tacotron dataset when it croaks
2021-08-12 09:21:12 -06:00
James Betker
0c4d6b1916
Just offer generic re-load for nv-tacotron
2021-08-12 09:09:12 -06:00
James Betker
154f5aa73c
Fix annoying warning and add to requirements
2021-08-11 17:32:06 -06:00
James Betker
f5a9b88ef6
tacotron cleaners: remove quotation marks
...
These don't really have relevance for TTS or ASR.
2021-08-11 16:18:44 -06:00
James Betker
20586a8edc
Fix LRDVAE bug with quantizer integration
2021-08-11 16:17:22 -06:00
James Betker
f04a7bdf63
Bug fixes for tacotron dataset on mozilla cv
...
- Support a max mel length (Mozilla CV has some tracks that are basically unbounded)
- Don't fail on low sample rates (Mozilla CV has some of those)
2021-08-11 16:17:03 -06:00
James Betker
2d3f0cc33c
nv_tacotron_dataset - Allow training on mozilla cv
2021-08-11 13:34:31 -06:00
James Betker
d0c74278bf
Enable multiple wavfile paths to be specified, fix eps bug in mp3 splitter
2021-08-11 08:46:02 -06:00
James Betker
e19c00398e
More improvements to random_mp3_splitter
2021-08-09 21:31:12 -06:00
James Betker
04d14b3acc
No batch factors for eval
2021-08-09 16:02:01 -06:00
James Betker
82fc69abfa
Add "pure" evaluator
...
Which simply computes the training loss against an eval dataset
2021-08-09 14:58:35 -06:00
James Betker
080bea2f19
No, really
2021-08-09 12:02:31 -06:00
James Betker
e1ce4671e4
Apply dropout to gpt_tts, get rid of min_gpt implementation
2021-08-09 12:01:10 -06:00
James Betker
74342b860b
Revert "Undo forced text padding"
...
This reverts commit 83ab5e6a00.
2021-08-09 11:56:34 -06:00
James Betker
1068f53b78
Add a sampling beam search
2021-08-09 11:56:06 -06:00
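A sampling beam search keeps several hypotheses alive like ordinary beam search but draws each hypothesis's continuations from the model's distribution instead of always taking the top-k. A rough, generic sketch (step_fn and the token handling are hypothetical, not gpt_tts's actual interface):

import torch
import torch.nn.functional as F

def sampling_beam_search(step_fn, prefix, beam_width=4, max_len=200, eos_token=0):
    # step_fn(tokens) -> (1, seq, vocab) logits over the next token; assumed interface.
    beams = [(list(prefix), 0.0)]
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == eos_token and len(tokens) > len(prefix):
                candidates.append((tokens, score))  # finished hypothesis, keep as-is
                continue
            logits = step_fn(torch.tensor(tokens).unsqueeze(0))[0, -1]
            probs = F.softmax(logits, dim=-1)
            # Sample continuations rather than deterministically taking the top-k.
            for tok in torch.multinomial(probs, beam_width).tolist():
                candidates.append((tokens + [tok], score + probs[tok].log().item()))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
        if all(b[0][-1] == eos_token for b in beams):
            break
    return beams[0][0]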
James Betker
d4e33bf15f
Fixes to the mp3 splitter
2021-08-09 11:55:46 -06:00
James Betker
4100469902
Add a tool to split mp3 files into arbitrary chunks of wav files
2021-08-08 23:23:13 -06:00
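Splitting an mp3 into fixed-length wav chunks can be done with any audio library; here is a sketch of the idea using pydub (an assumption on my part, the actual tool may use entirely different tooling):

from pathlib import Path
from pydub import AudioSegment  # assumes pydub and ffmpeg are available

def split_mp3_to_wavs(mp3_path, out_dir, chunk_seconds=10):
    audio = AudioSegment.from_mp3(mp3_path)
    chunk_ms = chunk_seconds * 1000
    for i, start in enumerate(range(0, len(audio), chunk_ms)):  # len() is in milliseconds
        out_name = f"{Path(mp3_path).stem}_{i:05d}.wav"
        audio[start:start + chunk_ms].export(str(Path(out_dir) / out_name), format="wav")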
James Betker
01cfae28d8
Beam search implementation in one pass? Dayyyum
2021-08-08 23:22:42 -06:00
James Betker
83ab5e6a00
Undo forced text padding
2021-08-08 11:42:20 -06:00
James Betker
690d7e86d3
Fix nv_tacotron_dataset bug which incorrectly mapped filenames
...
dammit..
2021-08-08 11:38:52 -06:00
James Betker
a2afb25e42
Fix inference, always flow full text tokens through transformer
2021-08-07 20:11:10 -06:00
James Betker
4c678172d6
ugh
2021-08-06 22:10:18 -06:00
James Betker
e723137273
Make gpttts more configurable
2021-08-06 22:08:51 -06:00
James Betker
a7496b661c
combined dvae ftw
2021-08-06 22:01:06 -06:00
James Betker
0237e96b34
Fix dvae bug
2021-08-06 14:17:01 -06:00
James Betker
0799d95af5
Use quantizer from rosinality/vqvae with openai dvae
2021-08-06 14:06:26 -06:00
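For reference, that style of quantizer boils down to a nearest-neighbour codebook lookup with a straight-through gradient; the real rosinality implementation also maintains the codebook with an EMA update. A simplified sketch:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleQuantizer(nn.Module):
    def __init__(self, n_codes=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):                               # z: (batch, seq, dim)
        flat = z.reshape(-1, z.shape[-1])
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=-1)
        z_q = self.codebook(idx).view_as(z)
        # Codebook + commitment losses, then the straight-through estimator.
        commit = F.mse_loss(z_q.detach(), z) + F.mse_loss(z_q, z.detach())
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(z.shape[:-1]), commit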
James Betker
d3ace153af
Add logic for performing inference using gpt_tts with dual-encoder modes
2021-08-06 12:04:12 -06:00
James Betker
b43683b772
Add lucidrains_dvae
2021-08-06 12:03:46 -06:00
James Betker
62c7570512
Constrain wav_aug a bit more
2021-08-06 08:19:38 -06:00
James Betker
f126040da2
Undo noise first
2021-08-05 23:24:38 -06:00
James Betker
908ef5495f
Add noise first to audio_aug
2021-08-05 23:22:44 -06:00
James Betker
d6007c6de1
dataset fixes
2021-08-05 23:12:59 -06:00
James Betker
3ca51e80b2
Only fix weird path bug on Windows
2021-08-05 22:21:25 -06:00
James Betker
70dcd1107f
Fix byol_model_wrapper to function with audio inputs
2021-08-05 22:20:22 -06:00
James Betker
f86df53ce0
Export extract_byol_model as a function
2021-08-05 22:15:26 -06:00
James Betker
89d15c9e74
Move gpt-tts back to lucidrains implementation
...
Much better performance.
2021-08-05 22:15:13 -06:00
James Betker
d120e1aa99
Add audio augmentation to wavfile_dataset, utility to test audio similarity
2021-08-05 22:14:49 -06:00
James Betker
c0f61a2e15
Rework how DVAE tokens are ordered
...
It might make more sense to have top tokens, then bottom tokens
with top tokens having different discretized values.
2021-08-05 07:07:17 -06:00
James Betker
4017236ba9
Fix up inference for gpt_tts
2021-08-05 06:46:30 -06:00
James Betker
5037220ac7
Mods to support contrastive learning on audio files
2021-08-05 05:57:04 -06:00
James Betker
341f28dd82
It works!
2021-08-04 20:07:51 -06:00
James Betker
36c7c1fbdb
Fix training flow for NEXT TOKEN prediction instead of same token prediction
...
doh
2021-08-04 10:28:09 -06:00
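The fix amounts to shifting the targets by one position so the model predicts token t+1 from everything up to t, rather than reconstructing the token it was fed. A generic sketch of that loss:

import torch.nn.functional as F

def next_token_loss(logits, tokens):
    # logits: (batch, seq, vocab) computed from `tokens`. Drop the last logit
    # position and the first target position so each step predicts the NEXT token.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )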
James Betker
d9936df363
Add gpt_tts dataset and implement inference
...
- Adds a script which preprocesses quantized mels given a DVAE
- Adds a dataset which can consume preprocessed qmels
- Reworks GPT TTS to consume the outputs of that dataset (removes logic to add padding and start/end tokens)
- Adds inference to gpt_tts
2021-08-04 00:44:04 -06:00
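Conceptually, the preprocessing script runs each mel through the DVAE's quantizer and caches the discrete codes, so the GPT stage never has to touch the DVAE during training. A hedged sketch; get_codebook_indices and the file layout are assumptions, not the script's actual API:

from pathlib import Path
import numpy as np
import torch

@torch.no_grad()
def preprocess_qmels(dvae, mel_paths, out_dir):
    for path in map(Path, mel_paths):
        mel = torch.from_numpy(np.load(path)).unsqueeze(0)   # (1, n_mels, frames)
        codes = dvae.get_codebook_indices(mel)               # assumed DVAE method name
        np.save(Path(out_dir) / f"{path.stem}_qmel.npy", codes.squeeze(0).cpu().numpy())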
James Betker
4c98b9703f
Get dalle-style TTS to "work"
2021-08-03 21:08:27 -06:00
James Betker
2814307eee
Alterations to support VQVAE on mel spectrograms
2021-08-01 07:54:21 -06:00
James Betker
965f6e6b52
Fixes to weight_decay in adamw
2021-07-31 15:58:41 -06:00
James Betker
0c9e75bc69
Improvements to GptTts
2021-07-31 15:57:57 -06:00
James Betker
31ee9ae262
Checkin
2021-07-30 23:07:35 -06:00
James Betker
dadc54795c
Add gpt_tts
2021-07-27 20:33:30 -06:00
James Betker
398185e109
More work on wave-diffusion
2021-07-27 05:36:17 -06:00
James Betker
49e3b310ea
Allow audio sample rate interpolation for faster training
2021-07-26 17:44:06 -06:00
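Interpolating the waveform down to a lower sample rate is a cheap stand-in for proper resampling; a minimal sketch of what that looks like:

import torch.nn.functional as F

def interpolate_sample_rate(wav, orig_sr, new_sr):
    # wav: (batch, samples) float tensor; linear interpolation along the time axis.
    new_len = int(wav.shape[-1] * new_sr / orig_sr)
    return F.interpolate(wav.unsqueeze(1), size=new_len,
                         mode='linear', align_corners=False).squeeze(1)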
James Betker
96e90e7047
Add support for a gaussian-diffusion-based wave tacotron
2021-07-26 16:27:31 -06:00
James Betker
97d7cbbc34
Additional work for audio xformer (which doesn't really do a great job)
2021-07-23 10:58:14 -06:00
James Betker
2325e7a88c
Allow inference for vqvae
2021-07-20 10:40:05 -06:00
James Betker
d81386c1be
Mods to support vqvae in audio mode (1d)
2021-07-20 08:36:46 -06:00
James Betker
5584cfcc7a
tacotron2 work
2021-07-14 21:41:57 -06:00
James Betker
fe0c699ced
Various fixes
2021-07-14 00:08:42 -06:00
James Betker
be2745f42d
Add waveglow & inference capabilities to audio generator
2021-07-08 23:07:36 -06:00
James Betker
1ff434218e
tacotron2, ready for prime time!
2021-07-08 22:13:44 -06:00
James Betker
86fd3ad7fd
Initial checkin of nvidia tacotron model & dataset
...
These two are tested; full support for training is to come.
2021-07-06 11:11:35 -06:00
James Betker
3801d5d55e
diffusion surfin'
2021-07-06 09:36:52 -06:00
James Betker
afa41f1804
Allow hq color jittering and corruptions that are not included in the corruption factor
2021-06-30 09:44:46 -06:00
James Betker
6fd16ea9c8
Add meta-anomaly detection, colorjitter augmentation
2021-06-29 13:41:55 -06:00
James Betker
46e9f62be0
Add unet with latent guide
...
This is a diffusion network that uses both an LQ image
and a reference HQ image that is compressed into
a latent vector to perform upsampling.
The hope is that we can steer the upsampling network
with sample images.
2021-06-26 11:02:58 -06:00
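A minimal sketch of that conditioning pattern, with illustrative module names (it omits the diffusion/timestep machinery the real unet_latent_guide.py carries): the HQ reference is compressed into a latent vector, projected, and added to the LQ branch's features so the reference can steer the upsampling.

import torch
import torch.nn as nn

class LatentGuidedUpsampler(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.ref_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, latent_dim))
        self.latent_proj = nn.Linear(latent_dim, 64)
        self.stem = nn.Conv2d(3, 64, 3, padding=1)
        self.tail = nn.Sequential(
            nn.SiLU(), nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, lq, hq_ref):
        latent = self.ref_encoder(hq_ref)                  # compress reference to a vector
        feat = self.stem(lq) + self.latent_proj(latent)[:, :, None, None]
        return self.tail(feat)                             # 2x upsampled prediction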
James Betker
0ded106562
Merge remote-tracking branch 'origin/master'
2021-06-25 13:16:28 -06:00
James Betker
a57ed8e960
Various mods to support better jpeg image filtering
2021-06-25 13:16:15 -06:00
James Betker
61e7ca39cd
Update image_folder_dataset.py
2021-06-25 11:48:31 -06:00
James Betker
a0ef07ddb8
Create unet_latent_guide.py
2021-06-25 11:25:14 -06:00
James Betker
e7890dc0ba
Misc fixes for diffusion nets
2021-06-21 10:38:07 -06:00
James Betker
8e3a33e001
Fix a bug where non-rank-0 is computing FID before all images are saved.
2021-06-16 16:27:09 -06:00
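The usual fix for that race is a collective barrier so every rank finishes writing its generated images before rank 0 scores them. A sketch; the two callables are hypothetical helpers:

import torch.distributed as dist

def evaluate_fid(save_my_shard_of_images, compute_fid):
    save_my_shard_of_images()   # every rank writes its generated images to disk
    dist.barrier()              # wait until all ranks have finished writing
    fid = compute_fid() if dist.get_rank() == 0 else None
    dist.barrier()              # keep ranks in step before training resumes
    return fid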
James Betker
68cbbed886
Add some cool diffusion testing scripts
2021-06-16 16:26:36 -06:00
James Betker
ae8de0cb9d
Fix FID image saving across all ranks
2021-06-15 10:31:07 -06:00
James Betker
6a75bd0777
Another fix
2021-06-14 09:51:44 -06:00
James Betker
54bff35171
Fix issue where eval was not being used by all ddp processes
2021-06-14 09:50:04 -06:00
James Betker
60079a1572
Fix saver in distributed mode
2021-06-14 09:41:06 -06:00
James Betker
545f2db170
Distribute the FID dataset across processes
2021-06-14 09:33:44 -06:00
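Distributing the evaluation dataset across processes typically means each rank walks a disjoint stride of the indices, which is what DistributedSampler does. A sketch (assumes torch.distributed is already initialized):

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def make_fid_loader(dataset, batch_size=32):
    # Each DDP rank sees a disjoint shard; shuffle=False keeps evaluation deterministic.
    # Note that the sampler pads so every rank receives the same number of samples.
    sampler = DistributedSampler(dataset, shuffle=False, drop_last=False)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)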
James Betker
6b32c87dcb
Try to make diffusion fid more deterministic
2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f
Add FID evaluator for diffusion models
2021-06-14 09:14:30 -06:00
James Betker
9cfe840872
Attempt to fix syncing multiple times when doing gradient accumulation
2021-06-13 14:30:30 -06:00
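With DDP, every backward() normally all-reduces gradients; under gradient accumulation you only want that on the final micro-batch, which is what no_sync() is for. A sketch with hypothetical arguments (ddp_model, loader, opt, compute_loss):

import contextlib

def train_with_accumulation(ddp_model, loader, opt, compute_loss, accum_steps=4):
    for i, batch in enumerate(loader):
        last = (i + 1) % accum_steps == 0
        # Skip DDP's gradient all-reduce on all but the final accumulation step.
        ctx = contextlib.nullcontext() if last else ddp_model.no_sync()
        with ctx:
            loss = compute_loss(ddp_model, batch) / accum_steps
            loss.backward()
        if last:
            opt.step()
            opt.zero_grad()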
James Betker
1cd75dfd33
Fix ddp bug
2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f
Add support for training an EMA network alongside the main networks
2021-06-12 21:01:41 -06:00
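An EMA network is just a frozen copy whose parameters track the live model with an exponential moving average after each optimizer step. A generic sketch, not DLAS's exact hook:

import copy
import torch

def make_ema(model):
    ema = copy.deepcopy(model)
    ema.requires_grad_(False)
    return ema

@torch.no_grad()
def update_ema(ema_model, model, decay=0.999):
    # Call after every optimizer step: ema = decay * ema + (1 - decay) * live.
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1 - decay)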
James Betker
696f320820
Get rid of feature networks
2021-06-11 20:50:07 -06:00
James Betker
65c474eecf
Various changes to fix testing
2021-06-11 15:31:10 -06:00
James Betker
220f11a5e4
Half channel sizes in cifar_resnet
2021-06-09 17:06:37 -06:00
James Betker
aea12e1b9c
Fix cat eval hack
2021-06-09 17:05:11 -06:00
James Betker
9b5f4abb91
Add fade in for hard switch
2021-06-07 18:15:09 -06:00
James Betker
108c5d829c
Fix dropout norm
2021-06-07 16:13:23 -06:00
James Betker
438217094c
Also debug distribution of switch
2021-06-07 15:36:07 -06:00
James Betker
44b09e5f20
Amplify dropout rate
2021-06-07 15:20:53 -06:00
James Betker
f0d4eb9182
Fixor
2021-06-07 11:58:36 -06:00
James Betker
c456a60466
Another go at fixing nan
2021-06-07 11:51:43 -06:00
James Betker
1c574c5bd1
Attempt to fix nan
2021-06-07 11:43:42 -06:00
James Betker
eda796985b
Try out dropout norm
2021-06-07 11:33:33 -06:00
James Betker
6c6e82406e
Pass a corruption factor through the dataset into the upsampling network
...
The intuition is that this will help guide the network to make better-informed decisions
about how it performs upsampling based on how it perceives the underlying content.
(I'm giving up on letting networks detect their own quality; I'm not convinced it is
actually feasible.)
2021-06-07 09:13:54 -06:00
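A rough sketch of what "passing the corruption factor through" can look like: the dataset returns a scalar alongside each LQ image, and the network embeds that scalar and adds it to its features. Names are illustrative only:

import torch
import torch.nn as nn

class CorruptionConditioning(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.embed = nn.Linear(1, channels)

    def forward(self, features, corruption_factor):
        # features: (b, c, h, w); corruption_factor: (b,) scalar per image in [0, 1].
        cond = self.embed(corruption_factor.unsqueeze(-1))
        return features + cond[:, :, None, None]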
James Betker
2ad2b56438
Don't do wandb except on rank 0
2021-06-06 16:52:07 -06:00
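The guard is the standard one: only rank 0 initializes and logs to wandb, and every other DDP rank is a no-op. A sketch:

import torch.distributed as dist
import wandb

def log_metrics(metrics, step=None):
    # Only the rank-0 process talks to wandb; other ranks skip logging entirely.
    if not dist.is_initialized() or dist.get_rank() == 0:
        wandb.log(metrics, step=step)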
James Betker
7c5478bc2c
Formatting issue with gdi
2021-06-06 16:35:37 -06:00
James Betker
061dbcd458
Another fix to anorm
2021-06-06 15:09:49 -06:00
James Betker
9a6991e461
Fix switch norm average
2021-06-06 15:04:28 -06:00
James Betker
57e1a6a0f2
cifar: add hard routing
...
Also modifies switched_routing to support non-pixel inputs
2021-06-06 14:53:43 -06:00
James Betker
692e9c417b
Support diffusion unet
2021-06-06 13:57:22 -06:00
James Betker
a0158ebc69
Simplify cifar resnet further for faster training
2021-06-06 10:02:24 -06:00
James Betker
75567a9814
Only head norm removed
2021-06-05 23:29:11 -06:00
James Betker
65d0376b90
Re-add normalization at the tail of the RRDB
2021-06-05 23:04:05 -06:00
James Betker
184e887122
Remove rrdb normalization
2021-06-05 21:39:19 -06:00
James Betker
f5e75602b9
Add regular attention to cifar_resnet
2021-06-05 21:34:07 -06:00
James Betker
16cd92acd5
hack
2021-06-05 14:23:41 -06:00
James Betker
af52751d6b
Fix device error
2021-06-05 14:21:32 -06:00
James Betker
5f0cc65f3b
Register branched resnet properly
2021-06-05 14:19:03 -06:00
James Betker
fb405d9ef1
CIFAR stuff
...
- Extract coarse labels for the CIFAR dataset
- Add simple resnet that branches lower layers based on coarse labels
- Some other cleanup
2021-06-05 14:16:02 -06:00
James Betker
80d4404367
A few fixes:
...
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
James Betker
fa908a6a15
Fix wandb import issue
2021-06-04 23:27:15 -06:00
James Betker
103a88506e
Log eval to wandb
2021-06-04 23:23:20 -06:00
James Betker
7d45132f60
fdsa
2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5
asdf
2021-06-04 21:24:48 -06:00
James Betker
e6c537824a
Allow validation for ce
2021-06-04 21:21:04 -06:00
James Betker
7c251af7a8
Support cifar100 with resnet
2021-06-04 17:29:07 -06:00
James Betker
bf811f80c1
GD mods & fixes
...
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
James Betker
6084915af8
Support gaussian diffusion models
...
Adds support for GD models, courtesy of some maths from OpenAI.
Also:
- Fixes the requirement for an eval{} block even when it isn't being used
- Adds support for denormalizing an ImageNet norm (a sketch follows below)
2021-06-02 21:47:32 -06:00
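For the denormalization mentioned above, the operation just inverts the standard ImageNet mean/std transform:

import torch

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def denormalize_imagenet(x):
    # x: (b, 3, h, w) normalized with the standard ImageNet statistics.
    return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)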
James Betker
45bc76ba92
Fixes and mods to support training classifiers on imagenet
2021-06-01 17:25:24 -06:00
James Betker
f129eaa39e
Clean up byol a bit
...
- Remove option to augment in the dataset (there's really no reason for this now that kornia works on GPU on Windows)
- Other stuff
2021-05-24 21:35:46 -06:00
James Betker
6649ef2dae
Add zipfilesdataset
2021-05-24 21:35:00 -06:00
James Betker
1a2b9fa130
Get rid of old byol net wrapping
...
Simplifies and makes this usable with DLAS' multi-gpu trainer
2021-04-27 12:48:34 -06:00
James Betker
119f17c808
Add testing capabilities for segformer & contrastive feature
2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e
Get segformer to a trainable state
2021-04-25 11:45:20 -06:00
James Betker
23e01314d4
Add dataset, ui for labeling and evaluator for pointwise classification
2021-04-23 17:17:13 -06:00
James Betker
fc623d4b5a
Add segformer model. Start work on BYOL adaptation that will support training it.
2021-04-23 17:16:46 -06:00