James Betker
68e9db12b5
Add interleaving and direct injectors
2021-12-02 21:04:49 -07:00
James Betker
47fe032a3d
Try to make diffusion validator more reproducible
2021-11-24 09:38:10 -07:00
James Betker
934395d4b8
A few fixes for gpt_asr_hf2
2021-11-23 09:29:29 -07:00
James Betker
973f47c525
misc nonfunctional
2021-11-22 17:16:39 -07:00
James Betker
3125ca38f5
Further wandb logs
2021-11-22 16:40:19 -07:00
James Betker
0604060580
Finish up mods for next version of GptAsrHf
2021-11-20 21:33:49 -07:00
James Betker
14f3155ec4
misc
2021-11-20 17:45:14 -07:00
James Betker
687e0746b3
Add Torch-derived MelSpectrogramInjector
2021-11-18 20:02:45 -07:00
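A Torch-derived mel spectrogram injector of this kind can be sketched with torchaudio; the class shape, config keys, and defaults below are assumptions for illustration, not the repository's actual injector API.

    import torch
    import torchaudio

    # Minimal sketch of a mel-spectrogram injector built on torchaudio.
    # The opt keys ('in', 'out', 'sample_rate', ...) are hypothetical
    # stand-ins for whatever the trainer's injector config actually uses.
    class TorchMelSpectrogramInjector:
        def __init__(self, opt):
            self.input_key = opt.get('in', 'wav')
            self.output_key = opt.get('out', 'mel')
            self.mel = torchaudio.transforms.MelSpectrogram(
                sample_rate=opt.get('sample_rate', 22050),
                n_fft=opt.get('n_fft', 1024),
                hop_length=opt.get('hop_length', 256),
                n_mels=opt.get('n_mels', 80))

        def forward(self, state):
            wav = state[self.input_key]  # (batch, samples)
            with torch.no_grad():
                mel = self.mel.to(wav.device)(wav)
            return {self.output_key: mel}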
James Betker
c30a38cdf1
Undo baseline GDI changes
2021-11-18 20:02:09 -07:00
James Betker
f36bab95dd
Audio resample injector
2021-11-10 20:06:33 -07:00
James Betker
79367f753d
Fix error & add nonfinite warning
2021-11-09 23:58:41 -07:00
James Betker
d43f25cc20
Update losses
2021-11-08 20:10:07 -07:00
James Betker
596a62fe01
Apply fix to gpt_asr_hf and prep it for inference
...
The fix is that we were predicting two characters in advance, not the next character
2021-11-04 10:09:24 -06:00
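The bug described above is the classic target-alignment off-by-one in autoregressive training; a minimal sketch of the intended single-token shift (function and tensor names are hypothetical):

    import torch.nn.functional as F

    # logits: (batch, seq_len, vocab); tokens: (batch, seq_len)
    # Predict token t+1 from positions <= t: shift the targets by exactly
    # one position, dropping the last logit and the first token.
    def next_token_loss(logits, tokens, pad_id=0):
        shifted_logits = logits[:, :-1].contiguous()
        shifted_targets = tokens[:, 1:].contiguous()
        return F.cross_entropy(shifted_logits.transpose(1, 2),
                               shifted_targets, ignore_index=pad_id)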
James Betker
993bd52d42
Add spec_augment injector
2021-11-01 18:43:11 -06:00
James Betker
ee9b199d2b
Build in capacity to revert & resume networks that encounter a NaN
...
I'm increasingly seeing issues where something like this can be useful. In many (most?)
cases it's just a waste of compute, though. Still, it's better than a computer sitting idle
for a whole night.
2021-11-01 16:14:59 -06:00
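A revert-and-resume mechanism like the one described above can be sketched as keeping the last known-good model/optimizer state in memory and rolling back when a non-finite loss appears; the function shape below is illustrative, not the trainer's real interface.

    import copy
    import math

    # `last_good` holds the most recent healthy snapshot; the caller seeds
    # it with an initial snapshot before training starts.
    def training_step(model, optimizer, batch, last_good):
        loss = model(batch).mean()
        if not math.isfinite(loss.item()):
            # Non-finite loss: revert weights and optimizer state, skip batch.
            model.load_state_dict(last_good['model'])
            optimizer.load_state_dict(last_good['opt'])
            return last_good
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return {'model': copy.deepcopy(model.state_dict()),
                'opt': copy.deepcopy(optimizer.state_dict())}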
James Betker
87364b890f
Add custom clip_grad_norm that prints out the param names in error.
2021-11-01 11:12:20 -06:00
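A sketch of what such a clip_grad_norm variant might look like, assuming named parameters are passed in so the offending tensors can be reported by name:

    import torch

    # Like torch.nn.utils.clip_grad_norm_, but prints which named
    # parameters carry non-finite gradients when the total norm blows up.
    def clip_grad_norm_with_names(named_params, max_norm):
        named_params = [(n, p) for n, p in named_params if p.grad is not None]
        norms = {n: p.grad.detach().norm() for n, p in named_params}
        total = torch.norm(torch.stack(list(norms.values())))
        if not torch.isfinite(total):
            bad = [n for n, v in norms.items() if not torch.isfinite(v)]
            print(f'Non-finite gradient norm; offending params: {bad}')
        clip_coef = max_norm / (total + 1e-6)
        if clip_coef < 1:
            for _, p in named_params:
                p.grad.detach().mul_(clip_coef)
        return total

Called as clip_grad_norm_with_names(model.named_parameters(), max_norm).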
James Betker
b404a3b747
Revert recent changes to extr
2021-10-30 20:48:06 -06:00
James Betker
e9dc37f19c
Mod trainer to copy config file into experiments root
2021-10-30 17:00:24 -06:00
James Betker
928e7026c2
Mod STFT injector to be specifiable
2021-10-28 22:34:12 -06:00
James Betker
c3421b7f6d
Dataset work for audio quality processor
2021-10-24 09:09:34 -06:00
James Betker
9a3e89ec53
Force LR fix
2021-10-21 12:01:01 -06:00
James Betker
40cb25292a
Fix force_lr logic
2021-10-21 11:51:30 -06:00
James Betker
d016a2fbad
Go back to vanilla flavor of diffusion
2021-10-17 17:32:46 -06:00
James Betker
4914c526dc
More cleanup
2021-09-29 14:24:49 -06:00
James Betker
e24c619387
Fix
2021-09-23 16:07:58 -06:00
James Betker
5c8d266d4f
chk
2021-09-17 09:15:36 -06:00
James Betker
94899d88f3
Fix overuse of checkpointing
2021-09-16 23:00:28 -06:00
James Betker
f78ce9d924
Get diffusion_dvae ready for prime time!
2021-09-16 22:43:10 -06:00
James Betker
6f48674647
Support diffusion models with extra return values & inference in diffusion_dvae
2021-09-16 10:53:46 -06:00
James Betker
b8f2e0f452
mydvae
2021-09-06 17:45:30 -06:00
James Betker
92e7e57f81
Update diffusion_noise_surfer to support audio
2021-09-01 08:34:47 -06:00
James Betker
3e073cff85
Set kernel_size in diffusion_vocoder
2021-09-01 08:33:46 -06:00
James Betker
dabd87246d
Add unet_diffusion_vocoder
2021-08-31 14:38:33 -06:00
James Betker
909754cc27
Add find_faulty_files.py
2021-08-25 18:00:43 -06:00
James Betker
cfd284f425
Fix up some things so the MEL can be computed on-GPU
2021-08-13 18:35:55 -06:00
James Betker
cdee31c60b
GPT_ASR
2021-08-13 15:02:18 -06:00
James Betker
04d14b3acc
No batch factors for eval
2021-08-09 16:02:01 -06:00
James Betker
82fc69abfa
Add "pure" evaluator
...
Which simply computes the training loss against an eval dataset
2021-08-09 14:58:35 -06:00
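A minimal sketch of a "pure" evaluator in this sense, i.e. the unmodified training loss averaged over an eval dataloader (loader format and names are illustrative):

    import torch

    @torch.no_grad()
    def pure_eval(model, loss_fn, eval_loader, device):
        model.eval()
        total, count = 0.0, 0
        for batch, targets in eval_loader:
            pred = model(batch.to(device))
            total += loss_fn(pred, targets.to(device)).item() * batch.size(0)
            count += batch.size(0)
        model.train()
        return total / max(count, 1)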
James Betker
b43683b772
Add lucidrains_dvae
2021-08-06 12:03:46 -06:00
James Betker
3ca51e80b2
Only fix weird path bug in windows
2021-08-05 22:21:25 -06:00
James Betker
5037220ac7
Mods to support contrastive learning on audio files
2021-08-05 05:57:04 -06:00
James Betker
341f28dd82
It works!
2021-08-04 20:07:51 -06:00
James Betker
4c98b9703f
Get dalle-style TTS to "work"
2021-08-03 21:08:27 -06:00
James Betker
2814307eee
Alterations to support VQVAE on mel spectrograms
2021-08-01 07:54:21 -06:00
James Betker
965f6e6b52
Fixes to weight_decay in adamw
2021-07-31 15:58:41 -06:00
James Betker
0c9e75bc69
Improvements to GptTts
2021-07-31 15:57:57 -06:00
James Betker
96e90e7047
Add support for a gaussian-diffusion-based wave tacotron
2021-07-26 16:27:31 -06:00
James Betker
97d7cbbc34
Additional work for audio xformer (which doesn't really do a great job)
2021-07-23 10:58:14 -06:00
James Betker
2325e7a88c
Allow inference for vqvae
2021-07-20 10:40:05 -06:00
James Betker
d81386c1be
Mods to support vqvae in audio mode (1d)
2021-07-20 08:36:46 -06:00
James Betker
5584cfcc7a
tacotron2 work
2021-07-14 21:41:57 -06:00
James Betker
be2745f42d
Add waveglow & inference capabilities to audio generator
2021-07-08 23:07:36 -06:00
James Betker
1ff434218e
tacotron2, ready for prime time!
2021-07-08 22:13:44 -06:00
James Betker
86fd3ad7fd
Initial checkin of nvidia tacotron model & dataset
...
These two are tested; full support for training to come.
2021-07-06 11:11:35 -06:00
James Betker
6fd16ea9c8
Add meta-anomaly detection, colorjitter augmentation
2021-06-29 13:41:55 -06:00
James Betker
a57ed8e960
Various mods to support better jpeg image filtering
2021-06-25 13:16:15 -06:00
James Betker
e7890dc0ba
Misc fixes for diffusion nets
2021-06-21 10:38:07 -06:00
James Betker
8e3a33e001
Fix a bug where non-rank-0 processes compute FID before all images are saved.
2021-06-16 16:27:09 -06:00
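The intended ordering, sketched with a barrier so rank 0 only computes FID once every rank has finished writing its images (the helper functions are placeholders):

    import torch.distributed as dist

    def distributed_fid(rank, save_my_images, compute_fid, out_dir):
        save_my_images(out_dir)          # each rank saves its shard of images
        if dist.is_initialized():
            dist.barrier()               # wait until every rank has finished
        if rank == 0:
            return compute_fid(out_dir)  # only rank 0 reads the complete set
        return None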
James Betker
68cbbed886
Add some cool diffusion testing scripts
2021-06-16 16:26:36 -06:00
James Betker
ae8de0cb9d
Fix FID image saving across all ranks
2021-06-15 10:31:07 -06:00
James Betker
6a75bd0777
Another fix
2021-06-14 09:51:44 -06:00
James Betker
54bff35171
Fix issue where eval was not being used by all ddp processes
2021-06-14 09:50:04 -06:00
James Betker
60079a1572
Fix saver in distributed mode
2021-06-14 09:41:06 -06:00
James Betker
545f2db170
Distributed FID dataset across processes
2021-06-14 09:33:44 -06:00
James Betker
6b32c87dcb
Try to make diffusion fid more deterministic
2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f
Add FID evaluator for diffusion models
2021-06-14 09:14:30 -06:00
James Betker
9cfe840872
Attempt to fix syncing multiple times when doing gradient accumulation
2021-06-13 14:30:30 -06:00
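One common way to avoid redundant DDP syncs under gradient accumulation is no_sync() on every micro-batch except the last; a sketch of that pattern, which may or may not match the fix taken here (model is assumed to be wrapped in DistributedDataParallel):

    from contextlib import nullcontext

    def accumulate(model, optimizer, micro_batches, loss_fn):
        optimizer.zero_grad()
        n = len(micro_batches)
        for i, batch in enumerate(micro_batches):
            is_last = (i == n - 1)
            # Skip the gradient all-reduce on all but the final micro-batch.
            ctx = nullcontext() if is_last else model.no_sync()
            with ctx:
                loss = loss_fn(model(batch)) / n
                loss.backward()
        optimizer.step()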
James Betker
1cd75dfd33
Fix ddp bug
2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f
Add support for training an EMA network alongside the main networks
2021-06-12 21:01:41 -06:00
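The usual way to keep an EMA copy alongside the trained networks is an exponential update of the EMA parameters after each optimizer step; a minimal sketch (the decay value is a typical choice, not necessarily this repo's):

    import torch

    @torch.no_grad()
    def update_ema(ema_model, model, decay=0.999):
        ema_params = dict(ema_model.named_parameters())
        for name, param in model.named_parameters():
            ema_params[name].mul_(decay).add_(param.detach(), alpha=1 - decay)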
James Betker
696f320820
Get rid of feature networks
2021-06-11 20:50:07 -06:00
James Betker
65c474eecf
Various changes to fix testing
2021-06-11 15:31:10 -06:00
James Betker
aea12e1b9c
Fix cat eval hack
2021-06-09 17:05:11 -06:00
James Betker
2ad2b56438
Don't do wandb except on rank 0
2021-06-06 16:52:07 -06:00
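A minimal sketch of the rank-0 gating idea (the rank plumbing is illustrative, and wandb.init() is assumed to have already been called on rank 0 only):

    import wandb

    def log_metrics(metrics, step, rank):
        if rank != 0:
            return               # non-rank-0 processes never touch wandb
        wandb.log(metrics, step=step)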
James Betker
7c5478bc2c
Formatting issue with gdi
2021-06-06 16:35:37 -06:00
James Betker
692e9c417b
Support diffusion unet
2021-06-06 13:57:22 -06:00
James Betker
16cd92acd5
hack
2021-06-05 14:23:41 -06:00
James Betker
80d4404367
A few fixes:
...
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
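Predicting x_start from eps follows the standard DDPM identity x_0 = (x_t - sqrt(1 - a_bar_t) * eps) / sqrt(a_bar_t); a sketch:

    import torch

    # alphas_cumprod: 1-D tensor of cumulative products of (1 - beta_t);
    # t: per-sample timestep indices with shape (batch,).
    def predict_xstart_from_eps(x_t, t, eps, alphas_cumprod):
        a_bar = alphas_cumprod[t].view(-1, *([1] * (x_t.dim() - 1)))
        return (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)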
James Betker
7d45132f60
fdsa
2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5
asdf
2021-06-04 21:24:48 -06:00
James Betker
bf811f80c1
GD mods & fixes
...
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
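Respacing in the guided-diffusion sense keeps an evenly spaced subset of the original timesteps and recomputes betas over that subset so sampling runs in fewer steps; a sketch of the idea, not this repo's implementation:

    import numpy as np

    def respace_timesteps(num_original_steps, num_respaced_steps):
        # Evenly spaced subset of the original step indices.
        return set(np.linspace(0, num_original_steps - 1,
                               num_respaced_steps, dtype=int).tolist())

    def respaced_betas(alphas_cumprod, use_timesteps):
        # Rebuild betas so the shortened schedule preserves alpha_cumprod
        # at the kept timesteps.
        last_acp, new_betas = 1.0, []
        for t, acp in enumerate(alphas_cumprod):
            if t in use_timesteps:
                new_betas.append(1 - acp / last_acp)
                last_acp = acp
        return np.array(new_betas)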
James Betker
6084915af8
Support gaussian diffusion models
...
Adds support for GD models, courtesy of some maths from OpenAI.
Also:
- Fixes the requirement for eval{} even when it isn't being used
- Adds support for denormalizing an ImageNet norm
2021-06-02 21:47:32 -06:00
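Denormalizing an ImageNet norm just inverts the standard mean/std normalization; a minimal sketch:

    import torch

    IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

    def denormalize_imagenet(x):
        # x: (batch, 3, H, W) normalized with ImageNet statistics.
        return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)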
James Betker
45bc76ba92
Fixes and mods to support training classifiers on imagenet
2021-06-01 17:25:24 -06:00
James Betker
119f17c808
Add testing capabilities for segformer & contrastive feature
2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e
Get segformer to a trainable state
2021-04-25 11:45:20 -06:00
James Betker
23e01314d4
Add dataset, ui for labeling and evaluator for pointwise classification
2021-04-23 17:17:13 -06:00
James Betker
17555e7d07
misc adjustments for stylegan
2021-04-21 18:14:17 -06:00
James Betker
f89ea5f1c6
Mods to support lightweight_gan model
2021-03-02 20:51:48 -07:00
James Betker
784b96c059
Misc options to add support for training stylegan2-rosinality models:
...
- Allow image_folder_dataset to normalize inbound images
- ExtensibleTrainer can denormalize images on the output path
- Support .webp - an output from LSUN
- Support logistic GAN divergence loss
- Support stylegan2 TF weight extraction for discriminator
- New injector that produces latent noise (with separated paths)
- Modify FID evaluator to be operable with rosinality-style GANs
2021-02-08 08:09:21 -07:00
James Betker
320edbaa3c
Move switched_conv logic around a bit
2021-02-02 20:41:24 -07:00
James Betker
0dca36946f
Hard Routing mods
...
- Turns out my custom convolution was RIDDLED with backwards bugs, which is
why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
for testing purposes.
2021-02-02 20:35:58 -07:00
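The switch logic referenced here, in the Switch Transformers sense, is top-1 routing: a linear gate scores the experts, each token is dispatched to its argmax expert, and the winning gate probability scales the expert's output. A sketch of that routing (not this repo's hard-routing code):

    import torch
    import torch.nn.functional as F

    def switch_route(x, gate, experts):
        # x: (tokens, dim); gate: nn.Linear(dim, num_experts);
        # experts: list/ModuleList of per-expert modules mapping dim -> dim.
        probs = F.softmax(gate(x), dim=-1)
        top_p, top_idx = probs.max(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(experts):
            mask = top_idx == e
            if mask.any():
                out[mask] = top_p[mask, None] * expert(x[mask])
        return out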
James Betker
97d895aebe
Add SrPixLoss, which focuses pixel-based losses on high-frequency regions
...
of the image.
2021-01-25 08:26:14 -07:00
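One way to focus a pixel loss on high-frequency regions is to weight the per-pixel error by a high-frequency map, e.g. the target minus a blurred copy of itself; the sketch below illustrates that idea and is not the exact SrPixLoss formulation:

    import torch.nn.functional as F

    def hf_weighted_pix_loss(pred, target, blur_kernel=5, floor=0.1):
        # High-frequency estimate: residual between the target and a
        # box-blurred version of it; `floor` keeps flat regions weighted.
        pad = blur_kernel // 2
        blurred = F.avg_pool2d(target, blur_kernel, stride=1, padding=pad)
        hf_weight = (target - blurred).abs().mean(dim=1, keepdim=True) + floor
        return (hf_weight * (pred - target).abs()).mean()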
James Betker
587a4f4050
resnet_unet_3
...
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of a direction I want to go for testing these pixpro networks.
2021-01-15 14:51:03 -07:00
James Betker
34f8c8641f
Support training imagenet classifier
2021-01-11 20:09:16 -07:00
James Betker
48f0d8964b
Allow dist_backend to be specified in options
2021-01-09 20:54:32 -07:00
James Betker
7c6c7a8014
Fix process_video
2021-01-09 20:53:46 -07:00
James Betker
acf1535b14
Fix for randomresizedcrop injector
2021-01-07 16:31:43 -07:00
James Betker
04961b91cf
Add random-crop injector
2021-01-07 12:14:55 -07:00
James Betker
2c65b6b28e
More mods to support styledsr
2021-01-04 11:32:28 -07:00
James Betker
4d8064c32c
Modifications to allow partially trained stylegan discriminators to be used
2021-01-03 16:37:18 -07:00
James Betker
ce6524184c
Do the last commit but in a better way
2021-01-02 22:24:12 -07:00
James Betker
edf9c38198
Make ExtensibleTrainer set the starting step for the LR scheduler
2021-01-02 22:22:34 -07:00
James Betker
bdbab65082
Allow optimizers to train separate param groups, add higher dimensional VGG discriminator
...
Did this to support training 512x512px networks off of a pretrained 256x256 network.
2021-01-02 15:10:06 -07:00
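Separate param groups let different parts of the network run with different optimizer settings, e.g. a reduced LR for the pretrained 256x256 trunk while the new 512x512 layers train at the full LR; a sketch with illustrative names (the 'trunk.' prefix is hypothetical):

    import torch

    def build_optimizer(model, base_lr=1e-4):
        pretrained, fresh = [], []
        for name, p in model.named_parameters():
            (pretrained if name.startswith('trunk.') else fresh).append(p)
        return torch.optim.Adam([
            {'params': pretrained, 'lr': base_lr * 0.1},  # pretrained layers
            {'params': fresh, 'lr': base_lr},             # newly added layers
        ])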
James Betker
193cdc6636
Move discriminators to the create_model paradigm
...
Also cleans up a lot of old discriminator models that I have no intention
of using again.
2021-01-01 15:56:09 -07:00
James Betker
7976a5825d
srfid is incorrectly labeled
2021-01-01 13:00:59 -07:00
James Betker
e992e18767
Add initial_stride term to style_sr
...
Also fixes FID and a networks.py issue.
2021-01-01 11:59:36 -07:00
James Betker
9864fe4c04
Fix for train.py
2021-01-01 11:59:00 -07:00
James Betker
8f0984cacf
Add sr_fid evaluator
2020-12-30 20:18:58 -07:00
James Betker
9c53314ea2
Add gradient penalty visual debug
2020-12-30 09:51:59 -07:00
James Betker
63cf3d3126
Injector auto-registration
...
I love it!
2020-12-29 20:58:02 -07:00
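Auto-registration usually boils down to a decorator that adds each injector class to a module-level registry at import time, so configs can instantiate injectors by name; a sketch with hypothetical names:

    INJECTOR_REGISTRY = {}

    def register_injector(name):
        def wrap(cls):
            INJECTOR_REGISTRY[name] = cls
            return cls
        return wrap

    @register_injector('random_crop')
    class RandomCropInjector:
        def __init__(self, opt):
            self.size = opt.get('size', 256)

    def create_injector(opt):
        # opt['type'] names the injector to build, e.g. 'random_crop'.
        return INJECTOR_REGISTRY[opt['type']](opt)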
James Betker
a777c1e4f9
Misc script fixes
2020-12-29 20:25:09 -07:00
James Betker
3fd627fc62
Mods to support image classification & filtering
2020-12-26 13:49:27 -07:00
James Betker
10fdfa1563
Migrate generators to dynamic model registration
2020-12-24 23:02:10 -07:00
James Betker
29db7c7a02
Further mods to BYOL
2020-12-24 09:28:41 -07:00
James Betker
036684893e
Add LARS optimizer & support for BYOL idiosyncrasies
...
- Added LARS and SGD optimizer variants that support turning off certain
features for BN and bias layers
- Added a variant of PyTorch's ResNet model that supports gradient checkpointing.
- Modified the trainer infrastructure to support the above
- Fixed a bug with BYOL (which should have made it nonfunctional)
2020-12-23 20:33:43 -07:00
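The core of LARS is a layer-wise trust ratio, and the BYOL-style variant excludes biases and BatchNorm parameters from both the adaptation and weight decay; a sketch of a single step, not this repo's optimizer code:

    import torch

    @torch.no_grad()
    def lars_step(named_params, lr, weight_decay=1e-6, eta=0.001):
        for name, p in named_params:
            if p.grad is None:
                continue
            g = p.grad
            # 1-D params (BN weights/biases) and biases skip LARS and decay.
            exclude = p.ndim == 1 or name.endswith('.bias')
            if not exclude:
                g = g + weight_decay * p
                w_norm, g_norm = p.norm(), g.norm()
                if w_norm > 0 and g_norm > 0:
                    g = g * (eta * w_norm / g_norm)   # trust ratio
            p.add_(g, alpha=-lr)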
James Betker
1bbcb96ee8
Implement a few changes to support training BYOL networks
2020-12-23 10:50:23 -07:00
James Betker
f35c034fa5
Add trainer readme
2020-12-18 16:52:16 -07:00
James Betker
92f9a129f7
GLEAN!
2020-12-18 16:04:19 -07:00
James Betker
d875ca8342
More refactor changes
2020-12-18 09:24:31 -07:00
James Betker
5640e4efe4
More refactoring
2020-12-18 09:18:34 -07:00