Commit Graph

1685 Commits

Author SHA1 Message Date
James Betker
2afea126d7 mod trainer to be very explicit about the fact that loading models and state together doesn't work, but allow it 2021-10-28 22:32:42 -06:00
James Betker
bb0a0c8264 classify_into_folders script 2021-10-27 14:56:16 -06:00
James Betker
d91dcbd404 Make classifier inference script more open 2021-10-27 13:18:54 -06:00
James Betker
58494b0888 Add support for distilling gpt_asr 2021-10-27 13:10:07 -06:00
James Betker
5d714bc566 Add deepspeech model and support for decoding with it 2021-10-27 13:09:46 -06:00
James Betker
15437b2fc3 WER script 2021-10-26 13:30:29 -06:00
James Betker
3a9d1c53ea Rework how conditioning inputs are provided 2021-10-26 10:46:33 -06:00
James Betker
21b6daa0ed Introduce clip resampling 2021-10-26 10:42:23 -06:00
James Betker
43e389aac6 Add time_embed_dim_multiplier 2021-10-26 08:55:55 -06:00
James Betker
ba6e46c02a Further simplify diffusion_vocoder and make noise_surfer work 2021-10-26 08:54:30 -06:00
James Betker
c3421b7f6d Dataset work for audio quality processor 2021-10-24 09:09:34 -06:00
James Betker
0ee1c67ce5 Rework how conditioning inputs are applied to DiffusionVocoder 2021-10-24 09:08:58 -06:00
James Betker
b1248e7114 Get rid of filter_urbansounds 2021-10-21 16:46:04 -06:00
James Betker
06ea6191a9 Initial implementation of audio_with_noise dataset 2021-10-21 16:45:19 -06:00
James Betker
9a3e89ec53 Force LR fix 2021-10-21 12:01:01 -06:00
James Betker
40cb25292a Fix force_lr logic 2021-10-21 11:51:30 -06:00
James Betker
0dee15f875 base DVAE & vector_quantizer 2021-10-20 21:19:38 -06:00
James Betker
f2a31702b5 Clean stuff up, move more things into arch_util 2021-10-20 21:19:25 -06:00
James Betker
a6f0f854b9 Fix codes when inferring from dvae 2021-10-17 22:51:17 -06:00
James Betker
d016a2fbad Go back to vanilla flavor of diffusion 2021-10-17 17:32:46 -06:00
James Betker
23da073037 Norm decoder outputs now 2021-10-16 09:07:10 -06:00
James Betker
0edc98f6c4 Throw out the idea of conditioning on discrete codes. Oh well :( 2021-10-16 09:02:01 -06:00
James Betker
62c8c5d93e Zero out spectrogram code inputs initially. 2021-10-15 12:10:11 -06:00
James Betker
1d0b44ebc2 More tweaks to diffusion-vocoder 2021-10-15 11:51:17 -06:00
James Betker
3b19581f9a Allow num_resblocks to be specified per-level 2021-10-14 11:26:04 -06:00
James Betker
83798887a8 Mods to support unet diffusion vocoder with conditioning 2021-10-13 21:23:18 -06:00
James Betker
c861054218 Restore spleeter_splitter
The mods don't help - in TF mode, everything is done on the GPU anyway. Something else
is going to have to be done to fix this.
2021-10-09 23:55:42 -06:00
James Betker
32ba496632 More fixes 2021-10-09 23:27:14 -06:00
James Betker
932ea29a83 Add multiprocessing to the spleeter splitter script to try and improve performance further 2021-10-09 23:15:36 -06:00
James Betker
b94e587f46 Improvements to spleeter_filter_noisy_clips 2021-10-07 21:28:00 -06:00
James Betker
33120cb35c Add norming to discretization_loss 2021-10-06 17:10:50 -06:00
James Betker
bb891a3a53 Add partitioning and improved resuming to the spleeter filtering 2021-10-06 17:10:12 -06:00
James Betker
f2977d360c Allow attention_dim in channel attention to be specified, add converter 2021-10-05 17:29:38 -06:00
James Betker
9c0d7288ea Discretization loss attempt 2021-10-04 20:59:21 -06:00
James Betker
66f99a159c Rev2 2021-10-03 15:20:50 -06:00
James Betker
09f373e3b1 Add dvae with channel attention 2021-10-03 10:52:01 -06:00
James Betker
0396a9d2ca Increase baseline codes recording across all dvae models 2021-09-30 08:09:07 -06:00
James Betker
f84ccbdfb2 Fix quantizer with balancing_heuristic 2021-09-29 14:46:05 -06:00
James Betker
4914c526dc More cleanup 2021-09-29 14:24:49 -06:00
James Betker
6e550edfe3 Attentive dvae 2021-09-29 14:17:29 -06:00
James Betker
fc8ae4679a Work on spleeter filtering script 2021-09-29 09:24:56 -06:00
James Betker
55b58fb67f Clean up codebase
Remove stuff that I'm likely not going to use again (or generally failed experiments)
2021-09-29 09:21:44 -06:00
James Betker
4d1a42e944 Add switchnorm to gumbel_quantizer 2021-09-24 18:49:25 -06:00
James Betker
ac57cdc794 Add scheduling to quantizer, enable cudnn_benchmarking to be disabled 2021-09-24 17:01:36 -06:00
James Betker
3e64e847c2 Gumbel quantizer 2021-09-23 23:32:03 -06:00
James Betker
c5297ccec6 Add dvae balancing heuristic 2021-09-23 21:19:36 -06:00
James Betker
e24c619387 Fix 2021-09-23 16:07:58 -06:00
James Betker
6833048bf7 Alterations to diffusion_dvae so it can be used directly on spectrograms 2021-09-23 15:56:25 -06:00
James Betker
97ea329a59 Make spleeter filter simpler (and hopefully much faster) 2021-09-17 15:29:42 -06:00
James Betker
359e9e27a7 unsupervised_audio_dataset: try to recover from failures of audio2numpy 2021-09-17 15:25:57 -06:00
James Betker
5c8d266d4f chk 2021-09-17 09:15:36 -06:00
James Betker
a6544f1684 More checkpointing fixes 2021-09-16 23:12:43 -06:00
James Betker
94899d88f3 Fix overuse of checkpointing 2021-09-16 23:00:28 -06:00
James Betker
f78ce9d924 Get diffusion_dvae ready for prime time! 2021-09-16 22:43:10 -06:00
James Betker
1197ae1928 Misc 2021-09-16 10:53:56 -06:00
James Betker
6f48674647 Support diffusion models with extra return values & inference in diffusion_dvae 2021-09-16 10:53:46 -06:00
James Betker
8d9857f33d More fixes 2021-09-14 20:45:05 -06:00
James Betker
9a9c90660f Fixes 2021-09-14 18:29:17 -06:00
James Betker
4334a67924 Spleeter mods 2021-09-14 17:43:40 -06:00
James Betker
0382660159 Get diffusion_dvae functional 2021-09-14 17:43:31 -06:00
James Betker
e513052fca Add unsupervised_audio_dataset 2021-09-14 17:43:16 -06:00
James Betker
bc603c3231 Script adjustments and fixes 2021-09-12 21:26:45 -06:00
James Betker
76e2c497f7 Improvements to splitter 2021-09-09 23:34:56 -06:00
James Betker
742f9b4010 Batch spleeter cleaner using GPU 2021-09-09 23:14:32 -06:00
James Betker
73b930c0f6 Add diffusion_dvae
Increase split_on_silence interval
2021-09-09 16:22:05 -06:00
James Betker
b8f2e0f452 mydvae 2021-09-06 17:45:30 -06:00
James Betker
92e7e57f81 Update diffusion_noise_surfer to support audio 2021-09-01 08:34:47 -06:00
James Betker
3e073cff85 Set kernel_size in diffusion_vocoder 2021-09-01 08:33:46 -06:00
James Betker
30cd33fe44 another fix 2021-08-31 14:46:46 -06:00
James Betker
8810d3de97 fix wavfile_dataset 2021-08-31 14:45:29 -06:00
James Betker
dabd87246d Add unet_diffusion_vocoder 2021-08-31 14:38:33 -06:00
James Betker
274d352e6f dug 2021-08-30 21:45:58 -06:00
James Betker
f1a0c21fb2 asr_eval 2021-08-30 21:41:34 -06:00
James Betker
ed6eae407f More scripts for splitting and formatting audio 2021-08-30 21:20:52 -06:00
James Betker
909754cc27 Add find_faulty_files.py 2021-08-25 18:00:43 -06:00
James Betker
08b33c8e3a Support silu activation 2021-08-25 09:03:14 -06:00
James Betker
67bf7f5219 dvae mods
Trying to squeeze as much performance out of this net as possible
2021-08-25 08:55:13 -06:00
James Betker
d05cc1f46c Misc 2021-08-24 17:12:04 -06:00
James Betker
9dfe936c16 Fix ddp for sampler 2021-08-19 16:45:34 -06:00
James Betker
b521d94b01 Make gpt-asr more configurable 2021-08-19 16:33:41 -06:00
James Betker
570ed327ed Stop dataset - attempt #2 2021-08-18 18:29:38 -06:00
James Betker
17453ccbe8 Revert mods to lrdvae
They didn't really change anything
2021-08-17 09:09:29 -06:00
James Betker
8332923f5c Two more tools to test the audio segmentor 2021-08-17 09:09:11 -06:00
James Betker
7c086d0c2c libritts - only write on successful check 2021-08-16 22:52:55 -06:00
James Betker
93e903af15 Rework wavfile dataset to be usable for things other than augments 2021-08-16 22:52:35 -06:00
James Betker
d7f30232c3 Oh yeah 2021-08-16 22:52:15 -06:00
James Betker
4c01d82265 Fix for voxpopuli 2021-08-16 22:52:05 -06:00
James Betker
1fede41b7b Audio segmentor 2021-08-16 22:51:53 -06:00
James Betker
2d3372054d Add support for voxpopuli to nv_tacotron_dataset 2021-08-16 17:13:40 -06:00
James Betker
729c1fd5a9 Fix up max lengths to save memory 2021-08-15 21:29:28 -06:00
James Betker
9e47e64d5a Add gpt_segmentor model
The idea is to specifically train a model that extracts phrases from
audio clips.
2021-08-15 21:23:07 -06:00
James Betker
a826d5f658 Mods to dvae
- Add resblock to each layer
- Increase filter size for each layer
- Use SiLU
2021-08-15 20:54:10 -06:00
James Betker
b8bec22f1a Fix gpt_asr inference bug 2021-08-15 20:53:42 -06:00
James Betker
3580c52eac Fix up wavfile_dataset to be able to provide a full clip 2021-08-15 20:53:26 -06:00
James Betker
a523c4f932 Auto-normalize wav files by data type 2021-08-15 09:09:51 -06:00
James Betker
98057b6516 Make lrdvae use quantized mode in eval() 2021-08-14 23:43:01 -06:00
James Betker
c28f657ab8 Allow usage of pre-rendered mels saved to npy files 2021-08-14 23:38:15 -06:00
James Betker
ad3391bd96 Fix nan issue when interpolating audio 2021-08-14 20:42:01 -06:00
James Betker
769f0acc53 Moar fix 2021-08-14 17:23:15 -06:00
James Betker
3d2e724083 Fix audio ranging problem 2021-08-14 17:18:55 -06:00
James Betker
d6a73acaed Allow processing of multiple audio sources at once from nv_tacotron_dataset 2021-08-14 16:04:05 -06:00
James Betker
007976082b GPT_asr for inference 2021-08-14 14:37:17 -06:00
James Betker
e1bdd3f7c7 Fix gpt_asr bug. Initial implementation of beam search 2021-08-13 22:47:00 -06:00
James Betker
72622b4d61 Allow saving mel strips as files from the dataset implementation 2021-08-13 22:46:41 -06:00
James Betker
cfd284f425 Fix up some stuff that allows the MEL to be computed on-GPU 2021-08-13 18:35:55 -06:00
James Betker
cdee31c60b GPT_ASR 2021-08-13 15:02:18 -06:00
James Betker
81e91c99de Misc 2021-08-13 13:58:59 -06:00
James Betker
fff1a59e08 max/min mel invalid fix 2021-08-13 09:36:31 -06:00
James Betker
4b2946e581 More fix 2021-08-12 15:51:23 -06:00
James Betker
4c76257c71 Don't require collation for nv_tacotron 2021-08-12 15:44:55 -06:00
James Betker
5b07d3b623 Found error that I was trying to fix with reload=True 2021-08-12 15:22:34 -06:00
James Betker
430b650a34 ...... 2021-08-12 10:31:10 -06:00
James Betker
b35d6ae028 Print some metrics from tacotron dataset when it croaks 2021-08-12 09:21:12 -06:00
James Betker
0c4d6b1916 Just offer generic re-load for nv-tacotron 2021-08-12 09:09:12 -06:00
James Betker
154f5aa73c Fix annoying warning and add to requirements 2021-08-11 17:32:06 -06:00
James Betker
f5a9b88ef6 tacotron cleaners: remove quotation marks
these don't really have relevance for tts or asr
2021-08-11 16:18:44 -06:00
James Betker
20586a8edc Fix LRDVAE bug with quantizer integration 2021-08-11 16:17:22 -06:00
James Betker
f04a7bdf63 Bug fixes for tacotron dataset on mozilla cv
- Support a max mel length (mozilla cv has some tracks that are basically unbounded..)
- Don't fail on low sample rates (mozilla cv has some of those)
2021-08-11 16:17:03 -06:00
James Betker
2d3f0cc33c nv_tacotron_dataset - Allow training on mozilla cv 2021-08-11 13:34:31 -06:00
James Betker
d0c74278bf Enable multiple wavfile paths to be specified, fix eps bug in mp3 splitter 2021-08-11 08:46:02 -06:00
James Betker
e19c00398e More improvements to random_mp3_splitter 2021-08-09 21:31:12 -06:00
James Betker
04d14b3acc No batch factors for eval 2021-08-09 16:02:01 -06:00
James Betker
82fc69abfa Add "pure" evaluator
Which simply computes the training loss against an eval dataset
2021-08-09 14:58:35 -06:00
James Betker
080bea2f19 No, really 2021-08-09 12:02:31 -06:00
James Betker
e1ce4671e4 Apply dropout to gpt_tts, get rid of min_gpt implementation 2021-08-09 12:01:10 -06:00
James Betker
74342b860b Revert "Undo forced text padding"
This reverts commit 83ab5e6a00.
2021-08-09 11:56:34 -06:00
James Betker
1068f53b78 Add a sampling beam search 2021-08-09 11:56:06 -06:00
James Betker
d4e33bf15f Fixes to the mp3 splitter 2021-08-09 11:55:46 -06:00
James Betker
4100469902 Add a tool to split mp3 files into arbitrary chunks of wav files 2021-08-08 23:23:13 -06:00
James Betker
01cfae28d8 Beam search implementation in one pass? Dayyyum 2021-08-08 23:22:42 -06:00
James Betker
83ab5e6a00 Undo forced text padding 2021-08-08 11:42:20 -06:00
James Betker
690d7e86d3 Fix nv_tacotron_dataset bug which incorrectly mapped filenames
dammit..
2021-08-08 11:38:52 -06:00
James Betker
a2afb25e42 Fix inference, always flow full text tokens through transformer 2021-08-07 20:11:10 -06:00
James Betker
4c678172d6 ugh 2021-08-06 22:10:18 -06:00
James Betker
e723137273 Make gpttts more configurable 2021-08-06 22:08:51 -06:00
James Betker
a7496b661c combined dvae ftw 2021-08-06 22:01:06 -06:00
James Betker
0237e96b34 Fix dvae bug 2021-08-06 14:17:01 -06:00
James Betker
0799d95af5 Use quantizer from rosinality/vqvae with openai dvae 2021-08-06 14:06:26 -06:00
James Betker
d3ace153af Add logic for performing inference using gpt_tts with dual-encoder modes 2021-08-06 12:04:12 -06:00
James Betker
b43683b772 Add lucidrains_dvae 2021-08-06 12:03:46 -06:00
James Betker
62c7570512 Constrain wav_aug a bit more 2021-08-06 08:19:38 -06:00
James Betker
f126040da2 Undo noise first 2021-08-05 23:24:38 -06:00
James Betker
908ef5495f Add noise first to audio_aug 2021-08-05 23:22:44 -06:00
James Betker
d6007c6de1 dataset fixes 2021-08-05 23:12:59 -06:00
James Betker
3ca51e80b2 Only fix weird path bug in windows 2021-08-05 22:21:25 -06:00
James Betker
70dcd1107f Fix byol_model_wrapper to function with audio inputs 2021-08-05 22:20:22 -06:00
James Betker
f86df53ce0 Export extract_byol_model as a function 2021-08-05 22:15:26 -06:00
James Betker
89d15c9e74 Move gpt-tts back to lucidrains implementation
Much better performance.
2021-08-05 22:15:13 -06:00
James Betker
d120e1aa99 Add audio augmentation to wavfile_dataset, utility to test audio similarity 2021-08-05 22:14:49 -06:00
James Betker
c0f61a2e15 Rework how DVAE tokens are ordered
It might make more sense to have top tokens, then bottom tokens
with top tokens having different discretized values.
2021-08-05 07:07:17 -06:00
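To make the token ordering idea above concrete, here is a minimal, purely illustrative sketch (function and argument names are hypothetical, not from this repo): top-level codes are emitted first and shifted into their own id range, so the two levels use disjoint discretized values.

```python
import torch

def order_dvae_tokens(top_codes, bottom_codes, codebook_size):
    # Top tokens first, offset by the codebook size so their discretized
    # values never collide with bottom-token ids.
    top = top_codes.flatten(1) + codebook_size
    bottom = bottom_codes.flatten(1)
    return torch.cat([top, bottom], dim=1)  # [batch, n_top + n_bottom]
```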
James Betker
4017236ba9 Fix up inference for gpt_tts 2021-08-05 06:46:30 -06:00
James Betker
5037220ac7 Mods to support contrastive learning on audio files 2021-08-05 05:57:04 -06:00
James Betker
341f28dd82 It works! 2021-08-04 20:07:51 -06:00
James Betker
36c7c1fbdb Fix training flow for NEXT TOKEN prediction instead of same token prediction
doh
2021-08-04 10:28:09 -06:00
James Betker
d9936df363 Add gpt_tts dataset and implement inference
- Adds a script which preprocesses quantized mels given a DVAE
- Adds a dataset which can consume preprocessed qmels
- Reworks GPT TTS to consume the outputs of that dataset (removes logic to add padding and start/end tokens)
- Adds inference to gpt_tts
2021-08-04 00:44:04 -06:00
James Betker
4c98b9703f Get dalle-style TTS to "work" 2021-08-03 21:08:27 -06:00
James Betker
2814307eee Alterations to support VQVAE on mel spectrograms 2021-08-01 07:54:21 -06:00
James Betker
965f6e6b52 Fixes to weight_decay in adamw 2021-07-31 15:58:41 -06:00
James Betker
0c9e75bc69 Improvements to GptTts 2021-07-31 15:57:57 -06:00
James Betker
31ee9ae262 Checkin 2021-07-30 23:07:35 -06:00
James Betker
dadc54795c Add gpt_tts 2021-07-27 20:33:30 -06:00
James Betker
398185e109 More work on wave-diffusion 2021-07-27 05:36:17 -06:00
James Betker
49e3b310ea Allow audio sample rate interpolation for faster training 2021-07-26 17:44:06 -06:00
James Betker
96e90e7047 Add support for a gaussian-diffusion-based wave tacotron 2021-07-26 16:27:31 -06:00
James Betker
97d7cbbc34 Additional work for audio xformer (which doesn't really do a great job) 2021-07-23 10:58:14 -06:00
James Betker
2325e7a88c Allow inference for vqvae 2021-07-20 10:40:05 -06:00
James Betker
d81386c1be Mods to support vqvae in audio mode (1d) 2021-07-20 08:36:46 -06:00
James Betker
5584cfcc7a tacotron2 work 2021-07-14 21:41:57 -06:00
James Betker
fe0c699ced Various fixes 2021-07-14 00:08:42 -06:00
James Betker
be2745f42d Add waveglow & inference capabilities to audio generator 2021-07-08 23:07:36 -06:00
James Betker
1ff434218e tacotron2, ready for prime time! 2021-07-08 22:13:44 -06:00
James Betker
86fd3ad7fd Initial checkin of nvidia tacotron model & dataset
These two are tested, full support for training to come.
2021-07-06 11:11:35 -06:00
James Betker
3801d5d55e diffusion surfin' 2021-07-06 09:36:52 -06:00
James Betker
afa41f1804 Allow hq color jittering and corruptions that are not included in the corruption factor 2021-06-30 09:44:46 -06:00
James Betker
6fd16ea9c8 Add meta-anomaly detection, colorjitter augmentation 2021-06-29 13:41:55 -06:00
James Betker
46e9f62be0 Add unet with latent guide
This is a diffusion network that uses both a LQ image
and a reference sample HQ image that is compressed into
a latent vector to perform upsampling

The hope is that we can steer the upsampling network
with sample images.
2021-06-26 11:02:58 -06:00
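A rough sketch of the architecture described above, assuming hypothetical `unet` and `ref_encoder` modules (the `conditioning` keyword is an assumption, not this repo's actual interface):

```python
import torch.nn as nn

class LatentGuidedUnet(nn.Module):
    def __init__(self, unet, ref_encoder, latent_dim):
        super().__init__()
        self.unet = unet                # diffusion U-Net operating on the LQ image
        self.ref_encoder = ref_encoder  # compresses the HQ reference to a vector
        self.proj = nn.Linear(latent_dim, latent_dim)

    def forward(self, lq, ref_hq, timesteps):
        # The reference HQ image steers the upsampler via a compact latent.
        latent = self.proj(self.ref_encoder(ref_hq))
        return self.unet(lq, timesteps, conditioning=latent)
```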
James Betker
0ded106562 Merge remote-tracking branch 'origin/master' 2021-06-25 13:16:28 -06:00
James Betker
a57ed8e960 Various mods to support better jpeg image filtering 2021-06-25 13:16:15 -06:00
James Betker
61e7ca39cd Update image_folder_dataset.py 2021-06-25 11:48:31 -06:00
James Betker
a0ef07ddb8 Create unet_latent_guide.py 2021-06-25 11:25:14 -06:00
James Betker
e7890dc0ba Misc fixes for diffusion nets 2021-06-21 10:38:07 -06:00
James Betker
8e3a33e001 Fix a bug where non-rank-0 is computing FID before all images are saved. 2021-06-16 16:27:09 -06:00
James Betker
68cbbed886 Add some cool diffusion testing scripts 2021-06-16 16:26:36 -06:00
James Betker
ae8de0cb9d fid saving images across all rank fix 2021-06-15 10:31:07 -06:00
James Betker
6a75bd0777 Another fix 2021-06-14 09:51:44 -06:00
James Betker
54bff35171 Fix issue where eval was not being used by all ddp processes 2021-06-14 09:50:04 -06:00
James Betker
60079a1572 Fix saver in distributed mode 2021-06-14 09:41:06 -06:00
James Betker
545f2db170 Distributed FID dataset across processes 2021-06-14 09:33:44 -06:00
James Betker
6b32c87dcb Try to make diffusion fid more deterministic 2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f Add FID evaluator for diffusion models 2021-06-14 09:14:30 -06:00
James Betker
9cfe840872 Attempt to fix syncing multiple times when doing gradient accumulation 2021-06-13 14:30:30 -06:00
James Betker
1cd75dfd33 Fix ddp bug 2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f Add support for training an EMA network alongside the main networks 2021-06-12 21:01:41 -06:00
James Betker
696f320820 Get rid of feature networks 2021-06-11 20:50:07 -06:00
James Betker
65c474eecf Various changes to fix testing 2021-06-11 15:31:10 -06:00
James Betker
220f11a5e4 Half channel sizes in cifar_resnet 2021-06-09 17:06:37 -06:00
James Betker
aea12e1b9c Fix cat eval hack 2021-06-09 17:05:11 -06:00
James Betker
9b5f4abb91 Add fade in for hard switch 2021-06-07 18:15:09 -06:00
James Betker
108c5d829c Fix dropout norm 2021-06-07 16:13:23 -06:00
James Betker
438217094c Also debug distribution of switch 2021-06-07 15:36:07 -06:00
James Betker
44b09e5f20 Amplify dropout rate 2021-06-07 15:20:53 -06:00
James Betker
f0d4eb9182 Fixor 2021-06-07 11:58:36 -06:00
James Betker
c456a60466 Another go at fixing nan 2021-06-07 11:51:43 -06:00
James Betker
1c574c5bd1 Attempt to fix nan 2021-06-07 11:43:42 -06:00
James Betker
eda796985b Try out dropout norm 2021-06-07 11:33:33 -06:00
James Betker
6c6e82406e Pass a corruption factor through the dataset into the upsampling network
The intuition is this will help guide the network to make better informed decisions
about how it performs upsampling based on how it perceives the underlying content.

(I'm giving up on letting networks detect their own quality - I'm not convinced it is
actually feasible)
2021-06-07 09:13:54 -06:00
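One simple way to feed such a factor to the network, sketched here under the assumption that it is appended as a constant conditioning channel (names hypothetical):

```python
import torch

def append_corruption_channel(lq, corruption_factor):
    # Broadcast the per-image corruption factor from the dataset to a constant
    # feature map and concatenate it to the LQ input, letting the upsampler
    # condition on how degraded its input is.
    b, _, h, w = lq.shape
    cf = corruption_factor.view(b, 1, 1, 1).expand(b, 1, h, w)
    return torch.cat([lq, cf], dim=1)
```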
James Betker
2ad2b56438 Don't do wandb except on rank 0 2021-06-06 16:52:07 -06:00
James Betker
7c5478bc2c Formatting issue with gdi 2021-06-06 16:35:37 -06:00
James Betker
061dbcd458 Another fix to anorm 2021-06-06 15:09:49 -06:00
James Betker
9a6991e461 Fix switch norm average 2021-06-06 15:04:28 -06:00
James Betker
57e1a6a0f2 cifar: add hard routing
Also mods switched_routing to support non-pixular inputs
2021-06-06 14:53:43 -06:00
James Betker
692e9c417b Support diffusion unet 2021-06-06 13:57:22 -06:00
James Betker
a0158ebc69 Simplify cifar resnet further for faster training 2021-06-06 10:02:24 -06:00
James Betker
75567a9814 Only head norm removed 2021-06-05 23:29:11 -06:00
James Betker
65d0376b90 Re-add normalization at the tail of the RRDB 2021-06-05 23:04:05 -06:00
James Betker
184e887122 Remove rrdb normalization 2021-06-05 21:39:19 -06:00
James Betker
f5e75602b9 Add regular attention to cifar_resnet 2021-06-05 21:34:07 -06:00
James Betker
16cd92acd5 hack 2021-06-05 14:23:41 -06:00
James Betker
af52751d6b Fix device error 2021-06-05 14:21:32 -06:00
James Betker
5f0cc65f3b Register branched resnet properly 2021-06-05 14:19:03 -06:00
James Betker
fb405d9ef1 CIFAR stuff
- Extract coarse labels for the CIFAR dataset
- Add simple resnet that branches lower layers based on coarse labels
- Some other cleanup
2021-06-05 14:16:02 -06:00
James Betker
80d4404367 A few fixes:
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
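The first item uses a standard gaussian-diffusion identity (the same one as in openai's guided-diffusion): given the noised sample x_t and the predicted noise eps, x_0 is recovered as sqrt(1/ᾱ_t)·x_t − sqrt(1/ᾱ_t − 1)·eps. A minimal sketch:

```python
def predict_xstart_from_eps(x_t, eps, sqrt_recip_alphas_cumprod,
                            sqrt_recipm1_alphas_cumprod, t):
    # Buffers are the precomputed per-timestep schedules sqrt(1 / a-bar_t)
    # and sqrt(1 / a-bar_t - 1); t is a [batch] tensor of timestep indices.
    return (sqrt_recip_alphas_cumprod[t].view(-1, 1, 1, 1) * x_t
            - sqrt_recipm1_alphas_cumprod[t].view(-1, 1, 1, 1) * eps)
```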
James Betker
fa908a6a15 Fix wandb import issue 2021-06-04 23:27:15 -06:00
James Betker
103a88506e Log eval to wandb 2021-06-04 23:23:20 -06:00
James Betker
7d45132f60 fdsa 2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5 asdf 2021-06-04 21:24:48 -06:00
James Betker
e6c537824a Allow validation for ce 2021-06-04 21:21:04 -06:00
James Betker
7c251af7a8 Support cifar100 with resnet 2021-06-04 17:29:07 -06:00
James Betker
bf811f80c1 GD mods & fixes
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
James Betker
6084915af8 Support gaussian diffusion models
Adds support for GD models, courtesy of some maths from openai.

Also:
- Fixes requirement for eval{} even when it isn't being used
- Adds support for denormalizing an imagenet norm
2021-06-02 21:47:32 -06:00
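The last bullet (denormalizing an ImageNet norm) just inverts the standard torchvision normalization; a minimal sketch:

```python
import torch

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def denormalize_imagenet(x):
    # Inverse of the (x - mean) / std applied by torchvision's ImageNet transform.
    return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)
```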
James Betker
45bc76ba92 Fixes and mods to support training classifiers on imagenet 2021-06-01 17:25:24 -06:00
James Betker
f129eaa39e Clean up byol a bit
- Remove option to aug in dataset (there's really no reason for this now that kornia works on GPU on Windows)
- Other stuff
2021-05-24 21:35:46 -06:00
James Betker
6649ef2dae Add zipfilesdataset 2021-05-24 21:35:00 -06:00
James Betker
1a2b9fa130 Get rid of old byol net wrapping
Simplifies and makes this usable with DLAS' multi-gpu trainer
2021-04-27 12:48:34 -06:00
James Betker
119f17c808 Add testing capabilities for segformer & contrastive feature 2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e Get segformer to a trainable state 2021-04-25 11:45:20 -06:00
James Betker
23e01314d4 Add dataset, ui for labeling and evaluator for pointwise classification 2021-04-23 17:17:13 -06:00
James Betker
fc623d4b5a Add segformer model. Start work on BYOL adaptation that will support training it. 2021-04-23 17:16:46 -06:00
James Betker
17555e7d07 misc adjustments for stylegan 2021-04-21 18:14:17 -06:00
James Betker
b687ef4cd0 Misc 2021-04-21 18:09:46 -06:00
James Betker
94e069bced Misc changes 2021-03-13 10:45:26 -07:00
James Betker
9fc3df3f5b Switched conv: add conversion function with allowlist 2021-03-13 10:44:56 -07:00
James Betker
cf9a6da889 Fix some bugs, checkin work on vqvae3 2021-03-02 20:56:19 -07:00
James Betker
f89ea5f1c6 Mods to support lightweight_gan model 2021-03-02 20:51:48 -07:00
James Betker
543d459b4e extract_temporal_squares script
For extracting related patches across a video
2021-02-08 08:10:24 -07:00
James Betker
39fd755baa New benchmark numbers 2021-02-08 08:09:41 -07:00
James Betker
784b96c059 Misc options to add support for training stylegan2-rosinality models:
- Allow image_folder_dataset to normalize inbound images
- ExtensibleTrainer can denormalize images on the output path
- Support .webp - an output from LSUN
- Support logistic GAN divergence loss
- Support stylegan2 TF weight extraction for discriminator
- New injector that produces latent noise (with separated paths)
- Modify FID evaluator to be operable with rosinality-style GANs
2021-02-08 08:09:21 -07:00
James Betker
e7be4bdff3 Revert 2021-02-05 08:43:07 -07:00
James Betker
6dec1f5968 Back to groupnorm 2021-02-05 08:42:11 -07:00
James Betker
336f807c8e lambda2 2021-02-05 00:00:24 -07:00
James Betker
025a5867c4 Use syncbatchnorm instead 2021-02-04 22:26:36 -07:00
James Betker
bb79fafb89 Fix groupnorm specification 2021-02-04 22:15:38 -07:00
James Betker
43da1f9c4b Convert lambda coupler to use groupnorm instead of batchnorm 2021-02-04 21:59:44 -07:00
James Betker
7070142805 Make vqvae3_hard more configurable 2021-02-04 09:03:22 -07:00
James Betker
b980028ca8 Add get_debug_values for vqvae_3_hardswitch 2021-02-03 14:12:24 -07:00
James Betker
1405ff06b8 Fix SwitchedConvHardRoutingFunction for current cuda router 2021-02-03 14:11:55 -07:00
James Betker
d7bec392dd ... 2021-02-02 23:50:25 -07:00
James Betker
b0a8fa00bc Visual dbg in vqvae3hs 2021-02-02 23:50:01 -07:00
James Betker
f5f91850fd hardswitch variant of vqvae3 2021-02-02 21:00:04 -07:00
James Betker
320edbaa3c Move switched_conv logic around a bit 2021-02-02 20:41:24 -07:00
James Betker
0dca36946f Hard Routing mods
- Turns out my custom convolution was RIDDLED with backwards bugs, which is
   why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
  for testing purposes.
2021-02-02 20:35:58 -07:00
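For reference, the Switch Transformers-style logic mentioned above is top-1 routing: a linear router scores the experts, the argmax expert is selected, and the winning softmax probability gates the output. A generic sketch, not this repo's implementation:

```python
import torch
import torch.nn.functional as F

def switch_route(x, router_weight):
    # x: [batch, dim]; router_weight: [dim, n_experts]
    logits = x @ router_weight
    probs = F.softmax(logits, dim=-1)
    gate, expert_idx = probs.max(dim=-1)  # hard top-1 selection + soft gate
    return gate, expert_idx
```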
James Betker
29c1c3bede Register vqvae3 2021-01-29 15:26:28 -07:00
James Betker
bc20b4739e vqvae3
Changes VQVAE as so:
- Reverts back to smaller codebook
- Adds an additional conv layer at the highest resolution for both the encoder & decoder
- Uses LeakyReLU on trunk
2021-01-29 15:24:26 -07:00
James Betker
96bc80313c Add switch norm, up dropout rate, detach selector 2021-01-26 09:31:53 -07:00
James Betker
97d895aebe Add SrPixLoss, which focuses pixel-based losses on high-frequency regions
of the image.
2021-01-25 08:26:14 -07:00
James Betker
2cdac6bd09 Add PWCNet for human optical flow 2021-01-25 08:25:44 -07:00
James Betker
51b63b2aa6 Add switched_conv with hard routing and make vqvae use it. 2021-01-25 08:25:29 -07:00
James Betker
ae4ff4a1e7 Enable lambda visualization 2021-01-23 15:53:27 -07:00
James Betker
10ec6bda1d lambda nets in switched_conv and a vqvae to use it 2021-01-23 14:57:57 -07:00
James Betker
b374dcdd46 update vqvae to double codebook size for bottom quantizer 2021-01-23 13:47:07 -07:00
James Betker
dac7d768fa test uresnet playground mods 2021-01-23 13:46:43 -07:00
James Betker
1b8a26db93 New switched_conv 2021-01-23 13:46:30 -07:00
James Betker
557cdec116 misc 2021-01-23 13:45:17 -07:00
James Betker
d919ae7148 Add VQVAE with no Conv2dTranspose 2021-01-18 08:49:59 -07:00
James Betker
587a4f4050 resnet_unet_3
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of a direction I want to go for testing these pixpro networks.
2021-01-15 14:51:03 -07:00
James Betker
038b8654b6 Pixpro: unwrap losses 2021-01-13 11:54:25 -07:00
James Betker
8990801a3f Fix pixpro stochastic sampling bugs 2021-01-13 11:34:24 -07:00
James Betker
19475a072f Pixpro: Rather than using a latent square for pixpro, use an entirely stochastic sampling of the pixels 2021-01-13 11:26:51 -07:00
James Betker
d1007ccfe7 Adjustments to pixpro to allow training against networks with arbitrarily large structural latents
- The pixpro latent now rescales the latent space instead of using a "coordinate vector", which
   **might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
   out of the entire latent, allowing further memory/computational discounts. Since the loss
   computation does not have a receptive field, this should not alter the loss.
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
2021-01-12 09:17:45 -07:00
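The second bullet can be pictured as follows; a sketch with hypothetical names, relying on the fact stated above that the loss has no receptive field, so training on a random patch leaves its expectation unchanged:

```python
import torch

def sample_latent_patch(latent, patch_size):
    # Compute the pixelwise loss against a small random crop of the latent
    # instead of the full map, saving memory and compute.
    b, c, h, w = latent.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    return latent[:, :, top:top + patch_size, left:left + patch_size]
```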
James Betker
34f8c8641f Support training imagenet classifier 2021-01-11 20:09:16 -07:00
James Betker
f3db381fa1 Allow uresnet to use pretrained resnet50 2021-01-10 12:57:31 -07:00
James Betker
4119cd6240 Fix image_folder_dataset to accommodate images with mismatched dimensions 2021-01-10 12:57:21 -07:00
James Betker
48f0d8964b Allow dist_backend to be specified in options 2021-01-09 20:54:32 -07:00
James Betker
14a868e8e6 byol playground updates 2021-01-09 20:54:21 -07:00
James Betker
7c6c7a8014 Fix process_video 2021-01-09 20:53:46 -07:00
James Betker
07168ecfb4 Enable vqvae to use a switched_conv variant 2021-01-09 20:53:14 -07:00
James Betker
41b7d50944 Update extract_square_images 2021-01-08 13:16:34 -07:00
James Betker
5a8156026a Did anyone ask for k-means clustering?
This is so cool...
2021-01-07 22:37:41 -07:00
James Betker
acf1535b14 Fix for randomresizedcrop injector 2021-01-07 16:31:43 -07:00
James Betker
659814c20f BYOL script updates 2021-01-07 16:31:28 -07:00
James Betker
de10c7246a Add injected noise into bypass maps 2021-01-07 16:31:12 -07:00
James Betker
04961b91cf Add random-crop injector 2021-01-07 12:14:55 -07:00
James Betker
61a86a3c1e VQVAE 2021-01-07 10:20:15 -07:00
James Betker
01a589e712 Adjustments to pixpro & resnet-unet
I'm not really satisfied with what I got out of these networks on round 1.
Lets try again..
2021-01-06 15:00:46 -07:00
James Betker
9680294430 Move byol scripts around 2021-01-06 14:52:17 -07:00
James Betker
2f2f87bbea Styled SR fixes 2021-01-05 20:14:39 -07:00
James Betker
9fed90393f Add lucidrains pixpro trainer 2021-01-05 20:14:22 -07:00
James Betker
39a94c74b5 Allow BYOL resnet playground to produce a latent dict 2021-01-04 20:11:29 -07:00
James Betker
ade2732c82 Transfer learning for styleSR
This is a concept from "Lifelong Learning GAN", although I'm skeptical of its novelty -
basically you scale and shift the weights for the generator and discriminator of a pretrained
GAN to "shift" into new modalities, e.g. faces->birds or whatever. There are some interesting
applications of this that I would like to try out.
2021-01-04 20:10:48 -07:00
James Betker
2c65b6b28e More mods to support styledsr 2021-01-04 11:32:28 -07:00
James Betker
2225fe6ac2 Undo lucidrains changes for new discriminator
This "new" code will live in the styledsr directory from now on.
2021-01-04 10:57:09 -07:00
James Betker
40ec71da81 Move styled_sr into its own folder 2021-01-04 10:54:34 -07:00
James Betker
5916f5f7d4 Misc fixes 2021-01-04 10:53:53 -07:00
James Betker
4d8064c32c Modifications to allow partially trained stylegan discriminators to be used 2021-01-03 16:37:18 -07:00
James Betker
5e7ade0114 ImageFolderDataset - corrupt lq images alongside each other 2021-01-03 16:36:38 -07:00
James Betker
ce6524184c Do the last commit but in a better way 2021-01-02 22:24:12 -07:00
James Betker
edf9c38198 Make ExtensibleTrainer set the starting step for the LR scheduler 2021-01-02 22:22:34 -07:00
James Betker
bdbab65082 Allow optimizers to train separate param groups, add higher dimensional VGG discriminator
Did this to support training 512x512px networks off of a pretrained 256x256 network.
2021-01-02 15:10:06 -07:00
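Separate param groups are plain PyTorch: each dict passed to the optimizer carries its own hyperparameters, so a pretrained 256px trunk can train at a lower LR than freshly added 512px layers. The modules below are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

trunk = nn.Linear(64, 64)  # stands in for the pretrained 256px network
head = nn.Linear(64, 3)    # stands in for the new 512px layers

optimizer = torch.optim.Adam([
    {'params': trunk.parameters(), 'lr': 1e-5},
    {'params': head.parameters(), 'lr': 1e-4},
])
```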
James Betker
193cdc6636 Move discriminators to the create_model paradigm
Also cleans up a lot of old discriminator models that I have no intention
of using again.
2021-01-01 15:56:09 -07:00
James Betker
7976a5825d srfid is incorrectly labeled 2021-01-01 13:00:59 -07:00
James Betker
f39179e85a styled_sr: fix bug when using initial_stride 2021-01-01 12:13:21 -07:00
James Betker
913fc3b75e Need init to pick up styled_sr 2021-01-01 12:10:32 -07:00
James Betker
aae65e6ed8 Mods to byol_resnet_playground for large batches 2021-01-01 11:59:54 -07:00
James Betker
e992e18767 Add initial_stride term to style_sr
Also fix fid and a networks.py issue.
2021-01-01 11:59:36 -07:00
James Betker
9864fe4c04 Fix for train.py 2021-01-01 11:59:00 -07:00
James Betker
e214e6ce33 Styled SR model 2020-12-31 20:54:18 -07:00
James Betker
0eb1f4dd67 Revert "Get rid of CUDA_VISIBLE_DEVICES"
It is actually necessary for training in distributed mode. Only
do it then.
2020-12-31 10:31:40 -07:00
James Betker
8de5a02a48 byol_resnet_playground
Similar to the spinenet playground, but tinkers with resnet instead
2020-12-31 10:15:04 -07:00
James Betker
8f18b2709e Get rid of CUDA_VISIBLE_DEVICES
It is not clear to me what the purpose of this is, but it has recently
started causing failures.
2020-12-31 10:13:58 -07:00
James Betker
1de1fa30ac Disable refs and centers altogether in single_image_dataset
I suspect that this might be a cause of failures on parallel datasets.
Plus it is unnecessary computation.
2020-12-31 10:13:24 -07:00
James Betker
8f0984cacf Add sr_fid evaluator 2020-12-30 20:18:58 -07:00
James Betker
b1fb82476b Add gp debug (fix) 2020-12-30 15:26:54 -07:00
James Betker
9c53314ea2 Add gradient penalty visual debug 2020-12-30 09:51:59 -07:00
James Betker
63cf3d3126 Injector auto-registration
I love it!
2020-12-29 20:58:02 -07:00
James Betker
a777c1e4f9 Misc script fixes 2020-12-29 20:25:09 -07:00
James Betker
9dc3c8f0ff Script updates 2020-12-29 20:24:41 -07:00
James Betker
ba543d1152 Glean mods
- Fixes fixed upscale factor issues
- Refines a few ops to decrease computation & parameterization
2020-12-27 12:25:06 -07:00
James Betker
5e2e605a50 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-26 13:51:19 -07:00
James Betker
f9be049adb GLEAN mod to support custom initial strides 2020-12-26 13:51:14 -07:00
James Betker
2706a84f15 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-26 13:50:34 -07:00
James Betker
90e2362c00 Fix bug with full_image_dataset 2020-12-26 13:50:27 -07:00
James Betker
3fd627fc62 Mods to support image classification & filtering 2020-12-26 13:49:27 -07:00
James Betker
10fdfa1563 Migrate generators to dynamic model registration 2020-12-24 23:02:10 -07:00
James Betker
29db7c7a02 Further mods to BYOL 2020-12-24 09:28:41 -07:00
James Betker
036684893e Add LARS optimizer & support for BYOL idiosyncrasies
- Added LARS and SGD optimizer variants that support turning off certain
  features for BN and bias layers
- Added a variant of pytorch's resnet model that supports gradient checkpointing.
- Modify the trainer infrastructure to support above
- Fix bug with BYOL (should have been nonfunctional)
2020-12-23 20:33:43 -07:00
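The BN/bias carve-out in the first bullet follows the usual BYOL/LARS convention of exempting 1-D parameters from weight decay and trust-ratio adaptation. A sketch of such a split; the `lars_exclude` key is a hypothetical flag a custom LARS optimizer would read:

```python
def split_params_for_byol(model):
    decay, no_decay = [], []
    for name, p in model.named_parameters():
        # Batch-norm weights/biases and all biases are 1-D tensors.
        (no_decay if p.ndim <= 1 else decay).append(p)
    return [{'params': decay, 'weight_decay': 1e-6},
            {'params': no_decay, 'weight_decay': 0.0, 'lars_exclude': True}]
```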
James Betker
1bbcb96ee8 Implement a few changes to support training BYOL networks 2020-12-23 10:50:23 -07:00
James Betker
2437b33e74 Fix srflow_latent_space_playground bug 2020-12-22 15:42:38 -07:00
James Betker
e7aeb17404 ImageFolder dataset: allow intermediary downscale before corrupt
For massive upscales (ex: 8x), corruption does almost nothing when applied
at the HQ level. This patch adds support to perform corruption at a specified
intermediary scale. The dataset downscales to this level, performs the corruption,
then downscales the rest of the way to get the LQ image.
2020-12-22 15:42:21 -07:00
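The pipeline described above, as a minimal sketch (the `corrupt_fn` and scale arguments are hypothetical): downscale to the intermediate scale, corrupt where the corruption still has a visible effect, then finish the downscale to LQ.

```python
import torch.nn.functional as F

def make_lq(hq, corrupt_fn, final_scale=8, corrupt_scale=2):
    mid = F.interpolate(hq, scale_factor=1 / corrupt_scale, mode='area')
    mid = corrupt_fn(mid)  # e.g. jpeg artifacts, noise, blur
    return F.interpolate(mid, scale_factor=corrupt_scale / final_scale, mode='area')
```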
James Betker
7938f9f50b Fix bug with single_image_dataset which prevented working on multiple directories from working 2020-12-19 15:13:46 -07:00
James Betker
ae666dc520 Fix bugs with srflow after refactor 2020-12-19 10:28:23 -07:00
James Betker
4328c2f713 Change default ReLU slope to .2 BREAKS COMPATIBILITY
This conforms my ConvGnLelu implementation with the generally accepted negative_slope=.2. I have no idea where I got .1. This will break backwards compatibility with some older models but will likely improve their performance when freshly trained. I did some auditing to find what these models might be, and I am not actively using any of them, so probably OK.
2020-12-19 08:28:03 -07:00
James Betker
9377d34ac3 glean mods 2020-12-19 08:26:07 -07:00
James Betker
f35c034fa5 Add trainer readme 2020-12-18 16:52:16 -07:00
James Betker
e82f4552db Update other docs with dumb config options 2020-12-18 16:21:28 -07:00
James Betker
92f9a129f7 GLEAN! 2020-12-18 16:04:19 -07:00
James Betker
c717765bcb Notes for lucidrains converter. 2020-12-18 09:55:38 -07:00
James Betker
b4720ea377 Move stylegan to new location 2020-12-18 09:52:36 -07:00
James Betker
1708136b55 Commit my attempt at "conforming" the lucidrains stylegan implementation to the reference spec. Not working. will probably be abandoned. 2020-12-18 09:51:48 -07:00
James Betker
209332292a Rosinality stylegan fix 2020-12-18 09:50:41 -07:00
James Betker
d875ca8342 More refactor changes 2020-12-18 09:24:31 -07:00
James Betker
5640e4efe4 More refactoring 2020-12-18 09:18:34 -07:00
James Betker
b905b108da Large cleanup
Removed a lot of old code that I won't be touching again. Refactored some
code elements into more logical places.
2020-12-18 09:10:44 -07:00
James Betker
2f0a52b7db misc changes 2020-12-18 08:53:45 -07:00
James Betker
a8179ff53c Image label work 2020-12-18 08:53:18 -07:00
James Betker
3074f41877 Get rosinality model converter to work
Mostly, just needed to remove the custom cuda ops, not so bueno on Windows.
2020-12-17 16:03:39 -07:00
James Betker
e838c6e75b Rosinality stylegan2 port 2020-12-17 14:18:46 -07:00
James Betker
12cf052889 Add an image patch labeling UI 2020-12-17 10:16:21 -07:00
James Betker
49327b99fe SRFlow outputs RRDB output 2020-12-16 10:28:02 -07:00
James Betker
c25b49bb12 Clean up of SRFlowNet_arch 2020-12-16 10:27:38 -07:00
James Betker
42ac8e3eeb Remove unnecessary comment from SRFlowNet 2020-12-16 09:43:07 -07:00
James Betker
fb2cfc795b Update requirements, add image_patch_classifier tool 2020-12-16 09:42:50 -07:00
James Betker
09de3052ac Add softmax to spinenet classification head 2020-12-16 09:42:15 -07:00
James Betker
4310e66848 Fix bug in 'corrupt_before_downsize=true' 2020-12-16 09:41:59 -07:00
James Betker
8661207d57 Merge branch 'gan_lab' of https://github.com/neonbjb/DL-Art-School into gan_lab 2020-12-15 17:16:48 -07:00
James Betker
fc376d34b2 Spinenet with logits head 2020-12-15 17:16:19 -07:00
James Betker
8e0e883050 Mods to support labeled datasets & random augs for those datasets 2020-12-15 17:15:56 -07:00
James Betker
e5a3e6b9b5 srflow latent space misc 2020-12-14 23:59:49 -07:00
James Betker
1e14635d88 Add exclusions to extract_subimages_with_ref 2020-12-14 23:59:41 -07:00
James Betker
0a19e53df0 BYOL mods 2020-12-14 23:59:11 -07:00
James Betker
ef7eabf457 Allow RRDB to upscale 8x 2020-12-14 23:58:52 -07:00
James Betker
087e9280ed Add labeling feature to image_folder_dataset 2020-12-14 23:58:37 -07:00
James Betker
ec0ee25f4b Structural latents checkpoint 2020-12-11 12:01:09 -07:00
James Betker
26ceca68c0 BYOL with structure! 2020-12-10 15:07:35 -07:00
James Betker
9c5e272a22 Script to extract models from a wrapped BYOL model 2020-12-10 09:57:52 -07:00
James Betker
a5630d282f Get rid of 2nd trainer 2020-12-10 09:57:38 -07:00
James Betker
8e4b9f42fd New BYOL dataset which uses a form of RandomCrop that lends itself to
structural guidance to the latents.
2020-12-10 09:57:18 -07:00
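One plausible reading of such a RandomCrop variant, sketched here as an assumption rather than the repo's actual dataset code: take two overlapping crops and record their offsets, so corresponding latent coordinates between the two views can later be matched.

```python
import torch

def paired_random_crop(img, crop=224):
    # img: [c, h, w]. Returns two crops plus their (top, left) offsets.
    _, h, w = img.shape
    crops, offsets = [], []
    for _ in range(2):
        top = torch.randint(0, h - crop + 1, (1,)).item()
        left = torch.randint(0, w - crop + 1, (1,)).item()
        crops.append(img[:, top:top + crop, left:left + crop])
        offsets.append((top, left))
    return crops, offsets
```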
James Betker
c203cee31e Allow swapping to torch DDP as needed in code 2020-12-09 15:03:59 -07:00
James Betker
66cbae8731 Add random_dataset for testing 2020-12-09 14:55:05 -07:00
James Betker
97ff25a086 BYOL!
Man, is there anything ExtensibleTrainer can't train? :)
2020-12-08 13:07:53 -07:00
James Betker
5369cba8ed Stage 2020-12-08 00:33:07 -07:00
James Betker
bca59ed98a Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-07 12:51:04 -07:00
James Betker
ea56eb61f0 Fix DDP errors for discriminator
- Don't define training_net in define_optimizers - this drops the shell and leads to problems downstream
- Get rid of support for multiple training nets per opt. This was half baked and needs a better solution if needed
   downstream.
2020-12-07 12:50:57 -07:00
James Betker
c0aeaabc31 Spinenet playground 2020-12-07 12:49:32 -07:00
James Betker
88fc049c8d spinenet latent playground! 2020-12-05 20:30:36 -07:00
James Betker
11155aead4 Directly use dataset keys
This has been a long time coming. Cleans up messy "GT" nomenclature and simplifies ExtensibleTrainer.feed_data
2020-12-04 20:14:53 -07:00
James Betker
8a83b1c716 Go back to apex DDP, fix distributed bugs 2020-12-04 16:39:21 -07:00
James Betker
7a81d4e2f4 Revert gaussian loss changes 2020-12-04 12:49:20 -07:00
James Betker
711780126e Cleanup 2020-12-03 23:42:51 -07:00
James Betker
ac7256d4a3 Do tqdm reporting when calculating flow_gaussian_nll 2020-12-03 23:42:29 -07:00
James Betker
dc9ff8e05b Allow the majority of the srflow steps to checkpoint 2020-12-03 23:41:57 -07:00
James Betker
06d1c62c5a iGPT support!
Sweeeeet
2020-12-03 15:32:21 -07:00
James Betker
c18adbd606 Delete mdcn & panet
Garbage, all of it.
2020-12-02 22:25:57 -07:00
James Betker
f2880b33c9 Get rid of mean shift from MDCN 2020-12-02 14:18:33 -07:00
James Betker
8a00f15746 Implement FlowGaussianNll evaluator 2020-12-02 14:09:54 -07:00
James Betker
edf408508c Fix discriminator 2020-12-01 17:45:56 -07:00
James Betker
c963e5f2ce Add ImageFolderDataset
This one has been a long time coming.. How does torch not have something like this?
2020-12-01 17:45:37 -07:00
James Betker
9a421a41f4 SRFlow: accommodate mismatches between global scale and flow_scale 2020-12-01 11:11:51 -07:00
James Betker
8f65f81ddb Adjustments to subimage extractor 2020-12-01 11:11:30 -07:00
James Betker
e343722d37 Add stepped rrdb 2020-12-01 11:11:15 -07:00
James Betker
2e0bbda640 Remove unused archs 2020-12-01 11:10:48 -07:00
James Betker
a1c8300052 Add mdcn 2020-11-30 16:14:21 -07:00
James Betker
1e0f69e34b extra_conv in gn discriminator, multiframe support in rrdb. 2020-11-29 15:39:50 -07:00
James Betker
da604752e6 Misc RRDB changes 2020-11-29 12:21:31 -07:00
James Betker
f2422f1d75 Latent space playground 2020-11-29 09:33:29 -07:00
James Betker
a1d4c9f83c multires rrdb work 2020-11-28 14:35:46 -07:00
James Betker
929cd45c05 Fix for RRDB scale 2020-11-27 21:37:10 -07:00
James Betker
71fa532356 Adjustments to how flow networks set size and scale 2020-11-27 21:37:00 -07:00
James Betker
6f958bb150 Maybe this is necessary after all? 2020-11-27 15:21:13 -07:00
James Betker
ef8d5f88c1 Bring split gaussian nll out of split so it can be computed accurately with the rest of the nll component 2020-11-27 13:30:21 -07:00
James Betker
11d2b70bdd Latent space playground work 2020-11-27 12:03:16 -07:00
James Betker
4ab49b0d69 RRDB disc work 2020-11-27 12:03:08 -07:00
James Betker
6de4dabb73 Remove srflow (modified version)
Starting from orig and re-working from there.
2020-11-27 12:02:06 -07:00
James Betker
5f5420ff4a Update to srflow_latent_space_playground 2020-11-26 20:31:21 -07:00
James Betker
fd356580c0 Play with lambdas 2020-11-26 20:30:55 -07:00
James Betker
0c6d7971b9 Dataset documentation 2020-11-26 11:58:39 -07:00
James Betker
45a489110f Fix datasets 2020-11-26 11:50:38 -07:00
James Betker
5edaf085e0 Adjustments to latent_space_playground 2020-11-25 15:52:36 -07:00
James Betker
205c9a5335 Learn how to functionally use srflow networks 2020-11-25 13:59:06 -07:00
James Betker
cb045121b3 Expose srflow rrdb 2020-11-24 13:20:20 -07:00
James Betker
f3c1fc1bcd Dataset modifications 2020-11-24 13:20:12 -07:00
James Betker
f6098155cd Mods to tecogan to allow use of embeddings as input 2020-11-24 09:24:02 -07:00
James Betker
b10bcf6436 Rework stylegan_for_sr to incorporate structure as an adain block 2020-11-23 11:31:11 -07:00
James Betker
519ba6f10c Support 2x RRDB with 4x srflow 2020-11-21 14:46:15 -07:00
James Betker
cad92bada8 Report logp and logdet for srflow 2020-11-21 10:13:05 -07:00
James Betker
c37d3faa58 More adjustments to srflow_orig 2020-11-20 19:38:33 -07:00
James Betker
d51d12a41a Adjustments to srflow to (maybe?) fix training 2020-11-20 14:44:24 -07:00
James Betker
6c8c35ac47 Support training RRDB encoder [srflow] 2020-11-20 10:03:06 -07:00
James Betker
5ccdbcefe3 srflow_orig integration 2020-11-19 23:47:24 -07:00
James Betker
f80acfcab6 Throw if dataset isn't going to work with force_multiple setting 2020-11-19 23:47:00 -07:00
James Betker
2b2d754d8e Bring in an original SRFlow implementation for reference 2020-11-19 21:42:39 -07:00
James Betker
1e0d7be3ce "Clean up" SRFlow 2020-11-19 21:42:24 -07:00
James Betker
d7877d0a36 Fixes to teco losses and translational losses 2020-11-19 11:35:05 -07:00
James Betker
b2a05465fc Fix missing requirements 2020-11-18 10:16:39 -07:00
James Betker
5c10264538 Remove pyramid_disc hard dependencies 2020-11-17 18:34:11 -07:00
James Betker
6b679e2b51 Make grad_penalty available to classical discs 2020-11-17 18:31:40 -07:00
James Betker
8a19c9ae15 Add additive mode to rrdb 2020-11-16 20:45:09 -07:00
James Betker
2a507987df Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-15 16:16:30 -07:00
James Betker
931ed903c1 Allow combined additive loss 2020-11-15 16:16:18 -07:00
James Betker
4b68116977 import fix 2020-11-15 16:15:42 -07:00
James Betker
98eada1e4c More circular dependency fixes + unet fixes 2020-11-15 11:53:35 -07:00
James Betker
e587d549f7 Fix circular imports 2020-11-15 11:32:35 -07:00
James Betker
99f0cfaab5 Rework stylegan2 divergence losses
Notably: include unet loss
2020-11-15 11:26:44 -07:00
James Betker
ea94b93a37 Fixes for unet 2020-11-15 10:38:33 -07:00
James Betker
89f56b2091 Fix another import 2020-11-14 22:10:45 -07:00
James Betker
9af049c671 Import fix for unet 2020-11-14 22:09:18 -07:00
James Betker
5cade6b874 Move stylegan2 around, bring in unet 2020-11-14 22:04:48 -07:00
James Betker
4c6b14a3f8 Allow extract_square_images to work on multiple images 2020-11-14 20:24:05 -07:00
James Betker
125cb16dce Add a FID evaluator for stylegan with structural guidance 2020-11-14 20:16:07 -07:00
James Betker
c9258e2da3 Alter how structural guidance is given to stylegan 2020-11-14 20:15:48 -07:00
James Betker
3397c83447 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-14 09:30:09 -07:00
James Betker
423ee7cb90 Allow attention to be specified for stylegan2 2020-11-14 09:29:53 -07:00
James Betker
ec621c69b5 Fix train bug 2020-11-14 09:29:08 -07:00
James Betker
cdc5ac30e9 oddity 2020-11-13 20:11:57 -07:00
James Betker
f406a5dd4c Mods to support stylegan2 in SR mode 2020-11-13 20:11:50 -07:00
James Betker
9c3d0b7560 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-13 20:10:47 -07:00
James Betker
67bf55495b Allow hq_batched_key to be specified 2020-11-13 20:10:12 -07:00
James Betker
0b96811611 Fix another issue with gpu ids getting thrown all over the place 2020-11-13 20:05:52 -07:00
James Betker
c47925ae34 New image extractor utility 2020-11-13 11:04:03 -07:00
James Betker
a07e1a7292 Add separate Evaluator module and FID evaluator 2020-11-13 11:03:54 -07:00
James Betker
080ad61be4 Add option to work with nonrandom latents 2020-11-12 21:23:50 -07:00
James Betker
566b99ca75 GP adjustments for stylegan2 2020-11-12 16:44:51 -07:00
James Betker
fc55bdb24e Mods to how wandb are integrated 2020-11-12 15:45:25 -07:00
James Betker
44a19cd37c ExtensibleTrainer mods to support advanced checkpointing for stylegan2
Basically: stylegan2 makes use of gradient-based normalizers. These
make it so that I cannot use gradient checkpointing. But I love gradient
checkpointing. It makes things really, really fast and memory conscious.

So - only don't checkpoint when we run the regularizer loss. This is a
bit messy, but speeds up training by at least 20%.

Also: pytorch: please make checkpointing a first class citizen.
2020-11-12 15:45:07 -07:00
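The compromise described above boils down to a conditional around torch's checkpoint call; a minimal sketch with a hypothetical flag:

```python
from torch.utils.checkpoint import checkpoint

def maybe_checkpoint(module, x, doing_regularizer_step=False):
    # Gradient-based regularizers (e.g. stylegan2's path length penalty) need
    # the real autograd graph, so checkpointing is bypassed on those steps.
    if doing_regularizer_step:
        return module(x)
    return checkpoint(module, x)
```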
James Betker
db9e9e28a0 Fix an issue where GPU0 was always being used in non-ddp
Frankly, I don't understand how this has ever worked. WTF.
2020-11-12 15:43:01 -07:00
James Betker
2d3449d7a5 stylegan2 in ml art school! 2020-11-12 15:42:05 -07:00
James Betker
fd97573085 Fixes 2020-11-11 21:49:06 -07:00
James Betker
88f349bdf1 Enable usage of wandb 2020-11-11 21:48:56 -07:00
James Betker
1c065c41b4 Revert "..."
This reverts commit 4b92191880.
2020-11-11 17:24:27 -07:00
James Betker
4b92191880 ... 2020-11-11 14:12:40 -07:00
James Betker
12b57bbd03 Add residual blocks to pyramid disc 2020-11-11 13:56:45 -07:00
James Betker
b4136d766a Back to pyramids, no rrdb 2020-11-11 13:40:24 -07:00
James Betker
42a97de756 Convert PyramidRRDBDisc to RRDBDisc
Had numeric stability issues. This probably makes more sense anyways.
2020-11-11 12:14:14 -07:00
James Betker
72762f200c PyramidRRDB net 2020-11-11 11:25:49 -07:00
James Betker
a1760f8969 Adapt srg2 for video 2020-11-10 16:16:41 -07:00
James Betker
b742d1e5a5 When skipping steps via "every", still run nontrainable injection points 2020-11-10 16:09:17 -07:00
James Betker
91d27372e4 rrdb with adain latent 2020-11-10 16:08:54 -07:00
James Betker
6a2fd5f7d0 Lots of new discriminator nets 2020-11-10 16:06:54 -07:00
James Betker
4e5ba61ae7 SRG2classic further re-integration 2020-11-10 16:06:14 -07:00
James Betker
9e2c96ad5d More latent work 2020-11-07 20:38:56 -07:00
James Betker
6be6c92e5d Fix yet ANOTHER OBO error in multi_frame_dataset 2020-11-06 20:38:34 -07:00
James Betker
0cf52ef52c latent work 2020-11-06 20:38:23 -07:00
James Betker
34d319585c Add srflow arch 2020-11-06 20:38:04 -07:00
James Betker
4469d2e661 More work on RRDB with latent 2020-11-05 22:13:05 -07:00
James Betker
62d3b6496b Latent work checkpoint 2020-11-05 13:31:34 -07:00
James Betker
fd6cdba88f RRDB with latent 2020-11-05 10:04:17 -07:00
James Betker
df47d6cbbb More work in support of training flow networks in tandem with generators 2020-11-04 18:07:48 -07:00
James Betker
c21088e238 Fix OBO error in multi_frame_dataset
In some datasets, this meant one frame was included in a sequence where it didn't belong. In datasets with mismatched chunk sizes, this resulted in an error.
2020-11-03 14:32:06 -07:00
James Betker
e990be0449 Improve ignore_first logic 2020-11-03 11:56:32 -07:00
James Betker
658a267bab More work on SSIM/PSNR approximators
- Add a network that accommodates this style of approximator while retaining structure
- Migrate to SSIM approximation
- Add a tool to visualize how these approximators are working
- Fix some issues that came up while doing this work
2020-11-03 08:09:58 -07:00
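For context on what these approximators target: PSNR is 10·log10(MAX² / MSE). A direct (non-approximated) reference implementation:

```python
import torch

def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)
```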
James Betker
85c545835c Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-02 08:48:15 -07:00
James Betker
f13fdd43ed Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-02 08:47:42 -07:00
James Betker
fed16abc22 Report chunking errors 2020-11-02 08:47:18 -07:00
James Betker
a51daacde2 Fix reporting of d_fake_diff for generators 2020-11-02 08:45:46 -07:00
James Betker
3676f26d94 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-31 20:55:45 -06:00
James Betker
dcfe994fee Add standalone srg2_classic
Trying to investigate how I was so misguided. I *thought* srg2 was considerably
better than RRDB in performance but am not actually seeing that.
2020-10-31 20:55:34 -06:00
James Betker
ea8c20c0e2 Fix bug with multiscale_dataset 2020-10-31 20:54:41 -06:00
James Betker
bb39d3efe5 Bump image corruption factor a bit 2020-10-31 20:50:24 -06:00
James Betker
eb7df63592 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-31 11:09:32 -06:00
James Betker
c2866ad8d2 Disable debugging of comparable pingpong generations 2020-10-31 11:09:10 -06:00
James Betker
7303d8c932 Add psnr approximator 2020-10-31 11:08:55 -06:00
James Betker
565517814e Restore SRG2
Going to try to figure out where SRG lost competitiveness to RRDB..
2020-10-30 14:01:56 -06:00
James Betker
b24ff3c88d Fix bug that causes multiscale dataset to crash 2020-10-30 14:01:24 -06:00
James Betker
74738489b9 Fixes and additional support for progressive zoom 2020-10-30 09:59:54 -06:00
James Betker
a3918fa808 Tecogan & other fixes 2020-10-30 00:19:58 -06:00
James Betker
b316078a15 Fix tecogan_losses fp16 2020-10-29 23:02:20 -06:00
James Betker
3791f95ad0 Enable RRDB to take in reference inputs 2020-10-29 11:07:40 -06:00
James Betker
7d38381d46 Add scaling to rrdb 2020-10-29 09:48:10 -06:00
James Betker
607ff3c67c RRDB with bypass 2020-10-29 09:39:45 -06:00
James Betker
1655b9e242 Fix fast_forward teco loss bug 2020-10-28 17:49:54 -06:00
James Betker
25b007a0f5 Increase jpeg corruption & add error 2020-10-28 17:37:39 -06:00
James Betker
796659b0ac Add 'jpeg-normal' corruption 2020-10-28 16:40:47 -06:00
James Betker
515905e904 Add a min_loss that is DDP compatible 2020-10-28 15:46:59 -06:00
James Betker
f133243ac8 Extra logging for teco_resgen 2020-10-28 15:21:22 -06:00
James Betker
2ab5054d4c Add noise to teco disc 2020-10-27 22:48:23 -06:00
James Betker
4dc16d5889 Upgrade tecogan_losses for speed 2020-10-27 22:40:15 -06:00
James Betker
ac3da0c5a6 Make tecogen functional 2020-10-27 21:08:59 -06:00
James Betker
10da206db6 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-27 20:59:59 -06:00
James Betker
9848f4c6cb Add teco_resgen 2020-10-27 20:59:55 -06:00
James Betker
543c384a91 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-27 20:59:16 -06:00
James Betker
da53090ce6 More adjustments to support distributed training with teco & on multi_modal_train 2020-10-27 20:58:03 -06:00
James Betker
00bb568956 further checkpointify spsr_arch 2020-10-27 17:54:28 -06:00
James Betker
c2727a0150 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-27 15:24:19 -06:00
James Betker
2a3eec8fd7 Fix some distributed training snafus 2020-10-27 15:24:05 -06:00
James Betker
d923a62ed3 Allow SPSR to checkpoint 2020-10-27 15:23:20 -06:00
James Betker
11a9e223a6 Retrofit SPSR_arch so it is capable of accepting a ref 2020-10-27 11:14:36 -06:00
James Betker
8202ee72b9 Re-add original SPSR_arch 2020-10-27 11:00:38 -06:00
James Betker
31cf1ac98d Retrofit full_image_dataset to work with new arch. 2020-10-27 10:26:19 -06:00
James Betker
ade0a129da Include psnr in test.py 2020-10-27 10:25:42 -06:00
James Betker
231137ab0a Revert RRDB back to original model 2020-10-27 10:25:31 -06:00
James Betker
1ce863849a Remove temporary base_model change 2020-10-26 11:13:01 -06:00
James Betker
54accfa693 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-26 11:12:37 -06:00
James Betker
ff58c6484a Fixes to unified chunk datasets to support stereoscopic training 2020-10-26 11:12:22 -06:00
James Betker
b2f803588b Fix multi_modal_train.py 2020-10-26 11:10:22 -06:00
James Betker
f857eb00a8 Allow tecogan losses to compute at 32px 2020-10-26 11:09:55 -06:00
James Betker
629b968901 ChainedGen 4x alteration
Increases conv window for teco_recurrent in the 4x case so all data
can be used.

base_model changes should be temporary.
2020-10-26 10:54:51 -06:00
James Betker
85c07f85d9 Update flownet submodule 2020-10-24 11:59:00 -06:00
James Betker
327cdbe110 Support configurable multi-modal training 2020-10-24 11:57:39 -06:00
James Betker
9c3d059ef0 Updates to be able to train flownet2 in ExtensibleTrainer
Only supports basic losses for now, though.
2020-10-24 11:56:39 -06:00
James Betker
1dbcbfbac8 Restore ChainedEmbeddingGenWithStructure
Still using this guy, after all
2020-10-24 11:54:52 -06:00
James Betker
8e5b6682bf Add PairedFrameDataset 2020-10-23 20:58:07 -06:00
James Betker
7a75d10784 Arch cleanup 2020-10-23 09:35:33 -06:00
James Betker
646d6a621a Support 4x zoom on ChainedEmbeddingGen 2020-10-23 09:25:58 -06:00
James Betker
8636492db0 Copy train.py mods to train2 2020-10-22 17:16:36 -06:00
James Betker
e9c0b9f0fd More adjustments to support multi-modal training
Specifically - looks like at least MSE loss cannot handle autocasted tensors
2020-10-22 16:49:34 -06:00
James Betker
76789a456f Class-ify train.py and workon multi-modal trainer 2020-10-22 16:15:31 -06:00
James Betker
15e00e9014 Finish integration with autocast
Note: autocast is broken when also using checkpoint(). Overcome this by modifying
torch's checkpoint() function in place to also use autocast.
2020-10-22 14:39:19 -06:00
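The commit modified torch's checkpoint() in place; an equivalent non-invasive sketch of the same idea wraps the checkpointed function so autocast is re-entered during the recomputation pass (which older torch versions did not do on their own):

```python
import torch
from torch.utils.checkpoint import checkpoint

def autocast_checkpoint(fn, *args):
    def wrapped(*a):
        with torch.cuda.amp.autocast():
            return fn(*a)
    return checkpoint(wrapped, *args)
```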
James Betker
d7ee14f721 Move to torch.cuda.amp (not working)
Running into OOM errors, needs diagnosing. Checkpointing here.
2020-10-22 13:58:05 -06:00
James Betker
3e3d2af1f3 Add multi-modal trainer 2020-10-22 13:27:32 -06:00
James Betker
40dc2938e8 Fix multifaceted chain gen 2020-10-22 13:27:06 -06:00
James Betker
f9dc472f63 Misc nonfunctional mods to datasets 2020-10-22 10:16:17 -06:00