Commit Graph

1332 Commits

Author SHA1 Message Date
James Betker
36c7c1fbdb Fix training flow for NEXT TOKEN prediction instead of same token prediction
doh
2021-08-04 10:28:09 -06:00
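
The fix here is the standard autoregressive off-by-one: logits at position t must be scored against the token at t+1, not the token at t. A minimal PyTorch sketch of that target shift (function and tensor names are hypothetical, not the repo's):

    import torch
    import torch.nn.functional as F

    def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # logits: (batch, seq, vocab) model outputs; tokens: (batch, seq) input ids.
        # Drop the last logit and the first token so position t predicts
        # token t+1 (scoring position t against token t is the bug being fixed).
        shifted_logits = logits[:, :-1, :].contiguous()
        shifted_targets = tokens[:, 1:].contiguous()
        return F.cross_entropy(
            shifted_logits.view(-1, shifted_logits.size(-1)),
            shifted_targets.view(-1),
        )
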
James Betker
d9936df363 Add gpt_tts dataset and implement inference
- Adds a script which preprocesses quantized mels given a DVAE
- Adds a dataset which can consume preprocessed qmels
- Reworks GPT TTS to consume the outputs of that dataset (removes logic to add padding and start/end tokens)
- Adds inference to gpt_tts
2021-08-04 00:44:04 -06:00
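
A hedged sketch of what a dataset consuming preprocessed quantized mels might look like; the on-disk layout and key names here are assumptions for illustration, not the repo's actual format:

    import os
    import torch
    from torch.utils.data import Dataset

    class PreprocessedQmelDataset(Dataset):
        # Loads pairs saved by a preprocessing pass that ran a DVAE over
        # mel spectrograms. Assumed layout: one .pth file per clip holding
        # a dict with 'text' token ids and 'qmel' codebook indices.
        def __init__(self, path: str):
            self.files = sorted(
                os.path.join(path, f) for f in os.listdir(path) if f.endswith(".pth")
            )

        def __len__(self):
            return len(self.files)

        def __getitem__(self, idx):
            d = torch.load(self.files[idx])
            # Per the commit, padding and start/end tokens are no longer
            # added here; the consumer handles them.
            return d["text"], d["qmel"]
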
James Betker
4c98b9703f Get dalle-style TTS to "work" 2021-08-03 21:08:27 -06:00
James Betker
2814307eee Alterations to support VQVAE on mel spectrograms 2021-08-01 07:54:21 -06:00
James Betker
965f6e6b52 Fixes to weight_decay in adamw 2021-07-31 15:58:41 -06:00
James Betker
0c9e75bc69 Improvements to GptTts 2021-07-31 15:57:57 -06:00
James Betker
31ee9ae262 Checkin 2021-07-30 23:07:35 -06:00
James Betker
dadc54795c Add gpt_tts 2021-07-27 20:33:30 -06:00
James Betker
398185e109 More work on wave-diffusion 2021-07-27 05:36:17 -06:00
James Betker
49e3b310ea Allow audio sample rate interpolation for faster training 2021-07-26 17:44:06 -06:00
James Betker
96e90e7047 Add support for a gaussian-diffusion-based wave tacotron 2021-07-26 16:27:31 -06:00
James Betker
97d7cbbc34 Additional work for audio xformer (which doesn't really do a great job) 2021-07-23 10:58:14 -06:00
James Betker
2325e7a88c Allow inference for vqvae 2021-07-20 10:40:05 -06:00
James Betker
d81386c1be Mods to support vqvae in audio mode (1d) 2021-07-20 08:36:46 -06:00
James Betker
5584cfcc7a tacotron2 work 2021-07-14 21:41:57 -06:00
James Betker
fe0c699ced Various fixes 2021-07-14 00:08:42 -06:00
James Betker
be2745f42d Add waveglow & inference capabilities to audio generator 2021-07-08 23:07:36 -06:00
James Betker
1ff434218e tacotron2, ready for prime time! 2021-07-08 22:13:44 -06:00
James Betker
86fd3ad7fd Initial checkin of nvidia tacotron model & dataset
These two are tested, full support for training to come.
2021-07-06 11:11:35 -06:00
James Betker
3801d5d55e diffusion surfin' 2021-07-06 09:36:52 -06:00
James Betker
afa41f1804 Allow hq color jittering and corruptions that are not included in the corruption factor 2021-06-30 09:44:46 -06:00
James Betker
6fd16ea9c8 Add meta-anomaly detection, colorjitter augmentation 2021-06-29 13:41:55 -06:00
James Betker
46e9f62be0 Add unet with latent guide
This is a diffusion network that uses both a LQ image
and a reference sample HQ image that is compressed into
a latent vector to perform upsampling

The hope is that we can steer the upsampling network
with sample images.
2021-06-26 11:02:58 -06:00
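
A toy sketch of the conditioning scheme described above: squeeze the reference HQ image into a latent vector and broadcast it into the upsampler alongside the LQ input. The architecture below is illustrative only; the actual network is a diffusion U-Net:

    import torch
    import torch.nn as nn

    class LatentGuidedUpsampler(nn.Module):
        def __init__(self, latent_dim: int = 128):
            super().__init__()
            # Compress the reference HQ sample into a single latent vector.
            self.ref_encoder = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, latent_dim),
            )
            self.body = nn.Conv2d(3 + latent_dim, 3, 3, padding=1)

        def forward(self, lq: torch.Tensor, ref_hq: torch.Tensor) -> torch.Tensor:
            z = self.ref_encoder(ref_hq)                     # (B, latent_dim)
            z_map = z[:, :, None, None].expand(-1, -1, *lq.shape[-2:])
            return self.body(torch.cat([lq, z_map], dim=1))  # steered by z
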
James Betker
0ded106562 Merge remote-tracking branch 'origin/master' 2021-06-25 13:16:28 -06:00
James Betker
a57ed8e960 Various mods to support better jpeg image filtering 2021-06-25 13:16:15 -06:00
James Betker
61e7ca39cd Update image_folder_dataset.py 2021-06-25 11:48:31 -06:00
James Betker
a0ef07ddb8 Create unet_latent_guide.py 2021-06-25 11:25:14 -06:00
James Betker
e7890dc0ba Misc fixes for diffusion nets 2021-06-21 10:38:07 -06:00
James Betker
8e3a33e001 Fix a bug where non-rank-0 is computing FID before all images are saved. 2021-06-16 16:27:09 -06:00
James Betker
68cbbed886 Add some cool diffusion testing scripts 2021-06-16 16:26:36 -06:00
James Betker
ae8de0cb9d fid saving images across all rank fix 2021-06-15 10:31:07 -06:00
James Betker
6a75bd0777 Another fix 2021-06-14 09:51:44 -06:00
James Betker
54bff35171 Fix issue where eval was not being used by all ddp processes 2021-06-14 09:50:04 -06:00
James Betker
60079a1572 Fix saver in distributed mode 2021-06-14 09:41:06 -06:00
James Betker
545f2db170 Distributed FID dataset across processes 2021-06-14 09:33:44 -06:00
James Betker
6b32c87dcb Try to make diffusion fid more deterministic 2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f Add FID evaluator for diffusion models 2021-06-14 09:14:30 -06:00
James Betker
9cfe840872 Attempt to fix syncing multiple times when doing gradient accumulation 2021-06-13 14:30:30 -06:00
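
The usual way to avoid redundant syncs under gradient accumulation is to suppress DDP's all-reduce on every micro-batch except the last; a sketch using DistributedDataParallel's no_sync() (the loop structure is hypothetical):

    import contextlib

    def accumulated_step(model, batches, optimizer, loss_fn, accum_steps):
        # model is DDP-wrapped; gradients all-reduce only on the final
        # micro-batch of each accumulation window.
        optimizer.zero_grad()
        for i, (x, y) in enumerate(batches):
            last = (i + 1) % accum_steps == 0
            ctx = contextlib.nullcontext() if last else model.no_sync()
            with ctx:
                loss = loss_fn(model(x), y) / accum_steps
                loss.backward()
            if last:
                optimizer.step()
                optimizer.zero_grad()
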
James Betker
1cd75dfd33 Fix ddp bug 2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f Add support for training an EMA network alongside the main networks 2021-06-12 21:01:41 -06:00
James Betker
696f320820 Get rid of feature networks 2021-06-11 20:50:07 -06:00
James Betker
65c474eecf Various changes to fix testing 2021-06-11 15:31:10 -06:00
James Betker
220f11a5e4 Halve channel sizes in cifar_resnet 2021-06-09 17:06:37 -06:00
James Betker
aea12e1b9c Fix cat eval hack 2021-06-09 17:05:11 -06:00
James Betker
9b5f4abb91 Add fade in for hard switch 2021-06-07 18:15:09 -06:00
James Betker
108c5d829c Fix dropout norm 2021-06-07 16:13:23 -06:00
James Betker
438217094c Also debug distribution of switch 2021-06-07 15:36:07 -06:00
James Betker
44b09e5f20 Amplify dropout rate 2021-06-07 15:20:53 -06:00
James Betker
f0d4eb9182 Fixor 2021-06-07 11:58:36 -06:00
James Betker
c456a60466 Another go at fixing nan 2021-06-07 11:51:43 -06:00
James Betker
1c574c5bd1 Attempt to fix nan 2021-06-07 11:43:42 -06:00
James Betker
eda796985b Try out dropout norm 2021-06-07 11:33:33 -06:00
James Betker
6c6e82406e Pass a corruption factor through the dataset into the upsampling network
The intuition is this will help guide the network to make better informed decisions
about how it performs upsampling based on how it perceives the underlying content.

(I'm giving up on letting networks detect their own quality - I'm not convinced it is
actually feasible)
2021-06-07 09:13:54 -06:00
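
A minimal sketch of the idea: the dataset reports how heavily each LQ image was corrupted, and the network receives that scalar as a constant extra input plane. Names and the one-layer body are illustrative:

    import torch
    import torch.nn as nn

    class CorruptionConditionedSR(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Conv2d(4, 3, 3, padding=1)  # 3 RGB + 1 factor plane

        def forward(self, lq: torch.Tensor, corruption_factor: torch.Tensor):
            # corruption_factor: (B,) scalar in [0, 1] from the dataset.
            b, _, h, w = lq.shape
            plane = corruption_factor.view(b, 1, 1, 1).expand(b, 1, h, w)
            return self.net(torch.cat([lq, plane], dim=1))
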
James Betker
2ad2b56438 Don't do wandb except on rank 0 2021-06-06 16:52:07 -06:00
James Betker
7c5478bc2c Formatting issue with gdi 2021-06-06 16:35:37 -06:00
James Betker
061dbcd458 Another fix to anorm 2021-06-06 15:09:49 -06:00
James Betker
9a6991e461 Fix switch norm average 2021-06-06 15:04:28 -06:00
James Betker
57e1a6a0f2 cifar: add hard routing
Also mods switched_routing to support non-pixular inputs
2021-06-06 14:53:43 -06:00
James Betker
692e9c417b Support diffusion unet 2021-06-06 13:57:22 -06:00
James Betker
a0158ebc69 Simplify cifar resnet further for faster training 2021-06-06 10:02:24 -06:00
James Betker
75567a9814 Only head norm removed 2021-06-05 23:29:11 -06:00
James Betker
65d0376b90 Re-add normalization at the tail of the RRDB 2021-06-05 23:04:05 -06:00
James Betker
184e887122 Remove rrdb normalization 2021-06-05 21:39:19 -06:00
James Betker
f5e75602b9 Add regular attention to cifar_resnet 2021-06-05 21:34:07 -06:00
James Betker
16cd92acd5 hack 2021-06-05 14:23:41 -06:00
James Betker
af52751d6b Fix device error 2021-06-05 14:21:32 -06:00
James Betker
5f0cc65f3b Register branched resnet properly 2021-06-05 14:19:03 -06:00
James Betker
fb405d9ef1 CIFAR stuff
- Extract coarse labels for the CIFAR dataset
- Add simple resnet that branches lower layers based on coarse labels
- Some other cleanup
2021-06-05 14:16:02 -06:00
James Betker
80d4404367 A few fixes:
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
James Betker
fa908a6a15 Fix wandb import issue 2021-06-04 23:27:15 -06:00
James Betker
103a88506e Log eval to wandb 2021-06-04 23:23:20 -06:00
James Betker
7d45132f60 fdsa 2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5 asdf 2021-06-04 21:24:48 -06:00
James Betker
e6c537824a Allow validation for ce 2021-06-04 21:21:04 -06:00
James Betker
7c251af7a8 Support cifar100 with resnet 2021-06-04 17:29:07 -06:00
James Betker
bf811f80c1 GD mods & fixes
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
James Betker
6084915af8 Support gaussian diffusion models
Adds support for GD models, courtesy of some maths from openai.

Also:
- Fixes requirement for eval{} even when it isn't being used
- Adds support for denormalizing an imagenet norm
2021-06-02 21:47:32 -06:00
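
Denormalizing an ImageNet norm just inverts the standard mean/std transform; a minimal sketch:

    import torch

    IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406])
    IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225])

    def denormalize_imagenet(x: torch.Tensor) -> torch.Tensor:
        # Inverts x_norm = (x - mean) / std for a (B, 3, H, W) tensor.
        mean = IMAGENET_MEAN.view(1, 3, 1, 1).to(x.device)
        std = IMAGENET_STD.view(1, 3, 1, 1).to(x.device)
        return x * std + mean
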
James Betker
45bc76ba92 Fixes and mods to support training classifiers on imagenet 2021-06-01 17:25:24 -06:00
James Betker
f129eaa39e Clean up byol a bit
- Remove option to aug in dataset (there's really no reason for this now that kornia works on GPU on Windows)
- Other stuff
2021-05-24 21:35:46 -06:00
James Betker
6649ef2dae Add zipfilesdataset 2021-05-24 21:35:00 -06:00
James Betker
1a2b9fa130 Get rid of old byol net wrapping
Simplifies and makes this usable with DLAS' multi-gpu trainer
2021-04-27 12:48:34 -06:00
James Betker
119f17c808 Add testing capabilities for segformer & contrastive feature 2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e Get segformer to a trainable state 2021-04-25 11:45:20 -06:00
James Betker
23e01314d4 Add dataset, ui for labeling and evaluator for pointwise classification 2021-04-23 17:17:13 -06:00
James Betker
fc623d4b5a Add segformer model. Start work on BYOL adaptation that will support training it. 2021-04-23 17:16:46 -06:00
James Betker
17555e7d07 misc adjustments for stylegan 2021-04-21 18:14:17 -06:00
James Betker
b687ef4cd0 Misc 2021-04-21 18:09:46 -06:00
James Betker
94e069bced Misc changes 2021-03-13 10:45:26 -07:00
James Betker
9fc3df3f5b Switched conv: add conversion function with allowlist 2021-03-13 10:44:56 -07:00
James Betker
cf9a6da889 Fix some bugs, checkin work on vqvae3 2021-03-02 20:56:19 -07:00
James Betker
f89ea5f1c6 Mods to support lightweight_gan model 2021-03-02 20:51:48 -07:00
James Betker
543d459b4e extract_temporal_squares script
For extracting related patches across a video
2021-02-08 08:10:24 -07:00
James Betker
39fd755baa New benchmark numbers 2021-02-08 08:09:41 -07:00
James Betker
784b96c059 Misc options to add support for training stylegan2-rosinality models:
- Allow image_folder_dataset to normalize inbound images
- ExtensibleTrainer can denormalize images on the output path
- Support .webp - an output from LSUN
- Support logistic GAN divergence loss
- Support stylegan2 TF weight extraction for discriminator
- New injector that produces latent noise (with separated paths)
- Modify FID evaluator to be operable with rosinality-style GANs
2021-02-08 08:09:21 -07:00
James Betker
e7be4bdff3 Revert 2021-02-05 08:43:07 -07:00
James Betker
6dec1f5968 Back to groupnorm 2021-02-05 08:42:11 -07:00
James Betker
336f807c8e lambda2 2021-02-05 00:00:24 -07:00
James Betker
025a5867c4 Use syncbatchnorm instead 2021-02-04 22:26:36 -07:00
James Betker
bb79fafb89 Fix groupnorm specification 2021-02-04 22:15:38 -07:00
James Betker
43da1f9c4b Convert lambda coupler to use groupnorm instead of batchnorm 2021-02-04 21:59:44 -07:00
James Betker
7070142805 Make vqvae3_hard more configurable 2021-02-04 09:03:22 -07:00
James Betker
b980028ca8 Add get_debug_values for vqvae_3_hardswitch 2021-02-03 14:12:24 -07:00
James Betker
1405ff06b8 Fix SwitchedConvHardRoutingFunction for current cuda router 2021-02-03 14:11:55 -07:00
James Betker
d7bec392dd ... 2021-02-02 23:50:25 -07:00
James Betker
b0a8fa00bc Visual dbg in vqvae3hs 2021-02-02 23:50:01 -07:00
James Betker
f5f91850fd hardswitch variant of vqvae3 2021-02-02 21:00:04 -07:00
James Betker
320edbaa3c Move switched_conv logic around a bit 2021-02-02 20:41:24 -07:00
James Betker
0dca36946f Hard Routing mods
- Turns out my custom convolution was RIDDLED with backwards bugs, which is
   why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
  for testing purposes.
2021-02-02 20:35:58 -07:00
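
For reference, Switch-Transformer-style top-1 routing sends each input to the single highest-probability expert and scales the output by the gate probability so the router still receives gradient. A hedged sketch with plain linear experts (nothing here reflects the repo's CUDA router):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Top1Switch(nn.Module):
        def __init__(self, dim: int, n_experts: int):
            super().__init__()
            self.gate = nn.Linear(dim, n_experts)
            self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (N, dim). Each row is routed to exactly one expert.
            probs = F.softmax(self.gate(x), dim=-1)   # (N, E)
            top_p, top_i = probs.max(dim=-1)          # (N,), (N,)
            out = torch.zeros_like(x)
            for e, expert in enumerate(self.experts):
                mask = top_i == e
                if mask.any():
                    # Scaling by the gate prob keeps the router trainable.
                    out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
            return out
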
James Betker
29c1c3bede Register vqvae3 2021-01-29 15:26:28 -07:00
James Betker
bc20b4739e vqvae3
Changes VQVAE as so:
- Reverts back to smaller codebook
- Adds an additional conv layer at the highest resolution for both the encoder & decoder
- Uses LeakyReLU on trunk
2021-01-29 15:24:26 -07:00
James Betker
96bc80313c Add switch norm, up dropout rate, detach selector 2021-01-26 09:31:53 -07:00
James Betker
97d895aebe Add SrPixLoss, which focuses pixel-based losses on high-frequency regions
of the image.
2021-01-25 08:26:14 -07:00
James Betker
2cdac6bd09 Add PWCNet for human optical flow 2021-01-25 08:25:44 -07:00
James Betker
51b63b2aa6 Add switched_conv with hard routing and make vqvae use it. 2021-01-25 08:25:29 -07:00
James Betker
ae4ff4a1e7 Enable lambda visualization 2021-01-23 15:53:27 -07:00
James Betker
10ec6bda1d lambda nets in switched_conv and a vqvae to use it 2021-01-23 14:57:57 -07:00
James Betker
b374dcdd46 update vqvae to double codebook size for bottom quantizer 2021-01-23 13:47:07 -07:00
James Betker
dac7d768fa test uresnet playground mods 2021-01-23 13:46:43 -07:00
James Betker
1b8a26db93 New switched_conv 2021-01-23 13:46:30 -07:00
James Betker
557cdec116 misc 2021-01-23 13:45:17 -07:00
James Betker
d919ae7148 Add VQVAE with no Conv2dTranspose 2021-01-18 08:49:59 -07:00
James Betker
587a4f4050 resnet_unet_3
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of a direction I want to go for testing these pixpro networks.
2021-01-15 14:51:03 -07:00
James Betker
038b8654b6 Pixpro: unwrap losses 2021-01-13 11:54:25 -07:00
James Betker
8990801a3f Fix pixpro stochastic sampling bugs 2021-01-13 11:34:24 -07:00
James Betker
19475a072f Pixpro: Rather than using a latent square for pixpro, use an entirely stochastic sampling of the pixels 2021-01-13 11:26:51 -07:00
James Betker
d1007ccfe7 Adjustments to pixpro to allow training against networks with arbitrarily large structural latents
- The pixpro latent now rescales the latent space instead of using a "coordinate vector", which
   **might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
   out of the entire latent, allowing further memory/computational discounts. Since the loss
   computation does not have a receptive field, this should not alter the loss.
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
2021-01-12 09:17:45 -07:00
James Betker
34f8c8641f Support training imagenet classifier 2021-01-11 20:09:16 -07:00
James Betker
f3db381fa1 Allow uresnet to use pretrained resnet50 2021-01-10 12:57:31 -07:00
James Betker
4119cd6240 Fix to image_folder_dataset to accommodate images with mismatched dimensions 2021-01-10 12:57:21 -07:00
James Betker
48f0d8964b Allow dist_backend to be specified in options 2021-01-09 20:54:32 -07:00
James Betker
14a868e8e6 byol playground updates 2021-01-09 20:54:21 -07:00
James Betker
7c6c7a8014 Fix process_video 2021-01-09 20:53:46 -07:00
James Betker
07168ecfb4 Enable vqvae to use a switched_conv variant 2021-01-09 20:53:14 -07:00
James Betker
41b7d50944 Update extract_square_images 2021-01-08 13:16:34 -07:00
James Betker
5a8156026a Did anyone ask for k-means clustering?
This is so cool...
2021-01-07 22:37:41 -07:00
James Betker
acf1535b14 Fix for randomresizedcrop injector 2021-01-07 16:31:43 -07:00
James Betker
659814c20f BYOL script updates 2021-01-07 16:31:28 -07:00
James Betker
de10c7246a Add injected noise into bypass maps 2021-01-07 16:31:12 -07:00
James Betker
04961b91cf Add random-crop injector 2021-01-07 12:14:55 -07:00
James Betker
61a86a3c1e VQVAE 2021-01-07 10:20:15 -07:00
James Betker
01a589e712 Adjustments to pixpro & resnet-unet
I'm not really satisfied with what I got out of these networks on round 1.
Lets try again..
2021-01-06 15:00:46 -07:00
James Betker
9680294430 Move byol scripts around 2021-01-06 14:52:17 -07:00
James Betker
2f2f87bbea Styled SR fixes 2021-01-05 20:14:39 -07:00
James Betker
9fed90393f Add lucidrains pixpro trainer 2021-01-05 20:14:22 -07:00
James Betker
39a94c74b5 Allow BYOL resnet playground to produce a latent dict 2021-01-04 20:11:29 -07:00
James Betker
ade2732c82 Transfer learning for styleSR
This is a concept from "Lifelong Learning GAN", although I'm skeptical of its novelty -
basically you scale and shift the weights for the generator and discriminator of a pretrained
GAN to "shift" into new modalities, e.g. faces->birds or whatever. There are some interesting
applications of this that I would like to try out.
2021-01-04 20:10:48 -07:00
James Betker
2c65b6b28e More mods to support styledsr 2021-01-04 11:32:28 -07:00
James Betker
2225fe6ac2 Undo lucidrains changes for new discriminator
This "new" code will live in the styledsr directory from now on.
2021-01-04 10:57:09 -07:00
James Betker
40ec71da81 Move styled_sr into its own folder 2021-01-04 10:54:34 -07:00
James Betker
5916f5f7d4 Misc fixes 2021-01-04 10:53:53 -07:00
James Betker
4d8064c32c Modifications to allow partially trained stylegan discriminators to be used 2021-01-03 16:37:18 -07:00
James Betker
5e7ade0114 ImageFolderDataset - corrupt lq images alongside each other 2021-01-03 16:36:38 -07:00
James Betker
ce6524184c Do the last commit but in a better way 2021-01-02 22:24:12 -07:00
James Betker
edf9c38198 Make ExtensibleTrainer set the starting step for the LR scheduler 2021-01-02 22:22:34 -07:00
James Betker
bdbab65082 Allow optimizers to train separate param groups, add higher dimensional VGG discriminator
Did this to support training 512x512px networks off of a pretrained 256x256 network.
2021-01-02 15:10:06 -07:00
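
Separate param groups are plain torch.optim usage: each group can carry its own learning rate, which is what lets a freshly added 512px head train faster than a pretrained 256px trunk. A sketch (the split itself is hypothetical):

    import torch.nn as nn
    from torch.optim import Adam

    net = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1),   # pretrained trunk
        nn.Conv2d(64, 3, 3, padding=1),   # newly added layers
    )
    optimizer = Adam([
        {"params": net[0].parameters(), "lr": 1e-5},  # fine-tune slowly
        {"params": net[1].parameters(), "lr": 1e-4},  # learn faster
    ])
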
James Betker
193cdc6636 Move discriminators to the create_model paradigm
Also cleans up a lot of old discriminator models that I have no intention
of using again.
2021-01-01 15:56:09 -07:00
James Betker
7976a5825d srfid is incorrectly labeled 2021-01-01 13:00:59 -07:00
James Betker
f39179e85a styled_sr: fix bug when using initial_stride 2021-01-01 12:13:21 -07:00
James Betker
913fc3b75e Need init to pick up styled_sr 2021-01-01 12:10:32 -07:00
James Betker
aae65e6ed8 Mods to byol_resnet_playground for large batches 2021-01-01 11:59:54 -07:00
James Betker
e992e18767 Add initial_stride term to style_sr
Also fix fid and a networks.py issue.
2021-01-01 11:59:36 -07:00
James Betker
9864fe4c04 Fix for train.py 2021-01-01 11:59:00 -07:00
James Betker
e214e6ce33 Styled SR model 2020-12-31 20:54:18 -07:00
James Betker
0eb1f4dd67 Revert "Get rid of CUDA_VISIBLE_DEVICES"
It is actually necessary for training in distributed mode. Only
do it then.
2020-12-31 10:31:40 -07:00
James Betker
8de5a02a48 byol_resnet_playground
Similar to the spinenet playground, but tinkers with resnet instead
2020-12-31 10:15:04 -07:00
James Betker
8f18b2709e Get rid of CUDA_VISIBLE_DEVICES
It is not clear to me what the purpose of this is, but it has recently
started causing failures.
2020-12-31 10:13:58 -07:00
James Betker
1de1fa30ac Disable refs and centers altogether in single_image_dataset
I suspect that this might be a cause of failures on parallel datasets.
Plus it is unnecessary computation.
2020-12-31 10:13:24 -07:00
James Betker
8f0984cacf Add sr_fid evaluator 2020-12-30 20:18:58 -07:00
James Betker
b1fb82476b Add gp debug (fix) 2020-12-30 15:26:54 -07:00
James Betker
9c53314ea2 Add gradient penalty visual debug 2020-12-30 09:51:59 -07:00
James Betker
63cf3d3126 Injector auto-registration
I love it!
2020-12-29 20:58:02 -07:00
James Betker
a777c1e4f9 Misc script fixes 2020-12-29 20:25:09 -07:00
James Betker
9dc3c8f0ff Script updates 2020-12-29 20:24:41 -07:00
James Betker
ba543d1152 Glean mods
- Fixes fixed upscale factor issues
- Refines a few ops to decrease computation & parameterization
2020-12-27 12:25:06 -07:00
James Betker
5e2e605a50 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-26 13:51:19 -07:00
James Betker
f9be049adb GLEAN mod to support custom initial strides 2020-12-26 13:51:14 -07:00
James Betker
2706a84f15 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-26 13:50:34 -07:00
James Betker
90e2362c00 Fix bug with full_image_dataset 2020-12-26 13:50:27 -07:00
James Betker
3fd627fc62 Mods to support image classification & filtering 2020-12-26 13:49:27 -07:00
James Betker
10fdfa1563 Migrate generators to dynamic model registration 2020-12-24 23:02:10 -07:00
James Betker
29db7c7a02 Further mods to BYOL 2020-12-24 09:28:41 -07:00
James Betker
036684893e Add LARS optimizer & support for BYOL idiosyncrasies
- Added LARS and SGD optimizer variants that support turning off certain
  features for BN and bias layers
- Added a variant of pytorch's resnet model that supports gradient checkpointing.
- Modify the trainer infrastructure to support above
- Fix bug with BYOL (should have been nonfunctional)
2020-12-23 20:33:43 -07:00
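
The BN/bias idiosyncrasy mentioned above is the BYOL/LARS recipe of exempting biases and norm parameters from weight decay (and, in a full LARS implementation, from trust-ratio adaptation). A sketch of that split for a stock optimizer; the ndim heuristic is a common convention, not necessarily the repo's exact rule:

    import torch.nn as nn
    from torch.optim import SGD

    def split_decay_groups(model: nn.Module, weight_decay: float):
        decay, no_decay = [], []
        for p in model.parameters():
            if not p.requires_grad:
                continue
            # 1-D params are biases and norm weights; skip decay for them.
            (no_decay if p.ndim == 1 else decay).append(p)
        return [
            {"params": decay, "weight_decay": weight_decay},
            {"params": no_decay, "weight_decay": 0.0},
        ]

    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
    opt = SGD(split_decay_groups(model, 1e-4), lr=0.1, momentum=0.9)
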
James Betker
1bbcb96ee8 Implement a few changes to support training BYOL networks 2020-12-23 10:50:23 -07:00
James Betker
2437b33e74 Fix srflow_latent_space_playground bug 2020-12-22 15:42:38 -07:00
James Betker
e7aeb17404 ImageFolder dataset: allow intermediary downscale before corrupt
For massive upscales (ex: 8x), corruption does almost nothing when applied
at the HQ level. This patch adds support to perform corruption at a specified
intermediary scale. The dataset downscales to this level, performs the corruption,
then downscales the rest of the way to get the LQ image.
2020-12-22 15:42:21 -07:00
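
A sketch of that pipeline with torch ops (the additive-noise corruption here is a hypothetical stand-in; the real dataset applies its configured corruptions at the intermediary scale):

    import torch
    import torch.nn.functional as F

    def make_lq(hq: torch.Tensor, total_scale: int = 8, corrupt_scale: int = 2):
        # Downscale HQ to the intermediary scale, corrupt there (where it
        # actually bites for big upscales), then finish the downscale.
        mid = F.interpolate(hq, scale_factor=1 / corrupt_scale, mode="area")
        mid = (mid + 0.05 * torch.randn_like(mid)).clamp(0, 1)  # stand-in corruption
        return F.interpolate(mid, scale_factor=corrupt_scale / total_scale, mode="area")
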
James Betker
7938f9f50b Fix bug with single_image_dataset which prevented working with multiple directories 2020-12-19 15:13:46 -07:00
James Betker
ae666dc520 Fix bugs with srflow after refactor 2020-12-19 10:28:23 -07:00
James Betker
4328c2f713 Change default ReLU slope to .2 BREAKS COMPATIBILITY
This conforms my ConvGnLelu implementation to the generally accepted negative_slope=.2. I have no idea where I got .1. This will break backwards compatibility with some older models but will likely improve their performance when freshly trained. I did some auditing to find what these models might be, and I am not actively using any of them, so this is probably OK.
2020-12-19 08:28:03 -07:00
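
For reference, the convention being conformed to:

    import torch.nn as nn

    act = nn.LeakyReLU(negative_slope=0.2)  # was 0.1 in older checkpoints
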
James Betker
9377d34ac3 glean mods 2020-12-19 08:26:07 -07:00
James Betker
f35c034fa5 Add trainer readme 2020-12-18 16:52:16 -07:00
James Betker
e82f4552db Update other docs with dumb config options 2020-12-18 16:21:28 -07:00
James Betker
92f9a129f7 GLEAN! 2020-12-18 16:04:19 -07:00
James Betker
c717765bcb Notes for lucidrains converter. 2020-12-18 09:55:38 -07:00
James Betker
b4720ea377 Move stylegan to new location 2020-12-18 09:52:36 -07:00
James Betker
1708136b55 Commit my attempt at "conforming" the lucidrains stylegan implementation to the reference spec. Not working. will probably be abandoned. 2020-12-18 09:51:48 -07:00
James Betker
209332292a Rosinality stylegan fix 2020-12-18 09:50:41 -07:00
James Betker
d875ca8342 More refactor changes 2020-12-18 09:24:31 -07:00
James Betker
5640e4efe4 More refactoring 2020-12-18 09:18:34 -07:00
James Betker
b905b108da Large cleanup
Removed a lot of old code that I won't be touching again. Refactored some
code elements into more logical places.
2020-12-18 09:10:44 -07:00
James Betker
2f0a52b7db misc changes 2020-12-18 08:53:45 -07:00
James Betker
a8179ff53c Image label work 2020-12-18 08:53:18 -07:00
James Betker
3074f41877 Get rosinality model converter to work
Mostly, just needed to remove the custom cuda ops, not so bueno on Windows.
2020-12-17 16:03:39 -07:00
James Betker
e838c6e75b Rosinality stylegan2 port 2020-12-17 14:18:46 -07:00
James Betker
12cf052889 Add an image patch labeling UI 2020-12-17 10:16:21 -07:00
James Betker
49327b99fe SRFlow outputs RRDB output 2020-12-16 10:28:02 -07:00
James Betker
c25b49bb12 Clean up of SRFlowNet_arch 2020-12-16 10:27:38 -07:00
James Betker
42ac8e3eeb Remove unnecessary comment from SRFlowNet 2020-12-16 09:43:07 -07:00
James Betker
fb2cfc795b Update requirements, add image_patch_classifier tool 2020-12-16 09:42:50 -07:00
James Betker
09de3052ac Add softmax to spinenet classification head 2020-12-16 09:42:15 -07:00
James Betker
4310e66848 Fix bug in 'corrupt_before_downsize=true' 2020-12-16 09:41:59 -07:00
James Betker
8661207d57 Merge branch 'gan_lab' of https://github.com/neonbjb/DL-Art-School into gan_lab 2020-12-15 17:16:48 -07:00
James Betker
fc376d34b2 Spinenet with logits head 2020-12-15 17:16:19 -07:00
James Betker
8e0e883050 Mods to support labeled datasets & random augs for those datasets 2020-12-15 17:15:56 -07:00
James Betker
e5a3e6b9b5 srflow latent space misc 2020-12-14 23:59:49 -07:00
James Betker
1e14635d88 Add exclusions to extract_subimages_with_ref 2020-12-14 23:59:41 -07:00
James Betker
0a19e53df0 BYOL mods 2020-12-14 23:59:11 -07:00
James Betker
ef7eabf457 Allow RRDB to upscale 8x 2020-12-14 23:58:52 -07:00
James Betker
087e9280ed Add labeling feature to image_folder_dataset 2020-12-14 23:58:37 -07:00
James Betker
ec0ee25f4b Structural latents checkpoint 2020-12-11 12:01:09 -07:00
James Betker
26ceca68c0 BYOL with structure! 2020-12-10 15:07:35 -07:00
James Betker
9c5e272a22 Script to extract models from a wrapped BYOL model 2020-12-10 09:57:52 -07:00
James Betker
a5630d282f Get rid of 2nd trainer 2020-12-10 09:57:38 -07:00
James Betker
8e4b9f42fd New BYOL dataset which uses a form of RandomCrop that lends itself to
structural guidance to the latents.
2020-12-10 09:57:18 -07:00
James Betker
c203cee31e Allow swapping to torch DDP as needed in code 2020-12-09 15:03:59 -07:00
James Betker
66cbae8731 Add random_dataset for testing 2020-12-09 14:55:05 -07:00
James Betker
97ff25a086 BYOL!
Man, is there anything ExtensibleTrainer can't train? :)
2020-12-08 13:07:53 -07:00
James Betker
5369cba8ed Stage 2020-12-08 00:33:07 -07:00
James Betker
bca59ed98a Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-12-07 12:51:04 -07:00
James Betker
ea56eb61f0 Fix DDP errors for discriminator
- Don't define training_net in define_optimizers - this drops the shell and leads to problems downstream
- Get rid of support for multiple training nets per opt. This was half baked and needs a better solution if needed
   downstream.
2020-12-07 12:50:57 -07:00
James Betker
c0aeaabc31 Spinenet playground 2020-12-07 12:49:32 -07:00
James Betker
88fc049c8d spinenet latent playground! 2020-12-05 20:30:36 -07:00
James Betker
11155aead4 Directly use dataset keys
This has been a long time coming. Cleans up messy "GT" nomenclature and simplifies ExtensibleTrainer.feed_data
2020-12-04 20:14:53 -07:00
James Betker
8a83b1c716 Go back to apex DDP, fix distributed bugs 2020-12-04 16:39:21 -07:00
James Betker
7a81d4e2f4 Revert gaussian loss changes 2020-12-04 12:49:20 -07:00
James Betker
711780126e Cleanup 2020-12-03 23:42:51 -07:00
James Betker
ac7256d4a3 Do tqdm reporting when calculating flow_gaussian_nll 2020-12-03 23:42:29 -07:00
James Betker
dc9ff8e05b Allow the majority of the srflow steps to checkpoint 2020-12-03 23:41:57 -07:00
James Betker
06d1c62c5a iGPT support!
Sweeeeet
2020-12-03 15:32:21 -07:00
James Betker
c18adbd606 Delete mdcn & panet
Garbage, all of it.
2020-12-02 22:25:57 -07:00
James Betker
f2880b33c9 Get rid of mean shift from MDCN 2020-12-02 14:18:33 -07:00
James Betker
8a00f15746 Implement FlowGaussianNll evaluator 2020-12-02 14:09:54 -07:00
James Betker
edf408508c Fix discriminator 2020-12-01 17:45:56 -07:00
James Betker
c963e5f2ce Add ImageFolderDataset
This one has been a long time coming.. How does torch not have something like this?
2020-12-01 17:45:37 -07:00
James Betker
9a421a41f4 SRFlow: accommodate mismatches between global scale and flow_scale 2020-12-01 11:11:51 -07:00
James Betker
8f65f81ddb Adjustments to subimage extractor 2020-12-01 11:11:30 -07:00
James Betker
e343722d37 Add stepped rrdb 2020-12-01 11:11:15 -07:00
James Betker
2e0bbda640 Remove unused archs 2020-12-01 11:10:48 -07:00
James Betker
a1c8300052 Add mdcn 2020-11-30 16:14:21 -07:00
James Betker
1e0f69e34b extra_conv in gn discriminator, multiframe support in rrdb. 2020-11-29 15:39:50 -07:00
James Betker
da604752e6 Misc RRDB changes 2020-11-29 12:21:31 -07:00
James Betker
f2422f1d75 Latent space playground 2020-11-29 09:33:29 -07:00
James Betker
a1d4c9f83c multires rrdb work 2020-11-28 14:35:46 -07:00
James Betker
929cd45c05 Fix for RRDB scale 2020-11-27 21:37:10 -07:00
James Betker
71fa532356 Adjustments to how flow networks set size and scale 2020-11-27 21:37:00 -07:00
James Betker
6f958bb150 Maybe this is necessary after all? 2020-11-27 15:21:13 -07:00
James Betker
ef8d5f88c1 Bring split gaussian nll out of split so it can be computed accurately with the rest of the nll component 2020-11-27 13:30:21 -07:00
James Betker
11d2b70bdd Latent space playground work 2020-11-27 12:03:16 -07:00
James Betker
4ab49b0d69 RRDB disc work 2020-11-27 12:03:08 -07:00
James Betker
6de4dabb73 Remove srflow (modified version)
Starting from orig and re-working from there.
2020-11-27 12:02:06 -07:00
James Betker
5f5420ff4a Update to srflow_latent_space_playground 2020-11-26 20:31:21 -07:00
James Betker
fd356580c0 Play with lambdas 2020-11-26 20:30:55 -07:00
James Betker
0c6d7971b9 Dataset documentation 2020-11-26 11:58:39 -07:00
James Betker
45a489110f Fix datasets 2020-11-26 11:50:38 -07:00
James Betker
5edaf085e0 Adjustments to latent_space_playground 2020-11-25 15:52:36 -07:00
James Betker
205c9a5335 Learn how to functionally use srflow networks 2020-11-25 13:59:06 -07:00
James Betker
cb045121b3 Expose srflow rrdb 2020-11-24 13:20:20 -07:00
James Betker
f3c1fc1bcd Dataset modifications 2020-11-24 13:20:12 -07:00
James Betker
f6098155cd Mods to tecogan to allow use of embeddings as input 2020-11-24 09:24:02 -07:00
James Betker
b10bcf6436 Rework stylegan_for_sr to incorporate structure as an adain block 2020-11-23 11:31:11 -07:00
James Betker
519ba6f10c Support 2x RRDB with 4x srflow 2020-11-21 14:46:15 -07:00
James Betker
cad92bada8 Report logp and logdet for srflow 2020-11-21 10:13:05 -07:00
James Betker
c37d3faa58 More adjustments to srflow_orig 2020-11-20 19:38:33 -07:00
James Betker
d51d12a41a Adjustments to srflow to (maybe?) fix training 2020-11-20 14:44:24 -07:00
James Betker
6c8c35ac47 Support training RRDB encoder [srflow] 2020-11-20 10:03:06 -07:00
James Betker
5ccdbcefe3 srflow_orig integration 2020-11-19 23:47:24 -07:00
James Betker
f80acfcab6 Throw if dataset isn't going to work with force_multiple setting 2020-11-19 23:47:00 -07:00
James Betker
2b2d754d8e Bring in an original SRFlow implementation for reference 2020-11-19 21:42:39 -07:00
James Betker
1e0d7be3ce "Clean up" SRFlow 2020-11-19 21:42:24 -07:00
James Betker
d7877d0a36 Fixes to teco losses and translational losses 2020-11-19 11:35:05 -07:00
James Betker
b2a05465fc Fix missing requirements 2020-11-18 10:16:39 -07:00
James Betker
5c10264538 Remove pyramid_disc hard dependencies 2020-11-17 18:34:11 -07:00
James Betker
6b679e2b51 Make grad_penalty available to classical discs 2020-11-17 18:31:40 -07:00
James Betker
8a19c9ae15 Add additive mode to rrdb 2020-11-16 20:45:09 -07:00
James Betker
2a507987df Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-15 16:16:30 -07:00
James Betker
931ed903c1 Allow combined additive loss 2020-11-15 16:16:18 -07:00
James Betker
4b68116977 import fix 2020-11-15 16:15:42 -07:00
James Betker
98eada1e4c More circular dependency fixes + unet fixes 2020-11-15 11:53:35 -07:00
James Betker
e587d549f7 Fix circular imports 2020-11-15 11:32:35 -07:00
James Betker
99f0cfaab5 Rework stylegan2 divergence losses
Notably: include unet loss
2020-11-15 11:26:44 -07:00
James Betker
ea94b93a37 Fixes for unet 2020-11-15 10:38:33 -07:00
James Betker
89f56b2091 Fix another import 2020-11-14 22:10:45 -07:00
James Betker
9af049c671 Import fix for unet 2020-11-14 22:09:18 -07:00
James Betker
5cade6b874 Move stylegan2 around, bring in unet 2020-11-14 22:04:48 -07:00
James Betker
4c6b14a3f8 Allow extract_square_images to work on multiple images 2020-11-14 20:24:05 -07:00
James Betker
125cb16dce Add a FID evaluator for stylegan with structural guidance 2020-11-14 20:16:07 -07:00
James Betker
c9258e2da3 Alter how structural guidance is given to stylegan 2020-11-14 20:15:48 -07:00
James Betker
3397c83447 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-14 09:30:09 -07:00
James Betker
423ee7cb90 Allow attention to be specified for stylegan2 2020-11-14 09:29:53 -07:00
James Betker
ec621c69b5 Fix train bug 2020-11-14 09:29:08 -07:00
James Betker
cdc5ac30e9 oddity 2020-11-13 20:11:57 -07:00
James Betker
f406a5dd4c Mods to support stylegan2 in SR mode 2020-11-13 20:11:50 -07:00
James Betker
9c3d0b7560 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-13 20:10:47 -07:00
James Betker
67bf55495b Allow hq_batched_key to be specified 2020-11-13 20:10:12 -07:00
James Betker
0b96811611 Fix another issue with gpu ids getting thrown all over the place 2020-11-13 20:05:52 -07:00
James Betker
c47925ae34 New image extractor utility 2020-11-13 11:04:03 -07:00
James Betker
a07e1a7292 Add separate Evaluator module and FID evaluator 2020-11-13 11:03:54 -07:00
James Betker
080ad61be4 Add option to work with nonrandom latents 2020-11-12 21:23:50 -07:00
James Betker
566b99ca75 GP adjustments for stylegan2 2020-11-12 16:44:51 -07:00
James Betker
fc55bdb24e Mods to how wandb are integrated 2020-11-12 15:45:25 -07:00
James Betker
44a19cd37c ExtensibleTrainer mods to support advanced checkpointing for stylegan2
Basically: stylegan2 makes use of gradient-based normalizers. These
make it so that I cannot use gradient checkpointing. But I love gradient
checkpointing. It makes things really, really fast and memory conscious.

So - only don't checkpoint when we run the regularizer loss. This is a
bit messy, but speeds up training by at least 20%.

Also: pytorch: please make checkpointing a first class citizen.
2020-11-12 15:45:07 -07:00
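
A sketch of the workaround described above: checkpoint activations on ordinary steps, but fall back to a plain forward when a gradient-based regularizer (R1, path length) runs, since those need gradients that checkpointing gets in the way of. The flag-flipping module here is illustrative:

    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    class MaybeCheckpointed(nn.Module):
        def __init__(self, block: nn.Module):
            super().__init__()
            self.block = block
            self.regularizer_step = False  # the training loop flips this

        def forward(self, x):
            if self.training and not self.regularizer_step:
                return checkpoint(self.block, x)
            return self.block(x)
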
James Betker
db9e9e28a0 Fix an issue where GPU0 was always being used in non-ddp
Frankly, I don't understand how this has ever worked. WTF.
2020-11-12 15:43:01 -07:00
James Betker
2d3449d7a5 stylegan2 in ml art school! 2020-11-12 15:42:05 -07:00
James Betker
fd97573085 Fixes 2020-11-11 21:49:06 -07:00
James Betker
88f349bdf1 Enable usage of wandb 2020-11-11 21:48:56 -07:00
James Betker
1c065c41b4 Revert "..."
This reverts commit 4b92191880.
2020-11-11 17:24:27 -07:00
James Betker
4b92191880 ... 2020-11-11 14:12:40 -07:00
James Betker
12b57bbd03 Add residual blocks to pyramid disc 2020-11-11 13:56:45 -07:00
James Betker
b4136d766a Back to pyramids, no rrdb 2020-11-11 13:40:24 -07:00
James Betker
42a97de756 Convert PyramidRRDBDisc to RRDBDisc
Had numeric stability issues. This probably makes more sense anyways.
2020-11-11 12:14:14 -07:00
James Betker
72762f200c PyramidRRDB net 2020-11-11 11:25:49 -07:00
James Betker
a1760f8969 Adapt srg2 for video 2020-11-10 16:16:41 -07:00
James Betker
b742d1e5a5 When skipping steps via "every", still run nontrainable injection points 2020-11-10 16:09:17 -07:00
James Betker
91d27372e4 rrdb with adain latent 2020-11-10 16:08:54 -07:00
James Betker
6a2fd5f7d0 Lots of new discriminator nets 2020-11-10 16:06:54 -07:00
James Betker
4e5ba61ae7 SRG2classic further re-integration 2020-11-10 16:06:14 -07:00
James Betker
9e2c96ad5d More latent work 2020-11-07 20:38:56 -07:00
James Betker
6be6c92e5d Fix yet ANOTHER OBO error in multi_frame_dataset 2020-11-06 20:38:34 -07:00
James Betker
0cf52ef52c latent work 2020-11-06 20:38:23 -07:00
James Betker
34d319585c Add srflow arch 2020-11-06 20:38:04 -07:00
James Betker
4469d2e661 More work on RRDB with latent 2020-11-05 22:13:05 -07:00
James Betker
62d3b6496b Latent work checkpoint 2020-11-05 13:31:34 -07:00
James Betker
fd6cdba88f RRDB with latent 2020-11-05 10:04:17 -07:00
James Betker
df47d6cbbb More work in support of training flow networks in tandem with generators 2020-11-04 18:07:48 -07:00
James Betker
c21088e238 Fix OBO error in multi_frame_dataset
In some datasets, this meant one frame was included in a sequence where it didn't belong. In datasets with mismatched chunk sizes, this resulted in an error.
2020-11-03 14:32:06 -07:00
James Betker
e990be0449 Improve ignore_first logic 2020-11-03 11:56:32 -07:00
James Betker
658a267bab More work on SSIM/PSNR approximators
- Add a network that accommodates this style of approximator while retaining structure
- Migrate to SSIM approximation
- Add a tool to visualize how these approximators are working
- Fix some issues that came up while doing this work
2020-11-03 08:09:58 -07:00
James Betker
85c545835c Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-02 08:48:15 -07:00
James Betker
f13fdd43ed Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-11-02 08:47:42 -07:00
James Betker
fed16abc22 Report chunking errors 2020-11-02 08:47:18 -07:00
James Betker
a51daacde2 Fix reporting of d_fake_diff for generators 2020-11-02 08:45:46 -07:00
James Betker
3676f26d94 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-31 20:55:45 -06:00
James Betker
dcfe994fee Add standalone srg2_classic
Trying to investigate how I was so misguided. I *thought* srg2 was considerably
better than RRDB in performance but am not actually seeing that.
2020-10-31 20:55:34 -06:00
James Betker
ea8c20c0e2 Fix bug with multiscale_dataset 2020-10-31 20:54:41 -06:00
James Betker
bb39d3efe5 Bump image corruption factor a bit 2020-10-31 20:50:24 -06:00
James Betker
eb7df63592 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-10-31 11:09:32 -06:00
James Betker
c2866ad8d2 Disable debugging of comparable pingpong generations 2020-10-31 11:09:10 -06:00
James Betker
7303d8c932 Add psnr approximator 2020-10-31 11:08:55 -06:00
James Betker
565517814e Restore SRG2
Going to try to figure out where SRG lost competitiveness to RRDB..
2020-10-30 14:01:56 -06:00
James Betker
b24ff3c88d Fix bug that causes multiscale dataset to crash 2020-10-30 14:01:24 -06:00
James Betker
74738489b9 Fixes and additional support for progressive zoom 2020-10-30 09:59:54 -06:00