James Betker
6a75bd0777
Another fix
2021-06-14 09:51:44 -06:00
James Betker
54bff35171
Fix issue where eval was not being used by all ddp processes
2021-06-14 09:50:04 -06:00
James Betker
60079a1572
Fix saver in distributed mode
2021-06-14 09:41:06 -06:00
James Betker
545f2db170
Distributed FID dataset across processes
2021-06-14 09:33:44 -06:00
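[Editor's note: a minimal sketch of the standard way to shard an evaluation set across DDP ranks, using torch.utils.data.distributed.DistributedSampler; build_eval_loader and the batch size are illustrative, and the repo's FID evaluator may do this differently.]

```python
import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def build_eval_loader(dataset, batch_size=32):
    # Each rank iterates a disjoint 1/world_size slice of the dataset, so
    # FID statistics can be accumulated in parallel without duplicate images.
    sampler = DistributedSampler(
        dataset,
        num_replicas=dist.get_world_size(),
        rank=dist.get_rank(),
        shuffle=False,  # keep evaluation deterministic
    )
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```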
James Betker
6b32c87dcb
Try to make diffusion fid more deterministic
2021-06-14 09:27:43 -06:00
James Betker
5b4f86293f
Add FID evaluator for diffusion models
2021-06-14 09:14:30 -06:00
James Betker
9cfe840872
Attempt to fix syncing multiple times when doing gradient accumulation
2021-06-13 14:30:30 -06:00
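[Editor's note: the usual fix for this is DDP's no_sync() context manager, which suppresses the gradient all-reduce on intermediate micro-batches; a sketch under that assumption (accumulation_step is an illustrative name, not the repo's function).]

```python
from contextlib import nullcontext

def accumulation_step(ddp_model, micro_batches, optimizer):
    for i, batch in enumerate(micro_batches):
        # Suppress DDP's all-reduce on all but the last micro-batch, so
        # gradients sync once per optimizer step, not once per backward().
        ctx = ddp_model.no_sync() if i < len(micro_batches) - 1 else nullcontext()
        with ctx:
            loss = ddp_model(batch).mean()
            loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```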
James Betker
1cd75dfd33
Fix ddp bug
2021-06-13 10:25:23 -06:00
James Betker
3e3ad7825f
Add support for training an EMA network alongside the main networks
2021-06-12 21:01:41 -06:00
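[Editor's note: for reference, the standard EMA update applied after each optimizer step; the decay value is a common default, not necessarily this repo's.]

```python
import torch

@torch.no_grad()
def update_ema(ema_model, model, decay=0.999):
    # ema = decay * ema + (1 - decay) * online, applied parameter-wise.
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1 - decay)
```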
James Betker
696f320820
Get rid of feature networks
2021-06-11 20:50:07 -06:00
James Betker
65c474eecf
Various changes to fix testing
2021-06-11 15:31:10 -06:00
James Betker
220f11a5e4
Halve channel sizes in cifar_resnet
2021-06-09 17:06:37 -06:00
James Betker
aea12e1b9c
Fix cat eval hack
2021-06-09 17:05:11 -06:00
James Betker
9b5f4abb91
Add fade-in for hard switch
2021-06-07 18:15:09 -06:00
James Betker
108c5d829c
Fix dropout norm
2021-06-07 16:13:23 -06:00
James Betker
438217094c
Also debug distribution of switch
2021-06-07 15:36:07 -06:00
James Betker
44b09e5f20
Amplify dropout rate
2021-06-07 15:20:53 -06:00
James Betker
f0d4eb9182
Fixor
2021-06-07 11:58:36 -06:00
James Betker
c456a60466
Another go at fixing nan
2021-06-07 11:51:43 -06:00
James Betker
1c574c5bd1
Attempt to fix nan
2021-06-07 11:43:42 -06:00
James Betker
eda796985b
Try out dropout norm
2021-06-07 11:33:33 -06:00
James Betker
6c6e82406e
Pass a corruption factor through the dataset into the upsampling network
...
The intuition is that this will help guide the network to make better-informed decisions
about how it performs upsampling, based on how it perceives the underlying content.
(I'm giving up on letting networks detect their own quality - I'm not convinced it is
actually feasible.)
2021-06-07 09:13:54 -06:00
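[Editor's note: a hypothetical sketch of the idea: the dataset degrades the LQ input by a known random amount and passes that amount downstream so the upsampler can condition on it. CorruptedPairDataset and corrupt_fn are illustrative names, not the repo's code.]

```python
import random
import torch

class CorruptedPairDataset(torch.utils.data.Dataset):
    def __init__(self, base, corrupt_fn):
        self.base = base              # yields clean HQ images
        self.corrupt_fn = corrupt_fn  # degrades an image by a given strength

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        hq = self.base[i]
        strength = random.random()  # 0 = clean, 1 = heavily corrupted
        lq = self.corrupt_fn(hq, strength)
        # The corruption factor rides along with the batch so the upsampling
        # network can be conditioned on how degraded its input is.
        return {"hq": hq, "lq": lq, "corruption_factor": torch.tensor(strength)}
```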
James Betker
2ad2b56438
Don't do wandb except on rank 0
2021-06-06 16:52:07 -06:00
James Betker
7c5478bc2c
Formatting issue with gdi
2021-06-06 16:35:37 -06:00
James Betker
061dbcd458
Another fix to anorm
2021-06-06 15:09:49 -06:00
James Betker
9a6991e461
Fix switch norm average
2021-06-06 15:04:28 -06:00
James Betker
57e1a6a0f2
cifar: add hard routing
...
Also modifies switched_routing to support non-pixel inputs
2021-06-06 14:53:43 -06:00
James Betker
692e9c417b
Support diffusion unet
2021-06-06 13:57:22 -06:00
James Betker
a0158ebc69
Simplify cifar resnet further for faster training
2021-06-06 10:02:24 -06:00
James Betker
75567a9814
Remove only the head norm
2021-06-05 23:29:11 -06:00
James Betker
65d0376b90
Re-add normalization at the tail of the RRDB
2021-06-05 23:04:05 -06:00
James Betker
184e887122
Remove rrdb normalization
2021-06-05 21:39:19 -06:00
James Betker
f5e75602b9
Add regular attention to cifar_resnet
2021-06-05 21:34:07 -06:00
James Betker
16cd92acd5
hack
2021-06-05 14:23:41 -06:00
James Betker
af52751d6b
Fix device error
2021-06-05 14:21:32 -06:00
James Betker
5f0cc65f3b
Register branched resnet properly
2021-06-05 14:19:03 -06:00
James Betker
fb405d9ef1
CIFAR stuff
...
- Extract coarse labels for the CIFAR dataset
- Add simple resnet that branches lower layers based on coarse labels
- Some other cleanup
2021-06-05 14:16:02 -06:00
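[Editor's note: CIFAR-100 ships 20 coarse superclass labels alongside the 100 fine labels. A sketch of routing samples through per-superclass branches; CoarseBranchedHead is an illustrative name, not the repo's module.]

```python
import torch
import torch.nn as nn

class CoarseBranchedHead(nn.Module):
    def __init__(self, trunk_channels, num_branches=20, num_classes=100):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Linear(trunk_channels, num_classes) for _ in range(num_branches))

    def forward(self, feats, coarse_labels):
        # Route each sample through the branch owned by its coarse label.
        out = feats.new_zeros(feats.size(0), self.branches[0].out_features)
        for b, branch in enumerate(self.branches):
            mask = coarse_labels == b
            if mask.any():
                out[mask] = branch(feats[mask])
        return out
```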
James Betker
80d4404367
A few fixes:
...
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
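[Editor's note: the x_0-from-eps prediction is the standard DDPM identity, shown here for reference; variable names are illustrative.]

```python
import torch

def predict_xstart_from_eps(x_t, t, eps, alphas_cumprod):
    # x_0 = (x_t - sqrt(1 - a_bar_t) * eps) / sqrt(a_bar_t)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)
```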
James Betker
fa908a6a15
Fix wandb import issue
2021-06-04 23:27:15 -06:00
James Betker
103a88506e
Log eval to wandb
2021-06-04 23:23:20 -06:00
James Betker
7d45132f60
fdsa
2021-06-04 21:26:54 -06:00
James Betker
6c8c8087d5
asdf
2021-06-04 21:24:48 -06:00
James Betker
e6c537824a
Allow validation for ce
2021-06-04 21:21:04 -06:00
James Betker
7c251af7a8
Support cifar100 with resnet
2021-06-04 17:29:07 -06:00
James Betker
bf811f80c1
GD mods & fixes
...
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
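[Editor's note: respacing in guided diffusion keeps a subset of the original timesteps and retunes the betas so sampling takes far fewer steps. A simplified even-stride sketch of the step selection; the real space_timesteps in openai/guided-diffusion also supports per-section counts.]

```python
def respace_timesteps(num_timesteps, num_kept):
    # Pick num_kept roughly evenly spaced steps out of the full schedule.
    stride = num_timesteps / num_kept
    return sorted({int(i * stride) for i in range(num_kept)})
```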
James Betker
6084915af8
Support gaussian diffusion models
...
Adds support for GD models, courtesy of some maths from OpenAI.
Also:
- Fixes the requirement to specify an eval{} block even when it isn't being used
- Adds support for denormalizing an ImageNet norm
2021-06-02 21:47:32 -06:00
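[Editor's note: denormalizing an ImageNet norm inverts torchvision's standard per-channel normalization; the constants below are the well-known ImageNet mean/std.]

```python
import torch

IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def denormalize_imagenet(x):
    # Inverse of (x - mean) / std, applied channel-wise to an NCHW batch.
    return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)
```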
James Betker
45bc76ba92
Fixes and mods to support training classifiers on ImageNet
2021-06-01 17:25:24 -06:00
James Betker
f129eaa39e
Clean up byol a bit
...
- Remove option to aug in dataset (there's really no reason for this now that kornia works on GPU on Windows)
- Other stuff
2021-05-24 21:35:46 -06:00
James Betker
6649ef2dae
Add zipfilesdataset
2021-05-24 21:35:00 -06:00
James Betker
1a2b9fa130
Get rid of old byol net wrapping
...
Simplifies and makes this usable with DLAS' multi-GPU trainer
2021-04-27 12:48:34 -06:00
James Betker
119f17c808
Add testing capabilities for segformer & contrastive feature
2021-04-27 09:59:50 -06:00
James Betker
9bbe6fc81e
Get segformer to a trainable state
2021-04-25 11:45:20 -06:00
James Betker
23e01314d4
Add dataset, ui for labeling and evaluator for pointwise classification
2021-04-23 17:17:13 -06:00
James Betker
fc623d4b5a
Add segformer model. Start work on BYOL adaptation that will support training it.
2021-04-23 17:16:46 -06:00
James Betker
17555e7d07
misc adjustments for stylegan
2021-04-21 18:14:17 -06:00
James Betker
b687ef4cd0
Misc
2021-04-21 18:09:46 -06:00
James Betker
94e069bced
Misc changes
2021-03-13 10:45:26 -07:00
James Betker
9fc3df3f5b
Switched conv: add conversion function with allowlist
2021-03-13 10:44:56 -07:00
James Betker
cf9a6da889
Fix some bugs, checkin work on vqvae3
2021-03-02 20:56:19 -07:00
James Betker
f89ea5f1c6
Mods to support lightweight_gan model
2021-03-02 20:51:48 -07:00
James Betker
543d459b4e
extract_temporal_squares script
...
For extracting related patches across a video
2021-02-08 08:10:24 -07:00
James Betker
39fd755baa
New benchmark numbers
2021-02-08 08:09:41 -07:00
James Betker
784b96c059
Misc options to add support for training stylegan2-rosinality models:
...
- Allow image_folder_dataset to normalize inbound images
- ExtensibleTrainer can denormalize images on the output path
- Support .webp - an output from LSUN
- Support logistic GAN divergence loss
- Support stylegan2 TF weight extraction for discriminator
- New injector that produces latent noise (with separated paths)
- Modify FID evaluator to be operable with rosinality-style GANs
2021-02-08 08:09:21 -07:00
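[Editor's note: the logistic GAN divergence named above is the non-saturating loss used by StyleGAN2; shown here for reference.]

```python
import torch.nn.functional as F

def logistic_d_loss(real_pred, fake_pred):
    # Discriminator: softplus(-D(real)) + softplus(D(fake))
    return F.softplus(-real_pred).mean() + F.softplus(fake_pred).mean()

def logistic_g_loss(fake_pred):
    # Generator (non-saturating): softplus(-D(fake))
    return F.softplus(-fake_pred).mean()
```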
James Betker
e7be4bdff3
Revert
2021-02-05 08:43:07 -07:00
James Betker
6dec1f5968
Back to groupnorm
2021-02-05 08:42:11 -07:00
James Betker
336f807c8e
lambda2
2021-02-05 00:00:24 -07:00
James Betker
025a5867c4
Use syncbatchnorm instead
2021-02-04 22:26:36 -07:00
James Betker
bb79fafb89
Fix groupnorm specification
2021-02-04 22:15:38 -07:00
James Betker
43da1f9c4b
Convert lambda coupler to use groupnorm instead of batchnorm
2021-02-04 21:59:44 -07:00
James Betker
7070142805
Make vqvae3_hard more configurable
2021-02-04 09:03:22 -07:00
James Betker
b980028ca8
Add get_debug_values for vqvae_3_hardswitch
2021-02-03 14:12:24 -07:00
James Betker
1405ff06b8
Fix SwitchedConvHardRoutingFunction for current cuda router
2021-02-03 14:11:55 -07:00
James Betker
d7bec392dd
...
2021-02-02 23:50:25 -07:00
James Betker
b0a8fa00bc
Visual dbg in vqvae3hs
2021-02-02 23:50:01 -07:00
James Betker
f5f91850fd
hardswitch variant of vqvae3
2021-02-02 21:00:04 -07:00
James Betker
320edbaa3c
Move switched_conv logic around a bit
2021-02-02 20:41:24 -07:00
James Betker
0dca36946f
Hard Routing mods
...
- Turns out my custom convolution was RIDDLED with backward-pass bugs, which is
why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
for testing purposes.
2021-02-02 20:35:58 -07:00
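[Editor's note: a sketch of the Switch-Transformer-style top-1 routing referenced here: softmax the gate, keep the argmax expert, and scale its output by the gate probability so the hard selection still carries gradient. switch_forward is an illustrative name, and the experts are assumed to be nn.Linear modules.]

```python
import torch
import torch.nn.functional as F

def switch_forward(x, gate, experts):
    probs = F.softmax(gate(x), dim=-1)  # (batch, num_experts)
    weight, idx = probs.max(dim=-1)     # top-1 expert per sample
    out = torch.zeros(x.size(0), experts[0].out_features, device=x.device)
    for e, expert in enumerate(experts):
        mask = idx == e
        if mask.any():
            # Scaling by the gate probability keeps routing differentiable.
            out[mask] = expert(x[mask]) * weight[mask].unsqueeze(-1)
    return out
```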
James Betker
29c1c3bede
Register vqvae3
2021-01-29 15:26:28 -07:00
James Betker
bc20b4739e
vqvae3
...
Changes VQVAE as so:
- Reverts back to smaller codebook
- Adds an additional conv layer at the highest resolution for both the encoder & decoder
- Uses LeakyReLU on trunk
2021-01-29 15:24:26 -07:00
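[Editor's note: for context, the quantizer at the heart of any VQ-VAE variant: nearest-neighbor codebook lookup with a straight-through gradient. A generic sketch, not vqvae3's exact code.]

```python
import torch

def vq_quantize(z, codebook):
    # z: (N, dim) flattened encoder outputs; codebook: (K, dim)
    idx = torch.cdist(z, codebook).argmin(dim=-1)  # nearest code per vector
    z_q = codebook[idx]
    # Straight-through estimator: gradients flow to z as if quantization
    # were the identity.
    return z + (z_q - z).detach(), idx
```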
James Betker
96bc80313c
Add switch norm, up dropout rate, detach selector
2021-01-26 09:31:53 -07:00
James Betker
97d895aebe
Add SrPixLoss, which focuses pixel-based losses on high-frequency regions
...
of the image.
2021-01-25 08:26:14 -07:00
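[Editor's note: a hypothetical sketch of such a loss: estimate high-frequency content as the residual against a blurred copy of the target and use it to weight a per-pixel L1 term. high_freq_weighted_l1 is an illustrative name, not SrPixLoss's actual formulation.]

```python
import torch.nn.functional as F

def high_freq_weighted_l1(pred, target, blur_kernel=5):
    # Blurring removes low frequencies; the residual magnitude marks edges/texture.
    blur = F.avg_pool2d(target, blur_kernel, stride=1, padding=blur_kernel // 2)
    weight = (target - blur).abs()
    weight = weight / (weight.mean() + 1e-8)  # keep the overall loss scale stable
    return (weight * (pred - target).abs()).mean()
```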
James Betker
2cdac6bd09
Add PWCNet for human optical flow
2021-01-25 08:25:44 -07:00
James Betker
51b63b2aa6
Add switched_conv with hard routing and make vqvae use it.
2021-01-25 08:25:29 -07:00
James Betker
ae4ff4a1e7
Enable lambda visualization
2021-01-23 15:53:27 -07:00
James Betker
10ec6bda1d
lambda nets in switched_conv and a vqvae to use it
2021-01-23 14:57:57 -07:00
James Betker
b374dcdd46
update vqvae to double codebook size for bottom quantizer
2021-01-23 13:47:07 -07:00
James Betker
dac7d768fa
test uresnet playground mods
2021-01-23 13:46:43 -07:00
James Betker
1b8a26db93
New switched_conv
2021-01-23 13:46:30 -07:00
James Betker
557cdec116
misc
2021-01-23 13:45:17 -07:00
James Betker
d919ae7148
Add VQVAE with no Conv2dTranspose
2021-01-18 08:49:59 -07:00
James Betker
587a4f4050
resnet_unet_3
...
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of a direction I want to go for testing these pixpro networks.
2021-01-15 14:51:03 -07:00
James Betker
038b8654b6
Pixpro: unwrap losses
2021-01-13 11:54:25 -07:00
James Betker
8990801a3f
Fix pixpro stochastic sampling bugs
2021-01-13 11:34:24 -07:00
James Betker
19475a072f
Pixpro: Rather than using a latent square for pixpro, use an entirely stochastic sampling of the pixels
2021-01-13 11:26:51 -07:00
James Betker
d1007ccfe7
Adjustments to pixpro to allow training against networks with arbitrarily large structural latents
...
- The pixpro latent now rescales the latent space instead of using a "coordinate vector", which
**might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
out of the entire latent, allowing further memory/compute savings. Since the loss
computation does not have a receptive field, this should not alter the loss.
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
2021-01-12 09:17:45 -07:00
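[Editor's note: a sketch of the random-patch trick from the second bullet above: because the pixel loss is computed pointwise (no receptive field), a random crop of the latent gives an unbiased, cheaper estimate of the full loss. sample_latent_patch is an illustrative name.]

```python
import torch

def sample_latent_patch(latent, patch_size):
    # latent: (B, C, H, W); return a random patch shared across the batch.
    b, c, h, w = latent.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    return latent[:, :, top:top + patch_size, left:left + patch_size]
```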
James Betker
34f8c8641f
Support training an ImageNet classifier
2021-01-11 20:09:16 -07:00
James Betker
f3db381fa1
Allow uresnet to use pretrained resnet50
2021-01-10 12:57:31 -07:00
James Betker
4119cd6240
Fix to image_folder_dataset to accommodate images with mismatched dimensions
2021-01-10 12:57:21 -07:00
James Betker
48f0d8964b
Allow dist_backend to be specified in options
2021-01-09 20:54:32 -07:00
James Betker
14a868e8e6
byol playground updates
2021-01-09 20:54:21 -07:00