DL-Art-School/codes
James Betker d1007ccfe7 Adjustments to pixpro to allow training against networks with arbitrarily large structural latents
- PixPro now rescales the latent space instead of using a "coordinate vector", which
   **might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
   out of the entire latent, allowing further memory/computational savings. Since the loss
   computation does not have a receptive field, this should not alter the loss (a sketch follows below).
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
2021-01-12 09:17:45 -07:00
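To make the patch-sampling and rescaling ideas from the commit message concrete, here is a minimal sketch. It assumes PyTorch latents of shape [B, C, H, W]; the function names, the bilinear rescaling choice, and all parameters are illustrative assumptions, not the actual DL-Art-School API.

```python
# Hypothetical sketch only -- not the repository's implementation.
import torch
import torch.nn.functional as F


def sample_latent_patch(latent, patch_size):
    """Randomly crop a (patch_size x patch_size) region from a [B, C, H, W] latent.

    Because the pixel-wise loss has no receptive field (each position is compared
    only against its counterpart at the same location), computing the loss on a
    random patch should leave its expectation unchanged while cutting memory and
    compute roughly by (patch_size / H) * (patch_size / W).
    """
    b, c, h, w = latent.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    patch = latent[:, :, top:top + patch_size, left:left + patch_size]
    return patch, (top, left)


def rescale_latent(latent, target_hw):
    """Rescale the latent to a target spatial size.

    Stands in for the "rescale the latent space instead of a coordinate vector"
    change: both views' latents are brought to a common resolution before the
    pixel loss, so arbitrarily large structural latents can be handled.
    """
    return F.interpolate(latent, size=target_hw, mode='bilinear', align_corners=False)
```

As a usage sketch, one would rescale both views' latents to the same size, draw a single crop location, and slice both latents with it so corresponding positions still line up before the pixel loss is applied.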
.idea
data Support training imagenet classifier 2021-01-11 20:09:16 -07:00
models Adjustments to pixpro to allow training against networks with arbitrarily large structural latents 2021-01-12 09:17:45 -07:00
scripts Adjustments to pixpro to allow training against networks with arbitrarily large structural latents 2021-01-12 09:17:45 -07:00
trainer Support training imagenet classifier 2021-01-11 20:09:16 -07:00
utils Enable vqvae to use a switched_conv variant 2021-01-09 20:53:14 -07:00
multi_modal_train.py
process_video.py Fix process_video 2021-01-09 20:53:46 -07:00
requirements.txt Support training imagenet classifier 2021-01-11 20:09:16 -07:00
test_image_patch_classifier.py
test.py
train.py Adjustments to pixpro to allow training against networks with arbitrarily large structural latents 2021-01-12 09:17:45 -07:00