- Turns out my custom convolution was RIDDLED with backwards bugs, which is
why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers
for testing purposes.
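For reference, here is a minimal sketch of the top-1 "switch" routing this refers to; the module and parameter names are my own, not necessarily what ends up in the repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchGate(nn.Module):
    """Top-1 ('switch') routing a la Switch Transformers: each token is routed
    to the single expert with the highest router probability."""
    def __init__(self, dim, num_experts):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):
        # x: (batch, tokens, dim)
        probs = F.softmax(self.router(x), dim=-1)   # (batch, tokens, num_experts)
        gate, expert_idx = probs.max(dim=-1)        # top-1 probability and its expert index
        return gate, expert_idx

class SwitchFFN(nn.Module):
    """Dispatches each token to exactly one expert FFN and scales the output by the gate."""
    def __init__(self, dim, hidden, num_experts):
        super().__init__()
        self.gate = SwitchGate(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        gate, idx = self.gate(x)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out * gate.unsqueeze(-1)
```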
Changes VQVAE as follows:
- Reverts to a smaller codebook
- Adds an additional conv layer at the highest resolution for both the encoder & decoder
- Uses LeakyReLU on the trunk (a sketch follows the list)
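A rough sketch of what the encoder trunk might look like with these changes; channel counts, kernel sizes, downsampling depth and the LeakyReLU slope are placeholders rather than the actual config, and the decoder would mirror it:

```python
import torch.nn as nn

class VQVAEEncoder(nn.Module):
    """Hypothetical trunk: an extra conv at full input resolution, LeakyReLU
    activations, then strided convs down to the codebook dimension."""
    def __init__(self, in_ch=3, ch=128, codebook_dim=64):
        super().__init__()
        self.trunk = nn.Sequential(
            # Additional conv layer at the highest (input) resolution.
            nn.Conv2d(in_ch, ch // 2, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch // 2, ch, 4, stride=2, padding=1),   # 2x downsample
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch, 4, stride=2, padding=1),        # 4x downsample
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, codebook_dim, 3, padding=1),        # project to codebook dim
        )

    def forward(self, x):
        return self.trunk(x)
```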
I'm being really lazy here - these nets are not really different from each other
except at which layer they terminate. This one terminates at 2x downsampling,
which is simply indicative of the direction I want to go in for testing these pixpro networks.
- The pixpro latent now rescales the latent space instead of using a "coordinate vector", which
**might** have performance implications.
- The latent against which the pixel loss is computed can now be a small, randomly sampled patch
out of the entire latent, allowing further memory/compute savings. Since the loss
computation does not have a receptive field, this should not alter the loss (see the sketch after this list).
- The instance projection size can now be separate from the pixel projection size.
- PixContrast removed entirely.
- ResUnet with full resolution added.
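Here is a minimal sketch of the random-patch idea for the pixel loss; the tensor layout (B, C, H, W) and the patch size are assumptions:

```python
import torch

def sample_latent_patch(latent_a, latent_b, patch_size=8):
    """Crop the same random spatial window out of two aligned latents so the
    pixel-wise loss only sees a subset of positions. Because the loss is
    computed pointwise (no receptive field), this is just an unbiased
    subsample of the full-latent loss."""
    n, c, h, w = latent_a.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    patch_a = latent_a[:, :, top:top + patch_size, left:left + patch_size]
    patch_b = latent_b[:, :, top:top + patch_size, left:left + patch_size]
    return patch_a, patch_b
```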
This is a concept from "Lifelong Learning GAN", although I'm skeptical of its novelty -
basically you scale and shift the weights for the generator and discriminator of a pretrained
GAN to "shift" into new modalities, e.g. faces->birds or whatever. There are some interesting
applications of this that I would like to try out.
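A rough sketch of the scale-and-shift idea applied to a single pretrained conv layer; the wrapper and parameter names are mine, and how it composes into a whole generator/discriminator is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleShiftConv(nn.Module):
    """Wraps a frozen pretrained conv; only a learned per-output-channel scale
    and shift of its weights is trained when adapting to a new modality."""
    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        self.conv = pretrained_conv
        for p in self.conv.parameters():
            p.requires_grad = False
        out_ch = self.conv.out_channels
        self.scale = nn.Parameter(torch.ones(out_ch, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(out_ch, 1, 1, 1))

    def forward(self, x):
        w = self.conv.weight * self.scale + self.shift
        return F.conv2d(x, w, self.conv.bias, stride=self.conv.stride,
                        padding=self.conv.padding, dilation=self.conv.dilation,
                        groups=self.conv.groups)
```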
- Added LARS and SGD optimizer variants that support turning off certain
features for BN and bias layers (see the sketch after this list)
- Added a variant of PyTorch's resnet model that supports gradient checkpointing.
- Modified the trainer infrastructure to support the above.
- Fixed a bug with BYOL (should have been nonfunctional).
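As a reference for the optimizer change, here is a sketch of the kind of per-group exclusion I mean for LARS; the flag names, defaults, and grouping are hypothetical, not the repo's actual API:

```python
import torch
from torch.optim.optimizer import Optimizer

class LARS(Optimizer):
    """SGD with layer-wise adaptive rate scaling. Parameter groups can set
    exclude_from_adaptation / exclude_from_weight_decay (typically for BN and
    bias parameters) to skip those behaviours."""
    def __init__(self, params, lr, momentum=0.9, weight_decay=0.0, trust_coeff=0.001):
        defaults = dict(lr=lr, momentum=momentum, weight_decay=weight_decay,
                        trust_coeff=trust_coeff,
                        exclude_from_adaptation=False,
                        exclude_from_weight_decay=False)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                g = p.grad
                if group['weight_decay'] and not group['exclude_from_weight_decay']:
                    g = g.add(p, alpha=group['weight_decay'])
                if not group['exclude_from_adaptation']:
                    w_norm, g_norm = p.norm(), g.norm()
                    if w_norm > 0 and g_norm > 0:
                        g = g.mul(group['trust_coeff'] * w_norm / g_norm)
                buf = self.state[p].setdefault('momentum_buffer', torch.zeros_like(p))
                buf.mul_(group['momentum']).add_(g)
                p.add_(buf, alpha=-group['lr'])
```

Usage would look like splitting the model's parameters into two groups (regular weights vs. BN/bias) and setting the exclusion flags on the second group.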