forked from mrq/DL-Art-School
0dca36946f
- Turns out my custom convolution was RIDDLED with backwards bugs, which is why the existing implementation wasn't working so well.
- Implements the switch logic from both Mixture of Experts and Switch Transformers for testing purposes (see the routing sketch after the file list below).
__init__.py
kmeans_mask_producer.py
scaled_weight_conv.py
vqvae_3.py
vqvae_no_conv_transpose_hardswitched_lambda.py
vqvae_no_conv_transpose_switched_lambda.py
vqvae_no_conv_transpose.py
vqvae.py
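
For context on the two routing styles the commit message names (and that the `hardswitched` vs. `switched` file variants above suggest), here is a minimal PyTorch sketch, not the repo's actual code: hard top-1 routing in the style of Switch Transformers alongside classic soft Mixture-of-Experts mixing. All names here (`SwitchRouter`, `n_experts`, `hard`) are illustrative assumptions, not identifiers from DL-Art-School.

```python
# Illustrative sketch only; not taken from this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchRouter(nn.Module):
    def __init__(self, dim: int, n_experts: int, hard: bool = True):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)  # router: per-input expert logits
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
        self.hard = hard

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim)
        probs = F.softmax(self.gate(x), dim=-1)                   # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, dim)
        if self.hard:
            # Switch Transformers style: each input goes to its single top-1
            # expert, scaled by the gate probability so the router still
            # receives gradient through the selection.
            top_p, top_i = probs.max(dim=-1)                      # (batch,)
            chosen = outs[torch.arange(x.size(0)), top_i]         # (batch, dim)
            return chosen * top_p.unsqueeze(-1)
        # Classic Mixture of Experts: probability-weighted sum over all experts.
        return (probs.unsqueeze(-1) * outs).sum(dim=1)

# Usage: hard (top-1) vs. soft routing over the same inputs.
x = torch.randn(8, 64)
print(SwitchRouter(64, 4, hard=True)(x).shape)   # torch.Size([8, 64])
print(SwitchRouter(64, 4, hard=False)(x).shape)  # torch.Size([8, 64])
```

The `hard` flag mirrors the split between the hardswitched and switched lambda variants: top-1 dispatch runs only one expert per input, while soft mixing evaluates all of them and blends the outputs.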