forked from mrq/DL-Art-School
commit 34774f9948
I'd like to try some different (newer) transformer variants. The way to get there is softly decoupling the transformer portion of this architecture from GPT. This actually should be fairly easy.
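As context for the refactor this commit describes, here is a minimal sketch, in PyTorch, of what softly decoupling the transformer stack from a GPT-style model can look like: the shell keeps the token/position embeddings and the LM head, while the attention stack is injected as a constructor argument so newer variants can be dropped in. The names here (`DecoupledGPT`, `VanillaTransformerStack`) are hypothetical illustrations, not code from this repository.

```python
import torch
import torch.nn as nn


class VanillaTransformerStack(nn.Module):
    """Baseline causal stack; any module whose forward(x) returns a tensor
    of the same shape can replace it (e.g. a newer transformer variant)."""

    def __init__(self, dim, depth, heads):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        # Additive causal mask: -inf above the diagonal keeps the stack autoregressive.
        seq_len = x.size(1)
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=x.device), diagonal=1
        )
        return self.encoder(x, mask=mask)


class DecoupledGPT(nn.Module):
    """GPT shell: embeddings and LM head stay put; the transformer is pluggable."""

    def __init__(self, vocab_size, dim, max_seq_len, transformer: nn.Module):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_seq_len, dim)
        self.transformer = transformer  # injected, not hard-wired to one GPT impl
        self.to_logits = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.token_emb(tokens) + self.pos_emb(positions)
        x = self.transformer(x)
        return self.to_logits(x)


if __name__ == "__main__":
    stack = VanillaTransformerStack(dim=256, depth=4, heads=8)
    model = DecoupledGPT(vocab_size=1000, dim=256, max_seq_len=128, transformer=stack)
    logits = model(torch.randint(0, 1000, (2, 64)))
    print(logits.shape)  # torch.Size([2, 64, 1000])
```

With this shape, trying a different variant is just a matter of passing a different `transformer` module; the embeddings, head, and the rest of the training pipeline stay untouched.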
byol/
classifiers/
diffusion/
fixup_resnet/
flownet2 @ db2b7899ea (submodule)
glean/
gpt_voice/
lucidrains/dalle/
optical_flow/
segformer/
spleeter/
srflow/
stylegan/
switched_conv/
tacotron2/
vqvae/
waveglow/
__init__.py
arch_util.py
audio_resnet.py
discriminator_vgg_arch.py
feature_arch.py
lightweight_gan.py
ResGen_arch.py
RRDBNet_arch.py
spinenet_arch.py