forked from mrq/DL-Art-School
Commit 42a10b34ce
Found out that batch norm is causing the switches to init really poorly - not using a significant number of transforms. Might be a great time to re-consider using the attention norm, but for now just re-enable it.
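For context, the commit message is about switches that select among several transforms and initialize poorly when their selection logits pass through batch norm. The sketch below is an illustration only, assuming a generic switched block; the class name, layer shapes, and the `use_batch_norm` flag are hypothetical and are not the code in SwitchedResidualGenerator_arch.py. It just shows where such a normalization would sit relative to a plain softmax ("attention norm" style) over the selection logits.

```python
# Hypothetical sketch of a switched block (not the repository's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSwitch(nn.Module):
    def __init__(self, channels, num_transforms, use_batch_norm=False):
        super().__init__()
        # Candidate transforms the switch chooses between.
        self.transforms = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_transforms)]
        )
        # Selector produces one logit map per transform.
        self.selector = nn.Conv2d(channels, num_transforms, 1)
        # Optional batch norm on the selection logits (the normalization at issue).
        self.norm = nn.BatchNorm2d(num_transforms) if use_batch_norm else None

    def forward(self, x):
        logits = self.selector(x)
        if self.norm is not None:
            logits = self.norm(logits)
        # Softmax over the transform dimension acts as the attention-style norm.
        weights = F.softmax(logits, dim=1)                       # (B, N, H, W)
        outs = torch.stack([t(x) for t in self.transforms], 1)   # (B, N, C, H, W)
        return (weights.unsqueeze(2) * outs).sum(dim=1)          # (B, C, H, W)
```

This only places the normalization in the forward pass for illustration; it does not reproduce the repository's actual switch computation or initialization.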
__init__.py
arch_util.py
AttentionResnet.py
discriminator_vgg_arch.py
DiscriminatorResnet_arch_passthrough.py
DiscriminatorResnet_arch.py
feature_arch.py
FlatProcessorNet_arch.py
FlatProcessorNetNew_arch.py
HighToLowResNet.py
ResGen_arch.py
RRDBNet_arch.py
SRResNet_arch.py
SwitchedResidualGenerator_arch.py