DL-Art-School/codes/models/archs
James Betker 42a10b34ce Re-enable batch norm on switch processing blocks
Found out that disabling batch norm causes the switches to init really
poorly: they end up not using a significant number of transforms. Might be
a good time to reconsider using the attention norm, but for now just
re-enable batch norm.
2020-06-24 21:15:17 -06:00
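The commit body above reasons that batch norm on the switch processing blocks keeps switch initialization healthy, so that routing spreads across many transforms instead of collapsing onto a few. As a rough illustration only, here is a minimal PyTorch sketch of that idea; the class and parameter names (SwitchProcessingBlock, num_transforms) are hypothetical, and the repository's actual implementation lives in SwitchedResidualGenerator_arch.py and may differ.

```python
# Hypothetical sketch, not the repository's actual code.
import torch
import torch.nn as nn


class SwitchProcessingBlock(nn.Module):
    """Produces per-transform routing weights for a switch.

    Batch norm is applied to the selection logits so they start out
    well-scaled; without it, the softmax over transforms can saturate
    early in training and the switch routes through only a few transforms.
    """

    def __init__(self, channels: int, num_transforms: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, num_transforms, kernel_size=1)
        # Re-enabled: normalizing the logits keeps the initial softmax
        # close to uniform across transforms.
        self.bn = nn.BatchNorm2d(num_transforms)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.bn(self.conv(x))
        # Softmax across the transform dimension assigns each spatial
        # location a weighting over the available transforms.
        return torch.softmax(logits, dim=1)
```

Under these assumptions, SwitchProcessingBlock(64, 8) applied to an (N, 64, H, W) feature map would yield an (N, 8, H, W) tensor of routing weights, one distribution over the 8 transforms per spatial location.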
__init__.py
arch_util.py Fix initialization in mhead switched rrdb 2020-06-15 21:32:03 -06:00
AttentionResnet.py Add attention resnet 2020-05-29 20:02:10 -06:00
discriminator_vgg_arch.py Add capability to place additional conv into discriminator 2020-06-23 09:40:33 -06:00
DiscriminatorResnet_arch_passthrough.py
DiscriminatorResnet_arch.py
feature_arch.py Fix process_video bugs 2020-05-29 12:47:22 -06:00
FlatProcessorNet_arch.py
FlatProcessorNetNew_arch.py
HighToLowResNet.py
ResGen_arch.py Apply fixes to resgen 2020-05-24 07:43:23 -06:00
RRDBNet_arch.py Fix initialization in mhead switched rrdb 2020-06-15 21:32:03 -06:00
SRResNet_arch.py
SwitchedResidualGenerator_arch.py Re-enable batch norm on switch processing blocks 2020-06-24 21:15:17 -06:00