forked from mrq/DL-Art-School
Commit 747ded2bf7:
Some lessons learned:
- Biases are fairly important as a relief valve. They don't need to be everywhere, but most computationally heavy branches should have a bias.
- GroupNorm in SPSR is not a great idea. Since image gradients are represented in this model, normal means and standard deviations are not applicable (imggrad has a high representation of 0).
- Don't fuck with the mainline of any generative model. As much as possible, all additions should be done through residual connections. Never pollute the mainline with reference data; do that in branches. Polluting the mainline basically leaves the model untrainable.
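The last point suggests a concrete pattern. Below is a minimal PyTorch sketch (not code from this repository; the module name `ResidualRefBranch` and its parameters are hypothetical) of a computationally heavy branch that carries its own bias and folds reference data back into the mainline only through a residual addition, so the mainline tensor itself is never overwritten.

```python
import torch
import torch.nn as nn


class ResidualRefBranch(nn.Module):
    """Hypothetical example of the pattern described above: reference data is
    consumed only inside a biased side branch whose output is added back to
    the mainline through a residual connection."""

    def __init__(self, channels: int, ref_channels: int):
        super().__init__()
        # Computationally heavy branch; its convolutions carry biases,
        # acting as the "relief valve" mentioned above.
        self.branch = nn.Sequential(
            nn.Conv2d(channels + ref_channels, channels, 3, padding=1, bias=True),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=True),
        )

    def forward(self, x: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
        # The mainline `x` is never overwritten with reference data; the
        # branch only contributes an additive correction.
        return x + self.branch(torch.cat([x, ref], dim=1))


# Usage sketch: a mainline feature map plus a reference feature map of the
# same spatial size.
x = torch.randn(1, 64, 32, 32)
ref = torch.randn(1, 32, 32, 32)
out = ResidualRefBranch(64, 32)(x, ref)
assert out.shape == x.shape
```

The design choice reflected here is that `x` flows through unchanged and the branch only adds a correction, which is the "additions through residual connections" rule above; even a poorly conditioned branch then degrades the output additively rather than making the mainline untrainable.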
__init__.py
arch_util.py
AttentionResnet.py
discriminator_vgg_arch.py
DiscriminatorResnet_arch_passthrough.py
DiscriminatorResnet_arch.py
feature_arch.py
FlatProcessorNet_arch.py
FlatProcessorNetNew_arch.py
HighToLowResNet.py
NestedSwitchGenerator.py
ProgressiveSrg_arch.py
ResGen_arch.py
RRDBNet_arch.py
spinenet_arch.py
SPSR_arch.py
SPSR_util.py
SRResNet_arch.py
SwitchedResidualGenerator_arch.py