747ded2bf7
Some lessons learned:

- Biases are fairly important as a relief valve. They don't need to be everywhere, but most computationally heavy branches should have a bias.
- GroupNorm in SPSR is not a great idea. Since image gradients are represented in this model, normal means and standard deviations are not applicable (the image-gradient distribution is heavily concentrated at 0).
- Don't fuck with the mainline of any generative model. As much as possible, all additions should be done through residual connections. Never pollute the mainline with reference data; do that in branches. Polluting the mainline basically leaves the model untrainable.
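The residual-connection principle above can be sketched minimally. This is an illustrative toy, not the repo's actual architecture: `mainline`, `ref_branch`, and `forward` are hypothetical stand-ins showing that reference data only ever enters through a side branch whose output is added residually, so the trunk's signal path is untouched.

```python
import numpy as np

def mainline(x):
    # stand-in for the generator trunk: a simple linear map
    return 0.9 * x

def ref_branch(x, ref):
    # reference data is processed only in this side branch,
    # never injected into the trunk directly
    return 0.1 * np.tanh(x * ref)

def forward(x, ref):
    # the branch contributes via a residual add; the mainline
    # is never replaced, concatenated over, or "polluted"
    return mainline(x) + ref_branch(x, ref)
```

With a zero reference the branch contributes nothing and the trunk behaves exactly as it would alone, which is what keeps the model trainable when the branch is still learning.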
- archs
- steps
- __init__.py
- base_model.py
- ExtensibleTrainer.py
- feature_model.py
- loss.py
- lr_scheduler.py
- networks.py
- novograd.py
- SR_model.py
- SRGAN_model.py