forked from mrq/DL-Art-School
42a10b34ce
Found out that batch norm is causing the switches to init really poorly - not using a significant number of transforms. Might be a great time to reconsider using the attention norm, but for now just re-enable it.
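For context, the "switches" here are switched-convolution blocks that pick among several candidate transforms at each position; the commit notes that normalizing the selector with batch norm left most transforms unused at init. Below is a minimal, hypothetical PyTorch sketch of a switch whose selector is instead normalized with a softmax (an attention-style norm). The class and parameter names (`SoftmaxSwitchedConv`, `num_transforms`) are illustrative assumptions, not the repo's actual SwitchedConvolutions API.

```python
# Sketch only: a switched conv whose per-pixel transform selection is
# normalized with softmax ("attention norm") rather than batch norm.
# Names and structure are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftmaxSwitchedConv(nn.Module):
    def __init__(self, in_ch, out_ch, num_transforms=8, kernel_size=3):
        super().__init__()
        # Bank of candidate transforms the switch chooses between.
        self.transforms = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(num_transforms)
        ])
        # Selector produces one logit per transform at every spatial position.
        self.selector = nn.Conv2d(in_ch, num_transforms, 1)

    def forward(self, x):
        logits = self.selector(x)                # (B, T, H, W)
        weights = F.softmax(logits, dim=1)       # attention-style normalization
        # Apply every transform, then mix them with the selection weights.
        outputs = torch.stack([t(x) for t in self.transforms], dim=1)  # (B, T, C, H, W)
        return (outputs * weights.unsqueeze(2)).sum(dim=1)             # (B, C, H, W)


if __name__ == "__main__":
    m = SoftmaxSwitchedConv(16, 32)
    y = m(torch.randn(2, 16, 24, 24))
    print(y.shape)  # torch.Size([2, 32, 24, 24])
```

Because the softmax spreads nonzero weight over every transform at initialization, all transforms receive gradient from the start, which is presumably the behavior the commit says batch-norm-based switching was preventing.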
archs
__init__.py
base_model.py
loss.py
lr_scheduler.py
networks.py
SR_model.py
SRGAN_model.py