DL-Art-School/codes/models
Latest commit 42a10b34ce by James Betker (2020-06-24 21:15:17 -06:00)
Re-enable batch norm on switch processing blocks

Found out that running the switches without batch norm causes them to init
really poorly - they don't use a significant number of transforms. Might be
a great time to reconsider using the attention norm, but for now just
re-enable batch norm.
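
For context, here is a minimal PyTorch sketch of a switched block in the spirit of the commit message: several candidate transforms, a learned selector that soft-weights them per pixel, and a BatchNorm inside each transform's processing path (the layer this commit turns back on). The class and parameter names are illustrative assumptions, not the actual code in archs/.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProcessingBlock(nn.Module):
    """One candidate transform inside a switch. The BatchNorm here is the
    layer the commit toggles; use_bn=False reproduces the disabled state."""
    def __init__(self, channels, use_bn=True):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm = nn.BatchNorm2d(channels) if use_bn else nn.Identity()

    def forward(self, x):
        return F.relu(self.norm(self.conv(x)))

class Switch(nn.Module):
    """Soft switch: a 1x1-conv selector produces per-pixel weights over the
    candidate transforms; the output is their weighted sum."""
    def __init__(self, channels, num_transforms=4, use_bn=True):
        super().__init__()
        self.transforms = nn.ModuleList(
            ProcessingBlock(channels, use_bn) for _ in range(num_transforms))
        self.selector = nn.Conv2d(channels, num_transforms, 1)

    def forward(self, x):
        w = torch.softmax(self.selector(x), dim=1)            # (B, T, H, W)
        y = torch.stack([t(x) for t in self.transforms], 1)   # (B, T, C, H, W)
        return (w.unsqueeze(2) * y).sum(dim=1)                # (B, C, H, W)

# Rough way to see the symptom at init: average selection weight per transform.
sw = Switch(channels=16)
with torch.no_grad():
    w = torch.softmax(sw.selector(torch.randn(2, 16, 32, 32)), dim=1)
print(w.mean(dim=(0, 2, 3)))
```

The printout gives the average selection weight per transform at initialization: a healthy switch is near-uniform (about 1/num_transforms each), while the failure mode described above would concentrate most of the mass on a few transforms.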
archs/
__init__.py
base_model.py
loss.py
lr_scheduler.py
networks.py
SR_model.py
SRGAN_model.py
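
Judging by the file names, this directory appears to follow the layout of MMSR, which this codebase descends from: base_model.py holds the training-harness base class, networks.py constructs architectures from archs/, loss.py and lr_scheduler.py supply training components, and SR_model.py / SRGAN_model.py wrap supervised and adversarial SR training respectively. Below is a hedged sketch of the usual __init__.py dispatch in that lineage; create_model, the 'sr'/'srgan' keys, and the class names are assumptions from MMSR, not verified against this commit.

```python
# Illustrative sketch of an MMSR-style create_model() dispatcher (assumed,
# not the verified contents of this repo's __init__.py).
def create_model(opt):
    model_type = opt['model']
    if model_type == 'sr':        # plain supervised SR training (SR_model.py)
        from models.SR_model import SRModel as M
    elif model_type == 'srgan':   # adversarial SR training (SRGAN_model.py)
        from models.SRGAN_model import SRGANModel as M
    else:
        raise NotImplementedError(f'Model [{model_type}] is not recognized.')
    return M(opt)
```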