DL-Art-School/codes/models
James Betker 44b89330c2 Support inference across batches, support inference on cpu, checkpoint
This is a checkpoint of a set of long tests with reduced-complexity networks. Some takeaways:
1) A full GAN using the ResNet discriminator does appear to converge, but the quality is capped.
2) A combined GAN/feature loss (sketched below) fares no better and does not converge: the feature loss
    is optimized, but the generator appears unable to fight the discriminator, so the G-loss steadily increases.

Going forward, I want to try some bigger models. In particular, I want to change the generator
to increase its complexity and capacity. I also want to add skip connections between the
discriminator and generator.
2020-05-04 08:48:25 -06:00
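
For reference on takeaway 2, here is a minimal sketch of a combined GAN/feature (perceptual) loss for the generator, in the style of SRGAN-type training. The loss weights, the VGG feature extractor `netF`, and the other names are assumptions for illustration, not the repository's actual SRGAN_model.py code.

```python
# Hedged sketch of a combined feature + adversarial generator loss.
# netG, netD, netF and the weights below are hypothetical stand-ins.
import torch
import torch.nn as nn

l_fea_w = 1.0   # feature (perceptual) loss weight -- assumed value
l_gan_w = 5e-3  # adversarial loss weight -- assumed value

cri_fea = nn.L1Loss()
cri_gan = nn.BCEWithLogitsLoss()

def generator_loss(netG, netD, netF, lq, hq):
    """Combined feature + adversarial loss for one batch of LQ/HQ image pairs."""
    fake = netG(lq)
    # Feature (perceptual) loss: match extracted features of generated and real images.
    l_fea = l_fea_w * cri_fea(netF(fake), netF(hq).detach())
    # Adversarial loss: push the discriminator toward labelling fakes as real.
    pred_fake = netD(fake)
    l_gan = l_gan_w * cri_gan(pred_fake, torch.ones_like(pred_fake))
    return l_fea + l_gan
```

In the failure mode described above, `l_fea` keeps decreasing while `l_gan` grows without bound.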
archs Fixup upconv for the next attempt! 2020-05-01 19:56:14 -06:00
__init__.py Implement downsample GAN 2020-04-24 00:00:46 -06:00
base_model.py Enable AMP optimizations & write sample train images to folder. 2020-04-21 16:28:06 -06:00
loss.py mmsr 2019-08-23 21:42:47 +08:00
lr_scheduler.py mmsr 2019-08-23 21:42:47 +08:00
networks.py Turn off EVDR (so we dont need the weird convs) 2020-05-02 17:47:14 -06:00
SR_model.py Support inference across batches, support inference on cpu, checkpoint 2020-05-04 08:48:25 -06:00
SRGAN_model.py Support inference across batches, support inference on cpu, checkpoint 2020-05-04 08:48:25 -06:00
Video_base_model.py mmsr 2019-08-23 21:42:47 +08:00
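
The commit title ("Support inference across batches, support inference on cpu") touches SR_model.py and SRGAN_model.py. The following is a minimal, hypothetical sketch of what batched CPU inference looks like in PyTorch; the stand-in model, batch size, and tensor shapes are assumptions, not the repository's implementation.

```python
# Hedged sketch: run inference on CPU, splitting the inputs into batches.
import torch
import torch.nn as nn

# Stand-in for a super-resolution generator (not the repository's network).
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Upsample(scale_factor=4))

device = torch.device('cpu')        # force inference on CPU
model = model.to(device).eval()

lq_images = torch.rand(20, 3, 32, 32)   # 20 low-resolution inputs (dummy data)
outputs = []
with torch.no_grad():
    for batch in torch.split(lq_images, 8):           # 8 images per forward pass
        outputs.append(model(batch.to(device)).cpu())
results = torch.cat(outputs, dim=0)                    # (20, 3, 128, 128) upscaled outputs
```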