DL-Art-School/codes/models
Latest commit: 44a19cd37c by James Betker: ExtensibleTrainer mods to support advanced checkpointing for stylegan2
Basically: stylegan2 makes use of gradient-based regularizers. These
make it so that I cannot use gradient checkpointing. But I love gradient
checkpointing. It makes things really, really fast and memory-conscious.

So: skip checkpointing only when we run the regularizer loss. This is a
bit messy, but it speeds up training by at least 20%.

Also, pytorch: please make checkpointing a first-class citizen.
2020-11-12 15:45:07 -07:00
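
For context, the workaround described in the commit amounts to a conditional wrapper around torch.utils.checkpoint: checkpoint every block on ordinary training steps, and fall back to a plain forward pass whenever the stylegan2 regularizer loss is computed, since its gradient-based terms need higher-order gradients that (reentrant) checkpointing cannot provide. The sketch below illustrates the pattern only; it is not the repository's ExtensibleTrainer code, and the possible_checkpoint helper and run_checkpointing flag are illustrative names, not the project's API.

    # Minimal sketch of "checkpoint everywhere except on regularizer steps".
    # Names here (possible_checkpoint, run_checkpointing) are illustrative.
    import torch
    import torch.nn as nn
    import torch.utils.checkpoint as cp


    def possible_checkpoint(run_checkpointing, fn, *args):
        # Checkpoint fn(*args) for memory savings, unless the caller has
        # disabled checkpointing (e.g. while computing a regularizer loss
        # that needs the full autograd graph for its gradient terms).
        if run_checkpointing:
            return cp.checkpoint(fn, *args)
        return fn(*args)


    class TinyGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            self.blocks = nn.ModuleList(nn.Linear(64, 64) for _ in range(8))

        def forward(self, x, run_checkpointing=True):
            # Normal steps checkpoint every block; regularizer steps pass
            # run_checkpointing=False so double backward is possible.
            for block in self.blocks:
                x = possible_checkpoint(run_checkpointing, block, x)
            return x


    # Usage on a regularizer step: run without checkpointing so that
    # torch.autograd.grad(..., create_graph=True) works, then build a toy
    # gradient-based regularizer term from the resulting gradient.
    net = TinyGenerator()
    x = torch.randn(4, 64, requires_grad=True)
    out = net(x, run_checkpointing=False).sum()
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    penalty = grad.pow(2).sum()
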
Name                  Last commit                                                              Last updated
archs                 stylegan2 in ml art school!                                              2020-11-12 15:42:05 -07:00
experiments           Add in experiments hook                                                  2020-09-19 10:05:25 -06:00
flownet2@db2b7899ea   Update flownet submodule                                                 2020-10-24 11:59:00 -06:00
steps                 stylegan2 in ml art school!                                              2020-11-12 15:42:05 -07:00
__init__.py           Lots of new discriminator nets                                           2020-11-10 16:06:54 -07:00
base_model.py         Extra logging for teco_resgen                                            2020-10-28 15:21:22 -06:00
ExtensibleTrainer.py  ExtensibleTrainer mods to support advanced checkpointing for stylegan2  2020-11-12 15:45:07 -07:00
feature_model.py      Feature mode -> back to LR fea                                           2020-09-11 13:09:55 -06:00
loss.py               More adjustments to support multi-modal training                         2020-10-22 16:49:34 -06:00
lr_scheduler.py       Extensible trainer (in progress)                                         2020-08-12 08:45:23 -06:00
networks.py           stylegan2 in ml art school!                                              2020-11-12 15:42:05 -07:00