DL-Art-School/codes/models/archs
Latest commit fba29d7dcc by James Betker, 2020-10-08 11:20:05 -06:00:
Move to apex distributeddataparallel and add switch all_reduce

Torch's distributed_data_parallel is missing "delay_allreduce", which is
necessary to get gradient checkpointing to work with recurrent models.
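The commit's rationale can be illustrated with a minimal sketch in Python. Nothing below is taken from this repository: TinyRecurrentNet is a hypothetical stand-in for a recurrent generator, the shapes are placeholders, and the script assumes NVIDIA apex is installed and is started under a distributed launcher (e.g. python -m torch.distributed.launch) so init_process_group can read its environment. The relevant piece is delay_allreduce=True, an option apex's DistributedDataParallel exposes but torch's native wrapper does not.

    # Hypothetical sketch: gradient checkpointing inside a recurrent module,
    # wrapped in apex's DistributedDataParallel with delay_allreduce=True.
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from apex.parallel import DistributedDataParallel as ApexDDP
    from torch.utils.checkpoint import checkpoint

    class TinyRecurrentNet(nn.Module):
        # Stand-in for a recurrent generator: one block applied repeatedly,
        # so each parameter accumulates gradients several times per backward.
        def __init__(self, channels=16):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(),
            )

        def forward(self, x):
            for _ in range(4):
                # Recompute activations during backward instead of storing them.
                x = checkpoint(self.block, x)
            return x

    # Assumes a launcher has set RANK, WORLD_SIZE, MASTER_ADDR, etc.
    dist.init_process_group(backend='nccl')
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = TinyRecurrentNet().cuda()
    # delay_allreduce=True defers every gradient all-reduce until backward()
    # finishes, rather than firing them as each parameter's gradient becomes
    # ready; this is the option the commit notes torch's native
    # DistributedDataParallel lacks.
    model = ApexDDP(model, delay_allreduce=True)

    x = torch.randn(2, 16, 32, 32, device='cuda', requires_grad=True)
    model(x).mean().backward()  # gradients are reduced once, at the end

With the default delay_allreduce=False, apex overlaps gradient reductions with the backward pass much like torch's wrapper does; deferring them trades that overlap for correctness when, as the commit describes, checkpointed recurrent segments produce gradients for the same parameters more than once.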
File                                     Last commit                                                          Date
__init__.py
arch_util.py                             SSG network                                                          2020-09-15 20:59:24 -06:00
discriminator_vgg_arch.py                Add new referencing discriminator                                    2020-09-10 21:35:29 -06:00
DiscriminatorResnet_arch_passthrough.py
DiscriminatorResnet_arch.py
feature_arch.py                          Several things                                                       2020-09-23 11:56:36 -06:00
ProgressiveSrg_arch.py                   Import switched_conv as a submodule                                  2020-10-07 23:10:54 -06:00
rcan.py                                  Allow checkpointing to be disabled in the options file               2020-10-03 11:03:28 -06:00
ResGen_arch.py                           More NSG improvements (v3)                                           2020-06-29 20:26:51 -06:00
RRDBNet_arch.py                          Allow checkpointing to be disabled in the options file               2020-10-03 11:03:28 -06:00
spinenet_arch.py                         Spinenet: implementation without 4x downsampling right off the bat   2020-09-21 12:36:30 -06:00
SPSR_arch.py                             Import switched_conv as a submodule                                  2020-10-07 23:10:54 -06:00
SPSR_util.py                             Add simplified SPSR architecture                                     2020-08-03 10:25:37 -06:00
SRResNet_arch.py
StructuredSwitchedGenerator.py           Move to apex distributeddataparallel and add switch all_reduce      2020-10-08 11:20:05 -06:00
SwitchedResidualGenerator_arch.py        Import switched_conv as a submodule                                  2020-10-07 23:10:54 -06:00