DL-Art-School/codes/models/archs
Latest commit fba29d7dcc by James Betker (2020-10-08 11:20:05 -06:00): Move to apex distributeddataparallel and add switch all_reduce

Torch's DistributedDataParallel is missing "delay_allreduce", which is necessary to get gradient checkpointing to work with recurrent models.
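A minimal sketch of the switch the commit describes, assuming NVIDIA apex is installed; TinyRecurrentNet and the process-group setup are illustrative, not code from this repo. apex.parallel.DistributedDataParallel accepts delay_allreduce=True, which postpones the gradient all-reduce until the whole backward pass finishes, sidestepping the per-parameter "gradient ready" hooks that checkpointed reuse of recurrent weights confuses:

import torch
import torch.nn as nn
import torch.distributed as dist
from torch.utils.checkpoint import checkpoint
from apex.parallel import DistributedDataParallel as ApexDDP  # NVIDIA apex

class TinyRecurrentNet(nn.Module):
    # Hypothetical stand-in for the recurrent generators in this directory.
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(64, 64)

    def forward(self, x, steps=4):
        h = torch.zeros(x.size(0), 64, device=x.device)
        for _ in range(steps):
            # The same cell weights are revisited every step; under
            # checkpointing their gradients arrive late and repeatedly.
            h = checkpoint(self.cell, x, h)
        return h

dist.init_process_group(backend='nccl')
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
model = TinyRecurrentNet().cuda()

# torch.nn.parallel.DistributedDataParallel launches all-reduces as each
# parameter's gradient becomes ready; checkpointed parameter reuse breaks
# that tracking. apex can instead defer to one all-reduce after backward:
model = ApexDDP(model, delay_allreduce=True)

Run with one process per GPU (e.g. via torch.distributed.launch, which was the standard launcher at the time of this commit).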
__init__.py
arch_util.py
discriminator_vgg_arch.py
DiscriminatorResnet_arch_passthrough.py
DiscriminatorResnet_arch.py
feature_arch.py
ProgressiveSrg_arch.py Import switched_conv as a submodule 2020-10-07 23:10:54 -06:00
rcan.py
ResGen_arch.py
RRDBNet_arch.py
spinenet_arch.py
SPSR_arch.py Import switched_conv as a submodule 2020-10-07 23:10:54 -06:00
SPSR_util.py
SRResNet_arch.py
StructuredSwitchedGenerator.py Move to apex distributeddataparallel and add switch all_reduce 2020-10-08 11:20:05 -06:00
SwitchedResidualGenerator_arch.py Import switched_conv as a submodule 2020-10-07 23:10:54 -06:00
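The commit subject also mentions a switch all_reduce. A minimal sketch, assuming the intent is to synchronize per-switch usage statistics across ranks; the helper name and tensor are hypothetical, not taken from the repo:

import torch
import torch.distributed as dist

def reduce_switch_usage(usage_counts: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: sum per-switch selection counts from every rank,
    # then average so all processes see identical utilization statistics.
    dist.all_reduce(usage_counts, op=dist.ReduceOp.SUM)
    return usage_counts / dist.get_world_size()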