forked from ecker/DL-Art-School
Got rid of the converged multiplexer bases but kept the configurable architecture. The new multiplexers look a lot like the old ones. Took some cues from the transformer architecture: translate the image into a higher filter-space and stay there for the duration of the model's computation. Also perform convs after each switch to let the model anneal issues that arise.
- `__init__.py`
- `arch_util.py`
- `AttentionResnet.py`
- `discriminator_vgg_arch.py`
- `DiscriminatorResnet_arch_passthrough.py`
- `DiscriminatorResnet_arch.py`
- `feature_arch.py`
- `FlatProcessorNet_arch.py`
- `FlatProcessorNetNew_arch.py`
- `HighToLowResNet.py`
- `ResGen_arch.py`
- `RRDBNet_arch.py`
- `SRResNet_arch.py`
- `SwitchedResidualGenerator_arch.py`
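The commit message above describes the overall shape of the new generators: lift the image into a wide filter-space with an initial conv, keep that width through a stack of switch blocks, and follow each switch with a plain conv so the model can anneal artifacts. A minimal sketch of that structure in PyTorch is below; the class name, the residual stand-in for the switch block, and all hyperparameters are hypothetical illustrations, not the actual `SwitchedResidualGenerator_arch.py` implementation.

```python
import torch
import torch.nn as nn


class SketchSwitchedGenerator(nn.Module):
    """Hypothetical sketch of the described layout: lift to a high
    filter-space, stay there, and place a conv after every switch."""

    def __init__(self, in_ch=3, filters=64, num_switches=3):
        super().__init__()
        # Translate the image into the higher filter-space once, up front.
        self.lift = nn.Conv2d(in_ch, filters, 3, padding=1)
        # Stand-ins for the switch blocks (the real switches are more
        # elaborate); each is paired with a post-switch "anneal" conv.
        self.switches = nn.ModuleList(
            nn.Conv2d(filters, filters, 3, padding=1) for _ in range(num_switches)
        )
        self.post_convs = nn.ModuleList(
            nn.Conv2d(filters, filters, 3, padding=1) for _ in range(num_switches)
        )
        # Project back down to image space at the very end.
        self.project = nn.Conv2d(filters, in_ch, 3, padding=1)

    def forward(self, x):
        h = torch.relu(self.lift(x))
        for switch, post in zip(self.switches, self.post_convs):
            h = h + torch.relu(switch(h))  # residual switch computation
            h = torch.relu(post(h))        # conv after the switch to anneal issues
        return self.project(h)
```

Note the filter count never drops between `lift` and `project`: the whole body of the network operates at the same (wide) channel width, mirroring the "stay there for the duration" idea.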