Commit Graph

14 Commits

Author SHA1 Message Date
James Betker
030648f2bc Remove batchnorms from resgen 2020-06-22 17:23:36 -06:00
James Betker
68bcab03ae Add growth channel to switch_growths for flat networks 2020-06-22 10:40:16 -06:00
James Betker
3b81712c49 Remove BN from transforms 2020-06-19 16:52:56 -06:00
James Betker
61364ec7d0 Fix inverse temperature curve logic and add upsample factor 2020-06-19 09:18:30 -06:00
James Betker
0551139b8d Fix resgen temperature curve below 1
It needs to be inverted to maintain a truly linear curve (sketched below)
2020-06-18 16:08:07 -06:00
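For context, a minimal sketch of how an inverted sub-1 temperature schedule could work, assuming a softmax-based switch; `linear_temperature`, its parameters, and the 1/(2 - t) mapping are illustrative guesses at the idea, not the repo's actual code:

```python
import torch

def switch_attention(logits, temp):
    # Softmax over switch candidates; temp > 1 softens the distribution,
    # temp < 1 sharpens it.
    return torch.softmax(logits / temp, dim=1)

def linear_temperature(step, total_steps, start=10.0, end=0.5):
    # Hypothetical schedule: anneal from `start` down to `end`. Above 1 the
    # plain linear value is used; below 1 it is inverted so sharpening
    # proceeds linearly in 1/temp rather than collapsing as temp -> 0.
    frac = min(step / float(total_steps), 1.0)
    t = start + (end - start) * frac
    return t if t >= 1.0 else 1.0 / (2.0 - t)
```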
James Betker
778e7b6931 Add a double-step to attention temperature 2020-06-18 11:29:31 -06:00
James Betker
59b0533b06 Fix attimage step size 2020-06-17 18:45:24 -06:00
James Betker
645d0ca767 ResidualGen mods
- Add filters_mid spec which allows an expansion->squeeze for the transformation layers (see the sketch after this entry).
- Add scale and bias AFTER the switch
- Remove identity transform (models were converging on this)
- Move attention image generation and temperature setting into a new function that gets called every step with a save path
2020-06-17 17:18:28 -06:00
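The first two bullets suggest the following shape, sketched here with invented class names (`MidExpansionTransform`, `PostSwitchScale`) under the assumption of standard PyTorch modules; the repo's actual code may differ:

```python
import torch
import torch.nn as nn

class MidExpansionTransform(nn.Module):
    # One reading of the filters_mid spec: expand the transformation to a
    # wider mid-channel count, then squeeze back to the trunk width.
    def __init__(self, filters, filters_mid):
        super().__init__()
        self.expand = nn.Conv2d(filters, filters_mid, 3, padding=1)
        self.squeeze = nn.Conv2d(filters_mid, filters, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.squeeze(self.act(self.expand(x)))

class PostSwitchScale(nn.Module):
    # Learned scale and bias applied AFTER the switch merges its
    # candidate transforms, per the second bullet.
    def __init__(self, filters):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, filters, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, filters, 1, 1))

    def forward(self, switched):
        return switched * self.scale + self.bias
```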
James Betker
6f8406fbdc Fixed ConfigurableSwitchedGenerator bug 2020-06-16 16:53:57 -06:00
James Betker
7d541642aa Get rid of SwitchedResidualGenerator
Just use the configurable one instead.
2020-06-16 16:23:29 -06:00
James Betker
379b96eb55 Output histograms with SwitchedResidualGenerator
This also fixes the initialization weight for the configurable generator.
2020-06-16 15:54:37 -06:00
James Betker
2def96203e Mods to SwitchedResidualGenerator_arch
- Increased processing for high-resolution switches
- Do stride=2 first in HalvingProcessingBlock (sketched after this entry)
2020-06-16 14:19:12 -06:00
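A sketch of the reordering described in the second bullet, assuming a conventional conv/ReLU block; the layer layout is hypothetical, and only the stride-2-first idea comes from the commit:

```python
import torch.nn as nn

class HalvingProcessingBlock(nn.Module):
    # Downsample FIRST with the stride-2 conv so the heavier processing
    # conv runs at half resolution, roughly quartering its compute.
    def __init__(self, filters):
        super().__init__()
        self.halve = nn.Conv2d(filters, filters * 2, 3, stride=2, padding=1)
        self.process = nn.Conv2d(filters * 2, filters * 2, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.process(self.act(self.halve(x))))
```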
James Betker
70c764b9d4 Create a configurable SwitchedResidualGenerator
Also move attention image generator out of repo
2020-06-16 13:24:07 -06:00
James Betker
df1046c318 New arch: SwitchedResidualGenerator_arch
The concept here is to use switching to split the generator into two functions:
interpretation and transformation. Transformation is done at the pixel level by
relatively simple conv layers, while interpretation is computed at various levels
by far more complicated conv stacks. The two are merged using the switching
mechanism.

This architecture is far less computationally intensive than RRDB. (The switching mechanism is sketched after this entry.)
2020-06-16 11:23:50 -06:00
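A minimal sketch of that interpretation/transformation split, assuming PyTorch; every name below is invented for illustration, and the real SwitchedResidualGenerator_arch is more elaborate (multiple switch levels, residual connections, temperature annealing):

```python
import torch
import torch.nn as nn

class SwitchedGeneratorSketch(nn.Module):
    # `transforms` are the cheap per-pixel conv transforms; `interpreter`
    # is a deeper stack emitting one selection map per transform. The
    # switch merges the candidates with a temperature-scaled softmax.
    def __init__(self, channels, num_transforms=8, temperature=10.0):
        super().__init__()
        self.transforms = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1)
             for _ in range(num_transforms)]
        )
        self.interpreter = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_transforms, 1),
        )
        self.temperature = temperature

    def forward(self, x):
        # (B, K, C, H, W): one candidate output per transform.
        candidates = torch.stack([t(x) for t in self.transforms], dim=1)
        # (B, K, H, W): per-pixel selection weights over the K candidates.
        weights = torch.softmax(self.interpreter(x) / self.temperature, dim=1)
        return (candidates * weights.unsqueeze(2)).sum(dim=1)
```

Because only the selection maps come from the deep interpretation stack while the per-pixel transforms stay cheap, this is plausibly where the compute savings over RRDB come from.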