James Betker
8a4eb8241d
SRG3 work
...
Operates on top of a pre-trained SpineNet backbone (trained on COCO 2017 with RetinaNet).
This variant is extremely shallow.
2020-07-07 13:46:40 -06:00
James Betker
0acad81035
More SRG2 adjustments.
2020-07-06 22:40:40 -06:00
James Betker
086b2f0570
More bugs
2020-07-06 22:28:07 -06:00
James Betker
d4d4f85fc0
Bug fixes
2020-07-06 22:25:40 -06:00
James Betker
3c31bea1ac
SRG2 architectural changes
2020-07-06 22:22:29 -06:00
James Betker
9a1c3241f5
Switch discriminator to groupnorm
2020-07-06 20:59:59 -06:00
James Betker
60c6352843
Misc
2020-07-06 20:44:07 -06:00
James Betker
6beefa6d0c
PixDisc - Add two more levels of losses coming from this gen at higher resolutions
2020-07-06 11:15:52 -06:00
James Betker
2636d3b620
Fix assertion error
2020-07-06 09:23:53 -06:00
James Betker
8f92c0a088
Interpolate attention well before softmax
2020-07-06 09:18:30 -06:00
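One way to read this change, sketched below: when the attention logits come out at a lower resolution than the feature map they gate, interpolate the raw logits up to the target size first and only then apply softmax, so the normalization happens over the values actually used. All names here are illustrative, not the repo's actual API.

```python
import torch
import torch.nn.functional as F

def upsampled_attention(logits: torch.Tensor, out_size) -> torch.Tensor:
    """Interpolate raw attention logits (N, T, h, w) to out_size, then
    softmax over the transform dimension. Interpolating after the softmax
    would break the sum-to-1 property of the attention weights."""
    logits = F.interpolate(logits, size=out_size, mode='bilinear',
                           align_corners=False)
    return F.softmax(logits, dim=1)
```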
James Betker
72f90cabf8
More pixdisc fixes
2020-07-05 22:03:16 -06:00
James Betker
909007ee6a
Add G_warmup
...
Let the generator get to a point where it is at least competing with the discriminator before the adversarial loss kicks in.
This is backwards from most GAN training schemes, but this architecture is a bit different from most.
2020-07-05 21:58:35 -06:00
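A minimal sketch of what such a warmup gate could look like in a training step; only the existence of a G_warmup step count is implied by the commit, and everything else (names, loss weights) is illustrative.

```python
import torch
import torch.nn as nn

pixel_criterion = nn.L1Loss()
adv_criterion = nn.BCEWithLogitsLoss()

def generator_loss(fake_img, real_img, disc_logits_fake, step,
                   g_warmup=5000, gan_weight=5e-3):
    """The pixel loss always applies; the adversarial term is held back
    until the generator has had g_warmup steps to become competitive
    with the discriminator. All values are placeholders."""
    loss = pixel_criterion(fake_img, real_img)
    if step >= g_warmup:
        loss = loss + gan_weight * adv_criterion(
            disc_logits_fake, torch.ones_like(disc_logits_fake))
    return loss
```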
James Betker
a47a5dca43
Fix pixdisc bug
2020-07-05 21:57:52 -06:00
James Betker
d0957bd7d4
Alter weight initialization for transformation blocks
2020-07-05 17:32:46 -06:00
James Betker
16d1bf6dd7
Replace ConvBnRelus in SRG2 with SiLUs
2020-07-05 17:29:20 -06:00
James Betker
10f7e49214
Add ConvBnSilu to replace ConvBnRelu
...
ReLU produced good performance gains over LeakyReLU, but
GAN performance degraded significantly. Try SiLU as an alternative
to see whether it's the leakiness we are looking for or the smooth
activation curvature.
2020-07-05 13:39:08 -06:00
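A minimal sketch of the block being added, assuming it mirrors the conv -> batch-norm -> activation layout of the ConvBnRelu it replaces; the constructor arguments are illustrative.

```python
import torch.nn as nn

class ConvBnSilu(nn.Module):
    """Conv2d -> BatchNorm2d -> SiLU. SiLU (x * sigmoid(x)) is smooth,
    unlike the hard kink of ReLU/LeakyReLU."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, bias=False):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=bias)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```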
James Betker
9934e5d082
Move SRG1 to identical to new
2020-07-05 08:49:34 -06:00
James Betker
416538f31c
SRG1 conjoined except ConvBnRelu
2020-07-05 08:44:17 -06:00
James Betker
c58c2b09ca
Back to removing all biases (looks like a ConvBnRelu made its way in).
2020-07-04 22:41:02 -06:00
James Betker
86cda86e94
Re-add biases, also add new init
...
A/B testing to find where we lost our GAN competitiveness.
2020-07-04 22:24:42 -06:00
James Betker
b03741f30e
Remove all biases from generator
...
Continuing to investigate the loss of GAN competitiveness; this is a big difference
between the "old" SRG1 and the "new" one.
2020-07-04 22:19:55 -06:00
James Betker
726e946e79
Turn BN off in SRG1
...
This won't work well, but just testing whether GAN performance comes back.
2020-07-04 14:51:27 -06:00
James Betker
0ee39d419b
OrderedDict not needed
2020-07-04 14:09:27 -06:00
James Betker
9048105b72
Break out SRG1 as separate network
...
Something strange is going on. These networks no longer respond
properly to discriminator gradients. SRG1 did, however, so reverting
to the last known good state to figure out why.
2020-07-04 13:28:50 -06:00
James Betker
188de5e15a
Misc changes
2020-07-04 13:22:50 -06:00
James Betker
510b2f887d
Remove RDB from srg2
...
Doesn't seem to work so great.
2020-07-03 22:31:20 -06:00
James Betker
77d3765364
Fix new feature loss calc
2020-07-03 22:20:13 -06:00
James Betker
ed6a15e768
Add a feature to the dataset which allows it to force images to be a certain size.
2020-07-03 15:19:16 -06:00
James Betker
da4335c25e
Add a feature-based validation test
2020-07-03 15:18:57 -06:00
James Betker
703dec4472
Add SpineNet & integrate with SRG
...
The new version of SRG uses SpineNet as the switch backbone.
2020-07-03 12:07:31 -06:00
James Betker
3ed7a2b9ab
Move ConvBnRelu/Lelu to arch_util
2020-07-03 12:06:38 -06:00
James Betker
ea9c6765ca
Move train imports into init_dist
2020-07-02 15:11:21 -06:00
James Betker
e9ee67ff10
Integrate RDB into SRG
...
The last RDB for each cluster is switched.
2020-07-01 17:19:55 -06:00
James Betker
6ac6c95177
Fix scaling bug
2020-07-01 16:42:27 -06:00
James Betker
30653181ba
Experiment: get rid of post_switch_conv
2020-07-01 16:30:40 -06:00
James Betker
17191de836
Experiment: bring initialize_weights back again
...
Something really strange is going on here.
2020-07-01 15:58:13 -06:00
James Betker
d1d573de07
Experiment: new init and post-switch-conv
2020-07-01 15:25:54 -06:00
James Betker
480d1299d7
Remove RRDB with switching
...
This idea never really panned out; removing it.
2020-07-01 12:08:32 -06:00
James Betker
e2398ac83c
Experiment: revert initialization changes
2020-07-01 12:08:09 -06:00
James Betker
78276afcaa
Experiment: Back to lelu
2020-07-01 11:43:25 -06:00
James Betker
b945021c90
SRG v2 - Move to Relu, rely on Module-based initialization
2020-07-01 11:33:32 -06:00
James Betker
ee6443ad7d
Add numeric stability computation script
2020-07-01 11:30:34 -06:00
James Betker
c0bb123504
Misc changes
2020-07-01 11:28:23 -06:00
James Betker
604763be68
NSG r7
...
Converts the switching trunk to a VGG-style network to make it more comparable
to SRG architectures.
2020-07-01 09:54:29 -06:00
James Betker
87f1e9c56f
Invert ResGen2 to operate in LR space
2020-06-30 20:57:40 -06:00
James Betker
e07d8abafb
NSG rev 6
...
- Disable style passthrough
- Process multiplexers starting at base resolution
2020-06-30 20:47:26 -06:00
James Betker
3ce1a1878d
NSG improvements (r5)
...
- Get rid of forwards(); it makes numeric_stability.py not work properly.
- Do stability auditing across layers.
- Upsample last instead of first; work in much higher dimensionality for the transforms.
2020-06-30 16:59:57 -06:00
James Betker
75f148022d
Even more NSG improvements (r4)
2020-06-30 13:52:47 -06:00
James Betker
773753073f
More NSG improvements (v3)
...
Move to a fully Fixup residual network for the switch (no
batch norms). Fix a bunch of other small bugs. Add in a
temporary latent feed-forward from the bottom of the
switch. Fix several initialization issues.
2020-06-29 20:26:51 -06:00
James Betker
4b82d0815d
NSG improvements
...
- Just use resnet blocks for the multiplexer trunk of the generator
- Every block initializes itself, rather than everything at the end
- Clean up some messy parts of the architecture, including unnecessary
kernel sizes and places where BN is not used properly.
2020-06-29 10:09:51 -06:00
James Betker
978036e7b3
Add NestedSwitchGenerator
...
An evolution of SwitchedResidualGenerator, this variant nests attention
modules upon themselves to significantly extend the representational
capacity of the model.
2020-06-28 21:22:05 -06:00
James Betker
6f2bc36c61
Distill_torchscript mods
...
Starts down the path of writing a custom trace that works using torch's hook mechanism.
2020-06-27 08:28:09 -06:00
James Betker
db08dedfe2
Add recover_tensorboard_log
...
Generates a tb_logger from raw console output. Useful for Colab sessions
that crash.
2020-06-27 08:26:57 -06:00
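A hedged sketch of how such a recovery script might work; the console-line format matched below is a guess, since the commit does not specify it.

```python
import re
from torch.utils.tensorboard import SummaryWriter

def recover_tb_log(console_path: str, out_dir: str) -> None:
    """Rebuild tensorboard events from captured console output.
    Assumes lines resembling 'iter: 1000 l_pix: 1.2e-02' (hypothetical)."""
    writer = SummaryWriter(out_dir)
    pattern = re.compile(r'iter:\s*(\d+)\s+(\w+):\s*([0-9.eE+\-]+)')
    with open(console_path) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                step, name, value = m.groups()
                writer.add_scalar(name, float(value), int(step))
    writer.close()
```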
James Betker
c8a670842e
Missed networks.py in last commit
2020-06-25 18:36:06 -06:00
James Betker
407224eba1
Re-work SwitchedResgen2
...
Got rid of the converged multiplexer bases but kept the configurable architecture. The
new multiplexers look a lot like the old ones.
Took some cues from the transformer architecture: translate the image into a higher filter-space
and stay there for the duration of the model's computation. Also perform convs after each
switch to allow the model to anneal issues that arise.
2020-06-25 18:17:05 -06:00
James Betker
42a10b34ce
Re-enable batch norm on switch processing blocks
...
Found out that batch norm is causing the switches to init really poorly -
not using a significant number of transforms. Might be a great time to
reconsider using the attention norm, but for now just re-enable batch norm.
2020-06-24 21:15:17 -06:00
James Betker
4001db1ede
Add ConfigurableSwitchComputer
2020-06-24 19:49:37 -06:00
James Betker
83c3b8b982
Add parameterized noise injection into resgen
2020-06-23 10:16:02 -06:00
James Betker
0584c3b587
Add negative_transforms switch to resgen
2020-06-23 09:41:12 -06:00
James Betker
dfcbe5f2db
Add capability to place additional conv into discriminator
...
This should allow us to support larger image sizes. May need
to add another one of these.
2020-06-23 09:40:33 -06:00
James Betker
bad33de906
Add simple resize to extract images
2020-06-23 09:39:51 -06:00
James Betker
030648f2bc
Remove batchnorms from resgen
2020-06-22 17:23:36 -06:00
James Betker
68bcab03ae
Add growth channel to switch_growths for flat networks
2020-06-22 10:40:16 -06:00
James Betker
3b81712c49
Remove BN from transforms
2020-06-19 16:52:56 -06:00
James Betker
61364ec7d0
Fix inverse temperature curve logic and add upsample factor
2020-06-19 09:18:30 -06:00
James Betker
0551139b8d
Fix resgen temperature curve below 1
...
It needs to be inverted to maintain a true linear curve
2020-06-18 16:08:07 -06:00
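One plausible interpretation of the fix, sketched under that assumption: the schedule interpolates the temperature linearly, and once the scheduled value drops below 1 the curve is continued in inverse-temperature (sharpness) space, so sharpening keeps progressing linearly instead of exploding as T approaches 0. This is a reading of the commit message, not the repo's verbatim logic.

```python
def scheduled_temperature(step: int, total_steps: int,
                          t_start: float = 10.0, t_end: float = 0.5) -> float:
    """Linear temperature decay, applied as softmax(logits / T). For the
    portion of the schedule below 1, interpolate in 1/T space instead:
    t -> 1 / (2 - t) keeps sharpness (1/T) growing linearly in t."""
    frac = min(step / total_steps, 1.0)
    t = t_start + frac * (t_end - t_start)
    if t < 1.0:
        t = 1.0 / (2.0 - t)  # inverted branch; continuous at t == 1
    return t
```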
James Betker
efc80f041c
Save & load amp state
2020-06-18 11:38:48 -06:00
James Betker
2e3b6bad77
Log tensorboard directly into experiments directory
2020-06-18 11:33:02 -06:00
James Betker
778e7b6931
Add a double-step to attention temperature
2020-06-18 11:29:31 -06:00
James Betker
d2d5e097d5
Add profiling to SRGAN for testing timings
2020-06-18 11:29:10 -06:00
James Betker
45a900fafe
Misc
2020-06-18 11:28:55 -06:00
James Betker
59b0533b06
Fix attimage step size
2020-06-17 18:45:24 -06:00
James Betker
645d0ca767
ResidualGen mods
...
- Add filters_mid spec, which allows an expansion->squeeze for the transformation layers.
- Add scale and bias AFTER the switch
- Remove identity transform (models were converging on this)
- Move attention image generation and temperature setting into new function which gets called every step with a save path
2020-06-17 17:18:28 -06:00
James Betker
6f8406fbdc
Fixed ConfigurableSwitchedGenerator bug
2020-06-16 16:53:57 -06:00
James Betker
7d541642aa
Get rid of SwitchedResidualGenerator
...
Just use the configurable one instead.
2020-06-16 16:23:29 -06:00
James Betker
379b96eb55
Output histograms with SwitchedResidualGenerator
...
This also fixes the initialization weight for the configurable generator.
2020-06-16 15:54:37 -06:00
James Betker
f8b67f134b
Get proper contiguous view for backwards compatibility
2020-06-16 14:27:16 -06:00
James Betker
2def96203e
Mods to SwitchedResidualGenerator_arch
...
- Increased processing for high-resolution switches
- Do stride=2 first in HalvingProcessingBlock
2020-06-16 14:19:12 -06:00
James Betker
70c764b9d4
Create a configurable SwitchedResidualGenerator
...
Also move attention image generator out of repo
2020-06-16 13:24:07 -06:00
James Betker
df1046c318
New arch: SwitchedResidualGenerator_arch
...
The concept here is to use switching to split the generator into two functions:
interpretation and transformation. Transformation is done at the pixel level by
relatively simple conv layers, while interpretation is computed at various levels
by far more complicated conv stacks. The two are merged using the switching
mechanism.
This architecture is far less computationally intensive than RRDB.
2020-06-16 11:23:50 -06:00
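A compact sketch of the split the message describes: cheap per-pixel transformation convs whose outputs are blended by attention computed from a deeper interpretation stack. The class and layer sizes below are illustrative, not the repo's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchSketch(nn.Module):
    def __init__(self, channels: int, n_transforms: int = 8):
        super().__init__()
        # Transformation: simple pixel-level convs, one candidate output each.
        self.transforms = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1)
            for _ in range(n_transforms))
        # Interpretation: a (stand-in for a much deeper) stack that decides,
        # per pixel, how to weight each candidate transform.
        self.interpret = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, n_transforms, 1))

    def forward(self, x: torch.Tensor, temperature: float = 1.0):
        cands = torch.stack([t(x) for t in self.transforms], dim=1)  # N,T,C,H,W
        attn = F.softmax(self.interpret(x) / temperature, dim=1)     # N,T,H,W
        return (cands * attn.unsqueeze(2)).sum(dim=1)                # N,C,H,W
```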
James Betker
ddfd7f67a0
Get rid of biggan
...
Not really sure it's a great fit for what is being done here.
2020-06-16 11:21:44 -06:00
James Betker
0a714e8451
Fix initialization in mhead switched rrdb
2020-06-15 21:32:03 -06:00
James Betker
be7982b9ae
Add skip heads to switcher
...
These pass through the input so that it can be selected by the attention mechanism.
2020-06-14 12:46:54 -06:00
James Betker
6c27ddc9b5
Misc
2020-06-14 11:03:02 -06:00
James Betker
6c0e9f45c7
Add GPU mem tracing module
2020-06-14 11:02:54 -06:00
James Betker
48532a0a8a
Fix initial_stride on lowdim models
2020-06-14 11:02:16 -06:00
James Betker
532704af40
Multiple modifications for experimental RRDB architectures
...
- Add LowDimRRDB; essentially a "normal RRDB" but the RDB blocks process at a low dimension using PixelShuffle
- Add switching wrappers around it
- Add support for switching on top of multi-headed inputs and outputs
- Moves PixelUnshuffle to arch_util
2020-06-13 11:37:27 -06:00
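A sketch of the low-dimension trick from the commit above: pixel-unshuffle folds space into channels so an inner block runs at reduced spatial size, and pixel-shuffle restores the layout afterwards. The wrapper and block factory are hypothetical stand-ins, not the repo's classes.

```python
import torch.nn as nn
import torch.nn.functional as F

class LowDimWrapper(nn.Module):
    """Runs an inner block at 1/factor spatial resolution by folding
    space into channels and unfolding again afterwards."""
    def __init__(self, channels: int, inner_block_factory, factor: int = 2):
        super().__init__()
        self.factor = factor
        # The inner block sees factor**2 times as many channels.
        self.inner = inner_block_factory(channels * factor * factor)

    def forward(self, x):
        x = F.pixel_unshuffle(x, self.factor)   # N, C*f*f, H/f, W/f
        x = self.inner(x)
        return F.pixel_shuffle(x, self.factor)  # back to N, C, H, W

# e.g. LowDimWrapper(64, lambda c: nn.Conv2d(c, c, 3, padding=1))
```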
James Betker
e89f28ead0
Update multirrdb to do HR fixing in the base image dimension.
2020-06-11 08:43:39 -06:00
James Betker
d3b2cbfe7c
Fix loading new state dicts for RRDB
2020-06-11 08:25:57 -06:00
James Betker
5ca53e7786
Add alternative first block for PixShuffleRRDB
2020-06-10 21:45:24 -06:00
James Betker
43b7fccc89
Fix mhead attention integration bug for RRDB
2020-06-10 12:02:33 -06:00
James Betker
12e8fad079
Add several new RRDB architectures
2020-06-09 13:28:55 -06:00
James Betker
296135ec18
Add doResizeLoss to dataset
...
doResizeLoss has a 50% chance to resize the LQ image to 50% size,
then back to the original size. This is useful for training a generator to
recover these lost pixel values while still being able to do
repairs on higher-resolution images during training.
2020-06-08 11:27:06 -06:00
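A sketch of the augmentation as described: with 50% probability the LQ input is halved and restored to its original size, so the generator also learns to reconstruct the detail destroyed by the round trip. The tensor layout and interpolation mode are assumptions.

```python
import random
import torch.nn.functional as F

def maybe_resize_loss(lq, p=0.5):
    """With probability p, downscale an (N, C, H, W) LQ batch to 50% and
    scale it back up, discarding high-frequency detail that the generator
    must then learn to recover."""
    if random.random() < p:
        h, w = lq.shape[-2:]
        lq = F.interpolate(lq, scale_factor=0.5, mode='bilinear',
                           align_corners=False)
        lq = F.interpolate(lq, size=(h, w), mode='bilinear',
                           align_corners=False)
    return lq
```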
James Betker
786a4288d6
Allow switched RRDBNet to record metrics and decay temperature
2020-06-08 11:10:38 -06:00
James Betker
ae3301c0ea
SwitchedRRDB work
...
Renames AttentiveRRDB to SwitchedRRDB. Moves SwitchedConv to
an external repo (neonbjb/switchedconv). Switches RDB blocks instead
of conv blocks. Works well!
2020-06-08 08:47:34 -06:00
James Betker
93528ff8df
Merge branch 'gan_lab' of https://github.com/neonbjb/mmsr into gan_lab
2020-06-07 16:59:31 -06:00
James Betker
805bd129b7
Switched conv partial impl
2020-06-07 16:59:22 -06:00
James Betker
9e203d07c4
Merge remote-tracking branch 'origin/gan_lab' into gan_lab
2020-06-07 16:56:12 -06:00
James Betker
299d855b34
Enable forced learning rates
2020-06-07 16:56:05 -06:00
James Betker
efb5b3d078
Add switched_conv
2020-06-07 16:45:07 -06:00