Commit Graph

434 Commits

Author SHA1 Message Date
James Betker
dfcbe5f2db Add capability to place additional conv into discriminator
This should allow us to support larger image sizes. May need
to add another one of these.
2020-06-23 09:40:33 -06:00
James Betker
bad33de906 Add simple resize to extract images 2020-06-23 09:39:51 -06:00
James Betker
030648f2bc Remove batchnorms from resgen 2020-06-22 17:23:36 -06:00
James Betker
68bcab03ae Add growth channel to switch_growths for flat networks 2020-06-22 10:40:16 -06:00
James Betker
3b81712c49 Remove BN from transforms 2020-06-19 16:52:56 -06:00
James Betker
61364ec7d0 Fix inverse temperature curve logic and add upsample factor 2020-06-19 09:18:30 -06:00
James Betker
0551139b8d Fix resgen temperature curve below 1
It needs to be inverted to maintain a true linear curve
2020-06-18 16:08:07 -06:00
James Betker
efc80f041c Save & load amp state 2020-06-18 11:38:48 -06:00
James Betker
2e3b6bad77 Log tensorboard directly into experiments directory 2020-06-18 11:33:02 -06:00
James Betker
778e7b6931 Add a double-step to attention temperature 2020-06-18 11:29:31 -06:00
James Betker
d2d5e097d5 Add profiling to SRGAN for testing timings 2020-06-18 11:29:10 -06:00
James Betker
45a900fafe Misc 2020-06-18 11:28:55 -06:00
James Betker
59b0533b06 Fix attimage step size 2020-06-17 18:45:24 -06:00
James Betker
645d0ca767 ResidualGen mods
- Add filters_mid spec which allows an expansion->squeeze for the transformation layers.
- Add scale and bias AFTER the switch
- Remove identity transform (models were converging on this)
- Move attention image generation and temperature setting into new function which gets called every step with a save path
2020-06-17 17:18:28 -06:00
James Betker
6f8406fbdc Fixed ConfigurableSwitchedGenerator bug 2020-06-16 16:53:57 -06:00
James Betker
7d541642aa Get rid of SwitchedResidualGenerator
Just use the configurable one instead.
2020-06-16 16:23:29 -06:00
James Betker
379b96eb55 Output histograms with SwitchedResidualGenerator
This also fixes the initialization weight for the configurable generator.
2020-06-16 15:54:37 -06:00
James Betker
f8b67f134b Get proper contiguous view for backwards compatibility 2020-06-16 14:27:16 -06:00
James Betker
2def96203e Mods to SwitchedResidualGenerator_arch
- Increased processing for high-resolution switches
- Do stride=2 first in HalvingProcessingBlock
2020-06-16 14:19:12 -06:00
James Betker
70c764b9d4 Create a configurable SwitchedResidualGenerator
Also move attention image generator out of repo
2020-06-16 13:24:07 -06:00
James Betker
df1046c318 New arch: SwitchedResidualGenerator_arch
The concept here is to use switching to split the generator into two functions:
interpretation and transformation. Transformation is done at the pixel level by
relatively simple conv layers, while interpretation is computed at various levels
by far more complicated conv stacks. The two are merged using the switching
mechanism.

This architecture is far less computationally intensive than RRDB.
2020-06-16 11:23:50 -06:00
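The interpretation/transformation split described above can be sketched with a toy switch: several cheap per-pixel transform outputs are merged using temperature-scaled softmax attention weights produced by the interpretation path. The names `softmax` and `switched_merge` and the shapes here are hypothetical illustrations, not the repo's actual modules.

```python
import numpy as np

def softmax(x, temperature=1.0, axis=-1):
    # Higher temperature -> softer (more uniform) selection weights.
    z = x / temperature
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def switched_merge(transforms, attention_logits, temperature=1.0):
    """Merge K candidate transform outputs via per-pixel attention.

    transforms: (K, H, W) outputs of simple pixel-level conv transforms.
    attention_logits: (K, H, W) scores from the interpretation stack.
    """
    weights = softmax(attention_logits, temperature=temperature, axis=0)
    return (weights * transforms).sum(axis=0)

# Toy example: 2 candidate transforms on a 4x4 image.
rng = np.random.default_rng(0)
transforms = rng.standard_normal((2, 4, 4))
logits = rng.standard_normal((2, 4, 4))
out = switched_merge(transforms, logits, temperature=10.0)
```

Decaying the temperature toward 1 over training (as the temperature-curve commits above do) gradually sharpens the selection from a soft blend toward a near-hard switch.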
James Betker
ddfd7f67a0 Get rid of biggan
Not really sure it's a great fit for what is being done here.
2020-06-16 11:21:44 -06:00
James Betker
0a714e8451 Fix initialization in mhead switched rrdb 2020-06-15 21:32:03 -06:00
James Betker
be7982b9ae Add skip heads to switcher
These pass through the input so that it can be selected by the attention mechanism.
2020-06-14 12:46:54 -06:00
James Betker
6c27ddc9b5 Misc 2020-06-14 11:03:02 -06:00
James Betker
6c0e9f45c7 Add GPU mem tracing module 2020-06-14 11:02:54 -06:00
James Betker
48532a0a8a Fix initial_stride on lowdim models 2020-06-14 11:02:16 -06:00
James Betker
532704af40 Multiple modifications for experimental RRDB architectures
- Add LowDimRRDB; essentially a "normal RRDB" but the RDB blocks process at a low dimension using PixelShuffle
- Add switching wrappers around it
- Add support for switching on top of multi-headed inputs and outputs
- Moves PixelUnshuffle to arch_util
2020-06-13 11:37:27 -06:00
James Betker
e89f28ead0 Update multirrdb to do HR fixing in the base image dimension. 2020-06-11 08:43:39 -06:00
James Betker
d3b2cbfe7c Fix loading new state dicts for RRDB 2020-06-11 08:25:57 -06:00
James Betker
5ca53e7786 Add alternative first block for PixShuffleRRDB 2020-06-10 21:45:24 -06:00
James Betker
43b7fccc89 Fix mhead attention integration bug for RRDB 2020-06-10 12:02:33 -06:00
James Betker
12e8fad079 Add several new RRDB architectures 2020-06-09 13:28:55 -06:00
James Betker
296135ec18 Add doResizeLoss to dataset
doResizeLoss has a 50% chance to resize the LQ image to 50% size,
then back to the original size. This is useful for training a generator to
recover these lost pixel values while also being able to do
repairs on higher resolution images during training.
2020-06-08 11:27:06 -06:00
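The doResizeLoss behavior above can be sketched on a toy grayscale image: nearest-neighbour down/up sampling discards the odd rows and columns, which the generator then learns to recover. `maybe_resize_loss` is a hypothetical name for illustration, not the dataset's actual method.

```python
import random

def maybe_resize_loss(img, p=0.5):
    """With probability p, downscale the LQ image to 50% and back up.

    img: 2D list-of-lists (toy grayscale image with even dimensions).
    """
    if random.random() >= p:
        return img
    small = [row[::2] for row in img[::2]]            # 50% size
    up_rows = [[v for v in row for _ in (0, 1)] for row in small]
    return [row for row in up_rows for _ in (0, 1)]   # back to original size

img = [[r * 4 + c for c in range(4)] for r in range(4)]
corrupted = maybe_resize_loss(img, p=1.0)  # force the resize for the demo
```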
James Betker
786a4288d6 Allow switched RRDBNet to record metrics and decay temperature 2020-06-08 11:10:38 -06:00
James Betker
ae3301c0ea SwitchedRRDB work
Renames AttentiveRRDB to SwitchedRRDB. Moves SwitchedConv to
an external repo (neonbjb/switchedconv). Switches RDB blocks instead
of conv blocks. Works well!
2020-06-08 08:47:34 -06:00
James Betker
93528ff8df Merge branch 'gan_lab' of https://github.com/neonbjb/mmsr into gan_lab 2020-06-07 16:59:31 -06:00
James Betker
805bd129b7 Switched conv partial impl 2020-06-07 16:59:22 -06:00
James Betker
9e203d07c4 Merge remote-tracking branch 'origin/gan_lab' into gan_lab 2020-06-07 16:56:12 -06:00
James Betker
299d855b34 Enable forced learning rates 2020-06-07 16:56:05 -06:00
James Betker
efb5b3d078 Add switched_conv 2020-06-07 16:45:07 -06:00
James Betker
063719c5cc Fix attention conv bugs 2020-06-06 18:31:02 -06:00
James Betker
cbedd6340a Add RRDB with attention 2020-06-05 21:02:08 -06:00
James Betker
ef5d8a0ed1 Misc 2020-06-05 21:01:50 -06:00
James Betker
318a604405 Allow weighting of input data
This essentially allows you to give some datasets more
importance than others for the purposes of reaching a more
refined network.
2020-06-04 10:05:21 -06:00
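Weighting input data as described above amounts to biased sampling across datasets. A minimal sketch with a hypothetical helper (`sample_dataset` and the dict layout are assumptions; the repo wires weights through its data-loader config instead):

```python
import random

def sample_dataset(datasets, weights, k=8):
    """Draw k training items, giving some datasets more importance.

    datasets: dict name -> list of items; weights: dict name -> relative weight.
    """
    names = list(datasets)
    picks = random.choices(names, weights=[weights[n] for n in names], k=k)
    return [random.choice(datasets[p]) for p in picks]
```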
James Betker
edf0f8582e Fix rrdb bug 2020-06-02 11:15:55 -06:00
James Betker
dc17545083 Add RRDB Initial Stride
Allows downsampling immediately before processing, which reduces network complexity on
higher resolution images but keeps a higher filter count.
2020-06-02 10:47:15 -06:00
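The complexity claim above follows from conv cost scaling with spatial area: an initial stride of 2 quarters the area every subsequent layer sees while the filter count stays the same. A rough back-of-envelope sketch, not the repo's actual cost model:

```python
def conv_cost(h, w, c_in, c_out, k=3, stride=1):
    """Approximate multiply-accumulates for one conv layer."""
    return (h // stride) * (w // stride) * c_in * c_out * k * k

# Downsampling immediately before processing drops per-layer cost ~4x
# while keeping the same (higher) filter count.
full = conv_cost(128, 128, 64, 64)
strided = conv_cost(64, 64, 64, 64)
```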
James Betker
76a38b6a53 Misc 2020-06-02 09:35:52 -06:00
James Betker
726d1913ac Allow validating in batches, remove val size limit 2020-06-02 08:41:22 -06:00
James Betker
90125f5bed Allow blurring to be specified 2020-06-02 08:40:52 -06:00