James Betker
5ccdbcefe3
srflow_orig integration
2020-11-19 23:47:24 -07:00
James Betker
d7877d0a36
Fixes to teco losses and translational losses
2020-11-19 11:35:05 -07:00
James Betker
6b679e2b51
Make grad_penalty available to classical discs
2020-11-17 18:31:40 -07:00
James Betker
8a19c9ae15
Add additive mode to rrdb
2020-11-16 20:45:09 -07:00
James Betker
125cb16dce
Add a FID evaluator for stylegan with structural guidance
2020-11-14 20:16:07 -07:00
James Betker
ec621c69b5
Fix train bug
2020-11-14 09:29:08 -07:00
James Betker
a07e1a7292
Add separate Evaluator module and FID evaluator
2020-11-13 11:03:54 -07:00
James Betker
fc55bdb24e
Mods to how wandb is integrated
2020-11-12 15:45:25 -07:00
James Betker
db9e9e28a0
Fix an issue where GPU0 was always being used in non-ddp
...
Frankly, I don't understand how this has ever worked. WTF.
2020-11-12 15:43:01 -07:00
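The commit doesn't say what the fix was, but a common cause of this class of bug is never pinning the process to the configured device, so bare .cuda() calls default to GPU0. A minimal sketch of that kind of fix, assuming a hypothetical opt['gpu_ids'] option key:

```python
import torch

def pin_process_to_gpu(opt):
    # Pin this process to the configured GPU so bare .cuda() calls and
    # default torch.load() device mapping stop landing on GPU0.
    # opt['gpu_ids'] is an assumed option key, for illustration only.
    gpu_id = opt.get('gpu_ids', [0])[0]
    torch.cuda.set_device(gpu_id)
    return gpu_id
```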
James Betker
88f349bdf1
Enable usage of wandb
2020-11-11 21:48:56 -07:00
James Betker
6a2fd5f7d0
Lots of new discriminator nets
2020-11-10 16:06:54 -07:00
James Betker
0cf52ef52c
Latent work
2020-11-06 20:38:23 -07:00
James Betker
74738489b9
Fixes and additional support for progressive zoom
2020-10-30 09:59:54 -06:00
James Betker
607ff3c67c
RRDB with bypass
2020-10-29 09:39:45 -06:00
James Betker
da53090ce6
More adjustments to support distributed training with teco & multi_modal_train
2020-10-27 20:58:03 -06:00
James Betker
2a3eec8fd7
Fix some distributed training snafus
2020-10-27 15:24:05 -06:00
James Betker
ff58c6484a
Fixes to unified chunk datasets to support stereoscopic training
2020-10-26 11:12:22 -06:00
James Betker
9c3d059ef0
Updates to be able to train flownet2 in ExtensibleTrainer
...
Only supports basic losses for now, though.
2020-10-24 11:56:39 -06:00
James Betker
8636492db0
Copy train.py mods to train2
2020-10-22 17:16:36 -06:00
James Betker
e9c0b9f0fd
More adjustments to support multi-modal training
...
Specifically: it looks like MSE loss, at least, cannot handle autocasted tensors
2020-10-22 16:49:34 -06:00
James Betker
76789a456f
Class-ify train.py and work on multi-modal trainer
2020-10-22 16:15:31 -06:00
James Betker
3e3d2af1f3
Add multi-modal trainer
2020-10-22 13:27:32 -06:00
James Betker
5753e77d67
ChainedGen: Output debugging information on blocks
2020-10-21 16:36:23 -06:00
James Betker
d8c6a4bbb8
Misc
2020-10-20 12:56:52 -06:00
James Betker
331c40f0c8
Allow starting step to be forced
...
Useful for testing purposes or to force a validation.
2020-10-19 15:23:04 -06:00
James Betker
981d64413b
Support validation over a custom injector
...
Also re-enable PSNR
2020-10-19 11:01:56 -06:00
James Betker
9ead2c0a08
Multiscale training in!
2020-10-17 22:54:12 -06:00
James Betker
d856378b2e
Add ChainedGenWithStructure
2020-10-16 20:44:36 -06:00
James Betker
6f8705e8cb
SSGSimpler network
2020-10-15 17:18:44 -06:00
James Betker
24792bdb4f
Codebase cleanup
...
Removed a lot of legacy stuff I have no intention of using again.
Plan is to shape this repo into something more extensible (get it? hah!)
2020-10-13 20:56:39 -06:00
James Betker
17d78195ee
Mods to SRG to support returning switch logits
2020-10-13 20:46:37 -06:00
James Betker
2bc5701b10
Misc
2020-10-12 10:21:25 -06:00
James Betker
b2c4b2a16d
Move gpu_ids out of if statement
2020-10-06 20:40:20 -06:00
James Betker
0e3ea63a14
Misc
2020-10-05 18:01:50 -06:00
James Betker
ffd069fd97
Lots of SSG work
...
- Checkpointed pretty much the entire model - enabling recurrent inputs
- Added two new models for testing - adding depth (again) and removing SPSR (in lieu of the new losses)
2020-10-04 20:48:08 -06:00
James Betker
fc396baf1a
Move loaded_options to util
...
Doesn't seem to work with python 3.6
2020-10-03 20:29:06 -06:00
James Betker
3cbb9ecd45
Misc
2020-10-03 16:15:42 -06:00
James Betker
21d3bb83b2
Use tqdm reporting with validation
2020-10-03 11:16:39 -06:00
James Betker
6c9718ad64
Don't log if you aren't rank 0
2020-10-03 11:14:13 -06:00
James Betker
19a4075e1e
Allow checkpointing to be disabled in the options file
...
Also makes options a global variable for usage in utils.
2020-10-03 11:03:28 -06:00
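A minimal sketch of what an options-gated checkpointing wrapper can look like, assuming a global loaded_options dict; the key name checkpointing_enabled is an assumption, not necessarily the repo's:

```python
import torch.utils.checkpoint as cp

# Global options, mirroring "makes options a global variable for usage in utils".
loaded_options = {'checkpointing_enabled': True}

def possibly_checkpoint(fn, *args):
    # Run fn under activation checkpointing only when the options file enables
    # it; otherwise call it directly and keep the saved activations in memory.
    if loaded_options.get('checkpointing_enabled', True):
        return cp.checkpoint(fn, *args)
    return fn(*args)
```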
James Betker
c9a9e5c525
Prompt user for gpu_id if multiple gpus are detected
2020-10-01 17:24:50 -06:00
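A sketch of the described behavior using only standard torch.cuda calls; the prompt wording is illustrative:

```python
import torch

def choose_gpu_id():
    # Ask the user which device to train on when more than one GPU is visible.
    count = torch.cuda.device_count()
    if count <= 1:
        return 0
    for i in range(count):
        print(f'{i}: {torch.cuda.get_device_name(i)}')
    return int(input(f'Multiple GPUs detected; select gpu_id [0-{count - 1}]: '))
```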
James Betker
0b5a033503
SPSR7 + cleanup
...
SPSR7 adds ref onto spsr6, makes more "common sense" mods.
2020-09-29 16:59:26 -06:00
James Betker
eb12b5f887
Misc
2020-09-26 21:27:17 -06:00
James Betker
254cb1e915
More dataset integration work
2020-09-25 22:19:38 -06:00
James Betker
f211575e9d
Save models before validation
...
Validation often fails with OOM, wasting hours of training time.
Save models first.
2020-09-16 08:17:17 -06:00
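The reordering is trivial but worth spelling out; a sketch with hypothetical stand-in functions and option keys:

```python
def run_step(model, step, opt, save_checkpoint, run_validation):
    # Persist training state *before* validation: if val OOMs, the hours of
    # training that preceded it are already on disk.
    if step % opt['save_freq'] == 0:
        save_checkpoint(model, step)
    if step % opt['val_freq'] == 0:
        run_validation(model, step)  # a crash here no longer loses progress
```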
James Betker
c833bd1eac
Misc changes
2020-09-15 20:57:59 -06:00
James Betker
747ded2bf7
Fixes to the spsr3
...
Some lessons learned:
- Biases are fairly important as a relief valve. They don't need to be everywhere, but
most computationally heavy branches should have a bias.
- GroupNorm in SPSR is not a great idea. Since image gradients are represented
in this model, normal means and standard deviations are not applicable. (imggrad
has a high representation of 0).
- Don't fuck with the mainline of any generative model. As much as possible, all
additions should be done through residual connections. Never pollute the mainline
with reference data; do that in branches. Polluting it basically leaves the model
untrainable. (A residual-injection sketch follows below.)
2020-09-09 15:28:14 -06:00
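A minimal sketch of the residual-branch pattern the last lesson argues for; the module is hypothetical, but it shows reference data entering through a zero-initialized branch while the mainline passes through untouched:

```python
import torch
import torch.nn as nn

class ResidualRefInjection(nn.Module):
    # Merges reference features into the trunk without polluting the mainline:
    # at init the branch contributes exactly zero, so the trunk behaves as if
    # the reference data weren't there until training decides otherwise.
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)
        nn.init.zeros_(self.branch.weight)
        nn.init.zeros_(self.branch.bias)

    def forward(self, x, ref):
        return x + self.branch(torch.cat([x, ref], dim=1))
```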
James Betker
c04f244802
More mods
2020-09-08 20:36:27 -06:00
James Betker
e6207d4c50
SPSR3 work
...
SPSR3 is meant to fix whatever is causing the switching units
inside of the newer SPSR architectures to fail and basically
not use the multiplexers.
2020-09-08 15:14:23 -06:00
James Betker
a18ece62ee
Add updated spsr net for test
2020-09-07 17:01:48 -06:00
James Betker
e8613041c0
Add novograd optimizer
2020-09-06 17:27:08 -06:00
James Betker
6657a406ac
Mods needed to support training a corruptor again:
...
- Allow original SPSRNet to have a specifiable block increment
- Cleanup
- Bug fixes in code that hasn't been touched in a while.
2020-09-04 15:33:39 -06:00
James Betker
886d59d5df
Misc fixes & adjustments
2020-09-01 07:58:11 -06:00
James Betker
0a9b85f239
Fix vgg_gn input_img_factor
2020-08-31 09:50:30 -06:00
James Betker
4b4d08bdec
Enable testing in ExtensibleTrainer, fix it in SRGAN_model
...
Also compute fea loss for this.
2020-08-31 09:41:48 -06:00
James Betker
623f3b99b2
Stupid pathing..
2020-08-26 17:58:24 -06:00
James Betker
80aa83bfd2
Try copytree for tb_logger again.
2020-08-26 17:55:02 -06:00
James Betker
b593d8e7c3
Save tb_logger to alt_path
2020-08-26 17:45:07 -06:00
James Betker
f35b3ad28f
Fix val behavior for ExtensibleTrainer
2020-08-26 08:44:22 -06:00
James Betker
19487d9bbd
Fix distributed launch for large distributed runs
2020-08-25 15:42:59 -06:00
James Betker
a65b07607c
Reference network
2020-08-25 11:56:59 -06:00
James Betker
f9276007a8
More fixes to corrupt_fea
2020-08-23 17:52:18 -06:00
James Betker
dffc15184d
More ExtensibleTrainer work
...
It runs now, just need to debug it to reach performance parity with SRGAN. Sweet.
2020-08-23 17:22:45 -06:00
James Betker
afdd93fbe9
Grey feature
2020-08-22 13:41:38 -06:00
James Betker
40bb0597bb
Misc
2020-08-18 08:50:24 -06:00
James Betker
0c98c61f4a
Enable start_step to be specified
2020-08-15 18:34:59 -06:00
James Betker
2d205f52ac
Unite spsr_arch switched gens
...
Found a pretty good basis model.
2020-08-12 17:04:45 -06:00
James Betker
bdaa67deb7
Misc
2020-08-12 08:46:15 -06:00
James Betker
1d5f4f6102
Crossgan
2020-08-07 21:03:39 -06:00
James Betker
3ab39f0d22
Several new spsr nets
2020-08-05 10:01:24 -06:00
James Betker
328afde9c0
Integrate SPSR into SRGAN_model
...
SPSR_model really isn't that different from SRGAN_model. Rather than continuing to re-implement
everything I've done in SRGAN_model, port the new stuff from SPSR over.
This really demonstrates the need to refactor SRGAN_model a bit to make it cleaner. It is quite the
beast these days..
2020-08-02 12:55:08 -06:00
James Betker
c8da78966b
Substantial SPSR mods & fixes
...
- Added in gradient accumulation via mega-batch-factor (sketched below)
- Added AMP
- Added missing train hooks
- Added debug image outputs
- Cleanup, including removing GradientPenaltyLoss and the custom SpectralNorm
- Removed all the custom discriminators
2020-08-02 10:45:24 -06:00
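A sketch of how the first two bullets combine: gradient accumulation over a mega-batch split, under torch.cuda.amp (the real AMP API; the function and argument names are illustrative):

```python
import torch
from torch.cuda.amp import autocast, GradScaler

def megabatch_step(model, loss_fn, optimizer, scaler, chunks):
    # `chunks` is one logical batch split into mega_batch_factor pieces;
    # each loss is scaled down so the accumulated gradient matches what the
    # full batch would have produced.
    optimizer.zero_grad()
    for chunk in chunks:
        with autocast():
            loss = loss_fn(model(chunk)) / len(chunks)
        scaler.scale(loss).backward()  # grads accumulate across chunks
    scaler.step(optimizer)
    scaler.update()

# scaler = GradScaler() is created once, outside the step, and reused.
```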
James Betker
f894ba8f98
Add SPSR_module
...
This is a port from the SPSR repo; it's going to need a lot of work to be properly integrated,
but as of this commit it at least runs.
2020-08-01 22:02:54 -06:00
James Betker
eb11a08d1c
Enable disjoint feature networks
...
This is done by pre-training a feature net that predicts the features
of HR images from LR images. Then use the original feature network
and this new one in tandem to work only on LR/Gen images.
2020-07-31 16:29:47 -06:00
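A sketch of the described mechanism with hypothetical net names: one net predicts HR-image features straight from the LR input, so the perceptual target never touches an HR image at train time:

```python
import torch
import torch.nn.functional as F

def disjoint_feature_loss(gen, lr, feature_net, lr_to_hr_feature_net):
    # lr_to_hr_feature_net was pre-trained to predict feature_net(HR) from the
    # LR image, so `target` stands in for the features of the unseen HR image.
    with torch.no_grad():
        target = lr_to_hr_feature_net(lr)
    return F.l1_loss(feature_net(gen), target)
```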
James Betker
e37726f302
Add feature_model for training custom feature nets
2020-07-31 11:20:39 -06:00
James Betker
a7541b6d8d
Fix illegal tb_logger use in distributed training
2020-07-23 09:14:01 -06:00
James Betker
dbf6147504
Add switched discriminator
...
The logic is that the discriminator may be incapable of providing a truly
targeted loss for all image regions since it has to be too generic
(basically the same argument for the switched generator). So add some
switches in! See how it works!
2020-07-22 20:52:59 -06:00
James Betker
8a9f215653
Huge set of mods to support progressive generator growth
2020-07-18 14:18:48 -06:00
James Betker
47a525241f
Make attention norm optional
2020-07-18 07:24:02 -06:00
James Betker
8d061a2687
Add u-net discriminator with feature output
2020-07-16 10:10:09 -06:00
James Betker
1b1431133b
Add DualOutputSRG
...
Also removes the old multi-return mechanism that Generators support.
Also fixes AttentionNorm.
2020-07-14 09:28:24 -06:00
James Betker
812c684f7d
Update pixgan swap algorithm
...
- Swap multiple blocks in the image instead of just one. The discriminator was clearly
learning that most blocks have one region that needs to be fixed.
- Relax block size constraints. This was in place to guarantee that the discriminator
signal was clean. Instead, just downsample the "loss image" with bilinear interpolation.
The result is noisier, but this is actually probably healthy for the discriminator (see the sketch below).
2020-07-10 15:56:14 -06:00
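A sketch of the updated swap under stated assumptions; the block count, block size, and pixdisc output resolution are illustrative parameters, not the repo's:

```python
import torch
import torch.nn.functional as F

def pixgan_swap(real, fake, n_blocks=4, block=32, disc_res=16):
    # Copy several random blocks from `real` into `fake` and keep a per-pixel
    # real(1)/fake(0) map; block size is no longer constrained to the disc's
    # stride - the map is just bilinearly downsampled to the disc resolution.
    fake = fake.clone()
    b, _, h, w = fake.shape
    target = torch.zeros(b, 1, h, w, device=fake.device)
    for _ in range(n_blocks):
        y = torch.randint(0, h - block + 1, (1,)).item()
        x = torch.randint(0, w - block + 1, (1,)).item()
        fake[:, :, y:y + block, x:x + block] = real[:, :, y:y + block, x:x + block]
        target[:, :, y:y + block, x:x + block] = 1.0
    # Noisier than an exactly aligned target, but healthier for the disc.
    target = F.interpolate(target, size=(disc_res, disc_res), mode='bilinear',
                           align_corners=False)
    return fake, target
```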
James Betker
5f2c722a10
SRG2 revival
...
Big update to SRG2 architecture to pull in a lot of things that have been learned:
- Use group norm instead of batch norm
- Initialize the weights on the transformations low, as is done in RRDB, rather than using the scalar. Models live or die by their early stages, and this one's early stage is pretty weak (see the sketch below)
- Transform multiplexer to use u-net like architecture.
- Just use one set of configuration variables instead of a list - flat networks performed fine in this regard.
2020-07-09 17:34:51 -06:00
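A sketch of the RRDB-style low initialization from the second bullet; the 0.1 scale is an assumption:

```python
import torch.nn as nn

def scale_init(module, scale=0.1):
    # Shrink freshly initialized conv weights instead of multiplying the
    # block's output by a scalar: the block starts near-identity, which
    # keeps the model's weak early stage stable.
    for m in module.modules():
        if isinstance(m, nn.Conv2d):
            m.weight.data *= scale
            if m.bias is not None:
                m.bias.data.zero_()
```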
James Betker
b2507be13c
Fix up pixgan loss and pixdisc
2020-07-08 21:27:48 -06:00
James Betker
26a4a66d1c
Bug fixes and new gan mechanism
...
- Removed a bunch of unnecessary image loggers. These were just consuming space and never being viewed
- Got rid of artificial var_ref support. The new pixdisc is what I wanted to implement then - it's much better.
- Add pixgan GAN mechanism. This is purpose-built for the pixdisc. It is intended to promote a healthy discriminator
- Megabatchfactor was applied twice on metrics, fixed that
Adds pix_gan (untested) which swaps a portion of the fake and real image with each other, then expects the discriminator
to properly discriminate the swapped regions.
2020-07-08 17:40:26 -06:00
James Betker
8a4eb8241d
SRG3 work
...
Operates on top of a pre-trained SpineNet backbone (trained on COCO 2017 with RetinaNet)
This variant is extremely shallow.
2020-07-07 13:46:40 -06:00
James Betker
a47a5dca43
Fix pixdisc bug
2020-07-05 21:57:52 -06:00
James Betker
188de5e15a
Misc changes
2020-07-04 13:22:50 -06:00
James Betker
77d3765364
Fix new feature loss calc
2020-07-03 22:20:13 -06:00
James Betker
da4335c25e
Add a feature-based validation test
2020-07-03 15:18:57 -06:00
James Betker
703dec4472
Add SpineNet & integrate with SRG
...
New version of SRG uses SpineNet for a switch backbone.
2020-07-03 12:07:31 -06:00
James Betker
ea9c6765ca
Move train imports into init_dist
2020-07-02 15:11:21 -06:00
James Betker
c0bb123504
Misc changes
2020-07-01 11:28:23 -06:00
James Betker
2e3b6bad77
Log tensorboard directly into experiments directory
2020-06-18 11:33:02 -06:00
James Betker
45a900fafe
Misc
2020-06-18 11:28:55 -06:00
James Betker
379b96eb55
Output histograms with SwitchedResidualGenerator
...
This also fixes the initialization weight for the configurable generator.
2020-06-16 15:54:37 -06:00
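A sketch of this kind of logging; get_debug_values() is a hypothetical accessor for the generator's per-switch tensors, and the writer path just follows the experiments layout mentioned elsewhere in this log:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('experiments/exp_name/tb_logger')

def log_switch_histograms(generator, step):
    # Histogram every switch's attention/logit tensor so collapse (one branch
    # taking all the weight) shows up in tensorboard.
    for name, tensor in generator.get_debug_values().items():
        writer.add_histogram(f'switches/{name}', tensor, global_step=step)
```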
James Betker
6c27ddc9b5
Misc
2020-06-14 11:03:02 -06:00
James Betker
296135ec18
Add doResizeLoss to dataset
...
doResizeLoss has a 50% chance to resize the LQ image to 50% size,
then back to original size. This is useful for training a generator to
recover these lost pixel values while also being able to do
repairs on higher resolution images during training.
2020-06-08 11:27:06 -06:00
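A sketch of the described corruption, assuming a 4-D LQ tensor; the function name mirrors the option, everything else is illustrative:

```python
import random
import torch.nn.functional as F

def do_resize_loss(lq, p=0.5):
    # With probability p, destroy high-frequency content by halving the LQ
    # image and scaling it back up; the generator must learn to restore it.
    if random.random() < p:
        _, _, h, w = lq.shape
        lq = F.interpolate(lq, scale_factor=0.5, mode='bilinear', align_corners=False)
        lq = F.interpolate(lq, size=(h, w), mode='bilinear', align_corners=False)
    return lq
```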
James Betker
ef5d8a0ed1
Misc
2020-06-05 21:01:50 -06:00
James Betker
726d1913ac
Allow validating in batches, remove val size limit
2020-06-02 08:41:22 -06:00