Commit Graph

107 Commits

Author SHA1 Message Date
James Betker
bae18c05e6 wrap disc grad 2020-08-25 17:58:20 -06:00
James Betker
f85f1e21db Turns out, can't do that 2020-08-25 17:18:52 -06:00
James Betker
935a735327 More dohs 2020-08-25 17:05:16 -06:00
James Betker
53e67bdb9c Distribute get_grad_no_padding 2020-08-25 17:03:18 -06:00
James Betker
2f706b7d93 I am inept. 2020-08-25 16:42:59 -06:00
James Betker
8bae0de769 ffffffffffffffffff 2020-08-25 16:41:01 -06:00
James Betker
1fe16f71dd Fix bug reporting spsr gan weight 2020-08-25 16:37:45 -06:00
James Betker
96586d6592 Fix distributed d_grad 2020-08-25 16:06:27 -06:00
James Betker
09a9079e17 Check rank before doing image logging. 2020-08-25 16:00:49 -06:00
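The rank check in this commit is the standard pattern in distributed training: only the rank-0 process performs side effects such as image logging, so the same output is not written once per GPU. A minimal stdlib sketch of the idea (the `log_images_once` helper and its arguments are hypothetical, not names from this repo):

```python
def log_images_once(rank, images, log_fn):
    # Side effects (image dumps, tensorboard writes) should happen on a
    # single process, otherwise every GPU writes an identical copy.
    if rank != 0:
        return False
    log_fn(images)
    return True

logged = []
for rank in range(4):  # simulate a 4-process distributed job
    log_images_once(rank, ["sample.png"], logged.extend)
```

After the loop, `logged` holds a single copy of the image list, contributed by rank 0.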
James Betker
f9276007a8 More fixes to corrupt_fea 2020-08-23 17:52:18 -06:00
James Betker
0005c56cd4 dbg 2020-08-23 17:43:03 -06:00
James Betker
4bb5b3c981 corfea debugging 2020-08-23 17:39:02 -06:00
James Betker
7713cb8df5 Corrupted features in srgan 2020-08-23 17:32:03 -06:00
James Betker
dffc15184d More ExtensibleTrainer work
It runs now, just need to debug it to reach performance parity with SRGAN. Sweet.
2020-08-23 17:22:45 -06:00
James Betker
e59e712e39 More ExtensibleTrainer work 2020-08-22 13:08:33 -06:00
James Betker
a498d7b1b3 Report l_g_gan_grad before weight multiplication 2020-08-20 11:57:53 -06:00
James Betker
3d0ece804b SPSR LR2 2020-08-12 08:45:49 -06:00
James Betker
cb316fabc7 Use LR data for image gradient prediction when HR data is disjoint 2020-08-10 15:00:28 -06:00
James Betker
1d5f4f6102 Crossgan 2020-08-07 21:03:39 -06:00
James Betker
fd7b6ca0a9 Compute gan_grad_branch.... 2020-08-06 12:11:40 -06:00
James Betker
30b16d5235 Update how branch GAN grad is disseminated 2020-08-06 11:13:02 -06:00
James Betker
be272248af More RAGAN fixes 2020-08-05 16:47:21 -06:00
James Betker
26a6a5d512 Compute grad GAN loss against both the branch and final target, simplify pixel loss
Also fixes a memory leak issue where we weren't detaching our loss stats when
logging them. This stabilizes memory usage substantially.
2020-08-05 12:08:15 -06:00
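The leak described in this commit body is a common PyTorch pitfall: storing a live loss tensor in a stats dict keeps its entire autograd graph alive between iterations. A minimal sketch of the fix, with illustrative names rather than this repo's actual logging code:

```python
import torch

stats = {}
pred = torch.randn(8, requires_grad=True)
loss = (pred ** 2).mean()

# Wrong: stats["l_pix"] = loss would retain the graph behind `loss`
# until the entry is overwritten, growing memory every training step.
stats["l_pix"] = loss.detach().item()  # plain float; the graph is freed
```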
James Betker
299ee13988 More RAGAN fixes 2020-08-05 11:03:06 -06:00
James Betker
b8a4df0a0a Enable RAGAN in SPSR, retrofit old RAGAN for efficiency 2020-08-05 10:34:34 -06:00
James Betker
3c0a2d6efe Fix grad branch debug out 2020-08-04 16:43:43 -06:00
James Betker
ec2a795d53 Fix multistep optimizer (feeding from wrong config params) 2020-08-04 16:42:58 -06:00
James Betker
4bfbdaf94f Don't recompute generator outputs for D in standard operation
Should significantly improve training performance with negligible
differences in results.
2020-08-04 11:28:52 -06:00
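The optimization this commit describes is the usual GAN training trick: run the generator once, reuse its output for both updates, and `detach()` it for the discriminator pass so no gradients flow back into G. A sketch under those assumptions (the tiny linear modules are stand-ins, not this repo's networks):

```python
import torch
import torch.nn as nn

g = nn.Linear(4, 4)  # stand-in generator
d = nn.Linear(4, 1)  # stand-in discriminator

lq = torch.randn(2, 4)
fake = g(lq)                      # single forward pass through G

g_loss = (1 - d(fake)).mean()     # G update sees D's opinion of fake
d_loss = d(fake.detach()).mean()  # D update reuses the same tensor,
                                  # detached so no grads reach G
```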
James Betker
11b227edfc Whoops 2020-08-04 10:30:40 -06:00
James Betker
6d25bcd5df Apply fixes to grad discriminator 2020-08-04 10:25:13 -06:00
James Betker
c7e5d3888a Add pix_grad_branch loss to metrics 2020-08-03 16:21:05 -06:00
James Betker
0d070b47a7 Add simplified SPSR architecture
Basically just cleaning up the code, removing some bad conventions,
and reducing complexity somewhat so that I can play around with
this arch a bit more easily.
2020-08-03 10:25:37 -06:00
James Betker
47e24039b5 Fix bug that makes feature loss run even when it is off 2020-08-02 20:37:51 -06:00
James Betker
328afde9c0 Integrate SPSR into SRGAN_model
SPSR_model really isn't that different from SRGAN_model. Rather than continuing to re-implement
everything I've done in SRGAN_model, port the new stuff from SPSR over.

This really demonstrates the need to refactor SRGAN_model a bit to make it cleaner. It is quite the
beast these days.
2020-08-02 12:55:08 -06:00
James Betker
c139f5cd17 More torch 1.6 fixes 2020-07-31 17:03:20 -06:00
James Betker
a66fbb32b6 Fix fixed_disc DataParallel issue 2020-07-31 16:59:23 -06:00
James Betker
bcebed19b7 Fix pixdisc bugs 2020-07-31 16:38:14 -06:00
James Betker
eb11a08d1c Enable disjoint feature networks
This is done by pre-training a feature net that predicts the features
of HR images from LR images. The original feature network and this new
one are then used in tandem to work only on LR/Gen images.
2020-07-31 16:29:47 -06:00
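The two-network setup in this commit body can be sketched as follows: the new net predicts HR-space features directly from the LR input, and those predictions (detached) serve as the target for the features of the generated image. All module names here are hypothetical stand-ins for illustration:

```python
import torch
import torch.nn as nn

hr_feat_net = nn.Linear(4, 8)        # original net: features of HR-like images
lr_to_hr_feat_net = nn.Linear(4, 8)  # new net: predicts HR features from LR
generator = nn.Linear(4, 4)

lq = torch.randn(2, 4)
target_feats = lr_to_hr_feat_net(lq).detach()  # no HR image required
gen_feats = hr_feat_net(generator(lq))
feat_loss = (gen_feats - target_feats).abs().mean()
```

Because the target is detached, the feature loss trains the generator without ever touching an HR ground-truth image.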
James Betker
6e086d0c20 Fix fixed_disc 2020-07-31 15:07:10 -06:00
James Betker
d5fa059594 Add capability to have old discriminators serve as feature networks 2020-07-31 14:59:54 -06:00
James Betker
7629cb0e61 Add FDPL Loss
New loss type that can replace PSNR loss. It works in the frequency domain
and focuses on the loss of frequency features during hr->lr conversion.
2020-07-30 20:47:57 -06:00
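As a hedged illustration of the general idea only, not this repo's actual FDPL implementation: a frequency-domain loss can compare images by the magnitudes of their 2-D FFTs, so errors are measured per frequency component instead of being averaged uniformly over pixels the way plain MSE/PSNR does:

```python
import torch

def freq_magnitude_loss(pred, target):
    # Compare FFT magnitudes instead of raw pixels: high-frequency
    # detail contributes its own error terms rather than being washed
    # out by a pixel-wise average.
    fp = torch.fft.rfft2(pred).abs()
    ft = torch.fft.rfft2(target).abs()
    return (fp - ft).abs().mean()

x = torch.rand(1, 3, 16, 16)
y = torch.rand(1, 3, 16, 16)
```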
James Betker
85ee64b8d9 Turn down feadisc intensity
Honestly - this feature is probably going to be removed soon, so backwards
compatibility is not a huge deal anymore.
2020-07-27 15:28:55 -06:00
James Betker
ebb199e884 Get rid of safety valve (probably being encountered in val) 2020-07-26 22:51:59 -06:00
James Betker
d09ed4e5f7 Misc fixes 2020-07-26 22:44:24 -06:00
James Betker
c54784ae9e Fix feature disc log item error 2020-07-26 22:25:59 -06:00
James Betker
9a8f227501 Allow separate dataset to pushed in for GAN-only training 2020-07-26 21:44:45 -06:00
James Betker
3320ad685f Fix mega_batch_factor not set for test 2020-07-24 12:26:44 -06:00
James Betker
c50cce2a62 Add an abstract, configurable weight scheduling class and apply it to the feature weight 2020-07-23 17:03:54 -06:00
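A configurable weight-scheduling class of the kind this commit describes can be sketched in a few lines; this is a hypothetical stand-in, not the repo's actual class:

```python
class LinearWeightScheduler:
    """Ramps a loss weight linearly between two step counts."""

    def __init__(self, initial, final, start_step, end_step):
        self.initial, self.final = initial, final
        self.start_step, self.end_step = start_step, end_step

    def weight(self, step):
        # Flat before the ramp, flat after it, linear in between.
        if step <= self.start_step:
            return self.initial
        if step >= self.end_step:
            return self.final
        frac = (step - self.start_step) / (self.end_step - self.start_step)
        return self.initial + frac * (self.final - self.initial)

# e.g. ramp the feature-loss weight from 0 to 1 over steps 1000-2000
sched = LinearWeightScheduler(0.0, 1.0, 1000, 2000)
```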
James Betker
9ccf771629 Fix feature validation, wrong device
Only shows up in distributed training for some reason.
2020-07-23 10:16:34 -06:00
James Betker
bba283776c Enable find_unused_parameters for DistributedDataParallel
attention_norm has some parameters that are not used to compute gradients,
which causes failures in the distributed case.
2020-07-23 09:08:13 -06:00
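The failure mode this commit works around can be reproduced without any distributed setup: a submodule that never participates in `forward` receives no gradient, which is exactly what DDP's default reducer chokes on. A minimal sketch, assuming PyTorch (the `Net` module is hypothetical):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Linear(4, 4)
        self.unused = nn.Linear(4, 4)  # never called in forward()

    def forward(self, x):
        return self.used(x)

net = Net()
net(torch.randn(2, 4)).sum().backward()
# net.unused.weight.grad stays None: this is what trips up DDP's reducer.
# Under DistributedDataParallel the fix is to opt in to detection:
#   ddp = nn.parallel.DistributedDataParallel(net, find_unused_parameters=True)
```

`find_unused_parameters=True` makes DDP walk the autograd graph each iteration and mark parameters that got no gradient, at a small performance cost.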