Commit Graph

2145 Commits (b253da6e353f0170c3eb60fe299c41d2fa21db50)

Author SHA1 Message Date
James Betker b6e036147a Add more batch norms to FlatProcessorNet_arch 2020-04-30 11:47:21 +07:00
James Betker 66e91a3d9e Revert "Enable skip-through connections from disc to gen"
This reverts commit b7857f35c3.
2020-04-30 11:45:07 +07:00
James Betker f027e888ed Clear out tensorboard on job restart. 2020-04-30 11:44:53 +07:00
James Betker b7857f35c3 Enable skip-through connections from disc to gen 2020-04-30 11:30:11 +07:00
James Betker bf634fc9fa Make resnet w/ BN discriminator use leaky relus 2020-04-30 11:28:59 +07:00
James Betker 3781ea725c Add Resnet Discriminator with BN 2020-04-29 20:51:57 +07:00
James Betker a5188bb7ca Remove fixup code from arch_util
Going into its own arch.
2020-04-29 15:17:43 +07:00
James Betker 5b8a77f02c Discriminator part 1
New discriminator. Includes spectral norming.
2020-04-28 23:00:29 +07:00
James Betker 2c145c39b6 Misc changes 2020-04-28 11:50:16 +07:00
James Betker 46f550e42b Change downsample_dataset to do no image modification
I'm preprocessing the images myself now. There's no need to have
the dataset do this processing as well.
2020-04-28 11:50:04 +07:00
James Betker 8ab595e427 Add FlatProcessorNet
After doing some thinking and reading on the subject, it occurred to me that
I was treating the generator like a discriminator by focusing the network
complexity at the feature levels. It makes far more sense to process each conv
level equally for the generator, hence the FlatProcessorNet in this commit. This
network borrows some of the residual pass-through logic from RRDB which makes
the gradient path exceptionally short for pretty much all model parameters and
can be trained in O1 optimization mode without overflows again.
2020-04-28 11:49:21 +07:00
James Betker b8f67418d4 Retool HighToLowResNet
The receptive field of the original was *really* low. This new one has a
receptive field of 36x36px patches. It also has some gradient issues
that need to be worked out.
2020-04-26 01:13:42 +07:00
James Betker 02ff4a57fd Enable HighToLowResNet to do a 1:1 transform 2020-04-25 21:36:32 +07:00
James Betker 35bd1ecae4 Config changes for discriminator advantage run
Still going from high->low; the discriminator discerns on low. Next up: disc works on high.
2020-04-25 11:24:28 +07:00
James Betker d95808f4ef Implement downsample GAN
This bad boy is for a workflow where you train a model on disjoint image sets to
downsample a "good" set of images so it looks like a "bad" set of images. You then
use that downsampler to generate a training set of paired images for supersampling.
2020-04-24 00:00:46 +07:00
James Betker ea54c7618a Print error when image read fails 2020-04-23 23:59:32 +07:00
James Betker e98d92fc77 Allow test to operate on batches 2020-04-23 23:59:09 +07:00
James Betker 8ead9ae183 Lots more config files 2020-04-23 23:58:27 +07:00
James Betker ea5f432f5a Log total gen loss 2020-04-22 14:02:10 +07:00
James Betker 79aff886b5 Modifications that allow developer to explicitly specify a different image set for PIX and feature losses 2020-04-22 10:11:14 +07:00
James Betker 12d92dc443 Add GTLQ dataset 2020-04-22 00:40:38 +07:00
James Betker 4d269fdac6 Support independent PIX dataroot 2020-04-22 00:40:13 +07:00
James Betker 05aafef938 Support variant input sizes and scales 2020-04-22 00:39:55 +07:00
James Betker ebda70fcba Fix AMP 2020-04-22 00:39:31 +07:00
James Betker f4b33b0531 Some random fixes/adjustments 2020-04-22 00:38:53 +07:00
James Betker 2538ca9f33 Add my own configs 2020-04-22 00:37:54 +07:00
James Betker af5dfaa90d Change GT_size to target_size 2020-04-22 00:37:41 +07:00
James Betker cc834bd5a3 Support >128px image squares 2020-04-21 16:32:59 +07:00
James Betker 35a421f6ad Add IDEA Project 2020-04-21 16:28:21 +07:00
James Betker 4f6d3f0dfb Enable AMP optimizations & write sample train images to folder. 2020-04-21 16:28:06 +07:00
James Betker 9fc556be35 Remove bad folder
Breaks git on windows
2020-04-21 16:26:19 +07:00
Xintao a73b318f0f Merge pull request #32 from imPRAGMA/master
pip requirements file, and optimisations to readme
2019-11-29 23:57:45 +07:00
PRAGMA 1fb12871fd Create requirements.txt 2019-11-24 07:48:52 +07:00
PRAGMA 8e91d59107 Update README.md 2019-11-24 07:47:57 +07:00
Xintao 3549100670 Merge pull request #24 from nlpjoe/master
Update DATASETS.md
2019-10-27 17:28:03 +07:00
Jzzhou ff8b645246 Update DATASETS.md 2019-10-27 16:46:45 +07:00
Xintao 8c615f0763 Update README.md 2019-09-06 21:32:28 +07:00
XintaoWang a25ee9464d test w/o GT 2019-09-01 22:20:10 +07:00
XintaoWang 0098663b6b SRGAN model supports dist training 2019-09-01 22:14:29 +07:00
XintaoWang 9d949b838e Merge branch 'master' of github.com:open-mmlab/mmsr 2019-08-27 17:49:28 +07:00
XintaoWang 866a858e59 add deform_conv_cuda_kernel.cu 2019-08-27 17:49:12 +07:00
Chen Change Loy caabd8e9c7 Update README.md 2019-08-26 17:34:32 +07:00
XintaoWang dfdaaef492 add DATASETS.md 2019-08-23 21:45:54 +07:00
XintaoWang 037933ba66 mmsr 2019-08-23 21:42:47 +07:00
Kai Chen 58b175161c Initial commit 2019-08-23 21:04:30 +07:00