James Betker | 44c75f7642 | Undo SRG change | 2020-09-04 17:32:16 -06:00
James Betker | 6657a406ac | Mods needed to support training a corruptor again: | 2020-09-04 15:33:39 -06:00
  - Allow the original SPSRNet to have a specifiable block increment
  - Cleanup
  - Bug fixes in code that hasn't been touched in a while.
James Betker | bfdfaab911 | Checkpoint RRDB | 2020-09-04 15:32:00 -06:00
  Greatly reduces memory consumption with a low performance penalty.
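Checkpointing the RRDB trunk trades a little recomputation for a large drop in stored activations. Below is a minimal sketch of what block-level checkpointing can look like in PyTorch; `CheckpointedTrunk` and the toy conv blocks are illustrative assumptions, not the repo's actual classes:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedTrunk(nn.Module):
    """Hypothetical wrapper: checkpoints each block so its internal
    activations are recomputed on backward instead of being stored."""

    def __init__(self, blocks: nn.ModuleList):
        super().__init__()
        self.blocks = blocks

    def forward(self, x):
        for block in self.blocks:
            if self.training and x.requires_grad:
                # Activations inside `block` are freed after the forward pass
                # and recomputed during backward: compute traded for memory.
                x = checkpoint(block, x)
            else:
                x = block(x)
        return x


# Illustrative usage with toy conv blocks standing in for RRDBs.
blocks = nn.ModuleList(
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()) for _ in range(8)
)
trunk = CheckpointedTrunk(blocks)
x = torch.randn(2, 16, 32, 32, requires_grad=True)
trunk(x).sum().backward()
```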
James Betker | 696242064c | Use tensor checkpointing to drastically reduce memory usage | 2020-09-03 11:33:36 -06:00
  This comes at the expense of computation, but since we can use much larger batches, it results in a net speedup.
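For a sequential trunk, PyTorch's `torch.utils.checkpoint.checkpoint_sequential` expresses the same trade directly: it stores activations only at segment boundaries and recomputes the interiors during the backward pass. A minimal sketch with an illustrative toy model (not the repo's network):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Toy stand-in for a deep conv trunk; channel and image sizes are illustrative.
model = nn.Sequential(
    *[nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()) for _ in range(8)]
)
x = torch.randn(4, 16, 64, 64, requires_grad=True)

# Two segments: only activations at segment boundaries are kept; everything
# inside a segment is recomputed on backward, costing roughly one extra
# forward pass in exchange for a much smaller activation footprint.
out = checkpoint_sequential(model, 2, x)
out.mean().backward()
```

The net speedup the commit mentions comes from spending the freed memory on larger batch sizes.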
James Betker | 365813bde3 | Add InterpolateInjector | 2020-09-03 11:32:47 -06:00
James Betker | d90c96e55e | Fix greyscale injector | 2020-09-02 10:29:40 -06:00
James Betker | 8b52d46847 | Add interpreted feature loss to ExtensibleTrainer | 2020-09-02 10:08:24 -06:00
James Betker | 886d59d5df | Misc fixes & adjustments | 2020-09-01 07:58:11 -06:00
James Betker | 0a9b85f239 | Fix vgg_gn input_img_factor | 2020-08-31 09:50:30 -06:00
James Betker | 4b4d08bdec | Enable testing in ExtensibleTrainer, fix it in SRGAN_model | 2020-08-31 09:41:48 -06:00
  Also compute fea loss for this.
James Betker | b2091cb698 | Feamod fix | 2020-08-30 08:08:49 -06:00
James Betker | a56e906f9c | Train HR feature trainer | 2020-08-29 09:27:48 -06:00
James Betker | 0e859a8082 | 4x SPSR ref (not working) | 2020-08-29 09:27:18 -06:00
James Betker | 25832930db | Update loss with LR crossgan | 2020-08-26 17:57:22 -06:00
James Betker | cbd5e7a986 | Support old-school crossgan in ExtensibleTrainer | 2020-08-26 17:52:35 -06:00
James Betker | 8a6a2e6e2e | Rev3 of the full image ref arch | 2020-08-26 17:11:01 -06:00
James Betker | f35b3ad28f | Fix val behavior for ExtensibleTrainer | 2020-08-26 08:44:22 -06:00
James Betker | 434ed70a9a | Wrap vgg disc | 2020-08-25 18:14:45 -06:00
James Betker | 83f2f8d239 | More debugging | 2020-08-25 18:12:12 -06:00
James Betker | 3f60281da7 | Print when wrapping | 2020-08-25 18:08:46 -06:00
James Betker | bae18c05e6 | Wrap disc grad | 2020-08-25 17:58:20 -06:00
James Betker | f85f1e21db | Turns out, can't do that | 2020-08-25 17:18:52 -06:00
James Betker | 935a735327 | More dohs | 2020-08-25 17:05:16 -06:00
James Betker | 53e67bdb9c | Distribute get_grad_no_padding | 2020-08-25 17:03:18 -06:00
James Betker | 2f706b7d93 | I am inept. | 2020-08-25 16:42:59 -06:00
James Betker | 8bae0de769 | ffffffffffffffffff | 2020-08-25 16:41:01 -06:00
James Betker | 1fe16f71dd | Fix bug reporting SPSR GAN weight | 2020-08-25 16:37:45 -06:00
James Betker | 96586d6592 | Fix distributed d_grad | 2020-08-25 16:06:27 -06:00
James Betker | 09a9079e17 | Check rank before doing image logging | 2020-08-25 16:00:49 -06:00
James Betker | a1800f45ef | Fix for referencingmultiplexer | 2020-08-25 15:43:12 -06:00
James Betker | a65b07607c | Reference network | 2020-08-25 11:56:59 -06:00
James Betker | f9276007a8 | More fixes to corrupt_fea | 2020-08-23 17:52:18 -06:00
James Betker | 0005c56cd4 | dbg | 2020-08-23 17:43:03 -06:00
James Betker | 4bb5b3c981 | corfea debugging | 2020-08-23 17:39:02 -06:00
James Betker | 7713cb8df5 | Corrupted features in SRGAN | 2020-08-23 17:32:03 -06:00
James Betker | dffc15184d | More ExtensibleTrainer work | 2020-08-23 17:22:45 -06:00
  It runs now; it just needs debugging to reach performance parity with SRGAN. Sweet.
James Betker | afdd93fbe9 | Grey feature | 2020-08-22 13:41:38 -06:00
James Betker | e59e712e39 | More ExtensibleTrainer work | 2020-08-22 13:08:33 -06:00
James Betker | f40545f235 | ExtensibleTrainer work | 2020-08-22 08:24:34 -06:00
James Betker | a498d7b1b3 | Report l_g_gan_grad before weight multiplication | 2020-08-20 11:57:53 -06:00
James Betker | 9d77a4db2e | Allow initial temperature to be specified to SPSR net for inference | 2020-08-20 11:57:34 -06:00
James Betker | 24bdcc1181 | Let SwitchedSpsr transform count be specified | 2020-08-18 09:10:25 -06:00
James Betker | 74cdaa2226 | Some work on extensible trainer | 2020-08-18 08:49:32 -06:00
James Betker | 868d0aa442 | Undo early dim reduction on grad branch for SPSR_arch | 2020-08-14 16:23:42 -06:00
James Betker | 2d205f52ac | Unite spsr_arch switched gens | 2020-08-12 17:04:45 -06:00
  Found a pretty good basis model.
James Betker | 3d0ece804b | SPSR LR2 | 2020-08-12 08:45:49 -06:00
James Betker | ab04ca1778 | Extensible trainer (in progress) | 2020-08-12 08:45:23 -06:00
James Betker | cb316fabc7 | Use LR data for image gradient prediction when HR data is disjoint | 2020-08-10 15:00:28 -06:00
James Betker | f0e2816239 | Denoise attention maps | 2020-08-10 14:59:58 -06:00
James Betker | 59aba1daa7 | LR switched SPSR arch | 2020-08-10 13:03:36 -06:00
  This variant doesn't do conv processing at HR, which should save a ton of memory in inference. Let's see how it works.
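The memory claim in that last entry follows from keeping every convolution at LR resolution and expanding to HR only at the output, so intermediate feature maps stay scale² times smaller than in an HR-space design. A minimal sketch of that layout; `LRBodyUpsampler` and all sizes are illustrative assumptions, not the actual SPSR architecture:

```python
import torch
import torch.nn as nn


class LRBodyUpsampler(nn.Module):
    """Illustrative layout: all heavy convs run at LR; a PixelShuffle tail
    produces HR pixels, so no conv ever sees an HR-sized feature map."""

    def __init__(self, in_nc=3, nf=64, scale=4, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_nc, nf, 3, padding=1)
        self.body = nn.Sequential(
            *[nn.Sequential(nn.Conv2d(nf, nf, 3, padding=1), nn.ReLU())
              for _ in range(n_blocks)]
        )
        # Expand channels at LR, then rearrange them into spatial resolution.
        self.tail = nn.Sequential(
            nn.Conv2d(nf, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.tail(self.body(self.head(x)))


# Usage: a 32x32 LR input produces a 128x128 HR output at scale=4.
x = torch.randn(1, 3, 32, 32)
assert LRBodyUpsampler()(x).shape == (1, 3, 128, 128)
```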