James Betker | 53e67bdb9c | Distribute get_grad_no_padding | 2020-08-25 17:03:18 -06:00
James Betker | 2f706b7d93 | I am inept. | 2020-08-25 16:42:59 -06:00
James Betker | 8bae0de769 | ffffffffffffffffff | 2020-08-25 16:41:01 -06:00
James Betker | 1fe16f71dd | Fix bug reporting spsr gan weight | 2020-08-25 16:37:45 -06:00
James Betker | 96586d6592 | Fix distributed d_grad | 2020-08-25 16:06:27 -06:00
James Betker | 09a9079e17 | Check rank before doing image logging. | 2020-08-25 16:00:49 -06:00
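
A guard like this is standard in distributed training: without it, every worker process writes its own copy of the logged images. A minimal sketch of the pattern, assuming torch.distributed and an illustrative save_visuals helper that is not the repo's actual API:

    import torch.distributed as dist

    def log_images_if_lead(visuals, step, save_visuals):
        # Only rank 0 (or a non-distributed run) writes images to disk.
        rank = dist.get_rank() if dist.is_initialized() else 0
        if rank == 0:
            save_visuals(visuals, step)
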
James Betker | a1800f45ef | Fix for referencing multiplexer | 2020-08-25 15:43:12 -06:00
James Betker | 19487d9bbd | Fix distributed launch for large distributed runs | 2020-08-25 15:42:59 -06:00
James Betker | 03eb29a4d9 | Fix LQGT dataset | 2020-08-25 11:57:25 -06:00
James Betker | a65b07607c | Reference network | 2020-08-25 11:56:59 -06:00
James Betker | f224907603 | Fix LQGT_dataset, add full_image_dataset | 2020-08-24 17:12:43 -06:00
James Betker | 5ec04aedc8 | Let noise be configurable | 2020-08-24 15:00:14 -06:00
    LQ noise is not currently configurable for some reason.
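
A hedged sketch of what making LQ noise configurable usually looks like: read a strength value from the dataset options and add Gaussian noise only when it is set. The lq_noise key is hypothetical, not necessarily the option name this commit introduced:

    import torch

    def maybe_add_lq_noise(lq_img, opt):
        # 'lq_noise' (a Gaussian sigma) is a hypothetical option key.
        sigma = opt.get('lq_noise', 0)
        if sigma > 0:
            lq_img = (lq_img + torch.randn_like(lq_img) * sigma).clamp(0, 1)
        return lq_img
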
James Betker | f9276007a8 | More fixes to corrupt_fea | 2020-08-23 17:52:18 -06:00
James Betker | 0005c56cd4 | dbg | 2020-08-23 17:43:03 -06:00
James Betker | 4bb5b3c981 | corrupt_fea debugging | 2020-08-23 17:39:02 -06:00
James Betker | 7713cb8df5 | Corrupted features in srgan | 2020-08-23 17:32:03 -06:00
James Betker | dffc15184d | More ExtensibleTrainer work | 2020-08-23 17:22:45 -06:00
    It runs now; it just needs debugging to reach performance parity with SRGAN. Sweet.
James Betker | afdd93fbe9 | Grey feature | 2020-08-22 13:41:38 -06:00
James Betker | e59e712e39 | More ExtensibleTrainer work | 2020-08-22 13:08:33 -06:00
James Betker | f40545f235 | ExtensibleTrainer work | 2020-08-22 08:24:34 -06:00
James Betker | a498d7b1b3 | Report l_g_gan_grad before weight multiplication | 2020-08-20 11:57:53 -06:00
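
What reporting a loss before weight multiplication typically means in a pipeline like this: log the raw loss value, which stays comparable across configurations, and apply the configured weight only when accumulating the training objective. A sketch with hypothetical names:

    def add_loss_term(total, raw_loss, weight, log, name):
        # Log the raw magnitude, not the weighted one...
        log[name] = raw_loss.detach().item()
        # ...so the weight only affects the objective being optimized.
        return total + weight * raw_loss
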
James Betker | 9d77a4db2e | Allow initial temperature to be specified to SPSR net for inference | 2020-08-20 11:57:34 -06:00
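
In a switched architecture, the temperature usually divides the switch logits before the softmax, so a high initial temperature blends transforms evenly while a low one approaches a hard selection. A sketch of that mechanism, with illustrative names rather than SPSR's actual module API:

    import torch.nn.functional as F

    def switch_attention(logits, temperature=10.0):
        # logits: (batch, num_transforms, H, W). temperature -> 0
        # approaches argmax; large values give an even blend.
        return F.softmax(logits / temperature, dim=1)
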
James Betker | 24bdcc1181 | Let SwitchedSpsr transform count be specified | 2020-08-18 09:10:25 -06:00
James Betker | 40bb0597bb | misc | 2020-08-18 08:50:24 -06:00
James Betker | 74cdaa2226 | Some work on extensible trainer | 2020-08-18 08:49:32 -06:00
James Betker | 0c98c61f4a | Enable start_step to be specified | 2020-08-15 18:34:59 -06:00
James Betker | 868d0aa442 | Undo early dim reduction on grad branch for SPSR_arch | 2020-08-14 16:23:42 -06:00
James Betker | 2d205f52ac | Unite spsr_arch switched gens | 2020-08-12 17:04:45 -06:00
    Found a pretty good basis model.
James Betker | bdaa67deb7 | Misc | 2020-08-12 08:46:15 -06:00
James Betker | 3d0ece804b | SPSR LR2 | 2020-08-12 08:45:49 -06:00
James Betker | ab04ca1778 | Extensible trainer (in progress) | 2020-08-12 08:45:23 -06:00
James Betker | cb316fabc7 | Use LR data for image gradient prediction when HR data is disjoint | 2020-08-10 15:00:28 -06:00
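
For context, SPSR's gradient branch works on an image gradient map, conventionally the per-pixel magnitude of horizontal and vertical differences. A sketch of the standard formulation, not necessarily this repo's exact get_grad code:

    import torch
    import torch.nn.functional as F

    def gradient_map(img):
        # img: (B, C, H, W); pad so the output matches the input size.
        dx = F.pad(img[:, :, :, 1:] - img[:, :, :, :-1], (0, 1, 0, 0))
        dy = F.pad(img[:, :, 1:, :] - img[:, :, :-1, :], (0, 0, 0, 1))
        return torch.sqrt(dx ** 2 + dy ** 2 + 1e-6)
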
James Betker | f0e2816239 | Denoise attention maps | 2020-08-10 14:59:58 -06:00
James Betker | 59aba1daa7 | LR switched SPSR arch | 2020-08-10 13:03:36 -06:00
    This variant doesn't do conv processing at HR, which should save a ton of memory in inference. Let's see how it works.
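
The memory argument, sketched in code: if all convolution runs at LR resolution and the network only upsamples at the very end, intermediate activations are scale^2 times smaller than in a network that convolves at HR. A minimal illustration with arbitrary layer choices:

    import torch.nn as nn

    class LRProcessedUpsampler(nn.Module):
        def __init__(self, nf=64, scale=4):
            super().__init__()
            # All heavy convolution happens at LR resolution...
            self.body = nn.Sequential(
                nn.Conv2d(3, nf, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(nf, nf, 3, padding=1), nn.ReLU(inplace=True))
            # ...so the only HR-sized tensor is the final output.
            self.up = nn.Sequential(
                nn.Conv2d(nf, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale))

        def forward(self, x):
            return self.up(self.body(x))
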
James Betker | 4e972144ae | More attention fixes for switched_spsr | 2020-08-07 21:11:50 -06:00
James Betker | d02509ef97 | spsr_switched missing import | 2020-08-07 21:05:29 -06:00
James Betker | 887806ffa0 | Finish up spsr_switched | 2020-08-07 21:03:48 -06:00
James Betker | 1d5f4f6102 | Crossgan | 2020-08-07 21:03:39 -06:00
James Betker | fd7b6ca0a9 | Compute gan_grad_branch... | 2020-08-06 12:11:40 -06:00
James Betker | 30b16d5235 | Update how branch GAN grad is disseminated | 2020-08-06 11:13:02 -06:00
James Betker | 1f21c02f8b | Add cross-compare discriminator | 2020-08-06 08:56:21 -06:00
James Betker | be272248af | More RAGAN fixes | 2020-08-05 16:47:21 -06:00
James Betker | 26a6a5d512 | Compute grad GAN loss against both the branch and final target, simplify pixel loss | 2020-08-05 12:08:15 -06:00
    Also fixes a memory leak where loss stats weren't being detached before logging. This stabilizes memory usage substantially.
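
The leak described here is a classic PyTorch pitfall: storing a loss tensor for logging keeps its whole autograd graph alive across steps. Detaching, or extracting a Python float, before recording lets the graph be freed each iteration. A minimal illustration:

    log_stats = {}

    def record_loss(name, loss_tensor):
        # .detach() drops the graph reference; .item() copies the scalar
        # out so nothing tensor-shaped is retained between steps.
        log_stats[name] = loss_tensor.detach().item()
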
James Betker | 299ee13988 | More RAGAN fixes | 2020-08-05 11:03:06 -06:00
James Betker | b8a4df0a0a | Enable RAGAN in SPSR, retrofit old RAGAN for efficiency | 2020-08-05 10:34:34 -06:00
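
RAGAN here is the relativistic average GAN loss popularized by ESRGAN: each sample is scored relative to the mean score of the opposite set. A sketch of the standard discriminator-side formulation, not necessarily the exact retrofit in this commit:

    import torch
    import torch.nn.functional as F

    def ragan_d_loss(d_real, d_fake):
        # Real samples should outscore the average fake, and vice versa.
        loss_real = F.binary_cross_entropy_with_logits(
            d_real - d_fake.mean(), torch.ones_like(d_real))
        loss_fake = F.binary_cross_entropy_with_logits(
            d_fake - d_real.mean(), torch.zeros_like(d_fake))
        return (loss_real + loss_fake) / 2
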
James Betker | 3ab39f0d22 | Several new spsr nets | 2020-08-05 10:01:24 -06:00
James Betker | 3c0a2d6efe | Fix grad branch debug out | 2020-08-04 16:43:43 -06:00
James Betker | ec2a795d53 | Fix multistep optimizer (feeding from wrong config params) | 2020-08-04 16:42:58 -06:00
James Betker | 4bfbdaf94f | Don't recompute generator outputs for D in standard operation | 2020-08-04 11:28:52 -06:00
    Should significantly improve training performance with negligible differences in results.
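
The optimization described: run the generator once per iteration and feed a detached copy of the same output to the discriminator, rather than doing a second forward pass through G. A sketch with illustrative names:

    def gan_forward(netG, netD, lq):
        fake = netG(lq)                    # single generator forward pass
        d_out_for_g = netD(fake)           # G step: gradients reach netG
        d_out_for_d = netD(fake.detach())  # D step: netG stays frozen
        return fake, d_out_for_g, d_out_for_d

The only behavioral difference is whether D sees a freshly recomputed generator output, which matches the note that result differences are negligible.
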
James Betker | 11b227edfc | Whoops | 2020-08-04 10:30:40 -06:00