James Betker | a51daacde2 | Fix reporting of d_fake_diff for generators | 2020-11-02 08:45:46 -07:00
James Betker | dcfe994fee | Add standalone srg2_classic | 2020-10-31 20:55:34 -06:00
  Trying to investigate how I was so misguided. I *thought* srg2 was considerably better than RRDB in performance but am not actually seeing that.
James Betker | eb7df63592 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-31 11:09:32 -06:00
James Betker | c2866ad8d2 | Disable debugging of comparable pingpong generations | 2020-10-31 11:09:10 -06:00
James Betker | 7303d8c932 | Add psnr approximator | 2020-10-31 11:08:55 -06:00
James Betker | 565517814e | Restore SRG2 | 2020-10-30 14:01:56 -06:00
  Going to try to figure out where SRG lost competitiveness to RRDB...
James Betker | 74738489b9 | Fixes and additional support for progressive zoom | 2020-10-30 09:59:54 -06:00
James Betker | a3918fa808 | Tecogan & other fixes | 2020-10-30 00:19:58 -06:00
James Betker | b316078a15 | Fix tecogan_losses fp16 | 2020-10-29 23:02:20 -06:00
James Betker | 3791f95ad0 | Enable RRDB to take in reference inputs | 2020-10-29 11:07:40 -06:00
James Betker | 7d38381d46 | Add scaling to rrdb | 2020-10-29 09:48:10 -06:00
James Betker | 607ff3c67c | RRDB with bypass | 2020-10-29 09:39:45 -06:00
James Betker | 1655b9e242 | Fix fast_forward teco loss bug | 2020-10-28 17:49:54 -06:00
James Betker | 515905e904 | Add a min_loss that is DDP compatible | 2020-10-28 15:46:59 -06:00
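
An aside on why DDP compatibility matters for a loss floor like this: under DistributedDataParallel, every rank must call backward() on the same graph every step, so a rank cannot simply skip optimization when its loss is already small without stalling the gradient all-reduce for the other ranks. A minimal sketch of one way to express a min_loss without desynchronizing ranks (the function name and the zeroing strategy are assumptions, not this repo's actual implementation):

    import torch

    def min_loss(loss: torch.Tensor, floor: float) -> torch.Tensor:
        # Hypothetical sketch: rather than skipping backward() when the loss
        # is below the floor (which would deadlock DDP's gradient all-reduce,
        # since other ranks would still be waiting in backward()), scale the
        # loss to zero. Every rank then runs backward() over the same graph;
        # sub-floor steps simply contribute no gradient.
        mask = (loss.detach() >= floor).to(loss.dtype)
        return loss * mask
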
James Betker | f133243ac8 | Extra logging for teco_resgen | 2020-10-28 15:21:22 -06:00
James Betker | 2ab5054d4c | Add noise to teco disc | 2020-10-27 22:48:23 -06:00
James Betker | 4dc16d5889 | Upgrade tecogan_losses for speed | 2020-10-27 22:40:15 -06:00
James Betker | ac3da0c5a6 | Make tecogen functional | 2020-10-27 21:08:59 -06:00
James Betker | 10da206db6 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-27 20:59:59 -06:00
James Betker | 9848f4c6cb | Add teco_resgen | 2020-10-27 20:59:55 -06:00
James Betker | 543c384a91 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-27 20:59:16 -06:00
James Betker | da53090ce6 | More adjustments to support distributed training with teco & on multi_modal_train | 2020-10-27 20:58:03 -06:00
James Betker | 00bb568956 | Further checkpointify spsr_arch | 2020-10-27 17:54:28 -06:00
James Betker | c2727a0150 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-27 15:24:19 -06:00
James Betker | 2a3eec8fd7 | Fix some distributed training snafus | 2020-10-27 15:24:05 -06:00
James Betker | d923a62ed3 | Allow SPSR to checkpoint | 2020-10-27 15:23:20 -06:00
James Betker | 11a9e223a6 | Retrofit SPSR_arch so it is capable of accepting a ref | 2020-10-27 11:14:36 -06:00
James Betker | 8202ee72b9 | Re-add original SPSR_arch | 2020-10-27 11:00:38 -06:00
James Betker | 231137ab0a | Revert RRDB back to original model | 2020-10-27 10:25:31 -06:00
James Betker | 1ce863849a | Remove temporary base_model change | 2020-10-26 11:13:01 -06:00
James Betker | 54accfa693 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-26 11:12:37 -06:00
James Betker | ff58c6484a | Fixes to unified chunk datasets to support stereoscopic training | 2020-10-26 11:12:22 -06:00
James Betker | f857eb00a8 | Allow tecogan losses to compute at 32px | 2020-10-26 11:09:55 -06:00
James Betker | 629b968901 | ChainedGen 4x alteration | 2020-10-26 10:54:51 -06:00
  Increases the conv window for teco_recurrent in the 4x case so all data can be used. base_model changes should be temporary.
James Betker | 85c07f85d9 | Update flownet submodule | 2020-10-24 11:59:00 -06:00
James Betker | 9c3d059ef0 | Updates to be able to train flownet2 in ExtensibleTrainer | 2020-10-24 11:56:39 -06:00
  Only supports basic losses for now, though.
James Betker | 1dbcbfbac8 | Restore ChainedEmbeddingGenWithStructure | 2020-10-24 11:54:52 -06:00
  Still using this guy, after all.
James Betker | 7a75d10784 | Arch cleanup | 2020-10-23 09:35:33 -06:00
James Betker | 646d6a621a | Support 4x zoom on ChainedEmbeddingGen | 2020-10-23 09:25:58 -06:00
James Betker | e9c0b9f0fd | More adjustments to support multi-modal training | 2020-10-22 16:49:34 -06:00
  Specifically, it looks like MSE loss, at least, cannot handle autocast tensors.
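
The autocast problem noted in that last commit typically surfaces as a dtype mismatch or precision loss when an fp16 prediction meets an fp32 target. A common workaround, sketched here (not necessarily how this repo resolves it), is to cast both operands to fp32 before computing the loss:

    import torch
    import torch.nn.functional as F

    def mse_fp32(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Cast both operands to fp32 before the loss; `pred` may be fp16 if
        # it came out of an autocast region, and mixing dtypes (or reducing
        # in half precision) is where the trouble starts. Losses are cheap,
        # so the extra precision costs almost nothing.
        return F.mse_loss(pred.float(), target.float())
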
James Betker | 76789a456f | Class-ify train.py and work on the multi-modal trainer | 2020-10-22 16:15:31 -06:00
James Betker | 15e00e9014 | Finish integration with autocast | 2020-10-22 14:39:19 -06:00
  Note: autocast is broken when also using checkpoint(). Overcome this by modifying torch's checkpoint() function in place to also use autocast.
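
The checkpoint() interaction called out above stems from how activation checkpointing replays the forward pass during backward: in the PyTorch of this era, the replay ran outside the original autocast region, so recomputed activations came back in the wrong dtype. The commit patches torch's checkpoint() in place; a less invasive sketch of the same idea is to bake autocast into the function being checkpointed (the wrapper name is illustrative, and newer PyTorch propagates autocast state through checkpoint() on its own):

    import torch
    from torch.utils.checkpoint import checkpoint

    def autocast_checkpoint(fn, *args):
        # checkpoint() replays `fn` during backward outside the original
        # autocast region, so we re-enter autocast inside the checkpointed
        # function itself. The recomputed activations then match the dtypes
        # produced on the first forward pass.
        def wrapped(*a):
            with torch.cuda.amp.autocast():
                return fn(*a)
        return checkpoint(wrapped, *args)
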
James Betker | d7ee14f721 | Move to torch.cuda.amp (not working) | 2020-10-22 13:58:05 -06:00
  Running into OOM errors; needs diagnosing. Checkpointing progress here.
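
For reference, the torch.cuda.amp API being migrated to pairs an autocast context for the forward pass with a GradScaler for the backward pass; the canonical loop looks roughly like this (model, optimizer, and data below are placeholders, not this repo's trainer):

    import torch
    import torch.nn.functional as F

    # Placeholder model and optimizer, just to make the loop concrete.
    net = torch.nn.Conv2d(3, 3, 3, padding=1).cuda()
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(10):
        lq = torch.randn(4, 3, 32, 32, device='cuda')  # stand-in batch
        hq = torch.randn(4, 3, 32, 32, device='cuda')
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():        # fp16 forward where safe
            loss = F.mse_loss(net(lq).float(), hq)
        scaler.scale(loss).backward()          # scale up to avoid fp16 underflow
        scaler.step(optimizer)                 # unscales grads, then steps
        scaler.update()                        # adapts the loss scale over time
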
James Betker | 3e3d2af1f3 | Add multi-modal trainer | 2020-10-22 13:27:32 -06:00
James Betker | 40dc2938e8 | Fix multifaceted chain gen | 2020-10-22 13:27:06 -06:00
James Betker | 43c4f92123 | Collapse progressive zoom candidates into the batch dimension | 2020-10-21 22:37:23 -06:00
  This contributes a significant speedup to training this type of network since losses can operate on the entire prediction spectrum at once.
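
The trick that commit describes is the standard fold-into-batch reshape: if each training sample carries N zoom candidates, stacking them along the batch axis lets a single loss call cover all of them instead of N separate calls. A minimal sketch (tensor names and shapes assumed for illustration):

    import torch
    import torch.nn.functional as F

    # preds/targets: (B, N, C, H, W), with N zoom candidates per sample.
    B, N, C, H, W = 4, 3, 3, 64, 64
    preds = torch.randn(B, N, C, H, W)
    targets = torch.randn(B, N, C, H, W)

    # Fold the candidate dimension into the batch dimension so a single
    # loss call covers the entire prediction spectrum at once.
    loss = F.mse_loss(preds.view(B * N, C, H, W),
                      targets.view(B * N, C, H, W))
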
James Betker | 680d635420 | Enable ExtensibleTrainer to skip steps when state keys are missing | 2020-10-21 22:22:28 -06:00
James Betker | d1175f0de1 | Add FFT injector | 2020-10-21 22:22:00 -06:00
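
In ExtensibleTrainer terms, an injector computes new tensors from the training state so that losses can consume them. The actual FFT injector isn't shown in this log; the following is a hedged sketch of the general idea using the modern torch.fft API (which postdates this commit), with the class name, state keys, and output layout all made up for illustration:

    import torch

    class FftInjector:
        # Hypothetical injector in the ExtensibleTrainer mold: reads a
        # tensor out of the training state and writes a frequency-domain
        # view of it back in.
        def __init__(self, in_key: str, out_key: str):
            self.in_key, self.out_key = in_key, out_key

        def __call__(self, state: dict) -> dict:
            img = state[self.in_key]              # (B, C, H, W) image tensor
            spec = torch.fft.rfft2(img.float())   # complex, (B, C, H, W//2+1)
            # Stack real and imaginary parts so ordinary real-valued losses
            # (L1/L2 in frequency space) can consume the result.
            state[self.out_key] = torch.stack((spec.real, spec.imag), dim=-1)
            return state
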
James Betker | 1ef559d7ca | Add a ChainedEmbeddingGen which can be simultaneously used with multiple training paradigms | 2020-10-21 22:21:51 -06:00
James Betker | 931aa65dd0 | Allow recurrent losses to be weighted | 2020-10-21 16:59:44 -06:00
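
Weighting recurrent losses usually means giving each recurrence step its own coefficient when summing the per-step losses, for instance to emphasize late steps once the recurrence has warmed up. A sketch under those assumptions (the function name, loss choice, and weight values are illustrative, not taken from the repo):

    import torch.nn.functional as F

    def weighted_recurrent_loss(preds, targets, weights):
        # preds/targets: lists of per-step outputs from a recurrent
        # generator; weights: one scalar per recurrence step.
        assert len(preds) == len(targets) == len(weights)
        return sum(w * F.mse_loss(p, t)
                   for p, t, w in zip(preds, targets, weights))

    # e.g. weights = [0.5, 0.75, 1.0] to emphasize later recurrence steps.
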