7d38381d46 | Add scaling to rrdb | James Betker | 2020-10-29 09:48:10 -06:00
607ff3c67c | RRDB with bypass | James Betker | 2020-10-29 09:39:45 -06:00
1655b9e242 | Fix fast_forward teco loss bug | James Betker | 2020-10-28 17:49:54 -06:00
25b007a0f5 | Increase jpeg corruption & add error | James Betker | 2020-10-28 17:37:39 -06:00
796659b0ac | Add 'jpeg-normal' corruption | James Betker | 2020-10-28 16:40:47 -06:00
515905e904 | Add a min_loss that is DDP compatible | James Betker | 2020-10-28 15:46:59 -06:00
f133243ac8 | Extra logging for teco_resgen | James Betker | 2020-10-28 15:21:22 -06:00
2ab5054d4c | Add noise to teco disc | James Betker | 2020-10-27 22:48:23 -06:00
4dc16d5889 | Upgrade tecogan_losses for speed | James Betker | 2020-10-27 22:40:15 -06:00
ac3da0c5a6 | Make tecogen functional | James Betker | 2020-10-27 21:08:59 -06:00
10da206db6 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | James Betker | 2020-10-27 20:59:59 -06:00
9848f4c6cb | Add teco_resgen | James Betker | 2020-10-27 20:59:55 -06:00
543c384a91 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | James Betker | 2020-10-27 20:59:16 -06:00
da53090ce6 | More adjustments to support distributed training with teco & on multi_modal_train | James Betker | 2020-10-27 20:58:03 -06:00
00bb568956 | Further checkpointify spsr_arch | James Betker | 2020-10-27 17:54:28 -06:00
c2727a0150 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | James Betker | 2020-10-27 15:24:19 -06:00
2a3eec8fd7 | Fix some distributed training snafus | James Betker | 2020-10-27 15:24:05 -06:00
d923a62ed3 | Allow SPSR to checkpoint | James Betker | 2020-10-27 15:23:20 -06:00
11a9e223a6 | Retrofit SPSR_arch so it is capable of accepting a ref | James Betker | 2020-10-27 11:14:36 -06:00
8202ee72b9 | Re-add original SPSR_arch | James Betker | 2020-10-27 11:00:38 -06:00
31cf1ac98d | Retrofit full_image_dataset to work with new arch | James Betker | 2020-10-27 10:26:19 -06:00
ade0a129da | Include PSNR in test.py | James Betker | 2020-10-27 10:25:42 -06:00
231137ab0a | Revert RRDB back to original model | James Betker | 2020-10-27 10:25:31 -06:00
1ce863849a | Remove temporary base_model change | James Betker | 2020-10-26 11:13:01 -06:00
54accfa693 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | James Betker | 2020-10-26 11:12:37 -06:00
ff58c6484a | Fixes to unified chunk datasets to support stereoscopic training | James Betker | 2020-10-26 11:12:22 -06:00
b2f803588b | Fix multi_modal_train.py | James Betker | 2020-10-26 11:10:22 -06:00
f857eb00a8 | Allow tecogan losses to compute at 32px | James Betker | 2020-10-26 11:09:55 -06:00
629b968901 | ChainedGen 4x alteration | James Betker | 2020-10-26 10:54:51 -06:00
    Increases the conv window for teco_recurrent in the 4x case so all data can be used.
    base_model changes should be temporary.
85c07f85d9 | Update flownet submodule | James Betker | 2020-10-24 11:59:00 -06:00
327cdbe110 | Support configurable multi-modal training | James Betker | 2020-10-24 11:57:39 -06:00
9c3d059ef0 | Updates to be able to train flownet2 in ExtensibleTrainer | James Betker | 2020-10-24 11:56:39 -06:00
    Only supports basic losses for now, though.
1dbcbfbac8 | Restore ChainedEmbeddingGenWithStructure | James Betker | 2020-10-24 11:54:52 -06:00
    Still using this guy, after all.
8e5b6682bf | Add PairedFrameDataset | James Betker | 2020-10-23 20:58:07 -06:00
7a75d10784 | Arch cleanup | James Betker | 2020-10-23 09:35:33 -06:00
646d6a621a | Support 4x zoom on ChainedEmbeddingGen | James Betker | 2020-10-23 09:25:58 -06:00
8636492db0 | Copy train.py mods to train2 | James Betker | 2020-10-22 17:16:36 -06:00
e9c0b9f0fd | More adjustments to support multi-modal training | James Betker | 2020-10-22 16:49:34 -06:00
    Specifically, it looks like at least MSE loss cannot handle autocasted tensors.
76789a456f | Class-ify train.py and work on multi-modal trainer | James Betker | 2020-10-22 16:15:31 -06:00
15e00e9014 | Finish integration with autocast | James Betker | 2020-10-22 14:39:19 -06:00
    Note: autocast is broken when also using checkpoint(). Overcome this by modifying
    torch's checkpoint() function in place to also use autocast.
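The autocast note in 15e00e9014 reflects a real interaction: `torch.utils.checkpoint.checkpoint()` recomputes the forward pass during backward in a fresh context, outside the autocast region that was active on the original forward. The commit's approach is to modify torch's `checkpoint()` in place; a less invasive variant of the same idea is sketched below. The helper name `checkpoint_with_autocast` is hypothetical, not code from this repository:

```python
import torch
from torch.utils.checkpoint import checkpoint


def checkpoint_with_autocast(fn, *args):
    """Hypothetical helper: capture the ambient autocast state and re-enter it
    inside the checkpointed callable, so the recomputed forward pass during
    backward runs under the same autocast setting as the original forward."""
    enabled = torch.is_autocast_enabled()

    def wrapped(*inner_args):
        with torch.cuda.amp.autocast(enabled=enabled):
            return fn(*inner_args)

    return checkpoint(wrapped, *args)
```

Wrapping at the call site this way avoids monkey-patching torch, at the cost of having to route every checkpointed call through the helper.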
d7ee14f721 | Move to torch.cuda.amp (not working) | James Betker | 2020-10-22 13:58:05 -06:00
    Running into OOM errors, needs diagnosing. Checkpointing here.
3e3d2af1f3 | Add multi-modal trainer | James Betker | 2020-10-22 13:27:32 -06:00
40dc2938e8 | Fix multifaceted chain gen | James Betker | 2020-10-22 13:27:06 -06:00
f9dc472f63 | Misc nonfunctional mods to datasets | James Betker | 2020-10-22 10:16:17 -06:00
43c4f92123 | Collapse progressive zoom candidates into the batch dimension | James Betker | 2020-10-21 22:37:23 -06:00
    This contributes a significant speedup to training this type of network
    since losses can operate on the entire prediction spectrum at once.
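The batch-collapse idea in 43c4f92123 can be sketched as follows. This is an illustrative reconstruction, not the repository's code: the (B, N, C, H, W) candidate stacking and the MSE loss choice are assumptions. Folding the per-input candidate axis N into the batch axis lets a single loss call cover every candidate at once instead of looping over them:

```python
import torch
import torch.nn.functional as F


def candidate_loss(preds: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Fold the candidate axis into the batch axis, then compute one loss
    over all candidates at once. Both tensors are assumed to be shaped
    (B, N, C, H, W): B inputs, each with N stacked zoom candidates."""
    b, n, c, h, w = preds.shape
    preds = preds.reshape(b * n, c, h, w)
    targets = targets.reshape(b * n, c, h, w)
    return F.mse_loss(preds, targets)
```

Since `mse_loss` averages over all elements, this single call is equivalent to averaging N separate per-candidate losses, while dispatching one fused kernel instead of N.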
680d635420 | Enable ExtensibleTrainer to skip steps when state keys are missing | James Betker | 2020-10-21 22:22:28 -06:00
d1175f0de1 | Add FFT injector | James Betker | 2020-10-21 22:22:00 -06:00
1ef559d7ca | Add a ChainedEmbeddingGen which can be simultaneously used with multiple training paradigms | James Betker | 2020-10-21 22:21:51 -06:00
931aa65dd0 | Allow recurrent losses to be weighted | James Betker | 2020-10-21 16:59:44 -06:00
5753e77d67 | ChainedGen: Output debugging information on blocks | James Betker | 2020-10-21 16:36:23 -06:00