James Betker | 11a9e223a6 | Retrofit SPSR_arch so it is capable of accepting a ref | 2020-10-27 11:14:36 -06:00
James Betker | 8202ee72b9 | Re-add original SPSR_arch | 2020-10-27 11:00:38 -06:00
James Betker | 31cf1ac98d | Retrofit full_image_dataset to work with new arch | 2020-10-27 10:26:19 -06:00
James Betker | ade0a129da | Include psnr in test.py | 2020-10-27 10:25:42 -06:00
James Betker | 231137ab0a | Revert RRDB back to original model | 2020-10-27 10:25:31 -06:00
James Betker | 1ce863849a | Remove temporary base_model change | 2020-10-26 11:13:01 -06:00
James Betker | 54accfa693 | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-26 11:12:37 -06:00
James Betker | ff58c6484a | Fixes to unified chunk datasets to support stereoscopic training | 2020-10-26 11:12:22 -06:00
James Betker | b2f803588b | Fix multi_modal_train.py | 2020-10-26 11:10:22 -06:00
James Betker | f857eb00a8 | Allow tecogan losses to compute at 32px | 2020-10-26 11:09:55 -06:00
James Betker | 629b968901 | ChainedGen 4x alteration | 2020-10-26 10:54:51 -06:00
    Increases the conv window for teco_recurrent in the 4x case so all data can be used. The base_model changes should be temporary.
James Betker | 85c07f85d9 | Update flownet submodule | 2020-10-24 11:59:00 -06:00
James Betker | 327cdbe110 | Support configurable multi-modal training | 2020-10-24 11:57:39 -06:00
James Betker | 9c3d059ef0 | Updates to be able to train flownet2 in ExtensibleTrainer | 2020-10-24 11:56:39 -06:00
    Only supports basic losses for now, though.
James Betker | 1dbcbfbac8 | Restore ChainedEmbeddingGenWithStructure | 2020-10-24 11:54:52 -06:00
    Still using this one, after all.
James Betker | 8e5b6682bf | Add PairedFrameDataset | 2020-10-23 20:58:07 -06:00
James Betker | 7a75d10784 | Arch cleanup | 2020-10-23 09:35:33 -06:00
James Betker | 646d6a621a | Support 4x zoom on ChainedEmbeddingGen | 2020-10-23 09:25:58 -06:00
James Betker | 8636492db0 | Copy train.py mods to train2 | 2020-10-22 17:16:36 -06:00
James Betker | e9c0b9f0fd | More adjustments to support multi-modal training | 2020-10-22 16:49:34 -06:00
    Specifically, it looks like at least MSE loss cannot handle autocast tensors.
James Betker | 76789a456f | Class-ify train.py and work on multi-modal trainer | 2020-10-22 16:15:31 -06:00
James Betker | 15e00e9014 | Finish integration with autocast | 2020-10-22 14:39:19 -06:00
    Note: autocast is broken when also using checkpoint(). Overcome this by modifying torch's checkpoint() function in place to also use autocast.
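The autocast/checkpoint interaction noted in 15e00e9014 can be sketched. This is a hypothetical illustration, not the repository's actual patch: instead of modifying torch's `checkpoint()` in place, it wraps the checkpointed function so both the initial forward pass and the recomputation during backward enter autocast. The helper name `checkpointed_autocast` is invented for this sketch.

```python
import torch
from torch.utils.checkpoint import checkpoint

def checkpointed_autocast(fn, *args):
    """Run a checkpointed function under autocast.

    Reentrant checkpointing in the torch versions of this era did not replay
    autocast state when recomputing activations for the backward pass, which
    produced dtype mismatches. Wrapping fn so that every invocation enters
    autocast sidesteps that.
    """
    def wrapped(*a):
        # enabled=... keeps this runnable on CPU-only machines, where CUDA
        # autocast would be a no-op anyway.
        with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
            return fn(*a)
    # use_reentrant=True selects the legacy checkpoint path, the one with
    # the autocast issue this wrapper is working around.
    return checkpoint(wrapped, *args, use_reentrant=True)
```

A caller would pass the module and its inputs directly, e.g. `y = checkpointed_autocast(block, x)`.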
James Betker | d7ee14f721 | Move to torch.cuda.amp (not working) | 2020-10-22 13:58:05 -06:00
    Running into OOM errors; needs diagnosing. Checkpointing here.
James Betker | 3e3d2af1f3 | Add multi-modal trainer | 2020-10-22 13:27:32 -06:00
James Betker | 40dc2938e8 | Fix multifaceted chain gen | 2020-10-22 13:27:06 -06:00
James Betker | f9dc472f63 | Misc nonfunctional mods to datasets | 2020-10-22 10:16:17 -06:00
James Betker | 43c4f92123 | Collapse progressive zoom candidates into the batch dimension | 2020-10-21 22:37:23 -06:00
    This contributes a significant speedup when training this type of network, since losses can operate on the entire prediction spectrum at once.
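The batch-collapse trick in 43c4f92123 can be sketched as follows. The function name and the `(batch, candidates, C, H, W)` tensor layout are assumptions for illustration, not taken from the repository:

```python
import torch

def collapse_candidates(t: torch.Tensor) -> torch.Tensor:
    # Fold a per-image "candidates" dimension into the batch dimension so a
    # single loss call covers every candidate at once, instead of looping
    # over zoom levels and invoking the loss once per candidate.
    # (batch, candidates, channels, height, width) -> (batch * candidates, C, H, W)
    b, n, c, h, w = t.shape
    return t.reshape(b * n, c, h, w)
```

After the loss is computed elementwise, per-candidate results can be unfolded back with `.reshape(b, n, ...)` if individual weighting is needed.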
James Betker | 680d635420 | Enable ExtensibleTrainer to skip steps when state keys are missing | 2020-10-21 22:22:28 -06:00
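A minimal sketch of the skip-on-missing-keys behavior from 680d635420. The step interface here (a `required_keys` list and a shared `state` dict) is hypothetical and not drawn from ExtensibleTrainer's real API:

```python
def run_steps(steps, state):
    # Execute only those steps whose required inputs are present in the
    # shared state dict; steps with missing keys are skipped rather than
    # raising a KeyError mid-iteration.
    executed = []
    for step in steps:
        missing = [k for k in step["required_keys"] if k not in state]
        if missing:
            continue  # skip this step; its inputs were never produced
        state.update(step["fn"](state))
        executed.append(step["name"])
    return executed
```

Skipped steps leave no outputs behind, so any later step depending on them is skipped in turn.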
James Betker | d1175f0de1 | Add FFT injector | 2020-10-21 22:22:00 -06:00
James Betker | 1ef559d7ca | Add a ChainedEmbeddingGen which can be simultaneously used with multiple training paradigms | 2020-10-21 22:21:51 -06:00
James Betker | 931aa65dd0 | Allow recurrent losses to be weighted | 2020-10-21 16:59:44 -06:00
James Betker | 5753e77d67 | ChainedGen: Output debugging information on blocks | 2020-10-21 16:36:23 -06:00
James Betker | b54de69153 | Misc | 2020-10-21 11:08:21 -06:00
James Betker | 71c3820d2d | Fix process_video | 2020-10-21 11:08:12 -06:00
James Betker | 3c6e600e48 | Add capacity for models to self-report visuals | 2020-10-21 11:08:03 -06:00
James Betker | dca5cddb3b | Add bypass to ChainedEmbeddingGen | 2020-10-21 11:07:45 -06:00
James Betker | d8c6a4bbb8 | Misc | 2020-10-20 12:56:52 -06:00
James Betker | aba83e7497 | Don't apply jpeg corruption & noise corruption together | 2020-10-20 12:56:35 -06:00
    Applying both causes some severe noise.
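The fix in aba83e7497 amounts to making the two corruptions mutually exclusive per image. A hedged sketch, with hypothetical function names and probabilities; `fake_jpeg` here is a crude quantization stand-in rather than a real JPEG round-trip:

```python
import random

import numpy as np

def fake_jpeg(img: np.ndarray) -> np.ndarray:
    # Crude stand-in for JPEG compression artifacts: coarse quantization of
    # pixel values. A real pipeline would round-trip through a JPEG encoder.
    return (img // 16 * 16).astype(img.dtype)

def corrupt(img: np.ndarray, rng: random.Random) -> np.ndarray:
    # Apply at most ONE corruption per image. Stacking JPEG artifacts on top
    # of additive noise compounds the degradation and yields severely noisy
    # training data, which is what the commit above avoids.
    choice = rng.choice(["jpeg", "noise", "none"])
    if choice == "noise":
        noisy = img.astype(np.float32) + np.random.normal(0.0, 10.0, img.shape)
        return np.clip(noisy, 0, 255).astype(img.dtype)
    if choice == "jpeg":
        return fake_jpeg(img)
    return img
```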
James Betker | 111450f4e7 | Use area interpolation for multiscale_dataset | 2020-10-19 15:30:25 -06:00
James Betker | a63bf2ea2f | Merge remote-tracking branch 'origin/gan_lab' into gan_lab | 2020-10-19 15:26:11 -06:00
James Betker | 76e4f0c086 | Restore test.py for use as a standalone validator | 2020-10-19 15:26:07 -06:00
James Betker | 1b1ca297f8 | Fix recurrent=None bug in ChainedEmbeddingGen | 2020-10-19 15:25:12 -06:00
James Betker | 331c40f0c8 | Allow starting step to be forced | 2020-10-19 15:23:04 -06:00
    Useful for testing purposes or to force a validation.
James Betker | 8ca566b621 | Revert "Misc" | 2020-10-19 13:34:54 -06:00
    This reverts commit 0e3ea63a14.
    Conflicts: codes/test.py, codes/train.py
James Betker | b28e4d9cc7 | Add spread loss | 2020-10-19 11:31:19 -06:00
    Experimental loss that peaks around 0.
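Commit b28e4d9cc7 gives only the one-line description. One plausible shape for a loss that "peaks around 0" is a Gaussian bump centered at zero, which penalizes values for clustering near zero and so pushes them to spread out. The form and the `sigma` parameter are guesses for illustration, not the repository's definition:

```python
import torch

def spread_loss(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Maximal at x == 0 and decaying as |x| grows, so gradient descent
    # pushes values away from zero. Purely illustrative: the actual spread
    # loss in commit b28e4d9cc7 may differ.
    return torch.exp(-(x ** 2) / (2 * sigma ** 2)).mean()
```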
James Betker | 9b9a6e5925 | Add get_paths() to base_unsupervised_image_dataset | 2020-10-19 11:30:06 -06:00
James Betker | 981d64413b | Support validation over a custom injector | 2020-10-19 11:01:56 -06:00
    Also re-enables PSNR.
James Betker | ffad0e0422 | Allow image corruption in multiscale dataset | 2020-10-19 10:10:27 -06:00
James Betker | 668cafa798 | Push the correct patch of the recurrent embedding to the upstream image, rather than the whole thing | 2020-10-18 22:39:52 -06:00
James Betker | 7df378a944 | Remove separated vgg discriminator | 2020-10-18 12:10:24 -06:00
    Checkpointing happens inline instead; the separated discriminator was a dumb idea. Also fixes some loss-reporting issues.