James Betker | 7a75d10784 | 2020-10-23 09:35:33 -06:00
    Arch cleanup

James Betker | 646d6a621a | 2020-10-23 09:25:58 -06:00
    Support 4x zoom on ChainedEmbeddingGen

James Betker | e9c0b9f0fd | 2020-10-22 16:49:34 -06:00
    More adjustments to support multi-modal training
    Specifically - looks like at least MSE loss cannot handle autocasted tensors
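The note on e9c0b9f0fd (MSE loss failing on autocasted tensors) matches a common mixed-precision pitfall: squared differences in float16 can overflow or underflow. A minimal illustrative workaround, not this repo's actual code, is to cast up to float32 before computing the loss:

```python
import torch
import torch.nn.functional as F

def fp32_mse(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Under torch.cuda.amp.autocast, activations may arrive as float16;
    # cast up so the squared error is accumulated in float32.
    return F.mse_loss(pred.float(), target.float())

pred = torch.randn(4, 3, 8, 8, dtype=torch.float16)
target = torch.randn(4, 3, 8, 8, dtype=torch.float16)
loss = fp32_mse(pred, target)
```

The function name `fp32_mse` is hypothetical; the point is only that the cast happens before the loss call.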
James Betker | 76789a456f | 2020-10-22 16:15:31 -06:00
    Class-ify train.py and work on multi-modal trainer

James Betker | 15e00e9014 | 2020-10-22 14:39:19 -06:00
    Finish integration with autocast
    Note: autocast is broken when also using checkpoint(). Overcome this by modifying
    torch's checkpoint() function in place to also use autocast.
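The 15e00e9014 note reflects that checkpoint() recomputes its function during the backward pass, outside the original autocast context, so dtypes diverge between forward and recomputation. The commit patches torch's checkpoint() in place; a rough sketch of the same idea as a wrapper (names assumed, not the actual patch) is:

```python
import torch
from torch.utils.checkpoint import checkpoint

def autocast_checkpoint(fn, *args):
    """Checkpoint fn while re-entering autocast, so the recomputation that
    checkpoint() performs during backward sees the same dtype policy as
    the original forward pass."""
    def wrapped(*inner_args):
        # enabled=False on CPU-only machines so the sketch still runs there
        with torch.cuda.amp.autocast(enabled=torch.cuda.is_available()):
            return fn(*inner_args)
    return checkpoint(wrapped, *args, use_reentrant=False)

lin = torch.nn.Linear(4, 4)
x = torch.randn(2, 4, requires_grad=True)
y = autocast_checkpoint(lin, x)
y.sum().backward()
```

Note that `use_reentrant=False` and autocast-aware checkpointing are features of later PyTorch releases; at the time of this commit the fix had to be applied to torch itself.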
James Betker | d7ee14f721 | 2020-10-22 13:58:05 -06:00
    Move to torch.cuda.amp (not working)
    Running into OOM errors, needs diagnosing. Checkpointing here.

James Betker | 3e3d2af1f3 | 2020-10-22 13:27:32 -06:00
    Add multi-modal trainer

James Betker | 40dc2938e8 | 2020-10-22 13:27:06 -06:00
    Fix multifaceted chain gen

James Betker | 43c4f92123 | 2020-10-21 22:37:23 -06:00
    Collapse progressive zoom candidates into the batch dimension
    This contributes a significant speedup to training this type of network
    since losses can operate on the entire prediction spectrum at once.
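The speedup described in 43c4f92123 comes from folding the per-sample candidate axis into the batch axis, so one loss call covers every zoom candidate instead of iterating over them in Python. A generic sketch (shapes assumed):

```python
import torch
import torch.nn.functional as F

# b samples, n zoom candidates each, folded into the batch axis.
b, n, c, h, w = 2, 3, 3, 16, 16
candidates = torch.randn(b, n, c, h, w)
targets = torch.randn(b, n, c, h, w)

flat_pred = candidates.view(b * n, c, h, w)  # (b*n, c, h, w)
flat_tgt = targets.view(b * n, c, h, w)
loss = F.mse_loss(flat_pred, flat_tgt)  # one call over every candidate
```

Because every candidate has the same element count, this single call equals the mean of per-candidate losses, while avoiding a Python-level loop.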
James Betker | 680d635420 | 2020-10-21 22:22:28 -06:00
    Enable ExtensibleTrainer to skip steps when state keys are missing

James Betker | d1175f0de1 | 2020-10-21 22:22:00 -06:00
    Add FFT injector
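Injectors in ExtensibleTrainer are small transforms that read tensors out of the shared training state and write derived tensors back in. A hypothetical sketch of what an FFT injector could look like (class and key names are assumptions, not the repo's implementation):

```python
import torch

class FftInjector:
    """Reads an image tensor from the state dict and writes back the
    magnitude of its 2D FFT, taken over the last two dimensions."""
    def __init__(self, in_key: str, out_key: str):
        self.in_key = in_key
        self.out_key = out_key

    def __call__(self, state: dict) -> dict:
        spectrum = torch.fft.fft2(state[self.in_key])  # complex-valued
        return {self.out_key: spectrum.abs()}          # real magnitude

inj = FftInjector("gen_out", "gen_fft")
out = inj({"gen_out": torch.randn(1, 3, 8, 8)})
```

Returning a dict of new keys (rather than mutating `state`) mirrors the pattern of letting the trainer merge injector outputs into the shared state.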
James Betker | 1ef559d7ca | 2020-10-21 22:21:51 -06:00
    Add a ChainedEmbeddingGen which can be simultaneously used with multiple training paradigms

James Betker | 931aa65dd0 | 2020-10-21 16:59:44 -06:00
    Allow recurrent losses to be weighted

James Betker | 5753e77d67 | 2020-10-21 16:36:23 -06:00
    ChainedGen: Output debugging information on blocks

James Betker | 3c6e600e48 | 2020-10-21 11:08:03 -06:00
    Add capacity for models to self-report visuals

James Betker | dca5cddb3b | 2020-10-21 11:07:45 -06:00
    Add bypass to ChainedEmbeddingGen

James Betker | a63bf2ea2f | 2020-10-19 15:26:11 -06:00
    Merge remote-tracking branch 'origin/gan_lab' into gan_lab
James Betker | 76e4f0c086 | 2020-10-19 15:26:07 -06:00
    Restore test.py for use as standalone validator

James Betker | 1b1ca297f8 | 2020-10-19 15:25:12 -06:00
    Fix recurrent=None bug in ChainedEmbeddingGen

James Betker | b28e4d9cc7 | 2020-10-19 11:31:19 -06:00
    Add spread loss
    Experimental loss that peaks around 0.

James Betker | 981d64413b | 2020-10-19 11:01:56 -06:00
    Support validation over a custom injector
    Also re-enable PSNR

James Betker | 668cafa798 | 2020-10-18 22:39:52 -06:00
    Push correct patch of recurrent embedding to upstream image, rather than whole thing

James Betker | 7df378a944 | 2020-10-18 12:10:24 -06:00
    Remove separated vgg discriminator
    Checkpointing happens inline instead. Was a dumb idea..
    Also fixes some loss reporting issues.

James Betker | c709d38cd5 | 2020-10-18 10:22:10 -06:00
    Fix memory leak with recurrent loss

James Betker | 552e70a032 | 2020-10-18 10:09:37 -06:00
    Get rid of excessive checkpointed disc params

James Betker | 6a0d5f4813 | 2020-10-18 09:57:47 -06:00
    Add a checkpointable discriminator

James Betker | 9ead2c0a08 | 2020-10-17 22:54:12 -06:00
    Multiscale training in!
James Betker | e706911c83 | 2020-10-17 20:20:36 -06:00
    Fix spinenet bug

James Betker | b008a27d39 | 2020-10-17 20:16:47 -06:00
    Spinenet should allow bypassing the initial conv
    This makes feeding in references for recurrence easier.

James Betker | c1c9c5681f | 2020-10-17 08:40:28 -06:00
    Swap recurrence

James Betker | 6141aa1110 | 2020-10-17 08:35:46 -06:00
    More recurrence fixes for chainedgen

James Betker | cf8118a85b | 2020-10-17 08:32:29 -06:00
    Allow recurrence to be specified for chainedgen

James Betker | fc4c064867 | 2020-10-17 08:31:34 -06:00
    Add recurrent support to chainedgenwithstructure

James Betker | d4a3e11ab2 | 2020-10-17 08:28:37 -06:00
    Don't use several stages of spinenet_arch
    These are used for lower outputs which I am not using

James Betker | d1c63ae339 | 2020-10-16 20:47:35 -06:00
    Go back to torch's DDP
    Apex was having some weird crashing issues.
James Betker | d856378b2e | 2020-10-16 20:44:36 -06:00
    Add ChainedGenWithStructure

James Betker | 617d97e19d | 2020-10-15 23:18:08 -06:00
    Add ChainedEmbeddingGen

James Betker | c4543ce124 | 2020-10-15 17:20:42 -06:00
    Set post_transform_block to None where applicable

James Betker | 6f8705e8cb | 2020-10-15 17:18:44 -06:00
    SSGSimpler network

James Betker | eda75c9779 | 2020-10-15 10:13:17 -06:00
    Cleanup fixes

James Betker | 920865defb | 2020-10-15 10:13:06 -06:00
    Arch work

James Betker | 1f20d59c31 | 2020-10-14 11:03:34 -06:00
    Revert big switch back

James Betker | 24792bdb4f | 2020-10-13 20:56:39 -06:00
    Codebase cleanup
    Removed a lot of legacy stuff I have no intent on using again.
    Plan is to shape this repo into something more extensible (get it? hah!)

James Betker | e620fc05ba | 2020-10-13 20:47:05 -06:00
    Mods to support video processing with teco networks

James Betker | 17d78195ee | 2020-10-13 20:46:37 -06:00
    Mods to SRG to support returning switch logits

James Betker | cc915303a5 | 2020-10-13 10:14:47 -06:00
    Fix SPSR calls into SwitchComputer

James Betker | bdf4c38899 | 2020-10-13 10:12:26 -06:00
    Merge remote-tracking branch 'origin/gan_lab' into gan_lab
    # Conflicts:
    #   codes/models/archs/SwitchedResidualGenerator_arch.py

James Betker | 9a5d6162e9 | 2020-10-13 10:11:10 -06:00
    Add the "BigSwitch"

James Betker | 8014f050ac | 2020-10-13 10:07:49 -06:00
    Clear metrics properly
    Holy cow, what a PITA bug.

James Betker | 4d52374e60 | 2020-10-12 17:43:51 -06:00
    Merge remote-tracking branch 'origin/gan_lab' into gan_lab