ce6524184c | James Betker | 2021-01-02 22:24:12 -07:00
    Do the last commit but in a better way

edf9c38198 | James Betker | 2021-01-02 22:22:34 -07:00
    Make ExtensibleTrainer set the starting step for the LR scheduler

bdbab65082 | James Betker | 2021-01-02 15:10:06 -07:00
    Allow optimizers to train separate param groups, add higher dimensional VGG discriminator
    Did this to support training 512x512px networks off of a pretrained 256x256 network.

193cdc6636 | James Betker | 2021-01-01 15:56:09 -07:00
    Move discriminators to the create_model paradigm
    Also cleans up a lot of old discriminator models that I have no intention of using again.

9864fe4c04 | James Betker | 2021-01-01 11:59:00 -07:00
    Fix for train.py
0eb1f4dd67 | James Betker | 2020-12-31 10:31:40 -07:00
    Revert "Get rid of CUDA_VISIBLE_DEVICES"
    It is actually necessary for training in distributed mode. Only do it then.

1de1fa30ac | James Betker | 2020-12-31 10:13:24 -07:00
    Disable refs and centers altogether in single_image_dataset
    I suspect that this might be a cause of failures on parallel datasets. Plus it is unnecessary computation.

8f0984cacf | James Betker | 2020-12-30 20:18:58 -07:00
    Add sr_fid evaluator
a777c1e4f9 | James Betker | 2020-12-29 20:25:09 -07:00
    Misc script fixes

3fd627fc62 | James Betker | 2020-12-26 13:49:27 -07:00
    Mods to support image classification & filtering

29db7c7a02 | James Betker | 2020-12-24 09:28:41 -07:00
    Further mods to BYOL

1bbcb96ee8 | James Betker | 2020-12-23 10:50:23 -07:00
    Implement a few changes to support training BYOL networks

2437b33e74 | James Betker | 2020-12-22 15:42:38 -07:00
    Fix srflow_latent_space_playground bug

e82f4552db | James Betker | 2020-12-18 16:21:28 -07:00
    Update other docs with dumb config options

5640e4efe4 | James Betker | 2020-12-18 09:18:34 -07:00
    More refactoring

a8179ff53c | James Betker | 2020-12-18 08:53:18 -07:00
    Image label work
fb2cfc795b | James Betker | 2020-12-16 09:42:50 -07:00
    Update requirements, add image_patch_classifier tool

e5a3e6b9b5 | James Betker | 2020-12-14 23:59:49 -07:00
    srflow latent space misc

ec0ee25f4b | James Betker | 2020-12-11 12:01:09 -07:00
    Structural latents checkpoint

a5630d282f | James Betker | 2020-12-10 09:57:38 -07:00
    Get rid of 2nd trainer
11155aead4 | James Betker | 2020-12-04 20:14:53 -07:00
    Directly use dataset keys
    This has been a long time coming. Cleans up messy "GT" nomenclature and simplifies ExtensibleTrainer.feed_data.

8a83b1c716 | James Betker | 2020-12-04 16:39:21 -07:00
    Go back to apex DDP, fix distributed bugs
8a00f15746 | James Betker | 2020-12-02 14:09:54 -07:00
    Implement FlowGaussianNll evaluator

2e0bbda640 | James Betker | 2020-12-01 11:10:48 -07:00
    Remove unused archs

da604752e6 | James Betker | 2020-11-29 12:21:31 -07:00
    Misc RRDB changes

a1d4c9f83c | James Betker | 2020-11-28 14:35:46 -07:00
    multires rrdb work

ef8d5f88c1 | James Betker | 2020-11-27 13:30:21 -07:00
    Bring split gaussian nll out of split so it can be computed accurately with the rest of the nll component

fd356580c0 | James Betker | 2020-11-26 20:30:55 -07:00
    Play with lambdas

45a489110f | James Betker | 2020-11-26 11:50:38 -07:00
    Fix datasets

f6098155cd | James Betker | 2020-11-24 09:24:02 -07:00
    Mods to tecogan to allow use of embeddings as input

b10bcf6436 | James Betker | 2020-11-23 11:31:11 -07:00
    Rework stylegan_for_sr to incorporate structure as an adain block
5ccdbcefe3 | James Betker | 2020-11-19 23:47:24 -07:00
    srflow_orig integration

d7877d0a36 | James Betker | 2020-11-19 11:35:05 -07:00
    Fixes to teco losses and translational losses

6b679e2b51 | James Betker | 2020-11-17 18:31:40 -07:00
    Make grad_penalty available to classical discs

8a19c9ae15 | James Betker | 2020-11-16 20:45:09 -07:00
    Add additive mode to rrdb

125cb16dce | James Betker | 2020-11-14 20:16:07 -07:00
    Add a FID evaluator for stylegan with structural guidance

ec621c69b5 | James Betker | 2020-11-14 09:29:08 -07:00
    Fix train bug

a07e1a7292 | James Betker | 2020-11-13 11:03:54 -07:00
    Add separate Evaluator module and FID evaluator

fc55bdb24e | James Betker | 2020-11-12 15:45:25 -07:00
    Mods to how wandb is integrated
db9e9e28a0 | James Betker | 2020-11-12 15:43:01 -07:00
    Fix an issue where GPU0 was always being used in non-ddp
    Frankly, I don't understand how this has ever worked. WTF.

88f349bdf1 | James Betker | 2020-11-11 21:48:56 -07:00
    Enable usage of wandb

6a2fd5f7d0 | James Betker | 2020-11-10 16:06:54 -07:00
    Lots of new discriminator nets

0cf52ef52c | James Betker | 2020-11-06 20:38:23 -07:00
    latent work

74738489b9 | James Betker | 2020-10-30 09:59:54 -06:00
    Fixes and additional support for progressive zoom

607ff3c67c | James Betker | 2020-10-29 09:39:45 -06:00
    RRDB with bypass

da53090ce6 | James Betker | 2020-10-27 20:58:03 -06:00
    More adjustments to support distributed training with teco & on multi_modal_train

2a3eec8fd7 | James Betker | 2020-10-27 15:24:05 -06:00
    Fix some distributed training snafus

ff58c6484a | James Betker | 2020-10-26 11:12:22 -06:00
    Fixes to unified chunk datasets to support stereoscopic training

9c3d059ef0 | James Betker | 2020-10-24 11:56:39 -06:00
    Updates to be able to train flownet2 in ExtensibleTrainer
    Only supports basic losses for now, though.

8636492db0 | James Betker | 2020-10-22 17:16:36 -06:00
    Copy train.py mods to train2