James Betker
f87e10ffef
Make deterministic sampler work with distributed training & microbatches
2022-03-04 11:50:50 -07:00
James Betker
2d1cb83c1d
Add a deterministic timestep sampler, with provisions to employ it every n steps
2022-03-04 10:40:14 -07:00
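The deterministic sampler described in the two commits above could look roughly like the sketch below: evenly spaced timesteps swapped in for the usual uniform random draws, used every n training steps. This is an illustrative sketch only; the class and argument names (DeterministicSampler, every_n) are assumptions, not the repository's actual API.

    import torch

    class DeterministicSampler:
        """Illustrative: evenly spaced timesteps instead of uniform random draws."""
        def __init__(self, num_timesteps):
            self.num_timesteps = num_timesteps

        def sample(self, batch_size, device):
            # Spread the batch evenly across [0, num_timesteps) so the same
            # batch size always sees the same set of timesteps.
            t = torch.linspace(0, self.num_timesteps - 1, batch_size, device=device).long()
            weights = torch.ones_like(t, dtype=torch.float)  # uniform importance weights
            return t, weights

    # "Employ it every n steps": fall back to the usual random sampler otherwise.
    def pick_sampler(step, random_sampler, deterministic_sampler, every_n=5):
        return deterministic_sampler if step % every_n == 0 else random_sampler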
James Betker
db0c3340ac
Implement guidance-free diffusion in eval
...
And a few other fixes
2022-03-01 11:49:36 -07:00
James Betker
2134f06516
Implement conditioning-free diffusion at the eval level
2022-02-27 15:11:42 -07:00
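For context on the two commits above: "conditioning-free" (classifier-free) guidance at eval time blends a conditioned and an unconditioned model output. A minimal sketch, assuming a model whose forward pass accepts an optional conditioning tensor; guidance_scale is an illustrative name, not the repository's actual parameter.

    import torch

    @torch.no_grad()
    def guided_eps(model, x_t, t, cond, guidance_scale=2.0):
        # Two forward passes: one with conditioning, one with it dropped.
        eps_cond = model(x_t, t, cond)
        eps_uncond = model(x_t, t, None)
        # Push the prediction away from the unconditional output and toward
        # the conditional one by the guidance scale.
        return eps_uncond + guidance_scale * (eps_cond - eps_uncond)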
James Betker
7b4544b83a
Add an experimental unet_diffusion_tts to perform experiments on
2022-01-18 08:38:24 -07:00
James Betker
937045cb63
Fixes
2021-12-18 16:45:38 -07:00
James Betker
5a664aa56e
misc
2021-12-11 08:17:26 -07:00
James Betker
f2a31702b5
Clean stuff up, move more things into arch_util
2021-10-20 21:19:25 -06:00
James Betker
1d0b44ebc2
More tweaks to diffusion-vocoder
2021-10-15 11:51:17 -06:00
James Betker
83798887a8
Mods to support unet diffusion vocoder with conditioning
2021-10-13 21:23:18 -06:00
James Betker
0396a9d2ca
Increase baseline codes recording across all dvae models
2021-09-30 08:09:07 -06:00
James Betker
ac57cdc794
Add scheduling to quantizer, enable cudnn_benchmarking to be disabled
2021-09-24 17:01:36 -06:00
James Betker
3e64e847c2
Gumbel quantizer
2021-09-23 23:32:03 -06:00
James Betker
c5297ccec6
Add dvae balancing heuristic
2021-09-23 21:19:36 -06:00
James Betker
e24c619387
Fix
2021-09-23 16:07:58 -06:00
James Betker
6833048bf7
Alterations to diffusion_dvae so it can be used directly on spectrograms
2021-09-23 15:56:25 -06:00
James Betker
5c8d266d4f
chk
2021-09-17 09:15:36 -06:00
James Betker
a6544f1684
More checkpointing fixes
2021-09-16 23:12:43 -06:00
James Betker
94899d88f3
Fix overuse of checkpointing
2021-09-16 23:00:28 -06:00
James Betker
f78ce9d924
Get diffusion_dvae ready for prime time!
2021-09-16 22:43:10 -06:00
James Betker
6f48674647
Support diffusion models with extra return values & inference in diffusion_dvae
2021-09-16 10:53:46 -06:00
James Betker
0382660159
Get diffusion_dvae functional
2021-09-14 17:43:31 -06:00
James Betker
73b930c0f6
Add diffusion_dvae
...
Increase split_on_silence interval
2021-09-09 16:22:05 -06:00
James Betker
b8f2e0f452
mydvae
2021-09-06 17:45:30 -06:00
James Betker
3e073cff85
Set kernel_size in diffusion_vocoder
2021-09-01 08:33:46 -06:00
James Betker
dabd87246d
Add unet_diffusion_vocoder
2021-08-31 14:38:33 -06:00
James Betker
398185e109
More work on wave-diffusion
2021-07-27 05:36:17 -06:00
James Betker
96e90e7047
Add support for a gaussian-diffusion-based wave tacotron
2021-07-26 16:27:31 -06:00
James Betker
afa41f1804
Allow hq color jittering and corruptions that are not included in the corruption factor
2021-06-30 09:44:46 -06:00
James Betker
6fd16ea9c8
Add meta-anomaly detection, colorjitter augmentation
2021-06-29 13:41:55 -06:00
James Betker
46e9f62be0
Add unet with latent guide
...
This is a diffusion network that uses both an LQ image and a reference HQ sample image, which is compressed into a latent vector, to perform upsampling.
The hope is that we can steer the upsampling network with sample images.
2021-06-26 11:02:58 -06:00
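The commit body above implies roughly the wiring sketched below. The module names (RefEncoder, LatentGuidedUnet) and tensor shapes are assumptions for illustration, not the actual classes in unet_latent_guide.py.

    import torch
    import torch.nn as nn

    class RefEncoder(nn.Module):
        """Compresses an HQ reference image into a single latent vector."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, latent_dim, 3, stride=2, padding=1),
            )
        def forward(self, hq_ref):
            return self.conv(hq_ref).mean(dim=(2, 3))  # global pool -> (B, latent_dim)

    class LatentGuidedUnet(nn.Module):
        """Diffusion U-Net that upsamples an LQ image, steered by the reference latent."""
        def __init__(self, unet, latent_dim=256, emb_dim=512):
            super().__init__()
            self.unet = unet                      # any conditional diffusion U-Net
            self.proj = nn.Linear(latent_dim, emb_dim)
        def forward(self, x_t, t, lq_image, ref_latent):
            # Concatenate the upsampled LQ image with the noisy sample, and feed
            # the reference latent in as an extra conditioning embedding.
            lq_up = nn.functional.interpolate(lq_image, size=x_t.shape[-2:], mode='bilinear')
            inp = torch.cat([x_t, lq_up], dim=1)
            return self.unet(inp, t, self.proj(ref_latent))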
James Betker
0ded106562
Merge remote-tracking branch 'origin/master'
2021-06-25 13:16:28 -06:00
James Betker
a0ef07ddb8
Create unet_latent_guide.py
2021-06-25 11:25:14 -06:00
James Betker
e7890dc0ba
Misc fixes for diffusion nets
2021-06-21 10:38:07 -06:00
James Betker
6c6e82406e
Pass a corruption factor through the dataset into the upsampling network
...
The intuition is that this will help guide the network to make better-informed decisions about how it performs upsampling based on how it perceives the underlying content.
(I'm giving up on letting networks detect their own quality; I'm not convinced it is actually feasible.)
2021-06-07 09:13:54 -06:00
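A hedged sketch of how a per-sample corruption factor might be threaded from the dataset into the upsampler as conditioning; the dictionary keys, degradation steps, and module names here are assumptions, not the dataset's real interface.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Dataset side: degrade the HQ image and record how strong the degradation was.
    def degrade(hq_image):
        corruption_factor = torch.rand(1)                          # strength in [0, 1)
        lq = F.avg_pool2d(hq_image, 4)                             # downsample
        lq = lq + torch.randn_like(lq) * 0.1 * corruption_factor   # noise scaled by the factor
        return {'lq': lq, 'hq': hq_image, 'corruption_factor': corruption_factor}

    # Network side: embed the scalar and add it to whatever conditioning the U-Net already uses.
    class CorruptionEmbedding(nn.Module):
        def __init__(self, emb_dim=512):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(), nn.Linear(emb_dim, emb_dim))
        def forward(self, corruption_factor):                      # (B, 1) -> (B, emb_dim)
            return self.mlp(corruption_factor)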
James Betker
692e9c417b
Support diffusion unet
2021-06-06 13:57:22 -06:00
James Betker
75567a9814
Only head norm removed
2021-06-05 23:29:11 -06:00
James Betker
65d0376b90
Re-add normalization at the tail of the RRDB
2021-06-05 23:04:05 -06:00
James Betker
184e887122
Remove rrdb normalization
2021-06-05 21:39:19 -06:00
James Betker
80d4404367
A few fixes:
...
- Output better prediction of xstart from eps
- Support LossAwareSampler
- Support AdamW
2021-06-05 13:40:32 -06:00
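The "prediction of xstart from eps" mentioned above is the standard DDPM identity x_0 = (x_t - sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_bar_t). A minimal sketch using precomputed reciprocal terms, with argument names assumed rather than taken from the repository:

    import torch

    # Precomputed from the beta schedule:
    #   sqrt_recip_alphas_cumprod   = sqrt(1 / alpha_bar)
    #   sqrt_recipm1_alphas_cumprod = sqrt(1 / alpha_bar - 1)
    def predict_xstart_from_eps(x_t, t, eps, sqrt_recip_alphas_cumprod, sqrt_recipm1_alphas_cumprod):
        # x_0 = x_t / sqrt(alpha_bar_t) - sqrt(1 / alpha_bar_t - 1) * eps
        recip = sqrt_recip_alphas_cumprod[t].view(-1, 1, 1, 1)
        recipm1 = sqrt_recipm1_alphas_cumprod[t].view(-1, 1, 1, 1)
        return recip * x_t - recipm1 * eps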
James Betker
bf811f80c1
GD mods & fixes
...
- Report variational loss separately
- Report model prediction from injector
- Log these things
- Use respacing like guided diffusion
2021-06-04 17:13:16 -06:00
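"Respacing like guided diffusion" refers to sampling on a shorter, evenly spaced subset of the training timesteps and recomputing betas so the cumulative noise schedule still lines up. A sketch of the core calculation, not the repository's code:

    import numpy as np

    def respaced_betas(betas, num_respaced_steps):
        # Cumulative alphas of the original (long) schedule.
        alphas_cumprod = np.cumprod(1.0 - betas)
        # Keep an evenly spaced subset of the original timesteps.
        use_timesteps = np.linspace(0, len(betas) - 1, num_respaced_steps).round().astype(int)
        new_betas, last = [], 1.0
        for t in use_timesteps:
            # Choose beta so the cumulative product over the kept steps matches
            # the original schedule at those timesteps.
            new_betas.append(1.0 - alphas_cumprod[t] / last)
            last = alphas_cumprod[t]
        return np.array(new_betas), use_timesteps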
James Betker
6084915af8
Support gaussian diffusion models
...
Adds support for GD models, courtesy of some maths from OpenAI.
Also:
- Fixes requirement for eval{} even when it isn't being used
- Adds support for denormalizing an ImageNet norm
2021-06-02 21:47:32 -06:00
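Denormalizing an ImageNet norm just reverses the standard per-channel normalization applied at load time; a small sketch (the mean/std constants are the usual ImageNet values, the function name is illustrative):

    import torch

    IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

    def denormalize_imagenet(x):
        # Undo (x - mean) / std, recovering values in roughly [0, 1].
        return x * IMAGENET_STD.to(x.device) + IMAGENET_MEAN.to(x.device)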