DL-Art-School/codes
James Betker 009a1e8404 Add a new diffusion_vocoder that should be trainable faster
This new one has a "cheating" top layer that does not feed down into the unet encoder but does consume the unet's outputs. This cheater operates on only half of the input, while the rest of the unet operates on the full input. This limits the dimensionality of the last layer, on the assumption that these final layers consume by far the most computation and memory but do not require the full input context.

Losses are computed on only half of the aggregate input.
2022-01-11 17:26:07 -07:00
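The idea in the commit message can be sketched in a few lines of PyTorch. The actual diffusion_vocoder lives under models/; everything below (TinyUnet, CheaterTop, the shapes) is a made-up minimal illustration, not the repository's API: the unet runs on the full input, while the "cheating" top layer consumes the unet's output for only half of the signal and feeds nothing back into the encoder, and the loss is taken over that half only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUnet(nn.Module):
    # Hypothetical one-level unet over 1D signals: downsample, process, upsample.
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Conv1d(1, ch, 3, stride=2, padding=1)            # downsample by 2
        self.mid = nn.Conv1d(ch, ch, 3, padding=1)
        self.dec = nn.ConvTranspose1d(ch, ch, 4, stride=2, padding=1)  # back to full length

    def forward(self, x):
        h = torch.relu(self.enc(x))
        h = torch.relu(self.mid(h))
        return self.dec(h)  # full-resolution features

class CheaterTop(nn.Module):
    # "Cheating" top layer: consumes unet outputs but sees only half of the
    # input, and feeds nothing back down into the unet encoder.
    def __init__(self, ch=8):
        super().__init__()
        self.proj = nn.Conv1d(ch + 1, 1, 3, padding=1)

    def forward(self, x_half, unet_feats_half):
        return self.proj(torch.cat([x_half, unet_feats_half], dim=1))

x = torch.randn(2, 1, 64)        # full input, batch of 2
feats = TinyUnet()(x)            # the unet operates on the full input
half = x.shape[-1] // 2
# The cheater operates on only half of the input and the matching half
# of the unet's output features.
y_half = CheaterTop()(x[..., :half], feats[..., :half])
# Loss is computed on only that half of the aggregate input.
loss = F.mse_loss(y_half, x[..., :half])
```

Because the expensive top layer runs at half the sequence length, its activation memory and compute roughly halve, which is the stated motivation for training faster.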
.idea                           | IDEA update                                                                       | 2020-05-19 09:35:26 -06:00
data                            | Fix dataset                                                                       | 2022-01-06 15:24:37 -07:00
models                          | Add a new diffusion_vocoder that should be trainable faster                       | 2022-01-11 17:26:07 -07:00
scripts                         | misc updates                                                                      | 2022-01-11 16:25:40 -07:00
trainer                         | More zero_grad fixes                                                              | 2022-01-08 20:31:19 -07:00
utils                           | Add "dataset_debugger" support                                                    | 2022-01-06 12:38:20 -07:00
multi_modal_train.py            | More adjustments to support distributed training with teco & on multi_modal_train | 2020-10-27 20:58:03 -06:00
process_video.py                | misc                                                                              | 2021-01-23 13:45:17 -07:00
requirements.txt                | misc updates                                                                      | 2022-01-11 16:25:40 -07:00
test.py                         | Add FID evaluator for diffusion models                                            | 2021-06-14 09:14:30 -06:00
train.py                        | misc updates                                                                      | 2022-01-11 16:25:40 -07:00
use_discriminator_as_filter.py  | Various mods to support better jpeg image filtering                               | 2021-06-25 13:16:15 -06:00