DL-Art-School/codes
Latest commit 93a3302819 by James Betker, 2022-03-04 17:57:33 -07:00:
Push training_state data to CPU memory before saving it

For whatever reason, keeping this in GPU memory just doesn't work. When you load it,
it consumes a large amount of GPU memory and that utilization doesn't go away.
Saving to CPU should fix this.
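The change the commit describes boils down to copying every tensor inside the
training_state data to CPU before torch.save() serializes it, so that restoring the
checkpoint later does not allocate (and hold on to) GPU memory. Below is a minimal
sketch of that idea; the to_cpu helper and the training_state layout are illustrative
assumptions, not the repo's actual code.

    import torch

    def to_cpu(obj):
        # Recursively copy any tensors in a nested container to CPU so that
        # torch.save() writes CPU tensors instead of CUDA tensors.
        if isinstance(obj, torch.Tensor):
            return obj.detach().cpu()
        if isinstance(obj, dict):
            return {k: to_cpu(v) for k, v in obj.items()}
        if isinstance(obj, (list, tuple)):
            return type(obj)(to_cpu(v) for v in obj)
        return obj

    # Hypothetical checkpointing path in a trainer:
    # training_state = {'iter': current_step,
    #                   'optimizers': [o.state_dict() for o in optimizers],
    #                   'schedulers': [s.state_dict() for s in schedulers]}
    # torch.save(to_cpu(training_state), 'training_state.pth')

On the consumer side, torch.load(path, map_location='cpu') has a similar effect, but
saving CPU copies means the checkpoint restores without touching GPU memory no matter
how it is loaded.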
.idea IDEA update 2020-05-19 09:35:26 -06:00
data Implement guidance-free diffusion in eval 2022-03-01 11:49:36 -07:00
models Fix distributed bug 2022-03-04 11:58:53 -07:00
scripts Implement conditioning-free diffusion at the eval level 2022-02-27 15:11:42 -07:00
trainer Push training_state data to CPU memory before saving it 2022-03-04 17:57:33 -07:00
utils Push training_state data to CPU memory before saving it 2022-03-04 17:57:33 -07:00
multi_modal_train.py More adjustments to support distributed training with teco & on multi_modal_train 2020-10-27 20:58:03 -06:00
process_video.py misc 2021-01-23 13:45:17 -07:00
requirements.txt requirements 2022-01-15 17:28:59 -07:00
sweep.py fix sweep 2022-02-11 11:43:11 -07:00
test.py Add FID evaluator for diffusion models 2021-06-14 09:14:30 -06:00
train.py Move log consensus to train for efficiency 2022-03-04 13:41:32 -07:00
use_discriminator_as_filter.py Various mods to support better jpeg image filtering 2021-06-25 13:16:15 -06:00