DLAS - A configuration-driven trainer for generative models

(QoL improvements for) Deep Learning Art School

This fork of neonbjb/DL-Art-School contains a few fixes and QoL improvements, including but not limited to:

  • sanity tidying, like:
    • no longer outputting to ./DL-Art-School/experiments/
    • fixing the custom module loader for networks/injectors
  • BitsAndBytes integration:
    • working, but output untested: Adam/AdamW
    • toggles available in ./codes/torch_intermediary/__init__.py

Deep Learning Art School

Send your PyTorch model to art class!

This repository is both a framework and a set of tools for training deep neural networks that create images. It started as a branch of the open-mmlab project developed by the Multimedia Laboratory, CUHK, but has been almost completely rewritten at every level.

Why do we need another training framework?

These are a dime a dozen, no doubt. DL Art School (DLAS) differentiates itself by being configuration-driven. You write the model code (specifically, a torch.nn.Module) and possibly some losses, then you cobble together a config file written in YAML that tells DLAS how to train it. Swapping model architectures and tuning hyperparameters is simple and often requires no changes to actual code. You also don't need to remember complex command-line incantations. This effectively enables you to run multiple concurrent experiments from the same codebase, as well as retain backwards compatibility for past experiments.
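To make the pattern concrete, here is a minimal sketch of configuration-driven model construction. It illustrates the idea rather than DLAS's actual API: the registry, the register_model decorator, the TinyGenerator class, and the which_model key are all hypothetical names.

```python
# Hypothetical sketch of configuration-driven model construction.
# None of these names are DLAS's real API; they only illustrate the idea.
import yaml
import torch.nn as nn

MODEL_REGISTRY = {}  # maps a config name to an nn.Module subclass

def register_model(name):
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("tiny_generator")
class TinyGenerator(nn.Module):  # stand-in for a real architecture
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Conv2d(3, channels, 3, padding=1)

    def forward(self, x):
        return self.net(x)

def build_from_config(path):
    with open(path) as f:
        opt = yaml.safe_load(f)
    net_opt = dict(opt["network"])  # e.g. {"which_model": "tiny_generator", "channels": 32}
    cls = MODEL_REGISTRY[net_opt.pop("which_model")]
    return cls(**net_opt)           # remaining keys become constructor kwargs
```

Swapping architectures then means changing one key in the YAML file; no training code is touched.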

Training effective generators often means juggling multiple loss functions. As a result, DLAS's configuration language is specifically designed to make it easy to support a large number of losses and networks that interact with each other. As an example: some GANs I have trained in this framework consist of more than 15 losses, use two separate discriminators, and require no bespoke code.

Generators are also notorious GPU memory hogs. I have spent substantial time streamlining the training framework to support gradient checkpointing and FP16. DLAS also supports "mega batching", where multiple forward passes accumulate gradients that are applied in a single optimizer step. Most models can be trained on midrange GPUs with 8-11GB of memory.
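Conceptually, "mega batching" is gradient accumulation. The sketch below shows the generic PyTorch idiom rather than DLAS's internals: gradients from several micro-batches are accumulated before a single optimizer step, trading wall-clock time for a larger effective batch size.

```python
# Generic gradient accumulation ("mega batching" in spirit, not DLAS's code).
import torch

def train_step(model, optimizer, loss_fn, micro_batches):
    optimizer.zero_grad()
    for inputs, targets in micro_batches:
        loss = loss_fn(model(inputs), targets)
        # Scale so the accumulated gradient matches one big batch.
        (loss / len(micro_batches)).backward()
    optimizer.step()  # one parameter update for the whole mega batch
```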

The final value-added feature is interpretability. Tensorboard logging operates out of the box with no custom code. Intermediate images from within the training pipeline can be intermittently surfaced as normal PNG files so you can see what your network is up to. Validation passes are also cached as images so you can view how your network improves over time.

Modeling Capabilities

DLAS was built with extensibility in mind. One of the reasons I'm putting in the effort to better document this code is the incredible ease with which I have been able to train entirely new model types with no changes to the core training code.

I intend to fill out the sections below with sample configurations which can be used to train different architectures. You will need to bring your own data.

Super-resolution

Style Transfer

  • StyleGAN2 (documentation TBC)

Latent development

  • BYOL
  • iGPT (documentation TBC)

Dependencies and Installation

  • Python 3
  • PyTorch >= 1.6
  • NVIDIA GPU + CUDA
  • Python packages: pip install -r requirements.txt
  • Some video utilities require FFmpeg

User Guide

TBC

Development Environment

If you aren't already using PyCharm, now is the time to try it out. This project was built in PyCharm and comes with an IDEA project to get you started. I've done all of my development on this repo in this IDE and lean heavily on its incredible debugger. It's free. Try it out. You won't be sorry.

Dataset Preparation

DLAS comes with some Dataset instances that I have created for my own use. Unless you want to use one of the recipes above, you'll need to provide your own. Here is how to add your own Dataset:

  1. Create a Dataset in codes/data/ which takes a single Python dict as its constructor argument and extracts options from that dict.
  2. Register your Dataset in codes/data/__init__.py.
  3. Have your Dataset return a dict of tensors. The keys of the dict are injected directly into the training state, which you can interact with in your configuration file. A minimal sketch follows this list.
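Here is a minimal Dataset following that contract. The class name and the option keys ("paths", "scale", "hq", "lq") are hypothetical; a real implementation would load images instead of generating random tensors.

```python
# Minimal sketch of a DLAS-style Dataset: one options dict in, dicts of
# tensors out. Names and keys here are illustrative, not from the codebase.
import torch
from torch.utils.data import Dataset

class MyImagePairDataset(Dataset):
    def __init__(self, opt):
        self.paths = opt["paths"]          # options extracted from the config dict
        self.scale = opt.get("scale", 4)

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # A real Dataset would load and transform self.paths[index];
        # random tensors keep the sketch self-contained.
        hq = torch.rand(3, 64, 64)
        lq = torch.rand(3, 64 // self.scale, 64 // self.scale)
        return {"hq": hq, "lq": lq}        # keys are injected into training state
```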

Training and Testing

There are currently 3 base scripts for interacting with models. They all take a single parameter, -opt, which specifies the configuration file that controls how they work. Configs will be documented in the user guide above.

train.py

Start (or continue) a training session: python train.py -opt <your_config.yml>

Start a distributed training session: python -m torch.distributed.launch --nproc_per_node=<gpus> --master_port=1234 train.py -opt <your_config.yml> --launcher=pytorch

test.py

Runs a model against a validation or test set of data and reports metrics (for now, just PSNR and a custom perceptual metric): python test.py -opt <your_config.yml>
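For reference, PSNR for images scaled to [0, 1] is 10 * log10(max_val^2 / MSE). A minimal computation, independent of DLAS's own implementation, looks like this:

```python
# Standard PSNR computation for tensors in [0, 1].
import torch

def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```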

process_video.py

Breaks a video into individual frames, uses a network to process each frame, then reassembles the output back into video form: python process_video.py -opt <your_config.yml>

Contributing

At this time I am not taking feature requests or bug reports, but I appreciate all contributions.

License

This project is released under the Apache 2.0 license.