
# DLAS Datasets

## Quick Overview

DLAS uses the standard PyTorch `Dataset` infrastructure. Datasets are expected to be constructed from an "options" dict, which is fed directly from the configuration file. They are also expected to output a dict whose keys are injected directly into the trainer state.

Datasets conforming to the above expectations must be registered in `__init__.py` to be used by a configuration.
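
To make that contract concrete, here is a minimal sketch of what such a dataset could look like. The class name, option keys, and output keys (`hq`, `lq`, `path`) are illustrative assumptions, not the repo's actual interfaces:

```python
import torch
from torch.utils.data import Dataset

class MyToyDataset(Dataset):
    """Minimal sketch: construct from an options dict, return a dict per item.
    Names and keys here are illustrative, not DLAS internals."""
    def __init__(self, opt):
        self.paths = opt['paths']                     # e.g. a list of image paths from the config
        self.image_size = opt.get('image_size', 256)  # options not in the config can have defaults

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A real dataset would load and transform self.paths[idx] here.
        img = torch.zeros(3, self.image_size, self.image_size)
        # The keys of the returned dict become entries in the trainer state.
        return {'hq': img, 'lq': img, 'path': self.paths[idx]}
```

A dataset like this would then be hooked into the registry in `__init__.py` so that a configuration file can refer to it by name.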

## Reference Datasets

This directory contains several reference datasets which I have used in building DLAS. They include:

  1. Stylegan2Dataset - Reads a set of images from a directory, performs some basic augmentations on them and injects them directly into the state. LQ = HQ in this dataset.
  2. SingleImageDataset - Reads image patches from a 'chunked' format along with the reference image and metadata about how the patch was originally computed. The 'chunked' format is described below. Includes built-in ImageCorruption features actuated by image_corruptor.py.
  3. MultiframeDataset - Similar to SingleImageDataset, but infers a temporal relationship between images based on their filenames: the last 12 characters before the file extension are assumed to be a frame counter. Images from this dataset are grouped together with a temporal dimension for working with video data.
  4. ImageFolderDataset - Reads raw images from a folder and feeds them into the model. Capable of performing corruptions on those images, like the datasets above.
  5. MultiscaleDataset - Reads full images from a directory and builds a tree of images by recursively cropping squares from the source image and resizing them to the target size until the native resolution is reached. Each recursive step decreases the crop size by a factor of 2 (see the sketch after this list).
  6. TorchDataset - A wrapper for miscellaneous PyTorch datasets (e.g. MNIST, CIFAR) which extracts the images and reformats them in a way that the DLAS trainer understands.
  7. FullImageDataset - An image patch dataset where the patches are dynamically extracted from full-size images. I have generally stopped using this for performance reasons and it should be considered deprecated.
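
As a rough illustration of the MultiscaleDataset idea, here is a hedged sketch of the recursive crop-and-resize scheme described above. It is not the repo's actual implementation (which lives in multiscale_dataset.py); the function name and tree layout are assumptions for illustration only:

```python
from PIL import Image

def build_scale_tree(img, target_sz, depth=0):
    """Recursively crop quadrants out of a (square) source image and resize
    each to target_sz, stopping once further halving would drop below the
    native resolution. Illustrative sketch only."""
    crop_sz = min(img.size)
    patches = [(depth, img.resize((target_sz, target_sz)))]
    if crop_sz // 2 < target_sz:
        # The next level of crops would be smaller than the target size,
        # i.e. we have reached the native resolution of the source image.
        return patches
    half = crop_sz // 2
    for left in (0, half):
        for top in (0, half):
            quadrant = img.crop((left, top, left + half, top + half))
            patches += build_scale_tree(quadrant, target_sz, depth + 1)
    return patches

# Usage: patches = build_scale_tree(Image.open('some_image.png'), target_sz=256)
```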

## Information about the "chunked" format

This is the main format I have used in my experiments with image super resolution. It is fast to read and provides rich metadata on the images that the patches are derived from, including a downsized copy of the full source image (the "reference" image) and information about where each crop was taken in the original image.

### Creating a chunked dataset

The file format for 'chunked' datasets is very particular. I recommend using `scripts/extract_subimages_with_ref.py` to build these datasets from raw images. Here is how you would do that:

  1. Edit `scripts/extract_subimages_with_ref.py` to set these configuration options:
    opt['input_folder'] = <path to raw images>
    opt['save_folder'] = <where your chunked dataset will be stored>
    opt['crop_sz'] = [256, 512]  # A list of crop sizes; sub-images of each size will be extracted and turned into patches.
    opt['step'] = [128, 256]  # The pixel distance the algorithm steps between sub-images, one entry per crop_sz. If a step is smaller than its crop_sz, adjacent patches will share image content.
    opt['thres_sz'] = 128  # Minimum leftover space at an image edge for an extra patch to be extracted there. Generally should be equal to the lowest step size.
    opt['resize_final_img'] = [1, .5]  # Reduction factor applied to image patches at each crop_sz level. TODO: infer this.
    opt['only_resize'] = False  # If True, disables patch extraction and just resizes the input images.
    opt['vertical_split'] = False  # Used for stereoscopic images. Not documented.
    
    Note: the defaults should work fine for many applications.
  2. Execute the script: `python scripts/extract_subimages_with_ref.py`. If you are having issues with imports, make sure you set `PYTHONPATH` to the repo root.

## Chunked cache

To make trainer startup fast, the chunked datasets perform some preprocessing the first time they are loaded. The entire dataset is scanned and a cache is built up and saved in `cache.pth`. Future invocations only need to load `cache.pth` on startup, which greatly speeds things up when you are debugging issues.

There is an important caveat here: this cache will not be recomputed unless you delete it. This means that if you add new images to your dataset, you must delete the cache for them to be picked up! Likewise, if you copy your dataset to a new file path or a different computer, `cache.pth` must be deleted for the dataset to work; otherwise you'll likely run into some weird errors.
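
If you do this often, a small helper for purging stale caches can save some headaches. The `cache.pth` filename comes from the description above; the helper itself is just a convenience sketch, not part of DLAS:

```python
import os

def purge_chunked_caches(dataset_root):
    """Delete every cache.pth under dataset_root so it is rebuilt on the next
    trainer start (needed after adding images or moving the dataset)."""
    for dirpath, _, filenames in os.walk(dataset_root):
        if 'cache.pth' in filenames:
            cache_path = os.path.join(dirpath, 'cache.pth')
            os.remove(cache_path)
            print(f'Removed stale cache: {cache_path}')
```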

## Details about the dataset format

If you look inside a dataset folder produced by the script above, you'll see a list of folders. Each folder represents a single source image that was found by the script.

Inside each of those folders, you will see 3 different types of files:

  1. Image patches, each of which has a unique ID within the given set. These IDs do not necessarily need to be unique across the entire dataset.
  2. `centers.pt` - A PyTorch pickle containing a dict of metadata about the patches, such as where they were located in the source image and their original width/height.
  3. `ref.jpg` - A square version of the original image, downsampled to the patch size.
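
If you want to poke at that metadata, something like the following should work. The exact structure of the dict is an assumption based on the description above, so print it to see the precise keys your version of the script produced:

```python
import torch

# Inspect the patch metadata for one image folder of a chunked dataset.
# The path and the dict layout are assumptions for illustration.
centers = torch.load('path/to/chunked_dataset/some_image_folder/centers.pt')
for patch_id, meta in centers.items():
    print(patch_id, meta)
```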