We used the following four datasets in our experiments:

- 1,355 training and 135 test bedroom images from the ADE20K dataset. [Citation]
- To download the illustration images, please refer to GANILLA. [Citation]
- 2,975 training and 500 test images from the Cityscapes dataset. [Citation]
- We share the COCO elephant and sheep datasets in this GDrive folder.
To train a model on your own dataset, create a data folder with two subdirectories, `trainA` and `trainB`, that contain images from domain A and domain B, respectively. You can test your model on your training set by setting `--phase train` in `test.py`. You can also create subdirectories `testA` and `testB` if you have test data.
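The layout above can be sketched in a few lines (the root path `datasets/mydataset` is illustrative; only `trainA`/`trainB` are required, while `testA`/`testB` are optional):

```python
from pathlib import Path

# Illustrative dataset root; replace with your own path.
root = Path("datasets/mydataset")

# trainA/trainB hold domain-A and domain-B training images;
# testA/testB are optional and only needed if you have test data.
for split in ["trainA", "trainB", "testA", "testB"]:
    (root / split).mkdir(parents=True, exist_ok=True)
```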
We provide a Python script to generate pix2pix training data in the form of pairs of images {A,B}, where A and B are two different depictions of the same underlying scene. For example, these might be pairs {label map, photo} or {bw image, color image}. Then we can learn to translate A to B or B to A:
Create a folder `/path/to/data` with subfolders `A` and `B`. `A` and `B` should each have their own subfolders `train`, `test`, etc. In `/path/to/data/A/train`, put training images in style A. In `/path/to/data/B/train`, put the corresponding images in style B. Repeat the same for the other data splits (`test`, etc.).

Corresponding images in a pair {A,B} must be the same size and have the same filename; e.g., `/path/to/data/A/train/1.jpg` is considered to correspond to `/path/to/data/B/train/1.jpg`.
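Since pairing is done by filename, a quick sanity check before combining can catch missing counterparts. A minimal sketch (`fold_A` and `fold_B` are placeholders for your actual folders):

```python
from pathlib import Path

def unpaired_files(fold_A, fold_B):
    """Return filenames present in one folder but not the other."""
    names_A = {p.name for p in Path(fold_A).iterdir() if p.is_file()}
    names_B = {p.name for p in Path(fold_B).iterdir() if p.is_file()}
    # Symmetric difference: images that would have no counterpart.
    return names_A ^ names_B
```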
Once the data is formatted this way, call:

```
python datasets/combine_A_and_B.py --fold_A /path/to/data/A --fold_B /path/to/data/B --fold_AB /path/to/data
```
This will combine each pair of images (A,B) into a single image file, ready for training.
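Conceptually, the combination amounts to placing each A image and its B counterpart side by side along the width axis. A minimal sketch with NumPy arrays standing in for loaded images (not the script itself):

```python
import numpy as np

def combine_pair(img_A, img_B):
    """Concatenate img_A and img_B side by side along the width axis.

    Both arrays are (H, W, C) images; since pix2pix pairs must be the
    same size, the heights (and in practice the widths) must match.
    """
    assert img_A.shape[0] == img_B.shape[0], "image heights must match"
    return np.concatenate([img_A, img_B], axis=1)
```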
In the `scripts/hed/edges` folder, we provide edge map extraction scripts.

- First, run `batch_hed.py`. The required steps and explanations are given at the top of that script.
- Then, run `postprocess_main.m`. Again, explanations are given at the top of that script.

Repeat this procedure for the `trainA` and `testA` folders.
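`batch_hed.py` applies HED, a learned (neural-network) edge detector, so it cannot be reduced to a few lines. As a rough conceptual stand-in only, a gradient-magnitude edge map over a grayscale image array can be sketched with NumPy:

```python
import numpy as np

def simple_edge_map(gray):
    """Gradient-magnitude edges as a crude stand-in for HED.

    `gray` is a 2-D float array (a grayscale image). HED produces far
    cleaner, semantically meaningful edges; this only illustrates the
    idea of turning an image into an edge map.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    # Normalize to [0, 1] for saving or visualization.
    return mag / mag.max() if mag.max() > 0 else mag
```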