Commit 4f5e2ced authored by Sanghyun Son

fix dependency, directory structure

parent 126a55c3
1 merge request: !1 "Jan 09, 2018 updates"
Showing with 13 additions and 104 deletions
@@ -26,10 +26,11 @@ We provide scripts for reproducing all the results from our paper. You can train
* Python-based.

## Dependencies
* Python 3.6
* PyTorch >= 0.4.0
* numpy
* scipy
* skimage
* matplotlib
* tqdm
@@ -50,9 +51,9 @@ cd EDSR-PyTorch
## Quick start (Demo)
You can test our super-resolution algorithm with your own images. Place your images in the ``test`` folder (like ``test/<your_image>``). We support **png** and **jpeg** files.
Run the script in the ``src`` folder. Before you run the demo, uncomment the line in ```demo.sh``` that you want to execute.
```bash
cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
```
@@ -91,16 +92,16 @@ For these datasets, we first convert the result images to YCbCr color space and
## How to train EDSR and MDSR
We used the [DIV2K](http://www.vision.ee.ethz.ch/%7Etimofter/publications/Agustsson-CVPRW-2017.pdf) dataset to train our model. Please download it from [here](https://cv.snu.ac.kr/research/EDSR/DIV2K.tar) (7.1GB).
Unpack the tar file to any place you want. Then, change the ```dir_data``` argument in ```src/option.py``` to the directory where the DIV2K images are located.
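For example, a minimal download-and-unpack sequence might look like the following sketch (``/path/to/dataset`` is a placeholder; point ``dir_data`` at wherever you unpack):
```bash
# Download DIV2K (7.1GB) and unpack it to a directory of your choice
wget https://cv.snu.ac.kr/research/EDSR/DIV2K.tar
tar -xf DIV2K.tar -C /path/to/dataset
```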
We recommend pre-processing the images before training. This step decodes all **png** files and saves them as binaries. Use the ``--ext sep_reset`` argument on your first run; on later runs you can skip the decoding step and use the saved binaries with the ``--ext sep`` argument.
If you have enough RAM (>= 32GB), you can use the ``--ext bin`` argument to pack all DIV2K images into one binary file.
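As a rough sketch, the three ``--ext`` modes combine with the usual training command like this (all other flags are omitted here; see the full command lines in ``demo.sh``):
```bash
python main.py --ext sep_reset   # first run: decode the pngs and save binaries
python main.py --ext sep         # later runs: reuse the saved binaries
python main.py --ext bin         # >= 32GB RAM: pack all of DIV2K into one binary
```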
You can train EDSR and MDSR by yourself. All scripts are provided in ``src/demo.sh``. Note that EDSR (x3, x4) requires a pre-trained EDSR (x2) model. You can ignore this constraint by removing the ```--pre_train <x2 model>``` argument.
```bash
cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
```
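As an illustration, a hypothetical EDSR x4 training command that starts from a pre-trained x2 model might look like this (the network flags are borrowed from the test commands in ``demo.sh``; the actual training lines in that script may differ):
```bash
python main.py --scale 4 --n_resblocks 32 --n_feats 256 --res_scale 0.1 \
    --pre_train ../experiment/model/EDSR_x2.pt
```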
@@ -111,7 +112,7 @@ sh demo.sh
  * Training details are included.
* Jan 09, 2018
  * Missing files are included (```src/data/MyImage.py```).
  * Some links are fixed.
* Jan 16, 2018
@@ -128,7 +129,7 @@ sh demo.sh
* Feb 23, 2018
  * Now PyTorch 0.3.1 is the default. Use the legacy/0.3.0 branch if you need the old version.
  * With the new ``src/data/DIV2K.py`` code, one can easily create a new data class for super-resolution.
  * New binary data pack. (Please remove the ``DIV2K_decoded`` folder from your dataset directory if you have one.)
    * With ``--ext bin``, the code automatically generates and saves a binary data pack corresponding to the previous ``DIV2K_decoded``. (This requires a huge amount of RAM (~45GB; swap can be used), so please be careful.)
    * If you cannot make the binary pack, just use the default setting (``--ext img``).
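Also included in this commit's diff is a MATLAB script that creates JPEG-compressed (quality 87), bicubic-downscaled x2/x3/x4 counterparts of the ``DIV2K_train_HR`` images: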
% Generate JPEG-compressed, bicubic-downscaled LR images from the DIV2K HR set.
scale = [2, 3, 4];
dataset = 'DIV2K';
apath = '../../../../dataset';
quality = 87;   % JPEG quality factor

hrDir = fullfile(apath, dataset, 'DIV2K_train_HR');
lrDir = fullfile(apath, dataset, ['DIV2K_train_LR_bicubic', num2str(quality)]);
if ~exist(lrDir, 'dir')
    mkdir(lrDir);
end

% One output subdirectory per scale: X2, X3, X4.
for sc = 1:length(scale)
    lrSubDir = fullfile(lrDir, sprintf('X%d', scale(sc)));
    if ~exist(lrSubDir, 'dir')
        mkdir(lrSubDir);
    end
end

hrImgs = dir(fullfile(hrDir, '*.png'));
for idx = 1:length(hrImgs)
    imgName = hrImgs(idx).name;
    try
        hrImg = imread(fullfile(hrDir, imgName));
    catch
        disp(imgName);   % report unreadable files and continue
        continue;
    end
    [h, w, ~] = size(hrImg);
    for sc = 1:length(scale)
        % Crop so that height and width are exact multiples of the scale.
        ch = floor(h / scale(sc)) * scale(sc);
        cw = floor(w / scale(sc)) * scale(sc);
        cropped = hrImg(1:ch, 1:cw, :);
        lrImg = imresize(cropped, 1 / scale(sc), 'bicubic');
        [~, woExt, ~] = fileparts(imgName);
        lrName = sprintf('%sx%d%s', woExt, scale(sc), '.jpeg');
        imwrite( ...
            lrImg, ...
            fullfile(lrDir, sprintf('X%d', scale(sc)), lrName), ...
            'quality', quality);
    end
    if mod(idx, 100) == 0
        fprintf('Processed %d / %d images\n', idx, length(hrImgs));
    end
end
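Its Python counterpart below walks a DIV2K directory tree and converts the **jpeg** files into PyTorch byte tensors, saving either one ``.pt`` file per image (``--split``) or a single ``pack.pt`` per leaf directory: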
import os
import argparse

import skimage.io as sio
import torch

parser = argparse.ArgumentParser(description='Pre-processing DIV2K .jpeg images')
parser.add_argument('--pathFrom', default='../../../../dataset/DIV2K',
                    help='directory of images to convert')
parser.add_argument('--pathTo', default='../../../../dataset/DIV2K_decoded',
                    help='directory of images to save')
parser.add_argument('--split', default=False,
                    help='save individual images')  # parsed as a string; any non-empty value is truthy
parser.add_argument('--select', default='',
                    help='select certain path')
args = parser.parse_args()

for (path, dirs, files) in os.walk(args.pathFrom):
    print(path)
    targetDir = path.replace(args.pathFrom, args.pathTo)
    if len(args.select) > 0 and path.find(args.select) == -1:
        continue

    if not os.path.exists(targetDir):
        os.mkdir(targetDir)

    # Only leaf directories (no subdirectories) contain images.
    if len(dirs) == 0:
        pack = {}
        n = 0
        for fileName in files:
            (idx, ext) = os.path.splitext(fileName)
            if ext == '.jpeg':
                # Decode the JPEG and keep it as a uint8 tensor.
                img = sio.imread(os.path.join(path, fileName))
                tensor = torch.Tensor(img.astype(float)).byte()
                if args.split:
                    # Save each image as its own .pt file.
                    torch.save(tensor, os.path.join(targetDir, idx + '.pt'))
                else:
                    # Key the pack by the numeric image index ('0001x2' -> 1).
                    pack[int(idx.split('x')[0])] = tensor
                n += 1
                if n % 100 == 0:
                    print('Converted ' + str(n) + ' images.')
        if len(pack) > 0:
            torch.save(pack, targetDir + '/pack.pt')
            print('Saved pt binary.')
            del pack
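A hypothetical invocation of this converter (its actual filename is not visible in this view, so ``jpeg2pt.py`` is assumed; note that ``--split`` is parsed as a string, so any non-empty value enables per-image saving):
```bash
# Pack each leaf directory that matches the LR JPEG folders produced above
python jpeg2pt.py \
    --pathFrom ../../../../dataset/DIV2K \
    --pathTo ../../../../dataset/DIV2K_decoded \
    --select DIV2K_train_LR_bicubic87
```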
File moved
File moved
File moved
import random
import numpy as np
import skimage.color as sc
import torch
from torchvision import transforms
File moved
File moved
File moved
File moved
@@ -42,8 +42,11 @@
#python main.py --data_test Urban100 --scale 4 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --pre_train ../experiment/model/EDSR_x4.pt --test_only --self_ensemble
#python main.py --data_test DIV2K --ext img --n_val 100 --scale 4 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --pre_train ../experiment/model/EDSR_x4.pt --test_only --self_ensemble
python main.py --data_test DIV2K --ext img --n_val 10 --scale 2 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --pre_train ../experiment/model/EDSR_x2.pt --test_only
python main.py --data_test DIV2K --ext img --n_val 10 --scale 2 --n_resblocks 32 --n_feats 256 --res_scale 0.1 --pre_train ../experiment/model/EDSR_x2.pt --test_only --self_ensemble
# Test your own images
#python main.py --data_test Demo --scale 4 --pre_train ../experiment/model/EDSR_baseline_x4.pt --test_only --save_results

# Advanced - Test with JPEG images
#python main.py --model MDSR --data_test Demo --scale 2+3+4 --pre_train ../experiment/model/MDSR_baseline_jpeg.pt --test_only --save_results
File moved
File moved
File moved
File moved
File moved
File moved
File moved
File moved