tensorlayer/srgan
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
| | |
| --- | --- |
| repo name | tensorlayer/srgan |
| repo link | https://github.com/tensorlayer/srgan |
| homepage | https://github.com/tensorlayer/tensorlayer |
| language | Python |
| size (curr.) | 124714 kB |
| stars (curr.) | 2142 |
| created | 2017-04-20 |
| license | |
Super Resolution Examples
We run this script under TensorFlow 2.0 and TensorLayer 2.0+. For the TensorLayer 1.4 version, please check the release.
THIS PROJECT WILL BE CLOSED AND MOVED TO THIS FOLDER IN A MONTH.
SRGAN Architecture
TensorFlow Implementation of “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”
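As a rough orientation to the generator described in the paper, here is a minimal, illustrative sketch in plain tf.keras (the repo itself uses TensorLayer; the residual-block count follows the paper, everything else is simplified): a deep residual body followed by two sub-pixel (depth-to-space) upsampling stages for 4x super-resolution.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Conv-BN-PReLU-Conv-BN with an identity skip, as in the SRGAN generator."""
    skip = x
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Add()([x, skip])

def upsample_block(x, filters=256):
    """Conv followed by sub-pixel (depth-to-space) upsampling by a factor of 2."""
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
    return layers.PReLU(shared_axes=[1, 2])(x)

def build_generator(num_res_blocks=16):
    """SRGAN-style generator: residual body + two x2 sub-pixel upsampling stages (x4 total)."""
    lr = layers.Input(shape=(None, None, 3))              # low-resolution input
    x = layers.Conv2D(64, 9, padding="same")(lr)
    x = head = layers.PReLU(shared_axes=[1, 2])(x)
    for _ in range(num_res_blocks):
        x = residual_block(x)
    x = layers.Conv2D(64, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([x, head])                            # long skip connection
    x = upsample_block(x)                                  # x2
    x = upsample_block(x)                                  # x2 -> x4 overall
    # tanh output assumes images are scaled to [-1, 1]
    sr = layers.Conv2D(3, 9, padding="same", activation="tanh")(x)
    return tf.keras.Model(lr, sr, name="srgan_generator")
```

In the full SRGAN training loop, this generator is paired with a discriminator and trained with a VGG-based perceptual loss plus an adversarial loss, as described in reference [1].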
Results
Prepare Data and Pre-trained VGG
- You need to download the pretrained VGG19 model here, as shown in tutorial_models_vgg19.py (a loading sketch follows this list).
- You need high-resolution images for training.
- In this experiment, I used images from the DIV2K - bicubic downscaling x4 competition, so the hyper-parameters in `config.py` (like the number of epochs) are selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs.
- If you don't want to use the DIV2K dataset, you can also use Yahoo MirFlickr25k; simply download it with `train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)` in `main.py`.
- If you want to use your own images, set the path to your image folder via `config.TRAIN.hr_img_path` in `config.py`.
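Putting the bullets above together, a minimal data-preparation sketch might look as follows. It assumes PNG images, the `config.TRAIN.hr_img_path` field from `config.py`, and TensorLayer 2.x file/model utilities; treat it as a sketch rather than a copy of the repo's `train.py`.

```python
# Minimal data-preparation sketch (assumptions: PNG images, config.TRAIN.hr_img_path
# set in config.py, TensorLayer 2.x installed). Not a copy of the repo's train.py.
import tensorlayer as tl
from config import config

# Option A: let TensorLayer download Yahoo MirFlickr25k and return the images.
# train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)

# Option B: read your own high-resolution images from the folder set in config.py.
train_hr_img_list = sorted(tl.files.load_file_list(
    path=config.TRAIN.hr_img_path, regx='.*.png', printable=False))
train_hr_imgs = tl.vis.read_images(
    train_hr_img_list, path=config.TRAIN.hr_img_path, n_threads=16)

# Pretrained VGG19 (downloaded automatically on first use) provides the feature
# maps for the perceptual loss; end_with='pool4' is an assumption about the layer used.
vgg = tl.models.vgg19(pretrained=True, end_with='pool4', mode='static')
```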
Run
- Set your image folder in `config.py`. If you download the DIV2K - bicubic downscaling x4 competition dataset, you don't need to change it.
- Other links for DIV2K, in case you can't find it: test_LR_bicubic_X4, train_HR, train_LR_bicubic_X4, valid_HR, valid_LR_bicubic_X4.

```python
config.TRAIN.img_path = "your_image_folder/"
```
- Start training.

```bash
python train.py
```

- Start evaluation.

```bash
python train.py --mode=evaluate
```
Reference
- [1] Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- [2] Is the deconvolution layer the same as a convolutional layer?
Author
Citation
If you find this project useful, we would be grateful if you cite the TensorLayer paper:
```bibtex
@article{tensorlayer2017,
  author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url     = {http://tensorlayer.org},
  year    = {2017}
}
```
Other Projects
Discussion
License
- For academic and non-commercial use only.
- For commercial use, please contact tensorlayer@gmail.com.