Pyramid Attention for Image Restoration
This repository is for PANet and PA-EDSR, introduced in the following paper:
Yiqun Mei, Yuchen Fan, Yulun Zhang, Jiahui Yu, Yuqian Zhou, Ding Liu, Yun Fu, Thomas S. Huang, and Honghui Shi, "Pyramid Attention for Image Restoration", [arXiv]
The code is built on EDSR (PyTorch) & RNAN and tested in an Ubuntu 18.04 environment (Python 3.6, PyTorch 1.1) with Titan X/1080Ti/V100 GPUs.
Contents
Train
Prepare training data
Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.
Specify '--dir_data' as the path containing the HR and LR images.
Organize training data like:
DIV2K/
├── DIV2K_train_HR
├── DIV2K_train_LR_bicubic
│   ├── X10
│   ├── X20
│   ├── X30
│   └── X40
├── DIV2K_valid_HR
└── DIV2K_valid_LR_bicubic
    ├── X10
    ├── X20
    ├── X30
    └── X40
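If you are assembling this layout by hand, a short script can create the empty directory skeleton. This is a minimal sketch: the `DIV2K` root name and the X10–X40 quality folders are taken from the tree above; adjust `ROOT` to match your '--dir_data' path.

```python
import os

# Hypothetical root; point this at your --dir_data location.
ROOT = "DIV2K"

# Quality-factor folders used in the tree above.
QUALITIES = ["X10", "X20", "X30", "X40"]

for split in ("train", "valid"):
    # HR images live directly under DIV2K_<split>_HR.
    os.makedirs(os.path.join(ROOT, f"DIV2K_{split}_HR"), exist_ok=True)
    # LR images are grouped by quality factor under DIV2K_<split>_LR_bicubic.
    for q in QUALITIES:
        os.makedirs(os.path.join(ROOT, f"DIV2K_{split}_LR_bicubic", q), exist_ok=True)
```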
For more information, please refer to EDSR (PyTorch).
Begin to train
(optional) All the pretrained models and visual results can be downloaded from Google Drive.
cd to 'PANet-PyTorch/[Task]/code' and run the following scripts to train models.
You can use the scripts in the file 'demo.sb' to train and test the models for our paper.
# Example usage: Q=10
python main.py --n_GPUs 2 --batch_size 16 --lr 1e-4 --decay 200-400-600-800 --save_models --n_resblocks 80 --model PANET --scale 10 --patch_size 48 --save PANET_Q10 --n_feats 64 --data_train DIV2K --chop
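To train models for all four quality factors, the example command above can be templated over Q. The sketch below only builds and prints the command strings (a dry run); the flags mirror the single example above, and the actual 'demo.sb' may differ.

```python
def train_cmd(q: int) -> str:
    """Build the training command for JPEG quality factor q.

    Flags copied from the example call above; in this repo the --scale
    argument doubles as the quality factor for the CAR task.
    """
    return (
        f"python main.py --n_GPUs 2 --batch_size 16 --lr 1e-4 "
        f"--decay 200-400-600-800 --save_models --n_resblocks 80 "
        f"--model PANET --scale {q} --patch_size 48 --save PANET_Q{q} "
        f"--n_feats 64 --data_train DIV2K --chop"
    )

# Dry run: print one command per quality factor instead of executing.
for q in (10, 20, 30, 40):
    print(train_cmd(q))
```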
Test
Quick start
cd to 'PANet-PyTorch/[Task]/code' and run the following scripts.
You can use the scripts in the file 'demo.sb' to produce the results for our paper.
# No self-ensemble; use different test sets (Classic5, LIVE1) to reproduce the results in the paper.
# Example usage: Q=40
python main.py --model PANET --save_results --n_GPUs 1 --chop --data_test classic5+LIVE1 --scale 40 --n_resblocks 80 --n_feats 64 --pre_train ../Q40.pt --test_only
The whole test pipeline
- Prepare test data. Organize it like:
benchmark/
├── testset1
│   ├── HR
│   └── LR_bicubic
│       ├── X10
│       └── ..
└── testset2
Conduct image CAR (compression artifact removal).
See Quick start above.
Evaluate the results.
Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
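The reported numbers come from the MATLAB script. If you only want a rough sanity check of restored images before running the full evaluation (not a replacement for 'Evaluate_PSNR_SSIM.m', which may use Y-channel conversion and border cropping), PSNR can be computed in Python:

```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB between a reference image and a restored image.

    Rough RGB-domain check only; the paper's MATLAB script may differ
    (e.g. Y-channel evaluation), so values will not match exactly.
    """
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Example: for two 8x8 images differing by a constant value of 10, the MSE is 100 and the PSNR is 10*log10(255^2/100) ≈ 28.13 dB.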
Citation
If you find the code helpful in your research or work, please cite the following papers.
@article{mei2020pyramid,
title={Pyramid Attention Networks for Image Restoration},
author={Mei, Yiqun and Fan, Yuchen and Zhang, Yulun and Yu, Jiahui and Zhou, Yuqian and Liu, Ding and Fu, Yun and Huang, Thomas S and Shi, Honghui},
journal={arXiv preprint arXiv:2004.13824},
year={2020}
}
@InProceedings{Lim_2017_CVPR_Workshops,
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}
Acknowledgements
This code is built on EDSR (PyTorch), RNAN, and generative-inpainting-pytorch. We thank the authors for sharing their code.