December 15, 2020


yinboc/liif

Learning Continuous Image Representation with Local Implicit Image Function

repo name yinboc/liif
repo link https://github.com/yinboc/liif
homepage https://yinboc.github.io/liif/
language Python
size (curr.) 51 kB
stars (curr.) 245
created 2020-12-16
license BSD 3-Clause “New” or “Revised” License

LIIF

This repository contains the official implementation for LIIF introduced in the following paper:

Learning Continuous Image Representation with Local Implicit Image Function

Yinbo Chen, Sifei Liu, Xiaolong Wang

The project page with video is at https://yinboc.github.io/liif/.
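
For context, the core idea is to represent an image as a continuous function: an encoder produces a 2D feature map, and a decoding MLP maps a local latent code plus a continuous query coordinate to an RGB value, so the same representation can be sampled at any resolution. Below is a minimal, self-contained PyTorch sketch of that idea; it is illustrative only (the actual model in this repo additionally uses relative coordinates, cell decoding, and a local ensemble), and TinyLIIF is a made-up name.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyLIIF(nn.Module):
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # Decoding MLP: (local latent code, query coordinate) -> RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat, coord):
        # feat:  (B, C, H, W) feature map from any encoder (e.g. EDSR).
        # coord: (B, Q, 2) continuous query coordinates in [-1, 1], (y, x) order.
        # Pick the nearest latent code for each query (no local ensemble here).
        z = F.grid_sample(feat, coord.flip(-1).unsqueeze(1), mode='nearest',
                          align_corners=False)
        z = z[:, :, 0, :].permute(0, 2, 1)              # (B, Q, C)
        # Decode each (latent code, coordinate) pair independently to RGB.
        return self.mlp(torch.cat([z, coord], dim=-1))  # (B, Q, 3)

# Querying the same feature map at a denser grid yields a larger image:
model = TinyLIIF()
feat = torch.randn(1, 64, 8, 8)                  # stand-in encoder output
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 16), torch.linspace(-1, 1, 16))
coord = torch.stack([ys, xs], dim=-1).view(1, -1, 2)
rgb = model(feat, coord)                         # (1, 256, 3)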

Citation

If you find our work useful in your research, please cite:

@misc{chen2020learning,
      title={Learning Continuous Image Representation with Local Implicit Image Function}, 
      author={Yinbo Chen and Sifei Liu and Xiaolong Wang},
      year={2020},
      eprint={2012.09161},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Environment

  • Python 3
  • PyTorch 1.6.0
  • TensorboardX
  • yaml, numpy, tqdm, imageio
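
One possible way to install these (package names are assumptions; the yaml module is provided by the pyyaml package):

pip install torch==1.6.0 tensorboardX pyyaml numpy tqdm imageio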

Quick Start

  1. Download a DIV2K pre-trained model.

Name                 Pre-trained model
EDSR-baseline-LIIF   Download (19M)
RDN-LIIF             Download (256M)

  2. Convert your image to LIIF and render it at a given resolution (on GPU 0; [MODEL_PATH] denotes the .pth file):

python demo.py --input xxx.png --model [MODEL_PATH] --resolution [HEIGHT],[WIDTH] --output output.png --gpu 0
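
For example, to render the input at 1080×1920 with the EDSR-baseline model (the model file name here is an assumption; use whatever path you saved the download to), note that --resolution takes HEIGHT first, then WIDTH:

python demo.py --input xxx.png --model edsr-baseline-liif.pth --resolution 1080,1920 --output output.png --gpu 0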

Reproducing Experiments

Data

mkdir load to hold the dataset folders.

  • DIV2K: mkdir and cd into load/div2k. Download the HR images and bicubic validation LR images from the DIV2K website (i.e. Train_HR, Valid_HR, Valid_LR_X2, Valid_LR_X3, Valid_LR_X4). Unzip these files to get the image folders.

  • benchmark datasets: cd into load/. Download and tar -xf the benchmark datasets (provided by this repo) to get a load/benchmark folder with sub-folders Set5/, Set14/, B100/, Urban100/.

  • celebAHQ: mkdir load/celebAHQ and cp scripts/resize.py load/celebAHQ/, then cd load/celebAHQ/. Download and unzip data1024x1024.zip from the Google Drive link (provided by this repo). Run python resize.py to get the image folders 256/, 128/, 64/, 32/ (a sketch of this resizing step is shown after this list). Download split.json.
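
The repo ships its own scripts/resize.py; purely to illustrate what that step does, here is a hypothetical stand-in that downsamples the unpacked data1024x1024/ images into the four size folders (PIL is an assumed dependency):

import os
from PIL import Image

src = 'data1024x1024'
for size in (256, 128, 64, 32):
    os.makedirs(str(size), exist_ok=True)
    for name in sorted(os.listdir(src)):
        # Bicubic-downsample each image and save it under its size folder.
        img = Image.open(os.path.join(src, name)).convert('RGB')
        img.resize((size, size), Image.BICUBIC).save(os.path.join(str(size), name))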

Running the code

0. Preliminaries

  • For train_liif.py or test.py, use --gpu [GPU] to specify the GPUs (e.g. --gpu 0 or --gpu 0,1).

  • For train_liif.py, by default, the save folder is at save/_[CONFIG_NAME]. We can use --name to specify a name if needed.

  • For the dataset args in configs: cache: in_memory pre-loads the whole dataset into memory (may require large memory, e.g. ~40GB for DIV2K); cache: bin creates binary files (in a sibling folder) on the first run and reuses them afterwards; cache: none loads images directly from disk. Modify this according to your hardware resources before running the training scripts (a schematic illustration follows this list).
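
To make the trade-off concrete, here is a schematic sketch (not the repo's actual datasets code) of what the in_memory and none modes imply for a folder-of-images dataset; the bin mode, not shown, would decode each image once and serialize it to disk for faster subsequent loads:

import os
import imageio
from torch.utils.data import Dataset

class ImageFolder(Dataset):
    def __init__(self, root, cache='none'):
        self.files = sorted(os.path.join(root, f) for f in os.listdir(root))
        self.cache = cache
        if cache == 'in_memory':
            # Decode everything up front: fast __getitem__, heavy RAM use.
            self.images = [imageio.imread(f) for f in self.files]

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        if self.cache == 'in_memory':
            return self.images[idx]
        # 'none': decode from disk on every access; minimal RAM, slower epochs.
        return imageio.imread(self.files[idx])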

1. DIV2K experiments

Train: python train_liif.py --config configs/train-div2k/train_edsr-baseline-liif.yaml (with the EDSR-baseline backbone; for RDN, replace edsr-baseline with rdn). We use 1 GPU for training EDSR-baseline-LIIF and 4 GPUs for RDN-LIIF.

Test: bash scripts/test-div2k.sh [MODEL_PATH] [GPU] for the DIV2K validation set, or bash scripts/test-benchmark.sh [MODEL_PATH] [GPU] for the benchmark datasets. [MODEL_PATH] is the path to a .pth file; we use epoch-last.pth in the corresponding save folder.

2. celebAHQ experiments

Train: python train_liif.py --config configs/train-celebAHQ/[CONFIG_NAME].yaml.

Test: python test.py --config configs/test/test-celebAHQ-32-256.yaml --model [MODEL_PATH] (or test-celebAHQ-64-128.yaml for the other task). We use epoch-best.pth in the corresponding save folder.
