April 28, 2021

833 words 4 mins read

GaParmar/clean-fid

FID calculation with proper image resizing and quantization steps

repo name: GaParmar/clean-fid
repo link: https://github.com/GaParmar/clean-fid
homepage: https://www.cs.cmu.edu/~clean-fid/
language: Python
size (curr.): 4566 kB
stars (curr.): 200
created: 2021-04-23
license:

clean-fid for Evaluating Generative Models

Project | Paper | Colab Demo

The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.

We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.

[Figure: steps in the FID calculation pipeline]


On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
arXiv 2104.11222, 2021
CMU and Adobe


Buggy Resizing Operations

The definitions of resizing functions are mathematical and should never depend on the library being used. Unfortunately, implementations differ across commonly used libraries, and several popular ones implement resizing incorrectly. Try out the different resizing implementations in the Google Colab notebook here.
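To see the discrepancy directly, the sketch below (not part of the library; it assumes Pillow and PyTorch are installed) downsamples the same random array with PIL's bicubic filter and PyTorch's default bicubic interpolation. PIL widens the filter support when downsampling, which antialiases, while PyTorch samples a fixed 4x4 neighborhood and aliases:

    import numpy as np
    import torch
    import torch.nn.functional as F
    from PIL import Image

    # A random high-frequency "image"; natural photos show the same effect.
    arr = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)

    # PIL bicubic: adaptive filter support acts as an antialiasing prefilter.
    pil_out = np.asarray(
        Image.fromarray(arr).resize((299, 299), Image.BICUBIC)).astype(np.float32)

    # PyTorch bicubic: no prefilter by default, so high frequencies alias.
    t = torch.from_numpy(arr).permute(2, 0, 1)[None].float()
    th_out = F.interpolate(t, size=(299, 299), mode="bicubic", align_corners=False)
    th_out = th_out[0].permute(1, 2, 0).numpy()

    print("mean |PIL - PyTorch| per pixel:", np.abs(pil_out - th_out).mean())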

The inconsistencies among implementations can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementations of other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in relatively small FID scores (≤ 0.75). Note that since TF 2.0, the new antialias flag (default: False) can produce results close to PIL. However, the existing TF-FID repo does not use it, leaving it at the default False.
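For completeness, the TF 2.x flag mentioned above is exposed by the standard tf.image.resize API (a sketch, assuming TensorFlow 2 is installed):

    import tensorflow as tf

    x = tf.random.uniform((1, 1024, 1024, 3))

    # Default since TF 2.0: antialias=False, which aliases when downsampling
    # and matches the behavior baked into the legacy TF-FID code.
    aliased = tf.image.resize(x, (299, 299), method="bicubic")

    # antialias=True prefilters and tracks PIL's bicubic much more closely.
    filtered = tf.image.resize(x, (299, 299), method="bicubic", antialias=True)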

JPEG Image Compression

Image compression can have a surprisingly large effect on FID: images that are perceptually indistinguishable from each other can still have a large FID score. The FID scores under the images are calculated between all FFHQ images saved in the corresponding JPEG format and in the PNG format.

Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN Outdoor Church dataset (right). Note that the LSUN dataset images were collected with JPEG compression (quality 75), whereas FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
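The pixel-level change from a JPEG round trip is easy to inspect (a minimal sketch; "sample.png" is a placeholder for any losslessly stored image):

    import numpy as np
    from PIL import Image

    # "sample.png" is a placeholder path; use any losslessly stored image.
    img = Image.open("sample.png").convert("RGB")
    img.save("sample_q75.jpg", quality=75)  # LSUN-style JPEG compression

    orig = np.asarray(img, dtype=np.float32)
    jpeg = np.asarray(Image.open("sample_q75.jpg").convert("RGB"), dtype=np.float32)

    # Small, visually invisible per-pixel changes that nevertheless move FID.
    print("max abs pixel difference:", np.abs(orig - jpeg).max())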


Quick Start

  • Install requirements

    pip install -r requirements.txt
    
  • Install the library

    pip install clean-fid
    
  • Compute FID between two image folders

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, fdir2)
    
  • Compute FID between one folder of images and pre-computed dataset statistics (e.g., FFHQ)

    from cleanfid import fid
    
    score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=1024)
    
    
  • Compute FID using a generative model and pre-computed dataset statistics (a stand-in generator sketch follows this list):

    from cleanfid import fid
    
    # function that accepts a batch of latents and returns images in range [0, 255]
    gen = lambda z: GAN(latent=z, ..., <other_flags>)
    
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
            dataset_res=256, num_gen=50_000)
    
    
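For reference, here is a minimal stand-in generator of the shape compute_fid expects (an assumption in this sketch: the latents z arrive as a torch batch and the images should be uint8 tensors of shape (N, 3, H, W) in [0, 255]; substitute a real model for meaningful scores):

    import torch
    from cleanfid import fid

    # Illustrative stand-in only (assumed I/O contract, not a real model):
    # maps a batch of latents to random uint8 images in (N, 3, H, W) layout.
    def gen(z):
        n = z.shape[0]
        return torch.randint(0, 256, (n, 3, 256, 256), dtype=torch.uint8)

    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
                            dataset_res=256, num_gen=50_000)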

Supported Precomputed Datasets

We provide precomputed statistics for the following configurations:

| Task | Dataset | Resolution | Split | Mode |
|------|---------|------------|-------|------|
| Image Generation | FFHQ | 256, 1024 | train+val | clean, legacy_pytorch, legacy_tensorflow |
| Image Generation | LSUN Outdoor Churches | 256 | train | clean, legacy_pytorch, legacy_tensorflow |
| Image to Image | horse2zebra | 128, 256 | train, test, train+test | clean, legacy_pytorch, legacy_tensorflow |

Using precomputed statistics: in order to compute the FID score with the precomputed dataset statistics, use the corresponding options. For instance, to compute the clean-fid score on generated 256x256 FFHQ images, use:

fid_score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=256, mode="clean")

Create Custom Dataset Statistics

  • dataset_path: folder where the dataset images are stored

  • custom_name: name to be used for the statistics

  • Generating custom statistics (saved to local cache)

    from cleanfid import fid
    fid.make_custom_stats(custom_name, dataset_path, mode="clean")
    
  • Using the generated custom statistics

    from cleanfid import fid
    score = fid.compute_fid("folder_fake", dataset_name=custom_name,
              mode="clean", dataset_split="custom")
    
  • Removing the custom stats

    from cleanfid import fid
    fid.remove_custom_stats(custom_name, mode="clean")
    

Backwards Compatibility

We provide two flags to reproduce the legacy FID score.

  • mode="legacy_pytorch" This flag is equivalent to using the popular PyTorch FID implementation provided here The difference between using clean-fid with this option and code is ~2e-06 See doc for how the methods are compared

  • mode="legacy_tensorflow" This flag is equivalent to using the official implementation of FID released by the authors. The difference between using clean-fid with this option and code is ~2e-05 See doc for detailed steps for how the methods are compared


CleanFID Leaderboard for common tasks

FFHQ @ 1024x1024

| Model | Legacy-FID | Clean-FID |
|-------|------------|-----------|
| StyleGAN2 | 2.85 ± 0.05 | 3.08 ± 0.05 |
| StyleGAN | 4.44 ± 0.04 | 4.82 ± 0.04 |
| MSG-GAN | 6.09 ± 0.04 | 6.58 ± 0.06 |

Image-to-Image (horse->zebra @ 256x256), computed using test images

| Model | Legacy-FID | Clean-FID |
|-------|------------|-----------|
| CycleGAN | 77.20 | 75.17 |
| CUT | 45.51 | 43.71 |

Building from source

    python setup.py bdist_wheel
    pip install dist/*

Citation

If you find this repository useful for your research, please cite the following work.

@article{parmar2021cleanfid,
  title={On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation},
  author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
  journal={arXiv preprint arXiv:2104.11222},
  year={2021}
}

Related projects:

torch-fidelity: High-fidelity performance metrics for generative models in PyTorch.
TTUR: Two time-scale update rule for training GANs.
LPIPS: Perceptual Similarity Metric and Dataset.

Credits

PyTorch-StyleGAN2 (LICENSE)

PyTorch-FID (LICENSE)

StyleGAN2 (LICENSE)

converted FFHQ weights: code | LICENSE
