October 30, 2019


hzwer/ICCV2019-LearningToPaint

ICCV2019 - A painting AI that can reproduce paintings stroke by stroke using deep reinforcement learning.

repo name hzwer/ICCV2019-LearningToPaint
repo link https://github.com/hzwer/ICCV2019-LearningToPaint
homepage
language Python
size (curr.) 19111 kB
stars (curr.) 1605
created 2019-03-11
license MIT License

ICCV2019-Learning to Paint

arXiv | YouTube | Reddit

Abstract

We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings. By employing a neural renderer in model-based Deep Reinforcement Learning (DRL), our agents learn to determine the position and color of each stroke and make long-term plans to decompose texture-rich images into strokes. Experiments demonstrate that excellent visual effects can be achieved using hundreds of strokes. The training process does not require the experience of human painters or stroke tracking data.

You can easily try it out in Colaboratory.

(Demo animations of the painting process)

  • Our ICCV poster

Installation

Use Anaconda to manage the environment:

$ conda create -n py36 python=3.6
$ source activate py36
$ git clone https://github.com/hzwer/LearningToPaint.git
$ cd LearningToPaint

Dependencies

pip3 install torch==1.1.0
pip3 install tensorboardX
pip3 install opencv-python

Testing

Make sure renderer.pkl and actor.pkl are present before testing.

You can download a trained neural renderer and a CelebA actor for testing: renderer.pkl and actor.pkl

$ wget "https://drive.google.com/uc?export=download&id=1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4" -O renderer.pkl
$ wget "https://drive.google.com/uc?export=download&id=1a3vpKgjCVXHON4P7wodqhCgCMPgg1KeR" -O actor.pkl
$ python3 baseline/test.py --max_step=100 --actor=actor.pkl --renderer=renderer.pkl --img=image/test.png --divide=4
$ ffmpeg -r 10 -f image2 -i output/generated%d.png -s 512x512 -c:v libx264 -pix_fmt yuv420p video.mp4 -q:v 0 -q:a 0
(makes a video of the painting process)
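If the Google Drive downloads fail silently (for example an HTML page gets saved instead of the checkpoint), the test script will fail when loading the files. A quick sanity check you can run first, assuming both files are ordinary PyTorch checkpoints (state dicts):

import torch

for name in ("renderer.pkl", "actor.pkl"):
    ckpt = torch.load(name, map_location="cpu")
    # A valid checkpoint is a dict-like mapping of parameter names to tensors.
    print(name, list(ckpt.keys())[:3])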

We also provide some other neural renderers and agents; you can use them instead of renderer.pkl to train the agent:

triangle.pkl and actor_triangle.pkl;

round.pkl and actor_round.pkl;

bezierwotrans.pkl and actor_notrans.pkl

We also provide a 百度网盘 (Baidu Netdisk) mirror. Link: https://pan.baidu.com/s/1GELBQCeYojPOBZIwGOKNmA Extraction code: aq8n

Training

Datasets

Download the CelebA dataset and put the aligned images in data/img_align_celeba/******.jpg
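A quick way to check that the images landed where the training code expects them (a throwaway snippet, not part of the repo):

import glob
print(len(glob.glob("data/img_align_celeba/*.jpg")), "aligned CelebA images found")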

Neural Renderer

To create a differentiable painting environment, we need to train the neural renderer first.

$ python3 baseline/train_renderer.py
$ tensorboard --logdir train_log --port=6006
(The training process will be shown at http://127.0.0.1:6006)
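The idea behind this step: sample random stroke parameters, rasterize them with a conventional (non-differentiable) renderer to get a target stroke image, and fit a network that maps the parameters to the same image. The trained network then gives the agent a differentiable path from stroke parameters to pixels. The following is only a rough sketch of that training loop; StrokeNet and the toy disc rasterizer are stand-ins, not the repo's actual modules:

import torch
import torch.nn as nn

CANVAS = 128

class StrokeNet(nn.Module):
    # Maps a stroke parameter vector to a single-channel stroke image.
    def __init__(self, param_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, 512), nn.ReLU(),
            nn.Linear(512, CANVAS * CANVAS), nn.Sigmoid())

    def forward(self, p):
        return self.net(p).view(-1, 1, CANVAS, CANVAS)

def rasterize_stroke(p):
    # Toy stand-in for a conventional rasterizer: draws a solid disc whose
    # centre and radius come from the first three parameters in [0, 1].
    xs = torch.arange(CANVAS).float().view(1, 1, CANVAS)
    ys = torch.arange(CANVAS).float().view(1, CANVAS, 1)
    cx = p[:, 0].view(-1, 1, 1) * CANVAS
    cy = p[:, 1].view(-1, 1, 1) * CANVAS
    r = p[:, 2].view(-1, 1, 1) * CANVAS / 4 + 1
    return (((xs - cx) ** 2 + (ys - cy) ** 2) <= r ** 2).float().unsqueeze(1)

net = StrokeNet()
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
for _ in range(100):                        # regress on randomly sampled strokes
    p = torch.rand(64, 10)
    loss = ((net(p) - rasterize_stroke(p)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()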

Paint Agent

Once the neural renderer looks good enough, we can begin training the agent.

$ cd baseline
$ python3 train.py --max_step=40 --debug --batch_size=96
(A step contains 5 strokes by default.)
$ tensorboard --logdir train_log --port=6006
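Once both networks are trained, painting is a simple loop: at each step the actor looks at the current canvas and the target image and proposes a bundle of stroke parameters (5 strokes per step by default), and the neural renderer turns each stroke into pixels that are blended onto the canvas. A schematic sketch of that loop, using hypothetical actor/renderer callables rather than the repo's actual classes:

import torch

def paint(target, actor, renderer, max_step=40):
    # target: (1, 3, H, W) image in [0, 1]; actor and renderer are callables
    # standing in for the trained networks.
    canvas = torch.zeros_like(target)
    for step in range(max_step):
        # One step: the actor proposes a bundle of strokes (5 by default in
        # the repo), given the canvas, the target and the step index.
        strokes = actor(canvas, target, step)
        for s in strokes:
            # The renderer turns a stroke into colour and alpha maps,
            # which are alpha-blended onto the canvas.
            color, alpha = renderer(s)
            canvas = canvas * (1 - alpha) + color * alpha
    return canvas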

FAQ

Why does your demo look better than the result in your paper?

In the demo, after painting the overall outline of each image, we divide the canvas into small patches and paint them in parallel to obtain a higher-resolution result.
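As an illustration of the idea (not the repo's code), a 512x512 canvas painted with --divide=4 is split into a 4x4 grid of 128x128 patches that are painted independently and stitched back together:

import numpy as np

def split_into_patches(img, divide=4):
    # img: (H, W, C) array; returns a divide x divide list of patches.
    h, w = img.shape[0] // divide, img.shape[1] // divide
    return [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(divide) for j in range(divide)]

canvas = np.zeros((512, 512, 3), dtype=np.uint8)
patches = split_into_patches(canvas)        # 16 patches, each 128 x 128 x 3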

What is your main difference from primitive?

Our research explores how to make machines learn to use painting tools. Our implementation combines reinforcement learning and computer vision. Please read our paper for more details.

Resources

  • Chinese introductions

A college junior builds an AI freehand painter that wields the brush like a human

Learning to Paint: a painting AI

Megvii Research presents a painting agent based on deep reinforcement learning

Contributors

Many thanks to ctmakro for inspiring this work. He also explored using a greedy algorithm to generate paintings: opencv_playground.

If you find this repository useful for your research, please cite the following paper:

@article{huang2019learning,
  title={Learning to Paint with Model-based Deep Reinforcement Learning},
  author={Huang, Zhewei and Heng, Wen and Zhou, Shuchang},
  journal={arXiv preprint arXiv:1903.04411},
  year={2019}
}