# Pytorch-DeepImageAnalogy

Implementation of the Deep Image Analogy algorithm [Liao et al. 2017] using PyTorch. It is meant to be as simple and easy to read as possible, so that everyone can understand how the algorithm works alongside the original paper.

Deep Image Analogy is an adaptation of Image Style Transfer [Gatys et al. 2016] that uses feature maps produced by a deep CNN such as VGG [Simonyan, Zisserman 2015] together with the Randomized PatchMatch technique [Barnes et al. 2009] to transfer visual attributes (color, style) from one image to another, while preserving the semantic attributes of the original image.
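To give a feel for the PatchMatch component, here is a minimal NumPy sketch (not the repository's implementation) of randomized PatchMatch: a nearest-neighbour field (NNF) is randomly initialized, then refined by propagating good offsets from neighbouring pixels and by random search with a shrinking radius. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def patch_dist(A, B, ay, ax, by, bx, p=1):
    """Sum of squared differences between (2p+1)x(2p+1) feature patches
    centered at (ay, ax) in A and (by, bx) in B, clamped at the borders."""
    H, W, _ = A.shape
    d = 0.0
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            ya = min(max(ay + dy, 0), H - 1); xa = min(max(ax + dx, 0), W - 1)
            yb = min(max(by + dy, 0), H - 1); xb = min(max(bx + dx, 0), W - 1)
            d += float(np.sum((A[ya, xa] - B[yb, xb]) ** 2))
    return d

def patchmatch(A, B, iters=4, seed=0):
    """Compute an NNF mapping each patch of A to a similar patch of B.
    nnf[y, x] = (by, bx) holds the current best match for pixel (y, x)."""
    rng = np.random.default_rng(seed)
    H, W, _ = A.shape
    # Random initialization of the NNF and its matching costs.
    nnf = np.stack([rng.integers(0, H, (H, W)),
                    rng.integers(0, W, (H, W))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x])
                      for x in range(W)] for y in range(H)])

    def try_offset(y, x, by, bx):
        by = min(max(int(by), 0), H - 1); bx = min(max(int(bx), 0), W - 1)
        d = patch_dist(A, B, y, x, by, bx)
        if d < cost[y, x]:
            cost[y, x] = d
            nnf[y, x] = (by, bx)

    for it in range(iters):
        # Alternate scan direction on odd iterations.
        ys = range(H) if it % 2 == 0 else range(H - 1, -1, -1)
        xs = range(W) if it % 2 == 0 else range(W - 1, -1, -1)
        step = 1 if it % 2 == 0 else -1
        for y in ys:
            for x in xs:
                # Propagation: adopt a neighbour's offset, shifted by one pixel.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < H and 0 <= nx < W:
                        try_offset(y, x,
                                   nnf[ny, nx][0] + (y - ny),
                                   nnf[ny, nx][1] + (x - nx))
                # Random search around the current best, halving the radius.
                r = max(H, W)
                while r >= 1:
                    try_offset(y, x,
                               nnf[y, x][0] + rng.integers(-r, r + 1),
                               nnf[y, x][1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf, cost
```

Deep Image Analogy runs this kind of matching on VGG feature maps at several scales rather than on raw pixels, which is what lets it match semantically corresponding regions across very different-looking images.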

The following images are examples of the kind of results we are able to get so far. There are still noticeable artifacts to be improved, but the algorithm is clearly doing the right thing!

## Dependencies

We use Python 3.6.1, along with the following dependencies. These instructions assume a conda virtual environment; if you are not using one, use `pip3` instead of `pip`.

- pytorch: `pip install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl`
- numpy: `pip install numpy==1.13.3`
- matplotlib: `pip install matplotlib==2.0.2`
- torchvision: `pip install torchvision`

Python built-in dependencies:

- pickle
- os

## To run

1. Edit `config.py` to choose the images you want to run on.
2. Run with `python DeepImageAnalogy.py`.
3. When it is done, the results are saved in the `Results/` folder. If you set `config['save_NNFs'] = True` and `config['save_FeatureMaps'] = True` in the config file, you can also open a notebook with `jupyter-notebook Visualize.ipynb` and visualize the generated feature maps and NNFs there.
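As a rough illustration of step 1, a `config.py` along these lines would work with the flags the steps above mention. Only `save_NNFs` and `save_FeatureMaps` appear in this README; every other key name below is a hypothetical placeholder, not the repository's actual schema.

```python
# config.py -- illustrative sketch only.
# 'save_NNFs' and 'save_FeatureMaps' are the flags mentioned in the README;
# the image-path keys are hypothetical names for this example.
config = {
    'image_A_path':  'Images/content.png',  # hypothetical key: source image
    'image_BP_path': 'Images/style.png',    # hypothetical key: style image
    'save_NNFs': True,          # persist nearest-neighbour fields for Visualize.ipynb
    'save_FeatureMaps': True,   # persist VGG feature maps for Visualize.ipynb
}
```

With both flags set to `True`, the intermediate NNFs and feature maps are written to disk so the notebook can load and display them after the run.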