Bringing Portraits to Life
Hadar Averbuch-Elor       Daniel Cohen-Or       Johannes Kopf       Michael F. Cohen
Tel Aviv University       Tel Aviv University       Facebook       Facebook
To be presented at SIGGRAPH Asia 2017
Given a single image (top row), our method automatically generates photo-realistic videos that express various emotions. A driving video of a different subject supplies the expressiveness, which our method transfers to the target portrait. Representative frames from the resulting videos are shown above.


We present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. We use a driving video (of a different subject) and develop means to transfer the expressiveness of the subject in the driving video to the target portrait. In contrast to previous work, which requires an input video of the target face to reenact a facial performance, our technique uses only a single target image. We animate the target image through 2D warps that imitate the facial transformations in the driving video. As warps alone do not carry the full expressiveness of the face, we add fine-scale dynamic details that are commonly associated with facial expressions, such as creases and wrinkles. Furthermore, we hallucinate regions that are hidden in the input target face, most notably the inner mouth. Our technique gives rise to reactive profiles, where people in still images can automatically interact with their viewers. We demonstrate our technique on numerous still portraits from the internet.
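The core of the transfer step above is moving the target's facial landmarks by the displacements observed in the driving video, after compensating for the difference in face size; the moved landmarks then drive a 2D image warp. The following is a minimal sketch of that landmark-transfer idea only (not the paper's exact formulation): the function name and the isotropic bounding-box scale normalization are assumptions for illustration.

```python
import numpy as np

def transfer_landmarks(target_pts, drive_ref, drive_frame):
    """Move target landmarks by the scale-normalized displacement observed
    between a driving-video reference frame and the current frame.

    target_pts  : (N, 2) landmarks detected on the still target portrait
    drive_ref   : (N, 2) landmarks on the driving video's reference frame
    drive_frame : (N, 2) landmarks on the current driving frame

    Hypothetical helper; the paper's method additionally adds fine-scale
    details and hallucinates hidden regions such as the inner mouth.
    """
    # Compensate for face-size differences with an isotropic scale:
    # ratio of mean landmark bounding-box extents (an assumption here).
    s = np.ptp(target_pts, axis=0).mean() / np.ptp(drive_ref, axis=0).mean()
    # Apply the driving video's per-landmark displacement to the target.
    return target_pts + s * (drive_frame - drive_ref)
```

The resulting per-frame landmark positions can then parameterize any 2D warp of the portrait (e.g., a piecewise-affine warp over a landmark triangulation).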


Paper (PDF)
Supplementary Material

BibTeX Reference

@article{AverbuchElor2017portraits,
  author = {Hadar Averbuch-Elor and Daniel Cohen-Or and Johannes Kopf and Michael F. Cohen},
  title = {Bringing Portraits to Life},
  journal = {ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2017)},
  volume = {36},
  number = {6},
  pages = {to appear},
  year = {2017},
}


We thank Peter Hedman, Noa Fish, Tal Hassner, and Amit Bermano for their insightful comments and suggestions. We also thank Ohad Fried, Justus Thies, Matthias Niessner, Pablo Garrido, and Christian Theobalt for providing us with comparisons to their techniques. This work was partially supported by the Israel Science Foundation (grants 1790/12 and 2366/16).