Daniel Sýkora has developed software that takes the stylistic information from a painting or sculpture and applies it to video of a moving face, adapting the style to the proportions of the target face. (Link to YouTube) Note: the video is silent.
It's reminiscent of a Snapchat filter or of a painted-over video, but it seems a bit more sophisticated than either.


It would be fun to see what would happen if they tried to push the limits of the software by testing it against an animal face or a Picasso.
The paper, presented at SIGGRAPH, is called "Example-Based Synthesis of Stylized Facial Animations."
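
For anyone curious about the general idea of borrowing a painting's "look," here is a much simpler sketch than what the paper does: a global color-statistics transfer in Python with OpenCV. It only matches the painting's overall color palette on a single frame, whereas the real system synthesizes brush-stroke texture and follows the facial features over time. The file names are placeholders.

    # Simple color-statistics transfer (in the spirit of Reinhard et al.),
    # NOT the paper's patch-based synthesis -- just an illustration of
    # pushing one image's "look" onto another.
    import cv2
    import numpy as np

    def color_transfer(style_path, target_path):
        # Work in LAB space, where matching per-channel mean/std behaves well.
        style = cv2.cvtColor(cv2.imread(style_path), cv2.COLOR_BGR2LAB).astype(np.float32)
        target = cv2.cvtColor(cv2.imread(target_path), cv2.COLOR_BGR2LAB).astype(np.float32)

        # Shift and scale each channel of the target to match the style image.
        for c in range(3):
            t_mean, t_std = target[..., c].mean(), target[..., c].std()
            s_mean, s_std = style[..., c].mean(), style[..., c].std()
            target[..., c] = (target[..., c] - t_mean) * (s_std / (t_std + 1e-6)) + s_mean

        result = np.clip(target, 0, 255).astype(np.uint8)
        return cv2.cvtColor(result, cv2.COLOR_LAB2BGR)

    if __name__ == "__main__":
        # "style_painting.jpg" and "face_frame.jpg" are placeholder file names.
        out = color_transfer("style_painting.jpg", "face_frame.jpg")
        cv2.imwrite("stylized_frame.jpg", out)

Run on a single video frame, this gives a rough "tinted" version of the face; it makes clear how much more the paper's example-based synthesis is doing beyond color.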
4 comments:
Looks like a whole new way to do animation (or maybe not). Has this technique been used in film before? Can it be applied only to faces, or can a whole person or animal be mimicked this way? I can just imagine walking up to a portrait in a museum and having it start talking to me, telling me what the artist was trying to accomplish, what materials were used, etc. :-)
Algorithm mumbo jumbo. Filters like this have been around for a while (e.g., in programs like Adobe After Effects, Adobe Photoshop, etc.).
I used CrazyTalk Animator (https://www.reallusion.com/crazytalk-animator/) a few years ago to have my cat talk.