Neil deGrasse Tyson plus Kandinsky's Jaune Rouge Bleu.
Photo by Guillaume Piolle, via Google Research
Modern apps can accomplish more than a Photoshop filter can, because they enlist neural networks that separate an image's style from its content.
They appear to set up a hierarchy of what's important about an image. In the portrait above, they keep the eyes and mouth in place while scrambling the less important jacket and tie.
Image by Manenti1 using the Aptitude filter via Dreamscope
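For the technically curious, this separation of style from content is often done (in the approach popularized by Gatys and colleagues) by comparing feature statistics: content is matched feature-by-feature in place, while style is summarized by a Gram matrix of feature correlations, which throws away spatial layout entirely. Here is a minimal NumPy sketch of that idea, with random arrays standing in for the feature maps a real network would produce; the function name and shapes are illustrative, not taken from any particular library:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: array of shape (channels, height, width).
    Flattening the spatial dimensions before the dot product discards
    *where* things are, keeping only *which* features co-occur --
    a rough stand-in for "style".
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

# Toy feature maps (in a real system these come from a CNN layer).
rng = np.random.default_rng(0)
content_features = rng.standard_normal((4, 8, 8))

# Shuffling the spatial positions changes the "content"...
flat = content_features.reshape(4, -1)
shuffled = flat[:, rng.permutation(64)].reshape(4, 8, 8)

# ...but leaves the Gram matrix (the "style") unchanged.
print(np.allclose(gram_matrix(content_features), gram_matrix(shuffled)))
```

The shuffle test is the key point: because the Gram matrix averages over all spatial positions, two images with completely different layouts can share the same style signature, which is what lets these filters repaint a photo in another artist's manner while the content loss holds the eyes and mouth in place.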
To better simulate childlike, subjective, or naïve styles, such as those of Cézanne, Renoir, or Matisse, the computer will have to redraw the image so that placement and proportions deviate from photographic reality in the ways those human practitioners' work did.
Nat and Lo, two Google employees who go around the company asking how things work, do a good job explaining how deep learning techniques help computers solve this problem. You may need to follow this link to watch the video on YouTube.
Understanding this process helps us understand how we humans see and interpret images. It can also help us as artists, whether we want to develop our own style or, on the contrary, try to rid ourselves of stylistic conventions.
Previous Related Posts:
Using Computers to Create a Typical Rembrandt