It begins with a photo. But unlike the standard Photoshop filters that apply an overall template to the image, Mr. Balestrieri’s system deconstructs the image using a model of human visual perception. It then reconstructs the picture, making basic artistic choices. From left: photo, drawing layer, and painting.
He told me about the process:
“The software looks at the colors, the shapes, the surface gradients, edges, etc., and converts them into paint strokes, which are then rendered with a digital paint brush -- all through code, without the user making any strokes.
My aim is to apply low-level visual perception in the form of machine-vision algorithms as well as a "painter's sense" to transform the images into painted representations -- to “de-photofy” them. I'm not saying the result is art -- I'm merely trying to computer-assist painting techniques.
It's a very difficult problem, I've found, to remove the elements of an image that make it appear photographic and to rebuild the image with something that approximates a human process -- the best I can do so far is try and fake it as best I can. :) Beyond low-level visual perception you start to get into very difficult areas of perception, including object recognition, 3D scene reconstruction, etc. The research into the field of artificial intelligence is still at the very, very early stages."
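The pipeline he describes -- analyze gradients and edges, then convert them into strokes -- can be sketched in a few lines. This is my own toy illustration of the general idea in plain NumPy, not Mr. Balestrieri's actual code: seed strokes on a regular grid and orient each one along the local edge direction, perpendicular to the image gradient.

```python
import numpy as np

def plan_strokes(gray, spacing=4):
    """Seed strokes on a regular grid and orient each one along the local
    edge direction (perpendicular to the image gradient)."""
    gy, gx = np.gradient(gray.astype(float))  # np.gradient returns (d/dy, d/dx)
    strokes = []
    h, w = gray.shape
    for y in range(0, h, spacing):
        for x in range(0, w, spacing):
            strength = np.hypot(gx[y, x], gy[y, x])
            angle = np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2  # along the edge
            length = 6.0 / (1.0 + strength)  # shorter strokes where detail is busy
            strokes.append((x, y, angle, length, gray[y, x]))  # tone = local value
    return strokes

# A hard vertical edge: strokes near it orient vertically, following the edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
strokes = plan_strokes(img, spacing=2)
```

A real system would then rasterize each stroke with a brush texture, typically laying in the largest strokes first and refining with smaller passes.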
Flickr page explaining "John Glenn Suiting Up"
Flickr set of additional images.
Interesting that he should point out that he is not trying to make 'art', but simply trying to make an algorithm. I'm sure it would cause a big uproar in the art world!
But indeed, the style is pretty convincing, especially with that last Glenn painting.
Maybe it would look even more convincing if the strokes were somehow trained to use a certain hand. I used to have a drawing professor that could tell a drawing was done by a left handed or right handed person; strokes have certain directions that are dominant in the different sides. I'm not good enough at observing that really, but we can all probably see something like that on a subconscious level.
It was just a matter of time...
The algorithm doesn't seem to translate soft and hard edges. If it did, that would probably add another layer of refinement.
Very interesting! :D
fascinating, but also unnerving.
I suppose this adds credence to the idea my teacher proposed while I was in school: "Anyone can learn to render with realism. Even a monkey could do it. The real trick is making pictures worth looking at."
As this sort of programming becomes commonplace, I suppose traditional media will become even more valuable since the "hand of the maker" will be evident.
I have to say though, the computer has a pretty good color sense!
Hi, I'm the software's author.
Natalia:
Re: Art -- Nothing makes me bristle more than software publishers' claims of push-a-button "art" -- which is why I think it is helpful to make the distinction between the craft of painting and Art. :)
Re: Strokes -- There has been research on 'learned' styles; you might find this interesting:
image analogies
And here are the fascinating pictures:
image analogies samples
But as you all suspect, this rendering is merely "superficial emulation" of the style, and is absent of human-level thinking or intentionality.
Nana:
Yes, I would like to differentiate between different types of edges, and Mr. Gurney's blog has been a fantastic resource, particularly this post on depth and edges:
depth and edges
But for that to be emulated in software, the computer must learn (or guess) how to detect 3d space from a monocular 2d image. There is much research in the machine vision field in this area, but nothing is even close to being accurate yet. Fortunately, I'm not trying to design a seeing-robot to drive cars on the highways, so I don't have to worry about it killing anyone. My hope is to implement something that is 'good enough' for painting. :)
Corel Painter X introduced this sort of technology, calling it Smart Strokes. Painter has had "auto painting" for a while, but the results were much like a Photoshop filter -- only slower. Smart Strokes claims to follow contours and vary strokes based on the subject matter. It does just that, but not very well, IMHO. It can be a good starting point, however.
This is great! Pretty soon computers will be able to do everything. Then we won't need to have any people at all.
John, about that 2D-3D issue, what if you skipped the 2D photo step and hooked up the software to a video camera "eye" that could make depth judgments directly from real XYZ space?
I also wonder what the potential would be to marry this technology with facial recognition programs. It seems a machine intelligence could assess how the metrics of a face differ from the norm, and then generate a caricature.
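That caricature idea -- exaggerate whatever deviates from the average face -- is simple to sketch once you assume some source of facial landmarks. The landmark data below is made up purely for illustration:

```python
import numpy as np

def caricature(landmarks, mean_landmarks, amount=1.5):
    """Exaggerate a face by scaling each landmark's deviation from the
    average face (the classic caricature-as-amplified-difference idea)."""
    landmarks = np.asarray(landmarks, dtype=float)
    mean = np.asarray(mean_landmarks, dtype=float)
    return mean + amount * (landmarks - mean)

# Hypothetical data: a landmark 2 units wider than average
# becomes 3 units wider at amount=1.5.
mean_face = np.array([[0.0, 0.0], [10.0, 0.0]])
this_face = np.array([[0.0, 0.0], [12.0, 0.0]])
exaggerated = caricature(this_face, mean_face, amount=1.5)
```

The hard part, of course, is the "assess the norm" step: you need a landmark detector and a statistically meaningful average face before this amplification trick does anything useful.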
@John:
What an interesting and fun project!
As a professional computer programmer with no small familiarity with graphics, and an amateur photographer with a few years' experience fighting my DSLR to capture what I "see" (if there is such an objective thing!), I'll state without equivocation that the hardware, the camera firmware (proprietary algorithms for turning sensor data into digital pictures), and limited display technology (the color gamut and low dynamic range of even the best displays) are only the beginning of your troubles in getting painterly images out of photographs (manually with real materials, or automatically as in the case of your software). My mother paints from photographs and suffers from some of the same problems: what is it about the source photo that makes it look so photographic, compared to a master painting created from life, which appears more real, more vivid?
For livelier images here are a few possible avenues of investigation:
* Perhaps consider working from the RAW images your camera produces (Google search for dcraw.c, a piece of public-domain code that can read nearly any camera's RAW format) and performing your own tone mapping and sharpening. There is literally an infinitude of ways to turn that sensor data into an image, and even the best Nikon and the best Canon cameras do things differently, so neither can be the final word.
* Consider working in a color space like CIE L*a*b* instead of RGB, since it matches more closely the data collected from human visual perception experiments instead of the (arbitrary) RGB display technology.
* Finally, to gain some insight into the various limitations that make an image seem "photographic" compared to the "real thing", try taking photos of the same high-contrast scene (preferably outdoors) with two cameras from different manufacturers and see how the results vary, even when displayed on a variety of good LCD screens, especially compared to what you see in person. The camera's firmware does an amazing amount of work converting the numerical photon data from the sensor into a RAW or JPG digital image viewable on screen. And the monitor does quite a bit of finagling to render its final result so it can be projected onto your retina.
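The L*a*b* suggestion above is straightforward to try: the sRGB-to-Lab conversion is standard, well-documented math. This sketch assumes D65 white and channel values in [0, 1]:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple in [0, 1] to CIE L*a*b* (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to get linear light
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (standard sRGB matrix, D65)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ linear
    # Normalize by the D65 reference white, then apply the Lab nonlinearity
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

L, a, b = srgb_to_lab([1.0, 1.0, 1.0])  # white: L near 100, a and b near 0
```

The payoff is that Euclidean distances in L*a*b* roughly track perceived color difference, which makes thresholds for "same color region" far more meaningful than distances in raw RGB.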
Just some rambling thoughts... good luck on your project.
I like the look it's getting. I bet this could seep into the mainstream as very inexpensive yet lucrative portrait business.
Otherwise, it saddens me to no end that it still looks better than what I'm doing. :(
=s=
Is it true? Is it true? Dinotopia: The Fantastical Art of James Gurney will be at the Delaware Art Museum Feb. 6-May 16? Can't wait!
Jeanne, yes, it's true, and I'll be doing a couple of presentations there. I'll do a post about it in a week or so.
@John, building on the difference of strokes Nana mentioned, and what James Gurney said here ("John, about that 2D-3D issue, what if you skipped the 2D photo step and hooked up the software to a video camera "eye" that could make depth judgments directly from real XYZ space?"):
Perhaps at the 'under drawing' stage, the program could also figure out some sort of 'bump map' or 'normal map' from the image -- but instead of taking the texture from the image, creating textures from the color samples. (If that makes any sense at all.) The greyscale image could help in organizing where long and short, fat and thin, soft and hard strokes could go.
So instead of the bump map taking information from shadows and light to figure out how elevated things are, it would use that information for the actual shadow and light! (It's hard to make this make sense!)
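The "normal map from a greyscale image" half of this is a standard graphics trick and easy to sketch: treat the image as a height field and take gradients. This is a toy NumPy illustration; the genuinely hard part the comment hints at -- separating shading from actual elevation -- is untouched here.

```python
import numpy as np

def normal_map(height, strength=1.0):
    """Treat a greyscale image as a height field and derive per-pixel
    surface normals from its gradients (the standard bump-map trick)."""
    gy, gx = np.gradient(np.asarray(height, dtype=float))
    n = np.dstack([-gx * strength, -gy * strength, np.ones_like(gx)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A flat image yields normals pointing straight at the viewer, (0, 0, 1);
# sloped regions tilt them, which could steer stroke direction and length.
n = normal_map(np.zeros((4, 4)))
```

A stroke planner could then, for example, run long soft strokes where normals change slowly and short hard ones where they turn sharply.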
I'm not a programmer, but I'm an animation student, so I'm just trying to think the Maya way.
Thanks for the info! It's way too cool!
So.... why do this?
Paintings are beautiful. Computer art can be beautiful too. This seems to just leave us with the worst of both worlds.
@ John-Paul: My teachers said that as well.
Although I'm currently putting out images based on photos, my main goal is to use this as a tool for my own images and 3d renderings. Using photos and de-photo-fying seems to be a tough problem (for me) so I'm using that as a starting point.
At the end of the day, it's just a tool, no more or less interesting than the latest power-tool.
@J. Gurney:
Yes, 3d live cameras are possible (you can even buy ones that export depth maps, for use in do-it-yourself driving robots). I have also imagined what it would be like to bring this contraption to an open figure drawing session! :0
For caricatures, you're right, the computer would have to know the norm. But, it sounds like it would be years of work, and you'd still get a better caricature, and quicker, from any artist on the boardwalk, and probably for less money too. :)
@Jared:
Thank you. Gamut and color spaces aside, though, the main wall I seem to be hitting is making the leap from a collection of color pixels to 'higher level' shapes and groupings... (and that is why I'm grateful for resources like this blog.)
@Natalia M.:
This is very close to what I am doing. :)
@Shane: What?! The drawings and illustrations on your site & blog look way better. Seriously.
@Darren: This is what I like to do, and beyond that, I think I see the potential for interesting images at the end of it all. Besides, it keeps me off the streets. ;)
"Open the pod bay doors, HAL"
Excellent illustrations and paintings. Congratulations.
Health and peace!
I was just thinking about this a few days ago -- why can't someone make a program that makes the art for you? And here it is, crazy! Fascinating how it turned out, much better than the filters we've seen before.
This is very interesting, but I am most surprised by the reaction. I expected more of a backlash, although there was some. I must admit an initial rejection of the concept, as an aspiring artist. However, I am also a curious person who takes an interest in technology, whether that be a walking robot or a cutter ship's rigging.
I imagine that as we unravel mysteries such as this one, and what makes us feel love, rather than destroying the magic it will only lead to more interesting questions.
You can only ask why so many times before the answer inevitably becomes "just because!"
Quick addendum:
Congratulations on your hard work, John. Only someone with the right brain for what looks good and the left brain to understand how to get there could have pulled this off. Not to mention the desire and drive to make it happen.
All the great artists I have ever met have this whole-brain approach to their art: a good design sense mixed with the technical understanding of how to bring it to life.
Very interesting. For a while I've been wondering when someone would take the Photoshop filters further.
It will be interesting to see where this side of "art by computer programming" goes.
There certainly is no limit for human creativity. We make our future whether it's planned or accidental.
Which leads me to say what everyone is thinking, I assume.
I can see this program being bought for a huge chunk of money and then being licensed to other companies who use "art" in their products, publications and whatnot, rendering human artists useless. It has happened in many industries, so why not here as well.
This is a great example of what humans are capable of doing "materialistically": using previous knowledge and inventing on top of it. I wonder when we'll figure out how to apply the same principles to ourselves and perhaps stop fighting? Whether that would be a good idea or not, who knows.
Thanks for another great post, Jim, and I hope to see more of where this leads in the future. Curiosity has no limits. :)
"Imagination rules the world"
-Napoleon Bonaparte
Corel's Painter X has an auto paint function and you can pick different styles for it to paint in.
I don't particularly like the results but it looks like a similar thing.
Jeff
Actually, I can see how this could be a good learning tool (for both the programmer and us).
I'm going to say basically the same thing I said on the post on "art-generator" software.
As with the advent of photography, all these digital techniques will force graphic artists to reposition themselves a bit, in the sense that they have to figure out what differentiates them from a photographer or a computer.
I do not fear all this new technology as it will never (are you sure, Erik?) be able to generate a good story, a good idea or concept, a good drama.
The most important ability of a brain is, in my opinion, empathy. That is, not only is it able to feel things, but even to imagine what someone else feels - and even actually feel what someone else feels!
Think about it. Without empathy, we wouldn't be able to get 'in' to a story or a painting and feel the emotions presented in that work.
Get me a computer that is capable of empathy...and I'll start worrying about my job as an artist.
But until then -- as the original post mentions -- it's all just 'faking' stuff (albeit more and more impressively).
Very intriguing approach. I'd like to know more about how the program simulates human choices. How close is it to the artist's method?
Thomas James
Escape From Illustration Island
I think this software would work better if it started with a stereo pair of images, as this would help it to detect which edges are important.
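For what it's worth, the core of stereo matching is compact enough to sketch. This toy 1-D block matcher (my illustration, not anything from the software under discussion) finds, for each pixel on a left scanline, the shift that best matches the right scanline; jumps in the resulting disparity map mark depth discontinuities, which is one possible cue for which edges "matter":

```python
import numpy as np

def disparity_row(left, right, window=3, max_disp=8):
    """Toy 1-D block matching: for each pixel on the left scanline, find
    the horizontal shift into the right scanline with the lowest sum of
    absolute differences. Jumps in the result mark depth edges."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    half = window // 2
    disp = np.zeros(len(left), dtype=int)
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        costs = [np.abs(patch - right[x - d - half:x - d + half + 1]).sum()
                 for d in range(min(max_disp, x - half) + 1)]
        disp[x] = int(np.argmin(costs))
    return disp

# A feature at x=6 in the left view appears at x=4 in the right view,
# so the recovered disparity there is 2 (larger disparity = closer).
left = np.zeros(16); left[6] = 1.0
right = np.zeros(16); right[4] = 1.0
disp = disparity_row(left, right)
```

Real stereo pipelines add smoothness constraints and sub-pixel refinement, but even this crude version shows how a second viewpoint turns flat pixels into a depth signal.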
Erik, that's extremely interesting to me and your post elicited a whole bunch of new and refreshing thoughts!
That astronaut image makes me think of a slightly looser John Berkey.
As usual, Eric has cut to the heart of the matter. It reminds me of Groucho Marx’s advice to young actors:
“The most important and difficult aspect of good acting is sincerity. If you can fake that, you’ve got it made.”
Thanx for the Journey Mr. Gurney! -RQ
Thanks for this heads up! Fascinating. I'm convinced that teaching the computer to understand human intention in the artistic process will enable animated artwork of unseen character. Keep at it, John!
I was just thinking about this a bit more.
We have machines that fly, speed along the land at 500 mph, and across water at 100 mph and more... yet we still take interest in human footraces and swimming. The point is, when the human element is involved we give it special attention -- the same way we admire a painting over a mechanically reproduced photograph.
Hmmm . . .
Marshall McLuhan once said, "Every new technology makes its predecessor an art form."
And back in the '90s, Brad Holland once quipped, "Digital reproductions will make the one-off more valuable."
Kinda wondering which point of view is going to win out in this circumstance.
Especially since 'painting' was already an art form.
Thomas
http://thomaskitts.com
These results are extremely good painting-wise, better than any PS or Painter filter. This one particularly impresses me, along with the standing astronaut portrait -- it seems 100% done by a human: http://www.flickr.com/photos/tinrocket/2591699603/in/set-72157604026920816/
And at the same time this makes me very sad: if it gets turned into some sort of commercial software (or worse, acquired by one of the big guys like Adobe or Corel), it will become as mainstream as the other filters have. Illustration has some dark days ahead of it...
If you are not already familiar with it, take a close look at Dynamic Auto-Painter (DAP) (http://www.mediachance.com/dap/index.html). It's a good example of what's coming out in commercial SW. Pre-process images with programs like Topaz Simplify, then run them through DAP, finish off with a few manual strokes with an impasto brush or rake in Corel Painter, apply a suitable texture, and you can get convincing results *very* quickly.
Tom Mann / photo.net