Here are four photos of a real scene. In three of the images, the objects on the table are rendered by various 3D digital imaging methods.
The objects are real in only one of them. Can you tell which one?
Rendering Synthetic Objects into Legacy Photographs from Kevin Karsch on Vimeo.
(Video Link) The video demonstrates a method, "Rendering Synthetic Objects into Legacy Photographs," introduced at the SIGGRAPH Asia 2011 conference by a team from the University of Illinois. It allows users to insert virtual objects into a pre-existing photograph, with the final images produced by the open-source LuxRender engine.
A few easy controls allow you to input the geometry of the room and the positions of its light sources. Then the software generates all the diffuse and specular surface effects, the glow of interreflected light, cast shadows, and occlusion shadows.
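The actual system does full physically based rendering through LuxRender, but the two simplest of those surface effects can be sketched in a few lines. This is only a toy illustration (all function names and parameter values are my own, not from the paper): a Lambertian diffuse term, which brightens a surface the more squarely the light hits it, plus a Phong-style specular highlight, which brightens where the mirror-reflected light direction lines up with the viewer.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, to_light, to_eye, light_intensity,
          diffuse=0.8, specular=0.4, shininess=32):
    """Brightness at one surface point: Lambert diffuse + Phong specular."""
    n = normalize(normal)
    l = normalize(to_light)
    e = normalize(to_eye)
    ndotl = dot(n, l)
    if ndotl <= 0:  # light is behind the surface: no contribution
        return 0.0
    # Mirror-reflect the light direction about the normal, then see how
    # closely it matches the view direction; shininess sharpens the highlight.
    r = tuple(2 * ndotl * nc - lc for nc, lc in zip(n, l))
    highlight = max(0.0, dot(r, e)) ** shininess
    return light_intensity * (diffuse * ndotl + specular * highlight)

# Light straight overhead and viewer straight overhead:
# full diffuse (0.8) plus full highlight (0.4).
print(shade((0, 1, 0), (0, 1, 0), (0, 1, 0), 1.0))  # → 1.2
```

Cast shadows, occlusion shadows, and interreflections are exactly the parts this toy version leaves out; they require tracing light paths around the whole scene, which is the hard work the software automates.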
As a traditional painter who only watches the technology from the sidelines, I find all this stuff very impressive, inspiring, and a little scary. When I’m doing a realistic painting of an imaginary scene, I know how heavily it taxes my brain to figure out these complex lighting interactions. The same is true today, I suppose, for 2D digital painters.
Now our machine brothers can do the thinking for us. It’s amazing to see how the computer can make these subtle 3D judgments so effortlessly, especially given that it is inferring the light sources from an existing 2D photograph.
What are the artistic implications of this technology? Once this sort of software finds its way into the hands of everyday users and magazine editors, our visual environment will be flooded with ever more fishy photos. I can put your car in my driveway or my flying saucer on the roof of your house. And I suppose these tools will save a lot of tedious labor in the live-action visual effects field.
Video of "Rendering Synthetic Objects into Legacy Photographs" on Vimeo (with abstract)
Color and Light: A Guide for the Realist Painter
Thanks, Steve Merryman