This weblog by Dinotopia creator James Gurney is for illustrators, plein-air painters, sketchers, comic artists, animators, art students, and writers. You'll find practical studio tips, insights into the making of the Dinotopia books, and first-hand reports from art schools and museums.
You can write me at: James Gurney, PO Box 693, Rhinebeck, NY 12572
or by email: gurneyjourney (at) gmail.com. Sorry, I can't give personal art advice or portfolio reviews; it's best to ask art questions in the blog comments.
Permissions
All images and text are copyright 2015 James Gurney and/or their respective owners. Dinotopia is a registered trademark of James Gurney. For use of text or images in traditional print media or for any commercial licensing rights, please email me for permission.
However, you can quote images or text without asking permission on your educational or non-commercial blog, website, or Facebook page as long as you give me credit and provide a link back. Students and teachers can also quote images or text for their non-commercial school activity. It's also OK to do an artistic copy of my paintings as a study exercise without asking permission.
Showing posts with label Computer Graphics.
A free online tool lets you create a 3D reconstruction of a face from a single image.
Van Dyck's Portrait of Cornelis van der Geest in 3D
You can input a single photo or a painting. Once it finishes processing, you can drag the 3D model around with your mouse and view it from a variety of angles.
It's fun to try it out on a familiar face that's usually seen only from one angle, like Mad Magazine's Alfred E. Neuman.
The tool was created by computer vision scientists at the University of Nottingham using machine-learning software called a Convolutional Neural Network (CNN).
"Our CNN works with just a single 2D facial image, does not require accurate alignment nor establishes dense correspondence between images, works for arbitrary facial poses and expressions, and can be used to reconstruct the whole 3D facial geometry (including the non-visible parts of the face) bypassing the construction (during training) and fitting (during testing) of a 3D Morphable Model." ------
Chris Rodley set up his computer with a deep-learning algorithm to combine 19th-century fruit art with dinosaurs.
The resulting Arcimboldo-esque 'fruitosaurs' have pears and plums rounding out their rib sections. Berry textures stand in for pebbly scales.
Mr. Rodley's software also crossed dinosaurs with an old book of flowers, creating a botanical mashup that's different from what a human collage artist would invent.
There's an overall color and value logic to each dinosaur, and a clever solution for each of their eyes. The background texture is fragmentary, not quite identifiable as specific plants. And the "writing" along the bottom is mumbo-jumbo.
While it's all delightful fun, it raises some serious questions for working illustrators. Is this truly creative or artistic? How will illustrators—or art directors—use these tools? Should illustration competitions such as the Society of Illustrators or Spectrum permit entries created with artificial intelligence? How could they ever stop it?
-----
Here is Chris Rodley's website and Twitter feed. Thanks, Kevin Cheng
Daniel Sýkora has developed software that takes the stylistic information from a painting or sculpture and applies it to video of a moving face, remapping the proportions to match the target face. (Link to YouTube) Note: the video is silent.
It's reminiscent of a Snapchat filter or of a painted-over video, but it seems a bit more sophisticated than either.
It would be fun to see what would happen if they tried to push the limits of the software by testing it against an animal face or a Picasso.
The paper, presented at Siggraph, is called "Example-Based Synthesis of Stylized Facial Animations."
(Link to YouTube) Every year at the Siggraph conference, pioneers in the field of computer graphics share their new technology.
This geeky preview highlights the technical accomplishments that will filter down to the visual effects we see in movies and animated TV commercials. Some of the innovations are ever more complex interactions of particles and fur, and dissimilar materials flowing or melting.
Multispecies simulation of porous sand and water mixtures
Some additional highlights include:
1:30 A text-to-video synthesizer that can make Obama (or anybody) say anything.
1:50 Video of a face talking can be remapped to match a given drawing or painting style.
2:00 A deep-learning method to turn crude cartoons into 3D sculpts.
Novel photo-real images generated by an adversarial network of computers based solely on a written prompt, without human intervention or photo cues. Low-resolution versions on the top row are iterated to higher resolution on the bottom row. Via Olivier Grisel on Twitter
In the ten years of this blog so far we've witnessed startling advances in the ability of computers to create and interpret images.
Despite these advances, most of us human picture-makers can still pride ourselves on our unique ability to create a photo-real image based purely on a written description.
Suppose, for example, you were asked to paint a picture of "a small bird with a pink breast and crown, and black primaries and secondaries." Could you do it? And could you render your picture so believably that someone else might mistake it for a real photo?
Computers are figuring this out, and they're starting to get good at it. Scientists are approaching the problem of text-to-image synthesis by means of a deep-learning technique called "generative adversarial networks" or GANs for short.
This GAN strategy pits two separate computer networks against each other. The Generator's goal is to create images that fit the text prompt; the Discriminator's goal is to distinguish those synthetic images from real ones.
As the Generator tries to create images that fool the Discriminator, its task keeps getting harder, because the Discriminator keeps learning, too. Exactly what the computer "knows" about the structure of form or the aptness of illustrative problem-solving is hard to say, because it wasn't taught by a human; it figured it out on its own, in its own way.
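To make that tug-of-war concrete, here is a minimal, hypothetical sketch of the adversarial setup in PyTorch. It isn't the researchers' code: the "text embedding" is a stand-in random vector and the "real photos" are random noise, but the two competing training objectives work just as described.

```python
# A minimal, hypothetical sketch of a text-conditioned GAN, not any
# research group's actual code. The text embeddings and "real photos"
# below are random stand-ins.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=128, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh())  # pixel values in [-1, 1]

    def forward(self, noise, text_emb):
        return self.net(torch.cat([noise, text_emb], dim=1))

class Discriminator(nn.Module):
    def __init__(self, text_dim=128, img_pixels=64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels + text_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))  # one logit: real vs. synthetic

    def forward(self, img, text_emb):
        return self.net(torch.cat([img, text_emb], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    text = torch.randn(16, 128)             # stand-in for a real text encoder
    real = torch.rand(16, 64 * 64) * 2 - 1  # stand-in for real photos
    fake = G(torch.randn(16, 100), text)

    # Discriminator: label real images 1 and synthetic images 0.
    loss_d = (bce(D(real, text), torch.ones(16, 1)) +
              bce(D(fake.detach(), text), torch.zeros(16, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to make the Discriminator call its images real.
    loss_g = bce(D(fake, text), torch.ones(16, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Notice that the Generator never sees a real photo directly; everything it learns arrives through the gradient of the Discriminator's judgments.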
The resulting images are not an average of existing photos. Rather, they're completely novel creations.
Furthermore, GAN image synthesizers can be used to create not only real-world images, but also completely original surreal images based on prompts such as: “an anthropomorphic cuckoo clock is taking a morning walk to the pastry market.”
How good are these synthetic illustrations?
So far the images are small (about 64 x 64 pixels), and for the most part they still won't fool any humans. But watch out: these are just baby steps.
GANs currently do pretty well generating plausible pictures of birds and flowers, but they have limited success with complex scenes involving human figures, or generalized text prompts such as "a picture of a very clean living room."
They're a bit garbled and incoherent at the moment, but they will develop rapidly. In a few years, advanced A.I. image-creating tools that can illustrate any text prompt in any style will be available cheaply to art buyers everywhere.
Using a combination of X-ray video, motion control tracking, and computer graphics, scientists are able to show what goes on inside animals while they're moving.
This video shows the biomechanics of a guinea fowl walking (Link to video). Knowing more about these movements can help us back-construct a dinosaur's movements based on trackways.
The study of fish using this technique has shown that the skull bones are loosely jointed. The powerful body muscles thought to be needed mainly for swimming also aid the fish in suction feeding: fish need to gulp a large volume of water in order to bring prey into their mouths, and scientists didn't fully appreciate this until seeing the videos.
Below are some close-up images of John Travolta. Which one is the most "classic" or identifiable as Travolta?
Every photo presents unique variations of hair, expression, angle, age, and lighting. Sometimes the person hardly looks like themself. If you have ever painted a portrait likeness, you know that it helps to have a lot of photos of the individual. If you copy just one piece of reference, you may not get a recognizable likeness at all.
So which one is the classic Travolta for you? If you chose the one in the center, there's a good reason for it. It's not a photo at all, but rather a computer average of the other photos.
Computers can take a set of photos of a person and blend them into a single average face, erasing vagaries of illumination, perspective, and expression.
Here is a set of those averaged faces of celebrities. They have polygonal borders, but those polygons at least keep some sense of shape and proportion. To my eye, almost all of them are instantly recognizable (except the ones I didn't know in the first place). I'd guess that they're probably more identifiable than any single photo taken at random.
The scientists who did this work discovered that these averaged faces are also far more recognizable to A.I. facial recognition systems than random photos are. As the authors put it, "the simple process of image averaging can dramatically boost automatic face recognition." Sometimes the averaged faces boosted the success rate from 40% to 100%.
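The averaging step itself takes only a few lines of Python. The hard part, glossed over here, is aligning the photos first so that eyes and mouths coincide; the "photos" below are stand-in noise of identical size.

```python
# A minimal sketch of the averaging step. In practice you'd first align
# and resize real photos; here the inputs are stand-in random arrays.
import numpy as np
from PIL import Image

# In practice: np.asarray(Image.open(path).convert("RGB"), dtype=float)
photos = [np.random.rand(128, 128, 3) * 255 for _ in range(20)]
average = np.mean(photos, axis=0)  # per-pixel mean: lighting and expression wash out
Image.fromarray(average.astype(np.uint8)).save("average_face.png")
```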
The video was a new 1-minute edit of the making of my handmade animated logo "Gurney Studio." The concept was simple: to alternate the motion graphics shots with behind-the-scenes clips.
I made the video for Instagram, where it got a respectable 10K views. I thought just for fun I'd put it on my public Facebook too. It has been shared especially strongly in the Spanish-speaking world and across Southeast Asia.
Facebook gives you some stats. The majority of watchers were men, age 25-34, and 82% of the audience watched it with the sound off.
Here are some preliminary guesses as to why it went so big:
1. Simple intro line: "A different way to do logo animation."
2. No need to speak English to understand the video.
3. Simple, tight editing: Flurry of 1/2 sec. clips at the beginning, followed by A,B,A,B,A,B.
4. No links out, which probably boosted it in FB's algorithm.
5. "Share-ability" which is an elusive thing. People want to share something that makes them look good.
6. Bottom line is THANK YOU! for watching and sharing. That's what makes it happen.
The comments ranged from people who thought it was a funny stunt to:
"What if Cinema 4D was done practically?"
"Let's try this on our project"
"Bro, this is your kind of stuff,"
"Pretty good, Grandpa!"
"Hey, let's dump our computers; we can get the same results working in the garden."
A lot of shares were among people who work in the graphics trade. One multimedia company said "Reality, first and foremost."
Perhaps we have arrived at the intersection of two vectors: one being what is possible with cutting-edge digital tools and the other being what can be created by hand and shot in-camera. The former requires expensive software and expertise on how to use it, and the latter takes some workshop skills and some level of commitment.
As an artist, I am mesmerized by watching examples of the latest software and how it can capture complex interactions of particle effects and fluid dynamics. But I know that with my learning curve and my budget, the best I could ever accomplish with those tools is a very second-rate effort. For me, the fun of the practical build is that all those effects come "for free."
Once you make the device, you can place it into new visual environments and situations. It's the gift that keeps on giving.
Computers are able to take any photo and reinterpret it in any given artist's style. You can give the computer some examples of an artist's work along with a photo of your own, and then the app will come up with an image that superficially resembles the style of that artist.
Modern apps can accomplish more than a Photoshop filter can, because they enlist neural algorithms to separate style from content when they look at images.
They appear to set up a hierarchy of what's important about an image. In the portrait above, they keep the eyes and mouth in place while scrambling the less important jacket and tie.
Image by Manenti1 using the Aptitude filter via Dreamscope
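For readers who want to peek under the hood: the best-known recipe for separating style from content, from Gatys et al.'s neural style transfer paper, can be sketched in a few dozen lines. This is a minimal illustration, not the code behind Dreamscope or any other app, and the two input images are stand-in noise.

```python
# A minimal sketch of style/content separation (after Gatys et al.), not
# the code behind any particular app. The input images are stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = (1, 6, 11, 20)  # early-to-middle ReLU layers of VGG-19

def features(img):
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in LAYERS:
            feats.append(x)
        if i == max(LAYERS):
            break
    return feats

def gram(f):
    # Channel-to-channel correlations, averaged over the whole canvas:
    # this keeps "how it's painted" and throws away "where things are."
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = torch.rand(1, 3, 224, 224)  # stand-in for your photo
style_img = torch.rand(1, 3, 224, 224)    # stand-in for the painting
content_feats = features(content_img)
style_grams = [gram(f) for f in features(style_img)]

result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)
for step in range(200):
    rf = features(result)
    content_loss = F.mse_loss(rf[-1], content_feats[-1])  # keep the layout
    style_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(rf, style_grams))
    loss = content_loss + 1e4 * style_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

The separation happens in gram(): averaging feature correlations over the whole canvas keeps the "how it's painted" statistics while discarding layout, and the separate content loss is what holds the eyes and mouth in place.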
With all these deep-learning apps, I notice that the realism of the photograph always asserts itself through the shapes and colors, much the way rotoscoping does in animation.
In order to better simulate childlike, subjective, or naïve styles, such as those of Cezanne, Renoir, or Matisse, the computer will have to redraw the image so that placement and proportions deviate from photographic reality in the ways those human practitioners' work did.
Nat and Lo, two Google employees who go around the company asking how things work, do a good job explaining how deep learning techniques help computers solve this problem. You may need to follow this link to watch the video on YouTube.
Understanding this process helps us understand how we humans see and interpret images, and it also can help us as artists if we want to develop our own style, or on the contrary, if we want to try to rid ourselves of stylistic conventions.
-----
Previous Related Posts:
Using Computers to Create a Typical Rembrandt
Image Parsing
A trip to the grocery store turns trippy as everything morphs into a dog. (Link to YouTube) Such hallucinogenic images are created using artificial neural networks, computer systems set up to resemble the complex web of nerve cells in the brain. Google and Facebook have used these systems to recognize and classify objects and to recognize faces, but here they're being used to generate images.
They're able to mimic the human tendency for pareidolia and apophenia—the recognition of patterns in what we see, and in particular our hard-wired penchant for seeing faces in things.
As the computer recognizes faces, dogs, or other traits in the target image, it re-renders it to bring out that enhancement. The system can thus reinforce the kind of visualization we do when we're daydreaming. As Google wrote in a blog post: “This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird.”
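That feedback loop is simple enough to sketch: run the image partway through a trained network, then nudge the pixels to amplify whatever the chosen layer already responds to. Here's a minimal, hypothetical version; the network, the layer, and the starting "photo" of random noise are all stand-ins, not Google's code.

```python
# A minimal sketch of the "make it look more like a bird" feedback loop,
# not Google's actual DeepDream code. All inputs are stand-ins.
import torch
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in photo

for step in range(50):
    x = img
    for name, layer in model.named_children():
        x = layer(x)
        if name == "inception4c":  # stop at an arbitrary mid-level layer
            break
    loss = (x ** 2).mean()  # amplify whatever this layer already "sees"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)  # gradient ascent
        img.grad.zero_()
```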
What we're seeing so far definitely goes beyond the Photoshop-filter look. The images strike me as a weird combination of humorous, compelling, obsessive, hideous, and disturbing.
But the style of the images doesn't exactly resemble the way my human brain generates novel images. It's definitely a computer's way of dreaming.
What we're seeing so far is just the tip of the iceberg, though. In our lifetimes we'll be surprised by completely different styles of images, some resembling what we think of as humanlike, and others completely novel.
Software engineers have been coming up with tools to make computer-generated forms look less like they were molded from plastic and more like they were drawn by hand. (Link to YouTube)
StyLit is a new method previewed at SIGGRAPH that lets a user sketch out the light and shadow treatment on a simple form like a sphere. The software then translates that modeling information onto a more complex form in real time. The method could work not only for static illustrations but for animation.
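The paper's full algorithm matches patches between complete renderings of the reference sphere and the target, which is too much to sketch here. But a simpler ancestor of the idea, the "lit sphere" lookup, shows the spirit of it: every surface normal on the target indexes into the artist's painted sphere. A minimal, hypothetical version:

```python
# Not StyLit itself (which matches patches between full renderings), but a
# minimal "lit sphere" lookup in the same spirit. All inputs are stand-ins.
import numpy as np

def lit_sphere_lookup(sphere_img, normals):
    """sphere_img: (H, W, 3) painting of a lit sphere.
    normals: (..., 3) unit surface normals of the target in camera space."""
    h, w = sphere_img.shape[:2]
    # A camera-space normal (nx, ny, nz) maps to the sphere-image pixel
    # whose own surface normal matches it: u from nx, v from ny.
    u = ((normals[..., 0] * 0.5 + 0.5) * (w - 1)).astype(int)
    v = ((-normals[..., 1] * 0.5 + 0.5) * (h - 1)).astype(int)
    return sphere_img[v, u]

sphere = np.random.rand(256, 256, 3)                  # stand-in painted sphere
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0  # flat patch facing the camera
shaded = lit_sphere_lookup(sphere, normals)           # picks the sphere's center color
```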
Video footage contains a lot of subtle movement and vibration. If the wind is blowing or a heavy truck drives by, objects shift slightly. This shifting and bending reveals a great deal of information about structure and flexibility.
Image: Abe Davis, MIT / CSAIL
Researchers at MIT have developed software that takes the tiny movements recorded from a single camera's perspective and turns them into an interactive 3D capture of the scene, letting users manipulate objects in the scene from a variety of control points.
The software has potential not only for structural engineering, but also for low-budget special effects, because it lets you make a filmed environment respond to inputs you control. (Link to YouTube)
A documentary called "Graphic Means," about how graphic design technology changed throughout the 20th century, is in the works for release this fall. (Link to Vimeo video) Most people realize that desktop publishing and the computer revolutionized everything, but the field had been changing incrementally for decades before the 1990s.
"For decades before that, it was the hands of industrious workers, and various ingenious machines and tools that brought type and image together on meticulously prepared paste-up boards, before they were sent to the printer."
"Symphony of Two Minds" is a short film about CG animation finding its own style amid a variety of influences. (Link to YouTube)
It begins with two cartoon characters eating a meal in an aristocratic dining parlor. They remark on how sophisticated their world is. It is visually sumptuous indeed, with hand-held photographic camera work and richly rendered textures.
But the low-class young man hasn't fully elevated himself from his origins in a hyper 2D anime universe, and he keeps experiencing flashbacks to it.
Director Valere Amirault says: "How do we choose to mix influences when dealing with a medium as new as CG animation? From live action independent movies to Japanese anime, CG animation is still a new form of media trying to find its own style, to differentiate itself from traditional cartoons."
-----
Via Cartoon Brew
Where are we headed with augmented reality? This short film by Keiichi Matsuda presents an unsettling vision of a possible future. The film superimposes digital animations over a mundane live action video showing a person's point of view as they ride a bus and shop for food. (Link to Vimeo)
Apps address us as personal assistants. Rewards and bonuses tally up like in a video game. Ads and offers leap out from products. Guidelines appear on sidewalks. The person interacts with this hybrid reality by using voice and hand gestures.
On the Hyper-Reality website, Mr. Matsuda says: "Our physical and virtual realities are becoming increasingly intertwined. Technologies such as VR, augmented reality, wearables, and the internet of things are pointing to a world where technology will envelop every aspect of our lives. It will be the glue between every interaction and experience, offering amazing possibilities, while also controlling the way we understand the world. Hyper-Reality attempts to explore this exciting but dangerous trajectory. It was crowdfunded, and shot on location in Medellín, Colombia."
----
via Cartoon Brew
A team of scientists and art historians announced today how they used statistical analysis, deep-learning algorithms, and 3D printers to create an image intended to look like a typical Rembrandt. Here's how they did it. (Link to YouTube)
DeepArt is a free online computer algorithm that claims to transform your photos into painterly images, using your own painting style to guide the computer.
I thought I'd try it out by uploading a photo (left) and a closely related plein-air oil painting (right) to see what the algorithm comes up with.
After waiting about 10 hours, I got an email saying my image was ready:
The result is pretty disappointing, with strange dark slashes in the sky that weren't in either the photo or the painting. The rest of the building looks really crudely painted.
I tried it once more with a photo and a painting that were really different.
The output uses the color scheme of the painting, and grafts paint-like textures to match the relative values. Beyond that it didn't make any choices that I would regard as artistic. Although the result is marginally more interesting than a Photoshop filter, it doesn't look like a painting. The algorithm doesn't do well with faces, which require a particular attention to the eyes and mouth.
Despite the shortcomings of this algorithm, it's easy to imagine the power of future software that not only patches together stylistic fingerprints, but also uses strategies of machine perception and image parsing.
But it's also a game that lets users customize various parameters of experience, resulting in something that resembles electronic lucid dreaming, or interactive hallucinogenic synesthesia.
There's something mesmerizing about watching little dragons made of semi-viscous cookie batter falling helplessly into heaps and melting into each other. (Link to YouTube)