Here are four photos of celebrities or politicians, greatly degraded by pixelation.
Can you recognize any of them?
Subject 1
Subject 2
If you're having a hard time recognizing them so far, you might try making the displayed images smaller. On a Mac, you can do that by pressing Command and - at the same time.
Subject 3
Subject 4
Ready for the answers? Here are higher resolution photos, together with the pixelated versions.
Leonardo DiCaprio
Scarlett Johansson
Anne Hathaway
Vladimir Putin
If you recognized any of them from the pixelated version, consider how remarkable that is. The images are highly degraded, with no indication of the shapes of the features, just some brown squares where the eyes would be.
Blurring is another way to reduce the information in a photo and to make it lower resolution. Can you recognize these faces? (Answers below in fine print.)
Individuals shown in order are: Michael Jordan, Woody Allen, Goldie Hawn, Bill Clinton, Tom Hanks, Saddam Hussein, Elvis Presley, Jay Leno, Dustin Hoffman, Prince Charles, Cher, and Richard Nixon.
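Blurring of this kind is easy to reproduce in code. Here is a minimal sketch of a separable Gaussian blur, assuming a 2-D NumPy grayscale array; the function name `gaussian_blur` and the parameter choices are mine, not anything from the post or the cited studies.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Blur a 2-D grayscale image with a separable Gaussian kernel.
    Blurring, like pixelation, discards high-frequency detail while
    keeping the broad tonal masses that drive recognition."""
    radius = int(3 * sigma)  # truncate the kernel at ~3 standard deviations
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()  # normalize so overall brightness is preserved
    # A 2-D Gaussian is separable: convolve each row, then each column.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, blurred)
    return blurred
```

Because the kernel is normalized, a flat region stays flat after blurring; only edges and fine detail get smeared, which is why the big tonal shapes survive.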
Recognizing faces out of such incomplete information is a formidable achievement, which tells us something about how we process visual information about faces. Scientists found that "about half of the observers were able to recognize a face of merely 7x10 pixels, and recognition performance reached ceiling level at a resolution of 19x27 pixels."
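The pixelation effect itself is just block averaging: replace each tile of pixels with its mean value. A minimal sketch, assuming a 2-D NumPy grayscale array; the function name `pixelate` and the toy 6x6 input are mine, chosen for illustration rather than taken from the studies quoted above.

```python
import numpy as np

def pixelate(img, block):
    """Downsample a 2-D grayscale image by averaging block x block
    tiles, then upsample with nearest-neighbour so each tile becomes
    one flat square (the classic pixelation look)."""
    h, w = img.shape
    # Crop so the image divides evenly into blocks.
    h, w = h - h % block, w - w % block
    img = img[:h, :w]
    # Reshape into (tile rows, rows per tile, tile cols, cols per tile)
    # and average within each tile.
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # Repeat each averaged value block times in both directions.
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

face = np.arange(36, dtype=float).reshape(6, 6)
mosaic = pixelate(face, 3)  # 6x6 image reduced to a 2x2 grid of flat squares
```

At a target resolution like the 7x10 or 19x27 pixels mentioned in the study, everything finer than a tile is averaged away, leaving only the broad tonal pattern that observers evidently rely on.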
Researchers have drawn some conclusions from experiments like this:
• "Unlike current machine-based systems, human observers are able to handle significant degradations in face images."*
• "Pigmentation cues are at least as important as shape cues."
• "Fine featural details are not necessary to obtain good face recognition performance."
• "The ability to tolerate degradations increases with familiarity."
Detail of a painting by Frank Duveneck |
As painters, this is a good reminder that the broad, simple, tonal lay-in stage is at least as important as the finicky details and the linear relationships that we obsess over.
Here's a practice idea for students: If you can take a big paintbrush and accurately translate it into a few spots of tone, you're well on the way to painting good likenesses.
—A. Yip and P. Sinha, "Role of color in face recognition," Perception, vol. 31, pp. 995–1003, 2002.
—V. Bruce, Z. Henderson, K. Greenwood, P. J. B. Hancock, A. M. Burton, and P. I. Miller, "Verification of face identities from images captured on video," J. Experimental Psychol.: Applied, vol. 5, no. 4, pp. 339–360, 1999.
—V. Bruce, "Face recognition in poor-quality video," Psychol. Sci., vol. 10, pp. 243–248, 1999.
* Machine learning systems are getting much better at recognizing people despite pixelation (see comments).
12 comments:
DiCaprio and Putin known to me and easily discerned, especially while squinting. The two women not known to me. No clue who they are, so meaningless. Squint to see the essence. Good lesson.
IMO this is more of a memory thing. Those photos are really popular and a lot of people have seen them at least once; if you showed me a less popular photo of Putin, I am certain I wouldn't be able to tell you who it is. I really like the thesis about the "mere exposure effect" in "Why the Mona Lisa Stands Out," which I think is more the reason I could tell you who the person in the image is.
Ha, I thought Jay Leno was Angela Lansbury.
Omg, so did I 😂
Interesting. In the pixelated test, I only got Putin correct. In the second test, I was right on 6 of the 12. I thought that Tom Hanks was Jason Priestly! The squinting does help.
Been thinking about what it "feels" like recognizing a person lately... Because I don't really think humans analyse the features of the person they're looking at in the conventional sense... It's more that you look at a person, and you remember how your neurons fired in that moment. So the next time you see the person, you feel the same neurons firing. You know you recognize that person and which person those particular neurons firing corresponded to. This is why it's really hard to draw someone from memory, even if you know that person really well and would instantly recognize them if you met them. Because it's not really their features that are recorded in your brain, but rather the "feeling" of them.
I got three out of the initial four, missing only the first image. In that regard I surprised myself.
According to Jeff Hawkins, author of "On Intelligence" and inventor of the Palm Pilot, Treo, and more, the brain forms "invariant representations" to build a model of the world. I propose these invariant representations are formed in much the same way as the motion-stabilization algorithms used in the video-tracking software now available in many consumer video cameras. The brain uses a similar algorithm with all our senses, together with the dimension of time.
Timed input of information is the reason the mind can compare and draw useful analogy and connections between stabilized input from different senses. The mind is not matching precise information but the cadence, rhythm and interval in which it arrives.
One example of this is how easy it is to recognize the start of Beethoven’s 5th Symphony. Any person can mouth the opening notes with no regard to exact notes or tone. You just have to get close to the cadence.
Another example of this is when you reach into a backpack to find your gloves. You won’t be able to tell one thing from another until you move your hand around. The timed input of the texture and size of the object allows it to be perceived.
I think our brain perceives these overlaid patterns of timed information because all things are waves of energy. Movement makes meaning. With that we can integrate metaphor and logic as one and the same. The idea that metaphor and logic are the same is my own personal invention that I think is a profound discovery. Now how can I monetize that? : )
Hi James,
This is very much how a caricature artist learns to see exaggeration, by breaking down the huge amounts of visual information we're given every day into manageable chunks. I did a Tedx talk a while back about it (if you're interested it's here https://www.youtube.com/watch?v=s6eXcSOJaT4, but skip to about 3.18 for the bit about processing visual information). As Neyutt mentioned, our memory is a key part to the process, which is linked to our subconscious which works on a faster level than our conscious brain. We see concepts first, detail last.
The only one I recognized is Putin and I think that's just because that exact photo is used so much in news and social media. The others I would have never been able to guess.
Not quite true about the machine-based systems. See, for example, https://www.wired.com/2016/09/machine-learning-can-identify-pixelated-faces-researchers-show/
Marian, wow, thanks for that link! Fascinating. I was quoting the paper's author. I think it was written a while ago, and machine learning has made some incredible advances since then.