Third in the series on the megapixel myth.
I’m getting old; I actually had to look up some of this data for confirmation. Seems I just can’t remember everything anymore. As it is, I am not surprised at what I found; it was pretty much as expected.
We need to examine how our eyes see in order to understand how photographic technology relates to them. You probably all remember that there are two types of receptors in the retina: rods and cones. You probably also remember that rods detect light and shape while cones detect colour. What you probably don’t remember is that there are about 92 million rods and only 6 or 7 million cones. Not everyone has the same numbers or the same ratio, of course, which is why some people are better at discerning different shades and others can’t tell blue from green. Another fact we need here: on average, humans can distinguish about 10 million different colours. You’ve probably read that the JPEG format can encode about 16.7 million colours — that’s simply 24-bit RGB, 8 bits per channel. Do you see where this is going?
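To put a number on that “16 million colours” claim, which is nothing more than the arithmetic of 24-bit encoding, here’s a quick check (plain Python; the 10 million figure is just the rough average cited above):

```python
# 24-bit RGB: 8 bits each for red, green and blue.
bits_per_channel = 8
jpeg_colours = (2 ** bits_per_channel) ** 3   # 2^24 = 16,777,216
human_colours = 10_000_000                    # rough average from above

print(f"JPEG (24-bit) colours: {jpeg_colours:,}")
print(f"Colours an average eye can tell apart: ~{human_colours:,}")
print(f"Surplus: ~{jpeg_colours - human_colours:,} colours we can't distinguish")
```

In other words, a plain JPEG already carries more colour distinctions than the average pair of eyes can make use of.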
Looking at resolution first, we have to recognize that there is no exact correlation between the number of receptors in the retina and the pixels on a camera sensor, because the two don’t function the same way, and neither do the receptor types. On the one hand you could say “92 million rods plus 7 million cones equals 99 million pixels,” but the eye doesn’t work like that. Most of the cones are concentrated in the macula, where they do the most good, whereas peripheral vision is handled mainly by rods (to alert us to any motion at the edges of our vision which may be a danger). So you could also say the cones correlate to the pixels and our eyes are basically the equivalent of <10 MP sensors. That’s inaccurate too, as there are rods in the center as well.
In short, you could argue about how to correlate the two until the cows come home. But you don’t need to in order to see that Samsung’s 100+ MP sensors are technically beyond human visual capability, and that in terms of center-weighted vision even a typical DSLR is a serious competitor for the human eye (think 7 million cones plus roughly an equal number of rods in the same “area,” coming out to a resolution of about 14 MP).
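For what it’s worth, here are both rough bounds from the last two paragraphs as arithmetic; the inputs are the estimates above, not measurements:

```python
rods = 92_000_000    # approximate rod count, from above
cones = 7_000_000    # approximate cone count (upper estimate)

# Naive upper bound: count every receptor as a "pixel".
upper_bound_mp = (rods + cones) / 1_000_000
print(f"Receptors-as-pixels upper bound: ~{upper_bound_mp:.0f} MP")

# Center-weighted estimate: cones plus roughly as many rods in the same area.
center_mp = (cones + cones) / 1_000_000
print(f"Center-weighted estimate: ~{center_mp:.0f} MP")
```

Either way you slice it, a 100+ MP sensor sails past the plausible range.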
So it’s true: we have actually created technology capable of producing pictures which are sharper and have more colours than we can see. Brilliant. Never mind arguing that there are reasons for it; we’re talking about how the technology relates to everyday photos, not to specific and specialized applications.
Where our eyes win is in their superior low-light ability. Our pupils may only open to about f/2.0, but in terms of light sensitivity we have a range of around 46 stops. Think of it as being able to use ISO 1.76 × 10^15 instead of ISO 25. That number looks so absurd I assumed I’d screwed up the math, but it checks out; the practical point is simply that we can see in really dim light that would leave a camera totally in the dark.
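Sanity-checking that instead of trusting my memory (each stop is a doubling, so 46 stops above a base of ISO 25):

```python
base_iso = 25    # a typical low base sensitivity
stops = 46       # rough total adaptation range of the human eye

# Each stop doubles sensitivity, so n stops multiply it by 2**n.
equivalent_iso = base_iso * 2 ** stops
print(f"ISO {base_iso} plus {stops} stops is about ISO {equivalent_iso:.2e}")
# -> ISO 25 plus 46 stops is about ISO 1.76e+15
```

(To be fair to cameras, the eye only gets that range by adapting over time; it can’t use anywhere near all of it at once.)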
Some similarities between eyes and cameras show up in extreme lighting conditions. For one thing, both go white when the light is too bright; neither handles overexposure well. When the light goes down, both get grainy (or noisy, if you prefer) and lose colour definition. You may have noticed this in a dark room. (If you’ve ever been in a colour darkroom you’ve noticed how you see nothing, no matter how long you wait for your eyes to adjust, because there is no light to see by.)
Another similarity is in the ability to handle contrast, and this is where ultra-processed images start moving away from reality. When we look at a scene with a wide dynamic range our eyes adjust for the part we’re directly looking at (like using center-weighted metering). If we don’t look into the shadows, the shadows go black. If we don’t look into the highlights, the highlights go white. Eyes have a pretty good dynamic range at any one instant (about 6 stops) but obviously can’t handle too much range. Neither can cameras. Not all in one shot, anyway.
So there’s the first problem: High Dynamic Range (HDR) imaging can look fake because it can produce results outside the range our eyes can actually see. Some digital cameras can beat the human eye right off the bat. Then, if you take one frame perfectly exposed for shadows, another perfectly exposed for mid-tones, and a third perfectly exposed for highlights, you can create a picture with twice the range we’d normally see, or even more. What’s more, it is presented to the eye all at once, which is not the way we’re used to looking at things. The scene before our eyes constantly changes because our eyes constantly adjust to give us the best view of whatever we’re directly looking at, whereas the flat view from the camera is always the same from edge to center to edge.
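To make the bracketing idea concrete, here’s a minimal sketch of exposure fusion in Python with NumPy. The Gaussian “well-exposedness” weighting and the toy gradient scene are my illustrative choices, not how any particular camera or HDR package does it; real pipelines add frame alignment, tone mapping, and so on:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed frames, favouring well-exposed pixels in each.

    frames: arrays of values in [0, 1] (shadow / mid-tone / highlight exposures).
    Each pixel gets a Gaussian weight peaking at mid-grey (0.5), so blown
    highlights and crushed shadows contribute little to the result.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = np.sum(weights, axis=0) + 1e-12          # avoid division by zero
    fused = sum(w * f for w, f in zip(weights, frames)) / total
    return np.clip(fused, 0.0, 1.0)

# Toy demo: a brightness gradient "scene" captured at three exposures.
scene = np.linspace(0.0, 1.0, 11)
under  = np.clip(scene * 0.5, 0, 1)   # underexposed: keeps highlight detail
normal = np.clip(scene * 1.0, 0, 1)   # exposed for mid-tones
over   = np.clip(scene * 2.0, 0, 1)   # overexposed: keeps shadow detail

print(fuse_exposures([under, normal, over]).round(2))
```

Even this toy version makes the point: the fused frame holds usable detail across a range no single exposure captured, and that combined view is what gets presented to your eyes all at once.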
If you then add the extreme sharpness and greater colour definition of high-resolution sensors to this high dynamic range, you get those stunning images that captivate us on ultra-high-definition monitors – and which don’t look real.
A certain amount of psychology comes into play at this point. We look at a lot of manipulated photos and instantly accept them as art, but with these ultra images … something grates across the psyche. They are reality made too real; the opposite of most ‘adjusted’ photos, which are reality made somewhat unreal. When we look away to the actual world around us our minds rebel because they’ve adjusted to this ‘new reality’: now everything else looks dull and fuzzy. We begin to question our own eyesight rather than the image we just saw. After all, how can anything be more real than reality? It is a conundrum our minds struggle with, and one they consistently solve the wrong way.
For the average person, the end result needs to look as much like reality as possible without going beyond it. Artistic shots are generally a reduction of this reality, not an enhancement of it. Buying the top-end equipment that can produce pictures with qualities we can’t actually see makes no sense for most people. Don’t fall for the numbers game: go for the camera that gives you the results you want at a price you can afford. That’s advice that applies to everyone.