Second in the series about megapixel hyperbole.
In researching this post I came across numbers so astonishing that I can’t believe I’ve got them right. Even simplified for comparative and explanatory purposes, they’re so mind-bogglingly off that there simply must be a huge error somewhere. You’ll see what I mean in a bit. First I want to show another comparison set.
In these pictures (taken with the Nikon, which has better optics than the Canon) we have a nice spider in its web. The image is displayed “full screen” on my 1366 x 768 display, and then I’ve taken screenshots at 50%, 100%, and 200% magnification factors.
Nice detail, eh? You can see the little spider hairs and wisps of the almost invisible web. The spider’s body actually measures about 3/8″ across, so these images are all larger than life size.
At 100% we have maxed out the display’s ability, as well as the camera’s. You can really see the tiny details and they are sharp.
Now at 200% we see the image breaking down. The hairs no longer appear sharp, and pixelation is becoming evident in the body. Keep in mind this is asking the 16MP sensor to behave like one with double the resolution in each dimension — a 64MP sensor, since the pixels double both across and down — and put the result up on a display capable of only about 1MP. I cannot stress enough that what you view the final image on is as important as what it was taken with: if the two are not the same resolution (and they almost never are), you simply will not see what the original was like. You can’t get a 16MP image off a 1MP screen. No matter how sharp and finely detailed the original file, it will show up only as well as the display can manage.
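The mismatch is easy to put in numbers. Here’s a quick sketch, assuming the 1366 x 768 display above and a 16MP sensor; the 4928 x 3264 layout is my assumption (a common one for 16MP sensors), not a figure from the cameras discussed:

```python
# Pixel budget of the display versus the sensor.
# 1366 x 768 is the display from the text; 4928 x 3264 is an assumed
# (but common) pixel layout for a 16-megapixel sensor.
display_px = 1366 * 768   # about 1.05 million pixels
sensor_px = 4928 * 3264   # about 16.1 million pixels

print(f"display: {display_px / 1e6:.2f} MP")
print(f"sensor:  {sensor_px / 1e6:.2f} MP")
print(f"the sensor records about {sensor_px / display_px:.0f}x more "
      f"pixels than the screen can show at once")
```

However you shuffle the exact sensor dimensions, the conclusion is the same: the screen can only show a small fraction of the sensor’s pixels at any one time.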
Let’s look at the numbers. I’m going to cheat a lot here, because image resolution and lens resolution are far more complex than a single set of numbers. For one thing, there are two axes to consider, and the ratio between them often differs from camera to display (as it so often does with film as well). For another, lens resolution varies with aperture and from center to edge. If we tried to look at all of this at once the post would turn into a course in calculus, and none of us wants that to happen. What we do want is some relevant relative numbers to help understand why this over-emphasis on megapixels is largely meaningless to the average camera user. So we’re going to skip one axis and average out lens resolution over apertures and from edge to center.
To start with, the Canon’s sensor has a horizontal resolution of 5184 pixels across a physical width of 22.5mm. Its “kit optics” are somewhat disappointing (which is why I took the spider with the Nikon), and I’m not investing as much as the camera cost in a high-quality lens for it, but I do have the 28mm Pentax Super Takumar, which has a quite good resolution of 73 lines per millimeter (this is how lens resolution is measured: how many lines can be clearly distinguished in a millimeter of space; you can look up the particulars).
If we correlate a line to a pixel (you would obviously need a minimum of one pixel to define a line) and do some fast math, we get 73 (l/mm) x 22.5 (mm of sensor width) = about 1642 pixels of width. We have 5184 to work with. Conclusion: either it takes about three pixels to define a line, or the sensor is capable of roughly three times the resolution the lens can deliver. I’m sure the engineers at any camera company can explain this for us.
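Spelled out as a quick calculation (same figures as above; the one-pixel-per-line correlation is the simplification just described):

```python
# How many pixels of width the lens can actually resolve on this sensor,
# assuming one pixel is enough to define one line (a simplification).
lens_lpmm = 73          # Super Takumar 28mm resolution, lines per mm
sensor_width_mm = 22.5  # physical width of the Canon's sensor
sensor_width_px = 5184  # the Canon's horizontal pixel count

lens_limited_px = lens_lpmm * sensor_width_mm  # 1642.5
ratio = sensor_width_px / lens_limited_px      # about 3.2

print(f"lens resolves about {lens_limited_px:.0f} pixels of width")
print(f"sensor provides {sensor_width_px}: about {ratio:.1f}x more")
```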
Let’s work that backwards just for fun: you’d need a lens capable of resolving 230 l/mm to fill that sensor (at one pixel per line). Conversely, the lens could deliver the same result with about a third of the pixels in each dimension — which, since the factor applies to both width and height, works out to roughly a ninth of the total, or about 2MP. That would still be about twice this computer’s display resolution. Unfortunately 2MP is not exactly one of the options on the Canon, so we’ll just have to compromise, choosing the resolution that comes closest to the 73 l/mm rating of the lens (about 1642 pixels of width): the “S2” setting of 1920 x 1280, or 2.5MP.
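The reverse calculation, sketched the same way (one pixel per line again; the 3456-pixel vertical count for this sensor is my addition, used only to get a total). Note that the shrink factor applies to both axes, so the matching total pixel count falls with its square:

```python
# Lens resolution needed to feed every sensor pixel, and the total pixel
# count that would actually match a 73 l/mm lens.  The 3456-pixel height
# is my assumption for this sensor; everything else is from the text.
sensor_w_px, sensor_h_px = 5184, 3456
sensor_width_mm = 22.5
lens_lpmm = 73

needed_lpmm = sensor_w_px / sensor_width_mm          # about 230 l/mm
scale = (lens_lpmm * sensor_width_mm) / sensor_w_px  # about 0.32 per axis
matched_mp = (sensor_w_px * scale) * (sensor_h_px * scale) / 1e6

print(f"lens needed to fill the sensor: about {needed_lpmm:.0f} l/mm")
print(f"pixels a 73 l/mm lens can feed: about {matched_mp:.1f} MP")
```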
So here are the two “full size” 1920 x 1280 images, taken with the same lens (the Super Takumar 28mm) and exposure settings. Can you tell which is ‘native’ and which is ‘reduced’?
I would be remiss not to point out that there is some advantage in high-resolution images even when reduced to normal viewing size, as it gives the computer ‘more to work with’ in deciding which pixels to keep and which to disregard. This is especially important when other processing of the image occurs.
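A toy illustration of that ‘more to work with’ idea: averaging each 2x2 block of a high-resolution capture into one output pixel dilutes single-pixel noise instead of keeping it. (This box-filter sketch is my own simplification; real editors typically use fancier resampling such as bicubic or Lanczos.)

```python
def downsample_2x(pixels):
    """Average each 2x2 block of a grayscale image (a list of rows)
    into one output pixel -- a toy box filter, not what real image
    editors actually use."""
    out = []
    for y in range(0, len(pixels), 2):
        row = []
        for x in range(0, len(pixels[0]), 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

# A flat grey patch with one bright "noise" pixel: after downsampling,
# the outlier is diluted into its neighbours instead of surviving intact.
patch = [
    [100, 100, 100, 100],
    [100, 100, 100, 100],
    [100, 100, 255, 100],
    [100, 100, 100, 100],
]
print(downsample_2x(patch))  # [[100, 100], [100, 138]]
```

Each output pixel here is built from four source pixels, which is exactly the extra information a higher-resolution original gives the computer to work with.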
The moral here is: don’t go spending extra money on more pixels if you don’t need them for the end result. Especially do not fall for the myth that those extra dots will somehow magically make your pictures better, because they won’t.
The next part of this series will deal with eyesight, and why hyper-processed images don’t look real. I will get to it as soon as I can figure out how to explain such a complex subject in easy-to-understand terms.