Have we hit a megapixel resolution limit?

Quite the opposite, actually. We are doing quite well, and in a few cases we have finally achieved the minimum necessary resolution - some cameras have even started omitting their anti-aliasing filters.

Our images have become large. The camera may be 24 megapixels, but our common use is about 2 megapixels for the video screen, or perhaps 7 megapixels printed in an 8x10 inch print. But we can crop a lot, and we might print 16x20 inches. Some people say that since some camera sensors now contain 200 or 250 pixels per mm, and since good lenses resolve maybe 100 line pairs per mm, we must have hit a resolution limit. They make a serious mistake though, not understanding how digital sampling works.

We're not near a limit. I'm not sure the concept of a limit even applies. The manufacturers keep increasing the megapixels, so they obviously don't see a limit. And certainly, as long as we keep saying Wow about the new sensors, we're not there yet.

Resolutions of film and lenses are generally expressed in line pairs per mm. Black lines have white lines between them, which makes a pair of two lines, black and white, both of which have to be resolved. So the minimum sampling resolution needs to be 2x the number of lines. And the really big point is, that's just the Minimum. More is good.

35 mm format lenses resolve maybe 100 lp/mm ± 40, depending on the lens. Resolve means we can make it out to be lines. Not necessarily good clear sharp lines, but resolve means at least we can recognize vague smudges of lines. Higher resolution would show more distinct lines, with more detail in the lines.

Panatomic-X film can resolve 170 lp/mm. Color film maybe half of that.

But digital and film are extremely different worlds, very different rules. Film will have a limit. For one thing, film cannot oversample. Oversample is a keyword.

In the earliest days of digital, Nyquist (of the Nyquist sampling theorem) showed that we must sample AT LEAST at 2x the finest detail to prevent aliasing, which is false detail created by artifacts of insufficient sampling resolution. One example of false detail is moire patterns, added detail that was not actually in the image. Jaggies (also aliasing) are another example. Basically the 2x is the line pair thing, but the theorem is much deeper than that. One result is that 2x sampling is the absolute minimum level for accurate reproduction without creating false detail (aliasing). A rate even higher than 2x (oversampling) always gives a better quality of reproduction. However, our camera sensors have always required (until very recently) anti-aliasing filters to reduce the detail - to slightly blur the image enough so that the detail will not be greater than our sensor resolution can resolve. Meaning, we've not even been at the minimum resolution.
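The frequency folding Nyquist described can be sketched in a few lines. This is a minimal, hypothetical example (the function and its name are my own, and the 100 lp/mm figure is just the lens example from this article): a pattern sampled below 2x its frequency "folds" into a false, lower frequency - which is exactly what moire is.

```python
# Sketch of Nyquist aliasing, with hypothetical numbers.
# A sampled repeating pattern appears at its frequency folded
# into the range 0 .. sample_rate/2 (the Nyquist band).

def apparent_frequency(signal_freq, sample_rate):
    """Frequency a sampled pattern appears to have after folding."""
    folded = signal_freq % sample_rate
    return min(folded, sample_rate - folded)

# A 100 line-pair/mm pattern needs at least 200 samples/mm (2x).
print(apparent_frequency(100, 200))  # 100 - exactly at the minimum
print(apparent_frequency(100, 250))  # 100 - oversampled, still correct
print(apparent_frequency(100, 160))  # 60 - undersampled: false 60 lp/mm moire
```

The undersampled case does not simply lose the 100 lp/mm detail; it invents a 60 lp/mm pattern that was never in the image, which is why the anti-aliasing filter blurs such detail away instead.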

The mistake is to imagine that image detail corresponds to pixel detail in any one-for-one relationship. Sampling simply doesn't work that way. Some might imagine 1x sampling is a limit, but instead, 1x sampling is simply insufficient. We need lots of pixels. Of course, the detail in vast areas of our images doesn't approach whatever maximum we do achieve, so the problem is not always difficult. Depth of field sees to that; we are focused at only one distance. And the scene content also contributes. We can do pretty well now at lower levels. But we are not near a limit, if a limit even exists. We might do more than we need, or than is convenient, but there is no point where things start going bad.

A camera sensor with 256 pixels per mm can at best minimally resolve 128 line pairs per mm, at the Nyquist 2x minimum. That may sound like a limit, but it's a minimum limit (not a maximum limit). I hope to show that more sampling pixels are always better. I'm speculating here, but possibly 2x sampling could be excellent if we could get all the lines perfectly aligned and centered on the pixels, with the same spacing as the pixels, and perfectly straight, not slanted to the pixels. But the real world is random and chaotic; things don't line up. And if the lines are slanted or curved, that is additional finer detail (detail within the lines) which is better if resolved.

How Digital Sampling Works

Here is an image from a printed Smithsonian Magazine, September 2014, page 52: a 9000 year old man in North America (it being my government's publication, I assume I share their rights to use it). These are scanner images, but a digital camera samples with exactly the same principles, the same sampling concepts, be it pixels per inch or pixels per mm (and the scanner is adjustable, which is handy here). It is a CCD flatbed scanner, which has a lens in it, focusing the 8.5 inch glass bed onto about a 2 inch digital sensor, like the camera does, which then samples it with normal digital sampling, like the camera does. The scanner resolution in dpi is referenced to the original inches on the glass bed. The magazine image is printed at the normal 150 halftone dots per inch. Our brain may see a subject pattern in those ink dots, but the ink dots are the detail in this image. This digital reproduction job is to resolve those 150 dots per inch.

This first image is scanned at 150 dpi, which is 1x sampling. We knew that 1x sampling would be insufficient. The very word "sampling" means we only see a sample of the actual detail there, less than 100%. The 1x sampling does reproduce a picture, but it has the expected moire (aliasing, which is false detail due to insufficient sampling, less than the Nyquist minimum). The original was printed 1.6 inches wide. This is a 100% view (scan is shown full size). The scan is 150 pixels per inch, of 150 halftone dots per inch (much like 150 line pairs per inch). The arrow points to a little bump used to identify the area that is also shown enlarged, to see some pixels. This enlargement is 3200% size, shown for consistency with those below. This enlarged crop is only about 12 pixels wide. A pixel is just digital numbers for the sample representing one single color, the averaged color of that pixel area (not unlike pictures set with mosaic tile chips).

Next scan is 2x sampling, the Nyquist minimum, at 300 dpi. The scanned image was twice as large as 150 dpi, so it is shown here at half size (to be same size). It looks pretty good, better than above. There is no aliasing, because we accomplished 2x sampling. But 2x sampling is a minimum, and in the enlargement of the larger original at 1600% (to be same size), we still don't see any ink dots, simply not resolved well enough to recognize them. The small image looks fine (has the minimum resolution), but we don't have sufficient resolution to reproduce the dot detail actually there. What we see is of course the image detail, but it's not an adequate reproduction of the subject detail. We see no halftone dots, however we can recognize where the edge and the little bump is (we see larger detail).

Let's try more, next scan is 4x sampling at 600 dpi. It is 4x size, reduced here to show same size. And an 800% enlargement of the original, same size, which is starting to show a strong hint of the dots, spots at least. We could claim to resolve the dots, so far as we can tell something is there, but like minimum line pairs, fuzzy stuff is not a great result. With only about four pixels from dot to dot, we really don't see any circles. Simply not enough sampling resolution. Better is possible.

Again more, to 8x sampling, next scan is 1200 dpi. It is 8x size, reduced to show same size here. And a 400% enlargement (same size). Bingo, we actually resolve some dots now - with about eight pixels from dot to dot, it has a few pixels across a dot to indicate its round shape. We did not see this before. Oversampling (more than Nyquist 2x) does improve the resolution seen at full size.

And next scan is a 2400 dpi scan, 16x size, shown reduced to 1/16 for same size, and also shown 200% enlarged (same size). About 16 pixels dot to dot now, and it is noticeably smoother, but not a big difference in detail. Oversampling is obviously a better image than at the point before where we could just make out that dots existed. We probably are approaching a limit of usefulness in this case, for this data (but again, it is 16x sampling, or 8x more than Nyquist, or 8x oversampled). Of course, when the image is resampled smaller to 1/16 size on the left (resolution is discarded), it's the same as the 300 dpi image then. But there is more detail in it if we see it larger. Again, here we are scanning printed ink dots, NOT a real photo. These dots are the detail in this image.
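The scan series above can be sketched numerically. This is a simplified, hypothetical one-dimensional simulation (pure Python, invented numbers, not taken from the actual scans): a square-wave "dot" pattern is area-averaged into pixels at 1x, 2x, 4x, and 8x sampling, and the contrast of the sampled result shows what each rate resolves. The pattern is deliberately offset from the pixel grid, as in the real world where nothing lines up.

```python
# Simulate sampling a row of "halftone dots" (a square wave) by
# area-averaging pixels of various sizes. Contrast between the
# lightest and darkest sampled pixel shows resolved dot detail.

def sample(signal, pixel_size):
    """Area-average the fine-grained signal into pixels pixel_size units wide."""
    return [sum(signal[i:i + pixel_size]) / pixel_size
            for i in range(0, len(signal) - pixel_size + 1, pixel_size)]

# A fine square wave: dots 16 units wide with 16 unit gaps (one "line
# pair" per 32 units), shifted 5 units so edges don't align with pixels.
period = 32
signal = [1.0 if ((i + 5) % period) < period // 2 else 0.0
          for i in range(period * 20)]

for label, pixel_size in [("1x", 32), ("2x", 16), ("4x", 8), ("8x", 4)]:
    pixels = sample(signal, pixel_size)
    contrast = max(pixels) - min(pixels)
    print(f"{label} sampling: contrast {contrast:.2f}")
```

With this misaligned pattern, 1x sampling shows zero contrast (every pixel averages to gray, the dots vanish entirely), 2x shows only partial contrast (vague smudges), and the oversampled rates recover the full black-to-white contrast - the same progression the scans show.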

Note that these black spots are NOT holes in the skull or in the background. They are only black printed ink dots, whose purpose is to influence the average color we perceive there. Camera images don't do that; they just show more pixels of the actual correct color. But printing has only four colors of ink, and cannot do that. Printing uses dots of the four colors (magenta, cyan, yellow and black) to average out to simulate one of the 16.7 million possible colors. Notice the skull in this last sample above has black dots on white paper, and also has traces of magenta, cyan, and yellow dots added in, to influence the final average color we see. The background overprints magenta and yellow to make red.

Any of these left side pictures (300 dpi to 2400 dpi, which are at least 2x sampling) are fine if the purpose and goal is to reproduce it small (relatively near original size). So it is true that if the 2x image size was all you wanted, then it is enough. Except even then, oversampling significantly and then reducing significantly smaller is a noise reduction technique. And if you want to see more detail enlarged, more sampling resolution is simply a better reproduction. There is more detail available that oversampling resolutions can capture. And of course, we can always find a subject needing even greater resolution.

It should be also noted that digital camera sensors have required an anti-aliasing filter - to blur and remove the finest detail (high frequency content, finer detail than the sensor resolution can resolve), to prevent moire - because it is necessary, because sensor sampling has always been insufficient. Megapixels are finally becoming sufficient now to allow not using any AA filter, implying that sensors are reaching the minimum 2x level (except we can still see moire now and then, more pixels could help some cases). Maybe some cases have reached Minimum resolution, but certainly we have not reached any Maximum sampling resolution limit.

A camera example, Nikon D800 with 36 megapixels (f/8, 70-200 mm lens at 130 mm)

Full frame

100% crop, no sharpening

800%, 33x30 pixels

The 800% view shows this case has about 2 pixels across an eyelash. Aliasing causes the jaggies (false detail due to insufficient resolution), generally shown as 3 or maybe 4 pixels. This is certainly not excessive resolution - if we had more and smaller pixels, the detail would be smoother, with smaller jaggies. The digital pixels do not create any detail; a pixel merely shows the color of one sample spot of the detail already created by the lens. This is adequate resolution for most purposes, but if we want to see maximum detail, there is no reason to imagine any sampling limit exists. This is 36 megapixels. At only 9 megapixels, the pixels would be 2x larger in each dimension (jaggies become rather large). At 144 megapixels, the pixels would be half this size, which could show smaller, finer detail.
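That last scaling point follows because, on a fixed sensor size, linear pixel density scales with the square root of the megapixel count. A quick sketch (the 35.9 mm FX sensor width is from this article; the 3:2 aspect ratio and the helper function are my assumptions):

```python
# Pixel density (pixels per mm) for various megapixel counts on the
# same FX-size sensor. Linear density scales with sqrt(megapixels):
# 4x the megapixels means 2x the pixels per mm (half-size pixels).
import math

def pixels_per_mm(megapixels, sensor_width_mm=35.9, aspect=3 / 2):
    width_px = math.sqrt(megapixels * 1e6 * aspect)  # pixels across the width
    return width_px / sensor_width_mm

for mp in (9, 36, 144):
    print(f"{mp} MP: {pixels_per_mm(mp):.0f} pixels/mm")
```

The 36 MP result lands at about 205 pixels/mm, matching the D810 figure later in this article; 9 MP gives pixels of twice the dimension, and 144 MP half the dimension.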

So if all you want is 2x sampling of most image detail (the absolute minimum requirement), a sensor density of 256 pixels per mm would reproduce 256/2 = 128 line pairs per mm lens resolution at that level, and which probably will prevent moire.
But if you want 8x sampling, a sensor density of 256 pixels per mm would reproduce 256/8 = 32 line pairs per mm lens resolution at that level.
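That arithmetic as a tiny helper (the function is hypothetical, the numbers are the same as above):

```python
# Lens detail (line pairs per mm) reproduced by a given sensor density
# at a chosen sampling factor (pixels per line pair).

def reproduced_lp_per_mm(pixels_per_mm, sampling_factor):
    # sampling_factor = 2 is the Nyquist minimum; higher is oversampling.
    return pixels_per_mm / sampling_factor

print(reproduced_lp_per_mm(256, 2))  # 128.0 lp/mm at the 2x minimum
print(reproduced_lp_per_mm(256, 8))  # 32.0 lp/mm at 8x sampling
```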

We can have more sampling than we need for our goal, more than many purposes need, more than a small image needs, but in more extreme cases, it's pretty difficult to have too much sampling resolution. So don't let them tell you we already hit a limit. If we incorrectly equate pixels per inch with lines per inch, we can't even start. :) At least not without an anti-aliasing filter (which is required when the sampling resolution is insufficient).

So more sampling is always a better quality reproduction (trying to reproduce the original lens image well). I hate to say it that way (could be misunderstood), because certainly we can scan resolutions much higher than our goal needs (to copy a paper photo print for example... scanning at 300 dpi is sufficient, because our printers cannot print more, and the color print probably didn't have more to give anyway). But if we're going to zoom in to examine finest detail, then it does show, if the detail was there (for example, scanning film).

But certainly there is no one for one relationship between pixels and line pairs. If we expect to zoom in and see more detail, we always need lots more pixels.

NOTE, FWIW: There are also other factors.

If we assume some hypothetical image that actually shows detail at say 40 line pair per mm, and if we then enlarge that image to view it at double size, obviously now its detail shows only 20 lp/mm. Enlargement reduces resolution.

A DX camera like the 24 megapixel Nikon D7200 has 6000 pixels across a DX 23.5 mm sensor, which computes 255 pixels per mm. But we must enlarge this to view it, maybe ten times larger, reducing viewed resolution to 25 pixels per mm, but we still have maybe 600 pixels per inch (capable of greater things).

A 36 megapixel Nikon D810 is 7360 pixels over a FX 35.9 mm sensor, which is "only" 205 pixels per mm (larger pixels).

However, since the DX image is cropped smaller (24x16 mm), it must be enlarged 50% more to be viewed at the same size as the FX (36x24 mm), comparing as if originally 255/1.5 = 170 pixels per mm in what we see. Enlargement reduces resolution.
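The DX vs FX arithmetic above, as a quick check (the pixel counts and sensor widths are this article's figures for the D7200 and D810; the helper function itself is just illustration):

```python
# Compare on-sensor pixel density of DX vs FX, and the effective
# density after the DX frame's extra 1.5x enlargement to viewing size.

def pixels_per_mm(width_pixels, sensor_width_mm):
    return width_pixels / sensor_width_mm

dx = pixels_per_mm(6000, 23.5)   # Nikon D7200, DX: about 255 pixels/mm
fx = pixels_per_mm(7360, 35.9)   # Nikon D810, FX: about 205 pixels/mm

# DX must be enlarged 1.5x more than FX to be viewed at the same size,
# dividing its effective density in the viewed comparison by 1.5.
print(f"DX: {dx:.0f} pixels/mm on sensor, {dx / 1.5:.0f} compared at FX size")
print(f"FX: {fx:.0f} pixels/mm on sensor")
```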


Copyright © 2015-2017 by Wayne Fulton - All rights are reserved.
