
Have we hit a megapixel resolution limit?

We are doing quite well, but it's actually the opposite kind of limit. In a few cases, our megapixels have finally achieved the minimum resolution, so some cameras have even started removing the anti-aliasing filters.

Our images have become large. The camera might be 24 megapixels, but our common use is only 2.07 megapixels for a 1920x1080 pixel video screen, or perhaps 7.2 megapixels for an 8x10 inch print at 300 dpi. A 30x20 inch print at 300 dpi would need 54 megapixels; 24 megapixels prints that size at only 200 dpi (which should be adequate at that large print size). Some people say that since some of our camera sensors now contain 200 or 250 pixels per mm, and since good lenses resolve maybe 100 line pairs per mm (which is 200 pixels per mm), we must have hit a limit for resolution. They make a serious mistake though, not understanding how digital sampling works. Yes, our sensors have now approached about two pixels per line pair of lens resolution, but that's considered the minimum, and we're finally there. This article is about sampling resolution. It might be considered a first primer on digital imaging resolution.
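For readers who want to check that arithmetic, here is a short Python sketch (the sizes and dpi values are the examples above; the function names are just for illustration):

```python
# Megapixels needed for a given print size and dpi, and the dpi that a
# given megapixel count can deliver at that size (examples from the text).

def megapixels(width_in, height_in, dpi):
    """Pixels needed to print width x height inches at dpi, in millions."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

def print_dpi(total_mp, width_in, height_in):
    """dpi that total_mp megapixels provides over width x height inches."""
    # pixels scale with dpi in both dimensions, so dpi = sqrt(pixels / area)
    return (total_mp * 1e6 / (width_in * height_in)) ** 0.5

print(round(1920 * 1080 / 1e6, 2))   # HD video screen: 2.07 megapixels
print(megapixels(10, 8, 300))        # 8x10 inch print at 300 dpi: 7.2
print(megapixels(30, 20, 300))       # 30x20 inch print at 300 dpi: 54.0
print(round(print_dpi(24, 30, 20)))  # 24 megapixels over 30x20 inches: 200 dpi
```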

I'm not sure there is any concept of a limit, but we're not near it. The manufacturers keep increasing the megapixels; they obviously don't see a limit. And certainly, as long as we keep saying Wow about the new sensors, we're not there yet.

Resolution test targets are often printed as closely spaced rows of parallel lines. Resolutions of film and lenses are generally expressed in line pairs per mm resolved. Each black line has a white line beside it; together they are a line pair, and that pair must be resolved by at least two pixels. So the minimum sampling resolution needs to be 2x the number of black lines, counted as line pairs. And the really big point is that 2x is just the minimum. A little more is always good.
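That 2x relationship is simple enough to express directly in code (a sketch; the function names are mine, not standard terms):

```python
# A line pair is one black line plus one white line, and the pair needs
# at least 2 pixels to be resolved (the Nyquist minimum).

def min_pixels_per_mm(line_pairs_per_mm):
    """Minimum sampling density for a given lens resolution."""
    return 2 * line_pairs_per_mm

def max_line_pairs_per_mm(pixels_per_mm):
    """Best case line pairs a sensor density could minimally resolve."""
    return pixels_per_mm / 2

print(min_pixels_per_mm(100))     # a 100 lp/mm lens needs >= 200 pixels/mm
print(max_line_pairs_per_mm(256)) # a 256 px/mm sensor resolves at best 128 lp/mm
```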

Lenses for 35 mm format resolve maybe 100 lp/mm (give or take, depending on the lens). Resolve means we can make out the lines in the image. Not necessarily good clear sharp lines; resolve means at least we can recognize a vague pattern of lines. But higher sampling resolution would show more distinct lines, clear and sharp, with more detail in the lines themselves.

Panatomic-X film could resolve 170 lp/mm. Color film maybe half of that.

But digital and film are extremely different worlds, with very different rules. Film has a fixed limit, and film cannot oversample (oversample is a keyword here). The image created by the lens is an optical analog image, with analog resolution. Digital simply tries to reproduce that image by digital means, using tiny sampled areas called pixels, each of one color. Digital resolution cannot increase analog resolution; it only hopes to reproduce it.

In the earliest days of digital, Nyquist (of the Nyquist sampling theorem) showed that we must sample at AT LEAST 2x the detail level to prevent aliasing. Aliasing is false detail created as artifacts when the sampling resolution is insufficient to reproduce the detail accurately. One example of false detail is a moire pattern, added detail that was not actually in the image. Jaggies are another example of aliasing, where smooth straight lines appear as staircase steps (steps of pixel size).

Basically, the 2x requirement is the line pair thing, but the theorem is much deeper than that. One result is that 2x sampling is the absolute minimum level for accurate reproduction without creating false detail (aliasing). A rate even higher than 2x (oversampling) always gives a better quality of reproduction. However, until very recently, our camera sensors have always required anti-aliasing filters to reduce the lens detail, to slightly blur the image enough that the detail will not exceed what our sensor resolution can resolve. The anti-aliasing filter eliminates the artifacts by removing the finer detail that the sensor sampling resolution could not resolve. Meaning, we had not reached even the minimum resolution. We could still use the sensor this way, by limiting the full resolution the lens could deliver. And now we are starting to be able to remove some of the anti-aliasing filters, because megapixel sampling is finally reaching the minimum necessary.
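A tiny numeric sketch of Nyquist aliasing (hypothetical numbers: a 100 cycle per mm line pattern sampled across a 1 mm strip, with a crude DFT peak search written just for illustration). Sampling at 150 samples per mm, less than 2x, reports a false 50 cycle pattern; sampling at 400 per mm (4x) recovers the true 100:

```python
import math

def dominant_cycles(signal):
    # Crude DFT peak search: return the frequency (cycles over the window)
    # carrying the most energy, up to the Nyquist bin n/2.
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k

TRUE_FREQ = 100  # line pattern: 100 cycles (line pairs) per 1 mm strip

def sample(rate):
    # Point-sample the pattern at `rate` samples over the 1 mm strip
    return [math.sin(2 * math.pi * TRUE_FREQ * i / rate) for i in range(rate)]

print(dominant_cycles(sample(150)))  # undersampled (1.5x): aliases to 50
print(dominant_cycles(sample(400)))  # oversampled (4x): true 100 recovered
```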

The mistake is to imagine that image detail corresponds to pixel detail in some one-for-one relationship. Sampling simply doesn't work that way. Some might imagine 1x sampling is a limit, but instead, 1x sampling is simply insufficient. The minimum required is 2x, and even more pixels reproduce the lens detail better. The detail in vast areas of our images doesn't approach whatever maximum we do accomplish, so the problem is not always difficult. Depth of field sees to that, since we are focused at only one distance, and the scene content also contributes. We can do pretty well now at lower levels. But we are not near a maximum limit, if a limit even exists. We might do more than we need, or more than is convenient to use, but there is no point where things start going bad.

A digital camera sensor with 256 pixels per mm can at best minimally resolve 128 line pairs per mm of analog lens resolution, at the Nyquist 2x minimum. That may sound like a limit, but it's a minimum, not a maximum. I hope to show that oversampling with more than 2x sampling pixels is always better. Making this part up (call it a joke), but possibly 2x sampling could be excellent if we could get all the lines perfectly aligned and exactly centered on the pixels, with the same spacing as the pixels, and very straight, not slanted to the pixels. But the real world is random and chaotic; things don't line up. And if the lines were slanted or curved, that is additional finer detail (detail within the lines) that may not be resolved.

The mistake those arguments make is not realizing this:

The pixels are a reproduction sampling method of the lens detail.

The resolution number computed from the pixels is NOT the image detail number.

The pixels are only the sampling detail of the lens image reproduction.

The image detail has already been defined by the lens. Nothing done in the camera or after can increase the image detail, but it can be reduced.

Enlarging the pixels to view the image larger reduces the viewing resolution.

A smaller sensor must be enlarged more than a larger sensor, to view both at the same viewing size.

Printing large enlargements from small sensor images makes this very obvious. Monitors are less of a problem, but if you simply keep zooming the image larger in a photo editor, you will see it.

How Digital Sampling Works

Below is an image from a printed Smithsonian Magazine, September 2014, page 52, of a 9000-year-old man found in North America (it being my government's publication, I assume I can show it). These are scanner images, but a digital camera samples with exactly the same principles and sampling concepts, be it pixels per inch or pixels per mm (and the scanner's adjustable resolution is handy here, allowing experiments with variable resolution). This is a CCD flatbed scanner, which has a lens in it, focusing the 8.5 inch wide glass bed onto about a 2 inch digital sensor, as the camera lens does, and then sampling that lens image digitally, as the camera does. (The scanner samples horizontally one row at a time with the sensor pixel spacing, and vertically with the carriage motor stepping motion, so technically, the scanner always scans one dimension at highest resolution, and then resamples that dimension smaller to the specified resolution.) The scanner resolution dpi is referenced to the original inches on the glass bed. This magazine image is printed at the normal 150 halftone dots per inch (common to most magazines). Our brain sees a subject pattern in those ink dots, but the ink dots are the actual detail in this magazine image.

We should never forget that a halftone screened image in printed material (books, magazines, newspapers) is no longer a real photographic image. It is just a pattern of printed dots of four ink colors simulating the image. Real actual genuine printed photographs (from the one hour labs) are NOT halftone screened images; they are pixels, but of continuous color. Images printed on ink jet printers are a bit iffy; they arguably attempt to print pixels, but use a scattered dither, a much less regular pattern with no exact boundaries.

The digital reproduction job of scanning here is to resolve those 150 ink dots per inch. This first image is scanned at 150 dpi, which is 1x sampling of the halftone's 150 dots per inch on the original.

Sampling theory tells us that 1x sampling will be insufficient. The very word "sampling" means we only see a sample of the actual detail there, less than 100%. Each sample is a pixel, which numerically represents the averaged color of that pixel area. The 1x sampling does reproduce a picture if we stand back, but this one has the expected moire (aliasing, false detail due to sampling below the Nyquist minimum). This 391x313 pixel image is a 100% view (shown full size, pixel for pixel). The scan is 150 pixels per inch (of roughly 2.6x2.1 inches on the magazine page), of 150 halftone dots per inch (much like 150 line pairs per inch), which is 1x sampling of that detail.

The added yellow arrow points to a little bump corresponding to the area also shown enlarged, to see some pixels. The enlargement is shown here at 3200% size, for size consistency with those below. This enlarged crop is only about 12 pixels wide, which pretty clearly shows what a digital image is. A pixel is just digital numbers for one sample representing one single color, the averaged color of that pixel area (not unlike pictures set with mosaic tile chips: each tile or pixel is one color, and our brain recognizes the image in the pattern they make). Greater resolution simply shows smaller pixels, more accurately representing the color of each smaller area, which then can distinguish more original detail.
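The idea that a pixel is the averaged color of its sample area can be sketched in a few lines of Python (made-up grayscale values: 0 = ink, 255 = white paper):

```python
# Each output pixel is the average of an n x n block of the finer detail,
# just as a scanner or camera pixel averages the color over its tiny area.
def downsample(image, n):
    h, w = len(image), len(image[0])
    return [[sum(image[y * n + dy][x * n + dx]
                 for dy in range(n) for dx in range(n)) // (n * n)
             for x in range(w // n)]
            for y in range(h // n)]

# A 4x4 checkerboard of "ink dots" (0) on white paper (255)
detail = [[255 if (x + y) % 2 else 0 for x in range(4)] for y in range(4)]

# At insufficient resolution, each 2x2 block averages to a uniform mid gray:
# the dot detail vanishes into one averaged color per pixel.
print(downsample(detail, 2))
```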


150 dpi scan

The next scan is 2x sampling at 300 dpi, which is the Nyquist minimum for 150 ink dots per inch. The scanned image was twice the size of the 150 dpi scan, so it is shown here resampled to half size (to be the same size). It looks pretty good, better than above, because there is no moire (aliasing), because we accomplished 2x sampling. But 2x sampling is a minimum, and in the enlargement of the larger original at 1600% (to be the same size), we still don't see any ink dots; they are simply not resolved well enough to recognize. The small image looks fine (it has the minimum resolution), but we don't have sufficient resolution to reproduce the ink dot detail actually there. What we see in the left picture represents the lens image detail adequately (this viewing size cannot use any more detail), but it's not an adequate reproduction of the actual subject detail (which is halftone dots here). We see no halftone dots, but we can recognize where the little bump is (we see the larger detail).


300 dpi scan

Let's try more. The next scan is 4x sampling at 600 dpi. It is 4x size, reduced here to show the same size, with an 800% enlargement of the original (same size), which is starting to show a strong hint of the halftone dots, spots at least. We could claim to resolve the dots, so far as we can tell something is there, but as with minimum line pairs, fuzzy stuff is not a great result. With only about 3 pixels from dot to dot, we really don't see any circles. Simply not enough sampling resolution. Better is possible; however, it won't help our smaller left image, which is already doing all that its size can do.


600 dpi scan

Again more below, to 8x sampling: the next scan is 1200 dpi. It is 8x size, reduced to show the same size here, with a 400% enlargement (same size). Bingo, we actually mostly resolve some halftone dots now, with about 4 or 5 pixels across a dot, able to suggest its round shape. We did not see this before; they really didn't look much like ink dots until now. It's much greater detail than our small complete picture can use, but the enlargement does better show the actual detail present, of the ink dots printing it. (The purpose of high resolution is enlargement. When printing, 300 dpi is normally optimum, meaning 10 inches requires 10 x 300 = 3000 pixels of dimension.) The skull area is mostly white paper and large black ink dots, but we do see some scattered cyan, magenta, yellow, and black (CMYK) ink dots in the skull (some superimposed), which are just trying to add to and influence the average color of the area, and certainly should NOT be confused with subject detail.

Oversampling (more than the Nyquist 2x) does improve the resolution of the dots, as shown enlarged at full size. It is called oversampling in terms of the Nyquist minimum (to prevent moire), but it is NOT necessarily excessive for our enlargement usage. Oversampling is not defined relative to your intended use; it refers to the Nyquist minimum. Yes, this higher resolution is overkill (wasted effort) for the reduced size photo reproduction shown here. The greatly larger number of pixels is not helping our small reproduction at left, if that was our goal. But it certainly can help larger goals.


1200 dpi scan

And the next scan is a 2400 dpi scan, 16x size, shown reduced to 1/16 for the same size, and also shown 200% enlarged (same size). There are quite a few pixels across a dot now, and the circular dots are noticeably smoother, but not really a big difference in detail. Oversampling obviously gives a better large image than at first, where we could just make out that dots might exist. We probably are approaching a limit of usefulness in this case, for this data, for this use (but again, it is 16x sampling, or 8x more than Nyquist, 8x oversampled). When the image is resampled smaller to 1/16 size on the left (resolution is discarded), it's then the same as the 300 dpi image. There is more detail in it, but only if we see it enlarged, so the question is, is that the goal or not? Again, here we are scanning printed ink dots, NOT a real photo. These ink dots are the detail in this image.
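The whole 150 / 300 / 600 / 1200 dpi progression can be mimicked with a toy one-dimensional model (a sketch only, not the scanner's real optics: the ink dots are a simple square-wave pattern, each pixel is an area average, and a deliberate quarter-period misalignment between the dot grid and the pixel grid stands in for the real world, where nothing lines up):

```python
DOT_PITCH = 1 / 150  # inches between halftone dot centers (150 per inch)

def ink(x):
    # Square-wave sketch: ink covers half of each dot period, paper the rest
    return 1.0 if (x % DOT_PITCH) < DOT_PITCH / 2 else 0.0

def scan(dpi, phase=0.25, steps=32):
    """Area-average the ink pattern into pixels over a 0.1 inch strip."""
    width = 1 / dpi             # pixel footprint in inches
    offset = phase * DOT_PITCH  # misalign the pixel grid vs. the dot grid
    samples = []
    for i in range(dpi // 10):
        x0 = offset + i * width
        avg = sum(ink(x0 + (j + 0.5) * width / steps)
                  for j in range(steps)) / steps
        samples.append(avg)
    return samples

for dpi in (150, 300, 600, 1200):
    s = scan(dpi)
    print(dpi, "dpi, dot contrast:", round(max(s) - min(s), 2))
# At this unlucky phase, 1x AND 2x sampling lose the dots entirely (every
# pixel averages to the same gray), while 4x and 8x resolve full contrast.
```

The 2x result matches the 300 dpi scan above: no aliasing, but no visible dots either. Only oversampling shows the dots regardless of how the grids happen to align.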

For scanning printed material, scanners normally offer a Descreen option, which blurs the image to hide the ink dots; the same descreen blurring can be done in post processing. No descreen was done here, and normally, simply scanning magazines at 300 dpi is sufficient to reprint at original size.


2400 dpi scan

Note that these black (or magenta, cyan, or yellow) spots are of course NOT holes or detail in the skull or in the background. They no longer carry real photo detail. They are only printed ink dots whose purpose is to influence the average color we perceive there. The screen dots hide the actual resolution of the original photo. It is no longer a real photo, and does not have photo resolution; it now might contain only perhaps as much as the 150 dpi resolution of the halftone ink dots. Nevertheless, these dots are the detail in our screened magazine reproduction. Higher scan resolution does help to resolve those ink dots.

Camera images don't do that screening; they just show more pixels of the actual reproduced color. But halftone printing only has four colors of ink (cyan, magenta, yellow, black), used in mixed patterns to simulate the other actual colors. We may see some seemingly random colored dots, but they are a carefully calculated attempt to mimic the actual color being reproduced. Printing uses dithered ink dots of the four ink colors, averaged by the eye to simulate one of the 16.7 million possible colors. Notice the skull and the background above have black dots, plus traces of magenta, cyan, and yellow dots added to influence the final average color we see. The background overprints magenta and yellow to make red, and adds black areas to darken the red. Anyway, now we can see that the detail has its own detail (for example, the roundness of the black dots).
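A crude model of that averaging (a sketch only; real ink overprint, dot gain, and color management are far more complex, and the coverage fractions here are made-up examples):

```python
# Toy model: fraction of area covered by each ink, averaged to one color.
# Each ink subtracts its complement from the white paper it covers.

def perceived_rgb(c, m, y, k):
    """Approximate the average RGB (0-255) seen from CMYK area coverages (0-1)."""
    r = 255 * (1 - c) * (1 - k)
    g = 255 * (1 - m) * (1 - k)
    b = 255 * (1 - y) * (1 - k)
    return tuple(round(v) for v in (r, g, b))

# Bare paper: no ink at all averages to white
print(perceived_rgb(0, 0, 0, 0))           # (255, 255, 255)

# Like the background described above: magenta + yellow overprint makes
# red, with some black dot coverage darkening it (made-up coverages)
print(perceived_rgb(0.0, 0.8, 0.9, 0.2))   # a darkened red
```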

Any of these left side pictures (300 dpi to 2400 dpi, all at least 2x sampling) are fine if the purpose and goal is to reproduce the image small, relatively near original size. Page size at 300 dpi will be too large for a monitor screen, but scanning printed images at 300 dpi is necessary to prevent moire, and then you simply resample to the smaller desired size. It is true that if the 300 dpi image size is all you want, then it is enough. Except even then, oversampling significantly and then reducing smaller is a noise reduction technique. If you do want to see more detail enlarged (ink dots here), more sampling resolution is simply a better reproduction (of the ink dots). There is more detail available that oversampling resolutions can capture. We can always find a subject needing even greater resolution. The screen dots are the detail in this printed image.

We said at the top that "a camera sensor with 256 pixels per mm can at best minimally resolve 128 line pairs per mm, at the Nyquist 2x minimum." Hopefully we just showed that 8x oversampling (2048 pixels per mm) can show finer detail much more clearly than the minimum (the scanner is built to do that number; camera sensor designs are not). That can be much more detail than you plan to show in your smaller image, but it can always be resampled smaller.

It should also be noted that digital camera sensors have required an anti-aliasing filter, to blur and remove the finest detail (high frequency content, finer than the sensor resolution can resolve) for the purpose of preventing moire, which was necessary because sensor sampling has always been insufficient. Megapixels are finally becoming sufficient, so that the general case no longer needs an anti-aliasing filter, implying that sensors are reaching the minimum 2x level. Except we can still see moire now and then in special cases, and yet more pixels could help some cases. Or, just as a comment emphasizing the relationship, using a lesser lens with less resolving power could also act as an anti-aliasing filter. Moire is caused by the lens resolving greater detail than the sensor sampling can resolve.

So maybe some sensor megapixel cases have reached Minimum resolution, but certainly we have not reached any Maximum sampling resolution limit, whatever that might mean.

In contrast to the halftone screen above, a real photo image is continuous tone, and here's an example from a camera: a Nikon D800 with 36 megapixels, 7360x4912 pixels (f/8, Nikon 70-200 mm lens at 130 mm).


Full frame, 1/30 size.
This setup was sized
for the larger adults.
164x245 pixels here

100% crop, no sharpening.
On a 21 or 23" LCD monitor (about 100 dpi),
the full 4912x7360 pixels would
compare to about 4x6 foot size, 30x

800%, 33x30 pixels
Middle lower eye
30×8 = 240x

Basically, greater pixel resolution is only needed for viewing enlargement (or to prevent moire). The full frame image shown here is 164×245 pixels, and we might argue that's enough for this small image on the screen (but it only prints 0.55×0.81 inches at 300 dpi). Even a wallet size print would need more than 4x that size.

The 100% crop image is 350 pixels wide, which would print 1.17 inches wide at 300 dpi. The full size 7360×4912 pixel image would print 16.37×24.53 inches at 300 dpi. If that were the goal, it is not excessive resolution. This one could use a little cropping, because this framing was actually set up for the larger adults, but that is no problem; the megapixels offer cropping options.

The 800% view shows this case has maybe 3 pixels across an angled eyelash. 36 megapixels is relatively low resolution for that enlargement purpose, which causes the jaggies here (aliasing, false detail due to insufficient sampling resolution). It is an excessive pixel count for a 4x6 inch print, but for extreme enlargement, this is not excessive resolution: if we had more and smaller pixels, the detail would be even smoother, with smaller jaggies, if enlarged enough to see it. However, the small photo reproduction here has no use for so many pixels; we only need what our use can use. The only purpose of high resolution is to allow greater enlargement of detail, which is pointless at much less enlargement.

Digital pixels do not create any detail; pixels only reproduce samples of the detail that the analog lens created. Specifically, a pixel merely shows the color of a tiny sample spot of the detail already created by the lens. Or rather, the point is that the small changes in color are the photo detail. More (smaller) pixels can show smaller changes in the lens detail. This image is very adequate resolution for most purposes, but if we want to see maximum detail, the lens resolution is the only actual limit. And that answer is NOT the computed minimum Nyquist 2x sampling. This is 36 megapixels. With only 9 megapixels, the pixels would be 2x larger in dimension (the jaggies become rather large). With 144 megapixels, the pixels would be half this size, which could show smaller, finer detail (if that detail is in the lens image).

Any and all detail is created by the lens; the pixels simply try to reproduce it. If all you want is 2x sampling of most image detail (the absolute minimum requirement), a sensor density of 256 pixels per mm would reproduce 256/2 = 128 line pairs per mm at that level, which probably is sufficient to prevent moire.
But if you want 8x sampling (which is reasonable for seeing smaller detail at high enlargement), that same 256 pixels per mm would reproduce 256/8 = 32 line pairs per mm at the 8x level.
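That division for the range of sampling factors, as a quick check (just the arithmetic from the two cases above):

```python
# Lens detail reproduced by a 256 pixels/mm sensor at several sampling factors
pixels_per_mm = 256
for factor in (2, 4, 8):
    print(f"{factor}x sampling: {pixels_per_mm // factor} line pairs per mm")
# 2x gives 128 lp/mm (the Nyquist minimum case), 8x gives 32 lp/mm
```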

We can have more sampling than we need for our goal, more than many purposes need, more than a small image needs, but in the more extreme cases, it's pretty difficult to have too much sampling resolution. So don't let them tell you we have already hit a limit. The need for the anti-aliasing filter to prevent moire is evidence of insufficient sampling resolution. The lens resolution is the only maximum limit, but digital sampling does have to reproduce that detail.

So more sampling is always a better quality reproduction (better at reproducing the original lens image). I hate to say it that way (it could be misunderstood), because certainly we can scan at resolutions much higher than our goal needs (to copy a paper photo print, for example, scanning at 300 dpi is sufficient, because our printers cannot print more, and the color print probably didn't have more to give anyway). But if we're going to zoom in to examine the finest detail, then it does show, if the detail was there (for example, scanning film).

But certainly there is no one-for-one relationship between pixels and line pairs. If we expect to zoom in and see more detail, we always need lots more pixels.


NOTE, FWIW: There are also other factors.

If we assume some hypothetical image that actually shows detail at, say, 40 line pairs per mm, and we then enlarge that image to view it at double size, obviously its detail now shows only 20 lp/mm. Enlargement requires higher resolution, because enlargement reduces resolution.

A DX camera like the 24 megapixel Nikon D7200 has 6000 pixels across its DX 23.5 mm sensor width, which computes to 255 pixels per mm. But we must enlarge this to view it, maybe 10 times larger (about 9x6 inches), reducing the viewed resolution to 25 pixels per mm. Even so, at 10x we still have about 600 pixels per inch (capable of much enlargement, though much is discarded in a 6x4 inch print).

A 36 megapixel Nikon D810 has 7360 pixels over its FX 35.9 mm sensor width, which is less, "only" 205 pixels per mm (but larger pixels, capable of lower noise and higher ISO).

However, since the DX sensor is smaller (24x16 mm), it must be enlarged 50% more to be viewed at the same size as the FX (36x24 mm), comparing then as if it were originally 255/1.5 = 170 pixels per mm in what we see. Enlargement reduces resolution. The purpose of higher resolution is greater enlargement. A 4x6 inch print at 300 dpi needs 1200x1800 pixels, and 7360 pixels will not improve that effort, but it does offer other opportunities.
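The DX vs. FX comparison above, as a quick check (the pixel counts and sensor widths are the ones in the text; the nominal 1.5 crop factor is used, as the article does):

```python
# Sensor pixel density, and the effective density after the extra DX
# enlargement needed to view both formats at the same size.
d7200_pixels, dx_width_mm = 6000, 23.5   # 24 MP DX sensor
d810_pixels, fx_width_mm = 7360, 35.9    # 36 MP FX sensor

dx_density = d7200_pixels / dx_width_mm  # pixels per mm on the sensor
fx_density = d810_pixels / fx_width_mm

crop = 1.5  # nominal crop factor: DX is enlarged 1.5x more for viewing
print(round(dx_density))         # 255 px/mm on the DX sensor
print(round(fx_density))         # 205 px/mm on the FX sensor
print(round(dx_density / crop))  # 170 px/mm effective for DX, as viewed
```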


Copyright © 2015-2024 by Wayne Fulton - All rights are reserved.
