
Diffraction Limited Pixels? Really?

In Support of Depth of Field

We read on the internet how the resolution of our digital cameras can become "diffraction limited" as we stop down more. Stopping down does increase Depth of Field, and yes, diffraction also becomes greater if stopping down too much, but the issue is sometimes incorrectly worded as being caused when diffraction becomes larger than our digital sensor's pixel size. That pixel part is nonsense. It's not about the pixel, it's about the diffraction present. Tiny pixels simply resolve whatever detail is present better, but pixels do not affect diffraction.

The big point here is that stopping down for increased depth of field is sometimes desirable, giving an obviously better resolution result than the slight diffraction it causes (at least until the diffraction gets too great). You ought to at least try stopping down in situations when you know it will help, and then you will have two pictures to choose from.

Photos don't normally show diffraction as round Airy disks; instead, all the many faint Airy disks are blurred together everywhere, and what we usually see is a slightly blurry picture overall, like a somewhat out-of-focus image. The diffraction blurring is all over the entire frame, making the one-pixel notion moot. An Airy disk even in a best case like f/5.6 is larger than one pixel. Yes, diffraction is blurring, but it is usually relatively mild compared to out-of-focus situations, such as being past the Depth of Field distance limits (see the large blue f/40 image below). There are times to consider all your options.

However, a bright point source seen magnified (a single star isolated in a black sky, seen at high power in a telescope, is the clearest example) can show as a larger diffraction disk of concentric rings called an Airy disk (see calculator below). Light passing by an edge (like a narrow slit, of size comparable to the light wavelength) is deflected. Stopping down the aperture makes the diffraction worse, because most of a small aperture's area is then near the aperture edge. The diffraction then covers and blurs the true detail, reducing resolution. The Airy disk is considered the limit on optical resolution (see an example of "resolving", called the Rayleigh Criterion, and see SPIE for a study of the Rayleigh Criterion). So we hear how our camera has a limit if the lens aperture is stopped far down, and that part is valid: stopping down the aperture does increase the diffraction, which does limit the resolution.

And then yes, even if all the many faint Airy disks everywhere are generally blurred together in regular photos, this theoretical Airy disk size can be computed in terms of our pixel's size, whether we can see an Airy disk or not. But that is just a comparison scale, and the pixel size as a measurement unit is NOT the problem. The problem is the diffraction size, regardless of the pixel size. That absolutely does NOT mean we need larger pixels, which would just be less resolution too. Less diffraction would be the solution. There's a diffraction calculator below.

First, a rant about the idea that the size of one pixel somehow sets the limit due to diffraction:

That "one pixel limit" notion comes from the fact that the pixel is the smallest dot that digital sampling can reproduce. But one pixel is NOT the measure of resolution in a digital image. Specifically, two adjacent pixels, a dark and a bright one, is the least difference that can be resolved as an edge of detail, and it is clear that yet many more pixels are necessary to actually see the shape of that edge. Too adequately resolve and recognize the smaller detail, you would be much more pleased to have 4x to 8x more pixels (A page showing that.) If a sensor has 200 pixels per mm, then the greatest resolution it can resolve is 100 lp per mm. A greater number of pixels are then smaller pixels, a plus which better resolves greater image detail (including that of diffraction). That's a fact, but far too simplistic regarding diffraction. The size of the pixels is of course a resolution limit, but that's already there regardless of the presence of any diffraction. I think this one pixel limit idea is absurd, reminding me of the old joke about giving the techie a calculator.

My Nikon full frame camera is 36 megapixels (7360x4912 pixels), and even at f/8 (normally considered near optimum for detail), the Airy formula computes that f/8 diffraction is already a bit larger than 2x2 pixels, even if we imagine it somehow perfectly centered on those four pixels. That size is of course calculated; I can't see the diffraction to know, so I don't worry much about it. That count of four pixels gets larger with a smaller sensor, and of course smaller with a same-size sensor with larger pixels (but larger pixels are not as good in any other way, such as resolution of detail). See the diffraction calculator below; it's already set up to show my supposed f/8 situation, and it should clear up the one-pixel-size notion. I'd also invite you to see a real f/8 situation in detail at this page. Nevertheless, even if the f/8 Airy diffraction diameter does calculate a bit larger than 2x2 pixel size, it looks pretty good to me.
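For the curious, here is a minimal sketch of that arithmetic, using the Airy formula given with the calculator below, and assuming 550 nm green light and a 36 mm sensor width (my assumption, chosen to match the calculator's 4.891 µm pixel pitch; the exact D800 width is about 35.9 mm):

```python
# Rough check of the f/8 claim above, assuming 36 mm sensor width and 550 nm light.
wavelength_mm = 0.00055
pixel_pitch_mm = 36.0 / 7360                 # about 0.00489 mm (4.89 microns)

airy_radius_mm = 1.2197 * wavelength_mm * 8  # x = 1.2197 * wavelength * N, at f/8
airy_diameter_px = 2 * airy_radius_mm / pixel_pitch_mm

print(round(airy_diameter_px, 1))            # about 2.2 pixels, a bit more than 2x2 centered
```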

The Airy formula calculates the diffraction circle's radius x, and the diameter is 2x. Calculating that diameter as fitting within one pixel requires an f/number not greater than f/3.5. That might work for a cell phone camera, but it would make my camera sound useless, yet it's been a wonderful camera. And when I need extreme depth of field, I happily use it at f/22 and even f/32. The depth of field is a lot better, and the diffraction is a little worse. See the large f/40 picture next below. Even if the one single pixel were somehow magically involved, it is extremely unlikely that a diffraction disk is somehow magically centered anywhere. I think the idea is dumb, as it takes many pixels to show any photographic detail. Nevertheless, the notion is that if the diffraction spreads outside one pixel's area, then of course it also affects the neighboring pixels, presumably making things look larger and worse about obscuring pixel detail. But the diffraction was already larger, and more and smaller pixels simply better resolve the detail present. Pixels cannot affect or change the diffraction, but detail outside the acceptable Depth of Field limits is always bad news. Smaller pixels simply better resolve whatever detail is present, diffraction or not. I can't make sense of the notion about the one pixel; diffraction is about diffraction size, not about pixel size. And even f/3.5 (on a full frame sensor) is right at the limit of diffraction being one pixel size. Diffraction is simply NOT about pixels.

I am NOT saying diffraction is no concern. Of course it can be a problem, and it certainly does get worse as we stop far down. Nevertheless, in some situations the improvement in depth of field can be well worth it. On a full size sensor camera, yes, I'd try to stay under f/16. UNLESS you can't get the necessary depth of field that way, and then go for it. You will likely be very pleasantly surprised. There is an old rule of thumb, from around the time of Ansel Adams' work in the 1930s, that for maximum sharpness we should keep the f/number from exceeding focal length / 4. So 50 mm would be f/12.5 maximum, 20 mm would be f/5 maximum, etc. See more about FL/4 below. But if the needed depth of field requires stopping down more, then you should go for it. Depth of field counts more than diffraction.

Some techie details: Our accepted standard of maximum permissible photo blurriness is the Depth of Field CoC in out-of-focus areas. A smaller sensor requires greater viewing enlargement, so permissible CoC necessarily scales smaller in smaller sensors. The result is that in a typical 6000x4000 pixel image, permissible CoC diameter is about 5 pixels, regardless of sensor size (it does vary with the sensor image dimensions in pixels). The diffraction Airy disk is a similar blurriness (with diffraction detail covering valid image data), but the size of one pixel has nothing to do with that. Smaller pixels are simply better in terms of resolving sensor detail. The diffraction Airy disk is NOT affected by image or pixel size, but only by f/stop number. And even at f/4, an Airy disk is 0.27x CoC (still larger than one pixel), so that should be worth considering. Small pixels simply resolve detail better. Diffraction is simply NOT about one pixel. The issue is large diffraction, which is NOT desirable. However, sometimes there are still cases when it's much better to stop well down than not (see the next blue images). Said another way, sometimes stopping down helps detail much more than diffraction can hurt. And see the diffraction calculator below.
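A minimal sketch of that roughly-5-pixel CoC figure, assuming the common convention of CoC = sensor diagonal / 1500 (conventions vary, but this one makes the pixel count depend only on the image's pixel dimensions, not on sensor size):

```python
import math

# CoC in pixels for a 6000x4000 image, assuming the diagonal/1500 convention.
width_px, height_px = 6000, 4000
diagonal_px = math.hypot(width_px, height_px)   # about 7211 pixels
coc_px = diagonal_px / 1500                     # about 4.8, i.e. roughly 5 pixels

print(round(coc_px, 1))
```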

Think further a second. Diffraction is not just one spot on a few pixels. If the lens is causing diffraction, there is continuous diffraction all over the frame, like in every pixel, anywhere there is detail to resolve. Smaller pixels simply better resolve whatever detail is present, including the detail of diffraction. Again, of course the size of the pixels is a resolution limit, but that's already there regardless of the presence of any diffraction.

The stars in a black night sky are infinitesimal "points" (at astronomical distances), so all appear far smaller than any pixel. Diffraction Airy disks are lens artifacts that can be seen when optically magnified in a large telescope, around stars seen individually against a black background. In our cameras, the Circle of Confusion (CoC) defining the maximum blur still accepted as a "sharp" image is typically 4 to 6 pixels in size. An acceptable diffraction Airy disk might be about the same size as the CoC limit, and a bad one might be twice that CoC size. But in any ordinary non-stellar pictorial photo, consistently brighter and typically with detail everywhere (except maybe blue sky), the diffraction is not just a few random spots scattered around. All Airy disks are overlapped by others, and there will be no visible Airy disks as such. Instead, diffraction everywhere acts like a fuzzy lens, a continuous blur that every pixel shares (if bright enough to be exposed). Diffraction is indeed serious, and can become pretty bad, but in smaller degree it is often a smaller effect: the image detail is degraded but still present, not quite sharp, but possibly hardly noticed. Yes, the diffraction is present, it is not zero, the image is better without it, and diffraction can be a major issue, all true, but it is typically minor compared to Depth of Field issues, for example. Do worry about diffraction, but not in regard to the size of one pixel.

And for another thing (about any one-pixel notion), the digital sensor's Bayer pattern uses roughly an area of four sensor pixels (2x2) to create any one RGB processed pixel, a complicated interpolation situation generally ignored when discussing resolution.

But primarily (still ignoring Bayer), sampling theory "resolution" is instead determined in units of two RGB pixels (Nyquist, et al.). A pixel is indeed the smallest dot possible, but resolution is expressed in "line pairs", a black line and a white line, which takes two pixels to show the difference. Overall, smaller pixels may have more noise, but more of them are never a resolution problem; more is instead a plus, because more pixels can simply resolve and show whatever lens detail is present with better precision than larger pixels. The tree or person or mountain in our pictures is larger than one pixel too, which is very necessary for resolving detail.

The lens image was created containing the added diffraction detail, and it is what it is, just another image. The sensor does not care what the image is; it's all simply detail, colors and intensities actually. The digital sensor reproduces the image by sampling the colors of many small areas (the pixels); the more pixels, the better for resolving finer detail. Diffraction cannot be aligned centered on pixels anyway, but if the detail spills into neighboring pixels, then those pixels simply reproduce the color of whatever they see there. If some specific detail is already big, the sampling will not make it bigger. More and smaller pixels show the existing detail better, but pixels do not create any detail (all detail is already in the lens image; each pixel simply records the color it sees in its area).

Regardless of what the detail is, more and smaller pixels are always good for better reproduction of that detail. All detail is created by the lens. Recording that detail with more pixels certainly DOES NOT limit detail; more smaller pixels simply reproduce the existing detail with greater precision, showing finer detail within it (detail within the detail, so to speak). That's pretty basic. Yes, larger diffraction is a problem because it's larger, but growing into adjacent pixels is not an additional problem. It was already larger. Anything that can be resolved is larger than one pixel. Don't worry about pixel size affecting diffraction resolution. Be glad to have the pixels, and worry about the diffraction instead.

However, comparing Airy disk size to one pixel's size is one way to describe its size for scale. Even if a misunderstanding, we do see it done, but it's not useful. Generally we cannot see pixels. We can only "see" Airy disks on individual focused point sources (like stars) when the size is several pixels. Otherwise, diffraction is an overall blurring, everywhere in the lighted frame. There is always some degree of diffraction present, causing lens resolution to test lower at f/8 than at f/5.6, and lower again at f/11 than at f/8, so yes, stopping down considerably does make the diffraction larger, limiting resolution even more. But some do seem to get excited after it hits that so-called computed pixel limit. The problem is that the diffraction is larger, not that our pixel size is any concern then. Our pixels continue to do exactly what they always did, still resolving whatever detail they see. But larger diffraction can blur more of the real scene image data, reducing maximum resolution. That seems obvious. Diffraction had exactly the same effect in film cameras, so any problem is NOT about the pixel; the problem is that the diffraction grew larger.

USAF-1951 Line Pair per mm - The valid method in analog media (like film) to compute a "numerical resolution limit" is that (1 / Airy disk radius) (or Airy 1/x) is the resolution limit in the customary line pairs per mm units. Using line pairs, the USAF-1951 resolution chart shown (Wikipedia) is one of the most standard resolution charts (not at actual size here). The U.S. Air Force was very interested in photographing detail on the ground in fly-overs. In digital media, it takes at least two pixels (one black pixel and one white pixel) to resolve a line pair into the separate lines. Therefore maximum useful digital resolution is considered to be (pixels / 2) per mm. Less scan resolution causes false detail called aliasing, often seen as moire patterns.

When the lines and spaces cannot be resolved as such, the lens could not resolve that resolution (or perhaps it was the pixel scan resolution that could not reproduce it). That number is not numerically apparent to us when looking at photos, though. Another method is to note when the diffraction Airy disk grows larger than the Depth of Field CoC value, which is another measurement of the smallest size we can visually perceive. Don't misunderstand, diffraction is something entirely different than Depth of Field. Both are blur, but diffraction exists over the entire frame, while Depth of Field CoC defines the acceptable depth of the focused zone. CoC is the computed maximum tolerated size of out-of-focus blur, which determines the Depth of Field extent. It is useful because it exists, we're familiar with the scale of it, and it is about perceived sharpness in photos. The DOF concept is that blur larger than CoC is visible and objectionable, but blur smaller than CoC is too small to see, so no issue. CoC is normally a few pixels in size. Those DOF blurred areas are not maximally sharp either, but are acceptable if within the CoC limit, still good enough that we don't even notice. That seems a suitably good limit for diffraction too. The calculator below is scaled to CoC units as well. And there can be planned intentional cases when stopping down even more might not give full sharpness, but the improvement in Depth of Field is so huge and valuable that it typically overwhelms any diffraction concern (which is typically more minor than depth of field issues). For example, the blue ruler at f/40 shown just below.

So another thing: Speaking of larger cameras (say DSLR sensor size or more), the diffraction concern made it ideal to routinely stay at f/5.6 or f/8 when possible and suitable (for DSLR size sensors, or around f/2.8 for compacts or phones). But unfortunately, sometimes we hear that we should "never" stop down past this computed diffraction "limit", whatever it is. Regardless of accuracy, the warnings get overdone, or at least worded poorly, if making no exceptions to "never". Those who believe the warning about "never" can be turned away from a very important tool. It seems unhelpful, poor advice when worded "never". It seems better to be guided by the results we can actually see happen. There are "ifs and buts" about everything, but that advice should NOT mean stopping down more should never be used. At least take "never" with a grain of salt, because while it may be good routine advice, sometimes other very good things can happen when we intentionally go past that "limit", perhaps way past it. Yes, diffraction is bad, and yes, we certainly should be warned that stopping down does increase diffraction, which lowers lens resolution. But sometimes we do have enough resolution to spare a bit of it, and we should also know that stopping down does increase Depth of Field, sometimes dramatically, which at times can be much more important and extremely helpful. Diffraction is not good, but sometimes stopping down helps detail much more than diffraction can hurt (speaking of when more depth of field is seriously needed). Diffraction, within reason, is maybe not a complete disaster, but it's not uncommon for insufficient depth of field to be exactly that. It depends on whether you need more depth of field or not. My suggestion is that when more depth of field is needed, do try stopping down, see what happens, and then believe what you can see. It's a basic principle, and a great tool to know about.

Said another way: The initial default case in the diffraction calculator below is for a Crop 1.5x DSLR. It computes diffraction limiting (based on reaching CoC size) occurring at f/11, however the Airy maximum resolution is still 127 line pairs per mm there. That is FAR from poor results; it is still great quality (if the lens quality and pixel reproduction can do as much). CoC also defines the Depth of Field limits, where it is used to decide acceptable quality (but DOF can get far worse). The calculator also computes that maximum resolution at f/32 is 46 line pairs per mm, which is degraded, certainly not optimally sharp, worse than we would like, but it very possibly might still improve difficult DOF enough to get an acceptable picture. See the f/32 examples further below. We would not routinely use f/32, but when the DOF is a critical deciding factor, it can work a miracle. I'm suggesting stopping down can sometimes be a wonderful tool, in some situations. It is there to be used, when needed.

"Diffraction limited" can be used with two meanings. For a lens or telescope, it normally means its image quality is so good that it is limited only by theoretical diffraction limits. It's a compliment then, it means the lens could not be any better. Today, we generally assume that is a given for the lens. But for our digital cameras, "diffraction limited" can mean this: Normally, when we are comfortably back at say f/5.6, and diffraction is not even a thought, then our normal maximum resolution we see is often limited by the digital sampling (the pixels, specifically, how much of the analog lens image resolution our digital sensor sampling can reproduce). The final resolution result is the least of either what the lens can show due to optical issues, or what of it the digital sampling can reproduce. Until we had more megapixels, the limit has been the sensor, when it needed the anti-aliasing filter (which is installed on the sensor to blur away the smallest lens detail that the sensor cannot reproduce, to prevent causing false moire artifacts). Today, we have more megapixels, and now in the largest cases, some of the anti-aliasing filters are being omitted as unnecessary, so that today we see even sharper results. But still, as we stop down considerably, the lens does lose resolution due to increased diffraction.

We routinely assume sensor-limited resolution is normal and OK, or at least, there is no other choice with that camera. More megapixels is always a good thing; the smaller pixels increase the sampling resolution. But then some tell us to get excited if diffraction size passes that pixel size a little. The actual problem is always the diffraction, not the pixels. And even then, at times the depth of field improvement that stopping down allows can make a great difference.

A quick real world example of actual evidence we can see. We hear advice that we should never stop down more than about f/11 (on a so-called full frame size sensor, 35 mm film size), which is not bad advice for routine situations, but there are many obvious exceptions. This (special case) compares f/11 and f/40. In this case, my vote is strongly for the f/40, and the next two pages provide MANY regular snapshot comparisons of f/22 or f/32, which can sometimes be better, sometimes not. When it is better, that should get our attention, so try it, and look at results. These pictures are ruler markings with 1/16 inch rulings (about 1.6 mm apart).


The full frame and crop
Cropped to near 40% of frame height, and then resampled to about 1/4 size.

Both pictures are Nikon D800 full frame sensor, 36 megapixels, and 105 mm macro lens (focused very close, at a bit less than half of 1:1 enlargement). The only differences are the stopped-down lens aperture and of course the increased exposure time. Depth of field can help far more than diffraction hurts.

The f/40 seriously improves a difficult problem this time. The advice to never stop down past f/11 (full frame size sensors) is very counterproductive here. The f/40 may not be the sharpest picture I ever saw, but in this case, it sure is better than the f/11 try. My judgment is that (here) no feature of detail in the f/40 picture is worse than the corresponding detail in the f/11 picture. The only difference is lens aperture. The so-called "diffraction limit" computes as f/14.6, so f/11 should be OK, and f/40 supposedly should be pretty bad. f/40 is definitely past any limit, and the diffraction is somewhat worse at f/40, although I can't say I can see it here, since f/11 is no better in the best spots. I do see the tremendous improvement in depth of field from stopping down, day and night better in this case. Diffraction is much more subtle than Depth of Field. The picture seems very acceptable for what it is, and f/40 is obviously what made it acceptable. Stopping far down is sometimes a great tool when needed, no matter what you may have heard.

The number f/40 is possible here (on an f/32 lens) because marked macro lens f/stop numbers increase when focused up close, because the effective focal length (the lens extension) increases when focused closer. Typically at 1:1 macro, all marked f/stop numbers increase two full stops: f/32 would become f/64 at 1:1. This case was milder than 1:1, and became only f/40. Some modern macro lenses using internal focusing (I'm familiar with Nikon) reduce focal length slightly at close distances, and thus will show slightly less than the full two stop change at 1:1 (maybe 1/3 EV less).
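A minimal sketch of that marked-vs-effective f/stop arithmetic, using the standard approximation N_eff = N × (1 + m) for a symmetric extension-focusing lens, where m is the magnification (under this approximation, the f/40 from a marked f/32 corresponds to roughly 1:4 reproduction):

```python
# Effective f/number when focused close: N_eff = N_marked * (1 + m).
# This is the standard symmetric-lens approximation; internal-focusing
# macro designs show a bit less change, as noted above.
def effective_fstop(marked_n, magnification):
    return marked_n * (1 + magnification)

print(effective_fstop(32, 1.0))    # 64.0 : two full stops past marked f/32 at 1:1
print(effective_fstop(32, 0.25))   # 40.0 : a marked f/32 reads about f/40 at 1:4
```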

Sure, holding at f/5.6 or f/8 is generally very desirable, and staying there is good routine advice (speaking of DSLR sensor class use), when it works. Stay there when it works. But when there is insufficient depth of field for the best picture, the game has to change, if you want results.

Sure, certainly f/40 is extreme, certainly it's not perfect, and not always ideal. But sometimes it's absolutely wonderful, when depth of field helps far more than diffraction hurts. It can solve serious problems. When we need more depth of field, falsely imagining that we ought to be limited to f/11 can be detrimental to results. Use the tools that are provided, when they will help. Try it, and believe what you can see.

However, f/40 does also require four stops more light and flash power (16x more) than f/10. 😊 But nothing blew up when we reached f/16.

My strong suggestion is that when you really need more depth of field, ignore any other notions and actually TRY stopping down. Then look at your result, and believe that. That is the standard solution, and why the capability is provided, and you'll likely like it a lot (when appropriate). Certainly I don't mean routinely every time, because diffraction does exist and we generally want to avoid it, so do have a need and a reason for stopping down extremely. But needs and reasons do exist. Don't abuse it when not needed. Yes, routinely staying under f/11 is certainly a fine plan to reduce diffraction, when you can, when it works. But when it won't do the job, stopping down creates greater depth of field, which can be a tremendous improvement when needed. Photography is the game of doing what we see we need to do, when it actually helps.


Airy disk in image of magnified star.
From Wikipedia
A 2000 mm lens at f/25. The Airy calculation x is the radius of the first dark ring. The outer rings are often too dim to have great effect.
This star is isolated in a black sky, with no other diffraction disks overlapping it, which is NOT the case in a regular photo on Earth.

Calculate Airy Diffraction Limit

(Interactive calculator: enter sensor pixels, sensor dimension in mm or via crop factor, a comparison goal, and wavelength in nm.)

Airy disk: Calculating the numbers

Stars in the night sky are at immense astronomical distances, and so from Earth they appear as zero diameter point sources. However, at high optical magnification in telescopes, we see a star not as the smallest point source, but as a significantly larger Airy disk, which is an artifact of the diffraction in any lens. The Airy disk diameter inversely depends on aperture diameter (half the aperture diameter creates twice the Airy disk size). The ability to separate and resolve two close points (two touching Airy disks) depends on Airy disk diameter (how much they overlap each other), which depends on aperture diameter, as seen through focal length magnification (twice the focal length is twice the magnification, which shows twice the separation distance).

Telescope users know that telescopes with a larger diameter aperture have better resolution due to less diffraction. The smaller Airy disk diameter can then better resolve (separate) two very closely spaced stars, to be seen and distinguished as two close stars instead of one unresolved blob (blurred together). Resolving known double star pairs is the standard measure of telescope resolution. Camera lenses are much smaller in diameter than telescopes, so resolution is more limited. And microscopes are even smaller.

The diffraction disk is nothing new. It was first discussed by Grimaldi in 1665. It is named for George Airy's work in 1834, but this resolution formula is from Lord Rayleigh, 1879 (mostly about telescopes so far). Light passing close to the edge of the aperture gets spread out, enlarged and causing the rings. In a tiny aperture, all the light paths are close to the edge.

Wikipedia shows its definition of the minimum separation to resolve two star points (see the example of "resolving", called the Rayleigh criterion). It is used as the maximum possible optical resolution limit due to diffraction. Human color vision sees wavelengths from 400 to 700 nanometers, peaking strongly in the center at green, at wavelength λ of 550 nm (a nanometer is a billionth of a meter, 1E-9 meter).

The formula we call the Airy disk size is the Rayleigh criterion, which gives the minimum angle of separation able to resolve the overlapped Airy disks of two closely spaced point sources of equal intensity: an Airy disk radius of angle θ = 1.2197 λ / 2r (where 2r is the aperture diameter D). The Small Angle Approximation says that for small angles (and this one is extremely small), θ ≈ sin θ ≈ tan θ. For a lens focused at or near infinity, tan θ = dark ring radius on sensor / focal length, which is x/f. Substituting x/f for θ, we often see the central Airy disk radius expressed as:

x = 1.2197 λ × (f / D) = 1.2197 λ × N

λ is the wavelength of the light (which can be a wide range, but midpoint of our vision is about 550 nanometers, or about 0.00055 mm).
f is lens focal length (important here as viewed image magnification), and D is aperture diameter (larger is less diffraction and greater resolution).

The f/D is the f/stop number, so the formula can simply use it (N as f/D), but f/stop number does not indicate sensor size.

x is the Airy disk radius to the first dark ring. The full disk with its rings is larger (if clearly seen). x is also the spacing in mm which limits maximum resolution, and 1/x can be directly expressed as line pairs per mm of maximum resolution.
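Here is a minimal sketch of these formulas in Python, assuming 550 nm light and the full frame default used in the Facts below (36 mm width, 7360 pixels, so a 4.891 µm pixel pitch). The printed values match the figures quoted there:

```python
# Airy resolution numbers from x = 1.2197 * wavelength * N (550 nm assumed).
wavelength_mm = 0.00055

def airy_radius_mm(fstop):
    return 1.2197 * wavelength_mm * fstop   # x, radius to the first dark ring

x = airy_radius_mm(8)                       # f/8
pitch_mm = 36.0 / 7360                      # 0.004891 mm per pixel (4.891 microns)

print(f"Airy 1/x resolution: {1 / x:.1f} lp/mm")               # 186.3 lp/mm
print(f"sensor resolution:   {1 / (2 * pitch_mm):.1f} lp/mm")  # 102.2 lp/mm (2 px per pair)
print(f"Airy diameter 2x:    {2 * x / pitch_mm:.1f} pixels")   # 2.2 pixels
```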

Some Facts:

Notice that at the calculator's initial default f/8 settings, it shows that for this full frame sensor's size and pixels (pixel pitch 4.891 µm), the maximum sensor resolution is 102.2 line pairs per mm.

At f/8, the Airy 1/x resolution is 186.3 lp/mm (more than the sensor's resolution), even though the 2x diameter does cover a bit more than 2x2 = 4 pixels if well centered (technically, a 2.2 pixel diameter can disturb a 3x3 pixel area). So diffraction is much larger than one pixel, but only about a third of the CoC limit. And the whole disk with rings is even larger (but fainter).

Changing the sensor to Crop factor 1.5 and 6000x4000 pixels (pixel pitch 4 µm), the sensor resolution is 125 lp/mm (more, with the smaller pixels), and the f/8 diffraction diameter is now 2.7x2.7 pixels in size (about 9 pixels), but it is still 186.3 lp/mm (and about half of the smaller sensor's CoC limit).

I can't see how the size of one pixel could possibly be any factor about diffraction.

Sensor size does not affect diffraction size from the lens. But to be viewed, a small image must be enlarged more than one from a larger sensor. That also magnifies the diffraction, so compact cameras and phones typically try to limit stopping the aperture down past maybe f/4. The Aperture limit shown here is computed from pixel size, which is computed from sensor size. The CoC limit shown here is computed from CoC size, which also is computed from sensor size. The limit in terms of CoC is IMO more meaningful to what we see.

A small aperture diameter creates the diffraction, and a long focal length magnifies the view of it (and f/stop number is focal length / diameter). However, a 200 mm lens and a 20 mm lens at the same f/stop are NOT the same thing. At the same f/stop, the 200 mm lens has a diameter 10x that of the 20 mm lens (but 200 mm object details are 10x larger too), which makes a difference to diffraction. As evidence (speaking of lenses for DSLR sensors), most 20 mm lenses offer a maximum f/number of f/16, but a 200 mm lens surely provides f/32. An intermediate focal length like 50 mm likely offers f/22. The 200 mm f/32 aperture is 6.25 mm in diameter, the 20 mm f/16 is 1.25 mm, and the 50 mm f/22 is 2.27 mm. In practice, we can generally use whatever f/stop is provided, but the most critical requirements won't stop down all the way, if avoidable. If not avoidable, then use what you've got, and be glad you've got it.

It is an analog lens image, which does not involve pixels. The minimum required separation x increases directly with f/stop number due to diffraction (the f/stop number N increases with smaller diameter). Focal length affects the magnification of the subject detail (relative to that blur diameter), and in practice, a longer focal length supports a higher f/stop number. Focal length f magnifies the image on the sensor, and f/D is the f/stop number. But in photography, we also must later enlarge the sensor image significantly to view it.

To compute a maximum f/stop number where diffraction reaches some limit, we have to define what the limit is. There are two methods here.

First calculator method: lp/mm  Assuming the diameter is in mm, 1/x is directly comparable to maximum resolution in line pairs per mm. Just because it calculates a large number like 300 lp/mm does NOT imply the lens or sensor can possibly do it; it only means that diffraction does not limit it (and the sensor resolution is also shown). But a line pair is two pixels in digital, so in that way, 1/x requires two pixels. Since x roughly compares to the visible radius, my term 2x similarly compares to Airy diameter, and in the same way, 2x as a resolution limit requires 4 pixels. But this statement is only about the digital basics of line pairs. Given our pixel size, we can then compute the f/stop at which 2x equals four of our pixels. That's the same as setting Airy diameter to 2 × 1/(sensor lp/mm), which as a maximum is necessarily the same as four pixels.
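As a minimal sketch (assuming 550 nm and the 36 MP full frame pixel pitch used earlier), inverting the Airy formula for this four-pixel condition reproduces the f/14.6 "limit" quoted with the f/40 example above:

```python
# Method 1: solve x = 1.2197 * wavelength * N for the N where the Airy 2x
# diameter spans four pixels (x = 2 pixels), i.e. where diffraction resolution
# just matches the sensor's sampling resolution.
def fstop_where_airy_is_four_pixels(pitch_mm, wavelength_mm=0.00055):
    return (2 * pitch_mm) / (1.2197 * wavelength_mm)

print(round(fstop_where_airy_is_four_pixels(36.0 / 7360), 1))   # about f/14.6 full frame
```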

This method has the special property that at this f/stop, the computed Airy disk diffraction maximum resolution limit is exactly the SAME as the sensor sampling maximum resolution (so it is not quite limited). Resolution is strongly dependent on sensor size and pixel size, but it should make the notion about matching one pixel seem pretty funny.

Second calculator method: CoC  (useful for film too) The second method seems possibly more useful; its limit compares the Airy disk diameter to the Circle of Confusion (CoC) that is used to define Depth of Field calculations (a more customary limit of blur). The Airy disk calculation was determined for direct viewing in telescopes (where the diffraction rings are observable on stars), and the Airy formula is independent of camera sensor size and of the subsequent viewing enlargement needed. But as a comparison here, we compare the 2x diameter to CoC size, which IS dependent on sensor size (but not pixel size), and solve for the f/stop. Then sensor size becomes a factor, as it should. CoC is inversely proportional to sensor diagonal size. The Airy maximum diffraction resolution might be lower than the maximum sensor sampling resolution (perhaps sometimes negligibly so), but it will not be less than the resolution that defines the permissible extent of the DOF range. CoC at the limit is a blur, but the idea of the CoC limit is that it should generally be acceptable (it might NOT be the sharpest, but it can save the day when more is needed, so do try it before turning up your nose). Your choice probably depends on how much the situation needs maximum Depth of Field, if at all. There will be times when stopping down even more can help a lot. In this comparison, sensor size is a big factor (which we might visually see), and the comparison to CoC might make it more useful to visualize for photography. FWIW, in this second method, Airy disk and CoC are computed in mm units, and are shown in pixels only as a possibly interesting reference unit; pixels played no part in computing Airy or CoC here. However, Airy diameter was set to 4 pixels in the first method.

Results for CoC as a limit vary with sensor size, but CoC is often about 4 to 6 pixels in size. The 2x diameter is compared to the Depth of Field CoC, that being an existing standard of focus sharpness. CoC is the maximum permissible size of the allowed blur at the maximum DOF distance from focus. Depending on sensor dimensions, CoC might be larger or smaller than this Airy diffraction limit. However, do note that diffraction applies everywhere, including at the sharpest point of focus, whereas DOF CoC applies to the most distant DOF limits. So at the focused distance, DOF blur is zero size, but diffraction still exists. At the DOF span limits, the blur is 1x CoC, the specified maximum permissible amount (computed from sensor size). At any greater extremes from the focus point, DOF blur is larger than CoC, but diffraction is still whatever it was. There is no one answer about which is more important; it depends (but DOF blur can at times be a much worse factor than diffraction). You definitely should try stopping down more when you are in serious need of greater Depth of Field. It's a provided tool; use it when needed. Hitting this limit does NOT mean the world stops there. Stopping down (to any aperture) does limit maximum resolution a bit, but except maybe in a 100% crop, we typically have more resolution than we actually need (to a degree). Stopping down even more can work miracles on depth of field at times, NOT as a routine procedure, but certainly when it is needed. Simply try it, see it for yourself, and believe what you see.
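A sketch of this second method, assuming the common diagonal/1500 CoC convention (the calculator here may use a slightly different CoC value, so its f/stop result can differ a little):

```python
import math

# Method 2: solve for the N where the Airy 2x diameter equals the DOF CoC.
# CoC is assumed to be sensor diagonal / 1500 (conventions vary).
def fstop_where_airy_equals_coc(width_mm, height_mm, wavelength_mm=0.00055):
    coc_mm = math.hypot(width_mm, height_mm) / 1500
    return coc_mm / (2 * 1.2197 * wavelength_mm)

print(round(fstop_where_airy_equals_coc(36.0, 24.0), 1))   # about f/21.5 for full frame
```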

In the diffraction calculator, if you don't know the sensor dimensions, a second choice is offered to compute them from crop factor. If you don't know Crop Factor, other calculators (Field of View or Depth of Field) can compute it from Equivalent focal length lens specs. The computed values are then shown for verification, but these may vary slightly, because camera specs of megapixels, crop factor, aspect ratio or focal length are normally rounded, less-precise values. It should be close enough. Sensor size is NOT used to compute the Airy disk itself, but is only used to dimension it in terms of pixels or CoC. Sensor size always does affect what we see.

Minor variations: You might worry a little about seeing minor differences in Airy diameter at various other places. Computing for a full frame sensor at nominal f/11 gives an Airy diameter of 14.7 microns (0.0147 mm), where I use the actual precise f/11.314, which computes 0.01518 mm. I see no harm in trying to be precise and using the actual camera numeric goals.
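Where those precise f/stop numbers come from, as a minimal sketch: full stops are powers of √2, rounded for marking:

```python
# Marked f/stops are rounded; the exact full-stop values are powers of sqrt(2),
# which is why this page computes with f/11.314 rather than the marked f/11.
for stop in range(8):
    print(f"f/{2 ** (stop / 2):.3f}")   # f/1.000, f/1.414, ... f/8.000, f/11.314
```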


It can be seen that the way diffraction works out in practice is like this:

Today, both sensor size and its ultimate viewing enlargement are obviously extremely important factors of usage. That enlargement of a small sensor frame also greatly enlarges the diffraction in it.

Unfortunately, it's not uncommon to hear advice like this: "Use a 'full frame' DSLR or a 35mm and f/16 should be OK. Go any smaller and you'll run into diffraction limitation problems, so f/22 or f/32 will be a waste of time." So you may hear that you should "never" go past this "limit", or sometimes it's about f/11 (DSLR size). But never say never. Yes, diffraction is bad, and minimizing diffraction is a good routine thing in general, but it's pretty harmful advice when worded as never. Limited Depth of Field can be a far worse problem, so there certainly are times when "never" can cost too much, bypassing the solution. I'm suggesting that you be aware of alternative ideas too. Stopping down is a tool provided for when it is needed. Try it sometime in difficult situations needing more, and see for yourself.

It seems quite obvious that in some situations, greater depth of field can be greatly more important than diffraction. Diffraction is not a good thing, and it does reduce maximum resolution, but in real life, there are trade-offs of different properties. Diffraction is much more subtle than Depth of Field. Very often, more depth of field can help tremendously more than diffraction hurts. See the f/40 picture at the top of this page. When it's critical, depth of field can easily win, without question. When greater depth of field is not needed, sharpness is a good way to bet. But there can be more ways of perceptual improvement than just sharpness and resolution. It seems obvious that in some situations, stopping down to f/22 or more can give better detail than f/11 results. In other situations, maybe not. The lens provides these tools to choose when we need them, when they can help us.

If and when you have a situation specifically needing more depth of field, then you can simply laugh and ignore notions about f/11 being some kind of necessary limit. Yes, sure, stopping down does increase diffraction, which is not good, and we should be aware of it. Due to diffraction, f/8 is not as good as f/5.6 either, but it is better for Depth of Field. In the cases when you can see that stopping down obviously helps so much, it seems dumb not to use it. The f/stops are provided on the lens to be used when they can help. Just try it both ways, look at the results, and decide whether the depth of field helps detail more than the diffraction hurts. When needed, it will.

Yes, certainly diffraction does hurt resolution and sharpness, a little. You do need a good reason to stop down excessively, but yes, depth of field can help, often tremendously, often more than diffraction can hurt, especially obvious when depth of field is limiting you. That is a mighty fine reason, and it is the purpose of those higher f/stops. But if you listen to the wrong information, you might be missing out on the proper tools. Try it, see what happens, believe what you can see. Don't just walk away without knowing, and without getting the proper picture.

Don't misunderstand, certainly f/5.6 and f/8 (for a DSLR) are especially good places to routinely be, when possible, when it works. Back in the 1950s, we marveled at how sharp Kodachrome slides were. And they were sharp, but some of it was that Kodachrome was still ASA/ISO 10 then, requiring about f/5.6 at 1/100 second in bright sun. That f/5.6 helped our lenses too. There is the old adage among press photographers of "f/8 and be there". 😊 However, cell phone cameras have tiny sensors requiring much more viewing enlargement, and their automation will try to prevent reaching even f/4, due to diffraction. But still, there are situations when depth of field helps more than diffraction hurts.

But depth of field can also really be a major help sometimes; results are typically poor if DOF is inadequate for the scene. When DOF is needed, there is no substitute. So try some things; try and see both choices before deciding. Don't be afraid of stopping down. Have a reason, but then that's what it's for, when it's needed, when it can help. Try a thing or two for yourself, believe what you can see, and choose your best result. My goal is to point out that sometimes, in special situations when depth of field is really poor, stopping way down can improve depth of field tremendously, which at the time is obviously greatly more important and helpful than a bit of resolution loss. Stopping down "excessively" is a great tool, when it is needed.

Let's get started now, about how Depth of Field helps.

Depth of Field

Common situations always needing more depth of field: Any close work, at very few feet. Macro work always needs more depth of field, all we can get (so stop down a lot, at least f/16, and more may be better). Landscapes with very near foreground objects need extraordinary depth of field to also include infinity (using hyperfocal focus distance). Telephoto lenses typically provide an f/32 stop, and can often make good use of it, because at distance, the span is so great. But wide angle lenses already have much greater depth of field, and maybe diffraction affects them more.

Hyperfocal distance is defined as focusing at a special intermediate distance into the desired depth of field range, so that the depth of field range includes both near and distant extremes, specifically extending from half of the hyperfocal distance to infinity. Said a more casual way, it is the focused distance at which the depth of field just reaches to infinity. Obviously stopping down will increase the depth of field to aid this effort. And obviously, the focused distance will always be sharper than infinity then, but infinity is still barely within the limits of perceived depth of field.

A good Depth of Field calculator will show hyperfocal focus distance, which does include DOF to infinity for various situations (determined by focal length, aperture, sensor size).

The practice of simply focusing on the near side of the subject typically wastes much of the depth of field range on the empty space in front of the focus point, where there may be nothing of interest. Focusing further back into the depth centers the DOF range, often more useful. We hear it said about moderate distance scenes (not including infinity) that focusing at a point 1/3 of the way into the depth range works, which is maybe a little crude, better than knowing nothing, but situations vary from that 1/3 depth (more about that Here). Macro will instead be at 1/2. These are basic ideas which have been known for maybe 150 years.


Zoom lenses offering many focal lengths make this very difficult to mark, but many prime lenses have, or used to have, a DOF calculator built into them. Prime lenses (lenses that are not zoom lenses) normally have f/stop marks at the distance scale showing the depth of field range at the various aperture f/stops. However, this tremendous feature is becoming a lost art today, because zoom lenses cannot mark this for their many focal lengths. Also, today's faster AF-S focusing rates put the marks pretty close together (the 85 mm shown still gives a DOF clue). (The "dot" marked there is the focus mark correction for infrared use.)

For an example of hyperfocal distance, the photo at right (ISO 400, f/16) is a 50 mm lens, showing focus adjusted to place the f/22 DOF mark at the middle of the infinity mark, which then is actually focused at about 12 feet, and the other f/22 DOF mark predicts depth of field from about six feet to infinity (assuming we do stop down to f/22). The DOF calculator says this example is precisely hyperfocal 12.25 feet (for full frame, 50 mm, f/22), giving DOF from 6.1 feet to infinity. Stopping down to f/22 does cause a little more diffraction, but it can also create a lot more depth of field. Sometimes f/22 is the best idea, sometimes it is not. Other focal lengths and other sensor sizes give different numbers.
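As a minimal sketch of the standard hyperfocal formula, with an assumed 0.030 mm full frame CoC and the precise f/22 value of 22.627 (those two assumptions reproduce the 12.25 feet quoted above):

```python
# Hyperfocal distance: H = f^2 / (N * CoC) + f, the standard formula.
def hyperfocal_mm(focal_mm, fstop, coc_mm):
    return focal_mm ** 2 / (fstop * coc_mm) + focal_mm

h = hyperfocal_mm(50, 22.627, 0.030)   # 50 mm at precise f/22, full frame CoC
print(round(h / 304.8, 2))             # 12.25 feet; near DOF limit is about H/2, ~6.1 feet
```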

Or other cases, not including infinity: If we instead focus this 50 mm lens at 7 feet, then the f/11 marks suggest DOF from about 5.5 to 10 feet (at f/11). The 7 feet is about 1/3 back into the DOF zone in this case. This is a full frame lens, so that DOF applies to full frame sensors. The idea of the markings (which only appear on prime lenses; zooms are too complex to mark) is to indicate the extents of the DOF range. As marked directly on the lens, it can be very handy and helpful. In the prime lens days, this is how it was done.

We cannot read the distance scale precisely, but it indicates the ballpark, generally adequate to convey the DOF idea. Depth of field numbers are vague anyway. Do note that any calculated depth of field and hyperfocal distances are NOT absolute numbers at all. The numbers instead depend on a common but arbitrary definition of acceptable blurriness (called Circle of Confusion, CoC, the diameter of the blurred point source). This CoC limit is used in DOF calculations and varies with sensor size due to its necessary enlargement. That is because CoC also specifically assumes the degree of enlargement in a specific standard viewing situation (specifically an 8x10 inch print held about ten inches from the eye, a standard viewing size which allows seeing the size of that CoC spot). If your enlargement and viewing situations are different, your mileage will vary. DOF is NOT an absolute number. Greater enlargement reduces perceived depth of field, and less enlargement increases it (changing the degree of CoC our eye can see).

And make no mistake, the sharpest result is always at the one distance where the lens is actually focused. Focus gradually and continually becomes more blurry as we move away from the actual focus point, up until the DOF math computes a CoC of some precise numerical value that is suddenly judged not acceptable (thought to become bad enough to be noticeable there, by the enlargement of the arbitrary CoC definition). Focus is about equally blurry on either side of that distance. DOF does NOT denote a sharp line where blurriness suddenly happens; it is gradual. The sharpest focus is always at the exact focused distance, but a wider range can often be good enough, within certain viewing criteria based on how well we can see it. DOF numbers are NOT absolutes. But DOF certainly can be a useful, helpful guide.


Dragging out a very old rule of thumb from the distant past, when it was considered a good trade-off combining both diffraction AND depth of field, it says, to limit excessive diffraction:

Those FL/4 limits:
600 mm → f/150
300 mm → f/75
200 mm → f/50
100 mm → f/25
50 mm → f/12.5
24 mm → f/6
12 mm → f/3
6 mm → f/1.5

Generally don't stop down to exceed f-stop number = focal length / 4.

Unless depth of field is more important. Just meaning, have a reason when you do. Diffraction is not good, but Depth of Field certainly can be more important in some situations. When it helps, go for it. If it does not turn out, no harm done, but you will often be very pleased with your try.
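As a minimal sketch, the rule is just one division, and (as discussed below) it is equivalent to requiring at least a 4 mm aperture diameter:

```python
# The FL/4 rule restated: since f/number = focal length / aperture diameter,
# a limit of FL/4 is just a minimum 4 mm aperture diameter.
def max_fstop(focal_mm, min_aperture_mm=4.0):
    return focal_mm / min_aperture_mm

for fl in (600, 300, 200, 100, 50, 24, 12, 6):
    print(f"{fl} mm -> f/{max_fstop(fl):g}")   # reproduces the table above
```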

I don't know the origin of that old rule of thumb, but here is my suspicion: You may have read about Ansel Adams' Group f.64 in the 1930s, an early purist photography group promoting the art of the "clearness and definition of the photographic image", named for the greater depth of field of f/64 (remember, 8x10 inch view cameras were popular back then). That link is interesting reading, but here's a bit more: For Ansel's 8x10 inch view camera, a "normal" lens was around 300+ mm, but he also used longer lenses. So f/64 really wasn't much of a stretch (other than exposure time of course, but their large cameras required a tripod anyway). f/64 is the third full stop past f/22 (f/22, f/32, f/45, f/64). They were seeking the greater depth of field; their longer lenses were limited at f/8. Note that hyperfocal distance can play a big part in Depth of Field too.

Since f/stop number = focal length / aperture diameter, this FL/4 rule is technically just specifying at least a minimum 4 mm aperture diameter, so diffraction caused by the aperture edge won't excessively limit resolution. Focal length / 4 mm then defines the maximum f/number. Later, when 50 mm was the "normal" lens for the popular 35 mm film, we did hear that f/11 was about the limit to be less concerned with diffraction, which about matches this rule of thumb, and the later source probably just repeated it. We can't see diffraction in normal photos; we only see the damage of the blurring. There is a photo back with the calculator that shows diffraction of a star in a telescope (of course showing extreme overexposure of the star, for astronomical purposes of seeing fainter stars), and it certainly would cover up any background detail, but our camera photos rarely look like that. We instead see a mild blur over the entire frame. I think quantifying it can only compare computed diffraction size the same way as computed CoC for Depth of Field (both just add blur, diffraction blur or out-of-focus blur). We always thought 8 mm movie film was too small, considered marginally acceptable quality, but many small digital cameras today are about that same size, necessarily using lenses of 4 or 5 mm focal length for the tiny sensor to have a normal field of view (such short focal lengths use fixed focus at hyperfocal and a wide aperture, extending Depth of Field from within one foot to infinity, with less diffraction). But sometimes we must be more concerned about depth of field. And for that purpose, 35 mm film class lenses longer than about 100 mm often offer f/32, which can be very helpful when needed. I object to advice saying we must never use some of our best tools. 😊

This old rule of FL/4 is near 100 years old (it has long been known), and film formats were larger back then. It does not take sensor size into consideration (and neither does the Airy diffraction calculation), but it is about the focal length used. If you use a tiny sensor, you must use a very short lens to see a wider field of view, to approach what we consider normal for what a camera sees. Back in that day, the view camera sensor size was common to all serious work. Today, the sensor size variations are extreme. But it is also about the relation of focal length, aperture, and diffraction. We use a lot of very tiny sensors and very short lenses today. Still, it seems a reasonable rule of thumb unless you have some other special concern. The division by 4 places a 50 mm lens very near the f/11 diffraction limit we might hear about (for full frame). That 50 mm relates to the "normal lens" for 35mm film (which was considered small in its day), but today there are many even smaller sensors. Compact camera automation rarely stops down past f/4, if even that, and is still diffraction limited (enlargement of the tiny sensor size to viewing size does not help). Today's digital sensors can be literally tiny, and the necessary greater enlargement shows both diffraction and depth of field limits larger. A DSLR sensor might be 1/10 the dimension of Ansel's 8x10 film. A compact or phone camera sensor is greatly smaller yet. Diffraction is not affected by pixels or by which sensor is attached, but the necessary sensor enlargement does affect how well we see it.

None of this is about pixels. Sensor pixels have their own resolution limit, unrelated. A tiny pixel cannot cover much area, but its small size allows many more pixels to cover the detail. Pixels never create detail; they merely try to record what is there. A greater number of pixels does not affect the sharpness that our lens can create, but a greater number of (smaller) pixels normally means greater resolution of sampled detail can be resolved from that lens image. The pixels' job is merely to digitally reproduce the analog lens image they see. The lens image is what it is, and the better the pixels can reproduce this image, the better, regardless of the detail that is there (a pristine image or one suffering diffraction).


Diffraction absolutely does happen, and it is not good, however there definitely are also times when greater depth of field can be much more important, and considerably more helpful.

Speaking of DSLRs, it is true today that f/32 can be a pretty good match for our 200 mm lenses, when needed, when it can help. It is there to be used, provided by our lens, for when it is needed. Try it (when needed); don't let them scare you off. Shying away from the best tools would be missing out on some of the best results (when applicable).

All of this is about lens diffraction; it is NOT about sensor pixel size. The role of pixels is that to resolve detail, we need enough pixels spread across the detail, which is to say, pixels small enough that it takes many to cover the detail. The more pixels, the better to resolve the finest detail. All of our images have some diffraction in them, which normally we never notice (but sure, there are worse cases). For a 105 mm lens (the tree samples below), 105/4 is f/26, so f/22 is a good try, and f/32 is possibly acceptable (again, these are 100% crops below, which is considerable enlargement). The results below show (on a DSLR sensor size) it may not be that bad when you really don't want to give up depth of field. Lenses of 100 mm or longer typically do offer f/32, because it's good stuff (at the right times, when it is needed). So when more heroic efforts are necessary to get even more essential depth of field, consider doing what you need to do to get it. If important, at least try it and see if you like it (but f/32 will slow your shutter speed considerably; there are lots of trade-offs in photography).

The rule does imply f/16 could be a reasonable sharpness concern for a 50 mm lens (a normal lens for DSLR-class cameras). That concern has been understood almost forever. But it would not be the same situation for a 200 mm lens, or an 18 mm lens either. And it is Not about pixels; diffraction exists regardless. The same diffraction affected film cameras too.


Using a shorter lens, or standing back at a farther distance, improves depth of field, but both also reduce the size of the subject in a wider image frame. Or simply stopping down the aperture offers great improvements to depth of field, which are easy and obvious to actually see.

Yes, certainly diffraction does increase as we stop the aperture down. But within reason, diffraction is a fairly minor effect, at least as compared to depth of field, which can be a huge effect. That is to say, detail suffering from diffraction is still recognizable, but detail outside the depth of field might not be there at all. Diffraction is serious, and I don't mean to minimize it, but there are times when the need for depth of field overwhelms what diffraction can do. Yes, stopping down a lot can cause noticeable diffraction, which is less good. But greater depth of field sometimes can be a night and day result, make or break. So the tools are provided for when we need to use them, when they can help.

One tool is Smart Sharpen in Photoshop (specifically its Lens Blur option). Sharpening has its limits too, but it can help. Diffraction is pretty much uniform, the same effect in all photo areas (whereas, for example, depth of field blur is not uniform; it is mild close to focus but much worse far from focus).

My goal here is to suggest that, no matter what you have heard about diffraction and limited pixel size, yes, you can still usefully stop down to f/16 or f/22 or f/32 as they are intended to be used for the goal of greater depth of field. You wouldn't always use f/22, not routinely nor indiscriminately, but in the cases when you do need it, the overall result can be a lot better. It can be a great benefit, when you need it. Yes, stopping down so much certainly does cause diffraction losses which should be considered. But Yes, stopping down certainly can help depth of field much more than diffraction can hurt. This is why those f/stops are provided, for when they can help. When needed, if they help, they help.

When you need maximum DSLR lens sharpness, do think f/5.6, or maybe f/8, if that will work for you. But when you need maximum depth of field, consider f/16, or maybe f/22, or maybe even more at times. That's what it's for, and why it is there. Sure, f/8 will be a little sharper for general use, stick with it when you can, but when you need depth of field, that's hard to ignore. So when you need extra depth of field, try stopping down, that's how the basics work. Test it, see it for yourself, and don't believe everything you read on the internet. 😊 It's good to be able to actually see and verify that which we profess to believe.

Lens resolution certainly can be limited by diffraction. The lens image has a resolution, and the digital sampling reproduces it. Pixel resolution simply tries to reproduce the image that the lens created. This matters less if we necessarily resample much smaller anyway, for example to show a 24 megapixel image on a 2 megapixel HD video screen, or to print a 7 megapixel 8x10 inch print. Today, we typically have digital resolution to spare.

This is a (random but typical) lens resolution test from OpticalLimits.com. They have many good lens tests online, tests which actually show numbers. This one is a 24 mm lens, and the red lines are drawn by me. Lenses do vary in degree (expensive vs. inexpensive is a factor), but in general, all lenses show similar characteristics. It shows maximum resolution at frame center (blue) and at frame edge (red).

The aperture when wide open is softer (optical aberration issues in the larger glass diameter), but resolution typically increases to a maximum peak when stopped down a couple of stops (not necessarily f/5.6, but two stops down is half the aperture diameter, avoiding the difficult outer zones of the glass). The border sharpness can be a little worse (frame edges are at a larger distance from the center of the lens).

Then resolution gradually falls off as it is stopped down more, due to increasing diffraction as the aperture becomes small. Yes, we can assume f/16 and f/22 get worse, on the same slope.

But lens resolution typically falls off past about f/5.6, due to diffraction, regardless of any so-called "limit" around f/11 where the diffraction size number passes the pixel size number. The falloff starts earlier than this f/11 notion, and it is about diffraction, not about pixels. The edge of the aperture hole bends, or diffracts, the light passing very near it, causing blurring. The clear center area is unobstructed, but a tiny hole is nearly all edge. Diffraction causes a blurring loss of the smallest detail (a loss of maximum resolution), caused by the smaller aperture diameter. The term "diffraction limited" is usually a good thing, used to mean: "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited", meaning as good as it is possible to be. However, stopped-down lens apertures do limit resolution more, affecting the smallest detail the lens can reproduce. Still, the real world is that we often have resolution to spare, to trade for depth of field. Stopping down can be a big benefit when it is needed.

We don't need to mention pixels. And f/22 might not always be a good plan for a short lens (or any lens), but it is not always bad either; the detail depends on image size. Subject magnification is a factor of detail (more below). Focal length magnifies the subject detail, so a longer lens can often benefit greatly from the increased depth of field at f/22 or even f/32. It is why macro and longer lenses normally provide f/32; it is an important and very usable feature.

Next is what the aperture in a short lens looks like (the lens is 3.75 inches or 95 mm in diameter). We are looking into the lens front element. The large glass diameter is needed to provide the wide field of view of the short lens. A line drawn from the glass edge, through the tiny aperture, reaches the opposite frame corner.

f/2.8
A 14-24mm lens, at 14mm f/2.8
(aperture computes 5 mm diameter)
f/22
A 14-24mm lens, at 14mm, f/22
(aperture computes 0.63 mm diameter)

The definition is: f/stop number = focal length / aperture diameter. This definition makes the same f/stop number give the same exposure on all lenses (a small sketch follows the examples below).

f/22 on a 14 mm lens has an aperture diameter of 14/22 = 0.63 mm. That is a tiny hole, which causes trouble. f/5.6 is sharper.
f/22 on a 50 mm lens has an aperture diameter of 50/22 = 2.2 mm. Borderline small, but rather bearable when it helps DOF.
f/22 on a 105 mm lens has an aperture diameter of 105/22 = 4.8 mm, much more reasonable, a piece of cake.
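The same arithmetic as a short sketch (Python, my illustration; values are rounded, so 14/22 prints 0.64 rather than the truncated 0.63 above):

# Aperture diameter from the definition above:
# f/stop number = focal length / aperture diameter, therefore
# aperture diameter = focal length / f/stop number.

def aperture_diameter_mm(focal_length_mm, fstop):
    return focal_length_mm / fstop

for fl_mm in (14, 50, 105):
    d = aperture_diameter_mm(fl_mm, 22)
    print(f"{fl_mm} mm lens at f/22: aperture diameter {d:.2f} mm")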

Yes, stopping down does cause greater diffraction, which limits the smallest detail we can see. The larger diffraction hides the smallest details in the lens image which might otherwise be seen, which is normally about sharp edges on details. This diffraction is a property of the lens aperture diameter, and is true regardless of pixel size (it was always true of film lenses too). Combining the other regular optical problems normally reduces resolution below this theoretical diffraction limit anyway. We don't need pixels to know that, but the pixel notion is that trouble starts when the Airy disk size exceeds the size of a pixel (or really two pixels for Nyquist, or really four pixels for Bayer, which is really eight pixels for Nyquist again, or really even more pixels because of the random misalignment of Airy disks on pixels). However many pixels we decide matters, the resolution of those pixels is limited by the larger diffraction disk size and coarseness. The pixel is certainly not the problem though; the only problem is that the diffraction disk is large. Small pixels resolve everything better, including the diffraction. It's too late to worry about pixels anyway; the diffraction has already occurred, it is what it is. The best job the pixels can do is to reproduce what they see. The pixel analogy is like not wearing your glasses to inspect your image: not seeing a problem is not the same as improving the diffraction. 😊 Pictures of faces or trees or mountains are larger than a pixel anyway, so this does not mean all is lost. The diffraction issue is NOT about pixels. The pixel size (hopefully small) is already the smallest possible detail, and the diffraction is already what it is.


Comparison of Pixel Size with Diffraction

I am using Nikon's FX label for full frame sensors, and DX for 1.5x crop sensors.

To explain this next situation shown, these are the original images from which the 100% crops are taken. Two cameras: the D800 full frame with 36 megapixels, 35.9x24 mm sensor, and 205 pixels per mm density; and the D300 DX with 12 megapixels, 23.6x15.8 mm sensor, and 181.7 pixels per mm density (larger pixels). Both are ISO 400, both using the same 105 mm VR AF-S lens, both on the same tripod at the same spot. Full frame is the first wider view, and the cropped sensor crops the lens view smaller, which makes it look closer up when enlarged to the same size. The two frames are shown the same size here, so the smaller image is enlarged more than full frame (but both were the same lens image, from the same lens). Point is, both had the same crop box in ACR; both marked crops are about 12% of the frame width in mm. Sharpening can always help a little, but there was no processing done on these. There was a slight breeze to wiggle a few leaves. Shutter speed at f/32 got slow, around 1/40 second.


Full Frame sensor, 1x crop factor
 

APS-C crop 1.5x sensor, 1.5x crop factor

The DX image is 2/3 (67%) the size of FX before its greater enlargement to the same size. Its sensor had larger pixels, and its 1.5x greater enlargement reduces relative resolution to 1/1.5 = 67%.

The point of these next 100% crops (a tiny central area, cropped as shown here) is not just to show depth of field, which is impressive, but we already know what to expect of that. It is to show there is no greatly feared diffraction limit around f/11 or wherever (an exception being tiny cell phone sensors needing much enlargement to view). There is no large step representing any limit around f/11, or anywhere. Sure, f/8 is often better (because of diffraction), and sure, diffraction does increase, but sure, you can use f/16 and f/22, maybe f/32, because, sure, it can often help your picture. Diffraction does continually increase as the lens is stopped down, but that is about the aperture; it is not about pixel size. This is the same 105 mm lens in both, and yes, we might debate about f/32, but it certainly does increase depth of field. Any diffraction would be much less visible if the full image were displayed resampled to a smaller size instead of shown at 100% size. But obviously there is no reason to always fear some limit at f/11, if the depth of field can help more than the diffraction hurts. You can do this test too.

These are 100% crops of a tiny area. Both crops are the same, 12% of frame width in mm. Full frame is a wider view in a wider frame.
Specifically, the full frame sensor is 35.9x24 mm, and this crop is 613x599 of 7360x4912 pixels, or 1% of total full frame pixels.
The DX sensor is 23.6x15.8 mm, and its crop is 357x347 of 4288x2848 pixels, or 1% of its total pixels.
These 100% crops are ENLARGEMENT. At this scale, this uncropped full frame would be about 6 feet wide on a monitor (assuming a 1920 pixel monitor 20 inches wide, i.e., 96 dpi). The full DX image would be nearly 4 feet wide.
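That enlargement arithmetic is simple to check (a sketch, assuming the 96 dpi monitor mentioned above):

# On-screen width of a 100% view (one image pixel per screen pixel),
# assuming a 96 dpi monitor as described above.

for name, width_px in (("FX D800", 7360), ("DX D300", 4288)):
    inches = width_px / 96.0
    print(f"{name}: {inches:.1f} inches = {inches / 12:.1f} feet wide at 100%")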

Distances: The near tree and focus are 20+ feet, so that will always be the sharpest point. The light pole is at about 250 feet, the power wires are at 1000 feet, so there is absolutely no question about stopping down improving Depth of Field.


Nikon full frame D800, 1x crop, 35.9x24 mm, 36 megapixels, 7360x4912, 105 mm lens, CoC 0.03 mm, 100% crop, 12% of the original frame width in mm (599 pixels).
Frame Width: 7360 pixels / 35.9 mm sensor = 205 pixels per mm density
(maximum resolution is half of the 205 pixels per mm, or 102.5 lp/mm, at any f/stop).

The diffraction limit based on the CoC size is f/22.31, with Airy resolution of 66.7 lp/mm there. This larger FX CoC is 6.1 pixels, but the lower resolution due to x is still only about 1x the CoC. It doesn't need as much enlargement. On the screen of a 23-inch 96-dpi monitor, this crop's enlargement is 181 mm wide / (35.9 mm x 0.12) = 42x, so on-screen resolution is at most 102.5/42 = 2.4 lp/mm at any f/stop.


Nikon DX D300, 1.5x crop, 23.6x15.8 mm, 12 megapixels, 4288x2848, 105 mm lens, CoC 0.02 mm. 100% crop, 12% of the original frame width in mm (357 pixels).
Frame Width: 4288 pixels / 23.6 mm sensor = 181.7 pixels per mm density
(maximum resolution is half of the 181.7 pixels per mm, or 90.8 lp/mm).

The diffraction limit based on the CoC size is f/14.67, with Airy resolution of 101.6 lp/mm maximum there, but the sensor pixels limit it to 90.8 lp/mm. On the screen of a 23-inch 96-dpi monitor, this crop's enlargement is 105 mm wide / (23.6 mm x 0.12) = 37x, so on-screen resolution is at most 90.8/37 = 2.5 lp/mm at any f/stop.
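For anyone wanting to check such figures, here is a sketch of how these numbers can be computed. My assumptions for illustration: green light of 0.00055 mm wavelength, Airy radius x = 1.22 × wavelength × f/stop, and the CoC-based limit taken as the f/stop where the Airy disk diameter (2x) equals the CoC. Small differences from the numbers above come down to the exact wavelength and rounding used.

# Sensor density, pixel resolution limit, and a CoC-based diffraction
# limit, under the stated assumptions. Results land near the page's
# figures; exact matches depend on wavelength and rounding choices.

WAVELENGTH_MM = 0.00055  # green light (assumption)

def sensor_report(name, width_px, width_mm, coc_mm):
    density = width_px / width_mm                     # pixels per mm
    pixel_lpmm = density / 2                          # Nyquist: 2 pixels per line pair
    n_limit = coc_mm / (2 * 1.22 * WAVELENGTH_MM)     # f/stop where Airy diameter = CoC
    airy_lpmm = 1 / (1.22 * WAVELENGTH_MM * n_limit)  # Airy-limited lp/mm at that f/stop
    print(f"{name}: {density:.1f} px/mm, pixel limit {pixel_lpmm:.1f} lp/mm, "
          f"CoC-based limit about f/{n_limit:.1f} (Airy {airy_lpmm:.1f} lp/mm)")

sensor_report("FX D800", 7360, 35.9, 0.03)
sensor_report("DX D300", 4288, 23.6, 0.02)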

Both sensors use the same lens at the same distance. This is the same frame crop, but the 1.5x crop factor sensor is simply smaller, and normally has to be enlarged more to view at the same size (but not enlarged here).

It should be clear to see that the depth of field in the f/32 images has a very different kind of advantage over the f/5.6 images, which are good in their own way too. Which is best (f/5.6, f/32, or an f/16 or f/22 compromise) really depends on the scene and what it needs. There is no one hard rule for all scenes.

The tree enjoys being the focus point, and we can tell diffraction is increasing when stopped down, but I don't see that these numerical limits are a factor. The CoC limits of depth of field seem more important, except these are enlarged here very much more than CoC expects. But the pixels are not the cause. I think it is good to actually see what we claim to believe. The lens is the same lens in both cases. The full frame has 36 megapixels, with smaller pixels than the 12 megapixel DX. So the limit formula based on pixels assigns the FX the lower diffraction limit (it computes worse). However, the results show it in fact performs better. That's because of sensor size and enlargement differences, but the formula doesn't know about that. Actually, the Airy formula doesn't know about pixels either. It says f/32 is f/32.

So the question is, do you want more Depth of Field, or not?

The images tell it, but here are Depth of Field values from the DOF calculator: subject at 20 feet, and background 1000 feet behind it (a sketch of the standard formulas follows the tables). So f/32 is not quite as sharp due to diffraction (again, this is an enlarged 100% view), but the DOF improvement is dramatic. Do you need that or not? In this case, even the best f/32 Depth of Field does not extend past about 42 or 31 feet, and focus remains at less than hyperfocal (DOF does not reach infinity). However at f/32, the background CoC (S, at 1000 feet here) becomes only around 2x larger (FX) than the DOF CoC limit at the 42 or 31 feet (more detail at the calculator). Not quite to full DOF this time, but pretty close. We can see the DOF looks pretty good, and if DOF is needed, I call that better, a lot better. Note this 100% crop is greatly enlarged here (depending on your screen size), several times larger than the DOF CoC formula has imagined.

105 mm 36x24 mm FX at 20 feet
f/stop   DOF               Hyperfocal   Background CoC
5.6      18.3 to 22 ft     213.5 ft     10.6x
8        17.7 to 23 ft     151 ft       7.5x
11       16.9 to 24.5 ft   106.9 ft     5.3x
16       15.9 to 27.1 ft   75.7 ft      3.7x
22       14.6 to 31.7 ft   53.6 ft      2.7x
32       13.1 to 41.8 ft   38 ft        1.9x

105 mm 24x16 mm DX at 20 feet
f/stop   DOF               Hyperfocal   Background CoC
5.6      18.8 to 21.3 ft   320 ft       15.9x
8        18.4 to 21.9 ft   226.4 ft     11.2x
11       17.8 to 22.8 ft   160.2 ft     8x
16       17 to 24.2 ft     113.4 ft     5.6x
22       16.1 to 26.5 ft   80.3 ft      4x
32       14.8 to 30.7 ft   56.9 ft      2.8x
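The standard formulas behind such tables are easy to sketch: hyperfocal distance H = f²/(N·CoC) + f, then the near and far limits from H and the subject distance. The calculator's exact CoC may differ slightly (for instance 0.0303 vs 0.03 mm), so expect small differences from the table values.

# Standard Depth of Field formulas, as typically used by DOF calculators.
# Numbers will be close to, but not exactly, the table values above,
# depending on the exact CoC assumed.

MM_PER_FOOT = 304.8

def dof_limits_ft(focal_mm, fstop, coc_mm, subject_ft):
    s = subject_ft * MM_PER_FOOT
    f = focal_mm
    hyper = f * f / (fstop * coc_mm) + f          # hyperfocal distance, mm
    near = s * (hyper - f) / (hyper + s - 2 * f)  # near DOF limit, mm
    far = s * (hyper - f) / (hyper - s) if s < hyper else float("inf")
    return near / MM_PER_FOOT, far / MM_PER_FOOT, hyper / MM_PER_FOOT

# FX case above: 105 mm lens, CoC 0.03 mm, subject at 20 feet
for n in (5.6, 8, 11, 16, 22, 32):
    near, far, hyper = dof_limits_ft(105, n, 0.03, 20)
    print(f"f/{n}: DOF {near:.1f} to {far:.1f} ft, hyperfocal {hyper:.1f} ft")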

Depth of Field can be confusing on cropped sensors. We do routinely say that in usual practice, the cropped cameras (with their shorter lenses) see greater DOF than a larger sensor. But this example was Not usual practice. If standing in the same place, normally we would substitute a shorter lens (equivalent view of 105 mm / 1.5 crop = 70 mm) on the DX body to capture the same view. The expected shorter lens is what helps small-sensor DOF, but we didn't do that here. Or, if using the same lens, DX has to stand back 1.5x farther to see the same field of view, and that greater distance helps DOF in usual practice. But we didn't do that either. We just stood in the same place at 20 feet with the same lens, so DX saw a smaller cropped view (see the first uncropped image views just above). And so here, the only difference is that the smaller DX sensor still has to be enlarged 1.5x more to compare its images at the same size as FX. Greater enlargement hurts DOF, which is why sensor size is a DOF factor. So the DOF numbers are correct (for the assumed standard 8x10 inch print size).

Degree of enlargement is a big factor. The same two f/32 images above are repeated below, with the smaller DX image enlarged more so it will view at the same size as FX now. FX D800 first, then DX D300 next. Both are the same 105 mm lens on the same tripod in the same spot. But the DX looks telephoto because its sensor is smaller (it sees a smaller cropped view), so it needs to be enlarged more here (done below), which also enlarges the diffraction too. FX is still shown at about 100%, and DX is shown larger than 100%. We would not normally view these this hugely large (the uncropped frames were 7360x4912 and 4288x2848 pixels), so a smaller view would look better than this.

The FX D800 is 36 megapixels, and the DX D300 is 12 megapixels, so the DX pixels are slightly larger, about 13% larger in this case. That may hurt resolution, but it does not affect lens diffraction. However, what we can see is that the smaller DX sensor cropping does require half again more enlargement to reach the same size as FX (not done above). That shows the diffraction larger too. Normally we think of DX having more depth of field than FX, however that assumes DX with the same lens would stand back 1.5x farther to show the same image view in the smaller frame. We didn't here. Everything was the same here (except DX has to be enlarged half again more, below).

f/32 images using same lens, shown differently enlarged to be same size result

FX
Full frame f/32, 205 pixels/mm - smaller pixels, slightly higher resolution,
but same lens allegedly affecting f/32 diffraction more?
DX
Crop 1.5x f/32, 182 pixels/mm - larger pixels, slightly lower resolution,
but same lens allegedly affecting f/32 diffraction less?
The 1.5x crop sensor is 2/3 the size, so 50% greater enlargement, and the permissible CoC limit for Depth of Field is 2/3 the size of full frame.

Again, this is the same lens on both cameras, both standing in the same spot (on same unmoved tripod), both enlarged to same size here. The real difference is the sensor sizes. The actual difference in the 100% crops is the sensor pixel density, but to compare here at same size, the smaller image is enlarged 1.5x more, reducing its relative sensor density to 2/3, or 121 pixels/mm (larger pixels, less sampling resolution). Smaller pixels are simply greater sampling resolution, always good to show any detail present in the lens image.

But the enlargement is necessarily different (see same enlargement just above). Enlarging DX half again more is the necessary hardship, but that is what normal practice always has to do for smaller sensors. It's unfair to compare to FX if we don't compare the same image. But Depth of Field is often more important than diffraction.

Here they are repeated with both at the same enlargement, both the same 100% crops at f/32. These crops are both the same 12% of frame width in mm, but yes, the pixel widths are 613/357, which is 1.7x instead of 1.5x, simply because the FX was 205 pixels/mm instead of 182 pixels/mm (higher resolution in the same mm, but smaller pixels, which were supposed to have some bad effect on diffraction?)

Same images, shown Equivalently enlarged to preserve original size ratio

FX
Full frame f/32, 205 pixels/mm - smaller pixels, slightly higher resolution,
but allegedly affecting f/32 diffraction more?
DX
Crop 1.5x f/32, 182 pixels/mm - larger pixels,
slightly lower resolution,
but allegedly affecting f/32 diffraction less?

Diffraction can often be reasonably helped by post processing sharpening (but none was done above). Just don't overdo sharpening.

For any lens at f/32, the Airy disk radius calculation is x = 0.02147 mm (for green light). These tree leaves are not point sources, and we won't see disks like stars would show (with enough magnification). The calculated radius x is the minimum resolvable spacing, and so 1/x is the maximum analog resolution in line pairs per mm, which at f/32 is reduced to a 46.6 lp/mm maximum. Each pair requires at least 2 pixels to digitally resolve the dark and light lines of a pair. More pixels would be better sampling, but pixels larger than 1/2 x could not resolve even this diffraction. Smaller and more pixels make it easy. Easy does not mean we don't have diffraction; pixels don't affect what the lens does. Easy just means our better sampling density can resolve whatever is there.

Numbers: In this 36 megapixel FX, the pixel pitch is 35.9 mm / 7360 pixels = 0.004878 mm/pixel, so x = 0.02147 mm at f/32 is 4.4 pixels (2x is 8.8 pixels). And 2x is 43% larger than the FX 0.03 mm CoC, which is 6.15 pixels (CoC is used to compute DOF sharpness). The maximum f/stop limit is f/14.5, but at f/32, 1/x is 46.6 lp/mm maximum resolution. The sensor has 7360/35.9 = 205 pixels/mm (pixel resolution), which is 102 lp/mm maximum. So this is definitely diffraction limited: the diffraction's 46.6 lp/mm is worse than the sensor's 102 lp/mm.
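Those numbers are easy to verify (a sketch under the same green-light assumption; the f/14.5 "limit" here appears to be the f/stop where the Airy radius x first spans two pixels):

# Checking the FX numbers above. Assumption: green light, so
# Airy radius x = 1.22 * 0.00055 mm * f/stop.

WAVELENGTH_MM = 0.00055

pitch_mm = 35.9 / 7360                 # pixel pitch, about 0.004878 mm
x_mm = 1.22 * WAVELENGTH_MM * 32       # Airy radius at f/32, about 0.02147 mm

print(f"x = {x_mm:.5f} mm = {x_mm / pitch_mm:.1f} pixels (2x = {2 * x_mm / pitch_mm:.1f} pixels)")
print(f"1/x = {1 / x_mm:.1f} lp/mm vs pixel limit {7360 / 35.9 / 2:.1f} lp/mm")

# f/stop where x first spans two pixels (the f/14.5 limit above):
print(f"x = 2 pixels at about f/{2 * pitch_mm / (1.22 * WAVELENGTH_MM):.1f}")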


This is the 105 mm f/32 (DX, crop 1.5), sharpened a bit. The diffraction limit calculates f/16 resolution or f/14 CoC, but f/32 seems pretty decent, and the Depth of Field is obviously superior. Even the distant power wires are visible now.
In this 12 megapixel DX at f/32, x is 3.9 pixels (2x is 7.8 pixels), and x is 7% larger than the DX 0.02 mm CoC. This sensor resolution is 4288 pixels / 23.6 mm = 182 pixels/mm, or 91 lp/mm resolution. The f/stop limit is f/16.4. However, the diffraction in the (same) lens at f/32 is the same 46.6 lp/mm. So it's the same lens and same diffraction; the difference is that these pixels are just a little larger. Sure, it is diffraction limited, but IMO, it's really not so bad. This image is that full DX f/32 image at 1/8 size. The f/32 Depth of Field is sufficient to even show the thin distant power lines at 1000 feet (by the light pole, where the leaf hangs down), even at 1/8 size. Sometimes sharpness can seem relative, but Depth of Field is a very strong factor when you need more.

For some 24 megapixel DX (255 pixels/mm, 3.917 micron pixels), x is 5.5 pixels, and x is 107% of the DX 0.02 mm CoC.

Comparing the diffraction size to the diameter of the Depth of Field Circle of Confusion seems practical. CoC is an existing standard of sharpness (see CoC on the Depth of Field page). CoC is the DOF maximum limit of blurriness to be tolerated, for example defined as 0.02 mm maximum diameter on some sensors, considered the limit of DOF acceptability. This maximum CoC limit is often 4 to 6 pixels in diameter on today's DSLR. One pixel is not likely very important, and certainly one pixel cannot sample anything well. The diffraction Airy disk also has a diameter in mm. The DOF CoC is often routinely larger than the Airy disk. Both always exist, and both vary. Stopping down does hurt with greater diffraction, but it helps Depth of Field more. If it were otherwise, diffraction would blur anything depth of field could do, and we see that is obviously not true. Neither is necessarily a big problem until reaching some more extreme limit. Both become important when the blur diameter is enough pixels for us to see it (which is more than one pixel).

How x compares to the CoC seems significant (regarding its visibility), but how x compares to a pixel is not significant, unless the pixel is larger than 1/2 x, in which case the pixel will be the limiting factor of resolution. But the pixel size does not change; diffraction is what increases. Two pixels across x is the minimum to resolve what's there (and more, smaller pixels would be better in that way). However, it is x that measures the diffraction. And Depth of Field is also another concern of sharpness.

Both of these FX and DX versions are way larger than the 8x10 inch print (203x254 mm) standard for comparing Depth of Field CoC. This 36 megapixel FX image printed at 250 dpi would be 29.4x19.6 inches (748x499 mm), and the 12 megapixel DX image at 250 dpi would be 17.1x11.4 inches (436x289 mm).

A large Airy disk does limit lens resolution; it's certainly not fully sharp at f/32 (but it obviously did not cut off and die either). And while we are aware of diffraction, we do normally still have quite a bit of resolution left, possibly adequate, very possibly good enough this time (this was after all a 100% crop), for some cases and some uses. Our final use is likely resampled much smaller than this extreme 100% crop view. You could say we have sufficient sampling resolution to even show the diffraction clearly. The digital job is to reproduce whatever the lens creates, and more pixels help do that better.

But in this case, f/32 also creates awesomely better depth of field, if that is what's needed. And in various situations, we could decide that is overwhelmingly important, that it makes all the difference, perhaps make or break. Or maybe sometimes not, when we may prefer to back off on the f/stop, if that still works. If we do need really maximally sharp results at f/5.6 or f/8, then we know not to set up a situation needing extreme depth of field. It's a choice, it depends on goals, and we can use the tools we have to get the result we want. Photography is about doing what you have to do to get the result that you want. The alternative is Not getting what you want. But do realize that you have choices.


More images (maybe too many) are on next two pages


Copyright © 2014-2024 by Wayne Fulton - All rights are reserved.
