We read on the internet how our digital cameras can become "diffraction limited" as we stop down, usually worded as the diffraction becoming larger than our digital sensor's pixel size. But it's not about the pixels; they don't change. It's about the diffraction present. A true point source, when magnified (a star seen at high power in a telescope is a good example), shows as a larger diffraction disk of concentric rings called an Airy disk (see below). Stopping down the aperture makes the lens diffraction become larger. Becoming larger, it covers and hides the true detail, reducing maximum resolution. So we hear how our camera lens has an aperture limit, perhaps a DSLR lens stopping down past around f/11 for example, due to increased diffraction when stopping down that far; a limit when diffraction becomes larger than our pixels. That gets computed in terms of our pixel size (as if that's the problem, though the pixels were always fine before), instead of in terms of the diffraction size (which is the actual problem). That absolutely does Not mean we need larger pixels; that's less resolution too. Less diffraction would be good, however.
That "limit" notion comes from the fact that the pixel is indeed the smallest dot that digital sampling can reproduce. The pixel size is a limit on how much resolution the digital sampling can reproduce from the analog lens image. A pixel is the smallest dot possible, even if resolution is normally expressed in "line pairs", which takes two pixels. Of course, the pixel was always this same size; it's what our camera can do, and we've been happy with it. But there is always some degree of diffraction present, and lenses do test lower at f/8 than at f/5.6, and lower again at f/11 than at f/8, so as a limit, diffraction is a little fuzzy. And yes, stopping down considerably does make the diffraction become larger, limiting resolution even more. But some do seem to get excited after it hits that computed pixel limit at f/11 (or wherever). The problem is that the diffraction is larger, not that our pixel size is any concern then. Our pixels continue to do exactly what they always did, still resolving the detail they can. But larger diffraction can hide more of the real scene image data, reducing maximum resolution. That seems obvious: the resolution does become limited by diffraction then. But then it seems rather a moot point about any pixels. Sure, geeks can compute pixel size, but it is Not a practical factor, at least not when the diffraction is already worse. Diffraction had exactly the same effect in film cameras. Any problem is NOT about the pixel; the problem is that the diffraction grew larger. There is a distinction then: the name of the resolution limit can change from pixels to diffraction, so the detail changes, but the pixels still resolve any detail like they always did.
But unfortunately, of course sometimes this gets stated that we should "Never" stop down past this computed "limit". The warnings get overdone, or at least worded poorly, if making no exceptions. We hear it said that we should Never consider stopping down more than say f/11 if we become diffraction limited there. Those who believe the warning about "never" can be turned away from an important tool. I'm not a fan of that advice, it seems unhelpful, at least when worded "never" or as a "limit". It seems better to go by the results we can actually see. Of course, there are "ifs and buts" about everything, and that advice does NOT mean stopping down more should never be used. At least take never with a grain of salt, because while it may be good routine advice, still sometimes very good things can happen when we intentionally go past that "limit". Yes, diffraction is bad, and yes, we certainly should be warned that stopping down does increase diffraction which lowers lens resolution. But sometimes we do have enough to spare a bit of it, and we should also know that stopping down does increase Depth of Field, which at times can be much more important and extremely helpful. Stopping down can tremendously improve depth of field, which at times, can help tremendously more than diffraction can hurt (speaking of when more depth of field is seriously needed). Diffraction is not at all a good thing, but within reason, it's rarely a complete disaster. But it's not uncommon for insufficient depth of field to sometimes be that complete disaster. It depends on if you need more depth of field or not. My suggestion is that when more depth of field is needed, do try stopping down, and see what happens, and then believe what you can see. It's basic principles, and a great tool to know about.
"Diffraction limited" has two meanings. For a lens or telescope, it means its image quality is so good that it is limited only by theoretical diffraction limits. It's a compliment then. But for our digital cameras, "diffraction limited" means this: Normally, when we are comfortably back at say f/5.6, and diffraction is not even a thought, then the normal maximum resolution we see is limited by the sampled pixel size (specifically, how much of the analog lens image resolution our digital sensor sampling can reproduce). The final resolution result is the lesser of what the lens can do due to diffraction, or what the digital sampling can reproduce. Until we had more megapixels, the limit was the sensor, which needed the anti-aliasing filter (installed on the sensor to blur away the smallest lens detail that the sensor cannot reproduce, so that it won't cause false moiré artifacts). Today, we have more megapixels, and some of the anti-aliasing filters are being omitted. But still, as we stop down considerably, the lens does lose resolution due to increased diffraction.
With fewer megapixels, we routinely assume sensor-limited resolution is normal and OK; or at least, there is no other choice with that camera. More megapixels is always a good thing; the smaller pixels increase the sampling resolution. But the smaller pixels also lower this calculated diffraction limit (expressed in relative terms of the pixel size), and we're told to get excited if the diffraction size passes that pixel limit a little. The actual problem is the diffraction; however, even then, at times the depth of field improvement that stopping down allows can make all the difference.
Here is a quick real-world example of actual evidence we can see. These are ruler markings showing 1/16 inch rulings (about 1.6 mm apart, shown larger here).
The f/40 may seriously improve a difficult problem this time. :) Both pictures are D800 FX, 36 megapixels, and a 105 mm macro; the only difference is aperture. Both are cropped to about 1/3 frame height, and then resampled to 1/4 size here. The "limit" computes as f/14.5, so f/11 should be fine, and f/40 supposedly not even imaginable. F/40 is definitely past any limit, and the diffraction must be somewhat worse at f/40, although I can't say I can see it here, and f/11 is no better. I do see the tremendous improvement in depth of field from stopping down. Diffraction is much more subtle than Depth of Field. This may not be the sharpest image I ever saw, but it's much better than the f/11 alternative here, and it seems very acceptable for what it is. The f/40 is obviously what made it acceptable. Stopping down is a good tool when needed.
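That f/14.5 "limit" can be verified with a few lines of arithmetic. This is a sketch only; the D800 sensor width (35.9 mm) and horizontal pixel count (7360) are assumptions not stated above, and the "limit" convention used is the one described later in this article: the Airy first-minimum radius x = 1.22 λ N spanning two pixels (one line pair).

```python
# Sketch: the computed "diffraction limit" f/stop for the D800 example.
# Assumed values: FX sensor width 35.9 mm, 7360 pixels wide, green light 550 nm.
sensor_width_mm = 35.9
pixels_wide = 7360
wavelength_mm = 0.00055          # green light, the eye's peak sensitivity

pixel_pitch_mm = sensor_width_mm / pixels_wide          # about 0.0049 mm
# "limit" where Airy radius x = 1.22 * wavelength * N spans two pixels:
limit_fstop = (2 * pixel_pitch_mm) / (1.22 * wavelength_mm)

print(f"pixel pitch: {pixel_pitch_mm * 1000:.2f} um")   # ~4.88 um
print(f"computed 'limit': f/{limit_fstop:.1f}")         # ~f/14.5
```

Which reproduces the f/14.5 figure quoted, again without the pixels themselves being any actual problem.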
The number f/40 is possible here because macro lens f/stop numbers increase when focused up close, because focal length increases then. Typically at 1:1 macro, all marked f/stop numbers increase two full stops; f/32 would become f/64 at 1:1. This case was milder, at only f/40. Modern macro lenses using internal focusing can show slightly less than the full two-stop change at 1:1.
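The two-stop change at 1:1 follows from the standard bellows-factor relation, effective N = marked N × (1 + magnification), which holds for a simple symmetric lens (pupil magnification 1, an assumption; internal-focus macros deviate somewhat, as noted above):

```python
def effective_fstop(marked_fstop, magnification):
    """Effective f/stop at close focus for a simple symmetric lens.
    Bellows factor: N_eff = N * (1 + m)."""
    return marked_fstop * (1 + magnification)

# At 1:1 (m = 1), the f/number doubles, which is exactly two full stops:
print(effective_fstop(32, 1.0))    # 64.0
```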
Sure, holding at f/5.6 or f/8 is generally very desirable, and holding there is good routine advice (speaking of DSLR sensor class use), when it works. Stay there when it works. But when there is insufficient depth of field for the best picture, the game has to change, if you want results.
Sure, certainly f/40 is very extreme, certainly it's not perfect, and not always ideal. But sometimes it's wonderful, when depth of field helps far more than diffraction hurts. It can solve serious problems. When we need more depth of field, falsely imagining that we ought to be limited to f/11 can be detrimental to results. Use the tools that are provided, when they will help. Try it, and believe what you can see.
However, f/40 does also require four stops more light and flash power than f/10. :)
But nothing blew up when we reached f/16.
My strong suggestion is that when you need more depth of field, you should ignore any silly notions, and of course TRY stopping down. Then look at your result, and believe that. That is the standard solution, and is why that capability is provided, and you'll likely like it a lot (when appropriate). Certainly I don't mean routinely every time - because diffraction does exist, which we generally want to avoid, so do have a need and a reason for stopping down extremely. But needs and reasons do exist. Don't abuse it when not needed. Yes, routinely staying under f/11 is certainly a fine plan to reduce diffraction, when you can, when it works, but when it won't do the job, stopping down creates greater depth of field, which can be a tremendous improvement when needed. Photography is the game of doing what we see we need to do, when it actually helps.
Stars in the night sky are tens to millions of light years away, and so from Earth they should appear as zero-diameter point sources. However, at high optical magnification in telescopes, we see a star not as the smallest point source, but as a larger Airy disk, which is an artifact of the telescope's diffraction. The Airy disk diameter inversely depends on aperture diameter (half the aperture diameter creates twice the Airy disk size). The ability to separate and resolve two close points (two touching Airy disks) depends on Airy disk diameter (how much they overlap each other), which depends on aperture diameter, as seen through focal length magnification (twice the focal length shows twice the separation distance).
Telescope users know that telescopes with a larger diameter aperture have better resolution due to less diffraction. That smaller Airy disk diameter can better resolve (separate) two very closely spaced stars, to be seen and distinguished as two close stars instead of as one unresolved blob (blurred together). Known double star pairs are the standard measure of telescope resolution.
The diffraction disk is nothing new; it is named for George Airy's work in 1834, but this resolution formula is the Rayleigh criterion (Lord Rayleigh, 1879), as θ = 1.2197 λ/(2r), where 2r is the aperture diameter. Wikipedia shows the current derivation of that minimum separation to resolve two star points (see example of "resolving", called the Rayleigh criterion). It does not compute a lens's actual resolution, but it is a useful approximation of the theoretical maximum. Human color vision sees wavelengths from 400 nm to 700 nm, peaking strongly in the center at green, at wavelength λ of about 0.00055 mm (550 nm). A nanometer is a billionth of a meter (1E-9 m), so 550 nm is 0.55 micrometers.
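As a quick numeric sketch of the Rayleigh criterion above (the 100 mm telescope aperture is just an example value, not from the article):

```python
import math

def rayleigh_theta_rad(wavelength_m, aperture_diameter_m):
    # theta = 1.22 * lambda / D, with D = 2r, matching theta = 1.2197 lambda/(2r)
    return 1.22 * wavelength_m / aperture_diameter_m

# Green light (550 nm) through a 100 mm aperture:
theta = rayleigh_theta_rad(550e-9, 0.100)
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.2f} arcseconds")    # ~1.38 arcsec
```

Doubling the aperture diameter halves θ, which is exactly why telescope users want larger apertures.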
In this Airy formula, x is the minimum separation needed to resolve two of these points. x is the spacing, and 1/x is the resolution. The combination f/d is the f/stop number. This is an analog image; nothing about it involves pixels. The minimum separation x increases directly with f/stop number. Focal length also affects the magnification of the subject detail (relative to that blur diameter), and in practice, a longer focal length supports a higher f/stop number. The underlying formula for diffraction is about aperture diameter d; focal length f magnifies it on the sensor, and f/d is the f/stop number. But in photography, we also enlarge the sensor image significantly to view it.
The reciprocal 1/x of this minimum separation x (of two resolved adjacent point sources) is the theoretical maximum resolution allowed by diffraction; 1/x is directly comparable to line pairs per mm (a line pair requires two pixels). This applies to our camera lenses too, except we don't often photograph point sources. This is NOT "THE" lens resolution; it is a theoretical maximum. We test to determine actual resolution. Measured resolution numbers of real-world lenses are of course less than this theoretical limit, because for one thing they can't be greater than diffraction allows (lenses are diffraction limited, independent of pixel size). Insufficient digital sampling resolution is another factor that can reduce the results seen. A lens called "diffraction limited" would be the impressive feat of reaching those theoretical numbers, limited only by diffraction.
I am computing 2 times the radius x, and calling it 2x rather than the diameter of the Airy disk. x is the radius of the first minimum (the first dark ring). The disk is larger than that, but the outer rings are dim and don't show well. Instead, the significance of x is that it is the minimum spacing necessary to resolve two dots (at least for point-source star photos). See example of "resolving". Then 1/x is considered resolution in line pairs per mm, and it does take two pixels to resolve both the black and white lines of one pair (so x is two pixels, and 2x is four pixels). But the outer diffraction rings are typically dark at normal exposures, and 2x is "nearly a full diameter", and is the number we can compare; 2x is four pixels at that limit point. Of course, when diffraction is large, the pixels are not limiting anything (because the diffraction is worse). More, smaller pixels won't change the diffraction or the lens resolution; they will just resolve the rings better (and sensor enlargement should show it better). So pixel size can be computed, but it is Not a practical factor, at least when the diffraction is already worse.
The 2x is compared to Depth of Field CoC, that being an existing standard of focus sharpness. CoC is the size of the allowed blur at the maximum permissible distance from focus. Depending on sensor dimensions, CoC might be larger or smaller than this Airy diffraction limit. However, do note that diffraction applies everywhere, including at the sharpest point of focus, whereas DOF CoC only applies to the most distant DOF limits. Still, you definitely should try stopping down more when you are in serious need of greater Depth of Field. Simply try it, see it for yourself, and believe what you see.
A diffraction calculator is provided here. If you don't know sensor dimensions, a second choice is offered to compute it. Computed values are shown for verification, but these may vary slightly, because the inputs of megapixels, crop factor and aspect ratio are often rounded less-precise values. It should be close enough.
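A rough sketch of that "second choice" computation, estimating pixel pitch from megapixels, crop factor, and aspect ratio (the 36 mm full-frame width and the 3:2 aspect default are my assumptions; as noted above, rounded inputs make the result approximate):

```python
import math

def pixel_pitch_mm(megapixels, crop_factor, aspect=(3, 2)):
    """Approximate pixel pitch from rounded camera specs (assumed method)."""
    width_mm = 36.0 / crop_factor                       # 36 mm = full-frame width
    # total pixels = w * h and w/h = aspect ratio, so w = sqrt(MP * aw/ah):
    pixels_wide = math.sqrt(megapixels * 1e6 * aspect[0] / aspect[1])
    return width_mm / pixels_wide

# A 36-megapixel full-frame (D800-class) sensor:
print(f"{pixel_pitch_mm(36, 1.0) * 1000:.2f} um")    # ~4.90 um
```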
Twice the f/stop number will double Airy x, which halves maximum theoretical lens resolution.
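This doubling is easy to see numerically. Using x = 1.22 λ N from the Rayleigh discussion above (550 nm green light assumed):

```python
def airy_x_mm(fstop, wavelength_mm=0.00055):
    # Radius of the Airy first minimum on the sensor: x = 1.22 * lambda * N
    return 1.22 * wavelength_mm * fstop

for n in (5.6, 11, 22):
    x = airy_x_mm(n)
    # Each doubling of the f/stop number doubles x and halves 1/x:
    print(f"f/{n}: x = {x * 1000:.1f} um, theoretical max ~ {1 / x:.0f} lp/mm")
```

Going from f/11 to f/22 doubles x exactly, halving the theoretical line-pairs-per-mm figure.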
It seems quite obvious that in some situations, greater depth of field can be greatly more important than diffraction. Diffraction is not a good thing, it does reduce maximum resolution, but in real life, there are of course trade-offs of different properties. Diffraction is much more subtle than Depth of Field. Very often more depth of field can help tremendously more than diffraction hurts. When it's critical, depth of field should easily win. When greater depth of field is not needed, sharpness is a good way to bet. But there can be more ways of perceptual improvement than just sharpness and resolution. It seems obvious that (in some situations) sometimes stopping down to f/22 or more can give better than f/11 results. In other situations, maybe not. The lens provides these tools to choose when we need them, when they can help us.
If and when you have a situation specifically needing more depth of field, then you can simply laugh and ignore notions about f/11 being some kind of a necessary limit. Yes, sure, stopping down does increase diffraction, which is not good, and we should be aware of it. Due to diffraction, f/8 is not as good as f/5.6 either, but it is better about Depth of Field. In the cases when you can see that stopping down can obviously help so much, it seems dumb not to use it. The f/stops are provided on the lens to be used when it can help. Just try it both ways, and look at the results, and decide if the depth of field helps much more than the diffraction hurts. When needed, it will.
Yes, of course, diffraction does hurt resolution and sharpness, a little. You do need a good reason to stop down excessively, but yes, of course, depth of field can help, often tremendously, often more than diffraction can hurt, especially obvious when depth of field is limiting you. That is a mighty fine reason, and it is the purpose of those higher f/stops. But if you listen to the wrong information, you might be missing out on the proper tools. Try it, see what happens, believe what you can see. Don't just walk away without knowing, and without getting the picture.
Don't misunderstand, certainly f/5.6 and f/8 are special good places to routinely be, when possible, when it works. Back in the 1950's, we marveled how sharp Kodachrome slides were. And it was sharp, but some of it was that Kodachrome was still ASA/ISO 10 then, requiring like f/5.6 at 1/100 second in bright sun. That f/5.6 helped our lenses too. There is the old adage about the Rule for press photographers of "f/8 and be there". :) But they were using large film then, and often reproduced contact prints, and we have to enlarge digital more now.
But depth of field can also really be a major help sometimes, results are typically poor if DOF is inadequate for the scene. When DOF is needed, there is no substitute. So try some things, try and see both choices before deciding. Don't be afraid of stopping down. Have a reason, but then that's what it's for, when it's needed, when it can help. Try a thing or two for yourself, believe what you can see, and choose your best result.
Let's get started now, about how Depth of Field helps.
Common situations always needing more depth of field: Any close work, at very few feet. Macro work always needs more depth of field, all we can get (so stop down a lot, at least f/16, and more may be better). Landscapes with very near foreground objects need extraordinary depth of field to also include infinity (using hyperfocal focus distance). Telephoto lenses typically provide an f/32 stop, and can often make good use of it, because at distance, the span is so great. But wide angle lenses already have much greater depth of field, and maybe diffraction affects them more.
A good Depth of Field calculator will show hyperfocal focus distance, which does include DOF to infinity for various situations (determined by focal length, aperture, sensor size).
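The hyperfocal computation such a calculator performs is short. A sketch, using the standard thin-lens approximation H = f²/(N·c) + f, with the common FX CoC assumption of 0.03 mm (different calculators use slightly different CoC values, so results vary a little):

```python
def hyperfocal_mm(focal_mm, fstop, coc_mm):
    # H = f^2 / (N * c) + f ; focusing at H yields DOF from H/2 to infinity
    return focal_mm ** 2 / (fstop * coc_mm) + focal_mm

h = hyperfocal_mm(50, 22, 0.03)       # 50 mm FX lens at f/22, CoC 0.03 mm
print(f"hyperfocal ~ {h / 304.8:.1f} ft, near DOF limit ~ {h / 2 / 304.8:.1f} ft")
# prints roughly 12.6 ft and 6.3 ft
```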
The practice of simply focusing on the near side of the subject typically wastes much of the depth of field range on the empty space out in front of the focus point, where there may be nothing of interest. Focusing more back into the depth centers the DOF range, often more useful. We hear it said about moderate distance scenes (not including infinity) that focusing at a point 1/3 of the way into the depth range works for this, which is maybe a little crude, better than knowing nothing, but situations vary from that 1/3 depth (more about that Here). Macro will instead be at 1/2. These are basic ideas which have been known for maybe 150 years.
Many prime lenses have, or have had, a DOF calculator built into them. Prime lenses (those that are not zoom lenses) normally have f/stop marks at the distance scale showing the depth of field range at the various aperture f/stops. However, this tremendous feature is becoming a lost art today, because zoom lenses cannot mark this for their many focal lengths. Also, today's faster AF-S focusing rates can put the marks pretty close together (the 85 mm shown still gives a DOF clue). (The "dots" marked there are the focus mark correction for infrared use.)
For example of hyperfocal distance, the photo at right (ISO 400 f/16) is a 50 mm FX lens, showing focus adjusted to place the f/22 DOF mark at the middle of the infinity mark, which then actually focuses at about 12 feet, and the other f/22 DOF mark predicts depth of field from about six feet to infinity (assuming we do stop down to f/22). The DOF calculator says this example is precisely hyperfocal 12.25 feet (for FX, 50 mm, f/22) giving DOF 6.1 feet to infinity, FX. Stopping down to f/22 does cause a little more diffraction, but it can also create a lot more depth of field. Sometimes f/22 is the best idea, sometimes it is not. Other focal lengths and other sensor sizes cause different numbers.
Or another case, not including infinity. If we instead focus this 50 mm lens at 7 feet, then the f/11 marks suggest DOF from about 5.5 to 10 feet (at f/11). The 7 feet is about 1/3 back into the DOF zone in this case. This is a FX lens, so that DOF applies to FX sensors. The idea of the markings (which only appear on prime lenses, zooms are too complex to mark) is to indicate the extents of the DOF range. And as marked directly on the lens, it can be very handy and helpful. In the prime lens days, this is how it was done.
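The near and far limits that those lens markings approximate can be computed with the standard thin-lens DOF formulas. A sketch, again assuming the common FX CoC of 0.03 mm, checking the 50 mm f/11 case above:

```python
def dof_limits_ft(focal_mm, fstop, distance_ft, coc_mm=0.03):
    """Near/far DOF limits (feet), thin-lens approximation, FX CoC assumed."""
    s = distance_ft * 304.8                      # focus distance in mm
    h = focal_mm ** 2 / (fstop * coc_mm)         # hyperfocal term f^2/(N*c)
    near = h * s / (h + (s - focal_mm))
    far = h * s / (h - (s - focal_mm)) if s - focal_mm < h else float("inf")
    return near / 304.8, far / 304.8

near, far = dof_limits_ft(50, 11, 7)
print(f"DOF ~ {near:.1f} ft to {far:.1f} ft")    # roughly 5.5 ft to 9.7 ft
```

Which agrees closely with the "about 5.5 to 10 feet" read from the lens barrel marks.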
We cannot read the distance scale precisely, but it can indicate ballpark, generally adequate to convey the DOF idea. Of course, depth of field numbers are vague anyway. Do note that any calculated depth of field and hyperfocal distances are NOT absolute numbers at all. The numbers instead depend on a common but arbitrary definition of acceptable blurriness (called Circle of Confusion, CoC, the diameter of the blurred point source). This CoC limit is used in DOF calculations and varies with sensor size due to its necessary enlargement. This is because CoC also specifically assumes the degree of enlargement in a specific standard viewing situation (specifically an 8x10 inch print held about ten inches from eye, which standard viewing size allows seeing the size of that CoC spot). If your enlargement and viewing situations are different, your mileage will vary... DOF is NOT an absolute number. Greater enlargement reduces perceived depth of field, and less enlargement increases it (changes the degree of CoC our eye can see).
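One common convention (an assumption here, not the only one in use) scales CoC from a full-frame value of 0.03 mm by the crop factor, reflecting the greater enlargement a smaller sensor needs to reach that same standard print size:

```python
def coc_mm(crop_factor, full_frame_coc=0.03):
    # Smaller sensors need more enlargement, so their allowed blur is smaller.
    return full_frame_coc / crop_factor

print(f"FX: {coc_mm(1.0):.3f} mm, DX: {coc_mm(1.5):.3f} mm")
```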
And of course, make no mistake, the sharpest result is always at the one distance where the lens is actually focused. Focus is always gradually and continually becoming more blurry as we move away from the actual focus point, up until the DOF math computes CoC of some precise numerical value that is suddenly judged not acceptable (thought to become bad enough to be noticeable there, by the enlargement of the arbitrary CoC definition). But of course, focus is about equally blurry on either side of that distance. DOF does Not denote a sharp line where blurriness suddenly happens, it is gradual. The sharpest focus is of course only at the focused distance, but a wider range can often be good enough, within certain viewing criteria based on how well we can see it. DOF numbers are NOT absolutes. But DOF certainly can be a useful helpful guide.
Here is a very old rule of thumb from the distant past, once considered a good trade-off combining both diffraction AND depth of field. It says:
To limit excessive diffraction (unless depth of field is more important):
Generally don't exceed f-stop number = focal length / 4.
(Just meaning, have a reason when you do. Depth of field is certainly a good reason.)
You may have read about Ansel Adams's Group f/64 in the 1930s (an early purist photography group, promoting the art of the "clearness and definition of the photographic image", named for the f/64 DOF). For Ansel's 8x10 inch view camera, a "normal" lens was around 300+ mm, but he also used 600 mm and 800 mm. So f/64 really wasn't much of a stretch for him (other than exposure time, of course).
Since f/stop number = focal length / aperture diameter, this FL/4 rule is technically just specifying at least a 4 mm aperture diameter, so that diffraction doesn't excessively limit resolution. Later, when 50 mm was the "normal" lens for popular 35 mm film, we did hear that f/11 was about the limit to be less concerned with diffraction, which matches this rule. We always thought 8 mm movie film was too small, considered barely acceptable quality, but compact digital cameras today are about that same size, or maybe smaller. But sometimes we must be more concerned about depth of field. For that purpose, lenses longer than about 100 mm usually offer f/22, and probably f/32, which can be very helpful.
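The FL/4 rule reduces to one line of arithmetic, which makes the 4 mm aperture equivalence easy to check across focal lengths:

```python
def max_fstop_fl4(focal_mm):
    # The old FL/4 rule: N <= focal / 4, i.e. keep aperture diameter >= 4 mm
    return focal_mm / 4

for fl in (20, 50, 105, 200):
    n = max_fstop_fl4(fl)
    # At N = focal/4, the aperture diameter focal/N is exactly 4 mm:
    print(f"{fl} mm lens: rule suggests staying at or under f/{n:g}")
```

For the 50 mm lens the rule gives f/12.5, right beside the familiar f/11 advice, and for 105 mm it gives f/26.25, matching the f/26 figure used later in this article.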
I don't mean to promote use of this old FL/4 rule. It is old, was for film, and it does not take enlargement of sensor size into consideration (and of course, neither does the Airy calculation). We use a lot of very tiny sensors and very short lenses today. But it's not a bad rule, and /4 does place a 50 mm lens very near the f/11 diffraction limit we might hear about (without even mentioning pixels). Of course, that 50 mm is the "normal lens" for 35 mm film (which was considered small in its day), but today there are many even smaller sensors. Compact camera automation rarely stops down past f/4, and is still diffraction limited (due to enlargement of the tiny sensor size). Today's digital sensors can be literally tiny, and any necessary greater enlargement will show both diffraction and depth of field limits larger. A DSLR sensor might be 1/10 the dimension of Ansel's 8x10 film; that film was several inches, and sensors today may be a few millimeters. :) A compact or phone camera sensor might have 1/50 that CoC dimension. Diffraction is not affected by which sensor is attached, but the necessary sensor enlargement does affect how well we see it.
None of this is about pixels. Sensor pixels have their own resolution limits, unrelated. A greater number of pixels does not affect the sharpness that our lens can create, but a greater number of (smaller) pixels normally means greater resolution of the sampled detail that we can resolve in that lens image. The pixels' job is merely to digitally reproduce the analog lens image they see. The lens image is what it is, and the better the pixels can reproduce that image, the better (regardless of the detail that is there... a pristine image or one suffering diffraction).
Diffraction absolutely does happen, however there definitely are also times when greater depth of field can be much more important.
Speaking of DSLRs, it is true today that f/32 can be a pretty good match for our 200 mm lenses, when needed, when it can help. It is there to be used, provided by our lens. Try it (when needed); don't let them scare you off. You would be missing out on a really big thing.
All of this is about lens diffraction, it is Not about sensor pixel size (pixel size also does not take enlargement into account). For a 105 mm lens (the tree samples below), then 105/4 is f/26, so f/22 is a good try, and f/32 is close (again, these are 100% crops below, which is enlargement). The results below show (that on a DSLR sensor size) it's not that bad when you really don't want to give up depth of field. Lenses of 100mm or longer typically offer f/32, because it's good stuff (at times). So when more heroic efforts are necessary to get even more essential depth of field, consider doing what you need to do to get it. If important, at least try it, see if you like it (but f/32 will slow your shutter speed considerably, there are lots of trade-offs in photography).
Coincidentally, it does in fact imply f/16 could be a reasonable sharpness concern for a 50 mm lens (a normal lens for DSLR class cameras). That is a concern, which we've understood almost forever. But it would not be the same situation for a 200 mm lens, or an 18 mm lens either. And it is Not about pixels; diffraction exists regardless. The same diffraction affected film cameras too.
Using a shorter lens, or standing back at farther distance, improves depth of field, but both also reduce the size of the subject in a wider image frame. Or simply stopping down aperture offers great improvements to depth of field which are so easy and so obvious to actually see.
Yes, of course diffraction does increase as we stop down. But within reason, diffraction is a fairly minor effect, at least as compared to depth of field which can be a huge effect. Saying, the detail suffering from diffraction is still recognizable, but the detail suffering from depth of field might not be there at all. Diffraction is serious, and not meaning to minimize it, but there are times when the need for depth of field overwhelms any real concern about diffraction. Yes, stopping down a lot can cause some noticeable diffraction which is less good. But greater depth of field sometimes can be a night and day result, make or break. So the tools are provided for when we need to use them, when they can help.
One tool is the Smart Sharpen in Photoshop (specifically with its Lens Blur option). Sharpening is limited too, but it can help. Diffraction is pretty much linear, the same effect in all photo areas (whereas for example, depth of field is not linear, its blur is mild close to focus but much worse far from focus).
My goal here is to suggest that, no matter what you have heard about diffraction and limited pixel size, yes, of course you can still usefully stop down to f/16 or f/22 or f/32, as they are intended to be used, for the goal of greater depth of field. You wouldn't always use f/22, not routinely nor indiscriminately, but in the cases when you do need it, the overall result can be a lot better. It can be a great benefit, when you need it. Yes, stopping down so much certainly does cause diffraction losses which should be considered. But yes, stopping down certainly can help depth of field much more than diffraction can hurt. This is why those f/stops are provided, for when they can help. When needed, if they help, they help.
When you need maximum DSLR lens sharpness, of course do think f/5.6, or maybe f/8, if that will work for you. But when you need maximum depth of field, consider f/16, or maybe f/22, or maybe even more at times. That's what it's for, and why it is there. Sure, f/8 will be a little sharper for general use, stick with it when you can, but when you need depth of field, that's hard to ignore. So when you need extra depth of field, try stopping down, that's how the basics work. Test it, see it for yourself, and don't believe everything you read on the internet. :) It's good to be able to actually see and verify that which we profess to believe.
Lens resolution certainly can be limited by diffraction. The lens produces an image with some resolution, and the digital sampling reproduces it. Pixel resolution simply tries to reproduce the image that the lens created. This is less important if we necessarily resample much smaller anyway, for example to show a 24 megapixel image on a 2 megapixel HD video screen, or to print a 7 megapixel 8x10 print. Today, we typically have digital resolution to spare.
At right is a (random but typical) lens resolution test from Photozone. They have many good lens tests online, tests which actually show numbers. This one is 24 mm, and the red lines are drawn by me. Lenses do vary in degree, expensive vs. inexpensive is a factor, but in general, all lenses typically show similar characteristics.
The aperture when wide open is softer (optical aberration issues in the larger glass diameter), but resolution typically increases to a maximum peak when stopped down a couple of stops (not necessarily f/5.6, but two stops down is half the diameter, avoiding the difficult outer diameters of the glass far from center). Border sharpness is a harder problem (the edges are at a larger diameter from the center of the lens).
Then resolution gradually falls off as it is stopped down more, due to increasing diffraction as the aperture becomes small. Yes, we can assume f/16 and f/22 get worse, on the same slope.
But lens resolution typically falls off past about f/5.6 anyway, due to diffraction, regardless of any so-called "limit" around f/11 where the diffraction size number passes the pixel size number. The falloff starts well before this f/11 notion, and it is about diffraction, not about pixels. The edge of the aperture hole bends (diffracts) the light passing very near the edge, causing blurring. The clear center area is unobstructed, but a tiny hole is nearly all edge. Diffraction causes a blurring loss of the smallest detail (a loss of maximum resolution), caused by the smaller aperture diameter. The term "diffraction limited" is normally a good thing, used as: "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited" - meaning as good as it is possible to be. However, stopped-down lens apertures do limit resolution more, affecting the smallest detail the lens can reproduce. Still, the real world is that we often have sufficient resolution to spare, to trade for depth of field. Stopping down can be a big benefit when it is needed.
We don't need to mention pixels. And f/22 might not always be a good plan for a short lens - or any lens, but it is not always bad either - the detail depends on image size. Subject magnification is a factor of detail (more below). Focal length magnifies the subject detail, so a longer lens can often benefit greatly from the increased depth of field at f/22 or even f/32. That is why macro and longer lenses normally provide f/32; it is an important feature of great interest and capability.
Next is what aperture in a short lens looks like: (the lens is 3.75 inches or 95 mm diameter)
The definition is: fstop number = focal length / aperture diameter. This definition causes the same f/stop number to be the same exposure on all lenses.
f/22 on a 20 mm lens has an aperture diameter of 20/22 = 0.9 mm. That is a tiny hole, which causes trouble. f/5 is sharper.
f/22 on a 50 mm lens has an aperture diameter of 50/22 = 2.3 mm. Borderline small, but rather bearable when it helps DOF.
f/22 on a 105 mm lens has an aperture diameter of 105/22 = 4.8 mm, much more reasonable, a piece of cake.
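The arithmetic above is easy to script. A minimal Python sketch of the definition (the function name is just illustrative; the focal lengths are the three examples above):

```python
def aperture_diameter_mm(focal_length_mm, f_number):
    """f/stop definition: f_number = focal length / aperture diameter,
    so the physical aperture diameter = focal length / f_number."""
    return focal_length_mm / f_number

# The three examples above, all at f/22:
for fl in (20, 50, 105):
    d = aperture_diameter_mm(fl, 22)
    print(f"f/22 on a {fl} mm lens: aperture diameter = {d:.1f} mm")
```

This is why f/22 is a much tinier hole on a short lens than on a long lens, even though the exposure is the same.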
Yes, stopping down causes greater diffraction, which limits the smallest detail we can see. The larger diffraction hides the smallest details in the lens image which might otherwise be seen... normally about sharp edges on details. This diffraction is a property of the lens aperture diameter, and is true regardless of pixel size (of course it was always true of film lenses too). Combined with the other regular optical problems, resolution normally stays below this theoretical diffraction limit anyway. We don't need pixels to know that, but this pixel notion is that when the Airy disk size exceeds the size of a pixel - or really two pixels (Nyquist), or really four pixels (Bayer), which is really eight pixels (Nyquist again), or really even more pixels because of the random misalignment of Airy disks on pixels - but however many pixels we decide matter, those small pixels' resolution capability is limited by the larger diffraction disk size and coarseness. The pixel is certainly not the problem though; the only problem is that the diffraction disk is large. It's too late to worry about pixels anyway, the diffraction has already occurred, it is what it is. The best job the pixels can do is to reproduce what they see. The pixel analogy is like not wearing your glasses to inspect your image: not seeing anything is not the same as improving the diffraction. :) Of course, pictures of faces or trees or mountains are larger than a pixel anyway, so this does not mean all is lost. The diffraction issue is NOT about pixels. The pixel size (hopefully small) is already the smallest possible detail, and the diffraction is already what it is.
To explain the next situation shown, these are the original images from which the 100% crops are taken. D800 FX with 36 megapixels, and D300 DX with 12 megapixels, both ISO 400, the same 105 mm VR AF-S lens on both, both on the same tripod at the same spot. FX is of course the first, wider view, and the DX sensor crops the lens view smaller, which makes it look closer up when enlarged to the same size. The two frames are shown the same size here, so DX is seen enlarged more than FX (but both were the same lens image, from the same lens). The point is, both had the same crop box in ACR; both marked crops are about 12% of the frame width in mm. Sharpening can always help a little, but there was no processing done on these. There was a slight breeze to wiggle a few leaves. Shutter speed at f/32 got slow, around 1/40 second.
The point of these next 100% crops (a tiny central area cropped as shown here, then shown at 100% full size, actual pixels) is not just to show depth of field, because we already know what to expect of that. It is more to show there is no greatly feared diffraction limit around f/11 or wherever. There is no large step representing any limit at f/11, or anywhere else. Sure, f/8 is often better (because of diffraction), and sure, diffraction does increase, but sure, you can of course use f/16 and f/22, maybe f/32, because, sure, it can often help your picture. Diffraction does continually increase as the lens is stopped down, but that is about the aperture, not about pixel size. This is the same 105 mm lens in both, and yes, we might debate f/32, but it certainly does increase depth of field. Any diffraction would be much less visible if the full image were displayed resampled to a smaller size instead of shown at 100%. But obviously there is no reason to always fear some limit at f/11, if the depth of field can help more than the diffraction hurts. You can do this test too.
These are 100% crops of a tiny area. Both crops are the same, 12% of frame width in mm. FX is a wider view in a wider frame.
Specifically, the FX is 35.9x24 mm, and the crop is 613x599 of 7360x4912 pixels, or 1% of total full frame pixels.
The DX is 23.6x15.8 mm, and the crop is 357x347 of 4288x2848 pixels, or 1% of total full frame pixels.
These 100% crops are a huge ENLARGEMENT. At this scale, the uncropped full FX frame would be about 6 feet wide on a monitor (assuming a 1920 pixel monitor is 20 inches wide, i.e., 96 dpi). The full DX image would be nearly 4 feet wide.
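Those "6 feet wide" figures follow directly from the pixel counts. A small sketch, assuming the same 96 dpi monitor stated above (the function name is just illustrative):

```python
def displayed_width_inches(image_width_px, monitor_dpi=96):
    # At a 100% pixel view, one image pixel maps to one screen pixel,
    # so the displayed width is simply image pixels / screen dpi.
    return image_width_px / monitor_dpi

print(displayed_width_inches(7360) / 12)  # FX full frame width in feet, ~6.4
print(displayed_width_inches(4288) / 12)  # DX full frame width in feet, ~3.7
```

This is why 100% views are such a severe test: nobody prints or displays a full frame 6 feet wide.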
The near tree and focus are 20+ feet, so that will always be the sharpest point. The light pole is about 250 feet, the power wires are at 1000 feet.
Both are the same image from the same lens at the same distance. This is the same crop of the frames, but the DX sensor is simply smaller, and has to be enlarged more to view at the same size (not done here yet).
We see diffraction increasing, of course, but I don't see that these numerical limits are a factor. The CoC limits seem more applicable, except these are enlarged here very much more than CoC expects. But the pixels are not the cause. I think it is good to actually see what we claim to believe. The lens is of course the same in both cases. This FX has 36 megapixels, with smaller pixels than this 12 megapixel DX. So the limit formula based on pixels assigns this FX the lower diffraction limit (it computes worse). However, the results show it in fact performs better. That's because of sensor size and enlargement differences, but the formula doesn't know about that. Actually, the Airy formula doesn't know about pixels either. It says f/32 is f/32.
So the question is, do you want more Depth of Field, or not?
The images tell it, but here are Depth of Field values from the calculator: subject at 20 feet, and background 880 feet behind it. So f/32 is not quite as sharp due to diffraction (again, this is an enlarged 100% view), but the DOF improvement is dramatic. Do you need that or not? In this case, even the best f/32 Depth of Field does not extend past about 42 feet (FX) or 31 feet (DX), and focus remains at less than hyperfocal (DOF does not reach infinity). However at f/32, the background CoC (BKCoC, at 1000 feet here) becomes only around 2x larger (FX) than the DOF CoC limit at the 42 or 31 feet (more BKCoC detail at the calculator). Not quite full DOF this time, but pretty close. We can see the DOF looks pretty good, and if DOF is needed, I call that better, a lot better. Note this 100% crop is greatly enlarged here, depending on your screen size, several times larger than the DOF CoC formula has computed.
105 mm, 36x24 mm FX, at 20 feet:

| f/stop | DOF (near to far) | Hyperfocal | BKCoC ÷ CoC |
|---|---|---|---|
| f/5.6 | 18.3 to 22 ft | 213.5 ft | 10.6x |
| f/8 | 17.7 to 23 ft | 151 ft | 7.5x |
| f/11 | 16.9 to 24.5 ft | 106.9 ft | 5.3x |
| f/16 | 15.9 to 27.1 ft | 75.7 ft | 3.7x |
| f/22 | 14.6 to 31.7 ft | 53.6 ft | 2.7x |
| f/32 | 13.1 to 41.8 ft | 38 ft | 1.9x |
105 mm, 24x16 mm DX, at 20 feet:

| f/stop | DOF (near to far) | Hyperfocal | BKCoC ÷ CoC |
|---|---|---|---|
| f/5.6 | 18.8 to 21.3 ft | 320 ft | 15.9x |
| f/8 | 18.4 to 21.9 ft | 226.4 ft | 11.2x |
| f/11 | 17.8 to 22.8 ft | 160.2 ft | 8x |
| f/16 | 17 to 24.2 ft | 113.4 ft | 5.6x |
| f/22 | 16.1 to 26.5 ft | 80.3 ft | 4x |
| f/32 | 14.8 to 30.7 ft | 56.9 ft | 2.8x |
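These values come from the standard hyperfocal and DOF formulas. A Python sketch (the function name is illustrative; CoC is 0.03 mm for FX as stated above, and note the calculator evidently uses exact f-numbers, e.g., marked f/11 is really 11.31, so nominal values at the third-stop marks will compute very slightly differently):

```python
def dof_ft(focal_mm, f_number, coc_mm, subject_ft):
    """Standard DOF formulas: hyperfocal H = f^2/(N*c) + f,
    near = s(H-f)/(H+s-2f), far = s(H-f)/(H-s).
    Returns (near, far, hyperfocal) in feet."""
    s = subject_ft * 304.8                              # feet to mm
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal, mm
    near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
    far = s * (H - focal_mm) / (H - s)
    return near / 304.8, far / 304.8, H / 304.8

near, far, hyper = dof_ft(105, 32, 0.03, 20)            # FX row at f/32
print(f"{near:.1f} to {far:.1f} ft, hyperfocal {hyper:.0f} ft")
# -> 13.1 to 41.8 ft, hyperfocal 38 ft (matching the FX f/32 table row)
```

Swapping in the DX CoC of 0.02 mm reproduces the second table the same way.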
Depth of Field can be confusing on cropped sensors. We do of course routinely say that in usual practice, the cropped cameras (with shorter lenses) see greater DOF than a larger sensor. But this example was Not usual practice. If standing in the same place, normally we would substitute a shorter lens (equivalent view of 105 mm / 1.5 crop = 70 mm) on the DX body to capture the same view. That expected shorter lens is what helps small-sensor DOF, but we didn't do that here. Or if using the same lens, DX has to stand back 1.5x farther to see the same field of view, and that greater distance helps DOF in usual practice. But we didn't. We didn't do anything; we just stood in the same place at 20 feet with the same lens, so DX did see a smaller cropped view (see the first uncropped image views just above). So here, the only difference is that the smaller DX sensor still has to be enlarged 1.5x more to compare its image at the same size as FX. Greater enlargement hurts DOF, which is why sensor size is a DOF factor. So, the DOF numbers are correct (for the assumed standard 8x10 inch print size).
Degree of enlargement is a big factor. The same two f/32 images above are repeated below, with the smaller DX image enlarged more so it will view at the same size as FX now. FX D800 first, then DX D300 next. Both are the same 105 mm lens on the same tripod in the same spot. But the DX looks telephoto because its sensor is smaller (sees a smaller cropped view), so it needs to be enlarged more here (done below), which enlarges the diffraction too. FX is still shown at about 100%, and DX is shown larger than 100%. We would not normally view these this hugely large - the uncropped frames were 7360x4912 and 4288x2848 pixels - so a smaller view would look better than this.
The FX D800 is 36 megapixels, and the DX D300 is 12 megapixels, so in this case the DX pixels are slightly larger, about 13% larger. That may hurt resolution, but it does not affect lens diffraction. However, the smaller DX sensor crop does require half again more enlargement to reach the same size as FX (not done above), and that enlargement shows the diffraction larger too. Normally we think of DX having more depth of field than FX, but that assumes DX with the same lens would stand back 1.5x farther to show the same image view in the smaller frame. We didn't here. Everything was the same here (except DX has to be enlarged half again more, below).
f/32 images, shown differently enlarged to be same size result
Again, this is the same lens on both cameras, both standing in the same spot (on same unmoved tripod), both enlarged to same size here. The real difference is the sensor sizes. The actual difference in the 100% crops is the sensor pixel density, but to compare here at same size, the smaller image is enlarged 1.5x more, reducing its relative sensor density to 2/3, or 121 pixels/mm (larger pixels, less sampling resolution). Smaller pixels are simply greater sampling resolution, always good to show any detail present in the lens image.
But the enlargement is necessarily different (see same enlargement just above). Enlarging DX half again more is the necessary hardship, but that is what normal practice always has to do for smaller sensors. It's unfair to compare to FX if we don't compare the same image. But Depth of Field is often more important than diffraction.
Here they are repeated with both at the same enlargement, both the same 100% crops at f/32. These crops are both the same 12% of frame width in mm, but yes, the pixel widths are 613/347, which is 1.7x instead of 1.5x, simply because the FX was 205 pixels/mm instead of 182 pixels/mm (higher resolution in the same mm, but smaller pixels - which were supposed to have some bad effect on diffraction?)
Same images, shown Equivalently enlarged to preserve original size ratio
Diffraction can often be reasonably helped by post processing sharpening (but none was done above). Just don't overdo sharpening.
For any lens at f/32, the Airy disk radius calculation gives x = 0.02147 mm (for green light). These tree leaves are not point sources, and we won't see disks like stars would show (with enough magnification). The radius x is the minimum resolvable spacing, so 1/x is the maximum analog resolution in "line pairs per mm", which at f/32 is reduced to a 46.6 LP/mm maximum. Each pair requires at least 2 pixels of digital sampling to resolve the black and white lines of a pair. More pixels could sample better of course, but pixels larger than 1/2 x could not resolve even this diffraction. Smaller and more pixels make it easy. Easy does not mean we don't have diffraction; pixels don't affect what the lens does. Easy just means our better sampling density can resolve whatever is there.
Numbers: In this 36 megapixel FX, the pixel pitch is 35.9 mm / 7360 pixels = 0.004878 mm/pixel, so x = 0.02147 mm at f/32 is 4.4 pixels (2x is 8.8 pixels). And 2x is 43% larger than the FX 0.03 mm CoC, which is 6.15 pixels (CoC is used to compute DOF sharpness). The computed pixel-limit f/stop is f/14.5, but at f/32, 1/x is a 46.6 LP/mm resolution maximum. The sensor has 7360/35.9 = 205 pixels/mm (pixel resolution), which is 102 LP/mm maximum. So this is definitely diffraction limited: the diffraction's 46.6 LP/mm is worse than the sensor's 102 LP/mm.
For some 24 megapixel DX (255 pixels/mm, 3.917 microns), x is 5.5 pixels, and 107% of the DX 0.02 CoC.
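The Airy numbers above follow from the Rayleigh criterion. A sketch, assuming the same ~550 nm green light and the 36 megapixel FX sensor figures used above (function name illustrative):

```python
LAMBDA_GREEN_MM = 0.00055            # ~550 nm green light, in mm

def airy_radius_mm(f_number, wavelength_mm=LAMBDA_GREEN_MM):
    """Rayleigh criterion: Airy disk radius (center to first dark ring)
    x = 1.22 * wavelength * f_number."""
    return 1.22 * wavelength_mm * f_number

x = airy_radius_mm(32)               # f/32 for any lens
pitch = 35.9 / 7360                  # 36 MP FX pixel pitch, mm/pixel
print(f"x = {x:.5f} mm = {x / pitch:.1f} pixels")      # ~4.4 pixels
print(f"max resolution 1/x = {1 / x:.1f} LP/mm")       # ~46.6 LP/mm
# The usual 'pixel limit' computation: the f/stop where x spans 2 pixels
print(f"pixel-limit f/stop = {2 * pitch / (1.22 * LAMBDA_GREEN_MM):.1f}")
```

Note the pixel pitch appears only in the last "limit" line; the Airy size itself depends on nothing but the f/stop and the wavelength, which is the article's point.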
Comparing the diffraction size to the diameter of the Depth Of Field Circle of Confusion seems practical. CoC is an existing standard of sharpness (see CoC on the Depth of Field page). CoC is the DOF maximum limit of blurriness to be tolerated, for example defined as perhaps 0.02 mm maximum diameter on some sensors, considered the limit of DOF acceptability. This maximum CoC limit is often 4 to 6 pixels diameter on today's DSLRs. One pixel is not likely very important, and certainly one pixel cannot sample anything well. The diffraction Airy disk also has a diameter in mm, and DOF CoC is often routinely larger than the Airy disk. Both always exist, and both vary. Stopping down does hurt with greater diffraction, but helps Depth of Field more. If it were otherwise, diffraction would blur anything depth of field could do, and we see that is obviously not true. Neither is necessarily a big problem until reaching some more extreme limit. Both become important when the blur diameter is enough pixels for us to see it (which is more than one pixel).
How x compares to CoC seems significant (regarding its visibility), but how x compares to a pixel is not significant, unless the pixel is larger than 1/2 x, in which case the pixel would be the limiting factor of resolution. But the pixel size does not change; diffraction is what increases. Two pixels across x is the minimum to resolve what's there (and more, smaller pixels would be better in that way). However, it is x that measures the diffraction. And Depth of Field is another concern of sharpness.
Both of these FX and DX versions are way larger than the 8x10 inch print (203x254 mm) standard for comparing Depth of Field CoC. This 36 megapixel FX image printed at 250 dpi would be 29.4x19.6 inches (748x499 mm), and the 12 megapixel DX image at 250 dpi would be 17.1x11.4 inches (436x289 mm).
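Those print sizes are again simple arithmetic on the pixel counts at the stated 250 dpi (function name illustrative):

```python
def print_size_inches(width_px, height_px, print_dpi=250):
    # Print dimensions are simply pixel dimensions / print dpi.
    return width_px / print_dpi, height_px / print_dpi

print(print_size_inches(7360, 4912))   # 36 MP FX: about 29.4 x 19.6 inches
print(print_size_inches(4288, 2848))   # 12 MP DX: about 17.2 x 11.4 inches
```

Both are far larger than the 8x10 inch standard, which is why the DOF CoC numbers look more forgiving than these extreme crops do.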
A large Airy disk does limit lens resolution; it's certainly not fully sharp at f/32 (but it obviously did not cut off and die either). And while we are aware of diffraction, we do normally still have quite a bit of resolution left, possibly adequate, very possibly good enough this time (this was after all a 100% crop), for some cases and some uses. Our final use is likely resampled much smaller than this extreme 100% crop view. You could say we have sufficient sampling resolution to even show the diffraction clearly. :) Digital's job is to reproduce whatever the lens creates, and more pixels help to do that better.
But in this case, f/32 also creates awesomely better depth of field, if that is what's needed. And in various situations, we could decide that could be overwhelmingly important, could make all the difference, perhaps make or break. Or maybe sometimes not, when we may prefer to back off on f/stop, if it still works. If we do need really maximally sharp results at f/5.6 or f/8, then we know not to set up a situation needing extreme depth of field. It's a choice, it depends on goals, and we can use the tools we have to get the result we want. Photography is about doing what you have to do to get the result that you want. The alternative is Not getting what you want. But do realize that you have choices.
More images (maybe too many) are on next page