www.scantips.com

Diffraction Limited Pixels? Really?

In Support of Depth of Field

We read on the internet how our digital cameras can become "diffraction limited" as we stop down, sometimes worded in terms of our digital sensor's pixel size. It's not really about the pixels, they don't change. It's about the diffraction present. A true point source when enlarged (a star seen in a telescope is the best example) shows as a larger diffraction disk of concentric rings called an Airy disk (more below). Stopping down the aperture makes that disk larger. Being larger, it covers and hides adjacent true detail, reducing resolution. We hear how our DSLR camera lens has an aperture limit, maybe around f/11 for example, due to increased diffraction when stopping down that far.

Unfortunately, this sometimes gets stated as a rule that we should "Never" stop down past this "limit". I'm not a fan of that advice; it seems unhelpful, at least when worded as "never" or as a "limit". It may routinely be an advised threshold, but sometimes good things can happen when we go past that "limit". Yes, we certainly should be warned that stopping down does increase diffraction, which lowers lens resolution (but sometimes we can still spare a bit of it), but we should also know that stopping down increases Depth of Field, which at times can be much more important and extremely helpful. My suggestion is that when more depth of field is needed, do try stopping down, see what happens, and then believe what you can see. It's a great tool to know about.

"Diffraction limited" has two meanings. For a lens or telescope, it means its image quality is so good that it is limited only by theoretical diffraction limits. It's a compliment then. But for our digital cameras, "diffraction limited" means this: Normally, when we are comfortably back at say f/5.6, and diffraction is not even a thought, then our normal maximum resolution limit we see is often reduced by the sampled pixel size (specifically, how much of the analog lens image resolution our digital sensor sampling can reproduce). The resolution result is either the least of what either the lens can do, or what the digital sampling can reproduce. The limit has been the sensor, when it needed the anti-aliasing filter (which is installed on the sensor to blur away the smallest lens detail that the sensor cannot reproduce, so that it won't cause false moire artifacts). Today, we have more megapixels, and then some of the anti-alaising filters are being removed. But still, as we stop down considerably, the lens loses resolution due to increased diffraction.

A nitpick with the wording is that when the diffraction size grows larger than the pixel, we hear that the diffraction becomes the limiting factor of resolution. And it certainly does, but diffraction is not about pixels. The pixel is the smallest dot that digital can reproduce. The pixel size may have been the previous normal limit, but it was always this same size. There was some degree of diffraction before, and now the diffraction is larger and can limit resolution even more. The problem is that the diffraction is larger, not that our pixel is a concern. Our pixels continue to do exactly what they always did. But the larger diffraction hides more of the real scene image data. That happens, and seems obvious. So when diffraction is larger, it seems rather a moot point to talk about pixels. Stopping down more does increase diffraction, which reduces the maximum analog resolution from the lens. But the problem is NOT about the pixel; the only problem is that the diffraction grew larger.

But the warnings get overdone, or at least worded poorly, making no exceptions. We hear it said that we should Never consider stopping down more than say f/11 if we become diffraction limited there. Those who believe the warning about "never" can be turned away from an important tool. But never say never, because sometimes (in special difficult cases) the diffraction is far from the worst thing. Perhaps at times it is even desirable, in that it allows greater depth of field, which can change the worst thing into the best thing that can happen. Not that diffraction is ever "good", it's not. But stopping down can tremendously improve depth of field, which at times can help tremendously more than diffraction can hurt (speaking of when more depth of field is seriously needed). Diffraction is not at all a good thing, but within reason, it's rarely a complete disaster. It's not uncommon, though, for insufficient depth of field to be that complete disaster. It depends on whether you need more depth of field or not. Stopping down is photography's basic solution.

With fewer megapixels, we routinely assume sensor limited resolution is normal and OK, or at least, there is no other choice with that camera. But then we're told to get excited if diffraction size passes that pixel limit a little, when at times the depth of field improvement it allows can be a simple fix that makes all the difference.

My goal is to suggest you strongly keep in mind what stopping down for depth of field can do for you. Experiment a bit, practice with it, actually see it for yourself, so you will know when it is needed. Greater DOF may not always be a big concern in many routine pictures, but sometimes can be a tremendous all-important concern. When you need depth of field, then you need it, and ignoring diffraction to get it can be the best thing to do to get a picture there.

More diffraction is not a good thing of course, and yes, staying back at f/5.6 or f/8 is a generally good plan, when it works, which seems the usual case. But it is trivially easy to show that regardless of diffraction, more depth of field can often greatly improve a photo (one that needs more). That's a basic principle of photography. So at times, a valuable trade-off can be a little more diffraction for a lot more depth of field. Don't let some silly comment by someone stop you. Does stopping down some lose resolution? Yes, a little, regardless of pixel size and regardless of any supposed limit; that's how it works. Does stopping down improve the picture? Yes, sometimes a very great improvement of the picture due to depth of field. It should of course be a standard tool in your bag of tricks. Not always needed, but to be strongly considered when needed.

A real world example of the actual evidence: ruler markings showing 1/16 inch rulings (about 1.6 mm apart, shown larger here).

I think the f/40 may seriously improve a difficult problem this time. :) It's definitely past any limit, and the diffraction must be somewhat worse at f/40, although I can't say I can see it here. I do see the tremendous improvement in depth of field by stopping down. It may not be the sharpest image I ever saw, but it's much better than f/11 here, and it seems very acceptable for what it is, much better than the alternative, and f/40 is obviously what made it acceptable.

Both pictures are D800 FX, 105 mm macro. Only difference is aperture. Both are cropped to about 1/3 frame height, and then resampled to 1/4 size here.

The number f/40 is possible here because macro lens f/stop numbers increase when focused up close, because the effective focal length increases then. Typically at 1:1 macro, all marked f/stop numbers increase two full stops: f/32 would become f/64 at 1:1. This case was milder and used f/40. Modern macro lenses using internal focusing can show slightly less than the two stop change at 1:1.
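For those who like to compute it, here is a rough Python sketch of that arithmetic (the function name and the 0.8 magnification are just illustrative assumptions, not measured values from these photos; the (1 + magnification) factor applies to traditional unit-focusing macro lenses, and internal-focusing designs deviate somewhat):

    # Effective f/number of a unit-focusing macro lens focused close.
    # N_effective = N_marked * (1 + m), where m is the magnification (1.0 at 1:1).
    def effective_fstop(marked_fstop, magnification):
        return marked_fstop * (1 + magnification)

    print(effective_fstop(32, 1.0))   # 64.0 - the two stop change at 1:1
    print(effective_fstop(22, 0.8))   # 39.6 - roughly f/40 at a milder magnification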

Sure, holding at f/5.6 or f/8 is always generally very desirable, and holding there is good routine advice (speaking of DSLR class use), when they work. Use them when they work. But when depth of field is insufficient for the best picture, the game has to change, if you want results.

Sure, certainly f/40 is very extreme, certainly it's not perfect, and not always ideal. But sometimes it's wonderful, when depth of field helps far more than diffraction hurts. It can solve serious problems. When we need more depth of field, falsely imagining that we ought to be limited to f/11 can be detrimental to results. Use the tools that are provided, when they will help. Try it, and believe what you can see.

However, f/40 does also require four stops more light and flash power than f/10. :)
But nothing blew up when we reached f/16.
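The light cost is easy arithmetic, since the light admitted varies as 1 over the f/number squared. A quick Python check of that four stop figure (nothing here beyond the two f/numbers already mentioned):

    import math

    # Stops of additional exposure needed going from one f/number to another.
    # Light varies as 1/N^2, and each stop is a factor of 2.
    def stops_between(n_from, n_to):
        return 2 * math.log2(n_to / n_from)

    print(stops_between(10, 40))   # 4.0 stops more light needed at f/40 than at f/10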

My strong suggestion is that when you need more depth of field, you should ignore any silly notions, and of course TRY stopping down. Then look at your result, and believe that. That is the standard solution, and is why that capability is provided, and you'll likely like it a lot (when appropriate). Certainly I don't mean routinely every time - because diffraction does exist, which we generally want to avoid, so do have a need and a reason for stopping down extremely. But needs and reasons do exist. Don't abuse it when not needed. Yes, routinely staying under f/11 is certainly a fine plan to reduce diffraction, when you can, when it works, but when it won't do the job, stopping down creates greater depth of field, which can be a tremendous improvement when needed. Photography is the game of doing what we see we need to do, when it actually helps.

My goal is to point out that you may hear you should "never" go past about f/11. Never say never. :) It's pretty dumb advice when worded that way; it may cost too much of the good stuff.

It seems quite obvious that in some situations, greater depth of field can be greatly more important than diffraction. Diffraction is never a good thing, it does reduce maximum resolution, but in real life, there are of course trade-offs of different properties. Very often more depth of field can help tremendously more than diffraction hurts. When it's critical, depth of field should easily win. When greater depth of field is not needed, sharpness is a good way to bet. But there can be more ways of perceptual improvement than just sharpness and resolution. It seems obvious that (in some situations) sometimes stopping down to f/22 or more can give better than f/11 results. In other situations, maybe not. The lens provides these tools to choose when we need them, when they can help us.

If and when you have a situation specifically needing more depth of field, then you can simply laugh and ignore notions about f/11 being some kind of a necessary limit. Yes, sure, stopping down does increase diffraction, which is not good, and we should be aware of it. Due to diffraction, f/8 is not as good as f/5.6 either, but it is better about Depth of Field. In the cases when you can see that stopping down can obviously help so much, it seems dumb not to use it. The f/stops are provided on the lens to be used when it can help. Just try it both ways, and look at the results, and decide if the depth of field helps much more than the diffraction hurts. When needed, it will.

Yes, of course, diffraction does hurt resolution and sharpness, a little. You do need a good reason to stop down excessively, but yes, of course, depth of field can help, often tremendously, often more than diffraction can hurt, especially obvious when depth of field is limiting you. That is a mighty fine reason, and it is the purpose of those higher f/stops. But if you listen to the wrong information, you might be missing out on the proper tools. Try it, see what happens, believe what you can see. Don't just walk away without knowing, and without getting the picture.

Don't misunderstand, certainly f/5.6 and f/8 are special good places to routinely be, when possible, when it works. Back in the 1950s, we marveled at how sharp Kodachrome slides were. And they were sharp, but some of it was that Kodachrome was still ASA/ISO 10 then, requiring something like f/5.6 at 1/100 second in bright sun. That f/5.6 helped our lenses too. There is the old adage about the rule for press photographers of "f/8 and be there". :) But they were using large film then, and often reproduced contact prints, and we have to enlarge digital more now.

But depth of field can also really be a major help sometimes, results are typically poor if DOF is inadequate for the scene. When DOF is needed, there is no substitute. So try some things, try and see both choices before deciding. Don't be afraid of stopping down. Have a reason, but then that's what it's for, when it's needed, when it can help. Believe what you can see.

Let's get started now, about how Depth of Field helps.

Depth of Field

Common situations always needing more depth of field: Any close work, at very few feet. Macro work always needs more depth of field, all we can get (so stop down a lot, at least f/16, and more may be better). Landscapes with very near foreground objects need extraordinary depth of field to also include infinity (using hyperfocal focus distance). Telephoto lenses typically provide an f/32 stop, and can often make good use of it, because at distance, the span is so great. But wide angle lenses already have much greater depth of field, and maybe diffraction affects them more.

Hyperfocal distance is defined as focusing at a special intermediate distance into the desired depth of field range, used so that the depth of field range includes both near and distant extremes - specifically depth of field extending from half of the hyperfocal distance to infinity. Said a more casual way, it is the focused distance at which the depth of field will just reach infinity. Obviously stopping down will increase the depth of field to aid this effort. And obviously, the focused distance will always be sharper than infinity then, but infinity is still barely within the limits of perceived depth of field.
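The usual thin-lens approximation behind that definition is H = f²/(N·c) + f, with focal length f, f/number N, and the CoC limit c. A minimal Python sketch (assuming the common c = 0.03 mm for FX full frame; a different CoC choice shifts the numbers a little):

    def hyperfocal_mm(focal_mm, fnumber, coc_mm=0.03):
        # Focus at this distance and depth of field runs from about
        # half this distance to infinity.
        return focal_mm**2 / (fnumber * coc_mm) + focal_mm

    H = hyperfocal_mm(50, 22)    # 50 mm lens at f/22, FX CoC
    print(H / 304.8)             # about 12.6 feet (the calculator example below says 12.25 ft)
    print(H / 2 / 304.8)         # near limit about 6.3 feet, versus the 6.1 ft quoted below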

A good Depth of Field calculator will show hyperfocal focus distance, which does include DOF to infinity for various situations (determined by focal length, aperture, sensor size).

The practice of simply focusing on the near side of the subject typically wastes much of the depth of field range on the empty space out in front of the focus point, where there may be nothing of interest. Focusing more back into the depth centers the DOF range, often more useful. We hear it said about moderate distance scenes (not including infinity) that focusing at a point 1/3 of the way into the depth range works for this, which is maybe a little crude, better than knowing nothing, but situations vary from that 1/3 depth (more about that Here). Macro will instead be at 1/2. These are basic ideas which have been known for maybe 150 years.


Many lenses have a DOF calculator built into them - speaking of prime lenses (i.e., those lenses that are not zoom lenses), which normally have f/stop marks at the distance scale showing the depth of field range at the various aperture f/stops. However, this tremendous feature is becoming a lost art today, because zoom lenses cannot mark this for their many focal lengths. Also, today's faster AF-S focusing can put the marks pretty close together (the 85 mm shown still gives a DOF clue). (The "dots" marked there are the focus mark correction for infrared use.)

As an example of hyperfocal distance, the photo at right (ISO 400, f/16) is a 50 mm FX lens, showing focus adjusted to place the f/22 DOF mark at the middle of the infinity mark, which then actually focuses at about 12 feet, and the other f/22 DOF mark predicts depth of field from about six feet to infinity (assuming we do stop down to f/22). The DOF calculator says this example is precisely hyperfocal 12.25 feet (for FX, 50 mm, f/22), giving DOF 6.1 feet to infinity. Stopping down to f/22 does cause a little more diffraction, but it can also create a lot more depth of field. Sometimes f/22 is the best idea, sometimes it is not. Other focal lengths and other sensor sizes give different numbers.

Or another case, not including infinity. If we instead focus this 50 mm lens at 7 feet, then the f/11 marks suggest DOF from about 5.5 to 10 feet (at f/11). The 7 feet is about 1/3 back into the DOF zone in this case. This is a FX lens, so that DOF applies to FX sensors. The idea of the markings (which only appear on prime lenses, zooms are too complex to mark) is to indicate the extents of the DOF range. And as marked directly on the lens, it can be very handy and helpful. In the prime lens days, this is how it was done.

We cannot read the distance scale precisely, but it can indicate the ballpark, generally adequate to convey the DOF idea. Of course, depth of field numbers are vague anyway. Do note that any calculated depth of field and hyperfocal distances are NOT absolute numbers at all. The numbers instead depend on a common but arbitrary definition of acceptable blurriness (called Circle of Confusion, CoC, the diameter of the blurred point source). This CoC limit is used in DOF calculations and varies with sensor size due to its necessary enlargement, because CoC also assumes the degree of enlargement in a specific standard viewing situation (an 8x10 inch print held about ten inches from the eye, a standard viewing size at which the size of that CoC spot can just be seen). If your enlargement and viewing situations are different, your mileage will vary... DOF is NOT an absolute number. Greater enlargement reduces perceived depth of field, and less enlargement increases it (it changes the degree of CoC our eye can see).

And of course, make no mistake, the sharpest result is always at the one distance where the lens is actually focused. Focus is always gradually and continually becoming more blurry as we move away from the actual focus point, up until the DOF math computes CoC of some precise numerical value that is suddenly judged not acceptable (thought to become bad enough to be noticeable there, by the enlargement of the arbitrary CoC definition). But of course, focus is about equally blurry on either side of that distance. DOF does Not denote a sharp line where blurriness suddenly happens, it is gradual. The sharpest focus is of course only at the focused distance, but a wider range can often be good enough, within certain viewing criteria based on how well we can see it. DOF numbers are NOT absolutes. But DOF certainly can be a useful helpful guide.


Dragging out a very old rule of thumb, considered in the distant past a good trade-off combining both diffraction AND depth of field:

To limit excessive diffraction (unless depth of field is more important), generally don't exceed f/stop number = focal length / 4.

The FL/4 limits are:

  600 mm - f/150
  300 mm - f/75
  200 mm - f/50
  100 mm - f/25
   50 mm - f/12.5
   24 mm - f/6
   12 mm - f/3

(Just meaning, have a reason when you do exceed it. Depth of field is certainly a good reason.)

You may have read about Ansel Adams' Group f/64 in the 1930s (an early purist photography group, promoting the art of the "clearness and definition of the photographic image", named for the f/64 DOF). For Ansel's 8x10 inch view camera, a "normal" lens was around 300+ mm, but he also used 600 mm and 800 mm. So f/64 really wasn't much of a stretch for him (other than exposure time of course).

Since f/stop number = focal length / aperture diameter, this FL/4 rule is technically just specifying at least a 4 mm aperture diameter, so that diffraction doesn't excessively limit resolution. Later, when 50 mm was the "normal" lens for popular 35 mm film, we did hear that f/11 was about the limit to be less concerned with diffraction, which matches this rule. We always thought 8 mm movie film was too small, considered barely acceptable quality, but compact digital camera sensors today are about the same size, or maybe smaller. But sometimes we must be more concerned about depth of field. For that purpose, lenses longer than about 100 mm usually offer f/22, and probably f/32, which can be very helpful.
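Stated as arithmetic, the old rule just keeps the physical aperture at least 4 mm across. A small Python illustration of that idea (the 4 mm figure is only the rule's own assumption, nothing more exact):

    # f/number = focal length / aperture diameter, so requiring a diameter
    # of at least 4 mm means: don't exceed N = focal_length / 4.
    def max_fstop_fl4(focal_mm, min_aperture_mm=4.0):
        return focal_mm / min_aperture_mm

    for fl in (600, 300, 200, 100, 50, 24, 12):
        print(fl, "mm ->", "f/%g" % max_fstop_fl4(fl))   # matches the f/150 ... f/3 list above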

I don't mean to promote use of this old FL/4 rule. It is old, was for film, and it does not take enlargement of sensor size into consideration (and of course, neither does the Airy calculation). We do use a lot of very short lenses today. But it's not a bad rule, and /4 does place a 50 mm lens very near the f/11 diffraction limit we might hear about (without even mentioning pixels). Of course, that 50 mm is often cited as the "normal lens" for 35 mm film (which was considered small in its day), but today there are many even smaller sensors. Compact camera automation rarely stops down past f/4, and is still diffraction limited (due to enlargement of the tiny sensor size). Today's digital sensors can be literally tiny, and any necessary greater enlargement will show both diffraction and depth of field limits larger. A DSLR sensor might be 1/10 the dimension of Ansel's 8x10 film - that was several inches then; sensors today may be a few millimeters. :) A compact or phone camera sensor might have 1/50 that CoC dimension. Diffraction is not affected by which sensor is attached, but the necessary sensor enlargement does affect how well we see it.

Diffraction absolutely does happen, however there definitely are also times when greater depth of field can be much more important.

Speaking of DSLR, it is true today that f/32 can be a pretty good match for our 200 mm lenses, when needed, when it can help. It is there to be used, provided by our lens, for when needed. Try it (when needed), don't let them scare you off. You would be missing out on a really big thing.

All of this is about lens diffraction; it is Not about sensor pixel size (pixel size also does not take enlargement into account). For a 105 mm lens (the tree samples below), 105/4 is f/26, so f/22 is a good try, and f/32 is close (again, these are 100% crops below, which is enlargement). The results below show (on a DSLR sensor size) it's not that bad when you really don't want to give up depth of field. Lenses of 100 mm or longer typically offer f/32, because it's good stuff (at times). So when more heroic efforts are necessary to get even more essential depth of field, consider doing what you need to do to get it. If important, at least try it, see if you like it (but f/32 will slow your shutter speed considerably; there are lots of trade-offs in photography).

It does coincidentally imply f/16 could be a reasonable sharpness concern for a 50 mm lens (a normal lens for DSLR class cameras). That is a concern, which we've understood for almost forever. But it would not be the same situation for a 200 mm lens. Or an 18 mm lens either. And it is Not about pixels; diffraction exists regardless. Diffraction affects film cameras too.


Using a shorter lens, or standing back at farther distance, improves depth of field, but both also reduce the size of the subject in a wider image frame. Or simply stopping down aperture offers great improvements to depth of field which are so easy and so obvious to actually see.

Yes, of course diffraction does increase as we stop down. But within reason, diffraction is a fairly minor effect, at least as compared to depth of field, which can be a huge effect. That is to say, the detail suffering from diffraction is still recognizable, but the detail suffering from lack of depth of field might not be there at all. Diffraction is serious, and not meaning to minimize it, but there are times when the need for depth of field overwhelms any real concern about diffraction. Yes, stopping down a lot can cause some noticeable diffraction, which is less good. But greater depth of field sometimes can be a night and day result, make or break. So the tools are provided for when we need to use them, when they can help.

One tool is Smart Sharpen in Photoshop (specifically with its Lens Blur option). Sharpening is limited too, but it can help. Diffraction is pretty much uniform, the same effect in all photo areas (whereas, for example, depth of field is not uniform; its blur is mild close to focus but much worse far from focus). So diffraction can often be reasonably helped in that post processing sharpening (but none was done here).

My goal here is to suggest that, no matter what you have heard about diffraction and limited pixel size, yes, of course you can still usefully stop down to f/16 or f/22 or f/32, as they are intended to be used, for the goal of greater depth of field. You wouldn't always use f/22, not routinely nor indiscriminately, but in the cases when you do need it, the overall result can be a lot better. It can be a great benefit, when you need it. Yes, stopping down so much certainly does cause diffraction losses which should be considered. But yes, stopping down certainly can help depth of field much more than diffraction can hurt. This is why those f/stops are provided, for when they can help. When needed, if they help, they help.

When you need maximum DSLR lens sharpness, of course do think f/5.6, or maybe f/8, if that will work for you. But when you need maximum depth of field, consider f/16, or maybe f/22, or maybe even more at times. That's what it's for, and why it is there. Sure, f/8 will be a little sharper for general use, stick with it when you can, but when you need depth of field, that's hard to ignore. So when you need extra depth of field, try stopping down, that's how the basics work. Test it, see it for yourself, and don't believe everything you read on the internet. :) It's good to be able to actually see and verify that which we profess to believe.

Lens resolution certainly can be limited by diffraction. The lens image has a resolution, and the digital sampling reproduces it. Pixel resolution simply tries to reproduce the image that the lens resolution created. This is less important if we necessarily resample much smaller anyway, for example to show a 24 megapixel image on a 2 megapixel HD video screen, or to print a 7 megapixel 8x10 inch print. Today, we typically have digital resolution to spare.

At right is a (random but typical) lens resolution test from Photozone. They have many good lens tests online, tests which actually show numbers. This one is 24 mm, and the red lines are drawn by me. Lenses do vary in degree, expensive vs. inexpensive is a factor, but in general, all lenses typically show similar characteristics.

The aperture when wide open is softer (optical aberration issues in the larger glass diameter), but resolution typically increases to a maximum peak when stopped down a couple of stops (not necessarily f/5.6, but two stops down is half the diameter, avoiding the difficult outer diameters of glass far from center). Border sharpness can lag a little behind the center (the frame edges are imaged through the larger diameters, farther from the center of the lens).

Then resolution gradually falls off as it is stopped down more, due to increasing diffraction as the aperture becomes small. Yes, we can assume f/16 and f/22 get worse, on the same slope. The edge of the aperture hole bends or diffracts the light near it (paths very near the edge, causing diffraction and blurring). The clear center area is unobstructed, but a tiny hole is nearly all edge. Diffraction causes a blurring loss of the smallest detail (a loss of maximum resolution), caused by the smaller aperture diameter. The term "diffraction limited" is usually a good thing, meaning and used as: "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited" - meaning as good as it is possible to be. However stopped down lens apertures do limit resolution more, affecting the smallest detail the lens can reproduce. Still, real world is that we often have sufficient resolution to spare, to trade for depth of field. Stopping down can be a big benefit, when it is needed.

We don't need to mention pixels. And f/22 might not always be a good plan for a short lens - or any lens, but not always bad either - detail depends on image size. Subject magnification is a factor of detail (more below). Focal length magnifies the subject detail. So a longer lens can often benefit greatly from the increased depth of field of f/22 or even f/32. It is why macro and longer lenses normally provide f/32; it is an important feature of great interest and capability.

Next is what aperture in a short lens looks like: (the lens is 3.75 inches or 95 mm diameter)

f/2.8
A 14-24 mm lens, at 14 mm, f/2.8
(aperture computes 5 mm diameter)

f/22
A 14-24 mm lens, at 14 mm, f/22
(aperture computes 0.63 mm diameter)

The definition is: f/stop number = focal length / aperture diameter. This definition causes the same f/stop number to give the same exposure on all lenses.

f/22 on a 20 mm lens has an aperture diameter of 20/22 = 0.9 mm. That is a tiny hole, which causes trouble. f/5 is sharper.
f/22 on a 50 mm lens has an aperture diameter of 50/22 = 2.3 mm. Borderline small, but rather bearable when it helps DOF.
f/22 on a 105 mm lens has an aperture diameter of 105/22 = 4.8 mm, much more reasonable, a piece of cake.
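A quick Python check of those diameters, straight from the definition (the 14 mm entries correspond to the wide zoom pictured above):

    # Aperture diameter from the definition: N = focal_length / diameter.
    def aperture_diameter_mm(focal_mm, fnumber):
        return focal_mm / fnumber

    for fl in (14, 20, 50, 105):
        print(fl, "mm at f/22:", round(aperture_diameter_mm(fl, 22), 2), "mm")
    print(aperture_diameter_mm(14, 2.8))   # 5.0 mm, the wide open f/2.8 opening shown above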

Yes, stopping down causes greater diffraction, which limits the smallest detail we can see. The larger diffraction hides the smallest details in the lens image, which might otherwise be seen... which is normally about sharp edges on details. This diffraction is a property of the lens aperture diameter, and is true regardless of pixel size (of course it was always true of film lenses too). The other regular optical problems, combined, normally reduce resolution below this theoretical diffraction limit anyway. We don't need pixels to know that, but this pixel notion is that when the Airy disk size exceeds the size of a pixel - or really two pixels (Nyquist), or really four pixels (Bayer), which is really eight pixels (Nyquist again), or really even more pixels because of the random misalignment of Airy disks on pixels - however many pixels we decide matter, those small pixels' resolution capability is limited by the larger diffraction disk size and coarseness. The pixel is certainly not the problem though; the only problem is that the diffraction disk is large. It's too late to worry about pixels anyway; the diffraction has already occurred, it is what it is. The best job the pixels can do is to reproduce what they see. The pixel analogy is like not wearing your glasses to inspect your image: not seeing the diffraction is not the same as improving it. :) Of course, pictures of faces or trees or mountains are larger than a pixel anyway, so this does not mean all is lost. The diffraction issue is NOT about pixels. The pixel size (hopefully small) is already the smallest possible detail, and the diffraction is already what it is.


To explain this next situation shown, these are the original images from which the 100% crops are taken. D800 FX with 36 megapixels, and D300 DX with 12 megapixels, both ISO 400, the same 105 mm VR AF-S lens on both, both on the same tripod at the same spot. FX is of course the first wider view, and the DX sensor crops the lens view smaller, which makes it look closer up when enlarged to the same size. The two frames are shown the same size here, so DX is seen enlarged more than FX (but both were the same lens image, from the same lens). Point is, both had the same crop box in ACR; both marked crops are about 12% of the frame height in mm. Sharpening can always help a little, but there was no processing done on this page. There was a slight breeze to wiggle a few leaves. Shutter speed at f/32 got slow, around 1/40 second.


FX

DX

The point of these next 100% crops (a tiny central area cropped as shown here, then shown 100% full size, actual pixels) is not just to show depth of field, because we already know what to expect of that. It is more to show there is no feared diffraction limit around f/11 or wherever. There is no large step representing any limit around f/11, or anywhere. Sure, f/8 is often better (because of diffraction), and sure, diffraction does increase, but sure, you can of course use f/16 and f/22, maybe f/32, because, sure, it can often help your picture. Diffraction does continually increase as the lens is stopped down, but that is about the aperture, not about pixel size. This is the same 105 mm lens in both, and yes, we might debate about f/32, but it certainly does increase depth of field. Any diffraction would be much less visible if the full image were displayed resampled to smaller size instead of shown at 100% size. But obviously there is no reason to always fear some limit at f/11, if the depth of field can help more than the diffraction hurts. You can do this test too.

The near tree and focus are 20+ feet, so that will always be the sharpest point. The light pole is about 250 feet, the power wires are about 900 feet. These are 100% crops of a tiny area. Both crops are the same, 12% of frame height in mm.
Specifically, the FX crop is 613x599 of 7360x4912 pixels, or 1% of total full frame pixels.
The DX crop is 357x347 of 4288x2848 pixels, or 1% of total full frame pixels.
This is ENLARGEMENT. At this scale, the uncropped full frame FX would be about 6 feet wide on a monitor large enough. The full frame DX would be nearly 4 feet wide.


FX D800, 105 mm lens, 100% crop, 12% of the original frame height in mm (599 pixels).
Frame Width: 7360 pixels / 35.9 mm sensor = 205 pixels per mm density (pixel size).

DX D300, 105 mm lens, 100% crop, 12% of the original frame height in mm (347 pixels).
Frame Width: 4288 pixels / 23.6 mm sensor = 181.7 pixels per mm density (pixel size)

Both are the same image from the same lens at the same distance. This is the same crop of the frames, but the DX sensor is simply smaller, and has to be enlarged more to view at the same size (not done here yet).

So the question is, do you want more Depth of Field, or not?

The images tell it, but here are Depth of Field values from the calculator. Subject at 20 feet, and background 880 feet behind it. So f/32 is not quite as sharp due to diffraction (again, this is an enlarged 100% view), but the DOF improvement is dramatic. Do you need that or not? In this case, the best f/32 Depth of Field does not extend past about 42 or 31 feet, and focus remains at less than hyperfocal (DOF does not reach infinity). However at f/32, the background CoC (BKCoC, at 900 feet here) becomes only around 2x larger (FX) than the DOF CoC limit at the 42 or 31 feet (more BKCoC detail at the calculator). Not quite to full DOF this time, but pretty close. We can see DOF looks pretty good, and if DOF is needed, I call that better, a lot better. Note this 100% crop is greatly enlarged here (depending on your screen size), several times larger than the standard viewing size the DOF CoC formula assumes.

105 mm, 36x24 mm FX, subject at 20 feet

  f/stop   DOF               Hyperfocal   BKCoC
  f/5.6    18.3 to 22 ft     213.5 ft     10.6x
  f/8      17.7 to 23 ft     151 ft       7.5x
  f/11     16.9 to 24.5 ft   106.9 ft     5.3x
  f/16     15.9 to 27.1 ft   75.7 ft      3.7x
  f/22     14.6 to 31.7 ft   53.6 ft      2.7x
  f/32     13.1 to 41.8 ft   38 ft        1.9x

105 mm, 24x16 mm DX, subject at 20 feet

  f/stop   DOF               Hyperfocal   BKCoC
  f/5.6    18.8 to 21.3 ft   320 ft       15.9x
  f/8      18.4 to 21.9 ft   226.4 ft     11.2x
  f/11     17.8 to 22.8 ft   160.2 ft     8x
  f/16     17 to 24.2 ft     113.4 ft     5.6x
  f/22     16.1 to 26.5 ft   80.3 ft      4x
  f/32     14.8 to 30.7 ft   56.9 ft      2.8x
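Those rows follow from the standard thin-lens depth of field formulas. A hedged Python sketch that approximately reproduces them (assuming CoC of 0.03 mm for FX and 0.02 mm for DX, as used in the text; the online calculator may round a bit differently):

    def dof_limits_ft(focal_mm, fnumber, subject_ft, coc_mm):
        s = subject_ft * 304.8                              # subject distance, mm
        H = focal_mm**2 / (fnumber * coc_mm) + focal_mm     # hyperfocal distance, mm
        near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
        far = float("inf") if s >= H else s * (H - focal_mm) / (H - s)
        return near / 304.8, far / 304.8, H / 304.8         # near, far, hyperfocal in feet

    print(dof_limits_ft(105, 32, 20, 0.03))   # FX: about 13.1 to 41.8 ft, hyperfocal about 38 ft
    print(dof_limits_ft(105, 32, 20, 0.02))   # DX: about 14.8 to 30.7 ft, hyperfocal about 56.9 ft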

Depth of Field can be confusing on cropped sensors. We do of course routinely say that in usual practice, cropped cameras (with shorter lenses) see greater DOF than a larger sensor. But this example was Not usual practice. If standing in the same place, normally we would substitute a shorter lens (equivalent view of 105 mm / 1.5 crop = 70 mm) on the DX body, to capture the same view. That expected shorter lens is what helps small sensor DOF, but we didn't do that here. Or if using the same lens, DX has to stand back 1.5x farther to see the same field of view, and that greater distance helps DOF in usual practice. But we didn't. We didn't do anything; we just stood in the same place here at 20 feet with the same lens, so DX did see a smaller cropped view (see the first uncropped image views just above). And so here, the only difference is that the smaller DX sensor still has to be enlarged 1.5x more to compare its images at the same size as FX. Greater enlargement hurts DOF, which is why sensor size is a DOF factor. So, the DOF numbers are correct (for the assumed standard 8x10 inch print size).

Degree of enlargement is a big factor. The same two f/32 images above are repeated below, with the smaller DX image enlarged more so it will view at the same size as FX now. FX D800 first, then DX D300 next. Both are the same 105 mm lens on the same tripod in the same spot. But the DX looks telephoto because its sensor is smaller (sees a smaller cropped view), so it needs to be enlarged more here (done below), which enlarges the diffraction too. FX is still shown at about 100%, and DX is shown larger than 100%. We would not normally view these this hugely large - the uncropped frames were 7360x4912 and 4288x2848 pixels - so a smaller view would look better than this.

The FX D800 is 36 megapixels, and the DX D300 is 12 megapixels, so in this case the DX pixels are slightly larger, about 13% larger. That may hurt resolution, but it does not affect lens diffraction. However, what we can see is that the smaller DX sensor cropping does require half again more enlargement to reach the same size as FX (not done above). That shows the diffraction larger too. Normally we think of DX having more depth of field than FX, however that assumes DX with the same lens would stand back 1.5x farther to be able to show the same image view in the smaller frame. We didn't here. Everything was the same here (except DX has to be enlarged half again more, below).

Shown differently enlarged to be same size

FX
FX f/32, 205 pixels/mm - smaller pixels, slightly higher resolution,
but same lens allegedly affecting f/32 diffraction more?
DX
DX f/32, 182 pixels/mm - larger pixels, slightly lower resolution,
but same lens allegedly affecting f/32 diffraction less?

Again, this is the same lens on both cameras, both standing in the same spot (on same unmoved tripod), both enlarged to same size here. The real difference is the sensor sizes. The actual difference in the 100% crops is the sensor pixel density, but to compare here at same size, the smaller image is enlarged 1.5x more, reducing its relative sensor density to 2/3, or 121 pixels/mm (larger pixels, less sampling resolution). Smaller pixels are simply greater sampling resolution, always good to show any detail present in the lens image.

But the enlargement is necessarily different (see same enlargement just above). Enlarging DX half again more is the necessary hardship, but that is what normal practice always has to do for smaller sensors. It's unfair to compare to FX if we don't compare the same image. But Depth of Field is often more important than diffraction.

Here they are repeated with both at the same enlargement, both the same 100% crops at f/32. These crops are both the same 12% of frame height in mm, but yes, the pixel widths are 613/357, which is 1.7x instead of 1.5x, simply because the FX was 205 pixels/mm instead of 182 pixels/mm (higher resolution in the same mm, but smaller pixels, which were supposed to have some bad effect on diffraction?)
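That ratio is simple arithmetic (a short Python check, using only the crop and density numbers quoted above):

    fx_mm, dx_mm = 35.9, 23.6            # sensor widths, mm
    fx_density, dx_density = 205, 182    # pixels per mm, from the frame widths above

    print(fx_mm / dx_mm)                                  # 1.52 - sensor (and crop) size ratio in mm
    print(613 / 357)                                      # 1.72 - the same crops compared in pixels
    print((fx_mm / dx_mm) * (fx_density / dx_density))    # about 1.71 - size ratio times density ratio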

Shown Equivalently enlarged to preserve original size ratio

FX
FX f/32, 205 pixels/mm - smaller pixels, slightly higher resolution,
but allegedly affecting f/32 diffraction more?
DX
DX f/32, 182 pixels/mm - larger pixels,
slightly lower resolution,
but allegedly affecting f/32 diffraction less?

Getting a bit ahead, the detail behind this paragraph is in the next section below. But for any lens at f/32, the Airy disk radius calculation is x = 0.02147 mm (for green light). These tree leaves are not point sources, and we won't see disks like stars would show (with enough magnification). The direct calculation of radius x is the minimum resolvable spacing, and so 1/x is the maximum analog resolution in "line pairs per mm", which at f/32 is reduced to a maximum of 46.6 LP/mm. Each pair requires at least 2 pixels to digitally resolve the black and white lines of the pair. More pixels could be better sampling of course, but pixels larger than 1/2 x could not resolve even this diffraction. But smaller and more pixels make it easy. Easy does not mean we don't have diffraction; pixels don't affect what the lens does. Easy just means our better sampling density can resolve whatever is there.
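For reference, a small Python version of that calculation (using the same green light assumption, λ = 0.00055 mm):

    # Rayleigh criterion spacing: x = 1.22 * wavelength * f/number
    # (the radius of the first dark ring of the Airy pattern).
    def airy_radius_mm(fnumber, wavelength_mm=0.00055):
        return 1.22 * wavelength_mm * fnumber

    x = airy_radius_mm(32)
    print(x)        # 0.021472 mm, the 0.02147 mm quoted for f/32
    print(1 / x)    # about 46.6 line pairs per mm maximum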

In this 36 megapixel FX, the pixel pitch is 35.9 mm / 7360 pixels = 0.004878 mm/pixel, so x = 0.02147 mm at f/32 is 4.4 pixels. And it is 72% of the FX 0.03 mm CoC, which is 6.1 pixels (CoC is used to compute DOF sharpness). 1/x is 46.6 LP/mm resolution maximum, and our sensor coincidentally has 7360/35.9 = 205 pixels/mm, which is 102 LP/mm maximum. So this is definitely diffraction limited. The diffraction 46.6 LP/mm is worse than the sensor's 102 LP/mm.


DX f/32
In this 12 megapixel DX, x is 3.9 pixels, and is 107% of the DX 0.02 mm CoC. This sensor resolution is 4288 pixels / 23.6 mm = 182 pixels/mm, or 91 LP/mm resolution. The diffraction in the same lens at f/32 is the same 46.6 LP/mm. It is diffraction limited, but IMO, it's really not bad. This image is that full DX f/32 image at 1/8 size. The f/32 Depth of Field is sufficient to show even that thin distant power line at 900 feet, even at 1/8 size. Sometimes sharpness can seem relative, but Depth of Field is a strong factor.

For a 24 megapixel DX (255 pixels/mm, 3.917 microns), x is 5.5 pixels, and 107% of the DX 0.02 CoC.
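Those pixel comparisons are just the f/32 Airy radius divided by each pixel pitch (a quick Python check using the sensor dimensions quoted above):

    x = 0.02147                      # Airy radius at f/32, mm
    fx_pitch = 35.9 / 7360           # D800 FX pixel pitch, mm
    dx_pitch = 23.6 / 4288           # D300 DX pixel pitch, mm

    print(x / fx_pitch, x / 0.03)    # about 4.4 pixels, and 72% of the 0.03 mm FX CoC
    print(x / dx_pitch, x / 0.02)    # about 3.9 pixels, and 107% of the 0.02 mm DX CoC
    print(x / 0.003917)              # about 5.5 pixels on a 24 megapixel DX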

Comparing diffraction size to the diameter of the Depth of Field Circle of Confusion seems practical. CoC is an existing standard of sharpness. CoC is the DOF maximum limit of blurriness to be tolerated, for example maybe defined as 0.02 mm maximum diameter on some sensors, considered the limit of DOF acceptability. This maximum CoC limit is often 4 to 6 pixels in diameter on today's DSLR. One pixel is not likely very important, and certainly one pixel cannot sample anything well. The diffraction Airy disk also has a diameter in mm. The DOF CoC is often routinely larger than the Airy disk. Both always exist, and both do vary. Stopping down does hurt with greater diffraction, but helps Depth of Field more. If it were otherwise, diffraction would blur away anything depth of field could do, and we see that is obviously not true. Neither is necessarily a big problem until reaching some more extreme limit. Both become important when the blur diameter is enough pixels for us to see it (which is more than one pixel).

How x compares to CoC seems significant (regarding visibility), but how x compares to a pixel is not significant, unless the pixel is larger than 1/2 x, in which case the pixel will be the limiting factor of resolution. But the pixel size does not change, diffraction is what increases. Two pixels in x is the minimum to resolve what's there (and more smaller pixels would be better in that way). However, it is x that measures the diffraction. And Depth of Field is also another concern of sharpness.

Both of these FX and DX versions are way larger than the 8x10 inch print (203x254 mm) standard for comparing Depth of Field CoC. This 36 megapixel FX image printed at 250 dpi would be 29.4x19.6 inches (748x499 mm), and the 12 megapixel DX image at 250 dpi would be 17.1x11.4 inches (436x289 mm).

A large Airy disk does limit lens resolution, it's certainly not fully sharp at f/32 (but it obviously did not cut off and die either). And while we are aware of diffraction, yet we do normally still have quite a bit of resolution left, possibly adequate, very possibly good enough this time (this was after all a 100% crop), for some cases and some uses. Our final use is likely resampled much smaller than this extreme 100% crop view. You can say we have sufficient sampling resolution to even show the diffraction clearly. :) Digital's job is to reproduce whatever the lens creates, and more pixels help to do that better.

But in this case, f/32 also creates awesomely better depth of field, if that is what's needed. And in various situations, we could decide that could be overwhelmingly important, could make all the difference, perhaps make or break. Or maybe sometimes not, when we may prefer to back off on f/stop, if it still works. If we do need really maximally sharp results at f/5.6 or f/8, then we know not to set up a situation needing extreme depth of field. It's a choice, it depends on goals, and we can use the tools we have to get the result we want. Photography is about doing what you have to do to get the result that you want. The alternative is Not getting what you want. But do realize that you have choices.



Airy disk in image of magnified star.
A point source should be zero diameter.
The Airy calculation x is the radius of the first dark ring. The outer rings are often too dark to have much effect.
Airy disk: A note about the numbers

Stars in the night sky are tens to millions of light years away, and so from Earth, they should appear as zero diameter point sources. However at high optical magnification in telescopes, due to diffraction, we see a star not as the smallest point source, but as a larger Airy disk, which is an artifact of the telescope diffraction. The Airy disk diameter inversely depends on aperture diameter (half the aperture diameter creates twice the Airy disk size). The ability to separate and resolve two close points (two touching Airy disks) depends on Airy disk diameter (how much they overlap each other), which depends on aperture diameter - as seen through focal length magnification (twice the focal length shows twice the separation distance).

Telescope users know that telescopes with a larger diameter aperture have better resolution due to less diffraction. That smaller Airy disk diameter can better resolve (separate) two very closely spaced stars, to be seen and distinguished as two close stars instead of as one unresolved blob (blurred together). Known double star pairs are the standard measure of telescope resolution.

The diffraction disk is nothing new; it was known to John Herschel in 1828. The disk is named for George Airy's work in 1834, but this resolution formula is the Rayleigh criterion, Lord Rayleigh, 1879: θ = 1.2197 λ / (2r), where r is the aperture radius (2r is the aperture diameter d). Wikipedia shows its current derivation of that minimum separation to resolve two star points (see example of "resolving", called the Rayleigh criterion). Green is about the center of the span of visible light, with wavelength λ about 0.00055 mm (550 nm).
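In that angular form, a small Python illustration (the 100 mm aperture here is just a hypothetical example diameter, not one from this page):

    import math

    # Rayleigh criterion: theta = 1.22 * wavelength / aperture diameter.
    def rayleigh_arcsec(aperture_mm, wavelength_mm=0.00055):
        theta = 1.22 * wavelength_mm / aperture_mm    # radians
        return math.degrees(theta) * 3600             # arcseconds

    print(rayleigh_arcsec(100))   # about 1.4 arcseconds for a 100 mm aperture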

On the sensor, that angle becomes the linear separation x = f θ = 1.22 λ (f/d), and the combination f/d is the f/stop number, so x = 1.22 λ × f/stop number. In this formula, x is the minimum separation to resolve two of these points. x is the spacing, and 1/x is the resolution. The minimum separation x increases directly with f/stop number. Focal length also affects the magnification of the subject detail (relative to that blur diameter), and in practice, a longer focal length supports a higher f/stop number. The underlying "diffraction" formula is about aperture diameter, 1/d. The focal length distance magnifies it on the sensor, and f/d is the f/stop number. But in photography, we also enlarge the sensor image significantly to view it.

The reciprocal 1/x of that minimum separation x (of two resolved adjacent point sources) is the theoretical maximum resolution allowed by diffraction; 1/x is directly comparable to line pairs per mm (each pair requiring two pixels). This applies to our camera lenses too, except we don't often photograph point sources. Measured resolution numbers of real world lenses are of course less than this theoretical limit, because they can't be greater than diffraction allows. The concept of lenses that are "diffraction limited" would be the impressive feat of reaching those theoretical numbers, limited only by diffraction.

This "x" is Not the Airy disk diameter. It is the radius of the first minimum (the first dark ring). That would seem to complicate computing pixel sizes, we would assume we should compare the diameter to a pixel. But instead, X is considered to be the direct resolution spacing. Then 1/x is considered resolution in line pairs per mm, and it does take two pixels to resolve both the black and white lines of a pair. More smaller pixels won't change the diffraction, or the lens resolution, they will just resolve the rings better (and sensor enlargement should aid seeing that).


None of this is about pixels. Sensor pixels will have their own resolution limits, unrelated. A greater number of pixels does not affect the sharpness that our lens can create, but a greater number of (smaller) pixels normally gives greater resolution of sampled detail that we can resolve in that lens image. The pixels' job is merely to try to digitally reproduce the analog lens image they see. The lens image is what it is, and the better the pixels can reproduce that image, the better (regardless of the detail that is there... a pristine image or one suffering diffraction).


More images (maybe too many) are on the next page


Copyright © 2014-2017 by Wayne Fulton - All rights are reserved.
