All of the world's photo and tonal image data contains Gamma Correction (gamma is in the specifications, in sRGB for computer monitors, and for the past 80 years in all television images, and HDTV now).
Creation of a digital image (by a digital camera, a digital scanner, or a graphics editor program) always adds gamma to all created tonal image data (meaning color or grayscale images, but excluding one-bit line art images). Gamma artificially boosts the tonal data values in the image file, before the image is shown.
Why? The purpose of Gamma Correction is to oppositely correct for the deficiencies (non-linearity) of CRT monitors (CRT = Cathode Ray Tube), which we used for many years, from the earliest television. CRT losses make dimmer values darker, therefore the data is intentionally made overly bright first, in a special way so that it will come out just right (the data values are raised to the power of exponent 1/gamma).
However, today's LCD monitors (LCD = Liquid Crystal Display) are already linear and don't need gamma, but we continue to use and expect gamma, because all of the world's existing digital photo images already contain gamma correction (and any CRT will still need it too).
Longer version, more history detail:
Newbies may not know yet, but gamma correction was developed 80 years ago (before 1940) to make CRT tubes suitable to show tonal images, specifically for the first television video. The CRT response is a power curve, which is a very serious problem for images. The CRT misbehaves so as to show the image as if all data values had been raised to the power of 2.2 (this CRT 2.2 response curve was given the symbol name gamma by the science people). 2.2 is near 2, so we could say it was approximately as if all data values were squared before the CRT showed them (and even a bit more at 2.2). That made bright tones very bright, but the dimmer tones were lost in the dark, which is really tough on tonal images. The gamma correction solution was to first apply the reverse power of 1/gamma (1/2.2 = 0.45) to the image data. This was done at the TV transmitter, so it automatically corrected all television set receivers. All they had to do was show it. That 0.45 power is a reduction, but it brought the bright tones down much more and the dark tones much less, so that, combined with the CRT's own losses, the net response was a straight linear line again. The signal could then be amplified to a suitable level, and television turned out to be a great invention. It has a lot of technology, but gamma let it actually be shown.
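That encode/decode round trip can be sketched in a few lines. A simple gamma 2.2 power law is assumed here (ignoring sRGB's small linear toe segment), and the function names are mine, just for illustration:

```python
# Sketch of the gamma round trip: encode at the transmitter/camera,
# decode at the CRT (or the LCD's decode step).
GAMMA = 2.2

def gamma_encode(linear):
    """Gamma correction: linear [0..1] -> encoded [0..1], v ** (1/2.2)."""
    return linear ** (1.0 / GAMMA)

def gamma_decode(encoded):
    """What the CRT response (or LCD decode chip) does: v ** 2.2."""
    return encoded ** GAMMA

# A 50% linear tone is boosted to about 73% by encoding...
encoded = gamma_encode(0.5)       # ~0.7297
# ...and the display's 2.2 response brings it back to 50%.
restored = gamma_decode(encoded)  # ~0.5, the original linear value
```

The two exponents cancel, which is the whole plan: the data is stored overly bright, and the display losses pull it back to linear.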
Then when computers were invented 40 years later, they used CRT tubes too, so the same issue. It didn't matter for text characters (all shown at the same brightness), but when we started dealing with images, computer people got very familiar with gamma. But now, for the last several years, we have used LCD monitors, which have a linear response and don't need gamma. However, all of our existing digital data has gamma in it, and implementing gamma is easy today, so we continue to use gamma for compatibility, and also so any CRT monitors can continue to work. You may hear foolish stuff from newbies who don't actually understand, but gamma was for using the CRT. And gamma is pretty much fully automatic now, so it is no worry. Digital cameras and scanners still always add gamma, photo editors still edit gamma, and LCD monitors simply decode it to remove it.
Our eyes and the camera lens see the light from the original photo scene as linear data (linear also meaning here, not yet converted to gamma data). But digital cameras and scanners still add gamma encoding, and our photo image files and our histogram data values will always contain gamma encoding, normally sRGB gamma 2.2 (assumed here). Gamma was required to correct the brightness response of CRT monitors, and our standards still include gamma now for compatibility with all old images. Printers expect and need much of it, but LCD monitors don't, and simply decode it now, to remove it. Our eye only sees the image after it is converted back to linear, but our histogram data is always gamma encoded. Gamma 2.2 is in our specifications for sRGB, which is the standard default RGB color space, used and expected by our monitors and phones and tablets, our printers, the internet, HDTV, and generally by photo printing labs. Gamma 2.2 is assumed here.
So gamma is automatic, but not quite hidden. The main reason we need to know about gamma now is that our image histogram data is gamma encoded. We know exposure 1 EV lower is half the exposure, therefore exposure 1 EV down from the histogram 255 end is half exposure, but that does NOT MEAN HALF OF THE HISTOGRAM. It will be near 3/4 scale in the gamma histogram.
The internet newbies sure can confuse that. No, gamma is NOT done for the eye; the human eye never has any opportunity to see any gamma data. No, gamma is not about 8-bits; gamma is decoded back to original linear values before it is seen. Gamma correction is required to correct and use the nonlinear CRT monitor response (the reason gamma correction was used for all 80 years of television history). LCD monitors are linear and don't need or use gamma, but they must expect it and must simply decode and discard its effect, since gamma is still continued for compatibility with CRT and with all the world's image data.
Fundamentals: Our eye Never sees gamma data, but for this purpose, all of our digital cameras and scanners always output images corrected for gamma. All displays expect to see it, but gamma is removed in their output to our eye. Raw images (which we cannot directly view anyway) do defer adding this gamma step until the later raw processing, but gamma correction will always be present in the RGB JPG output image.
So all color and grayscale digital image files contain gamma data.
Therefore the histograms necessarily show gamma data values.
The RGB numbers we see in the photo editor are necessarily gamma values.
However, the human eye never ever sees gamma data. The eye only wants to see a reproduction of the original linear scene. Anything else would be distortion. Any view of the photo image that our eyes can ever see has necessarily been decoded back to the original linear version (one way or another), but until then, the actual image file data still contains the gamma data numbers.
Gamma Correction is pretty much an invisible operation; it just always happens in the background. But it does affect our histogram data. When you edit a color with RGB numbers in Photoshop, those numbers are gamma corrected data. The midpoint that we imagine as 128 in linear data is in fact near 186 in our histogram (near 3/4). This also affects photos of the 18% gray card: in the histogram's gamma data, 18% is near 46%, which looks near 50% of scale, causing some of us to imagine the 18% gray card is 50% of something. But it is 18%, and gamma correction is what makes 18% appear near 46% in the histogram. 18% is not the middle of anything digital.
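Those two numbers are easy to check. A minimal sketch, assuming a simple gamma 2.2 power law rather than the exact piecewise sRGB curve (`to_histogram` is just an illustrative name, not anything from Photoshop):

```python
# Where does a linear fraction land in a gamma 2.2 histogram?
def to_histogram(linear_fraction, gamma=2.2):
    """Linear fraction [0..1] -> rounded 8-bit gamma histogram value."""
    return round(255 * linear_fraction ** (1.0 / gamma))

print(to_histogram(0.18))  # 18% gray card -> 117, which is ~46% of 255
print(to_histogram(0.50))  # linear "midpoint" 50% -> 186, ~73% of 255
```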
We know that one stop less exposure is 50% brightness, by definition. The literature we see therefore implies the brightest tone drops halfway, to midscale. And that is correct for the linear data the lens saw, however all we see in histograms and photo editors is the gamma data, so one stop drops only to about 3/4 scale (to about 186, more or less, a vague result because the camera is also making other corrections, white balance, color profile, and contrast). So one stop down may be a vague 3/4 scale in the histogram.
If we could see linear digital data, then 128 is half of 255, which are one stop apart.
But the histogram shows gamma data, and then:
One stop down from 255 is gamma 186, or 73% in gamma histogram.
One stop above histogram gamma 128 is 175, or 69%.
Even two stops down from 255 is gamma 136, or 53% in gamma histogram.
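Those three values all follow from the same calculation: halve the linear exposure per stop, then re-encode with gamma 2.2. A minimal sketch, assuming a pure gamma 2.2 power law (`stops_down` is my illustrative helper, not the page's actual calculator code):

```python
# Histogram value after a given number of stops, in gamma 2.2 data.
def stops_down(stops, from_value=255, gamma=2.2):
    """Histogram value that is `stops` stops below `from_value`."""
    linear = (from_value / 255) ** gamma       # decode to linear
    linear /= 2 ** stops                       # each stop halves exposure
    return round(255 * linear ** (1 / gamma))  # re-encode for the histogram

print(stops_down(1))        # 1 stop down from 255  -> 186 (73%)
print(stops_down(2))        # 2 stops down from 255 -> 136 (53%)
print(stops_down(-1, 128))  # 1 stop up from 128    -> 175 (69%)
```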
This should affect how you use the histogram to compensate your exposures.
Linear is what our eye or camera lens sees in the real world. In math, "linear" means it graphs as a straight line, meaning twice the input produces twice the output (proportional, not distorted). Gamma data has been raised to a power exponent, NOT linear. So when discussing images, linear simply means "not gamma", implying the analog scene data (also still linear at the camera sensor). Linear also implies the final reproduction that is presented to our eye, hopefully the same as we saw when either standing in front of the original scene or viewing a reproduction of it. Our eye always wants to see only the linear version, like the original scene. Anything else would be distortion. So gamma data is necessarily always "decoded" first, one way or another, back to linear, before any eye ever sees it.
Gamma Correction is non-linear power-law processing, done to correct for CRT monitors, which show data as if it had been raised to the exponent of 2.2 (roughly numerically 2, so showing as if data values had been approximately squared). That has the effect of leaving the low values behind (the reproduction is too dark). So to correct the expected CRT action, we prepare by first doing Gamma Correction, applying exponent 1/2.2 to the data (roughly square root), exactly offsetting the expected losses. (The math details are that the data is normalized first, to be a fraction between 0 and 1, and then the square root gives a larger number than before, specifically without changing the end point range, more below.) Then the corrected image will come out of the CRT just right, i.e., a linear reproduction again for the eye to see. We've had to do that for CRT for the life of television, about 80 years.
Now and then, I get email... 😊 Charles Poynton wrote a gamma article making the statement that we would still need to use gamma even if we no longer use CRT monitors. He didn't explain why, but IMO, the only reason that makes any sense is that by continuing gamma, we can still correctly show all the past years of gamma-encoded images, exactly as we still do. But readers seem to imagine he was speaking of something else, the so-called 100 perceptual steps we perceive, and how gamma separates the low values better (in days of analog television). However the eye simply never sees any gamma data. Hopefully and ideally, what we see is always decoded exactly in reverse, back to the same original linear values for the eye (any other result is distortion). Today, LCD monitors contain special gamma lookup table chips to exactly restore the recomputed original linear values. Storing as 8-bit data can cause the slightest rounding differences, but that is an 8-bit issue, and Not a gamma issue. Gamma is the cause needing that numerical substitution, not the solution. The math of CRT gamma is the only reason we don't simply store the desired original linear number that we actually want to see (there would be no question about the tonal value then). Still, in the last few years, some do seem to imagine that gamma is somehow done to benefit the eye. It's impossible to agree with that interpretation, since no eye ever sees gamma data (more than 8 bits can be slightly more accurate, but that is an 8-bit argument, not about gamma, and still true even if we did store the original linear data). And 8 bits does obviously seem generally "good enough". Gamma was developed about 80 years ago specifically to make CRT monitors usable for tonal television images.
But yes, even after LCD monitors, it certainly was much better to continue to use gamma for the reason that we do continue it, because continuing to use gamma for compatibility prevented obsoleting and rendering incompatible 100% of the world's existing images and video systems (and any CRT will still need gamma too). That must have been a very easy decision, it was vastly easier than starting over, since the world is full of gamma images. And it's no big deal today, since gamma has become so easy and inexpensive to implement (it's just a chip now). Compatibility is a great and best reason to continue using gamma (even if LCD has no use for it, and so to speak, must just discard it).
The big problem I do see is that this wrong notion (that the eye itself needs gamma somehow?) has recently caused some false internet claims (of a theory impossible to explain). Their only argument is that the eye has a similar nonlinear response, which Poynton says is "coincidental". I think the eye is even the opposite effect, and our brain does its own correction, but be that as it may, human eyes simply NEVER see any gamma data. We only see analog linear scenes, or linear reproductions of them. Our eyes evolved without CRT gamma, and eyes don't need gamma now. We did use gamma data for decades, knowing it was designed only to correct the CRT gamma response. Now CRT is about gone, but we still use gamma, so suddenly newbies decide gamma must have instead been needed for the human eye response somehow? Come on guys, don't be dumb. 😊 (See more below.) Gamma Correction was done for CRT for decades before we ever even imagined digital images. Now all of our digital photo images already have gamma in them, which must still be addressed. And no matter what gamma might do, the eye simply never has any opportunity to see any gamma data, and gamma data is still totally unrelated to our vision response. Our eye only wants to view a properly decoded linear image reproduction, the same as the lens saw, the same as our eyes would see if we were still standing there at the scene. However, all of the world's images are in fact already gamma encoded, so we do continue gamma for that compatibility.
So today, LCD displays simply decode gamma and discard it first, so to speak. The LCD must convert the gamma data back to linear data first, because our eye expects only linear data. We do need to accurately furnish gamma 2.2 data, because gamma 2.2 is what the LCD monitor is going to remove. Our other uses (CRT, and printers to some extent) still rely on the gamma data. The "why" we use gamma may not actually matter much, gamma is pretty much an invisible automatic operation to us, a no-op now, encoded and then decoded back to linear. But still, we can hear dumb things on the internet, and novices are being told wrong things, which is a pet peeve for me.
Our eye always expects to see only linear data, a linear reproduction of the original linear scene, same as if we were still standing there in front of it. Any deviation from linear would be seen as data corruption. Any "need" for gamma for the eye or for the LCD is laughable technically, but our need for compatibility is still extremely important, so we still do it. The eye has no use for gamma. And the eye never sees gamma data. We may still encode gamma into the data for other reasons (CRT before, compatibility now), but it is always necessarily decoded back to linear before any eye ever sees it. Anything else would be distortion. The eye has no use for gamma, and the eye never sees gamma data.
The fundamentals are impossible to ignore: Our eye only looks at linear scenes, or at linear reproductions of scenes. The eye never ever sees gamma data, which would look too bright and unnatural if it did. The reason to do gamma correction is to correct CRT response. CRT once was all there was, but CRT is no longer popularly used today. However, since all of the world's tonal images are gamma encoded, the obvious reason we still continue gamma today is for compatibility, which is all-important. That does NOT mean we ever see gamma images.
Our eyes never see gamma data (it's always decoded first, one way or another). We expect to see linear data. Our camera sensors also see linear data. Our images do begin and end as linear, and it seems every minimal article about our histograms portrays them as linear data. However, all our tonal image files and all of their histograms contain gamma data, which changes the numerical values you will see in your histograms. Our photo images are gamma data, our files and histograms contain gamma data, and the numerical values are gamma encoded. (One-bit line art is the exception; 0 and 1 are not tonal, so there are no issues with linearity, and no gamma is needed.) But our eyes see only decoded linear images.
Histogram data is gamma values. A gamma correction calculator (for photo images) to help with the numbers:
Numbers only. A NaN result is an error meaning the input was Not A Number. The calculator always also shows the number it interpreted as input.
The "values" are the 0..255 values in a histogram (which are gamma numbers in the histogram, and linear numbers at the sensor). The percentages are of the 255 full scale histogram value.
Histogram values are integers [0..255], so numbers like 127.5 cannot actually exist in our image files. The calculator math will still use a decimal fraction value though, if it pleases you. And you can show a couple of decimal places on Option 1, 2, 5 histogram values (if interested in rounding, maybe).
Option 5 (measuring stops down of a histogram value) is possibly the most interesting. It offers a rough approximation of how exposure changes affect the gamma histogram that we see. For example, one stop underexposed from the 255 end should be 50% at 128 IF IN LINEAR SENSOR DATA (which we never see). But in the gamma histogram image data, one stop down is about 186 or 73%, about 3/4 scale. Or 1/3 stop down from 255 should be about 230, at 90%.
However, while the math is precise, practice is a bit rough because digital cameras are also making a few simultaneous tonal adjustments, for White Balance, Contrast, Color Profile like Vivid, Saturation, etc, which also shift the histogram data. Therefore the gamma values in histogram data probably are a little different than the exact values predicted. But it likely will be in the ballpark. More gamma calculations, including more detail of this "stops down", is on next page.
Comparing with Option 5, the Exposure slider of Adobe Camera Raw (ACR, in Photoshop, Elements, and Lightroom) does not quite match calculations on EV values, and you may see things you don't expect. Its Exposure slider with a -1 EV adjustment is not the same as what the camera does (but it is fancier, and the difference is good). It appears to be a standard Curve tool that raises or lowers the 3/4 point (192 X level) of the response curve by the specified amount. The change at that point does agree numerically with the specified EV calculation. This method is a standard gamma adjustment, raising or lowering the center of the curve, which makes an excellent Brightness adjustment because the end points of gamma cannot move, a safety measure that prevents clipping. Whereas actual camera exposure adjusts all levels equally, which can cause clipping.
(Detail: The 0..255 range is first scaled to be 0..1 for gamma, and then 0 to any exponent is still 0 and 1 to any exponent is still 1. But the gamma exponent still changes all of the other data values in between.)
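That endpoint detail can be shown in two lines (simple gamma 2.2 power law assumed):

```python
# After normalizing to [0..1], any exponent leaves the endpoints alone,
# so gamma itself never causes clipping at either end of the range.
gamma = 2.2
black = 0.0 ** (1 / gamma)  # still 0.0: black stays black
white = 1.0 ** (1 / gamma)  # still 1.0: full scale stays full scale
mid = 0.5 ** (1 / gamma)    # but midscale moves, up to about 0.73
```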
This chart would agree with the Curve Tool in the next diagram below, if the X 75% point (192 level) were lowered to the 140 level (140/255 = 55%) at the Y 55% point, which is -1 EV at that point.
But another Exposure tool at the Photoshop menu Image - Adjustments - Exposure does match these calculator values, and -1 EV does reduce 255 to 186 as expected (and +1 EV certainly can clip the highlights then).
We never see linear histograms for our photo images; the image files and histograms we see are RGB gamma data. These tonal images are always gamma data (but one-bit line art is not). The camera sensor chip was linear, and raw files are still linear, but they are converted to RGB data for use, and RGB images are gamma encoded (until shown, when they are decoded to linear data again for the eye). But even raw files show gamma histograms (from an embedded JPG image included in the raw file).
There is a "Where in the Histogram Gamma is X stops down from 255?" chart that might be of interest.
But gamma in 8-bit files and 8-bit target video space can create tiny rounding errors in the numbers (possibly "off by one"). This is no big deal (8-bits is what we do, and it obviously works fine). This is Not related to the eye. It is because of the rounding to integers [0..255] in 8-bits, brought about because we do gamma math in 8-bits.
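That "off by one" effect is easy to demonstrate. A sketch assuming a simple gamma 2.2 power law done in 8 bits:

```python
# Encode linear 0..255 values to rounded 8-bit gamma integers, decode
# back, and measure how far the round trip lands from where it started.
GAMMA = 2.2

def roundtrip(linear255):
    """Linear 0..255 -> rounded 8-bit gamma value -> back to linear."""
    encoded = round(255 * (linear255 / 255) ** (1 / GAMMA))
    return 255 * (encoded / 255) ** GAMMA

# Worst case over all 256 levels is about one level, caused by the
# rounding to integers, not by the gamma math itself.
worst = max(abs(v - roundtrip(v)) for v in range(256))
```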
It can be made to sound like voodoo, but it's actually pretty simple. A CRT display is not a linear device. CRT displays have serious tonal losses, the brightest tones get through and the dimmer tones are lost (dark). We used CRT display for nearly 80 years for television, so this fix is very well understood.
This nonlinear CRT response curve is named gamma, which is shown at right. Gamma is the problem, and Gamma Correction is how we fix it. These graph axes represent the [0..255] range, but the numbers are normalized to show [0..1] values (like percentage). Normalization is very important, because 0 to any exponent is still 0, and 1 to any exponent is still 1 (gamma does not cause any clipping).
Read it like any regular curve tool (which I have marked an example in Blue), with input at bottom, and output at left. The straight 45 degree line is the linear response, linear because the blue marked 60% input has a 60% output. The straight line is also gamma 1.0, linear.
The lower solid red curve named gamma shows the typical nonlinear response of a CRT monitor. This CRT response curve shows 50% input is down to 22% output (and much of the output will be too dark). To correct this nonlinear CRT response, the image data is first boosted nonlinearly, to new values modified equally in the opposite direction of the expected losses (the upper dashed red curve named gamma correction). Then, combining these opposite effects, the resulting straight 45 degree line shows the corrected linear response, so that any input is the same numerical output (60% input from bottom is 60% output at left, no change, example shown in blue). That corrected curve is shown on a CRT: the image data is boosted so that midpoint 50% is raised to 73% (the red dashed line) in gamma data (calculator above, Option 2, 50%). After which (the solid red CRT response curve), the image will display properly (linearly, on the straight 45 degree line) on a nonlinear CRT. So even after suffering these expected CRT losses, the corrected CRT output will come out linear (the straight line in the graph).
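The graph can also be read numerically (gamma 2.2 power law assumed). Here is the blue 60% example and the uncorrected 50% point:

```python
# The blue example: 60% input, through gamma correction and then the
# CRT's 2.2 response, comes out 60% again (the straight 45-degree line).
GAMMA = 2.2
x = 0.60
corrected = x ** (1 / GAMMA)  # the dashed curve: 60% boosted to ~79%
shown = corrected ** GAMMA    # the CRT response pulls it back to ~0.60

# The uncorrected CRT response alone: 50% input shows as only ~22%.
crt_alone = 0.50 ** GAMMA     # ~0.2176, much too dark
```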
But our eye will NEVER EVER see gamma, because gamma is always decoded first. Our eyes HAVE ABSOLUTELY NO USE FOR GAMMA; it would instead be added distortion if gamma were ever seen. The eye only wants to see the corrected linear reproduction, intended to exactly match what the lens originally saw at the scene. Our eyes did not see or need gamma data when viewing the actual real scene. Assuming our eyes know how to work properly, it does not matter how our eyes actually do it; the eye expects to see real things in the world (without gamma). All that matters in regard to our photos is showing our eyes a proper linear reproduction of that real world. Due to gamma, we do have to help the CRT do that. And because of that plan, the LCD monitor has to deal with it, and discard gamma first (the gamma 2.2 number must still be carefully observed by all involved). Some seem not to understand the actual facts about gamma.
Since CRT is not used so much now, how gamma affects our histogram is about the last visible evidence of gamma today (gamma does affect printing images too). Otherwise, on LCD today, gamma is just an invisible background process: it happens, the camera encodes it, and then the LCD monitor decodes and discards gamma before we see it, and we're never even aware of it. But the hypothetical "midpoint" of our image data, which is the linear 50% level at the camera sensor, is in fact near 73% in our gamma histograms. And the 50% "midpoint" of our histogram is in fact 22% linear (graph above). Meaning, if the right end of your histogram data is seen at the 50% midpoint, you will need 2.2 stops more exposure to raise it to 100% (NOT the one stop that might be imagined; see the calculator).
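The 2.2 stops figure follows directly from the math (gamma 2.2 power law assumed):

```python
import math

# Decode the histogram's 50% "midpoint" to linear, then count how many
# stops (factors of two) it sits below full scale.
GAMMA = 2.2
linear = 0.50 ** GAMMA                    # ~0.2176 linear
stops_below_full = math.log2(1 / linear)  # = 2.2 stops exactly,
                                          # since 0.5**2.2 == 2**-2.2
```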
The correct Google search term for this subject is Gamma Correction. The excellent graph above came from Wikipedia, but that article on Gamma Correction is totally corrupted now, and this graph was removed. <sigh> So don't believe everything you read now. It is the internet, and there are good sources, and also those that frankly don't know. That part you see about the purpose of gamma being to aid the human eye response is utter nonsense, made-up gobbledygook. The eye never ever even sees gamma data, the eye only sees the decoded data, reversed back to be the same original linear version again. That's the purpose of gamma, meaning gamma is only to correct the response of CRT, so we can in fact see the original linear image. And we still do it today (LCD) for compatibility with all the world's previous images.
The Adobe Levels tool (CTRL L in Photoshop and Elements) has a gamma option. Adobe Help calls its center slider "Midtones", but describes it as "The middle Input slider adjusts the gamma in the image." It certainly is a very good tool to adjust overall image brightness (much better than "Brightness" tools, which merely add a constant to all tones to shift the histogram sideways, which can cause clipping). This gamma tool raises the center of the curve, but the endpoints stay fixed (same range is fixed).
The tool (Levels center slider) changes image brightness by changing gamma, raising the center of the curve shown above. This center slider of Levels shows 1.00 by default, which means 1x of the existing gamma (whatever it was, but probably 2.2, so the default 1x of 2.2 is still 2.2, no change at 1x). But other slider values are multipliers of that existing gamma. A calculator using the math of this gamma multiplier is on the next page.
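As a rough sketch of the idea, assuming the common Levels-style behavior of output = input ^ (1/slider) applied to the already-encoded data (Adobe's exact internal math is not documented here, and `levels_midtones` is my own illustrative name):

```python
# A Levels-style "midtones" gamma slider applied to 8-bit gamma values.
def levels_midtones(value255, slider=1.0):
    """Apply a gamma-multiplier slider to an 8-bit gamma value."""
    v = value255 / 255.0
    return round(255 * v ** (1.0 / slider))

print(levels_midtones(128, 1.0))  # slider 1.00: no change, still 128
print(levels_midtones(128, 1.5))  # slider > 1 brightens midtones: 161
print(levels_midtones(0, 1.5))    # endpoints cannot move: still 0
print(levels_midtones(255, 1.5))  # still 255, so no clipping
```

Note the fixed endpoints, which is exactly why this kind of adjustment cannot clip, unlike a "Brightness" tool that adds a constant.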
Today, our LCD display is considered linear and technically does not need gamma. However we still necessarily continue gamma to provide compatibility with all the world's previous images and video systems, and for CRT and printers too. The LCD display simply uses a chip (LookUp table, next page) to decode it first (discarding gamma correction to necessarily restore the original linear image). Note that gamma is a Greek letter used for many variables in science (like X is used in algebra, used many ways), so there are also several other unrelated uses of the word gamma, in math and physics, or for film contrast, etc, but all are different unrelated concepts. For digital images, the use of the term gamma is to describe the CRT response curve.
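The LookUp table idea is simple to sketch. A plain gamma 2.2 power law is assumed here; real monitor LUTs may differ in size and precision:

```python
# A 256-entry decode LUT mapping 8-bit gamma values straight to linear
# output levels, so the display just indexes the table per pixel
# instead of doing the power math.
GAMMA = 2.2
decode_lut = [round(255 * (v / 255) ** GAMMA) for v in range(256)]

print(decode_lut[0], decode_lut[255])  # endpoints unchanged: 0 255
print(decode_lut[186])  # gamma 186 -> linear 127, one of the
                        # "off by one" 8-bit roundings near linear 128
```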
We must digitize an image first to show it on our computer video system. But all digital cameras and scanners always automatically add gamma to all tonal images. By "tonal", I mean a color or grayscale image (one with many different tones), as opposed to a one-bit line art image (two colors, black or white, 0 or 1, with no gray tones, like clip art or fax), which does not need or get gamma (values 0 and 1 are entirely unaffected by gamma).
Gamma correction is automatically done to any image from any digital camera (still or movie), from any scanner, or created in a graphics editor... any digital tonal image that might be created in any way. A raw image is an exception, only because gamma is then deferred until later, when the raw file actually becomes an RGB image. Gamma is an invisible background process, and we don't necessarily have to be aware of it; it just always happens. This does mean that all of our image histograms contain and show gamma data. The 128 value that we may think of as midscale is not the middle tone of the histograms we see. The original linear 128 middle value (middle at 50% linear data, 1 stop down from 255) is up at about 186 in gamma data, and in our histograms.
The reason we use gamma correction: For many years, CRT was the only video display we had (other than projecting film). But CRT is not linear, and it requires heroic efforts to use properly for tonal images (photos and TV). The technical reason we needed gamma is that the CRT light beam intensity varies with the tube's electron gun signal voltage. The CRT does not compute the decode formula; that formula simply resulted from studying what the non-linear CRT losses already actually do in the act of showing the image on a CRT ... the same effect. The non-linear CRT simply shows the tones, with a response sort of as if the numeric values were squared first (2.2 is near 2). These losses have variable results, depending on the tone's value, but the brighter values come out brighter, and the darker values come out darker. Not linear.
How does CRT Gamma Correction actually do its work? Gamma 2.2 is roughly 2, so exponent 1/2.2 is roughly a square root, and exponent 2.2 is roughly squaring. I hope that approximation simplifies instead of confusing. So encoding Gamma Correction raises the input to the power of 1/2.2, roughly the square root, which condenses the image data range smaller. Then later, the CRT gamma decodes it to the power of 2.2, roughly squared, which expands it to bring it back exactly to the original value (reversible). Specifically, for a numerical example with two tones 225 and 25: value 225 is 9x brighter than 25 (225/25 = 9). But (using the easier exponent 2 instead of 2.2), the square roots are 15 and 5, which is only 3 times more, compressed together, much less difference ... and 3² is 9 again (if we use 2.2, it is 2.7 times more). So in that way, gamma correction boosts the low values higher, moving them up nearer the bright values. And 78% of the encoded values end up above the 127 50% midpoint (see the LUT on the next page, or the curve above). So to speak, the file data simply stores roughly the square root, and then the CRT decodes by showing it roughly squared, for no net change, which was the plan: to reproduce the original linear data. The reason is that the CRT losses are going to show it squared regardless (specifically, the CRT response is the power of 2.2).
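Both the 9x-to-2.7x compression and the 78% figure can be checked directly (gamma 2.2 power law assumed):

```python
# Encoding compresses a 9x linear ratio down to about 2.7x, and most
# encoded values end up above midscale.
GAMMA = 2.2
a, b = 225 / 255, 25 / 255
ratio_linear = 225 / 25                     # 9x apart in linear data
ratio_encoded = (a ** (1 / GAMMA)) / (b ** (1 / GAMMA))
print(ratio_encoded)                        # ~2.7x after encoding

# Fraction of the 0..255 linear levels that encode above midpoint 127:
above = sum(1 for v in range(256)
            if round(255 * (v / 255) ** (1 / GAMMA)) > 127)
print(above / 256)                          # 200/256 = 0.78125, ~78%
```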
Not to worry, our eye is NEVER EVER going to see any of these gamma values. The non-linear CRT output is a roughly squared response that expands the data back (restoring our first 225 and 25 linear values via the actual CRT losses that we planned for). CRT losses still greatly reduce the low values, but those values were first boosted in preparation for it. So this gamma correction operation can properly show the dim values linearly again (since dim starts off condensed, up much closer to the strong values, and then becomes properly dim when expanded by the CRT losses). It has worked great for many years. But absolutely nothing about gamma is related to the human eye response. We don't even need to care how the eye works. The eye NEVER sees any gamma data. The eye merely looks at the final linear reproduction of our image on the screen, after it is all over. The eye only wants to see an accurate linear reproduction of the original scene. How hard is that?
Then more recently, we invented LCD displays, which became very popular. These are considered linear devices, so technically, they don't need CRT gamma anymore. But if we did create and use gamma-free devices, then our device couldn't show any of the world's gamma images properly, and the world could not show our images properly. And our gamma-free images would be incompatible with CRT too. There's no advantage in that, so we're locked into gamma, and for full compatibility, we simply continue encoding our images with gamma like always before. This is easy to do today; it just means the LCD device includes a little chip to first decode gamma and then show the original linear result. Perhaps it is a slightly wasted effort, but it's easy, and reversible, and the compatibility reward is huge (because all the world's images are gamma encoded). So no big deal, no problem, it works great. Again, the eye never sees any gamma data; it is necessarily decoded first, back to the linear original. We may not even realize gamma is a factor in our images, but it always is. Our histograms do show the numerical gamma data values, but the eye never sees a gamma image. Never ever.
Printers and Macintosh: Our printers naturally expect to receive gamma images too (because that's all that exists). Publishing and printer devices also need some of that gamma, not as much as 2.2 for the CRT, but the screening methods need most of it (for dot gain, which is when the ink soaks into the paper and spreads wider). Until recently (2009), Apple Mac computers used gamma 1.8 images. They could use the same CRT monitors as Windows computers, and those monitors obviously were gamma 2.2, but Apple split this up. The 1.8 value was designed for the early laser printers that Apple manufactured then (and for publishing prepress), to be what the printer needed. Then the Mac video hardware added another 0.4 of gamma correction for the CRT monitor, so the video result was roughly an unspoken gamma 2.2, even if their files were gamma 1.8. That worked before the internet, before images were shared widely. But the last few Mac versions (since OS 10.6) now observe the sRGB world standard of gamma 2.2 in the file, because all the world's images are already encoded that way, and we indiscriminately share them via the internet now. Compatibility is a huge deal, because all the world's grayscale and color photo images are tonal images, and all tonal images are gamma encoded. But yes, printers are also programmed to deal with the gamma 2.2 data they receive, and know to adjust it to their actual needs.
Extremely few PC computers could even show images before 1987. An early widespread source of images was CompuServe's GIF file in 1987 (indexed color, an 8-bit index into a palette of 256 colors maximum, concerned with small file size and dialup modem speeds instead of image quality). Indexed color is better for graphics, and is still good for most simple graphics (those without very many colors). GIF wasn't great for color photos, but at the time, some of these did seem awesome on the CRT monitor. Then 8-bit JPG and TIF files (16.7 million possible colors) were developed a few years later, 24-bit video cards (for 8-bit RGB color instead of indexed color) soon became the norm, the internet came too, and in just a few years, use of breathtaking computer photos literally exploded to be seen everywhere. Our current 8 bits IS THE SOLUTION chosen to solve the problem, and it has been adequate for 30+ years. Specifically, these 8-bit files have three 8-bit RGB channels for 24-bit color: RGB 256x256x256 = 16.7 million possible colors.
While we're on history, this CRT problem (the non-linear response curve named gamma) was solved by the earliest television (first NTSC spec in 1941). Without this "gamma correction", the CRT screen images came out unacceptably dark. Television broadcast stations intentionally boosted the dark values (with gamma correction, encoded to be opposite to the expected CRT losses, that curve called gamma). That was vastly less expensive in vacuum tube days than building gamma circuitry into every TV set. Today, it's just a very simple chip for the LCD monitors that don't need gamma... the LCD simply decodes it to remove gamma now, restoring it to linear.
This is certainly NOT saying gamma does not matter now. We still do gamma for compatibility (for CRT, and to see all of the world's images, and so all the world's systems can view our images). The LCD monitors simply know to decode and remove gamma 2.2, and for important compatibility, you do need to provide them with a proper gamma 2.2 to process, because 2.2 is what they will remove. The sRGB profile is the standard way to do that. This is very easy, mostly fully automatic, and nearly impossible to bypass.
The 8-bit issue is NOT that 8-bit gamma data can only store integers in the range [0..255]. Meaning, we could use the same 8-bit gamma file for a 12-bit or 16-bit display device (if there were any). The only 8-bit issue is that our display devices can only show 8-bit data. See this math displayed on the next page.
The human eye's issue is that we expect to see an accurate linear reproduction.
The manufacturers' issue is that 8 bits has been deemed very adequate.
I suspect the users' issue is that we really don't want to pay more for a bigger solution that's not really needed.
Some Details: This calculator supports the next several paragraphs. Option 6 shows the action that occurs. Option 7 shows that result on all possible 8-bit values. The percentages are of 255 or of normalized 1.0, i.e., percentage of the full scale of the histogram. This calculator simulation assumes a lookup table is normally used, so that only the final result is stored as 8 bits.
Today, 8 bits is the sticky part: We do store computed gamma into 8-bit data and JPG files, which an LCD monitor then decodes into its 8-bit video space. (I would make sure any lower-cost LCD actually has specifications saying "16.7 million colors", which means 8 bits. In the past, some were just 6 bits, which is 0.262 million colors, not a bragging point to advertise.) I'd suggest that serious photo users search their dealer for an "IPS monitor" that specifies 16.7 million colors (just meaning 8 bits). The price of many 23-inch IPS models is down around $150 US now.
How big a deal is gamma in 8 bits? We hear theories, but we all use 8-bit video, and we seem pretty happy with how it works, because it's very adequate (it might be a best compromise for cost, but if it were not adequate, we would have done it another way by now). But our cameras and scanners are not 8-bit devices. We can use 8-bit gamma files for 12-bit or 16-bit images (and our cameras and scanners do that), and also for 12-bit and 16-bit display devices (if any exist, and if they have proper drivers). We tend to blame any imagined problem on 8-bit gamma, but the only actual issue is with 8-bit display devices. There is a closer look at this math situation on the next page.
In 12 bits (not that gamma offers that choice so far), there are 4096 possible values (finer steps, closer choices). In 8 bits, there are 256 possible values, from 0 to 255. From linear 80, the calculated gamma value 150.56 must become integer 150 or 151. If rounded to 151, it decodes back to 80.52, which has to be called 81, which is not 80 (an Off by One error). We can round it or truncate it. If we round it, we throw some values off. If we don't, we throw other values off. It's not predictable after we do the exponentiation; the system has no clue which way would be best for any specific sample. More precision could help, but the error is quite minor, and it's questionable whether it would be worth the cost. Actually, for that, it would seem best to not use gamma at all, and simply store 80 linear in the file, and then directly read it back as 80, with no question about how to reproduce linear. But that does not solve the CRT problem, which is why we use gamma.
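The linear 80 example can be followed with a little code (my sketch, assuming simple gamma 2.2, not sRGB's exact piecewise formula):

```python
# Linear 80: its exact gamma value lands between two integers, and
# neither 8-bit choice decodes back to exactly 80.
GAMMA = 2.2

g_exact = (80 / 255) ** (1 / GAMMA) * 255
print(round(g_exact, 2))                       # 150.56, must be stored as 150 or 151

back_151 = (151 / 255) ** GAMMA * 255
back_150 = (150 / 255) ** GAMMA * 255
print(round(back_151, 2), round(back_150, 2))  # 80.52 and 79.35
print(round(back_151), round(back_150))        # 81 and 79, Off by One either way
```

Whichever integer gets stored, the decoded value must again become an integer, and for this particular value, that integer is 81 or 79, not 80.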
Our cameras and scanners are typically 12 bits today, which is about improving dynamic range. 12 bits has more dynamic range than 8 bits (it captures more detail into darker blacks). Sure, we're still going to convert it to 8 bits for our monitors and printers, but there's still a difference. The camera captures a 12-bit image, and the gamma operation normalizes all of the data into a 0..1 range. Our new 8-bit data does not just chop it off at 8 bits; instead, gamma scales this 0..1 range proportionately into 8 bits, a percentage between 0..1, but fitting into a smaller space (of 0..255 values). Meaning, if the dark tone were at 1% of 4095 in 12 bits, it will also be at 1% of 255 in 8 bits. That might be a low data value of 2 or 3, but it is theoretically there. This might be what Poynton is getting at (still requiring gamma), but I think he doesn't say it. So our image does still contain hints of the 12-bit detail, more or less (it is 8-bit data). However yes, it is compressed into the range of 8 bits, NOT just chopped off at 8 bits. It's still there. That's a difference.
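That proportional scaling can be sketched too. The specific numbers here are my own illustration (a hypothetical 1% dark tone, assuming simple gamma 2.2):

```python
# A 1% dark tone in 12-bit linear data, carried into 8 bits two ways.
GAMMA = 2.2

dark_12bit = round(0.01 * 4095)           # about 41 in 12-bit linear data
normalized = dark_12bit / 4095            # gamma works on the 0..1 range

# Gamma encoding boosts dark tones before the 8-bit store:
with_gamma = round(normalized ** (1 / GAMMA) * 255)
# A plain linear 8-bit store would leave it nearly black:
without_gamma = round(normalized * 255)

print(with_gamma, without_gamma)          # 31 with gamma, only 3 linear
```

The gamma-encoded store keeps that dark tone up at value 31, with neighboring dark tones spread across several distinct codes, while a plain linear 8-bit store would crowd them all down around 3.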
The obvious purpose of gamma was to necessarily support CRT monitors, and now that LCD monitors have replaced CRT, we could use an 8-bit camera that outputs 8-bit linear directly, and our LCD monitor could just show the linear image directly, bypassing its operation to decode and discard gamma. And that could work (on that one equipped LCD), but it would just chop off range at 8 bits, unless the camera were of higher bit count and also did this proportional 0..1 scaling, which gamma just happens to already do. And continuing gamma is simple to do, and it does not obsolete all of the world's old images and old viewing devices. That's my take.
FWIW, converting 16-bit RGB files to 8 bits uses truncation. We never notice, and it is fast. Truncation was popular for gamma in the earlier days; it is very simple and very fast, and was all the simplest CPU chips could do then. It affects different values, with worse results. However, today we have the LookUp table chips (LUT, next page), which are easily created with rounded data. If we use the rounded value, most of the values work out very well, but other possible values might still be off by one. Off by One is going to happen, but it's a minor problem, mostly at the high end where it really doesn't matter. Sometimes the numbers can look kinda bad, until we realize it's only Off by One. For example, even if rounded, maybe 28% of the integer values are still simply not exactly reproducible (encoded and then decoded back, the linear number comes back off by one). In rounded 8 bits, some values will be Off by One (the least possible error). But if rounded, only five of the 256 values will barely reach a 1% difference, which is arguably enough for those few values to perhaps be slightly detectable to the eye (it's very optimistic that we might ever notice it down among all of the pixels).
Note that Options 6 & 7 normalize the data to a [0..1] range, convert the linear values to gamma, convert them back to linear, and look for a difference due to 8-bit rounding. Option 6 does any one value, and Option 7 does all possible values, to see how things are going. But our photos were all encoded elsewhere at 12 bits (in the camera, or in the scanner, or in raw, etc.), so encoding is not our 8-bit issue (it's already done). So my procedure is that Options 6 & 7 always round the input encoded values, and use the Truncate gamma values checkbox only for the decoding, which converts the 8-bit output values by either truncating or by full rounding. This still presents 8-bit integer values to be decoded, which matches the real world, but rounding the input introduces less error, which the camera likely would not cause.
And note too, in calculator Option 7, that these Rounded value errors (28% of values are Off by One) are roughly evenly distributed between +1 and -1 differences. But these Rounded Off by One errors are mostly in the brighter values (no low-end problem), and only five rounded values (2% of values) slightly meet the perceptual limit of 1% (72, 80, 86, 92, 96, each a difference of One, i.e., a difference of one in values less than 100), so this seems a pretty minor issue. Being able to intently study it and possibly detect the change is NOT at all the same thing as casually noticing one pixel somewhere in a photo image.
But the Truncated value errors (50% of values, though truncation is no longer current practice) are worse. All errors are -1 (except the one value 243, which was -2, only because its actual decoded result of 241.986 was truncated to 241). And the dark end suffers seriously from these truncated differences (a difference of 1 is a major fraction at the low end). We surely would notice that. But LUTs are the norm today, easily rounded and still fast. We are speaking of tiny rounding effects on precision (only Off by One errors).
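The round-vs-truncate comparison can be sketched as code. This is my approximation of the procedure described above (round the encoded value, then either round or truncate the decode; simple gamma 2.2):

```python
# Round trip all 256 linear values through 8-bit gamma storage.
import math

GAMMA = 2.2

def encode(v):                               # linear -> 8-bit gamma, rounded
    return round((v / 255) ** (1 / GAMMA) * 255)

def decode(g, store=round):                  # 8-bit gamma -> 8-bit linear
    return store((g / 255) ** GAMMA * 255)

# With a rounded decode, the worst case over all values is Off by One:
rounded_errors = [decode(encode(v)) - v for v in range(256)]
print(max(abs(e) for e in rounded_errors))   # 1

# The off-by-two case: 243 encodes to 249, which decodes to 241.986,
# and truncation makes that 241.
print(encode(243), decode(249, math.floor))  # 249 and 241
```

Exact counts will depend on the precise rounding scheme at each step, but the shape of the result matches the text: rounding keeps everything within Off by One, while truncating always pulls values downward.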
Our 8-bit video does seem adequate. One possible exception might be a large gradient filling most of the screen width (needing more values to not show noticeable differences, i.e., banding), but that is not a photo situation. Hollywood graphics might see it. In general, our monitors' black level cannot show differences in the lowest black tones, and our human eyes cannot detect less than 1% differences at the high end.
If we did not use gamma for an LCD, then our JPG 8-bit file could instead simply contain the linear values directly. We could simply store the linear value in the file, read the same original value back out, and simply show it as is (speaking of an LCD). No technical issues then. I don't advocate doing that; instead we necessarily and desirably still use gamma, for compatibility with all the images and video and printer systems in the world, and compatibility still with CRT monitors. It's easy to continue; it's just a LookUp chip now (next page).
Gamma values stored in an 8-bit JPG file, or decoded into 8-bit video space, are necessarily 8-bit integers, which means the values can only be a limited set of numbers in [0..255]. When computing gamma, we can compute a number like 19.67, but the result can only be 19 or 20, not a fraction. So the values might change by a value of one (almost always only one, Option 7). Rounding up or down appropriately certainly helps accuracy, and video LookUp tables can provide that. Or code could do it, but that is slow math processing. For the huge load of millions of pixels (x3 for RGB), an expedient method to convert to 8 bits is to truncate. It's really not all that bad.
The Truncate gamma values checkbox in the calculator will do the same rounding down, or not (just click it on and off, and watch the values). Output storage results go into 8-bit integers [0..255], i.e., floating point fractions are Not stored, and may not be rounded, same as storing into 8 bits would do. Random values might change by one. 8 bits may not be perfect, but the point is to show that the worst case isn't too bad.
So 8 bits has an effect, but not a big difference. Our accepted computer color system plan is 8 bits (called 24-bit color, 8 bits each of RGB). We all use 8 bits and find no problems. Yes, linear values might decode to come back one less than they went in. A difference of 1 down at 5 or 10 or 20 could possibly be a significant percentage at the low end (where it is very black, and our monitors can hardly show it anyway), but this is nothing at higher values. And this change of One is random, with no pattern to it; don't count on the eye to help figure it out. The 8-bit "problem" is largely only about whether the integer should have rounded up instead of down. The computer can compute gamma conversions to any high precision desired, but the final act of storing a precise result value into an 8-bit file MAY expediently truncate it to a value maybe one less. It is the tiniest error, and it depends on the decode procedure.
Gamma is just reversible math. The formulas graph out the gamma curve above. There's no mumbo-jumbo involved, and it's not rocket science. It is simply a reversible exponential function. Gamma is used to exactly offset CRT losses, to be able to correct and use a CRT display. Simple CPUs are not equipped for much math, so on an LCD display, a LUT chip (below) first simply decodes back with the exact reversed math operation, to recover the same exact linear value we started with (leaving no change the eye could see). Decode uses the exponent 2.2 instead of the reciprocal 1/2.2 used for Encode, which is reversible math. It's like 8/4 = 2, and 4x2 is 8 again. Reversible math, we simply get the same value back. However, there can be slight 8-bit rounding variations of gamma in between, which might change a value by a difference of one sometimes. A small error, but not really a big deal; virtually all of our images and video systems and printers are 8 bits. If 8 bits were not acceptable, we would be doing something else.