
What and Why is Gamma Correction in Images?

All of the world's photo and tonal image data contains Gamma Correction (gamma 2.2 is in the specifications, in sRGB, in digital HDTV, and for many years before that, in analog TV NTSC and PAL).

Because, Gamma Correction oppositely corrects for the deficiencies (non-linearity) of CRT monitors (CRT = Cathode Ray Tube), which we used for many years, from the earliest television. Today's LCD monitors (LCD = Liquid Crystal Display) are linear and don't need gamma, but we continue to use and expect gamma, because all of the world's photo images already contain gamma correction (and any CRT still needs it too). For this purpose, all of our digital cameras and scanners always output images corrected for gamma (except that Raw images defer this step until later processing, and one-bit line art images are not tonal and don't need gamma).

So photo image histograms show the gamma corrected data. While any view of the photo image that our eyes can ever see has necessarily been decoded back to the original linear version, the actual image file data still shows the gamma numbers.

Gamma is pretty much an invisible operation, it just always happens in the background. But it does affect our histogram data. When you edit a color with RGB numbers in Photoshop, those numbers are gamma corrected data. The midpoint that we imagine as 128 in linear data is in fact near 186 in our histogram (near 3/4 scale). This also affects photos of the 18% gray card, which in the histogram's gamma data is near 46%, which causes some of us to imagine the 18% gray card should be 50%. But it is 18%, and gamma correction is what makes 18% appear near 46% in the histogram.
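
For the curious, here is a minimal sketch (my own Python illustration, not the page's calculator) of where a linear value lands in the gamma 2.2 histogram:

    # Minimal sketch: where a linear value lands in a gamma 2.2 histogram.
    GAMMA = 2.2

    def encode(linear_fraction):
        # Gamma-encode a normalized [0..1] linear value
        return linear_fraction ** (1 / GAMMA)

    # Linear midpoint 128 of 255 lands near 186 (about 3/4 scale)
    print(round(encode(128 / 255) * 255))   # -> 186

    # An 18% gray card lands near 46% of the histogram scale
    print(round(encode(0.18) * 100))        # -> 46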

Use of the Terms linear and gamma: Our color or grayscale images contain many tones (bright or dark or intermediate tones, which I call tonal data). Our real world vision sees only analog linear data, but all digital tonal image data files are gamma encoded (and always necessarily converted back to linear analog before our eyes see them). In video system use, the common use of these terms is that gamma data means gamma correction has been added, and linear data just means the data is not gamma encoded (either not yet encoded, or no longer encoded).

Linear is what our eye or camera lens sees. In math, "linear" means it graphs as a straight line, meaning twice the input produces twice the output (proportionally, linear, not distorted). So linear also implies the analog scene data (also still linear at the camera sensor). Also linear implies the final linear reproduction that is presented to our eye, hopefully to be the same as we saw when either standing in front of the original scene, or viewing a reproduction of it. Our eye of course always wants to see only the linear version, like the original scene. Anything else would be distortion. So gamma data is necessarily always "decoded" first, back to linear, before any eye ever sees it.

Gamma Correction is non-linear processing (a power-law curve), done to correct for CRT monitors, which show data as if it had been raised to the exponent of 2.2 (roughly numerically 2, so showing as if data values had been approximately squared). That has the effect of leaving the low values behind (the reproduction is too dark). So to correct the expected CRT action, we prepare by first doing Gamma Correction, applying exponent 1/2.2 to the data (roughly square root), exactly offsetting the expected losses. (The math details are that the data is normalized first, to be a fraction between 0 and 1, and then the square root gives a larger number than before, specifically without changing the end point range, more below.) Then the corrected image will come out of the CRT just right, i.e., a linear reproduction again for the eye to see. We've done that for television CRT for about 80 years.
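
That offsetting claim is easy to verify numerically. A minimal sketch (my own Python illustration, assuming the pure power curves described above):

    # Minimal sketch of how the 1/2.2 correction offsets a CRT's 2.2 response.
    CRT_GAMMA = 2.2

    def crt_display(signal):
        # A CRT shows roughly signal ** 2.2 (values normalized to 0..1)
        return signal ** CRT_GAMMA

    def gamma_correct(linear):
        # Pre-boost the data with the opposite exponent, 1/2.2
        return linear ** (1 / CRT_GAMMA)

    linear = 0.50                               # a 50% scene tone
    print(crt_display(linear))                  # ~0.218, uncorrected, shown too dark
    print(crt_display(gamma_correct(linear)))   # 0.5, corrected, linear again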

Gamma is NOT in any way related to the human eye response

Now and then, I get email... :) Charles Poynton's gamma articles have made the statement that we would still need to use gamma even if we no longer used CRT monitors. It's impossible to agree with his comment about the eye needing gamma, since the eye NEVER EVER sees gamma data (actually, it's the gamma math that causes some of the 8-bit issue today). However it certainly was much better to continue to use gamma for the reason that we do continue: continuing to use gamma for compatibility prevented obsoleting and rendering incompatible 100% of the world's existing images and video systems (and any CRT will still need gamma too). That must have been a very easy decision, vastly easier than starting over, since the world is full of gamma images. And it's no big deal today, since gamma has become so easy and inexpensive to implement (it's just a chip now). Compatibility is the best reason to continue using gamma (even if LCD has no use for it, and so to speak, must just discard it).

The big problem I do see is that this wrong notion (that the eye itself needs gamma somehow?) has recently caused some false internet notions (of a theory impossible to explain). Their only argument is that the eye has a similar nonlinear response, which even Poynton admits is "coincidental". I think the eye is even the opposite effect, and our brain does its own correction, but be that as it may, human eyes simply NEVER see any gamma data. We only see analog linear scenes, or linear reproductions of them. Our eyes of course evolved without CRT gamma, and eyes don't need gamma now. We did use gamma data for decades, knowing it was designed only to correct the CRT gamma response. Now CRT is about gone, so suddenly newbies decide gamma must have instead been needed for the human eye response somehow? Come on guys, don't be dumb. :) (See more below.) No matter what gamma might do, the eye simply never has any opportunity to see any gamma data, and gamma data is still totally unrelated to our vision response. Our eye only wants to view a properly decoded linear image reproduction, the same as the lens saw, the same as our eyes would see if we were still standing there. However, all of the world's images are in fact already gamma encoded, so we do continue gamma for that compatibility.

So today, LCD displays simply decode gamma and discard it first, so to speak. Technically, the LCD must convert the gamma data back to linear data first, for our eye. Our other uses (CRT, and printers to some extent) still rely on the gamma data. The "why" we use gamma may not actually matter much; gamma is pretty much an invisible automatic operation to us, a no-op now, encoded and then decoded back to linear. But still, we can hear dumb things on the internet, and novices are being told wrong things, which is a pet peeve for me.

Our eye of course always expects to see only linear data, a linear reproduction of the original linear scene, same as if we were still standing there in front of it. Any deviation from linear would be seen as data corruption. Any "need" for gamma for the eye or for the LCD is laughable technically, but our need for compatibility is still extremely important, so we still do it. The eye has no use for gamma, and the eye never sees gamma data. We may still encode gamma into the data for other reasons (CRT before, compatibility now), but it is always necessarily decoded back to linear before any eye ever sees it. Anything else would be distortion.

The fundamentals are impossible to ignore: Our eye of course only looks at linear scenes, or at linear reproductions of scenes. The eye never ever sees gamma data, which would be too bright and unnatural if it did. The reason to do gamma correction is to correct CRT response. CRT once was all there was, but CRT is no longer popularly used today. However, since all of the world's tonal images are gamma encoded, the obvious reason we still continue gamma today is for compatibility, which is all-important. That does NOT mean we ever see gamma images.

Our eyes never see gamma data (it's always decoded first, one way or another). We expect to see linear data. Our camera sensors also see linear data. Our images do begin and end as linear, and it seems every minimal article about our histograms portrays them as linear data. However, all our tonal image files and all of their histograms contain gamma data, which changes the numerical values you will see in your histograms. Our photo images are gamma data, our files and histograms contain gamma data, and the numerical values are gamma encoded. (One-bit line art is the exception, 0 and 1 is not tonal, so no issues with linearity, and no gamma needed). But our eyes see only decoded linear images.

A gamma calculator (for photo images) to help with the numbers:

Gamma Calculator

Gamma 2.2 is the standard profile. It can show decimal places on computed values, and values are converted both as linear and as gamma.

1. Value [0..255]
2. Percent [0..100%]
3. Difference of two values [0..255], as linear or gamma
4. Difference of two values [0..100%], as linear or gamma
5. Stops down from gamma (formats can be ± 1 or 2/3 or 1 1/2 or 2.333)


Please report any problems with the calculator, or with any aspect of this or any page. It will be appreciated, thank you.

Numbers Only. A NaN result is an error meaning the input is Not A Number. The calculator always also shows the number it interpreted as input.

The "values" are the 0..255 values in a histogram (which are gamma numbers in the histogram, and linear numbers at the sensor). The percentages are of the 255 full scale histogram value.

Histogram values are integers [0..255], so numbers like 127.5 cannot actually exist in our image files. The calculator and math will use the decimal fraction value though, if it pleases you. And you can show a couple of decimal places on Option 1, 2, 5 histogram values (if interested in rounding maybe).

Option 5 (measuring stops down of a histogram value) is possibly the most interesting. It offers a rough approximation of how exposure changes affect the gamma histogram that we see. For example, one stop underexposed from the 255 end should be 50% at 128 IF IN LINEAR SENSOR DATA (which we never see). But in the gamma histogram image data, one stop down is about 186 or 73%, about 3/4 scale. Or 1/3 stop down from 255 should be about 230 at 90%. However, while the math is precise, practice is a bit rough because digital cameras are also making a few simultaneous tonal adjustments, for White Balance, Color Profile like Vivid, Saturation, Contrast, etc., which also shift the histogram data. Therefore the gamma values in histogram data probably are a little different than the exact values predicted. But it likely will be in the ballpark. More gamma calculations, including more detail of this "stops down", are on the next page.

Gamma Value    Calculated -1 EV    ACR Exposure -1 EV
255            186                 255
224            163                 185
192            140                 141
160            117                 108
128            93                  83
96             70                  60
64             47                  38
32             23                  18
0              0                   0
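
The Calculated -1 EV column is just the gamma math: decode the histogram value to linear, halve it, and re-encode. A minimal sketch of that arithmetic (my own Python, not the page's calculator):

    # Sketch of the "Calculated -1 EV" column: one stop down in gamma data.
    GAMMA = 2.2

    def one_stop_down(gamma_value):
        linear = (gamma_value / 255) ** GAMMA              # decode to linear
        return round(255 * (linear / 2) ** (1 / GAMMA))    # halve, re-encode

    for v in (255, 224, 192, 160, 128, 96, 64, 32, 0):
        print(v, one_stop_down(v))   # e.g. 255 -> 186, 224 -> 163, 128 -> 93

    # Equivalent shortcut: multiply the gamma value by 0.5 ** (1 / 2.2), about 0.7297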

Comparing Option 5, the Exposure slider of Adobe Camera Raw (in Photoshop, Elements, and Lightroom) does not quite match on EV values.

I'm thinking this might be explained as two ways to look at it. Any "multiplier" of gamma 255 is still of course 255, as ACR shows (normalized 1.0 to any exponent is still 1.0). But one stop of "exposure" down from 255 is 186.

This Exposure slider shifts the data, and the tool appears to be similar to the Levels White Point (and Blacks is the Black Point), including holding the ALT key to show clipped pixels. The data can be shifted up to clip at the 255 point, but the end points are fixed, and it seems a different curve. Speaking of Photoshop CS6 ACR 9.1.1.461, the Exposure slider at -1 EV does not affect values of 255 at all (-1 EV should become 186, but it doesn't). Values of 254 are shifted to 250, which is not -1 EV (it should be 185). Probably not at all a bad thing, but not a linear -1 EV move. However most of the curve is close to expected (otherwise within 1/3 EV or better), but these EV numbers don't compare well with the calculator Option 5 above. It works well, but don't assume the ACR Exposure slider at +1 means +1 EV. But we do adjust the photos by eye anyway.

Another Exposure tool at Photoshop menu Image - Adjustments - Exposure does match these calculator values, and -1 EV does reduce 255 to 186 as expected.

We never see linear histograms for our photo images; our image files and histograms are RGB gamma data that we see. These tonal images are always gamma data (but one-bit line art is not). The camera sensor chip was linear, and raw files are still linear, but they are converted to RGB data for use, and RGB images are gamma encoded (until shown, when they are decoded to be linear data again for the eye). But even raw files show gamma histograms (from an embedded JPG image included in the raw file).

But gamma in 8-bit files and 8-bit target video space can create tiny rounding errors in the numbers (possibly "off by one"). This is no big deal (8-bits is what we do, and it obviously works fine). This is Not related to the eye. It is because of the rounding to integers [0..255] in 8-bits, brought about because we do gamma math in 8-bits.

The Gamma Procedure, and How It Works:

It can be made to sound like voodoo, but it's actually pretty simple. A CRT display is not a linear device. CRT displays have serious tonal losses, the brightest tones get through and the dimmer tones are lost (dark). We used CRT display for nearly 80 years for television, so this fix is very well understood.

This nonlinear CRT response curve is named gamma, which is shown at right. Gamma is the problem, and gamma correction is how we fix it. These graph axes represent the 0..255 range, but the numbers are normalized to show 0..1 values (percentage).

Read it like any regular curve tool (which I have marked an example in Blue), with input at bottom, and output at left. The straight 45 degree line is the linear response, linear because the blue marked 60% input has a 60% output. The straight line is also gamma 1.0, linear.

The lower solid red curve named gamma shows the typical nonlinear response of a CRT monitor. This CRT response curve shows 50% input is down to 22% output (and much of the output will be too dark). To correct this nonlinear CRT response, the image data is first boosted nonlinearly, to new values modified equally in the opposite direction of the expected losses (the upper dashed red curve named gamma correction). Then combining these opposite effects, the resulting straight 45 degree line shows the corrected linear response, so that any input is the same numerical output (60% input from bottom is 60% output at left, no change, example shown in blue). That corrected curve is displayed to our eyes; the image data is boosted so that midpoint 50% is raised to 73% in gamma data (calculator above, Option 2, 50%). After which the image will display properly (linearly) on a nonlinear CRT. So even after suffering these expected CRT losses, the corrected CRT output will come out linear (the straight line in the graph).

But our eye will NEVER EVER see gamma, because gamma is always decoded first. Our eyes HAVE ABSOLUTELY NO USE FOR GAMMA, it would instead be added distortion if gamma were ever seen. The eye only wants to see the corrected linear reproduction, intended to exactly match what the lens originally saw at the scene. Our eyes did not see or need gamma data when viewing the actual real scene. Assuming our eyes know how to work properly, it does not matter how our eyes actually do it, the eye expects to see real things in the world (of course without gamma). All that matters in regard to our photos is showing our eyes a proper linear reproduction of that real world. Due to gamma, we do have to help the CRT to do that. And because of that plan, the LCD monitor has to deal with it, and discard gamma first (the gamma 2.2 number must still be carefully observed by all involved). Some seem not to understand the actual facts about gamma.

Since CRT is not used so much now, how gamma affects our histogram is about the last visible evidence of gamma today (gamma does of course still affect printing images too). Otherwise in LCD today, gamma is just an invisible background process: it happens, the camera encodes it, and then the LCD monitor decodes and discards gamma before we see it, and we're never even aware of it. But the hypothetical "midpoint" of our image data, which is the linear 50% level at the camera sensor, is in fact near 73% in our gamma histograms. The 50% "midpoint" of our histogram is in fact 22% linear (graph above). Meaning, if the right end of your histogram data is seen at the 50% midpoint, you will need 2.2 stops more exposure to raise it to 100% (Not the one stop that might be imagined - see calculator).

The correct Google search term for this subject is Gamma Correction. The excellent graph above came from Wikipedia, but that article on Gamma Correction is totally corrupted now, and this graph was removed. <sigh> So don't believe everything you read now. It is the internet, and there are good sources, and also those that frankly don't know. That part you see about the purpose of gamma being to aid the human eye response is utter nonsense, made-up gobbledygook. The eye never even sees gamma data, the eye only sees the decoded data, reversed back to be the same original linear version again. That's the purpose of gamma, meaning gamma is only to correct the response of CRT, so we can in fact see the original linear image. And we still do it for compatibility with all the previous images.


The Adobe Levels tool (CTRL L in Photoshop and Elements) has a gamma option. Adobe Help calls its center slider "Midtones", but describes it as "The middle Input slider adjusts the gamma in the image." It certainly is a very good tool to adjust overall image brightness (much better than "Brightness" tools, which merely add a constant to all tones, and can cause clipping). This tool raises the center of the curve, but the endpoints stay fixed (same range is fixed).

The tool (Levels center slider) changes image brightness by changing gamma, raising the curve shown above. This center slider of Levels shows 1.00 by default, which means 1x of the existing gamma (whatever it was, but probably 2.2, and the default 1 x 2.2 is still 2.2, no change at 1x). But other slider values are multipliers of that existing gamma. A calculator using the math of this gamma multiplier is on the next page.


Center slider at 0.45 = 1/2.2, the opposite action, now an image with no gamma correction (2.2 x 1/2.2 = gamma 1, linear). Too dark, because it simulates CRT losses showing an image with no gamma: your LCD still decodes, taking the gamma 1 data down to an effective gamma of 0.45, also too dark. This is the effect of CRT gamma losses. CRT is why we use gamma correction.

Center slider at 1.0, normal default, normal gamma 2.2. However, an LCD monitor has to specifically decode gamma first, by applying the 0.45 curve before showing it as linear, or gamma 1. CRT losses would also show it this proper way. This is the plan. Your LCD also decodes 2.2 to (2.2 x 1/2.2) = gamma 1, linear, reproducing the original scene in front of the camera.

Center slider at 2.2, adding gamma 2.2 to already 2.2 (i.e., 2.2 x 2.2 = gamma 4.8 now). Too bright, but we never see this. If we could see gamma data directly, our histogram data should look this way (too bright, the point is data does have gamma 2.2 added to linear). This is done so when a CRT shows it darker suffering the gamma losses, it will look right after all.

The Levels center slider is a multiplier of the current image gamma. I don't find that "multiplier" written about so much any more, but it was popular, widely known and discussed 15-20 years ago (CRT days, back when we knew what gamma was). Gamma used to be very important, but today, we still encode with 1/2.2, and the LCD monitor must decode with 2.2, and for an LCD monitor, gamma is just an automatic no-op now (printers and CRT still make use of it).

Evidence of the tool as multiplier: An eyedropper on the gray road at the curve ahead in the middle image at gamma 2.2 reads 185 (I'm looking at the red value). Gamma 2.2 puts that linear value at 126 (midscale). 126 at gamma 1 is 126 (measured in the top image, 0.45 x 2.2 = gamma 1). 126 at gamma 4.8 is 220 (measured in the bottom image, 2.2 x 2.2 = gamma 4.8). Q.E.D.
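
Those numbers check out with the simple gamma formulas. A quick sketch (my own arithmetic check in Python, not Photoshop's code):

    # Sketch verifying the multiplier evidence above.
    def decode(value, gamma):
        # 0..255 gamma-encoded value back to linear 0..255
        return 255 * (value / 255) ** gamma

    def encode(value, gamma):
        # 0..255 linear value encoded with the given gamma
        return 255 * (value / 255) ** (1 / gamma)

    linear = decode(185, 2.2)          # the gamma 2.2 pixel that reads 185
    print(round(linear))               # -> 126, the linear (gamma 1) value
    print(round(encode(linear, 4.8)))  # -> 220, the same pixel at gamma 4.8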


Today, our LCD display is considered linear and technically does not need gamma. However we still necessarily continue gamma to provide compatibility with all the world's previous images and video systems, and of course for CRT and printers too. The LCD display simply uses a chip (LookUp table, next page) to decode it first (discarding gamma correction to necessarily restore the original linear image). Note that gamma is a Greek letter used for many variables in science (like X is used in algebra, used many ways), so there are also several other unrelated uses of the word gamma, in math and physics, or for film contrast, etc, but all are different unrelated concepts. For digital images, the use of the term gamma is to describe the CRT response curve.

We must digitize an image first to show it on our computer video system. But all digital cameras and scanners always automatically add gamma to all tonal images. A color or grayscale image is tonal (has many tones), but a one-bit line art image (two colors, black or white, 0 or 1) does not need or get gamma (values 0 and 1 are entirely unaffected by gamma).

Gamma correction is automatically done to any image from any digital camera (still or movie), and from any scanner, or created in graphic editors... any digital tonal image that might be created in any way. A raw image is an exception, only because then gamma is deferred until later when it actually becomes an image. Gamma is an invisible background process, and we don't necessarily have to be aware of it, it just always happens. This does mean that all of our image histograms contain and show gamma data. The 128 value that we may think of as midscale is not the middle tone of the histograms we see. This original linear 128 middle value (middle at 50% linear data, 1 stop down from 255) is up at about 186 in gamma data, and in our histograms.

The reason we use gamma. For many years, CRT was the only video display we had. But CRT is not linear, and it requires heroic efforts to properly use it for tonal images (photos and TV). The technical reason we needed gamma is that the CRT light beam intensity efficiency varies with the tube's electron gun signal voltage. A CRT does not compute the decode formula; that formula simply resulted from the study of what the non-linear CRT losses already actually do in the act of showing the image on a CRT ... the same effect. The non-linear CRT simply shows the tones, with a response sort of as if the numeric values had been squared first (2.2 is near 2). These losses have variable results, depending on the tone's value, but the brighter values are brighter, and the darker values are darker. Not linear.

How does CRT Gamma Correction actually do its work? Gamma 2.2 is roughly 2, so there's only a small difference between the 1/2 square root and 2 squared. I hope that approximation simplifies instead of confusing. So encoding Gamma Correction input to the power of 1/2.2 is roughly the square root, which condenses the image gamma data into a smaller range. Then later, the CRT decodes it to the power of 2.2, roughly squared, which expands it to bring it back exactly to the original value (reversible). Specifically, for a numerical example with two tones 225 and 25, value 225 is 9x brighter than 25 (225/25 = 9). But (using the easier exponent 2 instead of 2.2), the square roots are 15 and 5, which is then only 3 times apart, compressed together, much less difference ... 3² is 9 (and if we use 2.2, about 2.7 times). So in that way, gamma correction data boosts the low values higher, they move up more near the bright values. And 78% of the encoded values end up above the 127 50% midpoint (see LUT on next page, or see curve above). So to speak, the file data simply stores roughly the square root, and then the CRT decodes by showing it roughly squared, for no net change then, which was the plan, to reproduce the original linear data. The reason of course is that the CRT losses are going to show it squared regardless (but specifically, the CRT response result is a power of 2.2).
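
That compression and expansion is easy to see in numbers. A minimal sketch (my own Python illustration, using the rough exponent 2 as the text does, then the real 2.2):

    # Sketch of the compression/expansion idea, with exponent 2 as the rough stand-in.
    bright, dim = 225, 25
    print(bright / dim)                    # 9.0 -- 9x brighter in linear data

    enc_bright, enc_dim = bright ** 0.5, dim ** 0.5
    print(enc_bright, enc_dim)             # 15.0 and 5.0 -- only 3x apart now

    # The CRT (or decode) roughly squares it back, restoring the original ratio
    print(enc_bright ** 2, enc_dim ** 2)   # 225.0 and 25.0 again

    # With the real 1/2.2 exponent, the 9x ratio compresses to about 2.7x
    print((bright / dim) ** (1 / 2.2))     # ~2.72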

Not to worry, our eye is NEVER EVER going to see any of these gamma values. Because then the non-linear CRT gamma output is a roughly squared response that expands it back (restored to our first 225 and 25 linear values by the actual CRT losses that we planned for). CRT losses still greatly reduce the low values, but they were first boosted in preparation for it. So this gamma correction operation can properly show the dim values linearly again (since dim starts off condensed, up much closer to the strong values, and then becomes properly dim when expanded by CRT losses). It has worked great for many years. But absolutely nothing about gamma is related to the human eye response. We don't need to even care how the eye works. :) The eye NEVER sees any gamma data. The eye merely looks at the final linear reproduction of our image on the screen, after it is all over. The eye only wants to see an accurate linear reproduction of the original scene. How hard is that?

Then more recently, we invented LCD displays, which became very popular. These were considered linear devices, so technically, they didn't need CRT gamma anymore. But if we did create and use gamma-free devices, then our device couldn't show any of the world's gamma images properly, and the world could not show our images properly. And our gamma-free images would be incompatible with CRT too. There's no advantage in that, so we're locked into gamma, and for full compatibility, we simply just continue encoding our images with gamma like always before. This is easy to do today, it just means the LCD device simply includes a little chip to first decode gamma and then show the original linear result. Perhaps it is a slight wasted effort, but it's easy, and reversible, and the compatibility reward is huge (because all the world's images are gamma encoded). So no big deal, no problem, it works great. Again, the eye never sees any gamma data, it is necessarily decoded first back to the linear original. We may not even realize gamma is a factor in our images, but it always is. Our histograms do show the numerical gamma data values, but the eye never sees a gamma image. Never ever.

Printers and Macintosh: Our printers naturally expect to receive gamma images too (because that's all that exists). Publishing and printer devices do also need some of gamma, not as much as the 2.2 for the CRT, but the screening methods need most of it (for dot gain, which is when the ink soaks into the paper and spreads wider). Until recently (2009), Apple Mac computers used gamma 1.8 images. They could use the same CRT monitors as Windows computers, and those monitors obviously were gamma 2.2, but Apple split this up. This 1.8 value was designed for the early laser printers that Apple manufactured then (and for publishing prepress), to be what the printer needed. Then the Mac video hardware added another 0.4 gamma correction for the CRT monitor, so the video result was an unspoken gamma 2.2, roughly - even if their files were gamma 1.8. That worked before the internet, before images were shared widely. But now, the last few Mac versions (since OS 10.6) observe the sRGB world standard gamma 2.2 in the file, because all the world's images are already encoded that way, and we indiscriminately share them via the internet now. Compatibility is a huge deal, because all the world's grayscale and color photo images are tonal images. All tonal images are gamma encoded. But yes, printers are also programmed to deal with the gamma 2.2 data they receive, and know to adjust it to their actual needs.

While we're on history, this CRT problem (non-linear response curve named gamma) was solved by earliest television (first NTSC spec in 1941). Without this "gamma correction", the CRT screen images came out unacceptably dark. Television broadcast stations intentionally boosted the dark values (with gamma correction, encoded to be opposite to the expected CRT losses, that curve called gamma). That was vastly less expensive in vacuum tube days than building gamma circuitry into every TV set. Today, it's just a very simple chip for the LCD monitors that don't need gamma... they simply decode to remove it now.

This is certainly NOT saying gamma does not matter now. We still do gamma for compatibility (for CRT, and to see all of the world's images, and so all the world's systems can view our images). The LCD monitors simply know to decode and remove gamma 2.2, and for important compatibility, you do need to provide them with a proper gamma 2.2 to process, because 2.2 is what they will remove. sRGB is one way to do that. This is very easy, and mostly fully automatic, about impossible to bypass.

Unfortunately, some do like to imagine that gamma must still be needed (now for the eye?), merely because they once read Poynton saying that the low end steps in gamma data better match the human eye's 1% steps of perception. Possibly it may, but it was explained as coincidental. THIS COULD NOT MATTER LESS. They're simply wrong about the eye, it is false rationalization, obviously not realizing that our eye Never sees any gamma data. Never ever. We know to encode gamma correction of exponent 1/2.2 for CRT, which is needed because we've learned the CRT response will do the opposite. It really wouldn't matter which math operation we use for the linear LCD, if any, so long as the LCD still knows to do the exact opposite, to reverse it back out. But the LCD monitor expects gamma 2.2 data, and gamma 2.2 is exactly undone. Gamma data is universally present, and it is always first reversibly decoded back to be the original linear reproduction that our eye needs to view. That's the goal, linear is exactly what our eye expects. Our eye never sees, and has no use for, gamma data. It would be distortion if it ever did. But the CRT did have use for specific gamma data (to be able to show the linear reproduction our eye wants).

Reversible Gamma Calculator

Gamma 2.2 is the standard profile. A Truncate gamma values checkbox shows the decoded value truncated instead of rounded.

6. Linear to gamma and back
7. All values, linear to gamma and back


This calculator is in support of the next several paragraphs. Option 6 shows the action that occurs. Option 7 shows that result on all possible values.

Today, 8-bits is the sticky part: We do store computed gamma into 8-bit data and JPG files, which an LCD monitor decodes into its 8-bit video space (except many lower cost LCDs WITHOUT specifications of 16.7 million colors are just 6 bits, or 0.262 million colors, not a bragging point to advertise). I'd suggest that serious photo users search their dealer for an "IPS monitor" that specifies 16.7 million colors (just meaning 8 bits). The price of many 23 inch IPS models is down around $150 US now.

How big a deal is gamma in 8-bits?   We all use 8-bit video, and we seem pretty happy with how it works, because it's very adequate (might be a best compromise, but if it were not adequate, we would have done it another way by now).

In 8 bits, a calculated decimal value like 100.73 must become integer 101 or 100. We can round it or truncate it. Truncation was popular in the earlier days, very simple and very fast. However today, we have the LookUp table chips (next page), and they can round. If we use the rounded value, most of the values work out very well, but other possible 8-bit values might still be off by one (one is a tiny number, any number's least significant bit). For example, some values (28% if rounded) are simply not exactly reproducible in rounded 8-bits, so these will be Off by One (in 8-bits). Only a very few of these might be slightly perceptible to the eye.

The 8-bit issue is that no matter what the numerical value might be, 8-bit data can only store integers in the range of [0..255].
The human eye issue is that we expect to see an accurate linear reproduction.
The manufacturers' issue is that 8 bits has been deemed very adequate.
I suspect the users' issue is that we really don't want to pay more for a bigger solution not really needed.

The history is that extremely few PC computers were able to show photos before 1987. An early beginning was Compuserve's GIF file in 1987 (indexed color, 256 colors maximum, and concerned with file size and dialup modem speeds instead of image quality). But also 8-bit JPG and TIF files (16.7 million possible colors) were developed about then too, and 8-bit video cards (for 24 bit color) became the norm soon, and the internet came too, and in just a few years, use of computer photos literally exploded to be seen everywhere. Our current 8 bits IS THE SOLUTION chosen to solve the problem, and it has been adequate (24 bit color, for 256x256x256 = 16.7 million possible colors).

About the 1% perception threshold:   Human responses are logarithmic (Wikipedia: Weber-Fechner Law). It is said that our eye cannot perceive less than a 1% change of brightness. That 1% limit is a change of 1 at value 100. But 200 would require a change of 2 for us to even perceive it. However, 1 at 12 is an 8.3% change (if our monitor can reproduce black that well).

Note in calculator Option 7 that these Rounded value errors (28% of values are Off by One) are roughly evenly distributed between +1 or -1 difference. But these Rounded Off by One errors are all brighter values (no low end problem), and only five rounded values (2%) barely meet the perceptual limit of 1% (differences of One, on values less than 100), so this seems a very minor issue. Our 8-bit video does seem adequate.

But the Truncated value errors (50% of values, but no longer current practice) are worse. All errors are -1 (except one value, 243, which was -2, only because its actual decoded value 241.986 was truncated to 241). But the dark end suffers seriously from these truncated differences (1 is a major fraction at the low end). We might claim the overall truncated result is more consistent, less variation (evidence offered in Option 7). But we are speaking of tiny rounding effects on precision (only "off by one" errors), and LUTs are commonly used today, fast, and they are easily rounded.
FWIW, converting any 16 bit file to 8-bits uses truncation. We never notice, and it is fast.

Note that Options 6 & 7 convert linear values to gamma, and then back to linear, and look for a difference due to 8-bit rounding. Any one value is what Option 6 does, but Option 7 does all possible values, to see how things are going. But our photos were all encoded elsewhere at 12 bits (in the camera, or in the scanner, or in raw, etc.), so encoding is not our 8-bit issue (it's already done). So my procedure is that Options 6 & 7 always round the Input encoded values, and only use the Truncate gamma values checkbox for the decoding, which will convert the 8-bit output values by either truncating or by full rounding. This still presents 8-bit integer values to be decoded, which matches the real world, but rounding the input introduces less error, which the camera likely would not cause.
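
A rough sketch of that Option 7 style count (my own Python, assuming the simple power-curve math on this page, with the encode always rounded and the decode either rounded or truncated):

    # Sketch of an Option 7 style count over all 256 values.
    import math

    GAMMA = 2.2

    def roundtrip(linear, truncate):
        encoded = round(255 * (linear / 255) ** (1 / GAMMA))  # 8-bit gamma value
        decoded = 255 * (encoded / 255) ** GAMMA              # back to linear
        return math.floor(decoded) if truncate else round(decoded)

    for truncate in (False, True):
        errors = [roundtrip(v, truncate) - v for v in range(256)]
        off = sum(1 for e in errors if e != 0)
        label = "truncated" if truncate else "rounded"
        print(label, off, "of 256 values differ")   # off by one (rarely two)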

If we did not use gamma for a LCD, then our JPG 8-bit file could instead simply already contain any linear values directly. We could simply store the linear value in the file, and then read the same original value back out, and simply show it as is (speaking of on a LCD). No technical issues then. I don't advocate doing that, instead we necessarily and desirably still use gamma for compatibility with all the images and video and printer systems in the world. And compatibility still with CRT monitors. It's easy to continue, it's just a LookUp chip now (next page).

Gamma values stored in an 8-bit JPG file, or decoded into 8-bit video space, are necessarily 8-bit integers, which means the values can only be a limited set of numbers of [0..255]. When computing gamma, we can compute a number like 19.67. But the result can only be 19 or 20, and not a fraction. So the values might change by a value of one (almost always only one, Option 7). Rounding up or down appropriately certainly helps accuracy, and video LookUp tables can provide that. Or code could do that, but it's slow processing math. For the huge load of millions of pixels (x3 for RGB), an expedient method to convert to 8-bits is to truncate. It's really not all that bad.

The Truncate gamma values checkbox in the calculator will do the same rounding down, or not (just click it, on and off, and watch the values). Output storage results go into 8-bit integers [0..255], i.e., floating point fractions are Not stored, and may not be rounded, same as storing into 8-bits would do. Random values might change by one. 8-bits may not be perfect, but the point is to show worst case isn't too bad.

So 8-bits has an effect, but not a big difference. Our accepted computer color system plan is 8-bits (called 24 bit color, 8-bits each of RGB). We all use 8-bits and find no problems. Yes, linear values might decode to come back one less than they went in. A difference of 1 down at 5 or 10 or 20 could possibly be a significant percentage at the low end (where it is very black, and our monitors can hardly show it anyway), but this is nothing at higher values. And this change of One is random, no pattern to it, don't count on the eye to help figure it out. The 8-bit "problem" is largely only about whether the integer should have randomly rounded up instead of down. The computer can of course compute gamma conversions to any high precision desired, but the final act of storing a precise result value into an 8-bit file MAY expediently truncate it to be a value of maybe one less. It is the tiniest error, which depends on the decode procedure.

For example, if to be stored in an 8-bit file, linear 20 goes to 80 in 2.2 gamma, which then decodes back to 19 or 20 in 8-bit video space (to see this, just enter 20 into calculator field 6, choose Option 6, and then toggle the Truncate gamma values checkbox repeatedly). Then compare it to value 21. Option 6 is only for this purpose, and Option 7 just counts the values with the different differences for the two rounding cases. The point is, 8-bits can cause minor "off by one" errors in gamma data. You probably may never detect this difference in a screen image. And it's just math, we certainly don't see any way here that the human eye could help with it? Notice that this difference is Not just because the gamma data was 8-bits, it is also because the target video space was 8-bits. But 8-bits is not a big problem. It is our standard, and it works well.
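
That single round trip in code (a minimal sketch of the same arithmetic, my own illustration of what Option 6 computes):

    # Sketch of the single-value round trip described above (Option 6 style).
    GAMMA = 2.2

    linear = 20
    encoded = round(255 * (linear / 255) ** (1 / GAMMA))  # -> 80 in the 8-bit file
    decoded = 255 * (encoded / 255) ** GAMMA              # -> about 19.9
    print(encoded, int(decoded), round(decoded))          # 80, 19 truncated, 20 rounded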

The gamma math: (indented to be easy to bypass, or skip to next page now). But you can repeat the gamma math yourself for the concept. The Reversible calculator option 6 above shows the math steps. The percentages are of the 255 range. For gamma values, the percent represents position in the histogram.

Normalization:   When the digital image is converted to RGB, then for computing gamma, the [0..255] data is normalized into a [0..1] range (divide each value by 255 to be a fraction [0..1]). Normalization is necessary because the end values of 0 or 1 raised to any exponent are still 0 or 1 (unchanged). Therefore, the overall range is not extended or changed, no clipping is added by gamma. The end points remain fixed (see the curve above.) FWIW, gamma encoding would have absolutely zero effect on one-bit line art data consisting only of 0 or 1.

For each red, green, or blue component of every RGB pixel:
The encode math is Gamma = 255 * (Linear / 255) ^ (1/2.2)
The decode math is Linear = 255 * (Gamma / 255) ^ 2.2

  1. Encode: Divide Linear value by 255 to normalize it into a [0..1] fraction.
  2. Raise to power of exponent (Encode = 1/2.2, Decode = 2.2. It will still be a [0..1] fraction).
  3. Multiply by 255 to scale into [0..255] gamma value seen in histogram.
  4. Decode is the same, start with gamma value, result is Linear.

Then we store the result in an 8-bit file, which is an integer in the range [0..255], which cannot store the fractional part, but we can truncate or round the floating point result to be an integer. Then when displayed, the goal is to get that same tonal number back when decoded to linear again, to view the accurate reproduction of the original image.
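
In code form, those steps look something like this (a minimal sketch of the formulas above, for each RGB component, not the page's calculator):

    # Minimal sketch of the encode/decode steps above, per RGB component.
    GAMMA = 2.2

    def encode(linear):
        # Linear [0..255] -> gamma [0..255], the value seen in the histogram
        return round(255 * (linear / 255) ** (1 / GAMMA))

    def decode(gamma_value):
        # Gamma [0..255] -> linear [0..255], what the display must show
        return round(255 * (gamma_value / 255) ** GAMMA)

    print(encode(128))           # -> 186
    print(decode(encode(128)))   # -> 127, an example of the 8-bit off by one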

CRT displays take care of their own decoding (the CRT losses occur to decode simply by showing the image on a CRT). This is of course the purpose that gamma plans for, but even with LCD today, we still continue to do gamma for compatibility with the world.

LCD displays are considered to be linear, not needing gamma, but our images are all gamma encoded, so an LCD chip simply decodes them back to the original linear (so we can show our gamma images). There is more than one way to do it, but LCD monitors and televisions normally use LookUp tables. Math is slow for millions of pixels, so these LookUp tables have previously been computed for all values, and are then used by the device to avoid the math (see example LUT next page). Then the table can simply provide the linear values for substitution to be shown.
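
Such a decode table is tiny, only 256 entries per channel. A minimal sketch of building one (my own illustration; the real table is fixed in the monitor's hardware):

    # Sketch of a 256-entry gamma 2.2 decode LookUp table.
    GAMMA = 2.2

    # Precompute the decoded (linear) value for every possible 8-bit gamma value,
    # so the display never has to do the slow power math per pixel.
    decode_lut = [round(255 * (g / 255) ** GAMMA) for g in range(256)]

    print(decode_lut[186])   # -> 127, the linear value for gamma value 186
    print(decode_lut[255])   # -> 255, the end points are unchanged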

The final value can be truncated or rounded to an integer. At most, it's Off by One, which really is not that much, but we can argue this difference at lower values like 3 vs. 4 is great. However we likely cannot distinguish 4 from black on most monitors. But 8-bits has to be an integer, which the receiving hardware can choose to process by truncating or rounding. Simple small CPU chips (in an LCD monitor or printer) have no floating point, so historically, truncation was much simpler and faster than rounding, but LookUp tables make rounding extremely feasible today. But IF 20 comes back as 19, this is what Option 7 above calls a difference of one.

It's just math, the formulas graph out the curves above. There's no mumbo-jumbo involved, and it's not rocket science. It is simply a reversible exponential function. Gamma is used to exactly offset CRT losses, to be able to correct and use a CRT display. Simple CPUs are not equipped for much math, so on an LCD display, a LUT chip (below) first simply decodes back with the exact reversed math operation, to simply recover the same exact linear value we started with (leaving no change the eye could see). Decode uses an exponent of 2.2 instead of the reciprocal 1/2.2 for Encode, which is reversible math. It's like 8/4 = 2, and 4x2 is 8 again. Reversible math, we simply get the same value back. However there can be slight 8-bit rounding variations of gamma in between, which might change the value by a difference of one sometimes. A small error, but not really a big deal, since virtually all of our images and video systems and printers are 8-bits. If 8-bits were not acceptable, we would be doing something else.


Continued - Gamma Multipliers and Gamma LookUp Tables

Copyright © 2011-2017 by Wayne Fulton - All rights are reserved.
