
Gamma Multipliers, and Gamma Lookup Tables

This calculator recomputes a tonal data value with a new gamma, derived from a variable gamma multiplier (as used in the Photoshop Levels and Image - Adjustments - Exposure tools).

This calculation shows the effect of the gamma multiplier on any specific current tonal value you input. If current gamma is specified to be gamma 2.2, and if you specify a gamma multiplier of 1.4, then data values are computed with gamma = 1.4 x 2.2 = gamma 3.08. The calculator shows the expected numerical data output of those multiplier tools.
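For example, here is a minimal sketch of that arithmetic (Python, for illustration only; the function name is mine, not Photoshop's actual code):

# A sketch of the gamma multiplier arithmetic (illustrative, not Photoshop's code).
def apply_gamma_multiplier(value, multiplier, current_gamma=2.2, max_value=255):
    linear = (value / max_value) ** current_gamma    # decode the current value to linear 0..1
    new_gamma = multiplier * current_gamma           # e.g. 1.4 x 2.2 = 3.08
    return round(max_value * linear ** (1 / new_gamma))  # re-encode with the new gamma

print(apply_gamma_multiplier(127, 1.4))     # gamma 127 brightens to about 155 at gamma 3.08
print(apply_gamma_multiplier(186, 0.4545))  # slider at 0.4545 darkens gamma 2.2 data to linear: 186 becomes 127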

It matches the Photoshop Levels tool (Ctrl L in Photoshop or Elements), whose center slider is gamma. This Levels center slider is a very desirable way to brighten or darken your image. It is not only a good and efficient way to change image brightness, but it does it WITHOUT shifting the 0 and 255 end points, so there is no clipping and no shift of dynamic range (which is Not true of simple Brightness controls, which just scoot the histogram data back or forth). Gamma is used because tonal values [0..255] must be normalized to [0..1] (fractional) values before computing gamma, and the 0 and 1 end points are still 0 and 1 when raised to any exponent, so the end points remain firmly fixed at 0 and 1.

The Levels center slider default value is marked 1.0, which is a multiplier of your current gamma. If your image is gamma 2.2, then 1.0 x 2.2 is still 2.2, unchanged. But if you wanted to see linear with 1.0 gamma, then (since 1/2.2 is 0.4545), moving the slider right to 0.4545 darkens the image to gamma 0.4545 x 2.2 = gamma 1.0.

The Photoshop menu Image - Adjustments - Exposure also has the Exposure and Gamma sliders, both of which match these calculator numbers well.

However, the Exposure slider in Adobe Camera Raw, not so much (see previous page).


Gamma LookUp Tables (LUT)

The previous page article is about gamma correction, so to be more complete, and as an example of the concept, here are 8-bit video LookUp Tables (LUT) for gamma 2.2. Both Encode and Decode tables are shown. Simpler CPU chips don't have floating point instructions, and even if they did, instead of computing the math millions of times (megapixels, and three times per RGB pixel), a source device (scanner or camera) can read an Encode LUT, and an output device (LCD monitor) can read a Decode LUT. Here is one example of such a LUT chip from Texas Instruments. These tables use the input value as an address, directly reading out the corresponding output value. A binary chip would not use decimal notation, it would use hexadecimal [00h..ffh], but decimal seemed better here for humans. A linear display device (LCD monitor) can simply read a Decode LUT to discard gamma (to convert and restore the data to linear, to be able to show it).

For an example of Encode use, a linear value of 3 (the 4th table entry: 0, 1, 2, 3) simply reads out as 34 (decimal here), output as gamma 34 corresponding to linear 3. Or, the highlighted Encode value below has the input address of the linear center value 127, which reads out its gamma output of 186. Then the highlighted Decode value reads input address of gamma value 186, which reads back the center 127 linear for display. It could not be simpler or faster.
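A minimal sketch of how such tables could be computed (Python, for illustration only; a real device simply stores the finished table in the chip):

# Build the 8-bit gamma 2.2 Encode and Decode LookUp Tables (a sketch, not the chip's contents).
GAMMA = 2.2
encode_lut = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]  # linear -> gamma
decode_lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]        # gamma -> linear

print(encode_lut[3])    # linear 3 reads out as gamma 34
print(encode_lut[127])  # linear center 127 reads out as gamma 186
print(decode_lut[186])  # gamma 186 reads back as linear 127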

A device would only use ONE table, either for Encode or Decode, as the device requires. Scanners and cameras encode gamma, and LCD monitors and printers decode gamma (a CRT monitor decodes by simply showing it in its lossy way). The simpler processors in such devices typically do not have floating point multiply and divide capability. The idea is that simply reading the table is fast, and prevents the device from having to do the actual math three times for each of millions of pixels. All math was done once in preparation of the chip, and now the device works without any additional math (fast and simple). It is just a table LookUp. The Decode LookUp procedure is for linear devices, like LCD displays, which don't need or use gamma (as CRT does). They simply decode the data back to linear values, and proceed.

Another example below is that Encode address linear 82 is gamma value 152, and then Decode address gamma 152 is linear value 82. But in 8 bits, some of these can be off by one: 80 encodes to 150.56, and 81 encodes to 151.46, but rounding in 8 bits can only store both as 151. And 151 can only have one output value.

79 -> 150, 150 -> 79
80 -> 151, 151 -> 81
81 -> 151, 151 -> 81
82 -> 152, 152 -> 82
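Those numbers fall straight out of the tables; the same sketch as above (using its encode_lut and decode_lut) reproduces this round trip:

# Round trip linear -> gamma -> linear, using the 8-bit tables from the sketch above.
for linear in (79, 80, 81, 82):
    g = encode_lut[linear]
    print(linear, '->', g, ',', g, '->', decode_lut[g])  # 80 comes back as 81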

But off by one is not a big deal (and rounding puts it closer), since there are so many other variables anyway. White Balance for one, and Vivid color profile for another; these skew the camera data. So this is an instance of not sweating the small stuff.

In an 8-bit integer, where could any supposed so-called perceptual improvements by gamma even be placed? :) The lowest linear steps (1, 2, 3, 4) are separately distinguished (with numbers 1, 2, 3, 4), perhaps coarse percentage-wise, but in 8 bits, 1, 2, 3, 4 is all they can be called. And of course, they are the exact values we hope to reproduce (though real-world video monitors probably cannot show levels that black, a cause which is neither gamma nor 8-bits). Notions about the human eye can't help (the eye never sees gamma data). An analog CRT is all that ever sees gamma data directly, but the eye notion claims the full opposite, imagining gamma is still somehow needed without CRT. Anyway, gamma is obviously not related to the human eye's response in any way.

Except for the 8-bits, the LUT is simply a faster way to do the same math ahead of time. The LUT decode chip can also additionally correct for any other color nonlinearity in the specific device, or provide additional features like contrast or monitor color calibration, or just routine corrections. The LUT provides the mechanism to expand it further (if the data is this, then show that). Color correction would require three such tables, one each for red, green and blue. This table is for gamma 2.2, but other tables are quickly created. A 12-bit encode table would need 4096 values, a larger table, but it still reads just as fast.
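The same sketch extends to any bit depth (assuming the same simple construction as the 8-bit tables above):

# A 12-bit encode table has 4096 entries, built the same way (illustrative only).
encode_lut_12 = [round(4095 * (i / 4095) ** (1 / 2.2)) for i in range(4096)]
print(len(encode_lut_12))  # 4096 values, but any one read is still just as fast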

The lowest linear values, like 4, 5, 6, may seem to be coarse steps (percentage-wise), but they are what they are (still the smallest possible steps of 1), and are what we've got to work with, and are the exact values that we need to reproduce, regardless of whether we use gamma or could hypothetically somehow bypass it. These low values are black tones, and the monitor may not be able to reproduce them well anyway (monitors cannot show the blackest black). And gamma values of less than 15 decode to zero anyway, in 8 bits.

But gamma is absolutely Not related to the response of the eye, and gamma is obviously NOT done for the eye, and so of course, we don't even need to know anything about the eye's response. Gamma could use any other algorithm, cube root of 1.9, or value / 5, or whatever, and the LCD encode/decode reversal concept would be the same (except of course other values would not match CRT displays then). But the eye never ever sees any gamma data, because gamma is always first completely decoded back to linear. The entire goal is to see an accurate linear reproduction. We couldn't care less what the eye does with it; the eye does whatever it does, but our brain is happiest when shown an exact linear reproduction of the original linear scene, the same as if we were still standing there looking.

Yes, it is 8 bits, and not perfect. However, the results are still good, very acceptable, which is why 8-bit video is our standard. It works. Gamma cannot help 8 bits work; actually instead, gamma is the complication requiring that we compute different values, which then require 8-bit round off. Gamma is a part of that problem, not the solution. But it is only a minor problem, necessary for CRT, and today, necessary for compatibility with the world's images.


Stop and think. It is obvious that gamma is absolutely not related to the response of our eye. This perceptual step business at the low end may be how we might wish the image tones were instead, but it is Not an option, not in the linear data that we can see. Instead, the numbers are what they actually are, which we hope to reproduce accurately. The superficial "gamma is for the eye" theory falls flat when we think about it even once. If the eye needs gamma, how can the eye see scenes in nature without gamma? And when would the eye ever even see gamma? All data is ALWAYS decoded back to linear before it is ever seen. Gamma is obviously NOT related to the response of the eye. A linear value of 20 would be given to a CRT monitor as gamma value 80, and we expect the CRT losses will leave the corresponding linear 20 visible. An LCD monitor will simply first decode the 80 to 20, and show that. Our eye will always hopefully see this original linear value 20, as best as our monitor can reproduce it. But gamma is certainly Not in any way related to matching the response of our eye.

Gamma has been done for nearly 80 years to allow television CRT displays to show tonal images. CRTs are non-linear, and require this correction to be useful for tonal images. Gamma is very well understood, and the human eye response is NOT any part of that description. The only way our eye is involved is that we hope the decoded data will reproduce a good linear copy of the original scene for the eye to view (but the eye is NOT part of this correction process). Gamma correction is always done, automatically, pretty much invisibly to us. We may not use CRT today, but CRT was all there was for many years, and the same gamma is still done automatically on all tonal images (which are digital in computers), for full compatibility with all of the world's images and video systems, and for CRT too (and it is easy to just keep doing it). LCD monitor chips simply decode gamma now (the specified 2.2 value does still have to be right). The 8-bit value might be off by one sometimes, but you will never know it. Since all values are affected in about this same way, there's little overall effect. More bits could be better, but the consensus is that 8 bits work OK.

I am all for the compatibility of continuing gamma images, but gamma has absolutely nothing to do with the human eye response. Gamma was done so the corrected linear image could be shown on non-linear CRT displays. Gamma is simply history, from CRT monitors. We still do it today, for compatibility of all images and all video and printer systems, and of course, for any CRT too.

We probably should know though, that our histogram data values are gamma encoded. Any data values you see in the histogram are gamma values. But the human eye never sees gamma values.



Stops down from 255 in a Histogram

You must be pretty interested in gamma to have reached this point, good for you. But this table chart can get very long, so unless you have a wide screen, it should be mentioned that the math is shown below the chart, where in some cases it might be pushed way down out of sight.

Here's a calculation that seems interesting, showing histogram gamma values at X stops down from 255 (or from 1023 or 4095 or 16383). Our RGB image histograms show 8-bit gamma data. The 8-bit Gamma column is the value you would see in your histogram or in Photoshop. I'm not aware that 12 or 14 bit gamma actually is used anywhere, but it can be calculated.

Linear stops are 2x, so the light decreases 1, 0.5, 0.25, 0.125, etc. Our histograms show gamma 2.2 data. Gamma 2 would decrease by a factor of √2 per stop, but 2.2 skews that slightly. If the exposure only reaches to near 3/4 of full scale in your histogram, that indicates it could use +1 EV more exposure (each Red, Green, Blue histogram channel is likely different, so watch them all). That 3/4 point is also about where gamma places a 50% middle gray linear value. The math is precise, but this 3/4 result is not exact in the histogram, because the camera is also making other tonal adjustments (white balance, color profile, contrast, etc.), which all shift things a bit.
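A sketch of that stops-down arithmetic (Python again; the names are mine, not the calculator's code):

# Histogram gamma value at X stops down from full scale (8-bit case shown).
def stops_down(stops, gamma=2.2, max_value=255):
    linear_fraction = 0.5 ** stops                # each stop halves the light
    return round(max_value * linear_fraction ** (1 / gamma))

for stops in range(5):
    print(stops, stops_down(stops))  # 0 -> 255, 1 -> 186 (near 3/4 scale), 2 -> 136, ...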

Our JPG files, video cards, monitors and our printers are all 8-bits, so more data bits cannot help them. However, 10-bit monitors and video cards are available (B&H), used by Hollywood to create their modern graphic effects.

Regardless of bit count, gamma data at X stops down is always the same "percentage of full scale" value (it is normalized 0..1 and then scaled to full scale). The numbers here represent the maximum storage capability of the bits; they do not imply the available data ever completely fills it. The percentages in the columns here are the percent of full scale 100% data, which may not necessarily exist every time.

It may appear that gamma data has greater range than linear, but of course, that's just a numerical math conversion; it cannot change the range, which always remains a normalized [0..1]. Normalized gamma 1 represents 255 or 1023 or 4095, depending on the bit count. Gamma does boost the values to be new higher values, but gamma was converted from linear, and must always be converted back to linear for the human eye to view it. No human eye ever sees gamma data. Gamma correction was very important to correct the CRT losses, but the LCD monitor simply first reverses gamma 2.2 back to linear, and then shows it. Math is an entirely reversible operation. Except maybe for dividing by zero and Infinity. :) But if we have an 8-bit tone of Linear 16, we encode it as 8-bit Gamma 2.2 value 72 (for CRT monitors, where it would come out as 16). And at any LCD display, it decodes to Linear 16 again and is shown. We necessarily see the same linear scene that the camera lens saw.
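That Linear 16 example, run through the same arithmetic (a sketch):

# Encode linear 16 as gamma 2.2 data, then decode it back (8-bit values).
g = round(255 * (16 / 255) ** (1 / 2.2))  # 72, the stored gamma value
back = round(255 * (g / 255) ** 2.2)      # decodes back to linear 16
print(g, back)                            # 72 16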


The Individual Values chart is more about seeing what gamma is and does, more a tabular representation of what actually happens. The math is a bit random due to rounding; Off by One errors are unavoidable. Any error won't be more than ±1, which is quite minor. This would be true at any bit level, but it seems bigger in 8 bits. My own notion is that while some factors may superficially appear less than perfect, there are obvious explainable reasons why there is no actual bad effect. Our existing video system delivers, which is why it has never been changed.

The "Show Individual values" feature may be interesting to show the problem perceived with gamma. It's showing Encoded gamma values, which the 8-bit table matches the Encode LUT just above. Each of the Red, Green and Blue channels has gamma, so each RGB value is three gamma values. Here, number of numeric values shown will be limited at the maximum value, or at 14-bit 16384 because more can get slow and excessively long without obvious purpose.

Possible confusion: if showing all 4096 values of 12 bits, note that the many orange decoded 8-bit values are actually all only one 8-bit value, representing the 12-bit values (normally 16 of them for 12 bits, but in a few cases a couple more or fewer than 16 due to rounding). The 12-bit gamma (if any exists anywhere) could contain 4096 gamma values, but 8 bits only contains 256.

When high-bit data is converted to 8-bit gamma, the lower color resolution must represent many of the high-bit integer values. Repeated individual numbers are shown highlighted with orange. A repeated orange decoded (linear) value in 8-bit decodes is necessarily followed by an omitted linear value (that is, one value is one too high). The decoded linear value is Off by One. In 8-bit mode, the blue linear values mark these cases (original linear and final decoded linear are not the same value, off by one). The same situation exists in all bit modes, but it's hard to choose one of 4096 values to compare to the 256 values. The calculator shows that encoding 8-bit gamma from 10 or 12 or 14 bits linear gets exactly the same non-repeated 8-bit numbers. Off by One is just going to happen when rounding integer gamma.

This is only a high end issue (show all 256 8-bit values to see it); the low end is not affected. Option 7 on the previous page counts occurrences of this Off by One issue, and the values. When 8-bit linear is converted to 8-bit gamma, and then decoded back to 8-bit linear, 28% of all values are in fact "off by one" (and the corresponding locations omit the next value). We get the same count here. Truncation instead of rounding doubles the error rate. Kludges that add 0.5 before rounding (tampering with the data) make it much worse: 145 instead of 72 come back wrong, and 38 instead of 5 exceed 1%. However, this necessary rounding issue is only a difference of One, and only 2% (the bottom 5, none much lower than near mid-scale) slightly exceed the 1% threshold that our eyes could detect anyway. Fortunately, these are a high end issue where our eyes cannot resolve a one step difference anyway.
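A sketch of that count (the exact total can shift by one or two depending on the rounding rule used, but it should land near the 28% quoted):

# Count the 8-bit linear values that do not survive the gamma round trip.
errors = 0
for v in range(256):
    g = round(255 * (v / 255) ** (1 / 2.2))   # encode to 8-bit gamma
    back = round(255 * (g / 255) ** 2.2)      # decode back to 8-bit linear
    if back != v:
        errors += 1
print(errors, errors / 256)  # about 72 values off by one, roughly 28%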

Saying, it may look bad at first superficial glance, but it's really hard to pinpoint much real world harm. We do much worse messing with Color Profile (Vivid) and such. The system obviously works well. There's more discussion on the previous page, and a quote is here:

So it's just normal rounding, little stuff. We can't even perceive vision differences of less than about 1%. Which to see generally requires a change of 1 at RGB(100) or 2 at RGB(200), which is a stretch for these gamma issues. Humans have thresholds before we can detect changes in things like vision, etc. (start at Wikipedia: Weber Fechner Law, and on to Just Noticeable Difference).

Our cameras and scanners are typically 12 bits today, which is about improving dynamic range. 12 bits has more dynamic range than 8 bits (it captures more detail into darker blacks). Sure, we're going to convert it to 8 bits for our monitors and printers, but there's still a difference. The camera captures a 12-bit image, and the gamma operation normalizes all of the data into a 0..1 range. Our new 8-bit data does not just chop it off at 8 bits; instead gamma scales this 0..1 range proportionately into 8 bits, still 0..1, but fitting into a smaller space (of 0..255 values). Saying, if the dark tone were at 1% of 4095 in 12 bits, it will also be at 1% of 255 in 8 bits. That might be low data values of 2 or 3, but it is theoretically there. This might be what Poynton is getting at (still requiring gamma), but I think he never says it. So our image does still contain hints of the 12-bit detail, more or less. However yes, it is compressed into the range of 8 bits. But NOT just chopped off at 8 bits. It's still there. That's a difference.

The obvious purpose of gamma was to necessarily support CRT monitors, and now that LCD monitors have replaced CRT, we could use an 8-bit camera that outputs 8-bit linear directly, and our LCD monitor could just show the linear image directly, bypassing its operation to decode and discard gamma. And of course, that could work (on that one equipped LCD), but it would just chop off range at 8 bits. Unless the camera were a higher bit count and also did this proportional 0..1 scaling, which gamma just happens to already do. And continuing gamma is simple to do, and also does not obsolete all of the world's old images and old viewing devices. That's my take.

The math for example:

Normalizing gamma to 0..1 is necessary to preserve overall range. Because, zero to any exponent is still zero. One to any exponent is still one. The image low and middle zones are boosted most by gamma, but the end points at 0 and 1 are NOT affected by gamma.

For each red, green, or blue component of every RGB pixel:

The encode math is:
  Gamma = MV2 * (Linear / MV1) ^ (1/2.2)
The decode math is:
  Linear = MV2 * (Gamma / MV1) ^ 2.2

MV1 = max value of the bit level converted from.
MV2 = max value of the bit level converted into.
(8-bits is 255, 12 bits is 4095, etc.)
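In code form, a sketch of those two formulas (Python; the numbers match the worked example below):

# Gamma 2.2 encode/decode between bit levels (a sketch of the formulas above).
def encode(linear, mv1, mv2, gamma=2.2):
    return round(mv2 * (linear / mv1) ** (1 / gamma))

def decode(gamma_value, mv1, mv2, gamma=2.2):
    return round(mv2 * (gamma_value / mv1) ** gamma)

print(encode(2000, 4095, 4095))  # 2957 (12-bit gamma)
print(encode(2000, 4095, 255))   # 184 (8-bit gamma)
print(decode(184, 255, 255))     # 124 (8-bit linear; note 2000 / 16 = 125)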

For Linear value 2000 in a 12-bit system:

1) 2000 / 4095 = 0.488400 (0..4095 normalized to a 0..1 range for gamma). This is 48.8% of full range.

2) 0.488400 to power of 0.454545 = 0.721996 (exponent 1/2.2 to encode gamma 2.2 normalized to 0..1)

3) 0.721996 x 4095 = 2956.57 gamma (scaled to 12 bits gamma). This is 72.2% of full range. Gamma is intended to boost it.

4) 2957 when rounded to allow storing in an integer. Values near 0.5 might go either way.


If storing as 8-bit gamma (as our cameras and scanners do in JPG), it would have been:

3) 0.721996 x 255 = 184.109 (scaled to 8 bits, with same normalized gamma value)

4) 184 when rounded to allow storing in an integer, still 72.2%, 8-bit gamma in our histogram and editor.


LCD monitor decoding 8-bit gamma into 8-bit memory:

5) 184 / 255 = 0.721569 (0..255 normalized to a 0..1 range for gamma), 72.2% of full range (gamma).

6) 0.721569 to power 2.2 = 0.487765 (decoded with gamma 2.2)

7) 0.487765 x 255 = 124.380 (scaled to 8 bits linear). Note 2000 / 16 = 125

8) 124 when rounded to allow storing in an integer. This is 48.6% of full range (linear).


So the rounding may shift our point slightly, but it's just rounding, and it still remains less than the difference our eye can detect, which is the point of showing this individual data. And this issue is NOT in the lowest data as we've been told. We're told Off by One at value 4 would be a 25% error, but Off by One does not occur at 4. And our monitor couldn't show it if it did. It's mostly a high end situation, with negligible effect.

Our monitor's black level cannot reproduce black well enough to resolve the lowest few black data points near zero anyway, even in 8 bits. And at the high end, our eyes cannot resolve adjacent colors of less than 1% difference anyway (1% of the value, which in 8-bit linear data first occurs above value 100). So 14-bit gamma might improve the data, but we still couldn't see the difference in much of the range. The only way to reproduce 12-bit data is to pay the price to develop 12-bit files and video systems. So some of these things seem unsolvable, but they seem not an actual problem. We could of course easily solve any perceived gamma issue by simply developing 12 or 14 bit JPG files and monitors, however no one has yet seen any necessity of doing that (there are expensive 10-bit systems). Some day it may become cheap enough to do anyway. We do seem to be doing pretty well, and there's no reason to lose any sleep over it.

Human eyes never see gamma data anyway; our LCD monitors only decode and discard it today, so the only real issue is decoding to reproduce the same original scene's linear values from 8-bit gamma. IMO, no big deal, the system sure seems to work well as is.

Yes, 12 bits can have greater range [0..4095] than 8 bits [0..255] (which is not a gamma issue). We hear claims that some cameras have 12 or 14 stops of dynamic range. That maximum theoretical possibility does exist for 12 or 14 bit data. Each additional bit is a theoretical stop of potential range at the far dark end (the bright end is just a matter of exposure, and the actual data present may not fill it). Except even then, the 8-bit data that we eventually must see is always limited to 8 stops of dynamic range, since in 8 bits, 2^8 = 256 is the maximum limit.

For images printed on paper, forget about 8 stops. Printed images are then limited to not more than about 3 stops dynamic range, if quite that, because the white paper is not bright and reflective enough, and the black ink is not dark and absorbing enough, to show more range. A typical 15% to 90% reflectance is a range of 2.58 EV. 10% to 90% reflectance would be 3.17 EV, if we could manage it. The metallic print papers are attracting attention now.

Note that regardless of bit count, 7 stops down is always 0.78% of linear maximum (in linear data, which is all that we can look at). More bits can go lower and darker. But 1 stop down (in linear) is defined as 50%, and halved seven times is 0.78%. And 2^7 = 128, and 1/128 = 0.78%. Reminds me of only being able to fold a piece of paper in half 7 times. :)

Copyright © 2011-2017 by Wayne Fulton - All rights are reserved.
