Here's a calculation that seems interesting, showing histogram gamma values at X stops down from the end point at maximum scale (down from 255 or 1023 or 4095, etc.). Our sensors typically capture 12-bit linear data. Our JPG files are always 8-bit data with gamma. Our RGB image histograms show this 8-bit gamma data. The 8-bit Gamma column is the value you would see in your histogram or in Adobe Photoshop.
The orange values (if any) are duplicated in the next step, a simple loss of precision due to rounding to the same integer value.
Linear stops are 2x, so the light decreases in steps as 1, 0.5, 0.25, 0.125, etc. Our histograms show gamma 2.2 data. Gamma 2 would decrease in steps of √2, but gamma 2.2 skews that slightly. If the exposure only reaches to near 3/4 of full scale in your histogram, that measures that it could use +1 EV more exposure (each Red, Green, and Blue histogram channel is likely different, so watch them all). That 3/4 point is also about where gamma places a 50% middle gray linear value. The math is precise, but this 73% result is not exact in the histogram, because the camera is also making other tonal adjustments (white balance, color profile, contrast, etc.), which all shift things a bit.
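That stops-down arithmetic can be sketched in a few lines (the function name is my own, and it assumes a pure gamma 2.2 curve with none of the camera's other tonal adjustments):

```python
# Gamma value an 8-bit histogram would show for a tone X stops below
# full scale, assuming a pure gamma 2.2 curve (real cameras also apply
# white balance, contrast, color profile, etc., which shift things).
def stops_down_to_gamma(stops, gamma=2.2, full_scale=255):
    linear = 0.5 ** stops              # each stop halves the linear light
    return round(full_scale * linear ** (1 / gamma))

print(stops_down_to_gamma(1))   # -1 EV (50% linear) is 186, near 73% of 255
print(stops_down_to_gamma(7))   # -7 EV is 28, the same 11% of any full scale
```

Changing `full_scale` to 1023 or 4095 gives the same percentages, which is the point made above.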
It may seem that gamma data has greater range than linear, but the numbers are just a numerical math conversion, which always remains a normalized [0..1] span. Normalized gamma 1.0 represents 255 or 1023 or 4095, depending on the bit count. There are more values and finer precision steps in 12 bits, but regardless of bit count, a -7 EV value is always the same 11% of full scale in gamma. Analog broadcast television transmitted over the air could pick up noise, and low signal levels were obscured by it. But when the CRT monitor showed gamma (decoding it), those same low-level losses of merely showing it on a CRT removed much of the noise too. Today, digital TV gamma data is protected by CRC, so noise mostly occurs only in the linear circuits.
Our JPG files, video cards, monitors, and printers are all 8-bit, so more data bits cannot help them. However, 10-bit monitors and video cards are available (B&H), used by Hollywood to create their modern graphic effects (using 10-bit data). I'm not aware that 12- or 14-bit gamma actually exists or is used anywhere, but it can be calculated, for fun.
JPG files are always 8-bit gamma, but showing two bit depths of gamma (like 12 bits and 8 bits) is only to make the point that if it existed, then regardless of bit count, gamma data at X stops down is always the same "percentage of full scale" value (necessarily true, because gamma is normalized to [0..1] and then scaled to full scale). The numbers here represent the maximum storage capability of the bits; they do not imply the available data ever completely fills it. The percentages shown here are percent of the 100% full-scale value, and that 100% data may not actually exist in every image.
Gamma does boost the values to be new higher values, but gamma was converted from linear, and must always be converted back to linear for the human eye to view it. No human eye ever sees gamma data. Gamma Correction was very important to correct the CRT losses, but an LCD monitor simply first reverses gamma 2.2 back to linear, and then shows that. Valid math is an entirely reversible operation. If we have an 8-bit tone of linear 16 (6% of full scale), we encode it as 8-bit gamma 2.2 value 72 (28% of full scale) for CRT monitors, where due to the CRT response losses, it would come out displayed as 16 again, which is the whole idea. And in any LCD display, it is decoded to linear 16 again and shown. The goal is to see the same linear scene that the camera lens saw.
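That linear 16 to gamma 72 round trip can be verified with a short sketch (function names are mine, using the plain gamma 2.2 power curve rather than the exact piecewise sRGB formula):

```python
# Encode a linear 8-bit tone to gamma 2.2, and decode it back.
# Values are normalized to [0..1] before the exponent, then rescaled.
def encode(linear, gamma=2.2):
    return round(255 * (linear / 255) ** (1 / gamma))

def decode(value, gamma=2.2):
    return round(255 * (value / 255) ** gamma)

print(encode(16))   # linear 16 (6% of scale) encodes to gamma 72 (28% of scale)
print(decode(72))   # gamma 72 decodes back to linear 16
```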
This calculator recomputes a new gamma value using a different gamma, due to a variable gamma multiplier (as used in Adobe Photoshop Levels, and in the Image - Adjustments - Exposure tool).
This calculation shows the effect of the gamma multiplier on any specific current tonal value you input. If current gamma is specified to be gamma 2.2, and you specify a gamma multiplier of 1.4, then data values are computed with gamma = 1.4 x 2.2 = gamma 3.08, but using the Encoding exponent of 1/3.08 = 0.32. (Gamma is the CRT's expected response, which will decode it so to speak, and 1/gamma is the correction exponent encoded in expectation of it.) The calculator shows the expected numerical data output of those multiplier tools.
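A sketch of that multiplier arithmetic (the function name is my own; this is the plain math, and Photoshop's internal rounding may differ by one here and there):

```python
# Apply a Levels-style gamma multiplier to one 8-bit tonal value:
# decode the current gamma, then re-encode with (multiplier x gamma).
def apply_multiplier(value, multiplier, current_gamma=2.2):
    linear = (value / 255) ** current_gamma        # back to linear [0..1]
    new_gamma = multiplier * current_gamma         # e.g. 1.4 x 2.2 = 3.08
    return round(255 * linear ** (1 / new_gamma))  # encode with 1/3.08

print(apply_multiplier(128, 1.4))       # multiplier 1.4 brightens 128 to 156
print(apply_multiplier(186, 1 / 2.2))   # multiplier 0.4545 shows linear: 127
```

The second line is the Levels slider moved to 0.4545, making total gamma 1.0 (linear), so the 73% middle gray value 186 falls back to its 50% linear value 127.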
It matches the Photoshop Levels tool, which has a center slider that is gamma (Ctrl L in Photoshop or Elements). This Levels center slider is a very desirable way to brighten or darken your image. It is not only a good and efficient way to change image brightness, but it does it WITHOUT shifting the 0 and 255 end points, so there is no clipping and no shift of dynamic range (which is not true of simple Brightness controls, which just scoot the histogram data back or forth). Gamma works this way because tonal values [0..255] must be normalized to [0..1] (fractional) values before computing gamma. The 0 and 1 end points are still 0 and 1 when raised to any exponent, so the end points stay firmly fixed.
The Levels center slider default value is marked 1.0, which is a multiplier of your current gamma. If your image is gamma 2.2, then 1.0 x 2.2 is still 2.2, unchanged. But if you wanted to see linear gamma 1.0 data, then (since 1/2.2 is 0.4545) moving the slider to 0.4545 darkens the image to gamma 0.4545 x 2.2 = gamma 1.0, which is linear, with no gamma correction.
The Photoshop menu Image - Adjustments - Exposure also has the Exposure and Gamma sliders, both of which match these calculator numbers well.
However, the Exposure slider in Adobe Camera Raw does not match so well (see previous page).
This previous page article is about gamma correction, so to be more complete, and as an example of the concept, here are 8-bit video LookUp tables (LUT) for gamma 2.2. Both Encode and Decode tables are shown. An LCD monitor would use the Decode table, and a scanner or camera would use the Encode table. Simpler CPU chips don't have floating point instructions, and even if they did, instead of computing the math millions of times (megapixels, and three times per RGB pixel), a source device (scanner or camera) can read an Encode LUT, and an output device (LCD monitor) can read a Decode LUT. Here is one example of such a LUT encode chip from Texas Instruments. These tables use the input value as an address, simply directly reading the corresponding output value. A binary chip would not use decimal notation, it would use hexadecimal [00h..FFh], but showing it as decimal seemed better here for humans. A linear display device (an LCD monitor) can simply read a Decode LUT to restore the linear data in order to show it. The LCD does not need gamma, but since gamma correction is in the sRGB specs for the data, the data must be converted back to linear.
For an example of Encode use, a linear value of 3 (the 4th table value: 0,1,2,3) simply reads out as 34 (decimal here), output as gamma 34 corresponding to linear 3.
For 8-bit Decode, gamma values of 0 to 24 all round to 8-bit 0 or 1, losing much precision in the lowest values. However, 12-bit gamma (values 0..4095) has 16 values for every one in 8 bits (0..255), which is why scanners and digital cameras today are 12 bits internally, largely for creating gamma data. They might still output 8-bit JPG images, but Raw images are another choice.
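That low-end collapse can be checked directly (a sketch of the 8-bit Decode, using the plain gamma 2.2 power curve):

```python
# 8-bit Decode: gamma codes 0..24 all round to linear 0 or 1,
# so many distinct gamma codes collapse into just two dark tones.
def decode(value, gamma=2.2):
    return round(255 * (value / 255) ** gamma)

print(sorted({decode(v) for v in range(25)}))   # gamma 0..24 land only on 0 or 1
print(decode(25))                                # gamma 25 is the first to reach 2
```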
Or, the highlighted value for Encode below has the input address 127, which reads out its gamma output of 186. Then the highlighted Decode value reads input address gamma 186, which reads back the linear 127 for display. It could not be simpler or faster. All 256 possible addresses are present, so there are no missed values.
A device would use only ONE table, either Encode or Decode, as the device requires. Scanners and cameras encode gamma, and LCD monitors and printers decode gamma (a CRT monitor decodes by simply showing it in its lossy way). The simpler processors in such devices typically do not have floating point multiply and divide capability. The idea is that simply reading the table is fast, and spares the device from doing the actual math three times for each of millions of pixels. All the math was done once in preparing the chip, and now the device works without any additional math (fast and simple); it is just a table lookup. The Decode lookup is for linear devices, like LCD displays, which don't need or use gamma (as a CRT does). They simply decode the data back to linear values, and proceed.
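The one-time preparation of such a chip can be sketched in a few lines (variable names are my own; a real chip would simply hold these same 256 bytes in ROM):

```python
# Build the 8-bit gamma 2.2 Encode and Decode LookUp tables once;
# afterward, conversion is a single table read per value, with no math.
GAMMA = 2.2
encode_lut = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]
decode_lut = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

print(encode_lut[3])     # linear address 3 reads out gamma 34
print(encode_lut[127])   # linear address 127 reads out gamma 186
print(decode_lut[186])   # gamma address 186 reads back linear 127
```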
Another example below: Encode address linear 82 is gamma value 152, and then Decode address gamma 152 is linear value 82. But in 8 bits, rounding can make a few of these be off by one: 80 encodes to 150.56, and 81 encodes to 151.41, but rounding to 8-bit integers can only store both as 151. And 151 can only have one output value.
79 -> 150, 150 -> 79
80 -> 151, 151 -> 81
81 -> 151, 151 -> 81
82 -> 152, 152 -> 82
But off by one is not a big deal (and rounding generally puts it closer), since there are so many other variables anyway: White Balance for one, Vivid for another, which skew the camera data. So this is an instance of not sweating the small stuff.
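Those four mappings above can be reproduced with the same gamma 2.2 formulas (a sketch, function names my own):

```python
# Linear 80 and 81 both encode to gamma 151 in 8 bits, so Decode can
# only return one value (81) for both: a harmless off-by-one loss.
def encode(v): return round(255 * (v / 255) ** (1 / 2.2))
def decode(v): return round(255 * (v / 255) ** 2.2)

for linear in (79, 80, 81, 82):
    g = encode(linear)
    print(linear, '->', g, ',', g, '->', decode(g))
```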
In an 8-bit integer, where could any supposed perceptual improvements from gamma even be placed? :) The lowest linear steps (1,2,3,4) are separately distinguished (as the numbers 1,2,3,4), perhaps coarsely percentage-wise, but in 8 bits, 1,2,3,4 is all they can be called. And they are the exact values we hope to reproduce (though real-world video monitors probably cannot show levels that black, a cause which is neither gamma nor 8 bits). Notions about the human eye can't help (the eye never sees gamma data). An analog CRT is the only thing that ever sees gamma data directly, yet the eye notion imagines the full opposite, that gamma is still somehow needed without the CRT. Gamma is simply not related to the human eye's response in any way.
Except for the 8 bits, the LUT is simply a faster way to do the same math ahead of time. The LUT decode chip can also additionally correct for any other color nonlinearity in this specific device, or provide additional features like contrast or monitor color calibration, or just routine corrections. The LUT provides the mechanism to expand further (if the data is this, then show that). Color correction would require three such tables, one each for red, green, and blue. This table is for gamma 2.2, but other tables are quickly created. A 12-bit encode table would need 4096 values, a larger table, but it still reads just as fast.
The lowest linear values, like 4, 5, 6, may seem to be coarse steps (percentage-wise), but they are what they are (still the smallest possible steps of 1), they are what we've got to work with, and they are the exact values that we need to reproduce, regardless of whether we use gamma or could hypothetically somehow bypass it. These low values are black tones, and the monitor may not be able to reproduce them well anyway (monitors cannot show the blackest black). And gamma values of less than 15 decode to zero anyway, in 8 bits.
But gamma is absolutely NOT related to the response of the eye, and gamma is obviously NOT done for the eye, so we don't even need to know anything about the eye's response. Gamma could use any other algorithm (a cube root, or gamma 1.9, or value / 5, or whatever), and the LCD encode/decode reversal concept would be the same (except other values would then not match CRT displays). But the eye never ever sees any gamma data, because gamma is always first completely decoded back to linear. The entire goal is to see an accurate linear reproduction. We couldn't care less what the eye does with it; the eye does whatever it does, but our brain is happiest when shown an exact linear reproduction of the original linear scene, the same as if we were still standing there looking.
Yes, it is 8 bits, and not perfect. However, the results are still good, very acceptable, which is why 8-bit video is our standard. It works. Gamma cannot help 8 bits work; actually, gamma is instead the complication requiring that we compute different values, which then require 8-bit round-off. Gamma is a part of that problem, not the solution. But it is only a minor problem, necessary for CRT, and today, necessary for compatibility with the world's images.
Stop and think. It is obvious that gamma is absolutely not related to the response of our eye. This perceptual-step business at the low end may be how we might wish the image tones were, but it is not an option, not in the linear data that we can see. Instead, the numbers are what they actually are, which we hope to reproduce accurately. The superficial "gamma is for the eye" theory falls flat when we think about it once. If the eye needs gamma, how can the eye see scenes in nature without gamma? And when would the eye ever even see gamma? All data is ALWAYS decoded back to linear before it is ever seen. A linear value of 20 would be given to a CRT monitor as gamma value 80, and we expect the CRT losses will leave the corresponding linear 20 visible. An LCD monitor will simply first decode the 80 to 20, and show that. Our eye will always, hopefully, see this original linear value 20, as best as our monitor can reproduce it. But gamma is certainly not in any way related to matching the response of our eye.
Gamma has been done for nearly 80 years to allow television CRT displays to show tonal images. CRTs are non-linear, and require this correction to be useful for tonal images. Gamma is very well understood, and the human eye response is NOT any part of that description. The only way our eye is involved is that we hope the decoded data will reproduce a good linear copy of the original scene for the eye to view (the eye is NOT part of this correction process). Gamma correction is always done, automatically, pretty much invisibly to us. We may not use CRT today, but CRT was all there was for many years, and the same gamma is still done automatically on all tonal images (which are digital in computers), for full compatibility with all of the world's images and video systems, and for CRT too (and it is easy to just keep doing it). LCD monitor chips simply decode gamma now (the specified 2.2 value does still have to be right). The 8-bit value might be off by one sometimes, but you will never know it. Since all values are affected in about the same way, there's little overall effect. More bits could be better, but the consensus is that 8 bits work OK.
I am all for the compatibility of continuing gamma images, but gamma has absolutely nothing to do with the human eye's response. The eye has no use for gamma. Gamma was done so the corrected linear image could be shown on non-linear CRT displays. Gamma is simply history, from CRT monitors. We still do it today, for compatibility of all images and all video and printer systems, and for any CRT too.
We probably should know though, that our histogram data values are gamma encoded. Any data values you see in the histogram are gamma values. But the human eye never sees gamma values.