Two of the basic color systems commonly related to photographs are:
This article is about RGB color, which is universally used in our digital cameras and scanners, almost all photo image files, computers, video monitors, cell phones, television and computer screens, stage lighting, and about anything else working with light (our human eyes are an RGB system too). Raw files are not exactly RGB yet, but raw files are also not viewable images yet, not until the RGB conversion produces an image we can view. See more RGB specifics below.
Painting (art or houses, etc.) and printing (commercial prepress, etc.) and pigments and ink and dye and photo print paper use the CMYK system that we see as reflected from the colored surface. Our home printers also use CMYK ink, but they are designed to expect to receive and convert RGB images (such as JPG files), since that's what we have to print.
Both RGB and CMYK are “device dependent” color, meaning any final result somewhat depends on how the specific device is able to show it, for example, just how red can the chosen ink or phosphor actually make it? Different devices might show it a little differently. There is another system called CIELAB or Lab Color (Wikipedia) that tries to define a specific color independent of how various devices might be able to reproduce it.
RGB color: Red, Green, and Blue are the RGB primary colors of light. The light from our monitor screen's RGB elements is seen directly by our eye. We look directly at the transmitted light itself. The RGB primaries mixed together will ADD (get brighter) to possibly combine into White if bright enough. This is called Additive Color (mixing all RGB colors gets brighter, because the sum of more light is brighter). Sunlight (around 5500 K) is a mix of all colors adding to what we call White, basically meaning bright and with no color cast. White does not mean no color, it means equal RGB colors, balanced (all equal), so no color tints appear. Neutral gray and even black are also balanced colors, meaning equal RGB components (no pinkish or bluish tints in the gray). Less intense RGB colors trend darker, toward black (if no light, no color). So White is balanced and bright enough to lighten Gray to White. The RGB nomenclature specifies the values of the combined Red, Green, and Blue components of the color: RGB(255,255,255) is White (255 is the maximum of 8-bit RGB), and RGB(0,0,0) is Black.
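To make that additive arithmetic concrete, here is a minimal sketch (in Python; the mix_light helper is hypothetical) that models mixing light as a per-channel sum clamped at the 8-bit maximum of 255:

```python
# A minimal sketch: additive RGB light mixing modeled as a per-channel
# sum, clamped at the 8-bit maximum of 255 (mix_light is hypothetical).
def mix_light(*colors):
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_light(RED, GREEN))        # (255, 255, 0): Yellow
print(mix_light(RED, GREEN, BLUE))  # (255, 255, 255): White
```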
However, printers print with CMYK ink, because what we see is reflected light that was not absorbed by the printed ink, which is a different, inverse principle. The difference is that RGB is about a direct beam of light transmitted into our eyes, versus CMY, which is about the light reflected from color pigments on physical objects.
CMY color: The complements of the RGB primaries are Cyan, Magenta, and Yellow, which are the CMY primary colors of pigments, of paint, ink, dye, etc., called the Subtractive system, which are viewed by reflected light (seen reflected from the inks or pigments printed on paper, for example). The CMY inks absorb light; mixing CMY colors subtracts (gets darker, due to absorption of more colors). Less intense, thinner CMY ink (towards zero) trends brighter, toward white (which is in fact the blank white paper when we can see it through sparse ink). For example, the light (assuming white sunlight) hits printed Yellow ink, which absorbs the Blue complement but reflects the other colors of the white light, and the red and green components are seen as Yellow (with no Blue present).
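As an idealized sketch of that subtraction (assuming perfect inks, which real pigments are not), each ink can be modeled as fully absorbing its complement channel and passing the rest, so stacked inks reflect the per-channel minimum:

```python
# Idealized subtractive mixing sketch: each perfect ink fully absorbs its
# complement channel and passes the rest; stacked inks reflect the
# per-channel minimum. Real pigments absorb imperfectly (muddier results).
WHITE_PAPER = (255, 255, 255)

INK_PASSES = {                     # the light each ideal ink reflects
    "cyan":    (0, 255, 255),      # absorbs Red
    "magenta": (255, 0, 255),      # absorbs Green
    "yellow":  (255, 255, 0),      # absorbs Blue
}

def mix_inks(*inks):
    reflected = WHITE_PAPER
    for ink in inks:
        reflected = tuple(min(r, p) for r, p in zip(reflected, INK_PASSES[ink]))
    return reflected

print(mix_inks("yellow"))                     # (255, 255, 0): Yellow
print(mix_inks("yellow", "cyan"))             # (0, 255, 0):   Green
print(mix_inks("cyan", "magenta", "yellow"))  # (0, 0, 0): ideal Black
```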
CMYK: Printed or painted color ideally would need only the three CMY inks, but it is less precise than RGB because the pigments are not perfect in their light absorption and reflection properties. Ideally, equal mixes of the three CMY primaries should absorb all light, so none is reflected, which should be Black, but since the inks don’t absorb perfectly, the mixed result is typically more a muddy brown. Black ink is added as a helper, so the system is CMY, but the inks are CMYK. The K is for the Key ink (Wikipedia), which is Black. Black ink ideally (hopefully) absorbs all light, but there's always still a little reflection; the pigments are not perfect absorbers. Making this point, Black ink alone comes out a little flat, so CMYK often prints "rich black", which is 100% Black plus 50% each of the other three inks, which absorbs the light better. So CMYK is one solution, and another is that chemical photo printing (the one-hour machines today) must shine light onto the paper emulsion, so corrective filters can be added to the light to offset any known emulsion dye problems. But pigments of ink and paint don't work that same optical way. Printing is a rather different subject than RGB.
RYB color is something else (red, yellow, blue): Early artists seeking to make colors by mixing pigment primaries assumed RYB as the primaries. The exact CMY primaries of reflected painting were not known until well into the 20th century. Before that, artists mixing pigments used the centuries-older view of RYB primaries, but it was ultimately discovered that RYB is not quite correct. Artists found RYB couldn't make all painted colors without using additional paint colors (like green), but they adapted to deal with it. The imperfect pigments caused issues for the artists too, but the situation was not clear back then. RYB is still taught in elementary school, where students don't yet know the magenta and cyan colors. Today, the red-purple Magenta is the same as the color called Fuchsia (both are RGB(255, 0, 255)). And the blue-green Cyan is the same as the color often called Aqua (both are RGB(0, 255, 255)). RYB has this historical basis, and is still widely used in art education, but we've since learned it is not the exact technically correct physics of color. You will still commonly find RYB color wheels (possibly called Traditional), which reflect the early theory but not actual precise reality, so instead specifically look for RGB color wheels (which also apply to CMY work). Tradition may still remember RYB, but modern science uses RGB, and CMY are the fundamental primaries for modern printing and painting and color photo prints.
Primary colors are the three colors from which all the other colors can be made by mixing proportions of the three. Today, cameras, scanners, computers and video monitors (mixing light, seen directly as light) use RGB primaries. In RGB, mixing more colors of light gets brighter, more light. Color printing or painting (mixing ink or dye or paint, seen as light reflected from the pigments) uses the CMY complements, where more mixed colors absorb more colors of the reflected light, getting darker. RGB and CMY are complements of each other (one inverted is the other), both using the same RGB color wheel. In RGB light, mixing all colors of light is more light and higher intensity, which ultimately becomes white, while no light at all is black. In CMY pigments, adding more mixed pigment colors absorbs more colors of light and reflects less, which is darker, ideally like black, but no pigment at all is the white paper showing through.
Black and white are intensities of neutral gray, which are not on the color wheel. These color wheels only show the strongest bright shades. The color wheel shows for example that equally mixing Red and Green makes Yellow, or that equally mixing Red and Yellow makes Orange.
Do pay attention to your selection of color wheels, because some are RGB and some are RYB, and not all specify which they are.
To easily identify the two types of wheels:
In RGB, RGB are each equally spaced 1/3 way around it, and Red is directly opposite Cyan.
In RYB, RYB are each equally spaced 1/3 way around it, and Red is directly opposite Green.
But Red actually inverts to Cyan, NOT Green. So Red and Cyan are complements, and Green and Magenta are complements, and Yellow and Blue too.
As an example, the RGB color wheel shows that mixing Red and Green gives Yellow, which is indicated by Yellow being halfway between Red and Green on the color wheel. Or mixing Yellow and Cyan gives Green, which is halfway between Yellow and Cyan on the RGB color wheel. But an RYB wheel shows Green between Yellow and Blue, which are in fact actually complements of each other (meaning equal parts of perfect pigments should mix to be some shade of gray). So RYB mixing may get some unexpected variations of colors. Varying the intensity levels can help, but practice finds it better to obtain some actual green pigmented paint, or to mix Yellow and Cyan.
RYB is called "traditional" by artists (dating back a few hundred years). But modern science has learned the accurate physics is RGB and CMY, certainly for photography or for commercial or scientific work. See Wikipedia for RGB color.
You are necessarily looking at RGB images here on the screen, so this CMY image is just a simulation of how the overlapped CMY pigments would mix printed on paper. The three overlapped primaries in the center of the RGB image all add to a brighter White (lights add to be brighter), and white because they are equal and bright. The three overlapped primaries in the center of the CMY image are dark, because each of the three printed primaries absorbs its complement color, and the three absorptions leave the reflection dark and near black. So try to imagine it is a photo of the printed paper. :) The additive and subtractive concept is correct, and the difference is directly viewed light vs. viewing light reflected from and absorbed by pigments.
Complementary color: These RGB and CMY sets of primaries are complements of each other. These actual true complements are on directly opposite sides of the RGB color wheel, and are inverse, or actual opposite colors so to speak, meaning that when a color and its complement are equally mixed together, the color cancels out, and the mixed result becomes a colorless grayscale tone between white and black. This is true of RGB, but RYB is not quite proper theory.
Inverting color negatives: In routine photo printing, when projecting the color negative image onto print paper, the orange film mask is filtered out with optical filters under the lens, and then printed on the CMY print paper which inverts (more light darkens the paper emulsion more, so to speak). The analog light is NOT limited to values of 0..255.
But computer processing is limited to 0..255, and numerically, the complement of an RGB value in [0..255] is Complement Value = 255 - Value (which becomes [255..0]). This is the inversion operation as done by the computer's inversion tool, which processes each of the three components of RGB this way.
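A minimal sketch of that inversion (the invert helper is hypothetical, but the 255 - value arithmetic is exactly what an editor's Invert tool does per channel):

```python
# Inversion as an editor's Invert tool does it: each 8-bit RGB
# component is replaced by its complement, 255 - value.
def invert(rgb):
    return tuple(255 - v for v in rgb)

print(invert((255, 0, 0)))      # (0, 255, 255): Red inverts to Cyan
print(invert((255, 255, 0)))    # (0, 0, 255):   Yellow inverts to Blue
print(invert((128, 128, 128)))  # (127, 127, 127): neutral gray stays neutral
```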
Color film negatives have the light orange mask base, which becomes strong deep blue when simply inverted. And since computer RGB processing is limited to 8-bit values 0..255, larger shifts can cause clipping, which shifts color mixes. Inversion is NOT done for positive film slides of course, so camera chips can copy them well. But a real film scanner (in Color Negative Mode, as opposed to a camera chip) can remove the orange mask by varying the exposure time duration of the three RGB channels. A film scanner better processes color negatives by exposing the blue channel about 3.5x longer than the red channel, and the green channel about 2.5x longer than red. The analog light is not limited at 255, so this simulates an analog optical filter to remove the orange mask, which is a big advantage. Otherwise, see Processing Scanned Color Negatives.
RGB and CMY share the same RGB color wheel, but their results are viewed very differently (by direct light vs. reflected light). Yellow light looks yellow to us, but yellow pigment looks yellow because it absorbs the complement blue and reflects all else (more or less, pigments may have skewed response curves). White light RGB(255,255,255) reflected from yellow pigment has had the complement blue absorbed by the yellow pigment, but reflects red and green, which we view as yellow RGB(255,255,0).
Photo editors may have White Balance tools with sliders for color Temperature varying from Yellow to Blue, and also for Tint varying from Green to Magenta, which are Complements in the RGB system. These White Balance sliders are also actually two of the axes in the CIE Lab color system (developed to match the human eye). The third CIE Lab axis is Lightness, or intensity, from dark to bright. See more about White Balance below.
Our human eye has many cones of three types ("cone sensors", Wikipedia), and each cone type is sensitive to one of red or green or blue light. Then our processed digital images will store the sampled red and green and blue components of the color as pixels, which we call RGB color.
Somewhat similar to the eye's cones, in the camera raw image, each sensor pixel is simply filtered to sample ONE of the three RGB component colors (called a Bayer filter or pattern) at that pixel's spot on the sensor. Then each pixel of the final processed RGB image that we will see will contain three RGB numbers representing a resampled color version of all three primaries.
Each raw Bayer sensor pixel has a color filter over it to separate and collect ONE of the values of the three RGB component colors (each sensor pixel to be ONE of red or green or blue). Each raw pixel contains data for only the one filtered color. But then each pixel of the processed and finished RGB image that we eventually see (for example, a JPG image) is created to contain data of three RGB numbers interpolated from neighboring Bayer pixel values. This is a limitation of our technology: each physical sensor pixel contains only one voltage representing one number representing the color of its filter. Foveon sensors are an exception (Wikipedia); their pixels store all three colors, with some issues.
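A minimal sketch of that interpolation idea, on a tiny hypothetical RGGB mosaic (real cameras use far more sophisticated edge-aware demosaicing than this simple neighbor averaging):

```python
import numpy as np

# A tiny hypothetical RGGB Bayer mosaic: each cell holds only ONE
# filtered color sample. Rows alternate R G R G / G B G B.
raw = np.array([[10, 20, 12, 22],    # R G R G
                [30, 40, 32, 42],    # G B G B
                [14, 24, 16, 26],    # R G R G
                [34, 44, 36, 46]],   # G B G B
               dtype=float)

# raw[2, 2] is a Red site; its missing Green and Blue are estimated
# by averaging the nearest neighbors that sampled those colors.
g_at_red = (raw[1, 2] + raw[3, 2] + raw[2, 1] + raw[2, 3]) / 4  # edge greens
b_at_red = (raw[1, 1] + raw[1, 3] + raw[3, 1] + raw[3, 3]) / 4  # corner blues

print(raw[2, 2], g_at_red, b_at_red)  # R=16.0, G=29.5, B=43.0
```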
Each final processed RGB image pixel is three RGB digital NUMBERS which combine to show as ONE RGB COLOR (of 16.7 million possible color combinations). A pixel is just numbers that specify the definition of one RGB color — just a tiny dot of color, much like one small colored tile in a mosaic tile picture. The digital numeric concept may be relatively new today, but the mosaic tile concept is at least 5000 years old.
Printing uses CMYK ink (Cyan, Magenta, Yellow, and Key, which is Black), so images for commercial printing presses are converted to CMYK color in halftone screens (an operation called prepress). Our cameras and scanners and monitors are RGB, but our printers use CMYK ink. However, our home printers do expect to receive RGB images because that's what we have to give them, and then they do the CMYK ink conversion.
RGB (Red, Green, Blue)
This chart of RGB colors just tries to show some examples of basic concepts of the RGB numbering nomenclature. There are many possible shades of a color. Don't worry if the numeric values seem difficult to predict, some of them certainly can be. A little experience helps some, but typically only web page work needs to know much detail. And in practice, it's easy, as there are many web references to help, and for web pages, we can simply look up the proper codes for the desired color. There's also a color picker from Mozilla to play with to match the colors to the numbers. The Photoshop Color Picker is a similar tool.
Each of the individual R or G or B components is normally an 8-bit value, each in the range of [0..255], because 255 is the largest value possible to store in 8 bits (2^8 = 256 values of [0..255]). One image pixel's color is a combination of the three R,G,B basic components (then called 24-bit color). For example, we know from grade school that red and green make yellow. Later in science we learned White light is a mix of all colors. Numbers up toward 255 are bright, and down near zero are dark or black.
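To show that arithmetic, a small sketch packing the three 8-bit components into one 24-bit integer, as many file formats and graphics APIs store pixels (the pack/unpack helpers are hypothetical):

```python
# One 24-bit pixel is just three 8-bit numbers; packing them into a
# single integer makes the 24-bit arithmetic visible (helpers hypothetical).
def pack(r, g, b):
    return (r << 16) | (g << 8) | b

def unpack(c):
    return ((c >> 16) & 255, (c >> 8) & 255, c & 255)

white = pack(255, 255, 255)
print(hex(white), unpack(white))  # 0xffffff (255, 255, 255)
print(256 ** 3)                   # 16777216: the 16.7 million possible colors
```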
Numbers: Any light on any path to our eyes (from the scene or a monitor) is ordinary linear light. Our eyes expect to see only ordinary linear analog light (not digital). However, the usual numeric interest of photographers is the digital RGB numbers in the camera or scanner, in the image file or histogram, in the computer or internet (meaning any digital image, as shown here), which are encoded to be different gamma numbers. But thereafter, any image's light from the monitor has been decoded to linear again. Linear in math has a proportional straight-line curve definition, but linear in photography also means "not a gamma image" (either not yet, or not still). Gamma is an automatic process handled by cameras, scanners, monitors and printers, and so can be ignored, but if interested in the histogram or editor numbers, there is a short summary of gamma below.
All JPG color files are 24-bit "color" in an 8-bit "mode" (but JPG can also be single 8-bit grayscale values instead). This 24 and 8 are two definitions with two different meanings. The three RGB colors are each 8 bits (possible values [0..255], 2^8 = 256) of each of Red, Green, and Blue. The three 8-bit RGB components can create up to 256×256×256 = 16.7 million possible RGB color combinations, called 24-bit "color". Of course, any one real photo image will not use most of these possible colors. A forest scene is probably mostly green colors, and an ocean or sky scene mostly blue, etc.
A crop from the same original large size shows more color detail, but is still shown here near 1/3 of full size (3×3 = 9 pixels combined into 1).
300x382 pixels, 27K colors, mostly blue.
100% crop, full size, includes every original pixel in this cropped area.
301x142 pixels, 30K colors.
Pixel color differences are the only visible image detail: The only data that a pixel contains is the specification of its one color. A pixel simply specifies the one RGB color of its own tiny area. All detail in a digital image is reproduced and shown only by the differing pixel colors. A large digital image has potential to show finer detail if shown large, but its many megapixels won't help if its only use is to resample into a small image, which means much of its detail is lost. Both large and small images have purposes.
This original image was taken with a Nikon D800 and 24-120 mm lens. ISO 200, f/6.3, 1/160 second at 40 mm. It originally contained 6520x4347 pixels (as cropped here to 28 megapixels) and 718K colors. The top copy is resampled smaller (28 megapixels to 0.06 megapixels, to 4.6% of dimensions and of maximum spatial resolution), literally meaning every block of about 22x22 pixels (about 472 pixels) was condensed into one pixel of one average color, as totally appropriate for the new small size, but necessarily losing detail and resolution and colors (6% of the colors, and color variations are the only detail). Most photo images of modest size might use only maybe a couple percent of the 16.7 million theoretical maximum possible RGB colors (the free Irfanview menu Image - Information will show this Number of unique colors).
But we don't bother to count colors, because the simple count varies with scene content and image size. But the pixel count is important to know how the image can be used. I'm just trying to mention that images resampled smaller must lose detail (from combining multiple pixels into one), because the color differences of adjacent pixels are the detail. That's just normal. There's not much that can be done to help a small image retain detail, but if suitable, first cropping it tighter to enlarge the area of the important detail can help (as in this second image).
Our eye cannot even differentiate all of the possible 16.7 million colors if shades differ by less than about 1%, but they are all possibilities. There are many shades of any color, but adjacent numeric steps are less than 1% apart if brighter than 100 (1 is 1% of 100). For example, the green samples of Green = 200 are shown in the box above, where 1% is 202 and 2% is 204. Here, you may see the straight edge line between the close values, but probably can’t notice any actual color difference when it is only a few random pixels in a photo. The difference in adjacent numerical values can be very subtle (Google Just Noticeable Difference Color).
This 16.7 million RGB possibilities is just a calculated maximum number (256×256×256) of 24-bit possibilities. Most possibilities will not exist in any one set of data. But 8-bit grayscale has a maximum of only 256 possible tones. Therefore it does generally become important for B&W images to have at least a small area of pure white and also some pure black, for increased contrast (see the easy procedure for this).
16-bit values can be created (48-bit RGB color); typically 12 and 14 and 16 bits are internal values inside cameras and scanners, but we really don't have ways to show them, since our monitors and printers use 24-bit color (with 8-bit RGB values). We can only show 16-bit data truncated to the 8 bits we can display. A very crude analogy of the idea (seems conceptually adequate here, but it mixes definitions of binary and decimal, and of floating point and integer) is that truncation to 8 bits might be thought of as similar to the currency value $151.79 being called $151 (truncation). Rounding is possible, but with many millions of values to process (megapixels x 3 RGB values), truncating to leave the most significant digits is much faster and more likely. The value $151 (represented in three digits) is still accurate, just less precise (which are different things). Binary bits and decimal digits are not the same thing or scale, but are conceptually similar. A conversion from 16-bit integers to 8 bits simply retains the most significant 8 bits and truncates the least significant 8 bits. That is the same as dividing by 256 (2^8 = 256), but very much faster.
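A small sketch of that conversion (the sample value is hypothetical): the shift keeps the most significant 8 bits, and a rounded alternative divides by 257 so that 65535 maps exactly to 255:

```python
# 16-bit to 8-bit conversion by truncation: keep the most significant
# 8 bits, i.e. an integer divide by 256 (a right shift by 8 bits).
value16 = 47803                          # a hypothetical 16-bit sample, 0..65535
value8_truncated = value16 >> 8          # 186 (floor of 47803 / 256)
value8_rounded = round(value16 / 257)    # 186 too; 257 maps 65535 -> 255 exactly

print(value8_truncated, value8_rounded)  # 186 186
```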
JPG File Size: Repeating from the JPG artifacts page, these next four JPG images were originally each 36 megapixels, but now all are 450x300 pixels, or 135,000 pixels (0.135 megapixels), × 3 RGB bytes per pixel is 405,000 data bytes (395.5 KB, 0.386 megabytes) when open and uncompressed in memory. The four images compare JPG compression efficiency, of this uncompressed data size in memory to the compressed size when in the JPG file. All four images are the same 450x300 pixel image size and all used the same High JPG Quality level here, but the file size varies greatly due to the varying image detail which affects compression efficiency. Bland, smooth, featureless image areas create smaller JPG files, and heavily detailed images are larger files. These do not have any EXIF data in them now, which could have added several KB to file size.
Black is the zero value at RGB(0,0,0), and White is maximum at RGB(255,255,255). 255 is the brightest shade of an RGB color that the viewing device can use (because 255 is the largest number that can be stored in an 8-bit byte). RGB is called "device dependent" color, because 255 may not appear exactly the same on different devices, but 255 will be the brightest that each device can show (depending on its inks or phosphors, etc). A printer and a video monitor have different capabilities: viewed with light reflected from ink on paper, or direct transmitted light from the video screen. A paper print will appear lower contrast than a monitor image, because the black ink will still reflect some light, and the whitest paper might reflect only 90% of it. (Speaking of the reflection of black, one yard of dressmaker's black velvet cloth will make a really black background for tabletop photos, because the deep fibers trap and capture the light. It will appear utterly jet black, and other black cloth or paper won't, not unless you can keep the light off of them.)
My notion is that this is the reason there has never been any push to implement 16-bit output as standard. It's doable today, but the precision really wouldn't offer much advantage (it might help at the extreme low end, if not too dark for our monitors). And 16 bits has so many more possible values (65536 vs 256), so lossless 16-bit compression is much less effective. Our eyes can't distinguish many of the 8-bit steps that we do use, and we're doing fine with 8 bits. Even harder, our eyes and brain can do strange things to us. The brain is an active sensor that sometimes instead sees what it expects to see. Optical illusions are common. For example, in the RGB chart above, I'm sure I see the gray background lighten under the top line (0,0,0) black box. 16-bit color could not help these perceptions.
We can create data with 16-bit color channels (48-bit RGB color), but today, the extreme vast majority (virtually all) of our monitors, video cards and printers are 8-bit devices, meaning 24-bit color. We can create 16-bit images, but our 8-bit systems can only show 8 bits. 10-bit monitors are available, but rare (Hollywood film makers use them for special-effects graphics, to prevent banding in very wide gradients). You'd need a 10-bit video card and software to use them at home, and you'd need more than 8-bit images. Our digital cameras and scanners are 12 or 14 or 16 bits internally, for the necessary purpose of more precision when editing extreme shifts, like gamma or white balance. We cannot ever "see" 16-bit images, because our 8-bit monitors and printers will truncate the more-precise result when we see it (8 bits compares similar to a rounded result, fully "accurate" but less precise, just represented by fewer bits of precision). All JPG images are 8-bit data (but TIF and PNG do have 16-bit options). See more about file formats.
Luminance: Our human eye does not perceive all colors as equally bright. Of the three, we see Green as brightest. Red is next, and Blue is least bright (to us). The perceived brightness of color (luminosity) is judged to be: Green 0.59, Red 0.3, Blue 0.11. That's the NTSC television formula for converting RGB scenes to grayscale TV: RGB Luminance = 0.3 R + 0.59 G + 0.11 B (the three coefficients add to 1.0). Luminance is the computed equivalent brightness of color as seen in grayscale (or by B&W film), but it is NOT actual real RGB data.
One important significance of luminance we should know is this: Our digital cameras often show one gray histogram and also show the three RGB histograms. The one gray histogram is computed as Luminance (from our three RGB channels) and only shows apparent brightness, but Luminance is of NO USE to determine image clipping. Clipping is when the data makes a peak against the 255 right edge of the histogram, implying great likelihood that it is clipped there. But as just described, Luminance is computed to show only about 30% of red, 59% of green, and 11% of blue intensities (only a fraction of the actual intensity). Viewing a luminance histogram will NOT show clipping (the recomputed reduced values cannot reach 255, even if the actual RGB data does). Luminance is just a different computed number for another purpose (it indicates perceived grayscale brightness of colors to the eye, NOT the actual color brightness), but is NOT the real actual values in our RGB pixels. So instead, always examine all three RGB channels for clipping. However, there are different styles of histograms, and photo editors like Adobe can overlay the three RGB channels into one gray histogram, which is OK then; it may look gray, but it is still real RGB data. It is NOT luminance then (but there may be an option to show luminance). Make sure your single histogram still shows the peaks and shape and extents corresponding to actual RGB.
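A small sketch of the NTSC formula makes the clipping point obvious: a fully clipped pure-red pixel computes to luminance 76, nowhere near the 255 edge (the luminance helper is hypothetical; the coefficients are the NTSC values above):

```python
# NTSC luminance from RGB, and why a luminance histogram hides clipping.
def luminance(r, g, b):
    return round(0.3 * r + 0.59 * g + 0.11 * b)

print(luminance(255, 255, 255))  # 255: white reaches the right edge
print(luminance(255, 0, 0))      # 76:  a clipped pure-red pixel looks mid-dark
print(luminance(0, 255, 0))      # 150: green is perceived brightest
print(luminance(0, 0, 255))      # 28:  blue is perceived dimmest
```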
The single-channel grayscale tone is one byte in 8-bit grayscale (0 to 255), which is 1/3 the size of color files that have three bytes of 24-bit RGB. The single byte of gray has no color cast possible (three bytes of RGB can). We can print this single value with dots of only one black ink (dithering is a way of combining sparse black ink dots with tiny blank white paper "dots" to create an averaged appearance of gray in the area size of one pixel). Or, Grayscale can be represented in color mode, for example if scanning a grayscale photo in color mode. Then equal RGB components, for example RGB(110, 110, 110), represent a true Grayscale single-channel value 110 (Luminance = 110x0.3 + 110x0.59 + 110x0.11 = 110). Or we can print grayscale with CMYK inks that are combined into shades of gray (but then there is risk of unequal printer inks introducing a color cast). The few best inkjet printers for grayscale mode also offer a couple of lighter gray inks to improve dithering. Or to print color images, inkjet color printers also have to dither their four colors of CMYK ink to simulate 16.7 million possible colors (in the space of one pixel). Some may also offer lighter shades of cyan and magenta ink, to improve color dithering in photos.
A very good type of White Balance tool uses a known neutral white or light gray white balance card (known to actually have equal RGB components). This known WB card can be included in a first test picture in the light of the scene to correct White Balance in this situation's batch of color images. Clicking the WB tool on that card tells the computer "this spot is known to be neutral, so make it be neutral", and then the computer corrects the image so that known neutral spot is adjusted to actually be neutral (equal RGB, no color cast). This same change also corrects the rest of the image to remove that same color cast (assuming card and subject are in the same light). This works great, and for JPG too, but works best in raw image processing (because of the greater range of its greater bit depth). Also, just clicking on white objects found naturally in the scene, judged to be pretty close to neutral, can work pretty well, not precise but far better than nothing, and worth a try to judge the result. The site's White Balance page is here.
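A minimal sketch of that correction, assuming a hypothetical sampled card spot with a bluish cast: each channel is scaled by a gain that makes the card spot neutral, and the same gains are applied to every pixel:

```python
# Gray-card white balance sketch: compute per-channel gains that make the
# sampled card spot neutral (equal RGB), then apply the same gains to all
# pixels. The sampled values here are hypothetical.
card = (180, 200, 230)                 # sampled card spot, with a bluish cast
target = sum(card) / 3                 # the neutral level to normalize to
gains = tuple(target / ch for ch in card)

def correct(pixel):
    return tuple(min(round(g * v), 255) for g, v in zip(gains, pixel))

print(correct(card))            # (203, 203, 203): the card spot is now neutral
print(correct((60, 120, 200)))  # (68, 122, 177): same gains fix other pixels
```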
8-bits: As is common practice, there are often multiple definitions used for the same words, with different meanings: 8-bits is one of those.
In RGB images - 8-bit "mode" means three 8-bit channels of RGB data, also called 24-bit "color depth" data. This is three 8-bit channels, one byte for each of the R or G or B components, which is 3 bytes per pixel, 24-bit color, and up to 16.7 million possible color combinations (256 x 256 x 256). Our monitors or printers are 8-bit devices, meaning 24-bit color. 24-bits is very good for photos.
In Grayscale images (B&W photos), the pixel values are one channel of 8-bit data, of single numbers representing a shade of gray from black (0) to white (255).
Indexed color: Typically used for graphics containing relatively few colors (like only 4 or 8 colors). All GIF and PNG8 files are indexed color, and indexed is an option in TIF. Indexed color is always limited to 256 colors max, and some choices are as few as 2 colors. These indexed files include a color palette (just a list of the actual RGB colors). An 8-bit index is 2^8 = 256 values of 0..255, which indexes into a 256-color palette. Or a 3-bit index is 2^3 = 8 values of 0..7, which indexes into an 8-color palette. The actual pixel data is this index number into that limited palette of colors. For example, the pixel's data might say "use color number 3", so the pixel color comes from palette color number 3, which could be any 24-bit RGB color stored there. The editor creating the indexed file rounds all image colors into the closest values of just this limited number of possible palette values. The indexed pixel data is most commonly still one byte per pixel before compression, but if the bytes only contain these small index numbers for, say, 4-bit 16 colors, compression (lossless) can do awesome size reductions in the file. Being limited to only 256 colors is not good for photo images, which normally contain 100K to 400K colors, but 8 or 16 colors is a very small file and very suitable for graphics of only a few colors. More on Indexed color.
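A minimal sketch of the lookup idea, with a hypothetical 4-color (2-bit) palette: the pixel data holds only small index numbers, and the palette holds the actual 24-bit RGB colors:

```python
# Indexed color sketch: pixel data holds small palette indexes, not RGB.
# A hypothetical 4-color (2-bit) palette of 24-bit RGB entries:
palette = [(255, 255, 255), (255, 0, 0), (0, 0, 255), (0, 0, 0)]

pixels = [0, 1, 1, 2, 3, 0]             # each pixel: "use palette color N"
image_rgb = [palette[i] for i in pixels]

print(image_rgb[1])  # (255, 0, 0): index 1 looked up in the palette
```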
8-bit color was in common use before our current 24-bit color hardware became available. A note from history: we might still see old mentions of "web safe colors". This wasn't about security; this standard was back in the day when our 8-bit monitors could only show the few indexed colors (maximum of 256 colors). The "web safe" palette was six shades of each of R,G,B (6×6×6 = 216 colors), plus 40 system colors the OS might use independently. These colors would be rendered correctly; any others were just nearest match. "Web-safe" is obsolete now; every RGB color is "safe" for 24-bit color systems today.
Line Art (also called Bilevel) is two colors, normally black ink dots on white paper (the printing press could use a different color of ink or paper, but your home printer will only use black ink). Line art is packed bits and is not indexed (and is not the same as 2-color Indexed, which can be any two colors from a palette, and indexed uncompressed is still one byte per pixel, but compression is very efficient on the smaller values). Scanners have three standard scan modes: Line art, Grayscale, or Color mode (they may use these names, or some (HP) may call them B&W mode, B&W Photo mode, and Color; same thing). Line art is the smallest, simplest, oldest image type, 1 bit per pixel, where each pixel is simply either a 0 or a 1. Examples are that fax is line art, sheet music would be best as line art, and printed text pages are normally best scanned in line art mode (except for any photo images on the same page). The name comes from line drawings such as newspaper cartoons, which are normally line art (possibly color is added today inside the black lines, like a kid's coloring book). We routinely scan color work at 300 dpi, but line art gives sharper lines if created at 600 dpi, or possibly even 1200 dpi if you have some way to print that (that works because it's only one ink; there are no color dots that have to be dithered). Even so, line art makes very small files (especially if compressed). Line art is great stuff when applicable, the obvious first choice for these special cases. Line art mode in Photoshop is cleverly reached at Image - Mode - BitMap, where it won't say line art, but line art is created by selecting 50% Threshold there in BitMap (which has to already be a grayscale image to reach BitMap). BitMap there is actually for halftones, except selecting 50% Threshold there means all tones darker than middle will simply be black, and all tones lighter than middle will be white, which is line art. Two colors, black and white.
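That 50% Threshold operation is simple to sketch (a hypothetical helper on a list of grayscale values): every tone below mid-scale becomes black, everything else white:

```python
# Line art via a 50% threshold: grayscale tones darker than mid-scale
# become black (0), lighter tones become white (255).
def to_lineart(gray_pixels, threshold=128):
    return [0 if g < threshold else 255 for g in gray_pixels]

print(to_lineart([12, 100, 127, 128, 200, 255]))
# [0, 0, 0, 255, 255, 255]
```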
Gamma Correction is the adjustment of digital photo data to correct for the non-linear response of CRT monitors to allow their use to show tonal image data.
Linear, as related to digital photo data, applies to data Before Gamma Encode or After Gamma Decode, just meaning the data Is Not Now Gamma Corrected... the current data state is linear, as in real-life scenes. Our human eye expects linear, and never sees any gamma-encoded data.
Gamma correction was done at every TV transmitter so that every CRT monitor did not have to add circuitry to correct the tonal response. Gamma was added to television specifications by 1940 (decades before digital) by the earliest television broadcasters. It is still universally done in digital images today, and every digital camera or scanner still adds gamma correction to tonal images per our image specifications. LCD monitors now are linear and don't need gamma, but all digital image data still contains the gamma encoding necessary for the CRT (so today's LCD monitors simply first decode to linear, to ignore gamma). The reason we still need to know about gamma is that it is in today's sRGB specifications, and all our past and present tonal digital images contain gamma encoding. For example, we know exposure 1 EV lower is half the exposure, therefore exposure 1 EV down from the histogram 255 end is half exposure, which is true of Linear data, but that does NOT MEAN HALF OF THE GAMMA HISTOGRAM RANGE. It will normally be about 3/4 scale in the gamma histogram due to gamma encoding for the CRT. You are invited to a full description of gamma.
In addition to the mathematical straight-line graph definition of linear, "linear" in this photo context is also generally used to mean "not yet gamma encoded" or "after decoding" (gamma is a nonlinear process). So common usage about photo data is Linear meaning Not currently gamma encoded. The natural light scenes we photograph are all linear data, and our eyes of course don't understand seeing anything except real-world natural linear light. Only our photo files and histograms are gamma encoded (which encoding is named "gamma correction"), always universally done for the purpose of compensating for the CRT's nonlinear response, to make the CRT acceptable for showing tonal data like photos. Usage of CRT monitors might be rare today, but photo files are always still gamma encoded for compatibility, as specified by our standards. LCD monitors are linear, so they know to first decode it to restore the data to linear. This is a simple process today.
The camera lens sees ordinary linear light from the original scene, and our eyes also expect to see only linear light. But then all digital cameras and scanners add gamma BEFORE they output the digital images we use (to correct the CRT response to become linear). Our photo image files contain gamma data, our histograms show the gamma data, and our editors edit the gamma data. Film and print media remain analog and do not use gamma, because they cannot be viewed by a CRT (they must be scanned to digital first for CRT to show them, and digital then adds gamma for CRT). Our digital image viewing devices (monitors, televisions, cell phones, etc) decode gamma (a CRT monitor decodes by the nonlinear response losses of just showing it, or a LCD monitor uses a chip to just decode it numerically to linear). One way or the other, gamma is always necessarily decoded to output the linear visual data again for the eye to see.
Middle Gray and Gamma: Perhaps a more advanced topic, but many things do have multiple definitions, which must be interpreted in context, and Middle Gray is one of the most confusing terms, with various meanings. We might imagine RGB(128, 128, 128) to be Middle Gray, simply because it is mid-scale in our [0..255] histogram, but that is incorrect logic, because the histogram data is gamma data, values from gamma 2.2 calculations.
Linear 50% is the numeric middle, which is exactly one EV down from the 255 end point, but photo digital data is gamma encoded, and 50% linear is at gamma 186, near 73% or 3/4 of full scale in the histogram (it is NOT a precise number, because our cameras are busy doing White Balance and Contrast and Saturation adjustments which shift it variably). This 50% linear would seem a correct center, except it is not the way our eye and brain perceive it. Our eyes encode it too, but we do of course expect to see only linear real-world scenes.
Our 18% gray cards (the neutral color said to reflect 18% of the light on it) are said to be the Middle Gray that our eye and brain perceive as halfway between black and white. The 18% linear value is 46, but converting it to gamma comes out at gamma 117 — fairly close to the middle of our histograms, but only coincidentally, due to the gamma math (numerically it is not at all about the word middle). There are a few vague and different definitions of middle gray, but this one seems most accepted. But 18% is not the mid-point of anything digital, and our eye never sees any gamma data. Incidentally, Kodak always advised, if metering on their 18% gray card (to be independent of subject colors, simulating using an Incident meter), to then increase exposure an additional 1/2 stop, which computes to 12.7% reflectance at linear value 32.5 and gamma 100 (the 1/2 stop was repeated from the early days when our cameras could not do third stops, but would be said as adding 1/3 EV today).
Our reflected camera light meter calibration uses K=12.5 (Wikipedia, which says ISO recommends K = 10.6 to 13.4), near the middle of the range (the histogram values are gamma values). This was based on studied trials of many general or typical scenes, which is merely a "not too bright and not too dark" guess. Since the meter understands nothing about any specific image, or what it is or how it should come out, don't presume any great precision from reflected meters, because reflected brightness is affected by the degree of reflection from the various subject colors. Reflected meters are still a tremendous help; however, Incident meters instead meter the actual direct light intensity, totally independent of the subject's colors, and generally are reliably more accurate (but are more awkward to use from the subject's position, except in the studio when the subject location is accessible).
Which value is to be called "Middle Gray" is confusing for us, and any exact definition is rarely involved. Our nonlinear human eye perceives 18% linear as about the halfway point between white and black, which is technically and numerically about gamma 117 in our histograms, before adjustments.
What is true and important is that our photo data (RGB and Grayscale) in our histogram is always gamma encoded data (our sRGB specification is for gamma 2.2). But our eyes never see any gamma values. Gamma is a correction factor to be able to use non-linear CRT monitors for pictures, but our eye necessarily wants to see only real-world linear data, so our LCD monitors (which are linear) must decode the data back to linear first, because the histogram data they receive is always gamma encoded. Our photo editors edit the gamma data (but our monitors show us the linear conversion that our eyes expect). So when we edit a red color of RGB(180,10,20), those are gamma encoded values, but our eye only sees the decoded linear result.
We tend to call it Gamma, but the correct name is Gamma Correction (must search for that term, just gamma also has other definitions). The non-linear response curve of a CRT monitor is named Gamma, and our preventative gamma correction of the data is done to correct and restore it. The photo image data values (as seen in the histogram) are modified, computed as Linear Value^(1/gamma), because the response of a CRT monitor will show it as Gamma Value^gamma, all planned so that the output remains linear. And because all of our image data is gamma encoded, a linear LCD monitor must simply first decode it too.
An example of computing added gamma for 18% linear: As the first action, data for the exponential step must be scaled to a 0..1 range, because 0 to any power is 0, and 1 to any power is 1, so the end points don’t change, nothing gets clipped, and contrast is not changed. So 18% becomes 0.18. The gamma value will get numerically larger, except that it is never seen, because it is always decoded again before any eye sees it. One plus was that this decode reduction also greatly reduced added signal noise in the days of analog television. However, the actual and necessary reason for using gamma was because CRT displays suffer severe nonlinear intensity losses that this gamma boost intentionally corrects. Today's LCD monitors are linear, so they know to simply mathematically decode and discard gamma, but we still continue adding gamma for continued compatibility with CRT (if any), and for obvious compatibility with all the old images that exist. Today, this is a minor thing to continue, not any problem.
So the numbers are that the 18% linear gray card is value 46 (18% of 255), which is 0.18 on the necessary 0..1 scale. Gamma uses exponent 2.2 for decode, and encode reverses it by using exponent 1/2.2 = 0.455. So the histogram gamma value is encoded as (46/255)^0.455 = 0.458 (18% linear becomes 45.8% of full scale on the gamma histogram scale), which is 0.458 × 255 = 117 in the 0..255 gamma histogram. That 117 is near the histogram midpoint of 128, but it is NOT near 50% intensity, because it represents the 18% linear value of 46. The histogram shows nonlinear gamma values. This is exactly mathematically reversible, so decoding is (117/255)^2.2 = 0.458^2.2 = 0.18 (the same 18% linear again), and 0.18 × 255 is value 46 linear again (46 in the 0..255 scale; the linear light is digitized, but histograms show the gamma data). The gamma data becomes linear again by simply showing images on a CRT monitor, whose losses result in the expected linear response, which is the specific purpose of gamma correction (see more about gamma). LCD monitors are already linear, and simply just mathematically decode gamma data to produce linear again.
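A minimal sketch of that encode/decode pair (the helper names are hypothetical; the exponents are the gamma 2.2 values above), reproducing the 18% card example:

```python
# Gamma 2.2 encode/decode on a 0..1 scale, reproducing the 18% gray
# card example: linear 46 encodes to about 117 in the gamma histogram.
GAMMA = 2.2

def encode(linear_0_255):                          # linear -> histogram value
    return round(255 * (linear_0_255 / 255) ** (1 / GAMMA))

def decode(gamma_0_255):                           # histogram value -> linear
    return round(255 * (gamma_0_255 / 255) ** GAMMA)

print(encode(46))   # 117: the 18% gray card in the gamma histogram
print(decode(117))  # 46:  exactly reversible back to linear
print(encode(128))  # 186: 50% linear (one stop down) is ~73% of full scale
```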
Maybe when considering Exposure Compensation, you see the blank space at the right end of the histogram, and may wonder how much more exposure it can use?
The table shows that if the top end of the data only reaches to about 73% of histogram full scale, then 1 EV compensation will move it to the top. If top end of the data only reaches 53% of scale, then 2 EV will move it to the top. If it reaches 90%, it can only accept 1/3 EV more before clipping. This is pretty good to know.
We see only linear natural light (between the eye and the scene or the monitor or print), but the computer's image data we might examine is gamma data (including the histogram). Our eye does its own thing, but it expects to see only linear light. Gamma data is necessarily always converted to linear before it is output from the monitor. But the calculation does verify that -1 EV is 50% exposure, and -2 EV is 25% exposure (if linear). So to make the histogram more useful, as an example of the idea intended here, if your photo data fills the left 60% of your histogram but the right 40% is empty, then find 60% on the gamma side (59.1% here, in the data for Thirds), and see that a plus 1.67 EV exposure should be a big help. This may not always be that easy or exactly straightforward to use, but again, it might help if you at least realize 1/3 EV down is at about 90% scale, 1 EV down is about 3/4 scale, and 2 EV down is near half scale.
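Those histogram positions can be computed directly; a small sketch (assuming the standard sRGB curve approximated as a plain gamma 2.2 power curve, as this page does):

```python
# Where "N stops down from 255" lands on a gamma 2.2 histogram:
# the linear fraction is 2**-EV, encoded as (2**-EV) ** (1/2.2) of scale.
for ev in (1/3, 1, 2, 3):
    linear = 2 ** -ev
    gamma_pos = linear ** (1 / 2.2)
    print(f"-{ev:.2f} EV: {linear:6.1%} linear -> {gamma_pos:6.1%} of histogram")

# -0.33 EV:  79.4% linear ->  90.0% of histogram
# -1.00 EV:  50.0% linear ->  73.0% of histogram
# -2.00 EV:  25.0% linear ->  53.3% of histogram
# -3.00 EV:  12.5% linear ->  38.9% of histogram
```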
The standard gamma 2.2 is assumed (it is the sRGB standard, and 2.2 is what the LCD monitor plans to decode).
Then watch the histogram to prevent clipping at 255 (in any of the three RGB histograms).
Clipping: The 0..255 width range of histograms shows the presence and distribution of the various tonal colors, and dark color scenes should be lower in the histogram (black should be black, not gray). But depending on colors, most average mixed-color bright scenes typically do nearly fill the histogram. The histogram is a bar chart, and the vertical bar height represents the Count of the number of pixels of that tone value (showing distribution). The height is scaled relatively, meaning the maximum height value should always reach the top of the histogram. The histogram simply shows the presence and distribution of the tonal values. Greater exposure shifts the tone values right, which is brighter color tone. Our exposure goal is to reproduce the colors that are actually in the scene. Our primary histogram concern is a noticeable count peak at 255, which strongly implies clipping, where higher values have been altered by being limited to 255. That alters their color, and all lower values have been brightened too. Clipping is not recoverable; instead, exposure needs to be reduced, because those pixels clipped to 255 have altered, inaccurate color representation. The one possible exception that might be considered is a scene containing bright white metered with an incident meter, which could be a 255 value in all three RGB histograms. However, large predominant white values are metered significantly lower by reflective meters, and then the actual goal is to increase exposure so white is near 255. Incident and reflective meters are very different; see a metered white value.
Note that computing cannot always predict the exact gamma result value seen in the histogram. Tones will move around a bit in the histogram, because the camera settings (color profile, white balance, saturation, contrast, etc) are always changing the data somewhat. But if it were otherwise unaffected, 18% is a linear value which would compute to 117 (46%) in the gamma 2.2 data. Another example is that a value one stop down from 255 would be 50% and 128 Linear, but it is 186 in gamma data (73%, more or less ballpark of around 3/4 of full scale). But the camera is also busy doing other things to the data too.
If interested in more than 8 bits, or other than sRGB gamma 2.2, then see another similar calculator.
So if not realizing the difference in linear and gamma, we might imagine 18% Middle Gray is somehow instead the 128 midpoint in the histogram. It is called Middle Gray after all, and it does coincidentally come out fairly close, but only because gamma is added. Gamma 117 is only about 1/8 EV difference from 128, so gamma does make the 18% gray card appear fairly close to 128, but for an entirely different concept and unrelated coincidental reason (certainly 18% is not a middle number in digital). It is true that the CRT response and the human eye response are both similarly logarithmic (exponential), but they are entirely different situations, and the human eye never sees any gamma data. Gamma correction is done for the CRT, not for the eye.
Online articles about histograms for beginners rarely even mention gamma (showing concepts as if the data were presumed linear). Either they don't want to explain gamma too, or possibly they just don't understand the difference. But histograms do show gamma data, so do realize that one exact stop down from the 255 end of the histogram is 50% linear analog exposure, but it will actually be about 3/4 scale on the histogram, and WILL NOT be near half scale (128 gamma is only 22% linear reflection). That is because the histogram data is gamma encoded (non-linear). It is still fully useful for our purpose of checking on clipping, because gamma itself never affects clipping (because in the 0..1 scale used for gamma, either 0 or 1 to any exponent is still 0 or 1). These end points are unchanged by gamma; however, overexposed data can exceed 1, which is then clipped at digital 255.
The camera’s three RGB histograms are real pixel data and are very useful to show clipping, but again, the single grayscale histogram may instead show computed luminance, which cannot show clipping correctly.