These basics are pretty much about a single issue: How do I use my image? How do I make it the proper size for viewing, for printing, or for the video monitor? All of this is really quite easy, but digital may just be a new concept. It is like learning to drive: once you learn an easy thing or two, it's a skill helpful for life. When you know, you will simply just know. Yes, it does seem that we could subtitle this "Details that no beginner wants to know." However, the point is: you'll never grasp digital images until you get it, until you know what digital images are, what to do with them, and how to do it. And it is very easy, if you will accept that the pixel is all there is.
Seriously, once we accept that pixels actually exist, then all this stuff is rather easy. It's all about pixels.
We just gotta know about pixels, and if any mystery remains, a very short primer is here: What is a Digital Image Anyway? But the detail is below.
This page tries to be a quick summary of the digital concepts, about how things work. The answer to virtually any question about image size starts with one of these basics. To be able to use digital images well, we need this understanding. This may perhaps be written a little like an argument, refuting the dumb incorrect myths we may have heard about how digital works. The concepts below are instead what you need to know to use digital images properly. It is actually rather easy to grasp, if you get started right.
Color images are commonly RGB data (three values per pixel, representing Red, Green, and Blue), but there are also other ways of encoding the image data: CMYK, grayscale, line art, indexed color (see more detail about bits). And there are different methods of compressing the data in the file, JPEG for one example.
This was a 4288x2848 (3:2) 12-megapixel Nikon image, but in portrait orientation. Then it was cropped to 2502x3127 pixels (4:5, 7.8 megapixels), and then resampled to 400x500 pixels (0.2 megapixels). We tend to think of these numbers as the "resolution" of the image, but at full size, it is the digital reproduction resolution of the image from the lens (the lens resolution is the limit). The pixels do indicate the "fineness" of the smallest possible digital detail, which is a pixel (and a pixel is just a dot of one color). This example is to show the idea about pixels.
This enlargement might look fuzzy, but actually each pixel is sharp. 😊 Do realize the original 2502x3127 pixel (7.8 megapixels) image would look great printed 8.3×10.4 inches at 300 dpi, but this 12x size would print 8.3×10.4 feet at 25 dpi. You would stand well back to view that. The trick is to have enough pixels for your enlargement size.
The rule of thumb for printing high quality photos is to have 300 pixels for every inch of print dimension. The 300 dpi need not be precisely exact, but for prints that will be viewed closely or hand-held, best quality is to stay within about 240 to 360 dpi, with 300 dpi a good target. Specifically, this means 1200x1800 pixels printed at 300 dpi is 4x6 inches on paper.
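That rule of thumb is just multiplication: pixels needed = print inches × target dpi. A quick Python sketch (the function name is mine, purely for illustration):

```python
# Rule of thumb: pixels needed = print inches x target dpi.
# 300 dpi is the usual goal for photo prints viewed closely.

def pixels_for_print(width_inches, height_inches, dpi=300):
    """Pixel dimensions needed to print a paper size at a chosen dpi."""
    return (round(width_inches * dpi), round(height_inches * dpi))

print(pixels_for_print(4, 6))        # 4x6 inch print at 300 dpi -> (1200, 1800)
print(pixels_for_print(8, 10, 250))  # 8x10 inch print at a 250 dpi goal -> (2000, 2500)
```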
FWIW, some crime movies have shown how greatly enlarging a newspaper picture can reveal new details with clues about whodunit. But that enlargement is just movie fiction, not at all real life. Enlarging the original film or a straight-out-of-the-camera digital image might show more detail than otherwise seen, but when a digital image reaches its pixel size limit, all it shows is the pixels (as above). And don't expect enlarging a printed image or a video screen image to go any further.
A digital image is composed entirely of pixels. The main concept of digital images is that each pixel is just NUMBERS: a pixel is numeric data describing ONE RGB COLOR of one tiny dot at its specific location in the image. The color in that tiny pixel's area was sampled by the camera or scanner sensor. The color of each of millions of tiny dots is so described (a 6000x4000 pixel image is a grid of 24 million pixel locations).
A pixel represents the color of a tiny dot with numbers, but conceptually it is not unlike a small piece of colored tile in a mosaic tile picture (awesome). The numeric concept may be relatively new today (called digital), but the concept of producing a picture in decorative tile is a few thousand years old. Tile and film both capture tiny samples of the actual color, but digital represents the colors with numbers, necessary because numbers can be written into the image file, but actual colors cannot. And then the pixel that describes a pink color is shown as that color, which has a similar effect as a small piece of tile of the same shade of pink. Our brain recognizes the reproduced image pattern in those pixels or tiles. But enlarge these enough, and all you will see is the pixels or the individual tiles. Pixels are all there is in a digital image, and we must think of it that way. You will never grasp the digital concept while ignoring pixels; digital will make sense when you do think of pixels.
FWIW, a digital pixel in an image file does NOT have a size, at least not until it is displayed, and it has a different size in different print sizes. A pixel is just numbers representing the color of a tiny dot location. A pixel's width, once displayed, is (the final image width in mm) / (the image width in pixels). Yes, the camera sensor may have had about 4000 pixels per inch on it, but a pixel's actual final use depends on how we show it, enlarged (for example) at 100 pixels per inch on the video screen, or at 300 pixels per inch on the print paper. This "scaling" (using a different dpi number, spacing the pixels differently) is how enlargement is accomplished. (The sensor's 4000 pixels might print so that 400 dpi is 10 inches, or 100 dpi is 40 inches.) Our digital image does have a size in pixels, perhaps 6000x4000 pixels of data, but we can view that image file at any reasonable size.
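To illustrate the point that a pixel only gets a physical size when displayed, here is the division as a small Python sketch (the function name is mine):

```python
# A pixel in the file has no size; once displayed or printed,
# one pixel's width = (output width) / (image width in pixels).

def pixel_pitch_mm(output_width_mm, image_width_px):
    """Physical width of one pixel at a chosen display or print size."""
    return output_width_mm / image_width_px

# The same 4000-pixel-wide image, scaled two ways:
print(pixel_pitch_mm(254, 4000))   # printed 10 inches (254 mm) wide -> 0.0635 mm per pixel
print(pixel_pitch_mm(1016, 4000))  # printed 40 inches (1016 mm) wide -> 0.254 mm per pixel
```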
Pixels are how a digital image reproduces a scene and its colors. The camera lens creates an image. Then, to reproduce that image digitally (numerically), the digital camera sensor merely takes millions of color samples (each pixel is a tiny sample of color), one at every tiny area of the image, in the way shown above. In contrast, film (which is NOT digital) uses tiny specks of silver or emulsion dyes instead of pixels. That is not digital numbers, but film does use a similar sampling idea (actual colors of every tiny area). Film areas actually show the actual color, which we can directly see. However, digital images are totally about pixels, which are simply numbers representing the color at a spot location.
A pixel is three numbers, for the red, green, and blue primary components of the color. For example, the reddest orchids in the picture above have components of about RGB(220, 6, 136), each on a scale of 0..255: red is bright, green is very weak (near-black 0), and blue is about mid-range. A pixel can describe that shade of bluish red of one tiny area. We don't have to know much detail, but see Understanding RGB color.
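In Python terms (using that same orchid color as the example), a pixel is literally just three numbers:

```python
# One pixel is just three numbers on a 0..255 scale (24-bit RGB).
orchid = (220, 6, 136)
red, green, blue = orchid

print(red, green, blue)  # 220 6 136: red bright, green near zero, blue mid-range

# The same color in the common hex notation used on the web:
print("#{:02X}{:02X}{:02X}".format(*orchid))  # -> #DC0688
```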
The "photo detail" that we perceive in a digitally reproduced picture is entirely due to the color differences in the pixels. A pixel is simply a color description. Color is the detail. Pixels show the colored detail. The image detail is shown by the color differences. The colored pixels are all there is in a digital picture.
Pixels are real, they exist, in fact, pixels are ALL that does exist in digital images. There is nothing else in a digital image. We don't need to see each pixel individually, but the image Size dimension in Pixels is the First Thing To Know about using any digital image, because this size in pixels is what is important for any use. The size of a digital image is dimensioned in pixels, for example 6000x4000 pixels is the size of some one image.
FWIW, we see some fanciful things in movies, where tremendously enlarging photo prints provides clues to solve crimes. Resolution decreases as size increases, so it really does not work to any great degree. Enlarging the original film is by far the best chance; enlarging prints is poor, and enlarging newspaper images is totally hopeless. Enlarging the data of a large digital image does show much detail not seen in a small print (see that in images here), but enlarging digital data excessively only shows pixels.
Human eyes have rods and cones, which are a similar sampling system of tiny areas. Cones are color sensitive; our eyes have red, green, and blue sensitive cones. Sampling the color of tiny areas is not unlike pixels in that way. The color difference of adjacent areas is how image detail is perceived. We can see a black power wire running across a blue sky because the colors are different. Color difference is the detail that we perceive (including the slightest differing tonal shades of the same color). In our digital pictures, a pixel is the smallest dot of color that can be reproduced, so we do think of more and smaller pixels as greater resolution of detail.
However, digital reproduction is a "copy" of a lens image. We should also realize that it is the camera lens that creates the image we will reproduce digitally; pixels are the detail of reproducing the lens image. For example, in an APS-C cropped-sensor camera, the original is the image from the lens projected onto the 24x16 mm APS-C sensor. The image has this 24x16 mm size there, comparable to the size of an APS-C film image. The camera pixels merely digitally sample that lens image (very much like any scanner samples an image, taking many color samples called pixels) to reproduce (convert to numbers) the image that the lens created. A pixel is just numbers, three binary RGB numbers representing the red, green, and blue components of the color of that pixel's area. The pixels do NOT create the image, and cannot improve the lens image detail; the pixel sampling merely strives to reproduce it. At best, it can hopefully be a very good reproduction. A 24-megapixel cropped APS-C image and a 24-megapixel full-frame image are NOT equal, because the full-frame image is simply half again larger (36x24 mm), and so does not have to be enlarged as much to be shown.
Any given digital image file can have one of several sizes in bytes due to variable compression. The paper print can have various sizes in inches or cm due to variable scaling when printing. But the actual image has only one size in pixels, which defines how it might be used. The image size can be changed by resampling.
A pixel is just numbers that represent one color, specifically, the three numbers of a RGB color specification — which represent the average color that was sampled from this tiny dot of image area. When the image is viewed or printed, each little dot of image area is shown to be that corresponding color. In that way, digital images are a little like mosaic tile pictures (but in an ordered grid pattern). Each little dot is one color, and our human brain puts them together to recognize an image in all those colored dots. If it is an ordinary standard 24-bit RGB image (like JPG), the pixel data is one byte for each of the Red, Green, Blue components of the pixel, which is three bytes per pixel. So if 12 megapixels, then x3 is 36 million bytes of data (assuming the standard 24 bits). That is simply the actual data size of any 12 megapixel RGB image data, however you will see it compressed much smaller while it is in a JPG file (JPG file size is much smaller than the image data size, via JPG compression). But when that file is opened, it is full size again in computer memory, three bytes per pixel (24 bits). For other than 24-bit, and for the special interpretation of "megabytes", see more detail, and also for a calculator to convert bytes, KB, MB, GB, and TB.
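That "three bytes per pixel" arithmetic is easy to sketch in Python (the function name is mine, just for illustration):

```python
# Uncompressed 24-bit RGB data size is simply 3 bytes per pixel.

def rgb_data_bytes(width_px, height_px, bytes_per_pixel=3):
    """In-memory data size of an uncompressed RGB image."""
    return width_px * height_px * bytes_per_pixel

size = rgb_data_bytes(4000, 3000)      # a 12-megapixel image
print(size)                            # 36,000,000 bytes of data
print(round(size / (1024 * 1024), 1))  # about 34.3 "binary" megabytes
```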
The size of that image data when opened in memory is in bytes. A 24-bit RGB image (8 bits per color channel) is always three bytes of RGB data per pixel. So bytes is the "data size", but "image size" is always in pixels. Inches only refer to the paper where these pixels will be printed.
Data compression is frequently used to reduce file size. Data compression does not change the number of pixels, but does reduce the data into fewer bytes while stored in the file. There are two types: lossless compression, which never changes the pixel data values, and lossy compression, which can modify the pixel data values (we don't get exactly the expected values back out of the file). JPG files use lossy compression, but many other file types use lossless compression methods, including common PNG, GIF, and almost all TIF file choices. Lossless is more conservative and a larger file, where lossy is more aggressive and a smaller file (with possible losses of quality).
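The lossless idea can be demonstrated with Python's standard zlib module, which implements the same Deflate family of compression used inside PNG files (the "pixel data" here is just made-up repetitive bytes for the demo):

```python
import zlib

# Fake "pixel data": repetitive bytes, which compress well.
pixels = bytes(range(256)) * 1000        # 256,000 bytes

packed = zlib.compress(pixels, 9)        # lossless Deflate (as used in PNG)
restored = zlib.decompress(packed)

print(len(pixels), len(packed))          # the stored copy is much smaller...
print(restored == pixels)                # ...but decompresses bit-identical: True
```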
JPG is lossy compression. A JPG file is compressed to be maybe 1/10 of this data size (roughly 1/4 to 1/20, as the choice is quite variable) while in the JPG file, but 12 megapixels opens again as 12 megapixels, three bytes per pixel, in memory. Lossy compression means the pixels might not always have exactly the original color values (pixels are just numbers describing color, and image detail is just color differences). But we can specify High JPG Quality for a larger, better file, or Low JPG Quality for a smaller, worse file (when file size is possibly more important than image quality). High Quality JPG should be "good enough", with any changes minor, not actually noticeable to us, so very adequate for viewing or printing. But do not skimp on JPG Quality (specify a greater JPG Quality factor).
And a second choice: Don't re-edit any JPG copy additional times. If subsequent plans require yet another edit or resized image, don't start from that previously edited JPG file (JPG lossy compression means it already has two sets of JPG artifacts in it, from the camera and then the first edit), so a third or fourth save won't help it. Treat a JPG copy as expendable (discard it when done with it), and START OVER from the archived unmodified original file. Each SAVE operation to a JPG file does the JPG compression again, on top of any previous saves as JPG. Using RAW images makes this plan automatic, easy, and fail-safe. Or, if the first edit was extensive work (more than just simple), you could think ahead and also save that work into a lossless file (TIF LZW or 24-bit PNG, which are lossless and will not add additional JPG artifacts), keep that file as an archive and master version, and make any subsequent JPG copy from it. This TIF save will not remove any existing JPG artifacts in the image data, but it will not add more.
You might reason that these first edits are important, that any use would want them, so overwriting the original file is a good plan. I've been there and done that, but won't again, because it adds additional JPG artifacts, and my future plans could change too. A High Quality JPG save does not seem to hurt, but eventually you may discover that your most important image has suffered damage (there are a few possible causes), and then it is too late. Each Save as JPG is one more cumulative compression, adding additional JPG losses each time, and the only way to prevent that is to not do it, and instead go back to the unmodified original file, if you still have it. If the image has importance, plan to have it, and keep it safe. I've ruined a few images that way in the past that I wish I had back now. The best insurance is to preserve your original image (and keep a backup too, on a different disk drive).
So do plan ahead, there is no going back. The more important the image, the more you need to think this out. Don't mess up your only original image. After you have "been there, done that", this idea will become very important to you. One advantage of using Raw files is that it makes this step mandatory but very easy (lossless edits, but Raw also has other bigger advantages).
Again, image size on a monitor screen is still dimensioned in pixels (print paper is dimensioned in inches or mm, but screens are dimensioned in pixels). If the image size is larger than the screen size, we normally are shown a temporary resampled smaller copy of more suitable smaller size.
Continuing now with the list of Essential Basics to Know to USE images. This is the part that confuses people the most (about dpi), but it is pretty simple, and this should clarify. The basics continue here, with the very important basic info about how things actually work on screen and paper.
This is a very big deal. Printers print on paper which is dimensioned in inches, but video screens are instead dimensioned in pixels (there is no concept of inches in video systems). This difference gets our attention. These devices do NOT work alike. They both show the same pixels in their way, but the basic concepts are quite different. Printers space the pixels on paper, at perhaps 300 pixels per inch of paper. Video monitor screens show the image pixels directly, one for one on the monitor pixels.
When I say Video, I don't just mean movies; I mean anything on the monitor viewing screen, computer or TV. The video screen size is dimensioned in pixels, the image is dimensioned in pixels, and the pixels are simply shown directly, without any concept of dpi. The video screen shows image pixels one for one: one image pixel on one monitor pixel. So for example, an image 800 pixels wide will fill exactly half the width of a 1600 pixel screen width (but we do have ways to change its viewed size). People telling you the image needs to be 72 dpi for the screen or web are simply wrong. Video shows pixels, with no concept of inches or dpi. On video screens, it does not matter at all what the image dpi number is; the screen shows pixels directly. What matters on the video screen is how large the image size is in pixels. If we show a 300x200 pixel image on the screen, it will be shown in 300x200 pixels of that screen. Video shows pixels.
When we show a too-big image (larger than our viewing screen or window, everything dimensioned in pixels), our viewing software normally instead shows us a temporary, quickly resampled copy, small enough to fit on the screen so we can see it, perhaps 1/4 actual size (this fraction is normally indicated to us in photo editors, so we know it is not the full size real data). We still see the pixels of that smaller image presented directly, one for one, on the screen, which is the only way video can show images. When we edit it, we change the corresponding pixels in the original large image data, but we still see a new smaller resampled copy of those changes.
When we show a too-small image on the screen, it is simply too small. The LCD screen construction may be, for example, 1920 pixels in 20 inches of width, which computes to 1920/20 = 96 pixels per inch. But scanning at 96 dpi for this screen is simply the wrong concept. If we scanned a 35 mm slide (36x24 mm) at 96 dpi, it creates an image about 136x91 pixels in size, and we would see it ON THIS SCREEN at the original size of the slide. Screens vary greatly in size, from cell phones to wall TVs, so this image is always 136x91 pixels on any screen, but we only see original slide size on this one screen that we measured to be 96 dpi. Seeing "actual original size" can only work on some one specific matching screen. This 136x91 pixel image is thumbnail size on that screen. Scanning at an appropriately higher resolution enlarges the image for bigger uses.
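The slide arithmetic above (mm size at a chosen scanning dpi) is just a unit conversion; a Python sketch (names are mine, for illustration):

```python
MM_PER_INCH = 25.4

def scanned_pixels(width_mm, height_mm, dpi):
    """Pixels produced by scanning an original of a given mm size at a given dpi."""
    return (round(width_mm / MM_PER_INCH * dpi),
            round(height_mm / MM_PER_INCH * dpi))

print(scanned_pixels(36, 24, 96))    # 35 mm slide at 96 dpi -> tiny (136, 91)
print(scanned_pixels(36, 24, 2700))  # at 2700 dpi -> large enough to enlarge
```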
Dpi and inches are unknown concepts (not used) in video systems, or in digital cameras. But printing does use dpi and inches, as does the print paper.
I hope you're not thinking this is non-critical detail, because believe me, it's the most important thing you can learn about how to use digital images.
Scaling: If you print a 1200 pixel image at 300 dpi, it will cover 1200/300 = 4 inches of paper. If you print it at 200 dpi, it will cover 1200/200 = 6 inches of paper. Or if you scale it to print 5 inches of paper, it will print at 1200/5 = 240 dpi. This is how printing works. Normally we prefer to print photos at from 240 dpi to 300 dpi.
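The two directions of that scaling arithmetic, as a Python sketch (function names are mine):

```python
def print_inches(pixels, dpi):
    """Inches of paper covered when pixels are spaced at a given dpi."""
    return pixels / dpi

def scaled_dpi(pixels, inches):
    """The dpi that results from fitting pixels into a chosen paper size."""
    return pixels / inches

print(print_inches(1200, 300))  # 4.0 inches of paper
print(print_inches(1200, 200))  # 6.0 inches of paper
print(scaled_dpi(1200, 5))      # scale to 5 inches -> 240.0 dpi
```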
The dpi value shown in fresh camera images is just meaningless clutter, merely an arbitrary number which has not affected the pixels in the image file in any way. Dpi is only for printing, or for scanning. The scanner does assign the scaled dpi number you choose when scanning, so that number has meaning then: it will print that same original size. But the camera just assigns some arbitrary dpi number to the image file (the print size it implies might be several feet). The camera has no clue what size you might choose to print it later, if you even decide to print it. Otherwise, it simply does not matter what this dpi number is; it has no use until the time you actually print on paper, when you will decide an appropriate value (see Scaling below).
There is no concept of inches or dpi used in the video system. It doesn't matter if the monitor is a 6 inch cell phone screen or a 72 inch HDTV screen; if it is set to show 1920x1080 pixels, it will show 1920x1080 pixels (about 2 megapixels). Both monitor sizes show the SAME 1920x1080 image pixels, just at different sizes on the two physical screens. You might think you are showing your image to be, say, 8 inches wide on your computer monitor, but it will show a different size on some other monitor of different size or different resolution setting. Inches have no meaning in video, other than perhaps what you're used to on your own monitor. In our photo editor, we would see whatever size the image actually is (in pixels), but large images are normally resampled to show a copy that fits on the screen. We don't all see the same size in video; it depends on your screen size (both pixels and inches). Especially for web images, the site has no clue what size monitor might view it. Yes, all of our 8x10 inch paper is the same size, but there is no concept of inches or dpi in any video system. Video shows pixels, directly. Really pretty simple (but different).
Some image formats intended for the screen intentionally contain no dpi information (because video simply has no use for inches or dpi). — Some examples are:
Adobe shows us 72 dpi when the dpi value is empty (blank) in the image properties. When we ask to see properties, the photo program has a problem with an empty property field (it wants to show printed inches), so it makes something up to tell us. The dummy 72 dpi value we see is just something to fill the missing field. This 72 dpi value only means "the dpi value is empty, undefined"; the way Adobe says "blank, no data" is "72 dpi". The Windows default (like the Paint program) will show it as 72 unless Large Fonts are active, which then shows 96 dpi (while Adobe still says 72). But the image dpi data is likely blank if you see those numbers.
So yes, when dpi is blank in these images, the Adobe Elements or Adobe Photoshop Save for Web menu might appear to scale these images to 72 dpi. Save one For Web, then check it, and the image properties do appear to say 72 dpi. However, don't be fooled; it doesn't actually do that. Instead, the Save for Web menu simply removes the JPG EXIF data (containing the dpi field) from the JPG image file, to save a few bytes of file size for the web. Video monitors have no use for dpi, so the dpi information is omitted, discarded (however, the File - Save As - JPG menu does save dpi, if any). The web purpose has no use for dpi, because the video system will always ignore it; retaining it would just be wasted bytes in the file.
Note that other programs may tell us different values, for the SAME image file (we are still speaking about image files which contain no dpi information, for example images from the Save For Web menu which have no dpi number stored — technically, no Exif which would have contained the dpi info).
Today's digital cameras with many megapixels make up and store a dpi number in the JPG (typically 180 or 300 dpi), because if Adobe showed 72 dpi for it, the print size would be dimensioned as several feet. This made-up number has no meaning; the camera does not know how you will print it. You will fix it when ready to print.
If the image dimension is 3000 pixels, and if printed at 300 pixels per inch, the image dimension will cover 3000/300 = 10 inches on paper. The image contains pixels, but all of the inches are on the sheet of paper. Within a reasonable small range, we can print different sizes by just spacing the same pixels differently (or for a larger range, we could resample the pixels to be a different image size). The only purpose of the dpi number is to space the pixels, pixels per inch, on paper. We can change this dpi number at will, to print different sizes on paper, without changing any pixel at all (called scaling).
3000 pixels / 400 dpi = 7.5 inches of paper
3000 pixels / 300 dpi = 10 inches of paper
3000 pixels / 250 dpi = 12 inches of paper
3000 pixels / 200 dpi = 15 inches of paper
Or the other way, 3000 pixels / 11 inches = 272 dpi (scaling, next below)
If you print the image at home, from the image editor File - Print menu, the computer will use the dpi value in the file to compute the size of the image on paper. If it is 2000 pixels and says 180 dpi, it will try to print 2000/180 = 11.1 inches size. This is the only use for dpi in camera files (printing). Some print menus offer a way you can scale the size first however, to print a different size. If you scale this image to print 10 inches (to fit the paper), then it will scale to print at 2000/10 = 200 dpi. If you want 10 inches at 300 dpi, then you need to provide 10 inches x 300 dpi = 3000 pixels, scaled to print 300 dpi.
However, if you upload the image file to be printed somewhere online, they don't ask about dpi, nor look at your file's dpi value. They only ask what paper size to print the pixels that you provided; they will scale it for you. If you upload a dimension of say 2000 pixels, and ask them to print it 10 inches, you will necessarily get a 2000 pixels / 10 inches = 200 dpi result. Most online printers have 250 dpi capability, which is a good upload goal (provide enough pixels to print at 250 dpi). And they surely have a minimum dpi they will accept (probably 100 to 150 dpi). But there is no point in uploading far more pixels than they can possibly print.
Printer machines simply are not designed to reproduce tonal image pixels at more than about 250 to 300 dpi (meaning color or grayscale; line art can go higher). Our eyes could not benefit from more if they did, so they don't. Wishful thinking will not make it so. If you upload 24 megapixels for a 4x6 inch print, they will resample it to about 2 megapixels to be able to print 4x6. You should have done that first, but they are well equipped to handle it for you. It is not a choice; it is a requirement.
Scaling is adjusting the value of the dpi number itself in order to fit the image pixels to the paper size, for printing.
Word definition: A scale is a graduated measurement, like a map scale, and scaling is creating a proportionate size or extent, in this case of pixel distribution relative to the paper dimension. Scaling is computing that 3000 pixels printed at 300 pixels per inch will scale to cover 3000/300 = 10 inches of paper. Or scaling to 200 dpi size, 3000 pixels / 200 dpi = 15 inches of paper. The dpi number scales the pixel size so the overall image dimension fits the paper (more specifically, dpi scales the image size into inches, for paper, like in a book.)
So in any existing image, the only purpose of dpi is about scaling the image size on paper, pixels per inch. And that numerical dpi result should also be an acceptable printing resolution for good quality. Just saying, printing at 100 dpi will be pretty poor (but 3000 pixels will print 30 inches then). Also excessively high values like 500 dpi will be pointless, just wishful thinking (but 3000 pixels will print 6 inches then). Printer capabilities are such that we can expect best results around 250 to 300 dpi, so we supply sufficient pixels to print the size we want, for example, 2500 to 3000 pixels for 10 inches. See how easy this is?
There are two different uses of the term dpi in printing: image resolution, or printing quality? Is it dpi or ppi? In an attempt to clarify possible confusion, we should know that the printing menus of ink jet printers have sometimes used the term dpi in their own other definition, about quality of printing (how they dither several ink dots of color into one pixel space). This alternate use of dpi refers to ink drops per inch, which is basically the possible spacing of the print head ink dots (which actually involves carriage stepping). A typical ink jet printer only has its four colors of CMYK ink (Cyan, Magenta, Yellow, and Black) to print pixels of, say, green color. It has no green ink (it has only four of the 16.7 million possible color shades). So for each pixel, it must print several dots of the ink colors it does have, dithered to simulate the green that it needs to print (Cyan and Yellow ink make Green, mixed for tint, and adding complement Magenta and also Black affects the dark tone of it).
This use of dpi (for ink drops) is NOT about image resolution; instead, more and smaller ink dots is higher quality color simulation, and fewer ink dots is lower quality (of the color of each one pixel). Printing ink drops at say 4800 dpi is making many very small dots of different ink colors to simulate the color of one pixel, which ideally should be constrained within the paper area of one pixel, like 1/300 inch. So ink jet printers are concerned with print quality in ink drops per inch. And then the spacing of the pixels on the media (pixels per inch, also called dpi) is the image resolution, the degree of small detail that the image can show.
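The reason cyan and yellow ink together make green shows up in the standard naive RGB-to-CMYK conversion formula. Real printer drivers use calibrated color profiles, so this Python sketch is only the textbook idea, not any actual driver's method:

```python
def rgb_to_cmyk(r, g, b):
    """Naive textbook RGB -> CMYK conversion, on 0..255 input."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)                  # black component: how dark the color is
    if k == 1:
        return (0.0, 0.0, 0.0, 1.0)       # pure black: black ink only
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (round(c, 3), round(m, 3), round(y, 3), round(k, 3))

# Pure green needs full cyan and full yellow, no magenta or black:
print(rgb_to_cmyk(0, 255, 0))   # -> (1.0, 0.0, 1.0, 0.0)
```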
So there is some controversy over the term dpi. We hear some imagining the rules were changed so that any other use of the term dpi (except ink drops per inch) is outlawed now, and that we must only call image resolution to be ppi (pixels per inch), instead of the dpi it was always called. There's nothing wrong with the term ppi, it also means pixels per inch, which is what it is... but that's not the way we went, years ago. And it still works, so not everyone agrees there is any need to change anything. Pixels are indeed another form of a color dot, and those dots per inch represent image resolution.
So ink jet printers are phasing out their use of the pirated term dpi for ink drop density, and the printer's quality selection menu choice is now normally stated as Good, Better, Best (HP), or Fast, Standard, High (Canon), or Draft, Standard, High (Epson), all regarding choosing printing quality. That's much clearer anyway; not confusing us with X dpi ink drop density any more seems good to me, and print quality is what it is.
The important point to be made here is that beginners will read and hear some saying dpi and some saying ppi, both meaning image resolution, so they definitely need to understand it either way. It's no big deal: if it's about image resolution, dpi can only mean pixels per inch (pixels spaced on media for viewing). If it's about ink jet printer quality settings, dpi can only mean ink drop spacing (multiple ink dots to simulate the color of one pixel). Both pixels and ink drops are called "dots" of color. But there is less conflict today.
FWIW, my own experience was learning to say dpi for "pixels per inch" of image resolution, so dpi is my natural thought, and my choice. I am aware that nowadays some instead prefer to say ppi for the same thing, which is perfectly valid, but I am also aware it has always been called dpi. Use either term you prefer, but we all definitely must understand it either way, because the world is full of both ways. If interested or confused about the term dpi, see more details here.
Our usual goal is to print photo images at about 250 to 300 dpi. This is the capability of the printers (designed for the capability of our eye to see it). 250 to 300 dpi is good for our printers at home, and also good for printing services such as Shutterfly.com, Mpix.com, Snapfish.com, Walmart, etc. We adjust for the paper size by Scaling the image (setting the dpi number value to print that size). Or, if the image is much too large, we Resample it to be smaller, so that we can scale to around 300 dpi. We also need to crop it to the same shape as the paper. See Resize Images about Cropping and Scaling and Resampling, to fit and print the image.
If we print the image on our home printer, by selecting menu File - Print, the printer will honor the dpi number specified in the file, and will print the pixels at the size (inches) determined by the pixel dimensions and the specified pixels per inch number.
(Pixel dimension) / (paper dimension inches) = pixels / inches = pixels per inch
If we send the image out somewhere to be printed, and specify "print this 5x7 inches", they will. They will necessarily ignore our dpi number, and will rescale the image to the necessary dpi number to print the requested 5x7 inches (to cover the 5x7 inches with the provided pixels). The printer machine only has capability in the 250 to 300 dpi range. If their scaled dpi number comes out higher than 250 or 300 dpi, it won't hurt, but it cannot improve the quality. You can upload your 12 megapixel images to them, but if printing 6x4 inches, then about 1500x1000 pixels is all that can help (250 dpi). I am being ambiguous about 250 vs 300 dpi, normally it won't matter much which we use (we are at printer limits), but both will print slightly better than 200 dpi.
Sufficient pixels to print at 250 to 300 dpi is optimum for printing photo images. More pixels really cannot help the printer (for photos), but substantially fewer pixels is detrimental to quality. This is very simple, but it is essential to know and keep track of. This simple little calculation will show the image size needed for optimum photo printing. It is one thing you really need to know, and it should be second nature, considered when printing any image.
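That little calculation is just multiplication and division, which might be sketched like this (the function names are my own, just for illustration):

```python
# A sketch of the pixels-vs-inches arithmetic described above.
# These function names are assumptions of mine, not from any photo program.

def pixels_needed(inches, dpi=300):
    """Pixels required to print the given paper dimension at the given dpi."""
    return round(inches * dpi)

def printed_dpi(pixels, inches):
    """Resulting dpi when a pixel dimension is scaled to a paper dimension."""
    return pixels / inches

# An 8x10 inch print at 300 dpi wants a 2400x3000 pixel image:
print(pixels_needed(8), pixels_needed(10))   # 2400 3000

# 3000 pixels printed at 10 inches is spaced 300 pixels per inch:
print(printed_dpi(3000, 10))                 # 300.0
```

The answer does not need to be exact, but it quickly shows whether the image is in the right ballpark for the paper size.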
This little calculator has these purposes: (or there's another fancier calculator)
Call it dpi or ppi as you prefer, but the idea is that this resolution is the spacing of the pixels on paper, pixels per inch.
It's important to realize that an area scanned at 300 dpi will create the pixels necessary to also print the same size at 300 dpi. The concept either way is pixels per inch.
Or for example, you could scan at 150 dpi and print at 300 dpi for a half size copy.
Or you could scan at 600 dpi and print at 300 dpi for a double size copy.
The concept either way is pixels per inch, in the scanner and in the printer.
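The scan-vs-print ratio above is just a scale factor, which a couple of lines can sketch (the function name is my own, for illustration):

```python
def copy_scale(scan_dpi, print_dpi):
    """Size of the printed copy relative to the scanned original."""
    return scan_dpi / print_dpi

print(copy_scale(150, 300))   # 0.5  (half size copy)
print(copy_scale(600, 300))   # 2.0  (double size copy)
print(copy_scale(300, 300))   # 1.0  (same size copy)
```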
But NOT on monitor video screens. Images are shown on the video screen at their actual size in pixels. Image pixels are shown one for one on the screen pixels, so to speak. There are no inches or mm inside video monitors. You might have bought a 23 inch monitor, but its screen is dimensioned in pixels.
300 dpi is likely what you want for printing a high quality photo copy job (a line art scan of black text or line drawings can better use 600 dpi, but 300 dpi is plenty for photo work).
This dpi number does NOT need to be exact at all, but planning image size to have sufficient pixels somewhere near this ballpark (250 to 300 pixels per inch) is a very good thing for printing.
Printing dpi is dependent on the capabilities of the printing process, see a Printing Resolution Guide.
And there is a larger photo dpi calculator that knows about scanning, printing, and enlargement.
Cropping Aspect Ratio to fit the paper size is an important concern too.
Resampling changes the pixels. When the image is too large, resampling entirely replaces the image with a different smaller image, with a different count of new different pixels. Maybe resampling changes an image that is 6000 pixels wide to be only 1000 pixels wide, so it will fit on the video screen. That may be perfect for the current goal, but it is a destructive loss, meaning we cannot go back (we discarded pixels, so save this copy with a different file name — always save the original image too). Resampling is a big deal, destructive to the original. But scaling is not — we can instead change only this dpi number (called scaling) with wild abandon, back and forth, at will. Changing the stored dpi number does not change any pixel in any way. It is just a separate number, a future instruction for printing. It has absolutely no use until time of printing. Then it will control the size in inches when it prints on paper (unless it is scaled again at that later time).
However (a major point), changing this dpi number will cause absolutely no change at all on the video screen (unless resampling is also selected). Video is not concerned with dpi or inches. Video ignores any dpi number, and simply shows the pixels directly, one for one, one image pixel on one video pixel location. No matter what number the dpi says, you will never see any effect of it on the video screen, which simply just shows the pixels directly. See an example of that.
Aspect Ratio: The image itself, and the printing paper size, are commonly different shapes, causing printing problems unless handled. This is not speaking of size, but only of shape: for example, 4x6 paper is relatively long and skinny, where 4x5 paper is shorter and relatively fat. Different shapes, and we cannot print the same image on both in the same way. But 8x10 paper is the SAME SHAPE as 4x5 paper, and 8x12 paper is the SAME SHAPE as 4x6 paper — an image of the proper shape can simply be enlarged and still fit exactly. To fit our image on the paper, we crop the image shape to match the shape of our paper choice. An image has a property called Aspect Ratio (shape). This is the simple ratio of the two image dimensions. Maybe the image size is 3000x2000 pixels, so the aspect ratio is 3000:2000. We reduce this to lowest terms, and call it 3:2. It just means the two dimensions are in ratio 3 to 2, which is a shape, which can be compared to the paper shape, which normally needs to be the same shape. And 4x6 is also called 3:2 (2:3 paper is simply rotated), but 4x5 is 4:5. Different shapes. More at Aspect Ratio.
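Reducing a ratio to lowest terms just means dividing out the greatest common divisor, which might be sketched like this (my own function name, for illustration):

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce width:height to lowest terms, e.g. 3000:2000 -> 3:2."""
    g = gcd(width, height)
    return (width // g, height // g)

print(aspect_ratio(3000, 2000))   # (3, 2)
print(aspect_ratio(8, 10))        # (4, 5)  same shape as 4x5 paper
print(aspect_ratio(1920, 1080))   # (16, 9) HDTV wide screen
```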
Printing paper also has a similar shape, and the same Aspect Ratio applies. For example, 6x4 inch paper is also 3:2 aspect ratio. If we print a 3:2 image on 3:2 paper, it will fit — the shapes are the same 3:2 aspect ratio (3000x2000 pixels is quite excessive though, for 4x6 inches), and really ought to be resampled to about 1800x1200 pixels first (3:2), to about 300 pixels per inch size.
However, if we want to print this image on 8x10 paper, the paper shape is different (4:5 aspect ratio) than the image (3:2), and some of the image will be lost (cropped, outside the paper edge, off the paper — the shapes are simply different). Or we could choose to fit the tightest dimension, leaving blank white borders the other way (we hate that too). We had exactly the same issues with film, not necessarily the same shape as our paper, but digital methods are a bit different. Now, we need to do Crop and Resample and Scale when printing digital images.
Video screens also have aspect ratio. Non-widescreen monitors used to all be 4:3, and HDTV wide screen TV is 16:9. This is equally important if we are trying to fill full screen, but we are more comfortable with blank space bordering our video images, than on paper.
Scanners do use a specified dpi number (scanning resolution) to create pixels from inches on paper, for example creating 300 pixels per inch. If we scan 10 inches of paper at 300 pixels per inch, we create 3000 pixels in that dimension. If we scanned it at 600 dpi, we create a 6000 pixel dimension, which could then be printed double size at 300 dpi (scaling). The basic scanning scaling concept is:
Scan 10 inches at 300 dpi, and it creates 10x300 = 3000 pixels.
Print 3000 pixels at 300 dpi, and it prints 3000/300 = 10 inches.
See how that works out? The concept is pixels per inch. Scanning at 300 dpi automatically ensures that you will have sufficient pixels to print original size at 300 dpi. Even if not printing, scan dpi still determines the image size in pixels (created from the inches scanned).
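The scan-then-print arithmetic above could be sketched as (assumed function names, for illustration):

```python
def scanned_pixels(inches, scan_dpi):
    """Pixels created from the inches of paper scanned."""
    return inches * scan_dpi

def printed_inches(pixels, print_dpi):
    """Inches of paper covered when printing those pixels."""
    return pixels / print_dpi

px = scanned_pixels(10, 300)
print(px)                        # 3000 pixels
print(printed_inches(px, 300))   # 10.0 inches, original size again

# Scan at 600 dpi, print at 300 dpi, for a double size copy:
print(printed_inches(scanned_pixels(10, 600), 300))   # 20.0 inches
```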
Digital cameras create pixels directly, a fixed size image. If the camera sensor size is 12 megapixels, it creates a 12 megapixel image, dimensioned in pixels. There is absolutely no concept of dpi yet (no paper size is defined in the camera, inches are a very undefined concept at that point). The camera cannot possibly guess what size we might print it someday, but we will figure it out later, when we are ready to print, if we even print it. We do not care about dpi otherwise, it is an unused number until we print. However, we are always concerned with image size (pixels), which determines how we can use that image, on the video screen, or when ready to print.
The camera will stick in some arbitrary dummy dpi number, just so some believable printed size can be shown. If it didn't, Photoshop would call the blank value 72 dpi, which indicates some unreal print size in feet, so cameras do stick in a dummy dpi number, maybe 200 to 300 dpi. They don't know what size you may print it later. Camera brands vary in the dpi number they make up, but this value is a meaningless arbitrary number, confusing if we try to make any sense of it. There is NO CONCEPT of inches in the camera (just pixels). The image is dimensioned in pixels. We will change that dummy dpi number when we decide how we want to print it.
But digital basics are all the same for all images, so after image creation, then it is a digital image, dimensioned in pixels. Dpi is only used to control the size of the printed image on paper (paper has inches). Video screens are dimensioned in pixels, and video has no use for any dpi number.
Repeating: Inches only exist on the paper we print on, or the paper that we scan. Inches do not exist in the camera, in the image file, in the video system, or in computer memory. In those situations, only pixels exist. Without inches, there can be no concept of dpi. Instead, digital images are dimensioned in pixels. The single most important thing to know about digital images is their dimensions in pixels. This affects how you can use them.
The camera sensor is dimensioned in mm, but it also has dimensions in pixels. For example, a full frame 36x24 mm sensor might be divided into 6000x4000 pixels. The sensor size in mm is all important for computing Field of View or Depth of Field. And the sensor mm dimensions also affect the necessary enlargement factor to print size, but the pixel dimensions are all important for viewing the image on screen or paper.
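For a sense of that enlargement factor, here is a quick sketch (assuming the full frame 36 mm sensor width from the example, and 25.4 mm per inch):

```python
MM_PER_INCH = 25.4

def enlargement_factor(print_inches, sensor_mm):
    """How many times the sensor dimension is enlarged to reach print size."""
    return (print_inches * MM_PER_INCH) / sensor_mm

# A full frame 36 mm sensor width printed 12 inches wide (an 8x12 inch print):
print(round(enlargement_factor(12, 36), 2))   # 8.47x enlargement
```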
Data Size is its uncompressed size in bytes when the file is opened into computer memory (and image size viewed on the monitor screen is still dimensioned in pixels).
File Size is its size in bytes stored in a file (which is Not a meaningful number regarding how the image might be used. Image size is in pixels). Data compression can make the file size drastically smaller, but it is still the same image size in pixels. The degree of detail in the image content can also affect the compression degree. So saying “I have an 8 megabyte JPG file” says nothing to describe the image size or potential use. Image size is dimensioned in pixels. We might be concerned with size in bytes when storing image files, but when using and displaying them, image size in pixels is all important.
Print Size is its size in inches or mm when printed on paper (paper is dimensioned in inches or mm). The size of film is also inches or mm. Sensor size (mm) or film size (mm) is small and must be enlarged to the print or viewing size. By varying the printing resolution (pixels per inch on paper), we can print the image about any size we wish, but the quality will vary. 250 to 300 dpi are usual high quality goals.
Bytes or inches may be the size of storage or paper prints, but digital images are dimensioned in Pixels. How they can be used is about pixels.
In strong contrast to paper, monitor screen size is dimensioned in pixels, and image size is also dimensioned in pixels. The image pixels fit the screen pixels one for one, so to speak. A 600x400 image will show as 600x400 pixels on the screen, and normally will be 600 pixels wide, which may be full screen on a cell phone, or less than a third of the width of a 1920x1080 pixel wide screen monitor or HDTV. If the image size is larger than the screen size, computers normally show a temporary resampled copy of more suitable smaller size that will fit on the screen. However, print paper is dimensioned in inches or mm, so images for printing must be scaled to be spaced out as so many pixels per inch or mm (often called dpi, jargon for pixels per inch on paper). See basic differences, and more detail between using images printed or on the video screen.
The most common type of color image (such as any JPG file, but Not Raw files) is the RGB 24-bit choice. Note that uncompressed 24-bit RGB data is three bytes per pixel, regardless of image size. However many/most files are compressed into a smaller file size (same pixels, but JPG data is normally compressed to unusually small size in bytes, which can involve some quality losses). Compressed files are uncompressed again when opened into computer memory for showing (the count of pixels remains unchanged).
As examples, the JPG file from my Nikon D800 DSLR contains 23,300 bytes of Exif data (as reported by ExifTool). A Photoshop edit and save reduces that to about half size, or "Save For Web" to zero. Raw files do not report an Exif size, but it is likely about the same data as in a JPG from the same camera. A small Canon compact JPG has 12,300 bytes of Exif. An iPhone 4S JPG has 14,050 bytes, and an iPhone 5S has 12,278 bytes. I've seen Exif in TIF and PNG files created in Photoshop vary from 2 KB to 9 KB, values which seemed affected by indexed bit depth for no apparent reason (the data appeared the same, with different numbers). Perhaps allowing 12 KB or more for Exif is reasonable for cameras, and maybe 6 KB for editor files? Exif might add from 0 to 25 KB or so... which is still hardly noticeable in megabytes.
Note that uncompressed 24-bit RGB data is always three bytes per pixel, regardless of image size. For example, an uncompressed 24 megapixel 6000x4000 pixel image is 6000x4000 x 3 = 72 million bytes (megapixels x 3, every time). That is its size in computer memory when the file is opened. Fill in your own numbers, but dividing by 1048576 (or just dividing by 1024 twice) converts the units, to 68.66 megabytes here. The JPG files will vary in size, because the JPG compression degree varies with the scene detail level. If you have several dozen widely assorted JPG in a folder (all the same uncropped image size written by the same camera), and sort them by size, the largest and smallest might vary in size by a factor of about two. Smooth areas of featureless detail (sky, walls, etc.) compress significantly smaller than a scene full of highly detailed areas (like many trees or many tree leaves for example). If a JPG in this 24 megapixel example is say 12.7 MB, then (ignoring the small Exif) it is 12.7 MB / 68.66 MB = 18.5% of uncompressed size, which is 1/0.185 = 5.4 : 1 size reduction. That would be a high quality JPG. But JPG file size also varies with the degree of scene detail, so file size is not a hard answer about quality.
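That arithmetic is easy to sketch in code, using the same 24 megapixel example numbers (the function names are my own, for illustration):

```python
def data_size_bytes(width_px, height_px, bytes_per_pixel=3):
    """Uncompressed 24-bit RGB data size: three bytes per pixel."""
    return width_px * height_px * bytes_per_pixel

def to_megabytes(num_bytes):
    """Convert bytes to megabytes (divide by 1024 twice)."""
    return num_bytes / (1024 * 1024)

size = data_size_bytes(6000, 4000)
print(size)                           # 72000000 bytes
print(round(to_megabytes(size), 2))   # 68.66 MB in memory

# A 12.7 MB JPG of this image, relative to uncompressed size:
print(round(12.7 / 68.66 * 100, 1))   # 18.5 (percent of uncompressed)
print(round(68.66 / 12.7, 1))         # 5.4  (5.4 : 1 size reduction)
```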
Data size is the uncompressed data, the actual data size — how large your uncompressed image data actually is — normally 3 bytes per pixel (for 24-bit RGB, for example JPG files). Compressed File Size in bytes is the least useful number, only of interest for internet transfer or memory card capacity or disk storage. But pixels are the only important number determining how an image can be used.
Raw files cannot be viewed or printed directly. When in editor memory (or camera rear LCD display), Raw is converted to 16 bit RGB, and processed, and then typically output as 8 bit RGB for viewing or printing. JPG is always 8 bits per RGB channel, or 24 bit color.
The compressed file size will be smaller. JPG will be drastically smaller, variable with the JPG Quality setting, with the file perhaps only 10% or 20% of data size. We should favor the High JPG Quality settings; that larger file is still very small.
The camera adds Exif data to the file, and a few formatting bytes. Indexed files add a small RGB color table (called the color palette), one entry for each color still used. Camera Raw image files also contain the camera's Large JPG image (this JPG is shown on the camera rear LCD, and it provides the histogram too). Simple photo editors (not raw editors) which can "open" raw files just show this included JPG as being the Raw image.
Regarding color bit depth (Google), our monitors and printers are 8-bit devices. Many inexpensive LCD monitors have used only 6 bits internally (18-bit color). For photo work, look for the better monitors that actually specify 24-bit color (more common today). Good IPS monitors are becoming inexpensive now (I've been really pleased with a low priced Dell IPS monitor).
Repeating, to be sure it is clear that images have four very different "sizes", of interest in different situations.
If someone tells us they are sending us a 12 megabyte file, that tells us maybe the internet load, or how it will fill our disk, but bytes tells us nothing about the image, or about the image size, or about how we might use it. Bytes involve data compression, another variable. Images are dimensioned in pixels.
For example, if about a 12 megapixel image:
A few specifics about Data Size: (See formats and megabytes, or a megapixel converter). Bytes are 8-bit numbers, holding values ranging 0..255, because 2 to the power of 8 is 256, the count of values (0..255) that can be stored in 8 bits. Larger numbers require multiple bytes.
The data in JPG files especially is compressed dramatically smaller, in variable degree, typically to perhaps only 1/4 to 1/16 of Data size, but too much JPG compression can reduce image quality. The JPG file size varies widely with the JPG Quality setting. High JPG Quality is a larger file but a better image, and Low JPG Quality is a smaller file but a worse image (but who wants lower quality?) The JPG Quality number is a better quality guide than the file megabyte size. We should always favor a larger JPG file size, because smaller is counter-productive to quality. For the file to be so small, JPG is lossy compression, meaning liberties are taken, so that recovery is not perfect, and image quality can be reduced. We still get the same megapixel count back out, but the pixels you wrote into the JPG file are not necessarily quite the same (color of) pixels you see when opened to retrieve them (see JPG Artifacts). A pixel is only the color definition of a tiny spot of area, so a JPG artifact is a pattern of changed colors. Color difference is the detail we can detect and observe.
Our digital cameras have two options affecting file size.
Camera Large might be 24 megapixels (for example). However, an image for a large video monitor or HDTV, or for a 4x6 inch print, needs only about 2 megapixels. If these are our only goals, and we do want a smaller file, then for best image quality, I suggest that Small Fine is a greatly better choice than Large Basic (but Small won't print 8x10 inches as well, nor will it allow as much cropping).
Nikon DSLR manuals say their quality levels use (roughly) Fine: 1:4 size, Normal 1:8 size, and Basic 1:16 size after compression. Their files seem to be smaller, but Fine 1:4 size means that the file size is about 1/4 the size in bytes of the uncompressed image. Basic 1:16 size means the file size is only 1/16 the size of the actual image data. I guess they needed a friendly word for Basic, but it's too small for good quality. You'd think we would all know we should use the highest quality level, because there is no later recovery of image quality.
The terms "Normal" and "Basic" are arguable; compression is the opposite of best image quality, and Fine is the better "normal" default (why would we want less quality?) Lossless compression (choices other than JPG) is less effective at reducing file size, because lossless has to promise to preserve and deliver the full quality of the image (no heroic shortcuts, no quality losses). Notice that lossless compression can still be impressively small, but maybe not incredibly small. The Windows file Explorer "Properties" will show file size in MB and in bytes.
The RGB image Data size is always the X by Y pixel dimensions times 3 bytes per pixel, which is simply how large your data is (for JPG and other 24-bit images). But the compressed file size varies somewhat with the individual image content in the scene (much fine detail is larger, large blank areas compress smaller). For example, a picture of a featureless blank wall or sky will compress exceptionally small. A tight picture of a tree full of leaves has much fine detail, and won't compress so much. If you have a couple hundred camera JPG in one disk folder (if all are the same size settings from the same camera, but are of varied image content), and click to sort them by file size, the largest (most detailed) JPG is probably about 2x larger than the smallest (least detailed), with the average size more in the middle.
Make no mistake though, Image size is dimensioned in pixels. It is always all about pixels. Digital cameras create pixels. Inches are only about the specific piece of paper. Bytes are only about memory. Pixels are about the image.
Next page is about what you actually need to know to print a photo or document image.