These basics are pretty much about a single issue: How do I use my image, how do I make it be the proper size for viewing, for printing, or for the video monitor? All of this is really quite easy, but digital may just be a new concept. It is like learning to drive - once you learn an easy thing or two, it's a skill helpful for life. When you know, you will simply know. But yes, it does seem that we could subtitle this: Details that no beginner wants to know. However the point is: You'll never grasp digital images until you get it ... until you know what digital images are, what to do with them, and how to do it.
Seriously, once we accept that pixels actually exist, then all this stuff is rather easy. It's about pixels.
We just have to know about pixels, and if there is any mystery, a very short primer is here: What is a Digital Image Anyway?
This page tries to be a quick summary of the digital concepts, about how things work. The answer to virtually any question about image size starts with one of these basics. To be able to use digital images well, we need this understanding. This may read a little like an argument, refuting some incorrect myths we may have heard about how digital works. The concepts below are instead what you need to know to use digital images properly. It is actually rather easy to grasp, if you get started right.
The size of an image might be, for example, 4000x3000 pixels. That is 4000x3000 = 12 megapixels. Or, 4288x2848 is also 12 megapixels (4:3 vs 3:2 aspect ratio). We tend to think of this as the "resolution" of the image. The pixels do indicate the "fineness" of the smallest possible digital detail (a pixel, which is a dot of one color). This example is borrowed from the image Resize page, to show the idea about pixels.
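That megapixel arithmetic can be sketched in a couple of lines of Python (the helper name is just for illustration; the two pixel dimensions are the ones from the example above):

```python
# Megapixels are simply width x height in pixels, divided by one million.
def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(4000, 3000))  # 4:3 aspect ratio -> 12.0 megapixels
print(megapixels(4288, 2848))  # 3:2 aspect ratio -> about 12.2 megapixels
```

Both sensors are marketed as "12 megapixels" even though the pixel dimensions differ, because only the total count is being rounded.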
Pixels are how digital reproduces a scene and its colors. The camera lens creates an image. To reproduce that image digitally, the digital camera sensor merely takes many color samples (each is a pixel) of many very tiny areas of the image. Film uses tiny specks of silver or emulsion dyes instead of pixels, which are not digital numbers, but film does the same sampling idea (colors of many tiny areas). Film areas actually show the color, which we can see. However, digital is totally about pixels, which are simply numbers representing the color. For example, the reddest orchids above have RGB components of about RGB(220, 6, 136), each on a scale of [0..255], so red is bright, green is weak, and blue is about mid-range. This color describes that shade of bluish red in one tiny area, a pixel. We don't have to know much detail, but more is at Wikipedia about the RGB color system.
The main concept of digital is that each pixel is just NUMBERS, binary data describing ONE RGB COLOR of one tiny area, a tiny dot of color, much like one small colored tile in a mosaic tile picture. The numeric concept may be new today (called digital), but the tile concept is at least 5000 years old. A pixel that is pink has a similar effect to a small piece of tile of the same shade of pink. Our brain recognizes the reproduced image in those pixels or tiles. But enlarge these enough, and all you will see is the pixels or the individual tiles. Pixels are all there is in a digital image, and we must think of it that way. Ignoring them will not help grasp the concept. Digital will make sense when you do think of pixels.
The concept is that the "photo detail" that we perceive in a digitally reproduced picture is entirely due to the color differences in the pixels. A pixel is simply a color description. Color is the detail. Pixels show the colored detail. The detail is shown by the color differences. The colored pixels are all there is in a digital picture.
Pixels are real, they exist, in fact, pixels are ALL that exist in digital images. There is nothing else in a digital image. We don't need to see each pixel individually, but the image Size dimension in Pixels is the First Thing To Know about using any digital image, because this size in pixels is what is important for any use. The size of a digital image is dimensioned in pixels.
FWIW, we see some fanciful things in movies, where tremendously enlarging photo prints provides clues to solve crimes. The resolution decreases as the size increases, so it really does not work well to that degree (enlarging film is much better than enlarging prints). Enlarging digital excessively only shows pixels.
Human eyes have rods and cones which are a similar sampling system of tiny areas. Cones are color sensitive, with red, green or blue cones. Sampling the color of tiny areas is not unlike pixels in that way. The color difference of adjacent areas is how image detail is perceived. We see a black power wire running across a blue sky because the colors are different. Color difference is the detail that we perceive (including slightest tonal shades of same color). In our digital pictures, a pixel is the smallest dot of color that can be reproduced, so we do think of more and smaller pixels as greater resolution of detail.
However, digital reproduction is a "copy" of a lens image. We should also realize that it is the camera lens that creates the image that we will reproduce digitally, and pixels are the detail of reproducing the lens image. For example, in a DX cropped sensor camera, the original is the image from the lens projected onto the 24x16 mm DX digital camera sensor. The image has this 24x16 mm size there, comparable to the size of an APS-C film image. Then, the camera pixels merely digitally sample that lens image (very much like any scanner samples an image, meaning taking many color samples called pixels) to try to digitally reproduce (convert to numbers) the image that the lens created. A pixel is just numbers, three binary RGB numbers representing the red, green and blue components of the color of the area of that pixel. The pixels do NOT create the image, and cannot improve the lens image detail. The pixel sampling merely strives to reproduce its detail. At best, it can hopefully be a very good reproduction. A 24 megapixel DX image and a 24 megapixel FX image are NOT equal, because the FX image is simply half again larger (36x24 mm), and so does not have to be enlarged as much to show it.
Any given digital image file can have one of several sizes in bytes due to variable compression. The paper print can have various sizes in inches or cm due to variable scaling when printing. But the actual image has only one size in pixels, which defines how it might be used. The image size can be changed by resampling.
A pixel is just numbers that represent one color, specifically, the three numbers of an RGB color specification - which represent the average color that was sampled from this tiny dot of image area. When the image is viewed or printed, each little dot of image area is shown to be that corresponding color. In that way, digital images are a little like mosaic tile pictures (but in an ordered grid pattern). Each little dot is one color, and our human brain puts them together to recognize an image in all those colored dots. If it is an ordinary standard 24-bit RGB image (like JPG), the pixel data is one byte for each of the Red, Green, Blue components of the pixel, which is three bytes per pixel. So if 12 megapixels, then x3 is 36 million bytes of data (assuming the standard 24 bits). That is simply the actual data size of any 12 megapixel RGB image data, though you will see it compressed much smaller while it is in a JPG file (JPG file size is much smaller than the image data size, via JPG compression). But when that file is opened, it is full size again in computer memory, three bytes per pixel (24 bits). For other than 24-bit, and for the special interpretation of "megabytes", see more detail, and RGB color, and also for a calculator to convert bytes, KB, MB, GB, and TB.
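The uncompressed data size calculation above is simple enough to sketch in Python (the function name is just for illustration):

```python
# Uncompressed 24-bit RGB data size: 3 bytes (R, G, B) per pixel.
def rgb_data_bytes(width, height):
    return width * height * 3

print(rgb_data_bytes(4000, 3000))  # 12 megapixels -> 36,000,000 bytes
```

That 36 million bytes is what the image occupies in memory when opened, regardless of how small the JPG file on disk was.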
The size of that image data when opened in memory is in bytes of memory. 24-bit RGB images (8-bit color channels) always use three bytes of RGB data per pixel. So bytes are the "data size", but "image size" is always in pixels. Whereas inches only refer to the paper where these pixels will be printed.
A JPG file is compressed to be maybe 1/10 this data size (roughly, can be very variable), while in the JPG file, but 12 megapixels opens again to 36 million bytes in memory. JPG uses lossy compression, which means we can specify High JPG Quality for a larger better file, or Low JPG Quality for a smaller worse file (when and if file size is more important than image quality). See JPG.
This calculator tries to make the point that images involve four different sizes, used for different purposes. The numbers used to describe the actual size of the image are width x height, in pixels.
Data size is the uncompressed data, the actual data size - how large your uncompressed image data actually is - normally 3 bytes per pixel (usual RGB, for example JPG files). Compressed File Size in bytes is the least useful number, only of interest for internet transfer or memory card capacity. But pixels are the important number, which determines how an image can be used.
Raw files cannot be printed directly. When in editor memory, Raw is converted to 16 bit RGB, and processed, and then typically output as 8 bit RGB for viewing or printing. JPG is always 8 bits per RGB channel, or 24 bit color.
The compressed file size will be smaller (JPG will be much smaller, variable with JPG Quality setting, but file perhaps only 10% or 20% of data size).
Exif data may be added, and a few formatting bytes. Indexed files add a small RGB color palette table (one entry for each color). Not added into the size here, but camera Raw image files also contain the camera's Large JPG image too (this JPG is shown on the camera rear LCD, and it provides the histogram too). Simple photo editors (not Raw editors) may show this included JPG as being the Raw image.
Regarding color bit depth, our monitors and printers are 8-bit devices. Many inexpensive LCD monitors have used only 6 bits (18-bit color). For photo work, look for the better monitors that actually specify 24-bit color. Good IPS monitors are becoming inexpensive now (I've been really pleased with a low priced Dell IPS monitor).
If someone tells us they are sending us a 12 megabyte file, that tells us maybe the internet load, or how it will fill our disk, but bytes tells us nothing about the image, or about the image size, or about how we might use it. Bytes can involve data compression, another variable. Images are dimensioned in pixels.
For example, if about a 12 megapixel image:
A few specifics about Data Size: (See formats and megabytes, or a megapixel converter). Bytes are 8-bit numbers, with values ranging 0..255, because 2 to the power of 8 is 256, which is the maximum count of values (0..255) that can be stored in 8 bits. Larger numbers require multiple bytes.
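The byte arithmetic just stated, verified in Python:

```python
# One 8-bit byte can hold 2**8 = 256 distinct values, numbered 0..255.
values_per_byte = 2 ** 8
print(values_per_byte)   # 256, so one byte ranges 0..255
# Larger numbers need multiple bytes: two bytes hold 2**16 values.
print(2 ** 16)           # 65536
```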
The data in JPG files especially is dramatically compressed, in variable degree, typically to perhaps only 1/4 to 1/16 of Data size, but too much JPG compression can reduce image quality. The JPG file size varies widely with JPG Quality setting. High JPG Quality is a larger file but better image, and Low JPG Quality is a smaller file but a worse image (but who wants lower quality?) The JPG Quality number is a better quality guide than the file megabyte size. We should always favor a larger JPG file size, because smaller is counter-productive to quality. For the file to be so small, JPG is lossy compression, meaning liberties are taken, so that recovery is not perfect, and image quality can be reduced. We still get the same megapixel count back out, but the pixels you wrote into the JPG file are not necessarily quite the same (color of) pixels you see when opened to retrieve them (see JPG Artifacts). A pixel is only the color definition of a tiny spot of area, so a JPG artifact is a pattern of changed colors. Color difference is the detail we can detect and observe.
Our digital cameras have two options affecting file size.
Camera Large might be 24 megapixels (for example). However, an image for a large video monitor or HDTV, or for a 4x6 inch print, needs only about 2 megapixels. If these are our only goals, and if we do want a smaller file, then for best image quality, I suggest that Small Fine is a greatly better choice than Large Basic (but Small won't print 8x10 inches as well, nor will it allow as much cropping).
Nikon DSLR manuals say their quality levels use Fine 1:4, Normal 1:8, and Basic 1:16 compression. Fine 1:4 means that the file size is about 1/4 the size in bytes of the uncompressed image. Basic 1:16 means the file size is only 1/16 the size of the actual image data. You'd think we would all know to use the highest quality level. There is no later recovery of image quality.
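As a rough sketch of what those stated ratios mean in bytes (Python, with an assumed 24 megapixel camera as the example; actual file sizes vary with image content):

```python
# Approximate JPG file size from a stated compression ratio,
# relative to the uncompressed 3-bytes-per-pixel RGB data size.
def jpg_size(data_bytes, ratio):
    return data_bytes / ratio

data = 6000 * 4000 * 3      # 24 megapixels x 3 bytes = 72,000,000 bytes
print(jpg_size(data, 4))    # Fine  1:4  -> about 18,000,000 bytes
print(jpg_size(data, 16))   # Basic 1:16 -> about  4,500,000 bytes
```

Fine keeps four times the data of Basic, which is where the quality difference comes from.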
The terms "Normal" and "Basic" are arguable, compression is the opposite of best image quality, and Fine is the better default (why would we want less quality?) Lossless compression (choices other than JPG) is less effective to reduce file size, because lossless has to promise to preserve and deliver the full quality of the image (no heroic shortcuts, no quality losses). Notice that lossless compression can still be impressively small, but maybe not incredibly small. The Windows file Explorer "Properties" will show file size in MB and in bytes.
The RGB image Data size is always the X by Y pixel dimensions times 3 bytes per pixel, which is simply how large your data is (for JPG and other 24 bit images). But the compressed file size varies somewhat with the individual image content in the scene (much fine detail is larger, large blank areas compress smaller). For example, a picture of a featureless blank wall or sky will compress exceptionally small. A tight picture of a tree full of leaves has much fine detail, and won't compress so much. If you have a couple hundred camera JPG in one disk folder (if all are the same size settings from same camera, but are of varied image content), and click to sort them by file size, the largest (most detailed) JPG is probably about 2x larger than the smallest (least detailed), with the average size more in the middle.
Make no mistake though, Image size is dimensioned in pixels. It is always all about pixels. Digital cameras create pixels. Inches are only about the specific piece of paper. Bytes are only about memory. Pixels are about the image.
Continuing now with the list of Essentials to Know to USE images. This is the part that confuses people (about dpi), but it is pretty simple, and this should clarify.
This is a very big deal. Printers print on paper which is dimensioned in inches, but video screens are instead dimensioned in pixels (there is no concept of inches in video systems). This difference gets our attention. These devices do NOT work alike. They both show the same pixels in their way, but the basic concepts are quite different. Printers space the pixels on paper, at perhaps 300 pixels per inch of paper. Video monitor screens show the image pixels directly one for one on the monitor pixels.
When I say Video, I don't mean movies, instead I mean the monitor viewing screen, computer or TV. The video screen size is dimensioned in pixels, and the image is dimensioned in pixels, and the pixels are simply shown directly - without any concept of dpi. An LCD screen might be constructed as, say, 1920 pixels in 20 inches of width, which computes to 1920/20 = 96 pixels per inch. But the video screen simply shows image pixels one for one - one image pixel on one monitor pixel. So for example (one pixel of image on one pixel of screen), an image 800 pixels wide will fill exactly half the width of a 1600 pixel screen width. People telling you the image needs to be 72 dpi for the screen or web are simply just wrong. Video shows pixels, with no concept of inches or dpi. On video screens, it does not matter at all what the image dpi number is. The screen shows pixels directly.
When we show a big image, larger than our viewing screen (both are dimensioned in pixels), our viewing software normally instead shows us a temporary quickly resampled copy, small enough to be able to fit on the screen so we can see it, for example, perhaps 1/4 actual size (this fraction is normally indicated to us, so we know it is not the full size real data). We still see the pixels of that smaller image presented directly, one for one, on the screen, which is the only way video can show images. When we edit it, we change the corresponding pixels in the original large image data, but we still see a new smaller resampled copy of those changes.
Dpi and inches are unknown concepts (not used) in video systems, or in digital cameras.
The dpi value shown in camera images is just some clutter in the file header, merely a separate arbitrary number which has not affected the pixels in the image file in any way. Dpi is only for printing, or for scanning. The scanner does assign the scaled dpi number you choose when scanning, so that has meaning, it will print that size. But the camera just assigns some meaningless arbitrary dpi number to the image file (print size might indicate a few feet). Of course, it has no clue what size you might choose to print it later, if you even decide to print it. Otherwise, it simply does not matter what this dpi number is, it has no use, not until the time you actually print it on paper, when you will decide an appropriate value (see Scaling below).
There is no concept of inches or dpi used in the video system. It doesn't matter if the monitor is a 12 inch screen or a 72 inch HDTV screen, if it is set to show 1920x1080 pixels, it will show 1920x1080 pixels (about 2 megapixels). Both monitor sizes show the SAME 1920x1080 image pixels, just at different sizes on the two physical screens. You might think you are showing your image to be, say 8 inches wide on your computer monitor, but it probably will show a different size on some other monitor of different size or different resolution setting. In our photo editor, we would see whatever size the image actually is (in pixels), but large images are normally resampled to show a copy that fit on the screen. We don't all see the same size in video, it depends on the screen size (both pixels and inches). Especially for web images, the site has no clue what monitor might view it. Yes, all of our 8x10 inch paper is the same size, but there is no concept of inches or dpi in any video system. Video shows pixels, directly. Really pretty simple (but different).
If the image dimension is 3000 pixels, and if printed at 300 pixels per inch, the image will cover 3000/300 = 10 inches on paper. The image contains pixels, but all of the inches are on the sheet of paper. Within a reasonable small range, we can print different sizes by just spacing the same pixels differently (or for a larger range, we could resample the pixels to be a different image size). The only purpose of the dpi number is to space the pixels, pixels per inch, on paper. We can change this dpi number at will, to print different sizes on paper, without changing any pixel at all (called scaling).
3000 pixels / 400 dpi = 7.5 inches of paper
3000 pixels / 300 dpi = 10 inches of paper
3000 pixels / 250 dpi = 12 inches of paper
3000 pixels / 200 dpi = 15 inches of paper
Or the other way, 3000 pixels / 11 inches = 272 dpi (scaling, next below)
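The little table above is the same division both ways, which a short Python sketch can confirm (the function names are just for illustration):

```python
# The two directions of the same relation: inches = pixels / dpi,
# and dpi = pixels / inches.
def print_inches(pixels, dpi):
    return pixels / dpi

def scaled_dpi(pixels, inches):
    return pixels / inches

for dpi in (400, 300, 250, 200):
    print(dpi, "dpi ->", print_inches(3000, dpi), "inches")
print(int(scaled_dpi(3000, 11)), "dpi")   # 3000 pixels / 11 inches -> 272 dpi
```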
If you print the image at home, from the image editor File - Print menu, the computer will use the dpi value in the file to compute the size of the image on paper. If it is 2000 pixels and says 180 dpi, it will try to print 2000/180 = 11.1 inches size. This is the only use for dpi in camera files (printing). Some print menus offer a way you can scale the size first however, to print a different size. If you scale this image to print 10 inches (to fit the paper), then it will scale to print at 2000/10 = 200 dpi. If you want 10 inches at 300 dpi, then you need to provide 10 inches x 300 dpi = 3000 pixels, scaled to print 300 dpi.
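The three calculations in that home-printing example, written out in Python:

```python
# The same pixels / inches / dpi relation, three ways.
pixels, dpi = 2000, 180
print(round(pixels / dpi, 1))  # file says 180 dpi -> about 11.1 inches
print(pixels / 10)             # scaled to fit 10 inches -> prints at 200.0 dpi
print(10 * 300)                # 10 inches at 300 dpi would need 3000 pixels
```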
If you upload the image file to be printed somewhere, they don't ask dpi, they only ask what size to print the pixels that you provided. They will scale it for you. If you upload a dimension of say 2000 pixels, and ask them to print it 10 inches, you will necessarily get 2000 pixels / 10 inches = 200 dpi result. Most online printers have 250 dpi capability, which is a good upload goal. But there is no point in uploading way more pixels than they can possibly print.
Printer machines simply are not designed to reproduce pixels at more than about 250 to 300 dpi, and our eye could not benefit from it if they did. So they don't. Wishful thinking will not make it so. If you upload 24 megapixels for a 4x6 inch print, they will resample it to about 2 megapixels to be able to print 4x6. You should have done that first, but they are well equipped to handle it for you. It is not a choice, it is a requirement.
Scaling is adjusting the value of the dpi number itself in order to fit the image pixels to the paper size, for printing.
Word definition: A scale is a graduated measurement, like a map scale, and scaling is creating a proportionate size or extent, in this case of pixel distribution relative to the paper dimension. Scaling is computing that 3000 pixels printed at 300 pixels per inch will scale to cover 3000/300 = 10 inches of paper. Or scaling to 200 dpi size, 3000 pixels / 200 dpi = 15 inches of paper. The dpi number scales the pixel size so the overall image dimension fits the paper (more specifically, dpi scales the image size into inches, for paper, like in a book.)
So in any existing image, the only purpose of dpi is about scaling the image size on paper, pixels per inch. And of course, that numerical dpi result should also be an acceptable printing resolution for good quality. Just saying, printing at 100 dpi will be pretty poor (but 3000 pixels will print 30 inches then). Also excessively high values like 500 dpi will be pointless, just wishful thinking (but 3000 pixels will print 6 inches then). Printer capabilities are such that we can expect best results around 250 to 300 dpi, so we supply sufficient pixels to print the size we want, for example, 2500 to 3000 pixels for 10 inches. See how easy this is?
Two different uses of the term dpi in printing: Image resolution, or printing quality? Is it dpi or ppi? In an attempt to clarify possible confusion about how things are, we should know that printing menus of printers have sometimes used the term dpi in their own other definition, about quality of printing. This alternate use of dpi refers to ink drops per inch, basically the possible spacing of the print head ink dots (which involves carriage stepping actually). A typical inkjet printer only has its four colors of CMYK ink (Cyan, Magenta, Yellow, and Black) to print pixels of, say, green color. It has no green ink (has only four of the 16.7 million possible color shades). So for each pixel, it must print several dots of the ink colors it does have, dithered to simulate the green that it needs to print. It's not about image resolution, instead more ink dots is higher quality color simulation, fewer ink dots is lower quality (of the color of each one pixel). These several ink dots mean printing ink drops at say 4800 dpi is making several very small dots of different ink colors, to simulate the color of the one pixel, which should be constrained within the paper area of one pixel, like 1/300 inch. So ink jet printers are concerned with print quality of ink drops per inch. And then the spacing of the pixels on media (pixels per inch, also called dpi) is the image resolution, the degree of small detail that the image can show.
There is some controversy over the term dpi. We hear some imagining the rules were changed so that any other use of the term dpi (except ink drops per inch) is outlawed now, and that we must only call image resolution ppi (pixels per inch), instead of the dpi it was always called. There's nothing wrong with the term ppi, it also means pixels per inch, which is what it is... but that's not the way we went, years ago. And it still works, so not everyone agrees there is any need to change anything. Pixels are indeed another form of a color dot, and those dots per inch represent image resolution. Printers are phasing down their use of the pirated term dpi, and now the printer's quality selection menu choice is usually stated as Good, Better, Best (HP), or Fast, Standard, High (Canon), or maybe Draft, Standard, High (Epson), regarding choosing ink drop printing quality. Not confusing us with X dpi ink drops seems good to me, quality is what it is.
The important point to be made here is that beginners must realize they will read and hear some saying dpi and some saying ppi, both meaning image resolution, and they definitely need to understand it either way. It's no big deal: if it's about image resolution, dpi can only mean pixels per inch (pixels spaced on media, normally for viewing). If about printer quality settings, maybe less so today, but dpi has also been used to mean ink drops per inch (multiple ink dots to simulate the color of one pixel).
FWIW, my own experience learned saying dpi for "pixels per inch", so dpi is my natural thought. I am aware that nowadays, some instead prefer to say ppi for same thing, but I am also aware it has always been called dpi. Choose to use either that you prefer, but we all definitely must understand it either way. If interested or confused about the term dpi, see more details here.
Normally, our usual goal is that we try to print photo images at about 250 to 300 dpi. This is the capability of the printers (designed for the capability of our eye to see it). 250 to 300 dpi is good for our printers at home, and also good for printing services such as Shutterfly.com, Mpix.com, Snapfish.com, Walmart, etc. We adjust for the paper size by Scaling the image (setting the dpi number value to print that size). Or, if the image is much too large, we Resample it to be smaller, so that we can scale to around 300 dpi. We also need to crop it to the same shape as the paper. See Resize Images about Cropping and Scaling and Resampling, to fit and print the image.
If we print the image on our home printer, by selecting menu File - Print, the printer will honor the dpi number specified in the file, and will print the pixels at the size (inches) determined by the pixel dimensions and the specified pixels per inch number.
(Pixel dimension) / (paper dimension inches) = pixels / inches = pixels per inch
If we send the image out somewhere to be printed, and specify "print this 5x7 inches", they will. They will necessarily ignore our dpi number, and will rescale the image to the necessary dpi number to print the requested 5x7 inches (to cover the 5x7 inches with the provided pixels). The printer machine only has capability in the 250 to 300 dpi range. If their scaled dpi number comes out higher than 250 or 300 dpi, it won't hurt, but it cannot improve the quality. You can upload your 12 megapixel images to them, but if printing 6x4 inches, then about 1500x1000 pixels is all that can help (250 dpi). I am being ambiguous about 250 vs 300 dpi, normally it won't matter much which we use (we are at printer limits), but both will print slightly better than 200 dpi.
Sufficient pixels to print at 250 to 300 dpi is optimum to print photo images. More pixels really cannot help the printer, but very much less is detrimental to quality. This is very simple, but it is essential to know and keep track of. This simple little calculation will show the image size needed for optimum photo printing. This method is one thing you really need to know, it should be second nature to you, considered when printing any image.
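That "simple little calculation" (pixels needed = inches x dpi) can be sketched in Python (the helper name and the default 300 dpi are just for illustration):

```python
# Pixels needed to print a given paper size at a target print resolution.
def pixels_needed(inches_wide, inches_tall, dpi=300):
    return (round(inches_wide * dpi), round(inches_tall * dpi))

print(pixels_needed(10, 8))      # 8x10 inches at 300 dpi -> (3000, 2400)
print(pixels_needed(6, 4, 250))  # 4x6 inches at 250 dpi -> (1500, 1000)
```

The second line matches the earlier point that about 1500x1000 pixels is all that can help a 6x4 inch print at 250 dpi.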
This dpi number does NOT need to be exact at all, but planning size to have sufficient pixels to be somewhere near this size ballpark (of 250 to 300 pixels per inch) is a very good thing for printing.
However (a major point), changing this dpi number will cause absolutely no change at all on the video screen (unless resampling is also selected). Video is not concerned with dpi or inches. Video ignores any dpi number, and simply shows the pixels directly, one for one, one image pixel on one video pixel location. No matter what number the dpi says, you will never see any effect of it on the video screen, which simply just shows the pixels directly. See an example of that.
Printing paper also has a similar shape, and the same Aspect Ratio applies. For example, 6x4 inch paper is also 3:2 aspect ratio. If we print a 3:2 image on 3:2 paper, it will fit - the shapes are the same 3:2 aspect ratio (3000x2000 pixels is quite excessive though, for 4x6 inches), and really ought to be resampled to about 1800x1200 pixels first (3:2), to about 300 pixels per inch size.
However, if we want to print this image on 8x10 paper, the paper shape is different (4:5 aspect ratio) than the image (3:2), and some of the image will be lost (cropped, outside the paper edge, off the paper - the shapes are simply different). Or we could choose to fit the tightest dimension, leaving blank white borders the other way (we hate that too). We had exactly the same issues with film, not necessarily the same shape as our paper, but digital methods are a bit different. Now, we need to do Crop and Resample and Scale when printing digital images.
Video screens also have aspect ratio. Non-widescreen monitors used to all be 4:3, and HDTV wide screen TV is 16:9. This is equally important if we are trying to fill full screen, but we are more comfortable with blank space bordering our video images, than on paper.
Scan 10 inches at 300 dpi, and it creates 10x300 = 3000 pixels.
Print 3000 pixels at 300 dpi, and it prints 3000/300 = 10 inches.
See how that works out? The concept is pixels per inch. Scanning at 300 dpi automatically ensures that you will have sufficient pixels to print original size at 300 dpi. Even if not printing, scan dpi still determines the image size in pixels (created from the inches scanned).
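The round trip above is the same pixels-per-inch idea in both directions, sketched in Python (function names are just for illustration):

```python
# Scanning creates pixels from inches; printing spaces pixels back onto inches.
def scan_pixels(inches, scan_dpi):
    return inches * scan_dpi

def print_size(pixels, print_dpi):
    return pixels / print_dpi

px = scan_pixels(10, 300)    # scan 10 inches at 300 dpi -> 3000 pixels
print(print_size(px, 300))   # print at 300 dpi -> back to 10.0 inches
```

Scanning and printing at the same dpi reproduces the original size exactly, which is why "scan at 300 dpi to print at 300 dpi" is such a dependable rule.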
The camera will stick in some arbitrary dummy dpi number, just so some believable printed size can be shown. If they didn't, then Photoshop will automatically assume a blank value to be 72 dpi, which indicates some unreal print size in feet, so the cameras do stick in a dummy dpi number, maybe 200 to 300 dpi. They don't know what size you may print it later. Camera brands vary in the dpi number they make up, but this value is a meaningless arbitrary number, confusing if we try to make any sense of it. There is NO CONCEPT of inches in the camera (just pixels). The image is dimensioned in pixels. We will change that dummy dpi number when we decide how we want to print it.
Repeating - Inches only exist on the paper we print on, or the paper that we scan. Inches do not exist in the camera, in the image file, in the video system, or in computer memory. In those situations, only pixels exist. Without inches, there can be no concept of dpi. Instead, digital images are dimensioned in pixels. The single most important thing to know about digital images is their dimensions in pixels. This affects how you can use them.