Graphics Muse
Raster images are always discussed in terms of pixels, the
number of dots on the screen that make up the height and width of the
image. As long as the image remains on the computer there is no need to
worry about converting this to some other resolution using some other
terminology. Pixel dimensions work well for Web pages, for example.
The reality is that many images aren't very useful if they remain on the
computer. Their real usefulness lies in their transfer to film, video tape or
printed paper such as magazines or posters. The trouble is that
printing an image is very different from simply viewing it on the screen.
There are problems related to color conversions (RGB to CMYK), for example.
We'll have to deal with that some other time (like when I learn something
about it). Printing also requires a different set of dimensions.
Printed images are measured in Dots Per Inch (DPI), the number of
dots the printer can produce in an inch. In order to get the image to
look the way you want on paper, you'll need to understand how
printers work.
First, some background information:
- Printer resolution is given in dots per inch (DPI). That is the number
of dots the device can output in an inch.
- Lines per inch (LPI) relates to halftoning. Many devices (such as
printers) only have bilevel channels, that is, either they paint a
colored dot or they don't (there are no smooth steps). Halftoning
means using different patterns of dots to simulate a greater
number of color shades. Obviously, several dots must be used in
certain on/off combinations to simulate having more shades.
LPI comes from the world of photography while DPI comes from the world of
design. Whether it makes sense to speak of DPI resolution for a raster image
depends on what you'll be using that image for. Most magazines, such as
Time, are printed with 153 LPI or less. Newspapers such as the Wall Street
Journal are printed at 45-120 LPI.
Halftoning masks are the patterns used to create the shades of color
or levels of gray seen in the lines per inch on the printed media.
Most masks are square. Let's say you have a printer which can do 300 DPI,
that is, it can print 300 dots in an inch. If the halftoning mask is
4 pixels wide, then you'll have 300/4 = 75 lines per inch (LPI) for
the halftones. That is the effective resolution of the device, since
you are interested in nice shaded printouts and not in single bilevel
dots.
An ultra-expensive 1200 DPI typesetter will be able to do 300 LPI if
you use 4-pixel wide halftone masks. Of course, the larger the
halftone size, the more shades you'll get, but the lower the effective
resolution will be.
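The arithmetic above can be sketched in a few lines of Python. The device numbers are just the examples from the text; the shade count assumes a simple square bilevel mask, where an NxN cell can show N*N + 1 levels (zero dots on through all dots on):

```python
# Effective halftone resolution and shade count for a square mask.
def effective_lpi(printer_dpi, mask_width):
    """Lines per inch when the printer groups dots into halftone cells."""
    return printer_dpi / mask_width

def shades_per_channel(mask_width):
    """An NxN bilevel mask can show N*N + 1 levels per ink."""
    return mask_width * mask_width + 1

print(effective_lpi(300, 4))    # 75.0 LPI on a 300 DPI printer
print(effective_lpi(1200, 4))   # 300.0 LPI on a 1200 DPI typesetter
print(shades_per_channel(4))    # 17 gray levels per ink
```

This also shows the trade-off directly: doubling the mask width quadruples the shade count (plus one) but halves the effective LPI.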
If you are only going to display an image on your screen, then perhaps
speaking of DPIs in the image is pointless. You'll be mapping one
pixel in the image to one pixel on your display, so how physically
big the image looks will depend only on the size of your monitor.
This makes sense; when you create images for display on a monitor, you
usually only think in terms of available screen space (in pixels), not
about final physical displayed size. For example, when you create a web
page you try to make your images fit in the browser's window,
regardless of the size of your monitor.
The story is a bit different when you are creating images for output
on a hardcopy device. You see, sheets of paper have definite physical
sizes and people do care about them. That's why everyone tries to
print Letter-sized documents on A4 paper and vice-versa.
The simplest thing to do is to just create images considering the
physical output resolution of your printer. Let's say you have a 300
DPI printer and you create an image which is 900 pixels wide. If you
map one image pixel to one device pixel (or dot), you'll get a 3-inch wide
image:
900 pixels in image / 300 dots per inch for printing = 3 inches of image.
That sucks, because most likely your printer uses bilevel dots and
you'll get very ugly results if you print a photograph with one image
pixel mapped to one device pixel. You can get only so many color
combinations for a single dot on your printer --- if it uses three
inks, Cyan/Magenta/Yellow (CMY) and if it uses bilevel dots (spit ink
or do not spit ink, and that's it), you'll only be able to get a
maximum of 2*2*2 = 8 colors on that printer. Obviously 8 colors is
not enough for a photograph.
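The two calculations in this paragraph, physical size from a 1:1 pixel-to-dot mapping and the color count of a bilevel printer, can be sketched like so (numbers are the ones from the text):

```python
# Physical width when image pixels map 1:1 to printer dots.
def printed_width_inches(image_pixels, printer_dpi):
    return image_pixels / printer_dpi

# A bilevel printer either spits each ink or doesn't, so each dot
# can take 2^num_inks color combinations.
def bilevel_colors(num_inks):
    return 2 ** num_inks

print(printed_width_inches(900, 300))  # 3.0 inches
print(bilevel_colors(3))               # 8 colors from CMY
```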
So you decide to do the Right Thing and use halftoning. A halftone
block is usually a small square of pixels which sets different
dot patterns depending on which shade you want to create.
Let's say you use 4-pixel square halftones like in the previous
paragraphs. If you map one image pixel to one halftone block, then
your printed image will be four times as wide (and four times as tall)
as if you had simply mapped one image pixel to one printer dot.
A good rule of thumb for deciding at what size to create images is the
following. Take the number of lines per inch (LPI) that your printer
or printing software will use, that is, the number of halftone blocks
per inch that it will use, and multiply that by 2. Use that as the
number of dots per inch (DPI) for your image.
Say you have a 600 DPI color printer that uses 4-pixel halftone
blocks. That is, it will use 600/4 = 150 LPI. You should then create
your images at 150*2 = 300 DPI. So, if you want an image to be 5
inches wide, then you'll have to make it 300*5 = 1500 pixels wide.
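The rule of thumb can be written out as a small calculation. This is just the example from the text, assuming a square halftone block:

```python
# Rule of thumb: image DPI = 2 x LPI, then pixels = image DPI x inches.
def required_pixels(printer_dpi, halftone_width, print_inches):
    lpi = printer_dpi / halftone_width   # halftone blocks per inch
    image_dpi = 2 * lpi                  # rule-of-thumb image resolution
    return int(image_dpi * print_inches)

print(required_pixels(600, 4, 5))  # 1500 pixels for a 5-inch-wide print
```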
Your printing software should take all that into account to create the
proper halftoning mask. For example, when you use PostScript, you can
tell the interpreter to use a certain halftone size and it will convert
images appropriately. However, most Linux software doesn't do this yet.
If you need to create an image destined for print, you should check
with the printer to get either the LPI, or the DPI and the width of the
halftone mask that will be used. You can then compute the number of
pixels you'll need in your image.
The story is very different if you do not use regular halftoning
masks. If you use a stochastic (based on randomness) dithering
technique, like Floyd-Steinberg dithering, then it may be a good idea
to design images with the same resolution as the physical (DPI)
resolution on your output device. Stochastic screening is based on
distributing the dithering error over all the image pixels, so you
(usually) get output without ugly Moire patterns and such. Then
again, using the same physical resolution as your output device can
result in really big images (in number of bytes), so you may want to
use a lower resolution. Since the dithering is more or less random,
most people won't notice the difference.
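For the curious, here is a minimal sketch of Floyd-Steinberg error diffusion on a tiny grayscale grid (values 0.0 to 1.0, quantized to bilevel). Real dithering software is considerably more involved; this only illustrates how the quantization error gets distributed to unprocessed neighbors:

```python
# Minimal Floyd-Steinberg dithering on a row-major grayscale image
# given as a list of lists of floats in 0..1.
def floyd_steinberg(pixels, width, height):
    img = [row[:] for row in pixels]  # copy so we can accumulate error
    for y in range(height):
        for x in range(width):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # bilevel quantization
            img[y][x] = new
            err = old - new
            # Distribute the error using the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < width:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

flat_gray = [[0.5] * 4 for _ in range(4)]
out = floyd_steinberg(flat_gray, 4, 4)
print(out[0])  # roughly alternating 0.0/1.0, approximating 50% gray
```

Because the error is pushed around rather than discarded, the average ink coverage of the output stays close to the average gray level of the input, which is why the result reads as a midtone from a distance.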
My thanks to Federico Mena Quintero for the majority of this discussion.
He summarized the discussion for the GIMP Developers Mailing List quite
some time back. Fortunately, I happened to hang onto his posting.
How many frames makes a movie?
The following comes from Larry Gritz in response to a question I posed to
him regarding something I noticed while framing through my copy of Toy
Story one day. I thought his explanation was so good it deserved a
spot in the Muse. So here it is.
BTW: I noticed, as I framed through various scenes, that I had 4 frames of
movement and one frame of "fill" (exactly the same as the previous frame).
Standard video is 30 frames/sec and I've read that 15 or 10 animated frames
is acceptable for film but that this requires some fill frames. Let's see,
if you did 15 frames per second you could actually render 12 frames with 3
fill frames. Is this about right?
No, we render and record film at a full 24 frames a second. We
do not "render on two's", as many stop motion animators do.
When 24 fps film is converted to video, something called 3:2 pulldown
is done. Video is 30 frames, but actually 60 fields per second --
alternating even and odd scanlines. The 3:2 pulldown process records
one frame for three fields of video, then the next frame for 2 fields
of video. So you get something like this:
video frame | video field | film frame
------------+-------------+-----------
     1      |  1 (even)   |     1
     1      |  2 (odd)    |     1
     2      |  1 (even)   |     1
     2      |  2 (odd)    |     2
     3      |  1 (even)   |     2
     3      |  2 (odd)    |     3
     4      |  1 (even)   |     3
     4      |  2 (odd)    |     3
     5      |  1 (even)   |     4
     5      |  2 (odd)    |     4
So every 4 film frames get expanded into 5 video frames, and hey,
30/24 == 5/4 ! This is how all films are transferred to video
in a way that doesn't mess up the original timing.
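The 3:2 cadence Larry describes is easy to model: each film frame alternately occupies 3 fields, then 2 fields. A small sketch reproducing the table above:

```python
# 3:2 pulldown: map 24 fps film frames onto 60 Hz video fields.
# Film frames alternately fill 3 fields, then 2 fields.
def pulldown(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 3 if i % 2 == 0 else 2  # the 3:2 cadence
        fields.extend([frame] * repeat)
    return fields

fields = pulldown([1, 2, 3, 4])
print(fields)           # [1, 1, 1, 2, 2, 3, 3, 3, 4, 4]
print(len(fields) / 2)  # 5.0 video frames from 4 film frames
```

Ten fields is five video frames, so 4 film frames become 5 video frames, matching the 30/24 = 5/4 ratio.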
Your video probably only shows the first field when you're paused,
which makes it look like 1 in 5 frames is doubled, but it's actually
just a bit more complicated than that.