Generally speaking, the quality of any substance, surface, or object with respect to the spectral component(s) of the light reflecting from it. The term color also refers to the quality of light possessing certain dominant wavelength(s). Color is a vast, complex subject that encompasses nearly every aspect of human endeavor. Consequently, the following article will concentrate primarily on the application of color to the graphic arts, in particular color printing, following a short overview of color theory.
The colors of the visible spectrum include (in order of decreasing wavelength) red, orange, yellow, green, blue, and violet. Objects which appear blue, for example, reflect light in that portion of the visible spectrum, while light which appears blue has its dominant wavelengths there. White light appears to have no color because all the wavelengths are present in roughly equal amounts, no single hue dominating. Sometimes, light beyond the range of the visible spectrum (in particular, ultraviolet) is also included under the term "color." Some species of insects, for example, can see into the ultraviolet, and many types of flowers which use these insects as pollinators also include an ultraviolet component among the colors they display.
The perception or sensation of color, despite attempts to objectively quantify it, is a highly subjective phenomenon. We speak of, for example, a "red apple," but the redness of the apple is more dependent on our own peculiar visual systems than any inherent "redness" in the apple. (To organisms with different types of photoreceptors, it could appear to possess a much different color.) Even among different humans, the redness perceived is not absolute, varying according to minute physiological differences in visual acuity or according to the illumination used.
Interestingly, according to Hope and Walch in The Color Compendium (see bibliography), polls have consistently found that in Western Europe and North America over half of the adults surveyed name "blue" as their favorite color, while children under eight consistently name "red" as their favorite. (In Japan, however, over half of the people surveyed named either white or black as their favorite color.) Color preferences tend to vary by culture, not unexpectedly. This may seem like a trivial matter, but it is an important consideration in planning multinational advertising campaigns, designing products such as clothing for other markets, and other such endeavors. It also manifests itself in appropriate dress when visiting other cultures; white isn't universally accepted as the bride's dress color at a wedding, for example, nor is black universally appropriate for funerals or other mourning rites. In other words, color is a cultural-specific concept; various colors are symbolic of different things, and these symbols aren't universally consistent.
Color begins with light. Light is a small portion of the much larger electromagnetic spectrum, a broad range of different types of generated energy, ranging from radio waves and electrical oscillations, through microwaves, infrared, the visible spectrum, ultraviolet radiation, gamma rays and high-energy cosmic rays. All of these sources of electromagnetic radiation exist as waves, and it is the variations in wavelength and frequency that determine the precise nature of the energy. These wavelengths range in size from many meters (such as radio waves) to many billionths of a meter (gamma and cosmic rays). Visible light is technically defined as electromagnetic radiation having a wavelength between approximately 400 and 780 nanometers (one nanometer is equal to one billionth of a meter).
An object or surface appears to possess a certain color because of the chemical makeup of that substance. In an atom, negatively charged electrons "orbit" a positively charged nucleus, occupying certain discrete energy levels. When these electrons are subjected to an energy source (in this case, visible light), they absorb the energy, but only at certain wavelengths (or colors). When those wavelengths are absorbed by the electrons, the remaining wavelengths of light are, essentially, reflected back to the observer. These reflected wavelengths are the complements of those that were absorbed. Thus, we can say that an apple appears red because the atoms within the apple absorb light corresponding to the blue-green portion of the spectrum, reflecting back the complementary, or red, wavelengths. (The process is actually quite a bit more complicated than this.)
All of the colors we can perceive are produced by the mixing of certain basic colors. There are three categories of colors: primary colors, secondary colors, and tertiary colors. Primary colors are those which are not formed by the mixing of any other colors, and can be said to be "pure" colors. Secondary colors are those formed by the mixing of two or more primary colors. Tertiary colors are those produced by the mixing of two or more secondary colors. What constitutes a primary color differs depending on whether one is talking about light or pigments.
'Additive Color Mixing'. In additive color theory, which describes the interaction of different colors of light, three primary colors are recognized: red, green, and blue. Mixing these three primaries in various proportions is what creates the wide range of the visible spectrum. Mixing green light with blue light of equal strength will produce cyan, a secondary color (but see subtractive color mixing below); red light plus blue light yields magenta light; red light plus green light yields yellow light, while an equal mixture of all three additive primaries yields white light. Varying the intensities of these lights produces many other colors as well. Television screens and computer monitors display colors using additive color mixing; three different projectors (a red, a green, and a blue) vary the intensity of their respective colors to produce the range of displayed colors.
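The additive mixtures described above can be sketched in a few lines of Python. The function name and the 8-bit (0-255) channel scale are illustrative conventions, not part of any standard; they simply mirror how monitors commonly encode channel intensities.

```python
# Additive mixing: each channel is the sum of the source intensities,
# clipped to the displayable maximum (255 on an 8-bit display).
def add_light(*sources):
    """Mix any number of (r, g, b) light sources additively."""
    return tuple(min(255, sum(ch)) for ch in zip(*sources))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(GREEN, BLUE))        # (0, 255, 255) -- cyan
print(add_light(RED, BLUE))          # (255, 0, 255) -- magenta
print(add_light(RED, GREEN))         # (255, 255, 0) -- yellow
print(add_light(RED, GREEN, BLUE))   # (255, 255, 255) -- white
```

Dimming any of the three sources (values between 0 and 255) produces the intermediate colors mentioned in the text.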
'Subtractive Color Mixing'. Colorants (such as inks, pigments, dyes, etc.) produce the sensation of a particular color in a different manner than does light. Since there is an extra step involved in the generation of a colorant (namely, the absorption of certain wavelengths of light, and the reflection of the opposite wavelengths), the subtractive color primaries are those colored substances that absorb the three additive color primaries. The subtractive color primaries are cyan (which absorbs red light), magenta (which absorbs green light), and yellow (which absorbs blue light). Mixing these colorants produces different color effects. Thus, if cyan ink is placed on a substrate, it will absorb the red light striking it. If a yellow colorant is overprinted on the cyan, then all the blue will also be absorbed, and the only remaining color of light that will be reflected back is green. Consequently, cyan plus yellow yields green. (Thus, subtractive color mixing operates by "subtracting" out one or more colors of light.) Varying the proportions of these colorants creates a wide variety of different colors. Cyan, magenta, and yellow are, in printing terminology, known as process colors since it is these pigments that are used in process color printing—also known as full-color or, when black ink is added, four-color printing. Full-color printing is accomplished by overprinting dots of these colors in varying amounts. (See Color Separation below.) It is for this reason that transparent pigments are required for process color inks, while opaque pigments are more often required for spot color printing. These three subtractive primaries are often referred to as "blue, red, and yellow," which isn't entirely appropriate. In theory, the combination of the three subtractive primaries in equal amounts should yield black, but in practice that isn't always the case.
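The subtractive behavior described above can be sketched as an idealized transmittance model in Python. The colorant values and function name are illustrative; real process inks are not this perfect, which is one reason equal amounts of the three primaries rarely yield a true black in practice.

```python
# Idealized subtractive model: white light passes through (or reflects
# off) each overprinted colorant in turn, and each colorant removes the
# portion of the light it absorbs (multiplicative transmittance).
def overprint(*inks):
    """Each ink is an (r, g, b) transmittance triple in 0.0-1.0."""
    light = [1.0, 1.0, 1.0]  # start with white light
    for ink in inks:
        light = [l * t for l, t in zip(light, ink)]
    return tuple(light)

CYAN    = (0.0, 1.0, 1.0)  # absorbs red
MAGENTA = (1.0, 0.0, 1.0)  # absorbs green
YELLOW  = (1.0, 1.0, 0.0)  # absorbs blue

print(overprint(CYAN, YELLOW))           # (0.0, 1.0, 0.0) -- green
print(overprint(CYAN, MAGENTA, YELLOW))  # (0.0, 0.0, 0.0) -- black
```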
In addition to additive and subtractive primaries, there are also memory colors, which are the colors we often find in daily life—the green of grass, the blue of the sky, the color of human skin, etc., which printers and color separators need to ensure are reproduced accurately.
The "birth" of color as a science came in 1872, when the Scottish physicist James Clerk Maxwell attempted to apply mathematics to the description of color. He created an equilateral triangle (called Maxwell's triangle) which identified red, green, and blue as the three primary colors of light, each located at one of the corners of the triangle. White was located at the exact center of the triangle, the point where all spectral components exist in equal amounts. All other colors and combinations lie between, grading inward toward white and around the perimeter of the triangle. (See Maxwell's Triangle.)
In 1931, the Commission Internationale de l'Eclairage (CIE) developed the first CIE color space. The CIE system, which has been revised many times (both by the CIE itself and by other organizations) is essentially based on tristimulus values, derived from standard observer data. Essentially, these tristimulus values (plotted on a three-axis coordinate system) are derived from the relative amounts of the three additive primary colors that a particular sample comprises. These values are gathered either using colorimeter data (gleaned by evaluating a sample using three filters—red, green, and blue—that simulate the human eye's perception of color) or from spectral data derived from a quantitative measurement of each of the three wavelengths of light. From the three-axis coordinate system, a two-dimensional chromaticity diagram can be mathematically derived, which is a horseshoe-shaped chart whose curved outer rim ranges from red to blue, with the saturation of a particular hue decreasing toward the center of the diagram.
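The projection from three tristimulus values to two-dimensional chromaticity coordinates is a simple normalization; the following Python sketch shows it (the white-point figures used in the example are commonly published approximations for CIE illuminant D65, a standard daylight source):

```python
def chromaticity(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the 2-D chromaticity
    diagram: each coordinate is that value's share of the total."""
    total = X + Y + Z
    return (X / total, Y / total)

# Approximate tristimulus values for the D65 white point:
x, y = chromaticity(95.047, 100.0, 108.883)
print(round(x, 4), round(y, 4))  # roughly 0.3127 0.3290
```

Because the two coordinates are shares of a total, the third share is implied (it is 1 minus the other two), which is why two dimensions suffice for the diagram; brightness information is discarded in the projection.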
Later CIE (and other) color measurement systems recognize three color attributes, which are based on these tristimulus values: hue (the dominant wavelength, or most easily discernible color, of a sample); saturation (how "pure" the hue is: to what extent it is contaminated with other colors, or its degree of grayness); and brightness (the amount of light being reflected, or how bright the sample is). In other color spaces, saturation is known as chroma, while brightness is variously known as value or lightness. Three-term color spaces include HSL (hue, saturation, and lightness), HSB (hue, saturation, and brightness), HSV (hue, saturation, and value), and LCH (lightness, chroma, and hue). In all of them, the same fundamental color properties are recognized. An important distinction between HSL and HSB is that the latter is based on the RGB intensities of a computer monitor.
Different color measurement systems measure color depending on the medium. Two variations of the CIE color space are CIEL*a*b* (where L* refers to lightness, a* to the red-green axis, and b* to the yellow-blue axis), which is designed for measuring reflective color samples, and CIELUV, designed for color displays, such as television sets and computer monitors. The Hunter L,a,b values are also widely used. (See CIE Color Space.)
Related to color models are color matching systems, which are collections of printed or computer-generated color samples, used by designers and printers to match and specify colors. An advantage to these systems—used for both spot color and process color—is that they often include the proportions of colored inks required to produce the desired color. (See Color Matching System.)
TONE REPRODUCTION AND IMAGING
Whether prepress is performed in the traditional method or on digital systems, the evaluation of a reproduction, proof, and/or original involves the same basic issues. One of the inherent difficulties, however, involves the dichotomy between analog and digital data. Analog data essentially refers to any continuous wave, such as sound, light, or an electrical signal. Digital refers to any information which exists as discrete bits. Digital is most often used to describe computers, as a computer can only understand two states: on or off. A computer's central processing unit consists of thousands of tiny transistors, which either complete or interrupt an electrical signal. Thus, the binary language of the computer (called machine code) consists of only 1s and 0s (on and off, respectively). Everything the computer does is some pattern of 1s and 0s (these two digits are called bits, a term which is short for binary digit). Eight bits equal one byte (for example, 01001001). Consequently, any digital signal is composed of very small "steps" which aim to describe as closely as possible the contours of the original analog wave. In order for an analog signal to be effectively turned into a digital signal, the sampling rate (or the frequency with which the steps are captured by the computer) must be greater than human perception can detect; otherwise the discrete steps would be perceived. In digital color imaging, this translates into ensuring that the range of tones of an original (analog) image is accurately translated into digital form.
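The "stairstep" conversion from a continuous wave to discrete digital values can be sketched in Python. The function name is ours, and the sine wave simply stands in for any continuous (analog) signal:

```python
import math

def quantize(value, levels):
    """Snap a continuous value in 0.0-1.0 to the nearest of `levels`
    evenly spaced discrete steps."""
    step = round(value * (levels - 1))
    return step / (levels - 1)

# Sample a smooth curve at discrete points, then quantize each sample:
# sampling plus quantization is the two-step "staircase" a digital
# system imposes on an analog wave.
analog = [0.5 + 0.5 * math.sin(2 * math.pi * i / 16) for i in range(16)]
digital = [quantize(v, 256) for v in analog]  # 8 bits -> 256 levels

# With 256 levels, no sample is off by more than half of one step:
print(max(abs(a - d) for a, d in zip(analog, digital)))  # at most 1/510
```

With enough levels (and a high enough sampling rate), the staircase falls below the threshold of perception, which is exactly the requirement stated above.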
'Gray Levels'. In order for a digital image to reproduce well, it must render transitions between gray scale values imperceptibly. The greater the number of discrete steps that can be inserted between black and white, the smoother the grayscale transitions will appear. Most digital systems can accommodate 256 gray levels, which is somewhat beyond the number research suggests most humans can detect. An inadequate number of gray levels in a digital image manifests itself in tints and gradients as the phenomenon of banding, or the visibility of the discrete gray levels.
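Banding is easy to demonstrate numerically. In this Python sketch (the function name is ours), a smooth black-to-white ramp is rendered at two different bit depths; fewer levels means wider, more visible bands:

```python
def gradient(width, levels):
    """Render a horizontal black-to-white ramp `width` pixels wide,
    limited to a given number of discrete gray levels."""
    return [round(x / (width - 1) * (levels - 1)) for x in range(width)]

# A 1000-pixel ramp at 8 bits has 256 distinct bands, each only about
# 4 pixels wide -- too narrow to notice. At 4 bits (16 levels), each
# band is roughly 63 pixels wide, and the steps show as banding.
print(len(set(gradient(1000, 256))))  # 256
print(len(set(gradient(1000, 16))))   # 16
```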
'Density'. In imaging, density is a quantitative measure of the amount of light a particular surface absorbs. In a printed reproduction, density becomes a measure of how well the tone depth of an image has been reproduced. Density measurements can be made independent of hue, by using a densitometer and filters that can evaluate each color in a reproduction as a shade of gray. (See Density and Density Range.)
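Density is conventionally computed as the base-10 logarithm of the ratio of incident to reflected (or transmitted) light, so each whole unit of density represents a tenfold drop in reflected light. A minimal Python sketch:

```python
import math

def density(reflectance):
    """Optical density: D = -log10(fraction of incident light
    reflected). reflectance must be in (0.0, 1.0]."""
    return -math.log10(reflectance)

print(round(density(0.5), 2))   # 0.3 -- a gray reflecting half the light
print(round(density(0.01), 2))  # 2.0 -- a deep shadow reflecting 1%
```

The logarithmic scale is what lets a densitometer express the very wide tonal range between paper white and solid ink in manageable numbers.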
'Contrast'. The term contrast refers to the distribution of tones in an image. For example, an image with a great deal of shadow and highlight—but little middle tone—detail is considered to be high-contrast, while an image with a great deal of detail in the middle tone region would be said to be low-contrast. (See Contrast and Gamma.)
'Tonal Gradation'. An important issue in color reproduction is ensuring that the tonal range of the image is optimized for the printing process to be utilized. Depending on the printing process, halftone dots below a certain minimum size (which will correspond to highlight areas of the image) may be lost. Consequently, prior to scanning the original image, it is necessary to map the lightest highlights to the smallest halftone dot the printing process can reproduce. For example, the smallest dot that can be printed on an offset press is about 8 microns, while the smallest dot that can be printed on a flexographic press is 40 microns. Similarly, dots in the darkest shadow areas should not be larger than the largest dot that can be printed; otherwise the spaces between the dots will fill in on press. Many image-processing programs (such as Photoshop) will let you adjust the middle tones as desired.
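The remapping described above can be sketched as a simple linear compression of the tone scale into the press's printable dot range. This Python sketch is deliberately simplified (real tone curves are usually nonlinear, and the example dot percentages are illustrative, not process specifications):

```python
def map_tones(value, min_dot, max_dot):
    """Linearly compress a 0-100% image tone into the printable
    halftone-dot range [min_dot, max_dot], both in percent."""
    return min_dot + value / 100.0 * (max_dot - min_dot)

# e.g. a press that can hold a 3% highlight dot and a 95% shadow dot:
print(map_tones(0, 3, 95))    # 3.0  -- lightest highlight survives
print(map_tones(100, 3, 95))  # 95.0 -- darkest shadow won't fill in
print(map_tones(50, 3, 95))   # 49.0 -- midtone shifted accordingly
```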
'Resolution'. When a digital image is on a computer screen, it is displayed at whatever maximum resolution the monitor is capable of. On the Macintosh, for example, that number is 72 dots per inch. The screen resolution is determined by the number of pixels per unit of linear space. The monitor screen resolution is usually very far below the minimum resolution needed for high-quality output. The image itself, however, will be at whatever resolution it was scanned at, which may be anywhere from 300 to 3,000 dpi. The only issue involved with resolution is the capability of the output device. Like the computer monitor, the output device describes images, be they text, line art, or photographs, as a grid of tiny dots. The higher the number of dots per inch, the higher the resolution and the smoother and sharper the image. A laser printer, for example, is capable of resolutions only as high as 600 dpi, which is fine for most type and line work, but inadequate for halftone screens. An imagesetter, a high-end device which exposes type and images directly on film, is capable of resolutions as high as 3,000 dpi or greater. The resolution of an original image can be changed either by downsampling (reducing the resolution by discarding image data) or upsampling (increasing the resolution by interpolating data between pixels). The latter technique rarely works effectively, and it is best to scan at the desired (or higher) resolution. However, scanning at too high a resolution, though not deleterious to image quality, requires more computer power, disk space, and memory, and significantly slows the rate at which the image can be processed and output.
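A widely used rule of thumb ties the scanning resolution to the halftone screen ruling of the intended output: scan at roughly twice the screen ruling, multiplied by any enlargement factor. The 2x "quality factor" is a convention, not a law, and the function name here is ours:

```python
def scan_resolution(screen_ruling, scale=1.0, quality_factor=2.0):
    """Rule-of-thumb scan resolution (ppi) for halftone output:
    screen ruling (lpi) x quality factor x enlargement factor."""
    return screen_ruling * quality_factor * scale

print(scan_resolution(150))        # 300.0 ppi for a 150-lpi halftone
print(scan_resolution(150, 2.0))   # 600.0 ppi if enlarging to 200%
```

Scanning much above this figure adds data the halftoning process cannot use, which is the waste of storage and processing time noted above.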
When a digital file is sent to an imagesetter for output, it first needs to pass through a raster image processor which maps the type and images to the grid of spots on the film recording section of the output device, effectively controlling which spots on the device's recording grid need to be exposed and which should remain unexposed. For type and line art, this is a straightforward process, but for digital halftones, there are some unique wrinkles.
'Halftone Screening'. All continuous tone images, be they color or black-and-white, need to be converted to halftones before they can be reproduced. Because few printing processes can lay down varying densities of ink, images must first be broken down into very small, discrete dots of varying size, density, and distribution in a process called halftone screening. This was originally accomplished by photographing the original image through a fine grid, or screen, of a set number of lines per inch. When the film is exposed, the image will consist of thousands of tiny dots: dark, tightly-packed dots in the shadow areas, a moderate amount of dots in the middle tones, and few, light dots in the highlights. Each color separation negative is processed as a halftone. However, when successively-colored dots are overprinted, if the angle of the lines of dots is the same for all four colors, the lines will interfere with each other and produce an undesirable moiré pattern. Consequently, each screen needs to be placed at a different angle; experience has yielded certain specified screen angles that are most effective at reducing moiré.
On digital systems, halftoning is performed electronically. Computer output devices need to create images as a series of tiny dots (called spots to distinguish them from halftone dots). These spots are much smaller than the halftone dots, and in fact each halftone dot is composed of many of these printer spots. (Thus in digital output, halftone dots are referred to as halftone cells.) The problem with halftone screening is that two of the conventional screen angles are known as irrational screen angles. This means that the tangents of these angles (the tangent being, in trigonometry, the ratio of the side opposite an angle to the side adjacent to it in a right triangle) are irrational numbers, in other words, numbers that cannot be expressed as a ratio of two integers and whose decimal expansions neither terminate nor repeat, such as π (3.14159...). The upshot of this is that the computer cannot calculate irrational screen angles exactly, and the rows of cells will not align with the grid perfectly. Rounding to the nearest rational angle helps the computer, but more often than not causes distortion and moiré. A combination of proprietary screen algorithms and the development of supercell screening has finally solved the problem of irrational screen angles. (See Screen Angles.)
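The relationship between device resolution, screen ruling, and the number of gray levels each halftone cell can render follows a simple formula: the number of printer spots in the cell, plus one for the all-white state. This Python sketch ignores supercell techniques, which pool spots across several cells to recover gray levels:

```python
def gray_levels(device_dpi, screen_lpi):
    """Gray levels a square halftone cell can render: one level per
    printer spot in the cell, plus one for the empty (white) cell."""
    spots_per_side = device_dpi // screen_lpi
    return spots_per_side ** 2 + 1

print(gray_levels(2400, 150))  # 257 -- a 16x16 cell covers a full 8 bits
print(gray_levels(600, 150))   # 17  -- a 4x4 cell will band badly
```

This is why a 600-dpi laser printer is described above as inadequate for halftone screens: at useful screen rulings it cannot muster enough gray levels per cell.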
Other aspects of the halftoning process also need to be considered, such as dot size, dot shape, screen count, and the screening method employed. Newer types of screening procedures are enhancing the quality of color halftone reproduction. (See Stochastic Screening.) One inevitable aspect of any printing process is dot gain, or an undesired increase in the size of halftone dots. Dot gain can be caused by any or all of the steps from prepress to press: imagesetter output, platemaking, and the nature of the substrate used in printing are all contributors to dot gain. (There is also, in addition to this physical dot gain, an optical dot gain, which is an optical illusion caused by the scattering of light at the edges of halftone dots.) Therefore, some degree of dot gain compensation is performed prior to final output of an image. (See Dot Gain.)
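Dot gain compensation amounts to inverting the press's gain behavior: the film (or plate) dot is made smaller so that, after gain, the printed dot hits the intended value. The parabolic gain model below is purely illustrative (gain is typically worst in the midtones, which the parabola captures), and the 18-point midtone gain is an assumed figure, not a specification:

```python
def printed_dot(film_dot, gain_at_50=0.18):
    """Illustrative parabolic dot-gain model: gain is zero at 0% and
    100% and peaks in the midtones. Dots are fractions 0.0-1.0."""
    return film_dot + 4 * gain_at_50 * film_dot * (1 - film_dot)

def compensate(target, gain_at_50=0.18):
    """Bisect for the film dot that prints at the target value
    (printed_dot is monotonic, so bisection is safe)."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if printed_dot(mid, gain_at_50) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = compensate(0.50)
print(round(f, 2))               # ~0.34 -- film dot needed to print 50%
print(round(printed_dot(f), 3))  # 0.5
```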
'Color Imaging'. Images are usually obtained from a variety of sources. The most common means of capturing images is by scanning them. A scanner is an optical device which converts reflected or transmitted light (depending on whether the image is a print or a transparency, respectively) to digital data for each primary color (RGB). A scanner scans each pixel of each line of an original image for each primary color and stores all the variations as a series of gray values. It can either output the color separations directly on film or, more often, store the image as an RGB file for later manipulation. Scanners come in two basic varieties: drum scanners and flatbed scanners. The former mounts the original image inside or on the surface of a drum, which rotates the image past the optics. The latter type of scanner places the image on a flat platen and moves the optics beneath the image. Drum scanners are high-end devices and tend to produce higher-quality scans. They also tend to cost upwards of $10,000. However, flatbed scanners are commonly available for near or under $1,000, and their quality is quickly catching up to what can be achieved on drum scanners. When scanning, transparencies tend to reproduce better than prints. (See Scanner.)
A scanner captures what are known as bit map images, in which an image exists solely as a grid of variously-colored pixels. The alternative type of graphics is vector graphics, in which images are stored as mathematical descriptions of lines, curves, and other shapes. Vector graphics have the advantages of taking up less disk space, scaling easily, and printing at the highest resolution the output device is capable of. The disadvantage, however, is that they cannot be as easily edited (i.e., pixel by pixel) as bitmapped images can. Vector graphics are only possible for line art; photographs cannot be saved as vectors. Bit maps are also known as raster images. (See Bit Map and Vector Graphics.)
Color image files can be saved in one of many file formats. The choice of a file format is crucial if the image needs to be transferred to other computers or systems which use different software. A file format is essentially a scheme for organizing the digital image data. Some standard file formats can be read by a wide variety of devices and programs. For bitmapped images, TIFF is the most widely-used file format, while Encapsulated PostScript is widely used for vector graphics. (See File Format, TIFF, and Encapsulated PostScript.)
An increasingly popular source of photographic images is the Kodak Photo CD, or a compact disc with prescanned images at a variety of resolutions which can then be manipulated, color corrected (if necessary) and otherwise processed for output. (See Photo CD.)
DESKTOP COLOR AND COLOR SYSTEMS
Most color prepress these days is performed digitally, utilizing either a high-end color electronic publishing system running proprietary software, or a desktop microcomputer running on a standard platform with off-the-shelf software. Regardless of which type of system is utilized, there are certain basic features and considerations which need to be taken into account.
'Computer System'. The heart or brain of any computer is the central processing unit, housing the microprocessor(s) that execute all system functions, whose performance is often measured in MIPS, or millions of instructions per second. (Desktop computers are still often measured in some fraction of MIPS, but that is likely to change in the near future.) The microprocessor (on IBM-compatible computers, the Intel 386, 486, or Pentium chips; on the Apple Macintosh, the Motorola 680X0 or PowerPC chip) can operate at a variety of speeds, measured in megahertz (MHz). As of this writing, 70+ MHz machines are common, and 130-150+ MHz machines are quickly emerging. Obviously, the faster the processor, the less time will be spent waiting for the computer to finish performing a function. Especially when dealing with large graphics files, fast processor speed is highly desirable.
The software that controls all aspects of the computer and which forms the interface between user and device is the operating system, or platform. For many years, Microsoft's text-based DOS (a command-line interface) was the dominant operating system, but it was the Macintosh operating system, the first commercially successful graphical user interface (GUI), that made the desktop publishing revolution a reality. Later, Microsoft Windows provided a GUI for PC clones, and although PCs are gaining momentum in desktop publishing and graphics work, the Macintosh is still the computer of choice. Workstations—such as those by Sun—commonly run on one of the many flavors of UNIX. High-end systems usually utilize their own proprietary and/or custom-made operating system. Although as of this writing, most operating systems are incompatible with each other, the use of standard file formats (such as TIFF or EPS or the increasingly popular Adobe portable document format) can allow files to be transferred from system to system. (See Platform and File Format.)
Another important consideration in computer graphics is the system's random-access memory (or RAM). Although it is only one of several types of computer memory, it is what is most often meant by "memory." RAM, these days measured in megabytes, is the memory allotted by the computer for the storage of open applications, system software, and other data the computer needs to deal with at any given time. Whatever important data cannot fit in RAM is stored on, and read from, the computer's hard disk, which is a much slower process than reading from RAM. The more RAM the better; one can never have enough. Most graphics applications, especially those that deal with full-color images, are very RAM-intensive; although it is increasingly difficult to get by with 8 MB of RAM, it can be done, but not easily. Computer systems are increasingly being shipped with at least 16 MB of RAM, but even that is not enough for large image files. Computer systems that regularly work with very large image files should not have less than 120 MB of RAM. (See RAM.)
Most computers also come with at least one hard disk, frequently located internally (although external hard disks are available). The hard disk is the site of the system software and important applications. A hard disk is read and written much faster than most other types of storage devices. Hard disks are available in a large number of storage capacities. Once measured in kilobytes, hard disks are more often measured in megabytes and gigabytes. (One byte equals eight bits. One kilobyte equals 1,024 bytes, one megabyte equals 1,024 kilobytes, and one gigabyte equals 1,024 megabytes. See Bit, Byte, Kilobyte, Megabyte, and Gigabyte.) For graphics files, a minimum hard-disk storage capacity of 200 MB is recommended, although 1 GB is a common size, and hard disks are getting larger all the time. Much larger disks capable of storing data measured in terabytes (1,024 gigabytes, or over one trillion bytes) are now beginning to appear.
'Peripherals'. Computer peripherals are those devices which are connected to the computer, as opposed to being built into it. The monitor or video display isn't technically a peripheral, but is an important "add-on" to any system. Most monitors utilize cathode-ray tubes (CRTs), which comprise three electron guns: one for the red, one for the green, and one for the blue signal comprising an image. The guns repeatedly scan across the screen in discrete bursts, which are known as pixels (an abbreviation for "picture element"). The size of a particular pixel is referred to as its dot pitch, which is a function of the smallest point the electron beam can describe. The color of a particular pixel is controlled by a digital-to-analog converter, which translates the computer's internal digital "picture" of the monitor into the variations in voltage that regulate the intensity of the electron beam(s) being fired at the screen. The number of colors that can be displayed by any given pixel is determined by the color depth of the monitor, a measure of how many bits describe each pixel. This can range from 1-Bit color, in which each pixel is described by one bit of information (in other words, either "on" or "off," black or white) to, as of this writing, 32-Bit color. For most graphics work, 24-Bit color is required, which means that twenty-four bits of information are used to describe each pixel, or eight bits for each of the three colors. This allows for the use of 2^24, or 16,777,216, possible colors for each pixel. (See 24-Bit Color; see also 8-Bit Color and 16-Bit Color.) Other monitor considerations include the refresh rate (or how fast the electron beam scans the screen anew) and the addressable resolution, or the number of pixels that can be displayed on the screen. Some computers and monitors require expansion boards to expand the number of colors or the resolution the monitor can display.
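The relationship between color depth and the number of displayable colors is simple exponentiation: each additional bit doubles the count. A one-line Python sketch (the function name is ours):

```python
def displayable_colors(bits_per_pixel):
    """Each pixel described by n bits can take on 2**n distinct colors."""
    return 2 ** bits_per_pixel

print(displayable_colors(1))   # 2 -- black or white
print(displayable_colors(8))   # 256
print(displayable_colors(24))  # 16777216
```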
The most commonly-installed peripheral devices are storage media, which can be any of a number of devices. These roughly fall into two categories: magnetic and optical. A magnetic medium is any disk that stores information by orienting magnetic particles in a particular way. Examples include a floppy disk or any of a variety of removable cartridges (the most oft-used of which, as of this writing, are SyQuest cartridges and Iomega's Zip or Jaz disks). Other non-computer examples include audio- and videocassettes. Optical media include a variety of optical discs such as CDs and CD-ROMs, or magnetic-optical discs. (By the way, the word