
Computer Graphics Unit, Manchester Computing Centre, University of Manchester
Department of Computer Science, University of Manchester

Colour in Computer Graphics

Student Notes

C. Lilley, F. Lin, W.T. Hewitt, T.L.J. Howard

ITTI Computer Graphics and Visualisation

Table of Contents

1 Introduction
1.1 Contents and scope
2 Seeing in colour
2.1 The electromagnetic spectrum
2.2 Spectra
2.3 The eye
2.4 The retina
2.5 Receptor cells
2.6 Colour reception
2.7 The fovea
3 Measuring colour
3.1 Colour matching
3.2 A standardised observer
3.3 Coloured objects
3.4 Emissive colour
3.5 Illuminants
3.6 Reflective colour
3.7 Chromaticity
3.8 Atypical colour response
4 Colour models
4.1 Why use colour models?
4.2 Primary colours
4.3 CIE colour models
4.4 Device dependent models
4.5 Other colour models
5 Colour output
5.1 Displaying colour
5.2 Colour video
5.3 Broadcasting colour
5.4 Coping with insufficient colours
5.5 Printing in colour
5.6 Colour photography
5.7 Gamut mapping
6 Usage of colour
6.1 When to use colour
6.2 Selecting a colour model
6.3 Colour schemes
6.4 Interpolation
A Gamma correction
A.1 Determining gamma
A.2 Direct measurement
A.3 Visual calibration
B Monitor calibration
C Glossary

1 Introduction

A good understanding of colour is essential for effective use of computer graphics. This module describes the science of colour as it applies to computer graphics and visualisation.

1.1 Contents and scope

This module is divided into five sections.

Firstly, we look at how colour is seen. This draws together information from such diverse disciplines as physics, optics, physiology, neurology and psychology to show that colour is an internal, subjective sensation rather than an external, objective entity. This helps explain just what colour is.

Given the biological basis of colour, how can it be measured and standardised? The second section explains how colour is measured and introduces the CIE international standard, used to define colour. This provides the vital link between biological sensation and physical measurement. Examples are also given of how colour measurements can be used and manipulated, such as predicting the result of a colour mixture or designing displays for people with defective colour vision.

An abstraction called a colour model is used to specify colour. The third section explains the concept of primary colours and then examines the many colour models that are available, and the particular strengths and weaknesses of each. A colour described in one colour model can often be converted to a description in another. The CIE standard functions as a universal yardstick in this process.

The preceding sections have focused on specifying colour without considering how colours are physically produced in a computer graphics context. The fourth section examines how different types of hardware work, emphasising the impact this has on displaying colour. Guidance is given in making best use of the available hardware for portable and effective colour graphics.

The final section provides guidelines for using colour. Rather than presenting an arbitrary series of rules, the intention is to show how the guidelines follow directly from the material presented in the preceding sections.

There are three appendices related to displaying standardised colours on a computer graphics screen, and a glossary of terms. All words in bold like this:

technical term

will be found in the glossary.


2 Seeing in colour

2.1 The electromagnetic spectrum

Light is a form of energy. Visible light is only one form of electromagnetic energy; other forms include infrared, ultraviolet, radio waves, microwaves and X rays. Electromagnetic energy can be considered to behave like a wave, and the factor that distinguishes these many types of energy is the wavelength. This is illustrated in Figure 1, which uses a logarithmic scale to encompass the wide range of wavelengths. Visible wavelengths are most conveniently measured in nanometres (nm, 10⁻⁹ m).


Figure 1: The electromagnetic spectrum.

The ranges of wavelengths which broadly correspond to the colours of the spectrum are shown in Table 1 and Plate 25.

Range (nm) Colour

380 – 450 Violet

450 – 490 Blue

490 – 560 Green

560 – 590 Yellow

590 – 640 Orange

640 – 730 Red

Table 1: Approximate wavelengths of spectral colours.

White light consists of a mixture of all the visible wavelengths, as first described by Sir Isaac Newton in Opticks (1704). He found that light could be split by a glass prism into a rainbow of colours, and combined again to form white. He also found that the individual colours could not be further subdivided.

2.2 Spectra

It could be imagined that measuring the intensity of light emitted or reflected from an object at all visible wavelengths would completely define its colour. Such a measurement will indeed define those optical properties which influence the observed colour. An example of such a measurement is given in Figure 2. There is no easy way to predict the visual appearance from this information. The dominant wavelength can readily be identified, but what of the contribution from the rest of the spectrum? What will the overall colour be?


Figure 2: Typical reflectance spectrum of grass.


The range of wavelengths which are visible varies between species; some snakes can see portions of the infrared, and many insects can see into the ultraviolet. When white light is split by a prism, the wavelengths are separated, but it is the eye and brain that produce the sensation we call colour.

2.3 The eye

The function of the eye is to capture a visual image, and convert the light energy into nerve impulses to be interpreted by the brain. The overall structure of the human eye, shown in Figure 3, is analogous to a camera. Table 2 compares the functions of the eye and a video camera.


Figure 3: The human eye.

Eye                      Video camera             Function
cornea and aqueous       primary focusing lens    bend light to form image
humour
lens                     secondary lens           fine focusing
iris                     aperture                 depth of field and light level adjustment
zonula                   auto focus               move lens
conjunctiva              clear daylight filter    protect optics from scratches
sclera                   casing                   mechanical framework
retina                   photoelectric surface    convert light to electrical signal
retinal blood vessels    power cables             supply energy to retina
optic nerve              video signal output      transmit data

Table 2: Comparison of the eye with a video camera

The major optical power of the eye comes from the transparent, curved cornea, which can bend light because of the large change in refractive index between air on the outside and the liquid (aqueous humour) on the inside. This delicate component is covered by the conjunctiva to prevent scratching from small particles such as grit, dust and smoke; tears are continually secreted to wash the conjunctiva, and the combination of eyelashes, eyelids and the bony structure of the skull protect the eye against more major damage.

The iris is a muscle which, when contracted, covers all but a small central portion of the lens, blocking the majority of light and increasing the depth of field. This provides a greatly increased dynamic range of usable viewing conditions, from very dim to very bright. The process of responding to a large change in overall light intensity is termed adaptation.

Focusing on objects at different distances is accomplished by the lens, which is moved by a muscle called the zonula. Some of this movement is like a camera, forwards and backwards. The lens in the eye is however pliable, and can be pulled at the edges to form a thinner, flatter shape with a longer focal length. The portion of the lens not covered by the iris looks black from the outside, and is termed the pupil.

Because the refractive index of the lens and aqueous humour varies with wavelength, different colours require slightly different lens positions for crisp focus. This is termed chromatic aberration, and is noticed by a blurring of focus when colours of widely separated wavelength are seen side by side.


2.4 The retina

Light energy is transformed to electrical impulses by the retina, a thin network of cells lining the back and sides of the eye. In some respects, this is like an array of charge coupled devices similar to that used in video cameras. It is however a more complex device than this, and it is necessary to have some idea of its action to understand how colour is perceived.

The cells making up the retina are specialised nerve cells, and are related developmentally and morphologically to the nervous tissue in the brain. Thus, some of the retinal nerve cells perform visual processing even before the signals have left the eye. Another curious consequence of the embryological development of the eye from brain tissue is that the retina seems ‘inside out’; as Figure 4 shows, light has to pass through the ‘wiring’ of nerve cells to reach the photosensitive cells, which are at the back face of the retina.


Figure 4: The human retina (schematic diagram).

The light sensitive receptor cells at the back of the retina face onto a black lining, the choroid, which enhances contrast by eliminating internal reflections and preventing light filtering through the front of the eyeball. Receptors are connected via bipolar cells (so called because of their double-ended shape) to ganglion nerve fibres, which pass out of the eye to form the optic nerve leading to the brain.

The retina also contains horizontal cells, which connect small clusters of receptors. When a receptor is illuminated, adjacent receptors are made less sensitive by the horizontal cells, increasing the local contrast. This is a preliminary form of edge detection, and causes an optical effect known as Mach banding. This is illustrated in Figure 5, which shows a series of grey rectangles. Each is a uniform shade of grey, but the edge near the darker rectangle looks lighter. Similarly, the edge near the lighter rectangle looks darker. Mach banding can be troublesome in computer graphics; if a smooth gradation in intensity is simulated by a small number of shades, ensure that the edges are ragged to reduce this effect.

Figure 5: Mach banding.

2.5 Receptor cells

There are two classes of receptor cells: rods and cones, named from their shape. These are illustrated in Figure 6. They have a similar structure: a central nucleus, many mitochondria to provide chemical energy, and a stack of disks containing photo-sensitive pigment.

Figure 6: Schematic diagram of human rod (left) and cone (right) cells.


Rods are sensitive to very low light levels, but reach their maximum output at only moderate light intensities. Thereafter they give a constant output regardless of increases in light level. Cones are less sensitive, but can handle high light intensities.

The light sensitive pigment in rods, called rhodopsin, is a protein bound to a form of vitamin A. Absorption of a single photon of light causes a molecule of rhodopsin to change from a low energy to a high energy form. This small energy change is greatly amplified by a cascade of chemical reactions, to produce a nervous signal. Unlike most nerve cells, which transmit impulses in a digital, on/off form, the receptor cells produce a graduated, analogue response to light intensity, rather like a light meter.

Figure 7 plots the CIE standardised luminous efficiencies of a statistically normal observer (around 96% of the population; the rest have various forms of atypical colour vision frequently but incorrectly termed ‘colour blindness’). This represents the perceived brightness of a light as the wavelength is varied while holding the light level constant.

At low light levels, when the eye is dark adapted, only the rods are active. This is termed scotopic vision, and is most sensitive in the green region, at 510nm. In brighter light, rods are overloaded and the cones are active; the maximum luminous efficiency for this photopic vision shifts to the yellow/green region at 555nm. This effect is termed the Purkinje shift.


Figure 7: Luminous efficiency.

2.6 Colour reception

In addition to sensing the brighter lights, cones also provide colour sensation. There are three types of cones, differing in the protein component of the visual pigment and thus in the range of wavelengths of light to which they are sensitive. Referred to as S, M and L cones (for short, medium and long wavelengths), they have maximal sensitivities at 445nm (violet), 535nm (green) and 570nm (yellow). They are also called β, γ and ρ cones by some authorities.

In contrast to measuring the overall luminous efficiency of a human subject, determining the response of an isolated single type of cone cell is more difficult and the precise results depend to some extent on the methods and assumptions used. Two approaches have commonly been used: experiments on subjects who lack one of the three cone types (which assumes the other two are normal) and measurements of extracted cone pigments (which assumes that the rest of the cone cell and other retinal structure has no modifying effect).

An example of one set of measured spectral sensitivities for the three cone types is shown below in Figure 8. It is immediately apparent that M and L cones display significant overlap and have similar sensitivities and wavelength maxima; S cones have a much lower sensitivity.


Figure 8: Sensitivity curves for the three cone types.

Examination of the data on a log scale (Figure 9) shows that all three cone types in fact have similar, low sensitivities in the blue and purple region, but L cones do not have the large, short-wavelength sensitivity peak possessed by S or M cones.



Figure 9: Cone sensitivities on a log scale.

One consequence of the marked overlap between M and L cones is that their responses to a given colour will be highly correlated. Transmitting the signals from each cone type straight to the visual cortex in the brain would therefore be inefficient; it would also require four separate signals - S, M, L and brightness. What happens instead is a current topic of research and debate. All researchers seem to agree, however, that colour difference signals are produced.

Most researchers agree that in a second stage of colour detection, the difference of the M and L cones is used to provide a signal which discriminates between orange and bluish green.

The sum of the M and L cones is also transmitted, to provide a brightness channel. Rods, which are saturated at photopic light levels, provide a small and effectively constant input to this channel, but there seems to be no input from S cones.

A second colour difference channel is provided by a weighted combination of S, M and L cones to aid discrimination of greenish yellow from purplish blue. The exact weightings and combinations of cones, and the probable multiplexing of difference signals for transmission to the brain, are not yet known with certainty.

It is likely that there is a third stage of colour processing, somewhere in the brain, to generate three opposing pairs of colours: black/white, red/green and blue/yellow. Each of these colours is commonly seen as being in some way unique or distinct from the others. This idea of opponent colours has a long history, being described by Leonardo da Vinci and Goethe, among others.

These three stages – detection by rods and cones, initial combination and final combination – are shown in Figure 10. The thickness of each line gives an idea of the contribution of each factor. For example, the contribution of S and M cones to the second stage greenish yellow / purplish blue channel is about equal, with a smaller contribution from L cones.

University of Manchester 11 S M L rod

+/- - +

+ -

yellow/blue red/green black/white

Figure 10: Opponent colour signals.

2.7 The fovea

This is a small yellowish spot on the retina, directly in line with the optical axis of the eye shown in Figure 3. The fovea is the area most sensitive to subtle variation in colour, and is also most sensitive to small details of lightness and shape. It contains few rods, in contrast to the outer edges of the retina, far from the optical axis, which are rich in rods and optimised for detecting motion at the edges of vision – a clear survival advantage. Because the eye can be moved to point towards any object of interest, the small size of the fovea is not a disadvantage.

The nerves and blood vessels which cross most of the retina, and through which light must pass to reach the photoreceptors, are pushed aside from the fovea to reduce blurring. Extra large M and L cones are packed into the fovea in a hexagonal tiled pattern, to give the maximum spatial resolution for the lightness (M + L) and orange/bluish-green (L – M) second stage channels. Less than two percent of the foveal receptors are S cones, as these make no contribution to the lightness channel. A consequence of this low S density is that the spatial resolution - the amount of fine detail that can be seen - is much lower for blues and violets than for other colours.

3 Measuring colour

3.1 Colour matching

As we have seen, the phenomenon of colour is a subjective one; a model of colour must take into account our knowledge of the mechanism of colour vision if it is to have any utility. The most obvious way to compare two colours for similarity is thus to look at them side by side. In the early days of colour science, this is exactly what was done.

3.1.1 Tristimulus colour matching

To exactly reproduce the colour of a given object, it would at first seem necessary to have a multitude of light sources corresponding to all the different spectral colours and adjust the intensity of each one separately until the object’s spectrum was precisely duplicated.

In practice, however, equivalent colour sensations can be produced by a mixture of only three colours. This is because the analytical resolving power of the eye for colour is poor, compared to the resolving power of other sense organs such as the ear or the nose; a complex light stimulus is perceived as a single sensation. For example, given a complex audio stimulus, such as a concert orchestra, it is possible to resolve the individual instruments and listen to just the violins. It is not possible, given a complex visual stimulus such as white light, to ‘pay attention’ to just the red components. (In contrast, the spatial resolving power of the eye is much better than that of the ear).


Figure 11: Trichromatic colour matching.

Figure 11 shows the experimental set-up for a colour matching experiment. Lights are projected onto a diffuser, shown as a grey rectangle in the figure, so that the observer sees a uniform single colour. The unknown light to be matched, marked ‘?’, is viewed side by side with the three standard lights, the intensities of which are individually varied until the colours are seen to match.

Three lights often chosen for colour matching experiments are monochromatic (single wavelength) sources at 700nm (scarlet red), 546.1nm (yellowish green) and 435.8nm (bluish violet). These are shown in Figure 11 and Plate 26.

The green and blue-violet lights correspond to sharp peaks in the spectrum from a mercury vapour lamp; this allows calibration and exchange of experimental data between different sites. The red light is in an area of the spectrum where changes in wavelength produce little change in perceived colour, minimising the effect of mis-calibration.

In some cases, a match can only be obtained by adjusting the unknown colour; this is done by adding a proportion of one or more of the standard lights using a second set of standard lamps, shown greyed in Figure 11. This is equivalent to a negative quantity of one or more lights being required. The specification of a colour in terms of the amounts of energy required from each of the three lights to match it is termed its tristimulus value.

3.1.2 Additivity

The colour resulting from two coloured lights can be exactly predicted; it is the sum of the tristimulus values of the two lights. The tristimulus value of a 50-50 mixture of two lights is thus the average tristimulus value.

This important property, termed additivity, allows the colour of a mixture of an arbitrary number of lights to be predicted. Considering the spectrum of a colour to be made of a large number of wavelength bands allows the tristimulus value of any object to be calculated as the additive mixture of these bands.

If the predicted tristimulus value of a mixture is positive, it can be mixed with the colour matching apparatus described. This is true even if one of the components of the mixture has negative tristimulus values.
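As a small illustration of additivity, the following Python sketch adds tristimulus values directly; the function name and the two example lights are hypothetical, not taken from the text.

    # Additive mixture of two lights expressed as CIE tristimulus values.
    # The colour of a mixture is the weighted sum of the tristimulus
    # values of its components; a 50-50 mixture is simply the average.

    def mix(xyz1, xyz2, fraction=0.5):
        """Tristimulus value of a mixture containing 'fraction' of light 1
        and (1 - fraction) of light 2."""
        return tuple(fraction * a + (1.0 - fraction) * b
                     for a, b in zip(xyz1, xyz2))

    light_a = (12.0, 10.0, 3.0)     # hypothetical orangish light
    light_b = (4.0, 6.0, 18.0)      # hypothetical bluish light

    print(mix(light_a, light_b))    # 50-50 mixture: (8.0, 8.0, 10.5)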

3.2 A standardised observer

Based on a series of matching experiments, a standard observer was defined in 1931 by the International Commission on Illumination (Commission Internationale de l’Éclairage, CIE). This is a set of data which defines three primary colours for colour measurements and states, for each wavelength interval, the amount of these primaries which would be required to match a spectrally pure colour for a statistically normal (non ‘colour blind’) observer. A graph of the matching functions is shown in Figure 12.


Figure 12: CIE colour matching functions.

The three primaries are called X, Y and Z. A graph of the amounts of each CIE primary required to match any pure spectral colour is called the matching function, and is shown in Figure 12. To match a particular colour, a vertical line is drawn at that colour’s wavelength and the quantities read off from the intersections with each matching function. For example, to match the blue/violet colour of wavelength 450 nm requires 0.33 units of X, 0.04 units of Y and 1.77 units of Z.

In mathematics, many problems are made easier by the use of imaginary numbers, which contain the square root of minus one. Similarly, the X, Y and Z primaries used to define the standard observer are ‘imaginary’ colours, in that they do not correspond to visible colours. They have the property of being considerably more saturated than real colours, so that no real colour requires a negative contribution from any of the primaries for a colour match. X is a supersaturated purplish red; Y is a supersaturated form of the real spectral green of wavelength 520nm, and Z is a supersaturated form of the real spectral blue of wavelength 477nm. Also, the spectral matching function of the Y primary was chosen to exactly match the CIE standard photopic luminous efficiency function shown in Figure 7. It therefore carries all the luminance information about the colour.

The assumptions made in defining this observer were that the colour subtends a visual angle of 2° or less and the illuminant is not too dissimilar to daylight. These are easy conditions to meet in practice. The restriction on angular size is so that the image of a patch of colour on the retina falls on the fovea, the area most sensitive to small changes in colour. This is the usual situation when looking at a coloured object. For those occasions – rare in practice – where wide field colour specification is required, the CIE 1964 Supplemental Observer should be used. The experiments which were performed to define the 1964 observer used a 10° field, so the resulting CIE values are denoted X10, Y10 and Z10. The shape of the matching functions is broadly similar to the 1931 Standard Observer, although tristimulus values calculated from the two observers are different and should not be mixed.

University of Manchester 15 Because the standard observer is a mathematically defined set of functions, the results of colour matching experiments can be simply calculated without actually having to do the experiment. Colour measurement can thus become an auto- mated process.

3.3 Coloured objects

There are two broad classes of coloured object, shown in Figure 13 and in Plate 27. Emissive objects produce their own light. Reflective objects, on the other hand, are totally dependent on an external light source; they provide a modification of the colour of the illuminant by absorbing different amounts of light at different wavelengths. Fluorescent objects are a special case, in that they take in light at one wavelength and re-emit some of it at a longer, lower energy wavelength.


Figure 13: Reflective and emissive objects.

3.4 Emissive colour

3.4.1 Calculating tristimulus values

The colour of a light source is defined by the quantities of X, Y and Z primaries which would be required to match it. To calculate these quantities, the visible spectrum of 380 to 730nm is divided into a number of wavelength intervals and the intensity of the sample measured for each interval to produce its spectrum. A 10nm interval is commonly used. Then, for each primary in turn, the height of the sample spectrum is multiplied by the height of that primary’s matching function. These products are summed across all wavelength intervals to calculate the overall quantity of that primary required for the match. This is illustrated in Figure 14.


Figure 14: Calculation of CIE tristimulus values.
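The multiply-and-sum procedure of Figure 14 can be sketched in a few lines of Python. The lists below are placeholders standing in for a measured spectrum and the CIE matching functions tabulated at the same 10nm intervals; they are not real data.

    # Tristimulus values: multiply the sample spectrum by each matching
    # function, interval by interval, and sum the products.

    def tristimulus(spectrum, x_bar, y_bar, z_bar):
        X = sum(s * x for s, x in zip(spectrum, x_bar))
        Y = sum(s * y for s, y in zip(spectrum, y_bar))
        Z = sum(s * z for s, z in zip(spectrum, z_bar))
        return X, Y, Z

    # Placeholder data covering just three wavelength intervals:
    spectrum = [0.2, 0.5, 0.3]
    x_bar = [0.3, 1.0, 0.2]
    y_bar = [0.1, 0.9, 0.6]
    z_bar = [1.5, 0.05, 0.0]

    print(tristimulus(spectrum, x_bar, y_bar, z_bar))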

3.4.2 The XYZ diagram.

The resulting value (X, Y, Z) may be plotted on a 3D diagram, and will fall in the positive octant within the cone-shaped solid shown in Figure 15 and in colour in Plate 28. Notice that the coordinate axes are not inside this solid; the XYZ primaries are imaginary colours. Black, corresponding to the lack of light, is at the origin. The curved boundary represents the tristimulus values of pure spectral colours. Because these are of a single wavelength, they represent the maximum attainable saturation. This boundary is called the spectral locus; all visible colours are inside or on it.

Wavelengths from 400nm (violet) to 700nm (red) are shown; note that the wavelength spacing is not at all even. The straight line connecting the ends of the spectral locus corresponds to additive mixtures of the extreme spectral red (nearest the infrared) and the extreme violet (nearest the ultraviolet), which produce the purples.

University of Manchester 17 Y 520nm

500nm 600nm

700nm

X

Z 400nm

Figure 15: The CIE 1931 XYZ diagram.

3.4.3 Automatic measurement

In practice, the process of measuring the spectrum of a light source and obtaining its tristimulus values is automated. An instrument called a spectroradiometer measures the brightness at each wavelength interval with a photocell, and has the values of the standard observer matching functions at each wavelength interval stored internally. A small microprocessor performs the calculations, and readout is directly in the form of X, Y and Z tristimulus values.

A luminance meter is similar but cheaper, as it only gives a readout of the Y measurement.

3.5 Illuminants

Many coloured objects do not themselves emit light; their colour is due to reflection of light from other light sources. The object absorbs varying amounts of light of each wavelength, and the unabsorbed portion is reflected back to the eye to give the sensation of colour. Clearly, the quality of light illuminating the object will affect the perceived colour.


3.5.1 White light

What is seen as white light generally does not have a flat spectral response. Human eyes have evolved to consider daylight the ‘normal’ illuminant. For centuries, artists have used overcast northern daylight for preference when mixing colours, an important consideration when a painting is being worked on for a prolonged time and colours must match those used earlier in the work. The quality of daylight is very variable, depending on location, season and degree of cloud among other factors. It is clearly desirable to have a standardised light source, representative of daylight. It is no surprise then that the CIE has recommended that daylight-like sources be used for matching.

3.5.2 Illuminant C

The CIE have produced such a standard, called illuminant C. This consists of two parts:

1. a tungsten light, the electrical filament of which is operated at a precise temperature, which gives a very warm, orange light. (This light is a standard, Illuminant A, but is little used by itself.)

2. a filter, consisting of a tank of blue liquid, the chemical composition of which is specified by the CIE, to produce a simulation of neutral daylight.

While this arrangement is suitable for laboratory use, the light output is fairly low and the liquid tank inconvenient. Furthermore, Illuminant C is lacking in the near ultraviolet region, making it less useful for fluorescent materials. Recognising this, the CIE has specified another class of daylight illuminants which rectify these deficiencies.

3.5.3 The D series illuminants

These differ according to their colour temperature, which is a one dimensional specification for the colour of an approximately white light; the colour produced by a theoretical black body radiator when heated to a particular white-hot temperature. The most common D illuminant corresponds to a temperature of 6500°K, and is thus termed D65. Other illuminants in the D series are sometimes encountered, notably D50 (5000°K) in the graphic arts industry. The spectra of illuminants A, C and D65 are shown in Figure 16. Notice that in the ultraviolet region (below 400 nm) the curve for Illuminant C is much lower than for D65.

University of Manchester 19 Illuminant A Illuminant C Illuminant D65 2.0

1.0 Relative intensity

0.0 300.0 400.0 500.0 600.0 700.0 800.0 Wavelength (nm) Figure 16: Spectra of some standard illuminants.

The spectrum of D65 is complex, and difficult to reproduce exactly with an artificial light. There is no actual apparatus specified as part of the standard. Instead, there is a procedure to determine how good a particular light is at giving the same colour matching results as D65.

A number of selected paint samples are measured under the light in question and their tristimulus values compared to theoretical values calculated from the reflectance spectrum of each paint chip and the spectrum of D65. The results from each chip are combined to obtain an overall index of colour matching fidelity. Over 90% is a good result; below 60% will give significant errors.

The phenomenon of two samples matching in colour under one illuminant but not another is called metamerism. The CIE procedure is designed to minimise metamerism between the theoretical D65 and practical implementations of it.

3.5.4 Other illuminants

Fluorescent strip lamps are frequently encountered as indoor lighting. These typically produce metamerism with colours which were specified using a daylight illuminant. Figure 17 compares the spectrum of a typical fluorescent light with D65. The differences are considerable.



Figure 17: Spectra of daylight and a fluorescent striplight.

Ordinary household light bulbs have a spectral distribution similar in overall shape to Illuminant A, shown in Figure 16. As this is significantly different from D65, metamerism problems can be expected with light bulbs too. It is unfortunate that the two most common sources of indoor illumination are very different from daylight. This is why, when a precise estimate of colour quality or colour balance must be done visually, a special matching cabinet is used. This is simply a box containing a standardised daylight source, into which the sample is placed for viewing. It is painted grey on the inside to provide a neutral background.

3.6 Reflective colour

Measurement of the tristimulus values of a light source involves multiplying the emission spectrum of the sample with the spectra of the three standard observer matching functions. For reflective light, the procedure is the same except that the emission spectrum of the illuminant is first multiplied by the percentage reflectance of the sample for each wavelength interval. This converts the spectrum of the incident illuminant to the spectrum of the reflected light. For this reason, specification of the colour of a reflective object by tristimulus values is meaningless without also specifying which illuminant was used for the measurement.
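The same summation used for emissive colour applies once the illuminant has been multiplied by the sample's reflectance. A short Python sketch, again with placeholder numbers rather than real spectra:

    # Tristimulus values of a reflective sample: multiply the illuminant
    # spectrum by the reflectance at each wavelength interval, then sum
    # against the matching functions as for an emissive colour.
    # All lists are placeholders sampled at the same intervals.

    illuminant  = [0.8, 1.0, 0.9]        # relative power of the illuminant
    reflectance = [0.10, 0.45, 0.20]     # fraction reflected per interval
    x_bar, y_bar, z_bar = [0.3, 1.0, 0.2], [0.1, 0.9, 0.6], [1.5, 0.05, 0.0]

    reflected = [i * r for i, r in zip(illuminant, reflectance)]

    X = sum(s * x for s, x in zip(reflected, x_bar))
    Y = sum(s * y for s, y in zip(reflected, y_bar))
    Z = sum(s * z for s, z in zip(reflected, z_bar))
    print(X, Y, Z)    # only meaningful if the illuminant used is also stated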

3.6.1 Adaptation

The visual appearance of a white surface, such as a piece of paper, varies slightly when seen under different illuminants, but still looks white. In contrast, the measured spectrum and tristimulus values will vary considerably. This phenomenon is termed colour constancy and is due to complex visual processing in the brain which tries to correct for slow changes in overall light level and quality. This is dependent on the overall state of adaptation of the eye. For example, a white sheet of paper will look white under the orangish glow of a tungsten light bulb or the bluish glare of a fluorescent striplight. This is because the illuminant fills the whole field of vision and provides the dominant adaptive stimulus. If a photograph is taken of the paper under tungsten light, there will be a marked orange tint to the white paper when the print is developed. This is because the amount of light reflected from the photograph is a small fraction of the total light entering the eye, which does not therefore adapt to it.

3.6.2 The white point

To account for this phenomenon, it is customary to define a white point which is taken to be the colour currently accepted as white. For emissive colour, this is one of the standard ‘white’ illuminants. For reflective colour, a theoretical entity called the perfect diffuse reflector is defined to have the convenient property of 100% reflectance at all wavelengths, which allows the illuminant to become the white point.

The tristimulus values of a reflective sample may be measured with a spectrophotometer, which is similar to a spectroradiometer except that it also contains a light source. A spectrophotometer is calibrated by simply measuring a pure white surface, to reflect the illuminant back onto the detector.

3.6.3 Fluorescent colours

These have the property of absorbing light at one wavelength and re-emitting it at a longer wavelength (which will have less energy). In many cases, the absorbed light will be in the ultraviolet range but the emitted light will be visible. As this affects the measured tristimulus values, it is important that a daylight illuminant is used which has similar levels of ultraviolet to real sunlight, a failing of the original Illuminant C.

Substances which absorb ultraviolet and emit blue light are used in some washing powders; the blue neutralises the yellowish tint of residual soiling to give the illusion of white. Such substances are termed optical brighteners.

3.7 Chromaticity

3.7.1 The CIE 1931 (xy) diagram

If a given colour is increased in brightness, the amount of light required from each primary to match the colour increases. The increases will be in proportion, so the ratio X:Y:Z remains constant as the colour moves away from the origin.

It is often useful to examine the colour of a sample separately from its brightness. To do this, the tristimulus values are normalised:


x = X / (X + Y + Z)

y = Y / (X + Y + Z)

z = Z / (X + Y + Z)

Clearly, x + y + z = 1 in all cases. It is therefore customary to drop the z coordinate and produce a 2D plot of x against y. This is equivalent to projecting the XYZ colour solid onto the X + Y + Z = 1 plane, which is shown in Figure 18.


Figure 18: The X + Y + Z = 1 plane on the CIE 1931 XYZ diagram.

The resulting diagram is called the CIE 1931 chromaticity diagram, and is shown in Figure 19 and in Plate 29. It represents the perceptual attributes of hue and saturation, separated from luminance.

Chromaticity (x,y) values are sometimes encountered together with a Y value (xyY) to allow conversion back to XYZ.

University of Manchester 23 525 0.8

550

0.6 500 575

y 0.4 600

660-780

0.2

475 450 0.0 0.0 0.2 0.4 0.6 0.8 x

Figure 19: The CIE 1931 chromaticity diagram
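The normalisation above, and the return trip from xyY to XYZ, take only a few lines. A Python sketch (the function names are illustrative):

    # Convert between CIE XYZ and chromaticity plus luminance (xyY).

    def xyz_to_xyy(X, Y, Z):
        s = X + Y + Z
        return X / s, Y / s, Y                  # (x, y, Y)

    def xyy_to_xyz(x, y, Y):
        X = x * Y / y
        Z = (1.0 - x - y) * Y / y
        return X, Y, Z

    # The 450 nm match quoted in Section 3.2:
    print(xyz_to_xyy(0.33, 0.04, 1.77))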

Remember that in Section 3.1 colour matching experiments used specific calibrated wavelengths for yellow/green and blue/violet lights, but red light was chosen to be in a wavelength range where mis-calibration would not affect the colour. Looking at Figure 19, notice that colours from 660nm (red) right up to 780nm, the limit of visible colour bordering on infra-red, are at the same place on the chromaticity diagram; they are all the same hue. A similar ‘bunching up’ of wavelengths occurs at the border of violet and ultraviolet (380nm).

One problem with this diagram is that it is not perceptually uniform. In other words, the distance between two colours which are just noticeably different varies across the surface of the diagram.

3.7.2 CIE 1976 uniform chromaticity scale (UCS)

Investigation of the perceptual uniformity of the 1931 chromaticity diagram, by plotting the size of a just noticeable colour change in various parts of the diagram, showed that the portion at the top of the curve, in the green region, contains little variation in colour compared to the blue-violet portion. This perceptual discrepancy has been likened to the distortions on flat maps of the world, in that it cannot be removed entirely by a linear projection of the 1931 chromaticity diagram. However, some projections will be better than others. One such projection was recommended by the CIE in 1976, the uniform chromaticity scale (UCS).

Also known as the u′v′ diagram, from the labels on its axes, this is a projection of CIE 1931 XYZ space designed to produce much less distortion than the xy diagram. (The curious ‘dash’ notation is intended to distinguish it from a similar (uv) diagram which was used prior to 1976.) The axes are calculated as follows:

u′ = 4X/(X + 15Y + 3Z) = 4x/(-2x + 12y + 3)


v′ = 9Y/(X + 15Y + 3Z) = 9y/(-2x + 12y + 3)

The resulting diagram, shown below in Figure 20, is of the same general shape as the 1931 chromaticity diagram but stretched to give a more uniform distribution of colours.


Figure 20: The CIE 1976 uniform chromaticity scale diagram
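The u′v′ formulae translate directly into code. A minimal Python sketch; the XYZ values in the example are the commonly quoted ones for D65 white and are used purely for illustration:

    # CIE 1976 u'v' chromaticity coordinates from tristimulus values.

    def uv_prime(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    # D65 white: expect roughly u' = 0.198, v' = 0.468.
    print(uv_prime(95.047, 100.0, 108.883))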

3.7.3 Putting chromaticity diagrams to work

Chromaticity diagrams have a variety of uses. They all share the property that an additive mixture of two colours will lie along the line connecting them. This can be used to calculate colorimetric data such as the dominant wavelength and excitation purity of a colour.

Dominant wavelength is determined by constructing a line from the white point, through the colour in question, to the spectral locus and reading off the wavelength. This is shown in Figure 21. Regardless of the spectral composition of the original colour, an equivalent colour sensation will be obtained by mixing monochromatic light of the dominant wavelength with the specified white. The excitation purity gives the proportions of this mixture. In this example, the excitation purity is 40%.

University of Manchester 25 525 0.8 dominant wavelength 540 nm 550

0.6 500 575 sample

y 0.4 600

D65 whitepoint 660-780

0.2

475 450 0.0 0.0 0.2 0.4 0.6 0.8 x Figure 21: Dominant wavelength calculation

In other words, the sample colour is made from the additive mixture of 40% spectral light of wavelength 540nm, and 60% of D65 white at the same luminance.
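The construction in Figure 21 is simple plane geometry: excitation purity is the white-to-sample distance divided by the white-to-locus distance along the same line. A Python sketch with illustrative coordinates (the sample and locus points below are placeholders, assumed to be collinear with the white point):

    import math

    # Excitation purity from chromaticity coordinates.

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    white  = (0.3127, 0.3290)    # D65 white point (x, y)
    sample = (0.34, 0.45)        # sample chromaticity (placeholder)
    locus  = (0.38, 0.62)        # intersection with the spectral locus (placeholder)

    purity = distance(white, sample) / distance(white, locus)
    print(round(purity * 100), "% excitation purity")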

Chromaticity diagrams may be used to pick complementary colours by selecting points opposite to each other across the white point, and to design continuous colour scales by tracing straight or curved paths across the diagram. In general, however, a three dimensional colour model is used for this task so that the brightness of the colours can also be altered. Colour models are discussed in the next section.

3.8 Atypical colour response

More than 95% of the world population have statistically normal colour vision. These are the people whose colour matching abilities are represented mathematically by the CIE standard and supplemental observers.

Although people with an atypical colour response are often called ‘colour blind’, there are in fact very few people who have absolutely no colour perception. The proportion of the population with reduced colour discrimination is however surprisingly large. Atypical colour response is sex-linked; while being found quite rarely in Caucasian females and non-Caucasians, it occurs in 8% of Caucasian males.

People with normal colour vision are termed trichromats because they have three functional cone types. Those with reduced functioning of one cone type are called anomalous trichromats.


In some people, one cone type is absent or completely non-functional. Because they have two functioning cone types, they are termed dichromats. Finally, those very rare people with only one functional cone type, who see only in shades of grey, are called monochromats.

Dichromats will see as identical certain colours which are clearly different to trichromats. These confused colours lie along straight lines on a chromaticity diagram, converging on a single point.

As one would expect, there are three types of dichromatism, depending on which one of the three cone types is affected. Each type has a different confusion point.

• Protanopia is due to missing or dysfunctional L cones. Protanopes have greatly reduced discrimination of reds from greens, and reds look dimmer than normal. The confusion point P, shown in Figure 22, is at or near u′ = 0.61, v′ = 0.51, very close to the far red corner of the chromaticity diagram.


Figure 22: Protanopic confusion lines.

• Deuteranopia is due to missing or dysfunctional M cones. It can also be caused by the lack of the L-M colour difference signal shown in Figure 10. Deuteranopes also have reduced discrimination of reds from greens, but without any colours seeming dimmer than normal. The deuteranopic confusion point lies well off to the left of the chromaticity diagram, at or near u′ = -4.75, v′ = 1.31. As can be seen from Figure 23, this means that the lines of confused colours are nearly parallel.

University of Manchester 27 0.6 525 550 575 600 500

675-780 0.4

v’

0.2 475 D at u’ = -4.75 v’ = 1.31

450 400 0.0 0.0 0.2 0.4 0.6 u’ Figure 23: Deuteranopic confusion lines.

• Tritanopia is caused by a lack or deficiency in the S cones. Because these make no contribution to the lightness channel, all colours are the same brightness as normal. However there is greatly reduced discrimination between yellows and blues. The confusion point, shown in Figure 24, is at or near u′ = 0.26, v′ = 0.003, very close to the violet corner on the chromaticity diagram.

Figure 24: Tritanopic confusion lines.


In each case, confused colours lie along converging lines; these may be used to select colour schemes that can be used by colour deficient subjects.
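One rough way to apply this when checking a colour scheme is to test whether two colours lie at nearly the same angle as seen from the relevant confusion point, since colours along one such line risk being confused. The Python sketch below uses the confusion points quoted above; the u′v′ test colours and the angular tolerance are illustrative assumptions, not standard values:

    import math

    # Rough test: two colours may be confused by a dichromat if they lie
    # close to the same straight line through the confusion point.
    # (The simple angle difference ignores the 360 degree wrap-around.)

    CONFUSION_POINTS = {
        "protanope":   (0.61,  0.51),
        "deuteranope": (-4.75, 1.31),
        "tritanope":   (0.26,  0.003),
    }

    def confusable(c1, c2, kind, tolerance_deg=2.0):
        px, py = CONFUSION_POINTS[kind]
        a1 = math.atan2(c1[1] - py, c1[0] - px)
        a2 = math.atan2(c2[1] - py, c2[0] - px)
        return abs(math.degrees(a1 - a2)) < tolerance_deg

    colour_1 = (0.45, 0.52)    # u'v' placeholders
    colour_2 = (0.13, 0.56)
    print(confusable(colour_1, colour_2, "protanope"))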


4 Colour models

4.1 Why use colour models?

Given the complexities of colour perception, it is useful to define a simplified, abstract method of succinctly specifying colour with a small number of parameters. Typically, the tristimulus theory is used so there are three parameters. There may be other underlying assumptions too. Often colour models are defined in terms of three primary colours, from which all others are obtained by mixture. In other cases, the three parameters represent more readily understood attributes such as lightness or saturation.

Considering the three parameters to be orthogonal axes produces a geometric colour space. The geometric position of a colour in this space can be used to see its relationship with other colours. Plate 48 shows a program which can be used to mix colours in a variety of different colour spaces.

4.2 Primary colours

The choice of ‘primary’ colours depends on a number of factors:

• can they take negative values?
• are they hardware dependent?

If the primaries can be negative and are not constrained by hardware, any primaries can be used. A single primary is represented by a point on a chromaticity diagram; it can only produce that one colour at varying intensities. Two primaries produce a line segment on a chromaticity diagram, as shown in Figure 25. Any colour on that segment can be produced by a non-negative mixture of the two primaries. If negative values for primary colours are allowed, the line segment can be extended out from each primary to the spectral locus as shown by the dotted line in Figure 25; any colour on this line can be specified.


Figure 25: Two primaries define a line.

Figure 26 shows how a third primary C can be used to cover the entire visible area; on the line joining A and B the contribution from C is zero; on the side towards C, the value of C is positive and on the other side it is negative.

Figure 26: Negative and positive contributions from one primary.

Three primaries define a plane, and so any visible colour can be specified with any arbitrary choice of primaries provided negative values are allowed. Figure 27 shows how three arbitrary primaries - a yellow (A), a purple (B) and a turquoise (C) - can be used in this fashion. Note the large areas that require one or more primaries to be negative.



Figure 27: Three primaries define a plane.

If primaries cannot take negative values, only colours within the central triangle can be produced. In this case, it makes sense to maximise the area of this triangle by aligning it with the broadly triangular shape of the chromaticity diagram. This gives one primary somewhere near the far red corner, one in the green corner and one near the far violet corner.

Another way to increase the range of colours is to use more primaries. Figure 28 shows a system with five primaries. All colours within the pentagon can be produced in more than one way. Removing any one primary reduces the number of colours that can be produced.

Figure 28: A system using five primaries.

4.3 CIE colour models

4.3.1 CIE 1931 XYZ

This colour space has already been discussed; it is the primary colour space for colorimetric measurement and the basis for other CIE colour spaces. It has the advantage of being rigorously defined, and an international standard. Whilst good for presenting a description of an existing, measured colour, it is not particularly easy to use for specifying new colours. This is because the primaries are not visible colours; while they are always positive, the axes lie outside the range of visible colour as can be seen in Figure 29.


Figure 29: The CIE 1931 XYZ colour space

4.3.2 1976 CIELUV

If the luminance of a colour is divided by that of some reference white, a relative luminance scale from 0 to 100% (black to white) is obtained. Measured luminance, however, does not correspond well to perceived lightness; the scale looks markedly non-uniform, with all the dark colours bunched up at one end. The CIE has recommended a non-linear formula for lightness, L*, which corresponds more closely to the perceived sensation. The medium, 50% grey occurs at L* = 50. The appearance of grey scales constructed from luminance and lightness is shown in the diagram below.


Figure 30: Lightness (above) and luminance (below).

L* = 116 (Y/Yw)^(1/3) - 16    for most values of Y (Y/Yw > 0.008856), or

L* = 903.3 (Y/Yw)    for very dark colours (Y/Yw ≤ 0.008856)

Yw is the Y tristimulus value for the reference white, in other words that white to which the eye is adapted. For most purposes, a standard illuminant such as D65 is used as the reference white.

The CIE has recommended a uniform, 3 dimensional colour space incorporating both the UCS and this lightness parameter, called the CIE 1976 (L*u*v*) colour space. Shown in Figure 31 and in Plate 30, it is commonly referred to as CIELUV, and may be considered a uniform version of the CIE 1931 XYZ space.

The formulae are:

u* = 13 L* (u′ - u′w)

v* = 13 L* (v′ - v′w)

where u′w and v′w are the UCS coordinates of the chosen reference white.

These formulae correspond to translating the origin of the UCS diagram to the white point and scaling the relative chromaticity co-ordinates by the lightness so that the geometrical distance between two colours is reduced as they are made darker. This takes account of the fact that dark colours look more alike than light ones, even when their chromaticities are the same. The resulting colour space therefore forms a cone-like solid. Black, at L* = 0, is thus a single colour at the apex of this cone.

CIELUV space could also be considered as a stack of scaled UCS diagrams, as Figure 31 shows.


Figure 31: The CIE 1976 (L*u*v*) colour space.
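Putting the lightness and u*, v* formulae together gives a direct conversion from XYZ to CIELUV. A Python sketch, assuming D65 as the reference white and using illustrative tristimulus values in the example:

    # CIE XYZ to CIELUV (L*, u*, v*), following the formulae above.

    def uv_prime(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    def xyz_to_luv(X, Y, Z, white=(95.047, 100.0, 108.883)):   # D65 assumed
        Xw, Yw, Zw = white
        ratio = Y / Yw
        if ratio > 0.008856:
            L = 116.0 * ratio ** (1.0 / 3.0) - 16.0
        else:
            L = 903.3 * ratio
        u, v = uv_prime(X, Y, Z)
        uw, vw = uv_prime(Xw, Yw, Zw)
        return L, 13.0 * L * (u - uw), 13.0 * L * (v - vw)

    print(xyz_to_luv(41.2, 21.3, 1.9))    # a bright red, for illustration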

As the spectral colours form a loop around the origin, it is possible to define a hue angle huv which specifies hue with a single numerical value. The positive u* axis is defined to be 0°, and angles are measured anticlockwise.

huv = arctan (v* / u*)

An advantage of this is that hue is a readily understood concept. The colours of the rainbow are arranged in a circle. The distance from the achromatic L* axis may then be used as a measure of chroma, or colourfulness:

C*uv = (u*^2 + v*^2)^(1/2)

Lightness, chroma and hue angle define an alternative, polar form of CIELUV, shown in Figure 32. This is easier to use for mixing colours than CIELUV.



Figure 32: Hue angle and chroma.
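The polar form is two lines of trigonometry on top of u* and v*. A sketch (the example numbers are arbitrary):

    import math

    # Chroma and hue angle from CIELUV u*, v* (the polar form).

    def luv_to_lch(L, u, v):
        chroma = math.hypot(u, v)
        hue = math.degrees(math.atan2(v, u)) % 360.0   # 0 deg along +u*, anticlockwise
        return L, chroma, hue

    print(luv_to_lch(53.2, 175.0, 37.8))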

Saturation is calculated from:

S* = C* / L*

Figure 33 shows planes of constant chroma and constant saturation in CIE LCH space. Chroma is clearly seen as independent of lightness.


Figure 33: Surfaces of constant chroma and constant saturation.

The CIELUV model, because of its greater perceptual uniformity than other models, has been used in the film and TV industries and is finding increasing use in computer graphics. It is generally used for emissive colours such as lights or computer monitors. Conforming PHIGS and PHIGS PLUS implementations are required to support this model.

4.3.3 1976 CIELAB

An alternative colour model is recommended by the CIE for reflective colours such as paints or dyed fabrics. It is optimised for quantifying the colour difference between two samples of nearly identical colour, such as between two batches of dyestuff, and producing similar numerical results to other existing colour difference formulae. The lightness parameter is the same L* as in CIELUV, but the other two are:

a* = 500 [ (X/Xw)^(1/3) - (Y/Yw)^(1/3) ]

b* = 200 [ (Y/Yw)^(1/3) - (Z/Zw)^(1/3) ]

Because of the cube roots in these equations, there is no chromaticity diagram for CIELAB. Straight lines in CIE 1931 xy space remain straight in the UCS, but would not in a chromaticity diagram based on CIELAB. It would therefore not represent additive mixture.

Since 1976, more complex non-Euclidean formulae have been devised for colour difference in CIELAB which add varying weights of lightness, hue angle and chroma depending on the region of colour space in which the samples lie. These formulae are used in high precision industrial applications.
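A sketch of the CIELAB conversion together with the simple 1976 Euclidean colour difference (the more recent weighted formulae mentioned above are not reproduced here). D65 is assumed as the reference white, and the two sample XYZ values are placeholders:

    # CIE XYZ to CIELAB, plus the 1976 Euclidean colour difference.

    def f(t):
        # Cube root, with the linear branch used for very dark colours.
        return t ** (1.0 / 3.0) if t > 0.008856 else (903.3 * t + 16.0) / 116.0

    def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):   # D65 assumed
        Xw, Yw, Zw = white
        fx, fy, fz = f(X / Xw), f(Y / Yw), f(Z / Zw)
        return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

    def delta_e(lab1, lab2):
        return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

    batch_1 = xyz_to_lab(41.2, 21.3, 1.9)     # placeholder measurements
    batch_2 = xyz_to_lab(40.9, 21.0, 2.1)
    print(delta_e(batch_1, batch_2))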

Because reflective colours are being specified or measured, the illuminant used should also be stated. In general, D65 is used unless there is some special reason to select another illuminant. Both CIELUV and CIELAB assume that the illuminant, or reference white, is close to natural daylight.

CIELAB has a polar form called L*C*ab hab, the subscripts serving to distinguish it from the polar form of CIELUV. The formulae are identical.

CIELAB is sometimes encountered in the specification of colour printers. Given the reference illuminant used, it is simple to convert these values to XYZ or LUV. The second edition of the Computer Graphics Metafile (ISO/IEC 8632: 1992) allows colours to be specified in either CIELUV or CIELAB. The Open Document Architecture, an ISO standard for compound documents, uses CIELAB for colour specification, as does the Image Processing and Interchange (IPI) standard.

4.4 Device dependent models

It is sometimes convenient or customary to specify colour directly in the native colour space of a particular device. In the case of devices which use emitted light, such as colour monitors, an additive geometrical space can be produced subject to certain constraints discussed later (Section 5). Devices which use reflected light, such as all forms of printing, are not additive and a geometrical space cannot be formed. While individual colours can be specified in such a system, the relationship between colours or the result of mixing two colours can only be determined on an empirical basis.


4.4.1 Red, green, blue (RGB)

This colour space is commonly used, and corresponds to the input data for a specific colour CRT computer monitor. The three primaries are the particular colours emitted by the three phosphors. It is therefore highly device specific; the same colour will be specified as two different sets of numbers on two different monitors. The parameters are the quantities of red, green and blue light to emit, generally in the range 0 to 1. The RGB colour space is shown in Figure 34 and in Plate 31.

One strength of the RGB colour space is that it is a unit cube, and thus all possible values of R, G, B correspond to realisable colour. This makes it convenient from a programming point of view, in that range checking is straightforward.


Figure 34: RGB colour space.

A major weakness is that colours specified in RGB space are not at all perceptually uniform, and it is not sensible to measure colour differences in RGB space. This colour model is further discussed in Section 5.1, (Displaying colour).

If the chromaticity coordinates of the monitor phosphors are known, and also both the chromaticity and luminance of the white produced by equal quantities of red, green and blue, it is possible to interconvert between RGB and the CIE colour spaces. This conversion is described in Appendix B.
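The following is a minimal sketch of how such a conversion matrix can be derived, in the spirit of the procedure in Appendix B. The phosphor chromaticities and D65 white point used are illustrative assumptions, not measurements of any particular monitor, and the RGB values are assumed to be linear (gamma correction is handled separately).

import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white, Y_white=1.0):
    def xyz_col(x, y):
        # Convert a chromaticity (x, y) to an XYZ column with Y = 1
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    M = np.column_stack([xyz_col(*xy_r), xyz_col(*xy_g), xyz_col(*xy_b)])
    white = xyz_col(*xy_white) * Y_white
    scale = np.linalg.solve(M, white)   # gun weightings that reproduce the white point
    return M * scale                    # scale each primary's column

# Illustrative EBU-like primaries with a D65 white point
M = rgb_to_xyz_matrix((0.640, 0.330), (0.290, 0.600), (0.150, 0.060), (0.3127, 0.3290))
print(M @ np.array([1.0, 1.0, 1.0]))            # XYZ of full-intensity white
print(np.linalg.inv(M) @ M @ [0.2, 0.5, 0.1])   # the inverse matrix recovers the RGB values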

RGB colour space is widely used in computer graphics and is supported by GKS, PHIGS and most other graphics systems. It is adequate for use in situations where producing different colours is more important than portability or reproducibility. Specifying colour is often more convenient if hue, chroma and lightness are separate parameters. There are two transformations of RGB space used to achieve this: HSV and HLS. These are considered next.

4.4.2 Hue, saturation, value (HSV)

The major diagonal of the RGB cube, from black at (0,0,0) to white at (1,1,1), forms an achromatic axis or gray scale. If the cube is rotated so that the white corner points towards the viewer and the black corner points away, as in Figure 34, a hexagon is seen with the hues radiating around the achromatic axis. The HSV colour space uses this concept to define a hue angle, saturation and a third parameter, value, which broadly corresponds to lightness. HSV is thus a polar coordinate model, and these terms are analogous to, but not identical with, the similarly named terms in the polar coordinate L*C*uv Huv form of the CIELUV model.

The space, shown in Figure 35 and in Plate 32, is a cylinder centred on the achromatic axis. Value is the distance along this axis, saturation is the radial distance from it. Hue is an angular measure with 0 representing red and 180 representing cyan (a greenish blue).

Because HSV uses saturation rather than chroma, the perceived change in colour as saturation varies between 0 and 1 is less for dark (low value) colours than for light (high value) colours. To compensate for this, the HSV colour space is often shown distorted to form a cone rather than a cylinder. Other diagrams show HSV as a hexcone, to reinforce the link with RGB. However, saturation still ranges from 0 to 1 regardless of value or hue so these changes do not represent the geometric space accurately.

Figure 35: The HSV colour space.


Being a transformation of RGB space, HSV shares the advantage that all possible values of H, S and V correspond to displayable colours. In addition, it is easier to mix colours than in RGB because the three parameters correspond more closely to perceptual attributes. On the other hand, HSV is just as device specific as RGB, so descriptions of colours in HSV are not portable. Like RGB, only displayable colours can be specified.

Unlike CIELUV, HSV space is not perceptually uniform. Equal increments of hue angle do not produce smooth changes of perceived hue. Also, the three parameters are not independent. For example, a pure yellow and a pure blue both have S=1, V=1, yet the yellow will have a significantly higher luminance and be perceptually lighter than the blue.
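A minimal sketch of the RGB to HSV conversion described above, assuming R, G and B in the range 0 to 1, is shown below. The Python standard library provides an equivalent routine, colorsys.rgb_to_hsv, which returns the hue as a fraction of a turn rather than in degrees.

def rgb_to_hsv(r, g, b):
    v = max(r, g, b)                      # value: the brightest component
    delta = v - min(r, g, b)
    s = 0.0 if v == 0 else delta / v      # saturation: relative spread of the components
    if delta == 0:
        h = 0.0                           # achromatic: hue is undefined, use 0
    elif v == r:
        h = 60.0 * (((g - b) / delta) % 6)
    elif v == g:
        h = 60.0 * ((b - r) / delta + 2)
    else:
        h = 60.0 * ((r - g) / delta + 4)
    return h, s, v                        # hue in degrees, 0 = red, 180 = cyan

print(rgb_to_hsv(1.0, 1.0, 0.0))          # pure yellow: (60.0, 1.0, 1.0)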

4.4.3 Hue, lightness, saturation (HLS)

Similar to HSV, this colour model has a lightness axis rather than a value axis. Pure colours, shown in Figure 36 and in Plate 33, have a lightness of 0.5, rather than the value of 1.0 they have in HSV. HLS may be considered a simple deformation of HSV produced by moving the white point as far above the pure colours as the black point is below them. Like HSV, it is a cylindrical colour space but is often drawn as a cone - in this case, a double ended one.

Figure 36: The HLS colour space.

HLS, like HSV, is simply another representation of RGB space. These two colour models may optionally be supported in a PHIGS implementation – they are defined but a conforming implementation need not support them.

4.4.4 Cyan, Magenta and Yellow (CMY)

CMY is sometimes presented as a colour space, and corresponds to the input data for colour printing. However it deals with the proportions of real pigments rather than abstract colours. Furthermore, mixing two colours is not additive, which makes the representation of CMY as a geometric solid of little value. Specification of colours in CMY, even when the CIE tristimulus values of the inks are known, is complicated by a great many factors as will be seen in Section 5.5. A variation of CMY adds black ink, and is called CMYK. (Black is referred to as K rather than B to avoid confusion with blue in RGB).
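A minimal sketch of the idealised RGB to CMYK relationship sometimes used as a first approximation is given below (C = 1 - R and so on, with the shared grey component extracted as black). As the text stresses, real inks do not behave this simply, so this illustrates the relationship between the models rather than a printing recipe.

def rgb_to_cmyk(r, g, b):
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)                      # print the common grey component as black ink
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0         # pure black: no coloured ink needed
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.5, 0.0))         # an orange: mostly magenta and yellow ink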

4.4.5 Video and broadcast colour models

These are derivations of particular calibrated RGB colour spaces, using monitor primaries whose chromaticities are well defined. In the context of computer graphics, they are met with when animated sequences are stored on video tape or broadcast on television. These are hardware-led colour models, optimised to make best use of the information capacity of the medium, and will be discussed in more depth in Section 5.1.

4.5 Other colour models

There are a wide variety of colour models which may occasionally be encountered in computer graphics and visualisation work. Some of these are used in a particular application domain, such as architecture or textiles. A number of models are specified as national standards in particular countries and will be met when producing work for those countries. Others are included here because they illustrate a particular point about colour science.

4.5.1 Munsell system

This is a perceptually uniform system for reflective colours used in the graphic arts, textile and paint industries, particularly in the USA, where it has been recommended by ANSI, the American National Standards Institute. It consists of a book of painted colour samples classified by three parameters – Munsell hue, value and chroma. There are five principal hues: red, yellow, green, blue and purple. A further five hues are mixtures of adjacent pairs, for example blue/green and purple/blue. Hues are referred to by their initial letters, for example R, BG. Each of these ten hues is further subdivided by a decimal number. There are four divisions reproduced in the Munsell Book of Colour (2.5, 5, 7.5, 10), giving 40 hues in total, which radiate in a circle from the achromatic value axis as shown in Figure 37 and Plate 35.


Figure 37: The Munsell hue wheel.

Value is specified by an integer greater than 0 (black) and less than 10 (white). Chroma is the radial distance from the achromatic axis, and ranges from 2 to 14 or more; the book has samples at steps of two. All colours with a chroma of 0 are on the achromatic axis and do not have a hue. An example of a colour specified in the Munsell system is 5.0PB/4/10, which means a purple-blue with value 4 (darker than a mid grey) and chroma 10 (very colourful, near the maximum attainable pigment limit for that colour). CIE values for all the patches, measured using Illuminant C, have been published; this particular colour has chromaticity values of x=0.1773, y=0.1659, and Y=0.1200 for example.

Designed by the artist Albert Munsell, this system relies on the subjective selection of hues for perceptual uniformity rather than a colorimetric approach. The original selection of colours was entirely ‘by eye’. Comparisons of the Munsell system with CIELUV and CIELAB show that, while none of these systems is completely perceptually uniform, the agreement between them is surprisingly close.

Colours in the Munsell system correctly separate out chroma and value. For example, a medium yellow such as 5.0Y has maximum chroma at a high value; this corresponds to the observation that the purest intense yellow is a light colour.

The diagram below is intended to give the idea of the Munsell system, and depicts two pages arranged about the achromatic axis. It is also shown in colour Plate 34.

Figure 38: A pair of leaves from the Munsell system.

4.5.2 Natural Colour System

The natural colour system (NCS) is an opponent colour system recommended by the Swedish Standards Institution. It uses the theory of opponent colours advanced by Hering, which states that there are two pairs of colours (red/green and yellow/blue). None of these colours resembles any of the others, and any colour can only contain a contribution from one of each pair. For example, a colour can be greenish blue, but cannot be reddish green or bluish yellow. There is a third pair, black/white, and colours can contain a mixture of both to form greys.

Initially, the opponent theory was disputed by proponents of the trichromatic theory, who pointed out that a yellow sensation can be produced with red and green light. It is now generally accepted that opponent colours, which as a concept date back to Leonardo da Vinci’s four visual primaries, are not in conflict with the trichromatic theory and indeed can be related to the opponent wiring of the retina. The point is not that yellow can be produced by mixture, but that it has the visual appearance of a distinct colour, having neither a reddish nor a greenish tint.

The three parameters of NCS space are hue (φ), blackness (s, from the German Schwarz) and chromaticness (c). Hues are specified as a percentage of two of the four colours. For example, a lime green might consist of a mixture of 65% yellow and 35% green. This would be written as Y35G. The relationship of all colours of a given hue φ to the achromatic axis can be drawn on an equilateral triangle. The distance of a particular colour from each edge may be expressed as a percentage; the percentages of white (W), black (S) and the hue (F) will always add to 100, so it is customary to omit the percentage of white. NCS is shown in Figure 39 and Plate 36.


Figure 39: The Swedish NCS.

Plotting the NCS colours on a CIE UCS shows that chromaticness is fairly uniform, but hue is not; there are more colours in the blue to red quadrant than the others. This means that complementary colours selected with NCS will not be the same as those selected with the CIELUV or Munsell colour models.

NCS has been used in some paint catalogues for the painting and decorating trade. Accordingly, it may occasionally be encountered in architectural visualisation work.

4.5.3 DIN system

The German national standards body, Deutsches Institut für Normung (DIN), has developed a colour model (DIN 6164) which again has a circular hue specification. Colours of the same dominant wavelength have the same hue (T), which ranges from 1 for yellow through red, purple, blue and green to 24 for greenish yellow. The other two variables are saturation (S) and darkness (D). The overall shape approximates a round-topped cone, with black at the apex. However, the maximum saturation varies for different hues. Because of the use of saturation rather than chroma, there is less difference between the dark colours than the light ones. In common with other models which have ‘leaves’ of constant hue, such as Munsell and NCS, low chroma colours near the achromatic axis are closer together than vivid, high chroma colours. CIE standard illuminant D65 was used to select the samples, and their CIE tristimulus values are published.

4.5.4 Coloroid system

This system is used primarily by architects and interior designers in Europe. As with the DIN system, the measure of hue is related to dominant wavelength. The spacing between hues was determined from a series of subjective assessments. Chroma is measured on a scale from 0, for the D65 reference white, to 100 for the spectrally pure colours. Lightness is related to CIE luminance, Y, with a square root weighting to give a perceptually even scale. Formulae have been published to convert between Coloroid specifications and CIE xyY values.

4.5.5 OSA cubo-octahedron

The Optical Society of America have designed a colour space which avoids the uneven spacing of hue wheel systems. It is based on a rhombohedral crystal lattice structure, where each point has twelve closest neighbours in 3D. The unit solid which stacks in this fashion is a cubo-octahedron (a cube with all eight corners sliced off). This space lattice provides an even sampling of the colour space, and allows samples to be readily ordered in planes which are not parallel to the axes. This is intended to help designers see new scales and arrangements of colour.

The three parameters in the OSA model are lightness (L), yellowness (j, French jaune) and greenness (g). The selection of colours for the lattice points was the result of over thirty years of colour matching experiments. D65 is used as an illuminant and, unusually, the CIE 1964 supplementary observer is used. This poses problems for the use of OSA in a computer graphics system, because monitor chromaticities are measured for the 1931 standard observer.

4.5.6 TekHVC

This relatively new system is a development of the polar coordinate L*C*uv Huv form of CIELUV. It has three parameters:

H = Huv - offset

V = L*

C = 7.50725 C*

where offset is the angle in the CIE UCS between the u’ axis and the line joining the selected white point to a particular ‘best’ red. This means that a hue angle of 0° always points to this red regardless of the white point. However, a hue angle of 90° will still point at different colours as the white point is changed. The scaling factor is intended to make the visual effect of a change in C the same as that of a similar change in V or H.

4.5.7 Colour naming scheme

This non-geometric model produces an English name for a colour. There is an achromatic axis with seven points along it: black, very dark gray, dark gray, gray, light gray, very light gray and white. Six hues are named: red, orange, yellow, green, blue and purple. There are three intermediate points between each hue, for example between yellow and green are, in order: yellowish green, yellow green, greenish yellow. Hues are modified according to saturation (greyish, moderate, strong or vivid) and lightness (very light, light, medium, dark, very dark). So an example of a colour specified using this scheme would be ‘greyish light orangish red’.

There are problems implementing this model in a computer graphics system in that the colours are specified as Munsell colours, which must be converted to CIE XYZ with a look up table. Also, it can describe very few colours (about 340).


4.5.8 X11 named colours

This is another naming scheme, although not as regular as CNS. Intended for the specification of such items as window borders and mouse cursors in the X Window System, it is essentially an enumeration scheme, with colours described as ‘Dodger Blue’ or ‘Wheat’ for example. The specification of each colour is given in terms of the RGB colour model. This has the inherent problem that colours look different on different monitors, so that they do not match people’s idea of the names. A number of ‘alternative’ specifications have been proposed, each of which looks correct on the particular monitor of the person who designed it.

The X11 named colour list has largely been superseded, from X11R5 onwards, by the much more comprehensive colour management services (Xcms). This allows the specification of colour as RGB, HLS, CIEXYZ, CIExyY, CIELUV, CIELAB, CIEu’v’Y or TekHVC and provides interconversion routines between these. The chromaticity coordinates of the monitor phosphors and white point have become a property of the root window, inherited by all other windows.

4.5.9 Pantone system

This is a proprietary system for specifying colour, widely used in the commercial graphic design world. Originally it specified a large series of pigments for spot colour. This has now been extended to the Pantone process colour range, which relates colours to percentages of standardised, Pantone certified cyan, magenta, yellow and black process inks used with standardised screen angles. The process colours can vary markedly from the ‘equivalent’ spot colour.

Pantone is generally used with a book of printed samples; some applications have a Pantone license and can produce on-screen colour, but this is only a rough approximation as the colours are specified in the RGB colour model. Pantone is a method of getting precise results, particularly for spot colour, but is tedious to use. CIE tristimulus values for the samples are available. Other similar systems include Focoltone and TrueMatch.

4.5.10 SML space

This tristimulus space is similar to CIE XYZ except that human cone pigment spectra are used as matching functions. Colours are thus defined in terms of the degree of excitation produced in each of the three cone types, before any visual processing is performed by the nervous system. SML space has been used in vision research, particularly in the design of colour blindness tests and in visualising the effects of vision defects. Given the specification of the particular cone pigment spectra used, it is possible to interconvert between SML and CIE 1931 XYZ.

4.5.11 Hunt-ACAM model

Developed to predict the appearance of colours irrespective of viewing conditions, this complex model requires two sets of measurements with different light sources. Using the CIE XYZ model as a base, it tries to take into account factors

such as adaptation, chromatic induction, and simultaneous contrast. The Hunt-ACAM model gives numerical predictions for colourfulness, saturation, intensity, lightness, and brightness.

5 Colour output

5.1 Displaying colour

Interactive computer graphics are displayed on a colour monitor. The signals to drive the monitor are generated by the video circuitry of the graphics workstation. A broad understanding of these components helps explain the limitations which are met with when displaying colour, and how to minimise or work around these restrictions.

5.1.1 Colour monitors

The majority of colour monitors in use today for computer graphics use a cathode ray tube (CRT), similar to that found in a television, to generate the picture. Other technologies, such as active matrix colour liquid crystal displays (LCD), do not at present give the high quality colours needed for computer graphics, although they are widely used for less critical applications such as portable computers.

The principle of a CRT is that one or more electron guns produce variable amounts of electrons in response to an applied voltage. The electrons are accelerated towards the front of the tube by applying a large positive voltage to a grid.

Figure 40: Anatomy of a CRT, showing the electron guns, focussing plates, deflection coils, accelerating grids and shadow mask.

The front of a colour tube is covered with three types of phosphor, which emit red, green and blue light when hit by electrons. Monochrome and greyscale monitors have only a single colour of phosphor. Electron beams from the guns are swept from top to bottom and left to right by the deflection plates to cover the

screen area, and the voltages applied to the three guns are varied to adjust the intensity of the electron beam and hence the brightness of light emitted. A shadow mask is used to ensure that the electron beam from each gun can only fall on the appropriate type of phosphor.

5.1.2 Monitor gamut

Producing different colours by variable mixture of light from three coloured phosphors is very similar to the colour matching experiments described in Section 3.1, except that:

• the light from the phosphors is not as saturated as a pure spectral colour
• negative values cannot be applied to the guns.

These differences mean that some visible colours cannot be reproduced on a CRT. The range of displayable colour is termed the gamut and varies for different makes and models of monitor. It may conveniently be depicted on a CIE 1976 UCS diagram, where it forms a triangle bounded by the monitor primaries. Each secondary lies on the line connecting the appropriate primaries, because the colours are additive. The white point should correspond to equal maximal output from the three guns. The example below (Figure 41) shows the gamut of the monitor on a VAXstation 3540 workstation.

Figure 41: The VAXstation 3540 gamut on the CIE 1976 UCS diagram.


The choice of monitor primaries is a trade off between obtaining a large gamut and making the display sufficiently bright. As the CIE luminous efficiency function (Figure 7) shows, the extremes of the visible wavelengths are seen as very dim. So the primary in the long wavelength corner tends to be a bright, orangish red rather than a dim deep red; similarly the primary in the short wavelength corner tends to be a fairly bright blue rather than a very dim violet.

The gamut of a monitor shrinks as the ambient light level increases, a fact which will be familiar to anyone who has tried to use a monitor in bright sunlight. Ambient light is reflected back from the monitor, adding white to all colours. This means that black becomes a dark grey. All colours move towards the white point, the darkest colours moving most. So, as the ambient light level is increased, typically deep blues are lost first, and only the lightest colours such as yellow and white can still be seen at high ambient light levels.

Appendix B gives details of how to calibrate a colour monitor for use with CIE colours.

5.1.3 Factors affecting monitor quality

The colour fidelity and ergonomics of a colour monitor can be adversely affected by a number of factors:

• misconvergence: the electron beam does not hit the correct pixel. This results in blurring of the edges of shapes and upsets the colour balance; if for example the green gun is also lighting up the red pixels to an extent, then all greens will be tinged with yellow (the secondary colour resulting from a mixture of green and red). The gamut will clearly be reduced, the position of the green corner of the gamut triangle moving towards the red corner in this example. Misconvergence tends to be most apparent at the edges of the display and in older monitors. Solution: many monitors have internal controls to adjust convergence. Have these adjusted by a competent service engineer. Use a degauss button regularly, if there is one. Do not site monitors next to magnetic fields, such as loudspeakers or power cable conduits.
• flicker: caused by the refresh rate of the screen being too low, or the use of an interlaced display (where the electron beam traces all the even lines, then all the odd lines). Solution: do not use a video mode of higher resolution than the monitor can cope with. Do not use interlaced modes.
• phosphor ageing: over a period of a year or so, the brightness of the phosphors will fall. Blue is affected faster than red or green. Solution: do not rely on manufacturers’ data for old monitors; have the values measured. For accurate work, use an auto-calibrating monitor.
• gun interaction: the intensity of the electron beam depends on the power being produced by the other two guns at the time. Also, the intensity of a white pixel will be different if the rest of the screen is all white or all black, because of power drain. Solution: avoid cheap monitors with inadequate power supplies.

5.1.4 Video circuitry

The image displayed on a computer graphics monitor is composed of a two dimensional array of dots, termed pixels. These are the smallest addressable areas on the screen whose colour can be individually changed. The video image is defined by an area of memory in the computer, the video RAM (Random Access Memory), which in most workstations can be written to at the same time as the video circuitry is reading from it. Graphics workstations typically write to this memory with a mixture of both software and specialised hardware which performs common tasks (such as drawing polygons). Video RAM is read continuously by the video circuitry, which scans each pixel in turn and sends the values as a serial stream to be converted into monitor signals.

Considering the video RAM to be a two dimensional array, displays differ in both the size of this array and the colour resolution (number of bits per pixel). Together with the physical size of the monitor, this defines the spatial resolution (in pixels per inch) and the total number of simultaneously displayable colours.

Monochrome devices use one bit per pixel, so each pixel can be on or off, white or black. Greyscale devices use more bits per pixel, the total number of displayable greys being 2^n, where n is the number of bits per pixel, typically 8. The binary number stored in video RAM for each pixel in turn is accessed by the video hardware and converted to an analogue voltage using a fast digital to analogue converter (DAC), as shown in Figure 42. This voltage is used to modulate the intensity of the electron beam in the monitor and so give different brightnesses.

Figure 42: A greyscale display (video RAM read out through a DAC to drive the gun).

Colour devices used in computer graphics typically use 24 bits to represent each pixel. These are organised as three groups of eight, giving 2^8 = 256 levels of intensity for each of the red, green and blue guns; 16.7 million colours in all. There are thus three DACs. This, plus the cost of the extra memory and the colour monitor, is why colour displays are more expensive than monochrome or greyscale displays. The layout of a 24 bit colour display is shown in Figure 43.


Figure 43: A 24 bit colour display.

Some displays, used for less demanding computer graphics applications, use only eight bits to represent each pixel. Very few devices organise this into three groups like the 24 bit displays; this would give far too few colours in most applications. Instead, each location in video RAM stores an 8 bit value which is used to index into a table of 256 colours. These colours are specified to a greater precision than 8 bits; 18 or 24 is common. The total range of colours is termed the palette; the table of selections from this palette is called the colour look-up table, or CLUT. Single chips containing a CLUT and three DACs are available, the combination being referred to as a RAMDAC.

Although the total number of colours in the palette can be as high as the total number of colours in a 24 bit display, only 256 of them can be used in any one image. This configuration is an example of indexed colour, whereas the 24 bit display described previously is an example of a direct colour system. The layout of an 8 bit indexed display is shown in Figure 44.

Figure 44: An 8 bit indexed display.

It is significantly faster to rewrite the data in the colour lookup table (256 entries, 3 × 8 bits, so 768 bytes) than to change the colour of each pixel in video RAM (typically 1280 × 1024 entries, 8 bits, so around 1.3 million bytes). Rewriting the CLUT can be used to provide fast animation of an image with few colours.
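A minimal sketch of this kind of colourmap animation is shown below. The set_colour_table and wait_for_vertical_retrace calls are hypothetical device routines, not part of any particular library; the point is that rotating the 256-entry table makes colours appear to flow through the image without touching the much larger video RAM.

def rotate_clut(table, step=1):
    # table is a list of 256 (r, g, b) tuples, each component 0-255
    return table[step:] + table[:step]

# for frame in range(600):
#     table = rotate_clut(table)
#     set_colour_table(table)           # hypothetical call that rewrites the RAMDAC's CLUT
#     wait_for_vertical_retrace()       # hypothetical call; avoids visible tearing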

A refinement of the 24 bit display uses this indexing technique for each 8 bit group, to index into a table of (typically 12 bit) colour values. There are thus three of these tables, one for each gun, and three 12 bit DACs are used. Whereas the entries in the CLUT of an 8 bit indexed display are independent of one another, the entries in this system are typically ordered to form a colour scale. This allows the maximum value and response curve of each DAC to be changed, to compensate for drift or ageing in the calibration of the monitor.

5.1.5 Gamma correction

One of the assumptions made when converting between XYZ and RGB is that colours are linearly additive. This assumption is invalid for a number of reasons, primarily because linear increases in the voltage applied to the guns do not produce a linear increase in luminance. The light produced by a phosphor is proportional to the electron beam power, rather than the gun voltage.

Power = voltage × current

Current ∝ grid voltage^1·5

So Luminance ∝ voltage^2·5

In practice, luminance is proportional to the DAC voltage^γ, where γ is in the range 1·5 to 3·0. Thus, the values in video RAM or in the CLUT should be adjusted to compensate for this. Some display hardware has this correction built in. Because the spacing of values has the same minimum (0) and maximum (255) values, but is non linear, one result of gamma correcting the values in video RAM is a decrease in the number of available colours. This is why some systems use 24 bits to represent each pixel, but then use three 12 bit lookup tables to perform gamma correction and drive the DACs, maintaining the full range of colours. An example of gamma correction is given in Appendix A.
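A minimal sketch of building a gamma correction look-up table for an 8 bit system is given below, assuming a simple power-law monitor with the gamma value measured as described in Appendix A. Each intended intensity is mapped to the DAC value that will actually produce it; note how the table contains fewer distinct codes than 256, which is the loss of colours mentioned above.

def gamma_table(gamma=2.2, levels=256):
    table = []
    for i in range(levels):
        intended = i / (levels - 1)            # intended intensity, 0 to 1
        dac = intended ** (1.0 / gamma)        # DAC value needed to produce it
        table.append(round(dac * (levels - 1)))
    return table

lut = gamma_table(2.2)
print(lut[128])          # a mid intensity needs a DAC value of about 186
print(len(set(lut)))     # fewer distinct codes than 256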

5.1.6 Viewing with non-RGB models

Some display software allows colour to be specified in a standard, device-independent format such as one of the CIE models. For example, PHIGS, GKS 9x, and ODA require the display to provide this facility. The current version (X11R5) of the X Window System gives a means to implement this as it accepts colour specifications in CIE XYZ, CIELUV, CIELAB and TekHVC among others; mapping to RGB is handled transparently. This allows applications to specify the same colour on a variety of monitors.

A problem with using a non-RGB model is that out of gamut colours, which the device cannot reproduce, can be specified. These must be dealt with in a device dependent way. A colour management system can be used for this. The problem of gamut mapping and the treatment of out of gamut colours is discussed in Section 5.7.


5.2 Colour video

Video equipment has some similarities to computer graphics monitors, in that the display is an array of pixels on an RGB monitor. There are differences, however, to do with the mechanics of encoding the signal and the different working practices in that sector of the industry. Familiar sounding terms may have new, different meanings in a video context.

Because animated sequences of computer graphics are frequently stored as a video, it is helpful to have some understanding of video technology as it applies to the use of colour. Computer graphics may then be designed with the inherent limitations of the medium in mind. It should be remembered that the very best professional video equipment will still give much worse resolution and colour fidelity than even a modest graphics workstation.

5.2.1 Broadcast monitors

Colours are specified in the RGB space of standardised broadcast monitors, whose phosphor chromaticities, white point, and gamma value are tightly specified by the appropriate standards body. This enables an engineer to check the quality reliably in any studio. It also enables a computer graphic to be constructed to those standards, so that it will look correct when viewed. Table 3 shows the definitions for three popular standards.

        Red x, y         Green x, y       Blue x, y

NTSC    0·670  0·330     0·210  0·710     0·140  0·080

SMPTE   0·630  0·340     0·310  0·595     0·155  0·070

EBU     0·640  0·330     0·290  0·600     0·150  0·060

Table 3: Chromaticities of standard broadcast monitors

Broadcast standards in the United States of America are defined by the National Television Systems Committee (NTSC). The NTSC standard is also used in other countries, such as Japan. It specifies illuminant C for the white point and a gamma value of 2·2.

The United Kingdom and Europe use standards laid down by the European Broadcasting Union (EBU). These use a D65 whitepoint and a gamma of 2·2.

Colour correction in modern studios is done on a studio monitor which conforms to the code of practice of the Society of Motion Picture and Television Engineers (SMPTE).

The gamuts of these three monitors are shown in Figure 45. To provide the correct colour in high quality computer graphics, the CIE XYZ values should be converted to RGB using the SMPTE monitor chromaticities.

University of Manchester 55 0.6 525 550 575 600 500 675-780

0.4

v’

D65 whitepoint 475 0.2 Illuminant C

SMPTE monitor

EBU monitor 450 NTSC monitor 400 0.0

0.0 0.2 0.4 0.6 u’

Figure 45: SMPTE, EBU and NTSC monitor gamuts

5.2.2 Gamma correction

In contrast to general computer graphics practice, where gamma correction is the last step before display, the RGB signals which are to be recorded on video are gamma corrected before encoding. The alternative would require gamma correction circuitry in every replay monitor and, if the signal were to be broadcast, in every TV set. It is easier and more cost effective to produce a single stable, closely tracking corrector in the studio.

Because of this, it is important that computer graphics images which are to be recorded on video should first be gamma corrected. Gamma correction in a computer graphics context is discussed in Appendix A. While any arbitrary transform can be applied to a computer graphic image, the standards for video assume that the input signals are coming from a TV camera and that the gamma correction will be performed by an electronic circuit. The graph of a pure power law meets the origin vertically, which corresponds to infinite gain in a circuit. This is clearly impossible to attain, and would give severe problems with noise amplification, so gamma correction in a video context uses a modified power law with a linear section near the origin. To preserve compatibility, the same transformation should be used for computer graphics that are to be stored on video.

The formulae given below are defined in CCIR Rec. 709, the international standard for high definition television (HDTV). Older video standards have less precise definitions for gamma correction, ignoring the infinite gain problem.


CCIR Rec. 709 formalises the range of similar values used in modern studio equipment, and these equations are suitable for use in computer graphics.

For values in the range 0.018 to 1.0:

Rγ = 1·099 R^0·45 - 0·099

Gγ = 1·099 G^0·45 - 0·099

Bγ = 1·099 B^0·45 - 0·099

For values in the range 0.0 to 0.017, the slope of the curve is limited to 4.5:

Rγ = 4·5 R

Gγ = 4·5 G

Bγ = 4·5 B
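A minimal sketch of the Rec. 709 gamma correction above, applied per component to linear R, G and B values in the range 0 to 1, is shown below. The two segments meet at 0.018, where both expressions give the same result.

def rec709_gamma(value):
    if value < 0.018:
        return 4.5 * value                    # linear segment near black, slope limited to 4.5
    return 1.099 * value ** 0.45 - 0.099      # power-law segment

rgb = (0.25, 0.5, 0.75)
print([round(rec709_gamma(c), 3) for c in rgb])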

5.2.3 Bandwidth

In computer graphics, a picture is thought of as a 2D array of pixels, each of which can be any colour. This is simple and intuitive, and the very high speed electronics which make this possible go unremarked most of the time. In video recording, a picture is thought of as a waveform which shows how the intensity or colour of the picture varies as the electron beam scans across the display. As the lines on screen are traced sequentially, a picture in video terms is thought of as a 1D waveform.

A slowly changing picture corresponds to a low frequency waveform. The greater the changes between one pixel and the next, the higher the frequency required to encode it and thus the greater the total bandwidth. This is the fundamental difference between computer graphics and video; adjacent pixels in each horizontal scanline are related. There is however no special relationship between a pixel and the one above or below it. They are on different scanlines.

A typical computer graphics image, where adjacent pixels could be completely different, requires an extremely high bandwidth to represent it. In video recording, the available bandwidth is severely limited and so the amount by which one pixel can vary from the one that preceded it is limited. The maximum number of pixels in a scanline is 768, but the effective horizontal resolution will thus be less.

5.2.4 Refresh rate

A typical computer graphics monitor may have a vertical resolution of 1024 lines and be refreshed 72 times a second or more. This means that 1024 × 72 = 73,728 scanlines are refreshed each second. A video recorder has 625 lines, of which 50 are used for other purposes, and is refreshed 25 times a second; in other words it can only draw (625 - 50) × 25 = 14,375 scanlines a second.

University of Manchester 57 To reduce flicker, the even scan lines are drawn first, followed by the odd scan lines. This raises the effective screen refresh rate to 50 times a second, provided there is some correlation between one scanline and the next. When the picture comes from a camera this is often the case, but is less likely in a computer graph- ics image. Horizontal lines which are one pixel wide will flicker noticeably. The effective horizontal resolution is thus reduced from 575 to between 200 and 300.

5.2.5 Video luminance

Colours are converted from RGB to a video luminance signal and two colour difference signals. This is similar to the encoding of three colour signals to luminance and colour difference signals in the eye, which was described in Section 2.6, and is done for the same reason: to make the best use of the available information carrying capacity.

As originally calculated in the 1950s, the luminance encoding corresponded to a weighted sum of the luminance contribution from each phosphor, using the monitor chromaticities of the original NTSC broadcast monitor.

This gives the formula for video luminance which is:

Y = 0·299 Rγ + 0·587 Gγ + 0·114 Bγ

This has now become standardised (CCIR Rec. 601-1) and the equivalent decoder is built into all video recorders and TV sets.

Over the years, the phosphors used in colour televisions have changed, so that modern sets are quite different from the NTSC primaries. To provide more useful colour monitoring, the Society of Motion Picture and Television Engineers (SMPTE) published a standard code of practice which defined new chromaticities, shown in Table 3. Colour balancing is now done using the SMPTE broadcast monitor. Because the NTSC monitor is not representative of modern broadcast monitors or TV sets, the video ‘luminance’ signal is no longer the correct formula to calculate the luminance which would be measured from the monitor. Apart from making a less than optimal use of the information capacity of the system, this does not matter. It is more important that the encoder and decoder work correctly as a pair.

Use the above formula to calculate the video luminance signal which will be recorded onto tape. This is the same regardless of the monitor which is being used for colour balancing.

To calculate the actual measured luminance, for example to convert a full colour image to greyscale, calculate directly from the chromaticities of the monitor in question using the procedure in Appendix B. For example, for an SMPTE monitor the formula would be:

Y = 0·2122 R + 0·7013 G + 0·0865 B

Notice that the RGB values should not be gamma corrected in this formula. It is unfortunate that the symbol Y has these two related but distinct meanings. The chromaticity of the video luminance signal Y is at the system white point


(usually D65) whereas the CIE Y primary is an imaginary colour with chromaticity u´= 0, v´ = 0·6.

As an additional trap for the unwary, HDTV will use a different formula for video luminance. Being a new standard (SMPTE 240M) for a much improved video and broadcast system, it does not need to maintain compatibility with an installed base of older decoders. So the video luminance encoding was calculated from the SMPTE monitor set:

Y = 0·2122 Rγ + 0·7013 Gγ + 0·0865 Bγ

5.2.6 Video chrominance

Having produced a video luminance signal, another two channels are required to carry enough information to reconstruct the original RGB values. These are called chrominance, as they were originally related to the chromaticity difference between the colour being encoded and the white point. Because video luminance is no longer equal to CIE luminance, this is no longer the case.

The chrominance signals are produced by subtracting the video luminance from any two of the three (RGB) channels. Green makes the most contribution to the video luminance, so it is the red and blue channels that are used to make chrominance signals.

C1 = Rγ - Y

C2 = Bγ - Y

Because of these formulae, chrominance is also sometimes called colour difference.

Clearly, the chrominance signals can be positive or negative. Putting the RGB values of the primary (red, green, blue) and secondary (cyan, magenta, yellow) colours into these equations shows that C1 varies between ± 0·701 and C2 between ± 0·886.

Converting an RGB signal to a luminance (Y) and two chrominance (C1C2) signals is a linear, reversible process and is the starting point for all video encoding. Of itself, it produces no limitations or distortions of colour quality, although later stages certainly do.

5.2.7 Analogue component video

These video systems record the three components of the signal - video luminance and two video chrominances - separately, so that on playback the picture looks very similar to the original RGB signal. The main limitations are the signal to noise ratio, stability and linearity of the analogue recording system. More costly professional broadcast equipment gives a better picture than industrial grade equipment.

This type of system is used for professional video equipment. Examples are the Sony Betacam system, widely used for electronic news gathering and studio quality mastering, and the Panasonic MII system.

In analogue component recording, the chrominance components are scaled so that the somewhat awkward minimum and maximum values become ± 0·5:

Pb = 0·5 / (1 - 0·114) C2

Pr = 0·5 / (1 - 0·299) C1

Additionally, the Pb and Pr components are subsampled by a factor of two to give some signal compression. This bandwidth limitation is similar to replacing pairs of pixels with their average colours. The visual effect of this is less than it would seem, however, as the video luminance is left intact. Subsampling is only done on the chrominance components. The human eye is poor at identifying extremely small areas of colour, and can only see the difference in brightness. To the extent that video luminance represents the brightness of the video signal, this optimisation is successful. The result is a very slight smearing of small details in the image.

5.2.8 Digital component video

Like analogue component video, the three components are stored separately and the chrominance components are subsampled by a factor of two. The resolution is fixed by the number of bits used to code each component. There are two methods defined, either 8 bits or 10 bits per component; at present, only the 8 bit form is in general use.

Video luminance is recorded as an 8 bit unsigned number and the chrominance components as 8 bit signed numbers. Not all 256 codes are used, however; the standards for digital component video, CCIR Rec. 601-1 and CCIR Rec. 656, specify that some codes at the top and bottom of the range are left unused for headroom and footroom.

Y = 219 Y + 16

Cr = 224 Pr + 128

Cb = 224 Pb + 128

The big advantage of digital component video is the lack of noise and the possibility of many generations of copying without degeneration of the quality. Digital component video recording equipment is however considerably more expensive than analogue component equipment.

If computer graphics are being made specifically for recording to digital component video, the values of R, G and B should be gamma corrected as floating point values then converted directly into the range 16 to 235 rather than the usual 0 to 255. This avoids introducing two sets of round off errors.

The three components may then be calculated directly:


Y = (77 Rγ + 150 Gγ + 29 Bγ) / 256

Cb = (-44 Rγ - 87 Gγ + 131 Bγ) / 256 + 128

Cr = (131 Rγ - 110 Gγ - 21 Bγ) / 256 + 128
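The following is a minimal sketch of the same encoding chain worked in floating point rather than with the integer coefficients above: video luminance, colour differences, Pb/Pr scaling, then the digital offsets and headroom. It assumes gamma corrected R'G'B' inputs in the range 0 to 1; rounding and clipping behaviour are simplified.

def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b    # video luminance
    pb = 0.5 / (1.0 - 0.114) * (b - y)        # scaled blue colour difference
    pr = 0.5 / (1.0 - 0.299) * (r - y)        # scaled red colour difference
    return (round(219 * y + 16),              # luma with head and foot room
            round(224 * pb + 128),            # Cb
            round(224 * pr + 128))            # Cr

print(rgb_to_ycbcr(1.0, 1.0, 1.0))   # white: (235, 128, 128)
print(rgb_to_ycbcr(1.0, 0.0, 0.0))   # red: low Cb, high Cr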

Other systems which use YCbCr or derivatives of it include the JFIF image format from JPEG, and the PhotoCD format from Kodak.

5.3 Broadcasting colour

The broadcast system used in Western Europe, PAL, is a development of the older American NTSC system. The same considerations given in the video section also apply to encoding for broadcast. However there are additional limitations which must be taken into account if acceptable quality pictures are to be produced.

Video luminance is combined with a composite chroma signal (which has been modulated onto a colour subcarrier), colour burst and synchronisation signals to form a single composite video signal ready for broadcast. The details of this process need not concern us here, but once video luminance and chrominance have been mixed together they can never be fully separated again, even by the best quality decoders. There is inevitably some interference between luminance and chrominance channels. This gives rise to many types of colour artefacts.

Composite video recorders record a broadcast signal (or some intermediate stage) and are thus subject to the same limitations as if the signal were to be broadcast.

5.3.1 UV encoding

Formation of U and V components is the first step in preparing a video signal for broadcast. The chrominance signals are scaled so that the sum of the video luminance and the composite chrominance is less than 1.34:

U = 0·493 C2

V = 0·877 C1

This is done so that the final composite signal stays within the safe limits of carrier modulation. The U and V components are then subsampled by a factor of two to reduce bandwidth.

Some computer graphics systems designed specifically for broadcast use allow colour specification directly in YUV. While not particularly easy to use, this does have the advantage that untransmissible colours are not inadvertently produced.

5.3.2 Composite chrominance

Composite chrominance is produced by modulating the U and V signals onto a 4.43 MHz colour subcarrier, whose phase is denoted t:

C = U cos(t) + V sin(t)

Figure 46 shows how the phase and amplitude of this signal correspond to the original U and V components.

Figure 46: Phase angle (φ) and amplitude (A) of composite chroma.

5.3.3 S Video

S video is a compromise between full component recording and composite recording. Video luminance and composite chroma are recorded as two separate signals; while there is a degradation in colour quality caused by the use of the colour subcarrier, there is no interference between video luminance and composite chrominance.

The most common example of an S video system is S-VHS. The input connectors are often labelled Y/C, to show that video luminance (Y) and composite chrominance (C) are kept separate. Other formats which record separate Y and C signals are U-Matic and Hi-8 video.

5.3.4 PAL broadcast

The composite chrominance signal is combined with the video luminance signal, synchronisation and other control information to form a single, composite signal.

As a result of mixing the luminance and chrominance signals, some very bright, saturated RGB colours map to signals which cannot be broadcast; either because they overload the transmitting equipment or because they give shimmering, unstable colours. Although such highly saturated colours are not commonly found in film of the real world, they are a problem in computer generated images, and the colours must be mapped to broadcastable ones by reducing the luminance or saturation.

The constraints on acceptable colours are firstly that the amplitude of the composite chroma must be less than 0·53, and secondly that the sum of video luminance and composite chroma must be less than 1·2.
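A minimal sketch of a pre-broadcast legality check based on the limits quoted above is given below; the exact figures should be treated as illustrative, since they vary between standards and equipment. R', G' and B' are gamma corrected values in the range 0 to 1, and the chroma amplitude is taken as the magnitude of the (U, V) vector.

import math

def pal_legal(r, g, b, chroma_limit=0.53, excursion_limit=1.2):
    y = 0.299 * r + 0.587 * g + 0.114 * b     # video luminance
    u = 0.493 * (b - y)
    v = 0.877 * (r - y)
    amplitude = math.hypot(u, v)              # peak of the modulated subcarrier
    return amplitude < chroma_limit and y + amplitude < excursion_limit

print(pal_legal(1.0, 1.0, 0.0))   # fully saturated yellow fails: it needs desaturating
print(pal_legal(0.8, 0.8, 0.3))   # a paler yellow passes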


Broadcast signals are sensitive to phase errors in transmission, which alter the chrominance phase φ and thus change the hue. To correct this for small errors, the PAL system inverts the Phase of V on Alternate scan Lines, hence the name. By averaging consecutive scanlines, the phase error is cancelled at the expense of a further reduction in vertical resolution.

Another consequence of mixing chrominance and video luminance is that the two signals interact. High frequency video luminance information is decoded as chrominance, giving shimmering rainbows of colour on detail such as finely spaced lines. The colour subcarrier is incompletely removed from the video luminance information, giving a periodic variation in brightness across a scanline. Rapid transitions in colour produce a flickering edge, an effect that can be reduced by smoothing the image before it is encoded.

Domestic VHS video records a composite video signal, and thus suffers from all the defects in colour quality that have been noted for PAL broadcast. This should be borne in mind when planning a video animation which is to be shown at a conference or presentation. The most common format to use is VHS; so although the material is never broadcast, it is subject to all the limitations of broadcast media.

5.3.5 NTSC broadcast

This is the broadcast system used in the USA and Japan. As it was the basis for PAL, most of the comments in the previous section also apply to NTSC. Some differences from PAL:

• 525 lines rather than 625 lines vertical resolution
• screen refresh is 30 rather than 25 times a second
• no phase alternation, so there are hue errors
• there is even less bandwidth available than in Europe

As originally defined, NTSC used I and Q rather than U and V, defined by:

I = V cos(33°) – U sin(33°)

Q = V sin(33°) + U cos(33°)

This is a simple rotation and flip of the axes so that Q contains blues and violets; as was seen in Section 2.7, these are the colours for which the spatial resolution of the eye is poorest. To cope with a narrow available bandwidth, the Q signal is broadcast at much lower resolution than Y or I. Considering the bandwidth of the Y signal to be 100%, I uses 25% and Q only 10%. Modern production equipment uses U and V rather than I and Q; the decoder cannot tell the difference.

5.4 Coping with insufficient colours

Sometimes, a graphical image containing a lot of colours must be output to a device with fewer colours. For example, a 24 bit image may need to be displayed on an 8 bit display. Or an 8 bit image may have to be output on a printer having

only eight colours. There are three classes of technique that can be used: quantisation, dithering and halftoning.

5.4.1 Quantisation

Each original colour is mapped to the nearest of a subset of new colours. The notion of ‘nearest’ implies that the distances between colours are measured and compared; thus, the colours should be quantised in a colour space where the distance between points correlates with their perceived colour difference. CIELUV is a suitable colour space, although others can be used. Distance may also be weighted to take advantage of perceptual effects. For example, preserving luminance may be more important than preserving hue.

There are four main quantisation methods:

5.4.1.1 Uniform Quantisation

Each original colour is mapped to the nearest of a standard, evenly distributed range of colours. This is computationally simple, and may be the only method possible if the palette of colours is fixed. It is also of use with animated graphics; each frame must use the same palette of colours to avoid noticeable flickering. It gives poor results, however, when the original colours are not evenly distributed in the colour space.

To calculate a standard colourmap, a colour space is chosen and sampled at regular intervals. A problem with using device independent colour spaces when resolution is limited is that some of the samples represent out of gamut colours. It is therefore customary to sample the device’s native colour space, such as RGB, directly. This has the disadvantage that equally spaced samples are not at equal perceptual intervals.

Sampling in RGB space should use an equal resolution for each axis. For example, in an 8 bit system with 256 colours available, 6 levels of red, green and blue may be used to give 216 different colours. This ensures that greys do not have objectionable tints. Having unallocated colours may be an advantage when a colourmap is shared between concurrent applications.
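A minimal sketch of such uniform quantisation in RGB space is shown below, with six levels per gun giving the 6 × 6 × 6 = 216 entry colourmap mentioned above. The input components are assumed to be 8 bit values (0 to 255).

def quantise_uniform(r, g, b, levels=6):
    def q(c):
        i = round(c / 255 * (levels - 1))     # index of the nearest level
        return round(i * 255 / (levels - 1))  # back to an 8 bit value
    return q(r), q(g), q(b)

print(quantise_uniform(200, 120, 30))   # -> (204, 102, 51)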

5.4.1.2 Median cut

This algorithm has two distinct phases. Firstly, the colours are divided into groups. Secondly, each group is assigned a new colour. To group colours, the smallest rectangular, axis-aligned space is calculated that will completely enclose all the colours. This is shown in Figure 47a, using only two dimensions for clarity. The rectangle is cut in two across its longest axis so that the resulting rectangles contain equal numbers of colours (Figure 47b). The process is repeated, shrinking each new rectangle to its smallest volume (Figure 47c) and cutting the larger in two (Figure 47d) until enough rectangles have been produced. To assign a new colour for the group, either the centroid of the rectangle or the group mean can be used.


Figure 47: Median cut.
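A minimal sketch of the median cut algorithm described above is given below. It operates on a list of (r, g, b) tuples: the box with the longest axis is repeatedly split at the median along that axis until the requested number of groups exists, and each group is then assigned its mean colour (the group mean option mentioned above).

def median_cut(colours, n_groups):
    boxes = [list(colours)]

    def extent(box, axis):
        values = [c[axis] for c in box]
        return max(values) - min(values)

    while len(boxes) < n_groups:
        # choose the box with the longest axis over any channel
        box = max(boxes, key=lambda b: max(extent(b, i) for i in range(3)))
        if len(box) < 2:
            break
        axis = max(range(3), key=lambda i: extent(box, i))
        box.sort(key=lambda c: c[axis])
        mid = len(box) // 2                 # cut so each half holds equal numbers of colours
        boxes.remove(box)
        boxes += [box[:mid], box[mid:]]

    # assign each group the mean of its members
    return [tuple(sum(c[i] for c in b) / len(b) for i in range(3)) for b in boxes]

palette = median_cut([(10, 10, 10), (12, 14, 9), (200, 30, 40), (210, 25, 35)], 2)
print(palette)   # two representative colours, one dark and one red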

5.4.1.3 Histogramming

The most frequently used original colours are retained intact. The remaining colours are assigned to the nearest of these popular colours. It is customary to drop some precision when constructing the frequency histogram, so that groups of colours which are close count as a single colour and are assigned the group mean. This allows other, less popular but significantly different colours to be retained intact. Another optimisation is to leave some room in the colour map for infrequent colours which do not match any of the new colours well.

Whilst it is possible to construct a histogram across the entire colour space, this requires a lot of memory. It is more common to construct separate histograms for each axis. For example, to quantise down to 256 colours in CIELUV space one might histogram over 50 bins in L* and 25 bins in u* and v*. This would give 2% steps in lightness, which are just noticeable. Of these, the commonest 10 in L* and the commonest 5 in u* and v* would be used to construct a new colourmap with 10 × 5 × 5 = 250 colours, leaving 6 for those outliers furthest from the new colours.

5.4.1.4 Variance minimisation

A statistical approach, this is similar to fitting a line to a set of points by least squares analysis. For each group of similar original colours, the variance in the distance to the new colour is calculated. The aim is to minimise the sum of these variances. By minimising the variance rather than the mean distance, it avoids the situation where some groups of colours are widely separated while others cluster tightly.

University of Manchester 65 high variance low variance

Figure 48: Variance minimisation.

There are clearly many possible ways to partition the colour space, and it would be impossible to test them exhaustively. Some heuristic variance based methods therefore select a low, rather than optimal, variance. It is possible to calculate the optimal solution by a similar method to the median cut algorithm.

A plane is swept along each of the three axes of the colour space, and at each position on each sweep the variance on each side of the plane is calculated. The point which minimises the variance on both sides is noted. The colour space is then cut across whichever of the three axes gave the lowest result. This is repeated on the side with the largest variance until sufficient subdivisions have been made.

5.4.2 Dithering

Dithering is most commonly encountered as a means of simulating grey scales on a monochrome device by printing black and white. However it can also be used in a similar manner to simulate more colours by displaying two or more colours close together; from a distance the eye will mix these to give the effect of more colours.

To dither an area in a given colour, the available colours that most closely resemble it are selected and a percentage mixture calculated. Suppose, for example, that an unavailable lime green is to be simulated with 70% pale green, 20% lemon yellow, and 10% dark green. For each pixel, a random number is generated in the range 1 to 100. If it is below 70, the pixel is coloured pale green; between 70 and 90, lemon yellow; and above 90, dark green.

Clearly this technique will give the best results when the colours to be mixed are fairly close, and large areas are dithered. Accurate colour discrimination is in any case less easy on small areas.
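A minimal sketch of this random dithering scheme, using the lime green example above; the RGB values chosen for the three available colours are purely illustrative.

import random

PALE_GREEN = (0.6, 0.9, 0.6)
LEMON_YELLOW = (1.0, 0.95, 0.5)
DARK_GREEN = (0.1, 0.4, 0.1)

def dither_pixel():
    n = random.randint(1, 100)
    if n <= 70:                   # 70% of pixels
        return PALE_GREEN
    elif n <= 90:                 # next 20%
        return LEMON_YELLOW
    return DARK_GREEN             # remaining 10%

# Dither a 100 x 100 area in the simulated lime green.
area = [[dither_pixel() for x in range(100)] for y in range(100)]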

A technique can be borrowed from monochrome dithering: error diffusion. For each pixel, the error between the intended and actual colour is calculated and used to adjust the intended colour of pixels near to it. The amount of error is weighted according to how far away the pixels are, so that the error is ‘diffused’. To keep the technique simple, only those pixels which have not yet been dithered are adjusted. This is shown in Figure 49. The shaded pixels have been dithered, and the pixel containing the star is currently being processed. The weighting factors decrease with distance from the current pixel.

                ×      0.5    0.25
0.125   0.25    0.5    0.25   0.125
0.063   0.125   0.25   0.125  0.063

Figure 49: Error diffusion weighting factors (× marks the pixel currently being processed).

Because errors have both sign and magnitude, there is no net error propagated across the image. In monochrome dithering, the error is a single number whereas colour dithering produces three errors, one for each axis of the colour space. There are a number of weighting schemes for diffusing errors. Figure 49 uses Stucki weighting, for example.
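The following sketch diffuses colour errors over an RGB image. Floyd-Steinberg weights are used for brevity rather than the Stucki weights of Figure 49, and the palette, array layout and helper names are assumptions; the structure of the loop is the same for any weighting scheme.

import numpy as np

# 'image' is an (H, W, 3) float array in [0, 1]; 'palette' is a (K, 3) array of
# the colours actually available on the device.
def error_diffuse(image, palette):
    img = image.astype(float).copy()
    h, w, _ = img.shape

    def nearest(colour):
        return palette[np.argmin(((palette - colour) ** 2).sum(axis=1))]

    for y in range(h):
        for x in range(w):
            intended = img[y, x].copy()
            actual = nearest(intended)
            img[y, x] = actual
            error = intended - actual          # one signed error per colour axis
            # Push a weighted share of the error onto pixels not yet processed.
            for dy, dx, weight in ((0, 1, 7/16), (1, -1, 3/16), (1, 0, 5/16), (1, 1, 1/16)):
                if 0 <= y + dy < h and 0 <= x + dx < w:
                    img[y + dy, x + dx] += error * weight
    return img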

Other enhancements include adding a noise function to the error, and processing alternate lines in opposite directions; both reduce patterning.

5.4.3 Halftones

Commercial printing presses are unable to mix inks in different proportions to achieve different colours. Instead, they lay down a grid pattern of dots for each ink; the size of each dot is varied and from a distance, the colours appear to mix. The dotted appearance of newspaper photographs is a familiar example of this, using a single colour. The difference between halftoning and dithering is that the latter keeps the same spatial resolution, whereas halftoning trades significant amounts of spatial resolution for an increase in the number of apparent colours.

While halftoning may in principle be applied to any device, in practice it gives best results on a device with high spatial resolution but poor colour resolution and is thus primarily encountered in the context of four colour process printing using cyan, magenta, yellow and black inks.

5.4.3.1 Screening

The process of converting a continuous tone image to a halftoned image is called screening. This is because, in the original optical procedure, a perforated metal screen was placed on or slightly away from a photographic film and the image re-photographed. This process is now generally performed by a computer.

If the grid for each colour was aligned, each ink would be obscured by the next one to be printed as most inks are opaque. The grids are therefore rotated relative to one another. Some angles give pronounced Moiré interference patterns. It has become conventional to use rotations of 0° for yellow, 15° for cyan, 45° for black and 75° for magenta to avoid this, as shown in Figure 50 and in Plate 37. These angles give small rosette patterns. Other angles are also used, often in conjunction with different screen frequencies for each colour.

Figure 50: Conventional halftone screen angles (yellow 0°, cyan 15°, black 45°, magenta 75°).

A screened halftone is characterised by the screen angles and the frequency or number of lines per inch (lpi). A frequency of 133 lpi is common for general printing purposes. High quality book illustrations use a higher frequency such as 175; colour daily newspapers have frequencies of around 85 lpi and some colour laser printers manage only 60 lpi.

5.4.3.2 Matrixing

Commonly referred to as matrixed dither, this is more correctly a form of halftoning. In this method, the physical pixels are grouped into clusters of, for example, 2×2 or 3×3 pixels. Each pixel in the cluster may be assigned a different colour from the ones available. From a distance, the eye will perceive this as an additively mixed colour. This works best when the available spatial resolution is high, and the colours to be mixed are not too dissimilar. If a large matrix is used, more colours can be produced but the image becomes visibly coarse as the effective pixel size is increased. An objectionable textured effect is a problem of the method, caused by a repeating matrix pattern; the eye is particularly sensitive to such regular patterns and will enhance them. Texturing can be reduced by randomly selecting between a variety of different patterns. For example, in a 2×2 matrix containing three pixels in yellow and one in red, there are four possible positions for the red pixel as shown in Figure 51.


Figure 51: Equivalent matrixing patterns.
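A minimal sketch of a 2×2 cluster using the yellow and red mixture of Figure 51, with random selection between the four equivalent patterns to reduce texturing; the RGB triples are illustrative.

import random

YELLOW = (1.0, 1.0, 0.0)
RED = (1.0, 0.0, 0.0)

def cluster():
    pixels = [[YELLOW, YELLOW], [YELLOW, YELLOW]]
    row, col = random.choice([(0, 0), (0, 1), (1, 0), (1, 1)])   # pick one of four patterns
    pixels[row][col] = RED
    return pixels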

5.5 Printing in colour

5.5.1 Terminology

Printing may mean one of two things:

1. sending a file to a physical device, such as a laser printer, waxjet, or dye sublimation printer, which produces hardcopy.

2. sending some data, such as a file or a photograph, to a printing works to have halftone printing plates made. Large numbers of prints can be made from these plates, often by a process called offset lithography.

These two procedures, although different in details, have in common that a coloured print is produced by mixing inks, wax, or other dyes. They can mostly be considered equivalent, except where noted.

Printing technologies can be divided into two types. In the first, more common type, the amount of ink deposited on a particular spot has two values - some and none. With three colours of inks, this means that only eight colours are available. Halftoning, discussed in the previous section, is used to increase this paltry range. Offset lithography and most laser printers, waxjets and inkjets are examples of this type. In the second type the quantity of ink deposited can be varied, allowing a wide range of colours to be produced without any halftoning. They are thus termed continuous tone printers, and are capable of near photographic quality. Dye sublimation printers are an example of this type.

5.5.2 Mixing inks

Printing with coloured inks is a subtractive process, in contrast to the additive process when lights are mixed. Inks are reflective colours, and subtract from the illuminant to produce colour as shown in Figure 52 and in Plate 27.

Figure 52: Additive and subtractive colours.

This means that, in a three colour process system, the primary colours of inks broadly correspond to the secondary colours of three colour additive mixing – cyan, magenta and yellow – and that a mixture of all three primaries produces black rather than white. It is also possible to print with spot colours in addition to, or instead of, process colours. This may be done to produce colours outside a process colour gamut, to ensure that a particular colour looks the same on different printers, or to obtain special effects.

5.5.3 Printer gamuts

Because printing uses subtractive mixing, the theoretical gamut of a printer is defined on a chromaticity diagram by six points: three primaries and three secondaries. This is in contrast to the gamut of a monitor, which is defined by three points, as additivity ensures that each secondary lies on the line connecting its constituent primaries. The shape of a typical printer gamut, in this case a Tektronix Phaser IIIPXi, is shown in Figure 53.

Figure 53: Example printer gamut (Tektronix Phaser IIIPXi) on the CIE u′, v′ chromaticity diagram.

A good set of printing inks will have highly saturated colours to give a wide gamut. The variation in coloured inks used in different printers is at least as wide as the variation in monitor phosphors.

Although colour mixing is subtractive, the density of ink can be considered additive. Printing the full amount of cyan, magenta and yellow gives a density of 300%, and a brownish black colour. Adding black ink would give a total achievable density of 400% (and a better black colour). In practice, density departs from additivity in a number of ways, which not only alters the gamut but also greatly complicates the colour separation process.

Because of this, the sort of gamut shown in Figure 53, though useful as a guide, is not completely accurate. Professional printers and colour pre-press agencies construct a more accurate gamut by measuring samples of many combinations of inks. A test sheet containing samples with a known proportion of CMY or CMYK is produced, and the CIE tristimulus values of each patch measured. Plate 38 shows part of such a sheet. The resulting data is used to construct a gamut and, arranged as a 3 or 4 dimensional lookup table, is used to predict the proportions of inks required to mix a given colour on that device.

5.5.4 Factors affecting quality

The number of variables affecting the quality of a finished print is enormous; only a selection can be covered here. This is why specialist colour pre-press agencies are used when high quality is desired, such as in book and magazine production.

It was stated earlier that ink density could be considered additive. Many of these variables exist to correct for the fact that this assumption is only approximately true. Some of the more important factors are:

• Ink adhesion. Inks are layered on top of each other, and there is a difference in adhesion between printing on plain paper and on inked paper. This effect, which varies depending on the order inks are deposited onto the paper, reduces additivity.

• Maximum density. A density of 400%, corresponding to the maximum amount of each ink, would weaken most papers and result in buckling, tearing and smearing. Because equal quantities of the three primaries will produce a neutral grey, some or all of the common proportion of an ink mix is removed and substituted with black. For example, the colour C=20, M=30, Y=40 could be replaced with C=0, M=10, Y=20, K=20 or C=10, M=20, Y=30, K=10. This is called grey component replacement (GCR); a short sketch of the substitution appears after this list. Plate 39 shows 70% GCR.

• Grey balance. In practice, equal amounts of each primary do not produce a pure achromatic grey. The proportions must be adjusted slightly to remove the colour tints which would result, and to which the eye is particularly sensitive. This problem, shown in Plate 40, is lessened by grey component replacement.

• Under colour removal. Black ink is used to boost the ink density at high ink levels to correct for additivity failure. This enhances contrast, and is performed in addition to grey component replacement.

Figure 54: Under colour removal (measured versus required density, with and without black).

• Press registration. This applies both to printing plates and to multiple passes of paper inside a printer. Mis-registration alters the colour of an ink mixture from the measured, calibrated value and can introduce colour tints into greys. Very small amounts of mis-registration can affect the final result, because the degree of overlap of halftone dots and the effective screen angles are altered.


• Effective dot size. Measured ink density is not completely proportional to the specified area of dots on a halftone screen. Factors such as ink viscosity and diffusion through the paper result in dot gain, an increase in the actual dot area. Dot gains of over 25% can occur in some cases. In addition, there is a sudden step in density at the point where individual dots become large enough to touch, giving the effect of white dots on a coloured background rather than coloured dots on a white background.

• Ink variability. Batch to batch variation must be considered as, unlike phosphors, inks are consumables.

• The precise screen angles and frequencies, and the physical resolution of the device, determine the number of colours that can be printed.
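As mentioned in the list above, grey component replacement is a simple substitution. A minimal sketch, reproducing the C=20, M=30, Y=40 example; the function name and the fractional replacement parameter are illustrative.

def grey_component_replacement(c, m, y, fraction=1.0):
    grey = min(c, m, y) * fraction               # the common grey component to be replaced
    return c - grey, m - grey, y - grey, grey    # resulting (C, M, Y, K)

print(grey_component_replacement(20, 30, 40))        # (0.0, 10.0, 20.0, 20.0)
print(grey_component_replacement(20, 30, 40, 0.5))   # (10.0, 20.0, 30.0, 10.0)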

5.5.5 Printing using non-CMYK models

Some printers allow colour to be specified in a standard, device-independent format such as one of the CIE models. For example, the current version (Level 2) of the PostScript page description language, which is widely used in printers and imagesetters, provides this facility.

In PostScript, a basic framework for CIE based colour models is parameterised to define the desired colour model. Graphics are then described using that model. Examples of possible PostScript colour models include CIE 1931 XYZ, CIE 1976 LUV, or SMPTE RGB. Recall that RGB, in a broadcast context, relates to a particular set of phosphors.

One problem with using a non-CMYK model is that colours can be specified which cannot be printed on a particular device. These must be dealt with in a device dependent way. Using a graphical colour selection tool or colour management system, such colours can be avoided. For example, Plate 41 shows such a tool displaying the difference between the gamut of a monitor (the Tektronix XP29P PEX terminal, solid line) and a printer (the Tektronix 4396DX, dotted line). The selected colour is shown in both the RGB and TekHVC colour models.

The problem of gamut mapping and the treatment of out of gamut colours is discussed in Section 5.7.

5.6 Colour photography

Photography is a continuous tone process, capable of fine gradations of colour and very fine resolution. Colour film, once developed, may be output on print, transparency, or saved to compact disc.

In a photographic print, colour is produced by mixing of light reflected from three thin translucent layers superimposed on paper. The gamut is defined by the chromaticities of the dyes in these layers. Because the colour is produced by subtractive mixing, the gamut is not a regular shape and can only be accurately determined by measurement of many samples.

Photographing the display on a graphics monitor gives good results, particularly if the room is darkened to exclude reflections on the glass and a tripod is used. The gamut of a colour film is typically large, and includes much of the gamut of many computer monitors. This is why screen photography often provides a more pleasing and accurate result than using a colour printer. Transparency (slide) film has a wider colour gamut and better colour fidelity than print film. Magazines and journals tend to prefer transparencies to prints for this reason.

Figure 55: Comparison of monitor and slide film gamuts on the u′, v′ diagram (typical transparency film and an HP A1097C monitor, with the D50 and monitor white points marked).

5.7 Gamut mapping

All physical devices can only reproduce a subset of the range of visible colours. As we have seen, this subset is device dependent. There is thus a problem displaying an image whose gamut is not a subset of the display device's gamut. This is most commonly encountered when an image is produced on one device and displayed on a second; for example, when an object displayed on a computer graphics monitor is to be printed. For comparison, the gamut of a typical printer (a Tektronix Phaser IIIPXi) is shown in Figure 56 together with that of a typical graphics monitor (a VAXstation 3540 24 bit display). Large amounts of the blue portion of the monitor gamut lie outside the printer gamut. On the other hand, the printer can reproduce green-cyan and magenta-red areas that cannot be displayed on the monitor.


Figure 56: Comparison of printer and monitor gamuts on the u′, v′ diagram (Tektronix Phaser III and VAXstation 3540, with the D65 and monitor white points marked).

There are a number of strategies for transforming colours between the two devices.

One simple solution is to use a direct mapping of RGB to RGB or CMY. This has the advantage that each unique colour in the source gamut maps to a unique colour in the target gamut. However, none of the colours will be correct. For example, using the pair of gamuts in Figure 56, the monitor blue would be represented by the printer blue. In this example, the printer blue is very close to the line joining monitor blue with monitor red. It will thus appear as purple, a mixture of those two colours.
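Such a direct mapping might be sketched as follows: each device value is simply complemented, with no colorimetric correction at all.

# RGB and CMY values are assumed to lie in the range 0 to 1.
def rgb_to_cmy(r, g, b):
    return 1.0 - r, 1.0 - g, 1.0 - b

print(rgb_to_cmy(0.0, 0.0, 1.0))   # monitor blue maps to printer C=1, M=1, Y=0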

While this direct mapping may be useful when colours are merely required to be distinguishable, the general requirement is to reproduce the original colour with the greatest perceived (rather than measured) accuracy.

One alternative is to use the full gamut of the first device, and approximate the out of gamut colours that result. This is shown in Figure 57a.

Another alternative, shown in Figure 57b, is to use only those colours which are in the intersection of the two device gamuts. This requires accurate sampling and measurement of the gamut in the case of non-additive devices such as printers. It also introduces problems when colours must be generated automatically, for example in shaded images.

A compromise solution, Figure 57c, is to bound the gamut of the first device by some regular shape which approximates the gamut of the second device. This limits the number of out of gamut colours that are produced, while allowing lighting calculations to choose from a defined range of colours. Such a solution would however severely limit the range of possible colours, and still be quite complicated to implement.

Figure 57: Gamut selection strategies.

5.7.1 Dealing with out of gamut colours

Given that a particular image contains out of gamut colours, a strategy must be developed to deal with these. The resulting image will contain colours that are not completely correct. The aim is to reduce the visual impact of such changes.

The phenomenon of colour constancy helps to reduce the complexity of this task. In any mapping transformation of this nature, the least noticeable change is a saturation shift. Greyscale shifts (lighter/darker) are also not too bad provided there are no sudden discontinuities. Overall shifts in lightness of an image are tolerated because of colour constancy and the adaptation of the eye to different lighting levels; white remains white.

Hue shifts are generally objectionable, particularly if a colour moves to a colour of a different name. For example, a colour which was originally blue looks steadily further from true as it is shifted, until the blue suddenly becomes classified as green or purple; the perceived jump at this point is very noticeable. This is a psychological effect, rather than a physical one. It corresponds to the third stage of visual processing in Figure 10. Clearly, colours near a name boundary are the most susceptible to hue shifts.

One method is to clip all colours outside the gamut to the gamut boundaries. This has the advantage that all colours inside the gamut are unchanged and will appear accurate. However, as shown in Figure 58, many out of gamut colours will map to the same colour. This plays havoc with graduated ranges of colour such as smoothly shaded lighting effects. The discontinuities in colour relationship are visually obtrusive.


Figure 58: Boundary clipping.

Another way is to uniformly scale all colours towards the grey axis, as shown in Figure 59, so the transformed set of colours fits within the gamut. This keeps smooth ramps of colour intact, although the colours can lose a lot of saturation. The hues will remain constant, however, so the image will look recognisably similar. Non-linear scaling, where colours are moved progressively more the further they are from the grey axis, can help here.

Figure 59: Uniform scaling

A good method in practice is to scale so that most - 90 to 95% - of the colours fall within the gamut. Then the outliers are clipped to the boundary. This avoids subjecting the majority of colours to a substantial colour shift, just to accommodate a few outliers.
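In a simplified setting where the gamut limit is treated as a single maximum chroma, this scale-then-clip strategy might be sketched as follows; the constant limit and the percentile parameter are assumptions made for illustration, since a real gamut boundary varies with hue and lightness.

import numpy as np

# 'chroma' holds each colour's distance from the grey axis; 'limit' is the
# assumed maximum chroma of the target device.
def scale_then_clip(chroma, limit, percentile=95):
    reference = np.percentile(chroma, percentile)    # bring roughly 95% of colours inside
    scale = min(1.0, limit / reference)
    return np.minimum(chroma * scale, limit)         # clip the remaining outliers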

Although this transformation can be done once, to map the entire monitor gamut to the printer gamut, extra performance can be squeezed out of the method by computing the transformation for each image. The image gamut, being a subset of the monitor gamut, will hopefully have fewer outliers. The saturation loss can thus be minimised. Of course this is more computationally expensive.


6 Usage of colour

6.1 When to use colour

Used wisely, colour can greatly add to the usefulness, clarity and impact of a graphic. Used badly, however, it serves only to confuse and obscure. Colour can be used:

• to distinguish objects
• to show relationship and connections between objects
• to display additional information without an increase in dimensionality
• to make clear the 3D form of an object
• to make a graphic more interesting to the viewer
• to direct attention to parts of a graphic

If colour is being assigned some coded meaning, for example in status displays or user interface design, the number of colours should be strictly limited. Many studies have shown that no more than six or seven colours should be used in this fashion, and they should be clearly distinguishable. Furthermore, it is preferable to add redundant information such as size or shape to reinforce the distinction, as shown in Plate 42.

If a continuous colour scale is being used to display the value of some variable – such as temperature or stress – over a surface then using more colours gives a finer gradation and enables small details to be seen. In this case, 256 colours from an 8 bit display may well be insufficient.

6.2 Selecting a colour model

For device independence, use a CIE colour model such as L*C*huv, or a model such as Munsell for which CIE tristimulus values are available. If the chosen output device does not directly support CIE colour specification, colours can often be converted to the native colour space of the device. Appendix B shows how this is done for an additive RGB device.

For accurate work, where an exact colour match is important, use a calibrated monitor and a viewing cabinet, with CIELUV or more specialised models such as Hunt ACAM.

For particular applications - textiles, architecture, graphic arts - use the appropriate specialised colour model which is conventionally used in that application area. For example, CIELAB, Coloroid, or Pantone respectively.

To mix device independent colour, use a colour model with a polar coordinate system to give a hue wheel. For example, L*C*huv or TekHVC.

To mix distinguishable but device dependent colour, use a polar model such as HSV or HLS.

Use RGB if it is all that is available, but consider selecting from HSV and converting to RGB.

Do not use CMY directly. The number of variables which must be altered is too large.

6.3 Colour schemes

A range of related colours used together gives a unified, uncluttered look. A poor selection of colours can look confusing or garish and may contribute to eyestrain if viewed for long periods of time. To some extent, the selection of related colours is a subjective process; however help can be obtained from both artistic conventions and colour science. In many cases, the empirical guidelines from the artistic world can be explained in terms of colour science.

6.3.1 Using artistic conventions

Complementary colours are those on opposite sides of a colour wheel. Using complementary colours produces a busy, attention-getting display. To avoid looking garish, the chroma should be reduced for large areas. It is often useful to use one colour or group of similar colours for large areas, so that their complementary colours will stand out when used in small amounts as highlights or accents.

Different colour models use different spacings of colours round a hue wheel, so the exact colour which is found to be the complement of another varies with the colour model. A true perceptually uniform scale would give the correct colour.

Determining complementary colours can be readily done by eye. Simply stare fixedly at a small patch of colour on a black background for a minute or so. Looking at a well illuminated white surface will produce an after image in the complementary colour. Plate 43 can be used to try this out. Note that the precise colour obtained from an after image depends on the illuminant used.

After images are caused by the photosensitive pigment in the cones becoming bleached as a continuous high chroma colour stimulus is applied. Staring fixedly, without moving the eye, keeps the image on the same cells in the retina. In the resting cell, used pigment is replenished. By not allowing this to take place, the pigment in each cone type is depleted in proportion to the degree to which that colour excites each cone type. For example, a green stimulus will bleach M cones the most. Looking at a white surface will give the illusion of a pink colour until the pigment is replenished.

A graphic which incorporates many intense unrelated colours will look cluttered and confusing; there is no single point of focus. Using groups of related colours


and using high chroma colours sparingly for accentuation avoids this effect and gives a focused, controlled and professional look.

If particular colours are to have individual meanings, these should be clearly explained and the colours readily distinguishable. Some colours have conventional meanings which are widely – if not universally – understood. For example, red is associated with action, excitement, danger, heat and stop. Such meanings are overloaded and may be contradictory. They may also be specific to a particular culture.

When a smooth range of colours is to be used, it is useful to incorporate existing meanings, especially those used unambiguously by a clearly defined group. For example, in medical imaging the convention is that red denotes normal tissue and blue, diseased tissue. In cartography, a range of dark blues shading to light blues and white represents progressively shallower seas; yellows, greens and browns represent increasing land height, culminating in purple and white for mountain tops.

6.3.2 Using colour science

Don’t have blue and red together. Don’t use blue as a foreground colour, where shape must be distinguished; however it makes a good background. Why: chromatic aberration in the eye makes it impossible to fully focus on red and blue simultaneously, as seen in Plate 44. The eye will tire from continual re-focusing, and settle on a lens position where neither colour is fully in focus. When using blue as a background, it has no fine shape so gentle blurring is unobtrusive. The phenomenon of chromostereopsis gives the appearance of depth.

Don’t have fine detail in blue or red on dark coloured backgrounds. Why: the photopic luminous efficiency curve is sharply peaked in the yellow and green part of the spectrum. Colours at the spectral extremes will appear much darker than yellows and greens at the same measured light power level. Similarly, yellows and greens on a light background will have low contrast and thus be difficult to see.

Don’t use blue or violet for small moving shapes such as mouse cursors. Why: S cones have a slower response than M or L cones. Therefore they cannot detect rapid changes in position of blue and violet objects. The density of S cones in the fovea is much less than the density of M and L cones. Therefore, the spatial resolution for blue objects is much less than for other colours (a fact which, as we have seen, is made use of by subsampling in video encoding).

Don’t rely on red/green discrimination to convey important information. Why: a significant proportion of your audience will have reduced or missing sensitivity to red/green differences.

Do use perceptually uniform colour spaces to construct colour scales. Why: colour scales with perceptual jumps can give a false impression of spurious detail. Areas of little colour change can mask details. A perceptually linear colour scale facilitates estimates of the displayed parameter.

6.4 Interpolation

The colour space chosen affects how colours are interpolated. This has bearing on the production of colour scales for visualisation. Perceptually uniform spaces are to be preferred to avoid discontinuities or distortions of scales. Polar coordinate spaces are often easier to work with than Cartesian spaces.

In addition to straight linear interpolation, it may be useful to construct a colour scale along a curve through some colour space. Some visualisation systems allow curves to be used in this manner. Examples of colour scale creation using this method are shown using AVS (Plate 45) and Explorer (Plates 46 and 47). Both examples use the HSV space, called ‘HSB’ in AVS. Plate 45 also clearly shows the lack of perceptual linearity of hue in HSV. Although the curve through HSV space is smooth, there are sudden perceptual jumps; for example between yellow and orange.

In Plate 47, the red curve is hue, moving from red (0°) at the bottom to violet (360°) at the top. The green curve is saturation, and the cyan curve is value. The small filled squares represent points that can be moved, through which the curve passes. (These are termed knots, and will be met in the Curves and Surfaces module). The open square on the green curve represents the tangent to the curve at the point being edited.

The oranges are desaturated to give browns, and the purples made lighter. This colour scale was chosen to illustrate the method rather than be an example of good practice.

Non-linear colour interpolation may be used as a form of depth cueing. For example, in the representation of outdoor scenes, distant objects can be made more blue and less saturated. This mimics the effect of atmospheric haze.

Pseudo colour can be used to enhance detail or visualise small changes. Examples of this type of application are medical imaging, geographical information systems, and finite element post-processing.

A Gamma correction

A.1 Determining gamma

There are three ways of obtaining a value for gamma correction of a monitor:

1. Direct measurement of standard greys using a light meter or spectroradiometer

2. Asking the monitor or tube manufacturer

3. Visual calibration

The first method can have the additional refinement of measuring the value for each gun separately.

Some monitors have internal gamma correction in hardware, and need no further adjustment. To detect such a monitor, refer to the manual or carry out a quick visual calibration. The majority of monitors will, however, require gamma correction.

A.2 Direct measurement

For simplicity, we will assume that all three guns are to be calibrated together. The method simply consists of generating a series of test patches of known RGB value, measuring the actual light emitted, and plotting the test value against the measured value.

Using a large number of samples, and performing duplicate tests, helps reduce random errors and give a more precise result. Each patch should be measured at the same part of the screen, conventionally the centre, to minimise the effect of misconvergence. If desired, a patch near one corner can be measured in a separate series, and the results averaged.

Provided the phosphors are not being driven into saturation, the measured gamma value should be much the same regardless of the setting of the brightness control. Although the eye can adapt to the ambient light level, a meter cannot, so the screen and meter should be well shrouded with heavy cloth such as a curtain, to eliminate stray light.

Remembering that the gamma function is a power law, the input RGB value and output light level should be plotted on a log/log scale. The spacing of the test samples should take this into account, so that samples are evenly spaced on the log axis.

If the data points cluster around a straight line, the slope of that line is the gamma value. Significant deviations from a straight line can only be dealt with by a lookup table.
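A sketch of the fitting step, assuming the test values sent to the display and the meter readings have both been normalised to the range 0 to 1; the synthetic data at the end simply checks the idea against a display of known gamma.

import numpy as np

def fit_gamma(rgb_values, measured):
    # On log/log axes a power law becomes a straight line whose slope is gamma.
    slope, intercept = np.polyfit(np.log(rgb_values), np.log(measured), 1)
    return slope

rgb_values = np.array([0.2, 0.35, 0.5, 0.7, 0.9])
measured = rgb_values ** 2.2             # a hypothetical display with gamma 2.2
print(fit_gamma(rgb_values, measured))   # approximately 2.2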

A.3 Visual calibration

This simple method has the advantage of requiring no equipment. It relies on visual comparison of two grey patches. Visual comparison can be quite accurate and precise, and is after all the basis of the CIE standard observer.

The method relies on the fact that, regardless of the gamma value, white and black are fixed points. This is shown in Figure 60; altering the gamma value only affects the amount of curvature, not the position of the end points.

Figure 60: Three gamma curves (light output against input RGB).

If a chequerboard pattern of black and white squares is displayed, the result looks grey if the grid is fine enough. The light level will be 50% of the maximum, white light level, because of additivity (assuming that the black is effectively zero). This corresponds to the grey that would be obtained with R,G and B all 0·5, if the monitor had a gamma of one.

A grey colour is mixed, for example using a paint package, keeping R,G and B equal. If the HLS colour model is available, mixing a colour with a saturation of zero will accomplish this. Call the value of RGB which matches the chequerboard pattern V. The gamma value is given by:

gamma = log(0·5) / log(V)

For example, if V is 0·73, the gamma is 2·2. This is shown in Figure 61.


Figure 61: Matching grey with checks (light output against input RGB, showing the matching value V).
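The calculation of gamma from the matching value V is easily automated; for example, in Python:

import math

def gamma_from_match(v):
    return math.log(0.5) / math.log(v)

print(round(gamma_from_match(0.73), 1))   # 2.2, as in the example above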


B Monitor calibration

For precise work, each monitor should be calibrated by directly measuring the tristimulus values with a spectroradiometer. To compensate for drift in electronic components, and phosphor ageing, this calibration is performed monthly, weekly or even (in highly critical applications such as colour pre-press soft proofing or dyestuff quality control) daily. Monitors are available which can auto-calibrate themselves.

For general work, however, it is sufficient to obtain monitor tristimulus values which are typical for that model of monitor from the manufacturer. The data required are the chromaticity values of the red, green and blue phosphors, and the chromaticity of the monitor's natural white point where red, green and blue are all at maximum intensity.

Chromaticity values are generally supplied as x, y pairs. The first step is to calculate the z components:

z = 1 – x – y

The phosphor chromaticities are inserted into a matrix A, where:

A = \begin{pmatrix} x_r & x_g & x_b \\ y_r & y_g & y_b \\ z_r & z_g & z_b \end{pmatrix}

This matrix is inverted, to give B:

B = A⁻¹

The monitor white point is converted from chromaticity values to relative tristimulus (XYZ) values. The absolute luminance is generally not required, so Y can be set to unity:

Xw = xw / yw

Yw = 1.0

Zw = zw / yw

These values are used to compute the column vector C:

C = B \begin{pmatrix} X_w \\ Y_w \\ Z_w \end{pmatrix}

The elements of C are inserted into the major diagonal of a new matrix D, the other elements being zero:

D = \begin{pmatrix} C_1 & 0 & 0 \\ 0 & C_2 & 0 \\ 0 & 0 & C_3 \end{pmatrix}

The matrix to convert RGB to XYZ is then given by:

E = A D

and this is inverted for conversion in the reverse direction. Any point specified in RGB may then be converted to CIE 1931 XYZ by:

\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = E \begin{pmatrix} R \\ G \\ B \end{pmatrix}
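The whole procedure of this appendix can be collected into a short sketch using NumPy; the phosphor and white point chromaticities in the example call are illustrative values only, not measurements of any particular monitor.

import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    def xyz(xy):
        x, y = xy
        return np.array([x, y, 1.0 - x - y])        # z = 1 - x - y

    A = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])   # phosphor chromaticities
    B = np.linalg.inv(A)
    xw, yw = xy_w
    white = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])   # white point XYZ with Yw = 1
    C = B @ white
    D = np.diag(C)                                           # C on the major diagonal
    return A @ D                                             # E = A D

E = rgb_to_xyz_matrix((0.63, 0.34), (0.29, 0.60), (0.15, 0.07), (0.313, 0.329))
print(E @ np.array([1.0, 1.0, 1.0]))   # full-intensity RGB recovers the white point XYZ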

C Glossary

This glossary is not presented as a set of formal or precise definitions. It is deliberately kept informal and colloquial. It is intended to refresh the memory when a particular word is encountered.

adaptation: Response to an overall change in light intensity. A ‘window’ of simultaneously perceivable light and dark is slid up or down the entire range of brightness. A sheet of white paper is seen as white regardless of the light level.

achromatic: Having no chroma or saturation; a pure untinted grey, white or black.

bandwidth: The range of frequencies used by a video signal, from zero on up. For example, a PAL signal has a bandwidth of 5·5 MHz.

brightness: The amount of light emitted or reflected from something, in absolute terms. A patch of grey on a white sheet of paper would be brighter in strong sunlight than in the shade. See lightness.

cathode ray tube: Device for producing a colour or monochrome display by inducing phosphors to emit light.

colour difference: Another name for the two chrominance signals used in video encoding.

chroma: (1) A measure of colourfulness, similar to saturation but expressed relative to an area of similarly illuminated white. (2) In Munsell colour space, the departure of a colour from grey; colour intensity.

chromatic aberration: Inability to focus on both ends of the visible spectrum at once, caused by the refractive index of the aqueous humour being dependent on wavelength. Leads to the phenomenon of chromostereopsis.

chromatic adaptation: Response to an overall change in colour of light. A sheet of white paper tends to be seen as white regardless of the colour of light, up to a point.

chromatic induction: Neutral surrounds to colours reduce the perceived colourfulness. Dark surrounds increase perceived colourfulness. High chroma colours induce a tint of the same colour into dark surrounds and the complementary colour into light surrounds.

chromaticity: Means of describing the hue and colourfulness of a sample independent of its lightness.

chrominance: Originally related to the difference between a video component signal and the system white; now simply a name attached to the two video components C1 and C2 that carry most of the colour information.

chromostereopsis: Phenomenon of blues seeming to recede, and reds seeming to come forward. Caused by feedback of muscular changes to re-focus the eye being interpreted as distance focusing.

colour constancy: An evolutionary survival tactic. The eye and brain try to hold the perceived colour of objects constant regardless of slow changes in the level or colour of daylight. Only partly successful, and less so for non-daylight illumination.

colour management system: A piece of software that helps achieve screen to print colour matching, often by displaying the gamuts of the two devices.

colour temperature: The temperature of a theoretical black body radiator which would produce light of the same or similar chromaticity.

complementary colour: The colour exactly opposite in hue to a given colour.

density: Log of one over the reflectance. White and light colours have a low density, black and dark colours have a high density.

dithering: Simulating more colours by printing two or more colours close together.

dominant wavelength: (Of a colour.) A single spectral wavelength which, when mixed with white, would give an equivalent colour sensation.

dot gain: Non-linear increase in the size of halftone dots. Has a variety of causes, and is measured and corrected for in high quality commercial colour printing.

dye sublimation: Printing technique which sublimates (converts from solid to vapour) inks to allow varying quantities of each ink to be deposited, effectively forming a mixed colour for each pixel. This gives a continuous tone image, with no need for halftoning.

electromagnetic energy: A form of energy which can be transmitted through a vacuum. Travels at the speed of light. Can be considered as a wave or as a particle. Characterised by wavelength.

excitation purity: Percentage distance of a point on a chromaticity diagram from the white point to the spectral locus. Very roughly corresponds to colourfulness, but better measures are available.

fovea: Small central area of the retina, containing many cones but few rods, and with nerves and blood vessels routed round the sides. Has the best colour discrimination and spatial resolution.

gamut: The range of colours displayable by a device; a subset of the range of perceivable colours.

gamut mapping: Matching and adjustment of colours from one device gamut for display on a second, different device, so that it looks as close to the original as possible.

grey component replacement: Removal of the common proportion of cyan, magenta and yellow in a colour, which would produce a grey, and substitution of an equal amount of black ink.

grid: Part of a cathode ray tube which electrostatically accelerates the electron beams by means of an applied voltage.

halftoning: Simulation of intensity difference by varying the size of dots on a regular grid.

hue: The degree to which a sample resembles red, or yellow, or green, or blue; regardless of how light or colourful the sample is.

hue angle: A single numerical measure of hue, generally considering red to be 0°.

ISO: International Standards Organisation, the body which produces world-wide standards in many fields.

laser printer: Printing device using a laser beam to control electrostatic deposition of toner(s) on paper. Toner is fused to the paper with heat or pressure.

lightness: The amount of light emitted or reflected from something, relative to white. A patch of grey on a white sheet of paper would have the same lightness in strong sunlight and in the shade. See brightness. Lightness takes adaptation into account.

luminance: Measure of how intense a light is, in isolation. To find out how bright it appears, the intensity of other lights in the surroundings must be considered. Luminance is a way of measuring light intensity that takes into account the photopic luminous efficiency curve.

luminous efficiency: Perceived brightness of a fixed intensity of a particular wavelength of light. For the same light level, green seems a lot brighter than red or blue.

metamerism: (1) The fact that two spectrally different samples can give the same colour sensation, which allows the simulation of many colours with only three primaries. (2) A consequence of the information loss when computing tristimulus values. Two spectrally different samples may give the same values (and hence look identical) under one illuminant, while giving different values (and looking different) under another illuminant.

opponent colours: Three pairs of colours: red/green, blue/yellow, and black/white. A given colour can only have the characteristics of one colour from each pair. For example, a reddish yellow is possible (orange) whereas a reddish green is not.

offset lithography: A printing process where each halftoned colour separation is printed from a cylindrical metal plate. Called offset because the inked plate prints onto a rubber roller, which then prints onto the paper to give even pressure.

perceptually uniform: Property of a colour space that equal distances represent equal observed colour differences.

perfect diffuse reflector: A white target that does not modify light at all, but reflects 100% of it at all wavelengths.

photopic vision: Light adapted vision at low to high light levels, where rods are saturated and cones provide colour information.

pixel: Smallest individually addressable part of a display device.

pre-press: Stage in commercial colour printing where images are scanned, separations are done, colour balance checked, and colour proofs made before committing to a print run.

process colour: Method of printing using three subtractive colours – cyan, magenta and yellow – to produce all colours; generally adds black for contrast.

Purkinje shift: Shift in the wavelength of maximum sensitivity to light as vision moves from scotopic to photopic.

quantisation: Mapping a large range of colours to a smaller range of colours.


reflectance: The amount of incident light which is reflected by some object, as a percentage.

retina: The photosensitive receptor layer in the eye that converts light into nerve impulses.

saturation: A measure of colourfulness. A dull grey has low saturation, a vivid red has high saturation. Saturation is the colourfulness of a sample relative to how bright it is.

scotopic vision: Dark adapted vision, in very low light levels, using only rods for monochromatic vision.

separation: Creation of four halftoned images from an original continuous tone image, ready for process printing. Requires the specification of screen frequencies and angles for each process colour, and compensation for the characteristics of the printing press. Additional spot separations may also be made.

spectral locus: The position of maximally saturated spectrally pure colours in one of the CIE colour spaces, representing the boundaries of visible light. Effectively, the gamut of the human eye. The ends of the spectral locus are joined by the purple line.

spectrophotometer: Device to measure the CIE tristimulus values of a reflective sample; contains a light source.

spectroradiometer: Device to measure the CIE tristimulus values of a light source.

spot colour: Single colour of ink, used without mixing. Many hundreds of spot colours are available, covering a huge gamut. Generally only a few spot colours are used in a single print.

standard observer: Table of experimental data predicting the amount of the CIE primaries required to match light of any wavelength. There are two standard observers, but the 1931 2° observer is the most commonly used in practice.

subsampling: A means of reducing the bandwidth of a video signal by filtering out rapid changes in colour or intensity. Reduces the effective resolution.

tristimulus value: The amount of each of the three CIE primaries which, when mixed, would give the same colour sensation as a sample of a given colour.

UCS: CIE 1976 uniform chromaticity scale diagram, an attempt at a perceptually uniform presentation of colourfulness and hue.

under colour removal: Addition of black to dark colours, to boost contrast.

video RAM: Memory used by a workstation to contain the description of the displayed image.

video luminance: Originally a measure of the lightness component of a video signal, now simply a name for the component which carries most of the lightness information in an encoded video signal. Not to be confused with luminance, although for historical reasons they are both represented by the symbol Y.

viewing cabinet: A simple piece of apparatus used for colour matching of reflective samples (fabric, photographs, etc) consisting of a large box painted neutral grey with a standard light source, D50 or D65, inside it.

wavelength: The distance between two consecutive peaks in a regular wave. Measured in whatever divisions of a metre are appropriate. Visible light has wavelengths of between 380 and 730 nm.

waxjet: Printing device which sprays coloured, molten wax onto paper through a large number of very fine nozzles.
