Course 2

Elements of Photonics

OPTICS AND PHOTONICS SERIES

© 2008 CORD

This document was produced in conjunction with the STEP project—Scientific and Technological Education in Photonics—an NSF ATE initiative (grant no. 0202424). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. For more information about the project, please contact any of the following persons: Dan Hull, PI, Executive Director National Center for Optics and Photonics Education 316 Kelly Drive Waco, TX 76710 (245) 751-9000 [email protected] Dr. John Souders, Director of Curriculum National Center for Optics and Photonics Education 316 Kelly Drive Waco, TX 76710 (245) 751-9000 [email protected]

Published and distributed by OP-TEC 316 Kelly Drive Waco, TX 76710 (245) 751-9000 http://www.op-tec.org/

ISBN 1-57837-398-0

PREFACE

The six instructional modules (chapters) contained in this text are designed for use by students and instructors involved in the preparation of technicians in the areas of optics, electro-optics, lasers, and photonics. The materials can be used as an introductory course in AAS programs in laser/electro-optics and photonics at two-year postsecondary community and technical colleges. They can also be used in conjunction with supplementary laser/electro-optics courses in related engineering technology programs such as electronics, instrumentation, telecommunications, and biomedical equipment, as well as in programs designed to retrain or update the skills of engineering technicians who are already employed. The modules can be used as a unit or independently, as long as prerequisites have been met. The materials were developed under NSF ATE grant number 0202424 (Scientific and Technological Education in Photonics—STEP II). Content specifications were determined from The National Photonics Skills Standards for Technicians, also developed (1995) and updated (2003) under the STEP project. Colleges that are creating or revising curricula in this field will benefit from using the skill standards document, which can be downloaded in PDF format from http://utopia.cord.org/STEPII/. To learn the technical content in the modules, students should be fluent in specific mathematical concepts in algebra, geometry, trigonometry, and statistics. For students who may need assistance with or review of the necessary concepts, a student review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended.

Acknowledgments The following persons produced the original manuscripts of the six modules: William P. Latham (Albuquerque, New Mexico) (Modules 2-1 and 2-2) Jack Ready (formerly with Honeywell Technology) (Modules 2-3 and 2-6) Nick Massa (Springfield Technical Community College) (Module 2-4) Harley Myler (Lamar University) (Module 2-5) Leno Pedrotti of CORD provided editorial oversight for the entire course.

CONTENTS

Module 2-1: Operational Characteristics of Lasers Module 2-2: Specific Laser Types Module 2-3: Optical Detectors and Human Vision Module 2-4: Principles of Optical Fiber Communication Module 2-5: Photonic Devices for Imaging, Display, and Storage Module 2-6: Basic Principles and Applications of Holography

Operational Characteristics of Lasers

Module 2-1

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the first module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Display, and Storage 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. This module relies heavily on Module 1-6, Principles of Lasers. A comprehensive understanding of the concepts presented in all of the modules in Course 1 (Fundamentals of Light and Lasers) is needed for this module. For students who may need assistance with or review of relevant mathematics concepts, a review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was prepared by William P. Latham (Albuquerque, New Mexico) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 2 Objectives ...... 2 Scenario ...... 2 Basic Concepts ...... 3 The Amazing Laser ...... 3 Basic Relationships for Laser Energy and Operation ...... 4 Photon energy ...... 4 Laser power on target ...... 4 Laser gain volume ...... 5 Laser energy amplification ...... 5 Laser optical cavity ...... 7 Pulse properties of a laser...... 13 Mode formation in lasers ...... 17 Transverse modes ...... 19 The Helium-Neon Laser ...... 22 Excitation mechanism ...... 22 Gain medium ...... 22 Optical cavity ...... 23 Calculation of several operational characteristics ...... 25 Laboratory ...... 28 Exercises ...... 33 References ...... 34

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-1 Operational Characteristics of Lasers

INTRODUCTION

This module on the operational characteristics of lasers presents a review of some of the basic concepts regarding energy, lasers, and amplification presented in Module 1-6, Principles of Lasers, of Course 1, Fundamentals of Light and Lasers. Following this brief review, the module focuses on useful relationships that describe the operation of a laser—involving such concepts as small signal gain, saturation, threshold gain, and power out. With the operational equations in hand, the module then selects a specific laser type—the Helium-Neon laser—as a “case study.” The HeNe laser and its characteristics are described in a “fact sheet,” and several important operational parameters are calculated, using the operational equations developed in the first part of the module. The companion module that follows (Module 2-2, Specific Laser Types) overviews many of the existing lasers classified as (1) atomic gas lasers, (2) molecular gas lasers, (3) liquid lasers, (4) solid-state lasers, and (5) semiconductor lasers. Again, as was done for the HeNe laser in this module, “fact sheets” and calculations of interacting operational parameters will be presented in Module 2-2. Recall that a laser is a special kind of light source. It can produce highly directional light with a high brightness within a very narrow range of wavelengths. Its properties differ significantly from those of ordinary light. Recall that the special properties of laser light include the following:  A nearly monochromatic (single wavelength) beam of light  A light beam, small in cross section, with a wide range of output powers  A coherent beam of light that spreads very little in propagation  A beam of light that can shine continuously or for short periods of time  A beam of light capable of depositing tremendous power per unit area near or on distant objects These properties make lasers very useful in areas such as material processing, measurement, laser sensing and imaging, inspection, medicine, information handling, military applications, entertainment, and holography.

1 PREREQUISITES

To understand this module the student should have completed Module: 1-6, Principles of Lasers. It is understood that the prerequisites for Module 1-6 include the completion of Module 1-1, Nature and Properties of Light; Module 1-3, Light Sources and Laser Safety; Module 1-4, Basic Geometrical Optics; and Module 1-5, Basic Physical Optics.

OBJECTIVES

Upon completion of this module, the student will be able to:  Calculate photon energy given the wavelength  Calculate the gain volume of a laser  Calculate beam irradiance given beam power and spot size  Explain how stimulated emission leads to light amplification  Describe what is meant by small signal gain in a laser  Describe what is meant by saturated gain in a laser  Describe what is meant by threshold gain in a laser  Use tables to identify small signal gain and saturation intensity  Calculate laser power out given power available and relevant optical cavity parameters  Describe the pulse properties of a laser  Describe longitudinal mode formation in a laser  Describe the transverse modes of a laser  Describe multimode laser outputs  Define the M-factor and its usefulness  Describe the important characteristics of a HeNe laser  Apply operational equations to determine the energy of a HeNe photon, the HeNe gain volume, the HeNe beam irradiance, and power out for a typical HeNe laser

SCENARIO

The laser job shop—After completing his photonics technician certificate at the local community college, Ramon was hired by the Laser Machining and Manufacturing Corporation to operate and maintain one of its solid-state lasers. Initially, the company required that he complete many hours of additional on-the-job training to familiarize himself with the company’s laser procedures to ensure safe operation of lasers. The primary use of this particular solid-state laser involved the drilling and cutting of materials to fabricate specialized

2 Optics and Photonics Series, Course 2: Elements of Photonics parts for use in military and commercial airplanes and automobiles. Ramon was very meticulous in his duties. He had learned in school that it would be useful to maintain a logbook for every task he accomplished. He was especially careful to document laser performance characteristics during every drilling and cutting procedure. In this way, he soon determined the optimum laser operating conditions for specific applications. He noted in his book that some materials required more laser power for efficient machining and some required better laser mode quality. By maintaining an accurate and complete log, he was able to make useful suggestions for other laser operators in the company. After several years, the company found that his understanding of laser operation and experience with laser machining processes qualified him to become the supervisor of laser machining.

BASIC CONCEPTS

The Amazing Laser Lasers have been developed with powers as low as a few microwatts and as high as megawatts. Lasers can be operated both in a continuous mode and in a pulsed mode. In a pulsed mode, lasers can produce pulses as short as one million-billionth of a second (10–15 sec), powers as high as a million billion watts (1015 W), and powers per unit area on targets of billions of trillions of watts per centimeter (1021 W/cm2). Laser emission wavelengths extend from the long microwave region to the X-ray region. Lasers exist that can produce several wavelengths in their output at the same time and, with appropriate tuning, can produce individual wavelengths of pure, monochromatic light. As explained in Module 1-6, Principles of Lasers, the word LASER is an acronym for Light Amplification by Stimulated Emission of Radiation. Recall that stimulated emission of radiation is a natural process first identified by Einstein. It occurs when a beam of light passes through a specially prepared medium and initiates or stimulates the atoms within that medium to emit light in exactly the same direction and exactly at the same wavelength as that of the original beam. As we learned, a typical laser device (Figure 1-1) consists of an amplifying or gain medium, a pumping source to input energy into the device (excitation mechanism), and an optical cavity (resonator) or mirror arrangement that reflects the beam of light back and forth through the gain medium for further amplification. A useful output laser beam is obtained by allowing a small portion of the internally reflecting light to escape by passing through the partially transmitting mirror.

Module 2-1: Operational Characteristics of Lasers 3

Figure 1-1 Schematic diagram showing the basic structure of a laser, including the basic components: (1) a gain or amplifying medium, (2) an excitation mechanism or pumping source, and (3) the mirrors, which make up the optical cavity or optical resonator

Basic Relationships for Laser Energy and Operation Before we begin the description of specific lasers and their characteristics, we shall pull together some important equations and relations that will help us describe how the individual lasers operate.

Photon energy We have already used the basic relationship for the energy carried by a photon in Module 1-1, Nature and Properties of Light, and Module 1-6, Principles of Lasers. We repeat the equation here for reference. hc Ephot = h = (1-1) 

where Ephot is the energy of the photon of frequency  and wavelength  –34 h is Planck’s constant, 6.625 × 10 J  s  is the photon frequency in hertz (Hz = cycles/s)  is the photon wavelength in meters c is the speed in light in a vacuum (3 × 108 m/s)

Laser power on target The power per unit area delivered by a laser beam on a specified target is defined here as the irradiance, usually denoted by the symbol E.1 However, in keeping with much customary use,

1Note that it is not uncommon in laser/optics literature to use the word intensity, symbol I, when referring to the energy or power in a laser beam. Strictly speaking, intensity, with units of watts per steradian, is defined as “power emitted by a radiant source per unit solid angle.” Irradiance, with units of watts per unit area, is defined as the power that falls on a unit area of detector or target surface, generally measured in watts/cm2, and denoted by the symbol E. Because much of the photonics industry still refers to irradiance as intensity with symbol I, we shall do likewise and use the symbol I for irradiance, indicating clearly, however, that the units are W/cm2 and not watts/steradian. Using the symbol I for irradiance frees up the symbol E for energy, used the world over.

4 Optics and Photonics Series, Course 2: Elements of Photonics we shall use the symbol I for irradiance, in order to reserve the symbol E for energy. The irradiance I of a laser beam on target is also referred to as power density, measured in units of power . area If we consider a laser beam of power P shining on a circular region of area A, the irradiance I is P given by I = . If the circular area is of radius r and diameter d = 2r, we can write A P 4P I = = (1-2) r2 d 2 where I is the irradiance in units watts/cm2 or watts/m2 P is the laser beam power, falling on the target, in watts r is the radius of the targeted circular spot, in cm or m d is the diameter of the targeted circular spot, in cm or m

Laser gain volume The volume of the region within a laser cavity where light amplification takes place—as sketched below—is given by the relationship 2 Vg = Ag × Lg = r Lg (1-3) 3 where Vg is the volume of the gain medium, usually in cm 2 Ag is the uniform cross sectional area of the gain medium, usually in cm Lg is the length of the gain medium, usually in cm If the gain medium is shaped like a regular cylinder—which is often the case in solid-state and gas lasers—as shown below in Figure 1-2, the value for the area Ag of the cross sectional circle 2 2 is simply r . Thus, the gain volume becomes Vg = r Lg, where r is the radius of the cross- sectional circle.

Figure 1-2 Gain volume for a typical cylindrical laser

Laser energy amplification The gain medium or amplifying medium is the region in the optical cavity where input laser energy is amplified. In the amplifying medium, the laser beam power is increased by the process of stimulated emission. Figure 1-3 shows a simple sketch of the amplifying medium with input, gain, and output parameters identified.

Module 2-1: Operational Characteristics of Lasers 5

Figure 1-3 Schematic diagram showing the basic structure of a laser gain or amplifying medium

To develop some appreciation of how Ig is related to Ii, in accordance with Figure 1-3, let’s consider for simplicity a laser medium made up of atoms that occupy one of two energy levels—E2 for the upper laser level and E1 for the lower laser level. Due to pumping action, there are more atoms N2 in the upper energy level E2 than there are atoms N1 in the lower energy level E1. That is, a population inversion, N2 > N1, has been achieved.

Then, in accordance with Figure 1-3, suppose a beam of photons of energy Ephot = E2 – E1 and irradiance Ii passes through the gain medium of length Lg. Due to stimulated emission and photon multiplication, the beam of incident irradiance Ii emerges from the output mirror with an amplified irradiance Ig. This always happens when the photon gain from stimulated emission is greater than the photon losses due to other mechanisms. These losses remove some of the incident photons without permitting them to take part in the process of stimulated emission.

The emerging irradiance Ig can be shown to be that expressed in Equation 1-4, which is given here without proof. (Refer to Module 1-6 in Fundamentals of Light and Lasers for more details.)

21(N2 – N1)Lg Ig = Ii e (1-4) where Ii is the irradiance of the beam incident on the laser gain medium Ig is the irradiance of the beam after passing through a distance Lg of the gain medium 21 is a “probability,” referred to as a cross section, that is, a probability that a stimulated transition between upper level E2 and lower level E1 will occur—generally given in units of cm2 3 N2 is the atom population per unit volume in the upper laser level E2 in atoms/cm 3 N1 is the atom population per unit volume in the lower laser level E1 in atoms/cm Lg is the distance the beam travels in the gain medium in cm

Equation 1-4 expresses the single-pass amplification of the incident irradiance Ii in a laser gain medium. Equation 1-4 holds when the incident irradiance is small and the gain medium is short in length. Actually, as the beam moves through the gain medium, the irradiance is changing continuously, as in the population difference N2 – N1. Note that in a laser, beam irradiance “enters” the gain medium from both the left and right.

One also defines a relationship gss—the small signal gain—as

gss = 21(N2 – N1) (1-5)

The overall amplification G21 in the gain medium of length Lg is given as

gssLg G21 = e (1-6) so that Ig = G21Ii, or G21 = Ig/Ii.

6 Optics and Photonics Series, Course 2: Elements of Photonics A list of the parameters and operating wavelengths for several common lasers is given in Table 1-1. Note carefully the units given for each parameter. Later, when we consider specific lasers, we will calculate values for Ig and G21, based on Table 1-1.

Table 1-1. Cross-Sections and Small-Signal Gain Coefficients of Some Important Lasers hc  N = N – N g = g =21 = Type of laser 21 21 2 1 ss 21 EE (cm2) (cm3) (cm–1) 21 (nm) HeNe (gas) 3×10–13 7×109 2×10–3 632.8 Argon (gas) 2.5×10–12 1×1015 5×10–3 488.0 –18 15 –3 CO2 (gas) 3×10 5×10 8×10 10,600 Copper Vapor (gas) 9 × 10–14 4 × 1012 3 × 10–3 510.5 Excimer (gas) 2.6×10–16 1×1016 2.6×10–2 248.0 Dye (Rh6-G) (liquid) 2×10–16 2×1018 2.4 577 Ruby (solid) 2.5×10–20 4×1019 1.0 694.3 Nd:YAG (solid) 6.5×10–19 3×1019 2.0 1064.1 –19 19 Ti:Al2O3 (solid) 3.4×10 3×10 1.0 760 Semiconductor (solid) 1×10–15 1×1018 1×103 800

From Table 1-1, it is clear that gas lasers have smaller gains g21 than liquid lasers and solid lasers. Most very high power lasers are gas lasers, because even though the small signal gains are small, gas lasers can be made extremely large in size.

Laser optical cavity

When light Ii is incident upon a mirror surface as shown in the sketch to the right, most of the light (IR) is reflected. But some will enter the mirror material and be absorbed (IA) and some will be transmitted (IT). From conservation of energy we know that Ii = IR + IA + IT. For an optical cavity in a laser we are interested in the value of IR, the irradiance redirected back into the gain medium from the cavity mirror. Thus, the optical cavity enhances the stimulated emission power by reflecting photons back and forth in the gain medium to amplify the light beam. Most lasers use an optical cavity or optical resonator that is made up of two end mirrors. One of the mirrors has a very high reflectivity so that almost all of the light that hits it is reflected back on itself into the cavity. That is called the high-reflectance mirror, with reflectivity R1. The second mirror, the output coupler mirror, has a lower reflectivity to allow some of the incident light to be “out coupled” to produce the laser beam. It has a reflectivity R2. A schematic of a two-mirror optical cavity is shown in Figure 1-4. In this figure, we have

Module 2-1: Operational Characteristics of Lasers 7 l = distance between mirrors Lg = length of gain medium R1 = reflectivity of high-reflectance mirror R2 = reflectivity of output coupler mirror.

Figure 1-4 Schematic diagram showing a two-mirror optical cavity

Based on Figure 1-4, Equation 1-4, and the details of laser power buildup in a laser cavity—too involved to go into here but explained in all beginning texts on laser physics—the laser power out, Pout, for a specific laser is given by Equation 1-7.   1 ggss  1 R1 Pout = Pavail  (1-7) gL g  ss g RL10(1 ) RRL 120 (1  ) where Pout is the output laser power for a laser with small signal gain gss, saturated gain g, and optical cavity of length L gss is the small signal gain defined earlier, for a maximum population inversion (N2 – N1), and given previously in Table 1-1 for several lasers g is the saturated gain, the gain value when a strong signal is present and the population inversion is at a minimum, that is, N2  N1 Lg is the length of the gain volume, always equal to or less than the cavity length l L0 is the loss factor for photons during one round trip through the cavity R1 is the reflectivity—a decimal value—of the high-reflectance mirror R2 is the reflectivity—a decimal value—of the output coupler mirror Pavail is the total power available in the laser cavity

The output power available, Pavail, is given by Equation 1-8.

Pavail = gssIsatVg (1-8) where gss is the small signal gain listed in Table 1-1 Isat is the laser saturation intensity (irradiance) for a signal so strong that the population inversion is almost zero (N2  N1), as listed in Table 1-2 Vg is the gain volume, a product of the length Lg of the gain volume and the cross sectional area Ag of the gain volume as given earlier in Equation 1-3 A word more here may be helpful in understanding the difference between the small signal gain gss and the saturated gain g. Recall that inside the gain medium, the input energy from the pumping source—the excitation energy—creates the population inversion between the upper

8 Optics and Photonics Series, Course 2: Elements of Photonics and lower laser energy levels. When the population inversion is at its maximum (N2 >> N1), the small signal gain gss = 21(N2 – N1) is at its maximum. As the pumping increases and the internal laser beam increases in intensity, the ever-increasing stimulated emission process begins to deplete the number of atoms N2 in the upper energy level E2. This in turn causes the population inversion (N2 – N1) to decrease and thus the gain g = 21(N2 – N1) to decrease. The process of gain reduction, known as gain saturation, is shown in Figure 1-5, where the gain is plotted as a function of frequency , both for a small cavity signal and a large cavity signal. The saturation intensity Isat is a measure of the maximum intensity—that is, the largest number of stimulated emission photons—the laser can support and still lase.

Figure 1-5 Small signal gain and saturated gain

Without proof we offer an equation that relates the saturated gain g to the small signal gain gss, in terms of irradiances within the cavity. g g = ss (1-9) II 1+ left right Isat

In this relationship, Ileft is the irradiance as the beam enters the gain medium from the left and Iright is the irradiance as the beam enters from the right. And, Isat is the saturation irradiance that makes the saturated (or operating) gain g equal to one-half the small signal gain. Clearly, this gss happens when Isat = (Ileft + Iright), for then g = = ½ gss. 11

Table 1-2 below lists gain bandwidths 21 (in Hz), laser output wavelengths 21 (in nm), and 2 Isat, in watts/cm , for some common lasers.

Module 2-1: Operational Characteristics of Lasers 9 Table 1-2. Gain Bandwidths 21 and Saturation Intensities Isat with Wavelengths for Several Common Lasers   I Type of Laser 21 21 sat (nm) (Hz) (W/cm2) HeNe 632.8 2 × 109 6.2 Argon 488.0 2 × 109 16.3 HeCd 441.6 2 × 109 7.1 Copper Vapor 510.5 2 × 109 9.0 7 –2 CO2 10,600 6 × 10 1.6 × 10 Excimer 248.0 1 × 1013 3.4 × 105 Dye (Rh6-G) 577 5 × 1013 3.4 × 109 Ruby 694.3 3 × 1011 3.8 × 107 Nd:YAG 1064.1 1.2 × 1011 1.2 × 107 14 9 Ti:Al2O3 760 1.5 × 10 2.0 × 10 Semiconductor 800 1 × 1014 2.5 × 109

Thus, with values for Isat from Table 1-2, and values for gss from Table 1-1, along with laser gain medium and optical cavity properties, one can use Equations 1-7 and 1-8 to calculate the power out for a specific laser. (This calculation will be made later when we examine a HeNe laser in detail.) Equations 1-7 and 1-8 have been used to describe continuous wave, cw, lasers, that is, lasers that run continually and are not pulsed. The minimum operating gain for a laser is the threshold gain. The threshold level must be maintained to continue lasing. When the operating gain in the laser falls below the threshold gain, lasing stops and the laser turns off. If the gain is “saturated” down to the threshold, the maximum amount of available energy is converted into photons in the laser beam. So then we take g = gop = gth. That is, the threshold gain, as given in Principles of Lasers (Module 1-6 of Course 1, Fundamentals of Light and Lasers) is

11 g = g = g = ln  (1-10) op th 21LRRL g120 where gth refers to the laser’s threshold gain value, where “gains equal losses” Lg refers to the gain medium length R1 and R2 refer to the reflectivities of the high-reflectance mirror and output coupler mirror respectively L0 refers to the roundtrip fractional loss—due to scattering, absorption, diffraction, –(2l) etc.—such that the loss L0 for a roundtrip 2l equals e , where  is the “average” loss coefficient in cm–1 and l is in cm. Using Equations 1-7, 1-8, and 1-10, we can develop an approximation for the power output of a laser in terms of the small signal gain gss, the operating gain g, and the losses within the two- mirror optical cavity. Figure 1-6 is based on calculations made with Equations 1-7, 1-8, and 1-

10 Optics and Photonics Series, Course 2: Elements of Photonics 10. This figure shows the fractional power available in the laser cavity as a function of the power coupled out of the laser, for four different cavity roundtrip losses. Note that when the power coupled out exceeds 0.8, the fractional power available in the cavity drops to zero. Note also that the higher the roundtrip loss the lower is the fraction of power available in the cavity, as we should expect.

Figure 1-6 Fraction of power available in the output power for various roundtrip loss values and various output coupler transmissions

In summary, the laser power output characteristics shown in Figure 1-6 give us some useful insight into laser operation. For a smaller roundtrip loss, L = 0.02 in Equation 1-7, a larger fraction of the available power, Pavail, is output by the laser because there is less loss. The power output is about 0.86 × Pavail. The maximum power output occurs when the output coupler transmission is about T = (1 – R) = 0.20, where T is the transmission and R is the reflectivity at the output coupler. As the roundtrip losses increase, the total possible power output is less and the optimum output coupler mirror transmission is greater. For twenty percent loss, L = 0.20 in Figure 1-6, the maximum possible power output is less that 0.6 × Pavail at an output coupler Pout mirror transmission of T2 = 0.40. The fraction of power available is = laser, which is often Pavail called the laser efficiency, laser. In general, for laser power extraction, the laser is more efficient when there is less loss, L  0, and for small output coupler transmission, T < 0.05. Note also that T must be greater than zero or there is no power. For a pulsed laser, Equations 1-7, 1-8, and 1-10 may not be useful because the values of the gain parameters are continually changing during the pulse. Some understanding of this effect can be obtained by examining the power and loop gain beginning with the onset of lasing, as shown in Figure 1-7. Many pulsed lasers operate for times that are very long compared to the

Module 2-1: Operational Characteristics of Lasers 11 2nl time required for light to complete one roundtrip in the optical cavity, tRT = , where l is the c mirror separation in Figure 1-4, c is the speed of light, and n is the index of refraction of the cm gain medium. For example, if the mirror separation is l =5 cm, c = 3  1010 , and n = 1, the s –9 roundtrip time is tRT = 3.3310 sec, about 3 nanoseconds. (tRT must be modified for some solid materials in which the speed of light is slowed down due to the greater index of refraction of the material.) So, for pulse lengths longer than about ten nanoseconds, the pulse completely fills the space between the mirrors, and the laser operates in the “quasi-cw” mode. That is, the time to pass along the optical cavity is very small compared to the pulse length. When the pulse length is less than a few nanoseconds, the laser is operating truly in the pulsed mode. In this case the narrow pulse bounces back and forth between the mirrors much as a ball would between two walls The following description—based on Figure 1-7—discusses the onset of lasing just after a cw laser has been turned on. In Figure 1-7 time is plotted on the x-axis and loop gain (solid line) and power output (dashed line) on the y-axis. When the laser is switched on (t = 0), there are not yet enough stimulated photons in the cavity to begin the lasing process. Thus, the loop gain GL is less than 1, since losses exceed gains. Hence, no output is possible from the laser. As time increases to t2, the photons in the cavity make many roundtrips and build up in number due to stimulated emissions. Here the loop gain increases to a value equal to 1. Just beyond t2, laser power begins to emerge from the output coupler as a laser beam.

Figure 1-7 Loop gain and output power in cw laser

But as power is drawn from the laser as part of the output beam, the system continually loses some of its stimulated photons. The loop gain GL is not allowed to reach its maximum value due to this process. This can be seen in Figure 1-7 since the gain peak for GL and the maximum power output do not coincide. In fact, the maximum laser power out coincides with the downward slope of the loop gain curve GL. The most efficient situation occurs when the peak of the loop gain GL coincides with the peak of the power output. This can be achieved by

12 Optics and Photonics Series, Course 2: Elements of Photonics preventing power output via a shutter that remains closed until the loop gain GL reaches its peak value. The shutter is then opened to release the output beam at maximum power. This allows large numbers of photons to emerge in a small burst. This technique is called Q-switching. (Details of this technique are beyond the scope of this module.)

The operating gain g = gop in a laser is smaller than the small signal gain gss, as discussed earlier. This is due to the many losses in the cavity and the continuous extraction of beam power from the laser. This gain reduced to its lowest permissible level before the laser turns off is called threshold gain, as we have also described earlier. During the onset of lasing, the operating gain in the laser continues to change until equilibrium is reached and the operating gain g is reduced to the saturated gain. In the model above, the saturated gain is taken to be the threshold gain given by Equation 1-10. Just as we were able to represent the roundtrip photon loss as –(2l) L0 = e , where  is a “distributed” loss coefficient averaged out along the length l of the gain l medium, so can we define a “distributed” gain coefficient , averaged over l and write GA = e for one pass. The GA quantity shown in Figure 1-7 refers to this amplified gain. In a pulsed laser, the losses and the operating gain are usually changing during the pulse. As a function of time then, the excitation mechanism builds up from zero to a maximum and then goes back to zero. Therefore, Equation 1-7 may not give a very good estimate of the power output of a pulsed laser, because all of the parameters used in Equation 1-7 are assumed to be constant for all times. Some properties of pulsed lasers are discussed next, repeating much of what has already been presented in Module 1-6, Principles of Lasers, for easy reference.

Pulse properties of a laser Using a laser in a pulse mode has many advantages. The beam parameters can be manipulated to suit the application. Pulse properties are discussed briefly here. A pulse of laser power versus time has a bell shape. For simplicity, it is assumed to be a triangle. See Figure 1-8.

Figure 1-8 Energy in a laser pulse

The peak of the triangle (triangle or pulse) at A is Pmax (the peak power of the pulse). The pulsewidth t1/2, distance BC, is the width of the pulse at half the height of the pulse, and is given in seconds. The pulse width t½.is referred to as full width at half maximum (FWHM).

Module 2-1: Operational Characteristics of Lasers 13 The total time duration of the pulse is represented by the width at the bottom of the pulse, distance DE. It is equal to 2 × t½. The energy of the pulse, which is given by the product of power × time, can be approximated by substituting the area of the triangle for the true area of the pulse. Thus,

Area of triangle DAE = ½ × base × height = ½ × (2t½) × Pmax Area of pulse  area of triangle DAE  energy of pulse

Energy of pulse (Ep)  ½ × (2t ½) × Pmax = (t ½) × Pmax So,

Ep = t½ × Pmax (1-11) When the laser output consists of a series of pulses, the output is called a pulse train. The characteristics of such a pulse train are as follows. The time from the beginning of one pulse to the beginning of the next pulse is called the pulse repetition time (PRT). The energy of the pulse, which is equal to the area of the triangle—as discussed above—is also equal in value to the rectangle whose width is PRT and whose height is Pav (the average power of the pulse). See Figure 1-9.

Figure 1-9 Average power and pulse repetition time of a pulse in a pulse train

The pulse energy in terms of PRT and Pav is

Ep = t ½ × Pmax = Pav × PRT (1-12)

The ratio of pulse width t½ and pulse repetition time PRT is called the duty cycle (DC). Rewriting Equation 1-12 in terms of the ratio of Pav and Pmax, we get an expression for the duty cycle DC: P t DC  avg = 1/2 (1-13) Pmax PRT The reciprocal of the pulse repetition time (PRT) is called the pulse repetition rate (PRR) and is given by Equation 1-14. 1 PRR = (1-14) PRT

14 Optics and Photonics Series, Course 2: Elements of Photonics Additionally, the pulse energy can be expressed in terms of the pulse repetition rate PRR as given by Equation 1-15.

Pavg Ep = (1-15) PRR In practice, PRT (and hence PRR), along with the pulse width, pulse height, and duty cycle DC can be controlled. This gives great flexibility in controlling the beam parameters of pulses in a laser pulse train.

Example 1

A particular pulsed laser has an average power Pav of 28 mW. The pulse width t½ is 2.8 s and the pulse repetition time PRT is 1.3 ms. Find the duty cycle DC, maximum power Pmax, and energy per pulse Ep. Solution From Equation 1-13, we can find the duty cycle DC as follows:

t 2.8 106 s DC = 1/2 = = 2.15 × 10–3 PRT 1.3 103 s This tells us that the FWHM of a pulse in the train is about two thousandths of the time interval between the pulses in the pulse train. So the pulses are narrow and widely separated.

Again, using an equivalent form of Equation 1-13, we can change the terms and solve for Pmax to obtain:

3 Pavg 28 10 W Pmax = = = 13.0 Watts DC 2.15 103

Then, using Equation 1-12, we can solve for the pulse energy Ep:

Ep = Pav × PRT –3 –3 –6 Ep = (28 × 10 W) (1.3 × 10 s) = 36.4 × 10 Joules

Ep = 36.4 J

Equivalently, using a different part of Equation 1-11, we can solve for Ep as follows: –6 –6 Ep = Pmax × t ½ = (13.0)(2.8 × 10 ) = 36.4 × 10 Joules

Thus Ep = 36.4 J, the same result.

Module 2-1: Operational Characteristics of Lasers 15 Example 2

The output pulse of a Q-switched laser has a pulse width t½ of 6 ns and energy Ep of 1.2 J. Find the peak power, Pmax. Solution Using Eq. 1-11, we have: E 1.2 J P = p = = 0.2 × 109 watts max 9 t1/2 610s

Pmax = 200 megawatts

Example 3

A Q-switched laser has a pulse repetition time of PRR equal to 1 KHz, a FWHM of t½ = 6 ns, and a pulse energy Ep of 1.2 J. Find: (a) Pulse repetition time PRT (b) Duty cycle DC

(c) Average power Pav Solution 1 1 (a) PRT = = = 10–3 sec = 1 millisecond PRR 103 Hz 9 t 1 610 s (b) DC = 2 = = 6 × 10–6 PRT 110 3 s E 1.2 J (c) P = (DC)(P ). But P = p = = 0.2 × 109 W av max max 9 t 1 2 610 s

Thus Pav = (DC)(Pmax) = (6 × 10–6)(0.2 × 109 W) 3 Pav = 1.2 × 10 W = 1.2 kW

By using the pulse energy from Equation 1-12 and the power output in Equation 1-7, the energy output per pulse can be obtained as  1 ggss  1 R1  Eout/pulse = Eavail/pulse  (1-16) gLss g  g RL11 RRL    10 120  where the energy available per pulse is given by

Eavail/pulse = t½ gssIsatAgLg = t½ gssIsatVg (1-17)

16 Optics and Photonics Series, Course 2: Elements of Photonics Equations 1-16, 1-17, and 1-10 give a reasonable estimate of the energy per pulse, if average values of the parameters are used. For very short pulse lasers, the lasing is very different. Both Neodymium Glass and Titanium Sapphire lasers have been operated as ultra short pulse lasers. More discussion of the characteristics of these two lasers is provided in Module 2-2, Specific Laser Types.

Mode formation in lasers The formation of longitudinal modes in a laser cavity made up of two mirrors is analogous to the standing waves set up in a vibrating string fixed at both ends. So first let’s review what one learns in a first course in physics about the properties of standing waves and the sound produced. When a string is stretched between two fixed points a distance l apart, as shown in Figure 1-10 below, and made to vibrate, the string will produce “sounds” of different frequencies. That’s what happens when the strings on a guitar are plucked or when the strings on a violin are “bowed” by the violinist. Because many different wavelength shapes can fit between the two fixed ends, as shown in Figure 1-10 (a, b, c), standing waves are set up. The “sounds” given off by the vibrating string of different shapes consist of many different frequencies f—a fundamental note (Figure 1-10a) and higher harmonics (Figure 1-10a, b).

2 0 2l 1  l 2 3 l v v v3v f  f  f  0 1 2 2 2l l 3 ll2

Figure 1-10 A string vibrates between two fixed end points A and B. Based on the properties of the string and the tension in the string, waves move along the string with a definite speed v. Different wave shapes (different wavelengths) fit between the two fixed ends A and B as shown in Figure 1-10 (a, b, c).

Since the frequency f is equal to v/, different frequencies (tones) such as f0, f1, f2, etc., are produced. Note that the string displacement is always zero at the fixed ends.

A two-mirror optical cavity, as shown previously in Figure 1-4, produces different frequencies of electromagnetic energy just as does a vibrating string. These different frequencies arise from the different wavelength shapes that can fit between the two mirrors, as shown in the multiple sketches of Figure 1-11.

Module 2-1: Operational Characteristics of Lasers 17

Figure 1-11 Longitudinal modes determined by standing waves set up between the cavity mirrors. Note that the wave displacement (electric field amplitude) is zero at the mirror surfaces for each standing wave pattern. Note also that the difference in frequency, , between adjacent wave patterns is always equal to c/2l.

In these sketches, by analogy with the standing waves in Figure 1-10, l is the distance between the mirrors, c is the speed of the EM waves traveling between the mirrors, and the strength of the electric field is like the displacement of the vibrating string. Just as there was no movement of the string at the fixed end points, the electric field value for all wave shapes is zero at the mirrors. The different wave shapes, generating different wavelengths and frequencies, are referred to as the longitudinal modes, since they depend directly on the length l between the mirrors. The different frequencies 1 through 5 are shown in Figure 1-12, giving rise to the spectral distribution of a laser beam.

18 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 1-12 Spectral distribution of laser output showing several different frequencies and corresponding mode numbers

The spacing between the longitudinal modes shown in Figure 1-11 can be calculated. For a laser with one single gain medium of refractive index n and gain length Lg, which is between the mirrors separated by a cavity length, l, the longitudinal mode spacing is given by Equation 1-18. cc mode =  for n 1 (1-18) 2(l LnLgg ) 2 2l where mode = frequency interval between adjacent longitudinal modes Lg = length of the laser gain medium with refractive index n l = mirror separation as shown in Figure 1-11

Transverse modes The transverse modes in a stable two-mirror cavity are referred to as Gaussian modes. The particular electromagnetic wave form that fits exactly to the mirror surfaces to shape the transverse mode is the Gaussian beam. The electromagnetic wave that makes up the laser beam has both electric and magnetic fields, as we know. If we examine in detail the variation in irradiance—due to the electric field—across the beam, we find many different irradiance patterns that are possible, each made up of patterns of bright and dark regions. These patterns of dark and bright in a plane perpendicular to the beam itself are called transverse modes. The zero order mode has one maximum; the first order mode has two maxima and so on. Different cavity geometries can give rise to either a single, pure mode such as the TEM00 mode, or a combination of many modes. Some transverse mode patterns are shown in Figure 1-13.

Module 2-1: Operational Characteristics of Lasers 19

Figure 1-13 Transverse electromagnetic modes of a laser beam as seen “looking” head on into the beam. The shaded “dark regions” are regions of high brightness; the plain “white” vertical and horizontal lines are regions of low or zero brightness.

The different modes shown in Figure 1-11 are identified as Transverse Electromagnetic Modes (TEMxy). They are given with two subscripts or mode indices, x and y. The subscript x indicates the number of zero-irradiance regions along the x-direction. The subscript y indicates the number of zero-irradiance regions along the y-direction. Thus, TEM00 refers to one single bright region with no regions of zero irradiance—except the one at the distant edges of the pattern. The mode TEM20, for example, has two minima along the x-direction and none along the y-direction; the TEM03 mode has zero minima along the x-direction and three minima along the y-direction, and so on. Notice that the higher order modes are more spread out than the lower order TEM00 mode. The TEM00 mode has the highest amount of irradiance near the center of the mode. When lasers operate in a so-called multi-mode operation, the resulting TEM beam irradiance pattern is sort of a sum of each of the individual modes, being wider than any single TEM mode and approaching a “flat-top” in irradiance across the width of the beam. The higher order TEM modes as well as the multimode outputs are a factor M times larger in transverse width than the simplest and purest mode, the TEM00 mode. Many of the operational equations for a laser are based on the mathematically “pure” TEM00 gaussian beam. For example, the “beam irradiance versus beam transverse dimension” for a TEM00 beam and multimode output are compared in Figure 1-14.

20 Optics and Photonics Series, Course 2: Elements of Photonics (a) TEM00 output (b) A multimode output

Figure 1-14 Comparison of beam irradiance versus beam width for a TEM00 beam and a multimode laser beam

A measure of the optical quality of a laser beam is given by its beam divergence. The full angle beam divergence  for a TEM00 beam of wavelength , from a circular “aperture” of radius wo, is given by the relationship 2 00 = (1-19) wo For a multimode output beam, whose multiplication factor is M, the full-angle beam divergence M is given approximately—and simply—by multiplying the result for the TEM00 calculation by the M-factor, as given in Equation 1-19. 2M  M = M00 = (1-20) w0 If the beam divergence angle is increased by a factor M for a multimode (or higher order TEM mode beam) compared with a TEM00 beam, then the diameter of the multimode beam is increased by a factor of M and the resulting area of the laser spot on a target is increased by a factor M2. Similar substitution of the M factor for higher order or multimode output beams, in relationship to formulas such as beam divergence, beam area on target, and beam irradiance on target, worked out for the TEM00 beam, simplifies greatly the calculation of corresponding laser characteristics for non-TEM00 beams. Of course, one must know the M-factor. Fortunately, most laser manufacturers provide the M-factor along with the laser specs for a given laser. In the next module, 2-2, Specific Laser Types, we will describe the characteristics of many different types of lasers and calculate some of their operational characteristics. We will do this based on spec sheets—we will call them FACT SHEETS—and operational equations we’ve developed in this module. To get started, we will indicate how we’re going to do this with a well-known laser, the helium-neon (HeNe) workhorse. First let’s describe the operational details of a HeNe laser.

Module 2-1: Operational Characteristics of Lasers 21 The Helium-Neon Laser The helium-neon laser is a neutral atom laser. Helium and neon are inert gases. The gas mixture is at a low-pressure (2–3 torr) of helium and neon in a ratio of approximately 5:1, helium to neon. The HeNe was the first gas laser built and was demonstrated in 1961 after the first solid laser, the ruby laser, was operated in 1960.

Excitation mechanism A gain tube for a HeNe laser is shown in Figure 1-15.

Figure 1-15 Example of gas gain tube for a helium-neon laser

In the HeNe laser, the mirrors are made an integral part of the glass tube that contains the mixture of helium and neon gas. The tube can be anywhere from 10 to 100 cm in length, with most lasers around 10–20 cm long. The excitation mechanism is an electrical discharge created by a power supply that establishes a voltage between the electrodes—the cathode and the anode. When a voltage is applied between the electrodes, a discharge current develops within the tube and electrons flow from the cathode toward the anode. These electrons form a plasma, a gas of charged particles. Thus, HeNe laser gain tubes are often called plasma tubes. As the discharge current develops, the tube begins to emit light in a fashion similar to that of a neon sign. The electrons accelerating toward the anode within the gas collide with both helium and neon atoms, “knocking” them up to excited energy levels, certain pairs of which exhibit a population inversion. The atoms in excited energy levels then fall to lower energy levels via spontaneous decay and stimulated emission. The stimulated emission process leads to the amplified laser beam, as we already know.

Gain medium Electronic energy levels of both neutral neon and neutral helium atoms are shown in Figure 1-16.

22 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 1-16 Energy level diagram for a helium-neon laser showing three laser transistors at 3.391 m, 632.8 nm, and 1.152 m

Within the HeNe laser, neon is the element that leads to the laser transitions. Helium is a so- called buffer gas that improves the operation of the laser through collisions with the neon atoms. In the plasma tube, most of the collisional energy from the electrons goes into the helium atoms. Then the helium atoms transfer their energy to the neon atoms through direct collisions. Because of this exchange of energy from helium to neon, the HeNe laser is a good example of a transfer laser. Notice in Figure 1-16 that the E3 and E5 energy levels of helium are at nearly the same value as the E3 and E5 energy levels of the neon atom. Since they are nearly at the same energy level, these energy states of the two atoms are resonant with one another. This means that when helium and neon atoms collide in the gas mixture, the helium atoms give their energy efficiently to the neon atoms, exciting them to the nearly equal higher energy levels. This process produces a population inversion and begins the laser process.

Optical cavity As shown earlier in Figure 1-15, the mirrors of the optical cavity are often an integral part of the gain tube for a HeNe laser. In some cases, the mirrors are external to the gas tube and the ends of the gas tube include Brewster windows, which determine the polarization state of the laser beam. The Brewster windows force the HeNe laser to operate in a specific linearly polarized mode. For many HeNe lasers, the high-reflectance mirror has some curvature and the output coupler mirror, of a lower reflectance, is a flat mirror. This optical resonator configuration forces the beam waist (narrowest width) of the Gaussian beam in the optical cavity to occur at the output coupler mirror. That is, the output laser energy seems to diverge from an “aperture” at the output coupler mirror. Having the mirrors integrated into the gas tube structure has the advantage that the laser is self-contained. The alignment of the mirrors is generally accomplished at the laser manufacturing plant when the mirrors are attached to the tube. However, attached mirrors have the disadvantage that the gases can come into direct contact with the mirrors, leading to contamination of the mirror surfaces. If the mirrors are somehow misaligned, the laser may not be repairable. However, HeNe lasers are rather inexpensive, so

Module 2-1: Operational Characteristics of Lasers 23 that when the laser has some problem, a new one usually replaces it. Most HeNe lasers are now prepackaged at the manufacturer and come to the user in a self-contained package. An example of a self-contained HeNe laser is shown in Figure 1-17.

Figure 1-17 Self-contained helium-neon laser from JDS Uniphase

The information given below is useful in describing the characteristics and operation of the HeNe laser. We will use this information to calculate several operational characteristics of the HeNe laser. Notice that the M2 factor is given.

Fact Sheet for Helium-Neon Lasers Excitation mechanism: Electric discharge – plasma Gain medium: A mixture of helium and neon gas. Neon is the lasing element. Helium is the energy transfer partner. The HeNe is a transfer laser. Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelength = 632.8 nm; alternate wavelengths = 543.5 nm, 594 nm, 612 nm, 1.15 m, 1.523 m, and 3.39m; primary color is red. Output power: 0.5–80 mW Beam diameter: 0.5–2.5 mm Beam divergence: 0.5–3.0 mrad M2 factor: M2 = 1.0–1.2 –3 –1 Pulse format: Continuous wave (cw) Small signal gain: gss = 210 cm 2 9 Saturation intensity: Isat = 6.2 W/cm Gain bandwidth: 210 Hz

Coherence length: 0.1–2.0 m Gain length: Lg = 10–100 cm Power stability: less than 5%/hr Lifetime: Greater than 20,000 hrs Advantages: Very reliable, robust, power-stable, long-lifetime laser operating at multiple wavelengths; long coherence length Disadvantages: Low power; applications limited; difficult to repair when damaged Applications: Alignment, metrology, inspection

24 Optics and Photonics Series, Course 2: Elements of Photonics Calculation of several operational characteristics Based on equations developed in this module, let us now calculate some of the operational characteristics for a typical HeNe laser.

Example 4: Photon energy Problem Determine the energy in electron volts of a HeNe laser photon of wavelength  = 632.8 nm. Solution

hc –34 8 –9 Using Equation 1-1, Ephot = , where h = 6.625 × 10 Js, c = 3 × 10 m/s, and  = 632.8 × 10 m:  34 8 (6.625 10 J s)(3 10 m/s) –19 Ephot = = 3.14 × 10 J 632.8 109 m Since an energy of 1 eV is equal to 1.6 × 10–19 J, we have 3.14 1019 J Ephot (eV) = = 1.96 eV 1.6 1019 J/eV (Recall that 1 eV is the energy that an electron at rest gains when it “falls” through a potential difference of 1 volt.)

Example 5: Gain volume Problem If the gain volume for a HeNe laser tube is 2 mm in diameter by 10 cm in length, determine the gain volume, Vg. Solution 2 32 d (3.14)(2 10 m) –6 2 Using Equation 1-3, Vg = Ag × Lg, where Ag = = = 3.14 × 10 m 4 4

Lg = 10 cm = 0.1 m –6 2 –7 3 3 Vg = Ag × Lg = (3.14 × 10 m )(0.1 m) = 3.14 × 10 m , or about 0.314 cm

Example 6: Beam irradiance Problem Assuming that the irradiance is nearly uniform across the diameter of the gain tube of a certain HeNe laser, calculate the internal beam irradiance for a HeNe laser operating at a power of 1 mW inside a tube of diameter 2 mm.

Module 2-1: Operational Characteristics of Lasers 25 Solution 4P Using Equation 1-2, I = , d 2 (4)(1 103 W) I = = 3.18 × 102 W/m2, about 0.032 W/cm2 (3.14)(2 1032 m)

Example 7: Beam divergence Problem

Assuming the HeNe beam waist diameter 2wo at the output coupler to be a value of 0.2 mm, 2 determine the beam divergence 00 for a TEM00 beam and for a non-TEM00 beam of M factor 1.2. Solution 2 For 00, use 00 = . Remember that wo refers to the radius of the beam waist—not the diameter. wo

2 (2 632.8 109 m)   = = = 4.03 × 10–3 rad 00 4 wo (3.14)(1 10 m)

 00  4 milliradian 2 For M where M = 1.2 and M  1.1, use the relationship M = M 00. –3 –3  M = M 00 = 1.1 × 4 × 10 = 4.4 × 10 rad = 4.4 mrad

M = 4.4 mrad, not too different from the TEM00 beam

Example 8: Single-pass gain Problem

Assume that a HeNe laser amplifier is operating at its maximum gain so that g = gss. Calculate the irradiance Eg after one pass through a 10-cm-long HeNe laser gain tube if the initial irradiance E0 is 1 mW/cm2.

Solution Use Equation 1-6 and Table 1-1.

gssLg –3 2 –3 Ig = I0 e where I0 = 1 × 10 W/cm , gss = 2 × 10 /cm (from Table 1-1) and Lg = 10 cm

–3 (2×10–3/cm)(10 cm) –3 2×10–2  Ig = (1 × 10 )e = (1 × 10 )e

(Note that gss and Lg must be in the SAME units of length) –3 0.02 Eg = (1 × 10 )(e ) –3 Eg = 1.02 × 10 W = 1.02 mW So after one pass, the irradiance increases from 1 mW to 1.02 mW. But at the speed of light, c = 3 × 1010 cm/sec, this single pass occurs in the short time of 0.33 nanoseconds!! This means that the photons bounce back and forth between the mirrors about 3,000,000,000 times a second.

26 Optics and Photonics Series, Course 2: Elements of Photonics Example 9: Power out for a typical HeNe laser Problem Making use of Equations 1-7, 1-8, and 1-10 and Tables 1-1 and 1-2, and the following parameters for a typical HeNe laser, operating at threshold, estimate the power out, Pout. Cavity parameters

Reflectivity R1 = 0.999 (High-reflectance mirror)

Reflectivity R2 = 0.970 (Output coupler mirror)

Gain length Lg = 50 cm; beam diameter = 2 mm 2 (0.2 cm) –2 2 3 Gain volume Vg = 50 cm × Ag = 50 cm × = (50 cm)(3.14 × 10 cm ) = 1.57 cm 4

Gain coefficient g = gth (see below for calculation)

Loss factor L0 = 0.008 (assumed as reasonable) Table values –3 –1 Small signal gain coefficient gss = 2 × 10 cm (Table 1-1) 2 Isat = 6.2 W/cm (Table 1-2) Calculations

A. Since the laser is operating at threshold, g = gth, so 11 g = gth = n (Equation 1-10) 2(1)LRRLg120

11 g = n 2(50 cm) (0.999)(0.97)(1 0.008)

11 g = n 100 cm 0.9613

1 1.97 102 g = n 1.02 = = 1.97 × 10–4 cm–1 100 cm 100 cm

B. Now calculate Pavail from Equation 1-8.

Pavail = gssIsatVg 210 3 P = (6.2 W/cm2)(1.57 cm3) avail  cm –2 Pavail = 1.95 × 10 W = 19.5 mW

Now, knowing g and Pavail, we can use Equation 1-7 to estimate the power out.

1 gg 1 R P = P ss 1 out avail  gLss g  g RL1o(1 ) RRL 12o (1  )

Module 2-1: Operational Characteristics of Lasers 27 1 2 1034 1.97 10 1  0.999 Pout = (19.5 mW)  34 210 50 1.9710  (0.999)(0.992) (0.999)(0.970)(0.992) –4 Pout = (19.5 mW)(10)(9.15)(5.1 × 10 )

Pout = 0.9 mW or about 1 mW (reasonable value for a 50-cm HeNe laser)

LABORATORY

Laser Inspection and Measurement of Laser Operational Characteristics—HeNe and Diode Lasers Introduction Most lasers are delivered from the factory with a specification sheet. However, lasers do not always perform as well as the listed specifications indicate, although sometimes they perform better than the specifications. The wavelength and output beam size may be listed in the specifications. The power output of the laser, the beam diameter, the beam divergence, and M2 factor can easily be measured. When working with a laser, two things must be done before turning the laser on. First, be certain that everyone in the area is wearing the correct laser eye protection—that is, laser safety goggles or laser safety glasses. Second, perform an inspection of the laser to understand as much as possible about the laser’s operation before turning it on. The objective of this exercise is to reinforce good laser safety habits and to provide procedures for measuring several laser operational characteristics.

Materials required HeNe laser Diode laser or diode laser pointer Firm mount for diode laser Optical power meter Focusing lens with known focal length Linear translator with Adjustable lens mount Sharp-edged blade, lab jacks and meter stick

A. Preparing to inspect the laser Can you determine the mechanism by which the excitation energy gets into the gain medium? Sometimes lasers are completely sealed, which makes it difficult to determine what the constituent parts are and what those parts are doing during lasing action. However, it is very important to accomplish a visual inspection of the laser before turning it on. Even if the laser

28 Optics and Photonics Series, Course 2: Elements of Photonics you are operating is a very low power laser pointer, you will want to look at it to determine where the beam comes out and which direction to point the laser. Have your lab notebook ready to take notes on your observations. Inspect the laser. Write down the answers to the questions below in your lab notebook.

B. Laser inspection: General appearance  Does the laser have anything written on it?  Is there an information sheet or a specification sheet provided with the laser?  What are the items listed on the specification sheet?  What size is the laser? How long is it? How wide is it? How tall is it?  Is the laser in a box or container of some kind?  Identify the container material, if you can. Is it metal or plastic?  Does anything protrude or stick out from the container?  Is there any indication of where the laser beam comes out of the box?  Are there any lenses or mirrors visible?

C. Laser inspection: Major elements  Can you identify the three major elements of the laser, that is, the excitation mechanism, the gain or active medium, and the optical cavity?  For the excitation mechanism, can you identify the power source?  Is there an electrical power cord?  Can you determine what the active medium is? Is it a gas or solid material? Is it a liquid? (Remember, gases require a tube to hold them. Liquid lasers operate with a dye jet. Solid material lasers are different and usually much smaller.)  Can you identify the optical cavity in this laser? After working with lasers for some time, you will become familiar with various types of hardware and be able to distinguish a gas laser from a liquid or solid material laser.  What is the state of matter of the lasing material in the HeNe laser?

D. Measuring laser power If you point the laser directly into the power meter and read the power off of the detector face, as shown in Figure 1-18, you will have the power out of the laser.

Module 2-1: Operational Characteristics of Lasers 29

Figure 1-18 Experimental set up for measuring laser power output

To verify the power reading, it is a good idea to measure the power of the laser for various detector distances from the laser. Use at least three distances. One distance should have the laser very close to the power meter—a few centimeters. The next distance should be moderate, about 1 meter. And, the third distance should be as far as possible on the lab table, maybe 2 meters or more. Fill in the table below:

Table 1-3. Measurement of Laser Output Power Distance of Detector from Power Meter Reading (Watts) Laser Laser Off (Baseline Power)

Laser at 2 cm (P1)

Laser at 1 m (P2)

Laser at 2 m (P3)

Average Power (Pav = [P1+P2+P3]/3)

E. Measuring laser power at the focal point of a lens

The TEM00 gaussian beam coming out of the HeNe laser is diverging. In Module 1-6 of Course 1, Fundamentals of Light and Lasers, the beam divergence and beam size were measured for a HeNe laser. It is possible that the beam reaching the detector is too wide for the detector, so that only part of the laser beam actually goes into the detector. Placing a focusing lens in the laser beam and positioning the detector at a location beyond the lens—equal to the focal length of the lens—enables you to repeat the power measurements and be sure the detector captures the entire beam. Again, the measurement should be done at three locations near the lens focal point. The best place for the detector is at the focal plane of the beam. You should develop a procedure to find this best place for the position of the lens, using a blank, white card. Move the card along the beam until the spot is the minimum size. This is where the focal plane is located. Then, select two other locations near the focal point on either side.

30 Optics and Photonics Series, Course 2: Elements of Photonics Table 1-4. Measurement of Laser Output Power at Focus Distance From Laser Power Meter Reading Laser Off (Baseline Power)

At lens focal point—(l1) (P1)

Just before the focal point—(l2) (P2)

Just after the focal point—(l3) (P3)

Average Power (Pav= [P1+P2+P3]/3)

F. Measuring beam divergence Set up the laser, linear translator with knife blade, and detector as shown in Figure 1-19. The sharp-edged blade is vertically mounted on a linear translator such that the blade can be moved slowly across the laser beam by turning the adjusting micrometer screw. The laser, linear translator, and detector head are mounted on lab jacks and aligned such that the laser beam, razor blade edge, and detector are in a straight line, as shown in Figure 1-19. Perform all measurements of the beam width in the far field.

Figure 1-19 Experimental set up for measuring beam diameter and determining beam divergence

First measure the distance l1 between the laser (at the output coupler) and the knife blade—in the far field—as shown in Figure 1-19. Then, the micrometer screw is turned until the blade completely blocks the laser beam. Obviously, there will be no reading on the detector since no light is falling on the detector head. Now the blade is slowly withdrawn until a small amount of light reaches the detector head. The micrometer position and the reading on the detector are noted. The micrometer is turned continually in the same direction—in small but equal steps. The micrometer readings and the corresponding detector readings are noted. This is continued until the detector reads a maximum power. A graph plotting detector reading (power) versus sharp- edge blade position (micrometer reading) is shown in Figure 1-20.

Module 2-1: Operational Characteristics of Lasers 31

Figure 1-20 Graph showing power read by detector head versus knife-edge position in the beam. The dashed portion of the curve is completed based on symmetry.

Since the TEM00 beam will be symmetrical on either side of the central point, the bell-shaped curve as shown by the dashed line can be completed—without taking readings for the right half of the curve. The 1/e2 points are noted on the curve. The distance between the 1/e2 points on either side of the central maximum point gives the beam diameter (d1). The knife blade is moved 2 to a farther distance (l2) and the entire procedure is repeated to find the 1/e diameter (d2). The results are tabulated in Table 1-5. dd Finally, the beam divergence is calculated using the equation  = 21. ll21

Table 1-5. Measurement of Beam Diameter and Beam Divergence Beam 1/e2 1/e2 Position of Micrometer Power on Beam divergence position position knife blade reading detector diameter dd21 on left on right θ = ll21

l1 =

= = d1 =

l2 =

= = d2 =  =

G. Measuring beam power and divergence for a diode laser Repeat Parts A through G for the diode laser. How does the beam power and beam divergence of the diode laser compare with that for the HeNe laser? Which laser was easier to measure for

32 Optics and Photonics Series, Course 2: Elements of Photonics power and beam divergence? Which laser, for equal beam powers, leads to the largest irradiance on target at a given distance from the laser?

H. Using d = f to obtain the beam divergence at a focal spot Now place a lens of focal length f, as shown in Figure 1-21, in the far field of the laser. Locate the focal plane as you did in Part E, and repeat the knife blade measurement, again taking data and again making a plot similar to Figure 1-20. You should obtain a value for d. Then, using the relationship d = f , you can obtain  from  = d/f. This gives the beam divergence of the laser as it strikes the lens. If you do this all in the far field, your results for the beam divergence  from this measurement and those in Part F for the HeNe lasers should be comparable. How do they agree?

Figure 1-21 Setup for measuring beam divergence using a lens of focal length f and the equation d = f0

EXERCISES

1. If an atom moves to its ground state from an energy level 4.6 eV above the ground state and emits a photon, find the wavelength of the photon emitted. 2. Explain with a diagram the major components of a laser and how each works. Why should mirrors of high reflectivity be used in a laser? 3. Using a diagram, explain the relationship between the loop gain and power output for a cw laser. 4. What are the different losses of laser energy in a laser cavity? Explain the meaning and significance of the term threshold gain condition.

5. A cw Nd:YAG laser contains mirrors whose reflectivities are R1 = 0.988 and R2 = 0.78 and a roundtrip loss of 0.6 %. Find the amplifier gain GA during cw operation. 6. Why do longitudinal modes occur in a laser beam? A HeNe laser (n = 1) has a cavity length of 65cm and an output wavelength of 632.8 nm. Find the mode spacing . 7. Do all the longitudinal modes share the same gain? Is it better to have a larger number or a smaller number of modes? Explain.

Module 2-1: Operational Characteristics of Lasers 33 8. A laser has a mode spacing of 280 MHz and its fluorescence linewidth is 45 GHz. Find the number of longitudinal modes contained with the fluorescence linewidth. 9. A laser beam has a diameter of 3.4 mm at a distance of 2.35 m from the laser. If the diameter increases to 5.3 mm at a distance of 7.8m, find the beam divergence .

10. A laser whose wavelength is 1.06 m has an effective output aperture diameter 2wo of 1.65 mm. Where are the near-field and far-field regions relative to the laser? (You may have to refer to Module 1-6 of Course 1, Fundamentals of Light and Lasers, to make this calculation.)

11. A laser has an effective output aperture diameter 2wo of 1.6 mm at 488 nm. Find the beam divergence. 12. An aperture of diameter 0.9 mm is placed in the path of a beam of 1.8 mm diameter. Find the fraction of incident laser power transmitted by the aperture. 13. A laser has a beam divergence of 4.2 mrads. The beam is focused by a lens of focal length 2.05 cm. Find the diameter of the focused spot. (D  f) 14. Explain the meanings of the terms pulse width, PRT, and PRR. The output pulse of a Q-switched laser has a duration of 8 ns and an energy of 1.4 J. Find the peak power, Pmax. 15. Explain the terms full width at half maximum and duty cycle. A particular pulsed laser has an average power Pav of 32 mw. The pulse duration t½ is 2.5 s, and the pulse repetition time is 1.6 ms. Find the duty cycle DC, maximum power Pmax, and energy per pulse, Ep.

16. Draw the picture of a TEM23 transverse electromagnetic mode. 17. What is the difference between gas lasers and solid-state lasers? 18. What role does helium play in a HeNe laser?

REFERENCES

Books Hawkes, J., and I. Latimer. Lasers: Theory and Practice. New York: Prentice Hall, 1995. Hecht, Jeff. Understanding Lasers, 2nd Edition. Hoboken, New Jersey: Wiley-Interscience, 2001. Introduction to Lasers. (Laser Electro-Optics Technology Series.) Waco, Texas: CORD, 1986–. Laser Materials Processing Handbook, Laser Institute of America. Oshea, Donald C., W. Russell Callen, and William T. Rhodes. Introduction to Lasers and Their Applications. Reading, Mass.: Addison-Wesley Publishing Co., 1978. Siegman, A. E. Lasers. Mill Valley, California: Interscience Publishers, 1986. A very comprehensive, advanced treatment of lasers. Silfvast, William T. Laser Fundamentals. New York: Cambridge University Press, 1996. A comprehensive, calculus-based treatment of lasers suitable for a senior-level or first-year graduate-college engineering or science student.

34 Optics and Photonics Series, Course 2: Elements of Photonics Silfvast, William T. “Lasers,” Encyclopedia of Physical Science and Technology, Volume 7, Academic Press, 1987. A nonmathematical overview of lasers for a general audience. Silfvast, William T. “Lasers,” Handbook of Optics, 2nd Edition, Edited by Mike Bass, McGraw Hill and Optical Society of America, 1995. A brief overview of lasers with some algebraic mathematical equations and formulas. Tyagarajan, K., and A.K. Ghatak. Lasers: Theory and Applications. New York: Plenum Publishing Co., 1981.

Websites Rami Arieli, “The Laser Adventure,” http://stwi.weizmann.ac.il/Lasers/laserweb/index.htm Sam’s Lasers FAQ, http://www.repairfaq.org/sam/laserfaq.htm

Module 2-1: Operational Characteristics of Lasers 35

Specific Laser Types

Module 2-2

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the second module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Display, and Storage 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. This module relies heavily on Module 1-6, Principles of Lasers. A comprehensive understanding of the concepts presented in all of the modules in Course 1 (Fundamentals of Light and Lasers) is needed for this module. The student should also understand the concepts presented in Module 2-1, Operational Characteristics of Lasers. For students who may need assistance with or review of relevant mathematics concepts, a review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was prepared by William P. Latham (Albuquerque, New Mexico) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 1 Objectives ...... 2 Scenario ...... 3 General Overview of Basic Concepts ...... 3 Laser Classification ...... 3 Functional Elements of a Laser ...... 4 Spectrum of Laser Emission Wavelengths ...... 5 Characteristics of Specific Laser Types ...... 6 Gas Lasers ...... 6 The Helium-Neon Laser ...... 6 The Helium-Cadmium Laser ...... 12 The Ion Lasers—Argon Ion and Krypton Ion ...... 15 The Metal Vapor Lasers—Copper Vapor and Gold Vapor ...... 18 The Carbon Dioxide Laser ...... 20 The Nitrogen Laser ...... 25 The Excimer Laser ...... 27 The Hydrogen Fluoride and Deuterium Fluoride Chemical Lasers ...... 30 Liquid Lasers ...... 32 The Dye Laser ...... 32 Solid Material Lasers ...... 34 The Ruby Laser ...... 37 Neodymium-Yttrium Aluminum Garnet (Nd:YAG) Solid-State Lasers ...... 40 Neodymium:Glass Solid-State Lasers ...... 43 Fiber Lasers ...... 44 Semiconductor Lasers ...... 45 Other Lasers ...... 48 Laboratory ...... 49 Exercises ...... 51 References ...... 52

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-2 Specific Laser Types

INTRODUCTION

Module 2-2, Specific Laser Types, is intended as a follow-on module to Module 2-1, Operational Characteristics of Lasers. Both modules build on the basic concepts of laser operation presented in Module 1-6, Principles of Lasers (the last module in Course 1, Fundamentals of Light and Lasers). Following a brief overview of general laser classifications, laser materials, and a spectrum of current lasers, the discussion focuses on specific laser types and their characteristics. This discussion describes eight gaseous-state lasers, one liquid-state laser, and five solid-state lasers. The characteristics of each laser are summarized in a fact sheet. The fact sheet highlights the important characteristics of a particular laser and makes comparisons between different lasers easier. To make a good connection with Module 2-1, Operational Characteristics of Lasers, which concluded with a case study of the HeNe laser and typical operational calculations, this module will begin with an “identical” treatment of the HeNe laser and typical calculations of important laser parameters. Then, in a departure from our usual practice of involving the learner in appropriate hands-on laboratories, we invite the students to repeat the calculations given for the HeNe laser and apply them to a Nd:YAG laser. This calculational laboratory will provide practice with algebraic expressions, scientific calculations, and an understanding of key parameters involved in laser operation.

PREREQUISITES

To understand this module the student should have completed Module 1-6, Principles of Lasers and Module 2-1, Operational Characteristics of Lasers. It is understood that prerequisites for Module 1-6 include the completion of Modules 1-1, 1-2, 1-3, 1-4, and 1-5. In addition, the student should be familiar with basic high school algebra, trigonometry, logarithms, exponentials, scientific nomenclature, and units of conversion.

1 OBJECTIVES

At the completion of this module, the student will be able to:  Classify lasers into five commonly accepted categories.  Describe the functional elements required in the operation of a laser.  Distinguish between atomic and molecular gas lasers and give several examples of each.  Describe how a HeNe laser operates and sketch an energy-level diagram that shows the three laser line emissions.  Explain what is meant by the following laser operational terms: – Output power – Coherence length – Small signal gain coefficient – Beam divergence – Beam diameter – Gain length 2 – Saturation intensity – M factor – Lifetime  Use a fact sheet to obtain key parameters affecting laser operation.  Calculate the following quantities for a given laser: – Laser photon energy – Beam divergence – Gain volume – Single-pass gain – Beam irradiance on target – Power out for a cw laser  Describe the essential operational parameters for the following specific laser types, with the help of fact sheets: – HeNe – Dye – HeCd – Ruby – Argon-krypton ion – Nd:YAG – Copper vapor – Nd:Glass – Nitrogen – Titanium sapphire

– CO2/CO – Semiconductor – Excimer – Fiber – Chemical (HF/DF)  Calculate the power out for a typical cw Nd:YAG laser, given the gain volume, mirror reflectivities, gain coefficient, loss factor, small signal gain coefficient, and saturated gain.

2 Optics and Photonics Series, Course 2: Elements of Photonics SCENARIO

Alberto and Cindy are laser technicians in a laser maintenance shop. They have been told by their supervisor, Marta, to determine what’s wrong with a Nd:YAG laser that’s not operating properly. Together, Alberto and Cindy begin a routine check of the main systems of the laser— the gain medium with the two mirrors, the optical pump, and the cooling system. With the help of a HeNe laser they determine that the cavity mirrors are properly aligned. They are not part of the problem. Next they check out the optical pumping subsystem to ensure that the correct laser wavelengths and energy are focused on the gain medium. That part also checks out. Next they examine the operation of the cooling system and discover that a pump is not operating properly, not providing sufficient water for cooling, thereby allowing the laser to overheat and shut down. A simple replacement of the pump solves the problem. Then Alberto and Cindy—always working as a team to ensure proper safety procedures—move on to a CO2 laser, to attempt to increase the output power back to its rated “spec” level.

GENERAL OVERVIEW OF BASIC CONCEPTS

Before we begin with a description of specific lasers, we will discuss briefly the general ways in which different lasers can be classified. Then we will take a look at a comprehensive “spectrum of current lasers,” identifying them by name and emission wavelengths.

Laser Classification The most common laser specifications fall into five categories: (1) atomic lasers, (2) molecular lasers, (3) dye lasers, (4) solid-state laser, and (5) semiconductor diode lasers. For categories (1) and (2), the lasing material is a gas. Dye lasers in (3) are liquids, and the two categories in (4) and (5) are solids. Different pumping mechanisms and some differences in mirror types are used to produce lasing for the different materials in different categories. There are a few other types of lasers that will not be included within this discussion. These include the free electron laser, the nuclear-pumped X-ray laser, and lasers occurring naturally within stars in the universe. Schematic energy-level diagrams for various laser categories are shown in Figure 2-1. More specific energy level diagrams will be provided for specific lasers, along with the fact sheets that describe their operating parameters.

Module 2-2: Specific Laser Types 3

Figure 2-1 Schematic energy-level diagrams for gases, liquids, solids, and semiconductors

Functional Elements of a Laser Before continuing with the discussion of specific laser types, it is important that the student review the basic components of laser systems and the function within the laser. The functional elements of a laser—as presented in Module 2-1, Operational Characteristics of Lasers—are reproduced in Figure 2-2.

Figure 2-2 Functional elements of a laser: (1) the excitation mechanism, (2) the gain of amplifying or active medium, (3) and the optical cavity that provides the feedback mechanism

So, lasers have been developed out of materials for all three states of matter. That is, there are gas lasers, liquid lasers, and solid lasers. Within each of these categories, the material properties are different, which requires different components to realize lasing. The excitation mechanisms and optical cavities may be different for all categories. And, the quantum characteristics of each category are different. These differences create differences in the operational characteristics for each laser category. This means different wavelengths, different components, different excitation mechanisms, different operational power levels, and different optical quality, beam 2 divergence or M factors.

4 Optics and Photonics Series, Course 2: Elements of Photonics Spectrum of Laser Emission Wavelengths There are currently hundreds of types of lasers in operation. Figure 2-3 lists most of those in use today. In the pages that follow the figure, we will describe the more popular lasers in the gas, liquid, and solid-state categories.

Figure 2-3 Spectrum of laser emission wavelengths The following section should provide a foundation for understanding a broad class of lasers.

Module 2-2: Specific Laser Types 5 CHARACTERISTICS OF SPECIFIC LASER TYPES

Gas Lasers Gas lasers can be divided into two subcategories. The first category is atomic lasers. Atomic lasers are further subdivided into neutral-atom lasers and ionic-atom lasers. Ions are usually single atoms that have had an electron removed, leaving the atom with a net positive charge. In metal vapor lasers, a solid metal is heated to a temperature sufficiently high to create a gas vapor. Metal vapor lasers can be ion lasers or neutral atom lasers. The second category is molecular lasers, further subdivided into diatomic and triatomic molecules. All gas lasers are operated with dilute gases. The gas is maintained in a glass enclosure, usually at low pressure. Sometimes the gas flows through a glass tube. In the flowing case, the gas is recycled through the laser tube, then removed from the laser active region and replenished, then recycled back into the active lasing region. This enables continuous operation of the laser and accounts for the continuous wave (cw) laser. This flowing, recycling operation is a primary advantage of the gas laser. Other materials, such as solids, sometimes must be operated as pulsed lasers because there is no way to recycle the lasing material. Gas lasers are usually pumped with an electric discharge. However, they can also be pumped with optical radiation. Within each category of gas lasers, we describe several examples: Helium-neon lasers (HeNe) Helium-cadmium lasers (HeCd) Copper vapor lasers Argon ion and krypton ion lasers Carbon dioxide lasers (CO2) Carbon monoxide lasers (CO) Nitrogen lasers (N2) Excimer lasers (xenon fluoride [XeF]) and xenon chloride [XeCl]) Chemical lasers (hydrogen fluoride [HF] and deuterium fluoride [DF]) We begin with the HeNe laser.

For the sake of completeness and continuity, the following pages (6–12) on the helium- neon laser are nearly identical to treatment of the helium-neon laser in Module 2-1, Operational Characteristics of Lasers.

The Helium-Neon Laser The helium-neon laser is a neutral atom laser. Helium and neon are inert gases. The gas mixture is at a low-pressure (2–3 torr) of helium and neon in a ratio of approximately 5:1, helium to neon. The HeNe was the first gas laser built and was demonstrated in 1961, after the operation of the first solid laser, the ruby laser, in 1960.

6 Optics and Photonics Series, Course 2: Elements of Photonics Excitation mechanism—A gain tube for a HeNe laser is shown in Figure 2-4.

Figure 2-4 Example of gas gain tube for a helium-neon laser

In the HeNe laser, the mirrors are made an integral part of the glass tube that contains the mixture of helium and neon gas. The tube can be anywhere from 10 to 100 cm in length, with most lasers around 10–20 cm long. The excitation mechanism is an electrical discharge created by a power supply that establishes a voltage between the electrodes—the cathode and the anode. When a voltage is applied between the electrodes, a discharge current develops within the tube and electrons flow from the cathode toward the anode. These electrons form a plasma, a gas of charged particles. Thus, HeNe laser gain tubes are often called plasma tubes. As the discharge current develops, the tube begins to emit light in a fashion similar to that of a neon sign. The electrons accelerating toward the anode within the gas collide with both helium and neon atoms, “knocking” them into higher lying excited energy levels. Certain pairs of these levels exhibit a population inversion. The atoms in these excited energy levels then fall to lower energy levels via spontaneous decay and stimulated emission. The stimulated emission process leads to the amplified laser beam, as we learned in Module 1-6, Principles of Lasers.

Gain medium—Electronic energy levels of both neutral neon and neutral helium atoms are shown in Figure 2-5.

Figure 2-5 Energy level diagram for a helium-neon laser showing three laser transistors at 3.391 m, 632.8 nm, and 1.152 m

Module 2-2: Specific Laser Types 7 Within the HeNe laser, neon is the element that leads to the laser transitions. Helium is a so- called buffer gas that improves the operation of the laser through collisions with the neon atoms. In the plasma tube, most of the collisional energy from the accelerating electrons goes into the helium atoms. Then the helium atoms transfer their energy to the neon atoms through direct collisions. Because of this exchange of energy from helium to neon, the HeNe laser is a good example of a transfer laser. Notice in Figure 2-5 that the E3 and E5 energy levels of helium are at nearly the same value as the E3 and E5 levels of the neon atom. Since they are at nearly the same energy level, these energy states of the two atoms are resonant with one another. This means that when helium and neon atoms collide in the gas mixture, the helium atoms give their energy efficiently to the neon atoms, exciting them to the nearly equal higher energy levels. This process produces a population inversion and begins the laser process. Optical cavity—As shown earlier in Figure 2-4, the mirrors of the optical cavity are often an integral part of the gain tube for a HeNe laser. In some cases, the mirrors are external to the gas tube and the ends of the gas tube include Brewster windows, which determine the polarization state of the laser beam. The Brewster windows force the HeNe laser to operate in a specific linearly polarized mode. For many HeNe lasers, the high-reflectance mirror has some curvature and the output coupler mirror, of a lower reflectance, is a flat mirror. This optical mirror configuration forces the beam waist (narrowest width) of the gaussian beam in the optical cavity to occur at the output coupler mirror. That is, the output laser energy seems to diverge from an “aperture” at the output coupler mirror. Having the mirrors integrated into the gas tube structure has the advantage that the laser is self- contained. The alignment of the mirrors is generally accomplished at the laser manufacturing plant when the mirrors are attached to the tube. However, attached mirrors have the disadvantage that the gases can come into direct contact with the mirrors, leading to contamination of the mirror surfaces. If the mirrors are somehow misaligned, the laser may not be repairable. However, HeNe lasers are rather inexpensive, so that when the laser has some problem, a new one usually replaces it. Most HeNe lasers are now prepackaged at the manufacturer and come to the user in a self-contained package. An example of a self-contained HeNe laser is shown in Figure 2-6.

Figure 2-6 Self-contained helium-neon laser from JDS Uniphase The “summary” information given below in a fact sheet is useful in describing the characteristics and operation of the HeNe laser. We will use this information to calculate several 2 operational characteristics of the HeNe laser. Notice that the M factor is given.

8 Optics and Photonics Series, Course 2: Elements of Photonics Fact Sheet for Helium-Neon Lasers Excitation mechanism: Electric discharge – plasma Gain medium: A mixture of helium and neon gas. Neon is the lasing element. Helium is the energy transfer partner. The HeNe is a transfer laser. Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelength = 632.8 nm; alternate wavelengths = 543.5 nm, 594 nm, 612 nm, 1.15 m, 1.523 m, and 3.39m; primary color is red. Output power: 0.5–80 mW Beam diameter: 0.5–2.5 mm 2 2 Beam divergence: 0.5–3.0 mrad M factor: M = 1.0–1.2 –3 –1 Pulse format: Continuous wave (cw) Small signal gain coefficient: go = gss = 210 cm 2 9 Saturation intensity: Isat = 6.2 W/cm Gain bandwidth: 210 Hz

Coherence length: 0.1–2.0 m Gain length: Lg = 10–100 cm Power stability: less than 5%/hr Lifetime: Greater than 20,000 hrs Advantages: Very reliable, robust, power-stable; long-lifetime operation at multiple wavelengths; long coherence length Disadvantages: Low power; applications limited; difficult to repair when damaged Applications: Alignment, metrology, inspection

Calculation of several operational characteristics—Based on equations developed in Module 2-1, Operational Characteristics of Lasers, let us now calculate some of the operational characteristics for a typical HeNe laser.

Example 1: Photon energy Problem Determine the energy in electron volts of a HeNe laser photon of wavelength  = 632.8 nm. Solution

hc –34 8 –9 Using Ephot = , where h = 6.625 × 10 Js, c = 3 × 10 m/s, and  = 632.8 × 10 m:  34 8 (6.625 10 J s)(3 10 m/s) –19 Ephot = = 3.14 × 10 J 632.8 109 m Since an energy of 1 eV is equal to 1.6 × 10–19 J, we have 3.14 1019 J Ephot (eV) = = 1.96 eV 1.6 1019 J/eV (Recall that 1 eV is the energy that an electron at rest gains when it “falls” through a potential difference of 1 volt.)

Module 2-2: Specific Laser Types 9 Example 2: Gain volume Problem If the gain volume for a HeNe laser tube is 2 mm in diameter by 10 cm in length, determine the gain volume, Vg. Solution 2 32 d (3.14)(2 10 m) –6 2 Using Vg = Ag × Lg, where Ag = = = 3.14 × 10 m 4 4

Lg = 10 cm = 0.1 m –6 2 –7 3 3 Vg = Ag × Lg = (3.14 × 10 m )(0.1 m) = 3.14 × 10 m , or about 0.314 cm

Example 3: Beam irradiance Problem Assuming that the irradiance is nearly uniform across the diameter of the gain tube of a certain HeNe laser, calculate the internal beam irradiance Eirrad for a HeNe laser operating at a power of 1 mW inside a tube of diameter 2 mm. Solution 4P Using Eirrad = , d 2 3 (4)(1 10 W) 2 2 2 Eirrad = = 3.18 × 10 W/m , about 0.032 W/cm (3.14)(2 1032 m)

Example 4: Beam divergence Problem

Assuming the HeNe beam waist diameter 2wo at the output coupler to be a value of 0.2 mm, 2 determine the beam divergence 00 for a TEM00 beam and for a non-TEM00 beam of M factor 1.2. Solution 2 For 00, we use 00 = . Remember that wo refers to the radius of the beam waist—not the wo diameter. 2 (2 632.8 109 m)   = = = 4.03 × 10–3 rad 00 4 wo (3.14)(1 10 m)

 00  4 milliradian 2 For M where M = 1.2 and M  1.1, use the relationship M = M 00. –3 –3  M = M 00 = 1.1 × 4 × 10 = 4.4 × 10 rad = 4.4 mrad

M = 4.4 mrad, not too different from that for the TEM00 beam

10 Optics and Photonics Series, Course 2: Elements of Photonics Example 5: Single-pass gain Problem

Assume that a HeNe laser amplifier is operating at its maximum gain so that g = gss. Calculate the irradiance Eg after one pass through a 10-cm-long HeNe laser gain tube if the initial irradiance Eo is

2 Eg 1 mW/cm . Recall that the gain G21 = I = . Eo

Solution

gssLg Use the relationship Eg = Eo e –3 2 –3 where Eo = 1 × 10 W/cm , gss = 2 × 10 /cm (from Table 1-1 in Module 2-1), and Lg = 10 cm

–3 (2×10–3/cm)(10 cm) –3 2×10–2  Eg = (1 × 10 )e = (1 × 10 )e

(Note that gss and Lg must be in the SAME units of length, here both in cm.) –3 0.02 Eg = (1 × 10 )(e ) –3 Eg = 1.02 × 10 W = 1.02 mW So after one pass, the irradiance increases from 1 mW to 1.02 mW. But at the speed of light, c = 3 × 1010 cm/sec, this single pass occurs in the short time of 0.33 nanoseconds!! This means that the photons bounce back and forth between the mirrors about 3,000,000,000 times a second. So, a single-pass doesn’t take very long at all!

Example 6: Power out for a typical HeNe laser Problem Making use of Equations 1-7, 1-8, and 1-10 and Tables 1-1 and 1-2 in Module 2-1, and the following parameters for a typical HeNe laser, operating at threshold, estimate the power out, Pout. Cavity parameters

Reflectivity of high-reflectance mirror R1 = 0.999

Reflectivity of output mirror R2 = 0.970

Gain length Lg = 50 cm; beam diameter = 2 mm 2 (0.2 cm) 3 Gain volume Vg = 50 cm × Ag = 50 cm × = 1.57 cm 4

Gain coefficient g = gth (see below for calculation)

Loss factor L0 = 0.008 (assumed as reasonable) Table values –3 –1 Small signal gain coefficient gss = 2 × 10 cm (Table 1-1 in Module 2-1) 2 Isat = 6.2 W/cm (Table 1-2 in Module 2-1)

Module 2-2: Specific Laser Types 11 Calculations

A. Since the laser is operating at threshold, g = gth, so 11 g = gth = n (Equation 1-10 in Module 2-1) 2(1)LRRLg120

11 g = n 2(50 cm) (0.999)(0.97)(1 0.008)

11 g = n 100 cm 0.9613

1 1.97 102 g = n 1.02 = = 1.97 × 10–4 cm–1 100 cm 100 cm

B. Now calculate Pavail.

Pavail = gssIsatVg (Equation 1-8 in Module 2-1) 210 3 P = (6.2 W/cm2)(1.57 cm3) avail  cm

Pavail = 19.5 mW

Now, knowing g and Pavail, we can use the following equation to estimate the power out:

1 gg 1 R P = P ss 1 (Equation 1-7 in Module 2-1) out avail  gLss g  g RL1o(1 ) RRL 12o (1  )

1 2 1034 1.97 10 1  0.999 Pout = (19.5 mW)  34 210 50 1.9710  (0.999)(0.992) (0.999)(0.970)(0.992) –4 Pout = (19.5 mW)(10)(9.15)(5.1 × 10 )

Pout = 0.9 mW or about 1 mW (reasonable value for a 50-cm HeNe laser) The material on pages 6–12 concludes the repetition of nearly identical material presented in Module 2-1.

The Helium-Cadmium Laser The operation of a helium-cadmium (HeCd) laser is similar to the operation of the HeNe laser. The HeCd laser is both a metal vapor laser and an ion gas laser. So it also has some similarities with the copper vapor laser and the argon ion and krypton ion lasers, which we discuss later. The HeCd laser is not as common in use as the HeNe laser. Excitation mechanism—The excitation mechanism for a HeCd laser is an electric discharge like that in a HeNe laser. The gas tube is prepared by injecting cadmium vapor into the helium- filled tube. The solid metal cadmium is heated to 250 C to create the vapor. In this heating process, the cadmium is also ionized. Some HeCd laser tubes have a heater that creates the metal vapor and a recycling loop to keep the vapor hot. When a voltage is applied between the

12 Optics and Photonics Series, Course 2: Elements of Photonics cathode and anode in the gain tube, the positive Cd ions move toward the negative cathode. The gas mixture is optimized to minimize the amount of cadmium that collides with the anode. Gain medium—In the HeCd laser, the cadmium ions serve as the lasing medium and the helium gas acts as a buffer to improve the lasing efficiency. The HeCd laser can emit radiation at twelve different wavelengths. The most common wavelengths are 441.6 nm and 325 nm. The laser tube for the HeCd laser includes a heater to melt the metal into vapor. The tube shown in Figure 2-7 has the capability to recycle the cadmium vapor.

Figure 2-7 Large omnichrome helium–cadmium laser tube

Optical cavity—The optical cavity for the HeCd laser is a stable two-mirror cavity creating a Gaussian-mode profile. However, the exact configuration of the cavity is different for various manufacturers. As with the HeNe laser, HeCd laser manufacturers now make HeCd lasers in a self-contained laser box (see Figure 2-8).

Figure 2-8 Helium–cadmium laser and power supply from Melles Griot

Module 2-2: Specific Laser Types 13 Helium-Cadmium Laser Fact Sheet Excitation mechanism: Electric discharge – plasma Gain medium: A mixture of helium gas and ionized cadmium metal vapor. Cadmium is the lasing medium. Helium is the energy transfer partner. Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelengths: 325 nm and 441.6 nm; ten other wavelengths possible 2 Output power: 5–50 mW (325 nm); 10– Saturation intensity: Isat = 7.1 W/cm 150 mW (442 nm) 9 Gain bandwidth: 2  10 Hz Beam Diameter: 0.26 – 2.0 mm Coherence length:  15 cm Beam divergence: 1.3–3.0 mrad Gain length: Lg = 32.5–75 cm 2 2 M factor: M = 1.1–9; low value is single mode, high value is multimode. Power stability: About 2% over a 4-hour period Pulse format: Continuous wave Lifetime: Greater than 5000 Hrs –3 –1 Small signal gain: go = gss = 3  10 cm Advantages: Short wavelength operation provides penetrating wavelengths in the ultraviolet (UV) and blue regions of the EM spectrum; multiple wavelength operation possible; some versions emit 5 or 6 wavelengths. Disadvantages: Low power; applications limited; cadmium ions colliding with cathode reduce performance; gas has to be replenished; maintaining homogeneous distribution of metal vapor difficult; loses about 1 gram of cadmium for every 1000 Hrs of operation. Applications: Biological fluorescence, disc mastering, flow cytometry, flow visualization, stereo lithography, raman spectroscopy, surface inspection, alignment, confocal microscopy, capillary electrophoresis, cancer detection, holography/interferometry, semiconductor inspection, laser printing, metrology

Example 7: Beam divergence for the HeCd laser Problem From the HeCd fact sheet, the beam diameter can vary from 0.26 mm to 2.0 mm. Find the beam 2 divergence  if we take the beam waist wo to be 0.50 mm and the corresponding M value for a multimode beam to be 9.0.

Solution From Module 2-1 we know that the beam divergence is given by 2 M = M00 = (M)  wo

For the HeCd laser wavelength of  = 441.6 nm, M becomes (3.0)(2)(441.6 109 m) M = (3.14)(0.5 103 m) –3 M = 1.7  10 rad, within the given range of 1.3 to 3.0 mrad.

14 Optics and Photonics Series, Course 2: Elements of Photonics The Ion Lasers—Argon Ion and Krypton Ion Excitation mechanism—Ion lasers contain a high-voltage power supply that produces an electric discharge to ionize the argon or krypton gas in the gain tube. The process creates a plasma—wherein the electrons are separated from the argon or krypton atoms—which means that the plasma contains free electrons and ions. So an energetic electrical pulse followed by a continuous DC electrical current at a lower level starts an ion laser. The energetic pulse forms the plasma and the DC current sustains the lasing. The very high current in an ion laser creates a lot of excess heat, requiring the gain tube to be fabricated from heat-resistant materials. The dual-function electronics circuit is very complicated. The first ion laser used mercury, Hg+, and was operated in 1965. A schematic of the argon ion laser is shown in Figure 2-9.

Figure 2-9 Schematic showing argon ion laser operation

Gain medium—Electronic energy levels of argon are shown Figure 2-10.

Figure 2-10 Energy level diagram for an argon ion laser

The two primary wavelengths for the argon ion laser are 514.5 nm (green light) and 488 nm (blue light). Argon ion lasers were the first green lasers and are used currently in applications that require a green light source. Because of the high voltage requirement leading to excess heat, a lot of energy is wasted in an ion laser, making them very inefficient. Only about 1% of the electrical energy input actually goes into laser energy in the output beam.

Module 2-2: Specific Laser Types 15 Optical cavity—The optical cavity for ion lasers is a two-mirror stable cavity that produces a gaussian beam. For ion lasers, the high-reflectivity mirror is flat, and the output coupler mirror is curved.

Figure 2-11 Optical cavity for an argon ion laser

Because of the high gain, the reflectivity of the output coupler does not have to be very high. Recall for comparison that the optical cavity in the HeNe laser also had a flat mirror and a curved mirror. However, in the HeNe laser the output energy comes through the flat mirror. So, the cavity for an ion laser is “reversed” from the cavity for a HeNe laser. Other manufacturers may use other two-mirror optical cavity configurations. An example of an argon ion laser is shown in Figure 2-12. For lower power argon ion lasers, the power supplies are not as large. The power supply must be separate because of the high power and the long gain length. Argon ion lasers are not built as self-contained units.

Figure 2-12 Argon ion laser with power supply operating on multiple wavelengths

16 Optics and Photonics Series, Course 2: Elements of Photonics Argon Ion/Krypton Ion Laser Fact Sheet Excitation mechanism: Electric discharge – plasma Gain medium: Ionized argon/krypton gas and electrons in a plasma Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelengths: 488 nm and 514.5 nm; other wavelengths UV at 334 nm, 351.1 nm, 363.8 nm; primary band of wavelengths 457–514 nm, including 457.9 nm, 476.5 nm, 496.5 nm, 501.7 nm, 528.7 nm, with primary wavelengths, blue, green, violet lines Output power: 10–25 W (Ar+); 2.6–20 W (Kr+) Gain bandwidth: 2  109 Hz Beam diameter: 1.9 mm (Ar+), 2.0 mm (Kr+) Coherence length: 2.5–10 cm + + Beam divergence: 0.8 mrad (Ar ), 1.0 mrad (Kr ) Gain length: Lg = 192–195 cm 2 2 (Compare with HeNe and HeCd lasers.) M factor: M = 1.0 for TEM00 with 1–2.6 W; 2 M greater than 1 for higher powers Power stability: About 3% without Pulse format: Continuous wave modification, 1% with power stability –3 –1 control Small signal gain: go = gss = 5  10 cm 2 Lifetime: Greater than 2000 Hrs/year Saturation intensity: Isat = 16.3 W/cm Advantages: Green-light laser energy; multiple wavelengths possible; high power Disadvantages: Very inefficient; needs regular maintenance; high temperature is dangerous to work with and requires specialized cooling equipment; Ar+ are being replaced by frequency- doubled Nd:YAG lasers. Applications: Entertainment, general surgery, ophthalmic welding of detached retina, forensic

medicine, holography, optical pumping of a dye laser and a titanium-sapphire (Ti:Al2O3) laser

Example 8: Pumping a dye laser Problem An argon-ion laser is used to pump a dye laser. Assume that the Ar+ laser puts out 20 watts of power, the beam divergence  is 0.8 mrad, and the entrance port on the dye laser to be pumped is 105 cm from the exit of the Ar+ laser. (a) What is the diameter of the Ar+ laser spot size at the 105 cm position? (b) What is the power density at this position?

Solution D (a) tan   . 105 D  105 tan ;  = 8  10–3 rad = 0.458 So D = 105 tan 0.458 = 0.84 cm 4P 420 W (b) E = = = 36.1 W/cm2 D2 (3.14)(0.84 cm)2

Module 2-2: Specific Laser Types 17 The Metal Vapor Lasers—Copper Vapor and Gold Vapor Similar to the HeCd laser, the copper vapor laser includes a mechanism to heat solid copper until it first liquefies and then gasifies to reach the vapor state. The heating process, however, does not ionize the copper atoms. So, the copper vapor laser and the gold vapor laser are neutral atom lasers. The copper vapor laser is much more common than the gold vapor laser. Excitation mechanism—A copper vapor laser includes a high-voltage electric discharge and a solid copper rod. The high voltage heats the rod to 1400 C, which creates the copper vapor and a lot of excess heat. The extremely high voltage and the extreme heat are safety hazards. A schematic of the Copper Vapor Laser (CVL) is shown in Figure 2-13.

Figure 2-13 Schematic showing copper vapor laser operation

The copper vapor laser is loaded with a charge of solid copper. The electric discharge is created by an extremely high voltage. The electrons collide with the copper atoms, ultimately producing

18 Optics and Photonics Series, Course 2: Elements of Photonics copper in the vapor phase. The tube is made out of high-temperature-resistant materials, such as alumina or zirconia. Gain medium—The gain is created in the neutral copper atom. The gain is high for a copper vapor laser. The energy level diagram for copper is shown in Figure 2-14.

Figure 2-14 Energy level diagram for the copper vapor laser

The copper vapor laser has two primary wavelengths, 510 nm and 578 nm. Optical cavity—The two-mirror optical cavity for the CVL is shown in Figure 2-15. The mirrors are isolated from the gain tube by windows. The optical cavity usually consists of two flat mirrors. One mirror has a very high reflectivity. Because of the high gain, the output coupler mirror has a very low reflectivity of R1 = 0.1, 10 percent. In fact, CVL manufacturers claim that the CVL can be operated without mirrors, because the gain is so high.

Figure 2-15 Picture of an operating copper vapor laser

Module 2-2: Specific Laser Types 19 Copper Vapor Laser Fact Sheet Excitation mechanism: Electric discharge – electron plasma Gain medium: Copper vapor and electrons in a plasma Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelengths: 510.5 nm and 578 nm; these lines are green and yellow. –2 –1 Output power: 10—100 W average power; Small signal gain: go = gss = 510 cm peak power 50–5000 KW 2 Saturation intensity: Isat = 9.0 W/cm Beam diameter: 2–10 mm; 9 tube diameter 10–80 mm Gain bandwidth: 2  10 Hz Beam divergence: Not given by Coherence length: Not given by manufacturer manufacturer—short, of the order of a few centimeters 2 2 M factor: M = 1.1–1.2, higher for multimode operation Gain length: Lg = 20–100 cm Pulse format: Short pulses Power stability: Excellent power stability (t1/2 = 5–60 nsec) with high repetition rate Lifetime: Long lifetime (PRF = 2–100 KHz) Advantages: Green-light laser energy; two wavelengths possible; high power Disadvantages: Hazardous high temperatures and high electric fields; requires thermal management and copper replacement Applications: Precision micromachining (microdrilling, microcutting, micromilling); holography; atomic vapor laser isotope separation to enrich uranium

The Carbon Dioxide Laser

The carbon dioxide (CO2) laser was first demonstrated in 1964. Today it is the most widely used laser. It has been operated at both low and high power levels with different excitation mechanisms. Both static gain tubes and flowing gain tubes have been used.

Excitation mechanism—Most CO2 lasers are energized with some type of electric discharge. However, in the powerful Air Force Weapons Laboratory’s gas dynamic laser (GDL) the excitation mechanism involved combustion and flow through supersonic nozzles. This combination of combustion heating of the gas mixture and rapid expansion of the gases through supersonic nozzles created a population inversion. Many small CO2 lasers are now self- contained. The power supplies for large CO2 lasers require extremely high voltage. The gas mixture in the laser is helium (He), nitrogen (N2), and carbon dioxide (CO2), in a ratio of 8 parts He to 3 parts N2 to 1 part CO2—8:3:1.

Gain medium—For CO2, as for most molecular lasers, the lasing occurs between different energy states of the molecule. In three-atom and two-atom molecules, the atoms in the molecule are bound together. Along the direction of the bonds, the atoms can move with respect to each other and gain energy as the bonds are stretched or rotated, going as a result into a higher energy state. The possible motions are vibrations along the bond directions or rotations of the molecule around a symmetry axis. A carbon dioxide molecule—with three atoms—is generally arranged

20 Optics and Photonics Series, Course 2: Elements of Photonics in a line with the two oxygen atoms on the outside of the central carbon atom. Three different vibrational modes of CO2 are shown in Figure 2-16. The different vibrational modes of the energized CO2 molecule form the energy levels for lasing in the CO2 laser.

Figure 2-16 Three vibrational modes of the carbon dioxide molecule: (1) the symmetric stretch mode, (2) the bending mode, and (3) the asymmetric stretch mode

Within the gas mixture in the laser, helium atoms collide with both nitrogen and carbon dioxide molecules, and nitrogen and carbon dioxide molecules collide with each other. Helium is a buffer gas that enhances the collisions. Most of the energy from the electric discharge goes into the excitation of the nitrogen molecules. One of the excited energy levels of nitrogen is nearly resonant (that is, at the same energy level) with an excited level of the carbon dioxide molecule. Nitrogen is the energy transfer partner for CO2. Notice that the asymmetric stretch modes have higher energy levels than the symmetric stretch modes or the bending modes. Each of these vibrational modes represents a set of several closely spaced energy levels. The two primary wavelengths for CO2 are 10.6 m and 9.6 m (see Figure 2-17).

Module 2-2: Specific Laser Types 21

Figure 2-17 Energy level diaram for the carbon dioxide laser

Optical cavity—The most common optical cavity for CO2 lasers is a stable two-mirror resonator. However, the actual optical cavity design depends on the power level and the application. When more power is needed from a CO2 laser, the gain volume can be increased as shown in the equation Pavail = gssIsatVg, for the available power in the laser. Recall that gss is the small signal gain, Isat is the saturation intensity, and Vg is the gain volume. Increasing the gain 2 diameter results in an increase in the M factor of the laser. The CO2 laser is one of the lasers that led to the invention of the unstable optical cavity. This invention has been primarily used in the military, which has a need for very high-power applications. Two examples of high-power CO2 lasers are given in Figures 2-18 and 2-19.

Figure 2-18 The high-power electric discharge coaxial longitudinal-flow CO2 laser

22 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 2-19 The gas dynamic laser (GDL), a supersonic flow, combustion-driven carbon dioxide laser. Top schematic shows combustion chamber and nozzles. Bottom picture shows the pressure recovery system, the diffuser, and other elements. In its test run, the GDL was mounted in a KC-135 aircraft and dubbed the Airborne Laser Laboratory (ALL).

Module 2-2: Specific Laser Types 23 Carbon Dioxide Fact Sheet Excitation mechanism: Electric discharge – plasma or combustion-driven supersonic flow

Gain medium: A mixture of He, N2, and CO2. CO2 is the lasing species. N2 is the energy transfer partner. He is the buffer gas. Optical cavity: Two-mirror, stable optical cavity, or a more complicated unstable optical cavity Wavelengths: Primary wavelengths: 10.6 m and 9.6 m; alternate wavelengths 9–11 m; all wavelengths are in the far infrared region of the electromagnetic spectrum. Output power: 10 W–200 KW Gain bandwidth:  107 Hz Beam diameter: 1–3 mm; 5–10 cm Coherence length: 0.1–10 m

Beam divergence: 1–10 mrad Gain length: Lg = 10–200 cm 2 2 M factor: M = 1.0–2.0 near single mode to Power stability: As large as 5%/15 min; 2 M = 10–100 for large volume multimode varies with laser type Pulse format: Continuous wave or pulsed Lifetime: Greater than 100,000 Hrs; varies –3 –1 with laser type Small signal gain: go = gss = 810 cm –2 2 Saturation intensity: Isat = 1.610 W/cm Advantages: Very reliable, robust, power-stable, long-lifetime laser operating at multiple wavelengths; long coherence length, high efficiency and very high power; widely used for materials processing; readily available; efficiency = 30%; gases recyclable Disadvantages: For materials processing, the absorption of many materials is very low at this very long wavelength. So, much of the high power does not get into the material and is wasted. This laser is so common that it was overused for tasks that were better done by other lasers or by other technologies. Requires a laser technician to operate. Applications: Materials processing, drilling, medical applications, environmental sensing, IR spectroscopy, material cleaning, nondestructive testing

Carbon Monoxide Laser: The carbon monoxide (CO) laser is very similar to the CO2 laser. It is operated with an electric discharge or plasma discharge and is tunable within the wavelength

range of 5.2 m to 6.0 m. It is not as powerful as the CO2 laser. Output power for the CO laser ranges from 1 W to 10 W per wavelength with multiple wavelengths operating simultaneously. It provides lasing at a wavelength less than 10.6 m with much less power. CO lasers are research lasers, not production devices. CO lasers have also been used for materials processing applications. The CO2 laser is much more common than the CO laser.

24 Optics and Photonics Series, Course 2: Elements of Photonics Example 9: Focusing a CO2 laser for welding Problem According to Module 1-6, a laser beam of divergence angle , passing through a lens of focal length

f, is brought to a focal spot of diameter d, where d = f .

(a) Determine d for a CO2 laser beam of divergence  = 5 milliradian passing through a lens of focal length f = 20 cm.

(b) If the CO2 laser has an output power of 122 watts, what is the CO2 beam irradiance at the focal spot?

Solution

(a) Since d = f , we get d = (20 cm)(0.005) = 0.1 cm P 4P (b) The irradiance E = = A d 2 (4)(122) W  E = = 15,541 W/cm2 (3.14)(0.1 cm)2

The Nitrogen Laser The nitrogen laser was first developed in 1963 and became commercially available in 1972. The active medium in a nitrogen laser is N2 gas at pressures from 20 torr up to 1 atmosphere. Because of the properties of the nitrogen gain medium, the N2 laser must be operated as a short- pulse laser. The gas tube in a nitrogen laser can be sealed or can contain flowing nitrogen gas.

Excitation mechanism—In the N2 laser, the gas is excited with a short pulse (about 10 nsec) of high voltage (20–40 KV). The high voltage creates an electric discharge and a momentary population inversion in the gas. Then, the excited gas emits a short pulse of laser radiation.

Gain medium—The energy level diagram for the N2 laser is shown in Figure 2-20.

Figure 2-20 The energy level diagram for the nitrogen laser

Module 2-2: Specific Laser Types 25 For this laser, the lasing occurs between vibrational and rotational levels of the diatomic molecule. In Figure 2-20, both the upper and lower laser levels are shown with several sublevels due to the rotational and vibrational states of the molecule. Optical cavity—A two-mirror optical cavity is used with the nitrogen gas tube to produce a nitrogen laser. A self-contained nitrogen laser is shown in Figure 2-21. The package includes the electronics for the high power pulsing circuit, the gas laser tube, the optical cavity, and some switches and gauges. The gauges monitor the gas pressure. The switches engage the electronic pulse circuit.

Figure 2-21 A commercial nitrogen laser from Photon Technologies International

Nitrogen Laser Fact Sheet Excitation mechanism: Electric discharge or transverse electric discharge

Gain medium: Nitrogen (N2) gas; nitrogen is the lasing species. Optical cavity: Two-mirror, stable optical cavity Wavelengths: Primary wavelengths = 337.1 nm

Output power: Peak power Pmax = 2.4 MW; Pulse format: Pulsed, –12 energy per pulse E = 1.45 mJ; pulse repetition t1/2 = 600 psec = 60010 sec rate PRR = 1- 20 Hz Spectral bandwidth: 0.1 nm Beam dimensions: 3  6 mm (rectangular Gain length: Lg  20 – 40 cm cross section) Energy stability: As large as 2.5% Beam divergence: 3  7 mrad 2 2 M factor: M  2–9 Advantages: Reliable; power-stable; short wavelength; short temporal pulse laser source Disadvantages: High power but low energy per pulse Applications: Because of short wavelength and short pulse length, the nitrogen laser is often used to pump dye lasers.

26 Optics and Photonics Series, Course 2: Elements of Photonics Example 10: Pulse energy

For the nitrogen laser discussed in this section, the laser peak power is Pmax = 2.4 MW, the laser energy per pulse is Epulse = 1.45 mJ, and the laser pulse repetition rate is PRR = 20 Hz. Calculate the Pav average laser power, Pav, for a PRR of 20 Hz. Use the relationship Epulse = PRR

Pav –1 –3 Thus Epulse = where PRR = 20 Hz = 20 sec and Epulse = 1.45  10 J PRR

 Pav = (PRR)(Epulse)

–3 J Pav = (20)(1.45 10 ) sec

–2 J –3 Pav = 2.9  10 = 0.029 W = 29  10 W sec

Pav = 29 milliwatts

The Excimer Laser The word excimer is a contraction of two other words, excited and dimer. A dimer is a diatomic molecule. Thus, an excimer is an excited diatomic molecule. One of the atoms in an excimer is a noble gas atom, such as argon, krypton, or xenon. The noble gases are atoms that have full electron shells. The noble gases are inert. They exist normally in nature as single isolated atoms and do not often combine with other atoms to form molecules. The second type of atom in an excimer is a halogen, such as fluorine, chlorine, bromine, or iodine. When a noble gas atom and a halogen are forced together under high energy, they form a bound excited molecule known as an excimer. The excimer molecule does not exist in a ground state; it exists only in an excited state for a short time. When the excimer gives up its excess energy as a photon, and decays back to the “ground state,” the molecule no longer exists as two bound atoms. It reverts once again to two distinct atoms. Three Russian scientists, Basov, Danilychev, and Popov, first demonstrated the operation of excimer lasers in 1971. Some excimer laser types and their operating wavelengths are given in Table 2-1.

Table 2-1. Excimer laser types and operating wavelengths Excimer laser Wavelength (nm) ArCl 175 ArF 193 KrF 248 XeF 351, 353 KrCl 222 XeCl 308, 351 XeBr 282

Module 2-2: Specific Laser Types 27 Excitation mechanism—High-power, short-burst electric pulses excite the active medium of an excimer laser. The electric pulses accelerate electrons in the gas, and they transfer kinetic energy to the gas atoms or molecules by collisions. The pure halogen molecules used are F2, Cl2, or Br2. Other molecules containing halogen atoms are also used, such as HCl or NF3. Since the pure halogen molecules are highly reactive, the halogen compounds are somewhat easier to handle in the gas mixture. The gas mixture includes helium or neon as a buffer gas, very little halogen, and a small amount of a noble gas. The gas mixture is at a pressure of 1–5 atmospheres. Because of the high electric energy and short pulses, the excimer gain is very high for a very short period of time. Gain medium—The schematic energy level diagram for an excimer laser is shown in Figure 2-22. A noble gas atom is represented by the symbol R and a halogen atom is represented by the symbol H. The laser transition is from the excited excimer RH* to the separated-atom ground state R, H.

Figure 2-22 Schematic energy level diagram for an excimer laser. R denotes a noble gas atom. H denotes a halogen gas atom.

Optical cavity—A simple, two-mirror optical cavity is normally used with excimer lasers. The gain is so high that the laser can operate without an optical cavity. The output coupler mirror has very low reflectivity in an excimer laser optical cavity.

28 Optics and Photonics Series, Course 2: Elements of Photonics Excimer Laser Fact Sheet Excitation mechanism: High-power, short-pulse electric discharge Gain medium: A mixture of very little halogen (~ 0.1–0.2%), a little noble gas (~ 10%), and a buffer gas such as helium or neon (~ 90%)

Gain length: Lg = 10–20 cm Optical cavity: Two-mirror, stable optical cavity 2 2 M factor: M  2–9 Wavelengths: XeF—primary wavelength = 351 nm; XeCl—primary wavelength = 308 nm; and KrF—primary wavelength = 248 nm. Other wavelengths are given in Table 2-1. All excimer laser wavelengths are in the ultraviolet region of the electromagnetic spectrum. Output power: 10 W–125 W Beam diameter: A few mm Beam divergence: 4.5  1.5 mrad –6 –12 Pulse format: Short pulsed, t1/2 = 10 –10 sec Gain bandwidth:  1013 Hz –2 –1 Small signal gain: go = gss = 2.6  10 cm 5 2 Saturation intensity: Isat = 3.4  10 W/cm Power stability: 5% when new; degrades with use Coherence length: 0.01–0.1 m Lifetime: Hundreds of hours Advantages: Stable, short-pulse, ultraviolet laser source Disadvantages: High-power electric discharge is dangerous; halogen gas hazardous; very inefficient (efficiency measured in tenths of a percent); much energy wasted Applications: Photolithography; laser surgery (cutting biological tissue); laser keratotomy (corneal sculpting); laser marking of materials such as plastics, glasses, and metals

Module 2-2: Specific Laser Types 29 The Hydrogen Fluoride and Deuterium Fluoride Chemical Lasers The chemical laser was invented within the aerospace industry in 1965 as a potential high- power laser for military applications. The developers were J. V. V. Kasper and G. C. Pimental. The operation of the chemical laser is very similar to the operation of the gas dynamic laser (GSL), shown earlier in Figure 2-19. The lasing energy levels are made up of vibrational and rotational levels of the HF or DF molecule. Refer to Figure 2-23. Excitation mechanism—A schematic of the layout for an HF chemical laser is shown in Figure 2-23.

Figure 2-23 Schematic layout of a hydrogen fluoride chemical laser

Combustion is initiated in the dissociation chamber to break apart the molecular species to produce fluorine atoms. The gas is then forced through a nozzle and mixed with hydrogen from another nozzle to produce HF in an excited state. The gas flow is itself transverse to the direction of the optical axis of the laser optical cavity. Gain medium—The gain is created within the active medium by chemical reactions. The two primary reactions are:

H2 + F = HF* + H

H + F2 = HF* + F The hydrogen fluoride product of these reactions is in an excited state. The excited states are higher-energy vibrational and rotational states of HF. The band of wavelengths that are possible is very broad and a chemical laser generally operates on several wavelengths simultaneously. Optical cavity—To extract power from the chemical laser gain medium, a two-mirror stable cavity is normally used. Some times the output coupler mirror is a flat mirror so that the output beam is collimated. For higher power operation, the stable optical cavity is usually replaced by a much more complicated unstable resonator configuration.

30 Optics and Photonics Series, Course 2: Elements of Photonics HF and DF Chemical Lasers Fact Sheet Excitation mechanism: Chemical reactions in a combustion-driven supersonic flowing gas

Gain medium: A gas mixture of helium, fluorine, fluorine compounds (SF6 or NF3), oxygen, sulfur, sulfur dioxide, and hydrogen reacts chemically to produce hydrogen fluoride in an excited state. Deuterium is a hydrogen molecule with two neutrons in the nucleus. The gas constituents for DF are similar. Optical Cavity: Two-mirror, stable optical cavity, or more complicated optical cavity. Wavelengths: HF primary wavelength range, 2.6 m to 3.0 m; DF primary wavelength range, 3.5 m to 4.2 m. These wavelengths are in the near infrared region of the electromagnetic spectrum. Output power: 25 W–2 MW Beam diameter: A few mm to tens of cm Beam divergence: 1–10 mrad 2 2 2 M factor: M = 5.0–10.0 multiline, correctable to M  1.3 Pulse format: Continuous wave –1 –1 Small signal gain: go = gss = 1–2  10 cm Coherence length: 0.01–0.1 m

Gain length: Lg = 25–200 cm Power stability: 10% due to changing multiline content Lifetime: Only limited by amount of fuel Advantages: Very reliable, robust, long-lifetime laser operating at multiple wavelengths, short coherence length, high efficiency and very high power; efficiency = 15–20%; gases recyclable; no electrical connection required for chemical laser Disadvantages: Hazardous chemicals, gases must be “scrubbed” and cleaned before reuse Applications: Developed for military applications; some research applications; laser materials processing; drilling applications in oil and gas industry Chemical oxygen-iodine laser (COIL): The chemical oxygen-iodine laser is another type of chemical laser. It was invented at the Air Force Research Laboratory in 1977. The iodine atom is the lasing species, and an excited state of oxygen is used as the energy transfer partner to pump the iodine. COIL has also demonstrated very high power for military applications.

Module 2-2: Specific Laser Types 31 Liquid Lasers The category of liquid lasers has only one type of laser—the dye laser. All dye lasers have similar attributes although there are several different dyes that can be used. The dyes are essentially interchangeable within the dye laser system. The Dye Laser Peter P. Sorokin and J. R. Lankard first demonstrated the dye laser at IBM Laboratories in 1965. Exciting the dye with a ruby laser first demonstrated fluorescence in organic dye molecules. Excitation mechanism—The layout for a typical dye laser is shown in Figure 2-24. The gain region is a flowing jet of dye requiring that the optical cavity include a focus at the dye jet. Another laser that must also be focused at the dye jet optically pumps the system. There is normally a tuning element in the optical cavity to vary the output wavelength.

Figure 2-24 Schematic layout of a dye laser

Gain medium—A dye laser unit is shown in Figure 2-25, and a schematic energy level diagram for a dye laser is shown in Figure 2-26. Both the upper laser level and the lower laser level are very broad, so that many laser wavelengths are possible. A tuning element is included within the laser optical cavity to select the desired laser wavelength.

Figure 2-25 Dye laser system without the pump laser from Photon Technologies International (See Figure 2-21 for the N2 pump laser.)

32 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 2-26 Schematic energy-level diagram for a dye laser

A wavelength spectrum for several dyes is shown in Figure 2-27.

Figure 2-27 Gain spectra for several dyes used in a dye laser. The various dyes are coded from PLD365 to PLD960.

As indicated in Figure 2-27, dye lasers can produce wavelengths ranging from ultraviolet to infrared, with many visible wavelengths in between. Optical cavity—For a dye laser, the optical cavity includes three mirrors with a common focus. There is also usually a tuning element inside the optical cavity. The tuning element is either a prism or a grating. The optical elements in the optical cavity of the dye laser must be positioned to allow for another focusing mirror to focus the optical pump beam at the dye jet. The dye jet optical cavity is more complicated than a standard two-mirror laser cavity. It includes three mirrors, a common focus, and a tuning element. An external focusing element must focus the optical pump beam on the dye jet also.

Module 2-2: Specific Laser Types 33 Dye Laser Fact Sheet Excitation mechanism: Optical pumping by argon ion laser, nitrogen laser, or solid-state laser Gain medium: Liquid dye in a flow jet Optical cavity: Two-mirror, stable optical cavity including focus at dye jet Wavelengths: Rhodamin 6-G—primary wavelength 577 m; wavelength band from 540 to 630 m; other dyes for a broad range of wavelengths, 360 m–990 m (see Figure 2-26) 9 2 Output power: ~1 W to a few Watts Saturation intensity: Isat = 3.410 W/cm (extremely high Isat) Peak power: Pmax = 500 KW, energy per pulse, E = 250 J at a pulse repetition rate of Gain bandwidth:  1011 Hz PRR = 1 20 Hz Spectral bandwidth: 1–3 nm Beam diameter: 1 mm Laser linewidth:  = 100 Hz; Beam divergence: 4 mrad c Coherence length: = 3 108 cm 2 2  M factor: M = 1.0–3.0 nearly single mode 

Pulse format: Short pulsed, Gain length: Lg = 1 mm–1 cm t = 500 ps = 500  10–12 sec 1/2 Energy stability, peak-to-peak: 2.5% –1 Small signal gain: go = gss = 2.4 cm (very high gain) Advantages: Conveniently produces useful laser output power at a variety of wavelengths, ranging from ultraviolet through the visible to the infrared. Some lasers are made to change dyes very simply by exchanging the dye in the cuvette provided to hold the dye. The dye laser linewidth is very narrow, so that the coherence length of the dye laser is very long. Disadvantages: Very low energy pulses in picosecond pulse widths. Low repetition rate. Another laser must optically pump the dye gain medium. The optical cavity must include a focus at the dye jet and the optical pump laser must also be coincident at the same focal plane. Dye lasers are sensitive to misalignments of the optical cavity and the pump laser. The flowing liquid system can complicate the maintenance of the laser. The gain has a very short lifetime. Dye quality degrades with time requiring dye replacement. Toxic chemicals and volatile solvents are needed to make dyes. Hazardous waste disposal is required. Applications: Laser surgery (destroying tumors, destroying kidney stones), photo-dynamic therapy

Solid Material Lasers Within the laser community, there are a growing number of solid materials that have been used as laser media. Solid materials have been used for what are known as solid-state lasers, semiconductor lasers, and fiber lasers. Solid-state lasers refer to a class of lasers that have crystalline solids as the gain medium. The crystal is grown with a host material and a dopant material. Depending on the percentages of host and dopant materials, the energy level structure in the solid can be modified to produce different laser wavelengths.

34 Optics and Photonics Series, Course 2: Elements of Photonics Solid-state lasers identified by labels such as Nd:YLF and Nd:YAG are made up of a dopant and a host, as mentioned earlier. The first part of the crystal notation—before the colon—is the dopant material. The second part of the notation—after the colon—is the host material. Some common host materials are yttrium aluminum garnet (YAG), glass-phosphate or silicate, ytterbium lithium fluoride (YLF), aluminum oxide-sapphire (Al2O3), and ytterbium ortho +3 +3 vanadate (YVO4). Some common dopants are neodymium (Nd ) ions, chromium (Cr ) ions, titanium (Ti), erbium (Er), thulium (Tm), holmium (Ho), and ytterbium (Yb). Some of the most common solid-state lasers are ruby (Cr:Al2O3), neodymium-yttrium aluminum garnet (Nd:YAG), neodymium glass (Nd:Glass), and titanium sapphire (Ti:Al2O3). These crystalline solid materials make up the class of lasers known as solid-state lasers. The solid material lasers that operate at very short pulse lengths are included with solid-state lasers. The two other categories of solid material lasers are semiconductor lasers and fiber lasers. Most semiconductor lasers are grown crystal compounds of gallium (Ga), arsenic (As), indium (In), and phosphate (P). The most common semiconductor laser is GaAs. The optical fiber used for fiber lasers is made of pure solid materials such as erbium, thulium, or yttrium. Erbium fiber is the most common material for fiber lasers. There are now hundreds of different solid material laser types. Solid material lasers are operated as continuous wave or pulsed lasers. Pulse formats range from ultrashort laser pulses (picosecond to femtosecond pulse widths) to longer pulse widths. Solid-state lasers are sometimes operated with a Q-switch to increase the pulse energy. The Q in Q-switch stands for the optical cavity quality. High cavity Q means that the output coupler mirror reflectivity is high (near 1). Low cavity Q means that the reflectivity of the output coupler mirror is low. A Q- switch is a device that first allows the energy to build up within the gain medium and then switches the cavity quality to couple out an energetic pulse of high intensity. Another technique that is used with solid material lasers is mode locking. Mode locking creates a train of very short pulses as the laser output. Also, there are materials that can frequency-double or frequency- triple the output of solid material lasers. These are used to obtain the wavelength needed for a particular application. All solid material lasers can be made very small and compact. Solid material lasers are also usually packaged into convenient self-contained boxes, so that all the operator has to do is turn on the switch—after he or she has put on protective eyewear. So the advantage of solid material lasers is that they are small, easy to use, and self-contained. The disadvantage is that they may be too easy to operate and can be turned on before the user is ready. High-power solid material lasers require specialized electronics for the flash lamps or diode lasers and thermal management equipment to remove the waste heat. So, although the actual gain medium is quite small, a high-power solid material laser can still be fairly large. Solid material lasers are usually divided into pulsed lasers and continuous wave lasers for the sake of discussion. Ultrashort pulse lasers, semiconductor lasers, and fiber lasers are also usually discussed separately. For review and comparison, some pulsed solid material lasers are listed in Table 2-2 with some pulsed gas lasers and semiconductor lasers. The solid material lasers may be frequency modified. SH means second harmonic or frequency-doubled, therefore half of the wavelength. TH means third harmonic or frequency-tripled, therefore one third of the wavelength. FH means fourth harmonic or frequency-quadrupled, therefore one fourth of the wavelength. The type of laser is listed along with the energy per pulse, E, and the pulse width, t1/2. Several examples of solid material lasers are given in the following pages.

Module 2-2: Specific Laser Types 35 Table 2-2. Comparison of solid material pulsed lasers and gas pulsed lasers Wavelength (m) Laser type name Energy output Pulse duration Gas lasers 0.157 F2 excimer 0.01–0.06 J ~20 ns 0.193 ArF excimer 0.01–0.6 J ~20 ns 0.248 KrF excimer 0.1–1.2 J ~20 ns 0.308 XeCl excimer 0.1–0.6 J ~20 ns

0.3371 nitrogen (N2) 0.1–3 J 10 ns 0.351 XeF excimer 0.1–0.5 J ~20 ns [0.48–0.54] Xe ion 0.6 J 0.1–0.4 s 0.510/0.578 Cu vapor 1–20 J ~20 ns 0.628 Au vapor 0.2–0.6 mJ ~0.1 s [9.2–11.4] CO2 1–20 J ~1 s [9.2–11.4] CO2 1–500 J 1 ms

Solid-state lasers 0.263 (FH) Nd:YLF 0.2–2 mJ00 ~10 ns 0.266 (FH) Nd:YAG 2–150 J ~10 ns 0.266 (FH) Nd:YAG 10–100 mJ00 5 ns 0.347 (SH) ruby 0.1–0.3 J 25 ns 0.351 (TH) Nd:glassa 0.1–8 J 20 ns 0.355 (TH) Nd:glassb 0.3–2 J 10 ns 0.355 (TH) Nd:YAG 10–500 J ~5 ns 0.355 (TH) Nd:YAG 10–500 mJ00 5 ns 0.380 (SH) alexandrite 0.1 J 0.1 s 0.523 (SH) Nd:YLF 1–15 mJ00 ~10 ns 0.527 (SH) Nd:glassa 1–5 J — 0.53 (SH) Nd:glassb 0.2–22 J 20 ns 0.532 (SH) Nd:YAG 0.01–1 kJ 4–8 ns 0.532 (SH) Nd:YAG 0.01–1 J 6–8 ns 0.6943 ruby 1–100 J — 0.6943 ruby 1–25 J00 25 ns 0.7–1.1 Ti:Sapphire 0.01–3 J ~10 ns 0.72–0.82 alexandrite 0.1–3 J — 1.047 Nd:YLF 0.5 J ~20 ns 1.053 Nd:YLF 0.1–10 J ~20 ns 1.054 Nd:glassa 1–80 J 20 ns 1.061 Nd:glassb 1–20 J 10 ns 1.064 Nd:YAG 20–2000 J 5–10 ns 1.064 Nd:YAG 0.1–2.5 J00 5–10 ns [2.09–2.10] Ho:YAG 0.1–0.25 J00 — [2.09–2.10] Ho:YAG 1–5 J —

Semiconductor lasers [0.75–0.85] GaAlAs 10–20 mJ ~200 s [0.78–0.91] GaAlAs array 20–120 mJ ~200 s

a Phosphate glass b Silicate glass

36 Optics and Photonics Series, Course 2: Elements of Photonics A list of some solid-state lasers and their operational characteristics is broken out in Table 2-3.

Table 2-3. Summary of solid-state lasers that may have a growing role in materials processing in the future Wavelength Thermal shock Operational Laser type Pump source (m) RT (W/cm) modes Continuous, Nd:YLiF 1.047, 1.32 Lamps, diodes — 4 Q-switched

Nd:YVO4 1.06, 1.34 Diodes — Q-switched High energy, Nd:glass 1.05 Lamps, diodes 1.0 short-pulse Alexandrite 0.7–0.82 Lamps 20 High-power Short-pulse, Cr:LiSAF 0.78–1.01 Lamps, diodes 0.8 Q-switched Ti:Sapphire 0.66–1.2 Nd:YAG, Ar+ 31 Short-pulse Tm:YAG 2.0 Diodes 6.5 CW, pulsed Pulsed, CTH:YAG 2.1 Flash lamps 6.5 Q-switched Pulsed, Ho:YAG 2.1 Lamps 6.5 Q-switched Er:YAG 2.94 Diodes 6.5 CW, pulsed High-power, Yb:YAG 1.03 Diodes 6.5 Q-switched CW, medium Yb:SiO 1.0—1.2 Diodes 31 2 power Doubled 0.53 Lamps, diodes 6.5 Q-switched Nd:YAG

The Ruby Laser The ruby laser was the first operable laser and was initially made to lase successfully in 1960. Theodore Maiman is the man who demonstrated the first laser. Of course, ruby is a solid-state crystal. In the first laser the ruby crystal was made into a rod. The ruby laser had some problems, so other lasers soon followed it. Excitation mechanism—The ruby laser was excited by a flash lamp that was wrapped in a coil around the ruby rod, as shown in Figure 2-28.

Module 2-2: Specific Laser Types 37

Figure 2-28 The ruby laser

Gain medium—The energy level diagram for the ruby laser is shown in Figure 2-29. The flash lamp in a ruby laser emits broadband white light. The crystal absorbs only the green and blue light. Notice that the lower laser level is the ground state.

Figure 2-29 The ruby laser energy level diagram

Optical cavity—The optical cavity on the first laser was very simple as shown in Figure 2-29. The ends of the ruby rod were coated with silver. One end was coated so that almost all of the light was reflected back into the cavity. The other end allowed some of the light to be coupled out as the working laser beam.

38 Optics and Photonics Series, Course 2: Elements of Photonics Ruby Laser Fact Sheet Excitation mechanism: Optical pumping by electric flash lamp +3 Gain medium: Sapphire, Al2O3 host material doped with active chromium (Cr ) ions. Ruby is Cr:Al2O3. The Chromium ions serve as the lasing material. Optical cavity: Two-mirror, stable optical cavity formed by silvering ends of the ruby rod Wavelengths: Primary wavelengths, 694.3 nm and 692.7 nm. These are visible red wavelengths. Output power: 5.7 KW Beam diameter: A few mm Beam divergence: 5 mrad 2 2 M factor: M  3–5 multimode –6 Pulse format: Pulsed, t1/2  1  10 sec = 1 sec –1 Small signal gain: go = gss = 1 cm 7 2 Saturation intensity: Isat = 3.8  10 W/cm Gain bandwidth:  1011 Hz Coherence length: 0.1–10 m

Gain length: Lg = 7–20 cm Advantages: Produces red light with a moderate power Disadvantages: The ruby laser is a three-level laser. (This makes continuous operation of the laser impossible.) Applications: Other more robust, reliable lasers have replaced the ruby laser. It is no longer actively used.

Module 2-2: Specific Laser Types 39 Neodymium-Yttrium Aluminum Garnet (Nd:YAG) Solid-State Lasers The Nd:YAG laser is the primary laser of choice within the laser community. It has been successfully used to accomplish a variety of materials processing tasks. Excitation mechanism—A Nd:YAG laser is optically pumped. Initially the pumping was accomplished with a flash lamp. More recently, Nd:YAG laser pumping has been accomplished with diode lasers. The Nd:YAG laser is readily frequency doubled, tripled, or quadrupled. Some manufacturers make a laser box that can operate on one of three or four wavelengths. Nd:YAG lasers can be Q-switched or mode-locked. Gain medium—The energy level diagram for a Nd:YAG laser is shown in Figure 2-30.

Figure 2-30 The neodymium-yttrium aluminum garnet laser energy level diagram

The Nd:YAG laser is optically pumped and lases from one excited level, E3, to a lower excited level, E2. The lower laser level is not the ground state, so the Nd:YAG laser is classified as a four-level laser. Optical cavity—The Nd:YAG laser has a simple two-mirror stable optical cavity. Sometimes the optical cavity includes other optical elements to accomplish intra-cavity modification such as frequency doubling or Q-switching. The specific optical cavity type depends on the application and the other laser components that are required.

40 Optics and Photonics Series, Course 2: Elements of Photonics Nd:YAG Lasers Fact Sheet Excitation mechanism: Optical pumping by electric flash lamp or diode lasers Gain medium: Host material yttrium aluminum garnet (YAG) doped with active gain medium of neodymium (Nd+3) ions. Hence the name “Nd:YAG.” Optical cavity: Two-mirror, stable optical cavity. Or sometimes a more complicated cavity modified to produce internal cavity frequency doubling, Q-switching, or mode locking Wavelengths: Primary wavelength, 1.064 m. This wavelength is in the infrared region of the electromagnetic spectrum. Alternate wavelength, 1.39 m Output power: 5 W to 5 KW. Other prototypes up to 10 KW. Plans are to go to 100 KW. Beam diameter: A few mm Beam divergence: 5 mrad 2 2 M factor: M  1–10 (multimode) Pulse format: Continuous wave or pulsed –1 Small signal gain: go = gss = 2 cm (high gain) 7 2 Saturation intensity: Isat = 1.2  10 W/cm (large saturation intensity) Gain bandwidth:  1011 Hz Coherence length: 0.1–10 m

Gain length: Lg = 10– 50 cm Advantages: Produces infrared light at high power. Very stabile, robust, reliable, and flexible laser source. The Nd:YAG Laser is a four-level laser, so it can be used reliably for many applications. Easily frequency doubled, tripled, or quadrupled. Can be Q-switched. Disadvantages: For high power operation, the crystal must be cooled to remove the waste energy that goes into heat. Since the gain medium is a solid material, getting the heat out of the material takes some innovative technology. Applications: Widely used for materials processing tasks.

Module 2-2: Specific Laser Types 41 Example 11: Nd:YAG photon energy Problem What is the energy of a Nd:YAG photon whose wavelength is given as 1.064 m, in both Joules and eV?

Solution hc From the basic equation Ephot = , we have  (6.625 1034 J s)(3 10 8 m/s) Ephot = 1.064 106 m –19 Ephot = 1.87  10 Joules –19 Since 1 eV = 1.6  10 J, Ephot in eV is 1.87 1019 J Ephot = = 1.17 eV 1.6 1019 J/eV

Example 12: Apparent Nd:YAG laser beam waist Wo Problem Given the beam divergence angle  for a Nd:YAG laser, of emission wavelength 1.064 m, what is the apparent beam waist diameter 2Wo inside the optical cavity from which the beam diverges? Solution

Beam waist radius Wo and divergence angle  are related by the equation: 2 2 00 = , so Wo = Wo 00

From the fact sheet we find that 00 for a TEM00 beam is 5 milliradians. Thus, 2 (2)(1.064 106 m) W = = o 3 00 (3.14)(5 10 )

–4 Wo = 1.36  10 m = 0.14 mm So, the diameter of the beam at the beam waist inside the cavity is about 0.28 mm. Outside of the laser, some distance away, the beam diameter will spread to several millimeters, in accordance with the advertised beam divergence of 5 milliradians.

42 Optics and Photonics Series, Course 2: Elements of Photonics Neodymium:Glass Solid-State Lasers Another very common solid-state laser is neodymium-doped phosphate or silicate glass. Nd:Glass has a very broad gain bandwidth, thereby enabling it to produce ultrashort pulses. Initially, these lasers were used for accomplishing materials research. The Lawrence Livermore National Laboratory developed the technology to make large diameter thin disks of Nd:Glass material. The goal was to create a very large, ultrashort pulse laser facility. For ultrashort pulse lasers, the intensity in the pulse is extremely high, so that the focused laser intensity can produce extreme heat when incident on a material. The Nd:Glass laser at Lawrence Livermore is called the National Ignition Facility (NIF) (See Figure 2-31.). NIF has demonstrated nearly a petawatt (1015) of peak power in femtosecond (10–15) pulses. NIF includes 192 laser beam lines that are all focused on the target, a small pellet of fusionable material.

Figure 2-31 The National Ignition Facility Nd:Glass Laser for Fusion Research at Lawrence Livermore

Titanium–Sapphire (Ti:AL2O3) Solid-State Lasers Peter Moulton first demonstrated the titanium-doped sapphire laser in 1982 at the MIT Lincoln Laboratory. Excitation mechanism—The Ti:Sapphire laser is optically pumped. Initially the pump laser was and argon ion laser. More recently, frequency-doubled Nd:YAG lasers have replaced argon ion lasers for pumping Ti:Sapphire. Gain medium—Trivalent titanium (Ti+3) atoms create the gain medium. The Ti+3 ions are embedded in a crystalline host material of sapphire (Al2O3). Optical cavity—A two-mirror stable cavity can be used with Ti:Sapphire. External to the laser cavity, the ultrashort operation of the Ti:Sapphire laser is enhanced by a technique called chirped pulse amplification (CPA). In CPA, the ultrashort pulse is stretched in time with

Module 2-2: Specific Laser Types 43 gratings. Then, a Ti:Sapphire amplifier magnifies the pulse. And, finally, CPA is used again—in reverse—to compress the pulse to a very high energy and very short pulse width.

Titanium-Sapphire Solid-State Lasers Fact Sheet Excitation mechanism: Electric flash lamp, optical pumping, diode laser pumping +3 Gain medium: Sapphire (Al2O3) host material doped with active titanium ions (Ti ) to form Ti:AL2O3 or Ti:Sapphire Optical cavity: Two-mirror, stable optical cavity; a more complicated cavity for mode locking Wavelengths: Primary wavelength, 760–780 nm; wavelength range 660–1000 nm –1 Output power: Pav = 1.8 W Small signal gain: go = gss = 1 cm (high 12 gain) Peak power: Pmax  terawatts = 10 W Saturation intensity: I = 2  109 W/cm2 Beam diameter: A few mm sat (large saturation intensity) Beam divergence: 2 mrad Gain bandwidth: 1014 Hz 2 2 M factor: M  1–5 multimode Coherence length: 0.0001 m (very short) Pulse format: Continuous wave; pulsed or Minimum pulse length: ultrashort pulsed – mode locked –15 t1/2 = 1/ 6.67 fs = 6.67  10 sec Pulse width: –15 t1/2 = 10–100 fs = 10–100  10 sec Advantages: Produces ultrashort pulse infrared light at high powers up to terawatt levels. Interaction with materials at extremely high intensities produces novel effects. Disadvantages: Low energy per pulse. Pulse train format cannot be modified. E, PRT, and PRR are fixed. Applications: Materials research; secure communications; spectroscopy; materials processing; high-energy particle generation; replacement for dye lasers

Fiber Lasers Fiber lasers are very new to the laser community. Optical fiber has been produced in much larger quantities with a variety of types over the past several years to support the telecommunications industry. Some of this development in optical fibers has benefited the laser industry by producing an optical fiber that can lase when it is optically pumped. Most fiber lasers are pure solid materials such as erbium, thulium, or ytterbium. The fiber laser wavelength depends on the material. Fiber lasers operate continuously at over 100 W of power. A fiber laser is shown in Figure 2-32 and a fiber laser system in Figure 2-33. For this laser, the emission is at a wavelength that is in between yellow and green. However, the photo is shown only in grayscale. The fiber is coiled around a spool to keep the size compact. The output leaves the end of the fiber in the upper right-hand corner of the photograph (Figure 2-32). Fibers are usually tens to hundreds of microns (m) in diameter. However, larger core fibers are now being developed with diameters of millimeters.

44 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 2-32 A fiber laser

Excitation mechanism—Diode lasers can be used continuously to pump fiber lasers. The pumping is usually done at one end of the fiber with special optics to mate the optical energy to the fiber. Some other fibers can be side pumped, provided the refractive index permits refraction of light into the fiber.

Figure 2-33 A fiber laser system with optical fiber to transmit the output coupled energy

Gain medium and optical cavity—In a fiber laser, the optical fiber is the gain medium and it is also a wave guide that replaces the optical cavity in a traditional laser. The wave guide channels the emitted light along the direction parallel to the optic axis of the optical fiber. Then, in the fiber, stimulated emission adds more photons to the fiber laser beam. Thus, the fiber itself acts as the optical cavity and the gain medium.

Semiconductor Lasers Semiconductor lasers were discovered in the mid 1980s. A tremendous growth has occurred within the semiconductor laser industry over the past twenty years. Many different materials have been used for semiconductor lasers. The output power levels from single semiconductor lasers and arrays of semiconductor lasers have grown by many orders of magnitude. Large arrays of semiconductor lasers can produce hundreds of kilowatts of power. Single semiconductor lasers can now output nearly ten watts. Semiconductor lasers have changed our daily lives because they provide a small compact laser source for many applications. Bar code scanners in supermarkets use diode lasers. In fact, the indelible mark—the bar code—that the scanner reads was probably also accomplished with a diode laser marker. Sensors operate to open doors when a person passing by interrupts a diode laser beam. A closing garage door is

Module 2-2: Specific Laser Types 45 caused to reopen when an animal or small child interrupts the diode laser beam at the bottom of the door opening near the floor. A common semiconductor laser is shown in Figure 2-34.

Figure 2-34 A semiconductor laser compared to a pencil lead

Excitation mechanism—A semiconductor laser can be optically pumped or electrically pumped. Most semiconductor lasers are electrically pumped. These are diode lasers. A diode is formed within the semiconductor by growing positive type material next to negative type material, thereby creating a diode junction as shown in Figure 2-35. The most common type of semiconductor diode laser is gallium arsenide (GaAs). It is constructed of Figure 2-35 A gallium arsenide semiconductor diode laser positive type and negative type GaAs materials.

46 Optics and Photonics Series, Course 2: Elements of Photonics Gain medium—The p-n junction in a diode laser creates an active region where conduction electrons and holes occur. The construction of a semiconductor diode laser is shown in Figure 2-36.

Figure 2-36 The construction of a gallium arsenide semiconductor diode laser

Several different semiconductor diode laser compounds are given in Table 2-4.

Table 2-4. Semiconductor laser compounds and operating wavelengths Material Wavelength (m)

In1-xGaxAsyP1-x 1.2–1.6

GaAsxSb1-x 1.2–1.5

InAsxP1-x 1.0–3.1

(AlxGa1-x)yIn1-yAs 0.9–1.5

AlxGa1-xAs 0.7–0.9

GaAs1-xPx 0.6–0.9

InxGa1-xAs 0.55–3.0

(AlxGa1-x)yIn1-yP 0.55–0.8

CdSxSe1-x 0.5–0.7

CdxZn1-xSe 0.3–0.5 AlGaInN 0.2–0.64

Optical cavity—As shown in Figures 2-35 and 2-36, the ends of the semiconductor are cleaved to produce a flat surface at either end of the active gain region. The two-mirror optical cavity of a semiconductor diode laser is formed by these flat surfaces.

Module 2-2: Specific Laser Types 47 Semiconductor Laser Fact Sheet Excitation mechanism: Electric voltage across p-n junction or optical pumping Gain medium: Semiconductor laser material (see Table 2-4) Optical cavity: Two-mirror, stable optical cavity; sometimes a more complicated optical cavity Wavelengths: 600 nm to 1.6 m; see Table 2-4 9 2 Output power: Few mW to several W for Saturation intensity: Isat = 2.5  10 W/cm single lasers; 100 KW for diode laser arrays (extremely high saturation intensity) Beam size: 1–2 m  50–100 m Gain bandwidth:  1014 Hz (rectangular cross section) Gain length: Lg = 50–100 m Beam divergence: 20  200 mrad Power stability: 1–2% only 2 2 M factor: M  1.0 to 1.2 in small direction 2 Lifetime: Greater than 10,000 Hrs; varies M = 10 to 100 in large direction with laser type Pulse format: Continuous wave or pulsed 3 –1 Small signal gain: go = gss = 1  10 cm (extremely high gain) Advantages: Very reliable, efficient, robust, compact, power-stable, long-lifetime laser. Efficiency = 50–60%. Very inexpensive. Disadvantages: Large beam divergence. Specialized optics can be used to collect all of the emitted light. Low power. Applications: Laser marking, bar code readers, ink jet printers, laser pointers, compact disc players, optical communications, optical pumps for solid-state lasers

Other Lasers There are a few other types of lasers that do not fall into the categories of solid, liquid, and gas materials. The most notable in this area is the free electron laser. In the free electron laser, a beam of free electrons produces the active medium. With the help of a particle accelerator known as a Vander Graf generator, the laser beam is created. The electron beam is passed through a magnetic field that wiggles the beam back and forth to produce the quantum states for the lasing action. (See Figure 2-37.) Because of its construction, a free electron laser is very flexible and can operate at almost any wavelength. Accelerating or decelerating the electrons can change the energy of the electrons. The magnetic field can also be changed. The laser beam produced by the free electron laser is so intense that the mirrors can be damaged. Several creative optical cavity designs have been suggested to solve this problem.

48 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 2-37 Schematic diagram for a free electron laser

LABORATORY

As mentioned earlier, this laboratory is designed as a calculational laboratory to provide you with practice in solving algebraic equations related to important laser operational characteristics. In essence, you will repeat here for a typical Nd:YAG laser what was done for you (pages 6–12) for a HeNe laser. You will calculate the following for the Nd:YAG laser:

(a) Gain volume, Vg

(b) Grain threshold, gth

(c) Power available, Pavail

(d) Power out, Pout The following data are taken from the fact sheet for the Nd:YAG laser and Table 1-1 and 1-2 in Module 2-1, Operational Characteristics of Lasers. The data are assumed to be reasonable for an “average” gain Nd:YAG laser. Wavelength—1.064 m

Gain length (Lg)—15 cm

2 2 D (3.14)(0.1 cm) –3 2 Gain diameter (D)—1 mm; Ag = = = 7.85  10 cm 4 4 Saturation intensity—1.2  106 W/cm2 (not high gain) –1 Small signal gain (gss)—1 cm

Reflectivity of HR mirror (R1)—0.998

Reflectivity of output mirror (R2)—0.940

Roundtrip loss (L0)—0.008 The data given above will be used in the equations that follow to complete the calculational laboratory exercises. The symbols used throughout this module and the equations that follow are given below the equations to assist you in carrying out the proper calculations. Approximate answers are also indicated.

Module 2-2: Specific Laser Types 49 (a) The gain volume Vg is given by: D2 Vg = (Lg) 4 –3 3 (You should find that Vg  7.83  10 cm .)

(b) The gain threshold gth = g is given by: 11 g = gth = n 2(1)LRRLg120

–3 –1 (You should find that gth  1.2  10 cm .) (c) The power available is given by:

Pavail = (gss)(Isat)(Vg) 5 (You should find that Pavail  1.41  10 W.) (d) The power out of the laser is given by:   1 ggss  1 R1 Pout = (Pavail) gL g  ss g RL10(1 ) RRL 120 (1  )

(You should find that Pout  8.02 KW.) The symbols used in these equations are defined here. The units indicated are typical. 3 Vg = gain volume of the gain medium in cm

Lg = length of the gain medium in cm D = diameter of the gain medium in cm –1 gth = threshold gain coefficient below which a cw laser cannot operate, in cm g = operating gain coefficient for a cw laser that we take to be equal to the threshold gain, in cm–1

R1 = reflectivity of the high-reflectivity mirror; a decimal less than one

R2 = reflectivity of the output coupler; a decimal less than one

L0 = roundtrip distributed loss for a laser; a decimal

Pavail = power available in the gain medium in watts –1 gss = small signal gain coefficient in cm 2 Isat = saturated gain value for the laser in watts/cm

Pout = power out of the cw laser in watts

50 Optics and Photonics Series, Course 2: Elements of Photonics EXERCISES

1. A laser beam has a diameter of 2.8 mm at a distance of 1.75 m from the laser. If the diameter increases to 4.5 mm at a distance of 7.8 m, find the beam divergence . 2. A laser whose wavelength is 1.06 m has an effective output aperture diameter of 1.65 mm. What is the beam divergence far from the laser? 3. A laser has a beam divergence  of 4.2 mrads. The beam is focused by a lens of focal

length f equal to 2.05 cm. Find the diameter D of the focused spot. (Recall that D = f .) 4. Explain the terms pulse width, PRT, and PRR. (Refer back to Module 2-1, Operational Characteristics of Lasers.) The output pulse of a Q-switched laser has a duration of 8 ns and an energy of 1.4 J. Find the peak power. 5. Explain the terms full width at half maximum and duty cycle. (Refer back to Module 2-1, Operational Characteristics of Lasers.) A particular pulsed laser has an average power of 32 mw. The pulse duration is 2.5 s and the pulse repetition time is 1.6 ms. Find the duty cycle, maximum power and energy per pulse. 6. What is the difference between gas lasers and solid-state lasers? 7. What role does helium play in a HeNe laser? What role does neon play? 8. What is the approximate beam divergence of atomic gas lasers such as HeNe and HeCd? How does this compare to the beam divergence of diode lasers? 9. Give the approximate wavelength emission of HeNe, argon ion, copper vapor, Nd:YAG, diode, CO2, nitrogen, and excimer lasers. Which would you use for wavelength requirements in the far IR, near IR, visible, and UV? 10. What is the difference between “power available” and “power out” for a laser?

11. What three quantities does the power available (Pavail) of a given laser depend on? Which one can you control most easily? 2 12. Describe the meaning of the M factor for a given laser. How does the value of M affect the beam divergence and irradiance on target of a given laser? 13. Which of the lasers described in the module gives the longest wavelength? The shortest wavelength? Blue/green wavelengths? 14. Which of the following laser parameters are not needed to calculate the output power for a cw laser? mirror reflectivities small signal gain saturation intensity gain medium length saturated gain roundtrip loss factor cavity volume duty cycle pulse repetition rate 15. Which of the lasers described here are widely used in industry for material processing? For medical applications?

Module 2-2: Specific Laser Types 51 REFERENCES

Hawkes, J., and I. Latimer. Lasers: Theory and Practice. New York: Prentice Hall, 1995. Hecht, Jeff. Understanding Lasers, 2nd Edition. Hoboken, New Jersey: Wiley-Interscience, 2001. Introduction to Lasers. (Laser Electro-Optics Technology Series.) Waco, Texas: CORD, 1986–. Oshea, Donald C., W. Russell Callen, and William T. Rhodes. Introduction to Lasers and Their Applications. Reading, Mass.: Addison-Wesley Publishing Co., 1978. Silfvast, W.T. Laser Fundamentals. New York: Cambridge University Press, 1996. Tyagarajan, K., and A.K. Ghatak. Lasers: Theory and Applications. New York: Plenum Publishing Co., 1981.

52 Optics and Photonics Series, Course 2: Elements of Photonics

Optical Detectors and Human Vision

Module 2-3

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the third module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Display, and Storage 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. For students who may need assistance with or review of relevant mathematics concepts, a student review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was authored by Jack Ready (formerly with Honeywell Technology) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 1 Objectives ...... 2 Scenario ...... 2 Basic Concepts ...... 3 Basic Information on Light Detectors ...... 3 Role of an optical detector ...... 3 Types of optical detectors ...... 3 Detector characteristics ...... 5 Noise considerations ...... 9 Types of Detectors ...... 11 Photon detectors ...... 11 Thermal detectors ...... 21 Calibration ...... 23 Response of detector ...... 24 Techniques to limit beam power ...... 24 Electrical calibration ...... 26 Circuitry for Optical Detectors ...... 26 Basic circuit for a photovoltaic detector ...... 26 Basic circuit for a photoconductive detector ...... 28 Human Vision ...... 29 The eye as an optical detector ...... 29 Structure of the eye ...... 29 Operation of the eye ...... 30 Color ...... 33 Defects of vision ...... 35 Laboratory ...... 35 Problems ...... 39 References ...... 39

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-3 Optical Detectors and Human Vision

INTRODUCTION

Many photonics applications require the use of optical radiation detectors. Examples are optical radar, monitoring of laser power levels for materials processing, and laser metrology. Different types of optical detectors are available, covering the ultraviolet, visible, and infrared portions of the electromagnetic spectrum. Optical detectors convert incoming optical energy into electrical signals. The two main types of optical detectors are photon detectors and thermal detectors. Photon detectors produce one electron for each incoming photon of optical energy. The electron is then detected by the electronic circuitry. Thermal detectors convert the optical energy to heat energy that then generates an electrical signal. The detector circuit often employs a bias voltage and there is a load resistor in series with the detector. The incident light changes the characteristics of the detector and changes the current flowing in the circuit. The output signal is then the change in voltage drop across the load resistor. In this module, we will describe some common optical detectors—including the eye—and their important characteristics. We shall not attempt to cover the entire field of light detection, which is very broad. Instead, we shall emphasize those detectors that are most commonly encountered in photonics applications.

PREREQUISITES

You should have the ability to solve algebraic equations, should understand basic trigonometric functions, and should have knowledge of laser safety procedures. The following modules should have been completed previously or should be studied concurrently with this module: Module 1-1, Nature and Properties of Light; Module 1-2, Optical Handling and Positioning; Module 1-3, Light Sources and Laser Safety; Module 1-4, Basic Geometrical Optics; Module 1-5, Basic Physical Optics; and Module 1-6, Principles of Lasers.

1 OBJECTIVES

When you finish this module, you will be able to:  Define important detector response characteristics, including responsivity, noise equivalent power, quantum efficiency, detectivity, rise time, and cutoff wavelength for a photon detector.  Define sources of detector noise, including shot noise, Johnson noise, 1/f noise, and photon noise. Explain methods employed to reduce the effect of these noise sources.  Describe and explain the operation of important types of photodetectors, including photon detectors, thermal detectors, photoemissive detectors, photoconductive detectors, photovoltaic detectors, and photomultiplier detectors. Describe the spectral response of each type.  Draw and explain a typical circuit for a photovoltaic detector.  Draw and explain a typical circuit for a photoconductive detector.  Describe important concepts related to human vision, including structure of the eye, the formation of images by the eye, and common defects of vision.  Given the necessary information, calculate the noise equivalent power of a detector.  Given the necessary information, calculate the detectivity of a detector.  Given the necessary information, calculate the quantum efficiency of a detector.  Given the necessary information, calculate the power reaching a detector after a laser beam reflects from a Lambertian reflecting surface.  Fabricate a circuit for operation of a photodiode. Use the circuit for detection of light in both photoconductive and photovoltaic modes of operation.  Determine the relative response of the detector circuit as a function of wavelength for several wavelengths in the visible spectrum.

SCENARIO

Maria is a senior photonics technician who uses detectors for infrared, visible, and ultraviolet light in many applications. Maria works in the advanced research laboratory of a large industrial company and has many years of photonics experience. She employs detectors for monitoring the output of lasers as she adjusts their mirrors. Working under the direction of a well-known scientist who had once been a photonics technician herself, Maria has assembled equipment containing detectors for detecting the return signal in environmental monitoring applications and for controlling the progress of materials-processing applications. Her specific duties have included calibrating, cleaning, maintaining, testing, aligning, mounting, installing, operating, and demonstrating detectors for light.

2 Optics and Photonics Series, Course 2: Elements of Photonics BASIC CONCEPTS

Basic Information on Light Detectors When light strikes special types of materials, a voltage may be generated, a change in electrical resistance may occur, or electrons may be ejected from the material surface. As long as the light is present, the condition continues. It ceases when the light is turned off. Any of the above conditions may be used to change the flow of current or the voltage in an external circuit and thus may be used to monitor the presence of the light and to measure its intensity.

Role of an optical detector Many photonics applications require the use of optical detectors to measure optical power or energy. In laser-based fiber optic communication, a detector is employed in the receiver. In laser materials processing, a detector monitors the laser output to ensure reproducible conditions. In applications involving interferometry, detectors are used to measure the position and motion of interference fringes. In most applications of light, one uses an optical detector to measure the output of the laser or other light sources. Thus, good optical detectors for measuring optical power and energy are essential in most applications of photonics technology. Optical detectors respond to the power in the optical beam, which is proportional to the square of the electric field associated with the light wave. Optical detectors therefore are called square- law detectors. This is in contrast to the case of microwave detectors, which can measure the electric field intensity directly. All the optical detectors that we will describe have square-law responses. Detection and measurement of optical and infrared radiation is a well-established area of technology. This technology has been applied to photonics applications. Detectors particularly suitable for use with lasers have been developed. Some detectors are packaged in the format of power or energy meters. Such a device is a complete system for measuring the output of a specific class of lasers, and includes a detector, housing, amplification if necessary, and a readout device.

Types of optical detectors Optical detectors are usually divided into two broad classes: photon detectors and thermal detectors. In photon detectors, quanta of light energy interact with electrons in the detector material and generate free electrons. To produce free electrons, the quanta must have sufficient energy to free an electron from its atomic binding forces. The wavelength response of photon detectors shows a long-wavelength cutoff. If the wavelength is longer than the cutoff wavelength, the photon energy is too small to produce a free electron and the response of the photon detector drops to zero. Thermal detectors respond to the heat energy delivered by light. These detectors use some temperature-dependent effect, like a change of electrical resistance. Because thermal detectors rely on the total amount of heat energy reaching the detector, their response is independent of wavelength.

Module 2-3: Optical Detectors and Human Vision 3 The output of photon detectors and thermal detectors as a function of wavelength is shown schematically in Figure 3-1. This figure shows the typical spectral dependence of the output of photon detectors, which increases with increasing wavelength at wavelengths shorter than the cutoff wavelength. At that point, the response drops rapidly to zero. The figure also shows how the output of thermal detectors is independent of wavelength, and extends to longer wavelengths than the response of photon detectors.

Figure 3-1 Schematic drawing of the relative output per unit input for photon detectors and thermal detectors as a function of wavelength. The position of the long-wavelength cutoff for photon detectors is indicated.

Figure 3-1 is intended to show only the relative shape of the output curves for these two types of detectors and is not intended to show quantitative values. Quantitative values will be presented in later figures for some specific detectors.

Example 1 One popular type of silicon photon detector has a cutoff wavelength of 1.1 m. Why would you probably prefer to use that detector to detect light with wavelengths less than 1.1 m, rather than a thermal detector, and why might you replace it with a thermal detector to detect light with wavelengths longer than 1.1 m? Solution According to Figure 3-1, if the cutoff wavelength is 1.1 m, at wavelengths shorter than that value, the response of the photon detector is greater than that of the thermal detector, so the photon detector would be preferred. At wavelengths longer than 1.1 m, the response of the photon detector drops to zero, so one would have to use a different detector, possibly a thermal detector.

4 Optics and Photonics Series, Course 2: Elements of Photonics Photon detectors may be further subdivided according to the physical effect that produces the detector response. Some important classes of photon detectors are listed below. Photoconductive—The incoming light produces free electrons that can carry electrical current so that the electrical conductivity of the detector material changes as a function of the intensity of the incident light. Photoconductive detectors are fabricated from semiconductor materials such as silicon. Photovoltaic—Such a detector contains a junction in a semiconductor material between a region where the conductivity is due to electrons and a region where the conductivity is due to holes (a so-called pn junction). A voltage is generated when optical energy strikes the device. Photoemissive—These detectors are based on the photoelectric effect, in which incident photons release electrons from the surface of the detector material. The free electrons are then collected in an external circuit. Photoconductive and photovoltaic detectors are commonly used in circuits in which there is a load resistance in series with the detector. The output is read as a change in the voltage drop across the resistor. We shall discuss each of these effects in more detail later.

Detector characteristics The performance of optical detectors is commonly characterized by a number of different parameters. It is important to define these parameters, sometimes called figures of merit, because manufacturers usually describe the performance of their detectors in these terms. The figures of merit were developed to describe the performance of detectors responding to a small signal in the presence of noise. Thus, some of the figures of merit may not be highly relevant to the detection of laser light. For many laser applications, like laser metalworking, there is no problem with the detection of a small signal in a background of noise. The laser signal is far larger than any noise source that may be present. In other photonics applications, like laser communication, infrared thermal imaging systems, and detection of backscattered light in laser remote sensing, the signals are small and noise considerations are important. Responsivity—The first term that we will define is responsivity. This is the detector output per unit of input power. The units of responsivity are either amperes/watt (alternatively milliamperes/milliwatt or microamperes/microwatt, which are numerically the same) or volts/watt, depending on whether the output is an electric current or a voltage. The responsivity is an important parameter that is usually specified by the manufacturer. Knowledge of the responsivity allows the user to determine how much detector signal will be available for a specific application. Noise equivalent power—A second figure of merit, which depends on noise characteristics, is the noise equivalent power (NEP). This is defined as the optical power that produces a signal voltage (or current) equal to the noise voltage (or current) of the detector. The noise is dependent on the bandwidth of the measurement so that bandwidth must be specified. Frequently it is taken as 1 Hz. The equation defining NEP is

Module 2-3: Optical Detectors and Human Vision 5 HAVN NEP = 1 2 (3-1) VS() where H is the irradiance incident on the detector of area A, VN is the root mean square noise voltage within the measurement bandwidth , and VS is the root mean square signal voltage. The NEP has units of watts/(Hz)1/2, usually called “watts per root hertz.” From the definition, it is apparent that the lower the value of the NEP, the better are the characteristics of the detector for detecting a small signal in the presence of noise.

Example 2 The noise equivalent power of a detector with area 1 cm2 is measured to be 2 × 10–8 watts/(Hz)1/2 with a bandwidth of 1 Hz. What power is incident on the detector if the ratio of the noise voltage to the signal voltage is 10–6?

Solution According to Equation 3-1, the irradiance H at the detector must be NEP H = = 2 × 10–8/{(1) × (10–6) × (1)} = 0.02 W/cm2 VN 1 A 1 2 VS () Because the area of the detector is 1 cm2, the power is 0.02 W.

Detectivity—The NEP of a detector is dependent on the area of the detector. To provide a figure of merit that is dependent on the intrinsic properties of the detector, not on how large it happens to be, a term called detectivity is defined. Detectivity is represented by the symbol D*, which is pronounced “D-star.” It is defined as the square root of the detector area per unit value of NEP.

1 A 2 D* = (3-2) NEP Since many detectors have NEP proportional to the square root of their areas, D* is independent of the area of the detector. The detectivity thus gives a measure of the intrinsic quality of the detector material itself. When a value of D* for an optical detector is measured, it is usually measured in a system in which the incident light is modulated or chopped at a frequency  so as to produce an AC signal, which is then amplified with an amplification bandwidth . These quantities must also be specified. The dependence of D* on the wavelength , the frequency  at which the measurement is made, and the bandwidth  are expressed by the notation D*(,,). The reference bandwidth is often 1 Hertz. The units of D*(,,) are cm-Hz1/2/watt. A high value of D*(,,) means that the detector is suitable for detecting weak signals in the presence of noise. Later, in the discussion of noise, we will describe the effect of modulation frequency and bandwidth on the noise characteristics.

6 Optics and Photonics Series, Course 2: Elements of Photonics Example 3 A detector has a noise equivalent power of 3 × 10–9 watts/(Hz)1/2 and an area of 0.4 cm2. What is its value of D*? Solution According to Equation 3-2, D* = (0.4 cm2)1/2/3 × 10–9 watts/(Hz)1/2 = 0.632 cm × 0.333 ×109 Hz1/2/watt = 2.11 × 108 cm-Hz1/2/watt

Quantum efficiency—Another common figure of merit for optical detectors is the quantum efficiency. Quantum efficiency is defined as the ratio of countable events produced by photons incident on the detector to the number of incident photons. If the detector is a photoemissive detector that emits free electrons from its surface when light strikes it, the quantum efficiency is the number of free electrons divided by the number of incident photons. If the detector is a semiconductor pn-junction device, in which hole-electron pairs are produced, the quantum efficiency is the number of hole-electron pairs divided by the number of incident photons. If, over a period of time, 100,000 photons are incident on the detector and 10,000 hole-electron pairs are produced, the quantum efficiency is 10%. The quantum efficiency is basically another way of expressing the effectiveness of the incident optical energy for producing an output of electrical current. The quantum efficiency Q (in percent) may be related to the responsivity by the equation: 1.2395 Q = 100 × Rd ×  (3-3) λ where Rd is the responsivity (in amperes per watt) of the detector at wavelength  (in ).

Example 4 A detector has a quantum efficiency of 10% at a wavelength of 500 nm. At a wavelength of 750 nm, the responsivity is twice the responsivity at 500 nm. What is the quantum efficiency at 750 nm? Solution From Equation 3-3, we see that the increase in responsivity from 500 nm to 750 nm will increase the quantum efficiency Q by a factor of 2. But the increase in wavelength will decrease the quantum efficiency Q by a factor of 2/3. Thus, the net change in quantum efficiency will be an overall increase by a factor of 4/3, from 10% to 13.33%.

Detector response time—Another useful detector characteristic is the how fast the detector responds to changes in light intensity. If a light source is instantaneously turned on and irradiates an optical detector, it takes a finite time for current to appear at the output of the

Module 2-3: Optical Detectors and Human Vision 7 device and for the current to reach a steady value. If the source is turned off instantaneously, it takes a finite time for the current to decay back to zero. The term response time refers to the time it takes the detector current to rise to a value equal to 63.2% of the steady-state value which is reached after a relatively long period of time. (This value is numerically equal to 1 1 , where e is the base of the natural logarithm system.) The recovery time is the time it e takes for the photocurrent to fall to 36.8% of the steady-state value when the light is turned off instantaneously. Because optical detectors often are used for detection of fast pulses, another important term, called rise time, is often used to describe the speed of the detector response. Rise time is defined as the time difference between the point at which the detector has reached 10% of its peak output and the point at which it has reached 90% of its peak response, when it is irradiated by a very short pulse of light. The fall time is defined as the time between the 90% point and the 10% point on the trailing edge of the pulse waveform. This is also called the decay time. We should note that the fall time may be different numerically from the rise time. Of course, light sources are not turned on or off instantaneously. To make accurate measurements of rise time and fall time, the source used for the measurement should have a rise time much less that the rise time of the detector being tested. Generally, one should use a source whose rise time is less than 10% of the rise time of the detector being tested. The intrinsic response time of an optical detector arises from the transit time of photo-generated charge carriers within the detector material and from the inherent capacitance and resistance associated with the device. The measured value of response time is also affected by the value of the load resistance that is used with the detector, and may be longer than the inherent response time. There is a tradeoff in the selection of a load resistance between speed of response and high sensitivity. It is not possible to achieve both simultaneously. Fast response requires a low load resistance (generally 50 ohms or less), whereas high sensitivity requires a high value of load resistance. It is also important to keep any capacitance associated with the circuitry, the electrical cables, and the display devices as low as possible. This will help keep the RC (resistance × capacitance) time constant low. Manufacturers often quote nominal values for the rise times of their detectors. These should be interpreted as minimum values, which may be achieved only with careful circuit design and avoidance of excess capacitance and resistance in the circuitry. We will return to the issue of detector speed later in a discussion of high-speed detectors, designed for applications such as optical communications with large bandwidth. Linearity—Yet another important characteristic of optical detectors is their linearity. Detectors are characterized by a response in which the output is linear with incident intensity. The response may be linear over a broad range, perhaps many orders of magnitude. If the output of the detector is plotted versus the input power, there should be no change in the slope of the curve. Noise will determine the lowest level of incident light that is detectable. The upper limit of the input/output linearity is determined by the maximum current that the detector can produce without becoming saturated. Saturation is a condition in which there is no further increase in detector response as the input light intensity is increased. When the detector becomes saturated, one can no longer rely on its output to represent the input faithfully. The user should ensure that the detector is operating in the range in which it is linear.

8 Optics and Photonics Series, Course 2: Elements of Photonics Manufacturers of optical detectors often specify maximum allowable continuous light level. Light levels in excess of this maximum may cause saturation, hysteresis effects, and irreversible damage to the detectors. If the light occurs in the form of a very short pulse, it may be possible to exceed the continuous rating by some factor (perhaps as much as 10 times) without damage or noticeable changes in linearity. Spectral response—The spectral response describes how the performance of a detector (responsivity or detectivity) varies with wavelength. The spectral response is defined by curves such as those shown earlier in Figure 3-1, which presents generalized curves showing relative spectral response as a function of wavelength for photon detectors and thermal detectors. The exact shape of the spectral response and the numerical values depend on the detector type and the material from which the detector is fabricated. Many different types of detectors are available, with responses maximized in the ultraviolet, visible, or infrared spectral regions. Again, the manufacturer usually specifies the spectral response curve. One should choose a detector that responds well in the spectral region of importance for the particular application.

Noise considerations Noise in optical detectors is a complex subject. In this module we will do no more than present some of the most basic ideas. Noise is defined as any undesired signal. It masks the signal that is to be detected. Noise can be external and internal. External noise involves disturbances that appear in the detection system because of factors outside the system. Examples of external noise could be pickup of hum induced by 60-Hz electrical power lines or static caused by electrical storms. Internal noise includes all noise generated within the detection system itself. Every electronic device has internal sources of noise, which represent an ever-present limit to the smallest signal that may be detected by the system. Noise cannot be described in the same manner as usual electric currents or voltages. We think of currents or voltages as functions of time, such as constant direct currents or sine-wave alternating voltages. The noise output of an electrical circuit as a function of time is completely erratic. We cannot predict what the output will be at any instant. There will be no indication of regularity in the waveform. The output is said to be random. Now we will consider some of the sources of noise encountered in optical detector applications. A complete description of all types of noise would be very long. We will describe four noise sources often encountered in connection with optical detectors.  Johnson noise  Shot noise  1/f noise  Photon noise Johnson noise—Johnson noise is generated by thermal fluctuations in conducting materials. It is sometimes called thermal noise. It results from the random motion of electrons in a conductor. The electrons are in constant motion, colliding with each other and with the atoms of the material. Each motion of an electron between collisions represents a tiny current. The sum

Module 2-3: Optical Detectors and Human Vision 9 of all these currents taken over a long period of time is zero, but their random fluctuations over short intervals constitute Johnson noise. To reduce the magnitude of Johnson noise, one can cool the system, especially the load resistor. One can also reduce the value of the load resistance, although this is done at the price of reducing the available signal. One should keep the bandwidth of the amplification small; one Hz is a commonly employed value. Shot noise—The term shot noise is derived from fluctuations in the stream of electrons in a vacuum tube. These variations create noise because of the random fluctuations in the arrival of electrons at the anode. The shot noise name arises from the similarity to the noise of a hail of BB shots striking a target. In semiconductors, the major source of shot noise is random variations in the rate at which charge carriers are generated and recombine. This noise, called generation-recombination or gr noise, is the semiconductor manifestation of shot noise. Shot noise may be minimized by keeping any DC component to the current small, especially the dark current, and by keeping the bandwidth of the amplification system small. 1/f noise—The term 1/f noise (pronounced “one over f”) is used to describe a number of types of noise that are present when the modulation frequency f is low. This type of noise is also called excess noise because it exceeds shot noise at frequencies below a few hundred Hertz. The mechanisms that produce 1/f noise are poorly understood. The noise power is inversely proportional to f, the modulation frequency. This dependence of the noise power on modulation frequency leads to the name for this type of noise. To reduce 1/f noise, an optical detector should be operated at a reasonably high frequency, often as high as 1000 Hz. This is a high enough value to reduce the contribution of 1/f noise to a small amount. Photon noise—Even if all the previously discussed sources of noise could be eliminated, there would still be some noise in the output of an optical detector because of the random arrival rate of photons in the light being measured and from the background. This contribution to the noise is called photon noise; it is a noise source external to the detector. It imposes the ultimate fundamental limit to the detectivity of a photodetector. The photon noise associated with the fluctuations in the arrival rate of photons in the desired signal is not something that can be reduced. The contribution of fluctuations in the arrival of photons from the background, a contribution that is called background noise, can be reduced. The background noise increases with the field of view of the detector and with the temperature of the background. In some cases it is possible to reduce the field of view of the detector so as to view only the source of interest. In other cases it is possible to cool the background. Both these measures may be used to reduce the background noise contribution to photon noise. The types of noise described here, or a combination of them, will set an upper limit to the detectivity of an optical detector system.

10 Optics and Photonics Series, Course 2: Elements of Photonics Types of Detectors We now return to the discussion of different types of detectors.

Photon detectors We have defined photon detectors and thermal detectors briefly. We begin a more detailed discussion of detector types with photon detectors. In photon detectors, quanta of light energy produce free electrons. The photon must have sufficient energy to exceed some threshold. In other words, the wavelength must be shorter than the cutoff wavelength. We will consider three types of photoeffects that are often used for detectors. These are the photovoltaic effect, the photoemissive effect, and the photoconductive effect. Photovoltaic effect—The photovoltaic effect occurs at a junction in a semiconductor. The junction is the boundary between a region where the conductivity is due to electrons and a region where the conductivity is due to holes (the absence of electrons). This is called a pn junction. At the junction, an electric field is present internally because there is a change in the level of the conduction and valence bands. This change leads to the familiar electrical rectification effect produced by such junctions. The photovoltaic effect is the generation of a voltage when light strikes a semiconductor pn junction. The photovoltaic effect is measured using a high-impedance voltage-measuring device, which essentially measures the open-circuit voltage produced at the junction. In the dark, no open circuit voltage is present. When light falls on the junction, the light is absorbed and, if the photon energy is large enough, it produces free hole-electron pairs. The electric field at the junction separates the pair and moves the electron into the n-type region and the hole into the p-type region. This leads to an open circuit voltage that can be measured externally. This process is the origin of the photovoltaic effect. We note that the open-circuit voltage generated in the photovoltaic effect may be detected directly and that no bias voltage or load resistor is required. If the junction is short-circuited by an external conductor, current will flow in the circuit when the junction is illuminated. One may measure either the open-circuit voltage or the short-circuit current. Both these quantities will give measures of the light intensity falling on the junction. Photoemissive effect—Now we turn to the photoemissive effect. The photoemissive effect involves the emission of electrons from a surface irradiated by quanta of light energy. A photoemissive detector has a cathode coated with a material that emits electrons when light of wavelength shorter than the cutoff wavelength falls on the surface. The electrons emitted from the surface are accelerated by a voltage to an anode, where they produce a current in an external circuit. The detectors are enclosed in a vacuum environment to allow a free flow of electrons from cathode to anode. These detectors are available commercially from a number of manufacturers. They represent an important class of detectors for many applications. Some spectral response curves for photoemissive cathodes are shown in Figure 3-2. The cathodes are often mixtures containing alkali metals, such as sodium and potassium, from which electrons can easily be emitted. The responsivity in mA/watt of these devices is shown in the figure from the ultraviolet to the near infrared. At wavelengths longer than about 1000 nm, no photoemissive response is available. The short-wavelength end of the response curve is set by the nature of the

Module 2-3: Optical Detectors and Human Vision 11 window material used in the tube that contains the detector. The user can select a device that has a cathode with maximum response in a selected wavelength region. An important variation of the photoemissive detector is the photomultiplier, which will be described later.

Figure 3-2 Response as a function of wavelength for a number of photoemissive surfaces. Curve 1 is the response of a bialkali type of cathode with a sapphire window; curve 2 is for a different bialkali cathode with a lime glass window; curve 3 is for a multialkali cathode with a lime glass window; and curve 4 is for a GaAs cathode with a 9741 glass window. The curves labeled 1% and 10% denote what the response would be at the indicated value of quantum efficiency.

Example 5 You are a technician in an industrial processing facility and are setting up equipment to monitor a chemical reaction. You need to detect light with a wavelength of 820 nm. You have four photoemissive detectors available, with characteristics as shown in Figure 3-2. Which should you choose? Solution You should choose the GaAs cathode with a 9741 glass window (Curve 4), because this is the only one whose cutoff wavelength is longer than 820 nm.

12 Optics and Photonics Series, Course 2: Elements of Photonics Photoconductivity—A third phenomenon used in optical detectors is photoconductivity. A semiconductor in thermal equilibrium contains free electrons and holes. The concentration of electrons and holes is changed if light is absorbed by the semiconductor. The light must have photon energy large enough to produce free electrons within the material. The increased number of charge carriers leads to an increase in the electrical conductivity of the semiconductor. The device is used in a circuit with a bias voltage and a load resistor in series with it. The change in electrical conductivity leads to an increase in the current flowing in the circuit, and hence to a measurable change in the voltage drop across the load resistor. Photoconductive detectors are most widely used in the infrared region, at wave- lengths where photo- emissive detectors are not available. Many different materials are used as infrared photoconductive detectors. Some typical values of detectivity (in cm-Hz1/2/watt) as a function of wavelength for some devices operating in the infrared are shown in Figure 3-3, along with values of detectivity for some other detectors to be discussed later. The photoconductive detectors are denoted PC. The exact value of detectivity for a specific photoconductor depends on the operating temperature and on the field of view of the detector. Most infrared photoconductive detectors operate at a cryogenic Figure 3-3 Detectivity as a function of wavelength for a number of temperature (frequently different types of photodetectors operating in the infrared spectrum. liquid nitrogen temperature, The temperature of operation is indicated. Photovoltaic detectors are 77 K), which may involve denoted PV; photoconductive detectors are denoted PC. The curves for some inconvenience in ideal photodetectors assume a 2 steradian field of view and a 295 K practical applications. background temperature. A photoconductive detector uses a crystal of semiconductor material that has low conductance in the dark and an increased value of conductance when it is illuminated. It is commonly used in a series circuit with a battery and a load resistor. The semiconductor element has its conductance increased by light. The presence of light leads to increased current in the circuit and to increased voltage drop across the load resistor.

Module 2-3: Optical Detectors and Human Vision 13 Example 6 You are working on an infrared sensing application requiring a detector with an area of 4 cm2 to detect infrared radiation at a wavelength of 9.6 m. Your supervising scientist has calculated that the detector must have a noise equivalent power (NEP) less than 2 × 10–10 watts/Hz1/2. What detector should you choose? Solution

1 A 2 From Equation 3-2, D* = . Using the numbers given above, we determine that: NEP

11 (4)22 cm-Hz D* =  1010 (210) 10 watt From Figure 3-3 we find that the only detector with a value of D* that large at the specified wavelength is the photoconductive (Hg0.8Cd0.2)Te detector operating at 77 K. That is the detector that should be chosen.

We now consider two specific types of photon detectors that are especially useful in photonics—the photodiode and the photomultiplier. Photodiodes—We have discussed the photovoltaic effect, for which no bias voltage is required. It is also possible to use a pn junction to detect light if one does apply a bias voltage in the reverse direction. The reverse direction is the direction of low current flow, that is, with the positive voltage applied to the n-type material. A photodiode is a pn junction detector with an applied bias voltage. Figure 3-4 shows the current-voltage characteristics of a photodiode. The curve marked “dark” shows the current-voltage relationship in the absence of light. It shows the familiar rectification characteristics of a pn semiconductor diode. The other curves represent the current-voltage characteristics when the device is illuminated at different light levels. A photovoltaic detector, with zero applied voltage, is represented by the intersections of the different curves with the vertical axis. Figure 3-4 is intended to show qualitatively how a photodiode operates. No quantitative values are shown for the axes in this figure; these values will vary from one material to another.

14 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 3-4 Current-voltage characteristic for a photodiode

A photodiode detector is operated in the lower left quadrant of the current-voltage relationship shown in Figure 3-4, where the current that may be drawn through an external load resistor increases with increasing light level. In practice, one measures the voltage drop appearing across the load resistor. Operation of the pn junction device as a photodiode rather than as a photovoltaic detector gives the advantage of increased response to the presence of light.

Example 7 Compare the dark current expected from a photodiode with that from a photovoltaic detector.

Solution A photovoltaic detector is operated with no bias voltage, that is, at the zero voltage position in Figure 3-4. From the figure we see that the current will be very near zero when the device is in the dark. A photodiode is operated in the lower left quadrant of the figure, where we see that the current is non-zero, even when the device is in the dark. Thus, as compared to a photovoltaic detector, the increased response of the photodiode comes at the cost of increased dark current.

A variety of photodiode structures are available. No single photodiode structure can best meet all requirements. Perhaps the two most common structures are the planar diffused photodiode, shown in Figure 3-5a, and the Schottky photodiode, shown in Figure 3-5b. The planar diffused photodiode is fabricated by growing a layer of oxide over a slice of high-resistivity silicon,

Module 2-3: Optical Detectors and Human Vision 15 etching a hole in the oxide and diffusing boron into the silicon through the hole. This structure leads to devices with high breakdown voltage and low leakage current. The circuitry for operation of the photodiode is also indicated, including the load resistor.

Figure 3-5 Photodiode structures: (a) Planar diffused photodiode; (b) Schottky photodiode

The Schottky barrier photodiode is formed at a junction between a metallic layer (gold film in Figure 3-5b) and a semiconductor. If the metal and the semiconductor have work functions related in the proper way, this can be a rectifying barrier. The junction is fabricated by oxidation of the silicon surface, then etching of a hole in the oxide, followed by the evaporation of a thin transparent and conducting gold layer on top. The insulating guard rings serve to reduce the leakage current through the device. A number of different semiconductor materials are in common use as photodiodes. They include silicon for use in the visible, near ultraviolet, and near infrared; germanium and indium gallium arsenide in the near infrared; and indium antimonide, indium arsenide, mercury cadmium telluride, and germanium doped with elements like copper and gold in the longer-wavelength infrared. The most frequently encountered type of photodiode is silicon. Silicon photodiodes are widely used as the detector elements in optical disks and as the receiver elements in optical fiber telecommunication systems operating at wavelengths around 800 nm. Silicon photodiodes respond over the approximate spectral range of 400-1100 nm, covering the visible and part of the near infrared regions. With special enhancement, the response in the short wavelength region may be increased, so that silicon photodiodes are useful in the ultraviolet. The spectral responsivity (in amperes/watt) of typical commercial silicon photodiodes is shown in Figure 3-6. The responsivity reaches a peak value around 0.7 amp/watt near 900 nm, decreasing at longer and shorter wavelengths. Optional models provide somewhat extended coverage in the infrared or ultraviolet regions. Silicon photodiodes are useful for detection of many of the most common laser wavelengths, including argon, HeNe, AlGaAs, and Nd:YAG.

16 Optics and Photonics Series, Course 2: Elements of Photonics In practice, silicon photodiodes have become the detector of choice for many photonics applications within their spectral range. They use well-developed technology and are widely available. They represent the most widely used type of detector for lasers operating in the visible and near infrared portions of the spectrum.

Figure 3-6 Responsivity as a function of wavelength for typical silicon photodiodes

Example 8 You are setting up equipment using a diode laser operating at a wavelength of 800 nm and an output power of 3 milliwatts and a silicon photodiode with a response similar to that shown in Figure 3-6. Approximately what current output do you expect from the photodiode when the laser beam is directed onto it. Solution Reading from Figure 3-6, the responsivity of the silicon photodiode at 800 nm is approximately 0.65 amps/watt. The output of the photodiode would be expected to be 0.65 amps/watt × 0.003 watt = 0.00195 A, about 2 milliamperes.

There are also a variety of other types of photodiodes that are often encountered, especially in optical communications where wavelengths around 1.5 micrometers (1500 nm) are used in the fiber optics involved. Silicon photodiodes do not respond in this region, as Figure 3-6 shows. Table 3-1 presents some characteristics of photodiodes used in the visible and near infrared portions of the spectrum.

Module 2-3: Optical Detectors and Human Vision 17 Table 3-1. Performance Parameters of Some Photodiode Detectors Wavelength of Peak Usable Wavelength Peak Responsivity Detector Material Response Range (micrometers) (A/W) (micrometers) Si 0.9 0.4–1.1 0.7 Si (blue enhanced) 0.9 0.25–1.1 0.7 Ge 1.5 0.9–1.9 0.82 InGaAs 1.55 0.8–1.7 0.95 GaAs 0.85 0.47–0.87 0.6 GaN 0.35 0.18–0.365 0.13

Figure 3-3 showed earlier the detectivity (D*) for a number of commercially available detectors operating in the infrared region of the EM spectrum. The figure includes both photovoltaic detectors (denoted PV) and photoconductive detectors (denoted PC). The choice of detector will depend on the wavelength region that is desired. For example, for a laser operating at 5 m, an indium antimonide (InSb) photovoltaic detector would be suitable. Figure 3-3 also indicated the detectivity for “ideal” detectors, that is, detectors whose performance is limited only by fluctuations in the background of incident radiation, and that do not contribute noise themselves. Available detectors approach the ideal performance limits fairly closely. PIN photodiodes—Another common type of semiconductor structure used in photodiodes is the so-called PIN structure. This structure was developed to increase the frequency response of photodiodes. The device has a layer of nearly intrinsic semiconductor material bounded on one side by a relatively thin layer of highly doped p-type semiconductor and on the other side by a relatively thick layer of n-type semiconductor. Hence it is called a PIN device. Light that is absorbed in the intrinsic region produces free electron-hole pairs, provided that the photon energy is high enough. These carriers are swept across the region with high velocity and are collected in the heavily doped regions. The frequency response of PIN photodiodes can be very high, of the order of 1010 Hz. This is higher than the frequency response of pn junctions without the intrinsic region. Avalanche photodiodes—Another variation of the photodiode is the avalanche photodiode. The avalanche photodiode offers the possibility of internal gain. It is sometimes referred to as a “solid-state photomultiplier.” The most widely used material for avalanche photodiodes is silicon, but they have been fabricated from other materials, such as germanium. An avalanche photodiode has a diffused pn junction, with surface contouring to permit high reverse bias voltage without surface breakdown. A large internal electric field leads to multiplication of the number of charge carriers through ionizing collisions. The signal is thus increased, to a value perhaps 100–200 times greater than that of a nonavalanche device. The detectivity is also increased, provided that the limiting noise is not from background radiation. Avalanche photodiodes cost more than conventional photodiodes and they require temperature- compensation circuits to maintain the optimum bias, but they represent an attractive choice when high performance is required.

18 Optics and Photonics Series, Course 2: Elements of Photonics High-speed detectors—Detectors with a high-speed response are required in many applications, especially in optical communications where a very large bandwidth is needed. The threshold for considering a detector to be “high-speed” is often taken as 155 megabits per second, corresponding to a bandwidth around 100 MHz. Important factors for designing detectors with high bandwidth are the transit time and the capacitance. The transit time is the time for an electron to be swept out of the intrinsic region in a PIN photodiode. This may be reduced by making the intrinsic region thin. The capacitance is reduced by keeping the active area of the device small. InGaAs photodiodes with high bandwidths in excess of 40 GHz have been designed. Photomultipliers—Previously, we described photoemissive detectors in which current flows directly from a photocathode to an anode. We turn now to an important photoemissive detector that provides for amplifi- cation of the cathode-to-anode current. This is the photomultiplier. This device has a photoemissive cathode and a number of secondary emitting stages called dynodes. The dynodes are arranged so that electrons from each dynode are delivered to the next dynode in the sequence of dynodes. Electrons emitted from the cathode are accelerated by an applied voltage to the first dynode, where their impact causes emission of numerous secondary electrons. These electrons are accelerated to the next dynode and generate even more electrons. Finally, electrons from the last dynode are accelerated to the anode and produce a large current pulse in the external circuit. The photomultiplier is packaged as a vacuum tube. Figure 3-7 shows a cross-sectional diagram of a typical photomultiplier Figure 3-7 Diagram of typical photomultiplier tube structure. This tube has a tube structure transparent end window with the underside coated with the photocathode material.

Module 2-3: Optical Detectors and Human Vision 19 Figure 3-8 shows the principles of operation of the tube. Photoelectrons emitted from the cathode strike the first dynode, where they produce 1 to 8 secondary electrons per incident electron. These are accelerated to the second dynode, where the process is repeated. After several such steps the electrons are collected at the anode and flow through the load resistor. Voltages of 100 to 300 volts accelerate electrons between dynodes, so that the total tube voltage may be from 500 to 3000 volts from anode to cathode, depending on the number of dynodes. The current gain of a photomultiplier is the ratio of anode current to cathode current. Typical values of gain may be in the range 100,000 to 1,000,000. Thus 100,000 or more electrons reach the anode for each photon striking the cathode. Photomultiplier tubes can in fact detect the arrival of a single photon at the cathode. Figure 3-9 shows the gain as a function of the voltage from the anode to the cathode for a typical photomultiplier tube. This high gain process means that photomultiplier tubes offer the highest available responsivity in the ultraviolet, visible, and near infrared portions of the spectrum. But their response does not extend to wavelengths longer than about 1000 nm.

Figure 3-8 Principles of photomultiplier Figure 3-9 Photomultiplier gain as a function of operation. The dynodes are denoted D1, D2, etc. applied voltage

Photomultiplier tubes are used in many photonics applications, such as air-pollution monitoring, star tracking, photometry, and radiometry.

20 Optics and Photonics Series, Course 2: Elements of Photonics Example 9 You are a technician in an industrial laboratory using a photomultiplier with characteristics as shown in Figure 3-9 to detect a weak signal. You are using an applied voltage of 1000 volts. Your senior engineer requests you to increase the gain of the device by a factor of 10. What do you do?

Solution According to Figure 3-9, the gain of the photomultiplier at 1000 volts is 104. To increase the gain to 105, you should increase the applied voltage to near 1200 volts.

Thermal detectors A second broad class of optical detectors—thermal detectors—responds to the total electromagnetic energy absorbed, regardless of wavelength. Thus thermal detectors do not have a long-wavelength cutoff in their response, as do photon detectors. The value of detectivity (D*) for a thermal detector is independent of wavelength. Thermal detectors generally do not have as rapid a response as do photon detectors. For many photonics applications, they are often not used in the wavelength region in which photon detectors are most effective (1.55 m). They are often used at longer wavelengths. Bolometers and thermistors—In perhaps the most common use of thermal detectors, the optical energy is absorbed by an element whose properties change with temperature. As the light energy is absorbed, the temperature of the element increases and the change in its properties is sensed. The temperature-measuring elements include bolometers and thermistors. Bolometers and thermistors respond to the change in electrical resistivity that occurs as temperature rises. Bolometers use metallic elements; thermistors use semiconductor elements. The bolometer or thermistor is in a circuit in series with a voltage source. As the resistance changes, the current that flows through the thermal detector causes the voltage across the element to drop, thereby providing a sensing mechanism. Thermocouples—In another use, light is absorbed by an element to which a thermocouple is attached. The thermocouple is a device formed of two dissimilar metals joined at two points. Thermocouples may be fabricated from wires, but for detector applications they are often fabricated as thin films. The device generates a potential difference, which is a measure of the temperature difference between the points. One point is held at a constant reference temperature. The other point is in contact with the absorber. The light energy heats the absorber and the thermocouple junction in contact with it. This causes the voltage generated by the thermocouple to change, giving a measure of the temperature rise of the absorber and of the incident light energy. To enhance the performance of thermocouples, often one makes use of a number of thermocouples in series, perhaps as many as 100. The “hot” junctions are all attached close together. This type of device is called a thermopile. Figure 3-10 shows values of D*(,1000,1) for some thermal detectors, including thermistors, bolometers, thermopiles and pyroelectric detectors, which will be described later. The values are independent of wavelength. In the visible and near infrared, the values of D* for thermal

Module 2-3: Optical Detectors and Human Vision 21 detectors tend to be lower than for good photon detectors, but the response does not decrease at long wavelength.

Figure 3-10 Detectivity D* (,,) as a function of wavelength for several typical thermal detectors. The temperature of operation is 295 K, the measurement frequency  is 1000 H, and the reference bandwidth  is 1 Hz. Both scales are logarithmic.

Example 10 You are a technician working on remote detection of gases from a smokestack using their spectral emission signatures. Your supervisor requests you to select a thermal detector capable of detecting a total radiant power of 2 × 10–8 watts at a wavelength of 10 m that is focused to a spot of area 2 1 cm . It is desired that the signal-to-noise ratio (VS/VN) be at least 10, that the bandwidth  be 1 Hz, and that the measurement frequency  be 1000 Hz. What thermal detectors would be suitable? Solution We must first determine what value of D* is required. Combining Equations 3-1 and 3-2, we obtain: 1/2 1/2 1/2 D* = A /NEP = A (VS/VN)() /AH The symbols above are the same as those described in the discussion of Equations 3-1 and 3-2. We choose a detector with area equal to the focused spot so as to minimize the NEP. Then the product AH is equal to the total incident power. Substituting numbers, we have:

11 110122 D* = = 5 × 108 cm-Hz1/2/watt 210 8 as the required value of D*. Referring to Figure 3-10, we see that both pyroelectric detectors and bolometers have the required value of D* at the wavelength of 10 m. You can choose either one of those two detectors.

22 Optics and Photonics Series, Course 2: Elements of Photonics Calorimeters—Measurements of pulse energy are frequently made using a calorimeter, which represents a common thermal detector system. Calorimetric measurements yield a simple determination of the total energy in an optical pulse, but calorimeters usually do not respond rapidly enough to follow the pulse shape. Calorimeters designed for photonics measurements often use blackbody absorbers with low thermal mass and with temperature-measuring devices in contact with the absorber to measure the temperature rise. Knowledge of the thermal mass coupled with measurement of the temperature rise yields the energy in the optical pulse. A variety of calorimeter designs have been developed for measuring the total energy in an optical pulse or for integrating the output from a continuous optical source. Since the total energy in a pulse is usually not large, the calorimetric techniques are rather delicate. The absorbing medium must be small enough so that the absorbed energy may be rapidly distributed throughout the body. It must be thermally isolated from its surroundings so that the energy is not lost. A commonly encountered calorimeter design, the so-called cone calorimeter, uses a small, hollow carbon cone, shaped so that radiation entering the base of the cone will not be reflected back out of the cone. Such a design is a very efficient absorber. Thermistor beads or thermocouples are placed in contact with the cone. The thermistors form one element of a balanced bridge circuit, the output of which is connected to a display or meter. As the cone is heated by a pulse of energy, the resistance of the bridge changes, leading to an imbalance of the bridge and a voltage pulse that activates the display. The pulse decays as the cone cools to ambient temperature. The magnitude of the voltage pulse gives a measure of the energy in the pulse. Two identical cones may be used to form a conjugate pair in the bridge circuit. This approach allows cancellation of drifts in the ambient temperature. Pyroelectric detectors—Another type of thermal detector is the pyroelectric detector. Pyroelectric detectors respond to the change in electric polarization that occurs in certain classes of crystalline materials (like lithium tantalate) as their temperatures change. The change in polarization, called the pyroelectric effect, may be measured as an open-circuit voltage or as a short-circuit current. Because they respond to changes in temperature, pyroelectric devices are useful as detectors for only pulsed or chopped radiation. The response speed of pyroelectric detectors is fast, faster than that of other thermal detectors like thermistors and thermopiles. Pyroelectric detectors are fast enough to detect very short optical pulses. The spectral detectivity D* of pyroelectric detectors was shown in Figure 3-10. It tends to be higher than the detectivity of thermistor or thermopile detectors, and it is independent of wavelength.

Calibration The response of an optical detector in current (or voltage) per unit input of power is often taken as the nominal value specified by the manufacturer. But, for precise work, the detector may have to be calibrated by the user. Accurate absolute measurements of optical power or energy are difficult. A good calibration requires very careful work.

Module 2-3: Optical Detectors and Human Vision 23 Response of detector One widely used calibration method involves measurement of the total energy in the laser beam (with a calorimetric energy meter) at the same time that the detector response is determined. The temporal history of the energy delivery is known from the shape of the detector output. Since the power integrated over time must equal the total energy, the detector calibration is obtained in terms of laser power per unit of detector response. In one common approach, you can use a calorimeter to calibrate a detector, which is then used to monitor the laser output from one pulse to another. A small fraction of the laser beam is diverted by a beam splitter to the detector, while the remainder of the laser energy goes to the calorimeter. The total energy arriving at the calorimeter is determined. The temporal history of the detector output gives the pulse shape. Then numerical or graphical integration yields the calibration of the response of the detector relative to the calorimeter. The calibration may be in terms of power or energy in the laser pulse. If you know the fraction of the total beam energy diverted to the detector, you can calibrate the detector response in terms of the energy in the pulse. If the pulse shape is stable from pulse to pulse, you can use the results of the numerical or graphical integration to determine the peak power in the pulse. If the response of the calorimeter is fast, it can be used for measurement of power from a continuous source. The temperature of the absorber will reach an equilibrium value dependent on the input power. Such devices are available commercially as laser power meters. Compared to the power meters based on silicon or other photodiodes, the power meters based on absorbing cones or disks are useful over a wider range of wavelength and do not require use of a compensating factor to adjust for the change in response as the laser wavelength changes. After the calibration is complete, you can remove the calorimeter and use the main portion of the beam for the desired application. The detector, receiving the small portion of the beam directed to it by the beam splitter, acts as a pulse-to-pulse monitor.

Techniques to limit beam power Filters—Quantitative measurements of laser output involve several troublesome features. The intense laser output tends to overload and saturate the output of detectors if they are exposed to the full power. Thus, absorbing filters may be used to cut down the input to the detector. A suitable filter avoids saturation of the detector, keeps it in the linear region of its operating characteristics, shields it from unwanted background radiation, and protects it from damage. Many types of attenuating filters have been used, including neutral-density filters, semiconductor wafers (like silicon), and liquid filters. We note that filters themselves also may saturate and become nonlinear when exposed to high irradiance. If a certain attenuation is measured for a filter exposed to low irradiance, the attenuation may be less for a more intense laser beam. Thus, a measurement must be performed at a low enough irradiance so that the filter does not become saturated. Beam splitters—The use of beam splitters also can provide attenuation of an intense laser beam. If the beam is incident on a transparent dielectric material inserted at an angle to the beam, there will be specular reflection of a portion of the beam. One may measure this reflected beam, which will contain only a small fraction of the incident power. The fraction may be

24 Optics and Photonics Series, Course 2: Elements of Photonics determined using Fresnel’s equations. The calculation requires knowledge of the reflection angle geometry and the index of refraction of the dielectric material. Lambertian reflectors—Another method for attenuating the beam before detection is to allow it to fall normally on a diffusely reflecting massive surface, such as a magnesium oxide block. The arrangement is shown in Figure 3-11. The angular distribution of the reflected light is proportional to the cosine of the angle  between the normal to the surface and the axis of symmetry of the detector. Thus, the power reflected is maximum along the normal to the surface and decreases to zero at 90 degrees to the surface. This dependency is called Lambert’s cosine law, and a surface that follows this law is called a Lambertian surface. Many practical surfaces follow this relation, at least approximately. The power Pdetector that reaches the detector after reflection from such a surface is given by

Ad Pdetector = Ptot cos  (3-4) D2 where Ptot is the total laser power, Ad is the area of the detector (or its projection onto a plane perpendicular to the line from the target to the detector), and D is the distance from the target to the detector. This approximation is valid when D is much larger than the detector dimensions and the transverse dimension of the laser beam.

Figure 3-11 Arrangement for measuring laser power using a Lambertian reflector to attenuate the power reaching the detector. D is the distance from the surface target to the detector and Ad is the area of the detector.

With a Lambertian reflector, the power incident on the photo surface can be adjusted in a known way by changing the distance D or the angle . The beam may be spread over a large enough area on the Lambertian surface so that the surface is not damaged. The distance D is made large enough to ensure that the detector is not saturated. The measurement of the power received by the detector, plus some easy geometric parameters, gives the fraction of the beam power reaching the detector.

Module 2-3: Optical Detectors and Human Vision 25 Example 11 A laser beam with total power 10 watts is incident at normal incidence on a Lambertian surface. How much power reaches a detector with an area of 0.5 cm2 at an angle of 45 degrees if the detector area is 30 cm from where the beam strikes the target reflecting surface?

Solution According to Equation 3-4, the power reaching the detector is

A 0.5 cm2 P = P d cos  = 10 W cos 45 detector tot 2 2 D (30 cm) = 10 × 0.5/(3.1416 × 900) × 0.707 W

Pdetector = 0.00125 W = 1.25 mW

Electrical calibration It is also possible to calibrate power meters electrically. It is assumed that the deposition of a given amount of energy in the absorber provides the same response, independent of whether the energy is optical or electrical. The absorbing element is heated by an electrical resistance heater. The electrical power dissi- pation is determined from electrical measurements of the current and voltage. The measured response of the instrument to the known electrical power input provides the calibration. Accurate absolute measurement of optical power is difficult. Thus, one must use great care in the calibration of optical detectors.

Circuitry for Optical Detectors The basic power supply for an optical detector contains a voltage source and a load resistor in series with the detector. As the irradiance on the detector element changes, the current in the circuit changes and the voltage drop across the load resistor changes. Measurement of the voltage drop provides the basis for the optical power measurement. A variety of different circuits may be used, depending on the detector type and on the application. A full description of all the types of detector circuits is beyond the scope of this module. We shall describe the electrical circuitry used with two representative types of detectors, the photovoltaic detector and the photoconductive detector.

Basic circuit for a photovoltaic detector A photovoltaic detector requires no bias voltage; it is itself a voltage generator. A basic circuit for a photovoltaic detector is shown in Figure 3-12. This shows the conventional symbol for a photodiode at the left. The symbol includes the arrow representing incident light. The incident light generates a voltage in the photodiode, which causes current to flow through the load resistor RL. The resulting voltage drop across the resistor is available as a signal to be monitored.

26 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 3-12 Basic circuit for operation of a photovoltaic detector. The symbol for a photodiode is indicated. The load resistor is RL.

In this configuration it is assumed that the value of the load resistor is much larger than the value of the internal resistance of the detector. This resistance is the resistance of the detector element itself, which is in parallel with the load resistor in the circuit. (The internal resistance is not shown in the figure.) The internal resistance is often specified by the manufacturer and for silicon photodiodes may be a few megohms to a few hundred megohms. Disadvantages of this circuit are that its response is nonlinear (it is logarithmic) and the signal depends on the internal resistance of the detector, which may vary in different production batches of detectors. Practical loads that need to be driven are usually much lower than those that can be used with the photovoltiac diode. In order to counter this disadvantage, an amplifier can be used as a buffer between an acceptable high load resistor for the diode and a much lower useful load resistance. Figure 3-13 shows this configuration.

Figure 3-13 Circuit for photovoltaic detector operation with a high detector load resistance driving a useful load of lower resistance

Module 2-3: Optical Detectors and Human Vision 27 This circuit has a linear response to the incident light intensity. It also is a low-noise circuit because it has almost no leakage current; therefore, shot noise is low.

Basic circuit for a photoconductive detector We noted previously that photodiodes may be operated in a photoconductive mode. Figure 3-14 shows a circuit that provides this type of operation. The diode is reverse biased, so that the operation is in the third quadrant of Figure 3-4. The photocurrent produces a voltage across the load resistor RL that is in parallel with an internal resistance of the detector. (Internal resistance is not shown in Figure 3-14.) The internal resistance is nearly constant. One may use large values of load resistance, so as to obtain large values of signal, and still obtain linear variation of the output with the optical power.

Figure 3-14 Circuit for operation of a photodiode in the photoconductive mode. The load resistor is RL.

This circuit can provide very high-speed response. It is possible to obtain rise times of one nanosecond or below with this type of circuit. The biggest disadvantage of this circuit is that the leakage current is relatively large so that shot noise is increased.

28 Optics and Photonics Series, Course 2: Elements of Photonics Human Vision The eye as an optical detector An important optical detector is the human eye. In some respects, the eye can be regarded as a specialized type of detector system, with characteristics similar to those of the other detectors that we have considered. In common with other optical detectors, the eye is a square-law detector, responding to the incident radiant power, which is proportional to the square of the electric field in the light wave. The eye has a spectral response that covers the range approximately from 400 nm to 700 nm, the range identified as the visible spectrum. At longer and shorter wavelengths, the eye is not able to detect incident optical energy.

Structure of the eye The eye can be considered as a complete optical system, including packaging, a variable aperture, a curved corneal surface and a lens that provide for imaging, a variable focus capability, a photosensor, and an output to a computer, the brain. Figure 3-15 shows a simplified diagram of the structure of the eye. The eye is approximately spherical in shape and is contained within a tough, fibrous envelope of tissue called the sclera. The sclera covers all of the eyeball except for a portion of its front. At the front of the eyeball is the cornea, which has a refractive index around 1.38. The cornea is a transparent membrane that allows light to enter the eyeball and that contributes significantly (better than 70%) to the focusing capability of the eye. Behind the cornea is the iris, an adjustable aperture that expands in dim light and contracts in bright light, controlling the amount of light which enters the eyeball. The pupil of the eye is the opening in the center of the aperture defined by the iris. Light entering the eye passes through the pupil.

Figure 3-15 Structure of the human eye

Module 2-3: Optical Detectors and Human Vision 29 The region behind the cornea contains a transparent liquid called the aqueous humor with refractive index around 1.34. Then there is the lens of the eye, a capsule of fibrous jelly-like material, with refractive index varying from 1.41 at the center to 1.39 at the periphery. The shape of the lens can be changed by muscles attached to it. This allows for fine focusing of light entering the eye. Further inward, behind the lens, is a transparent thin jelly called the vitreous humor. It has a refractive index around 1.34. Finally, covering most of the back surface of the eyeball is the retina, the photosensitive medium that serves as the actual detector material. The cells in the retina are of two types, called rods and cones. The rods and cones serve different functions, the cones providing the most sensitive vision near the center of the retina and the rods the peripheral vision farther out in the retina. The rods also are more sensitive in dim light than are the cones, so that the rods tend to dominate night vision. Near the center of the retina is a slight depression, called the fovea centralis, that contains only cones. This region provides the most acute vision. The rods and cones are connected through nerve fibers to the optic nerve, which emerges from the back of the eyeball. The rods and cones receive the optical image and transmit it through the nerve fibers to the brain. At the point where the optic nerve exits the eyeball, there are no rods or cones, so there is a small blind spot at that position.

Operation of the eye The eye is an imaging system. The substantial refraction of incoming light energy by the cornea and the action of the lens combine to form an image of the pattern of incident light on the retina. Because the index of refraction of the lens (about 1.40) is not too different from that of the aqueous and vitreous media (about 1.34), much of the refraction of light entering the eyeball occurs at the cornea, as mentioned earlier. When a normal eye is relaxed, light from very distant objects is focused on the retina. The light rays from the distant object enter the eye as parallel rays. In the “relaxed state,” the eye is said to be “focused at infinity.”

30 Optics and Photonics Series, Course 2: Elements of Photonics Example 12 In Figure 3-15, if a ray of light entering the cornea just above the notation “incident light” strikes the cornea at an angle of 30 degrees from the normal to the cornea, how is the light refracted as the light enters the cornea? See accompanying sketch. Assume that the index of refraction of air is unity. (You will need to make use of Snell’s law for refraction of light, discussed in Modules 1 and 4 of Course 1, Fundamentals of Light and Lasers.) Solution According to Snell’s law:

n1 sin 1 = n2 sin 2 where n stands for the index of refraction, , for the angle from the surface normal to the incident ray direction, and the subscripts 1 and 2 to outside the cornea and inside the cornea, respectively. The discussion relating to Figure 3-15 specifies the index of refraction of the cornea as 1.38. Substituting numbers into Snell’s law, we have

1 sin 30 = 1.38 sin 2; sin 30 = 0.5. (1)(0.5) Therefore, sin 2 = = 0.362 1.38 and 2 = 21.2. Thus, the incident parallel light ray is bent away from its original direction by an angle of 8.8.

Fine focusing of light coming from points other than infinity is accomplished by changing the shape of the lens. Muscles attached to the lens and to the eyeball accomplish this. In this way the eye may form a sharp focus of nearby objects. This process is called accommodation. In some cases, where the shape of the cornea is not correct, and the length of the eyeball is a bit too long or too short, corneal sculpting, performed with lasers, can be carried out to improve visual acuity. The spectral response of the eye is shown in Figure 3-16. The y-axis is the relative response of the eye as a function of wavelength. It is normalized to unity at its peak near 555 nm. This curve is the so-called photopic response, which is characteristic of the cones. The curve usually covers the range from 400 nm to 700 nm, but there is some weak response at longer wavelengths. The peak response is in the green portion of the spectrum, near 555 nm.

Module 2-3: Optical Detectors and Human Vision 31

Figure 3-16 Relative spectral response of the eye (photopic response) as a function of wavelength, normalized to unity at its peak value near 555 nm

The rods in the eye have a somewhat different response, called the scotopic response. The peak of the scotopic response is shifted toward the blue.

Example 13 How much is the photopic response of the eye reduced at a wavelength of 650 nm as compared to the value at the peak?

Solution According to Figure 3-16, the relative value of the photopic response at 650 nm is about 0.1, as compared to the peak value of 1 at 555 nm. Thus, the photopic response of the eye is reduced by 90% at 650 nm as compared to its peak value.

The interaction of light with the structures of the eye leads to the phenomenon called vision. Vision may be considered to be the sensation experienced by a human observer when the retina of the eye is stimulated by optical energy with appropriate wavelengths. The process of vision begins with photochemical changes that occur within the retinal cells when light is absorbed by them. It continues as the complex organic molecules produced in the photochemical processes cause signals to propagate through the nerve fibers to the brain. Finally, in a very important portion of the process, the brain interprets the signals as images corresponding to external objects. This is the process by which the observer becomes aware of optical images through visual sensations.

32 Optics and Photonics Series, Course 2: Elements of Photonics Color Human vision includes the sensation of color. We see “colors” ranging from deep blue, around 400 nm, to deep red, around 700 nm. Our perception of the color of objects involves three characteristics—brightness, saturation, and hue. Taken together, the three attributes of brightness, saturation, and hue make up the sensation of color. We will discuss these three attributes one at a time. Brightness—For brightness, consider a series of neutral grays, ranging from white at one end to black at the other. White produces the greatest sensation of brightness and black the least. Other of the neutral grays are in between. A colored sample may be compared with the series of neutral grays. It will produce the same sensation of brightness as some member of the group of grays. Brightness is then defined as the attribute of any color that allows it to be considered as equivalent in the sensation produced by some member of the series of neutral grays. Saturation—The saturation is the attribute that describes the extent by which a color departs from a neutral gray and approaches a pure color. Hue—The attribute of hue is the property of color by which it can be perceived as ranging from red through orange, yellow, green, etc. It is related to a property called the dominant wavelength of the light, which will be defined later. We may clarify these concepts by considering the so-called chromaticity diagram. The chromaticity diagram allows us to specify any color in terms of numbers. The chromaticity diagram, usually presented in full color, is shown in Figure 3-17 in black and white. Usually the interior of the diagram is filled in with varying colors.

Figure 3-17 Black-and-white version of the chromaticity diagram. The point for white light is denoted as C. The numbers around the edge of the curve denote the wavelengths in nanometers of pure spectral colors, ranging from deep blue (400 nm) to deep red (700 nm).

Module 2-3: Optical Detectors and Human Vision 33 The wing-like boundary edge of the curve represents pure colors of the visible electromagnetic spectrum. These colors represent monochromatic light of one wavelength and are denoted by the wavelength in nanometers, ranging from 400 nm to 700 nm. A straight line from 400 nm to 700 nm completes the closed curve. The interior of the curve represents all colors. Shades of blue would be found inside the curve near the number 480, shades of yellow near 575, etc. The point marked C represents “white light,” or average daylight. Any color within the diagram can be expressed quantitatively by the two coordinates x and y. With the aid of the chromaticity diagram, the hue of a color can be expressed in terms of its principal wavelength. For a given color, with coordinates x and y in the diagram, a line is drawn from the point C through the point given by x and y and extended to the edge of the diagram, where it intersects the edge at some pure spectral color. The wavelength of that color is the principal wavelength for the given color. The purity, related to the saturation, may be found in the same way. On the line from C through the coordinates x and y to the edge of the diagram, the purity of the color expressed by x and y is the distance from C to the point represented by x and y, expressed as a percentage of the distance form C to the edge of the diagram. Thus, the purity of white light is 0% and the purity of a spectral color at the edge of the diagram is 100%.

Example 14 What is the principal wavelength and the purity of a color represented by the coordinates x = 0.4 and y = 0.5 in the chromaticity diagram?

Solution Beginning with Figure 3-17, draw a line from point C through point P at x = 0.4, y = 0.5 and extend the line to the edge of the diagram at Q. (See sketch.) The line should intersect the edge at approximately 570 nm, so the principal wavelength of the color represented by the point x = 0.4, y = 0.5 is 570 nm. To find the purity, measure the distance CP from point C to intersection point P located at x = 0.4, y = 0.5, and the distance CQ from point C to the edge of the diagram. The ratio of the two distances, CP/CQ, should be about 0.75, so the purity of the color represented by the point is 75%. That is, at point P for x = 0.4 and y = 0.5, the purity of the color there is about 75% of the spectral wavelength at 570 nm, a strong yellow color.

34 Optics and Photonics Series, Course 2: Elements of Photonics Defects of vision Vision can be imperfect in a number of ways. Some imperfections arise because there is an incorrect relation between the positions of various parts of the eye. In a normal, relaxed eye (looking far away), parallel light rays entering the eye will be focused on the retina, as shown in Figure 3-15. For very distant objects, the light rays coming from the object will be nearly parallel and the image of the object will be focused on the retina of the relaxed eye. If the eyeball is too long, parallel light rays will come to a focus in front of the retina. For this eyeball, the most distant object that will be in focus on the retina of the relaxed eye will be at a distance less than infinity. In this case, the eye is said to be nearsighted. The condition is called myopia. If the eyeball is too short, the focal point of parallel light rays will fall behind the retina. The eye is then said to be farsighted. This condition is called hyperopia. Another defect arises when the surface of the cornea is not spherical. It may be more sharply curved along one great circle than along another. This leads to a condition called astigmatism. Astigmatism makes it impossible to focus clearly on horizontal and vertical lines at the same time. The conditions of myopia, hyperopia, and astigmatism may all be alleviated by the use of corrective lenses. Another defect of vision, which does not arise from an improper geometry of the different parts of the eye, is color blindness. Color blindness, also called color vision deficiency, involves abnormalities that cause a person to be unable to distinguish between certain colors, and in general to perceive colors differently than most people. Color blindness arises from inherited defects in the pigment in cone cells in the retina. It may take on different degrees of severity, ranging from very mild to a situation in which the eye sees only shades of gray. Color blindness is a lifelong condition. It may disqualify people from certain occupations.

LABORATORY

In this laboratory, you will set up and operate circuitry for a silicon PIN photodiode and you will use the circuitry to measure chopped HeNe laser light and argon laser light. You will determine the relative response of the detector system at several wavelengths.

Materials Photodiode (Centronic OSD100-5T or equivalent) Operational amplifier (National Semiconductor LF356 or equivalent) Electric motor with toothed chopper wheel (Laser Precision CTX-534 Variable Speed Optical Chopper or equivalent) Helium-neon laser (few milliwatt output) Resistors (selection of values, kilohms to megohms)

Module 2-3: Optical Detectors and Human Vision 35 Neutral-density (ND) filters (selection of different values, with a total neutral density of at least 4) Oscilloscope Power meter (Spectra-Physics model 405 or equivalent) Argon ion laser (line tunable, with at least 4 visible wavelengths available) DC voltage source

Procedure The first part of the Procedure will involve fabrication of a circuit to operate a photodiode as a photovoltaic detector. You will use the circuit to measure chopped laser light and to measure the responsivity of the photodiode. (Recall that responsivity is defined as the detector output per unit of input power, here to be given in volts/watt.) First, you will set up circuitry for using the photodiode in the photovoltaic mode. Figure 3-18 shows the experimental setup. In this setup, the photodiode operates as a photovoltaic detector.

Figure 3-18 Experimental arrangement for measurements with photodiode operated in a photovoltaic mode

The toothed wheel is mounted on the electric motor. When it rotates, it chops the light. That is, it periodically blocks the HeNe laser light from reaching the detector. The speed of the motor should be adjusted so that the light is blocked 1000 times per second. This is a standard measurement condition. If the wheel has 10 teeth, for example, the motor should rotate at 100 revolutions per second. Assemble the circuit as shown in Figure 3-18. The value of the load resistor should be much smaller than that of the shunt resistance of the photodiode, generally specified by the manufacturer. The output of the circuit will be hooked to the input of the oscilloscope. The oscilloscope screen should show a square wave with a frequency of 1000 Hertz. Use the voltage calibration of the oscilloscope to measure the peak voltage of the signal. Then insert the power meter into the laser beam in front of the photodiode and measure the power in the beam. Calculate the responsivity of the detector and compare it to the manufacturer’s specification. Remove the power meter. Next, check the linearity of the detector response by inserting neutral-density filters into the path of the beam as indicated. Gradually increase the number of neutral-density filters and

36 Optics and Photonics Series, Course 2: Elements of Photonics record the total neutral density and the peak voltage at each step. Increase the neutral density to at least 4 while increasing the sensitivity of the oscilloscope display as necessary. Plot the peak voltage as a function of neutral density on semilogarithmic paper. The plot should be a straight line. Next, you will operate the photodiode in the photoconductive mode. The experimental arrangement is shown in Figure 3-19. Note that a DC voltage source (–V) is added in this figure.

Figure 3-19 Experimental arrangement for measurements with a photodiode operated in a photoconductive mode

Hook up the circuit as shown in Figure 3-19. The load resistor RL should be relatively large, in the megohm regime. The output of the circuit is used as the input to the oscilloscope. The output on the oscilloscope screen should be a 1000-Hz square wave. Use the voltage calibration of the oscilloscope to measure the peak voltage of the signal. Then insert the power meter into the laser beam in front of the photodiode and measure the power in the beam. Calculate the responsivity of the detector and compare it to the manufacturer’s specification. Next, you will investigate the effect of varying the load resistor. Remove the power meter and change the value of the load resistor. Use several different values of load resistor, and for each one record the value of the peak signal. Plot the peak signal as a function of the value of the load resistor. How does the signal vary with load resistance? Now you will measure the responsivity as a function of wavelength. One measurement is already available, at 633 nm. Use the line-tunable argon laser to obtain values for at least four other visible wavelengths. Replace the helium-neon laser in Figure 3-19 with the argon laser. Replace the load resistor with the same value that was used for the responsivity measurement at 633 nm. For each of four different argon laser wavelengths, measure the peak voltage on the oscilloscope and the laser power reading with the power meter in the same way that you measured them at 633 nm. If the argon laser power is too high, insert neutral-density filters in front of the photodiode and the power meter to reduce it to an appropriate value. Record the results and calculate the responsivity for each wavelength. Plot the responsivity as a function of wavelength and compare the result to the manufacturer’s specification.

Module 2-3: Optical Detectors and Human Vision 37 Data Table

Detector responsivity measurement (photovoltaic mode)

Voltage measurement ______Laser power ______Calculated detector responsivity ______Manufacturer’s quoted responsivity ______

Linearity measurement Measurement no. Neutral density Voltage 1. ______2. ______3. ______4. ______(Plot the results on semilog paper.) Detector responsivity measurement (photoconductive mode)

Voltage measurement ______Laser power ______Calculated detector responsivity ______Manufacturer’s quoted responsivity ______

Effect of load resistor Measurement no. Load resistor Signal voltage 1. ______2. ______3. ______4. ______(Plot the results.) Responsivity versus wavelength Measure- Manu- Voltage Power Calculated ment no. facturer’s 1. ______2. ______3. ______4. ______(Plot the results, including the value for 633 nm obtained earlier.)

38 Optics and Photonics Series, Course 2: Elements of Photonics PROBLEMS

1. Define detector responsivity, noise equivalent power, quantum efficiency, detectivity, and rise time. 2. Define sources of detector noise, including shot noise, Johnson noise, 1/f noise, and photon noise. Describe methods one can use to reduce these noise sources in the process of detecting optical radiation. 3. Describe and explain important types of photodetectors, including photon detectors, thermal detectors, photoemissive detectors, photoconductive detectors, photovoltaic detectors, and photomultiplier detectors. Describe the spectral response of each type. 4. Draw and describe a typical circuit used with a photovoltaic detector. 5. Draw and describe a typical circuit used with a photoconductive detector. 6. Describe concepts related to human vision, including structure of the eye, the formation of images by the eye, and common defects of vision. 7. With an irradiance of 0.001 W/cm2 incident on a detector of area 0.5 cm2 and with a bandwidth of 2 Hz, the ratio of the noise voltage to the signal voltage is 10. What is the noise equivalent power (NEP) of the detector? 8. A detector has a noise equivalent power (NEP) of 5 × 10–10 watts/(Hz)1/2 and an area of 0.2 cm2. What is its value of D* (detectivity)? 9. A detector has a responsivity of 0.12 ampere per watt at a wavelength of 1.06 ?m. What is the quantum efficiency of the detector? 10. A laser beam with total power of 22 watts is incident at normal incidence on a Lambertian surface. How much power reaches a detector with an area of 0.1 cm2 at an angle of 22 degrees located at a distance of 50 cm from where the beam strikes the surface?

REFERENCES

Dereniak, E. L., and D. G. Crowe. Optical Radiation Detectors. New York: Wiley, 1984. Kim, O. “A Look at High-Speed Detectors,” Lasers and Optronics, August 1999. Kruse, P. W., L. D. McGlauchlin, and R. B. McQuistan. Elements of Infrared Technology. New York: Wiley, 1962. Chapters 8–10. Lerner, E. J. “The Photodiode Is the Workhorse of Detection,” Laser Focus World, November 2001, p. 133. Vincent, J. D. Fundamentals of Infrared Detector Operation and Testing. New York: Wiley, 1990.

Module 2-3: Optical Detectors and Human Vision 39

Principles of Fiber Optic Communication

Module 2-4

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the fourth module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Storage, and Display 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. For students who may need assistance with or review of relevant mathematics concepts, a student review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was authored by Nick Massa (Springfield Technical Community College) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 1 Objectives ...... 2 Scenario ...... 3 Basic Concepts ...... 4 Historical Introduction ...... 4 Benefits of Fiber Optics ...... 6 The Basic Fiber Optic Link ...... 7 Fiber Optic Cable Fabrication ...... 7 Preform fabrication ...... 7 Outside vapor deposition (OVD) ...... 9 Inside vapor deposition (IVD) ...... 10 Vapor axial deposition (VAD) ...... 10 Total Internal Reflection (TIR) ...... 11 Transmission Windows ...... 12 The Optical Fiber ...... 14 Numerical aperture ...... 15 Fiber Optic Loss Calculations ...... 16 Power budget ...... 19 Types of Optical Fiber ...... 21 Step-index multimode fiber ...... 21 Step-index single-mode fiber ...... 22 Graded-index fiber ...... 23 Polarization-maintaining fiber ...... 23 Fiber Optic Cable Design ...... 24 Dispersion ...... 27 Calculating dispersion ...... 28 Intermodal dispersion ...... 29 Chromatic dispersion ...... 29 Fiber Optic Sources ...... 32 LEDs ...... 32 Laser diodes ...... 33 Fiber Optic Detectors ...... 34 Connectors ...... 36 Fiber Optic Couplers ...... 37 Star couplers ...... 37 T-couplers ...... 39 Wavelength-division multiplexers ...... 39 Fiber Bragg gratings ...... 41 Erbium-doped fiber amplifiers (EDFA) ...... 42 Fiber Optic Sensors ...... 42 Extrinsic fiber optic sensors ...... 43 Intrinsic sensors ...... 45 Laboratory ...... 48 Problems ...... 51 References ...... 52

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-4 Principles of Fiber Optic Communication

INTRODUCTION

The dramatic reduction of transmission loss in optical fibers coupled with equally important developments in the area of light sources and detectors have brought about a phenomenal growth of the fiber optic industry during the past two decades. Its high bandwidth capabilities and low attenuation characteristics make it ideal for gigabit data transmission and beyond. The birth of optical fiber communication coincided with the fabrication of low-loss optical fibers and room-temperature operation of semiconductor lasers in 1970. Ever since, the scientific and technological progress in this field has been so phenomenal that within a brief span of 30 years we are already in the fifth generation of optical fiber communication systems. Recent developments in optical amplifiers and wavelength division multiplexing (WDM) are taking us to a communication system with almost “zero” loss and “infinite” bandwidth. Indeed, optical fiber communication systems are fulfilling the increased demand on communication links, especially with the proliferation of the Internet. In this module, Principles of Fiber Optic Communication, you will be introduced to the building blocks that make up a fiber optic communication system. You will learn about the different types of optical fiber and their applications, light sources and detectors, couplers, splitters, wavelength-division multiplexers, and other components used in fiber optic communication systems. Non-communications applications of fiber optics including illumination with coherent light bundles and fiber optic sensors will also be covered.

PREREQUISITES

Prior to this module, you are expected to have covered Modules 1-1, Nature and Properties of Light; Module 1-3, Light Sources and Laser Safety; Module 1-4, Basic Geometrical Optics; and Module 1-5, Basic Physical Optics; Module 1-6, Principles of Lasers; and Module 2-3, Optical Detectors and Human Vision. In addition, you should be able to manipulate and use algebraic formulas involving trigonometric functions and deal with units.

1 OBJECTIVES

Upon completion of this module, you will be able to:  Identify the basic components of a fiber optic communication system  Discuss light propagation in an optical fiber  Identify the various types of optical fibers  Discuss the dispersion characteristics for various types of optical fibers  Identify selected types of fiber optic connectors  Calculate numerical aperture (N.A.), intermodal dispersion, and material dispersion.  Calculate decibel and dBm power  Calculate the power budget for a fiber optic system  Calculate the bandwidth of an optical fiber  Describe the operation and applications of fiber optic couplers  Discuss the differences between LEDs and laser diodes with respect to performance characteristics  Discuss the performance characteristics of optical detectors  Discuss the principles of wavelength-division multiplexing (WDM)  Discuss the significance of the International Telecom Union grid (ITU grid)  Discuss the use of erbium-doped fiber amplifiers (EDFA) for signal regeneration  Describe the operation and applications of fiber Bragg gratings  Describe the operation and application of fiber optic circulators  Describe the operation and application of fiber optic sensors.

2 Optics and Photonics Series, Course 2: Elements of Photonics SCENARIO

Dante is about to complete a bachelor’s degree in fiber optic technology, a field that has interested him since high school. To prepare himself for the highly rewarding careers that fiber optics offers, Dante took plenty of math and science in high school and then enrolled in an associate degree program in laser electro-optics technology at Springfield Technical Community College (STCC) in Springfield, Massachusetts. Upon graduation from STCC he accepted a position as an electro-optics technician at JDS Uniphase Corporation in Bloomfield, Connecticut. The company focuses on precision manufacturing of the high-speed fiber optic modulators and components that are used in transmitters for the telecommunication and cable television industry. As a technician at JDS Uniphase, Dante was required not only to understand how fiber optic devices work but also to have an appreciation for the complex manufacturing processes that are required to fabricate the devices. The background in optics, fiber optics, and electronics that Dante received at STCC proved to be invaluable in his day-to-day activities. On the job, Dante routinely worked with fusion splicers, optical power meters, and laser sources and detectors, as well as with optical spectrum analyzers and other sophisticated electronic test equipment. After a few years as an electro-optics technician, Dante went on to pursue a bachelor’s degree in fiber optics. (The courses he had taken at STCC transferred, so he was able to enroll in his bachelor’s program as a junior.) Because of his hard work on the job at JDS Uniphase, Dante was awarded a full scholarship and an internship at JDS Uniphase. This allowed Dante to complete his degree while working for JDS Uniphase part time. According to Dante, “the experience of working in a high-tech environment while going to school really helps you see the practical applications of what you are learning—which is especially important in a rapidly changing field like fiber optics.”

Module 2-4: Principles of Fiber Optic Communication 3 BASIC CONCEPTS

Historical Introduction Communication implies transfer of information from one point to another. When it is necessary to transmit information, such as speech, images, or data, over a distance, one generally uses the concept of carrier wave communication. In such a system, the information to be sent modulates an electromagnetic wave such as a radio wave, microwave, or light wave, which acts as a carrier. (Modulation means to vary the amplitude or frequency in accordance with an external signal.) This modulated wave is then transmitted to the receiver through a channel and the receiver demodulates it to retrieve the imprinted signal. The carrier frequencies associated with TV broadcast ( 50–900 MHz) are much higher than those associated with AM radio broadcast ( 600 kHz–20 MHz). This is due to the fact that, in any communication system employing electromagnetic waves as the carrier, the amount of information that can be sent increases as the frequency of the carrier is increased.1 Obviously, TV broadcast has to carry much more information than AM broadcasts. Since optical beams have frequencies in the range of 1014 to 1015 Hz, the use of such beams as the carrier would imply a tremendously large increase in the information-transmission capacity of the system as compared to systems employing radio waves or microwaves. In a conventional telephone system, voice signals are converted into equivalent electrical signals by the microphone and are transmitted as electrical currents through metallic (copper or aluminum) wires to the local telephone exchange. Thereafter, these signals continue to travel as electric currents through metallic wire cable (or for long-distance transmission as radio/microwaves to another telephone exchange) usually with several repeaters in between. From the local area telephone exchange, at the receiving end, these signals travel via metallic wire pairs to the receiver telephone, where they are converted back into corresponding sound waves. Through such cabled wire-pair telecommunication systems, one can at most send 48 simultaneous telephone conversations intelligibly. On the other hand, in an optical communication system that uses glass fibers as the transmission medium and light waves as carrier waves, it is distinctly possible today to have 130,000 or more simultaneous telephone conversations (equivalent to a transmission speed of about 10 Gbit/s) through one glass fiber no thicker than a human hair. This large information-carrying capacity of a light beam is what generated interest among communication engineers and caused them to explore the possibility of developing a communication system using light waves as carrier waves. The idea of using light waves for communication can be traced as far back as 1880 when Alexander Graham Bell invented the photophone (see Figure 4-1) shortly after he invented the telephone in 1876. In this remarkable experiment, speech was transmitted by modulating a light beam, which traveled through air to the receiver. The flexible reflecting diaphragm (which

1 The information-carrying capacity of an electromagnetic carrier is approximately proportional to the difference between the maximum and the minimum frequencies (technically known as bandwidth of the channel) that can be transmitted through the communication channel. The higher one goes in the electromagnetic spectrum in frequency scale, the higher the bandwidth and hence the information-carrying capacity of such a communication system. That is why historically the trend in carrier wave communication has been always toward bandwidths of higher and higher frequencies.

4 Optics and Photonics Series, Course 2: Elements of Photonics could be activated by sound) was illuminated by sunlight. A parabolic reflector placed at a distance of about 200 m received the reflected light. The parabolic reflector concentrated the light on a photoconducting selenium cell, which formed a part of a circuit with a battery and a receiving earphone. Sound waves present in the vicinity of the diaphragm vibrated the diaphragm, which led to a consequent variation of the light reflected by the diaphragm. The variation of the light falling on the selenium cell changed the electrical conductivity of the cell, which in turn changed the current in the electrical circuit. This changing current reproduced the sound on the earphone.

Figure 4-1 Schematic of the photophone invented by Bell. In this system, sunlight was modulated by a vibrating diaphragm and transmitted through a distance of about 200 meters in air to a receiver containing a selenium cell connected to the earphone.

After succeeding in transmitting a voice signal over 200 meters using a light signal, Bell wrote to his father: “I have heard a ray of light laugh and sing. We may talk by light to any visible distance without any conducting wire.” To quote from MacLean: “In 1880 he [Graham Bell] produced his ‘photophone’ which to the end of his life, he insisted was ‘…. the greatest invention I have ever made, greater than the telephone…’ Unlike the telephone, though, it had no commercial value.” The modern impetus for telecommunication with carrier waves at optical frequencies owes its origin to the discovery of the laser in 1960. Earlier, no suitable light source was available that could reliably be used as the information carrier.2 At around the same time, telecommunication traffic was growing very rapidly. It was conceivable then that conventional telecommunication systems based on coaxial cables, radio and microwave links, and wire-pair cable, could soon reach a saturation point. The advent of lasers immediately triggered a great deal of investigation aimed at examining the possibility of building optical analogues of conventional communication systems. The very first such modern optical communication experiment involved laser beam transmission through the atmosphere. However, it was soon realized that shorter-wavelength laser beams could not be sent in open atmosphere through reasonably long distances to carry signals, unlike, for example, the longer-wavelength microwave or radio systems. This is due to

2 We may mention here that, although incoherent sources like light-emitting diodes (LED) are also often used in present-day optical communication systems, it was the discovery of the laser that triggered serious interest in the development of optical communication systems.

Module 2-4: Principles of Fiber Optic Communication 5 the fact that a laser light beam (of wavelength about 1 m) is severely attenuated and distorted owing to scattering and absorption by the atmosphere. Thus, for reliable light-wave communication under terrestrial environments it would be necessary to provide a “guiding” medium that could protect the signal-carrying light beam from the vagaries of the terrestrial atmosphere. This guiding medium is the optical fiber, a hair-thin structure that guides the light beam from one place to another through the process of total internal reflection (TIR), which will be discussed in the next section.

Benefits of Fiber Optics Fiber optic communication systems have many advantages over copper wire-based communication systems. These advantages include:  Long-distance signal transmission The low attenuation and superior signal quality of fiber optic communication systems allow communications signals to be transmitted over much longer distances than metallic-based systems without signal regeneration. In 1970, Kapron, Keck, and Maurer (at Corning Glass in USA) were successful in producing silica fibers with a loss of about 17 dB/km at a wavelength of 633 nm. Since then, the technology has advanced with tremendous rapidity. By 1985 glass fibers were routinely produced with extremely low losses (< 0.2 dB/km). Voice-grade copper systems require in-line signal regeneration every one to two kilometers. In contrast, it is not unusual for communications signals in fiber optic systems to travel over 100 kilometers (km), or about 62 miles, without signal amplification of regeneration.  Large bandwidth, light weight, and small diameter Today’s applications require an ever-increasing amount of bandwidth. Consequently, it is important to consider the space constraints of many end users. It is commonplace to install new cabling within existing duct systems or conduit. The relatively small diameter and light weight of optical cable make such installations easy and practical, saving valuable conduit space in these environments.  Nonconductive Another advantage of optical fibers is their dielectric nature. Since optical fiber has no metallic components, it can be installed in areas with electromagnetic interference (EMI), including radio frequency interference (RFI). Areas with high EMI include utility lines, power-carrying lines, and railroad tracks. All-dielectric cables are also ideal for areas of high lightning-strike incidence.  Security Unlike metallic-based systems, the dielectric nature of optical fiber makes it impossible to remotely detect the signal being transmitted within the cable. The only way to do so is by accessing the optical fiber. Accessing the fiber requires intervention that is easily detectable by security surveillance. These circumstances make fiber extremely attractive to governmental bodies, banks, and others with major security concerns.

6 Optics and Photonics Series, Course 2: Elements of Photonics The Basic Fiber Optic Link Figure 4-2 shows a typical optical fiber communication system. It consists of a transmitting device T that converts an electrical signal into a light signal, an optical fiber cable that carries the light, and a receiver R that accepts the light signal and converts it back into an electrical signal. The complexity of a fiber optic system can range from very simple (i.e., local area network) to extremely sophisticated and expensive (i.e., long-distance telephone or cable television trunking). For example, the system could be built very inexpensively using a visible LED, plastic fiber, a silicon photodetector, and some simple electronic circuitry. On the other hand, a system used for long-distance, high-bandwidth telecommunication that employs wavelength-division multiplexing, erbium-doped fiber amplifiers, external modulation using distributed feedback (DFB) lasers with temperature compensation, fiber Bragg gratings, and high-speed infrared photodetectors can be very expensive. The basic question is how much information is to be sent and how far does it have to go? With this in mind we will first examine the basic principles of fiber optics. We will then move on to the various components that make up a fiber optic communication system, and finally look at the considerations that must be taken into account in the design of a simple fiber optic link

Figure 4-2 A typical fiber optic communication system: T, transmitter; C, connector; S, splice; R, repeater; D, detector, and coils of fibers

Fiber Optic Cable Fabrication The fabrication of fiber optic cable consists of two processes: preform fabrication and fiber draw. Preform fabrication involves manufacturing a glass “perform” consisting of a core and cladding with the desired index profile of the fiber. Fiber draw involves heating the preform to about 2000 C and drawing it down to the desired diameter and adding a protective buffer coating.

Preform fabrication The fabrication process for creating the glass preform (Figure 4-3) from which fiber optic cable is drawn involves forming a glass rod that has the desired index profile and core/cladding dimension ratio. This process, known as chemical vapor deposition or CVD, was developed by Corning scientists in the 1970’s and has made it possible to create ultra-pure glass fiber suitable for optical transmission over very long distances. Using the CVD method, the ultra-pure glass that makes up the preform is synthesized from ultra-pure liquid or gaseous reactants, typically, silicon chloride (SiCl4), germanium chloride (GeCl4), oxygen, and hydrogen. This reaction

Module 2-4: Principles of Fiber Optic Communication 7 produces a very fine “soot” of silicon and germanium oxide, which is then vitrified forming ultra-pure glass.

Figure 4-3 Two views of fiber optic preform fabrication (Sources: Upper—Fibercore Limited of Chilworth UK, a wholly-owned subsidiary of Scientific Atlanta Inc. of Lawrenceville, Georgia; used by permission. Lower—OFS; used by permission)

There are three processes commonly used to manufacture glass preforms: 1. Outside vapor deposition (OVD): Silicon and germanium particles are deposited on the surface of a rotating target rod. 2. Inside vapor deposition (IVD): A soot consisting of silicon and germanium particles is deposited on the inside walls of hollow glass tube. 3. Axial vapor deposition (AVD): Deposition is done axially, directly in the glass preform. Inside vapor deposition (IVD) and outside vapor deposition (OVD) require a collapse stage to close the hollow gap in the center of the preform after the soot is deposited. Outside vapor

8 Optics and Photonics Series, Course 2: Elements of Photonics deposition (OVD) and axial vapor deposition (AVD) require sintering to vitrify the soot after they have been deposited.

Outside vapor deposition (OVD) The OVD process for manufacturing optical fiber typically consists of three stages: 1. Laydown – Depositing the glass soot which will eventually form the glass preform 2. Consolidation – Heating the glass soot in a furnace to solidify the glass preform 3. Draw – Heating up the glass preform and drawing the glass into a fine strand of fiber In the laydown stage (see Figure 4-4), many fine layers of silicon and germanium soot are deposited onto a ceramic rod. During the laydown stage, SiCl4 and GeCl4 vapors are passed over the rotating rod and react with oxygen to generate SiO2 and GeO2. A traversing burner flame forms fine soot particles of silica and germania on the rod forming the core and cladding layers of the fiber. The GeCl4 serves as a dopant to increase the index of refraction of the core.

Figure 4-4 Outside vapor deposition

The OVD process is distinguished by the method of depositing the soot on the ceramic rod. The core material is deposited first, followed by the cladding material. Since the core and cladding materials are deposited using vapor deposition, the entire resulting preform is extremely pure. When the deposition process is complete, the ceramic rod is removed from the center of the porous preform and the hollow preform is placed into a consolidation furnace. During the high temperature consolidation process, water vapor is removed from the preform and sintering condenses the preform into a solid, dense, transparent rod.

Module 2-4: Principles of Fiber Optic Communication 9 During the draw process (see Figure 4-5), the finished glass preform is placed in a draw tower and drawn into a single continuous strand of glass fiber. A draw tower consists of a furnace to heat up the glass preform into molten glass, a diameter-measuring device (typically a laser micrometer), a coating chamber for applying a protective coating, and a take-up spool for winding the finished fiber. A typical draw tower can be several stories tall. The glass preform is first lowered into the draw furnace. The tip of the preform is then heated to about 2000 C until a piece of molten glass (called a “gob”), begins to fall due to the force of gravity. When the gob falls, it pulls behind it a fine glass fiber and cools. A draw tower operator then cuts off the gob and threads the fine fiber strand into a tractor assembly. The tractor assembly speeds up or slows down to provide tension to the fiber Figure 4-5 Fiber draw process stand, which controls the diameter of the fiber. The laser-based diameter monitor measures the diameter of the fiber hundreds of times per second to ensure that the outside diameter of the fiber is held to acceptable tolerance levels (typically 1 um). As the fiber is drawn, a protective coating is applied and cured using UV light.

Inside vapor deposition (IVD) Inside vapor deposition uses a process known as modified chemical vapor deposition (MCVD) to deposit the soot on the inside walls of a tube of ultra- pure silica. (See Figure 4-6.) In this method, a tube of ultra-pure silica is mounted on a glass-working lathe, equipped with an oxygen-hydrogen burner. The chlorides and oxygen are introduced from one end of the tube, and caused to react by the heat of the burner. The resulting soot (submicron Figure 4-6 Inside vapor deposition particles of silica and germania) is deposited inside the tube through a phenomenon known as thermophoresis. As the burner passes over the deposits, they are vitrified into solid glass. By varying the ratio of silicon and germanium chloride, the refractive index profile is built layer after layer, from the outside to the core. The more germanium, the higher the refractive index of the glass. When the deposition process is complete, the preform is heated to collapse the hollow tube into a solid preform.

10 Optics and Photonics Series, Course 2: Elements of Photonics Vapor axial deposition (VAD) The vapor axial deposition process involves the deposition of glass soot on the end of a rotating pure silica boule. (See Figure 4-7.) The initial soot deposit forms the core of the preform. Additional layers of soot are then added radially outward until the final desired refractive index profile is achieved. The benefit of vapor axial deposition is that no hole is created. This eliminates the need for both a central ceramic rod (as in OVD) and the need to collapse the preform to eliminate the hole as in IVD.

Figure 4-7 Vapor axial deposition

Total Internal Reflection (TIR) At the heart of an optical communication system is the optical fiber that acts as the transmission channel carrying the light beam loaded with information. As mentioned earlier, the guidance of the light beam (through the optical fiber) takes place because of the phenomenon of total internal reflection (TIR), which we will now discuss. You learned about critical angles, TIR, etc. in Module 1-4, Basic Geometrical Optics. You need now to refresh your memory and apply these ideas more directly to the physics of optical fibers. We first define the refractive index (n) of a medium: c n = v (4-1) where c ( 3 × 108 m/s) is the speed of light in free space and v represents the velocity of light in that medium. For example, for light waves, n  1.5 for glass and n  1.33 for water.

Figure 4-8 (a) A ray of light incident on a denser medium (n2 > n1). (b) A ray incident on a rarer medium (n2 < n1). (c) For n2 < n1, if the angle of incidence is greater than the critical angle, the incident ray will undergo total internal reflection.

Module 2-4: Principles of Fiber Optic Communication 11 As you know, when a ray of light is incident at the interface of two media (like air and glass), the ray undergoes partial reflection and partial refraction as shown in Figure 4-8a. The vertical dotted line represents the normal to the surface. The angles 1, 2, and r represent the angles that the incident ray, refracted ray, and reflected ray make with the normal. According to Snell’s law and the law of reflection,

n1 sin 1 = n2 sin 2 (Snell’s law) (4-2) 1 = r (Law of reflection) Further, the incident ray, reflected ray, and refracted ray lie in the same plane. In Figure 4-8a, we know from Snell’s law that since n2 > n1, we must have 2 < 1 (i.e., the refracted ray will bend toward the normal). On the other hand, if a ray is incident at the interface of a medium where n2 < n1, the refracted ray will bend away from the normal (see Figure 4-8b). The angle of incidence, for which the angle of refraction is 90, is known as the critical angle and is denoted by c. Thus, when n sin–1 2 (4-3) 1c  n1 the angle of incidence exceeds the critical angle (i.e., when 1 > c), there is no refracted ray and we have total internal reflection TIR. (See Figure 4-8c and Figure 4-10b).

Example 1

For a glass-air interface, n1 = 1.5, n2 = 1.0, and the critical angle is given by –1 c = sin (1.0/1.5)  41.8°

On the other hand, for a glass-water interface, n1 = 1.5, n2 = 1.33, and –1 c = sin (1.33/1.5)  62.5.

Transmission Windows Optical fiber communication systems transmit information at wavelengths that are in the near- infrared portion of the spectrum, just above the visible, and thus undetectable to the unaided eye. Typical optical transmission wavelengths are 850 nm, 1310 nm, and 1550 nm. Both lasers and LEDs are used to transmit light through optical fiber. Lasers are usually used primarily for 1310 and 1550-nm single-mode applications. LEDs are used for 850 nm multimode applications.

12 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 4-9 Typical wavelength dependence of attenuation for a silica fiber. Notice that the lowest attenuation occurs at 1550 nm [adapted from Miya, Hasaka, and Miyashita].

Figure 4-9 shows the spectral dependence of fiber attenuation (i.e., dB loss per unit length) as a function of wavelength of a typical silica optical fiber. The losses are caused by various mechanisms such as Rayleigh scattering, absorption due to metallic impurities and water in the fiber, and intrinsic absorption by the silica molecule itself. The Rayleigh scattering loss varies 4 as 1/0 , i.e., longer wavelengths scatter less than shorter wavelengths. (Here 0 represents the free space wavelength.) As we can see in Figure 4-9, Rayleigh scatter causes the dB loss/km to decrease gradually as the wavelength increases from 800 nm to 1550 nm. The two absorption – peaks around 1240 nm and 1380 nm are primarily due to traces of OH ions and metallic ions in the fiber. For example, even 1 part per million (ppm) of iron can cause a loss of about – 0.68 dB/km at 1100 nm. Similarly, a concentration of 1 ppm of OH ion can cause a loss of 4 dB/km at 1380 nm. This shows the level of purity that is required to achieve low-loss optical fibers. If these impurities are removed, the two absorption peaks will disappear. For 0 > 1600 nm, the increase in the dB/km loss is due to the absorption of infrared light by silica molecules. This is an intrinsic property of silica, so no amount of purification can remove this infrared absorption tail. As you see, there are two windows at which the dB/km loss attains its minimum value. The first window is around 1300 nm (with a typical loss coefficient of less than 1 dB/km) where, fortunately (as we will see later), the material dispersion is negligible. However, the loss coefficient is at its absolute minimum value of about 0.2 dB/km around 1550 nm. The latter window has become extremely important in view of the availability of erbium-doped fiber amplifiers.

Module 2-4: Principles of Fiber Optic Communication 13 The Optical Fiber An optical fiber (Figure 4-10) consists of a central glass core of radius “a” surrounded by an outer cladding made of glass with a slightly lower refractive index. The corresponding refractive index distribution (in the transverse direction) is given by: nn for ra 1 (4-4) nn 2 for ra

Figure 4-10 (a) A glass fiber consists of a cylindrical central core surrounded by a cladding material of slightly lower refractive index. (b) Light rays impinging on the core-cladding interface at an angle  greater than the critical angle c are trapped inside the core of the fiber and reflected back and forth (A, C, B, etc.) along the core-cladding interface.

Figure 4-10 shows a light ray incident on the air-core left interface at an angle i. The ray refracts at angle  in accordance with Snell’s law and then strikes the core-cladding interface at angle . In the drawing shown, the angle  is greater than the critical angle c defined in Equation 4-3, thus leading to total internal reflection at A. The reflected ray is totally internally reflected again at C and B and so on, remaining trapped in the fiber as it propagates along the core axis. The core diameter d = 2a of a typical telecommunication-grade multimode fiber is approximately 62.5 m with an outer cladding diameter of 125 um. The cladding index n2 is approximately 1.45 (pure silica), and the core index n1, barely larger, around 1.465. The cladding is usually pure silica while the core is usually silica doped with germanium, which increases the refractive index slightly from n2 to n1. The core and cladding are fused together during the manufacturing process and typically not separable. An outside plastic buffer is usually added to protect the fiber from environmental contaminants.

14 Optics and Photonics Series, Course 2: Elements of Photonics Numerical aperture One of the more important parameters associated with fiber optics is the numerical aperture. The numerical aperture of a fiber is a measure of its light-gathering ability and is defined by

N.A. = Sin (a)max (4-5) where (a)max is the maximum half-acceptance angle of the fiber, as shown in Figure 4-11.

Figure 4-11 Numerical aperture

The larger the numerical aperture, the greater the light gathering ability of the fiber. Typical values for N.A. are between 0.2 and 0.3 for multimode fiber and 0.1 to 0.2 for single-mode fiber. The numerical aperture is an important quantity because it is used to determine the coupling and dispersion characteristics of a fiber. For example, a large numerical aperture allows for more light to be coupled into the fiber but at the expense of modal dispersion, which causes pulse spreading and ultimately bandwidth limitations. The numerical aperture (N.A.) is related to the index of refraction of the core and cladding by the following equation:

22 N.A. sin(a )max nn 1  2 (4-6) As can be seen, the larger the difference between the core and cladding index, the larger the numerical aperture and hence more modal dispersion. The N.A. may also be expressed in terms of the relative refractive index difference termed , where

22 ()nn12  2 (4-7) 2n1 so that, with Equation 4-6, we get Equation 4-8. (N.A.)2  2 (4-8) 2n1

Module 2-4: Principles of Fiber Optic Communication 15 Combining Equations 4-6 and 4-8, we obtain a useful relation, Equation 4-9.

N.A. sin(a )maxn 1 2 (4-9)

In short, a large N.A. represents a large difference in refractive index, leading to a large acceptance angle and hence a large numerical aperture. However, this can lead to serious bandwidth limitations. Typical values for  range from 0.01 to 0.03 or 1 to 3 %

Example 2

For a typical step-index (multimode) fiber with core index n1  1.45 and   0.01, we get

sin(a)max = n1 2 1.45 2  (0.01)  0.205

so that (a)max  12. Thus, all light entering the fiber must be within a cone of half-angle 12°. The full acceptance angle is 2  12° = 24°.

Fiber Optic Loss Calculations Loss in a fiber optic system is expressed in terms of the optical power available at the output end with respect to the optical power at the input. As follows: P Loss = out (4-10) Pin where Pin is the input power to the fiber and Pout is the power available at the output end of the fiber. For convenience, fiber optic loss is often expressed in terms of decibels (dB) where

Pout LossdB = 10 log (4-11) Pin Fiber optic cable manufacturers usually specify loss in optical fiber in terms of decibels per kilometer (dB/km), as discussed earlier in connection with Figure 4-9.

Example 3

A fiber of 50-km length has Pin = 10 mW and Pout = 1 mW. Find the loss in dB/km. From Equation 4-11

1mW LossdB  10log 10dB (The negative sign indicates a loss.) 10 mW And so the loss per unit length of fiber, dB/km, is equal to Loss(dB/km) = (–10 dB/50 km) = –0.2 dB/km

16 Optics and Photonics Series, Course 2: Elements of Photonics Example 4 A 10-km fiber optic communication system link has a fiber loss of 0.30 dB/km. Find the output power if the input power is 20 mW. Solution x From Equation 4-11, making use of the relationship that y = 10 if x = log y,

Pout LossdB  10 log Pin

Loss P dB  log out 10 Pin which becomes, then,

LossdB P 1010  out . Pin So, finally, we have

LossdB 10 PPout  in 10 (4-12)

For fiber with a 0.30-dB/km loss characteristic, the lossdB for 10 km of fiber becomes

LossdB = 10 km × (–0.30 dB/km) = –3 dB Plugging this back into Equation 4-12 we get,

3 10 Pout 20 mW 10 10 mW

Optical power in fiber optic systems is often expressed in terms of dBm, a decibel term that references power to a 1 mWatt (milliwatt) input power level. Optical power here can refer to the power of a laser source or just to the power somewhere in the system. If Pin in Equation 4-11 is given as 1 milliwatt, then the power in dBm can be determined using equation 4-13:

Pout P(dBm)  10log  (4-13) 1 mW With optical power expressed in dBm, output power anywhere in the system can be determined simply by expressing the input power in dBm and then subtracting the individual component losses, also expressed in dB. It is important to note that an optical source with a power input of 1 mW can be expressed as 0 dBm, as indicated by Equation 4-13, since 1 mW 10log  = 10log(1) = (10)(0) = 0. 1 mW The use of decibels provided a convenient method of expressing optical power in fiber optic systems. For example, for every 3 dB loss in optical power, the power in milliwatts is cut in half. Consequently, for every 3-dB increase in optical power, the optical power in milliwatts is doubled. For example, a 3-dBm optical source has a P of 2 mW, whereas a –6-dBm source has a

Module 2-4: Principles of Fiber Optic Communication 17 P of 0.25 mW, as can be verified with Equation 4-13. Furthermore, every increase or decrease of 10 dB in optical power corresponds to a 10-fold increase or decrease in optical power in expressed in milliwatts. For example, whereas 0 dBm corresponds to 1-milliwatt of optical power, 10 dBm would be 10 milliwatts, and 20 dBm would be 100 milliwatts. Similarly, –10 dBm corresponds to 0.1 milliwatt, and –20 dBm would be 0.01 milliwatts, etc.

Example 5 A 3-km fiber optic system has an input power of 2 mW and a loss characteristic of 2 dB/km. Determine the output power of the fiber optic system in mW.

Solution Using Equation 4-13, we convert the source power of 2 mW to its equivalent in dBm: 2 mW Input powerdBm  10log 3 dBm 1 mW

The lossdB for the 3-km cable is,

LossdB = 3 km  2 dB/km = 6 dB Thus, power in dB is

(Output power)dB = +3 dBm – 6 dB = –3 dBm Using Equation 4-13 to convert the output power of –3 dBm back to milliwatts, we have P(mW) P(dBm) = 10log 1 mW

PP(dBm) (mW) or  log , 10 1 mW

P(mW) P(dBm) or = 1010 1(mW)

P(dBm) so that P(mW) = 1 mW 1010 Plugging in for P(dBm) = –3 dBm, we get for the output power in milliwatts

–3 10 Pout (mW) = 1 mW 10 = 0.5 mW

Note that one can also use Equation 4-12 to get the same result, where now Pin = 2 mW and LossdB = –6 dB: Loss dB 10 PPout =10in 

–6 10 or Pout =2 mW 10 = 0.5 mW, the same as above.

18 Optics and Photonics Series, Course 2: Elements of Photonics Power budget When designing a fiber optic communication system, one of the main factors that must be considered is whether or not there is enough power available at the receiver to detect the transmitted signal after all of the system losses have been accounted for. The process for accounting for all of the system losses is called a power budget. The power arriving at the detector must be sufficient to allow clean detection with few errors. Clearly, the signal at the receiver must be larger than the noise level. The power at the detector, Pr, must be above the threshold level or receiver sensitivity Ps.

Pr  Ps (4-14)

The receiver sensitivity Ps is the signal power, in dBm, at the receiver that results in a particular bit error rate (BER). Typically the BER is chosen to be at most one error in 1012 bits or 10–12.

Example 6 –12 A receiver has sensitivity Ps of – 45 dBm for a BER of 10 . What is the minimum power that must be incident on the detector? Solution Use Equation 4-13 to find the source power in milliwatts, given the power sensitivity in dBm. Thus, P –45 dBm = 10 log 1 mW

P 45 dBm or 10 10 , 1 mW

so that P = (1 mW)  10–4.5 = 3.16  10–5 mW = 31.6 microwatts

for a probability of error of 1 in 1012.

The received power at the detector is a function of:

1. Power emanating from the light source (PL)

2. Source-to-fiber loss (Lsf)

3. Fiber loss per km (FL) for a length of fiber (L)

4. Connector or splice losses (Lconn)

5. Fiber-to-detector loss (Lfd)

The power margin is the difference between the received power Pr and the receiver sensitivity Ps by some margin Lm.

Lm = Pr – Ps (4-15) where Lm is the loss margin in dB, Pr is the received power, and Ps is the receiver sensitivity in dBm.

Module 2-4: Principles of Fiber Optic Communication 19 If all of the loss mechanisms in the system are taken into consideration, the loss margin can be expressed as Equation 4-16.

Lm = PL – Lsf – (FL  L) – Lconn – Lfd – Ps (4-16) All units are in dB and dBm.

Example 7 A system has the following characteristics:

Source power (PL) = 2 mW (3 dBm)

Source to fiber loss (Lsf) = 3 dB

Fiber loss per km (FL) = 0.5 dB/km Fiber length (L) = 40 km

Connector loss (Lconn) = 1 dB (one connector between two 20-m fiber lengths)

Fiber to detector loss (Lfd) = 3 dB

Receiver sensitivity (Ps) = –36 dBm

Find the loss margin Lm. Solution

Lm = 3 dBm – 3 dB – (40 km  0.5 dB/km) – 1 dB – 3 dB – (–36 dBm) = 12 dB This particular fiber optic loss budget is illustrated in Figure 4-12, with each loss graphically depicted.

Figure 4-12 Fiber optic loss budget

20 Optics and Photonics Series, Course 2: Elements of Photonics Types of Optical Fiber There are three basic types of fiber optic cable used in communication systems: step-index multimode, step-index single-mode, and graded-index multimode. These are illustrated in Figure 4-13.

Figure 4-13 Types of fiber

Step-index multimode fiber Step-index multimode fiber has an index of refraction profile that “steps” from low-to-high-to- low as measured from cladding-to-core-to-cladding. A relatively large core diameter 2a and numerical aperture N.A. characterize this fiber. The core/cladding diameter of a typical multimode fiber used for telecommunication is 62.5/125 m (about the size of a human hair). The term “multimode” refers to the fact that multiple modes or paths through the fiber are possible, as indicated in Figure 4-13a. Step-index multimode fiber is used in applications that require high bandwidth (< 1 GHz) over relatively short distances (< 3 km) such as a local area network or a campus network backbone. The major benefits of multimode fiber are: (1) it is relatively easy to work with; (2) because of its larger core size, light is easily coupled to and from it; (3) it can be used with both lasers and LEDs as sources; and (4) coupling losses are less than those of the single-mode fiber. The drawback is that because many modes are allowed to propagate (a function of core diameter, wavelength, and numerical aperture) it suffers from intermodal dispersion, which will be discussed in the next section. Intermodal dispersion limits bandwidth, which translates into lower data rates.

Module 2-4: Principles of Fiber Optic Communication 21 Step-index single-mode fiber Single-mode step-index fiber (Figure 4-13b) allows for only one path, or mode, for light to travel within the fiber. In a multimode step-index fiber, the number of modes Mn propagating can be approximated by V 2 M  (4-17) n 2 Here V is known as the normalized frequency, or the V-number, which relates the fiber size, the refractive index, and the wavelength. The V-number is given by Equation 4-18. 2a V N.A. (4-18)  or by Equation 4-19. 2a Vn 1 2 (4-19)  In either equation, a is the fiber core radius,  is the operating wavelength, N.A. is the numerical aperture, n1 is the core index, and  is the relative refractive index difference between core and cladding. The analysis of how the V-number is derived is beyond the scope of this module. But it can be shown that by reducing the diameter of the fiber to a point at which the V-number is less than 2.405, higher-order modes are effectively extinguished and single-mode operation is possible.

Example 8 What is the maximum core diameter for a fiber if it is to operate in single mode at a wavelength of 1550 nm if the N.A. is 0.12? From Equation 4-18, 2a V N.A.  Solving for a yields V a = 2(N.A.) For single-mode operation, V must be 2.405 or less. The maximum core diameter occurs when V = 2.405. So, plugging into the equation, we get

(2.405)(1550 nm) –9 amax = = 4946  10 m = 4.95 m (2 )(0.12)

or, dmax = 2  a = 9.9 m

The core diameter for a typical single-mode fiber is between 5 and 10 µm with a 125-µm cladding. Single-mode fibers are used in applications such as long distance telephone lines, wide- area networks (WANs), and cable TV distribution networks where low signal loss and high data

22 Optics and Photonics Series, Course 2: Elements of Photonics rates are required and repeater/amplifier spacing must be maximized. Because single-mode fiber allows only one mode or ray to propagate (the lowest-order mode), it does not suffer from intermodal dispersion like multimode fiber and therefore can be used for higher bandwidth applications. At higher data rates, however, single-mode fiber is affected by chromatic dispersion, which causes pulse spreading due to the wavelength dependence on the index of refraction of glass (to be discussed in more detail in the next section). Chromatic dispersion can be overcome by transmitting at a wavelength at which glass has a fairly constant index of refraction (~1300 nm) or by using an optical source such as a distributed-feedback laser (DFB laser) that has a very narrow output spectrum. The major drawback of single-mode fiber is that compared to step-index multimode fiber, it is relatively difficult to work with (i.e., splicing and termination) because of its small core size and small numerical aperture. Because of the high coupling losses associated with LEDs, single-mode fiber is used primarily with laser diodes as a source.

Graded-index fiber In a step-index fiber, the refractive index of the core has a constant value. By contrast, in a graded-index fiber, the refractive index in the core decreases continuously (in a parabolic fashion) from a maximum value at the center of the core to a constant value at the core-cladding interface. (See Figure 4-13c.) Graded-index fiber is characterized by its ease of use (i.e., large core diameter and N.A.), similar to a step-index multimode fiber, and its greater information carrying capacity, as in a step-index single-mode fiber. Light traveling through the center of the fiber experiences a higher index of refraction than does light traveling in higher modes. This means that even though the higher-order modes must travel farther than the lower order modes, they travel faster, thus decreasing the amount of modal dispersion and increasing the bandwidth of the fiber.

Polarization-maintaining fiber Polarization-maintaining (PM) fiber is a type of fiber that only allows light of a specific polarization orientation to propagate. It is often referred to as high birefringence single-mode fiber. (A birefringent material is one in which the refractive index is different for two orthogonal orientations of the light propagating through it.) In birefringent fiber, light polarized in orthogonal directions will travel at different speeds along the polarization axes of the fiber. PM fibers utilize a stress-induced birefringence mechanism to achieve high levels of birefringence. These fibers embed a stress- applying region in the cladding area of the fiber. (See Figure 4-14.) Placed Figure 4-14 Polarization maintaining fiber symmetrically about the core, it gives the

Module 2-4: Principles of Fiber Optic Communication 23 fiber cross-section two distinct axes of symmetry. The stress region squeezes on the core along one axis, which makes the core birefringent. As a result, the propagation speed is polarization- dependent, differing for light polarized along the two orthogonal symmetry axes. Birefringence is the key to polarization-maintaining behavior. Because of the difference in propagation speed, light polarized along one symmetry axis is not efficiently coupled to the other orthogonal polarization—even when the fiber is coiled, twisted or bent. PM fibers can be designed with high stress levels to create birefringence sufficient to resist depolarization under harsh mechanical and thermal operating conditions.

Fiber Optic Cable Design In most applications, optical fiber is protected from the environment by using a variety of different cabling types based on the type of environment in which fiber will be used. Cabling provides the fiber with protection from the elements, added tensile strength for pulling, rigidity for bending, and durability. As fiber is drawn from the preform in the manufacturing process, a protective coating, a UV-curable acrylate, is applied to protect against moisture and to provide mechanical protection during the initial stages of cabling. A secondary buffer then typically encases the optical fibers for further protection. Fiber optic cable can be separated into two types: indoor and outdoor cables. (See Table 4-1.)

Table 4-1. Indoor and Outdoor Cables Indoor Cable Description Simplex Cables Contains a single fiber for one-way communication Duplex Cables Contains two fibers for two-way communications Multifiber Cables Contains more than two fibers. Fibers are usually in pairs for duplex operation. For example, a twelve-fiber cable permits six duplex circuits. Breakout Cables Typically have several individual simplex cables inside an outer jacket. The outer jacket includes a ripcord to allow easy access Heavy, Light, Heavy-duty cables have thicker jackets than light duty cable for rougher Plenum-Duty, and handling Riser Cable Plenum cables are jacketed with low-smoke and fire retardant materials Riser cables run vertically between floors and must be engineered to prevent fires from spreading between floors Outdoor Cables Outdoor cables must withstand more harsh environment conditions than indoor cables. Overhead Cables strung from telephone lines Direct Burial Cables placed directly in a trench Indirect Burial Cables placed in a conduit Submarine Underwater cable, including transoceanic applications

Most telecommunication applications employ either a loose-tube, tight buffer, or ribbon-cable design. Loose tube cable is used primarily in outside-plant applications that require high pulling strength, resistance to moisture, large temperature ranges, low attenuation, and protection from other environmental factors. (See Figure 4-15.) Loose-tube buffer designs allow easy drop-off of groups of fibers at intermediate points. A typical loose-tube cable can hold up to 12 fibers, with a cable capacity of more than 200 fibers. In a loose-tube cable design, color-coded plastic

24 Optics and Photonics Series, Course 2: Elements of Photonics buffer tubes are filled with a gel to provide protection from water and moisture. The fact that the fibers “float” inside the tube provides additional isolation from mechanical stress such as pull force and bending introduced during the installation process. Loose-tube cables can be either all dielectric, or armored. In addition, the buffer tubes are stranded around a dielectric or steel central member which serves as an anti-buckling element. The cable core is typically surrounded by aramid fibers to provide tensile strength to the cable. For additional protection, a medium-density outer polyethylene jacket is extruded over the core. In armored designs, corrugated steel tape is formed around a single-jacketed cable with an additional jacket extruded over the armor.

Figure 4-15 Loose tube direct burial cable

Tight buffer cable is typically used for indoor applications where ease of cable termination and flexibility are more of a concern than low attenuation and environmental stress. (See Figure 4-16.) In a tight-buffer cable, each fiber is individually buffered (direct contact) with an elastomeric material to provide good impact resistance and flexibility, while keeping size at a minimum. Aramid fiber strength members provide the tensile strength for the cable. This type of cable is suited for “jumper cables”, which typically connect loose-tube cables to active components such as lasers and receivers. Tight-buffer fiber may introduce slightly more attenuation due to the stress placed on the fiber by the buffer. However, because tight-buffer cable is typically used for indoor applications, distances are generally much shorter that for outdoor applications allowing systems to tolerate more attenuation in exchange for other benefits.

Module 2-4: Principles of Fiber Optic Communication 25

Figure 4-16 Tight buffer simplex and duplex cable

Ribbon cable is used in applications where fibers must be densely packed. (See Figure 4-17.) Ribbon cables typically consist of up to 18-coated fibers that are bonded or laminated to form a ribbon. Many ribbons can then be combined to form a thick, densely packed fiber cable that can be either mass-fusion spliced or terminated using array connectors that can save a considerable amount of time as compared to loose-tube or tight-buffer designs.

Figure 4-17 Loose tube ribbon cable

26 Optics and Photonics Series, Course 2: Elements of Photonics Cabling example Figure 4-18 shows an example of an interbuilding cabling scenario.

Figure 4-18 Interbuilding cabling scenario

Dispersion In digital communication systems, information to be sent is first coded in the form of pulses. These pulses of light are then transmitted from the transmitter to the receiver, where the information is decoded. The larger the number of pulses that can be sent per unit time and still be resolvable at the receiver end, the larger will be the transmission capacity, or bandwidth of the system. A pulse of light sent into a fiber broadens in time as it propagates through the fiber. This phenomenon is known as dispersion, and is illustrated in Figure 4-19.

Module 2-4: Principles of Fiber Optic Communication 27

Figure 4-19 Pulses separated by 100 ns at the input end would be resolvable at the output end of 1 km of the fiber. The same pulses would not be resolvable at the output end of 2 km of the same fiber.

Calculating dispersion Dispersion, termed t, is defined as pulse spreading in an optical fiber. As a pulse of light propagates through a fiber, elements such as numerical aperture, core diameter, refractive index profile, wavelength, and laser linewidth cause the pulse to broaden. This poses a limitation on the overall bandwidth of the fiber as demonstrated in Figure 4-20.

Figure 4-20 Pulse broadening caused by dispersion

Dispersion t can be determined from Equation 4-20.

1 2 t = (tout – tin) / (4-20) Dispersion is measured in units of time, typically nanoseconds or picoseconds. Total dispersion is a function of fiber length, ergo, the longer the fiber, the more the dispersion. Equation 4-21 gives the total dispersion per unit length.

ttotal = L  (Dispersion/km) (4-21) The overall effect of dispersion on the performance of a fiber optic system is known as intersymbol interference, as shown in Figure 4-19. Intersymbol interference occurs when the pulse spreading due to dispersion causes the output pulses of a system to overlap, rendering them undetectable. If an input pulse is caused to spread such that the rate of change of the input exceeds the dispersion limit of the fiber, the output data will become indiscernible.

28 Optics and Photonics Series, Course 2: Elements of Photonics Intermodal dispersion Intermodal dispersion is the pulse spreading caused by the time delay between lower-order modes (modes or rays propagating straight through the fiber close to the optical axis) and higher-order modes (modes propagating at steeper angles). This is shown in Figure 4-21. Modal dispersion is problematic in multimode fiber and is the primary cause for bandwidth limitation. It is not a problem in single-mode fiber where only one mode is allowed to propagate.

Figure 4-21 Mode propagation in an optical fiber

Chromatic dispersion Chromatic dispersion is pulse spreading due to the fact that different wavelengths of light propagate at slightly different speeds through the fiber. All light sources, whether laser or LED, have finite linewidths, which means they emit more than one wavelength. Because the index of refraction of glass fiber is a wavelength-dependent quantity, different wavelengths propagate at different speeds. Chromatic dispersion is typically expressed in units of nanoseconds or picoseconds per (km-nm). Chromatic dispersion consists of two parts: material dispersion and waveguide dispersion.

tchromatic = tmaterial + twaveguide (4-22) Material dispersion is due to the wavelength dependency on the index of refraction of glass. Waveguide dispersion is due to the physical structure of the waveguide. In a simple step-index- profile fiber, waveguide dispersion is not a major factor, but in fibers with more complex index profiles, waveguide dispersion can be more significant. Material dispersion and waveguide dispersion can have opposite signs (or slopes) depending on the transmission wavelength. In the case of a step-index single-mode fiber, these two effectively cancel each other at 1310 nm yielding zero-dispersion, which makes high-bandwidth communication possible at this wavelength. The drawback, however, is that even though dispersion is minimized at 1310 nm, attenuation is not. Glass fiber exhibits minimum attenuation at 1550 nm. Glass exhibits its minimum attenuation at 1550 nm, and optical amplifiers (known as erbium-doped fiber amplifiers [EDFA]) also operate in the 1550-nm range. It makes sense, then, that if the zero- dispersion property of 1310 nm could be shifted to coincide with the 1550-nm transmission window, very high-bandwidth long-distance communication would be possible. With this in mind, zero-dispersion-shifted fiber was developed. Zero-dispersion-shifted fiber “shifts “the zero dispersion wavelength of 1310 nm to coincide with the 1550 nm transmission window of glass fiber by modifying the waveguide dispersion slope. Modifying the waveguide dispersion slope is accomplished by modifying the refractive

Module 2-4: Principles of Fiber Optic Communication 29 index profile of the fiber in a way that yields a more negative waveguide-dispersion slope. When combined with a positive material dispersion slope, the point at which the sum of two slopes cancel each other out can be shifted to a higher wavelength such as 1550 nm or beyond. (See Figure 4-22.)

Figure 4-22 Single-mode versus dispersion-shifted fiber

An example of a zero-dispersion-shifted fiber is the “W-profile” fiber, named because of the shape of the refractive index profile which looks like a “W.” This is illustrated in Figure 4-23. By splicing in short segments of a dispersion-shifted fiber with the appropriate negative slope into a fiber optic system with positive chromatic dispersion, the pulse spreading can be minimized. This results in an increase in data rate capacity.

Figure 4-23 W-profile fibers: (a) step-index, (b) triangular profile

In systems where multiple wavelengths are transmitted through the same single-mode fiber, such as in dense wavelength division multiplexing (DWDM, discussed in a later section), it is possible for three equally spaced signals transmitted near the specified zero-dispersion wavelength to combine and generate a new fourth wave, which can cause interference between channels. This phenomenon is called four-wave mixing, which degrades system performance. If, however, the waveguide structure of the fiber is modified so that the waveguide dispersion is further increased in the negative direction, the zero-dispersion point can be pushed out past 1600 nm (outside the EDFA operating window). This results in a fiber in which total chromatic dispersion is still substantially lower in the 1550 nm range without the threat of performance problems. This type of fiber is known as nonzero dispersion-shifted fiber.

30 Optics and Photonics Series, Course 2: Elements of Photonics The total dispersion of an optical fiber, ttot, can be approximated using

22 ttttotal modal chromatic (4-23) where tmodal represents the dispersion due to the various components that make up the system. The transmission capacity of fiber is typically expressed in terms of bandwidth  distance. For example, the (bandwidth  distance) product for a typical 62.5/125-µm (core/cladding diameter) multimode fiber operating at 1310 nm might be expressed as 600 MHz  km. The approximate bandwidth BW of a fiber can be related to the total dispersion by the following relationship:

BW (Hz) = 0.35/ttotal (4-24)

Example 9 A 2-km-length multimode fiber has a modal dispersion of 1 ns/km and a chromatic dispersion of 100 ps/km  nm. It is used with an LED of linewidth 40 nm. (a) What is the total dispersion? (b) Calculate the bandwidth (BW) of the fiber.

(a) tmodal = 2 km  1 ns/km = 2 ns

tchromatic = (2 km)  (100 ps/km  nm)  (40 nm) = 8000 ps = 8 ns Now, from Equation 4-23, 2 2 1/2 ttotal = ([2 ns] + [8 ns] ) = 8.25 ns And from Equation 4-24,

(b) BW = 0.35/ttotal = 0.35/8.25 ns = 42.42 MHz

   Expressed in terms of the product (BW km), we get (BW km) = (42.4 MHz)( 2 km)  85 MHz km.

Example 10

A 50-km single-mode fiber has a material dispersion of 10 ps/km  nm and a waveguide dispersion

of –5 ps/km  nm. It is used with a laser source of linewidth 0.1 nm. (a) What is tchromatic? (b) What is ttotal? (c) Calculate the bandwidth (BW) of the fiber. (a) With the help of Equation 4-22, we get

tchromatic = 10 ps/km  nm – 5 ps/km  nm = 5 ps/km  nm

(b) For 50 km of fiber at a linewidth of 0.1 nm, ttotal is

ttotal = (50 km)  (5 ps/km  nm)  (0.1 nm) = 25 ps

(b) BW = 0.35/ttotal = 0.35/25 ps = 14 GHz

Expressed in terms of the product (BW  km), we get

(BW  km) = (14 GHz)(50 km) = 700 GHz  km In short, the fiber in this example could be operated at a data rate as high as 700 GHz over a one- kilometer distance.

Module 2-4: Principles of Fiber Optic Communication 31 Fiber Optic Sources Two types of light sources are commonly used in fiber optic communications systems: semiconductor laser diodes (LD) and light-emitting diodes (LED). Each device has its own advantages and disadvantages as listed in Table 4-2.

Table 4-2. LED Versus Laser Characteristic LED Laser (LD) Output power Lower Higher Spectral width Wider Narrower Numerical aperture Larger Smaller Speed Slower Faster Cost Less More Ease of operation Easier More difficult

Fiber optic sources must operate in the low-loss transmission windows of glass fiber. LEDs are typically used at the 850-nm and 1310-nm transmission wavelengths, whereas lasers are primarily used at 1310 nm and 1550 nm.

LEDs LEDs are typically used in lower-data-rate, shorter-distance multimode systems because of their inherent bandwidth limitations and lower output power. They are used in applications in which data rates are in the hundreds of megahertz as opposed to GHz data rates associated with lasers. Two basic structures for LEDs are used in fiber optic systems: surface-emitting and edge- emitting as shown in Figure 4-24.

Figure 4-24 Surface-emitting versus edge-emitting diodes

LEDs typically have large numerical apertures, which makes light coupling into single-mode fiber difficult due to the fiber’s small N.A. and core diameter. For this reason LEDs are most often used with multimode optical fiber. LEDs are used in lower-data-rate, short-distance (<1 km) multimode systems because of their inherent bandwidth limitations and low output power. In addition, the output spectrum of a typical LED is about 40 nm, which limits its performance due to severe chromatic dispersion. LEDs, however, operate in a more linear fashion than do laser diodes making them more suitable for analog modulation. Most fiber optic light sources are pigtailed, having a fiber attached during the manufacturing process. Some

32 Optics and Photonics Series, Course 2: Elements of Photonics LEDs are available with connector-ready housings that allow a connectorized fiber to be directly attached and are relatively inexpensive compared to laser diodes. LEDs are used in applications including local area networks, closed-circuit TV, and where transmitting electronic data in areas where EMI may be a problem.

Laser diodes Laser diodes are used in applications in which longer distances and higher data rates are required. Because an LD has a much higher output power than an LED, it is capable of transmitting information over longer distances. Consequently, and given the fact that the LD has a much narrower spectral width, it can provide high-bandwidth communication over long distances. The LD’s smaller N.A. also allows it to be more effectively coupled with single-mode fiber. The difficulty with LDs is that they are inherently nonlinear, which makes analog transmission more difficult. They are also very sensitive to fluctuations in temperature and drive current, which causes their output wavelength to drift. In applications such as wavelength- division multiplexing in which several wavelengths are being transmitted down the same fiber, the wavelength stability of the source becomes critical. This usually requires complex circuitry and feedback mechanisms to detect and correct for drifts in wavelength. The benefits, however, of high-speed transmission using LDs typically outweigh the drawbacks and added expense. In high-speed telecommunications applications, specially designed single-frequency diode lasers that operate with an extremely narrow output spectrum (< .01 nm) are required. These are known as distributed-feedback (DFB) laser diodes (Figure 4-25). In DFB lasers, a corrugated structure, or diffraction grating, is fabricated directly in the cavity of the laser, allowing only light of a very specific wavelength to oscillate. This yields an output wavelength spectrum that is extremely narrow—a characteristic required for dense wavelength division-multiplexing (DWDM) systems in which many closely spaced wavelengths are transmitted through the same fiber. Distributed-feedback lasers are available at fiber optic communication wavelengths between 1300 nm and 1550 nm.

Figure 4-25 Fourteen-pin butterfly mount distributed feedback laser diode (Source: JDS Uniphase Corporation; used by permission)

Module 2-4: Principles of Fiber Optic Communication 33 Fiber Optic Detectors The purpose of a fiber optic detector is to convert light emanating from the optical fiber back into an electrical signal. The choice of a fiber optic detector depends on several factors including wavelength, responsivity, and speed or rise time. Figure 4-26 depicts the various types of detectors and their spectral responses.

Figure 4-26 Detector spectral response

The process by which light energy is converted into an electrical signal is the opposite of the process by which an electrical signal is converted into light energy. Light striking the detector generates a small electrical current that is amplified by an external circuit. Photons absorbed in the PN junction of the detector excite electrons from the valence band to the conduction band, resulting in the creation of an electron-hole pair. Under the influence of a bias voltage these carriers move through the material and induce a current in the external circuit. For each electron-hole pair created, the result is an electron flowing in the circuit. Current levels are usually small and require some amplification as shown in Figure 4-27.

Figure 4-27 Typical detector amplifier circuit

34 Optics and Photonics Series, Course 2: Elements of Photonics The most commonly used photodetectors in fiber optic communication systems are the PIN and avalanche photodiodes (APD). The material composition of the device determines the wavelength sensitivity. In general, silicon devices are used for detection in the visible portion of the spectrum. InGaAs crystals are used in the near-infrared portion of the spectrum between 1000 nm and 1700 nm. Germanium PIN and APDs are used between 800 nm and 1500 nm. Table 4-3 gives some typical photodetector characteristics:

Table 4-3. Typical Photodetector Characteristics Photodetector Wavelength (nm) Responsivity (A/W) Dark Current (nA) Rise Time (ns) Silicon PIN 250–1100 0.1–1.0 1–10 0.07 InGaAs PIN 1310–1625 0.3–0.85 0.1–1 0.03 InGaAs APD 1310–1625 0.7–1.0 30–200 0.03

Some of the more important detector parameters listed below in Table 4-4 are defined and described in Module 1-6, Optical Detectors and Human Vision.

Table 4-4. Photodetector Parameters Parameter Description Responsivity The ratio of the electrical power to the detector’s output optical power Quantum The ratio of the number of electrons generated by the detector to the efficiency number of photons incident on the detector Quantum efficiency = (Number of electrons)/Photon Dark current The amount of current generated by the detector with no light applied. Dark current increases about 10% for each temperature increase of 1C and is much more prominent in Ge and InGaAs at longer wavelengths than in silicon at shorter wavelengths. Noise floor The minimum detectable power that a detector can handle. The noise floor is related to the dark current since the dark current will set the lower limit. Noise floor = Noise (A)/Responsivity (A/W) Response Time The time required for the detector to respond to an optical input. The response time is related to the bandwidth of the detector by

BW = 0.35/tr

where tr is the rise time of the device. The rise time is the time required for the detector to rise to a value equal to 63.2% of its final steady-state reading. Noise equivalent At a given modulation frequency, wavelength, and noise bandwidth, NEP power (NEP) is the incident radiant power that produces a signal-to-noise ratio of one at the output of the detector

Module 2-4: Principles of Fiber Optic Communication 35 Connectors In the 1980s, there were many different types and manufacturers of connectors. Some remain in production, but much of the industry has shifted to standardized connector types, with details specified by standards organizations such as the Telecommunications Industry Association, the International Electro-technical Commission, and the Electronic Industry Association. Today, there are many different types of connectors available for fiber optics depending on the application. Some of the more common types are shown in Table 4-5:

Table 4-5. Fiber Optic Connector Types (Source of photos: JDS Uniphase Corporation; used by permission)

Type Description Diagram

SC Snap-in Single-Fiber Connector: A square cross section allows high packing density on patch panels and makes it easy to package in a polarized duplex form that assures the fibers are matched to the proper fibers in the mated connector. Used in premise cabling, ATM, fiber-channel, and low-cost FDDI. Available in simplex and duplex configurations.

ST Twist-on Single-Fiber Connector: The most widely used and broadly used type of connector for data communications applications. A bayonet-style “twist and lock” coupling mechanism allows for quick connects and disconnects, and a spring-loaded 2.5 mm diameter ferrule for constant contact between mating fibers

LC Small Form Factor Connector: Similar to SC connector but designed to reduce system costs and connector density.

FC Twist-on Single-Fiber Connector: Similar to the ST connector and used primarily in the telecommunications industry. A threaded coupling and tunable keying allows ferrule to be rotated to minimize coupling loss.

36 Optics and Photonics Series, Course 2: Elements of Photonics Regardless of the type of connector used, compatibility between connectors produced by different manufacturers is essential. This does not necessarily mean that the connectors are identical. Connectors produced by different manufacturers may differ in the number of parts, ease and method of terminations, material used, and whether epoxy is used. Connectors may also differ in their performance involving insertion loss, durability, return loss, temperature range, etc. Single-mode and multimode connectors may also vary in terms of ferrule bore tolerance. A standard 125-um single-mode requires a more exacting fit to minimize insertions loss, whereas a multimode fiber with its larger core may be more forgiving. A typical multimode connector may have a bore diameter as large as 127 um to accommodate the largest fiber size. A single-mode connector, however, may be specified with a bore diameter of 125, 126, or 127 um to ensure a more precise fit.

Fiber Optic Couplers A fiber optic coupler is a device used to connect a single (or multiple) fiber to many other separate fibers. There are two general categories of couplers:  Star couplers (Figure 4-28a)  T-couplers (Figure 4-28b)

Figure 4-28 (a) Star coupler (b) T-coupler

Star couplers In a star coupler, each of the optical signals sent into the coupler are available at all of the output fibers (Figure 4-28a). Power is distributed evenly. For an n  n star coupler (n-inputs and n-outputs), the power available at each output fiber is 1/n the power of any input fiber. The output power from a star coupler is simply

Po = Pin/n (4-25) where n = number of output fibers.

The power division (or power splitting ratio) PDst in decibels is given by Equation 4-26.

PDst(dB) = –10 log(1/n) (4-26)

Module 2-4: Principles of Fiber Optic Communication 37 The power division in decibels gives the number of decibels apparently lost in the coupler from single input fiber to single fiber output. Excess power loss (Lossex) is the power lost from input to total output, as given in Equation 4-27 or 4-28.

Pout (total) Lossex  (4-27) Pin

Pout (total) Lossex/dB  –10log (4-28) Pin

Example 11

An 8  8 star coupler is used in a fiber optic system to connect the signal from one computer to eight terminals. The power at an input fiber to the star coupler is 0.5 mW. Find (1) the power at each output fiber and (2) the power division in decibels. Solution (1) The 0.5-mW input is distributed to eight fibers. Each has (0.50 mW)/8 = 0.0625 mW. (2) The power division, in decibels, from Equation 4-26 is

PDst = –10  log(1/8) = 9.0 dB

Example 12

A 10  10 star coupler is used to distribute the 3-dBm power of a laser diode to 10 fibers. The excess loss (Lossex) of the coupler is 2 dB. Find the power at each output fiber in dBm and µW. Solution The power division in dB from Equation 4-26 is

PDst = –10  log (1/10) = 10 dB

To find Pout for each fiber, subtract PDst and Lossex from Pin in dBm:

 Pout = 3 dBm – 10 dB – 2 dB = –9 dBm

To find Pout in watts we use Equation 4-13:

Pout –9 = 10  log  1 mW P 9 out = 1010 1 mW –0.9  Pout = (1 mW)(10 ) –3 Pout = (10 )(0.126) W Solving, we get

Pout = 126 W

38 Optics and Photonics Series, Course 2: Elements of Photonics An important characteristic of star couplers is cross talk or the amount of input information coupled into another input. Cross coupling is given in decibels (typically greater than 40 dB).

T-couplers Figure 4-29 show a T-coupler. Power is launched into port 1 and is then split between ports 2 and 3. The power split does not have to be equal. The power division is given in decibels or in percent. For example, an 80/20 split means 80% to port 2, 20% to port 3. In decibels, this corresponds to 0.97 dB for port 2 and almost 7.0 dB for port 3.

Figure 4-29 T-coupler

10 log (P2/P1) = –0.97 dB

10 log (P3/P1) = –6.99 dB

Directivity describes the transmission between the ports. For example, if P3/P1 = 0.5, P3/P2 does not necessarily equal 0.5. For a highly directive T-coupler, P3/P2 is very small. That is, no power is transferred between the two ports on the same side of the coupler.

Wavelength-division multiplexers The couplers used for wavelength-division multiplexing (WDM) are designed specifically to make the coupling between ports a function of wavelength. The purpose of these couplers is to separate (or combine) signals transmitted at different wavelengths. Essentially, the transmitting coupler is a mixer and the receiving coupler is a wavelength filter. Wavelength-division multiplexers use several methods to separate different wavelengths depending on the spacing between the wavelengths. Separation of 1310 nm and 1550 nm is a simple operation and can be achieved with WDMs that employ bulk optical diffraction gratings. Wavelengths in the 1550-nm range that are spaced at greater than 1 to 2 nm can be resolved using WDMs that incorporate interference filters. To separate very closely spaced wavelengths (< 0.8 nm) in a dense wavelength-division multiplexing system (DWDM), however, fiber Bragg gratings are required. An example of an 8-channel WDM is shown in Figure 4-30. Figure 4-30 Eight-channel WDM (Source: DiCon DWDM refers to the transmission of multiple Fiberoptics, Inc.; used by permission) closely spaced wavelengths through the same fiber. (See Figure 4-31.) For any given wavelength  and corresponding frequency f, the International Telecommunications Union (ITU) defines standard frequency spacing f as

Module 2-4: Principles of Fiber Optic Communication 39 100 GHz, which translates into a  of 0.8-nm wavelength spacing. This follows from the  f relationship  = . (See Table 4-6.) DWDM systems operate in the 1550-nm window f because of the low attenuation characteristics of glass at 1550 nm and the fact that erbium- doped fiber amplifiers (EDFA) operate in the 1530-nm to 1570-nm range. Although the ITU grid specifies that each transmitted wavelength in a DWDM system is separated by 100 GHz, systems are currently available with channel spacing to 50 GHz and below (< 0.4 nm). As the channel spacing decreases, the number of channels that can be transmitted increases, thus further increasing the transmission capacity of the system.

Figure 4-31 Wavelength-division multiplexing

Table 4-6. ITU grid Center Wavelength – nm Optical Frequency 1546.92 193.8 (vacuum) (THz) 1547.72 193.7 1530.33 195.9 1548.51 193.6 1531.12 195.8 1549.32 193.5 1531.90 195.7 1550.12 193.4 1532.68 195.6 1550.92 193.3 1533.47 195.5 1551.72 193.2 1534.25 195.4 1552.52 193.1 1535.04 195.3 1553.33 193.0 1535.82 195.2 1554.13 192.9 1536.61 195.1 1554.93 192.8 1537.40 195.0 1555.75 192.7 1538.19 194.9 1556.55 192.6 1538.98 194.8 1557.36 192.5 1539.77 194.7 1588.17 192.4 1540.56 194.6 1558.98 192.3 1541.35 194.5 1559.79 192.2 1542.14 194.4 1560.61 192.1 1542.94 194.3 1561.42 192.0 1543.73 194.2 1562.23 191.9 1544.53 194.1 1563.05 191.8 1545.32 194.0 1563.86 191.7 1546.12 193.9

40 Optics and Photonics Series, Course 2: Elements of Photonics Fiber Bragg gratings Fiber Bragg gratings are devices that are used in DWDM systems in which multiple closely spaced wavelengths require separation. (See Figure 4-32.) Light entering the fiber Bragg grating is reflected by periodic variations in the index of refraction in the fiber’s core. Fiber Bragg gratings are fabricated by passing ultraviolet light from an excimer laser through either a phase mask or diffraction and exposing a short segment of optical fiber with its core doped with a photosensitive material. Periodic variations in intensity incident on the fiber caused by the mask or diffraction grating create periodic variations in refractive index in the fiber core. By choosing the appropriate spacing between the periodic variations to be multiples of the half-wavelength of the desired signal, each variation reflects light with a 360 phase shift causing a constructive interference of a very specific wavelength while allowing others to pass.

Figure 4-32 Fiber Bragg grating

Fiber Bragg gratings are available with bandwidths ranging from 0.01 nm up to 20 nm. They are typically used in conjunction with circulators, which are used to extract or “drop” single or multiple narrow-band WDM channels while transmitting other channels (see Figure 4-33). Fiber Bragg gratings have emerged as a major factor, along with EDFAs, in increasing the capacity of next-generation high-bandwidth fiber optic systems.

Figure 4-33 Fiber optic circulator

Module 2-4: Principles of Fiber Optic Communication 41 Erbium-doped fiber amplifiers (EDFA) The EDFA is an optical amplifier used to boost the signal level in the 1530-nm to 1570-nm region of the spectrum. When it is pumped by an external laser source of either 980 nm or 1480 nm, signal gain can be as high as 30 dB (1000 times). Because EDFAs allow signals to be regenerated without having to be converted back to electrical signals, systems are faster and more reliable. When used in conjunction with wavelength-division multiplexing, fiber optic systems can transmit enormous amounts of information over long distances with very high reliability. (See Figure 4-34.)

Figure 4-34 Wavelength-division multiplexing system using EDFAs

Fiber Optic Sensors Although the most important application of optical fibers is in the field of transmission of information, optical fibers capable of sensing various physical parameters and generating information are also finding widespread use. The use of optical fibers for such applications offers the same advantages as in the field of communication: lower cost, smaller size, more accuracy, greater flexibility, and greater reliability. As compared to conventional electrical sensors, fiber optic sensors are immune to external electromagnetic interference and can be used in hazardous and explosive environments. A very important attribute of fiber optic sensors is the possibility of having distributed or quasi-distributed sensing geometries, which would otherwise be too expensive or complicated using conventional sensors. With fiber optic sensors it is possible to measure pressure, temperature, electric current, rotation, strain, and chemical and biological parameters with greater precision and speed. These advantages are leading to increased integration of such sensors in civil engineering structures such as bridges and tunnels, in process industries, medical instruments, aircraft, missiles, and even cars. Fiber optic sensors can be broadly classified into two categories: extrinsic and intrinsic. In the case of extrinsic sensors, the optical fiber simply acts as a device to transmit and collect light from a sensing element, which is external to the fiber. The sensing element responds to the external perturbation, and the change in the characteristics of the sensing element is transmitted by the return fiber for analysis. The optical fiber here plays no role other than that of transmitting the light beam. On the other hand, in the case of intrinsic sensors, the physical parameter to be sensed directly alters the properties of the optical fiber, which in turn leads to

42 Optics and Photonics Series, Course 2: Elements of Photonics changes in a characteristic such as intensity, polarization, or phase of the light beam propagating in the fiber. A large variety of fiber optic sensors have been demonstrated in the laboratory, and many are already being installed in real systems. In the following sections, we will discuss some important examples of fiber optic sensors.

Extrinsic fiber optic sensors Figure 4-35 shows a very simple sensor based on the fact that transmission through a fiber joint depends on the alignment of the fiber cores. Light coupled into a multimode optical fiber couples across a joint into another fiber. The light is detected by a photodetector. The detector immediately senses any deviation of the fiber pair from perfect alignment. A misalignment of magnitude equal to the core diameter of the fiber results in zero transmission. The first 20% of transverse displacement gives an approximately linear output. Thus, for a 50-m-core-diameter fiber, approximately 10-m misalignment will be linear. The sensitivity will of course become better with decrease in core diameter, but, at the same time, the range of displacements will also reduce.

Figure 4-35 A change in the transverse alignment between two fibers changes the coupling and hence the power falling on the detector

The misalignment between the fibers could be caused by various physical parameters, such as acoustic waves and pressure. Thus, if one of the probe fibers has a short free length while the other has a longer length, acoustic waves impinging on the sensor will set the fibers into vibration, which will result in a modulation of the transmitted light intensity leading to an acoustic sensor. Using such an arrangement, deep-sea noise levels in the frequency range of 100 Hz to 1 kHz and transverse displacements of a few tenths of a nanometer have been measured. Using the same principle, any physical parameter leading to a relative displacement of the fiber cores can be sensed using this geometry. Figure 4-36 shows a modification of the sensor in the form of a probe. Here light from an LED coupled into a multimode fiber passes through a fiber optic splitter to the probe. The probe is in the form of a reflecting diaphragm in front of the fiber, as shown. Light emanating from the fiber is reflected by the diaphragm, passes again through the splitter, and is detected by a photodetector. Any change in the external pressure causes the diaphragm to bend, leading to a change in the power coupled into the fiber. Such sensors can be built to measure pressure variations in medical as well as other applications requiring monitoring operating pressures of up to 4 mega Pascal (~ 600 psi). Such a device can be used for the measurement of pressure in the arteries, bladder, urethra, etc.

Module 2-4: Principles of Fiber Optic Communication 43

Figure 4-36 Light returning to the detector changes as the shape of the reflecting diaphragm changes due to changes in external pressure.

If the diaphragm at the output is removed and the light beam is allowed to fall on the sample, light that is reflected or scattered is again picked up by the fiber and detected and processed by the detector. With analysis of the returning optical beam, information about the physical and chemical properties of the blood can be obtained. Thus, if the scattering takes place from flowing blood, the scattered light beam is shifted in frequency due to the Doppler effect. (Doppler effect refers to the apparent frequency shift of a wave detected by an observer— compared with its true frequency—when there is relative motion between source and observer. You may have noticed the falling frequency of the whistle of a train as it approaches and passes by you.). The faster the blood cells are moving, the larger will be the shift. Through measurement of the shift in frequency, the blood flow rate can be estimated. By a spectroscopic analysis of the returning optical signal, one can estimate the oxygen content in the blood. One of the most important advantages of using optical fibers in this process is that they do not provoke adverse response from the immune system. They are more durable, more flexible, and potentially safer than alternatives. Another very interesting sensor is the liquid-level sensor shown in Figure 4-37. Light propagating down an optical fiber is total internally reflected from a small glass prism and couples back to the return fiber. As long as the external medium is air, the angle of incidence inside the prism is greater than the critical angle and hence light suffers total internal reflection. As soon as the prism comes in contact with a liquid, the critical angle at the prism-liquid interface reduces and the light is transmitted into the liquid, resulting in a loss of signal. By a proper choice of prism material, such a sensor can be used for sensing levels of various liquids such as water, gasoline, acids, and oils.

Figure 4-37 A liquid-level sensor based on changes in the critical angle due to liquid level moving up to contact the sides of the prism

44 Optics and Photonics Series, Course 2: Elements of Photonics Example 13

For a prism with refractive index np of 1.5, the critical angles with air (na = 1.0) and water (nw = 1.33) are 41.8o and 62.7o respectively. Thus, if the prism is isosceles right-angled, with two angles as 45o, light that suffers total internal reflection with air as the surrounding medium will suffer only partial internal reflection with water as the surrounding medium, resulting in a loss of signal.

Intrinsic sensors In intrinsic sensors the physical parameter changes some characteristic of the propagating light beam that is sensed. Among the many intrinsic sensors, here we discuss two important examples, namely the Mach-Zehnder interferometric fiber sensor and the fiber optic gyroscope. Mach-Zehnder interferometric sensor—One of the most sensitive arrangements for a fiber optic sensor is the Mach-Zehnder (MZ) interferometric sensor arrangement shown in Figure 4-38. Light from a laser is passed through a 3-dB fiber optic coupler, which splits the incoming light beam into two equal-amplitude beams in the two single-mode fiber arms. The light beams recombine at the output coupler after passing through the two arms. The output from the output coupler is detected and processed. One of the fiber arms of the interferometer is the sensing arm, which is sensitive to the external parameter to be sensed. The other fiber arm is the reference arm. It is usually coated with a material to make it insensitive to the parameter of measurement. The two fiber arms behave as two paths of an interferometer, and hence the output depends on the phase difference between the beams as they enter the output coupler. If the two fibers are of exactly equal lengths, the entire input light beam appears in the lower fiber and no light comes out of the upper fiber. Any external parameter such as temperature or pressure affects the sensing fiber by changing either the refractive index or the length of the arm, thus changing the phase difference between the two beams as they enter the output coupler. This results in a change in the intensity of the two output arms. Processing of the output leads to a measurement of the external parameter.

Figure 4-38 Fiber optic Mach-Zehnder interferometric sensor. Phase changes (due to external perturbation on the sensing arm) between the light beams arriving at the output coupler cause changes in intensity at the output.

Module 2-4: Principles of Fiber Optic Communication 45 The MZ sensor is extremely sensitive to external perturbations. For example, the change of phase due to an external pressure that causes both a change in refractive index and a change in the length of the specially coated sensing arm is about 3 × 10–4 rad/Pa-m. Here Pa = 1 N/m2 represents a Pascal, the unit of pressure. This implies that the change of phase suffered by the beam when the external pressure changes by 1 Pa over 1 m of the fiber is 3 × 10–4. When someone whispers, the sound pressure corresponds to about 2 × 10–4 Pa at a distance of 1 m. If the length of the sensing arm is 100 m, the corresponding phase change in the light propagating through the sensing arm is 6 × 10–6 rad. Such small changes in phase are detectable by sensitive signal processing. MZ sensors can be used to sense different physical parameters such as temperature, strain, and magnetic field. These physical parameters cause changes in the phase of the propagating light beam. Such sensors are finding various applications in hydrophones for underwater sound detection. One of the great advantages of such an application is the possibility of configuring the sensors as omni-directional or highly directional sensors. Fiber optic rotation sensor—the fiber optic gyroscope (FOG)—One of the more important fiber optic sensors is the fiber optic gyroscope, capable of measuring rotation rate. The FOG is a device with no moving parts, with improved lifetime, and of relatively low cost. Thus FOGs are rapidly replacing conventional mechanical gyros for many applications. The principle of operation of the fiber optic gyroscope is based on the Sagnac effect. Figure 4-39 shows a simple FOG configuration. It consists of a loop of polarization maintaining, single-mode optical fiber connected to a pair of 3-dB directional couplers (capable of splitting the incoming light beam into two equal parts or combining the beams from both input fibers), a polarized source, and a detector. Light from the source is split into two equal parts at the coupler C1, one part traveling clockwise and the other counterclockwise in the fiber coil. After traversing the coil the two light beams are recombined at the same coupler and the resulting light energy is detected by a photodetector connected to the coupler C2. The source in a FOG is usually a source with a broad spectrum and hence a short coherence length. The source is chosen to avoid any coherent interference between backscattered light from the two counter- propagating beams in the fiber loop. This could be a superluminescent diode or a superfluorescent fiber source.

Figure 4-39 A fiber optic gyroscope for rotation sensing based on the Sagnac effect

We first note that, if the loop is not rotating, the clockwise and the counterclockwise beams will take the same time to traverse the loop and hence arrive at the same time with the same phase at the coupler C1. On the other hand, when the loop begins to rotate, the times taken by the two beams are different. This can be understood from the fact that, if the loop rotates clockwise, by the time the beams traverse the loop the starting point will have moved and the clockwise beam

46 Optics and Photonics Series, Course 2: Elements of Photonics will take a slightly longer time than the counterclockwise beam to come back to the starting point. This difference of time or phase will result in a change of intensity at the output light beam propagating toward C2. One of the great advantages of a Sagnac interferometer is that the sensor gives no signal for reciprocal stimuli, i.e., stimuli that act in an identical fashion on both the beams. Thus a change of temperature affects both the beams (clockwise and counterclockwise) equally and so produces no change in the output. If the entire loop arrangement rotates with an angular velocity , the phase difference  (radians) between the two beams is given by 8NA   (4-29) co where N is the number of fiber turns in the loop, A is the area enclosed by one turn (which need not be circular),   is the free space wavelength of light, and c is the speed of light in a vacuum.

Example 14 Let us consider a fiber optic gyroscope with a coil of diameter 10 cm, having 1500 turns (corresponding to a total fiber length of DN ~ 470 m) and operating at 850 nm. The corresponding phase difference, determined from Equation 4-29 is = 1.16  rad. If  corresponds to the rotation rate of the Earth (15° per hour) the corresponding phase shift is  = 8.4 × 10–5 rad, a small shift indeed. This phase difference corresponds to a flight time difference between the two beams of

    = = = o  3.8 × 10–20 s.  2fo 2c

There are many different ways of operating the gyroscope. One of them is called the closed-loop operation. In this method, a pseudo rotation signal is generated in the gyro to cancel the actual signal caused due to the rotation, thus nulling the output. This is achieved by having a phase modulator near one end of the loop as shown in Figure 4-39. The counterclockwise-traveling beam encounters the phase modulator later than the clockwise beam. This time difference introduces an artificial phase difference between the two beams. The applied signal on the modulator required to null the signal gives the rotation rate. Fiber optic gyros capable of measuring from 0.001 deg/h to 100 deg/h are being made. Applications include navigation of aircraft, spacecraft, missiles, manned and unmanned platforms, antenna piloting and tracking, and a or north finder. Various applications require FOGs with different sensitivities: Autos require about 10 to 100 deg/h, attitude reference for airplanes requires 1 deg/h, and precision inertial navigation requires gyros with 0.01 to 0.001 deg/h. A Boeing 777 uses an inertial navigation system that has both ring laser gyroscopes and FOGs.

Module 2-4: Principles of Fiber Optic Communication 47 An interesting application involves automobile navigation. The automobile gyro provides information about the direction and distance traveled and the vehicle’s location, which is shown on the monitor in the car. Thus the driver can navigate through a city. Luxury cars from Toyota and Nissan sold in Japan have FOGs as part of their on-board navigation systems.

LABORATORY

Using the concepts developed in this module, you will be able to perform the following simple experimental projects as part of the laboratory exercises for this module.  Measure the numerical aperture (N.A.) of a plastic multimode optical fiber.  Measure the attenuation coefficient of a plastic multimode optical fiber.  Construct a 2  2 fiber optic coupler.  Demonstrate wavelength-division multiplexing.

Equipment Laser pointer 100-meter spool of 1-mm diameter plastic multimode optical fiber 1 razor blade 1 red and 1 greed LED 2 180- resistors 1 5-volt power supply 1 plastic mounted diffraction grating (~15,000 lines/inch)

(A) N.A. of a multimode optical fiber Cut off a one-meter segment of plastic fiber from the spool with a razor blade. Make sure that both ends of the fiber are cut straight. Place the laser point up against one end of the fiber and shine the light into the fiber. You should see the light coming out of the other end of the fiber. Place a piece of white paper one foot (Z) from the end of the fiber such that the light generates a spot on the paper. Draw a concentric circle on the paper to indicate the diameter of the spot as illustrated in Figure 4-40. Measure the diameter of the spot with a . This is D. The N.A. is calculated using the following equation. –1 N.A. = sin a = sin [tan (D/2z)] (4-30)

48 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 4-40 Measurement of the diameter D of the spot on a screen placed at a far-field distance z from the output end of a multimode fiber can be used to measure the N.A. of the fiber.

(B) Attenuation measurement A simple experiment can be performed for measuring the attenuation of the fiber at one specific wavelength. Cut a length L (about 1 km) of the fiber and couple the beam from a laser pointer squarely into the fiber. Measure the power Po exiting at the output end of the fiber. Without disturbing the coupling system, cut off a reference length of 1 m of the fiber from the input end. Measure the power Pi exiting from the 1-m length of the fiber. This will be the input power to the longer portion of the fiber. The attenuation coefficient  of the fiber at the wavelength of the laser is then given by 1 P (dB/km)  logi L Po

(C) Making a fiber optic coupler 1. With the razor blade, carefully strip off approximately 3 of the fiber jacket in the middle of a 1-foot segment of fiber. (See Figure 4-41.) Repeat using another 1-foot segment of fiber so that you have two identical pieces.

Figure 4-41

2. Where the fiber has been stripped, twist the two fibers together as shown n Figure 4-42. 3. On each end of the stripped area, place a small weight (e.g., paperweight, book) to hold the fiber in place. (See Figure 4-42.)

Module 2-4: Principles of Fiber Optic Communication 49

Figure 4-42

4. Using the heat-shrink gun set on the low temperature setting, apply heat to the twisted area. Move the heat gun gently back and forth to uniformly melt the fiber. CAUTION: Do not hold the heat gun stationary because the fiber will melt quickly! 5. As the fiber is heated, you will notice that it will contract a bit. This is normal. When the contraction subsides, remove the heat gun and let the fiber cool for a minute. 6. Shine the laser pointer into port 1 on wire 1 of the coupler. You should observe a fair amount of coupling (~20–30%) into port 3 of wire 2 of the coupler. (See Figure 4-42.) If more coupling is needed, repeat the heating process until the desired coupling is obtained. Without disturbing the pair of twisted wires, use them in the next procedure in a WDM demonstration.

(D) Wavelength-division multiplexing demonstration 1. Using an electronics breadboard, connect two LEDs (1 red, 1 green) as shown in Figure 4-43. Position the LEDs (bend the leads) so that the top of the LED is positioned horizontally. Make sure the LEDs are lit brightly. 2. Using black electrical tape, connect input ports 1 and 4 of Figure 4-43 to the tops of the red and green LEDs respectively.

Figure 4-43

3. Now observe the output at port 2 of Figure 4-42. The red and green colors will be mixed.

50 Optics and Photonics Series, Course 2: Elements of Photonics 4. To separate the colors, observe the output of port 2 through a diffraction grating. You should observe a central bright spot (coming from the fiber) and two identical diffraction patterns—one on either side—with the red and the green separated. (See Figure 4-44.) To ensure that the two signals are indeed independent, turn of the LEDs one at a time and observe the output of port 2 through the diffraction grating.

Figure 4-44

PROBLEMS

1. A fiber of 1-km length has Pin = 1 mW and Pout = 0.125 mW. Find the loss in dB/km. 2. The power of a 2-mW laser beam decreases to 15 W after the beam traverses through 25 km of a single-mode optical fiber. Calculate the attenuation of the fiber. (Answer: 0.85 dB/km) 3. A communication system uses 8 km of fiber that has a 0.8-dB/km loss characteristic. Find the output power if the input power is 20 mW. 4. A 5-km fiber optic system has an input power of 1 mW and a loss characteristic of 1.5 dB/km. Determine the output power. 5. What is the maximum core diameter for a fiber to operate in single mode at a wavelength of 1310 nm if the N.A. is 0.12? 6. A 1-km-length multimode fiber has a modal dispersion of 0.50 ns/km and a chromatic

dispersion of 50 ps/km  nm. If it is used with an LED with a linewidth of 30 nm, (a) what is the total dispersion? (b) What is the bandwidth (BW) of the fiber? –9 7. A receiver has a sensitivity Ps of – 40 dBm for a BER of 10 . What is the minimum power (in watts) that must be incident on the detector?

Module 2-4: Principles of Fiber Optic Communication 51 8. A system has the following characteristics:

• LD power (PL) = 1 mW (0 dBm)

• LD to fiber loss (Lsf) = 3 dB

• Fiber loss per km (FL) = 0.2 dB/km • Fiber length (L) = 100 km

• Connector loss (Lconn) = 3 dB (3 connectors spaced 25 km apart with 1 dB of loss each)

• Fiber to detector loss (Lfd) = 1 dB

• Receiver sensitivity (Ps) = – 40 dBm Find the loss margin and sketch the power budget curve.

9. A 5-km fiber with a BW  length product of 1200 MHz  km (optical bandwidth) is used in a communication system. The rise times of the other components are ttc = 5 ns, tL = 1 ns, tph = 1.5 ns, and trc = 5 ns. Calculate the electrical BW for the system. 10. A 4  4 star coupler is used in a fiber optic system to connect the signal from one computer to four terminals. If the power at an input fiber to the star coupler is 1 mW, find (a) the power at each output fiber and (b) the power division in decibels. 11. An 8  8 star coupler is used to distribute the +3-dBm power of a laser diode to 8 fibers. The excess loss (Lossex) of the coupler is 1 dB. Find the power at each output fiber in dBm and µW.

REFERENCES

Bennet, S. “Fibre Optic Gyro System Keeps Bus Riders Informed,” Photonics Spectra, August 1996, pp. 117–120. Burns, W.K. “Fiber Optic Gyroscopes—Light Is Better,” Optics and Photonics News, May 1998, pp. 28–32. Chynoweth, A.G. “Lightwave Communications: The Fiber Lightguide,” Physics Today, 29 (5), 28, 1976. Farmer, K.R., and T.G. Digges. “A Miniature Fiber Sensor,” Photonics Spectra, August 1996, pp. 128–129. “Fiber Optic Technology Put to Work—Big Time,” Photonics Spectra, August 1994, p. 114. Gambling, W.A. “Glass, Light, and the Information Revolution, Ninth W.E.S. Turner Memorial Lecture,” Glass Technology, Vol. 27 (6), 179, 1986. Ghatak, A., I.C. Goyal, and R. Varshney. Fiber Optica: A Software for Characterizing Fiber and Integrated Optic Waveguides. New Delhi: Viva Books, 1999.

52 Optics and Photonics Series, Course 2: Elements of Photonics Ghatak, A., and K. Thyagarajan. Introduction to Fiber Optics. Cambridge: Cambridge University Press, 1998. Grifford, R.S., and D.J. Bartnik. “Using Optical Sensors to Measure Arterial Blood Gases,” Optics and Photonics News, March 1998, pp. 27–32. Hotate, K. “Fiber Optic Gyros,” Photonics Spectra, April 1997, p. 108. Ishigure, T., E. Nihei, and Y. Koike. “Optimum Refractive Index Profile of the Graded Index Polymer Optical Fiber, Toward Gigabit Data Links,” Applied Optics, Vol. 35, 1996, pp. 2048–2053. Kao, C.K., and G.A. Hockham. “Dielectric-Fibre Surface Waveguides for Optical Frequencies,” Proc. IEEE, Vol. 113 (7), 1151, 1966. Kapron, F.P., D.B. Keck, and R.D. Maurer. “Radiation Losses in Glass Optical Waveguides,” Applied Physics Letters, Vol. 17, 1970, p. 423. Katzir, A. “Optical Fibers in Medicine,” Scientific American, May 1989, pp. 86–91. Keiser, G. Optical Fiber Communications. New York: McGraw Hill, 1991. Koeppen, C., R.F. Shi, W.D. Chen, and A.F. Garito. “Properties of Plastic Optical Fibers,” Journal of the Optical Society of America, B Vol. 15, 1998, pp. 727–739. Koike, Y., T. Ishigure, and E. Nihei. “High bandwidth graded index polymer optical fiber,” IEEE Journal of Lightwave Technology, Vol. 13, 1995, pp. 1475–1489. Maclean, D.J.H. Optical Line Systems. Chichester: John Wiley, 1996. Marcou, J., M. Robiette, and J. Bulabois. Plastic Optical Fibers, Chichester: John Wiley and Sons, 1997. Marcuse, D. “Loss Analysis of Single Mode Fiber Splices,” Bell Systems Tech. Journal, Vol. 56, 1977, p. 703. Miya, T., Y. Terunama, T. Hosaka, and T. Miyashita. “An Ultimate Low Loss Single Mode Fiber at 1.55 mm,” Electron. Letts., Vol. 15, 1979, p. 106. “Schott Is Lighting the Way Home,” Fiberoptic Product News, February 1997, p. 13. Spillman, W.B., and R.L. Gravel. “Moving Fiber Optic Hydrophone,” Optics Letters, Vol. 5, 1980, pp. 30–33.

Module 2-4: Principles of Fiber Optic Communication 53

Photonic Devices for Imaging, Storage, and Display

Module 2-5

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the fifth module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Storage, and Display 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. For students who may need assistance with or review of relevant mathematics concepts, a review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was prepared by Harley Myler (Lamar University) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 1 Objectives ...... 1 Scenario ...... 2 Basic Concepts ...... 3 I. Introductory Concepts ...... 3 A. Sampling theory ...... 4 B. Imaging and storage systems ...... 13 II. Imaging and Storage Devices ...... 21 A. CCD cameras ...... 21 B. CMOS cameras ...... 23 C. Vidicons ...... 26 D. Image intensifiers ...... 27 III. Display Devices ...... 28 A. Introduction to cathode-ray tubes ...... 29 B. Flat-panel liquid-crystal displays ...... 29 C. Flat-panel electroluminescent displays ...... 32 D. Flat-panel LED displays ...... 33 IV. Looking Toward the Future ...... 33 Laboratory ...... 33 Resources ...... 34 Exercises ...... 35

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-5 Photonic Devices for Imaging, Storage, and Display

INTRODUCTION

Electronic and electro-optic devices are frequently used to display images obtained from the computer processing of data. Images, or digital pictures, are generally two-dimensional data structures that convey spatial information to the viewer. Images are collected through various means, from digital cameras to laser radar scanning systems and, once stored in a computer, can be manipulated mathematically to accomplish many different objectives. The improvement of images for viewing or analysis and computer interpretation of image content are among those objectives. This module explains the terminology associated with images, how images are acquired and stored, and how images are displayed.

PREREQUISITES

Before starting this module, you should have completed the following modules in Course 1, Fundamentals of Light and Lasers: 1-1, Nature and Properties of Light; 1-3, Light Sources and Laser Safety; and 1-4, Basic Geometrical Optics. Specifically, you should have knowledge of fundamental optics to include lenses, apertures, and image formation.

OBJECTIVES

When you complete this module you will be able to:  Define imaging, pixel, quantization, sampling, and bandwidth.  Explain the relationship between resolution and spatial frequency.  Calculate the resolution of an imaging device.  Describe the basic parts of a camera and explain how it is used to record images.  List the different types of scanners, e.g., flying spot, flatbed, and drum, and explain their operation.  Understand how computer files store images for archival purposes.

1  Explain the difference between lossless and lossy image-compression schemes.  Describe the structure, operation, and capabilities of a CCD camera, a CMOS camera, a vidicon, and an image intensifier.  Compare the relative advantages and disadvantages of CCD and CMOS devices.  Describe the structure, operation, and capabilities of a CRT display.  List and define the phases of liquid crystal materials.  Describe the parts and operation of a basic LCD.  Explain the difference between active and passive LCD technologies.  List and define the two addressing modes for passive LCDs.  Define electroluminescence.  Describe the operation of an electroluminescent display and explain how it differs from an LCD.  Describe the operation of an LED display and explain how it differs from an LCD.

SCENARIO

Recording and displaying images in the workplace—A company that develops and manufactures optical products using state-of-the-art technologies for the medical, display, instrument, and other industries has hired LaTresha Watley to assist optical product development engineers in the prototyping laboratory. During her interview for the position, LaTresha was told that the successful candidate needed a background in display technologies and image acquisition. These were areas of study that she had particularly enjoyed at school. Shortly after accepting the photonics technician position, LaTresha discovered that all of the fundamentals of displays and image storage that she had learned in school were useful in her work. Her daily activities involve working closely with engineers and technicians to determine what data must be recorded from their experiments and the best ways to display and store the images produced. LaTresha has discovered that high-tech companies—like the one for which she works—have sophisticated equipment and software that allow very-high-resolution images to be recorded, manipulated, and stored. Her background training has allowed her to “come up to speed” quickly on the latest technology and to understand the complex details of operating and maintaining imaging devices. LaTresha has just been told that her company is sending her to a special training course on a new high-resolution thermal imaging system that they will be purchasing for her lab. She finds the continual learning opportunities on high-tech state-of-the- art equipment one of the aspects of her job that she especially enjoys. LaTresha knows that her training and motivation are the reason her company continues to invest in her education.

2 Optics and Photonics Series, Course 2: Elements of Photonics BASIC CONCEPTS

I. Introductory Concepts A major outcome of photonics work is in the form of images, where an image is a two- dimensional structure—an array—that can represent many different things depending on how the image was made or acquired and processed. When an image is made available as the output of a photonic system, it must be displayed for observation and further study. The imaging display allows us to view images that have been produced as an outcome of a photonics investigation or process. Images can be static, such as photographs as shown in Figure 5-1, or dynamic, as in movies.

Figure 5-1 A two-dimensional static image

A movie is nothing more than a sequence of static images called frames that are displayed at a speed at which the human brain fuses them into a continuous sequence. The flicker fusion rate, as perceptual psychologists call it, is around 24 frames per second.

Activity 1: Flicker fusion rate

A flip-book is a stack of papers with an image or a drawing on each page. Each page represents a single frame of an image sequence. When the pages are flipped through rapidly, a perception of continuous movement is sensed by virtue of the phenomenon of flicker fusion as discussed in the text. To develop a sense of this, take a small pad of paper (a 3M Post-it Notes Pad works great for this) and put a dot at a slightly different location on ten or so sheets. Now flip through the sheets and notice how the dot moves around. You can be ambitious and turn the dot into a simple insect and have it hover around a flower. If you flip 24 sheets over a second of time, you produce a 24-frame-per-second movie and have reached the nominal rate for flicker fusion to occur.

In this module we are interested in devices that are used to acquire and display both static and dynamic images. To discuss those devices, it is first necessary to define and explore some

Module 2-5: Photonic Devices for Imaging, Storage, and Display 3 fundamental aspects common to all images produced by photonic systems. These elements of images have to do with sampling theory, which we discuss next.

A. Sampling theory Sampling theory is involved with the collection, analysis, and interpretation of data. This data is image data collected from photonic systems, and this includes images formed by scanning and staring devices. A scanning device has one sensor or a small array of sensors that it moves in order to collect an array of data. A staring sensor has as many sensor elements as data in the array that it records and so does not move. To discuss the concepts of sampling theory that are pertinent to imaging, we first have to understand the fundamental elements of an image and their relationship to each other and to the image overall. These elements are called pixels. Pixels—A pixel is a conjunction of the two words picture element, in which the term picture is synonymous with image. Imagine a set of white marbles set into holes on a wooden board. The holes have been drilled to form a square array, and so a view of the marbles would look something like Figure 5-2.

Figure 5-2 “Marble” array

Now imagine that we have 256 marbles and they are arranged as 16 rows by 16 columns. If we replace some of the white marbles with black ones, we can produce an image using the marbles, like that shown in Figure 5-3.

Figure 5-3 “Marble” array image

4 Optics and Photonics Series, Course 2: Elements of Photonics The pixels in the image are represented by the marbles. The image produced by the marbles is called a binary image, since each of the pixels (marbles) can be one of two values (black or white). Imagine that we have an array of 400 by 400 marbles and the marbles have a range of shades between white and black, that is, a scale of gray levels. An image produced by a marble array of this sort would look something like Figure 5-4.

Figure 5-4 High-resolution “marble” array image

What do we notice about this image? It is clearer and sharper than our 256-marble image, because it has a higher resolution. If we dispense with the marbles altogether and replace them with points of light on a rectangular screen, we have something close to the output of a black and white television. The range of grays from black to white is called the grayscale, and a black and white image of this sort is sometimes called a grayscale image. If the pixels can take on color values, we have a color image. Pixels are not restricted to visible light, but can be variations of ink on a printed page. Pixels can also represent signals that cannot be viewed directly by the human eye, such as the pixels in an infrared or laser radar image. The values that a pixel can represent have to do with the quantization of the pixel, expressed in terms of bits. The images that we have been discussing are digital images, or images that take on discrete values. Each pixel is a discrete component of the image, and a fundamental assumption is that the image will be stored, manipulated, and displayed by a computer. As we saw with the digital binary image, the pixels may be only one of two (discrete) values. In a grayscale image, the pixel takes on a set of values that are typically defined by a power of 2, such as 4, 8, 16, 32, 64, 128, and 256. This is because each pixel is represented by a binary number in the computer. The more bits that represent the pixel, the more grayscale values it can take on.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 5 The two most common image quantizations are 8-bit grayscale images and what are called 24-bit truecolor images. To compute the quantization of a pixel, we simply raise 2 by the number of bits. So, the 8-bit image will have 28 = 256 gray levels. The 24-bit truecolor image is a bit (no pun intended) different. Here each pixel is actually three pixels, one each of red, green, and blue. Color images are more complex than black and white images because they require a combination of the three primary colors—red, green, and blue. The three pixels of a color digital image are so small and closely spaced that their combination is fused by the eye into a single pixel whose color is the additive combination of three individual pixels. You can see this by looking at a color computer monitor with a magnifying glass. Each pixel is just like a grayscale pixel except that, instead of varying from black to white through shades of gray, the pixel varies from black to either red, green, or blue, with varying intensities of each color. The 24-bit truecolor image has 256 reds, 256 blues, and 256 greens. Collectively, they can produce 256  256  256 colors, or 16,777,216 million colors, that is, 224 colors. People can distinguish between only about 64 different shades of gray, but we do better with colors, where the number jumps into the millions.

Activity 2: RGB color fusion

For this activity you will need colored pens and paper (or a white board and markers). On the paper (board) draw a dot with one of the pens large enough so that you can distinguish its color easily. Now draw another dot of the same size next to the first dot, but using a different color. Next to the two dots, draw a third dot with the remaining color. Now put the piece of paper on the wall and look at it from a distance such that you can no longer distinguish the three dots. You should see a dot that is grayish if the pens that you used are consistent in brightness. You may see a color that is influenced by one of the three colors that you used. Your visual system is combining the three colors into one in the same way that you perceive either colors or shades of gray from a display system.

We can conclude that the pixel is simply the fundamental unit of an image, like a brick is the unit of a wall or a grain is the unit of a sandpile. The pixel can take on various values that constitute the quantization of the image, like a brick or grain of sand can have a color or a shade of gray. We will now discuss how the image is defined in terms of resolution and spatial frequency. Resolution and spatial frequency—Resolution has to do with the fineness of detail the image can represent, or the fineness of detail the camera can record or the display system can display. The more pixels per unit area an image has, the higher the resolution. The term resolution comes from the word resolve, which the dictionary defines as “to break up into constituent parts.” There are a number of ways to define resolution in terms of imaging. One way is by using the following relation: number of pixels Resolution = area

6 Optics and Photonics Series, Course 2: Elements of Photonics Example 1: Camera resolution Manuel is a photonics technician for a company that builds custom machine vision systems. An engineer has told him that the system that they are prototyping has a technical specification requiring that a camera be installed such that objects as small as 3 mm in diameter can be resolved. Manuel has been supplied with two cameras that meet other specifications of the project and has been asked to make a recommendation as to which one is best suited in terms of resolution. The resolution of Camera 1 is one pixel per millimeter. That means that a 3-mm object will activate a grid of nine pixels (see left graphic below) when imaged at 1:1. This camera will be able to resolve two objects of the 3-mm size easily. Camera 2 has a resolution of 0.2 pixel per millimeter—or 1.0 pixel per 5 millimeters. If two of the 3-mm objects are close together and positioned such that a single pixel is activated (see right graphic), then clearly the camera cannot resolve the two objects since they both lie within ONE pixel. The second camera will not meet the specification. Manuel correctly recommends Camera 1.

For an image that is 33 inches square and contains 900 by 900 pixels, the resolution is (900  900)/(3  3) = 90,000 pixels per square inch. This sort of measure is difficult to visualize and so an alternative expression that is more common defines resolution in terms of lines per distance. When you purchase an electronic camera or a television, the resolution is stated in terms of lines. Many inexpensive electronic cameras have 380 lines; this means that the sensor 1 array has 380 rows of sensors. If the CCD array—the electronic chip—is 4" wide, we can calculate:

number of lines 380 lines Resolution = = = 1520 lines per inch length 0.25 inch

Module 2-5: Photonic Devices for Imaging, Storage, and Display 7 Example 2: Print resolution Jill is a photonics technician working for a firm that builds document copiers. Her supervisor has just returned from a conference where she was given a print from a new copier being showcased by a competitor. Specifications for the new copier have not yet been released. Jill has been asked to determine the resolution of the print. She has been told that all the lines on the copy are at the maximum resolution of the copier, which means that the width of any line in the image will be the size of the smallest dot that the copier can produce. Jill images a line in the print using a CCD camera that produces a 176×144 pixel image with a physical dimension of 2.44×2.00 inches. (See sketch.) The CCD camera Jill used magnified the real line by a factor of 60.

Using an image processing program to analyze the image, Jill estimates the magnified line to be 15 pixels wide, or 15 × 2.44" = 0.2 inch. 176 Thus 0.2 inch is the size of the smallest dot the copier can reproduce. Since the CCD camera was fitted with a lens that magnified the line by 60 times. Jill computes the number of dots per inch (DPI) to be 1 dot 60 × = 300 DPI, or a “real” dot of minimum size of 0.0033 inch. 0.2" This is consistent with many printers on the market. Jill reports these findings to her supervisor.

Let’s return to the resolution of 1520 lines per inch calculated above—before Example 2. If we were to draw a one-inch square on a piece of paper and then draw lines on the square, we 1 could draw lines as thin as 1520 of an inch and the CCD camera could resolve them. This 1 assumes, however, as stated, that a 4" subsquare is imaged onto the CCD array due to the size of the array. Also, we can really resolve only 760 lines, which is one-half of 1520. The reason for this is that we need to have a black line then a white line, then a black, and so on. To display a line, we need a sensor for the line and a sensor for the space between that line and the next line. Hence, we need two sensors per line, which gives rise to the division by two of the resolution. In sampling theory, the Nyquist rate is the rate at which you must sample a signal in

8 Optics and Photonics Series, Course 2: Elements of Photonics order to capture all of its frequencies, and this rate is twice the highest frequency that must be sampled! Frequency has to do with how often something changes in time. When something changes in distance, like pixels or lines per inch, we call this spatial frequency. Spatial frequency is a more specific term than resolution, even though they share the same units. Typically, images have resolutions that are represented as pairs of numbers, indicating the number of rows and columns in pixels of the image—although the actual size in terms of length or area of the image is not mentioned. Common computer display resolutions are 512 by 512, 1024 by 1024, and 2048 by 2048 for square images and 640 by 480, 800 by 600, and 1024 by 768 for rectangular images. High-definition television, or HDTV, is 1080 by 1920 pixels. Figure 5-5 shows two grayscale images of the same object, both the same size. Figure 5-5a, however, is of far lower resolution than Figure 5-5b. The pixels of Figure 5-5a are much larger than those of Figure 5-5b, so there are fewer of them per unit area—hence a lower resolution. It is possible to see the square shape of the fuzzy pixels in Figure 5-5a, whereas an individual pixel in Figure 5-5b cannot be distinguished without the aid of a magnifying glass.

Figure 5-5 Two grayscale images of the same object with different resolutions

The resolution of printed media is defined in terms of dots per inch (DPI), where a dot is a small spot of ink, like a black pixel. Resolution in office laser printers is typically 300 or 600 DPI. Printers of magazines and books use presses that can produce upward of 1200 DPI. The common newspaper is 72 DPI. It is possible to print images of grayscale or color using binary pixels. The two most common processes are dithering and half-toning, where patterns of ink dots are used to develop the illusion of a grayscale pixel. The next time you read a newspaper, examine the pictures with a magnifying lens or glass. The half-tone pixels are easily discernible.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 9 Activity 3: Resolution

For this activity we will make use of a five-bar resolution chart developed by the National Bureau of Standards (NBS 1010A) for microscopy testing. An illustration of what the chart looks like is shown below. Each of the five bars in a test group is 24 times as long as it is wide. The numeric value shown with each test group represents the number of cycles/millimeter, so the five vertical and horizontal lines in the upper left corner framing the number 1.0 are one cycle/millimeter—in other words, each line and space pair is 1 mm wide. As the numbers get larger, the space that the five lines must squeeze into decreases, so the lines get thinner and thinner. At some point, you may not be able to distinguish between the five lines. At that point you have reached the resolution limit of your visual system. Put the chart into a computer scanner and scan it at various resolutions and determine at what resolution the scanner cannot resolve the lines.

It should be clear that, the more resolution an image has, the finer the lines that can be displayed or printed. In Figure 5-5b, it is very easy to distinguish the seconds tick-marks of the stopwatch dial, whereas in Figure 5-5a they cannot be identified. This has to do with the spatial frequency, which we defined earlier as a measure of lines per unit distance. We say spatial frequency because the frequency is two-dimensional. Consider Figure 5-6. Here we see eight rectangles that exhibit varying degrees and types of spatial frequency. Starting in the upper left frame and reading across, we have a frame with closely spaced vertical lines, followed by the same size frame with fewer verticals. The first frame has a higher spatial frequency content than the second. The last two frames of the first row have high and low spatial frequency, respectively, but with horizontal lines. The second row at left shows a grid pattern where the vertical spatial frequency is equal to the horizontal. The frame to the right of it shows a square with very small dots. That square has high spatial frequency horizontally, vertically, and diagonally. The last two images show random textures. Can you tell which one has higher spatial frequency? If you said the first one, you were correct, because there is greater detail in the pattern. The spatial frequency is related directly to the resolution. In fact, the resolution of which a display is capable determines the maximum spatial frequency that can be displayed.

10 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 5-6 Examples of different spatial frequencies

As we saw earlier, if we have a resolution of 300 DPI, we can display a spatial frequency no greater than 150 lines per inch, or one-half the resolution. We can consider resolution to be a measure of sampling capability, where each pixel is a sample. The maximum number of lines that can be represented, i.e. the maximum spatial frequency, is measured in lines per unit distance and is just one-half the resolution. This can be expressed analytically as:

Maximum spatial frequency (lines/distance) = 1/2 resolution (lines/distance).

Example 3: Spatial frequency The machine vision system project that Shaqeal is working on has been designed to distinguish objects based on a known pattern. The project engineer gives Shaqeal two small cylinders about the size of AA batteries that are to be used as test targets for the system. He asks Shaqeal to mark the cylinders with vertical lines such that the spatial frequency of one is twice that of the other. (See sketch.)

The camera system that they are using has a resolution of 20 pixels per millimeter, so the maximum spatial frequency will be 10 lines per millimeter, or 254 lines per inch. Shaqeal decides to use his laser printer to produce shipping labels with the line patterns that may then be affixed to the targets. His printer can print 600 lines per inch, so he knows that he can produce a pattern that will not be resolvable by the camera system if his patterns are at the resolution limit of the printer. He elects to go with patterns that contain 200 and 100 lines per inch, with each line being 3 and 6 printer lines, respectively. The cylinder with the 200-lines-per-inch resolution will be twice the spatial frequency of the 100-lines-per-inch cylinder, and both will be resolvable by the camera system.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 11 Activity 4: Resolution

Using a “pin art” module (see the “Other Resources” section on where to obtain), produce reliefs of different objects (calculator, golf ball, stapler, etc.) like that shown in the illustration. Now imagine that the movable pins in the module are pixels. Instead of value of light intensity, the pin depth determines the pixel values of each pin. Use a ruler to determine the size of the pin area, then count the number of pins and calculate the resolution of the module. Find an object with physical features that are smaller than the resolution of the pin art. Can the module reproduce the object features?

Bandwidth—Bandwidth is defined as the amount of information that can be transmitted across a channel in a fixed amount of time. See Module 2-4, Principles of Optical Fiber Communications. Bandwidth is a term from communication theory, but here we can consider an image to be a channel and can discuss image bandwidth in terms of spatial frequency. We replace time with distance and say that the bandwidth of an image is the maximum spatial frequency that the image can contain. This was computed earlier and can now be rewritten as the image bandwidth expression:

1 Image bandwidth (lines/distance) = 2 resolution (lines/distance)

Certainly, an image where all the pixels are of the same value does not convey much information. However, a very complex scene with a great amount of detail can reveal a good deal of information. Like we saw with the two images of the stopwatch in Figure 5-5, the image in Figure 5-5b with the higher resolution and higher spatial frequency is the image with the greater bandwidth. When images are transmitted over communication channels, the bandwidth of the channel will have a substantial effect on the speed at which images can be passed. Images are, by nature, high-information-content data structures. To illustrate this, consider the fact that a single frame of an HDTV image is roughly 6.2 billion pixels. So, to generate a 30-frame-per-second image sequence we need a channel capable of processing 186 billion pixels per second. With HDTV, it is necessary to reduce the amount of data (not information!) using compression techniques so that a channel of lesser bandwidth may be used. Nevertheless, a channel bandwidth of 8 MHz is needed. We can contrast this to analog TV, which requires only 4 MHz.

12 Optics and Photonics Series, Course 2: Elements of Photonics Example 4: Bandwidth Vanessa is a technician working for a local cable company. She is in charge of video-on-demand services. Her shop is responsible for maintaining a wide array of video equipment including video amplifiers. Recently she has been told that the company will be upgrading from standard definition television (SDTV) to high-definition television (HDTV) and that she needs to review the equipment specifications for her repair shop. Vanessa knows that HDTV is twice the bandwidth of SDTV, 8 MHz versus 4. A quick survey of her shop equipment reveals that the video amplifiers that are used for testing are rated at 6 MHz. Vanessa knows that, although the 6-MHz video amps work fine for 4-MHz signals, they will be unable to handle the 8-MHz HDTV signals and she will need to upgrade. Her signal generators should be able to produce the requisite bandwidth for the new HDTV testing requirements.

B. Imaging and storage systems Imaging systems include devices and methods used to capture and store images. These systems can capture images live from natural scenes or copy images that have been stored on a medium such as film or paper. Imaging refers to the methods associated with capturing, storing, and computer-processing images. Some clarification in terminology is now required. Images formed optically are referred to simply as images, and images that have been captured onto film are called pictures. These two terms are used interchangeably when the images are captured or scanned electronically for input, storage, and processing by a computer. The formal terms are digital images and digital pictures to indicate that we are talking about computer imaging. However, we will remain with the usual convention and allow the context of use to define the type of images being discussed. When the potential for confusion arises, we will specify computer image or picture to clarify. Cameras—A camera is a photonic device that uses a lens system to form the image of a natural scene onto a recording medium. The first cameras employed various chemical compounds to record scenes onto paper, polymer sheets, or other inert substrates—called the film—such as glass or plastic. The basic parts of a camera are illustrated in Figure 5-7. The film responds to light through chemical change. Light- sensitive compounds change in varying degrees depending on the intensity and wavelength of the light. Other chemicals are used to develop the film and reveal the image as it was recorded to the film. Light enters the camera through the lens system and is imaged onto the recording medium. The medium responds to the light proportionally, so some means must be provided to Figure 5-7 Parts of a camera

Module 2-5: Photonic Devices for Imaging, Storage, and Display 13 control the length of time that the image is allowed to illuminate the film. This time is controlled by the shutter, which is nothing more than a mechanical window that stays closed until it is time to expose the film. When that time arrives, the shutter opens for a preset interval, the exposure time. Exposure times vary depending on the lighting conditions, the camera optics, and the film. There are three critical aspects to camera photography: f-stop, shutter speed, and film speed. f-numbers, shutter speeds, and film speeds—The f-stop is the ratio of the focal length of the lens to the diameter of the lens opening, or aperture. The size of the aperture is controlled by turning a collar typically located at the base of the lens. Typical values may be f/2, f/2.8, f/5.6, and f/16 where the “f” denotes the f-stop, sometimes referred to as the f-number. Smaller f-numbers—f/2, f/2.8, etc.—represent larger apertures, and higher f-numbers—f/11, f/16, f/22, etc.—represent smaller apertures. The f-stops are arranged so that each f-number allows exactly half as much light through as the previous one and twice as much light through as the next. For example: f-number 1.4 2 2.8 4 5.6 8 11 16 1 1 1 1 1 1 1 relative brightness 1 2 4 8 16 32 64 128

Activity 5: f-numbers

Research has shown that the focal length of the human eye is about 1.7 cm (17 mm). Since f-numbers express the diameter of the aperture of a camera lens in terms of focal length of the lens, we can produce a table relating aperture to the human eye to f-number by just dividing the eye’s focal length by the f-number. This table is shown below.

focal length (mm) 17 17 17 17 17 17 17 17

f-number 1.4 2 2.8 4 5.6 8 11 16

Aperture diameter (mm) focal length  f-number

Since f-number is dimensionless, the apertures are in millimeters and the circles in the third row have diameters corresponding to the apertures defined by the f-numbers. To complete this activity, cut out the aperture holes with a razor knife (or sharp object for the small holes.). Hold the paper under a bright light and above a surface (desk or table) and observe how the brightness changes by about one-half for each successive f-number.

14 Optics and Photonics Series, Course 2: Elements of Photonics Example 5: f-number The prototype machine vision system that Marcus has been working on is nearing completion. The design engineer has specified a camera with a lens of fixed aperture size and an externally controlled gain. So, instead of changing the lens diameter—and thus the f-number of the lens—the lens diameter is kept fixed and the effective aperture (and f-number) is controlled electronically. The engineer explains to Marcus that when the gain control is at 10 volts, maximum for the camera, the effective aperture is f/1.4. The marketing group has requested that the production unit be fitted with an adjustment knob to allow settings of f/1.4, f/2, f/2.8, and f/4, as customer surveys have shown that to be a desirable feature. The engineer asks Marcus to install such a knob on the prototype. Marcus selects a four-position switch and supplies it with 10 volts. In position 1, the switch allows 10 volts to go to the gain control input of the camera. He then uses a voltage divider so that position 2 will supply 5 volts—corresponding to an f-number of 2. Likewise, he uses voltage dividers to produce 2.5 and 1.25 volts to accommodate the f/2.8 and f/4 positions on the switch. Each change of position on the knob will decrease or increase the effective aperture of the camera— and hence light energy on film—in accordance with the f-number. See sketch.

Shutter speed is a measure of how long the shutter remains open when the picture is taken. On manual cameras, it is usually set by means of a dial on the top of the camera or, less commonly, 1 1 1 1 a ring around the base of the lens. Typical shutter speeds (in seconds) are 1000, 500, 250 …. 15 and 1. Note that, like f-stop, shutter speeds differ by factors of 2. This makes it easier for photographers to judge camera settings for the subjects or scenes they are trying to photograph. It is for this reason that much of photography is considered to be an art. Film speed is a measure of how sensitive the film is to light and is measured by an International Standards Organization (ISO) number. High-sensitivity films are often called fast, and low- sensitivity films are called slow. Standard film speeds are ISO 100, ISO 200, ISO 400, and— more recently—ISO 800. A film rated at ISO 200 needs half as much light to form the same image density as one rated at ISO 100. Note, for example, that ISO 400 is one stop faster than ISO 200 and two stops faster than ISO 100. Generally, if you need to take pictures in low light conditions, you need a faster film, such as ISO 400 or ISO 800. You also need a fast film if your subject is in motion because, to freeze the motion, the shutter speed must be set for a very short time. The short time, in turn, limits the total amount of light available to expose the film. Automatic cameras have mechanisms, both electronic and mechanical, that simplify the setting of the camera parameters. They also restrict the flexibility of the camera and sometimes limit

Module 2-5: Photonic Devices for Imaging, Storage, and Display 15 the types of film and lenses that may be used. The photonics laboratory often includes cameras that are designed to interface with other components normally found on a well-equipped optical bench. A camera is called for when the optical signals produced are images and a permanent, hard-copy record is desired. A photonics technician is typically responsible for camera setup and operation as well as for obtaining film and camera supplies. Operation of laboratory cameras used to record experimental data can be complex, and in all cases the instruction manuals must be consulted prior to use. Inexpensive automatic cameras are adequate for recording experimental setups for archive or documentation use. Inexpensive Polaroid cameras are also useful for these tasks because the film can be developed quickly into a print or negative. After images are acquired, the film is sent for processing into final form. This form may be slides, photographs, or negatives. Some film processing facilities offer the option of receiving film images in digital form on compact disks. Photonics experiments may require the use of specially prepared slides for use in later experiments (such as holography). Cameras have appeared recently that use electronic CCD arrays in place of the chemical film media. These cameras store images in electronic computer memories and the results are available nearly instantaneously. However, the availability of “hard copy,” or printed results, is dependent on whether a printer is available to produce the prints. Printers capable of high-resolution, high- quality output comparable to that obtainable from chemical film technologies can be prohibitively expensive. CCD cameras are discussed in greater detail in Section IIA, CCD Cameras.

Activity 6: Camera optics

Using an inexpensive camera whose back has been removed, identify the lens system, shutter, aperture, and film area. Mount the camera on an optics table in front of the CCD array of an inexpensive circuit board “pinhole” camera (with CCD lens removed). Connect the output of the CCD camera to a monitor. Set the shutter speed of the camera to inf (infinite) so that the shutter remains open. Try to image an object using the camera’s lens and aperture. After obtaining a suitable image on the monitor, adjust the aperture to f-stops above and below the current setting. Do you observe a doubling and a halving of the brightness of the image on the display? What effect does changing the f-stop have on the clarity and sharpness of the displayed image?

Scanners—Unlike cameras, which are two-dimensional (2-D) devices that acquire an image all at once, scanners must capture images one pixel at a time. The very first electronic imaging systems used what is called a flying-spot scanner. Although many types of scanning devices exist, we are interested in only imaging scanners, like the flying spot, and so our discussion will be restricted to them. Essentially, a scanner converts a 2-D image or picture into a digital image or picture, i.e., an image in electronic form manipulated by a computer.

16 Optics and Photonics Series, Course 2: Elements of Photonics The flying-spot scanner is very simple in construction and consists of a photocell mounted on an x-y positioning system. The components of such a scanner are shown in Figure 5-8. The positioning system locates the photodiode so that it signals the upper left corner of the scene to be imaged. The intensity of the scene is recorded, and the positioning system moves the detector to the right. The output of the sensor is continuously recorded and the system continues in this fashion until it reaches the far limit of the first row of data points. The data points, of course, are the pixels of the image. The positioning system then moves the detector to the start of the second row, and the process repeats itself until the entire scene has been scanned.

Figure 5-8 Flying-spot scanner

The resolution of the image produced by the flying-spot scanner is dependent on the quality and size of the detector, the optics (if any) of the detector, and the resolution and accuracy of the x-y positioning system. This type of scanner is very susceptible to noise produced by the positioning system. Modern scanners use nutating (vibrating) mirrors or detector arrays to minimize the mechanical noise induced by the scanning system. One of the most common scanners is the page scanner, such as the one shown in Figure 5-9. This scanner uses a linear array of detectors that capture one row of pixels at a time from the image. This type of scanner is less susceptible to noise than the flying spot.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 17

Figure 5-9 Page scanner

Activity 7: Scanners

Your instructor will have mounted a set of large posters or screens on the walls of the lab with images such as the lion in the graphic shown here. Set up a tripod-mounted light meter within viewing distance of one of the graphics. Use the viewfinder on the light meter to acquire image intensities from the superimposed grid squares on the poster. Move the tripod as necessary so that only one square is viewed at a time. Record the intensities on a sheet of graph paper so that grid square intensities from the poster match grid squares on the paper Now use the METIP Pixel Calculator program on one of the lab computers [see the “Other Resources” section at the end of this module] to enter the grid data into a blank grid image. What you have done is simulate a “flying-spot scanner.” How does the image that you scanned compare to the graphic in the poster? Calculate the resolution of the poster, your graph paper, and the pixel calculator. How do the resolutions differ? How does this difference relate to the subjective quality of each representation? How could the quality be improved?

18 Optics and Photonics Series, Course 2: Elements of Photonics Files—When images have been acquired, by either a camera or a scanner, they must be stored for easy access and cataloging. Print images are easily stored using standard office filing procedures, although care must be take to ensure that proper environmental conditions—such as temperature and humidity—are maintained. Also, some print media are light sensitive even after processing, and these images should be filed in special opaque envelopes designed for photographic storage. Images captured electronically may be filed on disk or tape for archival purposes. For many years, magnetic tape was the medium of choice for archiving image files, but today the removable diskette has replaced tape systems. Two issues of great importance must be addressed when storing image data to files—the resolution of the image and the file storage format. Both of these issues will impact the size of the storage medium required to archive the images. Resolution affects the gross size of the image in terms of computer memory requirements. A grayscale image with a matrix of 1024  1024 pixels that uses one byte per pixel will require 1 megabyte of storage if the image data are stored in raw form with no further processing. Table 5-1 shows a comparison between common digital storage media in terms of how many raw 1024  1024 images can be archived, the access rate (speed) of the device that reads and writes the medium, and the relative stability (longevity) of the medium. The data in the table is crude because the storage medium capacities may change dramatically in a very short time. Nevertheless, you can see how the choice of medium for archiving images is no trivial task. If the images are data collected from rare events, highly stable media will be best, such as CD-ROM or DVD diskettes. If there is a lot of data, a tape medium may be the best choice. Computer memory has no longevity because the data are lost upon removal of power. Hard disks can be useful for image archiving if the data are used often, since they are the fastest method (short of computer memory) for storing and retrieving data. Hard-disk data are easily backed up using one of the less volatile media to ensure that a device failure does not cause loss of data.

Table 5-1 Storage Medium Capacities Media Number of Images* Speed Longevity Computer memory 128 Very Fast None Hard disk 10,000 Fast Low CD-ROM 600 Medium Extreme DVD 5,000 Medium Extreme Tape 20,000 Slow High *10241024-pixel images (1 byte per pixel)

Like film media, digital media must be stored carefully according to the manufacturer instructions. Magnetic media such as tapes must be kept away from strong magnetic fields. Power supplies for high-powered laser systems can generate damaging electromagnetic fields, and so caution must be exercised when using or transporting magnetic computer media near these devices. This type of negative influence is called electromagnetic interference (EMI) or radio-frequency interference (RFI), and commercial equipment must be shielded from these

Module 2-5: Photonic Devices for Imaging, Storage, and Display 19 fields. However, some special-purpose laboratory equipment may not be adequately shielded and so caution must be exercised. Digital images are stored as computer files. Files are groupings of data that are kept on digital storage media like those described above. When the digital image is stored directly as pixels, the file that holds the data is called a raw image file. These files can be quite large and the storage of a large number of them can be difficult and time consuming. Some files use a compression scheme to reduce the amount of data that must be stored, but one must be very careful in the selection of the technique used. Compression schemes take advantage of the repetition in pixel characteristics, e.g., color, brightness, etc., that occurs in large collections of data or in the features that make up the image. Imagine an image such as that shown in Figure 5-10. This could be an image requiring a large amount of data storage, or we could just store the phrase “A large circle with horizontal lines through it.” Likewise, we could have an image that used only a few different pixel values. For this sort of image, we could make up very short codes to represent the pixel data. These shortened codes could be stored along with a code key to allow us to reconstruct the original pixel values when we want the image restored. In both cases, we could store a very small encoded image or description rather than the original large one. With the latter Figure 5-10 Simple graphic approach, we could get back our original image with no image degradation or loss of original pixels. But not so for the first approach, where how the reconstructed image would look would depend completely on the artistic and drawing skill of the individual tasked with recreating “a large circle with horizontal lines through it.” The point of this discussion is that compression schemes may be lossless or lossy. In almost all cases the lossy schemes give the best data reduction. This is why we say that you must be cautious in using image file formats, since some may give excellent compression but the cost of the compression will be paid in data loss. Table 5-2 lists a few popular file formats and the types of compression schemes that they employ.

Table 5-2 File Formats Format Compression Scheme raw None—pixel values stored directly. Lossless for grayscale images, can be lossy for color images since color GIF1 imagery is forced to an indexed color map. Inherently lossy, JPEG removes redundant data based on a model of JPEG2 human color perception. JPEG can be used to store a lossless image, but the file size will be greater than that of a raw image! TIFF3 Compression scheme selectable; most schemes used are lossless. 1Graphic interchange format 2Joint photographic experts group 3Tagged interchange file format

20 Optics and Photonics Series, Course 2: Elements of Photonics Activity 8: Image files

Using a commercial image-processing software such as Image Pro Plus by Media Cybernetics [see the “Other Resources” section at end of module], display a grayscale image file that your instructor will specify. This image file will have 8-bit pixels at a resolution of 256  256 pixels. Since a byte is 8 bits, this file will require 256  256 = 65,536 bytes if stored as “raw” pixel values, i.e., no compression. Store the file as GIF and uncompressed TIFF, and observe the file sizes. The GIF image file should be smaller than the TIFF. The TIFF file will be larger than the “raw” file because TIFF adds information to the file besides just the pixel data values. Get a different 256  256 image and store it as GIF. Although the raw files of the two images will be the same size, the GIF image files may be different due to the information content of the images. A very complex image will not compress as well as a simple image. Save an image in JPEG format and set the mode to maximum compression. Now reload the JPEG file that you just saved. You should see a noticeable decrease in the quality of the image.

II. Imaging and Storage Devices Various optical and electro-optical devices comprise an important aspect of photonics work. They are employed by imaging systems as components responsible for converting images into electronic signals for display—or for further conversion into digital images for use in computers. Many different imaging devices are available. The three most popular ones are discussed here.

A. CCD cameras Charge-coupled device (CCD) cameras are the most pervasive of today’s imaging devices. A CCD camera uses an array of light-sensitive cells formed in silicon. The cells can be thought of as miniature capacitors, where each capacitor is a pixel in the image created by the array. When the array is exposed to light, the capacitors charge up proportionally to the intensity of the light falling on the array. The array of different charges, discriminated by different values of voltage, is then read, converted to a digital signal, and transferred into a computer. The signal can also be sent directly to a CRT device (see section IIIA, Introduction to Cathode-Ray Tubes) for viewing. The CCD array can replace the film medium in a camera, where the camera optics form an image on the array instead of on the usual film. CCD cameras have electronic shutters that control the integration time at each pixel. This means that the charging time of the capacitor is controlled. If the capacitor is allowed to charge a long time, the pixel will have more time to charge. Exposure in the camera is also dependent on the device physics used in the CCD array construction. Most cameras have automatic exposure controls that operate to the limits of the camera specifications and are nothing more than automatic gain controls that limit the electrical signal output by the camera, or hold the signal to a constant value. More expensive cameras allow adjustments to be made under computer control and are equipped with special interface units. Figure 5-11 shows the basics of how a CCD array works.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 21

Figure 5-11 CCD array

Note that the image data in the electrical signal output from the array must leave the array serially, in single file. This process is illustrated in Figure 5-12. The pixel values exit the array in synchronization to an electronic clock signal. The top row of pixel values (voltages) is clocked out of the chip, then the pixel values are shifted up (copied) by one row and these are then clocked out, and the process repeats until all pixels have been read out. You may wonder about whether or not a shutter mechanism is required. The answer is yes. The shutter action is accomplished electronically by a signal that copies the charge from all of the pixels on the primary CCD array to another CCD array—called the storage array—behind the imaging array. This second array is used to transfer the pixels out of the device. Color CCD arrays require filters to control the wavelength of light that strikes the individual elements of the array. For this reason, the color CCD is more complex physically, electronically, and optically—although the fundamental operation of the array is similar.

Figure 5-12 CCD pixel-data transfer

22 Optics and Photonics Series, Course 2: Elements of Photonics Structure—The structure of a CCD (electronic) camera is identical to that of a film camera with two exceptions: 1) the CCD camera replaces the film medium with a CCD array, as discussed above, and 2) the shutter action is performed electronically. CCD cameras have widespread use and can be made very inexpensively and with very high resolution. Small, inexpensive cameras are used heavily by the security and surveillance industry. All modern camcorders contain CCD arrays. Almost all modern astrophysical recording is done using high- resolution CCD arrays. Finally, medical imaging in hospitals is converting from chemical film processes to CCD-captured digital imagery. Capabilities—A number of specifications are used to characterize the capabilities of CCD cameras, but the primary ones are resolution in terms of pixel count and geometry, array size, pixel quantization, and frame rate. Arrays are now available with resolutions of 4096 by 4096 pixels, although combined array cameras of 8192 by 8192 pixels formed from sixteen 2048-by-2048 arrays were available at the time of this writing. Odd geometries are also prevalent, such as 1317 by 1035 or 2112 by 2070 pixel arrays. Electronic shutter speeds vary and range from less than one frame per second to over 100 frames per second. Pixel quantization is defined in terms of bits, as we mentioned earlier. Typically, the CCD pixel employs 8-bit quantization yielding a grayscale of 28 or 256 levels. Laboratory imaging generally requires 12-bit quantization or 4096 levels of grayscale. There are rules of thumb to be applied when discussing CCD capabilities. As resolution of the array increases, frame rate decreases. This is due to the fact that the pixels must be removed from the array serially, so the more pixels acquired the longer it takes to get them out of the array. If the images produced by the CCD are to be used for human viewing, 8-bit quantization is usually adequate. However, for computer image processing the rule is that the greater the quantization the better the processing.

B. CMOS cameras Complimentary metal-oxide semiconductor (CMOS) cameras are similar to CCD cameras in that they use an array of light-sensitive cells formed in silicon. Figure 5-13 shows a CMOS camera “chip,” as it is called. The light-sensitive array is at the center of the rectangular center section. The similarity between CCD and CMOS ends with the fact that they are both formed from a silicon integrated circuit technology, since the CMOS cells are transistors rather than miniature capacitors. When the array is exposed to light, the transistors output a voltage proportional to the intensity of the light falling on the array. This voltage is then read off of the array, converted to a digital signal, and transferred into a computer. As with CCD operation, the signal can also be sent directly to a CRT device for viewing. Similarly, the CMOS array can replace the film medium in a camera, where the camera optics form an image on the array instead of on the usual film. Figure 5-14 shows the CMOS camera chip with a lens installed.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 23

Figure 5-13 CMOS camera “chip” Figure 5-14 CMOS camera with lens

Recall that the CCD camera incorporates a shifting process to get pixel data out of the detector array. The CMOS imager does not require this since both the photodetector and the voltage output amplifier are integrated into each pixel. Most CMOS cameras incorporate a photodiode (a diode that detects light) and three transistors to amplify the voltage from the photodiode in each pixel. The circuit diagram for a CMOS pixel, showing the photodiode and three transistors is shown in Figure 5-15. The transistor labeled “reset” is used to shutter the image produced by the array, and the row transistors allow selection of the pixel by row while the column is selected via a column bus. What this means is that the value of each pixel may be addressed in the same way that memory bit is accessed in a computer memory. X-Y wires that address the rows and columns of the array allow direct access to each pixel independently. Figure 5-16 shows a graphic that illustrates how the pixels in the CMOS array are individually addressed by row and column.

Figure 5-15 CMOS pixel circuit diagram

24 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 5-16 CMOS array addressing

Row-column addressing provides much greater flexibility in processing the image data from the camera since the data in a window, or subsection of the array, can be accessed (see Figure 5-17). This window of data facilitates certain image compression, motion detection, and target- tracking operations. The window can also be moved electronically to provide for pan, tilt, and zoom outputs.

Figure 5-17 CMOS array window

Similar to the CCD array, a shutter action is accomplished electronically. Unlike the CCD, however, the CMOS camera accomplishes shuttering by simply changing the rate that the image is read from the array. The shutter speed limit is determined by how fast the pixel values in the array can be read. Color CMOS arrays require filters to control the wavelength of light that strikes the individual elements of the array, just as they do with CCD cameras. For this reason, the color CMOS is more complex physically, electronically, and optically—although the fundamental operation of the array is similar. Structure—The structure of a CMOS camera is identical to that of a film camera with two exceptions: (1) The CMOS camera replaces the film medium with a CMOS array, as discussed above, and (2) the shutter action is performed electronically using the reset transistor. CCD versus CMOS—Although CCD and CMOS cameras share a number of similarities, there are significant advantages and disadvantages to each technology that you should be aware of.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 25 These differences influence the selection of one technology over the other and also indicate where you will find one camera rather than another in an application. Noise. Because of the way that the CCD camera is manufactured, the CCD array is less susceptible to electronic noise than the array of the CMOS camera. The CCD camera, in general, will produce a higher-quality, lower-noise image than an equivalently-sized CMOS camera. Sensitivity. Since each pixel in the CMOS array is composed of a detector and upwards of three transistors, light falling on the pixel also falls on the transistors, where it has no effect. As a consequence, a similarly sized CCD camera will be more light sensitive than a CMOS camera. Power consumption. CMOS technology is a low-power technology. Hence, a CMOS camera will consume up to 100 times less power than a CCD camera. CMOS cameras are ideally suited to power-sensitive applications, like those where battery power is used. Fabrication. Since CMOS arrays are very similar to computer memory arrays, they are very easily manufactured using the same processes. The CCD array requires a very specialized fabrication technology and is therefore substantially more costly to produce. Maturity. CCD camera technology is older than that of CMOS cameras. Hence, the availability of high-resolution, high-pixel-quality arrays is more common. Because of the differences discussed above, CMOS cameras are found in low-cost applications where battery power is used and image quality or resolution is not of great concern. Inexpensive digital cameras, security systems, and cameras mounted on vehicles are applications where CMOS cameras find usage.

C. Vidicons The vidicon is an electron tube that has been designed to capture images and convert them to electrical signals. The signals produced by a vidicon are similar to those generated by a CCD array, but they are produced from an electron beam as opposed to a silicon charge system. The vidicon tube was invented in 1951 and has seen little change in basic design since that time. It was used extensively in the broadcast (TV) industry until the advent of the CCD. Likewise, the CCD camera is slowly replacing the vidicon in both scientific and medical imaging as CCD technology improves and vidicon technology ages. The two greatest drawbacks to the vidicon are the delicacy of the vacuum tube and the high voltage requirements for the electron scanning beam. Structure—The components of the vidicon tube are diagrammed in Figure 5-18. A cylindrical glass tube is constructed with a flat glass plate window at one end. Inside the tube and behind this window is a photoconductive target. The material from which this target is made has the property that its electrical resistance varies according to the intensity of illumination that strikes it. An external lens system is used to image a scene onto the window of the vidicon, which then charges the target material to varying intensities depending on the illumination. An electrode is placed at the other end of the tube. The electrode produces an electron beam when the glass tube is sealed under a vacuum. The beam is focused and deflected by a set of coils that surround the tube. The beam is scanned across the target, and an electrical signal is developed that is then output from the target. The voltage of this signal is proportional to the resistance of the target at

26 Optics and Photonics Series, Course 2: Elements of Photonics the position of the beam and to the light intensity striking the target at that location. The signal output is then synchronized to the electron beam and output as the electronic image.

Figure 5-18 Vidicon tube

Capabilities—As discussed earlier, vidicons are becoming obsolete as the capabilities of CCD arrays are improved. At one time they were the system of choice for medical and scientific imagery because of their high speed and high resolution. But now the CCD array camera has exceeded these capabilities. So there are companies that now offer CCD replacement units for vidicon tube cameras. The delicacy of the glass vidicon tubes and the complexity of the electronics required to drive them have contributed to their demise. The resolution of the vidicon is dependent on the construction of the glass envelope, the type of photoconductive material used, and the complexity of the electronics. If the electronics are susceptible to noise or suffer from thermal drift, the scanning beam will not accurately track the image formed by the phosphors and final image quality will be degraded.

D. Image intensifiers The image intensifier is a vacuum-tube device (similar to the vidicon) that accepts an image at one end and produces an image of higher intensity at the other. The image intensifier can be considered an image amplifier that uses energy to achieve the necessary amplification. These devices are used in low-light-level situations such as night vision and astronomy. Image intensifiers were first developed to work with vidicons and other early electronic camera technologies and were often manufactured as components of these cameras. Modern image intensifiers are used as adapters to cameras or by themselves, as night-vision devices under starlight conditions. Structure—The image-intensifier tube has a photocathode screen at one end and a phosphor screen at the other end as shown in Figure 5-14. The photocathode is a material, such as gallium arsenide, that emits electrons when exposed to light. An optical system is used to image a scene onto the photocathode, and electrons are emitted in proportion to the amount of light imaged.

Module 2-5: Photonic Devices for Imaging, Storage, and Display 27 The electrons are accelerated by an electric field, which produces a gain in terms of number of electrons at the phosphor screen. When the electrons strike the phosphor screen, the screen emits visible light and produces an intensified image on the photocathode screen of the scene formed at the input end of the tube.

Figure 5-19 Image intensifier

Capabilities—Early image intensifiers designed in the 1960s could amplify light by 1000 times. Current intensifiers can amplify upward of 50,000 times. These systems can develop the illusion of daylight when used for night vision under starlight-only conditions. An image intensifier amplifies light, so it will not operate when no light is falling on the input screen.

Activity 9: Image intensifier

Extinguish the room lights and use an image intensifier to experience “night vision.” Turn the lights back on and set up a CCD circuit board “pinhole” camera to image an object on the bench. Display the object on a monitor. Mount the image intensifier in front of the CCD camera and again extinguish the room lights. The CCD camera is now imaging the output of the intensifier and thus is able to operate in the darkened room.

III. Display Devices Display devices are the complement of imaging devices and are used to output images for viewing. Our discussion of displays is restricted to a few purely electronic display technologies that output images, although other technologies—such as printers and electromechanical systems—exist that also can display imagery. These devices use various approaches to take an image signal from an imaging device, or from a computer, and display the pixel data so that the image can be observed. All display devices have a common capability and that is to output a pixel value visually.

28 Optics and Photonics Series, Course 2: Elements of Photonics A. Introduction to cathode-ray tubes The cathode-ray tube (CRT) is the oldest electronic display technology and—unlike vidicons and flying-spot scanners—is in little danger of becoming technologically extinct in the near term. The CRT is a vacuum tube, much like a combination of a vidicon and an image intensifier. The image signal to be displayed is input as a serial sequence of pixel data, and these data are displayed through the use of luminous phosphors at the viewing end of the tube. Construction—Figure 5-20 shows a schematic drawing of how a basic CRT is constructed. The principal component parts are labeled. At one end is the electron gun. This unit produces a stream of electrons that are modulated in intensity by the image signal as it is accessed. The electrons are accelerated toward the opposite end of the tube by electromagnetic coils. The coils also play a role in focusing the beam, which also passes through opposing x-y charged plates that are used to electronically sweep the beam across a phosphor-coated screen at the far end. The electronics required to synchronize the beam to the input signal stream and to perform the scanning operation are quite sophisticated. At the end of the scan, the electronics must cause the beam to shut down while it is sent back to the start point of the scan.

Figure 5-20 Cathode-ray tube (CRT)

Capabilities—CRT displays are capable of very-high-resolution output and can display very large images. They can also handle very high frame rates and can output black and white (intensity only) images or color images. They are common in standard TV sets. Modern CRT displays are equipped with electronic controls so that they can automatically adjust to various resolutions and color requirements of images output from computers or received as television signals.

B. Flat-panel liquid-crystal displays Liquid crystals are substances whose material state is somewhere between crystals and liquids. They appear to be like a gel and have very interesting and useful properties. Liquid crystals are light-polarizing substances, and their polarization can be modified by the application of an electric field. This property makes them useful in display technologies. The advantages of LCD displays include low cost, small size, and low power consumption. Their primary disadvantage is that they modulate light and so require external light sources to operate. The light source used

Module 2-5: Photonic Devices for Imaging, Storage, and Display 29 to illuminate a large LCD display, such as those used for computer or television screens, is typically supplied from behind the display itself and is called a backlight. Liquid-crystal theory—Liquid crystals are classified into three different phases, called nematic, smectic, and cholesteric. In the nematic phase, individual molecules are threadlike and longitudinally arranged. In the smectic phase, molecules organize themselves into layers. In the cholesteric phase, molecules in the different layers orient themselves at a slight angle relative to each other (rather than parallel as in the nematic phase) and take on a helical orientation. The three phases are illustrated in Figure 5-21. These phases are important to the use of liquid crystals as a display methodology. Liquid crystals have birefringent properties, which means that they can polarize light. However, the polarization can be switched using an electric field. This property gives rise to their use in displays.

a. Nematic b. Smectic c. Cholesteric

Figure 5-21 Liquid-crystal phases

In Figure 5-22 we see how a simple liquid-crystal display is constructed. Such a display might be used in a digital watch or as the display for a small computer controller in an appliance or instrument. The liquid-crystal material is placed between two glass plates, one of which is coated with a transparent metal-oxide film. This film is one electrode of the display. The other electrode is formed into shapes, patterns, or symbols, and a separate wire lead is attached to each. The front of the display has a polarizer. When the display is inactive, the light passing through the assembly remains randomly polarized and nothing is observed. However, when one or more of the electrodes is energized by the application of a small electric field, the crystals align themselves such that they produce a polarization that is perpendicular to the polarizer layer. This behavior causes the symbol or pattern defined by the electrode to be darkened, as shown in Figure 5-23. The LCD can produce a grayscale effect depending on the amount of charge placed on the electrodes. The strength of the charge controls the degree of alignment of the crystals, hence the amount of light blockage by the polarization effect. Color can also be incorporated into these displays by the addition of red, green, and blue filters.

30 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 5-22 Simple LCD construction

Figure 5-23 LCD operation

LCDs capable of displaying images, where a pixel array is required, are called matrix LCDs since the image array is a matrix of pixels. These displays are considerably more complex than the simple displays described above. However, they represent the future of imaging display technology and will eventually replace CRTs. The industrial and domestic consumers of these displays want smaller size (in terms of case size, not screen) and less power consumption, both advantages of the LCD over the CRT. The limiting factor has been cost and display quality, but as these improve, the LCD will become increasingly popular as a replacement for the CRT. The matrix display must address a two-dimensional array of pixels. Unlike the scanning beam of the CRT that traces back and forth across the screen, the pixels in a matrix display must be individually addressed. There are two basic matrix LCD types, passive and active, and the differences between the two are discussed next. Passive-matrix liquid-crystal displays—A passive-matrix LCD incorporates a grid arrangement of electrodes where all the pixels in a column are connected and all the pixels in a

Module 2-5: Photonic Devices for Imaging, Storage, and Display 31 row are connected. To address a single pixel, the column/row of that pixel is energized. This is illustrated in Figure 5-24.

Figure 5-24 LCD row-column addressing

The display is updated using a scanning process and so is very slow—not fast enough to display movies. The row-column grid can allow leakage of charge, so pixels can appear smeared or fuzzy. The passive display is very inexpensive and is used primarily for imaging displays where high resolution and speed are not required. Passive LCDs can take advantage of dual-scan refresh, where the screen is divided into two sections that are refreshed simultaneously. Dual- scan displays are not as sharp or bright as active-matrix displays, but they consume less power. Active-matrix liquid-crystal displays—Thin-film transistors (TFT) are switching transistors that can be fabricated onto the glass layer of an LCD panel. The transistors may then be used to directly switch the field of the LCD pixel, providing a substantial increase in display- refresh speed and in sharpness and clarity. The use of the TFT to actively switch the LCD pixels gives rise to the terminology active-matrix LCD. These displays are substantially more expensive than those of the passive type, due to their complexity. But the higher resolution and speed have placed them in great demand for high-end portable computers. In time, the active- matrix LCD will dominate the LCD market and ultimately eclipse that of the CRT.

C. Flat-panel electroluminescent displays Electroluminescent displays (ELD) are very similar to LCDs. The primary difference is that ELDs generate light through the process of electroluminescence. When a voltage is applied directly to certain phosphors, they emit light in the same way that they do when struck with an electron beam in a CRT. ELDs have more limited applications than LCDs due to their higher cost of construction and the fact that full-color systems have not been developed.

32 Optics and Photonics Series, Course 2: Elements of Photonics D. Flat-panel LED displays LED stands for light emitting diode, and these electronic components are true photonic devices. LEDs are now available in almost any visible wavelength. Single, high-power, high-output LED units are replacing tungsten-filament light bulbs in many applications. An LED display is formed from an array of LEDs where each LED serves as a pixel. The modulation of current to the LED varies the light output and thus produces varying intensity levels. If clusters of red, green, and blue LEDs are used at each pixel position, a color image can be produced. LED displays are simpler in construction than either LCD or ELD systems. However, they are as yet incapable of the resolution of these displays and are very expensive to produce. Like the ELD, however, they produce light and so do not require an external light source for viewing. The LED display does require a greater amount of power to operate than does an LCD or an ELD.

IV. Looking Toward the Future Image-processing and display technology is changing daily, and these changes will impact the photonics technician of the future. One major aspect of imaging that is guaranteed to change is the resolution of displays. The two fields of work most likely to be impacted by this will be medical and military imaging. Both of these areas use photonics heavily, and both are increasingly interested in capturing, displaying, and analyzing images with high information content. Systems of the future will have resolutions that exceed the capability of human sight and will place great demands on the computing systems that analyze these images. The installation and maintenance of these systems will be demanding for the photonics technician as well. In the future, display systems will place more reliance on optics, and it will not be uncommon to see 3-D holographic displays and head-mounted displays of great resolution and speed so that the wearer is immersed in the image. Much of this virtual-reality work relies heavily on display technology, and the photonics technician will play a critical role in that field.

LABORATORY

In this laboratory you will determine the resolution of a CCD camera.

Equipment List The following equipment is needed to complete this laboratory: 1 Logitech QuickCam™ CCD camera 1 computer to support QuickCam™ 1 table-tripod or holder for camera 1 ruler marked in tenths of an inch or millimeters 75-LPI test target

(Note: You may use the target on this page for this experiment.)

Module 2-5: Photonic Devices for Imaging, Storage, and Display 33 Procedure 1. Verify that the target has a resolution of 75 lines/inch. Do this by first measuring the target with the ruler. The target should be 1" square. Now count the lines in the target, which should be 75. Remember to include the white lines when counting. 2. Connect the camera to the video monitor. Set the camera so that the target is imaged on the monitor with the lines on the target oriented vertically. Adjust the focus so that the lines are clearly distinguishable. 3. Move the camera (or the target) back until the lines are no longer clear, but keep the camera in focus. The goal is to stop moving the camera at just the point before the lines are no longer distinguishable. This is the point at which the camera sensors can just resolve the lines and thus are at their resolution limit. The camera now has one line per pixel imaged onto the CCD array. 4. Measure the width in inches of the target appearing on the monitor and measure the width of the image displayed. The software that accompanies the QuickCam™ allows you to display the image in a window. This window should be at maximum size. 5. Compute the number of (horizontal) pixels of the camera using the following formula: 75 lines Number of pixels =  size of window. size of target in window

6. To determine the resolution of the camera in lines per inch (LPI), you need to know the size of the CCD array. These data should be listed in the camera. The specifications will also list the number of pixels in the CCD so that you can compare your experimental result to the actual specifications of the array. 7. Now turn the target so that the lines are horizontal and repeat the experiment to determine the vertical pixels. Many CCDs (like the QuickCam™) have different numbers of horizontal and vertical pixels.

RESOURCES

Software  METIP Pixel Calculator, University of Washington Math Experiences Through Image Processing Program—http://www.cs.washington.edu/research/metip/metip.html  Image Pro Plus by Media Cybernetics—http://www.mediacy.com/ippage.htm  MATLAB Image Processing Toolkit by Math Works, Inc.— http://www.mathworks.com/products/image/

34 Optics and Photonics Series, Course 2: Elements of Photonics Reference Materials Textbooks Andrews, Harry C. Computer Techniques in Image Processing. New York: Academic Press, 1970. Myler, H. R. Fundamentals of Machine Vision. Bellingham: SPIE Press, 1998. Myler, H. R., and A. R. Weeks. Computer Imaging Recipes in C. Englewood Cliffs: Prentice Hall, 1993. Ono, Y. A. Electroluminescent Displays. World Scientific, 1995. Russ, John C. The Image Processing Handbook. Boca Raton: CRC Press, 1992. Tsukada, Toshihisa. TFT/LCD: Liquid-Crystal Displays Addressed by Thin-Film Transistors. Amsterdam: Gordon and Breach, 1996. Articles Musa, S. “Active-Matrix Liquid-Crystal Displays,” Scientific American, November 1997.

Equipment Suppliers LCD/LED Displays Digi-Key Corporation, 701 Brooks Avenue South, Thief River Falls, MN 56701. Jameco Electronics, 1355 Shoreway Road, Belmont, CA 94002. Image Intensifiers ITT Industries Night Vision, 7671 Enon Drive, Roanoke, VA 24019. “Pin Art” Action Products International, Inc. , 390 North Orange Avenue, Orlando, FL 32801 CCD “Pinhole” Cameras Hosfelt Electronics, 2700 Sunset Boulevard, Steubenville, OH 43952. Logitech QuickCam Camera Logitech, Inc., 6505 Kaiser Drive, Fremont, CA 94555.

EXERCISES

1. A CCD sensor array is 8.5 mm by 8.5 mm with 300 rows by 300 columns of sensors. What is the resolution of this array in lines per inch? 2. A computer monitor can produce 4096 colors using red, green, and blue pixels. Can you explain this in terms of bits per pixel? 3. A computer monitor is 15" by 15" with a resolution of 1024 by 1024 pixels. If a binary image is displayed on the monitor, what will be the result if the image is printed on an 8.5" by 11" sheet of paper by a laser printer capable of 300 DPI?

Module 2-5: Photonic Devices for Imaging, Storage, and Display 35 4. Photographs taken during a recent experiment have turned out to be too light. What action should be taken to correct the problem? 5. List advantages and disadvantages between flying-spot scanners and flatbed scanners. 6. A high-resolution CCD camera produces RGB color images that are 1024  1024 pixels with a bit-depth of 24 bits. Assuming that the images are stored in raw format, how many of these images could a CD-ROM hold? 7. You have been given a floppy disk for analysis. The disk contains image files produced by a high-resolution camera. When you list the disk files, you note that they are in JPEG format. Does this present a problem? 8. You have just finished installing a new CCD camera into an imaging system in the lab, BUT the system cannot maintain the frame rate that it had before. What could be the problem? 9. An experiment that you are conducting generates very-low-intensity images and your CCD camera set at the widest possible aperture still produces unacceptably dark images. What might correct this problem? 10. An LCD display has a dead row of pixels. What could explain this? 11. An LCD has a resolution of 640 by 480 pixels. Will an image from a CCD camera with 380 lines of resolution be displayed at full resolution? 12. An LCD display is used to display images from a CCD camera. You notice that fine detail is missing from objects in the image. What could explain this?

36 Optics and Photonics Series, Course 2: Elements of Photonics

Basic Principles and Applications of Holography

Module 2-6

of

Course 2, Elements of Photonics

OPTICS AND PHOTONICS SERIES

PREFACE

This is the sixth module in Course 2 (Elements of Photonics) of the STEP curriculum. Following are the titles of all six modules in the course: 1. Operational Characteristics of Lasers 2. Specific Laser Types 3. Optical Detectors and Human Vision 4. Principles of Fiber Optic Communication 5. Photonic Devices for Imaging, Storage, and Display 6. Basic Principles and Applications of Holography

The six modules can be used as a unit or independently, as long as prerequisites have been met. For students who may need assistance with or review of relevant mathematics concepts, a student review and study guide entitled Mathematics for Photonics Education (available from CORD) is highly recommended. The original manuscript of this document was authored by Jack Ready (consultant) and edited by Leno Pedrotti (CORD). Formatting and artwork were provided by Mark Whitney and Kathy Kral (CORD).

CONTENTS

Introduction ...... 1 Prerequisites ...... 1 Objectives ...... 1 Scenario ...... 2 Basic Concepts ...... 3 Introduction ...... 3 Formation of Holograms ...... 3 Types of Holograms ...... 8 Efficiency of Holograms ...... 12 Practical Aspects of Holography ...... 13 Applications of Holography ...... 16 Holographic interferometry ...... 16 The holocamera ...... 18 Use of holographic interferometry for defect detection ...... 19 Holographic Optical Elements ...... 19 Embossed Holograms ...... 20 Holographic Data Storage ...... 20 Displays ...... 21 Exercises ...... 23 Laboratory ...... 24 References ...... 27

COURSE 2: ELEMENTS OF PHOTONICS

Module 2-6 Basic Principles and Applications of Holography

INTRODUCTION

When one thinks of holography, the striking displays of three-dimensional imagery first come to mind. But there is much more to holography than the recording and displaying of these three- dimensional images. Holographic interferometry is used in industry to measure microscopic displacements of the surfaces of objects. It is used to measure small variations in the index of refraction in fluids in order to map flow patterns. Holographic optical elements (HOE) can perform the functions of mirrors, lenses, and gratings. They are used in many optical systems, especially when weight is an important consideration. In the future, holography will be used for data storage with large information banks, offering high speed of data retrieval. In this module, you will learn the basic concepts and applications of holography. In the laboratory, you will make a hologram of reasonably good quality and will observe the images stored in it. You will learn some of the more important characteristics of holograms through simple observations of holographic images.

PREREQUISITES

Before starting this module, you should have completed the following modules in Course 1, Fundamentals of Light and Lasers: 1-1: Nature and Properties of Light; 1-2: Optical Handling and Positioning; 1-3: Light Sources and Laser Safety; 1-4: Basic Geometrical Optics; and 1-5: Basic Physical Optics.

OBJECTIVES

When you complete this module, you should be able to:  Draw and label an experimental arrangement that can be used to produce a transmission hologram of a three-dimensional object. The drawing must include a source, object, necessary optical components, object and reference beams, and holographic film.

1

 Correctly state the relation between the length of the paths of the object and reference beams for the arrangement under the previous bullet.  Draw and label an experimental arrangement that can be used to reconstruct the three- dimensional image stored in a hologram. The drawing must include a source, reconstructing beam, hologram, and position of the observer. The drawing must show the proper light beams that lead to the real and virtual images of the original object.  Describe the differences between thick and thin holograms.  Describe the differences between transmission and reflectance holograms and the differences between the orientations of the beams that produce them.  Describe the differences between amplitude and phase holograms.  Correctly calculate the optical density of a piece of photographic film, given its transmission.  Name and describe three methods of holographic interferometry used for nondestructive testing.  Make a hologram of a suitable three-dimensional object, develop it, and describe the virtual image.  Study the properties of the hologram made above under the following conditions: (a) Illumination of the entire hologram with laser light of the same wavelength that was used to make it. (b) Illumination of the entire hologram with monochromatic light of some other wavelength (light from a sodium vapor lamp, for example) after passing this light through pinholes of different sizes. (c) Illumination of the hologram as in (a) or (b) above after the hologram has been reversed in orientation (turned front for back). (d) Moving a mask with a small hole over the viewing side of the fully illuminated hologram and examining the image.

SCENARIO

Maria Alvarez is a photonics technician who specializes in holographic techniques. The company she works for has received a new contract from a software firm for making 1,000,000 small holograms to be used as security seals on its products. Maria is responsible for making the master hologram using a three-dimensional model of a logo provided by the customer. Maria sets up a basic split-beam reflection hologram configuration on an optical table and aligns the components using techniques that she learned in her holography courses and laboratories in school. She produces an excellent holographic image of the logo that seems to “jump” out from the surface when illuminated by any point source of white light. Maria then turns the master hologram over to Brendan Williams, a technician in the replication department. Brendan treats the hologram as if it were an original object and makes a million copies of it using rolls of photopolymers. All this represents a typical day’s work for Maria and Brendan.

2 Optics and Photonics Series, Course 2: Elements of Photonics

BASIC CONCEPTS

Introduction Holography was invented by Professor Dennis Gabor of the Imperial College in London in the late 1940s. Gabor was able to make holograms and demonstrate the holographic process with the light sources available at that time. But there were no good coherent intense light sources available then, and it was very difficult to make good holograms. So after some period of activity, interest in holography diminished for a number of years. In 1960, with the invention of the laser, a good, bright, coherent light source became available. Emmett Leith and Juris Upatnieks of the University of Michigan soon developed novel techniques to apply the new light source and produced the first practical holograms. Interest in holography rapidly flourished and has continued to grow and develop to the present time. Gabor received the Nobel Prize in physics in 1971 for his original work. This module describes the concepts of holography and presents basic information on the methods of making holograms and viewing them. It also describes some of the applications of holography. It is not a complete manual for all the different ways to make holograms nor a laboratory guide for step-by-step procedures for holography. Nevertheless, you will have an opportunity to make a hologram using a typical procedure.

Formation of Holograms Holography involves a two-step procedure. First one records a complex interference pattern formed by the superimposition of two light waves. One of the light waves (object beam) comes from some object or scene that is to be stored. The other light wave (reference beam) travels a different path. The interference pattern formed by the two beams is recorded in a recording medium, usually photographic film. Then, in the second step, one illuminates the recorded interference pattern with a light beam identical to one of the beams used in the recording process. This is called reconstructing the image. Reconstruction yields an image that is a duplicate of the original object, including all details of its three-dimensional nature. Conventional photography produces two-dimensional images of three-dimensional scenes on flat film and needs lenses to do so. Holography is sometimes called lensless photography. In principle, no lenses are required to make holograms, but often, for convenience, lenses are used in holographic setups. Light coming from an object has both amplitude and phase information. Ordinary photography uses only the amplitude information. It loses the phase information. Holography preserves both amplitude and phase information, so that the light in a holographic image is identical to the light from the original object, including its three-dimensional nature. One typical method of recording a hologram is shown in Figure 6-1. All the components are mounted on a stable table to reduce vibration to a very low level. The monochromatic, coherent light from the laser may be spatially filtered. The spatial filter consists of a microscope objective (around 10X) and a pinhole (around 25 m diameter) placed exactly at the focal point

Module 2-6: Basic Principles and Applications of Holography 3

of the microscope objective. The purpose of the spatial filter is to provide a very clean and even reference beam at the recording medium. After the spatial filter, the beam is split into two beams (the object beam and the reference beam) by a beam splitter. Usually the beam is split so that the intensities of the two beams are in a ratio in the range 10: 1 to 20:1, with the object beam being more intense. The object is expanded by a lens L1 and reflected by a mirror M1 so that it will illuminate the entire object. Light reflected from the object reaches the photographic film.

Figure 6-1 Typical arrangement of laser, optical components, object beam, reference beam, object, and film for making a hologram

The reference beam shown in Figure 6-1 is sent directly to the photographic film by a mirror (M2) and a lens (L2). The lens expands the beam so that it overlaps the same area on the film

4 Optics and Photonics Series, Course 2: Elements of Photonics

illuminated by the object beam. Thus, the film is exposed by both the object and reference beam. The two beams, superimposed at the film, produce a complicated interference pattern. The object beam, reflected from the object, contains information about the object, so the interference pattern also contains information about the object. The photographic film records the interference pattern. The film may be developed by conventional techniques in accordance with instructions from the film manufacturer. The procedure is similar to those ordinarily used in film development. It is important to dry the film very carefully in order to minimize shrinkage of the emulsion, since shrinkage can distort the recorded fringes. Also, to reduce shrinkage, one usually uses plate film (emulsion on a glass plate) rather than sheet film. The developed film preserves the information about the scene that is carried on the object beam. The resulting piece of developed film—the hologram—looks like a fogged negative. It bears no resemblance to an image of the object. The hologram made with the arrangement shown in Figure 6-1 is called a split-beam transmission hologram. Later we will discuss some other methods for making other types of holograms. In the making of a hologram of this type, several points are important. First, the table must be very stable. Any movement of the components in Figure 6-1, by as little as a fraction of the wavelength of the light, will blur the interference pattern and reduce the quality of the hologram. One usually tries to keep the exposure time as short as possible, to reduce the effect of any residual vibration. Second, because coherent light is required to make holograms, the lengths of the paths traversed by the object beam and the reference beam from the laser to the film should be nearly equal. This is necessary to ensure that the two beams are still coherent after they have reached the film. The coherence length of the laser light is approximately equal to the length of the laser in many cases, so the two paths should be equal to each other to within the length of the laser. Also, the angle between the object and reference beams, denoted  in Figure 6-1, should be kept small. This is necessary to avoid having to use very high-resolution film, which usually has lower sensitivity and requires longer exposures. Once the hologram has been developed, it is ready to be reconstructed, that is, to be viewed. A hologram made with the arrangement shown in Figure 6-1 may be reconstructed by the method shown in Figure 6-2. In the figure, the dotted lines represent parts of the original equipment that are not required for reconstruction. The hologram is illuminated by the reference beam alone. One may remove the object, or block the object beam, or move the hologram to a different location and illuminate it with a laser beam directed at the same angle  to the hologram as was the original reference beam. This beam is referred to as the reconstructing beam.

Module 2-6: Basic Principles and Applications of Holography 5

Figure 6-2 Reconstruction of the holographic image

When the hologram is illuminated in this way, light is diffracted by the fringes in the hologram to form two images of the original object. The first image, viewed by an observer looking through the “back” of the hologram in the direction indicated in Figure 6-2, will appear at the same position as the original object and will appear exactly like the object, just as if the object were still present. The observer looks through the hologram—as described—to view the image. The image will be three-dimensional. That is, as the observer’s head moves, the observer will see part way around the object. This image is called a virtual image, because it requires a lens (the lens of the observer’s eye) to form it. The second image, called the real image, is formed by light diffracted by the interference pattern in a different direction. The relation of the two images and the angles at which they may

6 Optics and Photonics Series, Course 2: Elements of Photonics

be viewed is shown in Figure 6-3, which is kept in the same orientation as Figures 6-1 and 6-2 in order to make it easy to relate the reconstruction of the images to the geometry used in the original recording of the hologram.

Figure 6-3 Formation of the virtual and real images in a hologram during the reconstruction process

The real image is reversed in three-dimensional orientation from the virtual image. As one tries to look around one side of it, one sees part way around the opposite side. It is called a real image because it does not need a lens to form it. It may be projected directly onto a screen and the projection can be viewed by looking at the screen. In addition to the light diffracted into these two images by the hologram, some light is transmitted through the hologram. This undiffracted light continues to travel in the same direction as the reconstructing beam. This light produces a central spot between the two images. It is of no great interest, simply representing a loss of some of the light that might have gone into the two images. Detailed mathematical theory exists to explain how both the real and virtual images are formed by light from the reconstructing beam which interacts with the interference pattern stored in the hologram. Such theory is beyond the scope of this module. We simply note that the holographic process of recording and reconstructing a scene—as described above—does yield a three- dimensional bright image of the scene.

Module 2-6: Basic Principles and Applications of Holography 7

Types of Holograms Holograms may be recorded in a variety of different geometries, which yield holograms with differing characteristics. The first distinction we make is between thick and thin holograms. In the earlier discussion of the split-beam transmission hologram, it was implicitly assumed that the thickness of the photographic film was relatively thin. Such a recording medium will record the contours of the interference pattern in its plane. But if the film is relatively thick, the contours of the interference pattern may be recorded throughout the volume of the film. Figure 6-4 illustrates the difference between recording with thin and thick media. In a thin hologram, one would encounter a single fringe as one passes through the film in a direction perpendicular to the surface. In a thick hologram, there will be a number of fringes throughout the volume of the material, as one goes through the film.

Figure 6-4 Illustration of thin and thick holographic recording media

The difference between thick and thin holograms is not an absolute distinction based on the physical thickness of the film. A film with a specified thickness could form either a thick or thin hologram, depending on the manner in which the hologram is recorded. The properties and applications of thick and thin holograms may be different. Thick holograms are useful for increasing the amount of light that goes into the image, and thus for making brighter holographic images, as we will see in the next section. Also in the earlier discussion of the split-beam transmission hologram, it was implicitly assumed that the recording medium (photographic film) recorded the interference fringes by varying the transmission of the developed film, just as conventional photography does. A hologram formed in this way operates by absorption of some of the reconstructing light. Such holograms are called absorption holograms or amplitude holograms. The most common amplitude holograms are made using photographic film, which contains silver halide grains. These grains are converted to free silver metal during the exposure and development processes. The density of the free silver is greatest where the light intensity was the highest. Because the free silver absorbs light, the interference fringe pattern is stored in the film as a pattern of varying absorption.

8 Optics and Photonics Series, Course 2: Elements of Photonics

In addition to amplitude holograms, there are holograms in which the interference pattern is stored as a pattern of varying index of refraction. These holograms are called phase holograms. The pattern of variable index of refraction is equivalent to a pattern of varying optical path length as light passes through the material. Because there is no absorption of the reconstructing light, phase holograms may be brighter than amplitude holograms. There are several methods used for producing phase holograms. One method involves recording an amplitude hologram and then, through chemical means, bleaching the film and removing the silver grains that cause absorption. In this process the index of refraction is changed in regions where the silver density was highest, so the interference pattern is now stored in the hologram as a pattern of varying index of refraction. Other materials can produce a phase hologram directly without the need of a bleaching step. One popular material is dichromated gelatin, in which a gelatin-based material is impregnated with ammonium dichromate. When the dichromated gelatin is illuminated, the light interacts with the ammonium dichromate and produces crosslinking of gelatin molecules with those of the dichromate compound. This interaction produces variations in the index of refraction. A disadvantage of dichromated gelatin is that it is sensitive only to light with green or shorter wavelengths, and thus is not suitable for use with the popular HeNe laser. A third distinction is that between transmission and reflection holograms. We have already described one type of transmission hologram. The object and reference beams were incident from the same side of the film and the fringes were formed essentially perpendicular to the plane of the film. The observer views the hologram from the “back” by light transmitted through the film. In a reflection hologram the object and reference beams are incident from opposite sides of the film and the fringes are formed essentially parallel to the surface of the film. One would encounter a number of fringes as one passes through the film, so that a reflection hologram is necessarily a thick hologram. During reconstruction, the illuminating beam is incident from the same side of the film as the observer, who views the image in reflection from the hologram. The orientations for recording and for reconstructing such a reflection hologram are shown in Figure 6-5.

Figure 6-5 Recording and reconstructing a reflection hologram

Module 2-6: Basic Principles and Applications of Holography 9

Polychromatic or white light may be used in the reconstruction of reflection holograms, which are often called white-light holograms. Different spectral components will be reflected at different angles, so the apparent color of the image changes as the observer’s viewing angle changes. These holograms do not require a laser for viewing. Many holograms—such as artistic holograms, display holograms, or the holograms on credit cards—are reflection holograms that may be viewed with white light.

Example 1: Reflection holograms Given: A reflection hologram is made with red light, but when illuminated with white light, it appears yellow, or even green. Find: The reason for this behavior Solution The emulsion shrinks when the hologram is developed and dried. This causes the spacing between the interference fringes to decrease. Thus, a shorter wavelength—such as green—is reconstructed. Placing the hologram on top of a cup of hot coffee will cause the emulsion to swell and the color of the image can be turned back to red, at least temporarily.

We now summarize the relation of these different types of holograms in Table 6-1. First, there are thin and thick holograms, depending on the relation of the thickness of the recording medium and the fringe spacing. Thin holograms are transmission holograms, while thick holograms may be either transmission or reflection holograms. Finally, each of these three types may be either amplitude or phase holograms.

Table 6-1. Types of Holograms

We will not go into the details of describing the arrangement for recording each of the types of hologram. We have already described how a thin transmission hologram may be recorded in the last section. Such a hologram could be either an amplitude or phase hologram, depending on the

10 Optics and Photonics Series, Course 2: Elements of Photonics

recording material. We will describe the arrangements for some of the other basic types of holograms. First we describe the apparatus for making a split-beam reflection hologram. This is shown in Figure 6-6. The lens L1 is a microscope objective, perhaps about 10X, with focal length f. The pinhole P is around 25 m in diameter and is located at the focal point of the microscope objective. Together these two components form a spatial filter. The beamsplitter S splits the beam into object and reference beams. The mirror M1 is a front-surface high-reflectance mirror The object is denoted O and the film is denoted F. The object beam and the reference beam illuminate the film from different sides, a requirement for reflection holograms. The hologram may be either an amplitude hologram or a phase hologram, depending on what is used for the film F.

Figure 6-6 Split-beam reflection hologram

Now we turn to the single-beam reflection hologram, perhaps the simplest way to make a hologram. This arrangement is illustrated in Figure 6-7, using a diode laser as a source.

Figure 6-7 Single-beam reflection hologram

The object is placed behind the film, with the emulsion side facing the object. The reference beam passes through the film from one side and illuminates the object on the other side. Then it is reflected back to the film as the object beam.

Module 2-6: Basic Principles and Applications of Holography 11

An interesting type of hologram frequently used for displays is the so-called 360 hologram. This hologram may be viewed from any direction, which is why it is called a 360 hologram. The apparatus for recording a 360 hologram is shown in Figure 6-8. The lens L1 and pinhole P again form a spatial filter. The flat, front-surface mirror M1 directs the beam to the diverging lens L2, which spreads the beam so that it strikes both the object O and the film F, which is wrapped around the object in the form of a cylinder. This arrangement may be regarded as a variety of a single beam transmission hologram. The curved mirror M2 redirects some of the light back onto the object for better illumination. When the film is developed and replaced in its original position, an observer can walk around the cylinder of film and view the reconstruction from any direction.

Figure 6-8 Recording a 360 hologram

Efficiency of Holograms An important parameter distinguishing various types of holograms is their efficiency. The efficiency of a hologram is the ratio of the amount of light diffracted into the image to the amount of light in the reconstructing beam. Obviously one desires holograms with high efficiency in order to obtain a bright image. A complete discussion of the theory of hologram efficiency is beyond the scope of this module. We simply present the maximum theoretically possible values of efficiency for the different types of holograms described in the last section. These values are given in Table 6-2.

Table 6-2. Maximum possible efficiency for various types of holograms Absorption holograms Phase holograms Thin Thick Thick Thin Thick Thick transmission transmission reflection transmission transmission reflection 6.25% 3.7% 7.2% 33.9% 100% 100%

The values given in this table are the maximum possible values for efficiency predicted by the theory of holograms. In practice, if the techniques used in making the holograms are less than perfect, the actual values of the efficiency may be lower. With good practice, the theoretical values in the table have been approached for each type of hologram. For thick transmission

12 Optics and Photonics Series, Course 2: Elements of Photonics

phase holograms, recorded in dichromated gelatin, an experimental efficiency greater than 90% has been obtained. We note that the maximum values of efficiency for amplitude holograms is lower than the values for phase holograms. Amplitude holograms absorb part of the reconstructing light. This absorption constitutes a loss of some of the light. The maximum efficiency of amplitude holograms is fairly low, around 7% at most. Phase holograms are nearly transparent and utilize the reconstructing light more effectively. For thick phase holograms, most of the reconstructing light can go into the image. If the hologram efficiency is high, the reconstructed image is brighter and the requirements on the power in the reconstructing beam are reduced. As a result, holographers often use thick- phase holograms for applications, although they are somewhat more difficult to produce.

Practical Aspects of Holography Photographic film has been used most often for holographic recording. Table 6-3 lists some of the most common types of film used for holography along with some of their properties. The table is representative of what is commonly used, but is not a complete list. Also new types are continually being developed.

Table 6-3. Photographic emulsions for holography Sensitivity (erg/cm2) Emulsion at wavelength (nm) Resolution type (lines/mm) 514 nm 532 nm 633 nm 649F 800 1000 900 <2000 8E56 350 300 — 5000 8E75 120 60 20 5000 10E56 20 20 — 3000 10E75 120 60 20 3000 14C75 — — 5–10 1500

The table gives the relative sensitivity in units of energy per cm2 that is required to produce an optical density of unity for three wavelengths. The optical density D is defined by the equation:

D = –log10 T (6-1) where T is the transmission of the developed film. Thus an optical density D of unity means that the transmission T is 0.1. Over a broad range, the response of the film depends only on the total energy per unit area delivered to the film. It does not matter whether the energy arrives quickly at high power per unit area or slowly at low power per unit area. This phenomenon is called reciprocity.

Module 2-6: Basic Principles and Applications of Holography 13

Example 2: Optical density Given: A piece of developed photographic film with a transmission of 0.01 Find: The optical density Solution

According to Equation 6-1, the optical density is D = –log10 0.01 = –(–2) = 2.

It is desired that the energy per unit area required for holographic recording be as low as possible, so that the holographic recording may be completed quickly. The table also presents the resolution of the emulsions that is expressed in terms of the number of lines per millimeter that the film can record. A high value, in excess of one thousand, is required to record the closely-spaced interference fringes. The film 649F was used for the first work on holography, because in the early 1960s, it had the best sensitivity with sufficiently high resolution for holography. Much of the early work on holography refers to the use of 649F film. Today, films with both better sensitivity and better resolution are available, and 649F is no longer widely used. There is some tradeoff between sensitivity and resolution. Some films are extremely sensitive but are relatively coarse-grained and cannot record well the details of the interference fringes. As Table 6-3 shows, the films with the best sensitivity are not necessarily those with the highest resolution. The user must make the tradeoff between sensitivity and resolution for the specific application. In order to determine the proper exposure for a specific type of film, one should use the so- called Hurter-Driffield curve (or H-D curve) for the film. The H-D curve relates the optical density produced in the developed film to the energy per unit area incident on the film during recording. The energy per unit area is equal to the power per unit area multiplied by the exposure time. The H-D curve plots optical density of the developed film. The H-D curve for a given film is supplied by the manufacturer. A generalized example is shown in Figure 6-9 (without actual numbers, since they vary from one film to another). The H- D film depicted in Figure 6-9 has a region for low exposure where the film does not yield optical densities much above zero. Then there is a linear region where the optical density increases linearly with the logarithm of the exposure. Toward the right, the film Figure 6-9 Hurter-Driffield curve for a typical saturates and there is no further holographic film increase in optical density. One generally desires to work near the middle of the linear portion of the H-D curve. One must first determine the average power per unit area that falls on the film in the experimental

14 Optics and Photonics Series, Course 2: Elements of Photonics

arrangement. This is accomplished with the help of a power meter placed directly in front of the film. Then one determines the exposure at the center of the linear region from the H-D curve for the particular film being used. Next one divides the exposure (energy per unit area) by the power per unit area to obtain an estimate for the exposure time. This will provide a good starting point for making the hologram. But only trial and error will finally determine the optimum exposure time. Operating near the center of the linear region of the H-D curve will optimize the recording of the light. Photographic film is easily available and has a well-developed technology. Thus it is widely used for holography. But for special purposes, a variety of other materials have been used to record holograms. We have already mentioned dichromated gelatin, which is useful for phase holograms. Other materials have been used, at least experimentally. These include photopolymers, magnetooptic materials, and thermoplastic materials. In particular, thermoplastic materials allow holograms to be erased and re-recorded. One very important factor in the recording of holograms is the elimination of any relative motion between the components in the holographic apparatus. An interference fringe can change from bright to dark if the distance between any of the components changes by one-half the wavelength of the recording light. Thus relative motion of the components during recording will substantially degrade the quality of the hologram. Usually one tries to keep the change in distance to less than one-tenth of the wavelength of the light used for recording. This means that the holographer uses stable mounts and a vibrationless platform. Frequently, one uses metal honeycomb tables or granite slabs, mounted on pneumatic vibration isolation platforms. Such tables are commercially available from many manufacturers. They provide high rigidity. That is, if they move, they move as a whole and do not introduce changes in the relative positions of the components mounted on them. In addition, one usually tries to keep the exposure as short as possible, to reduce the effect of any motion or vibration. Thus one wants recording media that have a high sensitivity. Stable platforms, while effective in reducing vibration, tend to be relatively expensive. Workers with limited budgets have achieved good results with sand boxes. The platform may consist of a heavy sand box set on partially inflated automobile inner tubes. The lenses, mirrors, and other components are mounted on pieces of plastic pipe that are set into the sand. Sand is a very poor transmitter of vibration and is very inexpensive. Many workers have achieved good results with such simple vibration-reducing equipment. In addition, air currents and temperature variations will change the optical path length between the holographic components and will degrade the quality of the recorded hologram just as does physical motion. Often holograms are made in controlled environments with temperature stabilization and shields to reduce air motion. Holograms may be made of moving objects by using very short laser pulses. If one uses a nanosecond duration pulse, the position of even rapidly moving objects does not change much during the holographic exposure. In order to make holograms with high quality, the laser light must have a high degree of spatial coherence. It should be spatially coherent over the entire scene that is to be recorded. Thus, one generally uses a laser operating in the TEM00 mode.

Module 2-6: Basic Principles and Applications of Holography 15

The laser should also have high temporal coherence. The coherence length should be greater than the depth of the scene being recorded. This is equivalent to saying the laser should be highly monochromatic. The frequency spectrum of the laser is broadened by the presence of multiple modes, so that it is desirable to use a laser operating in a single longitudinal mode. Such lasers tend to emit less power than those operating in multiple longitudinal modes. Because higher power is desirable to reduce exposure time and the effects of any vibrations that may be present, one has to make a tradeoff between laser power and good coherence. We have already noted that in dual-beam holographic configurations, the path lengths for the reference beam and the object beam must be the same to an amount less than the coherence length of the laser. In many cases this difference is approximately equal to the length of the laser cavity. Careful adherence to the factors discussed above is necessary to produce high-quality holograms.

Applications of Holography Holography is used in a practical sense for a wide variety of different applications. We shall discuss a few of them in the following pages.

Holographic interferometry Holographic interferometry is perhaps the most widely used application of holography in industry. Holographic interferometry is used in nondestructive testing applications for manufactured parts, including defect detection, strain analysis and vibration analysis. A hologram stores information about the image of some object. At any time, the hologram may be examined and the image of the object may be obtained and used. In holographic interferometry, the image is used to interfere with a reference wavefront. In the process of interference, fringes will be formed. If the two wavefronts represent the object at different times, the fringe pattern can yield information about changes in the object that have occurred between the times the two images were produced. These measurements can reveal very small changes taking place in the object. We will describe three methods of using holographic interferometry for nondestructive testing: real time holographic interferometry, double-exposure holographic interferometry, and time- average holographic interferometry. Real-time holographic interferometry—In real-time holographic interferometry, a hologram is made of the object to be tested. The hologram is developed and then placed back in the position it occupied when it was made. Or with special equipment, the hologram may be developed in place, without moving it. Then it is reilluminated with the reference beam that was used to make the original hologram. This produces an image of the original object. If the original object has remained in place and is illuminated with laser light, there will be two views of the object, one coming from the real object and one from its holographic image. The light waves coming from the object and from its holographic image will interfere. If the dimensions of the object have changed, for example, because of stresses applied to the object, the light waves forming the two images will travel different distances. One interference fringe

16 Optics and Photonics Series, Course 2: Elements of Photonics

will appear wherever the path difference traversed by the two waves changes by one wavelength of the light. These fringes represent a contour map of changes in the object. If the stresses on the object change so that the object continues to deform, the fringes will move in real time as the object changes. The change of the fringe pattern in accordance with the deformation of the object can be followed as it occurs. Hence the name real-time holographic interferometry has been adopted. Sometimes it is also called live-fringe holographic interferometry There are some difficulties involved with obtaining exact quantitative information concerning the deformation of the object. One is the difficulty of replacing the developed hologram in exactly the same position it occupied during exposure. This difficulty may be reduced by having the photographic film in a specially designed holder to which the chemicals used for development can be added. This allows the hologram to be developed in place. Another problem is that the photographic emulsion always undergoes some distortion during development. Careful control of the processing can reduce this distortion, but some distortion will always remain. This means that there is always a background shift of a small number of fringes across the object. A third serious problem is that of interpretation of the fringe pattern to obtain quantitative measurements of the distortion. Without going into the details, we simply state that in the general case, it is very difficult to derive the exact deformation of the object from measurements on the fringe pattern. These difficulties usually keep one from extracting absolute quantitative information about changes that have occurred in the object. But real-time holographic interferometry does still provide useful information about strain in objects as they are deformed under pressure or by other means. If there is a weak area in the surface it will deform more and the interference fringes will crowd closer together. This allows the observer to identify the weak spots. In this way, scientists, engineers, and technicians use real-time holographic interferometry to detect defects or weak areas in manufactured parts. Double-exposure holographic interferometry—A second type of holographic interferometry is called double-exposure holographic interferometry. In this type, the hologram is exposed twice, at different times. Between the two exposures, the object is changed or deformed, for example, by stressing it or heating it. As a result of the deformation, the object will be slightly different in the two exposures. Interference fringes are formed when the hologram is illuminated because of interference between the two slightly different views of the object. The double-exposure method compares two different conditions of the object, just as the single-exposure method. But in contrast to the real-time method, the two conditions are both stored in the film. The double-exposure method avoids the problems of realignment of the hologram. It is also sometimes called frozen-fringe holographic interferometry. In the double exposure method, when the two exposures of the hologram are completed, the original object and the optical components used to illuminate the object can be removed. The wavefront that is characteristic of the object in its original condition is stored in the hologram, along with the wavefront representing the altered state of the object. Therefore, the double- exposure method of holographic interferometry is easier to carry out than the real-time method. The hologram may be viewed in the same way as an ordinary display hologram, without having to reposition it exactly. Shrinkage of the emulsion is the same for both exposures, so distortion

Module 2-6: Basic Principles and Applications of Holography 17

due to shrinkage is not important. However, the same difficulties in interpreting the exact quantitative deformation of the surface from measurements of the fringe structure still apply. The double-exposure method does compare the original object with only one altered state of the object. Thus the double-exposure method provides less information than the real-time method. One cannot observe the fringes form and spread across the surface of the object. But for many applications one does not need continuous examination of the surface deformation. The relative surface deformation over a fixed interval may be all that is required. Time-average holographic interferometry—Time-average holographic interferometry is used to study vibrating surfaces. The object is moving continuously during the exposure of the hologram. The resulting hologram is the limiting case of a very large number of exposures over many positions of the vibrating surface. The time-average hologram is similar to a double- exposure hologram in which the two exposures represent the positions where the surface spends the most time, that is, at positions where the speed of vibration is low. These two positions are the positions of extreme displacement of the surface. The usual restriction in holography of no motion during the exposure does not apply. The hologram is simply made as if the subject were motionless, when in fact it is motionless only at the extreme positions of the vibration. Time-average holographic interferometry is simple to employ. One makes a single hologram while the surface is vibrating. The vibrational amplitudes of diffusely-reflecting surfaces may be measured with high precision. After exposure, the hologram is developed and re-illuminated. The pattern of the interference fringes provides information on the relative vibrational amplitude as a function of position on the surface. Such measurements can be very useful for determining the modes of vibration of complex structures, which would be difficult to measure by other techniques. In contrast to the other types of holographic interferometry, time-average holographic interferometry is useful in one case, that is, where vibration analysis is desired.

The holocamera So far in this module we have emphasized the use of photographic film for the recording of holograms. All of the original uses of holographic interferometry for nondestructive testing were developed using photographic film. But now, in industry, a device called the holocamera, which does not involve photographic film, is often used. A holocamera uses materials like thermoplastics to record the images. The development is by electrical and thermal means, without the need for wet chemical processing. The development can be accomplished in a few seconds without having to move or reposition the recording. Without going into all the details of its operation, we simply note that in a holocamera, the development is done with heat and electrical charge, somewhat similar to the recording done in a xerographic process. Thus, the hologram can be developed and viewed rapidly. This makes holographic interferometry compatible with rapid inspection and analysis in an industrial environment. When one has finished with the hologram, it can be erased by heating. The material can then be reused to make another hologram. A sample of the thermoplastic recording material can be erased and reused hundreds of times.

18 Optics and Photonics Series, Course 2: Elements of Photonics

The holocamera, based on the thermoplastic recording medium, is a practical tool suitable for use in an industrial setting. The holocamera removes drawbacks and limitations associated with the wet chemical processing of photographic film. Commercial models of holocameras are available. The availability of the holocamera substantially increases the usefulness of holographic nondestructive testing for industry and manufacturing.

Use of holographic interferometry for defect detection Industrial applications of holographic interferometry emphasize measurement of deformations and identification of defects in manufactured parts. Double-exposure holographic interferometry is used more that the other two techniques because of the simplicity of its application for locating defects. Composite materials and structures are subject to defects arising from separation of the laminated layers of the structure. Holographic interferometry has become common for testing such materials. Composite materials can be stressed by heating. A hologram of the material is recorded at ambient temperature and then replaced. One then views the interference pattern in real time as the original material is heated above ambient temperature. The interference fringes will be distorted as they move over voids, disbonded areas, or other defects. Many laminated or composite structures have been tested in this way, including honeycomb panels, graphite-epoxy jet engine fan blades, simulated uranium nuclear fuel elements, automotive clutch plates, and composite compressor blades. One early application occurred in the testing of tires. One holographic exposure is made with low air pressure in the tire and a second exposure with high air pressure. When one views the fringe pattern in the resulting double-exposure hologram, there may be regions where the fringes crowd relatively close together. These are areas where relatively large distortion of the tire occurred when the air pressure was increased. The areas with high fringe density are areas of lower strength than surrounding areas and may indicate defects in the tire. Holographic interferometry is used in the electronics industry to analyze distortion that arises because of heating when a circuit is turned on. The structure is deformed more in areas where excessive heating occurs. Closely-spaced fringes in a double-exposure hologram show where these areas are located. The relatively larger deformation can cause stress on contacts and component leads and possibly circuit failure.

Holographic Optical Elements Holographic optical elements (HOEs) are holograms that can transform the properties of a wavefront, just as would a conventional optical element. For example, they can focus a collimated beam of light in the same way that would a lens. Such a HOE could be substituted for a lens in an optical system and will focus light just as would the lens. To produce a holographic device that acts as a lens, one records the interference pattern produced by two spherical waves with different radii of curvature. This yields a so-called “sinusoidal zone plate” which has a circularly symmetrical pattern in which the transmission varies sinusoidally with radial distance from the center, and in which the local frequency increases linearly with distance from the center. Such a zone plate acts simultaneously as a

Module 2-6: Basic Principles and Applications of Holography 19

positive (converging) lens and a negative (diverging) lens. Similarly, one can fabricate HOEs that perform the functions of prisms, mirrors, gratings, etc. Because HOEs are very light in weight—much lighter than the components that they replace— they are often used in applications where weight is important. For example, NASA is planning to use rotating HOEs to scan laser beams on spaceborne platforms in lidar (light detection and ranging ) systems to monitor atmospheric profiles of wind, aerosols, clouds, temperature, and humidity. The HOE is much lighter and easier to spin than a conventional rotating mirror. HOEs are used in optical systems to correct aberrations. They have been used as scanning elements in supermarket scanners, which pass a laser beam across the bar code on a product. HOEs have also been used to project aircraft instrument readings in heads-up displays for pilots. The readings are projected—to be seen floating in space—while the pilot retains a clear view of the scene in front of the aircraft. High-resolution, holographically recorded gratings have been used in optical spectrometers to replace conventional gratings which are more expensive.

Embossed Holograms A familiar use of embossed holograms is in security applications, such as logos on credit cards. The original hologram is recorded in a photosensitive material called photoresist. When the hologram is developed, the hologram consists of grooves in the surface of the material. A thin layer of nickel is deposited onto the hologram and then is peeled off. This yields a replica of the grooves in a metallic element which is called a shim. The shim is pressed onto a material like mylar by a roller under conditions of high pressure and temperature. This embosses the hologram onto the mylar, which then is attached to the credit card. This method allows mass production of very many embossed holograms simply and inexpensively. Embossed holograms are also used in anti-counterfeiting applications. The embossing process allows fabrication of large numbers of holograms inexpensively. Such holograms are incorporated into the packaging of a product in order to confirm that it is genuine. Anti- counterfeiting holograms have been used on many high-value products, including perfumes, automotive parts, and videotapes.

Holographic Data Storage Storage materials like CDs and DVDs store information on the surface of the storage medium as individual bits. One could record a hologram of a large pattern of bits (ones and zeros) that represent data. The holographic data storage system will store an entire page of information at once throughout the volume of the thick storage material. The hologram is formed by intersecting two beams within the storage material. The first beam contains a pattern of bits to be stored and the second is a relatively uniform reference beam. The interference pattern produces physical or chemical changes in the storage medium. The hologram will store the data until it is needed. Then the hologram can be reconstructed by illuminating the hologram with the reference beam. The image is projected onto an array of photodetectors to read out the information. Thus a hologram could serve as the recording medium for a data storage system.

20 Optics and Photonics Series, Course 2: Elements of Photonics

Because a volume hologram can store information throughout its entire volume, the storage capacity can be very large. Also because the readout of all the bits in the hologram is accomplished at once, instead of one bit at a time, a holographic data storage system could provide a very high readout rate.

Example 3: Storage capacity Given: A conventional 1-mm-thick compact disk that can store 1 gigabyte of information in the form of digital data, with all the data stored in the top 1-micrometer-thick layer Find: The amount of information that could be stored if the compact disk could record holographically throughout its entire volume at the same information density Solution The compact disk is 1 mm (1000 micrometers) thick, and it uses only the top 1 micrometer to store 1 gigabyte. The remaining 999 micrometers in a holographic data storage system could store an additional 999 gigabytes. Thus the total amount of stored information would be 1000 gigabytes or 1 terabyte.

Holographic data storage has been studied since the 1960s. But there were significant problems with both components and materials. One needs a high speed “spatial light modulator” to compose a large pattern of ones and zeros rapidly for the object beam. This component is now available in the form of liquid crystal spatial light modulators. Also one needs a suitable storage material. There is extensive work proceeding worldwide, in the United States, Europe, and Japan, to develop such materials. Photopolymers now seem to be the leading candidate. When these materials are irradiated with light of a specified wavelength, a chemical reaction occurs that can either make or break chemical bonds. This can change the index of refraction of the material and thus record a phase hologram. Despite considerable work, no holographic data recording systems are commercially available at the time of this writing (late 2004). However, companies working in the area are claiming that products will be introduced within one year. The prototype developments for such products seem to be aiming at a storage capacity of 1012 bits on a disk the size of a CD, with a readout rate around 109 bits/second. The most likely laser to be used seems to be the frequency-doubled Nd:YAG laser operating at 532 nm. Holographic data storage offers the promise of very large memory capacity in a reasonable size, with extremely fast data retrieval.

Displays Holography provides many attractive opportunities for display. The attractive and striking three- dimensional nature of the holographic image has been used in many applications, ranging from jewelry to advertising, magazine covers, and covers for videotape boxes. Holography has also evolved as an art form, with many beautiful examples of striking artistry. One type of hologram that has been developed specifically for display applications is the so- called rainbow hologram. A rainbow hologram may be produced by the method shown in

Module 2-6: Basic Principles and Applications of Holography 21

Figure 6-10. A rainbow hologram produces bright sharp images when the hologram is illuminated with white light.

Figure 6-10 Apparatus for recording a rainbow hologram

The image from a rainbow hologram illuminated by white light is spectrally dispersed in a vertical direction. When the observer’s eye is located at a particular height above the image, the observer will see a clear monochromatic image of the object. When the eye moves vertically, the color of the scene changes. It can change through all the colors of the rainbow, hence the name. At a given vertical height, if the eye is moved horizontally, the color does not change, but the observer will see parallax in the image. Rainbow holograms can be especially bright, because all the frequencies of the white light source are being used to form the image. Another hologram type that has been used for displays and that has become familiar is the holographic stereogram. The hologram is formed into a cylinder and illuminated with white light. As the observer looks around the cylinder, some range of object motion may be seen. (Alternatively, the observer can stay in one spot as the cylinder is rotated about its central axis.) One method of producing the holographic stereogram is to record a series of ordinary two- dimensional photographs of the subject as the subject performs some action. One well-known holographic stereogram shows a girl blowing a kiss. A movie camera may be used for the series of photographs. The original recording may include large scenes illuminated by natural light. Then, transparencies of the series of photographs are projected onto a translucent screen. This is illustrated in Figure 6-11. This procedure records a series of holograms as narrow vertical strips. More than 1000 such strip holograms may be recorded. When the finished set of vertical strip holograms is illuminated, each strip hologram produces a virtual image. The individual holograms are two dimensional, just like the movie film. But each of the observer’s eyes sees slightly different strip holograms, so the observer sees a view that seems to be three- dimensional.

22 Optics and Photonics Series, Course 2: Elements of Photonics

Figure 6-11 Top view of arrangement for recording a holographic stereogram from a series of two- dimensional transparencies

The optics are designed to produce images in the same manner as with rainbow holograms, so that the observer sees a three-dimensional monochromatic image if the set of holograms is viewed in white light. When the set of strip holograms is formed into a cylinder and rotated, the observer sees the subject performing the actions originally photographed. The result is a convincing impression of three-dimensional motion within the cylinder.

EXERCISES

1. Draw and label an experimental arrangement that can be used to produce a transmission hologram of a three-dimensional object. The drawing must include a source, object, necessary optical components, object and reference beams, and holographic film. 2. Correctly state the relation between the length of the individual paths of the object and reference beams for the arrangement in Exercise 1 above. 3. Draw and label an experimental arrangement that can be used to reconstruct the three- dimensional image stored in a hologram. The drawing must include a source, reconstructing beam, hologram, and position of the observer. The drawing must show the proper light beams that lead to the real and virtual images of the original object. 4. In terms similar to the text, describe the differences between thick and thin holograms. 5. In terms similar to the text, describe the differences between transmission and reflectance holograms and the differences between the orientations of the beams that produce them. 6. In terms similar to the text, describe the differences between amplitude and phase holograms. 7. A piece of photographic film has transmission T of 0.001. What is its optical density D? 8. In terms similar to the text, name and describe three methods of holographic interferometry used for nondestructive testing. 9. In terms similar to the text, describe how a rainbow hologram is made and viewed. 10. In terms similar to the text, describe how a holographic stereogram is made and viewed.

Module 2-6: Basic Principles and Applications of Holography 23

LABORATORY

Materials Vibrationless table with magnetic fasteners or optical bench and components HeNe laser (50 milliwatt) Short focal length lenses (e.g., 10X or 50X microscope objectives) Beam expander Beam splitter (a good optical flat—antireflection-coated on one side—would be appropriate) Spatial filter Sodium vapor lamp Assorted pinhole diaphragms Meter stick Power density meter Appropriate photographic film (e.g., Agfa-Gevaert Scientia 14C70 or 10E75) Film holder Object for hologram (The object should be reflective and have good contrast with the background. Also it should not be very shiny. Chess pieces may be a good choice.) Photographic dark room and necessary chemicals Sheet of opaque paper

Procedures 1. In a room that can be well darkened, set up the experimental apparatus, as shown in Figure 6-12. The laser should have an output of 50 milliwatts or so to provide adequate illumination of the object in a short exposure time. Short exposures are desired because unwanted vibrations are more likely during long exposures.

Figure 6-12 Arrangement of laser, film, and optical components on a vibrationless table to make a split-beam transmission hologram

24 Optics and Photonics Series, Course 2: Elements of Photonics

2. The beamsplitter BS may be a good optical flat—antireflection—coated on one side. The incident beam may be divided approximately 20 to 1, say 96% in the object beam and 4% in the reference beam. It may be desirable to add a spatial filter between the laser and the beamsplitter.

3. The lenses L1 and L2 may be microscope objectives with short focal lengths. They should be able to focus and then spread the reference and object beams enough to cover the film and object scene. The mirrors M1 and M2 can be any good flat mirrors. 4. Measure the cavity length of the laser with the meter stick. Then make sure that the difference in length between the paths (BS to L1 to M1 to object to film, and BS to M2 to L2 to film) is less than the length l of the laser cavity. Also keep the angle  between the object and reference beams small, no more than 15 or 20 degrees. 5. Darken the room lights. Check the alignment of the system with the HeNe laser operated at low power. The film holder should be in place and should be light-tight. Be sure that the object is well illuminated and that the object and reference beams cover the same area of most of the film holder. Turn the laser power up to 50 milliwatts. Use the power density meter to determine the amount of light in the reference and object beams reaching the film holder by placing the power density meter in the plane of the film holder. Move the meter over the area of the film to be exposed and record the average value. 6. On the H-D curve for the film being used, locate the central part of the linear region. Calculate the correct exposure time for the average power density reading determined in Step 5 to give exposure in the center of the linear portion of the H-D curve. 7. Expose the film for the amount of time determined in Step 6. 8. Develop the film according to the manufacturer’s instructions. Make sure the film is carefully dried.

Viewing the hologram 1. In order to view the images in the hologram that you have made, arrange the laser, lens, or beam expander and hologram along a common axis, as shown in Figure 6-13. Make sure that the hologram is more or less fully illuminated. Look toward the hologram from the side opposite the laser (position A) at angle . This angle should be the same angle that the reference beam made with the normal to the plane of the film in the original recording of the hologram. Examine the virtual image, noting its three-dimensional characteristics. Note that the image has “depth.” That is, you can distinguish front objects from back objects. Move your head back and forth, trying to see around objects in the foreground. Draw a sketch of the virtual image and note especially the orientation of the scene. You could take Polaroid or other photographs of the image from different positions for a permanent record to be studied in detail.

Module 2-6: Basic Principles and Applications of Holography 25

Figure 6-13 Reconstructing the images (virtual and real) formed by the split-beam transmission hologram

2. Locate the real image by looking toward the hologram from position B. Look on the same side of the hologram that you are on and about as far in front of the hologram as the virtual image was formed behind the hologram. You may have to move farther away from the hologram to see it. Look along the specified direction and focus your eyes on a region in front of the hologram to see a real three-dimensional image of the scene. Study the scene carefully and make a sketch of it. Note especially the orientation of the scene. Compare it with the virtual image sketched earlier. Do the top-bottom and left-right relationships appear the same in the real image as in the virtual image? Try to look around the edges of the image and note how the scene appears to rotate.

Properties of a hologram 1. With the same arrangements as before in Figure 6-13 get a good clear view of the virtual image that you examined before. Observe again the brightness and definition of the image. Carefully sketch in detail a particularly clear part of the scene (for example, one of the brighter chessmen in a scene of chess pieces). 2. Replace the HeNe laser with the sodium vapor lamp of wavelength  = 589 nm. Let the lamp come to full intensity. Place one of an assortment of pinholes next to the lamp. Remove the beam expander and move the hologram toward the pinhole until a sizable portion of the hologram is illuminated. Find the virtual image and examine it. Carefully examine the same part of the image as you sketched in the previous step. How does it compare in definition and brightness? What happens to the definition of the image when the smallest pinhole is replaced by the largest pinhole? Because light coming from a small pinhole has larger spatial coherence than light coming from a large pinhole and because a coherent illuminating beam (like a laser) gives a better image, you should see the image sharpen up a bit as the pinhole size becomes smaller. 3. The hologram was made with a HeNe laser with wavelength  = 632.8 nm. What do you observe to be the effect of illuminating the hologram with sodium light of wavelength  = 589 nm? Can you tell whether the image has grown or shrunk in size? Mathematical theory, which explains the mechanics of the reconstruction process, indicates that the magnification of the image should be equal to the reconstructing wavelength divided by

26 Optics and Photonics Series, Course 2: Elements of Photonics

the recording wavelength. Thus when you use sodium light to reconstruct an image of a hologram originally made with HeNe laser light, the image should undergo a reduction in size equal to 589/632.8. 4. Carefully rotate the hologram about its vertical axis through 180 degrees, so that you are looking into the opposite side of the hologram. Again view the virtual image with the sodium light and different pinholes. How do the images through the “front” and “back” sides of the hologram compare? 5. Replace the sodium light with the HeNe laser. Reinsert the beam expander as before and adjust the entire setup to get the clearest virtual image. From a sheet of opaque paper (larger than the hologram) cut out a hole roughly the size of a penny. Place this mask against the hologram, on your side of the hologram, and look at the scene through the hole. Move the hole around over all portions of the illuminated hologram and continue to examine the scene. Is part of the scene destroyed or do you continue to see the entire scene? Is the scene as clear and bright as it was without the mask? Do you conclude from this that each small portion of the hologram contains all the information in the original scene? Do you see that this is possible because light from every point on the object in the recording process was scattered to every point on the hologram and, therefore, that the interference pattern formed by the object beam and the reference beam in each small portion of the hologram is essentially the same? Isn’t the loss in brightness and detail (resolution) seen in going from the entire hologram to a piece of it comparable to what you would see in a two-dimensional image formed by a lens if a large-diameter lens were replaced by a smaller-diameter lens? Explain.

REFERENCES

Collier, R. J., C. B. Burckhardt, and L. H. Lin. Optical Holography. New York: Academic Press, 1971. Jeong, T. H. Laser Holography: Experiments You Can Do. Lake Forest, Ill.: Thomas Alva Edison Foundation, 1987. Kincade, K. “Holographic Data Storage Prepares for the Real World,” Laser Focus World, October 2003, p. 68. Robillard, J., and H. J. Caulfield. Industrial Applications of Holography. New York: Oxford University Press, 1990. Saxby, G. Practical Holography. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1996. Schwemmer, G., et al. “NASA Lidar Uses HOES for Lightweight Scanning,” Laser Focus World, June 2002, p. 141. Unterseher, F., J. Hansen, and B. Schlesinger. Holography Handbook: Making Holograms the Easy Way. Berkeley, Cal.: Ross Books, 1982. Vest, C. M. Holographic Interferometry. New York: John Wiley & Sons, 1979.

Module 2-6: Basic Principles and Applications of Holography 27