
Book II Optics


Basic in 180 Days Book II - Editor: Ramon F. aeroramon.com

Contents

1 Day 1 ...... 1
  1.1 History of optics ...... 1
    1.1.1 Early history of optics ...... 1
    1.1.2 Optics and vision in the Islamic world ...... 3
    1.1.3 Optics in medieval Europe ...... 3
    1.1.4 Renaissance and early modern optics ...... 6
    1.1.5 Lenses and lensmaking ...... 7
    1.1.6 Quantum optics ...... 8
    1.1.7 See also ...... 8
    1.1.8 Notes ...... 8
    1.1.9 References ...... 10

2 Day 2 ...... 11
  2.1 ...... 11
    2.1.1 Explanation ...... 11
    2.1.2 Reflection ...... 11
    2.1.3 Refraction ...... 12
    2.1.4 Underlying mathematics ...... 16
    2.1.5 See also ...... 18
    2.1.6 References ...... 18
    2.1.7 Further reading ...... 19
    2.1.8 External links ...... 19

3 Day 3 ...... 20
  3.1 Optics ...... 20
    3.1.1 History ...... 20
    3.1.2 Classical optics ...... 26
    3.1.3 Modern optics ...... 38
    3.1.4 Applications ...... 39
    3.1.5 See also ...... 42
    3.1.6 References ...... 42
    3.1.7 External links ...... 46
  3.2 Optical instrument ...... 57
    3.2.1 Image enhancement ...... 57
    3.2.2 Analysis ...... 57
    3.2.3 Other optical devices ...... 57
    3.2.4 See also ...... 57
    3.2.5 References ...... 57
    3.2.6 External links ...... 58

4 Day 4 ...... 59
  4.1 (optics) ...... 59
    4.1.1 History ...... 60
    4.1.2 Construction of simple lenses ...... 60
    4.1.3 Imaging properties ...... 62
    4.1.4 Aberrations ...... 64
    4.1.5 Compound lenses ...... 66
    4.1.6 Other types ...... 67
    4.1.7 Uses ...... 67
    4.1.8 See also ...... 68
    4.1.9 References ...... 68
    4.1.10 Bibliography ...... 70
    4.1.11 External links ...... 70

5 Day 5 ...... 76
  5.1 (optics) ...... 76
    5.1.1 See also ...... 77
    5.1.2 References ...... 78
  5.2 Cardinal point (optics) ...... 78
    5.2.1 Explanation ...... 79
    5.2.2 Modeling optical systems as mathematical transformations ...... 80
    5.2.3 See also ...... 81
    5.2.4 Notes and references ...... 81
    5.2.5 External links ...... 81

6 Day 6 ...... 87
  6.1 Hyperfocal distance ...... 87
    6.1.1 Acceptable sharpness ...... 87
    6.1.2 Formulae ...... 87
    6.1.3 Example ...... 88
    6.1.4 Mathematical phenomenon ...... 90
    6.1.5 History ...... 90
    6.1.6 See also ...... 93
    6.1.7 References ...... 93
    6.1.8 External links ...... 93

7 Day 7 ...... 95
  7.1 ...... 95
    7.1.1 Calculating a camera's angle of view ...... 96
    7.1.2 Measuring a camera's field of view ...... 100
    7.1.3 Lens types and effects ...... 101
    7.1.4 Common lens angles of view ...... 103
    7.1.5 Sensor size effects ("crop factor") ...... 104
    7.1.6 Cinematography and video gaming ...... 104
    7.1.7 References and notes ...... 104
    7.1.8 See also ...... 105
    7.1.9 External links ...... 105
  7.2 Field of view ...... 107
    7.2.1 Humans and animals ...... 108
    7.2.2 Conversions ...... 109
    7.2.3 Machine vision ...... 109
    7.2.4 Remote sensing ...... 109
    7.2.5 Astronomy ...... 109
    7.2.6 Photography ...... 110
    7.2.7 Video games ...... 110
    7.2.8 See also ...... 110
    7.2.9 References ...... 110

8 Day 8 ...... 112
  8.1 Depth of field ...... 112
    8.1.1 criterion for depth of field ...... 112
    8.1.2 Factors affecting depth of field ...... 113
    8.1.3 DOF scales ...... 116
    8.1.4 Zone focusing ...... 117
    8.1.5 Hyperfocal distance ...... 117
    8.1.6 Limited DOF: selective focus ...... 118
    8.1.7 Near:far distribution ...... 118
    8.1.8 Optimal f-number ...... 118
    8.1.9 Other applications ...... 119
    8.1.10 DOF formulae ...... 119
    8.1.11 Derivation of the DOF formulae ...... 123
    8.1.12 See also ...... 133
    8.1.13 Notes ...... 134
    8.1.14 References ...... 135
    8.1.15 Further reading ...... 136
    8.1.16 External links ...... 136
  8.2 ...... 149
    8.2.1 approximation ...... 149
    8.2.2 General optical systems ...... 149
    8.2.3 In photography ...... 150
    8.2.4 See also ...... 151
    8.2.5 References ...... 152

9 Day 9 ...... 155
  9.1 Magnification ...... 155
    9.1.1 Examples of magnification ...... 156
    9.1.2 Magnification as a number (optical magnification) ...... 156
    9.1.3 Magnification and micron bar ...... 159
    9.1.4 See also ...... 160
    9.1.5 References ...... 160
  9.2 (optics) ...... 160
    9.2.1 Radial distortion ...... 161
    9.2.2 Software correction ...... 163
    9.2.3 Related phenomena ...... 165
    9.2.4 See also ...... 165
    9.2.5 References ...... 165
    9.2.6 External links ...... 166
  9.3 ...... 166
    9.3.1 Overview ...... 166
    9.3.2 Monochromatic aberration ...... 167
    9.3.3 Analytic treatment of aberrations ...... 173
    9.3.4 Practical elimination of aberrations ...... 175
    9.3.5 Chromatic or color aberration ...... 176
    9.3.6 See also ...... 180
    9.3.7 References ...... 180
    9.3.8 External links ...... 181
  9.4 Orb (optics) ...... 181
    9.4.1 Cause ...... 181
    9.4.2 See also ...... 182
    9.4.3 References ...... 182
    9.4.4 External links ...... 182

10 Day 10 ...... 183
  10.1 f-number ...... 183
    10.1.1 Notation ...... 183
    10.1.2 Stops, f-stop conventions, and exposure ...... 184
    10.1.3 Effects on image sharpness ...... 188
    10.1.4 ...... 189
    10.1.5 Focal ratio in telescopes ...... 189
    10.1.6 Working f-number ...... 190
    10.1.7 History ...... 190
    10.1.8 See also ...... 192
    10.1.9 References ...... 192
    10.1.10 External links ...... 193

11 Text and image sources, contributors, and licenses ...... 196
  11.1 Text ...... 196
  11.2 Images ...... 200
  11.3 Content license ...... 206

Day 1

1.1 History of optics

Optics began with the development of lenses by the ancient Egyptians and Mesopotamians, followed by theories on light and vision developed by ancient Greek philosophers, and the development of geometrical optics in the Greco- Roman world. The word optics is derived from the Greek term τα ὀπτικά meaning “appearance, look”.[1] Optics was significantly reformed by the developments in the medieval Islamic world, such as the beginnings of physical and physiological optics, and then significantly advanced in early modern Europe, where diffractive optics began. These earlier studies on optics are now known as “classical optics”. The term “modern optics” refers to areas of optical research that largely developed in the 20th century, such as wave optics and quantum optics.

1.1.1 Early history of optics

The earliest known lenses were made from polished crystal, often quartz, and have been dated as early as 750 BC for Assyrian lenses such as the Nimrud / Layard lens.[2] There are many similar lenses from ancient Egypt, Greece and Babylon. The ancient Romans and Greeks filled glass spheres with water to make lenses. However, glass lenses were not thought of until the Middle Ages.

Some lenses fixed in ancient Egyptian statues are much older than those mentioned above. There is some doubt as to whether or not they qualify as lenses, but they are undoubtedly glass and served at least ornamental purposes. The statues appear to be anatomically correct schematic eyes.

In ancient India, the philosophical schools of Samkhya and Vaisheshika, from around the 6th–5th century BC, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. In contrast, the Vaisheshika school gives an atomic theory of the physical world on the non-atomic ground of ether, space and time. (See Indian atomism.) The basic atoms are those of earth (pṛthivī), water (āpas), fire (tejas), and air (vāyu), which should not be confused with the ordinary meanings of these terms. These atoms are taken to form binary molecules that combine further to form larger molecules. Motion is defined in terms of the movement of the physical atoms. Light rays are taken to be a high-velocity stream of tejas (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the tejas atoms. Around the first century BC, the Vishnu Purana refers to sunlight as "the seven rays of the sun".

In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye, which shone out from the eye making sight possible.
If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.

In his Optics, the Greek mathematician Euclid observed that "things seen under a greater angle appear greater, and those under a lesser angle less, while those under equal angles appear equal". In the 36 propositions that follow, Euclid relates the apparent size of an object to its distance from the eye and investigates the apparent shapes of cylinders and cones when viewed from different angles. Pappus believed these results to be important in astronomy and included Euclid's Optics, along with his Phaenomena, in the Little Astronomy, a compendium of smaller works to be studied before the Syntaxis (Almagest) of Ptolemy.

In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote:

The light and heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove. — Lucretius, On the nature of the Universe

Despite being similar to later particle theories of light, Lucretius's views were not generally accepted and light was still theorized as emanating from the eye.

In his Catoptrica, Hero of Alexandria showed by a geometrical method that the actual path taken by a ray of light reflected from a plane mirror is shorter than any other reflected path that might be drawn between the source and point of observation.

In the second century Claudius Ptolemy, an Alexandrian Greek or Hellenized Egyptian, undertook studies of reflection and refraction. He measured the angles of refraction between air, water, and glass, and his published results indicate that he adjusted his measurements to fit his (incorrect) assumption that the angle of refraction is proportional to the angle of incidence.[3][4]

The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy, similar to the modern concept of photons, though they also viewed all matter as being composed of these light/energy particles.

The beginnings of geometrical optics

See also: Geometrical optics and Ray (optics)

The early writers discussed here treated vision more as a geometrical than as a physical, physiological, or psychological problem. The first known author of a treatise on geometrical optics was the geometer Euclid (c. 325 BC–265 BC). Euclid began his study of optics as he began his study of geometry, with a set of self-evident axioms.

1. Lines (or visual rays) can be drawn in a straight line to the object.

2. Those lines falling upon an object form a cone.

3. Those things upon which the lines fall are seen.

4. Those things seen under a larger angle appear larger.

5. Those things seen by a higher ray, appear higher.

6. Right and left rays appear right and left.

7. Things seen within several angles appear clearer.

Euclid did not define the physical nature of these visual rays but, using the principles of geometry, he discussed the effects of perspective and the rounding of things seen at a distance.

Where Euclid had limited his analysis to simple direct vision, Hero of Alexandria (c. AD 10–70) extended the principles of geometrical optics to consider problems of reflection (catoptrics). Unlike Euclid, Hero occasionally commented on the physical nature of visual rays, indicating that they proceeded at great speed from the eye to the object seen and were reflected from smooth surfaces but could become trapped in the porosities of unpolished surfaces.[5] This has come to be known as emission theory. Hero demonstrated the equality of the angles of incidence and reflection on the grounds that this is the shortest path from the object to the observer. On this basis, he was able to define the fixed relation between an object and its image in a plane mirror: the image appears to be as far behind the mirror as the object really is in front of the mirror.
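Hero's two claims (that equal angles follow from the shortest path, and that the image lies as far behind the mirror as the object is in front) can be checked numerically. The following sketch uses arbitrary illustrative coordinates, not anything from the text:

```python
import math

# Source S and observer O, both above a plane mirror along the x-axis.
# The coordinates are an arbitrary illustration.
S = (0.0, 3.0)
O = (8.0, 2.0)

def path_length(x):
    """Length of the reflected path S -> (x, 0) -> O."""
    return math.hypot(x - S[0], S[1]) + math.hypot(O[0] - x, O[1])

# Brute-force search for the reflection point giving the shortest path.
best_x = min((i * 1e-4 for i in range(80001)), key=path_length)

# Hero's first claim: the shortest path makes equal angles with the mirror.
angle_in = math.atan2(S[1], best_x - S[0])
angle_out = math.atan2(O[1], O[0] - best_x)
print(abs(angle_in - angle_out) < 1e-3)   # True

# Hero's second claim: the shortest reflected path has the same length as
# the straight line from the mirror image S' = (0, -3) to the observer,
# so the image appears as far behind the mirror as S is in front of it.
image_dist = math.hypot(O[0] - S[0], O[1] + S[1])
print(abs(path_length(best_x) - image_dist) < 1e-6)   # True
```

Unfolding the path about the mirror (replacing the source by its image) turns the reflected path into a straight line, which is exactly why the minimum exists and why the equal-angle law follows.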

Like Hero, Ptolemy (c. 90–c. 168) considered the visual rays as proceeding from the eye to the object seen, but, unlike Hero, considered that the visual rays were not discrete lines but formed a continuous cone. Ptolemy extended the study of vision beyond direct and reflected vision; he also studied vision by refracted rays (dioptrics), when we see objects through the interface between two media of different density. He conducted experiments to measure the path of vision when we look from air to water, from air to glass, and from water to glass, and tabulated the relationship between the incident and refracted rays.[6] His tabulated results have been studied for the air–water interface, and in general the values he obtained reflect the theoretical refraction given by modern theory, but the outliers are distorted to represent Ptolemy's a priori model of the nature of refraction.
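Ptolemy's air-to-water table can be set against the modern Snell's-law prediction. The tabulated values below are those commonly reproduced in histories of optics and should be treated as an assumption of this sketch, as should the modern refractive index of 1.33 for water:

```python
import math

N_WATER = 1.33  # modern refractive index of water (unknown to Ptolemy)

# Ptolemy's air-to-water table as commonly reproduced from his Optics:
# angle of incidence -> his reported angle of refraction (degrees).
ptolemy = {10: 8.0, 20: 15.5, 30: 22.5, 40: 29.0,
           50: 35.0, 60: 40.5, 70: 45.5, 80: 50.0}

for i_deg, r_ptolemy in ptolemy.items():
    # Snell's law: sin(i) = n * sin(r)  =>  r = asin(sin(i) / n)
    r_snell = math.degrees(math.asin(math.sin(math.radians(i_deg)) / N_WATER))
    print(f"i = {i_deg:2d}°  Ptolemy: {r_ptolemy:4.1f}°  Snell: {r_snell:4.1f}°")
```

The two agree to within about half a degree at small incidence but drift apart at large angles, where Ptolemy's smoothed progression overshoots the true refraction; this is the "distorted outliers" pattern described above.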

1.1.2 Optics and vision in the Islamic world

Al-Kindi (c. 801–873) was one of the earliest important optical writers in the Islamic world. In a work known in the West as De radiis stellarum, al-Kindi developed a theory "that everything in the world ... emits rays in every direction, which fill the whole world."[7] This theory of the active power of rays had an influence on later scholars such as Ibn al-Haytham, Robert Grosseteste and Roger Bacon.[8]

Ibn Sahl (c. 940–1000) was an Iraqi mathematician[9] associated with the court of Baghdad. About 984 he wrote a treatise On Burning Mirrors and Lenses in which he set out his understanding of how curved mirrors and lenses bend and focus light. In his work he discovered a law of refraction mathematically equivalent to Snell's law.[10] He used his law of refraction to compute the shapes of lenses and mirrors that focus light at a single point on the axis.

Ibn al-Haytham (known in Western Europe as Alhacen or Alhazen) (965–1040) produced a comprehensive and systematic analysis of Greek optical theories.[11] Ibn al-Haytham's key achievement was twofold: first, to insist that vision occurred because of rays entering the eye; second, to define the physical nature of the rays discussed by earlier geometrical optical writers, considering them as the forms of light and color. He then analyzed these physical rays according to the principles of geometrical optics. He wrote many books on optics, most significantly the Book of Optics (Kitab al-Manazir in Arabic), translated into Latin as the De aspectibus or Perspectiva, which disseminated his ideas to Western Europe and had great influence on the later development of optics.[12]

Avicenna (980–1037) agreed with Alhazen that the speed of light is finite, as he "observed that if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite."[13] Abū Rayhān al-Bīrūnī (973–1048) also agreed that light has a finite speed, and he was the first to discover that the speed of light is much faster than the speed of sound.[14]

Abu 'Abd Allah Muhammad ibn Ma'udh, who lived in Al-Andalus during the second half of the 11th century, wrote a work on optics later translated into Latin as Liber de crepusculis, which was mistakenly attributed to Alhazen. This was a "short work containing an estimation of the angle of depression of the sun at the beginning of the morning twilight and at the end of the evening twilight, and an attempt to calculate on the basis of this and other data the height of the atmospheric moisture responsible for the refraction of the sun's rays." Through his experiments, he obtained the value of 18°, which comes close to the modern value.[15]

In the late 13th and early 14th centuries, Qutb al-Din al-Shirazi (1236–1311) and his student Kamāl al-Dīn al-Fārisī (1260–1320) continued the work of Ibn al-Haytham, and they were the first to give the correct explanations for the rainbow phenomenon. Al-Fārisī published his findings in his Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics).[16]

1.1.3 Optics in medieval Europe

The English bishop Robert Grosseteste (c. 1175–1253) wrote on a wide range of scientific topics at the time of the origin of the medieval university and the recovery of the works of Aristotle. Grosseteste reflected a period of transition between the Platonism of early medieval learning and the new Aristotelianism, hence he tended to apply mathematics and the Platonic metaphor of light in many of his writings. He has been credited with discussing light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light.[17]

Setting aside the issues of epistemology and theology, Grosseteste's cosmogony of light describes the origin of the universe in what may loosely be described as a medieval "big bang" theory. Both his biblical commentary, the Hexaemeron (1230 x 35), and his scientific On Light (1235 x 40), took their inspiration from Genesis 1:3, "God said, let there be light", and described the subsequent process of creation as a natural physical process arising from the generative power of an expanding (and contracting) sphere of light.[18] His more general consideration of light as a primary agent of physical causation appears in his On Lines, Angles, and Figures, where he asserts that "a natural agent propagates its power from itself to the recipient", and in On the Nature of Places, where he notes that "every natural action is varied in strength and weakness through variation of lines, angles and figures."[19]

Reproduction of a page of Ibn Sahl's manuscript showing his discovery of the law of refraction, now known as Snell's law.

The theorem of Ibn Haytham

Optical diagram showing light being refracted by a spherical glass container full of water. (from Roger Bacon, De multiplicatione specierum)

The English Franciscan Roger Bacon (c. 1214–1294) was strongly influenced by Grosseteste's writings on the importance of light. In his optical writings (the Perspectiva, the De multiplicatione specierum, and the De speculis comburentibus) he cited a wide range of recently translated optical and philosophical works, including those of Alhacen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Although he was not a slavish imitator, he drew his mathematical analysis of light and vision from the writings of the Arabic writer Alhacen. But he added to this the Neoplatonic concept, perhaps drawn from Grosseteste, that every object radiates a power (species) by which it acts upon nearby objects suited to receive those species.[20] Note that Bacon's optical use of the term "species" differs significantly from the genus / species categories found in Aristotelian philosophy.

Another English Franciscan, John Pecham (died 1292), built on the work of Bacon, Grosseteste, and a diverse range of earlier writers to produce what became the most widely used optics textbook of the Middle Ages, the Perspectiva communis. His book centered on the question of vision, on how we see, rather than on the nature of light and color. Pecham followed the model set forth by Alhacen, but interpreted Alhacen's ideas in the manner of Roger Bacon.[21]

Like his predecessors, Witelo (c. 1230–1280 x 1314) drew on the extensive body of optical works recently translated from Greek and Arabic to produce a massive presentation of the subject entitled the Perspectiva. His theory of vision follows Alhacen and he does not consider Bacon's concept of species, although passages in his work demonstrate that he was influenced by Bacon's ideas. Judging from the number of surviving manuscripts, his work was not as influential as those of Pecham and Bacon, yet his importance, and that of Pecham, grew with the invention of printing.[22]

• Peter of Limoges (1240–1306)

• Theodoric of Freiberg (ca. 1250–ca. 1310)

1.1.4 Renaissance and early modern optics

Johannes Kepler (1571–1630) picked up the investigation of the laws of optics from his lunar essay of 1600. Both lunar and solar eclipses presented unexplained phenomena, such as unexpected shadow sizes, the red color of a total lunar eclipse, and the reportedly unusual light surrounding a total solar eclipse. Related issues of atmospheric refraction applied to all astronomical observations. Through most of 1603, Kepler paused his other work to focus on optical theory; the resulting manuscript, presented to the emperor on January 1, 1604, was published as Astronomiae Pars Optica (The Optical Part of Astronomy). In it, Kepler described the inverse-square law governing the intensity of light, reflection by flat and curved mirrors, and principles of pinhole cameras, as well as the astronomical implications of optics such as parallax and the apparent sizes of heavenly bodies. Astronomiae Pars Optica is generally recognized as the foundation of modern optics (though the law of refraction is conspicuously absent).[23]

Willebrord Snellius (1580–1626) found the mathematical law of refraction, now known as Snell's law, in 1621. Subsequently, René Descartes (1596–1650) showed, by using geometric construction and the law of refraction (also known as Descartes' law), that the angular radius of a rainbow is 42° (i.e. the angle subtended at the eye by the edge of the rainbow and the rainbow's centre is 42°).[24] He also independently discovered the law of refraction, and his essay on optics was the first published mention of this law.[25]

Christiaan Huygens (1629–1695) wrote several works in the area of optics. These included the Opera reliqua (also known as Christiani Hugenii Zuilichemii, dum viveret Zelhemii toparchae, opuscula posthuma) and the Traité de la lumière.
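Descartes' 42° figure can be reproduced with the same ray-tracing idea he used: a ray refracts into a spherical water droplet, reflects once off the back, and refracts out. A minimal numerical sketch, taking n = 4/3 as the assumed refractive index of water:

```python
import math

N = 4 / 3  # assumed refractive index of water

def bow_angle(i):
    """Elevation above the antisolar direction for a ray that hits a
    spherical droplet at incidence i: refract in, reflect once
    internally, refract out. Geometry gives an elevation of 4r - 2i,
    where r is the angle of refraction inside the droplet."""
    r = math.asin(math.sin(i) / N)
    return math.degrees(4 * r - 2 * i)

# Descartes traced many parallel rays; the bow sits where the outgoing
# directions pile up, i.e. at the maximum of the elevation curve.
angles = [bow_angle(math.radians(d / 10)) for d in range(1, 900)]
print(round(max(angles), 1))  # ≈ 42.0
```

The maximum occurs near 59° incidence, and rays over a wide band of incidence angles exit close to that same 42° elevation, which is why the bow appears bright at precisely that radius.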
Isaac Newton (1643–1727) investigated the refraction of light, demonstrating that a prism could decompose white light into a spectrum of colours, and that a lens and a second prism could recompose the multicoloured spectrum into white light. He also showed that the coloured light does not change its properties by separating out a coloured beam and shining it on various objects. Newton noted that regardless of whether it was reflected or scattered or transmitted, it stayed the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour.

From this work he concluded that any refracting telescope would suffer from the dispersion of light into colours, and invented a reflecting telescope (today known as a Newtonian telescope) to bypass that problem. By grinding his own mirrors, using Newton's rings to judge the quality of the optics for his telescopes, he was able to produce an instrument superior to the refracting telescope, due primarily to the wider diameter of the mirror. In 1671 the Royal Society asked for a demonstration of his reflecting telescope. Their interest encouraged him to publish his notes On Colour, which he later expanded into his Opticks.

Newton argued that light is composed of particles or corpuscles that were refracted by accelerating toward the denser medium, but he had to associate them with waves to explain the diffraction of light (Opticks Bk. II, Props. XII–L). Later physicists instead favoured a purely wavelike explanation of light to account for diffraction. Today's quantum mechanics, photons and the idea of wave-particle duality bear only a minor resemblance to Newton's understanding of light.

In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light. He considered light to be made up of extremely subtle corpuscles, that ordinary matter was made of grosser corpuscles, and speculated that through a kind of alchemical transmutation "Are not gross Bodies and Light convertible into one another, ...and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?"[26]

The beginnings of diffractive optics

The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665.[27][28] Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating.

Thomas Young's sketch of two-slit diffraction, which he presented to the Royal Society in 1803

In 1803, Thomas Young did his famous experiment observing interference from two closely spaced slits in his double-slit interferometer. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, published in 1815 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's particle theory.
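Young's inference rested on the geometry of interference: bright fringes appear where the paths from the two slits differ by a whole number of wavelengths, giving a fringe spacing of λL/d for slit separation d and screen distance L. A small sketch with illustrative numbers (these are not Young's actual dimensions):

```python
# Double-slit fringe spacing: waves from two slits a distance d apart
# interfere on a screen at distance L; bright fringes are separated by
# lambda * L / d. All numbers below are illustrative assumptions.
wavelength = 600e-9        # metres (orange-red light)
slit_separation = 0.2e-3   # 0.2 mm between the slits
screen_distance = 1.0      # metres to the screen

fringe_spacing = wavelength * screen_distance / slit_separation
print(f"{fringe_spacing * 1e3:.1f} mm between bright fringes")  # 3.0 mm
```

The striking point, which a particle model could not explain, is that the spacing scales with the wavelength: measuring millimetre-scale fringes let Young infer wavelengths of visible light below a micrometre.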

1.1.5 Lenses and lensmaking

See also: Timeline of telescope technology

The earliest known lenses were made from polished crystal, often quartz, and have been dated as early as 750 BC for Assyrian lenses such as the Layard / Nimrud lens.[2] There are many similar lenses from ancient Egypt, Greece and Babylon. The ancient Romans and Greeks filled glass spheres with water to make lenses.

The earliest historical reference to magnification dates back to ancient Egyptian hieroglyphs in the 5th century BC, which depict "simple glass meniscal lenses". The earliest written record of magnification dates back to the 1st century AD, when Seneca the Younger, a tutor of Emperor Nero, wrote: "Letters, however small and indistinct, are seen enlarged and more clearly through a globe or glass filled with water".[29] Emperor Nero is also said to have watched the gladiatorial games using an emerald as a corrective lens.[30]

Ibn al-Haytham (Alhacen) wrote about the effects of pinholes, concave lenses, and magnifying glasses in his Book of Optics.[29][31][32] Roger Bacon used parts of glass spheres as magnifying glasses and recommended that they be used to help people read. Bacon drew his inspiration from Alhacen, who in the 11th century had shown that light reflects from objects rather than emanating from them.

Between the 11th and 13th century "reading stones" were invented. Often used by monks to assist in illuminating manuscripts, these were primitive plano-convex lenses initially made by cutting a glass sphere in half. As the stones were experimented with, it was slowly understood that shallower lenses magnified more effectively. Around 1286, possibly in Pisa, Italy, the first pair of eyeglasses was made, although it is unclear who the inventor was.[33]

The earliest known working telescopes were the refracting telescopes that appeared in the Netherlands in 1608. Their development is credited to three individuals: Hans Lippershey and Zacharias Janssen, who were spectacle makers in Middelburg, and Jacob Metius of Alkmaar. Galileo greatly improved upon these designs the following year. Isaac Newton is credited with constructing the first functional reflecting telescope in 1668, his Newtonian reflector.

The first microscope was made around 1595 in Middelburg in the Dutch Republic.[34] Three different eyeglass makers have been given credit for the invention: Hans Lippershey (who also developed the first real telescope); Hans Janssen; and his son, Zacharias. The coining of the name "microscope" has been credited to Giovanni Faber, who gave that name to Galileo Galilei's compound microscope in 1625.[35]

1.1.6 Quantum optics

Main article: Quantum optics

Light is made up of particles called photons and hence is inherently quantized. Quantum optics is the study of the nature and effects of light as quantized photons. The first indication that light might be quantized came from Max Planck in 1899, when he correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta. It was unknown whether the source of this discreteness was the matter or the light.[36]:231–236 In 1905, Albert Einstein published the theory of the photoelectric effect. It appeared that the only possible explanation for the effect was the quantization of light itself. Later, Niels Bohr showed that atoms could only emit discrete amounts of energy. The understanding of the interaction between light and matter following from these developments not only formed the basis of quantum optics but was also crucial for the development of quantum mechanics as a whole.

However, the subfields of quantum mechanics dealing with matter-light interaction were principally regarded as research into matter rather than into light, and hence one spoke rather of atomic physics and quantum electronics. This changed with the invention of the maser in 1953 and the laser in 1960. Laser science (research into the principles, design and application of these devices) became an important field, and the quantum mechanics underlying the laser's principles was now studied with more emphasis on the properties of light, and the name quantum optics became customary. As laser science needed good theoretical foundations, and also because research into these soon proved very fruitful, interest in quantum optics rose.

Following the work of Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light (see degree of coherence). This led to the introduction of the coherent state as a quantum description of laser light and the realization that some states of light could not be described with classical waves.

In 1977, Kimble et al. demonstrated the first source of light which required a quantum description: a single atom that emitted one photon at a time. Another quantum state of light with certain advantages over any classical state, squeezed light, was soon proposed. At the same time, the development of short and ultrashort laser pulses, created by Q-switching and mode-locking techniques, opened the way to the study of unimaginably fast ("ultrafast") processes. Applications for solid state research (e.g. Raman spectroscopy) were found, and the mechanical forces of light on matter were studied. The latter led to levitating and positioning clouds of atoms or even small biological samples in an optical trap or by laser beam. This, along with Doppler cooling, was the crucial technology needed to achieve the celebrated Bose–Einstein condensation.

Other remarkable results are the demonstration of quantum entanglement, quantum teleportation, and (in 1995) quantum logic gates. The latter are of much interest in quantum information theory, a subject which partly emerged from quantum optics and partly from theoretical computer science.

Today's fields of interest among quantum optics researchers include parametric down-conversion, parametric oscillation, even shorter (attosecond) light pulses, the use of quantum optics for quantum information, the manipulation of single atoms, Bose–Einstein condensates, their applications and how to manipulate them (a sub-field often called atom optics), and much more.
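Planck's quanta can be made concrete with his relation E = hν = hc/λ: a single photon of visible light carries only a couple of electronvolts, which is why the discreteness is invisible at everyday intensities. A minimal sketch (the 532 nm wavelength is an illustrative choice):

```python
# Planck's relation: light of frequency nu exchanges energy only in
# quanta of E = h * nu, equivalently E = h * c / wavelength.
PLANCK_H = 6.62607015e-34  # J*s (exact by the 2019 SI definition)
C = 299792458.0            # m/s (exact)
EV = 1.602176634e-19       # joules per electronvolt (exact)

wavelength = 532e-9        # green laser light, an illustrative choice
energy_j = PLANCK_H * C / wavelength
print(f"{energy_j / EV:.2f} eV per photon")  # ~2.33 eV
```

A one-milliwatt green laser pointer therefore emits on the order of 10^15 such quanta every second, so classical wave optics remains an excellent approximation until experiments isolate photons one at a time, as Kimble et al. did.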

1.1.7 See also

• History of electromagnetism

• History of physics

• List of astronomical instrument makers

1.1.8 Notes

[1] T. F. Hoad (1996). The Concise Oxford Dictionary of English Etymology. ISBN 0-19-283098-8.

[2] “The Nimrud lens / the Layard lens”. Collection database. The British Museum. Retrieved May 11, 2015.

[3] Lloyd, G.E.R. (1973). Greek Science After Aristotle. New York: W.W.Norton. pp. 131–135. ISBN 0-393-04371-1.

[4] A brief history of Optics

[5] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 14-15.

[6] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), p. 16; A. M. Smith, Ptolemy’s search for a law of refraction: a case-study in the classical methodology of 'saving the appearances’ and its limitations, Arch. Hist. Exact Sci. 26 (1982), 221-240; Ptolemy’s procedure is reported in the fifth chapter of his Optics.

[7] Cited in D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), p. 19.

[8] Lindberg, David C. (Winter 1971), “Alkindi’s Critique of Euclid’s Theory of Vision”, Isis, 62 (4): 469–489 [471], doi:10.1086/350790

[9] “IBN SAHL (940-1000): The Inventor of the Refraction Law”. Islam In Indonesia. Retrieved 2016-02-23.

[10] R. Rashed, “A Pioneer in Anaclastics: Ibn Sahl on Burning Mirrors and Lenses”, Isis 81 (1990): 464–91.

[11] D. C. Lindberg, “Alhazen’s Theory of Vision and its Reception in the West”, Isis, 58 (1967), p. 322.

[12] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 58-86; Nader El-Bizri 'A Philosophical Perspective on Alhazen’s Optics’, Arabic Sciences and Philosophy 15 (2005), 189–218.

[13] George Sarton, Introduction to the History of Science, Vol. 1, p. 710.

[14] O'Connor, John J.; Robertson, Edmund F., “Al-Biruni”, MacTutor History of Mathematics archive, University of St Andrews.

[15] Sabra, A. I. (Spring 1967), “The Authorship of the Liber de crepusculis, an Eleventh-Century Work on Atmospheric Refraction”, Isis, 58 (1): 77–85 [77], doi:10.1086/350185

[16] O'Connor, John J.; Robertson, Edmund F., “Al-Farisi”, MacTutor History of Mathematics archive, University of St Andrews.

[17] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 94-99.

[18] R. W. Southern, Robert Grosseteste: The Growth of an English Mind in Medieval Europe, (Oxford: Clarendon Press, 1986), pp. 136-9, 205-6.

[19] A. C. Crombie, Robert Grosseteste and the Origins of Experimental Science, (Oxford: Clarendon Press, 1971), p. 110

[20] D. C. Lindberg, “Roger Bacon on Light, Vision, and the Universal Emanation of Force”, pp. 243-275 in Jeremiah Hackett, ed., Roger Bacon and the Sciences: Commemorative Essays, (Leiden: Brill, 1997), pp. 245-250; Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 107-18; The Beginnings of Western Science, (Chicago: Univ. of Chicago Pr., 1992), p. 313.

[21] D. C. Lindberg, John Pecham and the Science of Optics: Perspectiva communis, (Madison, Univ. of Wisconsin Pr., 1970), pp. 12-32; Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 116-18.

[22] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 118-20.

[23] Caspar, Kepler, pp 142–146

[24] Tipler, P. A. and G. Mosca (2004), Physics for Scientists and Engineers, W. H. Freeman, p. 1068, ISBN 0-7167-4389-2, OCLC 51095685

[25] “René Descartes”, Encarta, Microsoft, 2008, archived from the original on 2009-10-31, retrieved 2007-08-15

[26] Dobbs, J.T. (December 1982), “Newton’s Alchemy and His Theory of Matter”, Isis, 73 (4): 523, doi:10.1086/353114 quoting Opticks

[27] Jean Louis Aubert (1760), Memoires pour l'histoire des sciences et des beaux arts, Paris: Impr. de S. A. S; Chez E. Ganeau, p. 149

[28] Sir David Brewster (1831), A Treatise on Optics, London: Longman, Rees, Orme, Brown & Green and John Taylor, p. 95

[29] Kriss, Timothy C.; Kriss, Vesna Martich (April 1998), “History of the Operating Microscope: From Magnifying Glass to Microneurosurgery”, Neurosurgery, 42 (4): 899–907, doi:10.1097/00006123-199804000-00116, PMID 9574655

[30] Pliny the Elder. “Natural History”. Retrieved 2008-04-27.

[31] (Wade & Finger 2001)

[32] (Elliott 1966, Chapter 1)

[33] Ilardi, Vincent (2007-01-01). Renaissance Vision from Spectacles to Telescopes. American Philosophical Society. pp. 4–6. ISBN 9780871692597.

[34] Microscopes: Time Line

[35] Stephen Jay Gould (2000). The Lying Stones of Marrakech, ch. 2 “The Sharp-Eyed Lynx, Outfoxed by Nature”. London: Jonathan Cape. ISBN 0-224-05044-3

[36] William H. Cropper (2004). Great Physicists: The Life and Times of Leading Physicists from Galileo to Hawking. Oxford University Press. ISBN 978-0-19-517324-6.

1.1.9 References

• Crombie, A. C. Robert Grosseteste and the Origins of Experimental Science. Oxford: Clarendon Press, 1971.

• Howard, Ian P.; Wade, Nicholas J. (1996), “Ptolemy’s contributions to the geometry of binocular vision”, Perception, 25 (10): 1189–201, doi:10.1068/p251189, PMID 9027922.

• Lindberg, D. C. “Alhazen’s Theory of Vision and its Reception in the West”, Isis 58 (1967), 321-341.

• Lindberg, D. C. Theories of Vision from al-Kindi to Kepler. Chicago: University of Chicago Press, 1976.

• Morelon, Régis; Rashed, Roshdi (1996), Encyclopedia of the History of Arabic Science, 2, Routledge, ISBN 0-415-12410-7, OCLC 34731151.

• Wade, Nicholas J. (1998), A Natural History of Vision, Cambridge, Massachusetts: MIT Press, ISBN 0-262-23194-8, OCLC 37246567.

• History of Optics (audio mp3) by Simon Schaffer, Professor in History and Philosophy of Science at the University of Cambridge, Jim Bennett, Director of the Museum of the History of Science at the University of Oxford and Emily Winterburn, Curator of Astronomy at the National Maritime Museum (recorded by the BBC).

Chapter 2

Day 2

2.1 Geometrical optics

Main article: Optics

Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction useful for approximating the paths along which light propagates in certain circumstances. The simplifying assumptions of geometrical optics include that light rays:

• propagate in rectilinear paths as they travel in a homogeneous medium

• bend, and in particular circumstances may split in two, at the interface between two dissimilar media

• follow curved paths in a medium in which the refractive index changes

• may be absorbed or reflected.

Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations.

2.1.1 Explanation

A light ray is a line or curve that is perpendicular to the light’s wavefronts (and is therefore collinear with the wave vector). A slightly more rigorous definition of a light ray follows from Fermat’s principle, which states that the path taken between two points by a ray of light is the path that can be traversed in the least time.[1] Geometrical optics is often simplified by making the paraxial approximation, or “small angle approximation.” The mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications.[2]
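The matrix techniques just mentioned can be illustrated with a short sketch. This is a minimal example of paraxial ("ABCD") ray transfer matrices; the function names and the numerical values below are this illustration's own, not from the text:

```python
# Paraxial ray tracing with 2x2 ray transfer ("ABCD") matrices.
# A ray is the pair (y, theta): height above and angle from the optical axis.

def mat_mul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def free_space(d):
    """Propagation over a distance d: y' = y + d*theta, theta' = theta."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Thin lens of focal length f: theta' = theta - y/f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def compose(*elements):
    """Compose elements in the order the ray meets them (left to right)."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for e in elements:
        m = mat_mul(e, m)
    return m

# Object 0.3 m in front of a 0.1 m lens; the thin-lens equation puts the
# image 0.15 m behind it.  At an image plane the system's B entry vanishes
# and the A entry is the transverse magnification.
system = compose(free_space(0.3), thin_lens(0.1), free_space(0.15))
A, B = system[0]
print(B)  # 0.0: all rays from an object point meet at one image point
print(A)  # -0.5: inverted image, half size
```

Each matrix acts on the column vector (y, θ), so composing the matrices of successive elements traces a ray through the whole system in one multiplication.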

2.1.2 Reflection

Main article: Reflection (physics) Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space.


As light travels through space, it oscillates in amplitude. In this image, each maximum amplitude crest is marked with a plane to illustrate the wavefront. The ray is the arrow perpendicular to these parallel surfaces.

With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal.[3] This is known as the Law of Reflection.

For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. (The magnification of a flat mirror is equal to one.) The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion.

Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen.[3]
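The law of reflection has a compact vector form, r = d − 2(d·n)n for an incident direction d and unit surface normal n. A minimal sketch; the function name and the example vectors are invented for illustration:

```python
# Specular reflection of a direction vector about a unit surface normal.

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling down-and-right hits a horizontal mirror (normal +y):
r = reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0))
print(r)  # (1.0, 1.0, 0.0): same angle to the normal, on the other side
```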

2.1.3 Refraction

Main article: Refraction Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2 . In such situations, Snell’s Law describes the resulting deflection of the light ray:

n1 sin θ1 = n2 sin θ2

where θ1 and θ2 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. This phenomenon is also associated with a changing speed of light as seen from the definition of index of refraction provided above which implies:

Diagram of specular reflection

v1 sin θ2 = v2 sin θ1

where v1 and v2 are the wave velocities through the respective media.[3] Various consequences of Snell’s Law include the fact that for light rays traveling from a material with a high index of refraction to a material with a low index of refraction, it is possible for the interaction with the interface to result in zero transmission. This phenomenon is called total internal reflection and allows for fiber optics technology. As light signals travel down a fiber optic cable, they undergo total internal reflection allowing for essentially no light lost over the length of the cable. It is also possible to produce polarized light rays using a combination of reflection and refraction: When a refracted ray and the reflected ray form a right angle, the reflected ray has the property of “plane polarization”. The angle of incidence required for such a scenario is known as Brewster’s angle.[3]
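The relations above (Snell's law, total internal reflection, Brewster's angle) can be sketched in a few lines; the function names and example indices are this illustration's own:

```python
# Snell's law, the critical angle for total internal reflection, and
# Brewster's angle, for two media of refractive index n1 and n2.
import math

def refraction_angle(n1, n2, theta1):
    """Angle of the refracted ray, or None under total internal reflection."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        return None  # no transmitted ray: total internal reflection
    return math.asin(s)

def critical_angle(n1, n2):
    """Smallest incidence angle giving total internal reflection (n1 > n2)."""
    return math.asin(n2 / n1)

def brewster_angle(n1, n2):
    """Incidence angle at which the reflected ray is fully plane-polarized."""
    return math.atan(n2 / n1)

# Glass (n = 1.5) to air: total internal reflection beyond ~41.8 degrees.
print(math.degrees(critical_angle(1.5, 1.0)))                 # ~41.8
print(refraction_angle(1.5, 1.0, math.radians(60)))           # None (TIR)
# Air to glass at 30 degrees bends toward the normal, to ~19.5 degrees.
print(math.degrees(refraction_angle(1.0, 1.5, math.radians(30))))  # ~19.47
# Brewster's angle from air onto glass:
print(math.degrees(brewster_angle(1.0, 1.5)))                 # ~56.3
```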

Illustration of Snell’s Law

Snell’s Law can be used to predict the deflection of light rays as they pass through “linear media” as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. Additionally, since different frequencies of light have slightly different indexes of refraction in most materials, refraction can be used to produce dispersion spectra that appear as rainbows. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton.[3] Some media have an index of refraction which varies gradually with position and, thus, light rays curve through the medium rather than travel in straight lines. This effect is what is responsible for mirages seen on hot days where the changing index of refraction of the air causes the light rays to bend creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Material that has a varying index of refraction is called a gradient-index (GRIN) material and has many useful properties used in modern optical scanning technologies including photocopiers and scanners. The phenomenon is studied in the field of gradient-index optics.[4] A device which produces converging or diverging light rays due to refraction is known as a lens. Thin lenses produce focal points on either side that can be modeled using the lensmaker’s equation.[5] In general, two types of lenses exist: convex lenses, which cause parallel light rays to converge, and concave lenses, which cause parallel light rays to diverge. The detailed prediction of how images are produced by these lenses can be made using ray-tracing similar to curved mirrors. Similarly to curved mirrors, thin lenses follow a simple equation that determines the location of the images given a particular focal length ( f ) and object distance ( S1 ):

1/S1 + 1/S2 = 1/f

where S2 is the distance associated with the image and is considered by convention to be negative if on the same side of the lens as the object and positive if on the opposite side of the lens.[5] The focal length f is considered negative for concave lenses.

Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to

A ray tracing diagram for a simple converging lens.

Incoming parallel rays are focused by a convex lens into an inverted real image one focal length from the lens, on the far side of the lens

the lens. Likewise, the magnification of a lens is given by

With concave lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at an upright virtual image one focal length from the lens, on the same side of the lens that the parallel rays are approaching on.

M = −S2/S1 = f/(f − S1)

where the negative sign is given, by convention, to indicate an upright image for positive values and an inverted image for negative values. Similar to mirrors, upright images produced by single lenses are virtual while inverted images are real.[3] Lenses suffer from aberrations that distort images and focal points. These are due both to geometrical imperfections and to the changing index of refraction for different wavelengths of light (chromatic aberration).[3]
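A small numerical sketch of the thin-lens equation and magnification under the sign conventions stated above; the focal length and object distance are invented for the example:

```python
# Thin-lens equation 1/S1 + 1/S2 = 1/f and magnification M = -S2/S1,
# with S2 > 0 on the far side of the lens and f < 0 for a concave lens.

def image_distance(f, s1):
    """Solve 1/S1 + 1/S2 = 1/f for S2."""
    return 1.0 / (1.0 / f - 1.0 / s1)

def magnification(f, s1):
    """M = -S2/S1 = f/(f - S1)."""
    return f / (f - s1)

f, s1 = 0.1, 0.3               # 100 mm lens, object 300 mm away
s2 = image_distance(f, s1)
print(s2)                      # 0.15: real image 150 mm past the lens
print(magnification(f, s1))    # -0.5: inverted, half size
print(-s2 / s1)                # the two expressions for M agree
```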

2.1.4 Underlying mathematics

As a mathematical study, geometrical optics emerges as a short-wavelength limit for solutions to hyperbolic partial differential equations. In this short-wavelength limit, it is possible to approximate the solution locally by

u(t, x) ≈ a(t, x) e^{i(k·x − ωt)}

where k, ω satisfy a dispersion relation, and the amplitude a(t, x) varies slowly. More precisely, the leading order solution takes the form

a_0(t, x) e^{iφ(t,x)/ε}.

Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal length, and on the same side of the lens as the object.

The phase φ(t, x)/ε can be linearized to recover large wavenumber k := ∇_x φ and frequency ω := −∂_t φ. The amplitude a_0 satisfies a transport equation. The small parameter ε enters the scene due to highly oscillatory initial conditions. Thus, when initial conditions oscillate much faster than the coefficients of the differential equation, solutions will be highly oscillatory, and transported along rays. Assuming coefficients in the differential equation are smooth, the rays will be too. In other words, refraction does not take place. The motivation for this technique comes from studying the typical scenario of light propagation where short wavelength light travels along rays that minimize (more or less) its travel time. Its full application requires tools from microlocal analysis.

A simple example

Starting with the wave equation for (t, x) ∈ R × R^n

L(∂_t, ∇_x) u := (∂²/∂t² − c(x)² Δ) u(t, x) = 0,   u(0, x) = u_0(x),   u_t(0, x) = 0

assume an asymptotic series solution of the form

u(t, x) ∼ a_ε(t, x) e^{iφ(t,x)/ε} = ∑_{j=0}^{∞} i^j ε^j a_j(t, x) e^{iφ(t,x)/ε}.

Check that

L(∂_t, ∇_x)(a_ε(t, x) e^{iφ(t,x)/ε}) = e^{iφ(t,x)/ε} [ (i/ε)² L(φ_t, ∇_x φ) a_ε + (2i/ε) V(∂_t, ∇_x) a_ε + (i/ε) a_ε L(∂_t, ∇_x) φ + L(∂_t, ∇_x) a_ε ]

with

V(∂_t, ∇_x) := (∂φ/∂t) ∂/∂t − c²(x) ∑_j (∂φ/∂x_j) ∂/∂x_j

Plugging the series into this equation, and equating powers of ε, the most singular term O(ε^{−2}) satisfies the eikonal equation (in this case called a dispersion relation),

0 = L(φ_t, ∇_x φ) = (φ_t)² − c(x)² (∇_x φ)².

To order ε^{−1}, the leading-order amplitude must satisfy a transport equation

2V a_0 + (Lφ) a_0 = 0

With the definition k := ∇_x φ, ω := −φ_t, the eikonal equation is precisely the dispersion relation that results by plugging the plane wave solution e^{i(k·x−ωt)} into the wave equation. The value of this more complicated expansion is that plane waves cannot be solutions when the wavespeed c is non-constant. However, it can be shown that the amplitude a_0 and phase φ are smooth, so that on a local scale there are plane waves. To justify this technique, the remaining terms must be shown to be small in some sense. This can be done using energy estimates, and an assumption of rapidly oscillating initial conditions. It also must be shown that the series converges in some sense.
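As a quick consistency check (a worked step added here, not taken from the source): for constant wavespeed c, the plane-wave phase satisfies the eikonal equation exactly, which is the constant-coefficient case of the dispersion relation above.

```latex
\varphi(t,x) = k\cdot x - \omega t, \qquad \omega = c\,\lvert k\rvert
\;\Longrightarrow\;
\varphi_t = -\omega, \quad \nabla_x\varphi = k, \quad
(\varphi_t)^2 - c^2\,(\nabla_x\varphi)^2 = \omega^2 - c^2\lvert k\rvert^2 = 0 .
```

For non-constant c(x) the same identity holds only approximately on small scales, which is exactly the regime the expansion formalizes.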

2.1.5 See also

• Hamiltonian optics

2.1.6 References

[1] Arthur Schuster, An Introduction to the Theory of Optics, London: Edward Arnold, 1904 online.

[2] Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides. 1. SPIE. pp. 19–20. ISBN 0-8194-5294-7.

[3] Hugh D. Young (1992). University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5. Chapter 35.

[4] E. W. Marchand, Gradient Index Optics, New York, NY, Academic Press, 1978.

[5] Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Chapters 5 & 6.

2.1.7 Further reading

• Robert Alfred Herman (1900) A Treatise on Geometrical Optics from Archive.org.

• “The Light of the Eyes and the Enlightened Landscape of Vision” is a manuscript, in Arabic, about geometrical optics, dating from the 16th century.

• Theory of Systems of Rays - W. R. Hamilton in Transactions of the Royal Irish Academy, Vol. XV, 1828.

English translations of some early books and papers:

• H. Bruns, “Das Eikonal”

• M. Malus, “Optique”

• J. Plucker, “Discussion of the general form for light waves”

• E. Kummer, “General theory of rectilinear ray systems”

• E. Kummer, presentation on optically-realizable rectilinear ray systems

• R. Meibauer, “Theory of rectilinear systems of light rays”

• M. Pasch, “On the focal surfaces of ray systems and the singularity surfaces of complexes”

• A. Levistal, “Research in geometrical optics”

• F. Klein, “On the Bruns eikonal”

• R. Dontot, “On integral invariants and some points of geometrical optics”

• T. de Donder, “On the integral invariants of optics”

2.1.8 External links

• Fundamentals of Photonics - Module on Basic Geometrical Optics

Chapter 3

Day 3

3.1 Optics

This article is about the branch of physics. For the book by Sir Isaac Newton, see Opticks. For other uses, see Optic (disambiguation). Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it.[1] Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.[1] Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light’s particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine (particularly ophthalmology and optometry).
Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics.

3.1.1 History

Main article: History of optics See also: Timeline of electromagnetism and classical optics Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 700 BC for Assyrian lenses such as the Layard/Nimrud lens.[2] The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική (optikē), meaning “appearance, look”.[3] Greek philosophy on optics broke down into two opposing theories on how vision worked, the “intromission theory” and the “emission theory”.[4] The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many propagators including Democritus, Epicurus, Aristotle and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation.


Optics includes study of dispersion of light.

Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus.[5] Some hundred years later, Euclid wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics.[6] He based his work on Plato’s emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked.[7] Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer’s intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence.[8]

The Nimrud lens

During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi (c. 801–73) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena.[10] In 984, the Persian mathematician Ibn Sahl wrote the treatise “On burning mirrors and lenses”, correctly describing a law of refraction equivalent to Snell’s law.[11] He used this law to compute optimum shapes for lenses and curved mirrors. In the early 11th century, Alhazen (Ibn al-Haytham) wrote the Book of Optics (Kitab al-manazir) in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment.[12][13][14][15][16] He rejected the “emission theory” of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays.[17] Alhazen’s work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo[18] making it a standard text on optics in Europe for the next 400 years.[19] In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light,[20] basing it on the works of Aristotle and Platonism.
Grosseteste’s most famous disciple, Roger Bacon, wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna, Averroes, Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African. Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them. The first wearable eyeglasses were invented in Italy around 1286.[21] This was the start of the optical industry of grinding and polishing lenses for these “spectacles”, first in Venice and Florence in the thirteenth century,[22] and later in the spectacle making centres in both the Netherlands and Germany.[23] Spectacle makers created improved

Alhazen (Ibn al-Haytham), “the father of Optics”[9]

types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked).[24][25] This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands.[26][27] In the early 17th century Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras, inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax. He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years.[28] After the invention of the telescope Kepler set out the theoretical basis on how they worked

and described an improved version, known as the Keplerian telescope, using two convex lenses to produce higher magnification.[29]

Reproduction of a page of Ibn Sahl’s manuscript showing his knowledge of the law of refraction.

Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes, which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it.[30] This differed substantively from the ancient Greek emission theory. In the late 1660s and early 1670s, Isaac Newton expanded Descartes’ ideas into a corpuscle theory of light, famously determining that white light was a mix of colours which can be separated into its component parts with a prism. In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke

Cover of the first edition of Newton’s Opticks

himself publicly criticised Newton’s theories of light and the feud between the two lasted until Hooke’s death. In 1704, Newton published Opticks and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light.[30] Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light’s wave nature. Young’s famous double slit experiment showed that light followed the law of superposition, which is a wave-like property not predicted by Newton’s corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics.[31] Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s.[32] The next development in optical theory came in 1899 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta.[33] In 1905 Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself.[34][35] In 1913 Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra.[36] The understanding of the interaction between light and matter which followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole.
The ultimate culmination, the theory of quantum electrodynamics, explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons.[37] Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960.[38] Following the work of Paul Dirac in quantum field theory, George Sudarshan, Roy J. Glauber, and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light.

3.1.2 Classical optics

Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled.

Geometrical optics

Main article: Geometrical optics

Geometry of reflection and refraction of light rays

Geometrical optics, or ray optics, describes the propagation of light in terms of “rays” which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media.[39] These laws were discovered empirically as far back as 984 AD[11] and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows:

When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray.

The law of reflection says that the reflected ray lies in the plane of incidence, and the angle of reflection equals the angle of incidence.

The law of refraction says that the refracted ray lies in the plane of incidence, and the sine of the angle of incidence divided by the sine of the angle of refraction is a constant:

sin θ1 = n sin θ2

where n is a constant for any two materials and a given colour of light. If the first material is air or vacuum, n is the refractive index of the second material. The laws of reflection and refraction can be derived from Fermat’s principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time.[40]
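These laws are easy to put to work numerically. The Python sketch below (the function name and sample indices are illustrative, not from the text) applies Snell's law and reports when no refracted ray exists:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Return the refraction angle (degrees) from Snell's law:
    n1 sin(theta1) = n2 sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # no refracted ray: total internal reflection
    return math.degrees(math.asin(s))

# Air (n = 1.0) into water (n = 1.33): the ray bends towards the normal.
theta2 = refraction_angle(30.0, 1.0, 1.33)
print(round(theta2, 1))  # 22.1
```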

Approximations Geometric optics is often simplified by making the paraxial approximation, or “small angle approximation”. The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing, which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications.[41]
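The "simple matrices" mentioned here are the standard 2×2 ray-transfer (ABCD) matrices. The sketch below (NumPy; the focal length and distances are illustrative values) propagates a paraxial ray, stored as a (height, angle) pair:

```python
import numpy as np

def free_space(d):
    """Ray-transfer matrix for propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Ray-transfer matrix for a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray parallel to the axis at height 1, through a lens with f = 100 mm,
# then 100 mm of free space: it crosses the axis at the rear focal point.
ray = np.array([1.0, 0.0])  # (height, angle)
ray = free_space(100.0) @ thin_lens(100.0) @ ray
print(ray)  # height is ~0 at the rear focal plane
```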

Reflections Main article: Reflection (physics) Reflections can be divided into two types: specular reflection and diffuse reflection. Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual (real) or extrapolated (virtual) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert’s cosine law, which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection. In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal.[42] This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors[42] retroreflect light, producing reflected rays that travel back in the direction from which the incident rays came. Mirrors with curved surfaces can be modelled by ray-tracing and using the law of reflection at each point on the surface. 
For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen.[42]
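The law of reflection applied at each surface point in such ray tracing has a compact vector form, r = d − 2(d·n)n, where n is the unit surface normal. A minimal NumPy sketch (directions are illustrative):

```python
import numpy as np

def reflect(d, n):
    """Reflect a direction vector d off a surface with normal n:
    r = d - 2 (d . n) n, with n normalised to unit length."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray travelling down-and-right hits a horizontal mirror (normal = +y):
incident = np.array([1.0, -1.0]) / np.sqrt(2)
reflected = reflect(incident, np.array([0.0, 1.0]))
print(reflected)  # angle of reflection equals angle of incidence
```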

Refractions Main article: Refraction Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2 . In such situations, Snell’s Law describes the resulting deflection of the light ray:


Diagram of specular reflection

n1 sin θ1 = n2 sin θ2 [42]

where θ1 and θ2 are the angles between the normal (to the interface) and the incident and refracted waves, respectively. The index of refraction of a medium is related to the speed, v, of light in that medium by n = c/v where c is the speed of light in vacuum. Snell’s Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results


Illustration of Snell’s Law for the case n1 < n2, such as air/water interface

in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light. Taking this into account, Snell’s Law can be used to predict how a prism will disperse light into a spectrum. The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton.[42] Some media have an index of refraction which varies gradually with position and, thus, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in the index of refraction of air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying index of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics.[43] For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell’s law predicts that there is no θ2 when θ1 is large. In this case, no transmission occurs; all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection allowing for essentially no light to be lost over the length of the cable.[42]
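The onset of total internal reflection is the critical angle, given by sin θc = n2/n1. A quick sketch (the indices are illustrative, typical glass-into-air values):

```python
import math

def critical_angle_deg(n1, n2):
    """Smallest incidence angle (degrees) giving total internal reflection,
    for light going from index n1 into a lower index n2 (requires n1 > n2)."""
    if n1 <= n2:
        return None  # no total internal reflection in this direction
    return math.degrees(math.asin(n2 / n1))

# Glass fibre core (n ~ 1.5) into air: rays steeper than ~41.8 degrees
# from the normal are entirely reflected back into the glass.
print(round(critical_angle_deg(1.5, 1.0), 1))  # 41.8
```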

Lenses Main article: Lens (optics) A device which produces converging or diverging light rays due to refraction is known as a lens. Lenses are characterized by their focal length: a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker’s equation.[44] Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation

1/S1 + 1/S2 = 1/f

where S1 is the distance from the object to the lens, S2 is the distance from the lens to the image, and f is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens.[44] Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens.

A ray tracing diagram for a converging lens.

With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens’s front focal point. Rays from an object at finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real.[42] Lenses suffer from aberrations that distort images. Monochromatic aberrations occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light.[42]
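The thin-lens equation can be rearranged to give the image distance directly. A small sketch (the focal length and object distance are illustrative; the magnification −S2/S1 follows the same sign convention, so a negative value means an inverted image):

```python
def image_distance(s1, f):
    """Solve 1/s1 + 1/s2 = 1/f for the image distance s2 (thin lens in air)."""
    if s1 == f:
        return float('inf')  # object at the focal point: image at infinity
    return 1.0 / (1.0 / f - 1.0 / s1)

# Converging lens, f = 50 mm, object 75 mm away:
s2 = image_distance(75.0, 50.0)
print(s2, -s2 / 75.0)  # image ~150 mm behind the lens, magnification ~ -2
```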


Images of black letters in a thin convex lens of focal length f are shown in red. Selected rays are shown for letters E, I and K in blue, green and orange, respectively. Note that E (at 2f) has an equal-size, real and inverted image; I (at f) has its image at infinity; and K (at f/2) has a double-size, virtual and upright image.

Physical optics

Main article: Physical optics

In physical optics, light is considered to propagate as a wave. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. The speed of light waves in air is approximately 3.0×10⁸ m/s (exactly 299,792,458 m/s in vacuum). The wavelength of visible light waves varies between 400 and 700 nm, but the term “light” is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm). The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is “waving” in what medium. Until the middle of the 19th century, most physicists believed in an “ethereal” medium in which the light disturbance propagated.[45] The existence of electromagnetic waves was predicted in 1865 by Maxwell’s equations. These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves.[46] Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered.

Modelling and design of optical systems using physical optics Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors.[47] The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens’ hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. The Kirchhoff diffraction equation, which is derived using Maxwell’s equations, puts the Huygens–Fresnel equation on a firmer physical foundation. Examples of the application of the Huygens–Fresnel principle can be found in the sections on diffraction and Fraunhofer diffraction.

More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with the detailed interaction of light with materials where the interaction depends on their electric and magnetic properties. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light. Numerical modeling techniques such as the finite element method, the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions.[48] All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing. Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics.[49]

Superposition and interference Main articles: Superposition principle and Interference (optics)

In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances.[50] This interaction of waves to produce a resulting pattern is generally termed “interference” and can result in a variety of outcomes. If two waves of the same wavelength and frequency are in phase, both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location. See below for an illustration of this effect.[50] Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns.[50] Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions.[51] The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light.[52] The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with thickness one-fourth the wavelength of incident light.
The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength. Constructive interference in thin films can create strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors, interference filters, heat reflectors, and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks.[50]
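The quarter-wave condition fixes the layer's physical thickness: the optical path n·d must equal λ/4, so d = λ/(4n). A one-line check (the MgF2 index is an assumed typical value, not from the text):

```python
def quarter_wave_thickness_nm(wavelength_nm, n_coating):
    """Physical thickness of a quarter-wave antireflective layer:
    the optical path n*d equals a quarter of the design wavelength."""
    return wavelength_nm / (4.0 * n_coating)

# MgF2 coating (n ~ 1.38) designed for the middle of the visible spectrum:
print(round(quarter_wave_thickness_nm(550.0, 1.38), 1))  # ~99.6 nm
```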

Diffraction and optical resolution Main articles: Diffraction and Optical resolution Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi, who also coined the term from the Latin diffringere, ‘to break into pieces’.[53][54] Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton’s rings[55] while James Gregory recorded his observations of diffraction patterns from bird feathers.[56] The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his interference experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles.[57] In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference

When oil or fuel is spilled, colourful patterns are formed by thin-film interference.

can account for diffraction.[44] The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength (λ). In general, the equation takes the form

mλ = d sin θ

where d is the separation between two wavefront sources (in the case of Young’s experiments, it was two slits), θ is the angular separation between the central fringe and the mth order fringe, and the central maximum is m = 0.[58] This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing.[58] More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction.[59] X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom. To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection, with the associated bright spots occurring in unique patterns and d being twice the spacing between atoms.[58] Diffraction effects limit the ability for an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in

Diffraction on two slits separated by distance d. The bright fringes occur along lines where black lines intersect with black lines and white lines intersect with white lines. These fringes are separated by angle θ and are numbered as order m.

diffraction-limited optics) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern, and the central bright lobe as an Airy disk.[44] The size of such a disk is given by

sin θ = 1.22 λ/D

where θ is the angular resolution, λ is the wavelength of the light, and D is the diameter of the lens aperture. If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary "Rayleigh criterion" that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution.[58] Interferometry, with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible.[51] For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle. Astronomers refer to this effect as the quality of astronomical seeing. Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit.[60]
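As a worked example of the Rayleigh limit (the wavelength and aperture diameter are illustrative values):

```python
import math

def rayleigh_limit_rad(wavelength, diameter):
    """Minimum resolvable angular separation: sin(theta) = 1.22 * lambda / D.
    Both lengths must be in the same unit (metres here)."""
    return math.asin(1.22 * wavelength / diameter)

# Green light (550 nm) through a 100 mm aperture:
theta = rayleigh_limit_rad(550e-9, 0.1)
print(theta)  # ~6.7e-6 rad; doubling the aperture halves this limit
```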

Dispersion and scattering Main articles: Dispersion (optics) and Scattering Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering which occurs when electromagnetic waves are deflected by single particles. In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independent of the frequency, in contrast to Compton scattering which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering while the similar process for scattering by particles that are similar in size to or larger than the wavelength is known as Mie scattering with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering, wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material.[61] Dispersion occurs when different frequencies of light have different phase velocities, due either to material properties (material dispersion) or to the geometry of an optical waveguide (waveguide dispersion). The most familiar form of

Conceptual animation of light dispersion through a prism. High frequency (blue) light is deflected the most, and low frequency (red) the least. dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called “normal dispersion”. It occurs in all dielectric materials, in wavelength ranges where the material does not absorb light.[62] In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called “anomalous dispersion”.[42][62] The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell’s law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin (θ) / n). Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern.[42]

Dispersion: two sinusoids propagating at different speeds make a moving interference pattern. The red dot moves with the phase velocity, and the green dots propagate with the group velocity. In this case, the phase velocity is twice the group velocity. The red dot overtakes two green dots, when moving from the left to the right of the figure. In effect, the individual waves (which travel with the phase velocity) escape from the wave packet (which travels with the group velocity).

Material dispersion is often characterised by the Abbe number, which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant.[44] Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. “Group velocity dispersion” manifests as a spreading-out of the signal “envelope” of the radiation and can be quantified with a group dispersion delay parameter:

D = −(1/vg²) (dvg/dλ)

where vg is the group velocity.[63] For a uniform medium, the group velocity is

vg = c / (n − λ dn/dλ)

where n is the index of refraction and c is the speed of light in a vacuum.[64] This gives a simpler form for the dispersion delay parameter:

D = −(λ/c) (d²n/dλ²)

If D is less than zero, the medium is said to have positive dispersion or normal dispersion. If D is greater than zero, the medium has negative dispersion. If a light pulse is propagated through a normally dispersive medium, the result is that the higher frequency components slow down more than the lower frequency components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high frequency components travel faster than the lower ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time.[65] The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fibres, since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal.[63]
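The delay parameter can be evaluated numerically. In the sketch below the Cauchy model n(λ) = A + B/λ² and its coefficients are assumed, illustrative values (roughly those of a crown glass), not taken from the text; the second derivative is estimated with a central difference:

```python
# Evaluate D = -(lambda/c) d2n/dlambda2 for an illustrative Cauchy model.
c = 299792458.0            # speed of light in vacuum, m/s
A, B = 1.5046, 4.2e-15     # assumed Cauchy coefficients; B in m^2

def n(lam):
    """Index of refraction at wavelength lam (metres)."""
    return A + B / lam**2

def D(lam, h=1e-11):
    """Dispersion delay parameter via a central-difference second derivative."""
    d2n = (n(lam + h) - 2.0 * n(lam) + n(lam - h)) / h**2
    return -(lam / c) * d2n

# In the visible, d2n/dlambda2 > 0 for this model, so D < 0: normal dispersion.
print(D(550e-9) < 0)  # True
```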

Polarization Main article: Polarization (waves)

Polarization is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave’s direction of travel. The oscillations may be oriented in a single direction (linear polarization), or the oscillation direction may rotate as the wave travels (circular or elliptical polarization). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave’s chirality.[66] The typical way to consider polarization is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). The shape traced out in the x-y plane by the electric field vector is a Lissajous figure that describes the polarization state.[44] The following figures show some examples of the evolution of the electric field vector (blue), with time (the vertical axes), at a particular point in space, along with its x and y components (red/left and green/right), and the path traced by the vector in the plane (purple): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation.

Linear

Circular

Elliptical polarization

In the leftmost figure above, the x and y components of the light wave are in phase. In this case, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarization. The direction of this line depends on the relative amplitudes of the two components.[66] In the middle figure, the two orthogonal components have the same amplitudes and are 90° out of phase. In this case, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be 90° ahead of the y component or it can be 90° behind the y component. In this special case, the electric vector traces out a circle in the plane, so this polarization is called circular polarization. The rotation direction in the circle depends on which of the two phase relationships exists and corresponds to right-hand circular polarization and left-hand circular polarization.[44] In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarization is called elliptical polarization because the electric vector traces out an ellipse in the plane (the polarization ellipse). This is shown in the above figure on the right. Detailed mathematics of polarization is done using Jones calculus and is characterised by the Stokes parameters.[44]
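The x–y decomposition just described is exactly what a Jones vector encodes. The NumPy sketch below traces the field components over one cycle for the in-phase (linear) and 90°-out-of-phase (circular) cases; the e^(−iωt) time convention and the specific vectors are assumptions chosen to match the figures:

```python
import numpy as np

# Jones vectors: equal x and y components in phase (linear at 45 degrees)
# and equal components 90 degrees out of phase (circular).
linear_45 = np.array([1.0, 1.0]) / np.sqrt(2)
circular = np.array([1.0, 1.0j]) / np.sqrt(2)

t = np.linspace(0.0, 2.0 * np.pi, 200)
# Physical field components are the real part of J * exp(-i w t):
Ex_lin, Ey_lin = np.real(np.outer(linear_45, np.exp(-1j * t)))
Ex_cir, Ey_cir = np.real(np.outer(circular, np.exp(-1j * t)))

print(np.allclose(Ex_lin, Ey_lin))              # True: tip traces a line
print(np.allclose(Ex_cir**2 + Ey_cir**2, 0.5))  # True: tip traces a circle
```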

Changing polarization Media that have different indexes of refraction for different polarization modes are called birefringent.[66] Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes).[44] If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colours and rainbow-like effects. In mineralogy, such properties, known as pleochroism, are frequently exploited for the purpose of identifying minerals using polarization microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress, a phenomenon which is the basis of photoelasticity.[66] Non-birefringent methods to rotate the linear polarization of light beams include the use of prismatic polarization rotators which use total internal reflection in a prism set designed for efficient collinear transmission.[67] Media that reduce the amplitude of certain polarization modes are called dichroic, with devices that block nearly all of the radiation in one mode known as polarizing filters or simply "polarisers". Malus’ law, which is named after Étienne-Louis Malus, says that when a perfect polariser is placed in a linearly polarised beam of light, the intensity, I, of the light that passes through is given by

I = I0 cos² θi, where

I0 is the initial intensity, and θi is the angle between the light’s initial polarization direction and the axis of the polariser.[66]

A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarizations at all possible angles. Since the average value of cos² θ is 1/2, the transmission coefficient becomes

I/I0 = 1/2

In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types.[44] In addition to birefringence and dichroism in extended media, polarization effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster’s angle.[44] When light reflects from a thin film on a surface, interference between the reflections from the film’s surfaces can produce polarization in the reflected and transmitted light.
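Both Malus' law and the 1/2 average for unpolarised light can be checked numerically; the Monte Carlo average below is an illustration of the cos² average, not the analytic proof:

```python
import math
import random

def malus(I0, theta_deg):
    """Transmitted intensity through an ideal polariser: I = I0 cos^2(theta)."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2

print(malus(1.0, 0.0))   # 1.0: aligned polariser passes everything
print(malus(1.0, 90.0))  # ~0: crossed polarisers block the beam

# Unpolarised light as a uniform mixture of polarization angles:
# the average transmission tends to 1/2.
avg = sum(malus(1.0, random.uniform(0, 180)) for _ in range(100_000)) / 100_000
print(round(avg, 2))  # ~0.5
```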

Natural light

Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated, in which case the light is said to be unpolarised. If there is partial correlation between the emitters, the light is partially polarised. If the polarization is consistent across the spectrum of the source, partially polarised light can be described as a superposition

of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse.[44]

Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light.

Polarization occurs when light is scattered in the atmosphere. The scattered light produces the brightness and colour in clear skies. This partial polarization of scattered light can be taken advantage of using polarizing filters to darken the sky in photographs.

Optical polarization is principally of importance in chemistry due to circular dichroism and optical rotation ("circular birefringence") exhibited by optically active (chiral) molecules.[44]
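The angle of incidence at which reflected light is completely polarised (Brewster’s angle, recovered by the Fresnel equations) satisfies tan θB = n2/n1. A small numerical sketch, with illustrative refractive indices:

```python
import math

def brewster_angle_deg(n1, n2):
    """Incidence angle (degrees) at which reflected light is fully
    polarised, for light in medium 1 reflecting off medium 2:
    tan(theta_B) = n2 / n1."""
    return math.degrees(math.atan2(n2, n1))

print(round(brewster_angle_deg(1.00, 1.50), 1))  # air -> glass: 56.3
print(round(brewster_angle_deg(1.00, 1.33), 1))  # air -> water: 53.1
```

At this angle, reflected glare is entirely polarised parallel to the surface, which is why polarising sunglasses suppress reflections from water and glass.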

3.1.3 Modern optics

Main articles: Optical physics and Optical engineering

Modern optics encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics, deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons, respond to individual photons. Electronic image sensors, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells, too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics.[68]

Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials. Other research focuses on the phenomenology of electromagnetic waves as in singular optics, non-imaging optics, non-linear optics, statistical optics, and radiometry. Additionally, computer engineers have taken an interest in integrated optics, machine vision, and photonic computing as possible components of the “next generation” of computers.[69]

Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering. Prominent subfields of optical engineering include illumination engineering, photonics, and optoelectronics with practical applications like lens design, fabrication and testing of optical components, and image processing. Some of these fields overlap, with nebulous boundaries between the subjects, and with terms that mean slightly different things in different parts of the world and in different areas of industry.
A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology.[70]

Lasers

Main article: Laser

A laser is a device that emits light (electromagnetic radiation) through a process called stimulated emission. The term laser is an acronym for Light Amplification by Stimulated Emission of Radiation.[71] Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the maser, was developed first, devices that emit microwave and radio frequencies are usually called masers.[72]

The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories.[74] When first invented, they were called “a solution looking for a problem”.[75] Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974.[76] The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers’ homes, beginning in 1982.[77] These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers. Lasers are used in medicine in areas such as bloodless surgery, laser eye surgery, and laser capture microdissection and in military applications such as missile defence systems, electro-optical countermeasures (EOCM), and lidar. Lasers are also used in holograms, bubblegrams, laser light shows, and laser hair removal.[78]

Kapitsa–Dirac effect

The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers).

3.1.4 Applications

Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses. Many people benefit from eyeglasses or contact lenses, and optics are integral to the functioning of many consumer goods including cameras. Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony.

Human eye

Main articles: Human eye and Photometry (optics)

The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye’s optical power. The light then continues through the fluid just behind the cornea—the anterior chamber, then passes through the pupil. The light then passes through the lens, which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour, and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot. There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light.[79]

Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision. Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision.[80]

In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision. Cone cells are highly concentrated in the fovea and have a high visual acuity, meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells.
Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells.[80]

Ciliary muscles around the lens allow the eye’s focus to be adjusted. This process is known as accommodation. The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point’s location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists, ophthalmologists, and opticians usually consider an appropriate near point to be closer than normal reading distance, approximately 25 cm.[79]

Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia. Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images.[79]

All of these conditions can be corrected using corrective lenses. For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity.
Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea.[81] The optical power of corrective lenses is measured in diopters, a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism.[81]
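The diopter arithmetic can be illustrated with a short sketch. The sign convention (negative focal length for a diverging lens) follows the text; the helper names are ours:

```python
def power_diopters(focal_length_m):
    """Optical power in diopters: reciprocal of focal length in metres."""
    return 1.0 / focal_length_m

def myopia_correction(far_point_m):
    """Diverging lens that images distant objects at the wearer's far
    point, effectively sending the far point to infinity: the focal
    length equals minus the far-point distance."""
    return power_diopters(-far_point_m)

print(power_diopters(0.25))    # 4.0 D converging lens (f = 25 cm)
print(myopia_correction(0.5))  # -2.0 D lens for a far point of 50 cm
```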

Visual effects

Main articles: Optical illusions and Perspective (graphical)

For the visual effects used in film, video, and computer graphics, see visual effects.

Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences.[82]

Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room, Hering, Müller-Lyer, Orbison, Ponzo, Sander, and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective.[83] This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith.[84] This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics.[8]

Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall, Ehrenstein, Fraser spiral, Poggendorff, and Zöllner illusions. Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures.
For example, transparent tissues with a grid structure produce shapes known as moiré patterns, while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns.[85]

Optical instruments

Main article: Optical instruments

Single lenses have a variety of applications including photographic lenses, corrective lenses, and magnifying glasses while single mirrors are used in parabolic reflectors and rear-view mirrors. Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope, which were both invented by the Dutch in the late 16th century.[86]

Microscopes were first developed with just two lenses: an objective lens and an eyepiece. The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as compound microscopes, have many lenses in them (typically four) to optimize the functionality and enhance image stability.[86] A slightly different variety of microscope, the comparison microscope, looks at side-by-side images to produce a stereoscopic binocular view that appears three dimensional when used by humans.[87]

The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. The main goal of a telescope is not necessarily magnification, but rather collection of light which is determined by the physical size of the objective lens.
Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification.[86]

Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are reflecting telescopes, that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point.[86]
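The magnification relation for telescopes, and its dependence on eyepiece choice, can be sketched in a few lines (the focal lengths are illustrative values):

```python
def telescope_magnification(f_objective_mm, f_eyepiece_mm):
    """Angular magnification = objective focal length / eyepiece focal length."""
    return f_objective_mm / f_eyepiece_mm

# A 1200 mm objective: switching eyepieces changes the magnification,
# while the light-gathering power (set by the objective) stays fixed.
print(telescope_magnification(1200, 25))  # 48.0
print(telescope_magnification(1200, 10))  # 120.0
```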

Photography

Main article: Science of photography

The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate, film, or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot which is summarized by the relation

Exposure ∝ ApertureArea × ExposureTime × SceneLuminance[88]

In other words, the smaller the aperture (giving greater depth of field), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule which gives a rough estimate for the settings needed to estimate the proper exposure in daylight.[89] A camera’s aperture is measured by a unitless number called the f-number or f-stop, f/#, often notated as N , and given by

f/# = N = f/D

where f is the focal length, and D is the diameter of the entrance pupil. By convention, “f/#" is treated as a single symbol, and specific values of f/# are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens, this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera, which is able to focus all images perfectly, regardless of distance, but requires very long exposure times.[90]

The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship of the focal length of the lens to the diagonal size of the film or sensor of the camera:[91]

• Normal lens: angle of view of about 50° (called normal because this angle is considered roughly equivalent to human vision[91]) and a focal length approximately equal to the diagonal of the film or sensor.[92]

• Wide-angle lens: angle of view wider than 60° and focal length shorter than a normal lens.[93]

• Long focus lens: angle of view narrower than a normal lens. This is any lens with a focal length longer than the diagonal measure of the film or sensor.[94] The most common type of long focus lens is the telephoto lens, a design that uses a special telephoto group to be physically shorter than its focal length.[95]
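The f-number relation and the focal-length classifications above can be sketched in a few lines. The 0.8 threshold used here to separate "normal" from "wide-angle" is an illustrative assumption, not a standard:

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """f/# = N = f / D."""
    return focal_length_mm / pupil_diameter_mm

def classify_lens(focal_length_mm, sensor_diagonal_mm):
    """Rough classification from the focal-length / diagonal relation.
    The 0.8 cutoff for 'noticeably shorter' is an illustrative choice."""
    if focal_length_mm > sensor_diagonal_mm:
        return "long focus"
    if focal_length_mm < 0.8 * sensor_diagonal_mm:
        return "wide-angle"
    return "normal"

# Full-frame 35 mm film has a diagonal of about 43.3 mm.
print(f_number(50, 25))         # 2.0, i.e. an f/2 aperture
print(classify_lens(50, 43.3))  # long focus (by the diagonal criterion)
print(classify_lens(43, 43.3))  # normal
print(classify_lens(24, 43.3))  # wide-angle
```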

Modern zoom lenses may have some or all of these attributes. The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed, or, for digital media, by the quantum efficiency).[96] Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the light sensitivity of both film and digital cameras.[97] Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion.[98]
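The Rayleigh criterion puts the diffraction-limited angular resolution at roughly 1.22 λ/D for a circular aperture. A quick numerical sketch, with values chosen for illustration:

```python
def rayleigh_limit_rad(wavelength_m, aperture_diameter_m):
    """Smallest resolvable angular separation (radians) for a circular
    aperture, by the Rayleigh criterion: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_diameter_m

# Green light (550 nm) through a 6.25 mm pupil (a 50 mm lens stopped to f/8).
theta = rayleigh_limit_rad(550e-9, 6.25e-3)
print(f"{theta * 1e6:.1f} microradians")  # 107.4 microradians
```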

Atmospheric optics

Main article: Atmospheric optics

The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena. The blue colour of the sky is a direct result of Rayleigh scattering which redirects higher frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset. Additional particulate matter in the sky can scatter different colours at different angles creating colourful glowing skies at dusk and dawn. Scattering off ice crystals and other particles in the atmosphere is responsible for halos, afterglows, coronas, rays of sunlight, and sun dogs. The variation in these kinds of phenomena is due to different particle sizes and geometries.[99]

Mirages are optical phenomena in which light rays are bent due to thermal variations in the refraction index of air, producing displaced or heavily distorted images of distant objects. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like “fairy tale castles”.[100] Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42° with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54° with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon.[66]

3.1.5 See also

• Ion optics

• Important publications in optics

• List of optical topics

3.1.6 References

[1] McGraw-Hill Encyclopedia of Science and Technology (5th ed.). McGraw-Hill. 1993.

[2] “World’s oldest telescope?". BBC News. July 1, 1999. Retrieved Jan 3, 2010.

[3] T. F. Hoad (1996). The Concise Oxford Dictionary of English Etymology. ISBN 0-19-283098-8.

[4] A History Of The Eye. stanford.edu. Retrieved 2012-06-10.

[5] T. L. Heath (2003). A Manual of Greek Mathematics. Courier Dover Publications. pp. 181–182. ISBN 0-486-43231-9.

[6] William R. Uttal (1983). Visual Form Detection in 3-Dimensional Space. Psychology Press. pp. 25–. ISBN 978-0-89859-289-4.

[7] Euclid (1999). Elaheh Kheirandish, ed. The Arabic version of Euclid’s optics = Kitāb Uqlīdis fī ikhtilāf al-manāẓir. New York: Springer. ISBN 0-387-98523-9.

[8] Ptolemy (1996). A. Mark Smith, ed. Ptolemy’s theory of visual perception: an English translation of the Optics with introduction and commentary. DIANE Publishing. ISBN 0-87169-862-5.

[9] Verma, RL (1969), Al-Hazen: father of modern optics

[10] Adamson, Peter (2006). “Al-Kindī and the reception of Greek philosophy”. In Adamson, Peter; Taylor, R. The Cambridge Companion to Arabic Philosophy. Cambridge University Press. p. 45. ISBN 978-0-521-52069-0.

[11] Rashed, Roshdi (1990). “A pioneer in anaclastics: Ibn Sahl on burning mirrors and lenses”. Isis. 81 (3): 464–491. doi:10.1086/355456. JSTOR 233423.

[12] Hogendijk, Jan P.; Sabra, Abdelhamid I., eds. (2003). The Enterprise of Science in Islam: New Perspectives. MIT Press. pp. 85–118. ISBN 0-262-19482-1. OCLC 50252039.

[13] G. Hatfield (1996). “Was the Scientific Revolution Really a Revolution in Science?". In F. J. Ragep; P. Sally; S. J. Livesey. Tradition, Transmission, Transformation: Proceedings of Two Conferences on Pre-modern Science held at the University of Oklahoma. Brill Publishers. p. 500. ISBN 90-04-10119-5.

[14] Nader El-Bizri (2005). “A Philosophical Perspective on Alhazen’s Optics”. Arabic Sciences and Philosophy. 15 (2): 189– 218. doi:10.1017/S0957423905000172.

[15] Nader El-Bizri (2007). “In Defence of the Sovereignty of Philosophy: al-Baghdadi’s Critique of Ibn al-Haytham’s Geometrisation of Place”. Arabic Sciences and Philosophy. 17: 57–80. doi:10.1017/S0957423907000367.

[16] G. Simon (2006). “The Gaze in Ibn al-Haytham”. The Medieval History Journal. 9: 89. doi:10.1177/097194580500900105.

[17] Ian P. Howard; Brian J. Rogers (1995). Binocular Vision and Stereopsis. Oxford University Press. p. 7. ISBN 978-0-19-508476-4.

[18] Elena Agazzi; Enrico Giannetto; Franco Giudice (2010). Representing Light Across Arts and Sciences: Theories and Prac- tices. V&R unipress GmbH. p. 42. ISBN 978-3-89971-735-8.

[19] El-Bizri, Nader (2010). “Classical Optics and the Perspectiva Traditions Leading to the Renaissance”. In Hendrix, John Shannon; Carman, Charles H. Renaissance Theories of Vision (Visual Culture in Early Modernity). Farnham, Surrey: Ashgate. pp. 11–30. ISBN 1-409400-24-7.; El-Bizri, Nader (2014). “Seeing Reality in Perspective: 'The Art of Optics’ and the 'Science of Painting'". In Lupacchini, Rossella; Angelini, Annarita. The Art of Science: From Perspective Drawing to Quantum Randomness. Dordrecht: Springer. pp. 25–47.

[20] D. C. Lindberg, Theories of Vision from al-Kindi to Kepler, (Chicago: Univ. of Chicago Pr., 1976), pp. 94–99.

[21] Vincent, Ilardi (2007). Renaissance Vision from Spectacles to Telescopes. Philadelphia, PA: American Philosophical Society. pp. 4–5. ISBN 978-0-87169-259-7.

[22] The Galileo Project > Science > The Telescope, by Al Van Helden. Galileo.rice.edu. Retrieved 2012-06-10.

[23] Henry C. King (2003). The History of the Telescope. Courier Dover Publications. p. 27. ISBN 978-0-486-43265-6.

[24] Paul S. Agutter; Denys N. Wheatley (2008). Thinking about Life: The History and Philosophy of Biology and Other Sciences. Springer. p. 17. ISBN 978-1-4020-8865-0.

[25] Ilardi, Vincent (2007). Renaissance Vision from Spectacles to Telescopes. American Philosophical Society. p. 210. ISBN 978-0-87169-259-7.

[26] Microscopes: Time Line, Nobel Foundation. Retrieved April 3, 2009

[27] Watson, Fred (2007). Stargazer: The Life and Times of the Telescope. Allen & Unwin. p. 55. ISBN 978-1-74175-383-7.

[28] Ilardi, Vincent (2007). Renaissance Vision from Spectacles to Telescopes. American Philosophical Society. p. 244. ISBN 978-0-87169-259-7.

[29] Caspar, Kepler, pp. 198–202, Courier Dover Publications, 1993, ISBN 0-486-67605-6.

[30] A. I. Sabra (1981). Theories of light, from Descartes to Newton. CUP Archive. ISBN 0-521-28436-8.

[31] W. F. Magie (1935). A Source Book in Physics. Harvard University Press. p. 309.

[32] J. C. Maxwell (1865). "A Dynamical Theory of the Electromagnetic Field". Philosophical Transactions of the Royal Society of London. 155: 459. Bibcode:1865RSPT..155..459C. doi:10.1098/rstl.1865.0008.

[33] For a solid approach to the complexity of Planck’s intellectual motivations for the quantum, for his reluctant acceptance of its implications, see H. Kragh, Max Planck: the reluctant revolutionary, Physics World. December 2000.

[34] Einstein, A. (1967). “On a heuristic viewpoint concerning the production and transformation of light”. In Ter Haar, D. The Old Quantum Theory (PDF). Pergamon. pp. 91–107. Retrieved March 18, 2010. The chapter is an English translation of Einstein’s 1905 paper on the photoelectric effect.

[35] Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt” [On a heuristic viewpoint concerning the production and transformation of light]. Annalen der Physik (in German). 322 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.

[36] “On the Constitution of Atoms and Molecules”. Philosophical Magazine. 26, Series 6: 1–25. 1913. Archived from the original on July 4, 2007.. The landmark paper laying the Bohr model of the atom and molecular bonding.

[37] R. Feynman (1985). “Chapter 1”. QED: The Strange Theory of Light and Matter. Princeton University Press. p. 6. ISBN 0-691-08388-6.

[38] N. Taylor (2000). LASER: The inventor, the Nobel laureate, and the thirty-year patent war. New York: Simon & Schuster. ISBN 0-684-83515-0.

[39] Ariel Lipson; Stephen G. Lipson; Henry Lipson (28 October 2010). Optical Physics. Cambridge University Press. p. 48. ISBN 978-0-521-49345-1. Retrieved 12 July 2012.

[40] Arthur Schuster (1904). An Introduction to the Theory of Optics. E. Arnold. p. 41.

[41] J. E. Greivenkamp (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. pp. 19–20. ISBN 0-8194-5294-7.

[42] H. D. Young (1992). “35”. University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5.

[43] Marchand, E. W. (1978). Gradient Index Optics. New York: Academic Press.

[44] E. Hecht (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Chapters 5 & 6.

[45] MV Klein & TE Furtak, 1986, Optics, John Wiley & Sons, New York ISBN 0-471-87297-0.

[46] Maxwell, James Clerk (1865). “A dynamical theory of the electromagnetic field” (PDF). Philosophical Transactions of the Royal Society of London. 155: 499. doi:10.1098/rstl.1865.0008. This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society. See also A dynamical theory of the electromagnetic field.

[47] M. Born and E. Wolf (1999). Principles of Optics. Cambridge: Cambridge University Press. ISBN 0-521-64222-1.

[48] J. Goodman (2005). Introduction to Fourier Optics (3rd ed.). Roberts & Co Publishers. ISBN 0-9747077-2-4.

[49] A. E. Siegman (1986). Lasers. University Science Books. ISBN 0-935702-11-3. Chapter 16.

[50] H. D. Young (1992). University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5. Chapter 37.

[51] P. Hariharan (2003). Optical Interferometry (PDF) (2nd ed.). San Diego, USA: Academic Press. ISBN 0-12-325220-2.

[52] E. R. Hoover (1977). Cradle of Greatness: National and World Achievements of Ohio’s Western Reserve. Cleveland: Shaker Savings Association.

[53] J. L. Aubert (1760). Memoires pour l'histoire des sciences et des beaux arts. Paris: Impr. de S. A. S.; Chez E. Ganeau. p. 149.

[54] D. Brewster (1831). A Treatise on Optics. London: Longman, Rees, Orme, Brown & Green and John Taylor. p. 95.

[55] R. Hooke (1665). Micrographia: or, Some physiological descriptions of minute bodies made by magnifying glasses. London: J. Martyn and J. Allestry. ISBN 0-486-49564-7.

[56] H. W. Turnbull (1940–1941). “Early Scottish Relations with the Royal Society: I. James Gregory, F.R.S. (1638–1675)". Notes and Records of the Royal Society of London. 3: 22. doi:10.1098/rsnr.1940.0003. JSTOR 531136.

[57] T. Rothman (2003). Everything’s Relative and Other Fables in Science and Technology. New Jersey: Wiley. ISBN 0-471-20257-6.

[58] H. D. Young (1992). University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5. Chapter 38.

[59] R. S. Longhurst (1968). Geometrical and Physical Optics, 2nd Edition. London: Longmans.

[60] Lucky Exposures: Diffraction limited astronomical imaging through the atmosphere by Robert Nigel Tubbs

[61] C. F. Bohren & D. R. Huffman (1983). Absorption and Scattering of Light by Small Particles. Wiley. ISBN 0-471-29340-7.

[62] J. D. Jackson (1975). Classical Electrodynamics (2nd ed.). Wiley. p. 286. ISBN 0-471-43132-X.

[63] R. Ramaswami; K. N. Sivarajan (1998). Optical Networks: A Practical Perspective. London: Academic Press. ISBN 0-12-374092-4.

[64] Brillouin, Léon. Wave Propagation and Group Velocity. Academic Press Inc., New York (1960)

[65] M. Born & E. Wolf (1999). Principles of Optics. Cambridge: Cambridge University Press. pp. 14–24. ISBN 0-521-64222-1.

[66] H. D. Young (1992). University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5. Chapter 34.

[67] F. J. Duarte (2015). Tunable Laser Optics (2nd ed.). New York: CRC. pp. 117–120. ISBN 978-1-4822-4529-5.

[68] D. F. Walls and G. J. Milburn Quantum Optics (Springer 1994)

[69] Alastair D. McAulay (16 January 1991). Optical computer architectures: the application of optical concepts to next generation computers. Wiley. ISBN 978-0-471-63242-9. Retrieved 12 July 2012.

[70] Y. R. Shen (1984). The principles of nonlinear optics. New York, Wiley-Interscience. ISBN 0-471-88998-9.

[71] “laser”. Reference.com. Retrieved 2008-05-15.

[72] Charles H. Townes – Nobel Lecture. nobelprize.org

[73] “The VLT’s Artificial Star”. ESO Picture of the Week. Retrieved 25 June 2014.

[74] C. H. Townes. “The first laser”. University of Chicago. Retrieved 2008-05-15.

[75] C. H. Townes (2003). “The first laser”. In Laura Garwin; Tim Lincoln. A Century of Nature: Twenty-One Discoveries that Changed Science and the World. University of Chicago Press. pp. 107–12. ISBN 0-226-28413-1.

[76] What is a bar code? denso-wave.com

[77] “How the CD was developed”. BBC News. 2007-08-17. Retrieved 2007-08-17.

[78] J. Wilson & J.F.B. Hawkes (1987). Lasers: Principles and Applications, Prentice Hall International Series in Optoelectronics. Prentice Hall. ISBN 0-13-523697-5.

[79] D. Atchison & G. Smith (2000). Optics of the Human Eye. Elsevier. ISBN 0-7506-3775-7.

[80] E. R. Kandel; J. H. Schwartz; T. M. Jessell (2000). Principles of Neural Science (4th ed.). New York: McGraw-Hill. pp. 507–513. ISBN 0-8385-7701-6.

[81] D. Meister. “Ophthalmic Lens Design”. OptiCampus.com. Retrieved November 12, 2008.

[82] J. Bryner (2008-06-02). “Key to All Optical Illusions Discovered”. LiveScience.com.

[83] Geometry of the Vanishing Point at Convergence

[84] “The Moon Illusion Explained”, Don McCready, University of Wisconsin-Whitewater

[85] A. K. Jain; M. Figueiredo; J. Zerubia (2001). Energy Minimization Methods in Computer Vision and Pattern Recognition. Springer. ISBN 978-3-540-42523-6.

[86] H. D. Young (1992). “36”. University Physics 8e. Addison-Wesley. ISBN 0-201-52981-5.

[87] P. E. Nothnagle; W. Chambers; M. W. Davidson. “Introduction to Stereomicroscopy”. Nikon MicroscopyU.

[88] Samuel Edward Sheppard & Charles Edward Kenneth Mees (1907). Investigations on the Theory of the Photographic Process. Longmans, Green and Co. p. 214.

[89] B. J. Suess (2003). Mastering Black-and-White Photography. Allworth Communications. ISBN 1-58115-306-6.

[90] M. J. Langford (2000). Basic Photography. Focal Press. ISBN 0-240-51592-7.

[91] Warren, Bruce (2001). Photography. Cengage Learning. p. 71. ISBN 978-0-7668-1777-7.

[92] Leslie D. Stroebel (1999). View Camera Technique. Focal Press. ISBN 0-240-80345-0.

[93] S. Simmons (1992). Using the View Camera. Amphoto Books. p. 35. ISBN 0-8174-6353-4.

[94] Sidney F. Ray (2002). Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging. Focal Press. p. 294. ISBN 978-0-240-51540-3.

[95] New York Times Staff (2004). The New York Times Guide to Essential Knowledge. Macmillan. ISBN 978-0-312-31367-8.

[96] R. R. Carlton; A. McKenna Adler (2000). Principles of Radiographic Imaging: An Art and a Science. Thomson Delmar Learning. ISBN 0-7668-1300-2.

[97] W. Crawford (1979). The Keepers of Light: A History and Working Guide to Early Photographic Processes. Dobbs Ferry, New York: Morgan & Morgan. p. 20. ISBN 0-87100-158-6.

[98] J. M. Cowley (1975). Diffraction physics. Amsterdam: North-Holland. ISBN 0-444-10791-6.

[99] C. D. Ahrens (1994). Meteorology Today: an introduction to weather, climate, and the environment (5th ed.). West Publishing Company. pp. 88–89. ISBN 0-314-02779-3.

[100] A. Young. “An Introduction to Mirages”.

Further reading

• Born, Max; Wolf, Emil (2002). Principles of Optics. Cambridge University Press. ISBN 1-139-64340-1.

• Hecht, Eugene (2002). Optics (4th ed.). Addison-Wesley Longman. ISBN 0-8053-8566-5.

• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th, illustrated ed.). Belmont, CA: Thomson-Brooks/Cole. ISBN 0-534-40842-7.

• Tipler, Paul A.; Mosca, Gene (2004). Physics for Scientists and Engineers: Electricity, Magnetism, Light, and Elementary Modern Physics. Vol. 2. W. H. Freeman. ISBN 978-0-7167-0810-0.

• Lipson, Stephen G.; Lipson, Henry; Tannhauser, David Stefan (1995). Optical Physics. Cambridge University Press. ISBN 0-521-43631-1.

• Fowles, Grant R. (1975). Introduction to Modern Optics. Courier Dover Publications. ISBN 0-486-65957-7.

3.1.7 External links

Relevant discussions

• Optics on In Our Time at the BBC.

Textbooks and tutorials

• Optics – an open-source optics textbook

• Optics2001 – Optics library and community

• Fundamental Optics – Melles Griot Technical Guide

• Physics of Light and Optics – Brigham Young University Undergraduate Book

Wikibooks modules

Further reading

• Optics and photonics: Physics enhancing our lives by Institute of Physics publications

Societies

Linear polarization diagram

Circular polarization diagram

Elliptical polarization diagram


A polariser changing the orientation of linearly polarised light. In this picture, θ1 – θ0 = θᵢ.

The effects of a polarising filter on the sky in a photograph. The left picture is taken without the polariser. For the right picture, the filter was adjusted to eliminate certain polarizations of the scattered blue light from the sky.

Experiments such as this one with high-power lasers are part of modern optics research.

VLT’s laser guide star.[73]

Model of a human eye. Features mentioned in this article are 3. ciliary muscle, 6. pupil, 8. cornea, 10. lens cortex, 22. optic nerve, 26. fovea, 30. retina

The Ponzo Illusion relies on the fact that parallel lines appear to converge as they approach infinity.

Illustrations of various optical instruments from the 1728 Cyclopaedia

Photograph taken with aperture f/32

Photograph taken with aperture f/5

A colourful sky is often due to scattering of light off particulates and pollution, as in this photograph of a sunset during the October 2007 California wildfires.

3.2 Optical instrument

An optical instrument either processes light waves to enhance an image for viewing, or analyzes light waves (or photons) to determine one of a number of characteristic properties.

3.2.1 Image enhancement

Further information: Viewing instrument

The first optical instruments were telescopes used for magnification of distant images, and microscopes used for magnifying very tiny images. Since the days of Galileo and Van Leeuwenhoek, these instruments have been greatly improved and extended into other portions of the electromagnetic spectrum. The binocular device is a generally compact instrument for both eyes designed for mobile use. A camera could be considered a type of optical instrument, with the pinhole camera and camera obscura being very simple examples of such devices.

3.2.2 Analysis

Another class of optical instrument is used to analyze the properties of light or optical materials. They include:

• Interferometer for measuring the interference properties of light waves

• Photometer for measuring light intensity

• Polarimeter for measuring dispersion or rotation of polarized light

• Reflectometer for measuring the reflectivity of a surface or object

• Refractometer for measuring refractive index of various materials, invented by Ernst Abbe

• Spectrometer or monochromator for generating or measuring a portion of the optical spectrum, for the purpose of chemical or material analysis

• Autocollimator, which is used to measure angular deflections

• Vertometer, which is used to determine the refractive power of lenses such as glasses, contact lenses and magnifier lenses

DNA sequencers can be considered optical instruments as they analyse the color and intensity of the light emitted by a fluorochrome attached to a specific nucleotide of a DNA strand. Surface plasmon resonance-based instruments use refractometry to measure and analyze biomolecular interactions.

3.2.3 Other optical devices

• Polarization controller

• Camera

• Magic lantern

3.2.4 See also

• Scientific instruments

3.2.5 References


An illustration of some of the optical devices available for laboratory work in England in 1858.

3.2.6 External links

• Giorgio Carboni. “From Lenses to Optical Instruments”. Fun Science Gallery.

Chapter 4

Day 4

4.1 Lens (optics)

For other uses, see Lens.

A lens is a transmissive optical device that focuses or disperses a light beam by means of refraction. A simple lens consists of a single piece of transparent material, while a compound lens consists of several simple lenses (elements), usually arranged along a common axis. Lenses are made from materials such as glass or plastic, and are ground and polished or moulded to a desired shape. A lens can focus light to form an image, unlike a prism, which refracts light without focusing. Devices that similarly focus or disperse waves and radiation other than visible light are also called lenses, such as microwave lenses, electron lenses, acoustic lenses, or explosive lenses.

A biconvex lens

4.1.1 History

See also: History of optics and Camera lens

The word lens comes from the Latin name of the lentil, because a double-convex lens is lentil-shaped. The genus of the lentil plant is Lens, and the most commonly eaten species is Lens culinaris. The lentil plant also gives its name to a geometric figure. The variant spelling lense is sometimes seen. While it is listed as an alternative spelling in some dictionaries, most mainstream dictionaries do not list it as acceptable.[1][2] The oldest lens artifact is the Nimrud lens, dating back 2700 years (7th century BC) to ancient Assyria.[3][4] David Brewster proposed that it may have been used as a magnifying glass, or as a burning-glass to start fires by concentrating sunlight.[3][5] Another early reference to magnification dates back to ancient Egyptian hieroglyphs in the 8th century BC, which depict “simple glass meniscal lenses”.[6] The earliest written records of lenses date to Ancient Greece, with Aristophanes' play The Clouds (424 BC) mentioning a burning-glass (a biconvex lens used to focus the sun's rays to produce fire).[7] Some scholars argue that the archeological evidence indicates that there was widespread use of lenses in antiquity, spanning several millennia.[8] Such lenses were used by artisans for fine work, and for authenticating seal impressions. The writings of Pliny the Elder (23–79) show that burning-glasses were known to the Roman Empire,[9] and mention what is arguably the earliest written reference to a corrective lens: Nero was said to watch the gladiatorial games using an emerald (presumably concave to correct for nearsightedness, though the reference is vague).[10] Both Pliny and Seneca the Younger (3 BC–65) described the magnifying effect of a glass globe filled with water.
Excavations at the Viking harbour town of Fröjel, Gotland, Sweden discovered in 1999 the rock crystal Visby lenses, produced by turning on pole lathes at Fröjel in the 11th to 12th century, with an imaging quality comparable to that of 1950s aspheric lenses. The Viking lenses were capable of concentrating enough sunlight to ignite fires.[11] Between the 11th and 13th century "reading stones" were invented. Often used by monks to assist in illuminating manuscripts, these were primitive plano-convex lenses initially made by cutting a glass sphere in half. As the stones were experimented with, it was slowly understood that shallower lenses magnified more effectively. Lenses came into widespread use in Europe with the invention of spectacles, probably in Italy in the 1280s.[12] This was the start of the optical industry of grinding and polishing lenses for spectacles, first in Venice and Florence in the thirteenth century,[13] and later in the spectacle-making centres in both the Netherlands and Germany.[14] Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses (probably without the knowledge of the rudimentary optical theory of the day).[15][16] The practical development and experimentation with lenses led to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle-making centres in the Netherlands.[17][18] With the invention of the telescope and microscope there was a great deal of experimentation with lens shapes in the 17th and early 18th centuries trying to correct chromatic errors seen in lenses. Opticians tried to construct lenses of varying forms of curvature, wrongly assuming errors arose from defects in the spherical figure of their surfaces.[19] Optical theory on refraction and experimentation was showing no single-element lens could bring all colours to a focus. 
This led to the invention of the compound achromatic lens by Chester Moore Hall in England in 1733, an invention also claimed by fellow Englishman John Dollond in a 1758 patent.

4.1.2 Construction of simple lenses

Most lenses are spherical lenses: their two surfaces are parts of the surfaces of spheres. Each surface can be convex (bulging outwards from the lens), concave (depressed into the lens), or planar (flat). The line joining the centres of the spheres making up the lens surfaces is called the axis of the lens. Typically the lens axis passes through the physical centre of the lens, because of the way they are manufactured. Lenses may be cut or ground after manufacturing to give them a different shape or size. The lens axis may then not pass through the physical centre of the lens. Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This forms an astigmatic lens. An example is eyeglass lenses that are used to correct astigmatism in someone’s eye.

More complex are aspheric lenses. These are lenses where one or both surfaces have a shape that is neither spherical nor cylindrical. The more complicated shapes allow such lenses to form images with less aberration than standard simple lenses, but they are more difficult and expensive to produce.

Types of simple lenses

Lenses are classified by the curvature of the two optical surfaces. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. If both surfaces have the same radius of curvature, the lens is equiconvex. A lens with two concave surfaces is biconcave (or just concave). If one of the surfaces is flat, the lens is plano-convex or plano-concave depending on the curvature of the other surface. A lens with one convex and one concave side is convex-concave or meniscus. It is this type of lens that is most commonly used in corrective lenses. If the lens is biconvex or plano-convex, a collimated beam of light passing through the lens converges to a spot (a focus) behind the lens. In this case, the lens is called a positive or converging lens. The distance from the lens to the spot is the focal length of the lens, which is commonly abbreviated f in diagrams and equations. If the lens is biconcave or plano-concave, a collimated beam of light passing through the lens is diverged (spread); the lens is thus called a negative or diverging lens. The beam, after passing through the lens, appears to emanate from a particular point on the axis in front of the lens. The distance from this point to the lens is also known as the focal length, though it is negative with respect to the focal length of a converging lens. Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface and is thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper convex surface and is thicker at the centre than at the periphery. An ideal thin lens with two surfaces of equal curvature would have zero optical power, meaning that it would neither converge nor diverge light. All real lenses have nonzero thickness, however, which makes a real lens with identical curved surfaces slightly positive. 
To obtain exactly zero optical power, a meniscus lens must have slightly unequal curvatures to account for the effect of the lens’ thickness.

Lensmaker’s equation

The focal length of a lens in air can be calculated from the lensmaker’s equation:[20]

1/f = (n − 1)[1/R1 − 1/R2 + (n − 1)d/(nR1R2)],

where

f is the focal length of the lens,

n is the refractive index of the lens material,

R1 is the radius of curvature (with sign, see below) of the lens surface closest to the light source,

R2 is the radius of curvature of the lens surface farthest from the light source, and

d is the thickness of the lens (the distance along the lens axis between the two surface vertices).

The focal length f is positive for converging lenses, and negative for diverging lenses. The reciprocal of the focal length, 1/f, is the optical power of the lens. If the focal length is in metres, this gives the optical power in dioptres (inverse metres). Lenses have the same focal length when light travels from the back to the front as when light goes from the front to the back. Other properties of the lens, such as the aberrations are not the same in both directions.

Sign convention for radii of curvature R1 and R2

Main article: Radius of curvature (optics)

The signs of the lens’ radii of curvature indicate whether the corresponding surfaces are convex or concave. The sign convention used to represent this varies, but in this article a positive R indicates a surface’s center of curvature is further along in the direction of the ray travel (right, in the accompanying diagrams), while negative R means that

rays reaching the surface have already passed the center of curvature. Consequently, for external lens surfaces as diagrammed above, R1 > 0 and R2 < 0 indicate convex surfaces (used to converge light in a positive lens), while R1 < 0 and R2 > 0 indicate concave surfaces. The reciprocal of the radius of curvature is called the curvature. A flat surface has zero curvature, and its radius of curvature is infinity.

Thin lens approximation

If d is small compared to R1 and R2, then the thin lens approximation can be made. For a lens in air, f is then given by

1/f ≈ (n − 1)[1/R1 − 1/R2].[21]
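As a numerical illustration of the lensmaker's equation and its thin-lens limit, here is a minimal Python sketch (the function name and the example lens values are invented for illustration, not taken from the source):

```python
def lensmaker_focal_length(n, r1, r2, d=0.0):
    """Focal length from the lensmaker's equation (same units as the radii).

    n  -- refractive index of the lens material
    r1 -- signed radius of curvature of the surface nearest the source
    r2 -- signed radius of curvature of the far surface
    d  -- centre thickness; d = 0 gives the thin-lens approximation
    """
    power = (n - 1) * (1 / r1 - 1 / r2 + (n - 1) * d / (n * r1 * r2))
    return 1 / power  # f = 1 / optical power

# Equiconvex lens (R1 = +100 mm, R2 = -100 mm) with n = 1.5:
f_thin = lensmaker_focal_length(1.5, 100.0, -100.0)        # thin-lens limit
f_thick = lensmaker_focal_length(1.5, 100.0, -100.0, 5.0)  # 5 mm thick
print(f_thin, f_thick)
```

For this lens the thin-lens limit gives f = 100 mm, while the 5 mm thickness shifts the result only slightly (to about 100.8 mm), illustrating why the approximation is usually safe when d is small compared to the radii.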

4.1.3 Imaging properties

As mentioned above, a positive or converging lens in air focuses a collimated beam travelling along the lens axis to a spot (known as the focal point) at a distance f from the lens. Conversely, a point source of light placed at the focal point is converted into a collimated beam by the lens. These two cases are examples of image formation in lenses. In the former case, an object at an infinite distance (as represented by a collimated beam of waves) is focused to an image at the focal point of the lens. In the latter, an object at the focal length distance from the lens is imaged at infinity. The plane perpendicular to the lens axis situated at a distance f from the lens is called the focal plane.

If the distances from the object to the lens and from the lens to the image are S1 and S2 respectively, for a lens of negligible thickness, in air, the distances are related by the thin lens formula:[22][23][24]

1/S1 + 1/S2 = 1/f

This can also be put into the “Newtonian” form:

x1x2 = f²,[25]

where x1 = S1 − f and x2 = S2 − f.

Therefore, if an object is placed at a distance S1 > f from a positive lens of focal length f, we will find an image distance S2 according to this formula. If a screen is placed at a distance S2 on the opposite side of the lens, an image is formed on it. This sort of image, which can be projected onto a screen or image sensor, is known as a real image.

This is the principle of the camera, and of the human eye. The focusing adjustment of a camera adjusts S2, as using an image distance different from that required by this formula produces a defocused (fuzzy) image for an object at a distance of S1 from the camera. Put another way, modifying S2 causes objects at a different S1 to come into perfect focus.
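The thin lens formula can be rearranged into a one-line solver for the image distance; the following sketch (function name and example numbers are illustrative, not from the source) shows the real-image and virtual-image cases discussed above:

```python
def image_distance(s1, f):
    """Solve 1/S1 + 1/S2 = 1/f for the image distance S2 (thin lens in air)."""
    return 1 / (1 / f - 1 / s1)

# Object 60 cm in front of a converging lens of focal length 20 cm:
print(image_distance(60.0, 20.0))   # 30.0 -> positive S2: real image
# Object inside the focal length (magnifying-glass configuration):
print(image_distance(10.0, 20.0))   # -20.0 -> negative S2: virtual image
```

The sign of the returned S2 directly encodes the distinction made in the text: positive for a real image on the far side of the lens, negative for a virtual image on the same side as the object.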

In some cases S2 is negative, indicating that the image is formed on the opposite side of the lens from where those rays are being considered. Since the diverging light rays emanating from the lens never come into focus, and those rays are not physically present at the point where they appear to form an image, this is called a virtual image. Unlike real images, a virtual image cannot be projected on a screen, but appears to an observer looking through the lens as if it were a real object at the location of that virtual image. Likewise, it appears to a subsequent lens as if it were an object at that location, so that second lens could again focus that light into a real image, S1 then being measured from the virtual image location behind the first lens to the second lens. This is exactly what the eye does when looking through a magnifying glass. The magnifying glass creates a (magnified) virtual image behind the magnifying glass, but those rays are then re-imaged by the lens of the eye to create a real image on the retina. 4.1. LENS (OPTICS) 63

A negative lens produces a demagnified virtual image.

A Barlow lens (B) reimages a virtual object (focus of red ray path) into a magnified real image (green rays at focus)

Using a positive lens of focal length f, a virtual image results when S1 < f, the lens thus being used as a magnifying glass (rather than if S1 >> f as for a camera). Using a negative lens (f < 0) with a real object (S1 > 0) can only produce a virtual image (S2 < 0), according to the above formula. It is also possible for the object distance S1 to be negative, in which case the lens sees a so-called virtual object. This happens when the lens is inserted into a converging beam (being focused by a previous lens) before the location of its real image. In that case even a negative lens can project a real image, as is done by a Barlow lens.

Real image of a lamp is projected onto a screen (inverted). Reflections of the lamp from both surfaces of the biconvex lens are visible.

A convex lens (f << S1) forming a real, inverted image rather than the upright, virtual image as seen in a magnifying glass

For a thin lens, the distances S1 and S2 are measured from the object and image to the position of the lens, as described

above. When the thickness of the lens is not much smaller than S1 and S2 or there are multiple lens elements (a compound lens), one must instead measure from the object and image to the principal planes of the lens. If distances S1 or S2 pass through a medium other than air or vacuum a more complicated analysis is required.

Magnification

The linear magnification of an imaging system using a single lens is given by

M = −S2/S1 = f/(f − S1),

where M is the magnification factor defined as the ratio of the size of an image compared to the size of the object. The sign convention here dictates that if M is negative, as it is for real images, the image is upside-down with respect to the object. For virtual images M is positive, so the image is upright. Linear magnification M is not always the most useful measure of magnifying power. For instance, when characterizing a visual telescope or binoculars that produce only a virtual image, one would be more concerned with the angular magnification, which expresses how much larger a distant object appears through the telescope compared to the naked eye. In the case of a camera one would quote the plate scale, which compares the apparent (angular) size of a distant object to the size of the real image produced at the focus. The plate scale is the reciprocal of the focal length of the camera lens; lenses are categorized as long-focus lenses or wide-angle lenses according to their focal lengths. Using an inappropriate measurement of magnification can be formally correct but yield a meaningless number. For instance, using a magnifying glass of 5 cm focal length, held 20 cm from the eye and 5 cm from the object, produces a virtual image at infinity of infinite linear size: M = ∞. But the angular magnification is 5, meaning that the object appears 5 times larger to the eye than without the lens. When taking a picture of the moon using a camera with a 50 mm lens, one is not concerned with the linear magnification M ≈ −50 mm / 380000 km = −1.3×10⁻¹⁰. Rather, the plate scale of the camera is about 1°/mm, from which one can conclude that the 0.5 mm image on the film corresponds to an angular size of the moon seen from earth of about 0.5°.

In the extreme case where an object is an infinite distance away, S1 = ∞, S2 = f and M = −f/∞= 0, indicating that the object would be imaged to a single point in the focal plane. In fact, the diameter of the projected spot is not actually zero, since diffraction places a lower limit on the size of the point spread function. This is called the diffraction limit.
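The sign behaviour of the linear magnification M = f/(f − S1) can be checked with a short sketch (function name and values are illustrative, not from the source):

```python
def magnification(s1, f):
    """Linear magnification M = f / (f - S1) for a thin lens in air."""
    return f / (f - s1)

# Magnifying glass: object inside the focal length of a positive lens.
print(magnification(5.0, 10.0))    # 2.0 -> positive M: upright, enlarged virtual image
# Camera-like use: object well beyond the focal length.
print(magnification(60.0, 20.0))   # -0.5 -> negative M: inverted, reduced real image
```

The two cases reproduce the sign convention stated above: positive M for the upright virtual image of a magnifying glass, negative M for the inverted real image formed when S1 > f.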

4.1.4 Aberrations

Main article: Optical aberration

Lenses do not form perfect images, and a lens always introduces some degree of distortion or aberration that makes the image an imperfect replica of the object. Careful design of the lens system for a particular application minimizes the aberration. Several types of aberration affect image quality, including spherical aberration, coma, and chromatic aberration.

Spherical aberration

Main article: Spherical aberration

Spherical aberration occurs because spherical surfaces are not the ideal shape for a lens, but are by far the simplest shape to which glass can be ground and polished, and so are often used. Spherical aberration causes beams parallel to, but distant from, the lens axis to be focused in a slightly different place than beams close to the axis. This manifests itself as a blurring of the image. Lenses in which closer-to-ideal, non-spherical surfaces are used are called aspheric lenses. These were formerly complex to make and often extremely expensive, but advances in technology have greatly reduced the manufacturing cost for such lenses. Spherical aberration can be minimised by carefully choosing the surface curvatures for a particular application. For instance, a plano-convex lens, which is used to focus a collimated beam, produces a sharper focal spot when used with the convex side towards the beam source.

Coma

Main article: Coma (optics)

Coma, or comatic aberration, derives its name from the comet-like appearance of the aberrated image. Coma occurs when an object off the optical axis of the lens is imaged, where rays pass through the lens at an angle to the axis θ. Rays that pass through the centre of a lens of focal length f are focused at a point with distance f tan θ from the axis. Rays passing through the outer margins of the lens are focused at different points, either further from the axis (positive coma) or closer to the axis (negative coma). In general, a bundle of parallel rays passing through the lens at a fixed distance from the centre of the lens are focused to a ring-shaped image in the focal plane, known as a comatic circle. The sum of all these circles results in a V-shaped or comet-like flare. As with spherical aberration, coma can be minimised (and in some cases eliminated) by choosing the curvature of the two lens surfaces to match the application. Lenses in which both spherical aberration and coma are minimised are called bestform lenses.

Chromatic aberration

Main article: Chromatic aberration

Chromatic aberration is caused by the dispersion of the lens material—the variation of its refractive index, n, with the wavelength of light. Since, from the formulae above, f is dependent upon n, it follows that light of different wavelengths is focused to different positions. Chromatic aberration of a lens is seen as fringes of colour around the image. It can be minimised by using an achromatic doublet (or achromat) in which two materials with differing dispersion are bonded together to form a single lens. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. The use of achromats was an important step in the

development of the optical microscope. An apochromat is a lens or lens system with even better chromatic aberration correction, combined with improved spherical aberration correction. Apochromats are much more expensive than achromats. Different lens materials may also be used to minimise chromatic aberration, such as specialised coatings or lenses made from the crystal fluorite. This naturally occurring substance has the highest known Abbe number, indicating that the material has low dispersion.

Other types of aberration

Other kinds of aberration include field curvature, barrel and pincushion distortion, and astigmatism.

Aperture diffraction

Even if a lens is designed to minimize or eliminate the aberrations described above, the image quality is still limited by the diffraction of light passing through the lens’ finite aperture. A diffraction-limited lens is one in which aberrations have been reduced to the point where the image quality is primarily limited by diffraction under the design conditions.

4.1.5 Compound lenses

See also: Photographic lens, Doublet (lens), Triplet lens, and Achromatic lens

Simple lenses are subject to the optical aberrations discussed above. In many cases these aberrations can be compensated for to a great extent by using a combination of simple lenses with complementary aberrations. A compound lens is a collection of simple lenses of different shapes and made of materials of different refractive indices, arranged one after the other with a common axis.

The simplest case is where lenses are placed in contact: if the lenses of focal lengths f1 and f2 are "thin", the combined focal length f of the lenses is given by

1/f = 1/f1 + 1/f2.

Since 1/f is the power of a lens, it can be seen that the powers of thin lenses in contact are additive. If two thin lenses are separated in air by some distance d, the focal length for the combined system is given by

1/f = 1/f1 + 1/f2 − d/(f1f2).

The distance from the front focal point of the combined lenses to the first lens is called the front focal length (FFL):

FFL = f1(f2 − d) / ((f1 + f2) − d).[27]

Similarly, the distance from the second lens to the rear focal point of the combined system is the back focal length (BFL):

BFL = f2(d − f1) / (d − (f1 + f2)).

As d tends to zero, the focal lengths tend to the value of f given for thin lenses in contact.

If the separation distance is equal to the sum of the focal lengths (d = f1+f2), the FFL and BFL are infinite. This corresponds to a pair of lenses that transform a parallel (collimated) beam into another collimated beam. This type of system is called an afocal system, since it produces no net convergence or divergence of the beam. Two lenses at this separation form the simplest type of optical telescope. Although the system does not alter the divergence of a collimated beam, it does alter the width of the beam. The magnification of such a telescope is given by

M = −f2/f1,

which is the ratio of the output beam width to the input beam width. Note the sign convention: a telescope with two convex lenses (f1 > 0, f2 > 0) produces a negative magnification, indicating an inverted image. A convex plus a concave lens (f1 > 0 > f2) produces a positive magnification and the image is upright. For further information on simple optical telescopes, see Refracting telescope § Refracting telescope designs.
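The compound-lens relations above (powers adding in contact, the separated-pair formula, and the back focal length) can be sketched numerically as follows (function names and example focal lengths are invented for illustration, not taken from the source):

```python
def combined_focal_length(f1, f2, d=0.0):
    """Focal length of two thin lenses separated in air by d (d = 0: in contact)."""
    return 1 / (1 / f1 + 1 / f2 - d / (f1 * f2))

def back_focal_length(f1, f2, d):
    """BFL = f2 (d - f1) / (d - (f1 + f2)): second lens to the rear focal point."""
    return f2 * (d - f1) / (d - (f1 + f2))

# Thin lenses in contact: powers (1/f) simply add.
print(combined_focal_length(100.0, 50.0))      # ~33.33 (1/100 + 1/50 = 3/100)
# Separated pair:
print(back_focal_length(200.0, 50.0, 150.0))   # 25.0
# Afocal (telescope) condition d = f1 + f2: beam-width magnification is -f2/f1.
f1, f2 = 200.0, 50.0
print(-f2 / f1)                                # -0.25: narrowed beam, inverted image
```

At the afocal separation d = f1 + f2 the combined-focal-length denominator goes to zero, matching the statement in the text that the FFL and BFL become infinite for a telescope.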

4.1.6 Other types

Cylindrical lenses have curvature in only one direction. They are used to focus light into a line, or to convert the elliptical light from a laser diode into a round beam. A Fresnel lens has its optical surface broken up into narrow rings, allowing the lens to be much thinner and lighter than conventional lenses. Durable Fresnel lenses can be molded from plastic and are inexpensive. Lenticular lenses are arrays of microlenses that are used in lenticular printing to make images that have an illusion of depth or that change when viewed from different angles. A gradient index lens has flat optical surfaces, but has a radial or axial variation in index of refraction that causes light passing through the lens to be focused. An axicon has a conical optical surface. It images a point source into a line along the optic axis, or transforms a laser beam into a ring.[28] Diffractive optical elements can function as lenses. Superlenses are made from negative index metamaterials and claim to produce images at spatial resolutions exceeding the diffraction limit.[29] The first superlenses were made in 2004 using such a metamaterial for microwaves.[29] Improved versions have been made by other researchers.[30][31] As of 2014 the superlens has not yet been demonstrated at visible or near-infrared wavelengths.[32] A prototype flat ultrathin lens with no curvature has been developed.[33]

4.1.7 Uses

A single convex lens mounted in a frame with a handle or stand is a magnifying glass. Lenses are used as prosthetics for the correction of visual impairments such as myopia, hyperopia, presbyopia, and astigmatism. (See corrective lens, contact lens, eyeglasses.) Most lenses used for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric. They are usually shaped to fit in a roughly oval, not circular, frame; the optical centres are placed over the eyeballs; their curvature may not be axially symmetric to correct for astigmatism. Sunglasses’ lenses are designed to attenuate light; sunglass lenses that also correct visual impairments can be custom made.

Other uses are in imaging systems such as monoculars, binoculars, telescopes, microscopes, cameras and projectors. Some of these instruments produce a virtual image when applied to the human eye; others produce a real image that can be captured on photographic film or an optical sensor, or can be viewed on a screen. In these devices lenses are sometimes paired up with curved mirrors to make a catadioptric system where the lens’s spherical aberration corrects the opposite aberration in the mirror (such as Schmidt and meniscus correctors). Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much of the visible and infrared light incident on the lens is concentrated into the small image. A large lens creates enough intensity to burn a flammable object at the focal point. Since ignition can be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least 2400 years.[7] A modern application is the use of relatively large lenses to concentrate solar energy on relatively small photovoltaic cells, harvesting more energy without the need to use larger and more expensive cells. Radio astronomy and radar systems often use dielectric lenses, commonly called a lens antenna to refract electromagnetic radiation into a collector antenna. Lenses can become scratched and abraded. Abrasion-resistant coatings are available to help control this.[34]

4.1.8 See also

• Anti-fogging treatment of optical surfaces

• Back focal plane

• Cardinal point (optics)

• Caustic (optics)

• Eyepiece

• F-number

• Gravitational lens

• Lens (anatomy)

• List of lens designs

• Optical coatings

• Optical lens design

• Photochromic lens

• Prism (optics)

• Ray tracing

• Ray transfer matrix analysis

4.1.9 References

[1] Brians, Paul (2003). Common Errors in English. Franklin, Beedle & Associates. p. 125. ISBN 1-887902-89-9. Retrieved 28 June 2009. Reports “lense” as listed in some dictionaries, but not generally considered acceptable.

[2] Merriam-Webster’s Medical Dictionary. Merriam-Webster. 1995. p. 368. ISBN 0-87779-914-8. Lists “lense” as an acceptable alternate spelling.

[3] Whitehouse, David (1 July 1999). “World’s oldest telescope?". BBC News. Retrieved 10 May 2008.

[4] “The Nimrud lens/The Layard lens”. Collection database. The British Museum. Retrieved 25 November 2012.

[5] D. Brewster (1852). “On an account of a rock-crystal lens and decomposed glass found in Niniveh”. Die Fortschritte der Physik (in German). Deutsche Physikalische Gesellschaft. p. 355.

[6] Kriss, Timothy C.; Kriss, Vesna Martich (April 1998). “History of the Operating Microscope: From Magnifying Glass to Microneurosurgery”. Neurosurgery. 42 (4): 899–907. doi:10.1097/00006123-199804000-00116. PMID 9574655.

[7] Aristophanes (22 Jan 2013) [First performed in 423 BC]. The Clouds. Translated by Hickie, William James. Project Gutenberg. EBook #2562.

[8] Sines, George; Sakellarakis, Yannis A. (1987). “Lenses in antiquity”. American Journal of Archaeology. 91 (2): 191–196. doi:10.2307/505216. JSTOR 505216.

[9] Pliny the Elder, The Natural History (trans. John Bostock) Book XXXVII, Chap. 10.

[10] Pliny the Elder, The Natural History (trans. John Bostock) Book XXXVII, Chap. 16

[11] Tilton, Buck (2005). The Complete Book of Fire: Building Campfires for Warmth, Light, Cooking, and Survival. Menasha Ridge Press. p. 25. ISBN 0-89732-633-4.

[12] Glick, Thomas F.; Steven John Livesey; Faith Wallis (2005). Medieval science, technology, and medicine: an encyclopedia. Routledge. p. 167. ISBN 978-0-415-96930-7. Retrieved 24 April 2011.

[13] Al Van Helden. The Galileo Project > Science > The Telescope. Galileo.rice.edu. Retrieved on 6 June 2012.

[14] Henry C. King (28 September 2003). The History of the Telescope. Courier Dover Publications. p. 27. ISBN 978-0-486-43265-6. Retrieved 6 June 2012.

[15] Paul S. Agutter; Denys N. Wheatley (12 December 2008). Thinking about Life: The History and Philosophy of Biology and Other Sciences. Springer. p. 17. ISBN 978-1-4020-8865-0. Retrieved 6 June 2012.

[16] Vincent Ilardi (2007). Renaissance Vision from Spectacles to Telescopes. American Philosophical Society. p. 210. ISBN 978-0-87169-259-7. Retrieved 6 June 2012.

[17] Microscopes: Time Line, Nobel Foundation. Retrieved 3 April 2009

[18] Fred Watson (1 October 2007). Stargazer: The Life and Times of the Telescope. Allen & Unwin. p. 55. ISBN 978-1-74175-383-7. Retrieved 6 June 2012.

[19] This paragraph is adapted from the 1888 edition of the Encyclopædia Britannica.

[20] Greivenkamp 2004, p. 14; Hecht 1987, §6.1

[21] Hecht 1987, § 5.2.3.

[22] Nave, Carl R. “Thin Lens Equation”. Hyperphysics. Georgia State University. Retrieved March 17, 2015.

[23] Colwell, Catharine H. “Resource Lesson: Thin Lens Equation”. PhysicsLab.org. Retrieved March 17, 2015.

[24] “The Mathematics of Lenses”. The Physics Classroom. Retrieved March 17, 2015.

[25] Hecht 2002, p. 120.

[26] There are always 3 “easy rays”. For the third ray in this case, see File:Lens3b third ray.svg.

[27] Hecht 2002, p. 168.

[28] Proteep Mallik (2005). “The Axicon” (PDF). Archived from the original (PDF) on 23 November 2009. Retrieved 22 November 2007.

[29] Grbic, A.; Eleftheriades, G. V. (2004). “Overcoming the Diffraction Limit with a Planar Left-handed Transmission-line Lens”. Physical Review Letters. 92 (11): 117403. Bibcode:2004PhRvL..92k7403G. doi:10.1103/PhysRevLett.92.117403. PMID 15089166.

[30] Valentine, J.; et al. (2008). “Three-dimensional optical metamaterial with a negative refractive index”. Nature. 455 (7211): 376–9. Bibcode:2008Natur.455..376V. doi:10.1038/nature07247. PMID 18690249.

[31] Yao, J.; et al. (2008). “Optical Negative Refraction in Bulk Metamaterials of Nanowires”. Science. 321 (5891): 930. Bibcode:2008Sci...321..930Y. doi:10.1126/science.1157566. PMID 18703734.

[32] Nielsen, R. B.; Thoreson, M. D.; Chen, W.; Kristensen, A.; Hvam, J. M.; Shalaev, V. M.; Boltasseva, A. (2010). “Toward superlensing with metal–dielectric composites and multilayers” (PDF). Applied Physics B. 100: 93. Bibcode:2010ApPhB.100...93N. doi:10.1007/s00340-010-4065-z. Archived from the original (PDF) on 9 March 2013.

[33] Patel, Prachi. “Good-Bye to Curved Lens: New Lens Is Flat”. Retrieved 2015-05-16.

[34] Schottner, G (May 2003). “Scratch and Abrasion Resistant Coatings on Plastic Lenses—State of the Art, Current Developments and Perspectives”. Journal of Sol-Gel Science and Technology. pp. 71–79. Retrieved 28 December 2009.

4.1.10 Bibliography

• Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Chapters 5 & 6.

• Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. ISBN 0-321-18878-0.

• Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. ISBN 0-8194-5294-7.

4.1.11 External links

• a chapter from an online textbook on refraction and lenses

• Thin Spherical Lenses (.pdf) on Project PHYSNET.

• Lens article at digitalartform.com

• Article on Ancient Egyptian lenses

• FDTD Animation of Electromagnetic Propagation through Convex Lens (on- and off-axis) Video on YouTube

• The Use of Magnifying Lenses in the Classical World

Simulations

• Learning by Simulations – Concave and Convex Lenses

• OpticalRayTracer – Open source lens simulator (downloadable java)

• Video with a simulation of light while it passes a convex lens Video on YouTube

• Animations demonstrating lens by QED

Lenses can be used to focus light

The Nimrud lens

Types of lenses

A camera lens forms a real image of a distant object.

Virtual image formation using a positive lens as a magnifying glass.[26]


Images of black letters in a thin convex lens of focal length f are shown in red. Selected rays are shown for letters E, I and K in blue, green and orange, respectively. Note that E (at 2f) has an equal-size, real and inverted image; I (at f) has its image at infinity; and K (at f/2) has a double-size, virtual and upright image.

Close-up view of a flat Fresnel lens.

Thin lens simulation

Chapter 5

Day 5

5.1 Focus (optics)

For eye focus, see Accommodation (eye).

Eye focusing ideally collects all light rays from a point on an object into a corresponding point on the retina.

In geometrical optics, a focus, also called an image point, is the point where light rays originating from a point on the object converge.[1] Although the focus is conceptually a point, physically the focus has a spatial extent, called the blur circle. This non-ideal focusing may be caused by aberrations of the imaging optics. In the absence of significant aberrations, the smallest possible blur circle is the Airy disc, which is caused by diffraction from the optical system’s aperture. Aberrations tend to get worse as the aperture diameter increases, while the Airy circle is smallest for large apertures.

An image, or image point or region, is in focus if light from object points is converged almost as much as possible in the image, and out of focus if light is not well converged. The border between these is sometimes defined using a circle of confusion criterion.

A principal focus or focal point is a special focus:


An image that is partially in focus, but mostly out of focus in varying degrees.

• For a lens, or a spherical or parabolic mirror, it is a point onto which collimated light parallel to the axis is focused. Since light can pass through a lens in either direction, a lens has two focal points—one on each side. The distance in air from the lens or mirror’s principal plane to the focus is called the focal length.

• Elliptical mirrors have two focal points: light that passes through one of these before striking the mirror is reflected such that it passes through the other.

• The focus of a hyperbolic mirror is either of two points which have the property that light from one is reflected as if it came from the other.

Diverging (negative) lenses and convex mirrors do not focus a collimated beam to a point. Instead, the focus is the point from which the light appears to be emanating, after it travels through the lens or reflects from the mirror. A convex parabolic mirror will reflect a beam of collimated light to make it appear as if it were radiating from the focal point, or conversely, reflect rays directed toward the focus as a collimated beam. A convex elliptical mirror will reflect light directed towards one focus as if it were radiating from the other focus, both of which are behind the mirror. A convex hyperbolic mirror will reflect rays emanating from the focal point in front of the mirror as if they were emanating from the focal point behind the mirror. Conversely, it can focus rays directed at the focal point that is behind the mirror towards the focal point that is in front of the mirror as in a Cassegrain telescope.

5.1.1 See also

• Autofocus

• Cardinal point (optics)

• Depth of field

• Depth of focus

Focal blur is simulated in this computer generated image of glasses, which was rendered in POV-Ray.

• Far point

• Focus (geometry)

• Fixed focus

• Bokeh

• Focus stacking

• Focal Plane

5.1.2 References

[1] “Standard Microscopy Terminology”. University of Minnesota Characterization Facility website. Archived from the original on 2008-03-02. Retrieved 2006-04-21.

5.2 Cardinal point (optics)

For other uses, see Cardinal point (disambiguation).

In Gaussian optics, the cardinal points consist of three pairs of points located on the optical axis of a rotationally symmetric, focal, optical system. These are the focal points, the principal points, and the nodal points.[1] For ideal systems, the basic imaging properties such as image size, location, and orientation are completely determined by the locations of the cardinal points; in fact only four points are necessary: the focal points and either the principal or nodal points. The only ideal system that has been achieved in practice is the plane mirror;[2] however, the cardinal points are widely used to approximate the behavior of real optical systems. Cardinal points provide a way to analytically simplify a system with many components, allowing the imaging characteristics of the system to be approximately determined with simple calculations.

5.2.1 Explanation

The cardinal points lie on the optical axis of the optical system. Each point is defined by the effect the optical system has on rays that pass through that point, in the paraxial approximation. The paraxial approximation assumes that rays travel at shallow angles with respect to the optical axis, so that sin θ ≈ θ and cos θ ≈ 1.[3] Aperture effects are ignored: rays that do not pass through the aperture stop of the system are not considered in the discussion below.

Focal planes

See also: Focus (optics) and Focal length

The front focal point of an optical system, by definition, has the property that any ray that passes through it will emerge from the system parallel to the optical axis. The rear (or back) focal point of the system has the reverse property: rays that enter the system parallel to the optical axis are focused such that they pass through the rear focal point. The front and rear (or back) focal planes are defined as the planes, perpendicular to the optic axis, which pass through the front and rear focal points. An object infinitely far from the optical system forms an image at the rear focal plane. For objects a finite distance away, the image is formed at a different location, but rays that leave the object parallel to one another cross at the rear focal plane. A diaphragm or “stop” at the rear focal plane can be used to filter rays by angle, since:

1. It only allows rays to pass that are emitted at an angle (relative to the optical axis) that is sufficiently small. (An infinitely small aperture would only allow rays that are emitted along the optical axis to pass.)

2. No matter where on the object the ray comes from, the ray will pass through the aperture as long as the angle at which it is emitted from the object is small enough.

Note that the aperture must be centered on the optical axis for this to work as indicated. Using a sufficiently small aperture in the focal plane will make the lens telecentric. Similarly, the allowed range of angles on the output side of the lens can be filtered by putting an aperture at the front focal plane of the lens (or a lens group within the overall lens). This is important for DSLR cameras having CCD sensors. The pixels in these sensors are more sensitive to rays that hit them straight on than to those that strike at an angle. A lens that does not control the angle of incidence at the detector will produce pixel vignetting in the images.
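The statement above that rays leaving the object parallel to one another cross at the rear focal plane can be checked with paraxial ray transfer (ABCD) matrices (see also Ray transfer matrix analysis). The sketch below assumes a thin lens in air; the helper names and the numeric values are illustrative:

```python
def thin_lens(f):
    # ABCD matrix of a thin lens of focal length f, paraxial approximation
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def free_space(d):
    # ABCD matrix of propagation over distance d
    return [[1.0, d], [0.0, 1.0]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(system, y, u):
    # A paraxial ray is (height y, angle u); returns (y', u') after the system
    return (system[0][0] * y + system[0][1] * u,
            system[1][0] * y + system[1][1] * u)

f = 100.0  # focal length in mm (hypothetical lens)
to_rear_focal_plane = matmul(free_space(f), thin_lens(f))

# Rays striking the lens at different heights but with the SAME angle u
# all arrive at the same height y' = f * u in the rear focal plane:
for y in (-5.0, 0.0, 7.0):
    y_out, _ = trace(to_rear_focal_plane, y, 0.02)
    print(y_out)  # 2.0 in every case
```

The combined matrix has a zero in its upper-left entry, so the output height depends only on the input angle — which is exactly why an aperture in this plane filters rays by angle.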

Principal planes and points

The two principal planes have the property that a ray emerging from the lens appears to have crossed the rear principal plane at the same distance from the axis that that ray appeared to cross the front principal plane, as viewed from the front of the lens. This means that the lens can be treated as if all of the refraction happened at the principal planes. The principal planes are crucial in defining the optical properties of the system, since it is the distance of the object and image from the front and rear principal planes that determines the magnification of the system. The principal points are the points where the principal planes cross the optical axis. If the medium surrounding the optical system has a refractive index of 1 (e.g., air or vacuum), then the distance from the principal planes to their corresponding focal points is just the focal length of the system. In the more general case, the distance to the foci is the focal length multiplied by the index of refraction of the medium. For a thin lens in air, the principal planes both lie at the location of the lens. The point where they cross the optical axis is sometimes misleadingly called the optical centre of the lens. Note, however, that for a real lens the principal planes do not necessarily pass through the centre of the lens, and in general may not lie inside the lens at all.

Nodal points

The front and rear nodal points have the property that a ray aimed at one of them will be refracted by the lens such that it appears to have come from the other, and with the same angle with respect to the optical axis. The nodal points therefore do for angles what the principal planes do for transverse distance. If the medium on both sides of the optical system is the same (e.g., air), then the front and rear nodal points coincide with the front and rear principal points, respectively. The nodal points are widely misunderstood in photography, where it is commonly asserted that the light rays “intersect” at “the nodal point”, that the iris diaphragm of the lens is located there, and that this is the correct pivot point for panoramic photography, so as to avoid parallax error.[4][5][6] These claims generally arise from confusion about the optics of camera lenses, as well as confusion between the nodal points and the other cardinal points of the system. (A better choice of the point about which to pivot a camera for panoramic photography can be shown to be the centre of the system’s entrance pupil.[4][5][6] On the other hand, swing-lens cameras with fixed film position rotate the lens about the rear nodal point to stabilize the image on the film.[6][7])

Surface vertices

The surface vertices are the points where each optical surface crosses the optical axis. They are important primarily because they are the physically measurable parameters for the position of the optical elements, and so the positions of the cardinal points must be known with respect to the vertices to describe the physical system. In anatomy, the surface vertices of the eye’s lens are called the anterior and posterior poles of the lens.[8]

5.2.2 Modeling optical systems as mathematical transformations

In geometrical optics for each ray entering an optical system a single, unique, ray exits. In mathematical terms, the optical system performs a transformation that maps every object ray to an image ray.[1] The object ray and its associated image ray are said to be conjugate to each other. This term also applies to corresponding pairs of object and image points and planes. The object and image rays and points are considered to be in two distinct optical spaces, object space and image space; additional intermediate optical spaces may be used as well.

Rotationally symmetric optical systems; Optical axis, axial points, and meridional planes

An optical system is rotationally symmetric if its imaging properties are unchanged by any rotation about some axis. This (unique) axis of rotational symmetry is the optical axis of the system. Optical systems can be folded using plane mirrors; the system is still considered to be rotationally symmetric if it possesses rotational symmetry when unfolded. Any point on the optical axis (in any space) is an axial point. Rotational symmetry greatly simplifies the analysis of optical systems, which otherwise must be analyzed in three dimensions. Rotational symmetry allows the system to be analyzed by considering only rays confined to a single transverse plane containing the optical axis. Such a plane is called a meridional plane; it is a cross-section through the system.

Ideal, rotationally symmetric, optical imaging system

An ideal, rotationally symmetric, optical imaging system must meet three criteria:

1. All rays “originating” from any object point converge to a single image point (Imaging is stigmatic).

2. Object planes perpendicular to the optical axis are conjugate to image planes perpendicular to the axis.

3. The image of an object confined to a plane normal to the axis is geometrically similar to the object.

In some optical systems imaging is stigmatic for one or perhaps a few object points, but to be an ideal system imaging must be stigmatic for every object point. Unlike rays in mathematics, optical rays extend to infinity in both directions. Rays are real when they are in the part of the optical system to which they apply, and are virtual elsewhere. For example, object rays are real on the object side of the optical system. In stigmatic imaging an object ray intersecting any specific point in object space must be conjugate to an image ray intersecting the conjugate point in image space. A consequence is that every point on an object ray is conjugate to some point on the conjugate image ray. Geometrical similarity implies the image is a scale model of the object. There is no restriction on the image’s orientation. The image may be inverted or otherwise rotated with respect to the object.

Focal and afocal systems, focal points

In afocal systems an object ray parallel to the optical axis is conjugate to an image ray parallel to the optical axis. Such systems have no focal points (hence afocal) and also lack principal and nodal points. The system is focal if an object ray parallel to the axis is conjugate to an image ray that intersects the optical axis. The intersection of the image ray with the optical axis is the focal point F' in image space. Focal systems also have an axial object point F such that any ray through F is conjugate to an image ray parallel to the optical axis. F is the object space focal point of the system.

Transformation

The transformation between object space and image space is completely defined by the cardinal points of the system, and these points can be used to map any point on the object to its conjugate image point.
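For a focal system in air, this object-to-image mapping reduces to Newton's imaging equations, with distances measured from the focal points. The sketch below is an illustration under that assumption; the function name and numbers are hypothetical:

```python
def image_from_object(x_o, f):
    """Newtonian imaging equations for a focal system in air.

    x_o : object distance measured from the front focal point F
          (positive when the object lies in front of F).
    f   : focal length of the system.
    Returns (x_i, m): image distance beyond the rear focal point F',
    and the transverse magnification.
    """
    x_i = f * f / x_o   # Newton's equation: x_o * x_i = f^2
    m = -f / x_o        # transverse magnification
    return x_i, m

# Object one focal length in front of F (i.e. 2f from a thin lens):
x_i, m = image_from_object(50.0, 50.0)
print(x_i, m)   # 50.0 -1.0 -> inverted, same-size image, 2f beyond the lens
```

This illustrates the claim in the text: once the cardinal points (here, just the focal points and focal length) are known, any object point maps to its conjugate image point by simple calculation.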

5.2.3 See also

• Film plane

• Pinhole camera model

• Radius of curvature (optics)

• Vergence (optics)

5.2.4 Notes and references

[1] Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. pp. 5–20. ISBN 0-8194-5294-7.

[2] Welford, W.T. (1986). Aberrations of Optical Systems. CRC. ISBN 0-85274-564-8.

[3] Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. p. 155. ISBN 0-321-18878-0.

[4] Kerr, Douglas A. (2005). “The Proper Pivot Point for Panoramic Photography” (PDF). The Pumpkin. Archived from the original (PDF) on 13 May 2006. Retrieved 5 March 2006.

[5] van Walree, Paul. “Misconceptions in photographic optics”. Retrieved 1 January 2007. Item #6.

[6] Littlefield, Rik (6 February 2006). “Theory of the “No-Parallax” Point in Panorama Photography” (pdf). ver. 1.0. Retrieved 14 January 2007.

[7] Searle, G.F.C. 1912 Revolving Table Method of Measuring Focal Lengths of Optical Systems in “Proceedings of the Optical Convention 1912” p.168-171.

[8] Gray, Henry (1918). “Anatomy of the Human Body”. p. 1019. Retrieved 12 February 2009.

• Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X.

• Lambda Research Corporation (2001). OSLO Optics Reference (PDF) (Version 6.1 ed.). Retrieved 5 March 2006. Pages 74–76 define the cardinal points.

5.2.5 External links

• Learn to use TEM

The cardinal points of a thick lens in air. F, F' front and rear focal points, P, P' front and rear principal points, V, V' front and rear surface vertices.

Rays that leave the object with the same angle cross at the back focal plane.

Angle filtering with an aperture at the rear focal plane.

Various lens shapes, and the location of the principal planes.

N, N' The front and rear nodal points of a thick lens.

Chapter 6

Day 6

6.1 Hyperfocal distance

In optics and photography, hyperfocal distance is a distance beyond which all objects can be brought into an “acceptable” focus. There are two commonly used definitions of hyperfocal distance, leading to values that differ only slightly:

Definition 1: The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp.

Definition 2: The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.

The distinction between the two meanings is rarely made, since they have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length. As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance to set the focus of a fixed-focus camera.[1]

6.1.1 Acceptable sharpness

The hyperfocal distance is entirely dependent upon what level of sharpness is considered to be acceptable. The criterion for the desired acceptable sharpness is specified through the circle of confusion (CoC) diameter limit. This criterion is the largest acceptable spot size diameter that an infinitesimal point is allowed to spread out to on the imaging medium (film, digital sensor, etc.).

6.1.2 Formulae

For the first definition,

H = f²/(Nc) + f

where

H is hyperfocal distance

f is focal length

N is f-number (f/D for aperture diameter D)

c is the circle of confusion limit

For any practical f-number, the added focal length is insignificant in comparison with the first term, so that

H ≈ f²/(Nc)

This formula is exact for the second definition, if H is measured from a thin lens, or from the front principal plane of a complex lens; it is also exact for the first definition if H is measured from a point that is one focal length in front of the front principal plane. For practical purposes, there is little difference between the first and second definitions.
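Both definitions are straightforward to evaluate numerically. The following is a minimal sketch with a hypothetical helper; H comes out in the same units as f and c:

```python
def hyperfocal(f, N, c, definition=1):
    """Hyperfocal distance.

    f : focal length, N : f-number, c : circle-of-confusion limit,
    all in consistent units.  Definition 1 adds one focal length to
    Definition 2's f^2 / (N c).
    """
    H = f * f / (N * c)
    return H + f if definition == 1 else H

# A 50 mm lens at f/8 with c = 0.03 mm (typical for 35 mm photography):
print(round(hyperfocal(50, 8, 0.03, definition=1)))  # 10467 mm
print(round(hyperfocal(50, 8, 0.03, definition=2)))  # 10417 mm
```

The two results differ by exactly one focal length (50 mm), as stated above.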

Derivation using geometric optics

The following derivations refer to the accompanying figures. For clarity, half the aperture and circle of confusion are indicated.[2]

Definition 1

An object at distance H forms a sharp image at distance x (blue line). Here, objects at infinity have images with a circle of confusion indicated by the brown ellipse where the upper red ray through the focal point intersects the blue line.

First using similar triangles hatched in green,

(x − f)/(c/2) = f/(D/2)
∴ x − f = cf/D
∴ x = f + cf/D

Then using similar triangles dotted in purple,

H/(D/2) = x/(c/2)
∴ H = Dx/c = (D/c)(f + cf/D) = Df/c + f = f²/(Nc) + f

as found above.

Definition 2

Objects at infinity form sharp images at the focal length f (blue line). Here, an object at H forms an image with a circle of confusion indicated by the brown ellipse where the lower red ray converging to its sharp image intersects the blue line.

Using similar triangles shaded in yellow,

H/(D/2) = f/(c/2)
∴ H = Df/c = f²/(Nc)

6.1.3 Example

As an example, for a 50 mm lens at f/8 using a circle of confusion of 0.03 mm, which is a value typically used in 35 mm photography, the hyperfocal distance according to Definition 1 is

H = (50)²/((8)(0.03)) + (50) = 10467 mm

If the lens is focused at a distance of 10.5 m, then everything from half that distance (5.2 m) to infinity will be acceptably sharp in our photograph. With the formula for Definition 2, the result is 10417 mm, a difference of 0.5%.

6.1.4 Mathematical phenomenon

The hyperfocal distance has a curious property: while a lens focused at H will hold a depth of field from H/2 to infinity, if the lens is focused to H/2, the depth of field will extend from H/3 to H; if the lens is then focused to H/3, the depth of field will extend from H/4 to H/2. This continues on through all successive 1/x values of the hyperfocal distance. Piper (1901) calls this phenomenon “consecutive depths of field” and shows how to test the idea easily. This is also among the earliest of publications to use the word hyperfocal. The figure on the right illustrates this phenomenon.
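This pattern can be verified with the standard approximate depth-of-field limits near = sH/(H + s) and far = sH/(H − s) for a lens focused at distance s — an assumption here, with H per the approximate formula and all distances measured consistently. The helper and numbers are illustrative:

```python
def dof_limits(s, H):
    """Approximate near and far limits of the depth of field for a lens
    focused at distance s, with hyperfocal distance H (requires s < H)."""
    return s * H / (H + s), s * H / (H - s)

H = 12.0  # hyperfocal distance in metres (arbitrary example value)
for n in (2, 3, 4):
    near, far = dof_limits(H / n, H)
    print(n, near, far)
# Focusing at H/2 gives a DOF from H/3 to H; at H/3, from H/4 to H/2;
# at H/4, from H/5 to H/3 -- Piper's "consecutive depths of field".
```

Algebraically, setting s = H/n gives near = H/(n + 1) and far = H/(n − 1), which is exactly the cascade described above.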

6.1.5 History

The concepts of the two definitions of hyperfocal distance have a long history, tied up with the terminology for depth of field, depth of focus, circle of confusion, etc. Here are some selected early quotations and interpretations on the topic.

Sutton and Dawson 1867

Thomas Sutton and George Dawson define focal range for what we now call hyperfocal distance:[3]

Focal Range. In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6 inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the “focal range” of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased. The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses. 'Focal range' is a good term, because it expresses the range within which it is necessary to adjust the focus of the lens to objects at different distances from it – in other words, the range within which focusing becomes necessary.

Their focal range is about 1000 times their aperture diameter, so it makes sense as a hyperfocal distance with CoC value of f/1000, or image format diagonal times 1/1000 assuming the lens is a “normal” lens. What is not clear, however, is whether the focal range they cite was computed, or empirical.
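The observation that their focal range is about 1000 times the aperture diameter follows from H ≈ f²/(Nc) with c = f/1000, since then H = 1000·f/N = 1000·D. A quick check of the 6-inch, 1/4-inch-stop example from the quotation (the interpretation of the CoC as f/1000 is the assumption stated above):

```python
# Sutton and Dawson's example: 6 inch focus, 1/4 inch stop, CoC assumed f/1000
f_in = 6.0           # focal length, inches
D_in = 0.25          # stop diameter, inches
N = f_in / D_in      # apertal ratio 1:24 -> f-number 24
c = f_in / 1000.0    # assumed circle of confusion, inches
H_ft = f_in * f_in / (N * c) / 12.0
print(H_ft)  # ~20.8 feet, close to the quoted "focal range" of 20 feet
```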

Abney 1881

Sir William de Wivelesley Abney says:[4]

The annexed formula will approximately give the nearest point p which will appear in focus when the distance is accurately focussed, supposing the admissible disc of confusion to be 0.025 cm:

p = 0.41 · f² · a

when f = the focal length of the lens in centimetres, and a = the ratio of the aperture to the focal length.

That is, a is the reciprocal of what we now call the f-number, and the answer is evidently in meters. His 0.41 should obviously be 0.40. Based on his formulae, and on the notion that the aperture ratio should be kept fixed in comparisons across formats, Abney says: 6.1. HYPERFOCAL DISTANCE 91

It can be shown that an enlargement from a small negative is better than a picture of the same size taken direct as regards sharpness of detail. ... Care must be taken to distinguish between the advantages to be gained in enlargement by the use of a smaller lens, with the disadvantages that ensue from the deterioration in the relative values of light and shade.
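Abney's formula, and the observation that his 0.41 should be 0.40, can be checked numerically: with c = 0.025 cm and f in centimetres, f²·a/0.025 cm equals 0.40·f²·a metres. The lens values below are hypothetical:

```python
# Abney's hyperfocal formula with c = 0.025 cm and f in centimetres:
# p = f^2 * a / 0.025 cm = 40 * f^2 * a cm = 0.40 * f^2 * a metres,
# hence the claim that his coefficient 0.41 should be 0.40.
f_cm, a = 15.0, 1.0 / 8.0          # hypothetical lens: 15 cm focus at f/8
p_m = f_cm**2 * a / 0.025 / 100.0  # hyperfocal distance in metres
print(p_m, 0.40 * f_cm**2 * a)     # both ~ 11.25
```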

Taylor 1892

John Traill Taylor recalls this word formula for a sort of hyperfocal distance:[5]

We have seen it laid down as an approximative rule by some writers on optics (Thomas Sutton, if we remember aright), that if the diameter of the stop be a fortieth part of the focus of the lens, the depth of focus will range between infinity and a distance equal to four times as many feet as there are inches in the focus of the lens.

This formula implies a stricter CoC criterion than we typically use today.

Hodges 1895

John Hodges discusses depth of field without formulas but with some of these relationships:[6]

There is a point, however, beyond which everything will be in pictorially good definition, but the longer the focus of the lens used, the further will the point beyond which everything is in sharp focus be removed from the camera. Mathematically speaking, the amount of depth possessed by a lens varies inversely as the square of its focus.

This “mathematically” observed relationship implies that he had a formula at hand, and a parameterization with the f-number or “intensity ratio” in it. To get an inverse-square relation to focal length, you have to assume that the CoC limit is fixed and the aperture diameter scales with the focal length, giving a constant f-number.

Piper 1901

C. Welborne Piper may be the first to have published a clear distinction between Depth of Field in the modern sense and Depth of Definition in the focal plane, and implies that Depth of Focus and Depth of Distance are sometimes used for the former (in modern usage, Depth of Focus is usually reserved for the latter).[7] He uses the term Depth Constant for H, and measures it from the front principal focus (i. e., he counts one focal length less than the distance from the lens to get the simpler formula), and even introduces the modern term:

This is the maximum depth of field possible, and H + f may be styled the distance of maximum depth of field. If we measure this distance extra-focally it is equal to H, and is sometimes called the hyperfocal distance. The depth constant and the hyperfocal distance are quite distinct, though of the same value.

It is unclear what distinction he means. Adjacent to Table I in his appendix, he further notes:

If we focus on infinity, the constant is the focal distance of the nearest object in focus. If we focus on an extra-focal distance equal to the constant, we obtain a maximum depth of field from approximately half the constant distance up to infinity. The constant is then the hyper-focal distance.

At this point we do not have evidence of the term hyperfocal before Piper, nor the hyphenated hyper-focal which he also used, but he obviously did not claim to coin this descriptor himself.

Derr 1906

Louis Derr may be the first to clearly specify the first definition,[8] which is considered to be the strictly correct one in modern times, and to derive the formula corresponding to it. Using p for hyperfocal distance, D for aperture diameter, d for the diameter that a circle of confusion shall not exceed, and f for focal length, he derives:

p = (D + d) f / d

As the aperture diameter D is the ratio of the focal length f to the f-number N, and the diameter of the circle of confusion c = d, this gives the equation for the first definition above:

p = (f/N + c) f / c = f²/(Nc) + f
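Derr’s two equivalent forms can be verified numerically. This is a sketch with hypothetical values (a 50 mm lens at f/8 with c = 0.03 mm, none of which appear in Derr’s text):

```python
# Derr's 1906 hyperfocal distance: p = (D + d) f / d,
# where D = f/N (aperture diameter) and d = c (circle of confusion diameter).

def derr_p(f, N, c):
    D = f / N                      # aperture diameter
    return (D + c) * f / c         # Derr's original form

def expanded_form(f, N, c):
    return f**2 / (N * c) + f      # equivalent expanded form

# Hypothetical values: 50 mm lens, f/8, c = 0.03 mm
f, N, c = 50.0, 8.0, 0.03
print(derr_p(f, N, c))             # both forms agree, ~10466.7 mm
print(expanded_form(f, N, c))
```

The two forms agree exactly; the expanded form makes clear that Derr’s definition exceeds the simpler f²/(Nc) by one focal length.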

Johnson 1909

George Lindsay Johnson uses the term Depth of Field for what Abney called Depth of Focus, and Depth of Focus in the modern sense (possibly for the first time),[9] as the allowable distance error in the focal plane. His definitions include hyperfocal distance:

Depth of Focus is a convenient, but not strictly accurate term, used to describe the amount of rack- ing movement (forwards or backwards) which can be given to the screen without the image becoming sensibly blurred, i.e. without any blurring in the image exceeding 1/100 in., or in the case of negatives to be enlarged or scientific work, the 1/10 or 1/100 mm. Then the breadth of a point of light, which, of course, causes blurring on both sides, i.e. 1/50 in = 2e (or 1/100 in = e).

His drawing makes it clear that his e is the radius of the circle of confusion. He has clearly anticipated the need to tie it to format size or enlargement, but has not given a general scheme for choosing it.

Depth of Field is precisely the same as depth of focus, only in the former case the depth is measured by the movement of the plate, the object being fixed, while in the latter case the depth is measured by the distance through which the object can be moved without the circle of confusion exceeding 2e. Thus if a lens which is focused for infinity still gives a sharp image for an object at 6 yards, its depth of field is from infinity to 6 yards, every object beyond 6 yards being in focus. This distance (6 yards) is termed the hyperfocal distance of the lens, and any allowable confusion disc depends on the focal length of the lens and on the stop used. If the limit of confusion of half of the disc (i.e. e) be taken as 1/100 in., then the hyperfocal distance

H = F d / e

d being the diameter of the stop, ...

Johnson’s use of former and latter seems to be swapped; perhaps former was here meant to refer to the immediately preceding section title Depth of Focus, and latter to the current section title Depth of Field. Except for an obvious factor-of-2 error in using the ratio of stop diameter to CoC radius, this definition is the same as Abney’s hyperfocal distance.
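The factor-of-2 error can be made concrete. In this sketch (illustrative values, not from the sources), Johnson’s H = Fd/e with e the CoC radius is compared with the diameter-based form F²/(Nc), where c = 2e:

```python
# Johnson's H = F d / e uses the CoC *radius* e, which yields twice the
# value obtained with the CoC *diameter* c = 2e (Abney's form F^2/(N c)).

def johnson_h(F, N, e):
    d = F / N                # stop diameter
    return F * d / e

def diameter_based_h(F, N, c):
    return F**2 / (N * c)

F, N, c = 50.0, 8.0, 0.05    # illustrative values; e = c/2
print(johnson_h(F, N, c / 2) / diameter_based_h(F, N, c))  # → 2.0
```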

Others, early twentieth century

The term hyperfocal distance also appears in Cassell’s Cyclopaedia of 1911, The Sinclair Handbook of Photography of 1913, and Bayley’s The Complete Photographer of 1914.

Kingslake 1951

Rudolf Kingslake is explicit about the two meanings:[1]

The Hyperfocal Distance – It should be noted that if the camera is focused on a distance s equal to 1000 times the diameter of the lens aperture, then the far depth D1 becomes infinite. This critical object distance "h" is known as the Hyperfocal Distance. For a camera focused on this distance, D1 = ∞ and D2 = h/2 , and we see that the range of distances acceptably in focus will run from just half the hyperfocal distance to infinity. The hyperfocal distance is, therefore, the most desirable distance on which to pre-set the focus of a fixed-focus camera. It is worth noting, too, that if a camera is focused on s = ∞ , the closest acceptable object is at L2 = sh/(h + s) = h/(h/s + 1) = h (by equation 21). This is a second important meaning of the hyperfocal distance.

Kingslake uses the simplest formulae for DOF near and far distances, which has the effect of making the two different definitions of hyperfocal distance give identical values.
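This coincidence of the two definitions can be sketched with the near-limit equation Kingslake quotes, L2 = sh/(h + s): focusing at s = h gives a near limit of h/2, while focusing at infinity gives a near limit of h (the value of h here is arbitrary).

```python
# Near limit of acceptable focus, per Kingslake's equation: L2 = s h / (h + s).

def near_limit(s, h):
    return s * h / (h + s)

h = 10.0                      # hyperfocal distance, arbitrary units
print(near_limit(h, h))       # focused at h: near limit is h/2
print(near_limit(1e12, h))    # focused at (effectively) infinity: near limit -> h
```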

6.1.6 See also

• Circle of confusion
• Deep focus
• Depssi, depth of field sunrise/sunset indicator
• Depth of field

6.1.7 References

[1] Kingslake, Rudolf (1951). Lenses in Photography: The Practical Guide to Optics for Photographers. Garden City, NY: Garden City Press.

[2] Optics in Photography - Google Books. Retrieved 24 September 2014.

[3] Sutton, Thomas; Dawson, George (1867). A Dictionary of Photography. London: Sampson Low, Son & Marston.

[4] Abney, W. de W. (1881). A Treatise on Photography (First ed.). London: Longmans, Green, and Co.

[5] Taylor, J. Traill (1892). The Optics of Photography and Photographic Lenses. London: Whittaker & Co.

[6] Hodges, John (1895). Photographic Lenses: How to Choose, and How to Use. Bradford: Percy Lund & Co.

[7] Piper, C. Welborne (1901). A First Book of the Lens: An Elementary Treatise on the Action and Use of the Photographic Lens. London: Hazell, Watson, and Viney.

[8] Derr, Louis (1906). Photography for students of physics and chemistry. London: Macmillan.

[9] Johnson, George Lindsay (1909). Photographic Optics and Colour Photography. London: Ward & Co.

6.1.8 External links

• http://www.dofmaster.com/dofjs.html to calculate hyperfocal distance and depth of field

This early use of the term hyperfocal distance, Derr 1906, is by no means the earliest explanation of the concept.

Chapter 7

Day 7

7.1 Angle of view

A camera’s angle of view can be measured horizontally, vertically, or diagonally.

In photography, angle of view (AOV)[1] describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view. It is important to distinguish the angle of view from the angle of coverage, which describes the angle range that a lens can image. Typically the image circle produced by a lens is large enough to cover the film or sensor completely, possibly including some vignetting toward the edge. If the angle of coverage of the lens does not fill the sensor, the image circle will be visible, typically with strong vignetting toward the edge, and the effective angle of view will be

limited to the angle of coverage.

In 1916, Northey showed how to calculate the angle of view using ordinary carpenter’s tools.[2] The angle that he labels as the angle of view is the half-angle or “the angle that a straight line would take from the extreme outside of the field of view to the center of the lens;" he notes that manufacturers of lenses use twice this angle.

A camera’s angle of view depends not only on the lens, but also on the sensor. Digital sensors are usually smaller than 35mm film, and this causes the lens to have a narrower angle of view than with 35mm film, by a constant factor for each sensor (called the crop factor). In everyday digital cameras, the crop factor can range from around 1 (professional digital SLRs), to 1.6 (consumer SLR), to 2 (Micro Four Thirds ILC) to 4 (enthusiast compact cameras) to 6 (most compact cameras). So a standard 50mm lens for 35mm photography acts like a 50mm standard “film” lens even on a professional digital SLR, but would act closer to an 80mm lens (1.6 x 50mm) on many mid-market DSLRs, and the 40 degree angle of view of a standard 50mm lens on a film camera is equivalent to a 28 - 35mm lens on many digital SLRs.

7.1.1 Calculating a camera’s angle of view

For lenses projecting rectilinear (non-spatially-distorted) images of distant objects, the effective focal length and the image format dimensions completely define the angle of view. Calculations for lenses producing non-rectilinear images are much more complex and in the end not very useful in most practical applications. (In the case of a lens with distortion, e.g., a fisheye lens, a longer lens with distortion can have a wider angle of view than a shorter lens with low distortion.)[3] Angle of view may be measured horizontally (from the left to right edge of the frame), vertically (from the top to bottom of the frame), or diagonally (from one corner of the frame to its opposite corner).

In this simulation, adjusting the angle of view and distance of the camera while keeping the object in frame results in vastly differing images. At distances approaching infinity, the light rays are nearly parallel to each other, resulting in a “flattened” image. At low distances and high angles of view objects appear “foreshortened”.

For a lens projecting a rectilinear image (focused at infinity, see derivation), the angle of view (α) can be calculated from the chosen dimension (d), and effective focal length (f) as follows:[4]

α = 2 arctan(d / (2f))

Here d represents the size of the film (or sensor) in the direction measured (see below: sensor effects). For example, for 35 mm film, which is 36 mm wide and 24 mm high, d = 36 mm would be used to obtain the horizontal angle of view and d = 24 mm for the vertical angle. Because this is a trigonometric function, the angle of view does not vary quite linearly with the reciprocal of the focal length. However, except for wide-angle lenses, it is reasonable to approximate α ≈ d/f radians, or 180d/(πf) degrees. The effective focal length is nearly equal to the stated focal length of the lens (F), except in macro photography where the lens-to-object distance is comparable to the focal length. In this case, the magnification factor (m) must be taken into account:

f = F · (1 + m)

(In photography m is usually defined to be positive, despite the inverted image.) For example, with a magnification ratio of 1:2, we find f = 1.5 · F and thus the angle of view is reduced by 33% compared to focusing on a distant object with the same lens. A second effect which comes into play in macro photography is lens asymmetry (an asymmetric lens is a lens where the aperture appears to have different dimensions when viewed from the front and from the back). The lens asymmetry causes an offset between the nodal plane and pupil positions. The effect can be quantified using the ratio (P) between apparent exit pupil diameter and entrance pupil diameter. The full formula for angle of view now becomes:[5]

α = 2 arctan(d / (2F · (1 + m/P)))

Angle of view can also be determined using FOV tables or paper or software lens calculators.[6]
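The rectilinear formula and its small-angle approximation can be sketched as follows, using the 35 mm full-frame dimensions (36 mm × 24 mm, diagonal ≈ 43.3 mm) with a 50 mm lens:

```python
import math

# Angle of view for a rectilinear lens focused at infinity:
# alpha = 2 * arctan(d / (2 f)), with small-angle approximation alpha ≈ d/f.

def angle_of_view_deg(d_mm, f_mm):
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

def approx_deg(d_mm, f_mm):
    return math.degrees(d_mm / f_mm)

# 35 mm full frame (36 x 24 mm, diagonal ~43.3 mm) with a 50 mm lens
for d in (36.0, 24.0, 43.3):
    print(round(angle_of_view_deg(d, 50.0), 1), round(approx_deg(d, 50.0), 1))
```

For a 50 mm "normal" lens the exact and approximate values already differ by a couple of degrees, and the gap widens rapidly for wide-angle lenses.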

Log-log graphs of focal length vs. crop factor vs. diagonal, horizontal and vertical angles of view for film or sensors of 3:2 and 4:3 aspect ratios. The yellow line shows an example where 18 mm on 3:2 APS-C is equivalent to 27 mm and yields a vertical angle of 48 degrees.

Example

Consider a 35 mm camera with a lens having a focal length of F = 50 mm. The dimensions of the 35 mm image format are 24 mm (vertical) × 36 mm (horizontal), giving a diagonal of about 43.3 mm. At infinity focus, f = F, and the angles of view are:

• horizontally, αh = 2 arctan(36 / (2 × 50)) ≈ 39.6°
• vertically, αv = 2 arctan(24 / (2 × 50)) ≈ 27.0°
• diagonally, αd = 2 arctan(43.3 / (2 × 50)) ≈ 46.8°

Derivation of the angle-of-view formula

Consider a rectilinear lens in a camera used to photograph an object at a distance S1, and forming an image that just barely fits in the dimension, d, of the frame (the film or image sensor). Treat the lens as if it were a pinhole at distance S2 from the image plane (technically, the center of perspective of a rectilinear lens is at the center of its entrance pupil):[7] Now α/2 is the angle between the optical axis of the lens and the ray joining its optical center to the edge of the film. Here α is defined to be the angle of view, since it is the angle enclosing the largest object whose image can fit on the film. We want to find the relationship between:

• the half-angle α/2
• the “opposite” side of the right triangle, d/2 (half the film-format dimension)
• the “adjacent” side, S2 (the distance from the lens to the image plane)

Diagram of the geometry: the object at distance S1, the lens (at the camera body) at distance S2 from the image plane of dimension d, and the angle of view α.

Using basic trigonometry, we find:

tan(α/2) = (d/2) / S2

which we can solve for α, giving:

α = 2 arctan(d / (2 S2))

To project a sharp image of distant objects, S2 needs to be equal to the focal length, F , which is attained by setting the lens for infinity focus. Then the angle of view is given by:

α = 2 arctan(d / (2f)), where f = F

Note that the angle of view varies slightly when the focus is not at infinity (see breathing (lens)); rearranging the lens equation gives S2 = S1 f / (S1 − f).

Macro photography For macro photography, we cannot neglect the difference between S2 and F . From the thin lens formula,

1/F = 1/S1 + 1/S2

From the definition of magnification, m = S2/S1 , we can substitute S1 and with some algebra find:

S2 = F · (1 + m)

Defining f = S2 as the “effective focal length”, we get the formula presented above:

α = 2 arctan(d / (2f)), where f = F · (1 + m).
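A numerical sketch of the macro correction (hypothetical values): at 1:2 magnification (m = 0.5), the effective focal length grows to 1.5F and the angle of view shrinks by roughly a third, consistent with the reduction noted earlier.

```python
import math

# Effective focal length in macro work: f = F * (1 + m),
# which narrows the angle of view alpha = 2 * arctan(d / (2 f)).

def aov_deg(d, f):
    return math.degrees(2 * math.atan(d / (2 * f)))

F, d, m = 50.0, 36.0, 0.5     # 50 mm lens, 36 mm horizontal frame, 1:2 magnification
f_eff = F * (1 + m)           # 75 mm effective focal length
print(aov_deg(d, F))          # distant focus: ~39.6 degrees
print(aov_deg(d, f_eff))      # 1:2 macro: ~27.0 degrees
```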


7.1.2 Measuring a camera’s field of view

In the optical instrumentation industry the term field of view (FOV) is most often used, though the measurements are still expressed as angles.[8] Optical tests are commonly used for measuring the FOV of UV, visible, and infrared (wavelengths about 0.1–20 µm in the electromagnetic spectrum) sensors and cameras. The purpose of this test is to measure the horizontal and vertical FOV of a lens and sensor used in an imaging system, when the lens focal length or sensor size is not known (that is, when the calculation above is not immediately applicable). Although this is one typical method that the optics industry uses to measure the FOV, many other methods exist. UV/visible light from an integrating sphere (or another source such as a black body) is focused onto a square test target at the focal plane of a collimator (the mirrors in the diagram), such that a virtual image of the test target will be seen infinitely far away by the camera under test. The camera under test senses a real image of the virtual image of the target, and the sensed image, which includes the target, is displayed on a monitor, where it can be measured.[9] Dimensions of the full image display and of the portion of the image that is the target are determined by inspection (measurements are typically in pixels, but can just as well be inches or cm).

(In the sensed image, D denotes the dimension of the full image display and d the dimension of the target image.)

The collimator’s distant virtual image of the target subtends a certain angle, referred to as the angular extent of the target, that depends on the collimator focal length and the target size. Assuming the sensed image includes the whole target, the angle seen by the camera, its FOV, is this angular extent of the target times the ratio of full image size to target image size.[10] The target’s angular extent is: 7.1. ANGLE OF VIEW 101

Schematic of collimator-based optical apparatus used in measuring the FOV of a camera.

α = 2 arctan(L / (2 fc))

where L is the dimension of the target and fc is the focal length of the collimator.

The total field of view is then approximately:

FOV = α · D/d

or more precisely, if the imaging system is rectilinear:

FOV = 2 arctan(L D / (2 fc d))

This calculation could be a horizontal or a vertical FOV, depending on how the target and image are measured.
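The collimator procedure above can be sketched numerically; the bench values here (target size, collimator focal length, pixel counts) are hypothetical:

```python
import math

# FOV from a collimator measurement: a target of size L at collimator focal
# length fc subtends alpha = 2 arctan(L / (2 fc)); scaling by the ratio of
# full image size D to target image size d gives the camera's FOV.

def target_angle_deg(L, fc):
    return math.degrees(2 * math.atan(L / (2 * fc)))

def fov_approx_deg(L, fc, D, d):
    return target_angle_deg(L, fc) * D / d          # proportional approximation

def fov_rectilinear_deg(L, fc, D, d):
    return math.degrees(2 * math.atan(L * D / (2 * fc * d)))  # exact rectilinear form

# Hypothetical bench values: 50 mm target, 500 mm collimator focal length,
# target spans 100 of 640 horizontal pixels on the monitor.
L, fc, D, d = 50.0, 500.0, 640.0, 100.0
print(fov_approx_deg(L, fc, D, d))       # ~36.6 degrees
print(fov_rectilinear_deg(L, fc, D, d))  # ~35.5 degrees
```

The two results diverge as the FOV grows, which is why the rectilinear form is preferred for wide fields.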

7.1.3 Lens types and effects

Monitor display of sensed image from the camera under test

Focal length

Lenses are often referred to by terms that express their angle of view:

• Fisheye lenses: typical focal lengths are between 8 mm and 10 mm for circular images, and 15–16 mm for full-frame images; up to 180° and beyond.
• A circular fisheye lens (as opposed to a full-frame fisheye) is an example of a lens where the angle of coverage is less than the angle of view. The image projected onto the film is circular because the diameter of the image projected is narrower than that needed to cover the widest portion of the film.
• Ultra wide-angle lenses: rectilinear lenses with a focal length shorter than 24 mm in 35 mm film format; here 14 mm gives 114° and 24 mm gives 84°.
• Wide-angle lenses (24–35 mm in 35 mm film format) cover between 84° and 64°.
• Normal, or standard, lenses (36–60 mm in 35 mm film format) cover between 62° and 40°.
• Long-focus lenses (any lens with a focal length greater than the diagonal of the film or sensor used)[11] generally have an angle of view of 35° or less.[12] Since photographers usually only encounter the telephoto subtype,[13] they are referred to in common photographic parlance as:
• “Medium telephoto”: focal lengths of 85 mm to 135 mm in 35 mm film format, covering between 30° and 10°.[14]
• “Super telephoto” (over 300 mm in 35 mm film format): generally covering between 8° and less than 1°.[14]

Zoom lenses are a special case wherein the focal length, and hence angle of view, of the lens can be altered mechan- ically without removing the lens from the camera.

Characteristics

For a given camera–subject distance, longer lenses magnify the subject more. For a given subject magnification (and thus different camera–subject distances), longer lenses appear to compress distance; wider lenses appear to expand the distance between objects. Another result of using a wide angle lens is a greater apparent perspective distortion when the camera is not aligned perpendicularly to the subject: parallel lines converge at the same rate as with a normal lens, but converge more due to the wider total field. For example, buildings appear to be falling backwards much more severely when the camera is pointed upward from ground level than they would if photographed with a normal lens at the same distance from the subject, because more of the subject building is visible in the wide-angle shot. Because different lenses generally require a different camera–subject distance to preserve the size of a subject, changing the angle of view can indirectly distort perspective, changing the apparent relative size of the subject and foreground. If the subject image size remains the same, then at any given aperture all lenses, wide angle and long lenses, will give the same depth of field.[15]

Examples

An example of how lens choice affects angle of view.

7.1.4 Common lens angles of view

This table shows the diagonal, horizontal, and vertical angles of view, in degrees, for lenses producing rectilinear images, when used with 36 mm × 24 mm format (that is, 135 film or full-frame 35mm digital using width 36 mm, height 24 mm, and diagonal 43.3 mm for d in the formula above).[16] Digital compact cameras sometimes state the focal lengths of their lenses in 35mm equivalents, which can be used in this table. For comparison, the human visual system perceives an angle of view of about 140° by 80°.[17]

Five images using 24, 28, 35, 50 and 72mm equivalent zoom lengths, portrait format, to illustrate angles of view [18]

Five images using 24, 28, 35, 50 and 72mm equivalent step zoom function, to illustrate angles of view

7.1.5 Sensor size effects (“crop factor”)

Main article: Crop factor

As noted above, a camera’s angle of view depends not only on the lens, but also on the sensor used. Digital sensors are usually smaller than 35 mm film, causing the lens to behave as a longer-focal-length lens would, with a narrower angle of view than with 35 mm film, by a constant factor for each sensor (called the crop factor). In everyday digital cameras, the crop factor can range from around 1 (professional digital SLRs), to 1.6 (mid-market SLRs), to around 3 to 6 for compact cameras. So a standard 50 mm lens for 35 mm photography acts like a 50 mm standard “film” lens even on a professional digital SLR, but would act closer to an 80 mm lens (1.6 × 50 mm) on many mid-market DSLRs, and the 40 degree angle of view of a standard 50 mm lens on a film camera is equivalent to a 28–35 mm lens on many digital SLRs. The table below shows the horizontal, vertical and diagonal angles of view, in degrees, when used with 22.2 mm × 14.8 mm format (that is, the common mid-market DSLR APS-C frame size) with a diagonal of 26.7 mm.
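The crop-factor relationship can be sketched as follows, using the APS-C diagonal quoted above (a sketch; the 50 mm lens is just an example):

```python
import math

# A sensor smaller than 36 x 24 mm narrows the angle of view by the crop
# factor: a lens of focal length f on the small sensor gives the same angle
# as a lens of focal length crop * f on full frame.

def aov_deg(d, f):
    return math.degrees(2 * math.atan(d / (2 * f)))

FF_DIAG = 43.3                      # 35 mm full-frame diagonal, mm
APSC_DIAG = 26.7                    # APS-C diagonal from the text, mm
crop = FF_DIAG / APSC_DIAG          # ~1.62

f = 50.0
print(round(crop, 2))               # ~1.62
print(aov_deg(APSC_DIAG, f))        # 50 mm on APS-C (diagonal angle)
print(aov_deg(FF_DIAG, crop * f))   # same angle as the previous line
```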

7.1.6 Cinematography and video gaming

Modifying the angle of view over time (known as zooming), is a frequently used cinematic technique, often combined with camera movement to produce a "dolly zoom" effect, made famous by the film Vertigo. Using a wide angle of view can exaggerate the camera’s perceived speed, and is a common technique in tracking shots, phantom rides, and racing video games. See also Field of view in video games.

7.1.7 References and notes

[1] Tim Dobbert (November 2012). Matchmoving: The Invisible Art of Camera Tracking, 2nd Edition. John Wiley & Sons. p. 116.

[2] Neil Wayne Northey (September 1916). Frank V. Chambers, ed. “The Angle of View of your Lens”. The Camera. Columbia Photographic Society. 20 (9).

[3] http://www.the-digital-picture.com/reviews/canon-ef-15mm-f-2.8-fisheye-lens-review.aspx

[4] Ernest McCollough (1893). “Photographic Topography”. Industry: A Monthly Magazine Devoted to Science, Engineering and Mechanic Arts. Industrial Publishing Company, San Francisco: 399–406.

[5] Paul van Walree (2009). “Center of perspective”. Retrieved 24 January 2010.

[6] CCTV Field of View Camera Lens Calculations by JVSG, December, 2007

[7] Kerr, Douglas A. (2008). “The Proper Pivot Point for Panoramic Photography” (PDF). The Pumpkin. Retrieved 2014-03- 20.

[8] Holst, G.C. (1998). Testing and Evaluation of Infrared Imaging Systems (2nd ed.). Florida: JCD Publishing; Washington: SPIE.

[9] Mazzetta, J.A.; Scopatz, S.D. (2007). Automated Testing of Ultraviolet, Visible, and Infrared Sensors Using Shared Optics. Infrared Imaging Systems: Design Analysis, Modeling, and Testing XVIII,Vol. 6543, pp. 654313-1 654313-14

[10] Electro Optical Industries, Inc.(2005). EO TestLab Methadology. In Education/Ref. “Archived copy”. Archived from the original on 2008-08-28. Retrieved 2008-05-22..

[11] Sidney F. Ray, Applied photographic optics: lenses and optical systems for photography, page 294

[12] Lynne Warren, Encyclopedia of 20th century photography, page 211

[13] Michael Langford, Basic photography, page 83

[14] photographywebsite.co.uk – Lens Types Explained

[15] Reichmann, Michael. “Do Wide Angle Lenses Really Have Greater Depth of Field Than Telephotos?".

[16] However, most interchangeable-lens digital cameras do not use 24x36 mm image sensors and therefore produce narrower angles of view than set out in the table. See crop factor and the subtopic digital camera issues in the article on wide-angle lenses for further discussion.

[17] “Archived copy”. Archived from the original on 2013-07-04. Retrieved 2014-04-27.

[18] The image examples use a 5.1–15.3 mm lens, which is called a 24 mm 3× zoom by the producer (Ricoh Caplio GX100)

7.1.8 See also

• 35 mm equivalent focal length
• Camera angle
• Camera coverage
• Camera operator
• Cinematic techniques
• Field of view
• Filmmaking
• Multiple-camera setup
• Single-camera setup
• Video production
• Image sensor format
• Crop factor

7.1.9 External links

• Simple Explanation of Angle of View and Focal Length
• Angle of View on digital SLR cameras with reduced sensor size
• Focal Length and Angle of View

How focal length affects perspective: varying focal lengths at identical field size, achieved by different camera–subject distances. Notice that the shorter the focal length and the larger the angle of view, the greater the perspective distortion and size differences.

7.2 Field of view

For the same phenomenon in photography, see Angle of view. For other uses, see Field of view (disambiguation).

The field of view is the extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors it is a solid angle through which a detector is sensitive to electromagnetic radiation.

Illustration of horizontal and vertical field of view.

Angle of view can be measured horizontally, vertically, or diagonally.

A 360-degree panorama of the Milky Way at the Very Large Telescope. Such a panorama shows the entire field of view (FOV) in a single image. An observer would perceive the Milky Way like an arc of stars spanning horizon to horizon – with the entire FOV mapped on a single image this arc appears as two streams of stars seemingly cascading down like waterfalls.[1]

7.2.1 Humans and animals

In the context of human vision, the term “field of view” is typically used in the sense of a restriction to what is visible by external apparatus, like spectacles[2] or virtual reality goggles. Note that eye movements do not change the field of view. If the analogy of the eye’s retina working as a sensor is drawn upon, the corresponding concept in human (and much of animal) vision is the visual field.[3] It is defined as “the number of degrees of visual angle during stable fixation of the eyes”.[4] Note that eye movements are excluded in the definition. Different animals have different visual fields, depending, among others, on the placement of the eyes. Humans have an almost 180-degree forward-facing horizontal diameter of their visual field, while some birds have a complete or nearly complete 360-degree visual field. The vertical range of the visual field in humans is typically around 135 degrees. The range of visual abilities is not uniform across the visual field, and varies from animal to animal. For example, binocular vision, which is the basis for stereopsis and is important for depth perception, covers 114 degrees (horizontally) of the visual field in humans;[5] the remaining peripheral 60–70 degrees have no binocular vision (because only one eye can see those parts of the visual field). Some birds have a scant 10 or 20 degrees of binocular vision. Similarly, color vision and the ability to perceive shape and motion vary across the visual field; in humans color vision is concentrated in the center of the visual field, while shape and motion perception tend to be much stronger in the periphery. The physiological basis for that is the much higher concentration of color-sensitive cone cells and color-sensitive parvocellular retinal ganglion cells in the fovea – the central region of the retina – in comparison to the higher concentration of color-insensitive rod cells and motion-sensitive magnocellular retinal ganglion cells in the visual periphery. Since cone cells require considerably brighter light sources to be activated, the result of this distribution is that peripheral vision is much more sensitive at night relative to foveal vision.[3]

7.2.2 Conversions

Many optical instruments, particularly binoculars or spotting scopes, are advertised with their field of view specified in one of two ways: angular field of view, and linear field of view. Angular field of view is typically specified in degrees, while linear field of view is a ratio of lengths. For example, binoculars with a 5.8 degree (angular) field of view might be advertised as having a (linear) field of view of 102 mm per meter. As long as the FOV is less than about 10 degrees or so, the following approximation formulas allow one to convert between linear and angular field of view. Let A be the angular field of view in degrees. Let M be the linear field of view in millimeters per meter. Then, using the small-angle approximation:

A ≈ (360° / 2π) · (M / 1000) ≈ 0.0573 × M

M ≈ (2π · 1000 / 360°) · A ≈ 17.45 × A
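The small-angle conversions can be sketched directly, using the binocular example from the text:

```python
import math

# Small-angle conversion between angular FOV (A, degrees) and linear FOV
# (M, millimetres per metre): M/1000 approximates the angle in radians.

def linear_to_angular_deg(M):
    return math.degrees(M / 1000.0)          # ≈ 0.0573 * M

def angular_to_linear_mm_per_m(A):
    return math.radians(A) * 1000.0          # ≈ 17.45 * A

print(linear_to_angular_deg(102.0))      # ~5.8 degrees (the binocular example)
print(angular_to_linear_mm_per_m(5.8))   # ~101 mm per metre
```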

7.2.3 Machine vision

In machine vision the lens focal length and image sensor size set up the fixed relationship between the field of view and the working distance. Field of view is the area of the inspection captured on the camera’s imager. The size of the field of view and the size of the camera’s imager directly affect the image resolution (one determining factor in accuracy). Working distance is the distance between the back of the lens and the target object.
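A first-order sizing sketch of this relationship, under a pinhole approximation with the working distance much larger than the focal length (the sensor, lens, and distance values are hypothetical):

```python
# First-order machine-vision sizing (pinhole approximation, working
# distance >> focal length): the field of view scales linearly with
# working distance for a fixed sensor and lens.

def fov_mm(sensor_mm, working_distance_mm, focal_length_mm):
    return sensor_mm * working_distance_mm / focal_length_mm

# Hypothetical setup: 8.8 mm wide sensor, 25 mm lens, 500 mm working distance
print(fov_mm(8.8, 500.0, 25.0))   # 176.0 mm horizontal field of view
```

Doubling the working distance doubles the field of view, which is the trade-off against resolution mentioned above.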

7.2.4 Remote sensing

In remote sensing, the solid angle through which a detector element (a pixel sensor) is sensitive to electromagnetic radiation at any one time, is called instantaneous field of view or IFOV. A measure of the spatial resolution of a remote sensing imaging system, it is often expressed as dimensions of visible ground area, for some known sensor altitude.[6][7] Single pixel IFOV is closely related to concept of resolved pixel size, ground resolved distance, ground sample distance and modulation transfer function.

7.2.5 Astronomy

In astronomy the field of view is usually expressed as an angular area viewed by the instrument, in square degrees, or for higher magnification instruments, in square arc-minutes. For reference, the Wide Field Channel on the Advanced Camera for Surveys on the Hubble Space Telescope has a field of view of 10 sq. arc-minutes, and the High Resolution Channel of the same instrument has a field of view of 0.15 sq. arc-minutes. Ground-based survey telescopes have much wider fields of view. The photographic plates used by the UK Schmidt Telescope had a field of view of 30 sq. degrees. The 1.8 m (71 in) Pan-STARRS telescope, with the most advanced digital camera to date, has a field of view of 7 sq. degrees. In the near infra-red, WFCAM on UKIRT has a field of view of 0.2 sq. degrees and the forthcoming VISTA telescope will have a field of view of 0.6 sq. degrees. Until recently digital cameras could only cover a small field of view compared to photographic plates, although they beat photographic plates in quantum efficiency, linearity and dynamic range, as well as being much easier to process.

7.2.6 Photography

Main article: Angle of view

In photography, the field of view is that part of the world that is visible through the camera at a particular position and orientation in space; objects outside the FOV when the picture is taken are not recorded in the photograph. It is most often expressed as the angular size of the view cone, as an angle of view. For a normal (rectilinear) lens, the field of view can be calculated as FOV = 2 arctan(SensorSize/(2f)), where f is the focal length.
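As a quick numeric check of that formula, here is a small sketch; the 50 mm lens on a 36 mm-wide full-frame sensor is an assumed example, not taken from the text:

```python
import math

def horizontal_fov_deg(sensor_size_mm, focal_length_mm):
    # FOV = 2 · arctan(SensorSize / (2f)) for a rectilinear lens
    return math.degrees(2 * math.atan(sensor_size_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(36, 50))  # ≈ 39.6 degrees
```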

7.2.7 Video games

Main article: Field of view in video games

The field of view in video games refers to the part you see of a game world, which is dependent on the scaling method used.[8]

7.2.8 See also

• Field of regard
• Panorama
• Perimetry
• Peripheral vision
• Visual perception
• Useful field of view
• 35 mm equivalent focal length
• Angle of view
• Crop factor
• Image sensor format

7.2.9 References

[1] “Cascading Milky Way”. ESO Picture of the Week. Retrieved 11 June 2012.

[2] Alfano, P.L.; Michel, G.F. (1990). “Restricting the field of view: Perceptual and performance effects”. Perceptual and Motor Skills. 70: 35–45. doi:10.2466/pms.1990.70.1.35.

[3] Strasburger, Hans; Rentschler, Ingo; Jüttner, Martin (2011). “Peripheral vision and pattern recognition: a review”. Journal of Vision. 11 (5): 1–82. doi:10.1167/11.5.13.

[4] Strasburger, Hans; Pöppel, Ernst (2002). Visual Field. In G. Adelman & B.H. Smith (Eds): Encyclopedia of Neuroscience; 3rd edition, on CD-ROM. Elsevier Science B.V., Amsterdam, New York.

[5] Howard, Ian P.; Rogers, Brian J. (1995). Binocular vision and stereopsis. New York: Oxford University Press. p. 32. ISBN 0-19-508476-4. Retrieved 3 June 2014.

[6] Oxford Reference. “Quick Reference: instantaneous field of view”. Oxford University Press. Retrieved 13 December 2013.

[7] Campbell, James B.; Wynne, Randolph H. (2011). Introduction to remote sensing (5th ed.). New York: Guilford Press. p. 261. ISBN 160918176X.

[8] Feng Zhu School of Design – Field of View in Games

Chapter 8

Day 8

8.1 Depth of field

“Focal depth” redirects here. For the seismology term, see Depth of focus (tectonics). In optics, particularly as it relates to film and photography, depth of field (DOF), also called focus range or effective focus range, is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image. Although a lens can precisely focus at only one distance at a time, the decrease in sharpness is gradual on each side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions. In some cases, it may be desirable to have the entire image sharp, and a large DOF is appropriate. In other cases, a small DOF may be more effective, emphasizing the subject while de-emphasizing the foreground and background. In cinematography, a large DOF is often called deep focus, and a small DOF is often called shallow focus.

Simulation of the effect of changing a camera’s aperture in half-stops (at left) and from zero to infinity (at right)

8.1.1 Circle of confusion criterion for depth of field

Precise focus is possible at only one distance; at that distance, a point object will produce a point image.[1] At any other distance, a point object is defocused, and will produce a blur spot shaped like the aperture, which for the purpose of analysis is usually assumed to be circular. When this circular spot is sufficiently small, it is indistinguishable from a point, and appears to be in focus; it is rendered as “acceptably sharp”. The diameter of the circle increases with distance from the point of focus; the largest circle that is indistinguishable from a point is known as the acceptable circle of confusion, or informally, simply as the circle of confusion. The acceptable circle of confusion is influenced by visual acuity, viewing conditions, and the amount by which the image is enlarged (Ray 2000, 52–53). The increase of the circle diameter with defocus is gradual, so the limits of depth of field are not hard boundaries between sharp and unsharp.

Motion picture

For 35 mm motion pictures, the image area on the film is roughly 22 mm by 16 mm. The limit of tolerable error was traditionally set at 0.05 mm (0.002 in) diameter, while for 16 mm film, where the size is about half as large, the tolerance is stricter, 0.025 mm (0.001 in).[2] More modern practice for 35 mm productions set the circle of confusion limit at 0.025 mm (0.001 in).[3]


Still photography

For full-frame 35 mm still photography, the circle of confusion is usually chosen to be about 1/30 mm. Because the human eye is capable of resolving a spot with diameter about 1/4 mm at 25 cm distance from the viewing eye, and the 35 mm negative needs about an 8× enlargement to make an 8×10 inch print, it is sometimes argued that the criterion should be about 1/32 mm on the 35 mm negative, but 1/30 mm is close enough.[4] For 6×6 cm format enlarged to 8×8 inches and viewed at 25 cm, the enlargement is 3.4×, hence the circle of confusion criterion is about 1/(3.4 × 4) ≈ 0.07 mm. Similarly, for subminiature photography (for example the Tessina) with a frame format of 14×21 mm, 8×12 inches corresponds to a 14.5× enlargement, hence a circle of confusion limit of about 0.017 mm. Many sources propose CoC limits as a fraction of the film format diagonal, typically 1/1000 in the early twentieth century to 1/1500 more recently. The three formats above at fraction 1/1500 would use 0.029 (about 1/32), 0.056, and 0.017 mm.[5]
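The diagonal-fraction rule can be checked numerically. The frame dimensions below (36×24 mm for full-frame, a nominal 60×60 mm for 6×6, and 14×21 mm for the Tessina) are assumptions chosen to match the figures in the text:

```python
import math

def coc_limit(width_mm, height_mm, fraction=1500):
    # Circle of confusion limit as (format diagonal) / fraction
    return math.hypot(width_mm, height_mm) / fraction

print(round(coc_limit(36, 24), 3))  # 0.029 mm, about 1/32 mm
print(round(coc_limit(60, 60), 3))  # close to the 0.056 mm quoted above
print(round(coc_limit(14, 21), 3))  # 0.017 mm
```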

Object field methods

Traditional depth-of-field formulas and tables assume equal circles of confusion for near and far objects. Some authors, such as Merklinger (1992),[6] have suggested that distant objects often need to be much sharper to be clearly recognizable, whereas closer objects, being larger on the film, do not need to be so sharp. The loss of detail in distant objects may be particularly noticeable with extreme enlargements. Achieving this additional sharpness in distant objects usually requires focusing beyond the hyperfocal distance, sometimes almost at infinity. For example, if photographing a cityscape with a traffic bollard in the foreground, this approach, termed the object field method by Merklinger, would recommend focusing very close to infinity, and stopping down to make the bollard sharp enough. With this approach, foreground objects cannot always be made perfectly sharp, but the loss of sharpness in near objects may be acceptable if recognizability of distant objects is paramount. Other authors (Adams 1980, 51) have taken the opposite position, maintaining that slight unsharpness in foreground objects is usually more disturbing than slight unsharpness in distant parts of a scene. Moritz von Rohr also used an object field method, but unlike Merklinger, he used the conventional criterion of a maximum circle of confusion diameter in the image plane, leading to unequal front and rear depths of field.

8.1.2 Factors affecting depth of field

Several other factors, such as subject matter, movement, camera-to-subject distance, lens focal length, selected lens f-number, format size, and circle of confusion criteria also influence when a given defocus becomes noticeable. The combination of focal length, subject distance, and format size defines magnification at the film / sensor plane. DOF is determined by subject magnification at the film / sensor plane and the selected lens aperture or f-number. For a given f-number, increasing the magnification, either by moving closer to the subject or using a lens of greater focal length, decreases the DOF; decreasing magnification increases DOF. For a given subject magnification, increasing the f-number (decreasing the aperture diameter) increases the DOF; decreasing f-number decreases DOF. If the original image is enlarged to make the final image, the circle of confusion in the original image must be smaller than that in the final image by the ratio of enlargement. Cropping an image and enlarging to the same size final image as an uncropped image taken under the same conditions is equivalent to using a smaller format under the same conditions, so the cropped image has less DOF. (Stroebel 1976, 134, 136–37). When focus is set to the hyperfocal distance, the DOF extends from half the hyperfocal distance to infinity, and the DOF is the largest possible for a given f-number.

Relationship of DOF to format size

The comparative DOFs of two different format sizes depend on the conditions of the comparison. The DOF for the smaller format can be either more than or less than that for the larger format. In the discussion that follows, it is assumed that the final images from both formats are the same size, are viewed from the same distance, and are judged with the same circle of confusion criterion. (Derivations of the effects of format size are given under Derivation of the DOF formulae.)

“Same picture” for both formats When the “same picture” is taken in two different format sizes from the same distance at the same f-number with lenses that give the same angle of view, and the final images (e.g., in prints, or on a projection screen or electronic display) are the same size, DOF is, to a first approximation, inversely proportional to format size (Stroebel 1976, 139). Though commonly used when comparing formats, the approximation is valid only when the subject distance is large in comparison with the focal length of the larger format and small in comparison with the hyperfocal distance of the smaller format. Moreover, the larger the format size, the longer a lens will need to be to capture the same framing as a smaller format. In motion pictures, for example, a frame with a 12 degree horizontal field of view will require a 50 mm lens on 16 mm film, a 100 mm lens on 35 mm film, and a 250 mm lens on 65 mm film. Conversely, using the same focal length lens with each of these formats will yield a progressively wider image as the film format gets larger: a 50 mm lens has a horizontal field of view of 12 degrees on 16 mm film, 23.6 degrees on 35 mm film, and 55.6 degrees on 65 mm film. Therefore, because the larger formats require longer lenses than the smaller ones, they will accordingly have a smaller depth of field. Compensations in exposure, framing, or subject distance need to be made in order to make one format look like it was filmed in another format.

Same focal length for both formats Many small-format digital SLR camera systems allow using many of the same lenses on both full-frame and “cropped format” cameras. If, for the same focal length setting, the subject distance is adjusted to provide the same field of view at the subject, at the same f-number and final-image size, the smaller format has greater DOF, as with the “same picture” comparison above. If pictures are taken from the same distance using the same f-number, same focal length, and the final images are the same size, the smaller format has less DOF. If pictures taken from the same subject distance using the same focal length are given the same enlargement, both final images will have the same DOF. The pictures from the two formats will differ because of the different angles of view. If the larger format is cropped to the captured area of the smaller format, the final images will have the same angle of view, have been given the same enlargement, and have the same DOF.

Same DOF for both formats In many cases, the DOF is fixed by the requirements of the desired image. For a given DOF and field of view, the required f-number is proportional to the format size. For example, if a 35 mm camera required f/11, a 4×5 camera would require f/45 to give the same DOF. For the same ISO speed, the exposure time on the 4×5 would be sixteen times as long; if the 35 mm camera required 1/250 second, the 4×5 camera would require 1/15 second. The longer exposure time with the larger camera might result in motion blur, especially with windy conditions, a moving subject, or an unsteady camera. Adjusting the f-number to the camera format is equivalent to maintaining the same absolute aperture diameter; when set to the same absolute aperture diameters, both formats have the same DOF. Comparison of fast standard lenses in the four main formats, when used for portraiture with appropriate circles of confusion to produce an uncropped image at 10×8 inches to be viewed at 25 cm, shows that the following settings with similar aperture diameters produce similar DoF:

• 6×7 medium format: a 90 mm lens set to f/2.8 (32 mm aperture) gives a hyperfocal distance (H) of 49 m. If focussed on a subject at 2 m (s), the depth of field ranges from Dn = 1.921 m to Df = 2.085 m (DoF = 163 mm).
• 35 mm (FX): a 50 mm lens set to f/2 (25 mm aperture) gives H = 43 m. If focussed to 2 m, the DoF is 1.911 to 2.097 m (186 mm).
• APS-C (DX): a 35 mm lens set to f/1.4 (25 mm aperture) gives H = 46 m. If focussed to 2 m, the DoF is 1.917 to 2.091 m (174 mm).
• Four Thirds: a 25 mm lens set to f/0.95 (26 mm aperture) gives H = 44 m. If focussed to 2 m, the DoF is 1.913 to 2.095 m (183 mm).
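These figures can be reproduced with the standard near/far limit formulas, D_N = Hs/(H + s) and D_F = Hs/(H − s), reusing the hyperfocal distances quoted above (the last digit may differ slightly from the text because of rounding):

```python
setups = [
    # (format, focal length mm, f-number, quoted hyperfocal distance m)
    ("6x7",         90, 2.8,  49),
    ("35mm FX",     50, 2.0,  43),
    ("APS-C DX",    35, 1.4,  46),
    ("Four Thirds", 25, 0.95, 44),
]
s = 2.0  # subject distance in metres
results = {}
for name, f, N, H in setups:
    aperture = f / N          # absolute aperture diameter, mm
    d_near = H * s / (H + s)  # near limit of DOF, m
    d_far = H * s / (H - s)   # far limit of DOF, m
    results[name] = (aperture, d_near, d_far)
    print(f"{name}: {aperture:.0f} mm aperture, "
          f"DoF {d_near:.3f}-{d_far:.3f} m ({(d_far - d_near) * 1000:.0f} mm)")
```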

For any of these, doubling the f-number will approximately double the depth of field.

Camera movements and DOF

When the lens axis is perpendicular to the image plane, as is normally the case, the plane of focus (POF) is parallel to the image plane, and the DOF extends between parallel planes on either side of the POF. When the lens axis is not perpendicular to the image plane, the POF is no longer parallel to the image plane; the ability to rotate the POF is known as the Scheimpflug principle. Rotation of the POF is accomplished with camera movements (tilt, a rotation of the lens about a horizontal axis, or swing, a rotation about a vertical axis). Tilt and swing are available on most view cameras, and are also available with specific lenses on some small- and medium-format cameras. When the POF is rotated, the near and far limits of DOF are no longer parallel; the DOF becomes wedge-shaped, with the apex of the wedge nearest the camera (Merklinger 1993, 31–32; Tillmanns 1997, 71). With tilt, the height of the DOF increases with distance from the camera; with swing, the width of the DOF increases with distance. In some cases, rotating the POF can better fit the DOF to the scene, and achieve the required sharpness at a smaller f-number. Alternatively, rotating the POF, in combination with a small f-number, can minimize the part of an image that is within the DOF.

Effect of lens aperture

For a given subject framing and camera position, the DOF is controlled by the lens aperture diameter, which is usually specified as the f-number, the ratio of lens focal length to aperture diameter. Reducing the aperture diameter (increasing the f-number) increases the DOF because the blur spot produced by a given amount of defocus shrinks in proportion to the aperture diameter;[7] however, it also reduces the amount of light transmitted, and increases diffraction, placing a practical limit on the extent to which DOF can be increased by reducing the aperture diameter. Motion pictures make only limited use of this control; to produce a consistent image quality from shot to shot, cinematographers usually choose a single aperture setting for interiors and another for exteriors, and adjust exposure through the use of camera filters or light levels. Aperture settings are adjusted more frequently in still photography, where variations in depth of field are used to produce a variety of special effects.

Aperture = f/1.4. DOF=0.8 cm

Aperture = f/4.0. DOF=2.2 cm

Aperture = f/22. DOF=12.4 cm

Depth of field for different values of aperture using a 50 mm objective lens and a full-frame DSLR camera. Focus point is on the first blocks column.[8]

Digital techniques affecting DOF

The advent of digital technology in photography has provided additional means of controlling the extent of image sharpness; some methods allow extended DOF that would be impossible with traditional techniques, and some allow the DOF to be determined after the image is made. Focus stacking is a digital image processing technique which combines multiple images taken at different focus distances to give a resulting image with a greater depth of field than any of the individual source images. Available programs for multi-shot DOF enhancement include Adobe Photoshop, Syncroscopy AutoMontage, PhotoAcute Studio, Helicon Focus and CombineZ. Getting sufficient depth of field can be particularly challenging in macro photography. The images to the right illustrate the extended DOF that can be achieved by combining multiple images. Wavefront coding is a method that convolves rays in such a way that it provides an image where fields are in focus simultaneously with all planes out of focus by a constant amount. A plenoptic camera uses a microlens array to capture 4D light field information about a scene. Colour apodization is a technique combining a modified lens design with image processing to achieve an increased depth of field. The lens is modified such that each colour channel has a different lens aperture. For example, the red channel may be f/2.4, green may be f/2.4, whilst the blue channel may be f/5.6. Therefore, the blue channel will have a greater depth of field than the other colours. The image processing identifies blurred regions in the red and green channels and in these regions copies the sharper edge data from the blue channel. The result is an image that combines the best features from the different f-numbers (Kay 2011). In 2013, Nokia implemented DOF control in some of its high-end smartphones, called Refocus, which can change a picture’s depth of field after the picture is taken. It works best when there are close-up and distant objects in the frame.[9]
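A minimal sketch of the focus-stacking idea described above (not the algorithm of any of the named products): per pixel, keep the frame with the strongest local detail, using the absolute value of a simple Laplacian as a crude sharpness measure. The 8×8 toy frames are contrived so each is "sharp" (textured) in one half.

```python
import numpy as np

def laplacian_abs(img):
    # 4-neighbour Laplacian magnitude as a per-pixel sharpness proxy
    return np.abs(np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                  np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def focus_stack(images):
    # Per pixel, keep the frame whose neighbourhood shows the most detail.
    sharpness = np.stack([laplacian_abs(im) for im in images])
    best = sharpness.argmax(axis=0)
    stacked = np.take_along_axis(np.stack(images), best[None], axis=0)[0]
    return stacked, best

# Toy demo: frame 0 is textured on the left half, frame 1 on the right half.
checker = (np.indices((8, 4)).sum(axis=0) % 2).astype(float)
frame0 = np.zeros((8, 8)); frame0[:, :4] = checker
frame1 = np.zeros((8, 8)); frame1[:, 4:] = checker
stacked, best = focus_stack([frame0, frame1])
```

Production tools additionally align the frames and smooth the selection map to avoid visible seams; this sketch only shows the per-pixel selection step.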

Diffraction and DOF

See also: Rayleigh length

If the camera position and image framing (i.e., angle of view) have been chosen, the only means of controlling DOF is the lens aperture. Most DOF formulas imply that any arbitrary DOF can be achieved by using a sufficiently large f-number. Because of diffraction, however, this isn't really true. Once a lens is stopped down to where most aberrations are well corrected, stopping down further will decrease sharpness in the plane of focus. At the DOF limits, however, further stopping down decreases the size of the defocus blur spot, and the overall sharpness may still increase. Eventually, the defocus blur spot becomes negligibly small, and further stopping down serves only to decrease sharpness even at DOF limits (Gibson 1975, 64). There is thus a tradeoff between sharpness in the POF and sharpness at the DOF limits. But the sharpness in the POF is always greater than that at the DOF limits; if the blur at the DOF limits is imperceptible, the blur in the POF is imperceptible as well. For general photography, diffraction at DOF limits typically becomes significant only at fairly large f-numbers; because large f-numbers typically require long exposure times, motion blur may cause greater loss of sharpness than the loss from diffraction. The size of the diffraction blur spot depends on the effective f-number N(1 + m), where m is the magnification, so diffraction is a greater issue in close-up photography, and the tradeoff between DOF and overall sharpness can become quite noticeable (Gibson 1975, 53; Lefkowitz 1979, 84).

8.1.3 DOF scales

Many lenses for small- and medium-format cameras include scales that indicate the DOF for a given focus distance and f-number; the 35 mm lens in the image is typical. That lens includes distance scales in feet and meters; when a marked distance is set opposite the large white index mark, the focus is set to that distance. The DOF scale below the distance scales includes markings on either side of the index that correspond to f-numbers. When the lens is set to a given f-number, the DOF extends between the distances that align with the f-number markings.

Some cameras have the DOF scale not on the lens barrel, but on a focusing knob or dial; for example, the Rolleiflex TLR has its DOF scale on the focusing knob, and the subminiature camera Tessina has a DOF scale on the focusing dial.

8.1.4 Zone focusing

See also: Zone focus

When the 35 mm lens above is set to f/11 and focused at approximately 1.3 m, the DOF (a “zone” of acceptable sharpness) extends from 1 m to 2 m. Conversely, the required focus and f-number can be determined from the desired DOF limits by locating the near and far DOF limits on the lens distance scale and setting focus so that the index mark is centered between the near and far distance marks. The required f-number is determined by finding the markings on the DOF scale that are closest to the near and far distance marks (Ray 1994, 315). For the 35 mm lens above, if it were desired for the DOF to extend from 1 m to 2 m, focus would be set so that index mark was centered between the marks for those distances, and the aperture would be set to f/11. The focus so determined would be about 1.3 m, the approximate harmonic mean of the near and far distances.[10] See the section Focus and f-number from DOF limits for additional discussion. If the marks for the near and far distances fall outside the marks for the largest f-number on the DOF scale, the desired DOF cannot be obtained; for example, with the 35 mm lens above, it is not possible to have the DOF extend from 0.7 m to infinity. The DOF limits can be determined visually, by focusing on the farthest object to be within the DOF and noting the distance mark on the lens distance scale, and repeating the process for the nearest object to be within the DOF. Some distance scales have markings for only a few distances; for example, the 35 mm lens above shows only 3 ft and 5 ft on its upper scale. Using other distances for DOF limits requires visual interpolation between marked distances. Since the distance scale is nonlinear, accurate interpolation can be difficult. In most cases, English and metric distance markings are not coincident, so using both scales to note focused distances can sometimes lessen the need for interpolation. 
Many autofocus lenses have smaller distance and DOF scales and fewer markings than do comparable manual-focus lenses, so that determining focus and f-number from the scales on an autofocus lens may be more difficult than with a comparable manual-focus lens. In most cases, determining these settings using the lens DOF scales on an autofocus lens requires that the lens or camera body be set to manual focus.[11] On a view camera, the focus and f-number can be obtained by measuring the focus spread and performing simple calculations. The procedure is described in more detail in the section Focus and f-number from DOF limits. Some view cameras include DOF calculators that indicate focus and f-number without the need for any calculations by the photographer (Tillmanns 1997, 67–68; Ray 2002, 230–31).
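The zone-focusing arithmetic above can be sketched as follows. The 35 mm focal length and 0.03 mm circle of confusion are assumed values consistent with the lens and format discussed in the text; the near/far limits of 1 m and 2 m reproduce the example:

```python
def zone_focus(d_near, d_far, f, c):
    # All distances in mm. Focus distance: harmonic mean of the DOF limits.
    s = 2 * d_near * d_far / (d_near + d_far)
    # Required hyperfocal distance, from 1/d_near - 1/d_far = 2/H.
    H = 2 * d_near * d_far / (d_far - d_near)
    # Required f-number, from H ≈ f²/(Nc).
    N = f ** 2 / (H * c)
    return s, N

s, N = zone_focus(1000, 2000, 35, 0.03)
print(s, N)  # ≈ 1333 mm at about f/10, i.e. set the next marked stop, f/11
```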

8.1.5 Hyperfocal distance

The hyperfocal distance is the nearest focus distance at which the DOF extends to infinity; focusing the camera at the hyperfocal distance results in the largest possible depth of field for a given f-number (Ray 2000, 55). Focusing beyond the hyperfocal distance does not increase the far DOF (which already extends to infinity), but it does decrease the DOF in front of the subject, decreasing the total DOF. Some photographers consider this wasting DOF; however, see Object field methods above for a rationale for doing so. Focusing on the hyperfocal distance is a special case of zone focusing in which the far limit of DOF is at infinity. If the lens includes a DOF scale, the hyperfocal distance can be set by aligning the infinity mark on the distance scale with the mark on the DOF scale corresponding to the f-number to which the lens is set. For example, with the 35 mm lens shown above set to f/11, aligning the infinity mark with the '11' to the left of the index mark on the DOF scale would set the focus to the hyperfocal distance.

Hyperfocusing

Some cameras have their hyperfocal distance marked on the focus dial. For example, on the Minox LX focusing dial there is a red dot between 2 m and infinity; when the lens is set at the red dot, that is, focused at the hyperfocal distance, the depth of field stretches from 2 m to infinity.

The Zeiss Ikon Contessa camera has 20 ft marked in red, and aperture f/8 marked in red; this is the snapshot hyperfocal setting.

8.1.6 Limited DOF: selective focus

Depth of field can be anywhere from a fraction of a millimeter to virtually infinite. In some cases, such as landscapes, it may be desirable to have the entire image sharp, and a large DOF is appropriate. In other cases, artistic considerations may dictate that only a part of the image be in focus, emphasizing the subject while de-emphasizing the background, perhaps giving only a suggestion of the environment (Langford 1973, 81). For example, a common technique in melodramas and horror films is a closeup of a person’s face, with someone just behind that person visible but out of focus. A portrait or close-up still photograph might use a small DOF to isolate the subject from a distracting background. The use of limited DOF to emphasize one part of an image is known as selective focus, differential focus or shallow focus. Although a small DOF implies that other parts of the image will be unsharp, it does not, by itself, determine how unsharp those parts will be. The amount of background (or foreground) blur depends on the distance from the plane of focus, so if a background is close to the subject, it may be difficult to blur sufficiently even with a small DOF. In practice, the lens f-number is usually adjusted until the background or foreground is acceptably blurred, often without direct concern for the DOF. Sometimes, however, it is desirable to have the entire subject sharp while ensuring that the background is sufficiently unsharp. When the distance between subject and background is fixed, as is the case with many scenes, the DOF and the amount of background blur are not independent. Although it is not always possible to achieve both the desired subject sharpness and the desired background unsharpness, several techniques can be used to increase the separation of subject and background. For a given scene and subject magnification, the background blur increases with lens focal length. 
If it is not important that background objects be unrecognizable, background de-emphasis can be increased by using a lens of longer focal length and increasing the subject distance to maintain the same magnification. This technique requires that sufficient space in front of the subject be available; moreover, the perspective of the scene changes because of the different camera position, and this may or may not be acceptable. The situation is not as simple if it is important that a background object, such as a sign, be unrecognizable. The magnification of background objects also increases with focal length, so with the technique just described, there is little change in the recognizability of background objects.[12] However, a lens of longer focal length may still be of some help; because of the narrower angle of view, a slight change of camera position may suffice to eliminate the distracting object from the field of view. Although tilt and swing are normally used to maximize the part of the image that is within the DOF, they also can be used, in combination with a small f-number, to give selective focus to a plane that isn't perpendicular to the lens axis. With this technique, it is possible to have objects at greatly different distances from the camera in sharp focus and yet have a very shallow DOF. The effect can be interesting because it differs from what most viewers are accustomed to seeing.

8.1.7 Near:far distribution

The DOF beyond the subject is always greater than the DOF in front of the subject. When the subject is at the hyperfocal distance or beyond, the far DOF is infinite, so the ratio is 1:∞; as the subject distance decreases, the near:far DOF ratio increases, approaching unity at high magnification. For large apertures at typical portrait distances, the ratio is still close to 1:1. The oft-cited rule that 1/3 of the DOF is in front of the subject and 2/3 is beyond (a 1:2 ratio) is true only when the subject distance is 1/3 the hyperfocal distance.
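The 1:2 claim is easy to verify from the standard limit formulas D_N = Hs/(H + s) and D_F = Hs/(H − s); the ratio of far to near DOF works out to (H + s)/(H − s):

```python
def near_far_split(s, H):
    # DOF in front of and behind the subject, from the standard limits
    d_near = H * s / (H + s)
    d_far = H * s / (H - s)
    return s - d_near, d_far - s

# At one third of the hyperfocal distance the split is exactly 1:2 ...
front, behind = near_far_split(1.0, 3.0)
print(behind / front)  # → 2.0
# ... and it approaches 1:1 as subject distance shrinks (high magnification).
front2, behind2 = near_far_split(0.01, 100.0)
print(behind2 / front2)
```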

8.1.8 Optimal f-number

See also: Optimal aperture

As a lens is stopped down, the defocus blur at the DOF limits decreases but diffraction blur increases. The presence of these two opposing factors implies a point at which the combined blur spot is minimized (Gibson 1975, 64); at that point, the f-number is optimal for image sharpness. If the final image is viewed under normal conditions (e.g., an 8″×10″ image viewed at 10″), it may suffice to determine the f-number using criteria for minimum required sharpness, and there may be no practical benefit from further reducing the size of the blur spot. But this may not be true if the final image is viewed under more demanding conditions, e.g., a very large final image viewed at normal distance, or a portion of an image enlarged to normal size (Hansma 1996). Hansma also suggests that the final-image size may not be known when a photograph is taken, and obtaining the maximum practicable sharpness allows the decision to make a large final image to be made at a later time.

Determining combined defocus and diffraction

Hansma (1996) and Peterson (1996) have discussed determining the combined effects of defocus and diffraction using a root-square combination of the individual blur spots. Hansma’s approach determines the f-number that will give the maximum possible sharpness; Peterson’s approach determines the minimum f-number that will give the desired sharpness in the final image, and yields a maximum focus spread for which the desired sharpness can be achieved.[13] In combination, the two methods can be regarded as giving a maximum and minimum f-number for a given situation, with the photographer free to choose any value within the range, as conditions (e.g., potential motion blur) permit. Gibson (1975, 64) gives a similar discussion, additionally considering blurring effects of camera lens aberrations, enlarging lens diffraction and aberrations, the negative emulsion, and the printing paper.[14] Couzin (1982) gave a formula essentially the same as Hansma’s for optimal f-number, but did not discuss its derivation. Hopkins (1955), Stokseth (1969), and Williams and Becklund (1989) have discussed the combined effects using the modulation transfer function. Conrad’s Depth of Field in Depth (PDF), and Jacobson’s Photographic Lenses Tutorial discuss the use of Hopkins’s method specifically in regard to DOF.

8.1.9 Other applications

Photolithography

In semiconductor photolithography applications, depth of field is extremely important as integrated circuit layout features must be printed with high accuracy at extremely small size. The difficulty is that the wafer surface is not perfectly flat, but may vary by several micrometres. Even this small variation causes some distortion in the projected image, and results in unwanted variations in the resulting pattern. Thus photolithography engineers take extreme measures to maximize the optical depth of field of the photolithography equipment. To minimize this distortion further, semiconductor manufacturers may use chemical mechanical polishing to make the wafer surface even flatter before lithographic patterning.

Ophthalmology and optometry

A person may sometimes experience better vision in daylight than at night because of an increased depth of field due to constriction of the pupil (i.e., miosis).

8.1.10 DOF formulae

The basis of these formulas is given in the section Derivation of the DOF formulae;[15] refer to the diagram in that section for illustration of the quantities discussed below.

Hyperfocal distance

Let f be the lens focal length, N be the lens f-number, and c be the circle of confusion for a given image format. The hyperfocal distance H is given by

\[ H = f + \frac{f^2}{Nc} \approx \frac{f^2}{Nc} . \]
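As a quick numerical check, the formula is easy to evaluate; this is a minimal sketch, and the 50 mm / f/8 / 0.03 mm values are illustrative, not taken from the text:

```python
def hyperfocal(f, N, c):
    """Hyperfocal distance H = f + f^2/(N*c); all lengths in mm."""
    return f + f * f / (N * c)

# Illustrative values: 50 mm lens at f/8 with c = 0.03 mm
H = hyperfocal(50.0, 8.0, 0.03)        # exact form, about 10.47 m
H_approx = 50.0 ** 2 / (8.0 * 0.03)    # approximation f^2/(N*c), about 10.42 m
```

The two forms differ only by the focal length itself, which is why the approximation is adequate at any practical value of H.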

Moderate-to-large distances

Let s be the distance at which the camera is focused (the “subject distance”). When s is large in comparison with the lens focal length, the distance DN from the camera to the near limit of DOF and the distance DF from the camera to the far limit of DOF are

\[ D_N \approx \frac{Hs}{H + s} \]
and

\[ D_F \approx \frac{Hs}{H - s} \quad \text{for } s < H . \]

The depth of field DF − DN is

\[ \mathrm{DOF} \approx \frac{2Hs^2}{H^2 - s^2} \quad \text{for } s < H . \]
Substituting for H and rearranging, DOF can be expressed as

\[ \mathrm{DOF} \approx \frac{2Ncf^2s^2}{f^4 - N^2c^2s^2} . \]

Thus, for a given image format, depth of field is determined by three factors: the focal length of the lens, the f-number of the lens opening (the aperture), and the camera-to-subject distance. When the subject distance is the hyperfocal distance,

\[ D_F = \infty \]
and

\[ D_N = \frac{H}{2} . \]
For s ≥ H , the far limit of DOF is at infinity and the DOF is infinite; of course, only objects at or beyond the near limit of DOF will be recorded with acceptable sharpness.
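A minimal sketch of the moderate-distance approximations, using illustrative values (5 m subject, 10 m hyperfocal distance):

```python
def dof_limits(s, H):
    """Approximate near/far DOF limits, valid when s is large compared with f."""
    D_N = H * s / (H + s)
    D_F = H * s / (H - s) if s < H else float('inf')
    return D_N, D_F

# Subject at 5 m with H = 10 m: limits at about 3.33 m and 10 m
D_N, D_F = dof_limits(5.0, 10.0)
# Focused at the hyperfocal distance itself: near limit H/2, far limit infinite
```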

Close-up

When the subject distance s approaches the focal length, using the formulas given above can result in significant errors. For close-up work, the hyperfocal distance has little applicability, and it usually is more convenient to express DOF in terms of image magnification. Let m be the magnification; when the subject distance is small in comparison with the hyperfocal distance,

\[ \mathrm{DOF} \approx 2Nc\,\frac{m + 1}{m^2} , \]
so that for a given magnification, DOF is independent of focal length. In other words, for the same subject magnification, at the same f-number, all focal lengths used on a given image format give approximately the same DOF. The discussion thus far has assumed a symmetrical lens for which the entrance and exit pupils coincide with the front and rear nodal planes, and for which the pupil magnification (the ratio of exit pupil diameter to that of the entrance pupil)[16] is unity. Although this assumption usually is reasonable for large-format lenses, it often is invalid for medium- and small-format lenses. When s ≪ H , the DOF for an asymmetrical lens is

\[ \mathrm{DOF} \approx \frac{2Nc\,(1 + m/P)}{m^2} , \]
where P is the pupil magnification. When the pupil magnification is unity, this equation reduces to that for a symmetrical lens. Except for close-up and macro photography, the effect of lens asymmetry is minimal. At unity magnification, however, the errors from neglecting the pupil magnification can be significant. Consider a telephoto lens with P = 0.5 and a retrofocus wide-angle lens with P = 2 , at m = 1.0 . The asymmetrical-lens formula gives DOF = 6Nc and DOF = 3Nc , respectively. The symmetrical-lens formula gives DOF = 4Nc in either case. The errors are −33% and 33%, respectively.
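The ±33% example can be checked directly; this sketch expresses DOF in units of Nc:

```python
def dof_closeup(N, c, m, P=1.0):
    """DOF ~= 2*N*c*(1 + m/P)/m**2 for s << H; P is the pupil magnification."""
    return 2 * N * c * (1 + m / P) / m ** 2

# At m = 1 with N*c = 1 unit, as in the text's example:
telephoto = dof_closeup(1, 1, 1.0, P=0.5)    # 6*N*c
retrofocus = dof_closeup(1, 1, 1.0, P=2.0)   # 3*N*c
symmetrical = dof_closeup(1, 1, 1.0)         # 4*N*c
```

Comparing 4Nc with 6Nc and 3Nc gives the −33% and +33% errors stated above.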

Focus and f-number from DOF limits

For given near and far DOF limits DN and DF , the required f-number is smallest when focus is set to

\[ s = \frac{2 D_N D_F}{D_N + D_F} , \]
the harmonic mean of the near and far distances. When the subject distance is large in comparison with the lens focal length, the required f-number is

\[ N \approx \frac{f^2}{c}\,\frac{D_F - D_N}{2 D_N D_F} . \]
When the far limit of DOF is at infinity,

\[ s = 2 D_N \]
and

\[ N \approx \frac{f^2}{c}\,\frac{1}{2 D_N} . \]
In practice, these settings usually are determined on the image side of the lens, using measurements on the bed or rail with a view camera, or using lens DOF scales on manual-focus lenses for small- and medium-format cameras. If vN and vF are the image distances that correspond to the near and far limits of DOF, the required f-number is minimized when the image distance v is

\[ v \approx \frac{v_N + v_F}{2} = v_F + \frac{v_N - v_F}{2} . \]
In practical terms, focus is set to halfway between the near and far image distances. The required f-number is

\[ N \approx \frac{v_N - v_F}{2c} . \]
The image distances are measured from the camera’s image plane to the lens’s image nodal plane, which is not always easy to locate. In most cases, focus and f-number can be determined with sufficient accuracy using the approximate formulas above, which require only the difference between the near and far image distances; view camera users

sometimes refer to the difference vN − vF as the focus spread (Hansma 1996, 55). Most lens DOF scales are based on the same concept. The focus spread is related to the depth of focus. Ray (2000, 56) gives two definitions of the latter. The first is the tolerance of the position of the image plane for which an object remains acceptably sharp; the second is that the limits of depth of focus are the image-side conjugates of the near and far limits of DOF. With the first definition, focus spread and depth of focus are usually close in value though conceptually different. With the second definition, focus spread and depth of focus are the same.
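The object-side relations above can be sketched as follows; the 50 mm / c = 0.03 mm values and the 2 m–4 m limits are illustrative, not from the text:

```python
def focus_and_fnumber(D_N, D_F, f, c):
    """Focus distance (harmonic mean of limits) and approximate required f-number."""
    s = 2 * D_N * D_F / (D_N + D_F)
    N = (f ** 2 / c) * (D_F - D_N) / (2 * D_N * D_F)
    return s, N

# Near limit 2 m, far limit 4 m, 50 mm lens, c = 0.03 mm (all in mm):
s, N = focus_and_fnumber(2000.0, 4000.0, 50.0, 0.03)
# s is about 2.67 m; N is about 10.4, so f/11 is the nearest standard stop
```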

Foreground and background blur

If a subject is at distance s and the foreground or background is at distance D , let the distance between the subject and the foreground or background be indicated by

\[ x_d = |D - s| . \]

The blur disk diameter b of a detail at distance xd from the subject can be expressed as a function of the subject magnification ms , focal length f , and f-number N , or alternatively of the diameter of the entrance pupil d (often called the aperture), according to

\[ b = \frac{f m_s}{N}\,\frac{x_d}{s \pm x_d} = d m_s \frac{x_d}{D} . \]
The minus sign applies to a foreground object, and the plus sign applies to a background object. The blur increases with the distance from the subject; when b ≤ c , the detail is within the depth of field, and the blur is imperceptible. If the detail is only slightly outside the DOF, the blur may be only barely perceptible. For a given subject magnification, f-number, and distance from the subject of the foreground or background detail, the degree of detail blur varies with the lens focal length. For a background detail, the blur increases with focal length; for a foreground detail, the blur decreases with focal length. For a given scene, the positions of the subject, foreground, and background usually are fixed, and the distance between subject and the foreground or background remains constant regardless of the camera position; however, to maintain constant magnification, the subject distance must vary if the focal length is changed. For small distance between the foreground or background detail, the effect of focal length is small; for large distance, the effect can be significant. For a reasonably distant background detail, the blur disk diameter is

\[ b \approx \frac{f m_s}{N} , \]
depending only on focal length. The blur diameter of foreground details is very large if the details are close to the lens. The magnification of the detail also varies with focal length; for a given detail, the ratio of the blur disk diameter to imaged size of the detail is independent of focal length, depending only on the detail size and its distance from the subject. This ratio can be useful when it is important that the background be recognizable (as usually is the case in evidence or surveillance photography), or unrecognizable (as might be the case for a pictorial photographer using selective focus to isolate the subject from a distracting background). As a general rule, an object is recognizable if the blur disk diameter is one-tenth to one-fifth the size of the object or smaller (Williams 1990, 205),[17] and unrecognizable when the blur disk diameter is the object size or greater. The effect of focal length on background blur is illustrated in van Walree’s article on Depth of field.
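A sketch of the blur-disk relation; the 100 mm / f/4 numbers are hypothetical, and the `background` flag encodes the plus/minus sign convention from the text:

```python
def blur_disk(f, N, m_s, x_d, s, background=True):
    """Blur disk diameter b = (f*m_s/N) * x_d/(s +/- x_d); consistent units."""
    D = s + x_d if background else s - x_d   # plus for background, minus for foreground
    return (f * m_s / N) * x_d / D

# 100 mm lens at f/4, m_s = 0.05: a very distant background blurs toward
# the limiting value f*m_s/N = 1.25 mm
b_far = blur_disk(100.0, 4.0, 0.05, 1e9, 2000.0)
b_near = blur_disk(100.0, 4.0, 0.05, 500.0, 2000.0)   # closer detail, less blur
```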

Practical complications

The distance scales on most medium- and small-format lenses indicate distance from the camera’s image plane. Most DOF formulas, including those in this article, use the object distance s from the lens’s front nodal plane, which often is not easy to locate. Moreover, for many zoom lenses and internal-focusing non-zoom lenses, the location of the front nodal plane, as well as focal length, changes with subject distance. When the subject distance is large in comparison with the lens focal length, the exact location of the front nodal plane is not critical; the distance is essentially the same whether measured from the front of the lens, the image plane, or the actual nodal plane. The same is not true for close-up photography; at unity magnification, a slight error in the location of the front nodal plane can result in a DOF error greater than the errors from any approximations in the DOF equations. The asymmetrical lens formulas require knowledge of the pupil magnification, which usually is not specified for medium- and small-format lenses. The pupil magnification can be estimated by looking into the front and rear of the lens and measuring the diameters of the apparent apertures, and computing the ratio of rear diameter to front diameter (Shipman 1977, 144). However, for many zoom lenses and internal-focusing non-zoom lenses, the pupil magnification changes with subject distance, and several measurements may be required.

Limitations

Most DOF formulas, including those discussed in this article, employ several simplifications:

1. Paraxial (Gaussian) optics is assumed, and technically, the formulas are valid only for rays that are infinitesimally close to the lens axis. However, Gaussian optics usually is more than adequate for determining DOF, and non-paraxial formulas are sufficiently complex that requiring their use would make determination of DOF impractical in most cases.

2. Lens aberrations are ignored. Including the effects of aberrations is nearly impossible, because doing so requires knowledge of the specific lens design. Moreover, in well-designed lenses, most aberrations are well corrected, and at least near the optical axis, often are almost negligible when the lens is stopped down 2–3 steps from maximum aperture. Because lenses usually are stopped down at least to this point when DOF is of interest, ignoring aberrations usually is reasonable. Not all aberrations are reduced by stopping down, however, so actual sharpness may be slightly less than predicted by DOF formulas.

3. Diffraction is ignored. DOF formulas imply that any arbitrary DOF can be achieved by using a sufficiently large f-number. Because of diffraction, however, this isn't really true, as is discussed further in the section Diffraction and DOF.

4. For digital capture with color filter array sensors, demosaicing is ignored. Demosaicing alone would normally decrease sharpness, but the demosaicing algorithm used might also include sharpening.

5. Post-capture manipulation of the image is ignored. Sharpening via techniques such as deconvolution or unsharp mask can increase the apparent sharpness in the final image; conversely, image noise reduction can reduce sharpness.

6. The resolutions of the imaging medium and the display medium are ignored. If the resolution of either medium is of the same order of magnitude as the optical resolution, the sharpness of the final image is reduced, and optical blurring is harder to detect.

The lens designer cannot restrict analysis to Gaussian optics and cannot ignore lens aberrations. However, the requirements of practical photography are less demanding than those of lens design, and despite the simplifications employed in development of most DOF formulas, these formulas have proven useful in determining camera settings that result in acceptably sharp pictures. It should be recognized that DOF limits are not hard boundaries between sharp and unsharp, and that there is little point in determining DOF limits to a precision of many significant figures.

8.1.11 Derivation of the DOF formulae

DOF limits

A symmetrical lens is illustrated at right. The subject, at distance s , is in focus at image distance v . Point objects at distances DF and DN would be in focus at image distances vF and vN , respectively; at image distance v , they are imaged as blur spots. The depth of field is controlled by the aperture stop diameter d ; when the blur spot diameter is equal to the acceptable circle of confusion c , the near and far limits of DOF are at DN and DF . From similar triangles,

\[ \frac{v_N - v}{v_N} = \frac{c}{d} \]
and
\[ \frac{v - v_F}{v_F} = \frac{c}{d} . \]
It usually is more convenient to work with the lens f-number than the aperture diameter; the f-number N is related to the lens focal length f and the aperture diameter d by

\[ N = \frac{f}{d} ; \]
The image distance v is related to an object distance s by the thin lens equation

\[ \frac{1}{s} + \frac{1}{v} = \frac{1}{f} ; \]

\[ \frac{1}{D_N} + \frac{1}{v_N} = \frac{1}{f} \qquad (5) \]
\[ \frac{1}{D_F} + \frac{1}{v_F} = \frac{1}{f} \qquad (6) \]
Solving the set of equations (1) to (6) gives the exact solutions, without any simplification:

\[ D_N = \frac{s f^2}{f^2 + Nc(s - f)} \]
and

\[ D_F = \frac{s f^2}{f^2 - Nc(s - f)} . \]
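The exact expressions are easy to evaluate numerically; a minimal sketch with illustrative 50 mm / f/8 / 0.03 mm values:

```python
def dof_limits_exact(s, f, N, c):
    """Exact near and far DOF limits; all lengths in the same units."""
    D_N = s * f ** 2 / (f ** 2 + N * c * (s - f))
    denom = f ** 2 - N * c * (s - f)
    D_F = s * f ** 2 / denom if denom > 0 else float('inf')
    return D_N, D_F

# 50 mm lens at f/8, c = 0.03 mm, focused at 5 m (values in mm):
D_N, D_F = dof_limits_exact(5000.0, 50.0, 8.0, 0.03)
# roughly 3.39 m and 9.53 m
```

The guard on the denominator corresponds to focusing at or beyond the hyperfocal distance, where the far limit goes to infinity.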

Hyperfocal distance

Solving equation (8) for the focus distance s and setting the far limit of DOF DF to infinity gives

\[ s = H = \frac{f^2}{Nc} + f , \]
where H is the hyperfocal distance. Setting the subject distance to the hyperfocal distance and solving for the near limit of DOF gives

\[ D_N = \frac{f^2/(Nc) + f}{2} = \frac{H}{2} . \]
Substituting the expression for hyperfocal distance into the formulas (7) and (8) for the near and far limits of DOF gives
\[ D_N = \frac{s (H - f)}{H + s - 2f} \qquad (9) \]
\[ D_F = \frac{s (H - f)}{H - s} \qquad (10) \]
For any practical value of H , the focal length is negligible in comparison, so that

\[ H \approx \frac{f^2}{Nc} . \]
Substituting the approximate expression for hyperfocal distance into the formulas for the near and far limits of DOF gives

\[ D_N \approx \frac{Hs}{H + s} \]
and

\[ D_F \approx \frac{Hs}{H - s} \]

However, if one instead takes the definition H = f^2/(Nc) , then

\[ D_N = \frac{Hs}{H + (s - f)} \]
and

\[ D_F = \frac{Hs}{H - (s - f)} \]

Combining, the depth of field DF − DN is

\[ \mathrm{DOF} = \frac{2Hs(s - f)}{H^2 - (s - f)^2} \quad \text{for } s < H \text{ and } H = \frac{f^2}{Nc} . \]

Hyperfocal magnification Magnification m can be expressed as

\[ m = \frac{f}{s - f} ; \]
at the hyperfocal distance, the magnification mh then is

\[ m_h = \frac{f}{H - f} . \]

Substituting f^2/(Nc) + f for H and simplifying gives

\[ m_h = \frac{Nc}{f} . \]

DOF in terms of magnification

It is sometimes convenient to express DOF in terms of magnification m . Substituting

\[ s = \frac{m + 1}{m} f \]

and

\[ s - f = \frac{f}{m} \]
into the formula for DOF and rearranging gives

\[ \mathrm{DOF} = \frac{2f\,(m + 1)/m}{(fm)/(Nc) - (Nc)/(fm)} , \]
after Larmore (1965, 163).
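Larmore’s form agrees exactly with the difference of the exact near and far limits; a quick numerical cross-check with illustrative values:

```python
def dof_magnification(f, N, c, m):
    """DOF = 2f(m+1)/m / (fm/(Nc) - Nc/(fm)), Larmore's form."""
    return (2 * f * (m + 1) / m) / (f * m / (N * c) - (N * c) / (f * m))

# Cross-check against the exact near/far limits for m = 0.01
f, N, c, m = 50.0, 8.0, 0.03, 0.01
s = f * (m + 1) / m                      # subject distance implied by m
D_N = s * f ** 2 / (f ** 2 + N * c * (s - f))
D_F = s * f ** 2 / (f ** 2 - N * c * (s - f))
dof = dof_magnification(f, N, c, m)      # equals D_F - D_N
```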

DOF vs. focal length

Multiplying the numerator and denominator of the exact formula above by

\[ \frac{Ncm}{f} \]
gives

\[ \mathrm{DOF} = \frac{2Nc\,(m + 1)}{m^2 - \left( \frac{Nc}{f} \right)^{2}} . \]

If the f-number and circle of confusion are constant, decreasing the focal length f increases the second term in the denominator, decreasing the denominator and increasing the value of the right-hand side, so that a shorter focal length gives greater DOF.

The term in parentheses in the denominator is the hyperfocal magnification mh , so that

\[ \mathrm{DOF} = \frac{2Nc\,(m + 1)}{m^2 - m_h^2} . \]
As subject distance is decreased, the subject magnification increases, and eventually becomes large in comparison with the hyperfocal magnification. Thus the effect of focal length is greatest near the hyperfocal distance, and decreases as subject distance is decreased. However, the near/far perspective will differ for different focal lengths, so the difference in DOF may not be readily apparent. When s ≪ H , m_h^2 ≪ m^2 , and

\[ \mathrm{DOF} \approx 2Nc\,\frac{m + 1}{m^2} , \]
so that for a given magnification, DOF is essentially independent of focal length. Stated otherwise, for the same subject magnification and the same f-number, all focal lengths for a given image format give approximately the same DOF. This statement is true only when the subject distance is small in comparison with the hyperfocal distance, however.

Moderate-to-large distances

When the subject distance is large in comparison with the lens focal length,

\[ D_N \approx \frac{Hs}{H + s} \]

and

\[ D_F \approx \frac{Hs}{H - s} \quad \text{for } s < H , \]
so that

\[ \mathrm{DOF} \approx \frac{2Hs^2}{H^2 - s^2} \quad \text{for } s < H . \]
For s ≥ H , the far limit of DOF is at infinity and the DOF is infinite; of course, only objects at or beyond the near limit of DOF will be recorded with acceptable sharpness.

Close-up

When the subject distance s approaches the lens focal length, the focal length no longer is negligible, and the approximate formulas (11) and (12) above cannot be used without introducing significant error. Use formulas (9) and (10) instead. It usually is more convenient to express DOF in terms of magnification. The distance is small in comparison with the hyperfocal distance, so the simplified formula

\[ \mathrm{DOF} \approx 2Nc\,\frac{m + 1}{m^2} \]
can be used with good accuracy. For a given magnification, DOF is independent of focal length.

Near:far DOF ratio

From the “exact” equations for near and far limits of DOF, the DOF in front of the subject is

\[ s - D_N = \frac{Ncs\,(s - f)}{f^2 + Nc(s - f)} , \]
and the DOF beyond the subject is

\[ D_F - s = \frac{Ncs\,(s - f)}{f^2 - Nc(s - f)} . \]
The near:far DOF ratio is

\[ \frac{s - D_N}{D_F - s} = \frac{f^2 - Nc(s - f)}{f^2 + Nc(s - f)} . \]
This ratio is always less than unity; at moderate-to-large subject distances, f ≪ s , and

\[ \frac{s - D_N}{D_F - s} \approx \frac{f^2 - Ncs}{f^2 + Ncs} = \frac{H - s}{H + s} . \]
When the subject is at the hyperfocal distance or beyond, the far DOF is infinite, and the near:far ratio is zero. It’s commonly stated that approximately 1/3 of the DOF is in front of the subject and approximately 2/3 is beyond; however, this is true only when s ≈ H/3 . At closer subject distances, it’s often more convenient to express the DOF ratio in terms of the magnification

\[ m = \frac{f}{s - f} ; \]
substitution into the “exact” equation for DOF ratio gives

\[ \frac{s - D_N}{D_F - s} = \frac{m - Nc/f}{m + Nc/f} . \]
As magnification increases, the near:far ratio approaches a limiting value of unity.
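The 1/3–2/3 rule of thumb can be checked numerically; a sketch with illustrative 50 mm / f/8 / 0.03 mm values:

```python
def near_far_ratio(s, f, N, c):
    """Exact near:far DOF ratio (s - D_N)/(D_F - s)."""
    t = N * c * (s - f)
    return (f ** 2 - t) / (f ** 2 + t)

f, N, c = 50.0, 8.0, 0.03
H = f ** 2 / (N * c)
r = near_far_ratio(H / 3, f, N, c)
# r is close to 0.5: one part of DOF in front, two parts behind, i.e. the
# familiar 1/3 : 2/3 split -- but only near s = H/3
```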

DOF vs. format size

When the subject distance is much less than hyperfocal, the total DOF is given to good approximation by

\[ \mathrm{DOF} \approx 2Nc\,\frac{m + 1}{m^2} . \]
When additionally the magnification is small compared to unity, the value of m in the numerator can be neglected, and the formula further simplifies to

\[ \mathrm{DOF} \approx \frac{2Nc}{m^2} . \]
The DOF ratio for two different formats is then

\[ \frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{N_2 c_2}{N_1 c_1} \left( \frac{m_1}{m_2} \right)^{2} . \]
Essentially the same approach is described in Stroebel (1976, 136–39).

“Same picture” for both formats The results of the comparison depend on what is assumed. One approach is to assume that essentially the same picture is taken with each format and enlarged to produce the same size final image, so the subject distance remains the same, the focal length is adjusted to maintain the same angle of view, and to a first approximation, magnification is in direct proportion to some characteristic dimension of each format. If both pictures are enlarged to give the same size final images with the same sharpness criteria, the circle of confusion is also in direct proportion to the format size. Thus if l is the characteristic dimension of the format,

\[ \frac{m_2}{m_1} = \frac{c_2}{c_1} = \frac{l_2}{l_1} . \]
With the same f-number, the DOF ratio is then

\[ \frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{c_2}{c_1} \left( \frac{m_1}{m_2} \right)^{2} = \frac{l_2}{l_1} \left( \frac{l_1}{l_2} \right)^{2} = \frac{l_1}{l_2} , \]
so the DOF ratio is in inverse proportion to the format size. This ratio is approximate, and breaks down in the macro range of the larger format (the value of m in the numerator is no longer negligible) or as distance approaches the hyperfocal distance for the smaller format (the DOF of the smaller format approaches infinity). If the formats have approximately the same aspect ratios, the characteristic dimensions can be the format diagonals; if the aspect ratios differ considerably (e.g., 4×5 vs. 6×17), the dimensions must be chosen more carefully, and the DOF comparison may not even be meaningful.

If the DOF is to be the same for both formats the required f-number is in direct proportion to the format size:

\[ \frac{N_2}{N_1} \approx \frac{c_1}{c_2} \left( \frac{m_2}{m_1} \right)^{2} = \frac{l_2}{l_1} . \]
Adjusting the f-number in proportion to format size is equivalent to using the same absolute aperture diameter for both formats, discussed in detail below in Use of absolute aperture diameter.
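A sketch of the “same picture” comparison; the format diagonals are illustrative values chosen for the example:

```python
def dof_ratio(N2, c2, m2, N1, c1, m1):
    """DOF2/DOF1 ~= (N2*c2)/(N1*c1) * (m1/m2)**2, valid for m << 1 and s << H."""
    return (N2 * c2) / (N1 * c1) * (m1 / m2) ** 2

# "Same picture": c and m both scale with the format diagonal l.
l1, l2 = 43.3, 21.6                   # e.g. full frame vs Four Thirds diagonals, mm
k = l2 / l1
r = dof_ratio(8.0, 0.03 * k, 1.0 * k, 8.0, 0.03, 1.0)
# r equals l1/l2, about 2: the smaller format has about twice the DOF
# at the same f-number
```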

Same focal length for both formats If the same lens focal length is used in both formats, magnifications can be maintained in the ratio of the format sizes by adjusting subject distances; the DOF ratio is the same as that given above, but the images differ because of the different perspectives and angles of view. If the same DOF is required for each format, an analysis similar to that above shows that the required f-number is in direct proportion to the format size. Another approach is to use the same focal length with both formats at the same subject distance, so the magnification is the same, and with the same f-number,

\[ \frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{c_2}{c_1} = \frac{l_2}{l_1} , \]
so the DOF ratio is in direct proportion to the format size, due to the smaller format size having a smaller circle of confusion when the final image size is the same. The perspective is the same for both formats, but because of the different angles of view, the pictures are not the same.

Cropping Cropping an image and enlarging to the same size final image as an uncropped image taken under the same conditions is equivalent to using a smaller format; the cropped image requires greater enlargement and consequently has a smaller circle of confusion. A cropped then enlarged image has less DOF than the uncropped image.

Use of absolute aperture diameter The aperture diameter is normally given in terms of the f-number because all lenses set to the same f-number give approximately the same image illuminance (Ray 2002, 130), simplifying exposure settings. In deriving the basic DOF equations, f/N can be substituted for the absolute aperture diameter d , giving the DOF in terms of the absolute aperture diameter:

\[ \mathrm{DOF} = \frac{2s}{(dm)/c - c/(dm)} , \]
after Larmore (1965, 163). When the subject distance s is small in comparison with the hyperfocal distance, the second term in the denominator can be neglected, leading to

\[ \mathrm{DOF} \approx \frac{2sc}{dm} . \]

With the same subject distance and angle of view for both formats, s2 = s1 , and

\[ \frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{c_2 d_1 m_1}{c_1 d_2 m_2} = \frac{l_2 d_1 l_1}{l_1 d_2 l_2} = \frac{d_1}{d_2} , \]
so the DOFs are in inverse proportion to the absolute aperture diameters. When the diameters are the same, the two formats have the same DOF. Von Rohr (1906) made this same observation, saying “At this point it will be sufficient to note that all these formulae involve quantities relating exclusively to the entrance-pupil and its position with respect to the object-point, whereas the focal length of the transforming system does not enter into them.” Lyon’s Depth of Field Outside the Box describes an approach very similar to that of von Rohr. Using the same absolute aperture diameter for both formats with the “same picture” criterion is equivalent to adjusting the f-number in proportion to the format sizes, discussed above under “Same picture” for both formats.

Focus and f-number from DOF limits

Object-side relationships The equations for the DOF limits can be combined to eliminate Nc and solve for the subject distance. For given near and far DOF limits DN and DF , the subject distance is

\[ s = \frac{2 D_N D_F}{D_N + D_F} , \]
the harmonic mean of the near and far distances. The equations for DOF limits also can be combined to eliminate s and solve for the required f-number, giving

\[ N = \frac{f^2}{c}\,\frac{D_F - D_N}{D_F(D_N - f) + D_N(D_F - f)} . \]
When the subject distance is large in comparison with the lens focal length, this simplifies to

\[ N \approx \frac{f^2}{c}\,\frac{D_F - D_N}{2 D_N D_F} . \]
When the far limit of DOF is at infinity, the equations for s and N give indeterminate results. But if all terms in the numerator and denominator on the right-hand side of the equation for s are divided by DF , it is seen that when DF is at infinity,

\[ s = 2 D_N . \]

Similarly, if all terms in the numerator and denominator on the right-hand side of the equation for N are divided by DF , it is seen that when DF is at infinity,

\[ N = \frac{f^2}{c}\,\frac{1}{2 D_N - f} \approx \frac{f^2}{c}\,\frac{1}{2 D_N} . \]

Image-side relationships Most discussions of DOF concentrate on the object side of the lens, but the formulas are simpler and the measurements usually easier to make on the image side. If the basic image-side equations

\[ \frac{v_N - v}{v_N} = \frac{Nc}{f} \]
and

\[ \frac{v - v_F}{v_F} = \frac{Nc}{f} \]
are combined and solved for the image distance v , the result is

\[ v = \frac{2 v_N v_F}{v_N + v_F} , \]
the harmonic mean of the near and far image distances. The basic image-side equations can also be combined and solved for N , giving

\[ N = \frac{f}{c}\,\frac{v_N - v_F}{v_N + v_F} . \]

The image distances are measured from the camera’s image plane to the lens’s image nodal plane, which is not always easy to locate. The harmonic mean is always less than the arithmetic mean, but when the difference between the near and far image distances is reasonably small, the two means are close to equal, and focus can be set with sufficient accuracy using

\[ v \approx \frac{v_N + v_F}{2} = v_F + \frac{v_N - v_F}{2} . \]

This formula requires only the difference vN − vF between the near and far image distances. View camera users often refer to this difference as the focus spread; it usually is measured on the bed or focusing rail. Focus is simply set to halfway between the near and far image distances.

Substituting vN + vF = 2v into the equation for N and rearranging gives

\[ N \approx \frac{f}{v}\,\frac{v_N - v_F}{2c} . \]
One variant of the thin-lens equation is v = (m + 1) f , where m is the magnification; substituting this into the equation for N gives

\[ N \approx \frac{1}{1 + m}\,\frac{v_N - v_F}{2c} . \]
At moderate-to-large subject distances, m is small compared to unity, and the f-number can often be determined with sufficient accuracy using

\[ N \approx \frac{v_N - v_F}{2c} . \]
For close-up photography, the magnification cannot be ignored, and the f-number should be determined using the first approximate formula.

As with the approximate formula for v , the approximate formulas for N require only the focus spread vN − vF rather than the absolute image distances.

When the far limit of DOF is at infinity, vF = f . On manual-focus small- and medium-format lenses, the focus and f-number usually are determined using the lens DOF scales, which often are based on the approximate equations above.
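The image-side procedure above can be sketched directly from the focus spread; the 4 mm spread and c = 0.1 mm are illustrative view-camera numbers:

```python
def settings_from_focus_spread(v_N, v_F, c):
    """Focus position (midpoint) and f-number from the focus spread v_N - v_F."""
    v = (v_N + v_F) / 2           # set focus halfway between the image distances
    N = (v_N - v_F) / (2 * c)     # valid when m is small compared with unity
    return v, N

# 4 mm focus spread on the rail, c = 0.1 mm (a common large-format value):
v, N = settings_from_focus_spread(154.0, 150.0, 0.1)
# focus at 152 mm extension; N is about 20, so f/22 is the nearest standard stop
```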

Foreground and background blur

If the equation for the far limit of DOF is solved for c , and the far distance replaced by an arbitrary distance D , the blur disk diameter b at that distance is

\[ b = \frac{f m_s}{N}\,\frac{D - s}{D} . \]
When the background is at the far limit of DOF, the blur disk diameter is equal to the circle of confusion c , and the blur is just imperceptible. The diameter of the background blur disk increases with the distance to the background. A similar relationship holds for the foreground; the general expression for a defocused object at distance D is

\[ b = \frac{f m_s}{N}\,\frac{|D - s|}{D} . \]
For a given scene, the distance between the subject and a foreground or background object is usually fixed; let that distance be represented by

\[ x_d = |D - s| ; \]
then

\[ b = \frac{f m_s}{N}\,\frac{x_d}{D} , \]
or, in terms of subject distance,

\[ b = \frac{f m_s}{N}\,\frac{x_d}{s \pm x_d} , \]
with the minus sign used for foreground objects and the plus sign used for background objects. For a relatively distant background object,

\[ b \approx \frac{f m_s}{N} . \]
In terms of subject magnification, the subject distance is

\[ s = \frac{m_s + 1}{m_s} f , \]
so that, for a given f-number and subject magnification,

\[ b = \frac{f m_s}{N}\,\frac{x_d}{\dfrac{m_s + 1}{m_s} f \pm x_d} = \frac{f m_s^2\,x_d}{N\,[(m_s + 1) f \pm m_s x_d]} . \]
Differentiating b with respect to f gives

\[ \frac{db}{df} = \frac{\pm m_s^3\,x_d^2}{N\,[(m_s + 1) f \pm m_s x_d]^2} . \]
With the plus sign, the derivative is everywhere positive, so that for a background object, the blur disk size increases with focal length. With the minus sign, the derivative is everywhere negative, so that for a foreground object, the blur disk size decreases with focal length. The magnification of the defocused object also varies with focal length; the magnification of the defocused object is

\[ m_d = \frac{v_s}{D} = \frac{(m_s + 1) f}{D} , \]

where vs is the image distance of the subject. For a defocused object with some characteristic dimension y , the imaged size of that object is

\[ m_d y = \frac{(m_s + 1) f y}{D} . \]
The ratio of the blur disk size to the imaged size of that object then is

\[ \frac{b}{m_d y} = \frac{m_s}{m_s + 1}\,\frac{x_d}{N y} , \]
so for a given defocused object, the ratio of the blur disk diameter to object size is independent of focal length, and depends only on the object size and its distance from the subject.
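This recognizability ratio is simple to compute; a sketch with hypothetical values, using the one-fifth threshold mentioned earlier (after Williams):

```python
def blur_to_size_ratio(m_s, x_d, N, y):
    """b/(m_d*y) = (m_s/(m_s + 1)) * x_d/(N*y); independent of focal length."""
    return (m_s / (m_s + 1)) * x_d / (N * y)

# Hypothetical: a 0.5 m sign 10 m behind the subject, m_s = 0.05, f/8
r = blur_to_size_ratio(0.05, 10.0, 8.0, 0.5)
# r is about 0.12, under the ~0.2 recognizability threshold, so the sign
# would remain recognizable regardless of the focal length used
```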

Asymmetrical lenses

This discussion thus far has assumed a symmetrical lens for which the entrance and exit pupils coincide with the object and image nodal planes, and for which the pupil magnification is unity. Although this assumption usually is reasonable for large-format lenses, it often is invalid for medium- and small-format lenses. For an asymmetrical lens, the DOF ahead of the subject distance and the DOF beyond the subject distance are given by[18]

\[ \mathrm{DOF}_N = \frac{Nc\,(1 + m/P)}{m^2\,[1 + (Nc)/(fm)]} \]
and

\[ \mathrm{DOF}_F = \frac{Nc\,(1 + m/P)}{m^2\,[1 - (Nc)/(fm)]} , \]
where P is the pupil magnification. Combining gives the total DOF:

\[ \mathrm{DOF} = \frac{2f\,(1/m + 1/P)}{(fm)/(Nc) - (Nc)/(fm)} . \]
When s ≪ H , the second term in the denominator becomes small in comparison with the first, and (Shipman 1977, 147)

\[ \mathrm{DOF} \approx \frac{2Nc\,(1 + m/P)}{m^2} . \]
When the pupil magnification is unity, the equations for asymmetrical lenses reduce to those given earlier for symmetrical lenses.

Effect of lens asymmetry

Except for close-up and macro photography, the effect of lens asymmetry is minimal. A slight rearrangement of the last equation gives

\[ \mathrm{DOF} \approx \frac{2Nc}{m} \left( \frac{1}{m} + \frac{1}{P} \right) . \]

As magnification decreases, the 1/P term becomes smaller in comparison with the 1/m term, and eventually the effect of pupil magnification becomes negligible.

8.1.12 See also

• Angle of view
• Bokeh
• Camera angle
• Depth-of-field adapter
• Depth of focus
• Frazier lens (very deep DOF)
• Hyperfocal distance
• Light-field camera
• Miniature faking
• Numerical aperture
• Perspective distortion
• Rayleigh length
• Tilted plane focus (camera movements used to achieve selective focus)

8.1.13 Notes

[1] Strictly, because of lens aberrations and diffraction, a point object in precise focus is imaged not as a point but rather as a small spot, often called the least circle of confusion. For most treatments of DOF, including this article, the assumption of a point is sufficient.

[2] Film and Its Techniques. University of California Press. 1966. p. 56. Retrieved 24 February 2016.

[3] Thomas Ohanian and Natalie Phillips (2013). Digital Filmmaking: The Changing Art and Craft of Making Motion Pictures. CRC Press. p. 96. ISBN 9781136053542. Retrieved 24 February 2016.

[4] Harold Merklinger, p. 13.

[5] Hornberg, Alexander (2007). Handbook of Machine Vision. Wiley. p. 250. ISBN 9783527610143.

[6] Englander describes a similar approach in his paper Apparent Depth of Field: Practical Use in Landscape Photography (PDF); Conrad discusses this approach, under Different Circles of Confusion for Near and Far Limits of Depth of Field, and The Object Field Method, in Depth of Field in Depth (PDF).

[7] Why Does a Small Aperture Increase Depth of Field?

[8] “photoskop: Interactive Photography Lessons”. April 25, 2015.

[9] “Nokia Lumia 1520 review: the best Windows Phone device yet”. November 18, 2013.

[10] The focus distance to have the DOF extend between given near and far object distances is the harmonic mean of the object conjugates. Most helicoid-focused lenses are marked with image plane–to–subject distances, so the focus determined from the lens distance scale is not exactly the harmonic mean of the marked near and far distances.

[11] Higher-end models in the Canon EOS line of cameras included a feature called depth-of-field AE (DEP) that set focus and f-number from user-determined near and far points in much the same manner as using DOF scales on manual-focus lenses (Canon Inc. 2000, 61–62). The feature has not been included on models introduced after April 2004.
[12] Using the object field method, Merklinger (1992, 32–35) describes a situation in which a portrait subject is to be sharp but a distracting sign in the background is to be unrecognizable. He concludes that with the subject and background distances fixed, no f-number will achieve both objectives, and that using a lens of different focal length will make no difference in the result.

[13] Peterson does not give a closed-form expression for the minimum f-number, though such an expression obtains from simple algebraic manipulation of his Equation 3.

[14] The analytical section at the end of Gibson (1975) was originally published as “Magnification and Depth of Detail in Photomacrography” in the Journal of the Photographic Society of America, Vol. 26, No. 6, June 1960.

[15] Derivations of DOF formulas are given in many texts, including Larmore (1965, 161–166), Ray (2000, 53–56), and Ray (2002, 217–220). Complete derivations also are given in Conrad’s Depth of Field in Depth (PDF) and van Walree’s Derivation of the DOF equations.

[16] A well-illustrated discussion of pupils and pupil magnification that assumes minimal knowledge of optics and mathematics is given in Shipman (1977, 144–147).

[17] Williams gives the criteria for object recognition in terms of the system resolution. When resolution is limited by defocus blur, as in the context of DOF, the resolution is the blur disk diameter; when resolution is limited by diffraction, the resolution is the radius of the Airy disk, according to the Rayleigh criterion.

[18] This is discussed in Jacobson’s Photographic Lenses Tutorial, and complete derivations are given in Conrad’s Depth of Field in Depth (PDF) and van Walree’s Derivation of the DOF equations.

8.1.14 References

• Adams, Ansel. 1980. The Camera. The New Ansel Adams Photography Series/Book 1. Boston: New York Graphic Society. ISBN 0-8212-1092-0

• Canon Inc. 2000. Canon EOS-1v/EOS-1v HS Instructions. Tokyo: Canon Inc.

• Couzin, Dennis. 1982. Depths of Field. SMPTE Journal, November 1982, 1096–1098. Available in PDF at https://sites.google.com/site/cinetechinfo/atts/dof_82.pdf.

• Gibson, H. Lou. 1975. Close-Up Photography and Photomacrography. 2nd combined ed. Kodak Publication No. N-16. Rochester, NY: Eastman Kodak Company, Vol II: Photomacrography. ISBN 0-87985-160-0

• Hansma, Paul K. 1996. View Camera Focusing in Practice. Photo Techniques, March/April 1996, 54–57. Available as GIF images on the Large Format page.

• Hopkins, H.H. 1955. The frequency response of a defocused optical system. Proceedings of the Royal Society A, 231:91–103.

• Langford, Michael J. 1973. Basic Photography. 3rd ed. Garden City, NY: Amphoto. ISBN 0-8174-0640-9

• Larmore, Lewis. 1965. Introduction to Photographic Principles. 2nd ed. New York: Dover Publications, Inc.

• Lefkowitz, Lester. 1979. The Manual of Close-Up Photography. Garden City, NY: Amphoto. ISBN 0-8174-2456-3

• Merklinger, Harold M. 1992. The INs and OUTs of FOCUS: An Alternative Way to Estimate Depth-of-Field and Sharpness in the Photographic Image. v. 1.0.3. Bedford, Nova Scotia: Seaboard Printing Limited. ISBN 0-9695025-0-8. Version 1.03e available in PDF at http://www.trenholm.org/hmmerk/.

• Merklinger, Harold M. 1993. Focusing the View Camera: A Scientific Way to Focus the View Camera and Estimate Depth of Field. v. 1.0. Bedford, Nova Scotia: Seaboard Printing Limited. ISBN 0-9695025-2-4. Version 1.6.1 available in PDF at http://www.trenholm.org/hmmerk/.

• Peterson, Stephen. 1996. Image Sharpness and Focusing the View Camera. Photo Techniques, March/April 1996, 51–53. Available as GIF images on the Large Format page.

• Ray, Sidney F. 1994. Photographic Lenses and Optics. Oxford: Focal Press. ISBN 0-240-51387-8

• Ray, Sidney F. 2000. The geometry of image formation. In The Manual of Photography: Photographic and Digital Imaging, 9th ed. Ed. Ralph E. Jacobson, Sidney F. Ray, Geoffrey G. Atteridge, and Norman R. Axford. Oxford: Focal Press. ISBN 0-240-51574-9

• Ray, Sidney F. 2002. Applied Photographic Optics. 3rd ed. Oxford: Focal Press. ISBN 0-240-51540-4

• Shipman, Carl. 1977. SLR Photographers Handbook. Tucson: H.P. Books. ISBN 0-912656-59-X

• Stokseth, Per A. 1969. Properties of a Defocused Optical System. Journal of the Optical Society of America 59:10, Oct. 1969, 1314–1321.

• Stroebel, Leslie. 1976. View Camera Technique. 3rd ed. London: Focal Press. ISBN 0-240-50901-3

• Tillmanns, Urs. 1997. Creative Large Format: Basics and Applications. 2nd ed. Feuerthalen, Switzerland: Sinar AG. ISBN 3-7231-0030-9

• von Rohr, Moritz. 1906. Die optischen Instrumente. Leipzig: B. G. Teubner

• Williams, Charles S., and Becklund, Orville. 1989. Introduction to the Optical Transfer Function. New York: Wiley. Reprinted 2002, Bellingham, WA: SPIE Press, 293–300. ISBN 0-8194-4336-0

• Williams, John B. 1990. Image Clarity: High-Resolution Photography. Boston: Focal Press. ISBN 0-240-80033-8

• Andrew Kay, Jonathan Mather, and Harry Walton, “Extended depth of field by colored apodization”, Optics Letters, Vol. 36, Issue 23, pp. 4614–4616 (2011).

8.1.15 Further reading

• Hummel, Rob (editor). 2001. American Cinematographer Manual. 8th ed. Hollywood: ASC Press. ISBN 0-935578-15-3

8.1.16 External links

• Online Depth of Field Calculator - Simple depth of field and hyperfocal distance calculator

• photoskop: Interactive Photography Lessons - Interactive Depth of Field

• Bokeh simulator and depth of field calculator - Interactive depth of field calculator with background blur simulation feature

The area within the depth of field appears sharp, while the areas beyond the depth of field appear blurry.

A macro photograph with very shallow depth of field

Digital techniques, such as ray tracing, can also render 3D models with shallow depth of field for the same effect.

A 35 mm lens set to f/11. The depth-of-field scale (top) indicates that a subject which is anywhere between 1 and 2 meters in front of the camera will be rendered acceptably sharp. If the aperture were set to f/22 instead, everything from just over 0.7 meters almost to infinity would appear to be in focus.

Out-of-focus highlights have the shape of the lens aperture.

Scheimpflug principle.

Effect of aperture on blur and DOF. The points in focus (2) project points onto the image plane (5), but points at different distances (1 and 3) project blurred images, or circles of confusion. Decreasing the aperture size (4) reduces the size of the blur spots for points not in the focused plane, so that the blurring is imperceptible, and all points are within the DOF.

Series of images demonstrating a 6-image focus bracket of a Tachinid fly. The first two images illustrate the typical DOF of a single image at f/10, while the third image is the composite of 6 images.

Detail from the lens shown above. The point half-way between the 1 m and 2 m marks represents approximately 1.3 m.

DOF scale on Tessina focusing dial

Minox LX camera with hyperfocal red dot

Zeiss Ikon Contessa with red marks for hyperfocal distance 20 ft at f/8

At f/32, the background competes for the viewer’s attention.

At f/5.6, the flowers are isolated from the background.

At f/2.8, the cat’s face is isolated from the background and from its paw in the foreground.

The integrated circuit package, which is in focus in this macro shot, is 2.5 mm higher than the circuit board it is mounted on. In macro photography, objects at even small distances from the plane of focus can be unsharp. At f/32 every object is within the DOF, whereas the closer to f/5 the aperture gets, the fewer the objects that are sharp. There is a tradeoff, however: at f/32, the lettering on the IC package is noticeably softer than at f/5 because of diffraction. At f/5 the small dust particles at the bottom right corner form blur spots in the shape of the aperture stop. The images were taken with a 105 mm f/2.8 macro lens.

DOF for symmetrical lens.

Defocus blur for background object at B.

8.2 Focal length

“Rear focal distance” redirects here. For lens to film distance in a camera, see Flange focal distance.

The focal length of an optical system is a measure of how strongly the system converges or diverges light. For an optical system in air, it is the distance over which initially collimated (parallel) rays are brought to a focus. A system with a shorter focal length has greater optical power than one with a long focal length; that is, it bends the rays more sharply, bringing them to a focus in a shorter distance.

In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with lower magnification and a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection.

8.2.1 Thin lens approximation

For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive, and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative, and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens. When a lens is used to form an image of some object, the distance from the object to the lens u, the distance from the lens to the image v, and the focal length f are related by

1/f = 1/u + 1/v.

The focal length of a thin lens can be easily measured by using it to form an image of a distant light source on a screen. The lens is moved until a sharp image is formed on the screen. In this case 1/u is negligible, and the focal length is then given by f ≈ v.
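The thin-lens relation and the screen-focusing approximation above can be checked numerically. This is a minimal sketch; the function name is illustrative, not from the text:

```python
# Solve the thin-lens equation 1/f = 1/u + 1/v for the image distance v.
def image_distance(f, u):
    """Image distance v for focal length f and object distance u (same units)."""
    return 1.0 / (1.0 / f - 1.0 / u)

# For a very distant source, 1/u is negligible and v ≈ f, which is the
# screen-focusing measurement described above:
v_far = image_distance(50.0, 10_000_000.0)   # ≈ 50 mm
v_near = image_distance(50.0, 1000.0)        # ≈ 52.6 mm for an object 1 m away
```

The second value reappears in the photography section below: focusing a 50 mm lens on an object 1 m away requires about 2.6 mm of extra lens extension.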

8.2.2 General optical systems

For a thick lens (one which has a non-negligible thickness), or an imaging system consisting of several lenses and/or mirrors (e.g., a photographic lens or a telescope), the focal length is often called the effective focal length (EFL), to distinguish it from other commonly used parameters:

• Front focal length (FFL) or front focal distance (FFD) (sF) is the distance from the front focal point of the system (F) to the vertex of the first optical surface (S1).[1][2]

• Back focal length (BFL) or back focal distance (BFD) (s′F′) is the distance from the vertex of the last optical surface of the system (S2) to the rear focal point (F′).[1][2]

For an optical system in air, the effective focal length (f and f′) gives the distance from the front and rear principal planes (H and H′) to the corresponding focal points (F and F′). If the surrounding medium is not air, then the distance is multiplied by the refractive index of the medium (n is the refractive index of the substance from which the lens itself is made; n1 is the refractive index of any medium in front of the lens; n2 is that of any medium in back of it). Some authors call these distances the front/rear focal lengths, distinguishing them from the front/rear focal distances, defined above.[1]

In general, the focal length or EFL is the value that describes the ability of the optical system to focus light, and is the value used to calculate the magnification of the system. The other parameters are used in determining where an image will be formed for a given object position.

For the case of a lens of thickness d in air (n1 = n2 = 1), and surfaces with radii of curvature R1 and R2, the effective focal length f is given by the Lensmaker’s equation:

1/f = (n − 1) ( 1/R1 − 1/R2 + (n − 1)d / (n R1 R2) ),

where n is the refractive index of the lens medium. The quantity 1/f is also known as the optical power of the lens. The corresponding front focal distance is:[3]

FFD = f ( 1 + (n − 1)d / (n R2) ),

and the back focal distance:

BFD = f ( 1 − (n − 1)d / (n R1) ).

In the sign convention used here, the value of R1 will be positive if the first lens surface is convex, and negative if it is concave. The value of R2 is negative if the second surface is convex, and positive if concave. Note that sign conventions vary between different authors, which results in different forms of these equations depending on the convention used. For a spherically curved mirror in air, the magnitude of the focal length is equal to the radius of curvature of the mirror divided by two. The focal length is positive for a concave mirror, and negative for a convex mirror. In the sign convention used in optical design, a concave mirror has negative radius of curvature, so
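The lensmaker's equation and the FFD/BFD formulas above can be sketched in code, using the sign convention just stated (R1 > 0 for a convex first surface, R2 < 0 for a convex second surface). The function name and the example lens are illustrative, not from the text:

```python
# Thick lens in air: effective focal length, front and back focal distances.
def lens_parameters(n, R1, R2, d):
    """Return (EFL, FFD, BFD) from the lensmaker's equation."""
    inv_f = (n - 1) * (1.0/R1 - 1.0/R2 + (n - 1)*d / (n*R1*R2))
    f = 1.0 / inv_f
    ffd = f * (1 + (n - 1)*d / (n*R2))
    bfd = f * (1 - (n - 1)*d / (n*R1))
    return f, ffd, bfd

# Symmetric biconvex lens: n = 1.5, |R| = 100 mm, 10 mm thick.
f, ffd, bfd = lens_parameters(1.5, 100.0, -100.0, 10.0)
# f ≈ 101.7 mm; by symmetry the front and back focal distances are equal.
```

In the thin-lens limit (d → 0) both focal distances reduce to f and the equation collapses to the familiar 1/f = (n − 1)(1/R1 − 1/R2).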

f = −R/2,

where R is the radius of curvature of the mirror’s surface. See Radius of curvature (optics) for more information on the sign convention for radius of curvature used here.

8.2.3 In photography

28 mm lens

50 mm lens

70 mm lens

210 mm lens

An example of how lens choice affects angle of view. The photos above were taken by a 35 mm camera at a fixed distance from the subject.

Camera lens focal lengths are usually specified in millimetres (mm), but some older lenses are marked in centimetres (cm) or inches. Focal length (f) and field of view (FOV) of a lens are inversely proportional. For a standard rectilinear lens, FOV = 2 arctan(x/2f), where x is the diagonal of the film.

When a photographic lens is set to “infinity”, its rear nodal point is separated from the sensor or film, at the focal plane, by the lens’s focal length. Objects far away from the camera then produce sharp images on the sensor or film, which is also at the image plane. To render closer objects in sharp focus, the lens must be adjusted to increase the distance between the rear nodal point and the film, to put the film at the image plane. The focal length (f), the distance from the front nodal point to the object to photograph (s1), and the distance from the rear nodal point to the image plane (s2) are then related by:

1/s1 + 1/s2 = 1/f.

As s1 is decreased, s2 must be increased. For example, consider a normal lens for a 35 mm camera with a focal length of f = 50 mm. To focus a distant object (s1 ≈ ∞), the rear nodal point of the lens must be located a distance s2 = 50 mm from the image plane. To focus an object 1 m away (s1 = 1,000 mm), the lens must be moved 2.6 mm farther away from the image plane, to s2 = 52.6 mm.

The focal length of a lens determines the magnification at which it images distant objects. It is equal to the distance between the image plane and a pinhole that images distant objects the same size as the lens in question. For rectilinear lenses (that is, with no image distortion), the imaging of distant objects is well modelled as a pinhole camera model.[4] This model leads to the simple geometric model that photographers use for computing the angle of view of a camera; in this case, the angle of view depends only on the ratio of focal length to film size. In general, the angle of view depends also on the distortion.[5]

A lens with a focal length about equal to the diagonal size of the film or sensor format is known as a normal lens; its angle of view is similar to the angle subtended by a large-enough print viewed at a typical viewing distance of the print diagonal, which therefore yields a normal perspective when viewing the print;[6] this angle of view is about 53 degrees diagonally. For full-frame 35 mm-format cameras, the diagonal is 43 mm and a typical “normal” lens has a 50 mm focal length. A lens with a focal length shorter than normal is often referred to as a wide-angle lens (typically 35 mm and less, for 35 mm-format cameras), while a lens significantly longer than normal may be referred to as a telephoto lens (typically 85 mm and more, for 35 mm-format cameras). Technically, long focal length lenses are only “telephoto” if the focal length is longer than the physical length of the lens, but the term is often used to describe any long focal length lens.
Due to the popularity of the 35 mm standard, camera–lens combinations are often described in terms of their 35 mm-equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm-equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor.
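The angle-of-view relation FOV = 2·arctan(x/2f) stated earlier can be checked against the figures in this section. A minimal sketch, with illustrative names:

```python
import math

# Angle of view of a standard rectilinear lens, in degrees,
# where diagonal_mm is the film/sensor diagonal x.
def fov_deg(focal_mm, diagonal_mm):
    return math.degrees(2 * math.atan(diagonal_mm / (2 * focal_mm)))

# Full-frame 35 mm film has a diagonal of about 43 mm. A lens whose focal
# length equals the diagonal gives the "normal" ~53° diagonal angle of view:
normal_fov = fov_deg(43.0, 43.0)   # ≈ 53.1°
fov_50mm = fov_deg(50.0, 43.0)     # ≈ 46.5° for the typical 50 mm normal lens
```

The same function also illustrates the crop factor: a smaller sensor diagonal at the same focal length yields a proportionally narrower angle of view.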

8.2.4 See also

• Depth of field

• f-number or focal ratio

• Dioptre

• Focus (optics)

8.2.5 References

[1] John E. Greivenkamp (2004). Field Guide to Geometrical Optics. SPIE Press. pp. 6–9. ISBN 978-0-8194-5294-8.

[2] Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. p. 168. ISBN 978-0805385663.

[3] Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. pp. 244–245. ISBN 978-0805385663.

[4] Jeffrey Charles (2000). Practical Astrophotography. Springer. pp. 63–66. ISBN 978-1-85233-023-1.

[5] Leslie Stroebel and Richard D. Zakia (1993). The Focal encyclopedia of photography (3rd ed.). Focal Press. p. 27. ISBN 978-0-240-51417-8.

[6] Leslie D. Stroebel (1999). View Camera Technique. Focal Press. pp. 135–138. ISBN 978-0-240-80345-6.

The focal point F and focal length f of a positive (convex) lens, a negative (concave) lens, a concave mirror, and a convex mirror.

Thick lens diagram

Images of black letters in a thin convex lens of focal length f are shown in red. Selected rays are shown for letters E, I and K in blue, green and orange, respectively. Note that E (at 2f) has an equal-size, real and inverted image; I (at f) has its image at infinity; and K (at f/2) has a double-size, virtual and upright image.

Chapter 9

Day 9

9.1 Magnification

For other uses, see Magnification (disambiguation).

Magnification is the process of enlarging something only in appearance, not in physical size. This enlargement is quantified by a calculated number also called “magnification”. When this number is less than one, it refers to a reduction in size, sometimes called “minification” or “de-magnification”. Typically, magnification is related to scaling up visuals or images to be able to see more detail, increasing resolution, using a microscope, printing techniques, or digital processing. In all cases, the magnification of the image does not change the perspective of the image.

The stamp appears larger with the use of a magnifying glass


Stepwise magnification by 6% per frame into a 39 megapixel image. In the final frame, at about 170x, an image of a bystander is seen reflected in the man’s cornea

9.1.1 Examples of magnification

• A magnifying glass, which uses a positive (convex) lens to make things look bigger by allowing the user to hold them closer to their eye.

• A telescope, which uses its large objective lens to create an image of a distant object and then allows the user to examine the image closely with a smaller eyepiece lens, thus making the object look larger.

• A microscope, which makes a small object appear as a much larger object at a comfortable distance for viewing. A microscope is similar in layout to a telescope except that the object being viewed is close to the objective, which is usually much smaller than the eyepiece.

• A slide projector, which projects a large image of a small slide on a screen.

9.1.2 Magnification as a number (optical magnification)

Optical magnification is the ratio between the apparent size of an object (or its size in an image) and its true size, and thus it is a dimensionless number. Optical magnification is sometimes referred to as “power” (for example “10× power”), although this can lead to confusion with optical power.

Linear or transverse magnification

For real images, such as images projected on a screen, size means a linear dimension (measured, for example, in millimeters or inches).

Angular magnification

For optical instruments with an eyepiece, the linear dimension of the image seen in the eyepiece (a virtual image at infinite distance) cannot be given, so size means the angle subtended by the object at the focal point (angular size). Strictly speaking, one should take the tangent of that angle (in practice, this makes a difference only if the angle is larger than a few degrees). Thus, angular magnification is given by:

MA = tan ε / tan ε0

where ε0 is the angle subtended by the object at the front focal point of the objective and ε is the angle subtended by the image at the rear focal point of the eyepiece.

Example: The angular size of the full moon is 0.5°. In binoculars with 10× magnification it appears to subtend an angle of 5°.

By convention, for magnifying glasses and optical microscopes, where the size of the object is a linear dimension and the apparent size is an angle, the magnification is the ratio between the apparent (angular) size as seen in the eyepiece and the angular size of the object when placed at the conventional closest distance of distinct vision: 25 cm from the eye.
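The full-moon example above can be checked directly from the definition MA = tan ε / tan ε0. A minimal sketch, with an illustrative function name:

```python
import math

# Angular magnification from the subtended angles, MA = tan(ε) / tan(ε0).
def angular_magnification(eps_deg, eps0_deg):
    return math.tan(math.radians(eps_deg)) / math.tan(math.radians(eps0_deg))

# Full moon: 0.5° to the naked eye, about 5° through 10x binoculars.
ma = angular_magnification(5.0, 0.5)
# ≈ 10, with a small deviation because tan is only approximately
# linear for small angles — which is why the tangents matter at all.
```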

By instrument

Single lens

The linear magnification of a thin lens is

M = f / (f − do)

where f is the focal length and do is the distance from the lens to the object. Note that for real images, M is negative and the image is inverted. For virtual images, M is positive and the image is upright.

With di being the distance from the lens to the image, hi the height of the image and ho the height of the object, the magnification can also be written as:

M = −di/do = hi/ho

Note again that a negative magnification implies an inverted image.

Photography

The image recorded by a photographic film or image sensor is always a real image and is usually inverted. When measuring the height of an inverted image using the Cartesian sign convention (where the x-axis is the optical axis), the value for hi will be negative, and as a result M will also be negative. However, the traditional sign convention used in photography is “real is positive, virtual is negative”.[1] Therefore in photography: object height and distance are always real and positive. When the focal length is positive, the image’s height, distance and magnification are real and positive. Only if the focal length is negative are the image’s height, distance and magnification virtual and negative. Therefore the photographic magnification formulae are traditionally presented as:

M = di/do = hi/ho = f/(do − f) = (di − f)/f

Telescope

The angular magnification of a telescope is given by

M = fo/fe

where fo is the focal length of the objective lens and fe is the focal length of the eyepiece.

Magnifying glass

The maximum angular magnification (compared to the naked eye) of a magnifying glass depends on how the glass and the object are held, relative to the eye. If the lens is held at a distance from the object such that its front focal point is on the object being viewed, the relaxed eye (focused to infinity) can view the image with angular magnification

MA = 25 cm / f

Here, f is the focal length of the lens in centimeters. The constant 25 cm is an estimate of the “near point” distance of the eye—the closest distance at which the healthy naked eye can focus. In this case the angular magnification is independent of the distance kept between the eye and the magnifying glass. If instead the lens is held very close to the eye and the object is placed closer to the lens than its focal point so that the observer focuses on the near point, a larger angular magnification can be obtained, approaching

A thin lens, where black dimensions are real and grey are virtual. The direction of the arrows can be used to describe Cartesian +/− signage: from the centre of the lens, left or down = negative, right or up = positive.

MA = 25 cm / f + 1

A different interpretation of the working of the latter case is that the magnifying glass changes the diopter of the eye (making it myopic) so that the object can be placed closer to the eye, resulting in a larger angular magnification.

Microscope The angular magnification of a microscope is given by

MA = Mo × Me

where Mo is the magnification of the objective and Me the magnification of the eyepiece. The magnification of the objective depends on its focal length fo and on the distance d between the objective’s back focal plane and the focal plane of the eyepiece (called the tube length):

Mo = d/fo

The magnification of the eyepiece depends upon its focal length fe and is calculated by the same equation as that of a magnifying glass (above). Note that both astronomical telescopes and simple microscopes produce an inverted image; thus the equation for the magnification of a telescope or microscope is often given with a minus sign.

Measurement of telescope magnification

Measuring the actual angular magnification of a telescope is difficult, but it is possible to use the reciprocal relationship between the linear magnification and the angular magnification, since the linear magnification is constant for all objects. The telescope is focused correctly for viewing objects at the distance for which the angular magnification is to be determined, and then the object glass is used as an object, the image of which is known as the exit pupil. The diameter of this may be measured using an instrument known as a Ramsden dynameter, which consists of a Ramsden eyepiece with micrometer hairs in the back focal plane. This is mounted in front of the telescope eyepiece and used to evaluate the diameter of the exit pupil. This will be much smaller than the object glass diameter, which gives the linear magnification (actually a reduction); the angular magnification can then be determined from

MA = 1/M = DObjective/DRamsden

Maximum usable magnification

With any telescope, microscope, or lens, a maximum magnification exists beyond which the image looks bigger but shows no more detail. It occurs when the finest detail the instrument can resolve is magnified to match the finest detail the eye can see. Magnification beyond this maximum is sometimes called “empty magnification”.

For a good quality telescope operating in good atmospheric conditions, the maximum usable magnification is limited by diffraction. In practice it is considered to be 2× the aperture in millimetres or 50× the aperture in inches; so, a 60 mm diameter telescope has a maximum usable magnification of 120×.

With an optical microscope having a high numerical aperture and using oil immersion, the best possible resolution is 200 nm, corresponding to a magnification of around 1200×. Without oil immersion, the maximum usable magnification is around 800×. For details, see limitations of optical microscopes. Small, cheap telescopes and microscopes are sometimes supplied with eyepieces that give magnification far higher than is usable.
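The rules of thumb above, together with M = fo/fe from the telescope section, can be sketched as a small calculation. The example aperture and focal lengths are illustrative:

```python
# Rule of thumb from the text: maximum usable magnification of a telescope
# is about 2x its aperture in millimetres (50x the aperture in inches).
def max_usable_magnification(aperture_mm):
    return 2 * aperture_mm

# Actual magnification from the objective and eyepiece focal lengths.
def telescope_magnification(fo_mm, fe_mm):
    return fo_mm / fe_mm

# A 60 mm telescope tops out around 120x; for instance, a 900 mm objective
# with a 7.5 mm eyepiece reaches exactly that, and any shorter eyepiece
# would only give "empty magnification".
m_max = max_usable_magnification(60)     # 120
m = telescope_magnification(900.0, 7.5)  # 120.0
```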

9.1.3 Magnification and micron bar

Magnification figures on printed pictures can be misleading. Editors of journals and magazines routinely resize images to fit the page, making any magnification number provided in the figure legend incorrect. A scale bar (or micron bar) is a bar of stated length superimposed on a picture. This bar can be used to make accurate measurements on a picture. When a picture is resized the bar will be resized in proportion. If a picture has a scale bar, the actual magnification can easily be calculated. Where the scale (magnification) of an image is important or relevant, including a scale bar is preferable to stating magnification.

9.1.4 See also

• Diopter

• Dynameter

• Lens

• Magnifying glass

• Microscope

• Optical telescope

• Screen magnifier

9.1.5 References

[1] Ray, Sidney F. (2002). Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging. Focal Press. p. 40. ISBN 0-240-51540-4.

9.2 Distortion (optics)

Not to be confused with spherical aberration, a loss of image sharpness that can result from spherical lens surfaces.

In geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration.

Wine glasses create non-uniform distortion of their background

9.2.1 Radial distortion

Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of a photographic lens. These radial distortions can usually be classified as either barrel distortions or pincushion distortions. See van Walree.[1] Mathematically, barrel and pincushion distortion are quadratic, meaning they increase as the square of distance from the center. In mustache distortion the quartic (degree 4) term is significant: in the center, the degree 2 barrel distortion is dominant, while at the edge the degree 4 distortion in the pincushion direction dominates. Other distortions are in principle possible – pincushion in center and barrel at the edge, or higher order distortions (degree 6, degree 8) – but do not generally occur in practical lenses, and higher order distortions are small relative to the main barrel and pincushion effects.

Occurrence

Simulated animation of globe effect (right) compared with a simple pan (left)

In photography, distortion is particularly associated with zoom lenses, particularly large-range zooms, but may also be found in prime lenses, and depends on focal distance – for example, the Canon EF 50mm f/1.4 exhibits barrel distortion at extremely short focal distances. Barrel distortion may be found in wide-angle lenses, and is often seen at the wide-angle end of zoom lenses, while pincushion distortion is often seen in older or low-end telephoto lenses. Mustache distortion is observed particularly on the wide end of zooms, with certain retrofocus lenses, and more recently on large-range zooms such as the Nikon 18–200 mm.

A certain amount of pincushion distortion is often found with visual optical instruments, e.g., binoculars, where it serves to eliminate the globe effect.

In order to understand these distortions, it should be remembered that these are radial defects; the optical systems in question have rotational symmetry (omitting non-radial defects), so the didactically correct test image would be a set of concentric circles having even separation—like a shooter’s target. It will then be observed that these common distortions actually imply a nonlinear radius mapping from the object to the image: what is seemingly pincushion distortion is actually simply an exaggerated radius mapping for large radii in comparison with small radii. A graph showing radius transformations (from object to image) will be steeper in the upper (rightmost) end. Conversely, barrel distortion is actually a diminished radius mapping for large radii in comparison with small radii. A graph showing radius transformations (from object to image) will be less steep in the upper (rightmost) end.

Chromatic aberration

Further information: Chromatic aberration

Radial distortions can be understood by their effect on concentric circles, as in an archery target.

Radial distortion that depends on wavelength is called "lateral chromatic aberration" – "lateral" because radial, "chromatic" because dependent on color (wavelength). This can cause colored fringes in high-contrast areas in the outer parts of the image. This should not be confused with axial (longitudinal) chromatic aberration, which causes aberrations throughout the field.

Origin of terms

The names for these distortions come from familiar objects which are visually similar.

• In barrel distortion, straight lines bulge outwards at the center, as in a barrel.

• In pincushion distortion, corners of squares form elongated points, as in a cushion.

• In mustache distortion, horizontal lines bulge up in the center, then bend the other way as they approach the edge of the frame (if in the top of the frame), as in curly handlebar mustaches.

9.2.2 Software correction

Radial distortion, whilst primarily dominated by low-order radial components,[3] can be corrected using Brown's distortion model,[4] also known as the Brown–Conrady model, based on earlier work by Conrady.[5] The Brown–Conrady model corrects both for radial distortion and for tangential distortion caused by physical elements in a lens not being perfectly aligned. The latter is also known as decentering distortion. See Zhang[6] for a discussion of radial distortion.

x_d = x_u(1 + K_1 r^2 + K_2 r^4 + ···) + (P_2(r^2 + 2x_u^2) + 2P_1 x_u y_u)(1 + P_3 r^2 + P_4 r^4 + ···)

y_d = y_u(1 + K_1 r^2 + K_2 r^4 + ···) + (P_1(r^2 + 2y_u^2) + 2P_2 x_u y_u)(1 + P_3 r^2 + P_4 r^4 + ···)

where:

(x_d, y_d) = distorted image point as projected on the image plane using the specified lens,

(x_u, y_u) = undistorted image point as projected by an ideal pinhole camera,

(x_c, y_c) = distortion center (assumed to be the principal point),
K_n = nth radial distortion coefficient,
P_n = nth tangential distortion coefficient (note that Brown's original definition has P_1 and P_2 interchanged),
r = √((x_u − x_c)^2 + (y_u − y_c)^2), and
··· = an infinite series.

Barrel distortion typically will have a negative term for K_1 whereas pincushion distortion will have a positive value. Moustache distortion will have a non-monotonic radial geometric series where for some r the sequence will change sign. Software can correct these distortions by warping the image with a reverse distortion. This involves determining which distorted pixel corresponds to each undistorted pixel, which is non-trivial due to the non-linearity of the distortion equation.[3] Lateral chromatic aberration (purple/green fringing) can be significantly reduced by applying such warping for red, green and blue separately. An alternative method iteratively computes the undistorted pixel position.[7]
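As a sketch of the above, the following minimal Python implements the Brown–Conrady forward model together with an iterative computation of the undistorted position. The coefficient values are made up for illustration (real values come from camera calibration), and the distortion center is taken as the origin:

```python
import math

# Hypothetical calibration coefficients, for illustration only.
K1, K2 = -0.30, 0.06      # radial distortion coefficients
P1, P2 = 0.001, -0.0005   # tangential (decentering) coefficients

def distort(xu, yu):
    """Forward Brown-Conrady model: undistorted (pinhole) point -> distorted point."""
    r2 = xu * xu + yu * yu
    radial = 1.0 + K1 * r2 + K2 * r2 * r2
    xd = xu * radial + (P2 * (r2 + 2 * xu * xu) + 2 * P1 * xu * yu)
    yd = yu * radial + (P1 * (r2 + 2 * yu * yu) + 2 * P2 * xu * yu)
    return xd, yd

def undistort(xd, yd, iterations=20):
    """Invert the model by fixed-point iteration: start from the distorted
    point and repeatedly remove the distortion evaluated at the estimate."""
    xu, yu = xd, yd
    for _ in range(iterations):
        x_est, y_est = distort(xu, yu)
        xu += xd - x_est
        yu += yd - y_est
    return xu, yu

xu, yu = 0.4, 0.3
xd, yd = distort(xu, yu)       # barrel distortion pulls the point inward
xr, yr = undistort(xd, yd)     # iterative inverse recovers the pinhole point
assert math.hypot(xr - xu, yr - yu) < 1e-9
```

For moderate distortion the fixed-point update is a contraction, so a couple of dozen iterations recover the undistorted position to well below a pixel; this is one simple form of the iterative approach mentioned above.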

Calibrated

Calibrated systems work from a table of lens/camera transfer functions:

• Adobe Photoshop Lightroom and Photoshop CS5 can correct complex distortion.

• PTlens is a Photoshop plugin or standalone application which corrects complex distortion. It not only corrects for linear distortion, but also second-degree and higher nonlinear components.[8]

• Lensfun is a free-to-use database and library for correcting lens distortion.[9]

• DxO Labs' Optics Pro can correct complex distortion, and takes into account the focus distance.

• proDAD Defishr includes an Unwarp tool and a Calibrator tool. From the distortion of a photographed checkerboard pattern, the necessary unwarp is calculated.

• The Micro Four Thirds system cameras and lenses perform automatic distortion correction using correction parameters that are stored in each lens's firmware, and are applied automatically by the camera and raw converter software. The optics of most of these lenses feature substantially more distortion than their counterparts in systems that don't offer such automatic corrections, but the software-corrected final images show noticeably less distortion than competing designs.[10]

Manual

Manual systems allow manual adjustment of distortion parameters:

• ImageMagick can correct several distortions; for example the fisheye distortion of the popular GoPro Hero3+ Silver camera can be corrected by the command[11]

convert distorted_image.jpg -distort barrel "0.06335 -0.18432 -0.13009" corrected_image.jpg

• Photoshop CS2 and Photoshop Elements (from version 5) include a manual Lens Correction filter for simple (pincushion/barrel) distortion.

• Corel Paint Shop Pro Photo includes a manual Lens Distortion effect for simple (barrel, fisheye, fisheye spherical and pincushion) distortion.

• The GIMP includes manual lens distortion correction (from version 2.4).

• PhotoPerfect has interactive functions for general pincushion adjustment, and for fringe (adjusting the size of the red, green and blue image parts).

• Hugin can be used to correct distortion, though that is not its primary application.[12]

9.2.3 Related phenomena

Radial distortion is a failure of a lens to be rectilinear: a failure to image lines into lines. If a photograph is not taken straight-on then, even with a perfect rectilinear lens, rectangles will appear as trapezoids: lines are imaged as lines, but the angles between them are not preserved (tilt is not a conformal map). This effect can be controlled by using a perspective control lens, or corrected in post-processing.

Due to perspective, cameras image a cube as a square frustum (a truncated pyramid, with trapezoidal sides) – the far end is smaller than the near end. This creates perspective, and the rate at which this scaling happens (how quickly more distant objects shrink) creates a sense of a scene being deep or shallow. This cannot be changed or corrected by a simple transform of the resulting image, because it requires 3D information, namely the depth of objects in the scene. This effect is known as perspective distortion; the image itself is not distorted, but is perceived as distorted when viewed from a normal viewing distance.

Note that if the center of the image is closer than the edges (for example, a straight-on shot of a face), then barrel distortion and wide-angle distortion (taking the shot from close) both increase the size of the center, while pincushion distortion and telephoto distortion (taking the shot from far) both decrease the size of the center. However, radial distortion bends straight lines (out or in), while perspective distortion does not bend lines, and these are distinct phenomena. Fisheye lenses are wide-angle lenses with heavy barrel distortion and thus exhibit both these phenomena, so objects in the center of the image (if shot from a short distance) are particularly enlarged: even if the barrel distortion is corrected, the resulting image is still from a wide-angle lens, and will still have a wide-angle perspective.

9.2.4 See also

• Anamorphosis

• Angle of view

• Cylindrical perspective

• Distortion

• Texture gradient

• Underwater vision

• Vignetting

9.2.5 References

[1] Paul van Walree. “Distortion”. Photographic optics. Retrieved 2 February 2009.

[2] “Tamron 18-270mm f/3.5-6.3 Di II VC PZD”. Retrieved 20 March 2013.

[3] de Villiers, J. P.; Leuschner, F. W.; Geldenhuys, R. (17–19 November 2008). “Centi-pixel accurate real-time inverse distortion correction” (PDF). 2008 International Symposium on Optomechatronic Technologies. SPIE. doi:10.1117/12.804771.

[4] Brown, Duane C. (May 1966). “Decentering distortion of lenses” (PDF). Photogrammetric Engineering. 32 (3): 444–462.

[5] Conrady, Alexander Eugen. “Decentred Lens-Systems”. Monthly Notices of the Royal Astronomical Society 79 (1919): 384–390.

[6] Zhang, Zhengyou (1998). A Flexible New Technique for Camera Calibration (PDF) (Technical report). Microsoft Research. MSR-TR-98-71.

[7] “A Four-step Camera Calibration Procedure with Implicit Image Correction”. Retrieved 19 January 2011.

[8] “PTlens”. Retrieved 2 Jan 2012.

[9] “lensfun - Rev 246 - /trunk/README”. Retrieved 13 Oct 2013.

[10] Wiley, Carlisle. “Articles: Review”. Dpreview.com. Retrieved 2013-07-03.

[11] “ImageMagick v6 Examples -- Lens Corrections”.

[12] “Hugin tutorial – Simulating an architectural projection”. Retrieved 9 September 2009.

9.2.6 External links

• Lens distortion estimation and correction with source code and online demonstration

• Lens distortion correction on post-processing

• 3D modeling Lens distortion and camera field of view in CCTV design

9.3 Optical aberration

An optical aberration is a departure of the performance of an optical system from the predictions of paraxial optics.[1] In an imaging system, it occurs when light from one point of an object does not converge into (or does not diverge from) a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than because of flaws in the optical elements.[2]

Aberration leads to blurring of the image produced by an image-forming optical system. Makers of optical instruments need to correct optical systems to compensate for aberration. The articles on reflection, refraction and caustics discuss the general features of reflected and refracted rays.

9.3.1 Overview

Reflection from a spherical mirror. Incident rays (red) away from the center of the mirror produce reflected rays (green) that miss the focal point, F. This is due to spherical aberration.

Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are caused by the geometry of the lens or mirror and occur both when light is reflected and when it is refracted. They appear even when using monochromatic light, hence the name. Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with wavelength. They do not appear when monochromatic light is used.

Monochromatic aberrations

• Piston

• Tilt

• Defocus

• Spherical aberration

• Coma

• Astigmatism

• Field curvature

• Image distortion

Piston and tilt are not actually true optical aberrations, as they do not represent or model curvature in the wavefront. If an otherwise perfect wavefront is “aberrated” by piston and tilt, it will still form a perfect, aberration-free image, only shifted to a different position. Defocus is the lowest-order true optical aberration.

Chromatic aberrations

Main article: Chromatic aberration

• Axial, or longitudinal, chromatic aberration

• Lateral, or transverse, chromatic aberration

9.3.2 Monochromatic aberration

The elementary theory of optical systems leads to the theorem: rays of light proceeding from any object point unite in an image point; and therefore an object space is reproduced in an image space. The introduction of simple auxiliary terms, due to C. F. Gauss (Dioptrische Untersuchungen, Göttingen, 1841), named the focal lengths and focal planes, permits the determination of the image of any object for any system (see lens). The Gaussian theory, however, is only true so long as the angles made by all rays with the optical axis (the symmetrical axis of the system) are infinitely small, i.e. with infinitesimal objects, images and lenses; in practice these conditions may not be realized, and the images projected by uncorrected systems are, in general, ill-defined and often completely blurred, if the aperture or field of view exceeds certain limits.

The investigations of James Clerk Maxwell (Phil. Mag., 1856; Quart. Journ. Math., 1858) and Ernst Abbe[3] showed that the properties of these reproductions, i.e. the relative position and magnitude of the images, are not special properties of optical systems, but necessary consequences of the supposition (in Abbe) of the reproduction of all points of a space in image points (Maxwell assumes a less general hypothesis), and are independent of the manner in which the reproduction is effected. These authors proved, however, that no optical system can justify these suppositions, since they are contradictory to the fundamental laws of reflection and refraction. Consequently, the Gaussian theory only supplies a convenient method of approximating to reality; and no constructor would attempt to realize this unattainable ideal. At present, all that can be attempted is to reproduce a single plane in another plane; but even this has not been altogether satisfactorily accomplished: aberrations always occur, and it is improbable that these will ever be entirely corrected.
This, and related general questions, have been treated — besides the above-mentioned authors — by M. Thiesen (Berlin. Akad. Sitzber., 1890, xxxv. 799; Berlin. Phys. Ges. Verh., 1892) and H. Bruns (Leipzig. Math. Phys. Ber., 1895, xxi. 325) by means of Sir W. R. Hamilton's characteristic function (Irish Acad. Trans., Theory of Systems of Rays, 1828, et seq.). Reference may also be made to the treatise of Czapski-Eppenstein, pp. 155–161. A review of the simplest cases of aberration will now be given.

Figure 1: rays from an axial object point O traverse an optical system whose components S1 and S2 lie on either side of the aperture stop; rays at angles u1 and u2 cross the axis at different points, and the entrance and exit pupils are the images of the stop.

Aberration of axial points (spherical aberration in the restricted sense)

Let S (fig. 1) be any optical system; rays proceeding from an axis point O under an angle u1 will unite in the axis point O'1, and those under an angle u2 in the axis point O'2. If there is refraction at a collective spherical surface, or through a thin positive lens, O'2 will lie in front of O'1 so long as the angle u2 is greater than u1 (under correction); and conversely with a dispersive surface or lenses (over correction). The caustic, in the first case, resembles the sign > (greater than); in the second < (less than). If the angle u1 is very small, O'1 is the Gaussian image; and O'1 O'2 is termed the longitudinal aberration, and O'1R the lateral aberration of the pencils with aperture u2. If the pencil with the angle u2 is that of the maximum aberration of all the pencils transmitted, then in a plane perpendicular to the axis at O'1 there is a circular disk of confusion of radius O'1R, and in a parallel plane at O'2 another one of radius O'2R2; between these two is situated the disk of least confusion.

The largest opening of the pencils which take part in the reproduction of O, i.e. the angle u, is generally determined by the margin of one of the lenses or by a hole in a thin plate placed between, before, or behind the lenses of the system. This hole is termed the stop or diaphragm; Abbe used the term aperture stop for both the hole and the limiting margin of the lens. The component S1 of the system, situated between the aperture stop and the object O, projects an image of the diaphragm, termed by Abbe the entrance pupil; the exit pupil is the image formed by the component S2, which is placed behind the aperture stop. All rays which issue from O and pass through the aperture stop also pass through the entrance and exit pupils, since these are images of the aperture stop.
Since the maximum aperture of the pencils issuing from O is the angle u subtended by the entrance pupil at this point, the magnitude of the aberration will be determined by the position and diameter of the entrance pupil. If the system be entirely behind the aperture stop, then this is itself the entrance pupil (front stop); if entirely in front, it is the exit pupil (back stop). If the object point be infinitely distant, all rays received by the first member of the system are parallel, and their intersections, after traversing the system, vary according to their perpendicular height of incidence, i.e. their distance from the axis. This distance replaces the angle u in the preceding considerations; and the aperture, i.e. the radius of the entrance pupil, is its maximum value.
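The growth of this aberration with aperture can be checked with an exact trace at a single refracting sphere. The sketch below uses hypothetical values (unit radius, indices 1.0 and 1.5, object at infinity, so the height of incidence h replaces the angle u) and shows the marginal focus falling in front of the Gaussian focus, i.e. under correction at a collective surface:

```python
import math

# Exact meridional trace at a single collective spherical surface: vertex at
# the origin, center of curvature at x = R, incoming rays parallel to the
# axis at height h. Values of R, n1, n2 are hypothetical.

def axial_focus(h, R=1.0, n1=1.0, n2=1.5):
    """Axis crossing (distance from the vertex) of the refracted ray with
    height of incidence h -- exact, not paraxial."""
    phi = math.asin(h / R)                          # surface-normal tilt: sin(phi) = h/R
    i_refr = math.asin((n1 / n2) * math.sin(phi))   # Snell's law (angle of incidence = phi)
    x_hit = R - math.sqrt(R * R - h * h)            # x-coordinate of the incidence point
    return x_hit + h / math.tan(phi - i_refr)       # where the refracted ray meets the axis

paraxial = 1.5 / (1.5 - 1.0)   # Gaussian focus n2*R/(n2 - n1) = 3.0 from the vertex

assert abs(axial_focus(1e-4) - paraxial) < 1e-6   # near-axis ray recovers the Gaussian focus
assert axial_focus(0.5) < paraxial                # marginal ray focuses short of it:
                                                  # longitudinal spherical aberration
```

The difference between the paraxial value and axial_focus(h) is the longitudinal aberration for that height of incidence, and it grows rapidly as h approaches the full aperture.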

Aberration of elements, i.e. smallest objects at right angles to the axis

If rays issuing from O (fig. 1) be concurrent, it does not follow that points in a portion of a plane perpendicular at O to the axis will be also concurrent, even if the part of the plane be very small. With a considerable aperture, the neighboring point N will be reproduced, but attended by aberrations comparable in magnitude to ON. These

aberrations are avoided if, according to Abbe, the sine condition, sin u'1/sin u1 = sin u'2/sin u2, holds for all rays reproducing the point O. If the object point O is infinitely distant, u1 and u2 are to be replaced by h1 and h2, the perpendicular heights of incidence; the sine condition then becomes sin u'1/h1 = sin u'2/h2. A system fulfilling this condition and free from spherical aberration is called aplanatic (Greek a-, privative, planē, a wandering). This word was first used by Robert Blair (d. 1828), professor of practical astronomy at Edinburgh University, to characterize a superior achromatism, and, subsequently, by many writers to denote freedom from spherical aberration. Both the aberration of axis points and the deviation from the sine condition rapidly increase in most (uncorrected) systems with the aperture.

Aberration of lateral object points (points beyond the axis) with narrow pencils. Astigmatism.

Figure 2

A point O (fig. 2) at a finite distance from the axis (or, with an infinitely distant object, a point which subtends a finite angle at the system) is, in general, even then not sharply reproduced if the pencil of rays issuing from it and traversing the system is made infinitely narrow by reducing the aperture stop; such a pencil consists of the rays which can pass from the object point through the now infinitely small entrance pupil. It is seen (ignoring exceptional cases) that the pencil does not meet the refracting or reflecting surface at right angles; therefore it is astigmatic (Gr. a-, privative, stigma, a point). Naming the central ray passing through the entrance pupil the axis of the pencil or principal ray, it can be said: the rays of the pencil intersect, not in one point, but in two focal lines, which can be assumed to be at right angles to the principal ray; of these, one lies in the plane containing the principal ray and the axis of the system, i.e. in the first principal section or meridional section, and the other at right angles to it, i.e. in the second principal section or sagittal section.

We receive, therefore, in no single intercepting plane behind the system, as, for example, a focusing screen, an image of the object point; on the other hand, in each of two planes lines O' and O'' are separately formed (in neighboring planes ellipses are formed), and in a plane between O' and O'' a circle of least confusion. The interval O'O'', termed the astigmatic difference, increases, in general, with the angle W made by the principal ray OP with the axis of the system, i.e. with the field of view. Two astigmatic image surfaces correspond to one object plane; these are in contact at the axis point; on the one lie the focal lines of the first kind, on the other those of the second. Systems in which the two astigmatic surfaces coincide are termed anastigmatic or stigmatic.
Sir Isaac Newton was probably the discoverer of astigmatism; the position of the astigmatic image lines was determined by Thomas Young (A Course of Lectures on Natural Philosophy, 1807); and the theory was developed by Allvar Gullstrand.[4] A bibliography by P. Culmann is given in Moritz von Rohr's Die Bilderzeugung in optischen Instrumenten.[5]

Aberration of lateral object points with broad pencils. Coma.

By opening the stop wider, similar deviations arise for lateral points as have been already discussed for axial points; but in this case they are much more complicated. The course of the rays in the meridional section is no longer symmetrical to the principal ray of the pencil; and on an intercepting plane there appears, instead of a luminous point, a patch of light, not symmetrical about a point, and often exhibiting a resemblance to a comet having its tail directed towards or away from the axis. From this appearance it takes its name. The unsymmetrical form of the meridional pencil—formerly the only one considered—is coma in the narrower sense only; other errors of coma have been treated by Arthur König and Moritz von Rohr,[5] and later by Allvar Gullstrand.[6]

Curvature of the field of the image

Main article: Petzval field curvature

If the above errors be eliminated, the two astigmatic surfaces united, and a sharp image obtained with a wide aperture—there remains the necessity to correct the curvature of the image surface, especially when the image is to be received upon a plane surface, e.g. in photography. In most cases the surface is concave towards the system.

Distortion of the image

Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis, so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While "distortion" can include arbitrary deformation of an image, the most pronounced mode of distortion produced by conventional imaging optics is "barrel distortion", in which the center of the image is magnified more than the perimeter (figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as "pincushion distortion" (figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it. Systems free of distortion are called orthoscopic (orthos, right; skopein, to look) or rectilinear (straight lines).

This aberration is quite distinct from that of the sharpness of reproduction; in unsharp reproduction, the question of distortion arises if only parts of the object can be recognized in the figure. If, in an unsharp image, a patch of light corresponds to an object point, the center of gravity of the patch may be regarded as the image point, this being the point where the plane receiving the image, e.g., a focusing screen, intersects the ray passing through the middle of the stop. This assumption is justified if a poor image on the focusing screen remains stationary when the aperture is diminished; in practice, this generally occurs. This ray, named by Abbe a principal ray (not to be confused with the principal rays of the Gaussian theory), passes through the center of the entrance pupil before the first refraction, and the center of the exit pupil after the last refraction.
From this it follows that correctness of drawing depends solely upon the principal rays, and is independent of the sharpness or curvature of the image field. Referring to fig. 4, we have O'Q'/OQ = a' tan w'/a tan w = 1/N, where N is the scale or magnification of the image. For N to be constant for all values of w, a' tan w'/a tan w must also be constant. If the ratio a'/a be sufficiently constant, as is often the case, the above relation reduces to the condition of Airy, i.e. tan w'/tan w = a constant. This simple relation (see Camb. Phil. Trans., 1830, 3, p. 1) is fulfilled in all systems which are symmetrical with respect to their diaphragm (briefly named symmetrical or holosymmetrical objectives), or which consist of two like, but different-sized, components, placed from the diaphragm in the ratio of their size, and presenting the same curvature to it (hemisymmetrical objectives); in these systems tan w'/tan w = 1.

The constancy of a'/a necessary for this relation to hold was pointed out by R. H. Bow (Brit. Journ. Photog., 1861) and Thomas Sutton (Photographic Notes, 1862); it has been treated by O. Lummer and by M. von Rohr (Zeit. f. Instrumentenk., 1897, 17, and 1898, 18, p. 4). It requires the middle of the aperture stop to be reproduced in the centers of the entrance and exit pupils without spherical aberration. M. von Rohr showed that for systems fulfilling neither the Airy nor the Bow–Sutton condition, the ratio a' cos w'/a tan w will be constant for one distance of the object. This combined condition is exactly fulfilled by holosymmetrical objectives reproducing with the scale 1, and by hemisymmetrical, if the scale of reproduction be equal to the ratio of the sizes of the two components.

Fig. 3a: Barrel distortion

Zernike model of aberrations

Circular wavefront profiles associated with aberrations may be mathematically modeled using Zernike polynomials. Developed by Frits Zernike in the 1930s, Zernike's polynomials are orthogonal over a circle of unit radius. A complex, aberrated wavefront profile may be curve-fitted with Zernike polynomials to yield a set of fitting coefficients that individually represent different types of aberrations. These Zernike coefficients are linearly independent, thus individual aberration contributions to an overall wavefront may be isolated and quantified separately. There are even and odd Zernike polynomials. The even Zernike polynomials are defined as

Z_n^m(ρ, φ) = R_n^m(ρ) cos(mφ)

and the odd Zernike polynomials as

Z_n^{−m}(ρ, φ) = R_n^m(ρ) sin(mφ),

where m and n are nonnegative integers with n ≥ m, φ is the azimuthal angle in radians, and ρ is the normalized radial distance. The radial polynomials R_n^m have no azimuthal dependence, and are defined as

Fig. 3b: Pincushion distortion

R_n^m(ρ) = Σ_{k=0}^{(n−m)/2} [ (−1)^k (n − k)! / ( k! ((n+m)/2 − k)! ((n−m)/2 − k)! ) ] ρ^{n−2k}   if n − m is even,

and R_n^m(ρ) = 0 if n − m is odd.

The first few Zernike polynomials are conventionally tabulated in terms of ρ, the normalized pupil radius with 0 ≤ ρ ≤ 1, θ, the azimuthal angle around the pupil with 0 ≤ θ ≤ 2π, and the fitting coefficients a0, …, a8, which are the wavefront errors in wavelengths. As in Fourier synthesis using sines and cosines, a wavefront may be perfectly represented by a sufficiently large number of higher-order Zernike polynomials. However, wavefronts with very steep gradients or very high spatial-frequency structure, such as those produced by propagation through atmospheric turbulence or aerodynamic flowfields, are not well modeled by Zernike polynomials, which tend to low-pass filter fine spatial definition in the wavefront. In this case, other fitting methods such as fractals or singular value decomposition may yield improved fitting results.

The circle polynomials were introduced by Frits Zernike to evaluate the point image of an aberrated optical system, taking into account the effects of diffraction. The perfect point image in the presence of diffraction had already been described by Airy, as early as 1835. It took almost a hundred years to arrive at a comprehensive theory and modeling of the point image of aberrated systems (Zernike and Nijboer).

Figure 4: the principal ray through the pupil centers P and P', with object and image heights OQ and O'Q', angles w and w' with the axis, and pupil distances a and a'.

The analysis by Nijboer and Zernike describes the intensity distribution close to the optimum focal plane. An extended theory that allows the calculation of the point-image amplitude and intensity over a much larger volume in the focal region was later developed (extended Nijboer–Zernike theory). This extended Nijboer–Zernike theory of point-image or 'point-spread function' formation has found applications in general research on image formation, especially for systems with a high numerical aperture, and in characterizing optical systems with respect to their aberrations.[7]
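The definitions above translate directly into code. Below is a minimal sketch in pure Python (the function names are ours) of the radial polynomials and the even/odd circle polynomials:

```python
from math import factorial, cos, sin

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho), for nonnegative integers n >= m."""
    if (n - m) % 2 != 0:
        return 0.0                      # R_n^m vanishes when n - m is odd
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

def zernike(n, m, rho, phi):
    """Even (m >= 0, cosine) and odd (m < 0, sine) Zernike polynomials."""
    if m >= 0:
        return zernike_radial(n, m, rho) * cos(m * phi)
    return zernike_radial(n, -m, rho) * sin(-m * phi)

# Low-order checks against the classical closed forms:
assert zernike_radial(0, 0, 0.7) == 1.0                                   # piston
assert zernike_radial(1, 1, 0.7) == 0.7                                   # tilt, ~rho
assert abs(zernike_radial(2, 0, 0.7) - (2 * 0.7**2 - 1)) < 1e-12          # defocus
assert abs(zernike_radial(4, 0, 0.7) - (6 * 0.7**4 - 6 * 0.7**2 + 1)) < 1e-12  # spherical
```

A wavefront-fitting routine would evaluate these polynomials on the measured pupil grid and solve a least-squares problem for the coefficients; that step is omitted here.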

9.3.3 Analytic treatment of aberrations

The preceding review of the several errors of reproduction belongs to the Abbe theory of aberrations, in which definite aberrations are discussed separately; it is well suited to practical needs, for in the construction of an optical instrument certain errors are sought to be eliminated, the selection of which is justified by experience. In the mathematical sense, however, this selection is arbitrary; the reproduction of a finite object with a finite aperture entails, in all probability, an infinite number of aberrations. This number is only finite if the object and aperture are assumed to be infinitely small of a certain order; and with each order of infinite smallness, i.e. with each degree of approximation to reality (to finite objects and apertures), a certain number of aberrations is associated. This connection is only supplied by theories which treat aberrations generally and analytically by means of indefinite series.

Figure 5

A ray proceeding from an object point O (fig. 5) can be defined by the coordinates (ξ, η) of this point O in an object

plane I, at right angles to the axis, and two other coordinates (x, y), the point in which the ray intersects the entrance pupil, i.e. the plane II. Similarly the corresponding image ray may be defined by the points (ξ', η'), and (x', y'), in the planes I' and II'. The origins of these four plane coordinate systems may be collinear with the axis of the optical system; and the corresponding axes may be parallel. Each of the four coordinates ξ', η', x', y' are functions of ξ, η, x, y; and if it be assumed that the field of view and the aperture be infinitely small, then ξ, η, x, y are of the same order of infinitesimals; consequently by expanding ξ', η', x', y' in ascending powers of ξ, η, x, y, series are obtained in which it is only necessary to consider the lowest powers. It is readily seen that if the optical system be symmetrical, the origins of the coordinate systems collinear with the optical axis and the corresponding axes parallel, then by changing the signs of ξ, η, x, y, the values ξ', η', x', y' must likewise change their sign, but retain their arithmetical values; this means that the series are restricted to odd powers of the unmarked variables.

The nature of the reproduction consists in the rays proceeding from a point O being united in another point O'; in general, this will not be the case, for ξ', η' vary if ξ, η be constant, but x, y variable. It may be assumed that the planes I' and II' are drawn where the images of the planes I and II are formed by rays near the axis by the ordinary Gaussian rules; and by an extension of these rules, not, however, corresponding to reality, the Gauss image point O'0, with coordinates ξ'0, η'0, of the point O at some distance from the axis could be constructed. Writing Dξ' = ξ' − ξ'0 and Dη' = η' − η'0, then Dξ' and Dη' are the aberrations belonging to ξ, η and x, y, and are functions of these magnitudes which, when expanded in series, contain only odd powers, for the same reasons as given above.
On account of the aberrations of all rays which pass through O, a patch of light, depending in size on the lowest powers of ξ, η, x, y which the aberrations contain, will be formed in the plane I'. These degrees, named by J. Petzval (Bericht über die Ergebnisse einiger dioptrischer Untersuchungen, Buda Pesth, 1843; Akad. Sitzber., Wien, 1857, vols. xxiv. xxvi.) the numerical orders of the image, are consequently only odd powers; the condition for the formation of an image of the mth order is that in the series for Dξ' and Dη' the coefficients of the powers of the 3rd, 5th…(m−2)th degrees must vanish.

The images of the Gauss theory being of the third order, the next problem is to obtain an image of 5th order, or to make the coefficients of the powers of 3rd degree zero. This necessitates the satisfying of five equations; in other words, there are five alterations of the 3rd order, the vanishing of which produces an image of the 5th order. The expression for these coefficients in terms of the constants of the optical system, i.e. the radii, thicknesses, refractive indices and distances between the lenses, was solved by L. Seidel (Astr. Nach., 1856, p. 289); in 1840, J. Petzval constructed his portrait objective from similar calculations which have never been published (see M. von Rohr, Theorie und Geschichte des photographischen Objectivs, Berlin, 1899, p. 248). The theory was elaborated by S. Finsterwalder (München. Akad. Abhandl., 1891, 17, p. 519), who also published a posthumous paper of Seidel containing a short view of his work (München. Akad. Sitzber., 1898, 28, p. 395); a simpler form was given by A. Kerber (Beiträge zur Dioptrik, Leipzig, 1895–99). A. König and M. von Rohr (see M. von Rohr, Die Bilderzeugung in optischen Instrumenten, pp. 317–323) have represented Kerber's method, and have deduced the Seidel formulae from geometrical considerations based on the Abbe method, and have interpreted the analytical results geometrically (pp. 212–316).
The aberrations can also be expressed by means of the characteristic function of the system and its differential coefficients, instead of by the radii, &c., of the lenses; these formulae are not immediately applicable, but give, however, the relation between the number of aberrations and the order. Sir William Rowan Hamilton (British Assoc. Report, 1833, p. 360) thus derived the aberrations of the third order; and in later times the method was pursued by Clerk Maxwell (Proc. London Math. Soc., 1874–1875; see also the treatises of R. S. Heath and L. A. Herman), M. Thiesen (Berlin. Akad. Sitzber., 1890, 35, p. 804), H. Bruns (Leipzig. Math. Phys. Ber., 1895, 21, p. 410), and particularly successfully by K. Schwarzschild (Göttingen. Akad. Abhandl., 1905, 4, No. 1), who thus discovered the aberrations of the 5th order (of which there are nine), and possibly the shortest proof of the practical (Seidel) formulae. A. Gullstrand (vide supra, and Ann. d. Phys., 1905, 18, p. 941) founded his theory of aberrations on the differential geometry of surfaces. The aberrations of the third order are: (1) aberration of the axis point; (2) aberration of points whose distance from the axis is very small, less than of the third order — the deviation from the sine condition and coma here fall together in one class; (3) astigmatism; (4) curvature of the field; (5) distortion.

(1) Aberration of the third order of axis points is dealt with in all text-books on optics. It is very important in telescope design. In telescopes aperture is usually taken as the linear diameter of the objective. It is not the same as microscope aperture, which is based on the entrance pupil or field of view as seen from the object and is expressed as an angular measurement. Higher order aberrations in telescope design can be mostly neglected. For microscopes it cannot be neglected. For a single lens of very small thickness and given power, the aberration depends upon the ratio of the radii r:r', and is a minimum (but never zero) for a certain value of this ratio; it varies inversely with the refractive index (the power of the lens remaining constant). The total aberration of two or more very thin lenses in contact, being the sum

of the individual aberrations, can be zero. This is also possible if the lenses have the same algebraic sign. Of thin positive lenses with n=1.5, four are necessary to correct spherical aberration of the third order. These systems, however, are not of great practical importance. In most cases, two thin lenses are combined, one of which has just so strong a positive aberration (under-correction, vide supra) as the other a negative; the first must be a positive lens and the second a negative lens; the powers, however, may differ, so that the desired effect of the lens is maintained. It is generally an advantage to secure a great refractive effect by several weaker than by one high-power lens. By one, and likewise by several, and even by an infinite number of thin lenses in contact, no more than two axis points can be reproduced without aberration of the third order. Freedom from aberration for two axis points, one of which is infinitely distant, is known as Herschel’s condition. All these rules are valid, inasmuch as the thicknesses and distances of the lenses are not to be taken into account.

(2) The condition for freedom from coma in the third order is also of importance for telescope objectives; it is known as Fraunhofer’s condition. (4) After eliminating the aberration on the axis, coma and astigmatism, the relation for the flatness of the field in the third order is expressed by the Petzval equation, Σ 1/r(n'-n) = 0, where r is the radius of a refracting surface, n and n' the refractive indices of the neighboring media, and Σ the sign of summation for all refracting surfaces.
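For thin lenses in air the Petzval condition reduces, in modern notation, to the requirement that the sum of 1/(n·f) over the elements vanish. A minimal sketch; the function name, glasses, and focal lengths below are illustrative choices, not values from the text:

```python
# Petzval sum for thin lenses in air: a flat image field requires
# sum over elements of 1/(n_i * f_i) to vanish (modern thin-lens form).

def petzval_sum(elements):
    """elements: list of (refractive_index, focal_length_mm) pairs."""
    return sum(1.0 / (n * f) for n, f in elements)

# A positive and a negative element chosen so the sum cancels:
# 1/(1.5*100) + 1/(1.6*f2) = 0  ->  f2 = -1.5*100/1.6 = -93.75 mm
flat_pair = [(1.5, 100.0), (1.6, -93.75)]
curved_pair = [(1.5, 100.0), (1.6, -200.0)]

print(petzval_sum(flat_pair))    # 0: flat field
print(petzval_sum(curved_pair))  # nonzero: the field is curved
```

Note that the negative element need not cancel all of the system's power, only the Petzval sum, which is why field flattening is possible at all.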

9.3.4 Practical elimination of aberrations

Laser guide stars are used to eliminate optical aberrations.[8]

The classical imaging problem is to reproduce perfectly a finite plane (the object) onto another plane (the image) through a finite aperture. It is impossible to do so perfectly for more than one such pair of planes (this was proven with increasing generality by Maxwell in 1858, by Bruns in 1895, and by Carathéodory in 1926, see summary in Walther, A., J. Opt. Soc. Am. A 6, 415–422 (1989)). For a single pair of planes (e.g. for a single focus setting of an objective), however, the problem can in principle be solved perfectly. Examples of such a theoretically perfect system include the Luneburg lens and the Maxwell fish-eye. Practical methods solve this problem with an accuracy which mostly suffices for the special purpose of each species of instrument. The problem of finding a system which reproduces a given object upon a given plane with given magnification (insofar as aberrations must be taken into account) could be dealt with by means of the approximation theory; in most cases, however, the analytical difficulties were too great for older calculation methods but may be

ameliorated by application of modern computer systems. Solutions, however, have been obtained in special cases (see A. Konig in M. von Rohr’s Die Bilderzeugung, p. 373; K. Schwarzschild, Göttingen. Akad. Abhandl., 1905, 4, Nos. 2 and 3). At the present time constructors almost always employ the inverse method: they compose a system from certain, often quite personal experiences, and test, by the trigonometrical calculation of the paths of several rays, whether the system gives the desired reproduction (examples are given in A. Gleichen, Lehrbuch der geometrischen Optik, Leipzig and Berlin, 1902). The radii, thicknesses and distances are continually altered until the errors of the image become sufficiently small. By this method only certain errors of reproduction are investigated, especially individual members, or all, of those named above. The analytical approximation theory is often employed provisionally, since its accuracy does not generally suffice. In order to render spherical aberration and the deviation from the sine condition small throughout the whole aperture, there is given to a ray with a finite angle of aperture u* (with infinitely distant objects: with a finite height of incidence h*) the same distance of intersection, and the same sine ratio as to one neighboring the axis (u* or h* may not be much smaller than the largest aperture U or H to be used in the system). The rays with an angle of aperture smaller than u* would not have the same distance of intersection and the same sine ratio; these deviations are called zones, and the constructor endeavors to reduce these to a minimum. The same holds for the errors depending upon the angle of the field of view, w: astigmatism, curvature of field and distortion are eliminated for a definite value, w*, zones of astigmatism, curvature of field and distortion, attend smaller values of w. 
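The trigonometrical ray test described above can be sketched for the simplest possible case, a single spherical refracting surface: rays of different incidence heights h cross the axis at different distances, and the spread of those crossings is exactly the zonal spherical aberration the text describes. The radius and index below are illustrative assumptions:

```python
import math

# Exact trace of a ray parallel to the axis through one spherical surface
# (radius R, refraction from air into glass of index n), compared with the
# paraxial focus at R*n/(n-1). Marginal rays cross the axis short of the
# paraxial focus; the differing crossings are the "zones".

def axis_crossing(h, R, n):
    """Distance from the vertex at which a ray of height h crosses the axis."""
    i = math.asin(h / R)                 # angle of incidence on the sphere
    i_ref = math.asin(math.sin(i) / n)   # Snell's law, air to glass
    u = i - i_ref                        # slope of the refracted ray
    return R + R * math.sin(i_ref) / math.sin(u)

R, n = 50.0, 1.5
paraxial = R * n / (n - 1)               # 150 mm
for h in (2.0, 10.0, 20.0):
    print(f"h = {h:4.1f} mm  crosses axis at {axis_crossing(h, R, n):7.2f} mm "
          f"(paraxial {paraxial:.0f} mm)")
```

Running the trace shows the crossing distance falling steadily below 150 mm as h grows, which is the under-correction a designer would then balance against a surface of opposite aberration.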
The practical optician names such systems: corrected for the angle of aperture u* (the height of incidence h*) or the angle of field of view w*. Spherical aberration and changes of the sine ratios are often represented graphically as functions of the aperture, in the same way as the deviations of two astigmatic image surfaces of the image plane of the axis point are represented as functions of the angles of the field of view. The final form of a practical system consequently rests on compromise; enlargement of the aperture results in a diminution of the available field of view, and vice versa. But the larger aperture will give the larger resolution. The following may be regarded as typical:

(1) Largest aperture; necessary corrections are — for the axis point, and sine condition; errors of the field of view are almost disregarded; example — high-power microscope objectives.

(2) Wide angle lens; necessary corrections are — for astigmatism, curvature of field and distortion; errors of the aperture only slightly regarded; examples — photographic widest angle objectives and oculars.

Between these extreme examples stands the normal lens: this is corrected more with regard to aperture; objectives for groups more with regard to the field of view.

(3) Long focus lenses have small fields of view and aberrations on axis are very important. Therefore zones will be kept as small as possible and design should emphasize . Because of this these lenses are the best for analytical computation.

9.3.5 Chromatic or color aberration

In optical systems composed of lenses, the position, magnitude and errors of the image depend upon the refractive indices of the glass employed (see Lens (optics) and Monochromatic aberration, above). Since the index of refraction varies with the color or wavelength of the light (see dispersion), it follows that a system of lenses (uncorrected) projects images of different colors in somewhat different places and sizes and with different aberrations; i.e. there are chromatic differences of the distances of intersection, of magnifications, and of monochromatic aberrations. If mixed light be employed (e.g. white light) all these images are formed and they cause a confusion, named chromatic aberration; for instance, instead of a white margin on a dark background, there is perceived a colored margin, or narrow spectrum. The absence of this error is termed achromatism, and an optical system so corrected is termed achromatic. A system is said to be chromatically under-corrected when it shows the same kind of chromatic error as a thin positive lens, otherwise it is said to be overcorrected. If, in the first place, monochromatic aberrations be neglected — in other words, the Gaussian theory be accepted — then every reproduction is determined by the positions of the focal planes, and the magnitude of the focal lengths, or if the focal lengths, as ordinarily happens, be equal, by three constants of reproduction. These constants are determined by the data of the system (radii, thicknesses, distances, indices, etc., of the lenses); therefore their dependence on the refractive index, and consequently on the color, are calculable.[9] The refractive indices for different wavelengths must be known for each kind of glass made use of. In this manner the conditions are maintained that any one constant of reproduction is equal for two different colors, i.e. this constant is achromatized. 
For example, it is possible, with one thick lens in air, to achromatize the position of a focal plane or the magnitude of the focal length. If all three constants of reproduction be achromatized, then the Gaussian image for all distances of objects is the same for the two colors, and the system is said to be in stable achromatism. In practice it is more advantageous (after Abbe) to determine the chromatic aberration (for instance, that of the distance of intersection) for a fixed position of the object, and express it by a sum in which each component contains the amount due to each refracting surface.[10][11] In a plane containing the image point of one color, another colour produces a disk of confusion; this is similar to the confusion caused by two zones in spherical aberration. For infinitely distant objects the radius of the chromatic disk of confusion is proportional to the linear aperture, and independent of the focal length (vide supra, Monochromatic Aberration of the Axis Point); and since this disk becomes the less harmful with an increasing image of a given object, or with increasing focal length, it follows that the deterioration of the image is proportional to the ratio of the aperture to the focal length, i.e. the relative aperture. (This explains the gigantic focal lengths in vogue before the discovery of achromatism.) Examples:

(a) In a very thin lens, in air, only one constant of reproduction is to be observed, since the focal length and the distance of the focal point are equal. If the refractive index for one color be n, and for another n + dn, and the powers, or reciprocals of the focal lengths, be f and f + df, then (1) df/f = dn/(n − 1) = 1/v; dn is called the dispersion, and 1/v the dispersive power of the glass.

(b) Two thin lenses in contact: let f1 and f2 be the powers corresponding to the lenses of refractive indices n1 and n2 and radii r1′, r1′′, and r2′, r2′′ respectively; let f denote the total power, and df, dn1, dn2 the changes of f, n1, and n2 with the color. Then the following relations hold:

(2) f = f1 + f2 = (n1 − 1)(1/r1′ − 1/r1′′) + (n2 − 1)(1/r2′ − 1/r2′′) = (n1 − 1)k1 + (n2 − 1)k2 ;

(3) df = k1dn1 + k2dn2 . For achromatism df = 0 , hence, from (3),

(4) k1/k2 = −dn2/dn1 , or f1/f2 = −v1/v2 , where v = (n − 1)/dn . Therefore f1 and f2 must have different algebraic signs, or the system must be composed of a collective and a dispersive lens. Consequently the powers of the two must be different (in order that f be not zero (equation 2)), and the dispersive powers must also be different (according to 4).
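The two conditions above, total power f1 + f2 = f and powers in the ratio of minus the reciprocal dispersive powers, can be solved directly for the crown and flint elements of a cemented doublet. A sketch in modern notation; the function name and the Abbe-like numbers v1, v2 are illustrative assumptions, not catalogue values:

```python
# Split a target power between crown and flint so the doublet is achromatic:
# from phi1 + phi2 = phi and phi1/phi2 = -v1/v2 it follows that
#   phi1 = phi * v1 / (v1 - v2),  phi2 = -phi * v2 / (v1 - v2)

def achromat_powers(phi, v1, v2):
    phi1 = phi * v1 / (v1 - v2)
    phi2 = -phi * v2 / (v1 - v2)
    return phi1, phi2

# Illustrative values: crown v1 = 60, flint v2 = 36; 100 mm doublet (powers in 1/m).
phi1, phi2 = achromat_powers(1 / 0.100, 60.0, 36.0)
print(phi1, phi2)                  # crown element positive, flint negative
print(phi1 / 60.0 + phi2 / 36.0)   # achromatism check: chromatic change of power = 0
```

The crown element ends up stronger than the finished doublet (here 25 vs. 10 dioptres), with the flint undoing the excess; this is the over-strong positive plus weaker negative pairing the text describes.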

Newton failed to perceive the existence of media of different dispersive powers required by achromatism; consequently he constructed large reflectors instead of refractors. James Gregory and Leonhard Euler arrived at the correct view from a false conception of the achromatism of the eye; this was determined by Chester More Hall in 1728, Klingenstierna in 1754 and by Dollond in 1757, who constructed the celebrated achromatic telescopes. (See telescope.) Glass with weaker dispersive power (greater v ) is named crown glass; that with greater dispersive power, flint glass. For the construction of an achromatic collective lens ( f positive) it follows, by means of equation (4), that a collective lens I. of crown glass and a dispersive lens II. of flint glass must be chosen; the latter, although the weaker, corrects the other chromatically by its greater dispersive power. For an achromatic dispersive lens the converse must be adopted. This is, at the present day, the ordinary type, e.g., of telescope objective; the values of the four radii must satisfy the equations (2) and (4). Two other conditions may also be postulated: one is always the elimination of the aberration on the axis; the second either the Herschel or Fraunhofer Condition (the latter being the best; vide supra, Monochromatic Aberration). In practice, however, it is often more useful to avoid the second condition by making the lenses have contact, i.e. equal radii. According to P. Rudolph (Eder’s Jahrb. f. Photog., 1891, 5, p. 225; 1893, 7, p. 221), cemented objectives of thin lenses permit the elimination of spherical aberration on the axis, if, as above, the collective lens has a smaller refractive index; on the other hand, they permit the elimination of astigmatism and curvature of the field, if the collective lens has a greater refractive index (this follows from the Petzval equation; see L. Seidel, Astr. Nachr., 1856, p. 289). 
Should the cemented system be positive, then the more powerful lens must be positive; and, according to (4), to the greater power belongs the weaker dispersive power (greater v ), that is to say, crown glass; consequently the crown glass must have the greater refractive index for astigmatic and plane images. In

all earlier kinds of glass, however, the dispersive power increased with the refractive index; that is, v decreased as n increased; but some of the Jena glasses by E. Abbe and O. Schott were crown glasses of high refractive index, and achromatic systems from such crown glasses, with flint glasses of lower refractive index, are called the new achromats, and were employed by P. Rudolph in the first anastigmats (photographic objectives). Instead of making df vanish, a certain value can be assigned to it which will produce, by the addition of the two lenses, any desired chromatic deviation, e.g. sufficient to eliminate one present in other parts of the system. If the lenses I. and II. be cemented and have the same refractive index for one color, then its effect for that one color is that of a lens of one piece; by such decomposition of a lens it can be made chromatic or achromatic at will, without altering its spherical effect. If its chromatic effect ( df/f ) be greater than that of the same lens, this being made of the more dispersive of the two glasses employed, it is termed hyper-chromatic.

For two thin lenses separated by a distance D the condition for achromatism is D = (v1f1 + v2f2)/(v1 + v2) ; if v1 = v2 (e.g. if the lenses be made of the same glass), this reduces to D = (f1 + f2)/2 , known as the condition for oculars. If a constant of reproduction, for instance the focal length, be made equal for two colors, then it is not the same for other colors, if two different glasses are employed. For example, the condition for achromatism (4) for two thin lenses in contact is fulfilled in only one part of the spectrum, since dn2/dn1 varies within the spectrum. This fact was first ascertained by J. Fraunhofer, who defined the colors by means of the dark lines in the solar spectrum; and showed that the ratio of the dispersion of two glasses varied about 20% from the red to the violet (the variation for glass and water is about 50%). If, therefore, for two colors, a and b, fa = fb = f , then for a third color, c, the focal length is different; that is, if c lies between a and b, then fc < f , and vice versa; these algebraic results follow from the fact that towards the red the dispersion of the positive crown glass preponderates, towards the violet that of the negative flint. These chromatic errors of systems, which are achromatic for two colors, are called the secondary spectrum, and depend upon the aperture and focal length in the same manner as the primary chromatic errors do. In fig. 6, taken from M. von Rohr’s Theorie und Geschichte des photographischen Objectivs, the abscissae are focal lengths, and the ordinates wavelengths. The Fraunhofer lines used are shown in adjacent table. The focal lengths are made equal for the lines C and F. In the neighborhood of 550 nm the tangent to the curve is parallel to the axis of wavelengths; and the focal length varies least over a fairly large range of color, therefore in this neighborhood the color union is at its best. 
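The same-glass case, the condition for oculars D = (f1 + f2)/2, can be checked numerically with thin-lens powers φ = (n − 1)k. The indices and focal lengths below are illustrative assumptions; defining the focal lengths at the mean index makes the check come out exactly:

```python
# Two separated thin lenses of the SAME glass are achromatic when the
# separation equals half the sum of the focal lengths ("condition for oculars").
# Combined power with separation d: phi = phi1 + phi2 - d*phi1*phi2.

def system_power(n, k1, k2, d):
    """Thin-lens powers phi_i = (n-1)*k_i, combined with separation d (metres)."""
    p1, p2 = (n - 1) * k1, (n - 1) * k2
    return p1 + p2 - d * p1 * p2

n_C, n_F = 1.51, 1.52            # indices at two spectral lines (assumed values)
n_mean = (n_C + n_F) / 2
k1 = 1 / (0.100 * (n_mean - 1))  # curvature factor giving f1 = 100 mm at n_mean
k2 = 1 / (0.050 * (n_mean - 1))  # curvature factor giving f2 =  50 mm at n_mean
d = (0.100 + 0.050) / 2          # separation (f1 + f2)/2 = 75 mm

print(system_power(n_C, k1, k2, d))  # equal powers at both colors:
print(system_power(n_F, k1, k2, d))  # the focal length is achromatized
```

Putting the two lenses in contact instead (d = 0) leaves a chromatic difference of power, which is why eyepieces of a single glass, such as the Huygens type, rely on this separation.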
Moreover, this region of the spectrum is that which appears brightest to the human eye, and consequently this curve of the secondary spectrum, obtained by making fC = fF , is, according to the experiments of Sir G. G. Stokes (Proc. Roy. Soc., 1878), the most suitable for visual instruments (optical achromatism). In a similar manner, for systems used in photography, the vertex of the color curve must be placed in the position of the maximum sensibility of the plates; this is generally supposed to be at G'; and to accomplish this the F and violet mercury lines are united. This artifice is specially adopted in objectives for astronomical photography (pure actinic achromatism). For ordinary photography, however, there is this disadvantage: the image on the focusing-screen and the correct adjustment of the photographic sensitive plate are not in register; in astronomical photography this difference is constant, but in other kinds it depends on the distance of the objects. On this account the lines D and G' are united for ordinary photographic objectives; the optical as well as the actinic image is chromatically inferior, but both lie in the same place; and consequently the best correction lies in F (this is known as the actinic correction or freedom from chemical focus).

Should there be in two lenses in contact the same focal lengths for three colours a, b, and c, i.e. fa = fb = fc = f , then the relative partial dispersion (nc − nb)/(na − nb) must be equal for the two kinds of glass employed. This follows by considering equation (4) for the two pairs of colors ac and bc. Until recently no glasses were known with a proportional degree of absorption; but R. Blair (Trans. Edin. Soc., 1791, 3, p. 3), P. Barlow, and F. S. Archer overcame the difficulty by constructing fluid lenses between glass walls. Fraunhofer prepared glasses which reduced the secondary spectrum; but permanent success was only assured on the introduction of the Jena glasses by E. Abbe and O. Schott. In using glasses not having proportional dispersion, the deviation of a third colour can be eliminated by two lenses, if an interval be allowed between them; or by three lenses in contact, which may not all consist of the old glasses. In uniting three colors an achromatism of a higher order is derived; there is yet a residual tertiary spectrum, but it can always be neglected. The Gaussian theory is only an approximation; monochromatic or spherical aberrations still occur, which will be different for different colors; and should they be compensated for one color, the image of another color would prove disturbing. The most important is the chromatic difference of aberration of the axis point, which is still present to disturb the image, after par-axial rays of different colors are united by an appropriate combination of glasses. If a collective system be corrected for the axis point for a definite wavelength, then, on account of the greater dispersion in the negative components (the flint glasses), overcorrection will arise for the shorter wavelengths (this being the error of the negative components), and under-correction for the longer wavelengths (the error of crown glass lenses

Figure 6

preponderating in the red). This error was treated by Jean le Rond d'Alembert, and, in special detail, by C. F. Gauss. It increases rapidly with the aperture, and is more important with medium apertures than the secondary spectrum of par-axial rays; consequently, spherical aberration must be eliminated for two colors, and if this be impossible, then it must be eliminated for those particular wavelengths which are most effectual for the instrument in question (a graphical representation of this error is given in M. von Rohr, Theorie und Geschichte des photographischen Objectivs).

For the reproduction of a surface element in the place of a sharply reproduced point, the constant of the sine relationship must also be fulfilled with large apertures for several colors. E. Abbe succeeded in computing microscope objectives free from error of the axis point and satisfying the sine condition for several colors, which therefore, according to his definition, were aplanatic for several colors; such systems he termed apochromatic. While, however, the magnification of the individual zones is the same, it is not the same for red as for blue; and there is a chromatic difference of magnification. This is produced in the same amount, but in the opposite sense, by the oculars, which Abbe used with these objectives (compensating oculars), so that it is eliminated in the image of the whole microscope. The best telescope objectives, and photographic objectives intended for three-color work, are also apochromatic, even if they do not possess quite the same quality of correction as microscope objectives do. The chromatic differences of other errors of reproduction seldom have practical importance.
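The apochromatic requirement that two glasses share the same relative partial dispersion can be written down directly. The index values below are invented for illustration only and do not correspond to real catalogue glasses:

```python
# Relative partial dispersion (n_c - n_b)/(n_a - n_b): two glasses can be
# united for three colors only if this ratio is the same for both; a mismatch
# leaves a residual (secondary) spectrum.

def partial_dispersion(n_a, n_b, n_c):
    return (n_c - n_b) / (n_a - n_b)

crown = partial_dispersion(1.5224, 1.5168, 1.5143)   # indices at three lines
flint = partial_dispersion(1.6321, 1.6200, 1.6150)
print(crown, flint)  # unequal ratios -> a secondary spectrum remains
```

Glass makers publish exactly these partial-dispersion ratios so that designers can pick pairs whose values nearly coincide, which is what the Jena glasses mentioned above first made practical.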

9.3.6 See also

• Wavefront coding

9.3.7 References

[1] Guenther, Robert (1990). Modern Optics. Cambridge: John Wiley & Sons Inc. p. 130. ISBN 0-471-60538-7.

[2] “Comparison of Optical Aberrations”. Edmund Optics. Archived from the original on Dec 6, 2011. Retrieved March 26, 2012.

[3] The investigations of Ernst Abbe on geometrical optics, originally published only in his university lectures, were first com- piled by S. Czapski in 1893. See full reference below.

[4] Gullstrand, Skand. Arch. f. Physiol., 1890, 2, p. 269; Allgemeine Theorie der monochromat. Aberrationen, etc., Upsala, 1900; Arch. f. Ophth., 1901, 53, pp. 2, 185

[5] von Rohr, Moritz (1904). Die bilderzeugung in optischen Instrumenten vom Standpunkte der geometrischen Optik. Berlin.

[6] Gullstrand, Allvar (1900). “Allgemeine Theorie der monochromat. Aberrationen, etc.”. Annalen der Physik. Upsala. 1905 (18): 941.

[7] Born, Max; Wolf, Emil. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. ISBN 978-0521642224.

[8] “New Laser Improves VLT’s Capabilities”. ESO Announcement. Retrieved 22 February 2013.

[9] Formulae are given in Czapski-Eppenstein (1903). Grundzuge der Theorie der optischen Instrumente. p. 166.

[10] See Czapski-Eppenstein (1903). Grundzuge der Theorie der optischen Instrumente. p. 170.

[11] A. Konig in M. v. Rohr’s collection, Die Bilderzeugung, p. 340

• This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Aberration". Encyclopædia Britannica. 1 (11th ed.). Cambridge University Press. pp. 54–61. Authorities cited:

• H. D. Taylor, A System of Applied Optics (1906). The classical treatise in English.

• R. S. Heath, A Treatise on Geometrical Optics (2nd ed., 1895).

• L. A. Herman, A Treatise on Geometrical Optics (1900).

• S. Czapski, Theorie der optischen Instrumente nach Abbe, published separately at Breslau in 1893, as vol. ii of Winkelmann’s Handbuch der Physik in 1894, and as S. Czapski and O. Eppenstein, Grundzüge der Theorie der optischen Instrumente nach Abbe (2nd ed., Leipzig, 1903).

• Moritz von Rohr, ed., Die Bilderzeugung in optischen Instrumenten vom Standpunkte der geometrischen Optik (Berlin, 1904). The collection of the scientific staff of Carl Zeiss at Jena, which contains articles by Arthur König and M. von Rohr specially dealing with aberrations.

9.3.8 External links

• Microscope Objectives: Optical Aberrations section of Molecular Expressions website, Michael W. Davidson, Mortimer Abramowitz, Olympus America Inc., and The Florida State University

• Photographic optics, Paul van Walree

9.4 Orb (optics)

A single orb in the center of the photo, at the person’s knee level.

In photography, an orb is a typically circular artifact on an image, created as a result of flash photography illuminating a mote of dust or other particle. Orbs are especially common with modern compact and ultra-compact digital cameras. Orbs are also sometimes called backscatter, orb backscatter, or near-camera reflection. Some orbs appear with trails indicating motion.

9.4.1 Cause

Orbs are captured during low-light instances where the camera’s flash is used. Cases include night or underwater photography, or where a bright light source is near the camera. Light appears much brighter very near the source due to the inverse-square law, which says light intensity is inversely proportional to the square of the distance from the source. The orb artifact can result from retroreflection of light off solid particles, such as dust or pollen, or liquid particles, especially rain. They can also be caused by foreign material within the camera lens.[1] The image artifacts usually appear as either white or semi-transparent circles, though may also occur with whole or partial color spectra, purple fringing or other chromatic aberration. With rain droplets, an image may capture light passing through the droplet creating a small rainbow effect. In underwater conditions, particles such as sand or small

sea life close to the lens, invisible to the diver, reflect light from the flash causing the orb artifact in the image. A strobe flash, which distances the flash from the lens, eliminates the artifacts. The effect is also seen on infrared video cameras, where superbright infrared LEDs illuminate microscopic particles very close to the lens.
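The inverse-square effect behind bright orbs is easy to quantify; the distances below are hypothetical:

```python
# Flash illumination falls off as 1/r^2, so a dust mote a few centimetres
# from the lens receives far more light than the subject being photographed.

def relative_flash_illumination(particle_m, subject_m):
    """How much more strongly the flash lights the particle than the subject."""
    return (subject_m / particle_m) ** 2

# A particle 5 cm from the camera vs. a subject 2 m away:
print(relative_flash_illumination(0.05, 2.0))  # 1600x brighter illumination
```

This is why a tiny, normally invisible particle can register as a large bright disk: it is hugely over-lit relative to the scene and far out of focus.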

A hypothetical underwater instance with two conditions in which orbs are (A) likely or (B) unlikely, depending on whether the aspect of particles facing the lens are directly illuminated by the flash, as shown. Elements not shown to scale.[2]

The artifacts are especially common with compact or ultra-compact cameras, where the short distance between the lens and the built-in flash decreases the angle of light reflection to the lens, directly illuminating the aspect of the particles facing the lens and increasing the camera’s ability to capture the light reflected off normally sub-visible particles.[1]

9.4.2 See also

• Orb (paranormal) • Bokeh

• Digital artifact • Rod (optics)

• Rolling shutter • Will-o'-the-wisp

9.4.3 References

[1] “The Truth Behind 'Orbs’".

[2] Míċeál Ledwith and Klaus W Heinemann (6 November 2007). The Orb Project. Simon and Schuster. p. 208. ISBN 1582701822.

9.4.4 External links

• The science of orb photos, by Mark Kimura

• The Orb Video Archive

Chapter 10

Day 10

10.1 f-number

For other uses, see F-number (disambiguation).

Diagram of decreasing apertures, that is, increasing f-numbers, in one-stop increments; each aperture has half the light gathering area of the previous one.

In optics, the f-number (sometimes called focal ratio, f-ratio, f-stop, or relative aperture[1]) of an optical system is the ratio of the lens’s focal length to the diameter of the entrance pupil.[2] It is a dimensionless number that is a quantitative measure of lens speed, and an important concept in photography. The number is commonly notated using a hooked f, i.e. f/N, where N is the f-number.

10.1.1 Notation

The f-number N or f# is given by:

N = f/D, where f is the focal length, and D is the diameter of the entrance pupil (effective aperture). It is customary to write f-numbers preceded by f/, which forms a mathematical expression of the entrance pupil diameter in terms of f and N.[2] For example, if a lens’s focal length is 10 mm and its entrance pupil diameter is 5 mm, the f-number is 2, expressed by writing “f/2”, and the aperture diameter is equal to f/2 , where f is the focal length. Ignoring differences in light transmission efficiency, a lens with a greater f-number projects darker images. The brightness of the projected image (illuminance) relative to the brightness of the scene in the lens’s field of view


(luminance) decreases with the square of the f-number. Doubling the f-number decreases the relative brightness by a factor of four. To maintain the same photographic exposure when doubling the f-number, the exposure time would need to be four times as long. Most lenses have an adjustable diaphragm, which changes the size of the aperture stop and thus the entrance pupil size. The entrance pupil diameter is not necessarily equal to the aperture stop diameter, because of the magnifying effect of lens elements in front of the aperture. A 100 mm focal length f/4 lens has an entrance pupil diameter of 25 mm. A 200 mm focal length f/4 lens has an entrance pupil diameter of 50 mm. The 200 mm lens’s entrance pupil has four times the area of the 100 mm lens’s entrance pupil, and thus collects four times as much light from each object in the lens’s field of view. But compared to the 100 mm lens, the 200 mm lens projects an image of each object twice as high and twice as wide, covering four times the area, and so both lenses produce the same illuminance at the focal plane when imaging a scene of a given luminance. A T-stop is an f-number adjusted to account for light transmission efficiency.
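The definitions above can be sketched directly; the helper names are ours, and the numbers repeat the 100 mm and 200 mm f/4 examples from the text:

```python
# f-number N = f / D: focal length over entrance-pupil diameter.
# Image illuminance scales as 1/N^2, so doubling N quarters the brightness.

def f_number(focal_length_mm, pupil_diameter_mm):
    return focal_length_mm / pupil_diameter_mm

def relative_illuminance(n1, n2):
    """Image brightness at f-number n2 relative to f-number n1."""
    return (n1 / n2) ** 2

print(f_number(100, 25))           # 4.0 -> both example lenses are f/4
print(f_number(200, 50))           # 4.0
print(relative_illuminance(4, 8))  # 0.25: two stops darker
```

The two lenses having the same f-number despite different pupil areas is exactly the trade described above: the longer lens gathers four times the light but spreads its image over four times the area.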

10.1.2 Stops, f-stop conventions, and exposure

A Canon 7 mounted with a 50 mm lens capable of f/0.95

The word stop is sometimes confusing due to its multiple meanings. A stop can be a physical object: an opaque part

A 35 mm lens set to f/11, as indicated by the white dot above the f-stop scale on the aperture ring. This lens has an aperture range of f/2.0 to f/22.

of an optical system that blocks certain rays. The aperture stop is the aperture setting that limits the brightness of the image by restricting the input pupil size, while a field stop is a stop intended to cut out light that would be outside the desired field of view and might cause flare or other problems if not stopped. In photography, stops are also a unit used to quantify ratios of light or exposure, with each added stop meaning a factor of two, and each subtracted stop meaning a factor of one-half. The one-stop unit is also known as the EV (exposure value) unit. On a camera, the aperture setting is traditionally adjusted in discrete steps, known as f-stops. Each "stop" is marked with its corresponding f-number, and represents a halving of the light intensity from the previous stop. This corresponds to a decrease of the pupil and aperture diameters by a factor of 1/√2 or about 0.7071, and hence a halving of the area of the pupil. Most modern lenses use a standard f-stop scale, which is an approximately geometric sequence of numbers that corresponds to the sequence of the powers of the square root of 2: f/1, f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32, f/45, f/64, f/90, f/128, etc. Each element in the sequence is one stop lower than the element to its left, and one stop higher than the element to its right. The values of the ratios are rounded off to these particular conventional numbers, to make them easier to remember and write down. The sequence above is obtained by approximating the following exact geometric sequence: f/1 = f/(√2)^0, f/1.4 = f/(√2)^1, f/2 = f/(√2)^2, f/2.8 = f/(√2)^3 ...
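The relation between the conventional marks and the exact powers of √2 can be tabulated in a few lines; the variable names are ours:

```python
import math

# The marked f-stops are conventional roundings of the exact geometric
# sequence (sqrt(2))^i; each step halves the pupil area and hence the light.

conventional = [1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32]
exact = [math.sqrt(2) ** i for i in range(len(conventional))]

for mark, value in zip(conventional, exact):
    print(f"f/{mark:<4} exact f/{value:.3f}")

# Neighboring stops differ by a factor of 2 in pupil area (light gathered):
area_ratio = (exact[3] / exact[4]) ** 2
print(area_ratio)  # ~0.5: f/2.8 passes twice the light of f/4
```

The table makes the rounding visible: f/5.6, f/11 and f/22 stand for the exact values 5.657, 11.314 and 22.627.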

In the same way as one f-stop corresponds to a factor of two in light intensity, shutter speeds are arranged so that each setting differs in duration by a factor of approximately two from its neighbour. Opening up a lens by one stop allows twice as much light to fall on the film in a given period of time. Therefore, to have the same exposure at this larger aperture as at the previous aperture, the shutter would be opened for half as long (i.e., twice the speed). The film will respond equally to these equal amounts of light, since it has the property of reciprocity. This is less true for extremely long or short exposures, where we have reciprocity failure.

Aperture, shutter speed, and film sensitivity are linked: for constant scene brightness, doubling the aperture area (one stop), halving the shutter speed (doubling the time open), or using a film twice as sensitive, has the same effect on the exposed image. For all practical purposes extreme accuracy is not required (mechanical shutter speeds were notoriously inaccurate as wear and lubrication varied, with little effect on exposure). It is not significant that aperture areas and shutter speeds do not vary by a factor of precisely two.

Photographers sometimes express other exposure ratios in terms of "stops". Ignoring the f-number markings, the f-stops make a logarithmic scale of exposure intensity. Given this interpretation, one can then think of taking a half-step along this scale, to make an exposure difference of "half a stop".
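The equivalences described above (doubling the aperture area, halving the shutter time, or doubling the sensitivity each shift exposure by one stop) can be sketched numerically. `exposure_value` is a hypothetical helper following the usual log2(N²/t) convention; it is not a standard library API:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """Exposure value: log2(N**2 / t), offset by the speed ratio for other ISOs.

    Settings with the same EV admit the same light relative to the film's
    sensitivity; one EV step is exactly one stop.
    """
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

# Opening up one exact stop (f/8 -> f/(sqrt 2)^5, nominally "f/5.6") while
# halving the shutter time leaves the exposure value unchanged:
ev_a = exposure_value(8.0, 1 / 125)
ev_b = exposure_value(2 ** 2.5, 1 / 250)
assert abs(ev_a - ev_b) < 1e-9
```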

Fractional stops

Simulation of the effect of changing a camera’s aperture in half-stops (at left) and from zero to infinity (at right)

Most old cameras had a continuously variable aperture scale, with each full stop marked. Click-stopped aperture came into common use in the 1960s; the aperture scale usually had a click stop at every whole and half stop. On modern cameras, especially when aperture is set on the camera body, f-number is often divided more finely than steps of one stop. Steps of one-third stop (1/3 EV) are the most common, since this matches the ISO system of film speeds. Half-stop steps are used on some cameras. Usually the full stops are marked, and the intermediate positions are clicked. As an example, the aperture that is one-third stop smaller than f/2.8 is f/3.2, two-thirds smaller is f/3.5, and one whole stop smaller is f/4. The next few f-stops in this sequence are:

f/4.5, f/5, f/5.6, f/6.3, f/7.1, f/8, etc.

To calculate the steps in a full stop (1 EV) one could use

2^(0×0.5), 2^(1×0.5), 2^(2×0.5), 2^(3×0.5), 2^(4×0.5), etc.

The steps in a half stop (1/2 EV) series would be

2^(0/2×0.5), 2^(1/2×0.5), 2^(2/2×0.5), 2^(3/2×0.5), 2^(4/2×0.5), etc.

The steps in a third stop (1/3 EV) series would be

2^(0/3×0.5), 2^(1/3×0.5), 2^(2/3×0.5), 2^(3/3×0.5), 2^(4/3×0.5), etc.
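The three series above share one formula, 2^(k/d × 0.5), where d is the number of divisions per stop. A minimal sketch (`fractional_f_number` is a hypothetical helper name):

```python
def fractional_f_number(k: int, divisions: int = 3) -> float:
    """f-number k steps of (1/divisions) stop above f/1: 2**(k/divisions * 0.5)."""
    return 2 ** (k / divisions * 0.5)

# One-third-stop series starting at f/2.8 (k = 9 in third-stop steps).
# The engraved values (f/3.2, f/3.5, f/4 ...) are loose roundings of these.
series = [round(fractional_f_number(k), 2) for k in range(9, 13)]
print(series)  # [2.83, 3.17, 3.56, 4.0]
```

Note that the conventional marking f/3.5 rounds 3.56 down rather than to the nearest tenth, which is exactly the kind of loose rounding the text describes.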

As in the earlier DIN and ASA film-speed standards, the ISO speed is defined only in one-third stop increments, and shutter speeds of digital cameras are commonly on the same scale in reciprocal seconds. A portion of the ISO range is the sequence

... 16/13°, 20/14°, 25/15°, 32/16°, 40/17°, 50/18°, 64/19°, 80/20°, 100/21°, 125/22°...

while shutter speeds in reciprocal seconds have a few conventional differences in their numbers (1/15, 1/30, and 1/60 second instead of 1/16, 1/32, and 1/64).

In practice the maximum aperture of a lens is often not an integral power of √2 (i.e., √2 to the power of a whole number), in which case it is usually a half or third stop above or below an integral power of √2. Modern electronically controlled interchangeable lenses, such as those used for SLR cameras, have f-stops specified internally in 1/8-stop increments, so the cameras’ 1/3-stop settings are approximated by the nearest 1/8-stop setting in the lens.

Standard full-stop f-number scale Including aperture value AV:

N = (√2)^AV

Conventional and calculated f-numbers, full-stop series:

Typical one-half-stop f-number scale

Typical one-third-stop f-number scale Sometimes the same number is included on several scales; for example, an aperture of f/1.2 may be used in either a half-stop[3] or a one-third-stop system;[4] sometimes f/1.3 and f/3.2 and other differences are used for the one-third stop scale.[5]

Typical one-quarter-stop f-number scale

H-stop

An H-stop (for hole, by convention written with capital letter H) is an f-number equivalent for effective exposure based on the area covered by the holes in the diffusion discs or sieve aperture found in Rodenstock Imagon lenses.

T-stop

A T-stop (for transmission stops, by convention written with capital letter T) is an f-number adjusted to account for light transmission efficiency (transmittance). A lens with a T-stop of N projects an image of the same brightness as an ideal lens with 100% transmittance and an f-number of N. A particular lens’ T-stop, T, is given by dividing the f-number by the square root of the transmittance of that lens:

T = N / √transmittance

For example, an f/2.0 lens with transmittance of 75% has a T-stop of approximately 2.3:

T = 2.0 / √0.75 = 2.309...

Since real lenses have transmittances of less than 100%, a lens’s T-stop is always greater than its f-number.[6] Lens transmittances of 60%–90% are typical,[7] so T-stops are sometimes used instead of f-numbers to more accurately determine exposure, particularly when using external light meters.[8] T-stops are often used in cinematography, where many images are seen in rapid succession and even small changes in exposure will be noticeable. Cinema camera lenses are typically calibrated in T-stops instead of f-numbers. In still photography, without the need for rigorous consistency of all lenses and cameras used, slight differences in exposure are less important.
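The T-stop definition above is a one-line calculation; `t_stop` is a hypothetical helper name used here only for illustration:

```python
import math

def t_stop(f_number: float, transmittance: float) -> float:
    """T-stop: the f-number divided by the square root of the lens transmittance."""
    if not 0.0 < transmittance <= 1.0:
        raise ValueError("transmittance must be a fraction in (0, 1]")
    return f_number / math.sqrt(transmittance)

# The worked example above: an f/2.0 lens with 75% transmittance.
print(round(t_stop(2.0, 0.75), 2))  # 2.31
```

A transmittance of exactly 1 recovers the f-number itself, matching the ideal-lens definition in the text.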

Sunny 16 rule

An example of the use of f-numbers in photography is the sunny 16 rule: an approximately correct exposure will be obtained on a sunny day by using an aperture of f/16 and the shutter speed closest to the reciprocal of the ISO speed of the film; for example, using ISO 200 film, an aperture of f/16 and a shutter speed of 1/200 second. The f-number may then be adjusted downwards for situations with lower light. Selecting a lower f-number is “opening up” the lens. Selecting a higher f-number is “closing” or “stopping down” the lens.
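The rule, plus the stop arithmetic from earlier in the section, can be sketched as a small calculator. `sunny16_shutter` is a hypothetical helper, and the rescaling to other apertures is an application of the one-stop-per-factor-of-two relation rather than part of the rule itself:

```python
import math

def sunny16_shutter(iso: float, f_number: float = 16.0) -> float:
    """Shutter time in seconds under the sunny 16 rule.

    At f/16 the time is 1/ISO; each stop the lens is opened up
    (a factor of 2 in admitted light) halves the required time.
    """
    stops_opened = 2.0 * math.log2(16.0 / f_number)
    return (1.0 / iso) / (2.0 ** stops_opened)

print(sunny16_shutter(200))       # 0.005, i.e. 1/200 s at f/16
print(sunny16_shutter(200, 8.0))  # 1/800 s: two stops wider, quarter the time
```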

10.1.3 Effects on image sharpness

Comparison of f/32 (top-left corner) and f/5 (bottom-right corner)

Shallow focus with a wide open lens

Depth of field increases with f-number, as illustrated in the image here. This means that photographs taken with a low f-number (large aperture) will tend to have subjects at one distance in focus, with the rest of the image (nearer and farther elements) out of focus. This is frequently used for nature photography and portraiture because background blur (the aesthetic quality of which is known as 'bokeh') can be aesthetically pleasing and puts the viewer’s focus on the main subject in the foreground. The depth of field of an image produced at a given f-number is dependent on other parameters as well, including the focal length, the subject distance, and the format of the film or sensor used to capture the image. Depth of field can be described as depending on just angle of view, subject distance, and entrance pupil diameter (as in von Rohr’s method). As a result, smaller formats will have a deeper field than larger formats at the same f-number for the same distance of focus and same angle of view, since a smaller format requires a shorter focal length (wider angle lens) to produce the same angle of view, and depth of field increases with shorter focal lengths. Therefore, reduced depth-of-field effects will require smaller f-numbers when using small-format cameras than when using larger-format cameras.

Image sharpness is related to f-number through two different optical effects: aberration, due to imperfect lens design, and diffraction, which is due to the wave nature of light.[9] The blur-optimal f-stop varies with the lens design. For modern standard lenses having 6 or 7 elements, the sharpest image is often obtained around f/5.6–f/8, while for older standard lenses having only 4 elements (Tessar formula) stopping down to f/11 will give the sharpest image. The larger number of elements in modern lenses allows the designer to compensate for aberrations, allowing the lens to give better pictures at lower f-numbers. Even if aberration is minimized by using the best lenses, diffraction creates some spreading of the rays, causing defocus. To offset that, use the largest lens opening diameter possible (not the f-number itself).

Light falloff is also sensitive to f-stop. Many wide-angle lenses will show a significant light falloff (vignetting) at the edges for large apertures. Photojournalists have a saying, “f/8 and be there”, meaning that being on the scene is more important than worrying about technical details. Practically, f/8 allows adequate depth of field and sufficient lens speed for a decent base exposure in most daylight situations.[10]

10.1.4 Human eye

Computing the f-number of the human eye involves computing the physical aperture and focal length of the eye. The pupil can be as large as 6–7 mm wide open, which translates into the maximal physical aperture. The f-number of the human eye varies from about f/8.3 in a very brightly lit place to about f/2.1 in the dark.[11] Note that computing the focal length requires that the light-refracting properties of the liquids in the eye are taken into account. Treating the eye as an ordinary air-filled camera and lens results in a different focal length, thus yielding an incorrect f-number.

Toxic substances and poisons (like atropine) can significantly reduce the range of aperture. Pharmaceutical products such as eye drops may also cause similar side-effects. Tropicamide and phenylephrine are used in medicine as mydriatics to dilate pupils for retinal and lens examination. These medications take effect in about 30–45 minutes after instillation and last for about 8 hours. Atropine is also used in such a way but its effects can last up to 2 weeks, along with the mydriatic effect; it produces cycloplegia (a condition in which the crystalline lens of the eye cannot accommodate to focus near objects). This effect goes away after 8 hours. Other medications offer the contrary effect. Pilocarpine is a miotic (induces miosis); it can make a pupil as small as 1 mm in diameter depending on the person and their ocular characteristics. Such drops are used in certain glaucoma patients to prevent acute glaucoma attacks.

10.1.5 Focal ratio in telescopes

Diagram of the focal ratio of a simple optical system where f is the focal length and D is the diameter of the objective.

In astronomy, the f-number is commonly referred to as the focal ratio (or f-ratio), notated as N. It is still defined as the focal length f of an objective divided by its diameter D, or by the diameter of an aperture stop in the system:

N = f / D, and equivalently f = N × D

Even though the principles of focal ratio are always the same, the application to which the principle is put can differ. In photography the focal ratio varies the focal-plane illuminance (or optical power per unit area in the image) and is used to control variables such as depth of field. When using an optical telescope in astronomy, there is no depth of field issue, and the brightness of stellar point sources in terms of total optical power (not divided by area) is a function of absolute aperture area only, independent of focal length. The focal length controls the field of view of the instrument and the scale of the image that is presented at the focal plane to an eyepiece, film plate, or CCD. For example, the SOAR 4-meter telescope has a small field of view (~f/16), which is useful for stellar studies. The LSST 8.4 m telescope, which will cover the entire sky every three days, has a very large field of view. Its short 10.3 m focal length (f/1.2) is made possible by an error correction system which includes secondary and tertiary mirrors, a three-element refractive system, and active mounting and optics.[12]
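The focal-ratio definition N = f/D is trivially computable; `focal_ratio` is a hypothetical helper name, and the numbers are the LSST figures quoted in the paragraph above:

```python
def focal_ratio(focal_length_m: float, aperture_diameter_m: float) -> float:
    """Focal ratio N = f / D of an objective."""
    return focal_length_m / aperture_diameter_m

# LSST: 10.3 m focal length over an 8.4 m aperture gives roughly f/1.2.
print(round(focal_ratio(10.3, 8.4), 1))  # 1.2
```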

10.1.6 Working f-number

The f-number accurately describes the light-gathering ability of a lens only for objects an infinite distance away.[13] This limitation is typically ignored in photography, where objects are usually not extremely close to the camera, relative to the distance between the lens and the film. In optical design, an alternative is often needed for systems where the object is not far from the lens. In these cases the working f-number is used. A practical example is that when focusing closer, the lens’s effective aperture becomes smaller, e.g. from f/22 to f/45, thus affecting the exposure. The working f-number Nw is given by:

Nw ≈ 1 / (2 NAi) ≈ (1 + |m| / P) N

where N is the uncorrected f-number, NAi is the image-space numerical aperture of the lens, |m| is the absolute value of the lens’s magnification for an object a particular distance away, and P is the pupil magnification.[13] Since the pupil magnification is seldom known, it is often assumed to be 1, which is the correct value for all symmetric lenses. In photography, the working f-number is described as the f-number corrected for lens extensions by a bellows factor. This is of particular importance in macro photography.
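The magnification form of the working f-number is easy to evaluate; `working_f_number` is a hypothetical helper name, with P defaulted to 1 as the text suggests for symmetric lenses:

```python
def working_f_number(n: float, magnification: float, pupil_mag: float = 1.0) -> float:
    """Working f-number Nw ≈ (1 + |m| / P) * N for a close-focused lens."""
    return (1.0 + abs(magnification) / pupil_mag) * n

# At 1:1 macro magnification, an f/22 setting behaves like about f/45
# (the example shift mentioned in the text):
print(working_f_number(22.0, 1.0))  # 44.0
```

At |m| = 0 (object at infinity) the working f-number reduces to the plain f-number, consistent with the opening sentence of this subsection.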

10.1.7 History

The system of f-numbers for specifying relative apertures evolved in the late nineteenth century, in competition with several other systems of aperture notation.

Origins of relative aperture

In 1867, Sutton and Dawson defined “apertal ratio” as essentially the reciprocal of the modern f-number. In the following quote, an “apertal ratio” of “1/24” is calculated as the ratio of 6 inches (150 mm) to 1/4 inch (6.4 mm), corresponding to an f/24 f-stop:

In every lens there is, corresponding to a given apertal ratio (that is, the ratio of the diameter of the stop to the focal length), a certain distance of a near object from it, between which and infinity all objects are in equally good focus. For instance, in a single view lens of 6 inch focus, with a 1/4 in. stop (apertal ratio one-twenty-fourth), all objects situated at distances lying between 20 feet from the lens and an infinite distance from it (a fixed star, for instance) are in equally good focus. Twenty feet is therefore called the 'focal range' of the lens when this stop is used. The focal range is consequently the distance of the nearest object, which will be in good focus when the ground glass is adjusted for an extremely distant object. In the same lens, the focal range will depend upon the size of the diaphragm used, while in different lenses having the same apertal ratio the focal ranges will be greater as the focal length of the lens is increased. The terms 'apertal ratio' and 'focal range' have not come into general use, but it is very desirable that they should, in order to prevent ambiguity and circumlocution when treating of the properties of photographic lenses.[14]

In 1874, John Henry Dallmeyer called the ratio 1/N the “intensity ratio” of a lens:

The rapidity of a lens depends upon the relation or ratio of the aperture to the equivalent focus. To ascertain this, divide the equivalent focus by the diameter of the actual working aperture of the lens in question; and note down the quotient as the denominator with 1, or unity, for the numerator. Thus to find the ratio of a lens of 2 inches diameter and 6 inches focus, divide the focus by the aperture, or 6 divided by 2 equals 3; i.e., 1/3 is the intensity ratio.[15]

Although he did not yet have access to Ernst Abbe's theory of stops and pupils,[16] which was made widely available by Siegfried Czapski in 1893,[17] Dallmeyer knew that his working aperture was not the same as the physical diameter of the aperture stop:

It must be observed, however, that in order to find the real intensity ratio, the diameter of the actual working aperture must be ascertained. This is easily accomplished in the case of single lenses, or for double combination lenses used with the full opening, these merely requiring the application of a pair of compasses or rule; but when double or triple-combination lenses are used, with stops inserted between the combinations, it is somewhat more troublesome; for it is obvious that in this case the diameter of the stop employed is not the measure of the actual pencil of light transmitted by the front combination. To ascertain this, focus for a distant object, remove the focusing screen and replace it by the collodion slide, having previously inserted a piece of cardboard in place of the prepared plate. Make a small round hole in the centre of the cardboard with a piercer, and now remove to a darkened room; apply a candle close to the hole, and observe the illuminated patch visible upon the front combination; the diameter of this circle, carefully measured, is the actual working aperture of the lens in question for the particular stop employed.[15]

This point is further emphasized by Czapski in 1893.[17] According to an English review of his book, in 1894, “The necessity of clearly distinguishing between effective aperture and diameter of physical stop is strongly insisted upon.”[18] J. H. Dallmeyer’s son, Thomas Rudolphus Dallmeyer, inventor of the telephoto lens, followed the intensity ratio terminology in 1899.[19]

Aperture numbering systems

At the same time, there were a number of aperture numbering systems designed with the goal of making exposure times vary in direct or inverse proportion with the aperture, rather than with the square of the f-number or inverse square of the apertal ratio or intensity ratio. But these systems all involved some arbitrary constant, as opposed to the simple ratio of focal length and diameter.

For example, the Uniform System (U.S.) of apertures was adopted as a standard by the Photographic Society of Great Britain in the 1880s. Bothamley in 1891 said “The stops of all the best makers are now arranged according to this system.”[20] U.S. 16 is the same aperture as f/16, but apertures that are larger or smaller by a full stop use doubling or halving of the U.S. number, for example f/11 is U.S. 8 and f/8 is U.S. 4. The exposure time required is directly proportional to the U.S. number. Eastman Kodak used U.S. stops on many of their cameras at least in the 1920s.

By 1895, Hodges contradicts Bothamley, saying that the f-number system has taken over: “This is called the f/x system, and the diaphragms of all modern lenses of good construction are so marked.”[21]

Here is the situation as seen in 1899:

Piper in 1901[22] discusses five different systems of aperture marking: the old and new Zeiss systems based on actual intensity (proportional to reciprocal square of the f-number); and the U.S., C.I., and Dallmeyer systems based on exposure (proportional to square of the f-number). He calls the f-number the “ratio number,” “aperture ratio number,” and “ratio aperture.” He calls expressions like f/8 the “fractional diameter” of the aperture, even though it is literally equal to the “absolute diameter” which he distinguishes as a different term. He also sometimes uses expressions like “an aperture of f 8” without the division indicated by the slash.

Beck and Andrews in 1902 talk about the Royal Photographic Society standard of f/4, f/5.6, f/8, f/11.3, etc.[23] The R.P.S. had changed their name and moved off of the U.S. system some time between 1895 and 1902.
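The U.S.-to-f-number correspondence described above (U.S. 16 = f/16, with the U.S. number doubling per stop and exposure time proportional to it) reduces to the formula (N/4)². A minimal sketch, with `us_number` as a hypothetical helper name:

```python
def us_number(f_number: float) -> float:
    """Uniform System (U.S.) aperture number: (N / 4)**2.

    Exposure time is directly proportional to this number, so it doubles
    or halves with each full stop; U.S. 16 corresponds to f/16.
    """
    return (f_number / 4.0) ** 2

print(us_number(16.0))  # 16.0
print(us_number(8.0))   # 4.0
# The marked "f/11" stop is exactly f/(sqrt 2)^7 = f/11.3..., giving U.S. 8:
print(round(us_number(2 ** 3.5), 6))  # 8.0
```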

Typographical standardization

By 1920, the term f-number appeared in books both as F number and f/number. In modern publications, the forms f-number and f number are more common, though the earlier forms, as well as F-number, are still found in a few books; not uncommonly, the initial lower-case f in f-number or f/number is set in a hooked italic form: ƒ.[24]

Notations for f-numbers were also quite variable in the early part of the twentieth century. They were sometimes written with a capital F,[25] sometimes with a dot (period) instead of a slash,[26] and sometimes set as a vertical fraction.[27]

The 1961 ASA standard PH2.12-1961 American Standard General-Purpose Photographic Exposure Meters (Photoelectric Type) specifies that “The symbol for relative apertures shall be f/ or f : followed by the effective f-number.” Note that they show the hooked italic ƒ not only in the symbol, but also in the term f-number, which today is more commonly set in an ordinary non-italic face.

10.1.8 See also

• Circle of confusion

• Group f/64

• Photographic lens design

• Pinhole camera

10.1.9 References

[1] Smith, Warren (2005). Modern Lens Design. McGraw-Hill.

[2] Smith, Warren (2007). Modern Optical Engineering (4th ed.). McGraw-Hill Professional.

[3] Harry C. Box (2003). Set lighting technician’s handbook: film lighting equipment, practice, and electrical distribution (3rd ed.). Focal Press. ISBN 978-0-240-80495-8.

[4] Paul Kay (2003). Underwater photography. Guild of Master Craftsman. ISBN 978-1-86108-322-7.

[5] David W. Samuelson (1998). Manual for cinematographers (2nd ed.). Focal Press. ISBN 978-0-240-51480-2.

[6] Light transmission, DxOMark

[7] Marianne Oelund, “Lens T-stops”, dpreview.com, 2009

[8] Eastman Kodak, “H2: Kodak Motion Picture Camera Films”, November 2000 revision. Retrieved 2 September 2007.

[9] Michael John Langford (2000). Basic Photography. Focal Press. ISBN 0-240-51592-7.

[10] Levy, Michael (2001). Selecting and Using Classic Cameras: A User’s Guide to Evaluating Features, Condition & Usability of Classic Cameras. Amherst Media, Inc. p. 163. ISBN 978-1-58428-054-5.

[11] Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Sect. 5.7.1

[12] Charles F. Claver; et al. (19 March 2007). “LSST Reference Design” (PDF). LSST Corporation: 45–50. Retrieved 10 January 2011

[13] Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. ISBN 0-8194-5294-7. p. 29.

[14] Thomas Sutton and George Dawson, A Dictionary of Photography, London: Sampson Low, Son & Marston, 1867 (p. 122).

[15] John Henry Dallmeyer, Photographic Lenses: On Their Choice and Use – Special Edition Edited for American Photographers, pamphlet, 1874.

[16] Southall, James Powell Cocke (1910). “The principles and methods of geometrical optics: Especially as applied to the theory of optical instruments”.

[17] Siegfried Czapski, Theorie der optischen Instrumente, nach Abbe, Breslau: Trewendt, 1893.

[18] Henry Crew, “Theory of Optical Instruments by Dr. Czapski,” in Astronomy and Astro-physics XIII pp. 241–243, 1894.

[19] Thomas R. Dallmeyer, Telephotography: An elementary treatise on the construction and application of the telephotographic lens, London: Heinemann, 1899.

[20] C. H. Bothamley, Ilford Manual of Photography, London: Britannia Works Co. Ltd., 1891.

[21] John A. Hodges, Photographic Lenses: How to Choose, and How to Use, Bradford: Percy Lund & Co., 1895.

[22] C. Welborne Piper, A First Book of the Lens: An Elementary Treatise on the Action and Use of the Photographic Lens, London: Hazell, Watson, and Viney, Ltd., 1901.

[23] Conrad Beck and Herbert Andrews, Photographic Lenses: A Simple Treatise, second edition, London: R. & J. Beck Ltd., c. 1902.

[24] Google search

[25] Ives, Herbert Eugene (1920). Airplane Photography (Google). Philadelphia: J. B. Lippincott. p. 61. Retrieved 12 March 2007.

[26] Mees, Charles Edward Kenneth (1920). The Fundamentals of Photography. Eastman Kodak. p. 28. Retrieved 12 March 2007.

[27] Derr, Louis (1906). Photography for Students of Physics and Chemistry (Google). London: Macmillan. p. 83. Retrieved 12 March 2007.

10.1.10 External links

• f Number arithmetic

• Large format photography—how to select the f-stop

A 1922 Kodak with aperture marked in U.S. stops. An f-number conversion chart has been added by the user.

Chapter 11

Text and image sources, contributors, and licenses

11.1 Text

• History of optics Source: https://en.wikipedia.org/wiki/History_of_optics?oldid=756337285 Contributors: Ronz, Charles Matthews, Beland, Cioxx, EliasAlucard, Robert P. O'Shea, Laurascudder, West London Dweller, Carbon Caryatid, Woohookitty, Ortcutt, Table- top, Rnt20, BD2412, Ceinturion, Srleffler, Aethralis, Bgwhite, Gaius Cornelius, Dialectric, Ezeu, Finell, Sardanaphalus, AppleRaven, SmackBot, Jagged 85, Edgar181, Hmains, Colonies Chris, Leinad-Z, Crittens, Lambiam, Sina Kardar, Cydebot, OSU80, Katherine Tredwell, SteveMcCluskey, Headbomb, Dainis, AntiVandalBot, Nyttend, David Eppstein, Cocytus, Anaxial, Speed8ump, Afluegel, Foun- tains of Bryn Mawr, Kansas Bear, TXiKiBoT, Natg 19, Guérin Nicolas, StAnselm, Mx. Granger, ClueBot, J8079s, SamuelTheGhost, Jedrzejow~enwiki, Addbot, DOI bot, Fgnievinski, Arabian-Editor, Epzcaw, Redhotsaxman, Claudeb, AnomieBOT, Citation bot, GB fan, LilHelpa, S h i v a (Visnu), Poetaris, WebCiteBOT, , StaticVision, Fragma08, HamburgerRadio, Citation bot 1, Jonesey95, Full-date unlinking bot, RjwilmsiBot, John of Reading, Knight1993, BrokenAnchorBot, ClueBot NG, MathKeduor7, Helpful Pixie Bot, SzMithrandir, Senecando, Hmainsbot1, Hillbillyholiday, Reatlas, Monkbot, CyberWarfare, Kaviraus, Ttt74, Bender the Bot, TimeFor- Lunch and Anonymous: 27 • Geometrical optics Source: https://en.wikipedia.org/wiki/Geometrical_optics?oldid=763055977 Contributors: Michael Hardy, Andres, Robbot, L-H, Bender235, Opt~enwiki, Rgdboer, MZMcBride, Srleffler, SmackBot, G716, Jbergquist, DMacks, Quibik, Missvain, Magi- oladitis, R'n'B, Reedy Bot, Gombang, JohnBlackburne, Kitashi, SieBot, Keilana, ClueBot, 1ForTheMoney, Crowsnest, Addbot, LinkFA- Bot, Luckas-bot, Deltafunction~enwiki, Langmore, AnomieBOT, VanishedUser sdu9aya9fasdsopa, AdjustShift, Lookang, MastiBot, Mr- FloatingIP, Pcirrus2, FoxBot, Dinamik-bot, EmausBot, GoingBatty, ZéroBot, GianniG46, JonRichfield, Alex Nico, Theopolisme, Help- ful Pixie Bot, Jcc2011, 2pem, Guy vandegrift, MusikAnimal, 
Solomon7968, Brad7777, BattyBot, LalahGrace, Beanja, Mmitchell10, Delphenich, Mahusha, Izkala, KasparBot, Kakaw89 and Anonymous: 23 • Optics Source: https://en.wikipedia.org/wiki/Optics?oldid=761864435 Contributors: The Cunctator, Zundark, The Anome, Fredbauder, William Avery, Peterlin~enwiki, Ben-Zin~enwiki, DrBob, Heron, Youandme, Ram-Man, Patrick, Michael Hardy, Ezra Wax, Fred Bauder, Bcrowell, Looxix~enwiki, Ahoerstemeier, CatherineMunro, JWSchmidt, Mxn, Fransoo~enwiki, Ffransoo, Novum, Jose Ramos, Nastos, Eugene van der Pijll, Robbot, Kwi, Academic Challenger, Rholton, Blainster, Gnomon Kelemen, Sunray, Hadal, Giftlite, Mintleaf~enwiki, Wolfkeeper, Andris, Hugh2414, Dmmaus, Jaan513, Ehusman, Latitudinarian, Beland, Kaldari, Karol Langner, APH, BoP, Icairns, AmarChandra, Gscshoyru, Jndstar15, JTN, Mjuarez, Discospinster, Milkmandan, Cacycle, Vsmith, ArnoldReinhold, Bender235, Loren36, Brian0918, El C, Robert P. O'Shea, Laurascudder, Art LaPella, Rimshot, Bobo192, TomStar81, Smalljim, Marcelo Reis, Michel32Nl, Maurreen, 9SGjOSfyHJaQVsEmy9NS, Kjkolb, Nk, Obradovic Goran, Haham hanuka, Jakew, Ale cyn, Alansohn, Benjah-bmm27, Wt- mitchell, Kusma, Dan100, Woohookitty, Linas, LOL, Ruud Koot, Mpatel, Mrs Trellis, JonBirge, Isnow, Abd, Palica, Askewmind, Rjwilmsi, Mayumashu, Orangehatbrune, Vegaswikian, YAZASHI, Fuzzball!, Johnrpenner, FlaBot, AED, Weihao.chiu~enwiki, Nihiltres, Gurch, Srleffler, Chobot, Will Lakeman, YurikBot, Sceptre, JabberWok, Salsb, David R. Ingham, Welsh, Samir, Keithd, Nae'blis, Ariel- Gold, RunOrDie, RG2, Profero, GrinBot~enwiki, Cmglee, Sbyrnes321, DVD R W, CIreland, That Guy, From That Show!, SmackBot, RDBury, InverseHypercube, J-beda, KocjoBot~enwiki, Jagged 85, Ssbohio, Gilliam, Skizzik, Durova, Lindosland, GoneAwayNowAn- dRetired, Knowledge4all, Shaggorama, Oli Filth, Fplay, DHN-bot~enwiki, A. 
B., Proofenough, Can't sleep, clown will eat me, RProgram- mer, Jennica, SundarBot, Cybercobra, Ohconfucius, SashatoBot, Lambiam, Big Wang, Robofish, Bjankuloski06, Pflatau, Mr Stephen, Dicklyon, Nikvist, Johnny 0, Hu12, Hetar, Amakuru, AGK, Courcelles, Jbolden1517, Sue in az, Tawkerbot2, JForget, JonesMI, Ale jrb, Van helsing, JohnCD, El aprendelenguas, WeggeBot, Myasuda, Gregbard, Cydebot, W.F.Galway, Meno25, Alanbly, Quibik, Doug Weller, DumbBOT, Waxigloo, Gimmetrow, Malleus Fatuorum, Thijs!bot, Barticus88, Markus Pössel, Headbomb, John254, Rickin- Baltimore, Cool Blue, Oreo Priest, Ileresolu, AntiVandalBot, Gioto, Fred151, KP Botany, DuncanHill, MER-C, The Transhumanist, Arch dude, LittleOldMe, Crmrmurphy, Magioladitis, Dmoulton, Gabrielyan, Robmossgb, VoABot II, Jeff Dahl, Cardamon, Hveziris, GreggEdwards, Darthtire, Danieliness, MartinBot, Dima373, R'n'B, CommonsDelinker, Ctroy36, Sr903, Pharaoh of the Wizards, Hans Dunkelberg, Good-afternun!, Ignatzmice, Afluegel, Coppertwig, M-le-mot-dit, Fountains of Bryn Mawr, Ohms law, Mumphius, Julian- colton, Cartiod, OCCAdmin, Funandtrvl, VolkovBot, Macedonian, Studio815, AlnoktaBOT, Optokinetics, Soliloquial, TXiKiBoT, Bert- Sen, Magnius, Rebornsoldier, Crohnie, Rumiton, Rdsherwood, Grandnewbien, Synthebot, SieBot, BotMultichill, Dawn Bard, Csblack, JetLover, Andrew12326, Mojoworker, Dabomb87, StewartMH, Tomasz Prochownik, ClueBot, Fyyer, Optollar, Wikicat, Razimantv, Yakiv Guck, J8079s, Wangjiajun, TypoBoy, Piledhigheranddeeper, Hwyengineer47, SamuelTheGhost, Excirial, Lartoven, Frozen4322, SniperLegend8, Versus22, XLinkBot, Nepenthes, RiStevens, WikHead, Badgernet, Soha.namnabat~enwiki, Addbot, Pecos Joe, Epz- caw, Ginosbot, Weasel494, Verbal, Lightbot, Quantumobserver, Legobot, Luckas-bot, Guy1890, AnomieBOT, Piano non troppo, Ma- terialscientist, Citation bot, Kalamkaar, ArthurBot, LilHelpa, MauritsBot, Xqbot, Corrigendas, Addihockey10, Mrba70, GrouchoBot,


Omnipaedista, SassoBot, Ransim, Smallman12q, WaysToEscape, Aaron Kauppi, , FrescoBot, LucienBOT, Paine Ellsworth, Steve Quinn, Troy123, JameKelly, Wjh31, Citation bot 1, Alipson, Pinethicket, Focus, Churchman6718, Carolingio93, Jauhienij, MrFloat- ingIP, TobeBot, OpticsPhysics, OozeAndOz, Diannaa, Sherbert1, Marie Poise, EmausBot, John of Reading, WikitanvirBot, Eekerz, Anon23412, GoingBatty, RA0808, Minimac’s Clone, Exok, RenamedUser01302013, Jmencisom, Wikipelli, Dcirovic, Hhhippo, Ida Shaw, Finemann, GianniG46, Eaglehaslanded, Wayne Slam, VacioBot, Tls60, Carmichael, ClueBot NG, Gareth Griffith-Jones, Toby Manzanares, GioGziro95, Matthiaspaul, Lordain0, IOPhysics, Helpful Pixie Bot, Bibcode Bot, 2pem, Guy vandegrift, Solar Police, MusikAnimal, Davidiad, Snow Rise, Cybercopyedit, Supremeaim, Devenmehta2006, Glacialfox, Minsbot, Danpeden, Factfindersonline, ,Reatlas, Harish.awasthi ,ַאְבָרָהם ,BattyBot, Cyberbot II, Melenc, Khazar2, Dexbot, FoCuSandLeArN, Mr. Guye, TwoTwoHello, KWiki Eyesnore, PhantomTech, Hdhdhdhdhdhdhdhd, Apeman2, Manul, SJ Defender, Crisalin, Aperrone330, Jasnds, Permafrost46, AKS.9955, Filedelinkerbot, Ayrıntılı Bilgi, Trackteur, Trilumen, Vítor, Izkala, StanfordLinkBot, Bhargava1234, KasparBot, Jmc76, Kurousagi, Someone2169, Hacker123243, AlineXu, Mortee, Erstada and Anonymous: 331 • Optical instrument Source: https://en.wikipedia.org/wiki/Optical_instrument?oldid=758418277 Contributors: The Anome, XJaM, Dr- Bob, Michael Hardy, Blainster, Utcursch, JTN, Quiddity, Yamamoto Ichiro, Nihiltres, Srleffler, Kevin, Alexandrov, SmackBot, Pflatau, 16@r, Thijs!bot, Husond, MER-C, Nono64, PatríciaR, Floaterfluss, NathanPhoenix, Fountains of Bryn Mawr, Deor, Andy Dingley, Alle- borgoBot, SieBot, Gerakibot, JerrySteal, Radon210, ClueBot, Blanchardb, The Red, DumZiBoT, MystBot, Addbot, Cst17, Luckas-bot, Yobot, Addihockey10, Erik9bot, ClickRick, RedBot, Steve2011, Vrenator, Mishae, AndyHe829, EmausBot, ClueBot NG, Satellizer, Titodutta, OttawaAC, Markhole, 
RscprinterBot, Qetuth, YFdyh-bot, Sriharsh1234, Apeman2, JaconaFrere, Ihsaanwant, Tasha69, AndreiGabriel16 and Anonymous: 39 • Lens (optics) Source: https://en.wikipedia.org/wiki/Lens_(optics)?oldid=762991451 Contributors: Derek Ross, The Anome, Andre Engels, Rmhermen, PierreAbbat, DrBob, Caltrop, Heron, Mintguy, Montrealais, Topory, Stevertigo, Nealmcb, Patrick, Michael Hardy, Dhum Dhum, Ixfd64, Egil, Athypique~enwiki, Cimon Avaro, Evercat, Hashar, RodC, Adam Bishop, Tantalate, Daniel Quinlan, Marshman, Itai, Shizhao, Lumos3, PuzzletChung, Robbot, ZimZalaBim, Altenmann, Dng88, Wikibot, Giftlite, SamB, Wolfkeeper, BenFrantzDale, Ssd, Leonard G., AJim, Hugh2414, Edcolins, Isidore, PeterC, LiDaobing, MarkSweep, Moxfyre, Danh, Talkstosocks, Mike Rosoft, Poccil, EugeneZelenko, Discospinster, Rich Farmbrough, Cacycle, Brandon.irwin, Wk muriithi, Bender235, Bcjordan, Nabla, West London Dweller, Yono, Fir0002, Meggar, Nk, Bawolff, Samadam, Keenan Pepper, InShaneee, Wtshymanski, Rick Sidwell, Evil Monkey, Dirac1933, Sciurinæ, Gene Nygaard, Drbreznjev, Oleg Alexandrov, Miaow Miaow, Pol098, Jonnabuz, CPES, Christopher Thomas, Rnt20, Qwertyus, Dpv, Mendaliv, Rjwilmsi, Filu~enwiki, Bhadani, FlaBot, Default007, AED, Ewlyahoocom, Gurch, ElfQrin, Fresheneesz, Srleffler, DVdm, Design, Bgwhite, Amaurea, The Rambling Man, YurikBot, Wavelength, Jimp, RussBot, Groogle, Shell Kinney, Gaius Cornelius, Wimt, Apchar, Moe Epsilon, Kyle Barbour, BeastRHIT, Petr.adamek, Kkmurray, Tjnugent, Nick123, SMcCandlish, Cmglee, DVD R W, Quadpus, SmackBot, Timrb, Maksim-e~enwiki, Melchoir, Unyoyega, Jagged 85, Edgar181, DMTagatac, Keegan, Bob the ducq, CSWarren, Jd2207, DHN-bot~enwiki, Onceler, Can't sleep, clown will eat me, Rrburke, Downwards, Astroview120mm, DMacks, Marksven, Vina-iwbot~enwiki, SashatoBot, Khazar, Ninjagecko, FrozenMan, Jpogi, Shadowlynk, Bjankuloski06en~enwiki, Pflatau, CoolKoon, Stikonas, Dicklyon, Optakeover, Mets501, MTSbot~enwiki, AntOnTrack, ShakingSpirit, Wizard191,
JMK, Amakuru, Courcelles, Garvin58, Tawkerbot2, Pete g1, PorthosBot, Megaboz, Meyavuz, Bkessler23, Daniel 123, Pinestone, Doug Weller, Optimist on the run, Skyyblu3, SteveMcCluskey, Xanthis, TonyTheTiger, Steve Dufour, Vmadeira, Headbomb, Bobblehead, Rosuna, Dawnseeker2000, Openlander, Escarbot, AntiVandalBot, Danger, Lfstevens, MikeLynch, JAnDbot, Plantsurfer, Doc phil, VoABot II, Omerzu, Diamond2, Theroadislong, Hackfish, User A1, Cpl Syx, MartinBot, Physicists, Jim.henderson, Anaxial, Nono64, Ctroy36, Discpad, J.delanoy, Captain panda, Rlsheehan, Thaurisil, Bakkouz, Good-afternun!, DarkFalls, Jmajeremy, Afluegel, Mikael Häggström, M-le-mot-dit, Fountains of Bryn Mawr, Rosenknospe, Hanacy, STBotD, Vanished user 39948282, Treisijs, Bonadea, Ja 62, ITAIM566, VolkovBot, EEye, Johan1298~enwiki, Jeff G., Almazi, TXiKiBoT, A4bot, Saber girl08, Anna Lincoln, Viridiflavus~enwiki, Clarince63, Broadbot, DoktorDec, BotKung, Deranged bulbasaur, Erkan Umut, Rileyboughton, Symane, EmxBot, Eggebee1, Thw1309, MarwoodGRFC, SieBot, StAnselm, Holyschnapps, Mbz1, Caltas, Triwbe, Bentogoa, Anchor Link Bot, Twinsday, Loren.wilton, ClueBot, The Thing That Should Not Be, J8079s, DragonBot, Excirial, Jusdafax, Lenary, Bernard SOULIER, Johnuniq, TheProf07, Crossbyte, XLinkBot, Rror, Interferometrist, Zipspeed, WikiDao, MystBot, Iranway, Metodicar, Addbot, Some jerk on the Internet, DOI bot, AkhtaBot, CanadianLinuxUser, NjardarBot, D.c.camero, Chzz, Favonian, Mraiford, SamatBot, LinkFA-Bot, Ericg33, 5 albert square, Tide rolls, Redhotsaxman, QuadrivialMind, Zorrobot, HerculeBot, Luckas-bot, Yobot, THEN WHO WAS PHONE?, AnomieBOT, Andrewrp, KDS4444, 1exec1, Jim1138, IRP, AdjustShift, Chuckiesdad, Materialscientist, Citation bot, Obersachsebot, Xqbot, KingSteveinator, RibotBOT, Amaury, N419BH, Griffinofwales, FrescoBot, Lookang, Iamthebestpersonintheworld, D'ohBot, Ionutzmovie, OgreBot, Citation bot 1, Alipson, Biker Biker, Pinethicket, 10metreh, Achraf52, LiborX, Serols,
Tamasflex, Jauhienij, MrFloatingIP, CobraBot, Xueboc, Sogar1, Lissadina, EmausBot, John of Reading, Orphan Wiki, Domesticenginerd, Immunize, GoingBatty, RA0808, Bengt Nyman, Vanished user zq46pw21, K6ka, Slawekb, Misty MH, CVIMG, Traxs7, Javacash, Anir1uph, Gordonheeks, Wayne Slam, Demiurge1000, Rainset, L Kensington, AVarchaeologist, Slowfastslow, George Makepeace, Petrb, Mikhail Ryazanov, ClueBot NG, Cheese2194, Hazhk, Braincricket, O.Koslowski, Widr, WikiPuppies, Helpful Pixie Bot, O.leanse, Plantdrew, Jeraphine Gryphon, Darthjamesy, Guy vandegrift, TCN7JM, Smarties1997, Sfnagle, Sourabh.khot, Verbcatcher, Mike1922, BattyBot, DavinaDM, Cyberbot II, ChrisGualtieri, Tagremover, TheJJJunk, Izziswimmer12, FoCuSandLeArN, Webclient101, TowerOfBricks, CuriousMind01, Elec junto, Mpwood33, SteenthIWbot, Telfordbuck, The Anonymouse, Malusisters12345, Eyesnore, Jodosma, Dairhead, Brewernr, Mehdi Hassan, AddWittyNameHere, Jocelyn Parent Optical Designer, Monkbot, Capacitor12, KasparBot, Nkuehnle07, GreenC bot, JueLinLi, Thamjith008, SirMatthias and Anonymous: 459 • Focus (optics) Source: https://en.wikipedia.org/wiki/Focus_(optics)?oldid=737645444 Contributors: Andres, Hyacinth, Ancheta Wis, Fudoreaper, Fropuff, Qutezuce, Reinyday, Obradovic Goran, Mark Dingemanse, Mindmatrix, Ckelloug, Filu~enwiki, AED, Ewlyahoocom, Srleffler, Jtkiefer, Diliff, Voidxor, Josh3580, Cmglee, HenrikP, MalafayaBot, Complexica, Javalenok, SashatoBot, Bjankuloski06en~enwiki, Pflatau, Thomas Gilling, Dicklyon, Pegasus1138, Cydebot, Perfect Proposal, Headbomb, WinBot, Darklilac, Soulbot, DadaNeem, VolkovBot, CanOfWorms, AlleborgoBot, SieBot, Phe-bot, Smsarmad, Sidesninth, Li4kata, ClueBot, Mild Bill Hiccup, Erudecorp, Addbot, Fieldday-sunday, Ld100, ماني, Gail, חובבשירה, Fryed-peach, Luckas-bot, Aboalbiss, ESCapade, GrouchoBot, Jean-François Clet, EmausBot, Borg*Continuum, RockMagnetist, ClueBot NG, Nharish04, Rezabot, Darafsh, Tagremover, FrB.TG, Prolumbo, CitrusEllipsis, GreenC bot,
Bear-rings and Anonymous: 43 • Cardinal point (optics) Source: https://en.wikipedia.org/wiki/Cardinal_point_(optics)?oldid=757186226 Contributors: DrBob, Leandrod, BenFrantzDale, Cynical, Bender235, Hooperbloob, Jeltz, Srleffler, YurikBot, ZabMilenko, SmackBot, Melchoir, Elagatis, Kostmo, Fmlburnay, Roguegeek, Gonioul, Severoon, Pflatau, Misteror, Dicklyon, Garvin58, ShelfSkewed, Thijs!bot, AntiVandalBot, Richard Giuly, DinoBot, R'n'B, Panoramikz~enwiki, Cometstyles, YonaBot, Odo Benus, Anchor Link Bot, Arvind Guru - Zetta, Addbot, Debresser, Yobot, AnomieBOT, 1exec1, Redbobblehat, Piano non troppo, Citation bot, HHahn, Justinlee22, Citation bot 1, Tamasflex, Lotje, Vajr, GianniG46, TedEyeMD, ClueBot NG, Helpful Pixie Bot, Bertoche, Nigellwh, Daaxix and Anonymous: 38
198 CHAPTER 11. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES

• Hyperfocal distance Source: https://en.wikipedia.org/wiki/Hyperfocal_distance?oldid=751687064 Contributors: DrBob, Patrick, Doug Pardee, Charles Matthews, Dmmaus, Mboverload, MarkSweep, Girolamo Savonarola, Icairns, Fg2, Qutezuce, Bender235, ESkog, Fat- phil, Hooperbloob, Gisling, PatrickSauncy, The wub, Nemethe, Srleffler, Cmglee, SmackBot, InverseHypercube, Betacommand, Durova, Fredvanner, Dicklyon, IvanLanin, CmdrObot, The Transhumanist, JeffConrad, Magioladitis, VolkovBot, SieBot, ClueBot, MystBot, Ad- dbot, G.Hagedorn, Luckas-bot, TheAMmollusc, Solventvapour, WikitanvirBot, ZéroBot, Hirumon, Atelierelealbe, EgodE, Jacopo188, LWizards, EngineerBryan and Anonymous: 19 • Angle of view Source: https://en.wikipedia.org/wiki/Angle_of_view?oldid=746939783 Contributors: Koyaanis Qatsi, Ericd, Egil, Big- FatBuddha, Andrewa, Phil Boswell, Hankwang, RedWolf, Seano1, BenFrantzDale, Nkocharh, EJDyksen, Bobblewik, MarkSweep, Mza- jac, Moxfyre, Imroy, FT2, Wareh, Jeffmedkeff, Toh, Hooperbloob, RoySmith, Cburnett, Pol098, Bubba73, The wub, DirkvdM, Srleffler, Physchim62, Chobot, Jasabella, YurikBot, Laundrypowder, Zwobot, Cmglee, SmackBot, Marc Lacoste, KocjoBot~enwiki, Liaocyed, Bluebot, J-wiki, Nbarth, Chlewbot, Midnightcomm, BillFlis, Dicklyon, Mets501, Flipperinu, Mit 85, Anoneditor, BDS2006, Mactogra- pher, Stybn, Ben pcc, JAnDbot, GurchBot, JeffConrad, GermanX, Felisopus, Kb1, Wdpics, SharkD, KjeldOlesen, RenniePet, Fountains of Bryn Mawr, Biglovinb, The enemies of god, TXiKiBoT, Vipinhari, Patche99z, Jamelan, Dirkbb, AlleborgoBot, SieBot, Beeemtee, Wikievil666, DragonBot, Jusdafax, SoxBot III, Expertjohn, Dekart, Jmkim dot com, Addbot, Deep silence, Jhenderson424, Efa, The Lamb of God, Shadowjams, FrescoBot, Arjan Mels, Citation bot 1, Tjmoel, Some Wiki Editor, Lopifalko, Barkal2, EmausBot, Orphan Wiki, Eekerz, Wikipelli, QuentinUK, Hirumon, Ὁ οἶστρος, Mikhail Ryazanov, JordoCo, Mr. 
Credible, AT563, Enigmaticca, QAQUAU, Darafsh, BattyBot, Tagremover, JazzSong, Zooeyjfp, Rfassbind, Hoglundtw, Sleap, InternetArchiveBot, Bender the Bot and Anonymous: 72 • Field of view Source: https://en.wikipedia.org/wiki/Field_of_view?oldid=744784232 Contributors: Graft, Ellywa, Dcoetzee, Merovin- gian, Nunh-huh, Eequor, Sonjaaa, Jareha, Robin klein, FinalGamer, Shadow demon, Johnkarp, Hooperbloob, Hackwrench, Ynhockey, Drbreznjev, Thryduulf, Eirikr, Sin-man, Rjwilmsi, DirkvdM, AED, Nihiltres, Srleffler, YurikBot, RussBot, Bcebul, RadioFan, Smack- Bot, Radoslaw Ziomber, Hydrogen Iodide, Gilliam, BirdValiant, Chris the speller, AstroMalasorte, Ronik~enwiki, Frap, Xyzzyplugh, Ortzinator, Dicklyon, IvanLanin, Twas Now, Tawkerbot2, Sadalmelik, Anthonyhcole, Tdvance, Epbr123, Stybn, Porqin, AntiVandalBot, JAnDbot, Jarekt, Jim.henderson, Nono64, Lee Kay, Kirujoy, Css2002, Sjones23, Elusive Pete, SieBot, Strasburger, Lourakis, Necoplay, The Thing That Should Not Be, Tiamat2~enwiki, Dantesoft, Gammel nisse, La Pianista, DumZiBoT, Nathan Johnson, Avoided, Addbot, M.nelson, Lightbot, Jarble, Luckas-bot, Yobot, Amirobot, KamikazeBot, AnomieBOT, Efa, Redbobblehat, The Lamb of God, Keithbob, Citation bot, LilHelpa, FrescoBot, Mxipp, PigFlu Oink, Tinton5, Xfansd, Creutiman, Andrea105, Ripchip Bot, EmausBot, Jymbook, Super48paul, Jmencisom, Wikipelli, Ὁ οἶστρος, Aladarius, Mikhail Ryazanov, ClueBot NG, Yourmomblah, Mr. 
Credible, AT563, BG19bot, BattyBot, Khazar2, Tony Mach, Rfassbind, KasparBot, Swissnetizen, Bender the Bot and Anonymous: 88 • Depth of field Source: https://en.wikipedia.org/wiki/Depth_of_field?oldid=757049128 Contributors: Brion VIBBER, Timo Honkasalo, The Anome, Koyaanis Qatsi, Ap, XJaM, Rmhermen, William Avery, Ericd, Ubiquity, Michael Hardy, Blueshade, Arpingstone, Minesweeper, Egil, Doug Pardee, BigFatBuddha, Ehn, Technopilgrim, Redjar, Dmsar, Jogloran, Pakaran, Postdlf, 75th Trombone, Hoot, Rrjanbiah, Xanzzibar, Pengo, Marc Venot, Ancheta Wis, BenFrantzDale, MadmanNova, Naufana, Macrakis, ChicXulub, Beland, MarkSweep, Jfliu, Girolamo Savonarola, BoP, Icd, Moxfyre, Rfl, Imroy, Discospinster, Guanabot, Rama, Harriv, Bender235, Mattdm, Causa sui, Fir0002, Jeffmedkeff, Shenme, .:Ajvol:., Johnteslade, AndyBrandt, PiccoloNamek, MPerel, Hooperbloob, Atlant, Digitalmoron, Hoary, Mac Davis, Spangineer, Dschwen, Cburnett, Kelly Martin, Mindmatrix, Jacobolus, Polyparadigm, Zippo, Hughcharlesparker, Gisling, Ligar~enwiki, Coneslayer, JHMM13, The wub, MarnetteD, Parkis, AED, Weihao.chiu~enwiki, Srleffler, YurikBot, RobotE, Peter S., Hede2000, Hellbus, Miskatonic, DarkPhoenix, LexieM, Jbattersby, PyroGamer, Emijrp, Super Rad!, Terrycallen, Katieh5584, Jeremy Butler, Cmglee, SmackBot, Fireworks, Blue520, Eskimbot, !nok, Ohnoitsjamie, Betacommand, GoneAwayNowAndRetired, Chris the speller, Nbarth, Onceler, Tsca.bot, Jp498, William Grimes, Onorem, Fitzhugh, Towsonu2003~enwiki, Ceoil, Autopilot, Pomakis, Pflatau, Dicklyon, Optakeover, TastyPoutine, H, Pfeldman, MightyWarrior, Makeemlighter, NaBUru38, WeggeBot, Fletcher, MC10, DumbBOT, Phydend, Kozuch, Thijs!bot, Mactographer, Marek69, Maximilian Schönherr, Klausness, Escarbot, Gioto, Salgueiro~enwiki, Parande, The Transhumanist, JeffConrad, VoABot II, Antientropic, Avicennasis, Sgr927, Gomm, MarcLevoy, Gphoto, Starheart3d, Autophoto, J.delanoy, Thegreenj, Speed8ump, Mehmetaergun, Ignacio Icke, SharkD, Jamesmcardle, Gurchzilla, 
In Transit, Althepal, Dcouzin, WOSlinker, Walvis, Michi zh, DesmondW, Vitz-RS, SieBot, VVVBot, Keilana, JetLover, Fratrep, WalrusJR, Odo Benus, Curtdbz, Asankegalgomuwa, Church, Martarius, De728631, ClueBot, Binksternet, The Thing That Should Not Be, Mi6el~enwiki, Jomsborg, Alexbot, Alex1ruff, XLinkBot, Fifedog, Delicious carbuncle, Freezing in Wisconsin, Grayfell, Dswader, Jamesington, Skydude6, Fire- allconsuming, Bernie Kohl, Numbo3-bot, Loupeter, MuZemike, Legobot, Yobot, Redvex81rg, KDS4444, Rubinbot, Camponez, Piano non troppo, B137, Materialscientist, Eumolpo, LilHelpa, Xqbot, Calculist, Mangeshshripad, Adrian1906, I am Me true, Mihanolis, Om- nipaedista, RibotBOT, Erik9, A.amitkumar, FrescoBot, Tony Wills, BornisMedia, WazzoTheMartian, MJ94, Ajuniper, Sir Edward V, ,Tricajus, EmausBot, Johann3s ,ناخدا ,Michel192cm, HelenOnline, Christobrier, Unknown1234321, Tubehahaha, DARTH SIDIOUS 2 Artiom.chaplygin, Thargor Orlando, Hirumon, Raytrix, JayQew87, Glockenklang1, L Kensington, Gsarwa, Chokity, ChuispastonBot, TYelliot, Kadcpm88, Kleopatra, Dllu, Petrb, Imohano, ClueBot NG, Matthiaspaul, Ahmedkamal1987, Beukebo, BG19bot, Jacopo188, TyArnberg, Jochen 0x90h, Mogism, JonathanMather, Tony Mach, Mark viking, EvergreenFir, Jaredzimmerman (WMF), Northeastern Nomad, Jackmcbarn, ScienEar, Donrfleming, Théophane 631, Lauro Sirgado, Deltasine, Veeblefetzer, Manudouz, VonHaarberg, Fmadd and Anonymous: 236 • Focal length Source: https://en.wikipedia.org/wiki/Focal_length?oldid=755511864 Contributors: PierreAbbat, DrBob, Glenn, Rl, Ehn, Cheeni, Redjar, Denelson83, Robbot, Hankwang, Diberri, Inter, Fudoreaper, BenFrantzDale, Hugh2414, Beland, BoP, Icairns, Mox- fyre, Discospinster, Milkmandan, Alistair1978, Quistnix, Neko-chan, Bobo192, Nigelj, Meggar, Thebassman, Hooperbloob, Wricar- doh~enwiki, RainbowOfLight, Mindmatrix, Anarchitect~enwiki, KymFarnik, Yurik, Cambridgeincolour, AndyKali, The wub, Titoxd, Margosbot~enwiki, Srleffler, DVdm, Borgx, RobotE, Aaron Brenneman, 
William Graham, Galar71, Jeremy Butler, Cmglee, SmackBot, Henrikb4, McGeddon, Unyoyega, Liaocyed, KYN, Betacommand, Chris the speller, Oli Filth, Nbarth, DHN-bot~enwiki, MarcGushwa, Mike1901, IronGargoyle, Pflatau, Dicklyon, AdultSwim, BranStark, Themightyquill, Swales, Gogo Dodo, JFreeman, Thijs!bot, Epbr123, Headbomb, Canadian-Bacon, The Transhumanist, .anacondabot, MartinBot, Keith D, SharkD, STBotD, VolkovBot, Chitrapa, Oshwah, Cosmic Latte, Venny85, Lbmarshall, Yomamas, Tiddly Tom, Rooh23, Martarius, ClueBot, Mild Bill Hiccup, DragonBot, Alexbot, Alex1ruff, LieAfterLie, SoxBot III, Addbot, ConCompS, Hakan Kayı, Lightbot, Zorrobot, Arbitrarily0, Legobot, Luckas-bot, عبد المؤمن, HairyPerry, KDS4444, Materialscientist, The High Fin Sperm Whale, Citation bot, Maxis ftw, Xqbot, GrouchoBot, Gbruin, VI, Pinethicket, Arctic Night, Serols, Tamasflex, Jujutacular, MrFloatingIP, Dinamik-bot, OrionKnight, Xclassmechluv, Googoohead, EmausBot, Everything Else Is Taken, EdEColbert, AquaGeneral, BrokenAnchorBot, Autoerrant, RockMagnetist, Lisawaa, ClueBot NG, Muon, Helpful Pixie Bot, Sfnagle, Bandit1125, Sjthomas12, ChrisGualtieri, Ravimishra53, Kgccc, Devpry, Melonkelon, Anarcham, Mgkrupa, Prajwal Charles K, StinkYbmbs, Bender the Bot, Lewis Shuck and Anonymous: 137

• Magnification Source: https://en.wikipedia.org/wiki/Magnification?oldid=762118074 Contributors: Manning Bartlett, SimonP, DrBob, Nealmcb, Michael Hardy, Lou Sander, Flockmeal, Bearcat, Fredrik, Centrx, Leonard G., Atomius, Isidore, PenguiN42, MrMambo, OverlordQ, Nerd65536, Moxfyre, Florian Blaschke, Bender235, RoyBoy, Jag123, Sciurinæ, Versageek, MIT Trekkie, Woohookitty, Linas, Commander Keane, Torqueing, SeventyThree, Adam.gibson, Theodork, FlaBot, SchuminWeb, Djrobgordon, Srleffler, Gbm, Daniel Mietchen, Anastasius zwerg, Kkmurray, Ageekgal, Alanb, SmackBot, Man with two legs, Gelingvistoj, Carl.bunderson, Lpgaffney, Rigadoun, MrGodot, Anand Karia, 16@r, Dr.K., DabMachine, Morgan Wick, Ibadibam, Penbat, Thijs!bot, Mwarren us, .anacondabot, Email4mobile, Edward321, Boneka, Rettetast, Hasanisawi, Peppergrower, Bobianite, Juliancolton, Larryisgood, Red Act, Qwerty91~enwiki, HiDrNick, SieBot, Flyer22 Reborn, Kropotkine 113, OKBot, ClueBot, Rb126, Iohannes Animosus, Thingg, DumZiBoT, Zipspeed, Salvadoradi, Addbot, Mortense, Tanhabot, Qkowlew, Luckas Blade, Luckas-bot, Yobot, Ptbotgourou, KamikazeBot, AnomieBOT, Redbobblehat, AdjustShift, Materialscientist, GrouchoBot, SassoBot, Erik9bot, FrescoBot, Pinethicket, Tamasflex, Orenburg1, Jpceulemans, Mean as custard, Manyman, EmausBot, AvicAWB, Access Denied, Robert M Dewey, Royeverett, Weirdvideos, ClueBot NG, Helpful Pixie Bot, BG19bot, Airjevans, BattyBot, Heywiki101, Chivesud, Muhammad112, G.Kiruthikan, Kylegodbey, JaconaFrere, YiFeiBot, רן כהן, Rifen89, Bender the Bot, Parth ranawat, Damdaniellllllllllllllll and Anonymous: 99 • Distortion (optics) Source: https://en.wikipedia.org/wiki/Distortion_(optics)?oldid=757016044 Contributors: SimonP, Patrick, BenFrantzDale, Macrakis, Zzo38, Rich Farmbrough, Mdf, O18, Collinong, Mindmatrix, Jacobolus, Tabletop, Qwertyus, Bitoffish, Srleffler, Antilived, Adoniscik, Borgx, Cmglee, SmackBot, Marc Lacoste, Nbarth, Beetstra, Calibas, Dicklyon, Anoneditor, Thijs!bot,
Lovibond, Joe Schmedley, Nyttend, SoyYo, Kostisl, Tikiwont, Normankoren, Cmansley, OlavN, Aamackie, Guy Van Hooveld, WalrusJR, Odo Benus, XLinkBot, Nepenthes, Addbot, AnomieBOT, Tiger1027tw, Tom.Reding, Lotje, SpoonlessSA, John of Reading, Eekerz, Mer- litz, Jmencisom, Gsarwa, Mail6543210, MerlIwBot, BG19bot, St900, Wolfgang42, BattyBot, Bakkedal, ChrisGualtieri, OhCh9voh, Monkbot, Yikkayaya, Plumonito, Bicubic, Interpuncts, Laotinant, Latosh Boris and Anonymous: 40 • Optical aberration Source: https://en.wikipedia.org/wiki/Optical_aberration?oldid=760120748 Contributors: Joao, Bryan Derksen, The Anome, Taral, Toby Bartels, DrBob, Michael Hardy, Egil, Looxix~enwiki, Glenn, EdH, Lancevortex, Dcoetzee, Grendelkhan, Penfold, Robbot, Zandperl, Sbisolo, BenFrantzDale, Leonard G., Chinasaur, Hugh2414, Madoka, Chowbok, DavidBrooks, Beland, Phe, Adam- rice, Icairns, Sam Hocevar, Danh, Talkstosocks, Brianjd, Duja, Brianhe, Rich Farmbrough, Guanabot, Dave Foley, Martin TB, Pjacobi, Berkut, CanisRufus, Tenfour, AlexTheMartian, Pearle, Deacon of Pndapetzim, Pfalstad, Yurik, Rpbapo1, Rjwilmsi, Hblaser, Vuong Ngan Ha, AED, Margosbot~enwiki, Srleffler, Jmorgan, Roboto de Ajvol, YurikBot, RussBot, Dominican, Gaius Cornelius, Enormous- dude, robot, SmackBot, Unyoyega, KocjoBot~enwiki, Chris the speller, Bluebot, Nbarth, Spiritia, Yukishiro, Diverman, Pflatau, Dicklyon, Iridescent, Amakuru, Lenoxus, CRGreathouse, JonesMI, Zureks, Myasuda, Quibik, Brad101, Thijs!bot, Barticus88, Mglg, Headbomb, Xuanji, Thadius856, Rbowman, Orionus, Edokter, Skarkkai, Magioladitis, Swpb, Neededandwanted, InvertRect, Svhaver, R'n'B, DarkFalls, Koven.rm, Dark Green, Gombang, Fountains of Bryn Mawr, Yanroy, Catfood73, Rei-bot, Redikufuk, Don4of4, Gfu- tia, Billinghurst, Temporaluser, SieBot, Totally screwed, Psychless, A. 
Carty, Odo Benus, Senbazuru, Martarius, Alexbot, Tide rolls, Zorrobot, Legobot, Yobot, Amirobot, Materialscientist, Citation bot, FrescoBot, Saiarcot895, 0x30114, Clarice Reis, EmausBot, Parky- wiki, GoingBatty, Mats Löfdahl, Jmencisom, Kiatdd, ChuispastonBot, ClueBot NG, Widr, Helpful Pixie Bot, AFedosov, BattyBot, ChrisGualtieri, Tagremover, Stas1995, Spyglasses, HellTchi, DavidBrooks-AWB, Unician, Hellomynameismrs, Boehm, Premkiranuoh, GreenC bot, Naadal90 and Anonymous: 52 • Orb (optics) Source: https://en.wikipedia.org/wiki/Orb_(optics)?oldid=762540375 Contributors: Rdrozd, Tpbradbury, Jni, Gtrmp, Alexan- der.stohr, Rich Farmbrough, Vsmith, FrankCostanza, 9SGjOSfyHJaQVsEmy9NS, Kjkolb, Velella, Tony Sidaway, Kay Dekker, Kenyon, Revived, Angr, MamaGeek, WadeSimMiser, JeremyA, Llamnuds, Plrk, BorgHunter, Redwolf24, TeaDrinker, Jachin, Hydrargyrum, Zythe, Kortoso, Varano, Wknight94, Hal peridol, SmackBot, McGeddon, C.Fred, Mercifull, Bluebot, Mdwh, Stevage, Wasqps, Can't sleep, clown will eat me, Jefffire, Japeo, Acdx, Ollj, Mkimura1971, JHunterJ, Calibas, Meco, Iridescent, Jaksmata, McBill, Urutapu, Ladyhawker, Leujohn, Calle555, Bsdaemon, Docmgmt, Noclevername, Majorly, LuckyLouie, Barek, MER-C, Eurobas, Kangel, Taoc, Dr.falko, Faeden1, ExodusEleven, A3nm, Alleborgo, William James Croft, DerHexer, Waninge, Greenguy1090, Jim.henderson, Antije, Trusilver, Katalaveno, NewEnglandYankee, Antisora, Vinsfan368, VolkovBot, Orphic, Jeff G., Stelker, IPSOS, Cuddlyable3, Wiae, Andy Dingley, Piecemealcranky, Brianga, Stasven, Dogah, Gerakibot, Caltas, Sephiroth storm, Yintan, Oxymoron83, Jack1956, Anordstrom, Kutera Genesis, Superbeecat, Bpeps, Desean81, Twinsday, ClueBot, Binksternet, DxMxD, Mathman10, JeanNewton, 842U, Mister- normal, Numskullz, Dsmart, Nrstanley, Alrightthen, Addbot, Ashton1983, OlEnglish, Yobot, Mmxx, K2709, AnomieBOT, Piano non troppo, DMWuCg, Haples1, Sophus Bie, Buttons0603, Thehelpfulbot, Omni314, July1962, ErikvanB, Mean as custard, RjwilmsiBot, 
Bento00, BrendanFrye, EmausBot, John of Reading, Moretim, Ramon FVelasquez, Wikipelli, HiW-Bot, Askedonty, Anir1uph, Sger- bic, Pleiadestars22, Tolly4bolly, TyA, Donner60, ClueBot NG, Chester Markel, Movses-bot, Yanclae, Marcnovo, Harizotoh9, Richard- Mills65, GoShow, Earth100, SFK2, PYRFOTOS, JaconaFrere, Tdk408, KH-1, Dave Fallon, JJMC89, 10798X407, Mirazaliullah and Anonymous: 209 • F-number Source: https://en.wikipedia.org/wiki/F-number?oldid=761661458 Contributors: Brion VIBBER, The Anome, Tarquin, Koy- aanis Qatsi, Caltrop, Ericd, Michael Hardy, JeremyR, Egil, Ehn, Charles Matthews, AWhiteC, Zoicon5, Morven, Finlay McWalter, Donarreiskoffer, Robbot, RedWolf, Dbenbenn, Tremolo, BenFrantzDale, Kurt Eichenberger, Bradeos Graphon, Iridium77, TomViza, Sunny256, Eequor, Isidore, Comatose51, Beland, OverlordQ, MarkSweep, Neffk, Girolamo Savonarola, DragonflySixtyseven, Clarknova, Grunners, LHOON, Fg2, Moxfyre, Tobias Wolter, Shagie, Helohe, Rich Farmbrough, Cacycle, Rama, Kwamikagami, Cacophony, Aaron- brick, Fir0002, Cmdrjameson, Hooperbloob, Lysdexia, Seans Potato Business, Katana, Velella, Cburnett, Suruena, RainbowOfLight, Henry W. 
Schmitt, Mark mclaughlin, Kazvorpal, Madmatt213, JordanSamuels, Firsfron, Imaginatorium, Mindmatrix, Jackel, Tom Ameye, Guy M, Splintax, Pol098, Vreejack, Matturn, Ketiltrout, Filu~enwiki, Cambridgeincolour, The wub, Evermail, Srleffler, King of Hearts, Chobot, Bgwhite, YurikBot, RussBot, Hellbus, Gaius Cornelius, Member, Janke, Phil Bastian, LAW, William Graham, Zwobot, Lars Trebing, Daniel C, X-mass, Mikus, Jeremy Butler, That Guy, From That Show!, SmackBot, Bazza 7, Midway, Sonu27, Yamaguchi, Ohnoitsjamie, Betacommand, Chris the speller, Stevage, Gingi0, Modest Genius, Matthew, TheKMan, Kevinpurcell, Midnightcomm, Autopilot, The undertow, Khazar, Mgiganteus1, Dicklyon, AntOnTrack, Cbuckley, Vargklo, Walter Dufresne, Jim Stinson, Balazer, Runningonbrains, IntrigueBlue, MaxEnt, Lofote, Vanished User jdksfajlasd, Rmicallef, Thijs!bot, Mactographer, Maximilian Schönherr, Okki, Rosuna, Klausness, SNx, Harryzilber, Gcm, The Transhumanist, MeanGreeny, Hell Pé, Mwarren us, JeffConrad, Drewcifer3000, GordonMcKinney, Ariel., Anaxial, Nono64, Smial, Gah4, Paultk, Lax4mike, Wiki Raja, Discpad, Qatter, Eliz81, Thegreenj, SharkD, Tommyknchan, Fountains of Bryn Mawr, Trilobitealive, Ajfweb, Richmassena, TheMindsEye, Oshwah, Michi zh, Lucamauri, JonFairbairn, Meldave, Vgranucci, Altasoul, Flash19901, Vladdydaddy, BusaJD~enwiki, SieBot, Ellusion, Rooh23, Anchor Link Bot, Kinzele, Asankegalgomuwa, Martarius, ClueBot, TimmmmCam, Alexbot, Manco Capac, SoxBot III, Pietpetoors, Bondrake, Rror, SilvonenBot, Anticipation of a New Lover’s Arrival, The, Addbot, Nissimkaufmann, Willking1979, Betterusername, Mjamja, Tide rolls, Neurovelho, Legobot, Yobot, Jordsan, Raviaka Ruslan, AnomieBOT, ESCapade, Materialscientist, The High Fin Sperm Whale, Citation bot, ArthurBot, Xqbot, Nfr-Maat, Gap9551, J04n, GrouchoBot, FrescoBot, Peyman Ghasemi, Dger, Craig Pemberton, Joaotaveira,

Armigo~enwiki, OgreBot, Citation bot 1, Michaelkas9, I dream of horses, Serols, Cnwilliams, Callanecc, Theo10011, RjwilmsiBot, Lopifalko, Markdaams, RA0808, Bengt Nyman, Westley Turner, Ὁ οἶστρος, Gmanterry, Captain Screebo, Glockenklang1, JFPalmer, Jsayre64, Gsarwa, Donner60, Edgar.bonet, Angerdan, Mikhail Ryazanov, ClueBot NG, Matthiaspaul, JimsMaher, Sxd20, Tduggan2, Theopolisme, Helpful Pixie Bot, Chucklingcanuck, Smudgeface, BG19bot, Wiki3Languages, Soerfm, Changeeeerr, Hansen Sebastian, David.moreno72, Tagremover, Rezonansowy, Johannesvg, Comp.arch, Tomenglish77591, JaconaFrere, Micky Finch, Monkbot, Anilba- gri92, Nastysasky, GeneralizationsAreBad, Jcchau, Kundan K Singh, Bender the Bot and Anonymous: 258

11.2 Images

• File:2015-05-25_0820Incoming_parallel_rays_are_focused_by_a_convex_lens_into_an_inverted_real_image_one_focal_length_ from_the_lens,_on_the_far_side_of_the.png Source: https://upload.wikimedia.org/wikipedia/commons/1/1e/2015-05-25_0820Incoming_ parallel_rays_are_focused_by_a_convex_lens_into_an_inverted_real_image_one_focal_length_from_the_lens%2C_on_the_far_side_of_ the.png License: CC BY-SA 4.0 Contributors: Own work http://weelookang.blogspot.com/2015/05/ejss-thin-converging-diverging-lens-ray. html Original artist: Lookang many thanks to author of original simulation = Fu-Kwun Hwang author of Easy Java Simulation = Francisco Esquembre • File:2015-05-25_0836With_concave_lenses,_incoming_parallel_rays_diverge_after_going_through_the_lens,_in_such_a_way_ that_they_seem_to_have_originated_at_an.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/2015-05-25_0836With_ concave_lenses%2C_incoming_parallel_rays_diverge_after_going_through_the_lens%2C_in_such_a_way_that_they_seem_to_have_originated_ at_an.png License: CC BY-SA 4.0 Contributors: Own workhttp://weelookang.blogspot.com/2015/05/ejss-thin-converging-diverging-lens-ray. 
html Original artist: Lookang many thanks to author of original simulation = Fu-Kwun Hwang author of Easy Java Simulation = Francisco Esquembre • File:24-72mm_zoom_demo.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/95/24-72mm_zoom_demo.jpg License: Public domain Contributors: Own work Original artist: Patche99z • File:24-72mm_zoom_demo_horizontal.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/24-72mm_zoom_demo_ horizontal.jpg License: Public domain Contributors: Own work Original artist: Patche99z • File:ABERR1.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/ABERR1.svg License: Public domain Contributors: • ABERR1.png Original artist: ABERR1.png: Template:Maksim • File:ABERR2.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/dd/ABERR2.svg License: Public domain Contributors: Image:ABERR2.png Original artist: converted to svg by De.Nobelium 12:58, 24 December 2006 (UTC) with "Inkscape" • File:ABERR3rev.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/ABERR3rev.svg License: Public domain Con- tributors: This vector image was created with Inkscape. Original artist: converted to svg by De.Nobelium • File:ABERR5rev.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/83/ABERR5rev.svg License: Public domain Con- tributors: • ABERR5rev.png Original artist: ABERR5rev.png: Maksim • File:ABERR6rev.png Source: https://upload.wikimedia.org/wikipedia/commons/1/1e/ABERR6rev.png License: Public domain Con- tributors: ? Original artist: ? 
• File:Abbildungsfehler_am_Hohlspiegel_(Katakaustik).svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fb/Abbildungsfehler_ am_Hohlspiegel_%28Katakaustik%29.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Synkizz • File:Alhazen,_the_Persian.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/63/Alhazen%2C_the_Persian.gif License: Public domain Contributors: www.levity.com/alchemy/islam09.html Original artist: Unknownwikidata:Q4233718 • File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public do- main Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs) • File:Ambox_outdated_content.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b1/Ambox_outdated_content.svg Li- cense: Public domain Contributors: Own work Original artist: penubag, clock made by Tkgd2007 • File:Angle_of_View_F_V_Chambers_1916.png Source: https://upload.wikimedia.org/wikipedia/en/5/54/Angle_of_View_F_V_Chambers_ 1916.png License: PD-US Contributors: ? Original artist: ? • File:Angle_of_view.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/Angle_of_view.svg License: Public domain Contributors: Transferred from en.wikipedia to Commons. Original artist: Dicklyon at English Wikipedia • File:Angleofview_210mm_f4.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/fd/Angleofview_210mm_f4.jpg License: Public domain Contributors: ? Original artist: ? • File:Angleofview_28mm_f4.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/12/Angleofview_28mm_f4.jpg License: Public domain Contributors: ? Original artist: ? • File:Angleofview_50mm_f4.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9c/Angleofview_50mm_f4.jpg License: Public domain Contributors: ? Original artist: ? 
• File:Angleofview_70mm_f4.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/68/Angleofview_70mm_f4.jpg License: Public domain Contributors: ? Original artist: ? • File:Aperture_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Aperture_diagram.svg License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: Cbuckley at English Wikipedia Later versions were uploaded by Dicklyon at en.wikipedia.

• File:Archery_Target_80cm.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d5/Archery_Target_80cm.svg License: CC BY-SA 2.5 Contributors: Own work Original artist: Alberto Barbati • File:Astigmatism.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Astigmatism.svg License: CC-BY-SA-3.0 Con- tributors: self-made/own work (created with Inkscape 0.44) Original artist: Sebastian Kosch (Ginger Penguin) • File:BackFocalPlane.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/BackFocalPlane.svg License: CC-BY-SA- 3.0 Contributors: Transferred from en.wikipedia to Commons by FastilyClone using MTC!. Original artist: The original uploader was BenFrantzDale at Wikipedia • File:BackFocalPlane_aperture.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/BackFocalPlane_aperture.svg Li- cense: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons by FastilyClone using MTC!. Original artist: The original uploader was BenFrantzDale at Wikipedia • File:Barlow_lens.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/Barlow_lens.svg License: Public domain Contrib- utors: Own work Original artist: Andreas 06 • File:Barrel_(PSF).png Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Barrel_%28PSF%29.png License: Public do- main Contributors: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation Original artist: Pearson Scott Foresman • File:Barrel_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/Barrel_distortion.svg License: Public do- main Contributors: Own work Original artist: WolfWings • File:Basic_optic_geometry.png Source: https://upload.wikimedia.org/wikipedia/en/e/ee/Basic_optic_geometry.png License: PD Con- tributors: Own work Original artist: Redbobblehat (talk)(Uploads) • File:BiconvexLens.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/BiconvexLens.jpg License: CC BY-SA 3.0 Con- tributors: Own work Original artist: Tamasflex • 
File:Blumen_im_Sommer.jpg Source: https://upload.wikimedia.org/wikipedia/commons/f/f2/Blumen_im_Sommer.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Maximilian Schönherr • File:Camera_focal_length_distance_house_animation.gif Source: https://upload.wikimedia.org/wikipedia/commons/d/d3/Camera_ focal_length_distance_house_animation.gif License: GFDL Contributors: Own work Original artist: SharkD • File:Camera_focal_length_vs_crop_factor_vs_angle_of_view.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/ Camera_focal_length_vs_crop_factor_vs_angle_of_view.svg License: CC BY-SA 4.0 Contributors: http://commons.wikimedia.org/wiki/ File:1_times_square_night_2013.jpg Original artist: cmglee, chensiyuan • File:Canon_7_with_50mm_f0.95_IMG_0374.JPG Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Canon_7_with_ 50mm_f0.95_IMG_0374.JPG License: CC BY-SA 2.0 fr Contributors: Own work Original artist: Rama • File:Cardinal-points-1.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e3/Cardinal-points-1.svg License: CC-BY-SA- 3.0 Contributors: en:Image:Cardinal-points-1.svg Original artist: en:User:DrBob • File:Cardinal-points-2.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/78/Cardinal-points-2.svg License: CC-BY- SA-3.0 Contributors: Bob Mellish (talk)(Uploads) Original artist: Bob Mellish (talk)(Uploads) • File:Cascading_Milky_Way.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Cascading_Milky_Way.jpg License: CC BY 4.0 Contributors: Cascading Milky Way Original artist: ESO/S. 
Brunier • File:Chromatic_aberration_lens_diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/Chromatic_aberration_ lens_diagram.svg License: CC-BY-SA-3.0 Contributors: Bob Mellish (talk)(Uploads) Original artist: Bob Mellish (talk)(Uploads) • File:CircularPolarizer.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/CircularPolarizer.jpg License: CC-BY-SA- 3.0 Contributors: Own work Original artist: User PiccoloNamek on en.wikipedia • File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Orig- inal artist: ? • File:Concave_lens.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Concave_lens.jpg License: CC-BY-SA-3.0 Con- tributors: ? Original artist: User Fir0002 on en.wikipedia • File:Contessa_hyperfocal.JPG Source: https://upload.wikimedia.org/wikipedia/commons/c/ce/Contessa_hyperfocal.JPG License: CC BY-SA 4.0 Contributors: Own work Original artist: Gisling • File:Convex_lens_flipped_image.JPG Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Convex_lens_flipped_image. JPG License: CC BY 3.0 Contributors: Own work Original artist: CoolKoon • File:Crystal_Clear_app_kedit.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e8/Crystal_Clear_app_kedit.svg License: LGPL Contributors: Own work Original artist: w:User:Tkgd, Everaldo Coelho and YellowIcon • File:Cushion.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/Cushion.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:DOF-ShallowDepthofField.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/38/DOF-ShallowDepthofField.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? 
• File:DOF_scale_detail.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e4/DOF_scale_detail.png License: Public domain Contributors: File:Lens aperture side.jpg Original artist: dicklyon
• File:DefocusBlur.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6a/DefocusBlur.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: VonHaarberg
202 CHAPTER 11. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES

• File:Depth_of_field_Nov_2008.jpg Source: https://upload.wikimedia.org/wikipedia/en/b/b7/Depth_of_field_Nov_2008.jpg License: CC-BY-SA-3.0 Contributors: I created this work entirely by myself. Original artist: Wilder Kaiser (talk)
• File:Depth_of_field_illustration.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0c/Depth_of_field_illustration.svg License: CC-BY-SA-3.0 Contributors: • Diaphragm.svg Original artist: Diaphragm.svg:
• File:Derr_Hyperfocal_1906.png Source: https://upload.wikimedia.org/wikipedia/en/f/f0/Derr_Hyperfocal_1906.png License: PD-US Contributors: ? Original artist: ?
• File:Diaphragm_Numbers.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/55/Diaphragm_Numbers.gif License: Public domain Contributors: http://www.largeformatphotography.info/shutters.html Original artist: ?
• File:Dieselrainbow.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Dieselrainbow.jpg License: CC BY-SA 2.5 Contributors: Transferred from en.wikipedia.org. The original description page was here. All following user names refer to en.wikipedia.: 2007-03-16 14:07 . . John . . 976×871×8 (233607 bytes) . . (Taken and donated by User:Guinnog) Original artist: John (ex-user Guinnog)
• File:DoF-sym.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/DoF-sym.svg License: CC-BY-SA-3.0 Contributors: Based on DoF-sym.png Original artist: JeffConrad
• File:Dof_blocks_f1_4.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Dof_blocks_f1_4.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Alex1ruff
• File:Dof_blocks_f22.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/Dof_blocks_f22.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Alex1ruff
• File:Dof_blocks_f4_0.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/Dof_blocks_f4_0.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Alex1ruff
• File:Double_slit_diffraction.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Double_slit_diffraction.svg License: CC-BY-SA-3.0 Contributors: • Doubleslitdiffraction.png Original artist: Doubleslitdiffraction.png: Bcrowell
• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although minimally).”
• File:Eye-diagram_no_circles_border.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e8/Eye-diagram_no_circles_border.svg License: CC-BY-SA-3.0 Contributors: References: [1] [2] [3] among others Original artist: Chabacano
• File:FOV_Target_on_Monitor.png Source: https://upload.wikimedia.org/wikipedia/en/8/8f/FOV_Target_on_Monitor.png License: CC-BY-SA-3.0 Contributors: Author Original artist: User:The Lamb of God, edited by Mikhail Ryazanov
• File:FOV_test_Optics_apparatus.PNG Source: https://upload.wikimedia.org/wikipedia/en/b/bb/FOV_test_Optics_apparatus.PNG License: PD Contributors: Author Original artist: Mikhail Ryazanov
• File:Field_curvature.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fc/Field_curvature.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: BenFrantzDale
• File:Firesunset2edit.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Firesunset2edit.jpg License: Public domain Contributors: Own work Original artist: Durova
• File:Flat_flexible_plastic_sheet_lens.JPG Source: https://upload.wikimedia.org/wikipedia/commons/c/cd/Flat_flexible_plastic_sheet_lens.JPG License: CC BY-SA 3.0 Contributors: Own work Original artist: Almazi
• File:Focal-length.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8b/Focal-length.svg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Henrik
• File:Focal_length.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/e5/Focal_length.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Jcbrooks
• File:Focal_ratio.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Focal_ratio.svg License: Public domain Contributors: Own work Original artist: The original uploader was Vargklo at English Wikipedia
• File:Focus_stacking_Tachinid_fly.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Focus_stacking_Tachinid_fly.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Muhammad Mahdi Karim
• File:Focusing_on_retina.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c2/Focusing_on_retina.svg License: CC BY-SA 3.0 Contributors: By Inkscape Original artist: Javalenok
• File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
11.2. IMAGES 203

• File:FortGhost.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/FortGhost.jpg License: Public domain Contributors: Transferred from en.wikipedia to Commons by Wouterhagens using CommonsHelper. Original artist: ExodusEleven at English Wikipedia
• File:Glass_ochem_dof2.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Glass_ochem_dof2.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Purpy Pupple
• File:Glasses_800_edit.png Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Glasses_800_edit.png License: Public domain Contributors: http://www.oyonale.com/modeles.php?lang=en&page=40 Original artist: Gilles Tran
• File:Globe_effect.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/59/Globe_effect.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:HartmannShack_1lenslet.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/HartmannShack_1lenslet.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: HHahn
• File:Hyperfocal_distance_definitions.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/Hyperfocal_distance_definitions.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:Ibn_Sahl_manuscript.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/3a/Ibn_Sahl_manuscript.jpg License: Public domain Contributors: en:Image:Ibn Sahl fig.jpg Original artist: Ibn Sahl (Abu Sa`d al-`Ala' ibn Sahl) (c. 940-1000)
• File:Identifiable-Images-of-Bystanders-Extracted-from-Corneal-Reflections-pone.0083325.s001.ogv Source: https://upload.wikimedia.org/wikipedia/commons/7/70/Identifiable-Images-of-Bystanders-Extracted-from-Corneal-Reflections-pone.0083325.s001.ogv License: CC BY 4.0 Contributors: Movie S1 from Jenkins R, Kerr C (2013). "Identifiable Images of Bystanders Extracted from Corneal Reflections". PLOS ONE. DOI:10.1371/journal.pone.0083325. PMC: 3873323. Original artist: Jenkins R, Kerr C
• File:Interference_of_two_waves.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0f/Interference_of_two_waves.svg License: CC BY-SA 3.0 Contributors: Vectorized from File:Interference of two waves.png Original artist: • original version: Haade;
• File:Jonquil_flowers_at_f32.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Jonquil_flowers_at_f32.jpg License: GFDL 1.2 Contributors: Own work Original artist: fir0002 | flagstaffotos.com.au

• File:Jonquil_flowers_at_f5.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/Jonquil_flowers_at_f5.jpg License: GFDL 1.2 Contributors: Own work Original artist: fir0002 | flagstaffotos.com.au

• File:Jonquil_flowers_merged.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/86/Jonquil_flowers_merged.jpg License: CC-BY-SA-3.0 Contributors: • Jonquil_flowers_at_f5.jpg Original artist: fir0002 / flagstaffotos; derivative work by Autopilot (modifications: merged Jonquil_flowers_at_f5.jpg and Jonquil_flowers_at_f32.jpg)
• File:Kittyplya03042006.JPG Source: https://upload.wikimedia.org/wikipedia/commons/2/28/Kittyplya03042006.JPG License: CC BY 2.5 Contributors: Own work Original artist: David Corby (User:Miskatonic, uploader)
• File:Large_convex_lens.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/82/Large_convex_lens.jpg License: CC-BY-SA-3.0 Contributors: http://en.wikipedia.org/wiki/Image:Large_convex_lens.jpg Original artist: User Fir0002 on en.wikipedia
• File:Large_format_camera_lens.png Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Large_format_camera_lens.png License: Public domain Contributors: ? Original artist: ?
• File:Lens-coma.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/31/Lens-coma.svg License: CC-BY-SA-3.0 Contributors: http://upload.wikimedia.org/wikipedia/en/3/31/Lens-coma.svg Original artist: ?
• File:Lens1.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ef/Lens1.svg License: CC-BY-SA-3.0 Contributors: Originally from en.wikipedia. Original artist: DrBob at English Wikipedia
• File:Lens1b.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ab/Lens1b.svg License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: DrBob at English Wikipedia
• File:Lens3.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Lens3.svg License: GFDL Contributors: w:en:File:Lens3.svg Original artist: w:en:DrBob
• File:Lens3b.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/97/Lens3b.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Lens4.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Lens4.svg License: GFDL Contributors: w:en:File:Lens4.svg Original artist: w:en:DrBob
• File:Lens5.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4d/Lens5.svg License: GFDL Contributors: w:en:File:Lens5.svg Original artist: w:en:DrBob
• File:Lens6b-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/15/Lens6b-en.svg License: CC-BY-SA-3.0 Contributors: • SVG version of Image:lens6b.png Original artist: Originally uploaded by DrBob (Transferred by nbarth)
• File:Lens_and_wavefronts.gif Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Lens_and_wavefronts.gif License: Public domain Contributors: self-made with MATLAB Original artist: Oleg Alexandrov
• File:Lens_angle_of_view.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0c/Lens_angle_of_view.svg License: CC BY-SA 3.0 Contributors: self-made based on :Image:Lens3.svg Original artist: User:Moxfyre

• File:Lens_aperture_side.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/13/Lens_aperture_side.jpg License: Public domain Contributors: ? Original artist: ?
• File:Lens_coma.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Lens_coma.svg License: CC-BY-SA-3.0 Contributors: Manually vectorised version of: https://commons.wikimedia.org/wiki/File:Lens_coma.png Original artist: Unknown (wikidata:Q4233718)
• File:Lens_shapes.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/Lens_shapes.svg License: GFDL Contributors: Lens shapes.png Original artist: Fred the Oyster
• File:Lenses_en.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Lenses_en.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: ElfQrin
• File:Libr0310.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Libr0310.jpg License: Public domain Contributors: ? Original artist: ?
• File:Light_dispersion_conceptual_waves.gif Source: https://upload.wikimedia.org/wikipedia/commons/f/f5/Light_dispersion_conceptual_waves.gif License: Public domain Contributors: ? Original artist: ?
• File:Light_dispersion_of_a_mercury-vapor_lamp_with_a_flint_glass_prism_IPNr°0125.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/1f/Light_dispersion_of_a_mercury-vapor_lamp_with_a_flint_glass_prism_IPNr%C2%B00125.jpg License: CC BY-SA 3.0 at Contributors: Own work Original artist: D-Kuru
• File:MISB_ST_0601.8_-_Horizontal_Field_of_View.png Source: https://upload.wikimedia.org/wikipedia/commons/8/83/MISB_ST_0601.8_-_Horizontal_Field_of_View.png License: Public domain Contributors: • Extracted from File:MISB Standard 0601.pdf Original artist: Motion Imagery Standards Board (MISB)
• File:MISB_ST_0601.8_-_Vertical_Field_of_View.png Source: https://upload.wikimedia.org/wikipedia/commons/4/47/MISB_ST_0601.8_-_Vertical_Field_of_View.png License: Public domain Contributors: • Extracted from File:MISB Standard 0601.pdf Original artist: Motion Imagery Standards Board (MISB)
• File:Magnifying_glass2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/50/Magnifying_glass2.jpg License: GFDL Contributors: Own work Original artist: Heptagon
• File:Malus_law.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/88/Malus_law.svg License: CC BY-SA 2.5 Contributors: http://en.wikipedia.org/wiki/Image:Malus_law.png Original artist: Fresheneesz (original); Pbroks13 (redraw)
• File:Military_laser_experiment.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Military_laser_experiment.jpg License: Public domain Contributors: This image was released by the United States Air Force with the ID 090809-F-5527s-0001 (next). This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information. Original artist: US Air Force
• File:Minox_LX_hyperfocal.JPG Source: https://upload.wikimedia.org/wikipedia/commons/5/56/Minox_LX_hyperfocal.JPG License: CC BY-SA 4.0 Contributors: Own work Original artist: Gisling
• File:Mustache_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Mustache_distortion.svg License: Public domain Contributors: Barrel distortion.svg Original artist: Barrel distortion.svg: WolfWings
• File:Nimrud_lens_British_Museum.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Nimrud_lens_British_Museum.jpg License: GFDL Contributors: Photo by user:geni Original artist: Geni
• File:No1-A_Autographic_Kodak_Jr.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/No1-A_Autographic_Kodak_Jr.jpg License: CC BY 3.0 Contributors: Own work Original artist: Richard F. Lyon
• File:Optical_Devices_England_1858.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/74/Optical_Devices_England_1858.jpg License: Public domain Contributors: New York Public Library Archives Original artist: William Barclay Parsons Collection
• File:Opticks.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a0/Opticks.jpg License: Public domain Contributors: ? Original artist: ?
• File:Optics_from_Roger_Bacon’s_De_multiplicatone_specierum.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/86/Optics_from_Roger_Bacon%27s_De_multiplicatone_specierum.jpg License: Public domain Contributors: ? Original artist: ?
• File:Orb_photographic.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/bf/Orb_photographic.jpg License: Public domain Contributors: Own work (Original text: I (842U (talk)) created this work entirely by myself.) Original artist: 842U (talk)
• File:Out-of-focus_image_of_a_spoke_target..svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/Out-of-focus_image_of_a_spoke_target..svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Tom.vettenburg
• File:Pincushion_distortion.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Pincushion_distortion.svg License: Public domain Contributors: Own work Original artist: WolfWings
• File:Plane_wave_wavefronts_3D.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/20/Plane_wave_wavefronts_3D.svg License: Public domain Contributors: • Onde_plane_3d.jpg Original artist: Onde_plane_3d.jpg: Fffred

• File:Polarisation_(Circular).svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Polarisation_%28Circular%29.svg License: Public domain Contributors: This vector image was created with Inkscape. Original artist: inductiveload
• File:Polarisation_(Elliptical).svg Source: https://upload.wikimedia.org/wikipedia/commons/7/79/Polarisation_%28Elliptical%29.svg License: Public domain Contributors: This vector image was created with Inkscape. Original artist: Inductiveload
• File:Polarisation_(Linear).svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d7/Polarisation_%28Linear%29.svg License: Public domain Contributors: This vector image was created with Inkscape. Original artist: inductiveload
• File:Ponzo_illusion.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/02/Ponzo_illusion.gif License: Public domain Contributors: ? Original artist: ?
• File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ?
• File:Povray_focal_blur_animation.gif Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/Povray_focal_blur_animation.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: SharkD
• File:Povray_focal_blur_animation_mode_tan.gif Source: https://upload.wikimedia.org/wikipedia/commons/f/f1/Povray_focal_blur_animation_mode_tan.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: SharkD
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Reflection_and_refraction.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/65/Reflection_and_refraction.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Epzcaw
• File:Reflection_angles.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/Reflection_angles.svg License: CC-BY-SA-3.0 Contributors: No machine-readable source provided. Own work assumed (based on copyright claims). Original artist: No machine-readable author provided. Arvelius assumed (based on copyright claims).
• File:Reflectionprojection.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/10/Reflectionprojection.jpg License: CC BY 2.0 Contributors: reflection and projection Original artist: bcjordan
• File:Scheimpflug.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9c/Scheimpflug.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Jacopo188
• File:Shallow_Depth_of_Field_with_Bokeh.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Shallow_Depth_of_Field_with_Bokeh.jpg License: CC BY 3.0 Contributors: Own work Original artist: Chokity
• File:Snells_law.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d1/Snells_law.svg License: Public domain Contributors: Transferred from en.wikipedia to Commons. Original artist: Cristan at English Wikipedia
• File:Spherical_aberration_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Spherical_aberration_3.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: HHahn
• File:Stylised_Lithium_Atom.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/Stylised_atom_with_three_Bohr_model_orbits_and_stylised_nucleus.svg License: CC-BY-SA-3.0 Contributors: based off of Image:Stylised Lithium Atom.png by Halfdan. Original artist: SVG by Indolences. Recoloring and ironing out some glitches done by Rainer Klute.
• File:Summer_time_feet_in_Central_Park.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Summer_time_feet_in_Central_Park.jpg License: CC0 Contributors: Own work Original artist: Northeastern Nomad
• File:Symbol_list_class.svg Source: https://upload.wikimedia.org/wikipedia/en/d/db/Symbol_list_class.svg License: Public domain Contributors: ? Original artist: ?
• File:Table_of_Opticks,_Cyclopaedia,_Volume_2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/Table_of_Opticks%2C_Cyclopaedia%2C_Volume_2.jpg License: Public domain Contributors: ? Original artist: ?
• File:TessinaDOF.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/e3/TessinaDOF.jpg License: CC BY 3.0 Contributors: Own work Original artist: Gisling
• File:The_VLT’s_Artificial_Star.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/17/The_VLT%E2%80%99s_Artificial_Star.jpg License: CC BY 4.0 Contributors: http://www.eso.org/public/images/potw1425a/ Original artist: ESO/G. Lombardi (glphoto.it)
• File:The_new_PARLA_laser_in_operation_at_ESO’s_Paranal_Observatory.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/The_new_PARLA_laser_in_operation_at_ESO%E2%80%99s_Paranal_Observatory.jpg License: CC BY 4.0 Contributors: http://www.eso.org/public/images/ann13010a/ Original artist: ESO/G.Hüdepohl
• File:Theorem_of_al-Haitham.JPG Source: https://upload.wikimedia.org/wikipedia/commons/1/14/Theorem_of_al-Haitham.JPG License: CC-BY-SA-3.0 Contributors: Own work Original artist:
• File:Thick_Lens_Diagram.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fc/Thick_Lens_Diagram.svg License: CC BY-SA 3.0 Contributors: Based on the drawing: http://www.wikipedia.or.ke/index.php/File:ThickLens.png Original artist: KDS444
• File:ThinLens.gif Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/ThinLens.gif License: CC BY 3.0 Contributors: Own work Original artist: Lookang
• File:Thin_lens_images.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Thin_lens_images.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Cmglee
• File:Uniformity.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2b/Uniformity.jpg License: CC BY 2.5 Contributors: Own creation Original artist: Atoma
• File:Video-x-generic.svg Source: https://upload.wikimedia.org/wikipedia/en/e/e7/Video-x-generic.svg License: Public domain Contributors: ? Original artist: ?

• File:Villianc_transparent_background.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/88/Villianc_transparent_background.svg License: CC-BY-SA-3.0 Contributors: This vector image was created with Inkscape. Original artist: Image:Villianc.jpg, by J.J., released under GFDL.
• File:Virtualimageframerate1.gif Source: https://upload.wikimedia.org/wikipedia/commons/e/e4/Virtualimageframerate1.gif License: CC BY-SA 4.0 Contributors: Own work; http://weelookang.blogspot.sg/2015/05/ejss-thin-converging-diverging-lens-ray.html Original artist: Lookang; many thanks to author of original simulation = Fu-Kwun Hwang, author of Easy Java Simulation = Francisco Esquembre
• File:Wave_group.gif Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/Wave_group.gif License: GFDL Contributors: Own work Original artist: Kraaiennest
• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg: Wiki letter w.svg Original artist: Derivative work by Thumperward
• File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
• File:Wiktionary-logo-v2.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Wiktionary-logo-v2.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: Dan Polansky based on work currently attributed to Wikimedia Foundation but originally created by Smurrayinchester
• File:Young_Diffraction.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Young_Diffraction.png License: Public domain Contributors: ? Original artist: ?

11.3 Content license

• Creative Commons Attribution-Share Alike 3.0