PROCESSES IN BIOLOGICAL VISION:

including,

ELECTROCHEMISTRY OF THE NEURON

This material is excerpted from the full β-version of the text. The final printed version will be more concise due to further editing and economic constraints. A Table of Contents and an index are located at the end of this paper.

James T. Fulton Vision Concepts [email protected]/vision

April 30, 2017 Copyright 2000 James T. Fulton Environment & Coordinates 2- 1

2. Environment, Coordinate Reference System and First Order Operation 1

The beginning of wisdom is to call things by their right names. - Chinese Proverb

The process of communication involves a mutual agreement on the meaning of words. - Charley Halsted, 1993


The environment of the eye can be explored from several points of view: its radiation environment, its thermo-mechanical environment, its energy supply environment and its output signal environment. To discuss the operation of the eye in these environments, it is important to establish certain coordinate systems, functional relationships and methods of notation. That is the purpose of this chapter. The reader is referred to the original sources for more details on the environment and the coordinate systems; extensive references are provided for this purpose. Details of the various functional operations in vision will be explored in the chapters to follow.

Because the photoreceptor cells of the retina uniquely integrate many processes, which are explored individually in different chapters, a brief section is included here to coordinate this range of material.

2.1 Physical Environment

The study of the eye requires exploring a great many parameters from a variety of perspectives. This situation involves employing and crossing many traditional lines of investigation. Terminology becomes difficult under these circumstances.

For purposes of this work, the wide variety of disciplines associated with the study of animals will be divided into the major categories of morphology and physiology. Morphology will relate to the form and structure of the animal. Physiology will relate to the functions performed and processes used by the animal. Within these categories, the terms defined in Table 2.1 will be used. When investigating fields without a check mark, the designation of the nearest field should be used with a specific modifier if a more specific label is not available.

TABLE 2.1 DISCIPLINES AND TERMINOLOGY USED IN THIS WORK

Field          Morphology           Physiology           Microscope used
               Shape   Structure    Function   Process   None   light   electron
Anatomy        * * * *
histology      * * *
cytology       * * *
topography     * * * * *
topology       * * * * *
biochemistry   * *

2.1.1 The Radiation Environment

The terminology used to discuss the radiation environment applicable to vision is very awkward, especially with the introduction of photometry based on the presumed absorption spectrum of the human eye. The subject has been further complicated by the recognition that this absorption spectrum changes as the light level is reduced. Halstead has recently provided a readable description of the units used to describe the radiation environment of vision, although he did not address the difference between scotopic and photopic units. He makes a clear distinction between a photometer designed to measure incident flux (an incident photometer reporting in lumens/m2) and a photometer designed to measure the emitted, or reflected, light (a luminance photometer reporting in cd/m2). The reader should note the critique appearing at the end of the paper. Ryer has provided a more extensive treatise on the subject than is readily available2. There is a problem with page 31 of Ryer that will be addressed in Section 17.1.5.3.1.

1Released: April 30, 2017

2.1.1.1 The Luminance Range

In the most general sense, the radiation environment of the eye is limited by two external parameters: the spectral range of the electromagnetic spectrum which is transmitted by the atmosphere and the intensity range between that generated by the sun and the relative darkness of a cloudy winter night in the countryside. For aquatic animals, the spectral range is limited further by the spectral transmission of water. Cronly-Dillon & Gregory provide a figure adapted from Lythgoe that describes the marine transmission spectrum3. The detectable spectral range at the surface of the retina is then limited somewhat further by the transmissive elements of the eye itself. The detected spectral range is then determined by the product of the input radiance and the absorption spectrum of the individual photodetectors. Many animals utilize the 300-400 nm. portion of the ultraviolet range of the radiant spectrum, although normal humans do not. Therefore, a limited sector of the radiant spectrum has been defined as the luminance spectrum, extending from 400 nm. to 700 nm. based on the effective absorption spectrum of the human species. The 400 nm. end-point is relatively abrupt since it is determined not only by the relevant chromophore but also by the transmissivity of the optical system of the eye. Aphakic human observers (those with the crystalline lens of their eye removed) report seeing wavelengths in the 310-360 nm. range with a peak wavelength near 342 nm4. Such a peak is clearly not associated with the skirt of the S-channel chromophore. It appears that the aphakic human eye, and therefore the retina of all humans, contains UV-channel chromophores. The color reported for this spectral range by a very small group of humans is blue. The long wavelength limit of human vision is controlled primarily by the absorption coefficient of the L-channel chromophore; for sufficiently intense illumination, the human eye can detect light out to at least 900 nm.
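The multiplicative relationship described above (detected spectrum = input radiance × ocular transmission × chromophore absorption) can be sketched numerically. The flat radiance, sharp 400 nm lens cutoff and Gaussian absorption profile below are illustrative assumptions, not measured data; only the 532 nm peak is taken from the text.

```python
import numpy as np

# Wavelength grid spanning the range discussed in the text (nm)
wavelength = np.arange(350, 751, 1.0)

# Illustrative flat input radiance (arbitrary units)
radiance = np.ones_like(wavelength)

# Crude lens transmission: sharp cutoff below 400 nm (absent in aphakic eyes)
lens_transmission = np.where(wavelength >= 400, 1.0, 0.0)

# Toy Gaussian absorption spectrum for an M-channel chromophore peaking at 532 nm
def absorption(peak_nm, width_nm=40.0):
    return np.exp(-((wavelength - peak_nm) ** 2) / (2 * width_nm ** 2))

# Detected spectrum = product of radiance, ocular transmission and absorption
detected = radiance * lens_transmission * absorption(532.0)

print(f"Detected spectrum peaks at {wavelength[np.argmax(detected)]:.0f} nm")  # 532 nm here
```

With the lens term set to 1.0 below 400 nm (the aphakic case), the same product would extend the detected range into the ultraviolet, as the text describes.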

The illumination ranges in Figure 2.1.1-1 have been adopted primarily from Rubin & Walls5. The Rubin & Walls variant used only English units. Note that the table is based on the emittance of a surface, not the irradiance or illuminance of an aperture. It provides a calibration of the luminance values of various surfaces found in the real world. The extra annotation on the right is from a Norden-Ketay Corp. report from the 1950s. Halsted republished their figure in 1993 with a few embellishments and using SI units6. The values in the table are only accurate to an order of magnitude (at best). The common definition of twilight, extending from the astronomical definition of sunset to the astronomical definition of twilight, is a wide, imprecise range from 10^2 to 10^-2 millilamberts. Land & Nilsson have published a variant of this table7, apparently accumulated from the same sources but also including the luminance level at the sea surface as seen from different depths in the clearest ocean water (attenuation coefficient, 0.032 per meter).
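The depth dependence quoted from Land & Nilsson follows simple Beer-Lambert attenuation. A minimal sketch, assuming only the 0.032 per meter coefficient given above:

```python
import math

# Beer-Lambert attenuation of downwelling light in the clearest ocean water,
# using the attenuation coefficient quoted from Land & Nilsson (0.032 per meter).
K = 0.032  # 1/m

def relative_luminance(depth_m, k=K):
    """Fraction of sea-surface luminance remaining at a given depth."""
    return math.exp(-k * depth_m)

for depth in (10, 100, 500):
    print(f"{depth:4d} m: {relative_luminance(depth):.3e} of surface luminance")
```

Each 72 m of depth costs roughly one order of magnitude under this coefficient, which is why the Land & Nilsson variant of the table spans so many of its rows with ocean depths.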

A draft set of proposed Absolute Munsell Values has been added for purposes of discussion. The original Munsell scale was a relative logarithmic (base 1.478) scale suited to the needs of artists of the 1900's. It accommodated only ten reflectance levels of pigment when illuminated such that the pigments were observed within the photopic region of human vision. Today, there is a need for an extended logarithmic scale that can accommodate the industrial and scientific communities (Sections 17.3.5.2 & 17.3.8). This extended scale makes it possible to interpret recent data acquired with robotic probes visiting other bodies in the solar system (Section 17.3.8.3). To keep the new scale compatible with both the conventional irradiance scales of industry and the astronomical community, a logarithmic

2Ryer, A. (1997) Light Measurement Handbook. International Light (publisher) www.intl-light.com/handbook.html
3Cronly-Dillon, J. & Gregory, R. (1991) Evolution of the Eye and Visual System. Boca Raton, FL: CRC Press pg. 407
4Wald, G. (1945) Human vision and the spectrum. Science. vol. 101, no. 2635, pp. 653-658
5Rubin, M. & Walls, G. (1969) Fundamentals of Visual Science. Springfield, IL: Charles C. Thomas pg. 40
6Halstead, C. (1993) Op. Cit.
7Land, M. & Nilsson, D-E. (2002) Animal Eyes. Oxford: Oxford University Press pg. 20

scale to the base 1.5849 has been introduced. While slightly different from that of the relative Munsell Value scale, it provides a greatly extended Absolute Munsell Value scale useful over the complete range of human vision. In the new Absolute Munsell Value range, the brightest highlight of the sample is taken initially as the Value on the proposed scale. If the brightest highlight of the sample is found to be equal to 10 on the Absolute Munsell Value scale, this value also corresponds to the relative Munsell Value of 10. As noted, when the Absolute Munsell Value range is reduced by more than a factor of 10, the colors in the lower Value range (i.e., -15/ to -20/ and below) will exhibit color shifting due to the loss of "color constancy" within the human eye.
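The base-1.5849 scale described above can be sketched as follows. The anchoring of the reference (brightest highlight) at Value 10 follows the text; the reference luminance itself is a placeholder assumption, and `absolute_munsell_value` is a hypothetical helper name.

```python
import math

BASE = 1.5849  # logarithmic base of the proposed scale (approximately 10**0.2)

def absolute_munsell_value(luminance, reference_luminance):
    """Value on the proposed Absolute Munsell scale.

    The brightest highlight (reference_luminance) is assigned Value 10,
    per the text; each factor of 1.5849 in luminance is one Value step.
    """
    return 10.0 + math.log(luminance / reference_luminance, BASE)

ref = 100.0  # hypothetical highlight luminance, cd/m^2
print(round(absolute_munsell_value(ref, ref), 3))         # Value 10 at the reference
print(round(absolute_munsell_value(ref / BASE, ref), 3))  # one step darker, Value 9
```

Since 1.5849 is very nearly 10 to the power 0.2, five Value steps correspond to one order of magnitude of luminance, which is what keeps the scale compatible with the conventional logarithmic scales of industry and astronomy.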

Figure 2.1.1-1 Table of equivalent light levels drawn from the literature. An expanded scale of proposed Absolute Munsell Values has been added for discussion purposes. See text.

The above luminance values can be compared with some common incidence values given by Ryer: sunny day, 10^5 lux; office lights, 100 lux; full moon, 0.1 lux; overcast night, 10^-4 lux. Note that lux (lumens/m2, an illuminance) and cd/m2 (a luminance) are different quantities; for a perfectly diffusing white surface, the luminance in cd/m2 is approximately the illuminance in lux divided by π.
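The conversion between incident illuminance and the luminance of a matte surface can be sketched as below. The Lambertian assumption and the reflectance value are idealizations introduced here for illustration, not values from the text.

```python
import math

def luminance_from_illuminance(illuminance_lux, reflectance=1.0):
    """Luminance (cd/m^2) of a Lambertian surface under a given illuminance (lux).

    Valid only for a perfectly diffusing surface; reflectance of 1.0 is an
    idealization (real matte white paper is roughly 0.8-0.9).
    """
    return illuminance_lux * reflectance / math.pi

# Ryer's common incidence values restated as luminances of an ideal white surface
for label, lux in [("sunny day", 1e5), ("office", 100.0), ("full moon", 0.1)]:
    print(f"{label:9s}: {lux:>8g} lux -> {luminance_from_illuminance(lux):.3g} cd/m^2")
```

The factor of π is the reason an incident photometer (lumens/m2) and a luminance photometer (cd/m2) give numerically different readings for the same scene.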

The region of optimum visual performance is difficult to define because of the many variables involved. Color constancy is particularly difficult to define because its evaluation is strongly dependent on the color temperature of the illumination source used. Individuals are not readily aware of the loss of performance in the purple region of the spectrum when using tungsten light sources with color temperatures below 3600 Kelvin. The theoretical range of the photopic regime presented in Chapter 11 of this work is in excellent agreement with this table. The predicted width of the photopic regime is 56,000:1. Below this level, reading becomes difficult for two separate reasons: a poor signal-to-noise ratio and poorer spatial resolution because of the opening of the iris that is the aperture stop of the system. Above this range, the dynamic range of the visual system becomes severely impaired. These situations are illustrated in Figure 2.1.1-2. While the visual system can operate without pain over a range of about 10-12 orders of magnitude, the photopic region extends over a range of only 4-5 orders of magnitude. Within this region, the perceived signal within the visual system (the brightness) represents the relative amplitude of the scene content with considerable precision due to the adaptation properties of the photoreceptor cells (to control amplifier gain) and the limited capability of the iris (to control the light level applied to the retina). Outside of this region, color constancy and other performance parameters are degraded. The operation of the individual mechanisms supporting these different operational modes will be developed in Chapters 11, 16 & 17.

The author can confirm that his color vision continues under full moonlight conditions at 33°N latitude, 117°W longitude. Greens, reds, yellows and blues are recognizable. The reds and yellows were particularly prominent. Perception of the blues and greens depended on the "darkness" of the original sample. This illumination level is represented by the vertical line passing near 10^-2 cd/m2 in the figure.

Figure 2.1.1-2 Perceived brightness versus luminance of common objects. The values given are only nominal, generally within an order of magnitude, because of the variability of atmospheric conditions and other parameters. These values should not be confused with pupil (or retinal) illuminance. Above 30,000 cd/m2, the visual system exhibits negligible dynamic range (contrast performance). Below 10^-3 cd/m2, it relies upon spatial integration and achieves only limited resolution.

2.1.1.2 Retinal Illumination

The luminous intensity incident on a surface differs greatly from the luminance exiting a source. The SI photometric unit of incident illumination is the lux, or lumen/m2. The SI unit of luminance is the candela/m2 (cd/m2). These terms are frequently confused.

The term retinal illuminance has long been used as a shorthand in optometry. The expression is used to describe the illuminance incident at the location of the pupil of the eye multiplied by the area of the pupil. The resulting value should be in lumens. When encountering illuminance data in Trolands, the equation below is handy but archaic:

E(trolands) = 10•B(millilamberts)•r^2, where r is the radius of the pupil in mm.

For example, a pupil of 2 mm radius viewing a surface of 1 millilambert yields E = 10•1•2^2 = 40 trolands. The above expression should be rewritten in units of lumens/m2, not millilamberts. This product, the total luminous flux falling on the pupil of the eye, has frequently been called the retinal illuminance. The term is a misnomer since it implies that the product of the luminous intensity incident at the pupil times the pupil size at the aperture describes the luminous intensity applied to the retina. This is fundamentally incorrect because of the rules of optics applied to a lens. It is also incorrect in the off-axis case because both the f/# of the lens and the effective pupil size of the eye change with field angle. The usefulness of the above equation also suffered a serious setback in 1933 when Stiles and Crawford discovered that the effectiveness of light entering the pupil was a strong function of its distance from the center of the pupil (now known as the Stiles-Crawford Effect, see Section 17.3.7). Initially this effect was thought to be due to the

limited acceptance angle of the photoreceptor cells. It was claimed that only the cones showed this limitation and that at scotopic levels the above equation was correct. Later studies showed that the effect was also strongly related to the aberrations associated with the outer regions of the lens when the iris was fully open, as in scotopic vision. Thus, the effect is due to the nature of the crystalline lens and is not related directly to the light intensity. The problem is usually overcome by using a Maxwellian illuminator to project a light beam of constant diameter through the center of the pupil whatever the illuminance level--or by avoiding the units of trolands altogether. Rubin & Walls had strong views on this problem and spoke of the unit of trolands as follows: "However, a unit which becomes useless for photopic levels is simply useless, 'period'."

2.1.1.3 The Color Spectrum

Color is a uniquely psychophysical phenomenon. In the absence of the cognitive powers of a brain, the concept of color is meaningless. One is left with a continuous function describing the physical radiance of a source as a function of wavelength. It is only when this function is processed by a visual system and interpreted by a brain that a specific color is perceived. The description of the perceived color to a second party is a uniquely semantic problem. An individual is taught to associate a semantic label within his own language with a particular perceived response. In communicating to a second person, it is critically important that the second person was taught to associate the same label with the same perception. Otherwise, the report of the color perceived by a subject may not be interpreted correctly by the recipient of the report. This problem becomes even more complex if the reported color is then translated into a different language. An additional problem involves the limited capability, of the human at least, to remember a perceived color with precision for more than a second or two. This is because the visual system is designed to compare colors, not to evaluate single colors. A survey of the literature associated with the naming of colors illustrates all of the above difficulties8.

There have been many attempts to represent the perceived color response of humans in a graphical format. One-, two- and three-dimensional formats have been used in the past. Lacking a model of the functional processes within the visual system, these efforts have been less than successful. The most aggressive effort to organize the perceived colors was that of Kelly & Judd under the auspices of the US Government, National Bureau of Standards9. While updated in 1976-77, this effort failed due to lack of commercial support. The documentation is long out of print but available in some archives.
The most successful has been the Munsell Color System which has enjoyed broad commercial adoption. See Appendix S for additional details. The first format that can be directly related to the functional performance of the visual system in humans is the New Chromaticity Diagram for Research presented in the previous chapter and developed in Chapter 17. This new presentation provides a theoretical foundation that is well correlated with the Munsell System and is compatible with the concept of two orthogonal chromatic axes of Hering. However, additional specifications must be added to both systems to correctly define the chromatic performance of the visual system.

The human visual system is actually a tetrachromatic system involving three orthogonal chromatic axes, with the range of one axis constrained by the absorption of the lens in humans. For wavelengths shorter than 437 nm, the role of the third chrominance channel, the O-channel, becomes significant. This channel leads to the perception of a low-saturation condition near 400 nm that is similar to the conditions at 494 nm and 572 nm. Kuehni has recognized the presence of this condition but used an inadequate conceptual model and attributed it erroneously to the "reappearance of red at the short wave end of the spectrum10." The red component suggested by Kuehni occurs at wavelengths shorter than 437 nm. It differs significantly from the red component associated with the CIE r(λ) and x(λ) color matching functions, which peak near 530 nm. See Section 17.3.3. Section 17.3.8 of this work will provide the first definition of the absolute spectral parameters of a named color based on either the Munsell Color Space or the Kelly & Judd Color Space.

2.1.1.3.1 The dispersion associated with naming colors

This section will address several examples of the difficulty of agreeing on a specific list of color names. The

8See Appendix S
9Kelly, K. & Judd, D. (1955) The ISCC-NBS method of designating colors and a dictionary of color names; National Bureau of Standards Circular 553. Washington, DC: US Government Printing Office
10Kuehni, R. (2002) Color: what could it be? SPIE Proc. vol. 4421, pp. 22-25

problem is exemplified by a discussion in the literature between two major players of their day. Consider the report of Naka & Rushton11 in 1966. They take exception to the work of Svaetichin & MacNichol12 in 1958. Whereas Svaetichin & MacNichol are speaking of a Chrominance unit in the S-plane of the retina that is hyperpolarized by “blue” and depolarized by “yellow”; Naka & Rushton claim to show that it is the signal from the “blue” that hyperpolarizes and that from the “Green” that depolarizes. There is absolutely no discussion of the precise wavelengths associated with “green” and “yellow.” The actual wavelength of the chromophore being discussed by both groups has a peak at 532 nm that is quite clearly in the Greenish-Yellow area of the above list. Additional problems of a similar nature will be discussed in Chapter 11.

2.1.1.3.2 Two-dimensional representations of color names

Humans report a wide range of names when shown spectrally pure lights or accepted standard paint chips, whether as sources of light or by reflection (a problem compounded when other languages are also considered)13. This leads to a great deal of difficulty in expressing the psychophysical results of experiments. Moore, et. al. have recently complemented the work of Berlin & Kay by exploring the problem of language and culture among English, Chinese and Vietnamese14. Since the eye-brain system possesses almost no calibration of color related to wavelength, it will report a different name for a color depending on the previous color sensed--even following an intervening period of darkness or a neutral grey. The reported color is also extremely sensitive to the color temperature of the illumination and the surround. Webster recently reported on this phenomenon15.

To establish a point of reference, Table 2.1.1-2 lists the names associated with blocks of the color spectrum as printed on the two-dimensional CIE 1976 UCS Chromaticity Diagram that is widely distributed by Photo Research of Chatsworth, CA16. It is interesting to compare these labels with the recognized spectral peaks of the vision chromophores in humans. It is generally agreed that the S-chromophore has a peak near 437 nm, clearly in the Purple/Purplish-Blue area and not Blue. The M-chromophore is generally accepted to have a peak near 532 nm, longer than the Green label and clearly in the Green/Yellowish-Green area. The L-chromophore as specified in this work is at 625 nm, which would be labeled about halfway between Reddish-Orange and Red. If the value of 570 nm found in some literature for the L-chromophore were correct, the L-chromophore would be labeled Greenish-Yellow. The reader might want to think about the logic of having the L-chromophore peak near 570 nm, so far from the "red" part of the spectrum.
The origin of the reported spectral peak near 570 nm that has been associated with the long wavelength chromophore will be addressed in detail in Chapter 17.

TABLE 2.1.1-2 Color names given on the CIE 1976 UCS Chromaticity Diagram

Wavelength       Name              Peak spectral absorption of chromophores in nm.
                                   In literature    In this work
Beyond 440 nm    Purple            430-450          437
450              Purplish-Blue
472              Blue
484              Greenish-Blue
495              Bluish-Green
510              Green
540              Yellowish-Green   500-530          532
564              Yellow-Green

11Naka, K. & Rushton, W. (1966) S-potentials from colour units in the retina of fish (Cyprinidae) J. Physiol. vol. 185, pp. 536-555
12Svaetichin, G. & MacNichol, E. (1958) Retinal mechanisms for chromatic and achromatic vision. Ann. N.Y. Acad. Sci. vol. 74, pp. 385-404
13Berlin, B. & Kay, P. (1969) Basic Color Terms. Berkeley, CA: University of California Press
14Moore, C. et. al. (in publication) Effects of language and gender on the perceptual structure of basic colors in English and Vietnamese. J. Modern Optics; and, with regard to Chinese, Moore, C. et. al. (2000) Proc. Nat. Acad. Sci. 97:5007-5010
15Webster, M. (in publication) Contextual influences on color naming. J. Modern Optics
16The C.I.E. has taken pains in recent years to disassociate any color names from its graphical and tabular materials.

572              Greenish-Yellow
578              Yellow            575
583              Orange-Yellow
590              Orange
605              Reddish-Orange
625              Orangish-Red                       625
Beyond 680       Red

The blocks assigned a given name in the above two-dimensional presentation are coarse.

2.1.1.3.3 One-dimensional representation of color names

Many investigators have attempted to define a list of names based on a one-dimensional format using a spectrometer to provide spectral colors. These investigators, lacking theoretical support, have relied on the common wisdom that perceived psychophysical colors are unidimensional functions of wavelength. Finding a list of standard color names in the literature for irradiance of a particular center wavelength is very difficult. (See below for an entirely subjective comparison of color names.) Table 2.1.1-3 is a comparison of several individuals studied by Walls. They were variously reported as giving responses that were quite near normal, but normal was not specified. This list highlights a majority of the problems introduced at the beginning of this section and is worthy of study. Clearly, the subjects were given different instructions, either specifically or through their general education, as to the range of names to use. The names reported are at considerable variance with the above listing attributed to the C.I.E. Diagram. Note that the only convergence of names is near 0.610 microns (610 nm), orange-red, and near 0.550 microns (550 nm), yellow-green. Berlin & Kay also addressed this situation and stressed the difficulty an individual has in defining color name boundaries consistently (pg. 15). The differences between the reports strongly hint at physiological differences between the observers. It appears that there is a variation in "gain settings" among the differencing channels of the visual system in humans. Such gain differences may be strong functions of the state variables of the adaptation process. Walls provided data suggesting these gain settings differ between the two eyes of an individual17.

TABLE 2.1.1-3

WAVELENGTH   S.K. (LEFT EYE)   S.K., 2nd TRY (LEFT EYE)   M.D.   O. E-M.

400  purple  blue-purple
410  purple  greyish-purple  violet
420  saxe-blue  indigo-blue  violet
430  greenish-blue?  pale purple (bluish)  violet
440  blue  blue  blue  violet
450  blue  blue  blue  violet
460  blue  deep blue  blue  blue, some violet
470  blue  blue  blue
480  blue  green  blue
490  blue  green  blue
500  bluish-green  green  blue
510  bluish-green  green  blue
520  yellowish-green  green  bluish-green
530  yellowish-green  green, perhaps some yellow  green
540  yellowish-green  green, some yellow  green
550  yellowish-green  yellow-green  green
560  yellowish-green  yellow  green
570  greenish-yellow  yellow  yellow
580  greenish-yellow  yellow  yellow

17Walls, G. (1964) Notes on four tritanopes. Vision Res. vol. 4, pp. 3-16

590  orange  yellow orange  orange
600  reddish-orange  yellow-red  orange
610  orange-red  orange-red  orange
620  slightly bluish-red  red  red
630  bluish red  red  red
640  slightly bluish-red  red  red
650  slightly bluish-red  red  red
660  bluish-red  red  red
670  red  red  red
680  red  red  red
690  red  red  red
700  red  red  red

Several investigators have attempted to prepare "complete" lists of standardized color names. Pitt probably reached the extreme in color naming in 1935, using such entirely subjective names as Portland Stone, Sea Green, Mid-bronze Green, Light Brunswick Green, Fawn, etc. Note: none of these lists includes the names cyan or magenta, but both are used extensively in the graphic arts and the electronics of image transmission.

The ultimate attempt to correlate the names of colors in a completely subjective context was that of the Inter-Society Color Council & the U.S. National Bureau of Standards (ISCC-NBS) in 195518. Both the original and a 1976-77 reprint are long out of print. This material provided a set of correlation matrices between approximately 10,000 individual color names and the Munsell Color System (renotated) using the ISCC-NBS recommended list of 267 selected and structured color names.

Gouras has provided a good summary of the many attempts to arrange the above, and other, colors into two- and three-dimensional conceptual structures19. The systems reviewed are not compatible with the system proposed here. In 1971, Gurr provided a list of colors for use in the dye chemistry field that are roughly correlated to wavelength and defined the complementary color of each. Unfortunately, the complement of the complement is not always the original color in his set. MacAdam provided another undistinguished list of color names in 198520.

As late as 1990, Boynton and Olson were attempting to assign color names to specific spectral intervals based strictly on their psychophysical experiments21. They did not offer any relationship between their color names and the spectrum of the S-, M- & L-channel photoreceptors or with absolute wavelength.

In 1994, Wooten & Miller attributed a set of color names to Newton that appear more satisfactory than most recent names22.

Wavelength Color name

400-430    violet
440-460    indigo (purple)
470        blue
494        **
505        green
572        **
575        yellow

18Kelly, K. & Judd, D. (1955) The ISCC-NBS method of designating colors and a dictionary of color names; National Bureau of Standards Circular 553. Washington, DC: US Government Printing Office
19Gouras, P. (1991) The Perception of Colour. Boca Raton, FL: CRC Press, pp. 218-261
20MacAdam, D. (1985) The physical basis of color specification. In Color Measurement: Theme and Variations. NY: Springer-Verlag, pp. 1-25
21Boynton, R. & Olson, C. (1990) Salience of chromatic basic color terms confirmed by three measures. Vision Res. vol. 30, pp. 1311-1317
22Wooten, W. & Miller, D. (1994) The psychophysics of color. In Hardin, C. & Maffi, L. eds. Color Categories in Thought and Language. Cambridge: Cambridge Univ. Press, Chap. 3

590-620    orange
>630       red

The values of 494 and 572 nm have been added to this one-dimensional set of values for compatibility with a two-dimensional set of values presented later in this work. These wavelengths define null values between blue and green, and between green and orange (the values 572 and 575 being assumed to be identical in perceived color for practical purposes). Purple has been associated with indigo since the name indigo (a natural dye) seldom appears in the more recent literature. The descriptive terms attributed to Hering in Wooten & Miller appear speculative and insufficiently defined. An alternative description of Hering's color coordinates will be developed in Section 17.3.

2.1.1.4 Historically "Unique and Primal Colors"

(See Marriot, pg. 224 of Davson, vol. 2.) Several attempts have been made to extract a short list of primal colors from this class of psychophysical studies23,24. These studies have suffered either from significant dispersion or from imprecise definition. Uttal took a rather unusual approach to naming colors. He assigned the seven most common "perceived" color names, plus yellowish-green, to the spectrum at multiples of 50 nm25. To accommodate purple, he extended the spectrum to 750 nm. However, he gave no reason for considering purple a spectral color at such a long wavelength.

Recently, two papers have explored definitions of unique blue and unique green based on psychophysical experiments. Volbrecht, et. al. attempted to describe the variation in their psychophysically based "unique green" as a function of the presumed density of S-channel photoreceptors26. The results did not support their original thesis. The same team also explored the same environment for their "unique blue."27 The truly unique feature of the Volbrecht, et. al. paper was the range of "unique green" found among only three subjects over a variety of positions and target sizes in the human retina. The range for "unique green" was from 480 to 525 nm.

Kuehni has discussed “focal green” and “unique green” from a largely philosophical position within the psychology community and without a physiological or neurological model28. He asserts “the unique green hue is the unique output of the greenness-redness opponent-color system in one polarity if the yellowness-blueness system is neutral.” As will be shown in Section 17.3, this condition defines the color observed at a spectral wavelength of 494 nm which is aqua and not a conventional green. A greener response is actually obtained when the yellowness-blueness system is indicating maximum yellow. He goes on, “I recently discovered, however, that for 40 of 40 observers their focal (ideal) green is significantly yellower than their unique green.” This agrees with the previous sentence if he means the focal green is the result of the presence of a yellow component. In that case, focal green corresponds to a wavelength of 532 nm.
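Kuehni's condition, a hue null where one opponent channel reads zero, can be illustrated with a toy model. The Gaussian spectral sensitivities below are illustrative stand-ins, not the chromophore spectra of this work; only the peak wavelengths (437 and 532 nm) are taken from the text.

```python
import numpy as np

# Toy opponent-channel model: Gaussian spectral sensitivities standing in for
# the S- and M-channel responses (equal widths and weights are assumptions).
wavelength = np.linspace(400, 600, 2001)

def gaussian(peak_nm, width_nm=35.0):
    return np.exp(-((wavelength - peak_nm) ** 2) / (2 * width_nm ** 2))

s_response = gaussian(437.0)
m_response = gaussian(532.0)

# A difference (opponent) signal is zero where the two responses are equal;
# the hue perceived at that null is neither "blue" nor "green".
opponent = m_response - s_response
null_idx = np.argmin(np.abs(opponent))
print(f"Opponent-channel null near {wavelength[null_idx]:.1f} nm")
```

With these symmetric toy sensitivities the null falls at the midpoint of the two peaks, 484.5 nm; the 494 nm null cited in the text implies that the real spectral sensitivities and channel weights are not symmetric in this way.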

Several authors have presented sets of psychophysically “unique colors.” It is claimed that these colors do not appear to change hue as the light level is adjusted within the conventional photopic range. These colors are closely tied to the Bezold-Brucke effect that will be discussed later and are not related to the primal colors of Hering. They are:

23Tschermak-Seysenegg, A. (1952) Introduction to Physiological Optics. Springfield, IL: Charles C. Thomas
24Le Grand, Y. (1957) Light, colour and vision. London: Chapman & Hall
25Uttal, W. (1981) A taxonomy of visual processes. Hillsdale, NJ: Lawrence Erlbaum Associates, pg. 165
26Volbrecht, V. Nerger, J. Imhoff, S. & Ayde, C. (2000) Effect of the short-wavelength-sensitive-cone mosaic and rods on the locus of unique green. J. Opt. Soc. Am. A. vol. 17, no. 3, pp 628-634
27Nerger, J. Volbrecht, V. Ayde, C. & Imhoff, S. (1998) Effect of the S-cone mosaic and rods on red/green equilibria. J. Opt. Soc. Am. A. vol. 15, pp 2816-2826
28Kuehni, R. (2002) Color: what could it be? SPIE Proc vol 4421 pp 22-25

Name                  Wavelength       Wavelength according to   According to   More recent work
                      according to     Wyszecki & Stiles30       Kaiser31       (Section 13.5.3)
                      OSA29
Unique purple-red        —                  —                       —              464 nm
Unique Blue           478 nm              ~480 nm                  489               —
Unique Green           500                ~515                      —               494
Unique Yellow          578                ~568                     571             575±1
Unique Red            Non-spectral

These colors will be shown to have a unique perceptual characteristic later in this work, although they are not necessarily actual chromophoric attributes. [xxx re-rationalize this table with section 13.5.3]

Burns et al. have summarized the work of over a dozen investigators who converge on 468.3 nm as the spectral value for unique blue32. The paper focuses on the nonlinearity of constant hue representations in the CIE 1934 Chromaticity Diagram for constant retinal illuminance values. “Linear models predict unique hue loci which are straight lines in the chromaticity diagram. In addition, both the unique red and unique green loci and the unique yellow and unique blue loci should be collinear, resulting in two intersecting straight lines. Nonlinear models with linear balance points make the same predictions.” “These data imply that unique hues are not a linear transformation of color matching functions. Linear models are only an approximation, even at a single luminance level.” “While there is disagreement about which constant hue loci are straight, most plots of constant hue loci show fairly straight lines for yellows, but curved lines for greens, reds and blues. These curved lines represent the Abney Effect.”

The above quotations only apply to the CIE 1934 Chromaticity Diagram. The observed curvatures are partially corrected in the CIE 1976 Uniform Color Space and are totally corrected in the New Chromaticity Diagram of this work. The empirical curvatures on the CIE 1934 diagram are totally consistent with the theoretical curvatures presented on that diagram by this work (Section 17.3.5.3.2).

Burns et al. note “Our data were displayed in the Judd (1951a) chromaticity diagram for four reasons. First, this diagram facilitates the comparison of data with the linear model predictions, since it is a projective transformation of color matching functions. Second, constant hue loci are usually plotted in chromaticity diagrams. Third, using a mixture diagram prevents treating metamers as separate data points. Fourth, using chromaticity coordinates allows us to express our results in a way which does not depend on the exact desaturant, as long as we know the chromaticity coordinates of the desaturant.” The Judd (1951a) chromaticity diagram introduced the x’, y’ coordinate system accommodating the short wavelength modifications to the earlier 1934 luminosity function.

Their protocol was based on, “The circular test field subtended either 2 deg or 40 min of visual angle. The rest of the visual field was black. All stimuli were 20 td.” They took precautions to minimize background light leakage through their monochromator. On page 486, they noted the variation in the unique blues reported among subjects, and with time for the same subject, using their unchanging protocols.

In discussing the constraints on their models, they note, “As discussed above, no linear model will fit our data. Models which assume that (a) mechanism response is at baseline for unique hues and (b) the cone outputs are summed and differenced prior to a nonlinearity (e.g. an exponent) predict our data no better than do linear models. Our data may be explained by models (a) which apply a nonlinearity to one or more cone outputs before a summation occurs . . .” This is precisely the mechanism proposed throughout this work.

There appears to be a problem in defining unique red psycho-physiologically. The reason proposed in this work is that red is not located on the spectral locus. A saturated red requires a mixture of a spectral locus component at about 625 nm (610 nm in a more detailed analysis, Section 5.5.4 & 5.5.8) and sufficient spectral blue to result in a unique red with coordinates in nanometers of (625, 494). See Section 17.3.

The conventional photopic range will be expanded into three distinct ranges later in this work to provide a foundation for explaining the occurrence of these “unique colors.” These ranges are defined as the mesotopic, photopic and hypertopic ranges.

2.1.1.5 A theoretical basis for color names

29The Science of Color (1963) Comm. on Colorimetry, ed. NY: Optical Society of America pg. 103
30Wyszecki, G. & Stiles, W. (1982) Color Science. 2nd. Ed. NY: John Wiley & Sons. pg. 456
31Kaiser, P. (2009) http://www.yorku.ca/eye/ciediag1.htm
32Burns, S. Elsner, A. Pokorny, J. & Smith, V. (1984) The Abney Effect: chromaticity coordinates of unique and other constant hues. Vision Res vol 24(5), pp 479-489

This work will present a new formulation of the psychophysical color space in two dimensions that describes perceived color as a function of wavelength. This formulation is rectilinear and the two dimensions are orthogonal. The presentation represents a uniform color space similar to that being sought by the C.I.E. and approximated in its 1976 Uniform Color Space Standard.

The proposed rectilinear presentation can be overlaid by a variety of sets of cylindrical coordinates. Such overlays provide a ready translation between the coordinate systems. However, the fundamental coordinates used to represent the underlying mechanisms are rectilinear. One particularly useful set of cylindrical coordinates is that of Munsell. This set places the center of the cylindrical coordinates at the intersection of the two rectilinear coordinates that represents the null condition known as “white.”

In the algebra of the visual system, white does not represent a sum of the irradiances presented to the eyes. It represents a balance between two pairs of irradiances following channelization of their spectral content by the chromophores of vision. This is a profound difference in concept. It is not consistent with the “color equation” of the common wisdom.

By plotting the spectral difference sensitivity of the visual process as a function of wavelength on the proposed rectilinear presentation, it is possible to define a region, about the two mean wavelengths, in which carefully trained observers will agree on the name of a color within a given statistical variance. These variance values are typically represented by ellipses about the mean throughout color space. It will be shown in Chapter 17 that these ellipses correlate well with the ellipses of MacAdam.

The problem remains of how to assign discrete names to colors that are based on an underlying two-dimensional continuum that is rectilinear. Munsell assigned names based on arbitrarily sized zones between more specifically defined radials in his cylindrical coordinate system. This technique resulted in the area assigned to a name growing larger with its distance from the center of the cylindrical coordinate system.

This approach has at least two problems. First, it does not relate well to the corners of the underlying rectilinear color space and promulgates a strong suggestion that color space is circular. In this respect, it does relate well to the expression of color in terms of hue and saturation. However, a constant degree of saturation is not a circle on such a coordinate system. Maximum saturation corresponds to a rectangular overlay on the cylindrical grid. Second, and probably more important, the actual perceived hue associated with most radials in the Munsell color space does not remain constant with changes in saturation. Only the radials that are parallel with the underlying rectilinear axes exhibit a constant hue with saturation. These radials represent the actual colors represented by the so-called “unique colors” of both the OSA and Wyszecki & Stiles noted above. In this work, these unique colors (hues) are best represented by the radials 5BG-5R and 10Y-10PB of the Munsell System.

The problem is that only two ends of these radials can be described as spectral. The response to the spectral wavelength of 494 nm can be defined as aqua (or “unique green”). The response to a spectral wavelength of 572 nm can be defined as yellow (or “unique yellow”). The other ends of these radials do not cross the spectral locus but can be defined in rectangular coordinates. To be precise, all colors must be defined in terms of their mean spectral wavelength(s), based on the rectilinear color space, and their variances from those means.
2.1.1.6 New definition of “Unique Colors”

2.1.1.6.1 Background

This work will show that it is extremely important to recognize that the visual system does not employ, in perceptual space, the concepts of additive or subtractive color used effectively in object space. These two concepts reduce in the minimal case to the use of three well-spaced colors to approximate all colors. The additive concept is typically used in active illumination of a scene. The subtractive concept is usually used in reflective processes such as printing, where the result is known as “process color.” In the additive concept, the three colors can employ narrowband sources effectively. The mean wavelength of these sources is easily specified. However, in the subtractive case, it is important that wide band absorbers be used, and expressing the median wavelength of the individual reflector is awkward.

Process color fools the visual system by presenting sub-pixel-size zones of absorbent ink to blot out areas of high-reflectance white paper. When the eye integrates all of the light from a specific pixel in object space, it averages the irradiance received from the entire pixel. The average radiance, as a function of wavelength, represents the light reflected by the substrate minus the light absorbed by the inks. Note that it is not the color of the inks alone that is the controlling factor.

The following material will attempt to differentiate the terminology common to the additive and subtractive fields of colorimetry. It is unfortunate that yellow has achieved such a firm foothold in the field of process color. The perceived color of the filters meeting this specification is distinctly brown. If one mixes green and red pigments, the perceived color is better described as brown than yellow. Yellow plays a more distinct role in the theory to be presented here. In this role, yellow is defined as a narrow band color with a peak wavelength of 572 nm. It is directly related to, and precisely defines, the yellow of the Hering School of color theory. This wavelength of 572 nm is only coincidentally the same as the peak wavelength of the Purkinje phenomena associated with the luminance response of the visual system. This coincidence has played an important role in the frequent and incorrect definition in the literature of the spectral peak of the long wavelength chromophore of vision as nominally 575 nm. Section 13.5.3 will explore the situation related to unique yellow in greater detail and will show it occurs at a wavelength of 575 nm ± 1 nm.

--- Based on this expansion and the development of the actual electrophysical signal paths employed in vision, it is possible to subdivide the available color spectrum more precisely and define specific color names mathematically. The results of this process will be presented here for the Hypertopic and Photopic illumination environment to aid in the understanding of the discussions to follow. ---

Table 2.1.1-4 is arranged in a number of apparently arbitrary columns and rows. However, their meaning will be much clearer by the end of this work. The table is presented in an attempt to compartmentalize a variety of phenomena involved in color research. It is important to subdivide the discussion of colors into two categories related to the relative spectral width of the radiation involved. For now, these will be described as narrow band (Δλ < 0.25 λ) and broad band (Δλ > 0.25 λ), where λ is the center wavelength of the spectrum. The spectra represented in the table need not be, and virtually never are, Gaussian in shape.

Two major physiological columns and two major non-physiological columns are needed. Several of these contain subdivisions as shown. All but two of the wavelengths shown in this table relate to the chromatic perception of the eye without regard to the associated luminance information. The two exceptions are the wavelengths λB = 494 nm & λP = 600 nm, which are important in two widely discussed phenomena. λB is the peak wavelength of the Bezold-Brucke phenomena encountered under Hypertopic conditions. λP is the peak wavelength of the Purkinje phenomena frequently encountered under Mesotopic conditions. These two phenomena are the result of peculiarities in the signal processing formula used in the visual system of many animals.
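The narrow band versus broad band criterion above reduces to a one-line test. The following is a minimal sketch applying the Δλ < 0.25·λ rule; the function name is illustrative and not drawn from the text:

```python
# Sketch of the bandwidth criterion quoted above: a spectrum with center
# wavelength center_nm and spectral width width_nm (both in nm) is
# treated as narrow band when width_nm < 0.25 * center_nm.

def is_narrow_band(center_nm: float, width_nm: float) -> bool:
    """True when the spectrum qualifies as narrow band per the text."""
    return width_nm < 0.25 * center_nm

# A 30 nm wide emission centered at 532 nm is narrow band:
print(is_narrow_band(532.0, 30.0))   # True
# A 200 nm wide absorption spectrum centered at 500 nm is broad band:
print(is_narrow_band(500.0, 200.0))  # False
```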

The entries shown in bold have unique definitions and play unique roles in animal vision. The Electrophysical column includes the peak wavelengths of the four chromophores of animal vision, UV, S, M & L, along with two electrophysically critical wavelengths, λ = 494 nm & λ = 572 nm. These last two wavelengths are null wavelengths used in the electrophysical determination of what “color” is reported to the brain. The null wavelength between the UV and S spectral channels, labeled “lilac,” is currently unknown but is estimated to be 390 nm.

TABLE 2.1.1-4 THE DEFINITION OF UNIQUE COLOR NAMES

Narrow band colors (emission range):

   Physiological                                        Physical (“Process Color”)
   Electrophysical (peak)    Psychophysical (Chromin.)  Additive (RGB, narrowband)
   U, λ = 342                ------                     ------
   Lilac, λ = ?              ------                     ------
   S, λ = 437                Blue, λ < 494              Blue, 494 >> λ ~450
   Azure, λ = 494            ------                     ------
   M, λ = 532                Green, 494 < λ < 572       Green, 500 < λ < 550
   Yellow, λ = 572           ------                     ------
   L, λ = 625                Red, 570 < λ               Red, 572 << λ ~640

   Psychophysical (luminance peaks): λB = 487; λP = 580

Broad band colors (absorption range):

   Physical, Subtractive (CMYK, wideband):
   Cyan (- Red), λ < 570
   Yellow (- Blue), 494 < λ
   Magenta (- Green), λ < 494 & λ > 572

The entries under psychophysical are meant to describe the common observations related to these types of experiments in the laboratory. Look first at the chrominance subheading. To be perceived as bluish, the mean spectral wavelength of the source must be shorter than 494 nm. To be perceived as reddish, the mean wavelength must be longer than 572 nm. To be perceived as greenish, the mean wavelength must be between these two values. Using these criteria to create a scene in object space does not provide good spectral differentiation of colors. It is the difference between the mean of the spectrum of a given source and either 494 nm or 572 nm that determines the perceived color. Thus, to perceive highly saturated colors in an additive system, it is important that the mean wavelength of the three sources used be located as far from these wavelengths as possible. By varying the amount of light from each of these sources, the mean wavelength of the mixtures can be moved throughout the chromatic spectral range and higher saturations can be achieved.
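The chrominance criteria of this paragraph can be restated algorithmically: the perceived hue category depends only on where the mean spectral wavelength falls relative to the 494 nm and 572 nm boundaries. A minimal sketch; the function and label names are ours, not part of the text:

```python
# Sketch of the perceptual criteria stated above: the perceived hue
# category is set by where the mean spectral wavelength of the stimulus
# falls relative to the two null wavelengths, 494 nm and 572 nm.

AQUA_NULL_NM = 494.0    # blue/green boundary (the "azure"/"aqua" null)
YELLOW_NULL_NM = 572.0  # green/red boundary (the "yellow" null)

def perceived_category(mean_wavelength_nm: float) -> str:
    if mean_wavelength_nm < AQUA_NULL_NM:
        return "bluish"
    if mean_wavelength_nm > YELLOW_NULL_NM:
        return "reddish"
    return "greenish"

print(perceived_category(450.0))  # bluish
print(perceived_category(532.0))  # greenish
print(perceived_category(625.0))  # reddish
```

The saturation argument of the text follows directly: moving the mean wavelength of a mixture farther from the two null wavelengths increases the perceived saturation.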

Note that the physical colors typically used in additive color mixing have traditionally approximated the Chrominance column criteria. This is true for pigments, filters and active phosphors.

Two special cases are noted under the luminance subheading. If the visual system is chromatically adapted, two situations can occur. The peak spectral response of the luminance channel can be shifted to about 580 nm via the Purkinje phenomena. Alternately, the luminance channel can exhibit two peaks, one near 487 nm and another near 580 nm, due to the Bezold-Brucke phenomena. These peaks can mislead both the psychophysiologist and the electrophysiologist.

The situation with respect to a subtractive color system is similar but more restrictive. Broad band spectral absorbers are used to remove significant amounts of the reflected light from the substrate. If only three broadband absorptive inks are used, only a limited range of the mean wavelength of the remaining irradiance can be achieved by varying the absolute absorption of each ink. To overcome this problem, additional inks can be used to further divide the reflected light into narrower bins. The recently introduced six-ink system of Pantone is designed to overcome this problem in process color.

2.1.1.6.2 Color definitions by Barnes

Barnes performed one of the very few studies tying color names to spectral wavelengths, their chemical formulas (to ensure reproducibility), and their appearance in Tristimulus (X,Y,Z) values as well as Trichromatic (x,y,z) coefficients as defined in the 1939 time period33. While many absorption curves were provided, the concentrations of the chemicals involved were not given. These data allow precise naming of specific wavelengths very close to the most important wavelengths of the visual process. The measurements were made with ICI Illuminant C and a recording photoelectric spectrophotometer. Unfortunately, many of the long-wavelength colors are described by long-pass spectral responses instead of bandpass spectral responses. These colors include:

French ultramarine blue   445 nm     Sodium aluminum silicate with sulphur, Na8-10Al6Si6O24·S2±
Blue verditer             476.4 nm   Basic copper carbonate, 2CuCO3·Cu(OH)2
Verdigris                 488.2 nm   Basic copper acetate, Cu3(OH)2(C2H3O2)4
Cobalt green              493.8 nm   Cobalt and zinc oxides, CoO·nZnO
Emerald Green             511.9 nm   Copper aceto-arsenite, Cu(C2H3O2)2·3CuAs2O3

Strontium lemon yellow    572.3 nm out   Strontium chromate, SrCrO4
Cadmium orange            586.9 nm out   Cadmium sulphide, CdS
English vermillion        608.1 nm out   Mercuric sulphide, HgS

Cobalt violet (magenta)   560.3c **      Cobalt phosphate, Co3(PO4)2

** Cobalt violet represents a good magenta because its two spectral peaks are nearly of equal intensity: 44% @ 400 nm & 58% @ 700 nm.

2.1.1.6.3 Specific definitions for additive color in object space

Based on the New Chromaticity Diagram for Research developed in Chapter 17 and introduced earlier in Chapter 1, a series of unique narrow band colors can be identified. Although not as pure as the “unique colors” defined in Section 17.3.4.1, they can be used in lieu of precise wavelengths for psychophysical purposes. They are:

Blue — median spectral flux at wavelengths much shorter than 494 nm (and longer than ~437 nm where applicable to tetrachromats)
Aqua — median spectral flux centered on 494 nm and no energy at wavelengths longer than 532 nm
Green — median spectral flux near 532 nm and all radiant flux at wavelengths between 494 nm and 572 nm
Yellow — median spectral flux centered on 572 nm and no energy at wavelengths shorter than 532 nm
Red — median spectral flux at wavelengths much longer than 572 nm and all radiant flux at wavelengths longer than 572 nm

Aqua is used here to avoid confusion with cyan, which is applied to a specific (broader band) chromatic concept in process color for the photographic and printing trades. The aqua envisioned by the above definition is not the same as Pantone sample 15-4717, although it appears that the specified narrow band color and the sample may be metamers.

2.1.1.6.4 Specific definitions for subtractive color in object space

Without the New Chromaticity Diagram developed in Section 17.3.3 of this work, the reader will find this section difficult to understand. While additive color is sensed in the eyes of the subject, subtractive color involves a much more complex perception developed in the brain. As an example (and much to most readers’ surprise), the eyes do not sense any color that is described in conversation as “yellow.” There is no sensory receptor sensitive to “yellow.”

Subtractive color mixing is frequently known as “Process Color” in industry. Three specific broadband absorption inks are generally used. Cyan is a broad band absorber over the interval 400 < λ < 570 nm. The resulting reflected radiance, for a white substrate, has a mean spectral wavelength near 600 nm and is perceived as reddish. Yellow, more technically “canary,” is a broad band absorber over the interval 494 < λ < 650 nm. The resulting reflected radiance has a mean spectral wavelength near 450 nm and is perceived as bluish. Magenta is a fundamentally different type of absorber. It absorbs at both short and long wavelengths, leaving a transmissive notch

33Barnes, N. (1939) Color characteristics of artist’s pigments. J Opt Soc Am vol 29, pp 208-214

at mid-band. It absorbs strongly in the bands 400 < λ < 494 nm and 572 < λ < 650 nm. The reflected radiance has a mean spectral wavelength near 532 nm and appears greenish. This specification is similar to one developed in the process color industry by trial and error. With mean spectral wavelengths of about 600, 532 and 450 nm, this system is not able to produce a truly saturated blue or a saturated red. This is a well-known fact in the process color industry.

The subject of subtractive color is frequently an awkward one for someone who has not worked in the “Process Color” industry. The fundamental process involves starting with a “white sheet” of paper and illuminating it with “white light.” If you then paint on the white sheet with a cyan pigment, the light originating at the source and reflected from the sheet will appear “reddish,” the only light not absorbed by the cyan pigment. If you then paint over the same cyan pigment with a canary pigment, the combination of the two pigments on the white paper will appear as magenta (the reflected light lacks energy in the interval from 400 nm to 570 nm and lacks energy in the interval from 494 nm to 650 nm). The only light reflected by the pigmented area in Munsell Color Space is in the non-spectral region represented by the upper right quadrant of the New Chromaticity Diagram, generally described as magenta. The magenta region is labeled using the complementary code, such as 400c nm to indicate the lower right corner of the magenta space and 650c nm to indicate the upper left extent of the magenta space. See Section 17.3.4.3. An even more in-depth discussion of process color, including the difference between North American and European process color printing practices, appears in Section 17.3.3.6.
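The band arithmetic of the overprinting example can be sketched numerically. The rectangular absorption bands below are the idealized intervals quoted in the text; real process inks have sloped, overlapping spectra, so this illustrates only the logic of subtractive removal, not actual ink behavior:

```python
# Idealized subtractive ("process color") mixing using the rectangular
# absorption bands quoted in the text. Each ink absorbs completely
# inside its band(s) and not at all outside; the substrate is a
# perfectly white reflector.
import numpy as np

wl = np.arange(400, 701)  # visible band, 1 nm steps

def reflected(absorption_bands):
    """Reflectance from the white substrate under the given inks."""
    r = np.ones(wl.shape)
    for lo, hi in absorption_bands:
        r[(wl >= lo) & (wl <= hi)] = 0.0
    return r

cyan = [(400, 570)]                 # absorbs the short/mid band
canary = [(494, 650)]               # process "yellow"
magenta = [(400, 494), (572, 650)]  # notch absorber, transmits mid-band

# Overprinting cyan and canary absorbs the union 400-650 nm, leaving
# reflectance only in the long-wavelength remainder:
r = reflected(cyan + canary)
print(wl[r > 0].min(), wl[r > 0].max())  # 651 700
```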

Edmund Scientific has attempted to explain the above two paragraphs using the following image, Figure 2.1.1-3, from their catalog. They show the nominal color of the filters they suggest as ideal for color separation purposes for both additive and subtractive color processing, along with the spectral performance of the individual filters. Other suppliers may suggest marginally different filters. The reader is referred to the original color samples in the Edmund Scientific catalog as the reproducibility achieved here is not ideal. The “blue” at upper left is clearly more “purple” than blue. The “red” is less than saturated red, and the “green” is less than saturated. The “cyan” at upper right is clearly unsaturated and the magenta is unsaturated. The “canary” is clearly not a saturated yellow.

Note carefully that the spectral response of the red filter exhibits very little transmission at wavelengths shorter than 600 nm. The output of this filter would not stimulate a sensory receptor with a peak sensitivity at 560-572 nm, as proposed by the psychophysical community (Psychology Departments at many Universities) for many years. The peak sensitivity of the long-wavelength sensory receptor neurons of the eye of Chordata (including humans) occurs near 611 nm as measured by Thornton et al., 1999 (Section 5.5.10.4.3).

Figure 2.1.1-3 Comparison of the subtractive color and additive color concepts. From Edmund Scientific catalog of optical components.

2.1.1.7 Sources and illuminants

A subject poorly understood but of immense importance to the vision community is that of radiometry. Radiometry includes the description of both sources of radiation and the radiation falling on a surface. To be accurate, radiant quantities are given as a function of spectral wavelength. If the radiant properties of a source or illuminant are further constrained by a spectral filter that limits their properties to those observed by a detection system with a spectral passband equivalent to, or approximating, the luminosity function of the human eye, the source or illuminant is then described using the technical terms of the field of illumination instead of those of radiation. Constrained in the above sentence refers to multiplication if a new function of wavelength is desired, or to integration if the total energy falling within the spectral passband of vision is desired.

As will be seen later, there is no single luminosity function for human vision. This function varies with the radiance level. In commercial and engineering applications, the C.I.E. luminosity function is assumed at all radiant intensity levels. This luminosity function is a smoothed average of the results from a number of laboratories, collected initially in the 1920s and 1930s and revised in 1961. Relatively wide spectral filters were used to measure the luminosity function at poorly controlled radiant levels. Therefore, the limited accuracy of the C.I.E. Luminosity Function should be appreciated when carrying out precise experiments.

The lack of a fixed luminosity function for human vision complicates research. To achieve an accuracy exceeding 10-20%, it is necessary to use the luminosity function as a function of the radiant intensity emanating from a source or

irradiance falling on a surface. By assuming a fixed Luminosity Function, the C.I.E. has seen fit to define illuminants in terms of their radiant intensity prior to filtration by a Luminosity Function. These illuminants should properly be called irradiants. These illuminants are converted into C.I.E. sanctioned illumination values when measured using a photometer incorporating a filter with a characteristic approximating the standard Luminosity Function. A radiometer would record a significantly different value.

2.1.1.7.1 Quantum count, radiant energy & chemical energy vs wavelength

The psychophysical community has long made measurements in the vision domain using units of radiant energy per unit wavelength, as a function of wavelength. However, the photoreceptors of biological vision are not radiant energy sensors (heat sensitive). They are quite specifically known to be quantum-mechanical sensors (quantum-catch sensitive). Planck’s Radiation Law, relating the radiant energy at a given wavelength to the temperature of the radiant source, is also well known. Finally, the energy associated with a stream of photons (quanta) is given by Planck’s Equation, E = nhν, where h is Planck’s Constant and ν is the frequency of the energy. The equation can also be written as E = nhc/λ, where c is the speed of light and λ is the wavelength of the energy.

The above energy can also be expressed by E = 2.854/λ gram-cal/mole (with λ expressed in centimeters), or via the conversion one electron-Volt = 23,060 calories/mole. These relationships allow the energy associated with a given wavelength of light to be expressed using a variety of terms.

They are particularly useful when discussing the energy bands associated with the chromophores of vision. As an example, Bokkon & Vimal recently gave the energy of the 635 nm photon as 45 kcal/mole34; the accepted theoretical values give 45,103 cal/mole.
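These conversions can be checked numerically. The sketch below uses rounded CODATA constants, so its kcal/mole values agree with those quoted in this section to within a fraction of a percent:

```python
# Photon energy from E = h*c/lambda, expressed per photon in eV and per
# mole of photons in kcal via the conversion 1 eV = 23,060 cal/mole.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

def photon_energy_kcal_per_mole(wavelength_nm: float) -> float:
    return photon_energy_ev(wavelength_nm) * 23.060

# Peak wavelengths of the four chromophores used in this work:
for name, lam in [("UV", 342), ("S", 437), ("M", 532), ("L", 625)]:
    print(name, lam, round(photon_energy_ev(lam), 2),
          round(photon_energy_kcal_per_mole(lam), 1))
```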

Based on these relationships, the values of the energy at the peak wavelengths of the chromophores of vision used in this work are;

34Bókkon, I. & Vimal, R. (2009) Retinal phosphenes and discrete dark noises in rods: a new biophysical framework J Photochem Photobiol B: Biology In press xxx

Channel   Wavelength   Energy, eV   Energy, kcal/mole
(UV)      342 nm       3.63 eV      83.49 kcal/mole
S         437          2.8          64.56
M         532          2.33         53.73
L         625          1.98         45.65

The sensory neurons have a minimum threshold for quantum-mechanical excitation of about 50 kcal/mole. As a result, the L-channel employs a novel technique to sense photons of energy lower than 50 kcal/mole. This technique is responsible for the loss of L-channel vision at low light levels (the transition from photopic, through mesotopic, to scotopic vision, Section 17.2).

While the response of a spectral channel is typically given conceptually as the product of the absorption coefficient of the chromophore times the radiant intensity of the source times the wavelength interval;

X = ∫ Ee(λ) · SX(λ) · dλ

where Ee(λ) is the energy irradiance of the stimulation (the power density at the receiver aperture divided by the elemental surface area of the receiver aperture), SX(λ) is a normalized coefficient of absorption of the sensory material (chromophore) and dλ is the incremental wavelength employed. X is an “effective energy.” The expression is typically integrated over the interval 380 nm to 740 nm. X is typically taken as UV, S, M or L to describe the specific spectral channel of interest. In empirical work, SX(λ) is frequently replaced by the CIE sanctioned visibility functions V(λ) for the photopic region or V’(λ) for the scotopic region. There is no sanctioned visibility function for the mesotopic region of vision.

This description leaves too many parameters uncontrolled and is inadequate for scientific purposes. A complete description of this response as a function of wavelength is best developed in two steps. The first step involves an equivalent formula;

X = ∫ Eq(λ) · SX(λ) · dλ

where Eq(λ) is the radiance in quanta/sec of the stimulation (the quantal density at the receiver aperture divided by the elemental area of the receiver aperture), SX(λ) is the coefficient of absorption of the sensory material (chromophore) expressed in electrons/sec out per quanta/sec in, and dλ is the incremental wavelength employed. X is now the current out in electrons/sec. The expression is typically integrated over the interval 380 nm to 740 nm. X is typically replaced by the symbols UV, S, M or L to describe the specific spectral channel of interest, but its meaning and units remain the same.

This formula can be expanded to illustrate the many mechanisms involved in the stage 0 and early stage 1 (transduction) processes;

X = ∫ Ee(λ) · Qe(λ) · L(λ,ρ) · SC(λ,ρ) · SX(λ) · dλ

where Ee(λ)·Qe(λ) is equal to Eq(λ) but emphasizes the fact that the number of quanta per unit wavelength, Qe(λ), decreases as the wavelength is shortened. L(λ,ρ) describes the transmittance of the lens with respect to wavelength and radius from the optical axis of the lens. SC(λ,ρ) describes the acceptance of the photoreceptors as a function of the wavelength and the angle from the optical axis of the photoreceptors (but usually taken as the more easily evaluated and geometrically equivalent radius from the optical axis of the lens). This factor is commonly known as the Stiles-Crawford Effect of the 1st kind. SX(λ) is again the coefficient of absorption of the sensory material (chromophore) expressed in electrons/sec out per quanta/sec reaching the entrance surface of the photoreceptor.

This expression describes the number of electrons generated per second in the individual photoreceptor, prior to any amplification within the photoreceptor, as a result of external stimulation of the eye at a given energy irradiance. This signal is still subject to the variable amplification mechanisms associated with adaptation within the photoreceptor and the logarithmic conversion mechanism at the pedicle of the photoreceptor (associated with load element 4 in the block diagram).
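Numerically, the quantal form of the channel response reduces to a weighted sum. In this sketch the quantal irradiance is obtained from the energy irradiance via Eq(λ) = Ee(λ)·λ/(hc); the Gaussian absorption spectrum is only a placeholder for SX(λ), not the actual chromophore spectrum developed elsewhere in this work:

```python
# Numerical sketch of X = integral of E_q(lambda) * S_X(lambda) d(lambda),
# emphasizing that the photoreceptor is a quantum counter: the energy
# irradiance is first converted to a quantal irradiance.
import numpy as np

H, C = 6.626e-34, 2.998e8     # Planck's constant (J*s), speed of light (m/s)
wl = np.arange(380.0, 741.0)  # nm, 1 nm steps

def quantal_irradiance(E_e):
    """Quanta/sec per unit area per nm, from W per unit area per nm."""
    return E_e * (wl * 1e-9) / (H * C)

def channel_response(E_e, S_x):
    """Summed product of quantal irradiance and absorption spectrum."""
    return float(np.sum(quantal_irradiance(E_e) * S_x))

# Placeholder absorption spectrum peaked at 532 nm (an M-channel stand-in):
S_M = np.exp(-0.5 * ((wl - 532.0) / 30.0) ** 2)

# The channel responds far more strongly to a spectral line at its peak
# than to an equal-power line well off-peak:
line_532 = np.where(wl == 532.0, 1e-6, 0.0)
line_620 = np.where(wl == 620.0, 1e-6, 0.0)
print(channel_response(line_532, S_M) > channel_response(line_620, S_M))  # True
```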

2.1.1.7.2 Illuminants

The C.I.E. has attempted to standardize illuminants for many years. Initially, the effort was to standardize the characteristics of sunlight (or skylight) falling on a surface. The integrated intensity level was found to vary with the season, local weather, time of day, orientation of the surface, etc. Furthermore, as instrumentation improved, it was found that the spectral characteristics of the illumination also varied with the above parameters, and possibly more, such as the aerosol content of the air. With the widespread introduction of electric lamps, a convenient source of illumination became available. However, it was also difficult to standardize. The radiant characteristics varied with the absolute temperature as well as with the surface characteristics of the material. Furthermore, the illuminant produced depended greatly on the transmission characteristics of the vacuum envelope surrounding the source. Soda glass enclosures removed an inappropriate amount of the short wavelength radiation from the resulting radiation (or illumination).

Table 2.1.1.7 tabulates the Standard Illuminants defined by the C.I.E. and several illuminants suggested by the author. Note carefully, these are not sources; they are illuminants in the vernacular of the C.I.E. When falling on a surface, they will be reported as an illuminant of given illumination value by a photometer. A photometer is a radiometer including a filter corresponding to the prescribed luminosity function of the human eye. The filters used in photometers are usually poor approximations of the actual luminosity function.
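The photometric weighting just described can be sketched numerically. The block below is illustrative only: the Gaussian stand-in for the photopic luminosity function V(λ) is an assumption (a real photometer uses the tabulated C.I.E. function), and the flat test spectrum is hypothetical. The 683 lm/W maximum luminous efficacy is the standard photopic constant.

```python
# A photometer reports roughly K_m * integral of Phi_e(l)*V(l) dl,
# with K_m = 683 lm/W for photopic vision. V() below is a crude
# Gaussian stand-in for the C.I.E. luminosity function, peak 555 nm.
import math

K_M = 683.0  # lm/W, photopic luminous efficacy at 555 nm

def V(lam_nm):
    """Gaussian approximation (assumption) to the photopic V(lambda)."""
    return math.exp(-0.5 * ((lam_nm - 555.0) / 45.0) ** 2)

def luminous_flux(spectral_power):
    """spectral_power: callable giving W per nm; returns lumens."""
    return K_M * sum(spectral_power(l) * V(l) for l in range(380, 741))

# 1 mW spread evenly over 380-740 nm (about 2.78 microwatts per nm):
flat = lambda lam: 1e-3 / 360.0
lumens = luminous_flux(flat)
```

The same skeleton, with a poor filter substituted for V(λ), shows directly how an inaccurate filter biases the reported illumination value.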

Table 2.1.1.7 Various Illuminants, including those Standardized by the C.I.E.

Illuminant Nominal Source

2042 Kelvin black body, an early source found in the literature

A Radiation from a full radiator at an absolute temperature of 2856 K. This value was 2848 K beginning in 1931. It was changed to 2854 K prior to 1970. It has been 2856 K since 1971. The spectrum of the radiator has not changed but the accepted value of Planck’s Constant has. This has caused the recalculation of the temperature as reflected above.

B Direct sunlight with a correlated temperature of 4874 K. The reference to a correlated temperature indicates a best fit to a true full radiator at this temperature. Sunlight exhibits a number of narrow band differences from a black body in the short wavelength region.

C Average daylight, representing skylight and not direct sunlight, with a correlated temperature of 6774 Kelvin.

D65 One of a series of illuminants with the designation D and a two digit subscript, defined by the C.I.E., representing a wide variety of phases of daylight. D65 has a correlated temperature of 6504 Kelvin. D65 is the preferred daylight illuminant according to the C.I.E. Other values include D55 and D75.

(E) Illuminant exhibiting a constant energy per unit bandwidth over the region of interest in vision research. Defined by the author in terms of a full radiator at approximately 8683 K. This radiator provides an equal energy spectrum within +/-5.0% over the spectral region from 342 nm to 625 nm.

(F) Illuminant exhibiting a constant photon flux per unit bandwidth over the region of interest in human vision research. Defined by the author in terms of a full radiator at approximately 7053 K. This radiator provides an equal flux per unit wavelength spectrum within +/-5.7% over the spectral region from 437 nm to 625 nm. Can be described as D70.

(G) Illuminant exhibiting a constant photon flux per unit bandwidth over the region of interest in tetrachromat vision research. Defined by the author in terms of a full radiator at approximately 8073 K. This radiator provides an equal flux per unit wavelength spectrum within +/-7.9% over the spectral region from 342 nm to 625 nm. Can be described as D80.

To fully investigate the human spectrum, slightly higher temperatures than those specified in (E) and (F) are suggested. This would provide a uniform flux source over the monotonically discernable chromatic range between 400 nm and 655 nm.

Wyszecki and Stiles35 present a discussion of the illuminants A through D but do not clearly define the term full radiator. The full radiator is not encumbered by a soda glass (as used in conventional light bulbs) or other enclosure that distorts the spectral output of the source, particularly in the “blue” region of the human spectrum. They also provide the characteristics of a Davis-Gibson filter for converting a source at 2856 K inside a quartz envelope into an illuminant approximating either illuminant B or C.

As late as 1994, some vision community investigators were still not aware of the significance of color temperature. One group described both 2550 Kelvin and 5800 Kelvin sources as achromatic, and they were working in the ultraviolet36.

2.1.1.7.3 Sources

As of this time, the C.I.E. has not specified precisely how to obtain a source corresponding to any of the above illuminants. It has only quasi-recommended one Source, the very clumsy laboratory source defined in terms of liquids of various compositions intercepting the unfiltered light from a black body source at 2856 K. The resulting irradiance from this source can approximate Illuminant B or C, depending on the solutions used. This is the only artificial source in radiometry that has been defined as an acceptable standard. In all other cases, the investigator (and the equipment supplier) is left on his own. This makes it extremely important that investigators claiming laboratory precision carefully define their source configuration. Quoting a C.I.E. Illuminant designation when referring to a source is not adequate. This is true whether the source is locally built or of commercial origin.
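Since the illuminants above are all defined in terms of a full radiator, Planck's law is sufficient to compare them. The sketch below is illustrative only: it confirms that a 2856 K radiator is strongly deficient at short wavelengths, and that a much higher temperature (the author's ~8683 K) gives a flatter energy spectrum across the visual band. It makes no attempt to reproduce the exact tolerance figures quoted in the table.

```python
# Planck's law for a full (black body) radiator, used to compare the
# 2856 K laboratory source with higher-temperature illuminants.
import math

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam_nm, T):
    """Spectral radiance (W/(sr*m^3)) at wavelength lam_nm, temperature T."""
    lam = lam_nm * 1e-9
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def flatness(T, lo=342, hi=625):
    """Peak-to-valley variation of the energy spectrum over [lo, hi] nm."""
    vals = [planck(l, T) for l in range(lo, hi + 1)]
    return (max(vals) - min(vals)) / max(vals)

# A 2856 K radiator delivers far less energy at 450 nm than at 650 nm;
# an ~8683 K radiator varies far less across the same band.
blue_deficit = planck(450, 2856) / planck(650, 2856)
```

Running `flatness(8683)` against `flatness(2856)` shows the higher-temperature radiator is dramatically flatter over 342-625 nm, which is the motivation behind the author's illuminants (E) through (G).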

2.1.1.8 Instrumentation used in physiological optics

The vision community frequently relies upon an instrument sold under the general name microspectrophotometer but frequently used in a different mode than implied by that name. Each of the prefixes in that name has a unique meaning (and can be replaced by other expressions to obtain different instruments). The first term can be either absent (in the case of a wide angle instrument, typically hemispheric in coverage), spot (in the case of a nominal instrument with a field of view of a few degrees) or micro (for an instrument with a very small field of view). The term spectro can be either absent (in the case of an integrating instrument) or spectro (in the case of an instrument that provides an output as a function of wavelength). The term photo is the most awkward to deal with. The underlying function is that of a radiometer, an instrument that measures the total energy flux falling on the detector of the instrument in watts (and frequently expressed in watts/cm2 or watts/m2). The term photo- can be used in at least three contexts. First, it can be used to describe the visual spectral range of light. Second, it can be used to indicate the unit can measure the photon-flux rate per unit area as well as the power per unit area. Third, it can mean the unit includes a filter designed to replicate the luminous visibility function of the human eye and provide an output proportional to the luminance sensed by such an eye. In this case, the output is given in photometric units, such as foot-Lamberts, Lux, etc.

In most vision research, the above instrument is actually used as a microspectroradiometer, with an output measured in watts per unit area. In applications dealing with the sensitivity of the photoreceptors of the eye, an output measured in quantum-mechanical units, photon-flux per unit area, is more desirable. Such an instrument should have a spectral range from not more than 300 nm to at least 1.2 microns (1200 nm) if it is to be used in vision research.
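The conversion from a radiometric output (watts) to the photon-flux output preferred for photoreceptor work reduces to dividing by the energy per photon at each wavelength. A minimal sketch:

```python
# Radiometric power (W) -> photon flux (photons/sec) at wavelength lam_nm.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def photon_flux(power_watts, lam_nm):
    """Photons per second carried by power_watts of light at lam_nm."""
    photon_energy = H * C / (lam_nm * 1e-9)   # joules per photon
    return power_watts / photon_energy

# 1 nW at 500 nm is about 2.5e9 photons/sec. Equal power at a longer
# wavelength carries proportionally more photons (flux scales with lam).
n_500 = photon_flux(1e-9, 500)
n_700 = photon_flux(1e-9, 700)
```

This wavelength dependence is why an equal-energy spectrum and an equal-photon-flux spectrum (illuminants (E) and (F) above) correspond to different radiator temperatures.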
A typical microspectroradiometer is that built and described by Harosi & MacNichol37. It used collimated optics to

35Wyszecki, G. & Stiles, W. (1982) Op. Cit. pp. 143-154
36Jacobs, G. & Deegan, J. (1994) Sensitivity to ultraviolet light in the gerbil (Meriones unguiculatus), Vision Res. vol. 34, no. 11, pp 1433-1441
37Harosi, F. & MacNichol, E. (1974) Dichroic microspectrophotometer: a computer-assisted, rapid, wavelength-scanning photometer for measuring linear dichroism in single cells. J Opt Soc Am vol. 64, pp 903-918

measure a beam with a cross section of 0.6 x 2 microns. The overall spectral range was obtained using two different lens systems. The short wavelength spectral response was limited to greater than 275 nm by the fluoride-based optics. They used a dichroic optical system capable of measuring both the polarized and unpolarized (or average) components of the beam.

2.1.2 The scene environment

When discussing the performance of the visual system, it will be necessary to consider the explicit properties of the scene imaged by the system as well as the irradiation used to illuminate that scene. In general, the scene will include not only the reflective properties of each element of each object in the scene but also the geometry of the scene with respect to angles of illumination and angles between the surface and the line of sight of the eyes. These factors become critically important in interpreting the literature of color constancy and many similar effects.

2.1.2.1 Reflectance properties

In a variety of situations, it is necessary to specify the detailed properties of a surface in the field of view. Roughness can have a significant impact on the apparent color of an object if it is illuminated by multiple sources located at different positions. Even the interior properties of the material can become important, particularly for semitransparent crystalline materials. Alexandrite and other members of the sapphire family of gemstones are examples. These materials frequently exhibit one reflectance under specular illumination and a distinctly different reflectance under diffuse or scattered illumination. Stones like the Star of India owe their unique appearance to these considerations.

2.1.2.2 Fine surface structure related to printing

The specification of the color of a surface can depend strongly on its natural texture as suggested above. It can also depend on its detailed surface qualities, a fact utilized in commercial color printing. The entire field of color printing (except for some rare techniques used in art reproductions) depends on the ability of the eye to blend a group of sub-pixel size dots of different color ink into a perceived color that is not in fact present in the image. This so-called process color is so widely used that it is even employed to create materials for use in the psychophysical color laboratory to evaluate the performance of the eye. This situation frequently leads to awkward and sometimes erroneous results and interpretations.

2.1.3 The thermo-mechanical environment

Saying that the eye is enclosed by the temperature and pressure-controlled bath of the body fluids is easy. It is still easy after recognizing the exception for a small exposed surface consisting of an even smaller aperture, designed to minimize heat loss or gain through having a high reflectivity to radiation. Furthermore, the aperture is a poor radiator outside the visual region of the radiation spectrum. It will be important later to be even more specific concerning the temperature of the in-vivo retinal surface. This is particularly important during part of the gestation period when certain processes occur which affect the ultimate spectral performance of the eye, i.e., color abnormalities and color blindness. During in-vitro experiments, it will be seen later that it is very important to control and note the temperature of the specimen during tests to an accuracy of at least 0.5 degrees centigrade.

The description of the mechanical environment becomes a bit more complex. Emphasis here will be primarily on the field of view and principal optical axes of the eye. The pointing capability of the animal eye varies widely between species. Man and some higher animals can direct their eye to point in any direction within a wide total angle, their eyeballs being essentially spherical. Many other vertebrate species have very limited capability in this regard; their eyeballs are not as spherical as in man and do not rotate in their socket easily over a wide angle. In the Mollusca and Insecta phyla, the eye is essentially fixed in relation to the body and no independent rotation is possible. An important property that impinges directly on the operating mechanisms of the eye is the question of the instantaneous field of view of each eye; when they overlap, when they do not and when they can but frequently do not. How is the resulting information handled?
Many vertebrate animals also exhibit several involuntary eye movements that are very important to the overall operation of the eye. In man, some of these motions appear to be totally involuntary while in other mammals they are under at least partial voluntary control. In other vertebrate, and most invertebrate, animals, these particular

motions are not normally observed.

2.1.3.1 Sensitivity to temperature

Temperature plays an important role in precise visual research. Biological systems exhibit a greater sensitivity to temperature change than expected based on the conventional exponential relationship as a function of absolute temperature and the gas constant. Neurological systems appear to be based on a different relationship that only extends from near zero Centigrade (+273 K) to a temperature near 50° Centigrade (+323 K). Biological systems may survive for periods of time at temperatures outside of this narrower range. However, their neurological activity during these periods is either negligible or unpredictable. Both the equations for the initial response of the chromophores of vision to light and the gains of the amplifiers within the photoreceptors are found to be temperature sensitive in this work.

Because of the above sensitivity to temperature, it is important to record temperature to within ±0.5°C during visual research investigations. It is important that the measurement relate to the visual system itself and not a nearby thermal mass. This accuracy requirement is just as important in warm-blooded as in cold-blooded animals. Although humans and other mammals are considered to maintain a constant body temperature, this is only an approximation. The temperature of a healthy human varies by at least ±0.5°C in the course of a day based on oral measurement.

2.1.4 The hydraulic environment

The flow of body fluids into and out of the eye serves several purposes. These fluids are clearly the principal means of temperature control in the eye, serving as a heat exchanger between the eye and the much larger body. Their flow also brings biological material for building and maintenance and removes waste products not recycled within the eye itself. More importantly, these fluids bring the metabolic fuel needed to power the eye and remove the waste products of its consumption. The eye is a significant consumer of all metabolic fuel entering the head. However, the heat released within the eye (and similarly within the brain) is remarkably small. This fact will be discussed further in Part C and the Appendices.

All of the fluids entering and leaving the eye travel in arteries and veins contained within what is colloquially called the . Near the optic disk, they divide into two major portions, the portion serving the neural material on the anterior side of the photoreceptors, and the portion posterior to the photoreceptors. Flows of materials within these channels exhibit a variety of velocities and time constants generally related to their size at a given point. Hogan has collected some gross values for the velocities and time constants of the overall eye.

Retinal rate of flow: 1.6-1.7 ml. per mm. per gm. of retina (est.) (Anderson et al., 1964)
Mean retinal circulation time: 4.7 +/- 1.1 sec. (Hickman & Frayser, 1965)
Mean retinal transit time: 3-4 sec. (Friedman et al., 1964)

It should not be assumed these are the time constants related to various parts of the retina. This is especially true of the foveal region where the arterioles give way to a fluid bed with entirely different dynamics.

2.2 The retinotopic coordinate systems of vision, Map of the Retina

Establishing a single consistent system of notation is important to avoid confusion when dealing with the visual system. Here again, it is common for different disciplines to employ inconsistent terminology. The ophthalmologist works in perimetry, where the field of view is measured in object space from the fixation point, basically a radius, while most others in the biological field speak of the full field of view, basically a diameter. Research opticians prefer to use the optical axis as a reference. In this work, the full field of view will be used except when clearly indicated by the term semi-field of view.

In animal vision, confusion can also result if the animal does not have a distinctive head. The biological coordinate systems generally apply to the entire animal whereas most vision oriented work uses a coordinate system particular to the head. The term sagittal, defined as “like or related to an arrow,” is a problem when defining a comprehensive model of the vision process because different authors use it in so many different ways:

+ The zoologist says the sagittal plane is the plane separating the two halves of a bilaterally symmetrical animal (relates to a centroid condition).

+ The ophthalmologist says the sagittal plane is any plane that is parallel to a meridional plane but does not pass through the anterior or posterior pole of the eye (relates to an offset condition).

+ The optician says the sagittal plane is any plane that is perpendicular to a meridional plane and includes the chief ray (relates to a skew condition).

When referring to just the head of an animal, the term median plane is frequently used in a similar manner to the term sagittal plane. The terms sagittal and sagittal plane are so well established in the various disciplines that they cannot be easily replaced. Using them with caution and clarity will be necessary.

2.2.1 Coordinates external to the eye

Figure 2.2.1-1 illustrates specifically the terminology used herein. Both the left frame from Hickman38 and the right frame from Davson39 are slightly modified for consistency and clarity.

Figure 2.2.1-1 Coordinates external to the eye. Left figure displayed against coordinates appropriate for an animal that is not bipedal. Modified from Hickman. Right figure displayed to illustrate the median plane of the head (boundary between stipple and clear area). Shows xx’ axis through the center of rotation of the two eyes which aids in the definition of the vertical and horizontal planes perpendicular to the median plane. Modified from Davson.

All of the animals on the phylogenic tree above the flat worm planaria are bilateral in construction. Except for the bipedal animals and animals with flexible necks, defining the relevant geometric planes using zoological terms, as in (a), is straightforward. Under this set of definitions, the intersection of the sagittal plane and the longitudinal plane defines a “sagittal arrow” which is parallel to the normal direction of travel of the animal and to the animal’s backbone (if present). This condition is not true for the bipedal animals since the sagittal arrow now points vertically whereas the principal direction of motion is given by the ventral intersection of the sagittal and transverse planes.

For animals with flexible necks, defining the principal planes of the head as in (b) is useful. The median plane bisects the head along the plane of symmetry. For purposes of vision, the horizontal plane can be defined as perpendicular to the median plane and passing through the center of rotation of both eyes. A facial plane can be defined as perpendicular to both the median and horizontal planes and also passing through the center of rotation of

both eyes.

Both Kandel, et al.40 and Nolte41 provide images to define the axes of the animal under a variety of circumstances and highlight some of the problems related to rostral and caudal directions. The above coordinate systems have not found wide use in modern imaging of the brain. A different set of coordinates has been defined for this purpose. They are discussed in Chapter 15.

Fine & Yanoff42 provide a comprehensive summary of the coordinate system used to describe the human eye, particularly in the ophthalmological community. Figure 2.2.1-2 illustrates the basic definitions. The optical axis of the human eye is defined as the line passing through the center of the cornea and the center of the crystalline lens. The intersection of the optical axis with the external surface of the cornea is defined as the corneal pole, or the anterior pole. The intersection of the optical axis with the retina is defined as the polar point in terms of the surface of the retina. The intersection of the optical axis with the outside of the sclera is defined as the posterior pole of the eye. They define any meridional plane as including the optical axis of the eye. The name of the vertical meridional plane is usually simplified to the vertical plane, and similarly for the horizontal meridional plane, etc. The equatorial (or Listing’s) plane is perpendicular to the optical axis and is midway between the anterior and posterior poles. This last definition is adequate for purposes of dissection. However, a more precise value will be presented below in terms of the diameter of the orb and the principal points of the optics. A series of meridional planes pass through the same two poles and are perpendicular to the equatorial plane.

38Hickman, C. (1970) Integrated principles of zoology. 4th ed. Saint Louis, MO: C. V. Mosby pg. 110
39Davson, H. (1990) Physiology of the eye. 5th ed. NY: Pergamon Press. fig. 24.7
The meridional plane illustrated is actually the vertical meridional plane. A plane perpendicular to these two planes would be the horizontal plane of the eye. Any other meridional plane is known as an oblique plane.

Planes parallel to any meridional plane but not passing through the anterior and posterior poles are called sagittal planes.

Figure 2.2.1-2 Coordinate system for the ocular showing planes and axes of rotation.

The horizontal plane of the eye may be rotated about the axis ‘xx’ with respect to the horizontal plane of the head. The vertical plane of the eye may be rotated about the axis ‘zz’ with respect to the median plane of the head. Rotation about the yy’ axis is minimal, but not trivial, in the human eye. Such torsional rotation about the line of fixation is critical to proper convergence and the perception of 3D imagery. Records provides a figure defining all of the possible motions of the eyes and their scientific names43. See Section 7.4.3.6.

In some animals, the two eyes may be rotated independently. In most mammals capable of forward vision, the eyes can only be moved in a coordinated manner. The small differences in angular motion between the eyes are computed and commanded by the brain to support stereoscopic vision.

When speaking of the eye alone, confusion frequently arises among authors. Research opticians generally speak of the horizontal plane of the eye as that plane containing the retinal horizon and the optical axis. The plane passes

40Kandel, E. Schwartz, J. & Jessell, T. (2000) Principles of Neural Science, 4th ed. NY: McGraw-Hill pg. 321
41Nolte, J. (1999) The human brain: An introduction to its functional anatomy, 4th ed. St. Louis, MO: Mosby pg 52
42Fine & Yanoff (1972) Ocular histology. NY: Harper & Row, Medical Department pg. 44
43Records, R. (1979) Physiology of the Human Eye and Visual System NY: Harper & Row, pg 579

through the polar point of the retina. Zoologists and biologists tend to define the horizontal plane of the eye as a plane containing the retinal horizon and passing through the point of fixation. This plane does not include the optical axis of the eye and is not a meridional plane of the eye. The biologist’s horizon is approximately two degrees above the optician’s horizon in object space.

The figure would suggest the two eyes rotate in the horizontal and vertical planes. However, this is not the case. The muscular attachments to the oculars are not orthogonal. To compensate for this lack of orthogonality, the eyes employ three sets of muscles. There are two quasi-orthogonal pairs and a third pair that rotates the oculars about the line of fixation in order to allow convergence of the two images over a small field of view (on the order of 1.2 degrees in diameter). See Section 7.4.3.6.

The fields of view of the animal eyes are usually given using the coordinates in Figure 2.2.1-3. Note the extreme angle from the median plane is not specified. The contour is truncated at the 90-degree angle value. Note also that the median plane used is relative to the fixation point and is not the optical median plane passing through the polar point. In humans, and similar forward-looking animals, the total angle in the horizontal plane of the head is approximately 180 degrees. In many other animals exhibiting less visual overlap, the angle may approach 360 degrees. The coordinate system can be used for any animal, whatever the extent of the field of view.

Figure 2.2.1-3 Field of view of the typical human showing the coordinate system used to describe the field of view of any animal. The ten degree radius and five degree radius circles in this figure have been poorly drawn by the draftsman. From Polyak, 1941.

Figure 2.2.1-4 shows a similar plot for the right eye from a 1938 paper44 citing Ronne of 191545. It provides one of the most precise figures available. The figure is presumed to be for “white” light of unknown color temperature. As usual, the perimetry paper supporting the data only extends out to 90 degrees from the point of fixation. The scale is equi-angular and not a projection of a spherical visual field. In fact, Ronne used a flat (tangent) screen rather than a spherical screen or a target following a circular path in a plane including the eye. A contributor to a blog on this subject translated Ronne as follows: “he used a dark curtain straight in front of the subject and small white objects on a pole (camouflaged by giving it the same color as the curtain). Ronne mentions that such an approach limits the measurements to the central 30 degs of the visual field but that the range can be easily extended by having the subjects look off-axis to other points than the central point on the curtain.”

For purposes of this work, an additional dashed circle has been added representing a 105 degree angle. Note that the maximum peripheral angle depends somewhat on the size of the test stimulus used. There is no one angle that can be quoted effectively. “The numerator of each fraction given in the caption correspond to the diameter of the test object in millimeters. The denominator is the distance from the eye of the patient. On the nasal side, the larger test objects all give the same extent of field, so that the nasal edge of the field is perpendicular, the temporale edge being steeply sloping.” Traquair also notes, “The geometrical centre of the field is about 20 degrees to the outer side of the visual axis or physiological centre.” See Section 2.4.8. Based on the added circle, the maximum peripheral angle is about 107 degrees only for the 100/1000 fraction, a very large test object.

A 100/1000 fraction corresponds to a 5.7 degree diameter test object, approximately the angular diameter of the entire fovea of the standard human eye. For a test object with its center at 107 degrees, it actually extends to nearly 110 degrees at the outer edge and nearly 104 degrees at its inner edge.
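The conversion from Traquair's test-object fractions to visual angle is simple trigonometry, sketched below:

```python
# Angular diameter of a perimetric test object of diameter d_mm viewed
# at distance D_mm: 2*atan(d / (2*D)).
import math

def visual_angle_deg(d_mm, D_mm):
    return math.degrees(2.0 * math.atan(d_mm / (2.0 * D_mm)))

theta = visual_angle_deg(100, 1000)   # the 100/1000 test object, ~5.7 deg
# Centered at 107 degrees, this object spans roughly
# 107 - theta/2 (~104 deg) to 107 + theta/2 (~110 deg).
```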

Loschky, of Kansas State University, performed a survey over the internet in 2015 seeking more primary data sources. He only acquired a few more secondary sources:

“1) The primary source for the claim that the temporal visual field extends to something like 110 degrees is Roenne (1915), which is a 100 year old reference with an n size of 1 (and one eye), using a method no longer used in contemporary perimetry.

2) The good normative studies that have been done more recently (e.g., Neiderhauser & Mojon, 2002; Vonthein et al., 2007) give excellent normative estimates of the nasal (62°), superior (62°), and inferior (74°) limits of the visual field, but did not measure anything beyond 90° in the temporal field. This is because they used domes, and in keeping with standard clinical practice, they did not rotate the fixation point in the nasal direction. (There is, however, a figure from Frisen's 1990 book, "Clinical Tests of Vision," showing results from a single participant using a Goldman perimeter with off-center fixation that tested out beyond 110 degrees.)

3) The results of Mapp and Ono (1984) suggest that there may be differences not only in the nasal but also the temporal visual fields by race (e.g., Caucasian vs. East Asian). However, given their population samples of n = 3 for both groups, this clearly begs to be more carefully studied.

To sum up, in the vision sciences we commonly throw around numbers like "the temporal visual field extends to 100 [or 110] degrees" but there is no firm basis, by contemporary methodological and analytical standards, for saying so. This begs for a normative study to answer it, which would involve using domes, but rotating the fixation point nasally. And a multi-site study would probably be best, since it would potentially capture important population differences.”

44Traquair, H. (1938) An introduction to clinical perimetry, 3rd Edition. London: Henry Kimpton pp 4-5
45Ronne, H. (1915) Zur Theorie und Technik der Bjerrumschen Gesichtsfelduntersuchung. Archiv für Augenheilkunde vol 78 (4), pp 284-301

Figure 2.2.1-4 Perimetry of the right eye modified to show 105 degree circle (dashed). ADD The isopters from the periphery inward are given as 100/1000, 80/1000, 40/1000, 20/1000, 10/1000, 5/1000, 5/2000, 3/2000, 2/2000, 1/4000 and 0.63/4000. See text. From Traquair, 1938

A similar presentation can be used to report the color sensitivity of the eye versus field position. Figure 2.2.1-5 shows both the perimetric plots of color vision for the human right eye and the location of the blind spot46. More recent research disputes this representation. Lachenmayr & Vivell show virtually identical contours for red, green

and blue in their figure 8.9d47. The blind spot is approximately 6.5 degrees in diameter and approximately 16 degrees from the point of fixation (in object space). The perimetric contours shown are highly subject to test conditions and should only be used as a guide. More defined perimetric contours will be developed in later sections. Note the color names used are colloquial and not defined scientifically.

46Science of Color. (1963) Committee on Colorimetry, ed. Optical Society of America pg. 104

Figure 2.2.1-5 Perimetric plot of color vision in the visual field of the right human eye. The ten degree circle is drawn appropriately in this figure but the inner circle is not labeled at all. Details of colors used and test field sizes not supplied.

47Lachenmayr, B. & Vivell, P. (1993) Perimetry and its Clinical Correlations. NY: Thieme page 296

It is notable that the Optical Society of America did not recognize in the above figure any region of the fovea that was not sensitive to color information. Many authors infer such a region based on a “cone free” region of the fovea.

2.2.1.1 Perimetry presentations are coarse approximations

The two previous figures are representations of what are called “tangent boards” among optometrists. These are displays designed to make the craftsman's work easier. They are gross approximations, involving much the same complexity of mathematical manipulation as the CIE Chromaticity Diagram. That diagram is now obsolete except in commercial contexts, where it persists because it is written into a great many laws and industrial standards; in academia, it has been replaced by the theoretically defendable CIE L*U*V* and L*A*B* color spaces. As noted in the citation to Polyak of 1941, the concept of the tangent board is an old one. Harrington describes its derivation from his “Hill of Vision,” a technical crutch from the 1980's or earlier. The tangent boards are two-dimensional planar displays meant to represent the polar coordinates of the external environment. Contrary to the pedagogical illustrations showing these planar boards as vertical boards placed in front of a subject, the “tangent” boards are meant to represent a crude flattening of a spherical surface with the pupil of the eye at its center. Thus, the tangent board is meant to be tangent to the spherical surface at each point of measurement.

The tangent board does not correspond to any location on the retina and does not incorporate any representation of the anamorphic optics of the eye.

As noted in the captions above, the first figure is intentionally distorted in the central region to place more (but still inadequate) emphasis on the region within the ±10 degree contour. Note the ten degree radius circle is 2/3 of the radius of the 20 degree circle. The five degree radius is equal to one-half of the ten degree radius.
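The proportions just described can be compared with a true tangent projection, in which a field angle θ maps to a board radius proportional to tan(θ). A minimal sketch in plain Python (the 2/3 ratio is taken from the figure as described above; everything else is simple trigonometry):

```python
import math

def tangent_radius(theta_deg: float) -> float:
    """Radius on a flat board tangent to the viewing sphere,
    for a board at unit distance from the pupil."""
    return math.tan(math.radians(theta_deg))

# Ratios a true (undistorted) tangent projection would give:
print(tangent_radius(10) / tangent_radius(20))  # ~0.48, not the 2/3 drawn
print(tangent_radius(5) / tangent_radius(10))   # ~0.50, matching the figure
```

The 5:10 degree ratio in the figure is nearly the true tangent value, while the enlarged 10:20 degree ratio confirms the intentional magnification of the central region.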

In the second figure, the five and ten degree radius circles are shown in proportion. However, the smaller circle within the five degree radius circle is not labeled at all.

The practical interpretation of the measurements made using these representations is discussed in Section 2.8.1.1.3 and more extensively in Section 18.8.9.

2.2.1.2 Approximations of the binocular field

If the two fixation points of the above figure are superimposed to allow a representation of the binocular field, Figure 2.2.1-6 results. In this figure from Ganong48, the dotted line encloses the visual field of the left eye; the solid line encloses that of the right eye. The heart-shaped clear zone in the center is viewed with binocular vision. This area is approximately the same as the area of chromatic sensitivity shown in the previous figure. The shaded areas are viewed with monocular vision. Martin49 has provided an alternate figure in his textbook, but it contains no scales or axes. It shows that the binocular field just covers the face of a person you are talking to at arm's length.

48Ganong, W. (1975) Review of Medical Physiology, 7th ed. Los Altos, CA: Lange Medical Publications pp 83-99
49Martin, J. (1996) Neuroanatomy, Text and Atlas, 2nd ed. Stamford, CT: Appleton & Lange pg. 191

The two blind spots define areas within the combined field that are not capable of binocular or stereoscopic vision. The specific dimensions of these fields will be discussed in Section 2.2.2.2. It should be noted that the label binocular vision is not synonymous with stereoscopic vision. The large shared area of binocular vision is used in the awareness mode of vision as an aid to pointing the eyes. Stereoscopic vision is achieved within the analytical mode of vision over an area limited to the diameter of the foveola, nominally 1.18 degrees in diameter. This nominal area is too small to illustrate at the scale of the figure. However, it is shown clearly in Figure 2.2.1-7.

It will be shown in Section 7.3 that the instantaneous stereoscopic parameters are only obtained for points within the instantaneous field of view of the foveola. However, the motion of the eyes allows the stereoscopic region to be scanned over the entire binocular region on demand. The information related to each individual area is acquired continuously (within sequential intervals of about 200 msec) through the motion of the eyes. It is stored in the saliency map of vision, where it is available on a continuous basis for further computation and perception (See Sections 7.4 and 15.4).

Figure 2.2.1-6 Monocular and binocular fields of vision in the human, from Ganong. The two small ellipses have been added to define the blind spots of the eyes. These areas of the field are not viewed binocularly or stereo-optically. The area of instantaneous stereoscopic vision is too small to be seen at this scale. It is a circle of 1.18 degrees hidden behind the cross-point of the axes in this figure.

Figure 2.2.1-7 The visual fields of monocular, binocular and stereoptic vision. The stereoptic field is shown in cyclopean form for simplicity. Each eye exhibits a similar field. They are usually convergent at the point of fixation.

2.2.2 Coordinates internal to the eye

Figure 2.2.2-1 provides a comprehensive section through the right eye of the human in the horizontal plane50. Note the contours inside the crystalline lens indicative of changes in the index of refraction. Note also that the visual axis has been drawn as a straight line between the fovea and the 2nd principal point of the optics. The line is not continuous beyond the 2nd principal point. The figure in the next section will address this subject further. Hogan provides additional dimensional details concerning the eye. He gives the outside diameter of the orb itself as 23 mm. Thus, the true location of the equatorial plane should be 11.5 mm from the posterior pole of the optical axis. The retina lines nearly the entire inside of the orb, up to the Ora serrata. The location of the Ora serrata is well beyond the equatorial, or Listing’s plane and is usually sacrificed during dissection. Representing the retina on a flat surface for purposes of discussion encounters the same difficulty as that of representing the globe of the earth on a flat surface. Distortions are inevitable.

The area of the Macula will be shown in greater detail in later figures. The lamina cribrosa is shown as the hatched lines extending across the optic disk between the two sides of the sclera. Note the small size of the fovea within the area of the macula and the even smaller size of the foveola. It is the foveola that is so critical to the vision of man.

2.2.2.1 Optically important dimensions

To the first order of approximation, the eye is rotationally symmetrical about an optical axis passing through the center of the cornea and the center of the lens. Most measurements of the eye use the corneal crossing of this axis, the corneal pole, as the reference point for all measurements along this axis. The retinal crossing of this axis, the polar point, is used for making measurements relative to the surface of the retina. There are two additional points associated with the optical performance of the eye which are also geometrically important, the principal points. Figure 2.2.2-2 provides details. The first principal point, P, is coincident with the intersection of the entrance aperture surface (shown dotted to indicate it is non-planar) with the optical axis. It provides an apex point for determining the angle, u, of optical rays entering the eye. The second principal point, P', is coincident with the intersection of the exit aperture surface and the optical axis and provides an apex for defining the direction a ray takes within the eye. Note the discontinuity between the principal ray approaching the optics and the principal ray leaving the optics. A similar situation is seen for the nodal points. A ray drawn from the focal surface to the 2nd nodal point is discontinuous at that point. The equivalent ray in object space is drawn through the first nodal point parallel to the ray through the 2nd nodal point.

Figure 2.2.2-1 Section through the right eye of the human at the horizontal plane. The cornea plays a more important role as a lens than does the morphologically labeled lens. All of the photoreceptors of the retina are pointed toward the center of the lens. The optic nerve exits as close to the mid-plane of the head as possible. The visual axis deviates significantly from the optical axis of the eye. Note the small size of the fovea and the even smaller size of the foveola. Modified from Walls, 1942.

Figure 2.2.2-2 Image construction for a wide angle optical system with aperture stop.

50Walls, G. (1942) The Vertebrate Eye. Bloomfield Hills, MI: Cranbrook Institute of Science

In a wide angle system like the eye, the entrance and exit apertures are not symmetrical about the optical axis and do not lie in a plane perpendicular to the optical axis at P & P'. They do lie on the curved surfaces passing through these points. This is particularly important when the angle, u, exceeds a few degrees. When u exceeds a few degrees, the image surface should be shown curved and the nodal points, N & N', do not exist. Technically, the fovea is beyond the angle from the optical axis within which the paraxial 2nd nodal point can be used. As the angle, u, increases, it is mandatory that the curvature of the retina be recognized. Some investigators have defined an angle of intersection between principal rays within the eye and the tangent to the retina at the point of intersection, as drawn in a plane including the optical axis. Lotmar labeled this angle the angle of acceptance, on the apparent assumption that all photoreceptors were maximally sensitive in a direction perpendicular to the local tangent. In fact, the photoreceptors are sheared at just such an angle from the local tangent to maximize their entrance cross-section. Thus, this angle more accurately relates to the shear angle of the actual photoreceptors as a function of their angular distance from the polar point. The more appropriately defined acceptance angle is related to the waveguide characteristics of the photoreceptor cell, to be discussed below and in Section 2.4.2.

2.2.2.2 Retinal dimensions

Bennett & Rabbetts51 have provided detailed dimensions of objects in the retina critical to vision, Figure 2.2.2-3. The angular dimensions are referred to the second nodal point and are the same as would be observed in object space. The point marked M' is the center of the fovea. The five-degree angle between the point of fixation and the posterior pole is frequently labeled the alpha angle. The two-degree angle reflects the fact that the point of fixation in object space is approximately two degrees above the optical horizon. The papilla, or optic disc, is approximately 2 mm vertically by 1.5 mm horizontally and is large enough to obscure ten full moons placed side by side. Relative to the posterior pole, all of the features shown are beyond the two degree diameter criterion for the application of the paraxial (Gaussian) analysis method. All dimensions should be given in angular measure relative to the 2nd principal point, in angular measure relative to object space, or in linear measure along the curved retina.

Newell has provided a photograph of the retina with a line drawing overlay similar to that of Bennett & Rabbetts (but for the left eye)52. However, the values are slightly different and the scale is apparently not uniform. The foveola is shown much larger than actual size. The numbers of cells in the foveola versus the fovea also appear to be inconsistent. Glaser, on the other hand, referencing Gray et al., provides a strikingly different set of dimensions for these features of the retina (and also contains an inconsistency in the location of the optic disc)53. The major difference is that Gray et al. define the macula as 5.5 mm in diameter (25 degrees) while describing the fovea as 1.5 mm (7-10 degrees)54. Thus, their macula covers a major portion of the retina. The Gray representation is based on anatomical dimensions of the retina. However, the dimensions, such as those of the macula, are still based on observation.

Carpenter (1988) and Guestrin & Eizenman (2006) have provided more recent information suggesting the location of the blind spot (optic disc) may vary considerably in vertical position. Hansen et al.55 have reduced their comments to “In a typical adult, the fovea is located about 4-5° horizontally and about 1.5° below the point of the optic axis and the retina and may vary up to 3° vertically between subjects.”

Due to the nearly spherical shape of the retina in most vertebrate eyes, care must be taken in defining distances relative to the retina, especially when geometric distortion is one of the subjects under discussion. Clearly, in the paraxial case, one can speak of the distances from the polar point or the point of fixation in terms of microns. However, by the time the units become millimeters, it is important to indicate the distances are along a spherical surface defined as the Petzval Surface.
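Near the axis, a single scale factor suffices to convert between these angular and linear measures. A minimal sketch, assuming a posterior nodal distance of about 17 mm (a conventional schematic-eye value, not a figure from this work) and treating rays as pivoting about the 2nd nodal point:

```python
import math

POSTERIOR_NODAL_DISTANCE_MM = 17.0  # assumed schematic-eye value

def retinal_mm_per_degree() -> float:
    """Retinal arc length subtended by one degree in object space."""
    return POSTERIOR_NODAL_DISTANCE_MM * math.radians(1.0)

def degrees_from_retinal_mm(arc_mm: float) -> float:
    return arc_mm / retinal_mm_per_degree()

print(f"{retinal_mm_per_degree():.3f} mm/deg")              # ~0.297
print(f"{degrees_from_retinal_mm(1.85):.1f} deg fovea")     # ~6.2
print(f"{degrees_from_retinal_mm(1.5):.1f} deg optic disc") # ~5.1, about ten 0.5-degree full moons
```

Beyond a few degrees off axis, such conversions must follow the curved (Petzval) surface rather than a flat image plane, as stressed above.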

51Bennett, A. & Rabbetts, R. (1984) Clinical Visual Optics. Boston, MA: Butterworths
52Newell, F. (1986) Ophthalmology, 6th ed. St. Louis, MO: C. V. Mosby
53Glaser, J. (1999) Neuro-ophthalmology, 3rd ed. NY: Lippincott Williams & Wilkins, pg 35
54Gray, L. Galetta, S. Siegal, T. et al. (1997) The central visual field in homonymous hemianopsia Arch Neurol vol. 54, pg 312
55Hansen, D. Augstin, J. & Villanueva, A. (2010) Homography normalization for robust estimation in uncalibrated setups Proc 2010 Symp Eye-tracking Res Appl: ETRA pp 13-20

Figure 2.2.2-3 The relative sizes and positions of features on the human retina as seen from the nodal point in image space using nodal projection techniques. The optic disk is located superior to the meridian through the foveola (the visual axis) in object space. The labeling is somewhat controversial. In 1999, Glaser labeled the much larger 25 degree disc the macula, with the five degree disk labeled the fovea and the smaller disk (M') labeled the foveola. From Bennett & Rabbetts, 1984

This is especially important when making histological measurements on the posterior portion of the eye, the eye cup. The following table is from Hogan56 and assumes a standard size photoreceptor of 2 microns diameter for discussion purposes.

Zones of the Retina (following Hogan, 1971)

Central retina:
  Foveola      0.35 mm diam.                    ~175 PC's in diam.
  Fovea        next zone, out to 1.85 mm diam.  ~750 PC's in diam.
  Parafovea    next zone, out to 2.85 mm diam.  ~1,250 PC's in diam.
  Perifovea    next zone, out to 5.85 mm diam.  ~3,000 PC's in diam.

Peripheral retina:
  Near periphery   1.5 mm zone around the central retina
  Mid periphery    3.0 mm zone around the near periphery
  Far periphery    9-10 mm wide on the temporal side, 16 mm wide on the nasal side
  Ora serrata      2 mm wide on the temporal side, 0.7-0.8 mm wide nasally

Macula (a.k.a. Macula Lutea):
  Overlay of retinal area 2.0 mm wide and 0.88 mm vertically, centered on the Foveola
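The photoreceptor (PC) counts in the table can be checked against the assumed 2 micron cell diameter; a quick sketch (zone diameters taken from the table above):

```python
PC_DIAMETER_UM = 2.0  # standard photoreceptor diameter assumed for discussion

def pcs_across(zone_diameter_mm: float) -> int:
    """Approximate number of photoreceptor cells spanning a zone's diameter."""
    return round(zone_diameter_mm * 1000.0 / PC_DIAMETER_UM)

print(pcs_across(0.35))  # 175, matching the foveola entry exactly
print(pcs_across(1.85))  # 925; the table lists ~750, so not every entry
                         # is a simple diameter / 2-micron ratio
```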

56Hogan, M. & Alvarado, J. & Weddell, J. (1971) Histology of the human eye. NY: Saunders pg. 401-403

The macula is seen to overlay both the Foveola and the Fovea by a small amount. In Glaser, the macula extends to a 25 degree diameter.

A cross-section of the retina was presented in the Introduction stressing the location of the focal surface and its curved nature in the region of the fovea. The focal surface follows the junction of the inner and outer segments of the photoreceptor cells. Figure 2.2.2-4 provides additional details and points of interest. This figure is from Fine & Yanoff but has been modified in several ways. Although recognizing the paucity of cell bodies in the optical path in the foveal region is important, it is misleading not to show the presence of the nuclei and the inner segments of the photoreceptor cells in this region. Fine & Yanoff chose to give the distance from the vitreous surface of the retina to the posterior of the outer segments. The focal surface and the optical entrance to the outer segments are approximately 50 microns anterior to the posterior end of the outer segments.

Figure 2.2.2-4 Schematic representation of histological section through fovea of rhesus monkey. Modified from Fine & Yanoff (1972).

In the human eye optical system, there is little spectral filtration relative to the applied illumination. A small area in front of the fovea, the Macula Lutea, appears to exhibit some filter characteristics. The literature is divided over whether this feature is a coating on the surface of the retina or an impregnation into the surface of the retina. It may even be the underlying color of the retina in the absence of soma related to the bipolar and other proximal neurons in this area. Although the Macula Lutea may provide a “minus blue” filter similar to those human hunters wear to increase their visual acuity while in the woods, its effectiveness is small, as shown by the hunters' desire for an additional, more effective, minus-blue filter.

Figure 2.2.2-5 presents a plan view of the small portion of the retina seen through the standard ophthalmoscope when it is centered slightly to the right of the polar point. The circle is approximately 25 degrees in object space. Thus, only a very small, but crucial, portion of the overall retina is usually shown in figures like this in the literature.

Figure 2.2.2-5 Retina seen through the ophthalmoscope in a normal human. The diagram at left identifies the landmarks in the photograph on the right. The label Fovea points to the foveola in the context of this work. The Macula is the same diameter as the nominal fovea. Modified from Vaughan & Asbury to show angular size.

The figure clearly shows the presence of both the arterioles and veins coursing over the surface of the retina, and the lack of these structures immediately in front of the fovea. As indicated in other sections, it is only the fact that the photoreceptors are change detectors that precludes these arterioles and veins from being seen under normal conditions. Since they move with the retina, they do not appear as moving objects in the scene. Only objects that change position with respect to the retina, or change with respect to time, are sensed by the photoreceptors. While the macula is shown covering about five degrees, the foveola only covers 1.18 degrees.

2.2.2.3 Photoreceptor dimensions and refractive indices

The photoreceptor cell exhibits many important characteristics related to its physical dimensions; primary among these are the waveguide-related properties of the cylindrically shaped Outer Segment. In addition, there are structures in the Inner Segment that appear to have significant optical properties, including the ellipsoid located near the entrance to the Outer Segment and, occasionally, a paraboloid found anterior to the ellipsoid. Figure 2.2.2-6 illustrates the most important of the possible situations related to vertebrate eyes. On the right, the situation is that of a simple photoreceptor that does not exhibit an ellipsoid or “oil drop” in its Inner Segment. Here, the optical collection efficiency of the photoreceptor is determined by the diameter of the Outer Segment, the shape of its proximal end, and the difference in index of refraction between the surrounding environment and the OS. If it is desired to quantify this efficiency using a single number, an acceptance angle can be defined as the angle relative to the optical axis where the efficiency has fallen to 50% of maximum.

In the middle, the slightly more complex situation is illustrated involving an “ellipsoid” located in the IS immediately next to the OS. This ellipsoid may have a spectral filtration function, in which case it is given the name “oil drop.” However, it clearly has an optical function. In a very fast optical system, it is possible that the light ray bundle approaching the OS has an angular subtense greater than the acceptance angle of the OS. This would result in a loss in overall light collecting efficiency unless a collimator was placed in the optical path ahead of the entrance to the waveguide. The ellipsoid could form just such a collimator. Since the ellipsoid has a higher index of refraction than its immediately surrounding environment, it acts as a double convex lens. It projects a light ray bundle into the OS that has a smaller angular subtense than the light ray bundle impinging on it. This can raise the absorption efficiency of the overall system considerably. Winston has suggested an alternate configuration where the ellipsoid is merely a feature inside a parabolically shaped IS57. In these illustrations, noting that the OS is surrounded by the IPM fluid of a given index of refraction is important. It is not directly affected by the index of refraction of the IS or of the ellipsoid. In this sense, the optical properties of the IS can be treated separately from those of the OS.

Figure 2.2.2-6 Optical parameters of the photoreceptor cell. Index of the material in the extrusion cup is unknown. The value of 1.41 for the outer segment is an average, with the actual absorbing material exhibiting a considerably higher index.
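The acceptance angle of the OS, if it is modeled as a simple step-index waveguide, follows from the indices involved. A minimal sketch using the standard numerical-aperture formula; the 1.41 OS index is the average value cited above, while the 1.336 IPM index is an assumed vitreous-like value, not a figure from this work:

```python
import math

def acceptance_half_angle_deg(n_core: float, n_clad: float, n_launch: float) -> float:
    """Half-angle of the acceptance cone of a step-index waveguide,
    from the numerical aperture NA = sqrt(n_core^2 - n_clad^2)."""
    na = math.sqrt(n_core**2 - n_clad**2)
    return math.degrees(math.asin(min(na / n_launch, 1.0)))

N_OS = 1.41    # outer segment, average value cited in the text
N_IPM = 1.336  # assumed index of the surrounding IPM fluid

# Light arriving through the IPM itself:
print(f"{acceptance_half_angle_deg(N_OS, N_IPM, N_IPM):.1f} deg")  # ~19.7
```

This single number is only a convenient characterization; as noted above, the collection efficiency actually falls off gradually, and the 50% criterion is a convention.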

The orientation of the photoreceptors relative to the retinal surface is also relevant to the optical performance of the system. Enoch and his associates58 have studied this area extensively. Their work has shown that the photoreceptors in a normal eye are sheared in such a way as to always have the optical axis of the OS, and IS where appropriate, parallel to a line between the location of the OS on the retina and the center of the exit aperture of the optical system. This ensures optimum collection efficiency for the overall photodetection function. Note that the surface of the exit apertures crosses the optical axis at the 2nd principal point. However, the actual exit aperture for off-axis rays is not concentric with the 2nd principal point.

2.2.2.4 Precision of optical dimensions

One difficulty in physiological optics is the wide variation in individual measurements among different subjects. It is generally found that the location and radius of the different optical elements vary by as much as 0.5 mm. On the other hand, the depth of focus of the image formed at the image plane of the optics is less than 10 microns. For a given eye to perform properly, it must obviously employ a specific set of dimensions that result in an adequate level of focus accuracy over the majority of the retina. This suggests that the eye possesses a considerable ability to optimize itself geometrically; failing this, it exhibits various myopic or hypermetropic conditions, astigmatisms, and so-called refractional scotomas (small areas of poor focus).

A second major difficulty in physiological optics is the determination of the index of refraction of the various structures to an accuracy adequate for detailed optical analysis. The indices are usually determined by immersion of

the specimen in a liquid of a known index which is known not to attack the specimen. Unfortunately, the indices of these test liquids are not very precisely known. Most indices of biologically compatible liquids are only known to three decimal places59. This is reflected in the figures showing the optical properties of photoreceptor cells.

57Winston, R. & Enoch, J. (1971) Retinal cone receptor as an ideal light collector J.O.S.A. vol. 61, pp 1120-21
58Bedell, H. & Enoch, J. (1980) An apparent failure of a photoreceptor alignment mechanism in a human observer Arch. Ophthalm. vol. 98, pp 2023-2026; has a list of references

2.2.3 Coordinates relative to perception

In all animals, establishing how they interpret the signals emanating from their photoreceptors, and how they place that information in a usable context, is important. In most of the phyla Insecta and Mollusca, the information emanating from the photoreceptors can be interpreted with reference to the orientation of the whole body of the animal. The field of view of a given photoreceptor is fixed with respect to the animal's body, no matter what orientation that body assumes. In the phylum Chordata, that is not true. The eye is frequently mounted on an independently rotatable head, and the ability of the eye to rotate freely relative to that head provides additional degrees of freedom in the equations of computational optics.

In this regard, it is of interest to perform a little reality check and to note that the field of awareness in the human is not related to the fixation point of the eye. To do this, while sitting at your desk, note the extent of the total visual field you are aware of; it should be about 180 degrees. Now, move the point of fixation across or up and down the length of a page of paper. What effect did this have on the overall limits of your field of awareness? Clearly, the image created in the human mind is not related to the instantaneous field of view of the retina (of either eye). The fixation point can vary by more than 10 degrees in any direction without affecting the overall field of awareness maintained by the visual system. That is to say, the computational optics of the human visual system does not just interpret the instantaneous image on the retina but maintains a more global field of awareness that can even be described as inertially referenced to either the head or the entire body. The data representing the instantaneous field of view of the individual retina is woven into this larger field of awareness. In the human, there may be two fields of awareness, including other sensory capabilities, related by the short term memory.
The instantaneous field of visual awareness is approximately 180 degrees wide, straddling the median plane of the eye. It is approximately 150 degrees high, beginning from the location of the feet directly below the eyes and extending to about 60 degrees above the horizontal plane of the eye. There is a wider field of awareness for audio and tactile sensations. This wider field may also include a wider visual field of awareness stored in short term memory. For the many grazing animals, the total short term field of visual awareness is much larger than for the human, approaching 360 degrees in the longitudinal plane. The chameleon and its relatives, along with the sea horses, are a special case. Their eyes frequently operate independently with regard to pointing. This capability emphasizes the idea of a nominal instantaneous field of view for each eye coupled to a very wide field of awareness for the animal utilizing short term memory. Both of these families appear to include a small region of binocular vision directly ahead, straddling the median plane of the head.

Figure 2.2.3-1 illustrates a possible method of organizing the information related to the field of awareness in the human computational optics system. The analogy with a networked computer is probably quite reasonable. In this context, each eye operates like a personal computer with a limited field-of-view scanner attached to it. Each scanner has a variable resolution capability based on variable size photodetector pixels. The peripheral computer acts like a “dumb terminal” and has little or no local memory. It is attached to a larger central computer via a Local Area Network (LAN) that collects data from a variety of sensors and maintains a large data file (possibly considered a relational database) encompassing the entire field of awareness. Man's knowledge of data storage techniques is still too primitive to interpret the manner in which the data is stored. However, it is surely a dynamic allocation type of database.

The peripheral computer collects data at a variable resolution over a limited field of view based on instructions from the main computer. Data collected is formatted and then forwarded to the main computer using an appropriate modulation scheme. As in modern man-made systems, the main computer performs many functions. It demodulates the received information and places the data into a specific location in the large data file based on the pointing vector information. This master data file is multilayered, in the informational sense, and includes planes for storing audio, visual and probably tactile information. The data in the planes is at least semipermanent. The main computer also acts as a server. It issues instructions to the sensors to collect update information. These instructions are based on either a priori rules or changes received from one sensor requiring confirmation by another sensor.

59CRC Handbook of Chemistry & Physics (1975) 56th edition. Cleveland, OH: CRC Press pp E-221 & E-222

It appears that the actual human eye works in an open loop mode; the central computer tells the peripheral computer where to look and then places the returned information into the large data file based on its own pointing instructions. This occasionally causes problems in the modern world. For instance, the coordinates of a strobe light may be entered into the large data file erroneously if the light is recorded during the interval before the eyeball has rotated to the prescribed pointing vector.

2.2.4 Other parameters related to embryology and morphology of the eye

The eye is formed of only three layers of tissue. Each layer differentiates into morphologically and functionally distinct areas as described in detail by Nolte at the gross anatomy level60. Lent61 edited a volume discussing the genesis of the visual system in 1992. Hogan also provides a discussion in this area62. A broader presentation at the cytological level, and expanding on Hogan, appears in Section 4.5.1.

Figure 2.2.3-1 Conceptual model of the information handling capability of the eye

2.3 The organization and mixed coordinate systems of the chordate brain

Nolte provides several figures defining the topographic characteristics of the human brain at the introductory level. At a more detailed level, Nolte & Angevine should be consulted63. These descriptions will be addressed more fully in Section 15.1.4 along with Brodmann’s original numerical identifications. 2.3.1 The general organization of the brain

While figure 3-23 of Nolte provides a tabular definition of the major afferent elements of the central nervous system, CNS, based on morphology, it does not show the interrelationship of these elements, particularly with respect to signaling in vision. It omits any discussion of a most important part of the diencephalon, the thalamic reticular nucleus (possibly in conjunction with the fornix and amygdala). It also omits the elements of the efferent signal paths of vision, both within the CNS and in the peripheral nervous system.

It has been common for the neuroscientist focused on the brain to consider the retina as part of the central nervous system, with the optic nerve treated, although not labeled, as a commissure. Section 4.5.1 will develop the perspective of the more general morphogenicist: that the eye is formed of ectodermal tissue and that the part called the retina contains neurosecretory organs just like many other areas of ectodermal tissue (typically hair follicles). Conversely, the part labeled the retinal pigment epithelial layer corresponds to a digestive organ. Neither of these types of organs is found within the neural portion of the brain discussed above. Under this interpretation, the retina, and the rest of the eye, would not be considered part of the CNS. The eyes and the oculomotor nerves and muscles are found within the skull but are considered extra-CNS.

60Nolte, J. (1999) Op. Cit. pg. 398
61Lent, R. ed. (1992) The Visual System from Genesis to Maturity. Boston, MA: Birkhauser
62Hogan, M. & Alvarado, J. & Weddell, J. (1971) Op. Cit. pp 399-401
63Nolte, J. & Angevine, J. (1995) The Human Brain in Photographs and Diagrams. St. Louis, MO: Mosby

Under the above interpretation, the optic nerve of vertebrates can be considered a portion of the peripheral nervous system, PNS, just like any other large bundle of nerves, except that it contains a vascular component and is not enclosed by vertebrae. Continuing to consider the retina as part of the CNS for purposes of discussion, Figure 2.3.1-1 presents the data of Nolte in diagram form but tailored to the visual system. In both figures, the Cerebrum is defined as including both the Diencephalon and the Cerebral hemispheres. These designations arise from different systems of notation and do not agree completely with the following figure. To maintain consistency with other papers, the major sensory organs usually considered part of the brain are shown. However, the break lines and the dashed line associated with the thalamus recognize the alternate notation used in most of the non-vision literature. In those documents, the thalamus is usually considered the terminal portion of the brainstem instead of part of the cerebrum. Nolte & Angevine define the thalamus as distinct from the cerebrum (page 1). They define the diencephalon as the third distinct major portion of the brain with the notation “diencephalon (literally the in-between brain).”

This work will focus on a specific part of the diencephalon that is morphologically and topologically distinct from the remainder of the thalamus. This thalamic reticular nucleus appears to play a key role in the control of the operation of the nervous system. It is closely related to the fornix and the amygdala. These three elements appear to provide the essence of consciousness, “to be conscious and of independent action based on volition.” See Sections 15.2.3.7 & 15.2.5.

Figure 2.3.1-1 Diagrammatic overview of the subdivisions of the CNS significant in vision. Compare to Nolte, 1999. The figure has been expanded to illustrate both the efferent and afferent neural systems. The legends in parentheses relate to the primary function of the element. The numbers in parentheses used by Nolte have been dropped to avoid implying a sequential relationship. As noted by Nolte, some of the functional elements shown overlap the borders implied by the connecting lines. He shows the vestibular nuclei at the junction between the pons and medulla. The break between the Diencephalon and the Thalamus and the dotted line suggest an alternate representation common in the non-vision literature. In those contexts, the thalamus is usually considered the terminal point of the brainstem. The thalamic reticular nucleus takes on a greater role than envisioned by Nolte. See text.

The situation is well illustrated in figure 3-17 of Carpenter & Sutin and other medial views of the human brain64.

For the higher primates, the top caudal part of the thalamus is designated the pulvinar as shown in Figure 2.3.1-2.

64Carpenter, M. & Sutin, J. (1983) Human Neuroanatomy. London: Williams & Wilkins pg 80

Figure 2.3.1-2 CR Caricature showing the position of the Pulvinar, LGN and Thalamus in relation to the old brain. Note the rostral part of the thalamus is to the left. The optic nerve (commissure) merges with the brain at a caudal location. Note also the separation of the commissure proximal to the optic chiasma into two separate bundles going to the LGN and to the brachium of the Superior Colliculus. From Mettler, 1948.

It is very difficult to locate an actual photograph of the elements of the thalamus at the level of detail shown in this caricature. The best pictures were found in Nolte & Angevine and in Jackson & Duncan65. Both of these works

65Jackson, G. & Duncan, J. (1996) MRI Neuroanatomy. NY: Churchill-Livingstone

showed both photographic and MRI imagery of these areas. Nolte & Angevine provide annotated pictures that will be expanded upon later in this work66. The Jackson & Duncan text showed a more extensive series of slices through the brain that allow the relevant structures to be tracked from slice to slice. Orrison shows a coronal MRI image and a caricature of the image showing the relative distance between, and the isolation of, the Superior Colliculus and the thalamus67.

When expanding on the figures in the above works, it is necessary to differentiate between the many nuclei of the thalamus and assign them primary roles. This is difficult to do with the amount of exploratory work already reported in the literature. To circumvent this problem, the term pretectum will be appropriated from the non-primate chordate literature and used to describe those portions of the thalamus that are related to the visual process but do not include the lateral geniculate nuclei. A precise functional distinction between the pretectum and pulvinar may not be possible at this time. As defined, the pretectum includes the so-called pulvinar-LP complex (LP = lateral posterior nucleus), the lateral dorsal nucleus and the ventral posterolateral nuclei (terms not necessarily used consistently by all current authors). The text in parentheses [ xxx what figure, maybe brodal or pansky ] is designed to describe the principal function of the various elements with respect to vision. No vision-related notation is given under some of the elements.

The notation under the various morphological lobes will be addressed below. These assignments differ significantly from Nolte, particularly with respect to the frontal (anterior) lobe. The major task of the frontal lobe is cognition. One result of this process is the generation of high level commands to the parietal lobe to implement actions. These actions are not limited to motor functions. They also include changes in the operating mode of a variety of feature extraction engines.

Nolte has drawn attention to the fact that the functional characteristics of the various elements do not respect morphologically defined boundaries. This is particularly true where a functional element (engine) is found to extend deep into sulci and occasionally reappear on the other side. The folding of the surface of the brain serves primarily to package a large thin sheet of neural material in a minimal volume shielded by bone while maintaining minimum signal delay between the various neural engines (functional elements). The minimal volume reflects the requirement in advanced chordates to minimize the moment of inertia of the head in order to achieve high angular rates of rotation.

Any relationship between a given gyrus and a functional neural engine is a flimsy one. The same situation arises in the thalamus and other elements of the midbrain. As Nolte noted in connection with his figure 15-3 describing the brainstem, many of the morphologically defined neural engines spill over into adjacent areas defined at the next higher morphological level.

Further definitions of the morphological areas of the thalamus are given in Table 16-1 of Nolte. Both their names and a common abbreviation are given. Interestingly, because it has not been as widely studied, the pulvinar is not associated with a common abbreviation.

It is important to note Nolte’s emphasis on the dual character of the neural signals related to the thalamus (a duality generally true throughout the neural system). Although he does not address the signal processing neurons as such, he indicates that projection neurons operating over short distances operate in the tonic mode while those operating over longer distances (commissure neurons) operate in a burst mode (i.e., the phasic mode associated with action potentials).

2.3.2 Coordinate transformations within the visual system

2.3.2.1 The transition from retinotopic to abstract mapping

66Nolte, J. & Angevine, J. (2000) The Human Brain in Photographs and Diagrams, 2nd ed. St. Louis, MO: Mosby pp 124-125
67Orrison, W. (1995) Atlas of Brain Function. NY: Thieme Medical Publishers pp 142-143

The organization of the visual system begins to lose its retinotopic character as the signal paths progress away from the retinas. Within the parietal, temporal and anterior lobes, the organization is not retinotopic. It is essentially abstract and described in terms of a saliency space where the information is in vector form. The vector related to each piece of information contains a tag that relates to its spatial origin. However, this spatial tag is not retinotopic. It defines the location of the source of the information in inertial space as computed from all of the sensory inputs of the animal. Conflict between the various sensory inputs results in the well-known condition of vertigo, since the higher feature extraction engines are not able to arrive at an unequivocal location tag in inertial space.
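The saliency-space description above can be caricatured in code. The sketch below is purely illustrative: the `SaliencyVector` class, the `location_conflict` helper and the tolerance value are hypothetical constructs invented for this example, not structures taken from the neural literature.

```python
from dataclasses import dataclass

@dataclass
class SaliencyVector:
    feature: str       # abstract label produced by feature extraction
    location: tuple    # (x, y, z) tag in inertial space, not retinotopic
    source: str        # sensory modality that produced the estimate

def location_conflict(vectors, tolerance=0.1):
    """Return True when modalities disagree on the inertial location of
    the same feature (the condition the text associates with vertigo)."""
    locations = [v.location for v in vectors]
    ref = locations[0]
    return any(max(abs(a - b) for a, b in zip(ref, loc)) > tolerance
               for loc in locations[1:])

visual = SaliencyVector("horizon", (0.0, 0.0, 1.0), "vision")
vestibular = SaliencyVector("horizon", (0.0, 0.3, 1.0), "vestibular")
print(location_conflict([visual, vestibular]))  # True: conflicting tags
```

The point of the sketch is only that the tag travels with the feature, and that a single unequivocal tag must be computed across modalities.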

2.3.2.2 The transition from abstract to inertial-topic mapping

Many experiments related to the afferent command signals arriving at or leaving the superior colliculus, SC, have suggested a retinotopic relationship. However, this relationship is not in terms of absolute spatial position on the retina. It is more closely related to the absolute position of the excitation source in object space, and more directly to the distance of that position from the line of fixation of the eyes. These signals are used to command the oculomotor, and other, muscles as required to achieve a desired line of fixation. These commands can be generated autonomously or in response to the volition of the anterior lobe. In most experiments, the animal has been constrained to prevent head and body motion. As a result, investigators have found the SC signals indicative only of eye motion. Care must be taken in interpreting such results.
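The distinction drawn above, between an absolute retinal position and an offset from the line of fixation, can be illustrated with a trivial calculation. The function name and coordinate convention below are hypothetical; the point is only that the same target in object space yields different command signals as fixation changes.

```python
def fixation_offset(target_deg, fixation_deg):
    """Offset of a target from the current line of fixation: the quantity
    the text argues the SC-related signals actually encode, rather than
    the target's absolute retinal coordinate."""
    return (target_deg[0] - fixation_deg[0],
            target_deg[1] - fixation_deg[1])

# The same absolute target gives different signals as fixation moves:
target = (10.0, 5.0)  # azimuth, elevation in object space (degrees)
print(fixation_offset(target, (0.0, 0.0)))  # (10.0, 5.0)
print(fixation_offset(target, (8.0, 5.0)))  # (2.0, 0.0)
```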

2.3.3 Initial embryology of the visual system

2.3.3.1 Initial embryology of the brain

Figure 2.3.3-1(B) will be discussed in the next subsection. Figure 2.3.3-1(A) assimilates some of the terminology in the 5th edition of Torrey68, in Pansky et al.69 and in the new edition of Afifi & Bergman70. Many of the figures in these works are taken from the older text by Noback71. The material to the left of the vertical centerline is generally from Torrey. Hamilton has also provided a figure similar to the left half of the above figure72. The material to the right of the centerline is an expansion based on this work using the nomenclature of Afifi & Bergman. The opening paragraph of Afifi & Bergman is particularly relevant to the following discussion. The figure attempts to show the morphogenesis of the brain of a higher chordate as a function of gestation time using three common anatomical forms. The most comprehensive discussion of morphogenesis appears in Noback. However, even this discussion does not address the difference in morphogenesis related to the foveola found in a broad range of higher chordates. Squire et al. have presented additional detail related to the morphogenesis of the overall brain based on Swanson73.

68Feduccia, A. & McCrady, E. (1991) Torrey’s Morphogenesis of the Vertebrates. NY: John Wiley & Sons.
69Pansky, B. Allen, D. & Budd, G. (1988) Review of Neuroscience, 2nd ed. NY: Macmillan Publishing Co.
70Afifi, A. & Bergman, R. (1998) Functional Neuroanatomy. NY: McGraw-Hill
71Noback, C. (1967) The Human Nervous System. NY: Blakiston Div. of McGraw-Hill
72Hamilton, L. (1976) Basic Limbic System Anatomy of the Rat. NY: Plenum Press. pg. 30
73Squire, L. Bloom, F. et al. eds. (2003) Fundamental Neuroscience, 2nd Ed. NY: Academic Press pg 26

Figure 2.3.3-1 The morphogenesis and gross anatomy of the brain related to vision. A; an alternate portrayal. See text. B; a conventional portrayal. The progression from two-part to five-part anatomy traces the morphogenesis of all brains. The figures to the left of the vertical centerline are similar to the fifth edition of Torrey. The four arrows differ from the three pen and ink entries made in Torrey by the new editors of that volume. In this case, the arrows refer to the location of the initial interface between the sensory neurons and the brain. Torrey has the lower arrow pointing to the lowest ventricle in the three-part anatomy, the Rhombencephalon. This area relates more directly to the vestibular than the auditory function. The figure to the right of the centerline represents the proposed interface between the retinas and the brain in greater detail. Paths A and B are afferent paths while path C is efferent. While the shaded areas are frequently discussed in terms of ventricles or “spaces”, they are generally filled with the association fibers and commissures connecting the various engines (which Torrey describes as centers) of the brain.

The literature has generally presented two different topographies for the brain of the chordates depending on their level of morphological development. When examined more closely from a functional perspective, this dichotomy is not supported here. Torrey has exemplified this dichotomy by drawing the three heavy arrows indicating the general areas associated with the senses and then saying that the retinas connect to the diencephalon in the higher chordates and to the mesencephalon in the lower chordates. Hence the left side of the five-part anatomy of the higher chordates shows the retinas interfacing with the diencephalon. This was as far as Torrey went. The arrows in the above figure are drawn at slightly different locations than the arrows (obviously added during editorial review) of Torrey. Since the brain has not completed its genesis at the three-part anatomy stage, the arrows point only to general areas. To go further based on only a two-dimensional drawing is difficult. As an example, the optic chiasm cannot be shown easily within the context of the above drawing.

In addition, there is another level of detail that is critical to the understanding of vision in the higher chordates. These species exhibit a morphological fovea. Of greater importance, they exhibit a physiological foveola within the fovea. The presence or absence of a foveola is a significant physiological dichotomy among the chordates. This dichotomy influences the final morphological implementation of the visual system. Whereas the neural interface between the optic nerves associated with the tectum is called the pretectum in the lower chordates, some authors call it the corpora quadrigemina in the higher chordates. The difference in this case is almost entirely morphological and not functional. Afifi & Bergman discuss this matter in terms of the pretectum being reabsorbed as the corpora quadrigemina is formed.

There is an additional problem in accounting for the portion of the tectum associated with the audio interface. This paired structure (not shown) is located within the area of the corpora quadrigemina. The area is analogous to an area of hills and mountains; the question becomes where to draw the line between a hill and a mountain. The area could be called the corpora sexigemina equally well. The structures of both the thalamus and tectum that relate to vision are generally paired and arranged laterally from the sagittal plane (Pansky et al., pg. 325). Torrey ends its discussion by suggesting that the anterior pair of the corpora quadrigemina are related to vision and the posterior pair are related to hearing. Noback presents the exact opposite impression when discussing the audio system74. He does not differentiate between the nerves coming from the ears and those coming from the nearby vestibular system. This work takes a different view. It suggests corpora sexigemina is a better functional description of the roof of the tectum in all animals.
It proposes that the two anterior bodies, the inferior colliculi, are directly related to vision; the middle pair (which might be a part of the inferior colliculi based on Noback) are related to audition; and the posterior pair, the superior colliculi, are related to the oculomotor functions of vision as well as the motor functions of the body. This structure has a close tie to the vestibular system. This position is consistent with a merging of the figures on page XXX & 325 of Pansky et al.

The graphical description of the above relationships is further clouded by the three flexure points found particularly in the higher chordates. The brain cannot be properly presented on a two-dimensional surface because of these flexures. The flexures allow the brain to be packaged within a skull compatible with a face parallel to the ventral surface of the bipeds and quasi-bipeds. These flexures continue to evolve in humans up to the time of birth. As shown, only the cephalic flexure is collocated with an isthmus. The cervical and pontine flexures actually reverse during the final genesis of the human.

On initial examination, most of the neurons of the optic nerve are found to interface with the thalamus. However, this conclusion is not relevant to the functional performance of the visual system. When explored in detail, the retinas are seen to interface simultaneously with both the bottom of the diencephalon (the lateral geniculate nuclei of the thalamus, labeled A in the figure) and with the top of the mesencephalon (the roof of the tectum). The interface with the roof of the tectum actually consists of two distinct interfaces in the case of at least the higher chordates. There is an interface with the inferior colliculus (usually labeled the pretectum in the lower chordates and labeled B in the figure) and with the superior colliculus (labeled C in the figure). These interfaces are shown explicitly in Pansky et al75. (Pansky et al. pair the terminology pretectum and superior colliculus in their 1988 book, and this terminology is frequently used in this work.) The inferior colliculus interface is associated with the neurons emanating from the foveola and being processed by the Precision Optical System. The superior colliculus interface is associated with the few motor neurons of the ocular globes, which also travel within the optic nerve and support the iris, lens, etc. The neurons interfacing with the inferior colliculus provide the fine spatial resolution associated with the foveola that is so important to the visual system of humans and other advanced chordates (including birds). Hence, while most of the neurons of the optic nerves interface with the thalamus, the most important neurons of the optic nerves interface with the tectum.

2.3.3.1.1 The initial brain as an open-ended tube

74Noback, C. (1967) Op. Cit. pg. 6
75Pansky, B. Allen, D. & Budd, G. (1988) Op. Cit. pg. 137

Frame A of the above figure reproduces the conventional view of the brain evolving from a closed-end tube at the top of the spinal column. Recent activity attempting to computer model the embryology of the brain, and the recognition of the asymmetrical character of the multilayer brain tissue, has suggested a different configuration. The tissue of the brain is a multilayered structure with all of the neurons entering and exiting from one (the inner) surface. The corpus callosum would be an exception to this structure if the brain evolved from a closed tube. If the brain evolved from a tube open on both ends, the formation of the corpus callosum would be straightforward. Frame (B) of the above figure suggests this alternate development. The upper end of the tube pinches into a bifurcated structure. The two bifurcations then turn toward each other, allowing neurons from each hemisphere of the cerebral cortex to communicate via the corpus callosum without penetrating the external surface of the brain tissue.

2.3.3.2 Initial embryology of the eye

As noted above, although most neuroscientists think of the retinas as integral parts of the CNS, a broader view based on morphogenesis would suggest the eyes, including the retinas, are parts of the PNS that happen to be located within the cranium in vertebrates. Section 4.5.1 presents more details on the embryology of the eyes.

2.3.3.3 Initial embryology of the retina

The initial formation of the retina is documented by Reh, writing in Ryan, et. al76. He notes that the ganglion cells appear to form first. Lam & Bray develop the fact that large numbers of ganglion cells die during development77. The available facts suggest that a large number of ganglion cells are formed before the signaling architecture of vision is completely implemented. Subsequently, unneeded ganglion cells atrophy.

2.3.4 The connections between elements of the brain

With the retinas recognized as brain tissue, it is important to also recognize that the historical term optic nerve actually refers to a commissure of the brain. A commissure is a major bundle of neurons connecting distant locations of the central nervous system, CNS. The largest commissure, the corpus callosum, includes over 300 million individual axons. The optic nerve can likewise be considered a commissure. Commissures connect areas of the brain separated by more than a few millimeters. Commissure is the morphological name for elements of the projection stage of the neural system found within the CNS. These neurons employ phasic signal transmission (action potentials). Over shorter distances, the signals are transmitted within the brain using tonic signaling (analog waveforms).

Within this work, all signals traveling from sensory neurons will be considered afferent until they reach the anterior lobe of the cerebral cortex or are redirected by the pretectum. Conversely, all neural signals emanating from the cerebral cortex, and those assembled by the pretectum, will be considered efferent. These terms will be defined more completely in Chapter 11.

2.3.4.1 The connection architecture of vision

In discussing the connections within the brain, it will be shown in Section 15.2.4 that it is critical that the functional (topological) architecture be understood before assignments can be made based on morphology or topography. The visual system is so sophisticated that many inappropriate assertions can be made lacking such background.

2.3.4.1.1 The block diagram

76Reh, T. in Ryan et al. (2001) Retina, 3rd ed. Vol. One, St. Louis, MO: Mosby, Chap 1
77Lam, D. & Bray, G. (1992) Regeneration and Plasticity in the Mammalian Visual System. Cambridge, MA: The MIT Press, pg 31

While the visual system can be described based on many morphological features, the fact that the higher level chordates exhibit two distinctly different signaling paths between the old brain and the new brain is crucial. The second signaling path is the mark of a highly developed brain containing a fovea. The well known path is from the LGN of the thalamus, through the geniculocalcarine fissure, to area 17 of the cerebral cortex. It is primarily concerned with the visual awareness of the space surrounding the animal and has been labeled the awareness path. This path is fundamentally extrafoveal. The second path is less well known and extends from the pulvinar, along the pulvinar pathway, to area 7 of the cerebral cortex. It is concerned with the analysis of the details of a scene projected onto the foveola. It is labeled the analytical path and is only concerned with the neurons emanating from the foveola. The coordinated movements of the eyes, both major and minor (including tremor), over short intervals of time allow the system to analyze important elements of the surrounding environment in a sequential manner. These pathways are not related to the so-called M– and P–pathways, which both follow the above awareness path.

Figure 2.3.4-1 reproduces [Figure 15.2.4-3] to illustrate the overall topology of the visual system in the higher chordates (which does not necessarily include the monkeys). This figure will aid the reader in following the remainder of this section and the references cited. In this context, the term higher chordates refers to the sophistication of their foveas. This group includes many hunters among the mammals and aves. The purpose of the figure is to differentiate between the awareness path along the top of the figure and the analytical path below it. The signals from these two afferent paths are formed initially in the retina and are ultimately merged in area 7 of the cerebral cortex. Note also the efferent path labeled the command path leaving the POT and arriving at the superior colliculus. The Parietal-Occipital-Temporal lobe junction area, POT, is a major signal exchange point within the visual system and probably between all of the major sensory systems. This path highlights the role of the superior colliculus in vision. It clarifies the fact that the superior colliculus is primarily involved in output command (efferent signal) generation regardless of whether the initial stimuli came from an external source or from the cerebral cortex via the volition path. The alarm path is important in this discussion because it frequently introduces responses in the superior colliculus (a part of the POS) not anticipated by the researcher. The stereo path is shown for completeness.
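The two afferent paths and the efferent command path described above can be summarized as a small directed graph. The edge list below is a minimal sketch: the node labels follow the text, but the representation itself is an illustrative convenience, not a claim about the detail of the neural wiring.

```python
# Directed edges summarizing the pathways named in the text.
EDGES = [
    ("retina", "LGN"),                 # awareness path (extrafoveal)
    ("LGN", "area 17"),
    ("area 17", "area 7"),
    ("retina", "pretectum/pulvinar"),  # analytical path (foveola only)
    ("pretectum/pulvinar", "area 7"),  # the two afferent paths merge in area 7
    ("POT", "superior colliculus"),    # efferent command path
]

def downstream(node):
    """List the immediate targets of a node in the edge list."""
    return [b for a, b in EDGES if a == node]

print(downstream("retina"))  # ['LGN', 'pretectum/pulvinar']
```

Walking the edges from "retina" shows the two afferent paths diverging at the retina and reconverging at area 7, as the text describes.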

Figure 2.3.4-1 Reproduction of Figure 15.2.4-3 to illustrate the top level topology of the visual system in the higher chordates. The Pretectum incorporates a series of nuclei, including the pulvinar.

Note the delays shown by the sausage-shaped symbols throughout the figure. These delays are functional features of the signal projection circuits (commissures). These delays have a profound effect on the operation of the visual system even though they have not been recognized in the morphology (or to a great extent in the physiology) of the visual process. The commissures employ phasic signaling as a power saving mechanism and accept the associated time delay. The neural engines employ analog signaling to minimize time delay in the much more complex circuits, and minimize the distances between circuits to minimize power consumption. There are also significant delays associated with Meyer’s loop and the related Reyem’s loop.
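The delay versus power trade-off stated above can be put in back-of-envelope form. Every number in the sketch below (the distances and the conduction velocity) is an illustrative assumption, not a measured physiological value; the sketch shows only how delay scales with path length at a fixed conduction velocity.

```python
def propagation_delay_ms(distance_mm, velocity_m_per_s):
    """Transit delay over a signal path of the given length,
    assuming a fixed conduction velocity."""
    return (distance_mm / 1000.0) / velocity_m_per_s * 1000.0

# Hypothetical figures: a long commissure run vs. a short intra-engine run.
print(propagation_delay_ms(100.0, 10.0))  # delay over a 100 mm commissure (ms)
print(propagation_delay_ms(2.0, 10.0))    # delay over a 2 mm local path (ms)
```

At any fixed velocity the long run costs fifty times the delay of the short run, which is the sense in which the engines "minimize the distances between circuits" while the commissures accept the delay.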

2.3.4.1.2 The morphological picture up to the cerebral cortex

There are many caricatures available describing the brain as applicable to vision. This is primarily because the major vision pathways are so prominent and so much of the brain is devoted to vision. However, these caricatures are difficult to correlate because of the different perspectives of various authors. Newell republishes three different figures describing these pathways (figures 1-47, 1-54 and 2-24) plus a selection of line drawings of the same areas. His figure 1-47, from Glaser78, is reproduced here as Figure 2.3.4-2. It shows the geniculocalcarine tract in some detail. It also introduces the fact that Meyer’s radiation actually divides into a bundle going to the occipital lobe and a bundle going to the parietal lobe. It does not show a separate bundle from the pretectum to the parietal lobe following the pulvinar tract, which will be introduced below.

78Glaser, J. (1978) Neuro-Ophthalmology. Hagerstown, MD: Harper & Row. Also reproduced in Newell, Op. Cit.

Figure 2.3.4-2 The visual-sensory system viewed from the left side. The left cerebral hemisphere has been removed except for a portion for the occipital lobe and the ventricular system. The arrow beneath the third ventricle points to the lateral geniculate body. From Glaser, 1978.

2.3.4.2 The optic nerve between the eyes and the midbrain

Most people are familiar with the optic nerves and their interdigitation that occurs at the optic chiasm. However, they are not generally familiar with the retinotopic localization of afferent nerves within the optic nerve (commissure) and seldom think of the efferent nerves within this bundle that control the iris and lens system of the eyes. Few people are aware of the additional segmentation of the optic tract posterior to the chiasm. These segmentations were shown in the earlier figure and labeled A, B & C. While A & B are afferent, C is efferent. At the detailed level, the termini of these various bundles within the midbrain are quite diffuse. Some appear to extend into the pons and probably relate to the vestibular system.

2.3.4.2.2 The morphological picture for afferent paths

Figure 2.3.4-3 presents the clearest caricature of the afferent neural paths from the eyes to the midbrain79. While Miller drew it with a narrower focus, it shows many other paths.

Figure 2.3.4-3 Caricature of the afferent neural paths between the eyes and the midbrain. Miller notes the probability of a collateral of the visual axons leaves the optic tract before the visual axons synapse in the lateral geniculate body. From Miller, 1985.

The bulk of the optic nerve is shown proceeding to the two lateral geniculate nuclei. However, Miller notes in his caption the probability of a collateral path of vision axons proceeding elsewhere. It is a proposal of this work that this collateral path is actually a key path in the ability of humans to interpret fine detail. It is proposed that approximately 2.5% of the neurons of the optic nerve follow this path to the pretectum. There, the signals from these neurons are processed in a two-dimensional correlator in order to extract the initial interpretation of the content of the scene applied to the foveola. It is proposed that this collateral path is one of the distinguishing features of the human visual system. While it may be shared with the other higher apes, it is probably not shared widely with the lower mammals.

Miller suggests there is a distinct path from specialized sensors in the retina to the pretecto-oculomotor tract that aids in the control of the iris. This is a plausible but not well documented position. An alternate position would assume the necessary information to control the iris is extracted from the information passed to the pretectum over the visual neurons. In either case, signals from the pretectum pass back to the muscles via a series of nuclei. A similar situation arises with regard to control of the lens for purposes of accommodation. It appears probable that the signals to control the lens are extracted in the pretectum from the signals from the foveola.
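The two-dimensional correlator proposed above can be illustrated with a toy example: slide a small template over a scene patch and report the offset of the best match. The arrays, their sizes and the brute-force search are arbitrary choices made for illustration; nothing here models the actual neural circuitry.

```python
def correlate2d_peak(scene, template):
    """Return the (row, col) offset at which the template best matches
    the scene, using a plain cross-correlation score."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(len(scene) - th + 1):
        for c in range(len(scene[0]) - tw + 1):
            score = sum(scene[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if best is None or score > best:
                best, best_pos = score, (r, c)
    return best_pos

scene = [[0, 0, 0, 0],
         [0, 1, 2, 0],
         [0, 3, 4, 0],
         [0, 0, 0, 0]]
template = [[1, 2],
            [3, 4]]
print(correlate2d_peak(scene, template))  # (1, 1)
```

The correlator's output is the location of a pattern rather than the pattern itself, which is the sense in which such a mechanism could extract an "initial interpretation" of the content applied to the foveola.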

79Miller, N. (1985) Walsh and Hoyt’s Clinical Neuro-Ophthalmology, 4th ed. vol. 2, Baltimore, MD: Williams & Wilkins. Also reproduced in Newell, Op. Cit. pg 107

Based on this analysis, three types of signals are extracted by the pretectum: the signals describing the fine detail in the scene projected on the foveola, the signals required to command the oculomotor muscles for purposes of pointing, and the signals required to control the iris and lens.

2.3.4.1.2 The morphological picture for efferent paths

There are two sets of efferent neural paths from the midbrain. The oculomotor group controls the motions of the eyes. The second group controls the iris and the lens inside of the eye. Both of these groups are associated with the servomechanisms that will be described as portions of the Precision Optical System (POS) of the visual system. The POS was formerly known as the auxiliary optical system because its purpose was unknown.

Figure 2.3.4-4 shows the principal neural elements and paths between the midbrain and the eye80. The nomenclature in this figure is somewhat different from that used elsewhere in this work. However, the correlation between terms can be made visually. E-W, Edinger-Westphal parasympathetic sub-nucleus; IR, inferior rectus muscle nucleus; IO, inferior oblique muscle nucleus; MR, medial rectus muscle nucleus; SR, superior rectus; SO, superior oblique; CCN, caudal central nucleus; LR, abducent nucleus for lateral rectus muscle.

2.3.4.3 The connections between the midbrain and the cerebral cortex

Figure 2.3.4-4 Organization of the oculomotor paths, viewed from above, left posterior. See text. From Glaser, 1978.

There are two separate and distinct visual pathways between the old brain (thalamus, pons & medulla) and the new brain (the cerebral cortex). The well known geniculocalcarine pathway leads from the lateral geniculate nuclei to area 17 of the occipital lobe. Historically, area 17 has been considered the primary visual cortex. This is an archaic designation. The alternate pulvinar pathway, between the pulvinar portion of the thalamus (or more specifically the pretectum) and area 7 of the cerebral cortex, plays a crucial role in the superior vision and analytical capabilities of those animals with a fovea. A human can function in the modern world with severe damage to area 17. However, he is greatly constrained by damage to the afferent visual area of area 7 or to the pulvinar and pulvinar pathway. This signal path constitutes the analytical path in human vision. This signal path is most highly developed in Hominoidea (man and the higher, anthropoid, apes). While the rhesus monkey (Cercopithecoidea macacus) is widely used in vision research, this family does not exhibit the full capabilities of the analytical path found in the great apes and man.

2.3.5 Ultimate morphological and topological organization of the brain

2.3.5.1 The morphological environment of the midbrain

To interpret the major signal pathways of the brain in Section 2.6.1, it is important to explore the anatomical terminology associated with the brain. This terminology is currently very confused at the detailed level because of the variety of approaches to the subject. The new text by Nolte, referenced above, provides an abundance of information concerning the brain. However, the text is introductory and does not focus specifically on the details of the visual system. It is not in close agreement with the authors referenced above.

2.3.5.2 Correlating the optic tectum

Vanegas has edited a comprehensive volume comparing the optic tectum in different species. Unfortunately, much of the comparison is left to the reader, as each chapter addresses the optic tectum in a different species81. The text

80Glaser, J. (1978) Op. Cit. Also reproduced in Newell, Op. Cit. pg 59
81Vanegas, H. (1984) Comparative Neurology of the Optic Tectum. NY: Plenum

does not address signaling directly. It relies primarily on the introduction of lesions to ascertain the visual role of different regions. In this respect, the text is concerned primarily with the retinotopic aspects of vision.

In reviewing Vanegas, one rapidly discerns a pattern. As the immature brain matures, the midbrain differentiates into multiple regions associated with vision. While the morphological names applied to these regions may vary between species, their topological function is more consistent. The lateral elements at the front (posterior-most in humans) of the tectum receive the corresponding ipsilateral fields from both eyes. They are generally described as the lateral geniculate bodies. In animals with a fovea, there is a separate area at the front of the tectum that receives and merges the images from the foveola of both eyes. This area has been labeled the pretectum in many of the lower animals and the pulvinar in humans. The vision-related areas posterior to the above areas are generally divided into two lateral pairs, the inferior and superior colliculi.

The roles of the superior colliculi are quite clear: they are a major part of the neural system controlling the pointing of the eyes relative to inertial space. In this role they control the oculomotor system as well as the head and other body muscles controlling the direction of the line of fixation of the eyes. The superior colliculi direct the operation of the various terminal nuclei of the auxiliary optical system. These nuclei directly control the oculomotor system. Because of the importance of this signal path, the auxiliary optical system will be designated the Precision Optical System, POS, in the remainder of this work. The POS is a major component of the Precision Optical Servo-System, POSS, that controls the spatial pointing of the eyes and the extraction of precise information used to perceive, interpret and recognize elements of the visual field. The complete functional description of the POSS and POS will be found in Section 15.2.

2.3.5.3 The lateral geniculate nuclei

The lateral geniculate nuclei, LGN, have been studied for a very long time. They have been mapped extensively by morphological and physiological investigators. They will be discussed more fully in Section 2.8.1.4. There are two primary functions of these nuclei. First, these nuclei correlate the images from the corresponding fields of view of the two eyes, primarily outside of the foveal area. Second, they detect rapid movement in the retinal image due to motion in object space. The results of both of these functions are passed to the pretectum/pulvinar over the stereo path and the alarm path. The signals passed over the alarm path are absolutely critical to the survival of most animals regardless of phylum or species.

The fact that the signal processing carried out within these structures is highly dependent on the time delay introduced by Reyem’s loops has not been recognized until now. Reyem’s loops introduce a time delay that is proportional to the distance of an element of the scene from the line of fixation. By attempting to merge the corresponding signals from the two eyes, this time difference is easily measured, and the information is transmitted to the pulvinar/pretectum area to establish stereo convergence of the eyes. Rapid motions in the scene also generate time differences that are easily measured in the LGN. These time differences are also passed to the pretectum/pulvinar for immediate action.
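For the engineering-minded reader, the delay-measurement principle described above can be caricatured numerically. The sketch below (illustrative only; the signal shapes, sample rate and function names are arbitrary assumptions, and no claim is made that the neural circuitry computes a literal cross-correlation) shows how attempting to merge two copies of a transient recovers the time offset between them:

```python
import numpy as np

def estimate_delay(left, right, dt):
    """Estimate the time offset between two sampled signals by
    locating the peak of their cross-correlation."""
    corr = np.correlate(left - left.mean(), right - right.mean(), mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # convert index to signed lag
    return lag * dt

# Two copies of a transient; the second copy is delayed by 2 ms.
dt = 0.0001                                   # 0.1 ms sampling interval
t = np.arange(0.0, 0.05, dt)
pulse = np.exp(-((t - 0.010) / 0.002) ** 2)
delayed = np.exp(-((t - 0.012) / 0.002) ** 2)

print(estimate_delay(delayed, pulse, dt))     # ~0.002 s
```

The same peak-location operation applied to signals from corresponding retinal regions would report either a stereo disparity or a rapid scene motion, consistent with the two LGN functions listed above.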

The time delays due to Reyem’s loops are not of use after the processing in the LGN. Therefore, these delays are removed by the corresponding Meyer’s loops before the signals reach the cerebral cortex along the awareness path.

2.3.5.4 The Pulvinar, analog of the Pretectum

In recent primate literature, the important role in vision of the region of the thalamus labeled the pulvinar has become clearer. Chalupa has presented considerable material relative to this region82. Nolte has also noted the supervisory role of the pulvinar (page 381). Nomenclature remains a problem in delineating the pulvinar and the associated structures. As indicated above, the pretectum (or optic tectum) may consist of a majority of the anterior part of the thalamus (other than the lateral and median geniculate nuclei) and include both the pulvinar and the lateral posterior nucleus plus other components. In addition, there are a great many caricatures of this region but very few photographs and micrographs. The collage of photographs in Nolte (page 385), attributed to Nolte et al.83 and to Chen et al., is the best available. It shows the location of one lateral zone consisting of pulvinar, lateral geniculate, medial geniculate and reticular nuclei simultaneously, and also their location within the broader context of one of the two thalamus structures. These photographs can be correlated with the caricatures of Chalupa

82Chalupa, L. (1991) Visual function of the Pulvinar. Chapter 6 in Leventhal, A. ed. The Neural Basis of Visual Function. Vol. 4 of Vision and Visual Dysfunction, Cronly-Dillon, xxx general ed.
83Nolte, J. & Angevine, J. (1995) Op. Cit.

and of Brodal. The details shown in the caricature of Brodal, Figure 2.3.5-1, form a good reference for further discussion84. Note the reference in Brodal’s text (2004, page 170) to the brachium of the superior colliculus, SC, as a part of the auditory modality, along with the MG. Further details, based on sectioning of the structure, are shown in Figure 16-16 and Table 16-2 of Nolte.

Figure 2.3.5-1 Three-dimensional view of the right human thalamus seen from the dorsolateral aspect. Abbreviations for thalamic nuclei: A, anterior; CM, centromedian; Int. lam., intralaminar; LD, LP, lateralis dorsalis and posterior; LG, lateral geniculate body; MD, dorsomedial; ML, midline; P, pulvinar; R, reticular; VA, ventralis anterior; VL, ventralis lateralis; VPL, VPM, ventralis posterior lateralis and medialis. Other abbreviations: ac, acoustic input through brachium of inferior colliculus (aka through medial geniculate nucleus, MG, and perigeniculate nucleus, PGN; not shown); cereb., cerebellar input; med. l., medial lemniscus; opt., optic tract; pall., pallidal inputs; sp. th., spinothalamic tract; trig., trigeminal input. From Brodal, 1981.

While all of the elements in the above figure are identified as morphologically separate, this work will continue to consider them all (except the MG and LG) as part of the stage 4 elements of the pulvinar. Not shown in this figure is the thalamic reticular nucleus, TRN, forming an outer shell covering virtually all of the elements shown and acting as the supervisory and switching engine controlling most interconnections between the thalamus and the cerebrum, and probably the cerebellum as well. Schall85 defines the pulvinar as consisting of four nuclei (the medial, lateral, inferior and anterior) that are distinguishable based on their connectivity and functional parameters. [xxx what about functional parameters. how does figure compare to Brodal.] Schall makes the interesting observation that the increase in size of the pulvinar in primates parallels that of the extrastriate visual cortex (presumably following the definition of Spear). When looking only at the primates, this correlation, whether volumetric or area related, may not be that good. Schall also

84Brodal, A. (1981) Neurological Anatomy in Relation to Clinical Medicine, 3rd ed. NY: Oxford University Press. Subsequent versions, 1992, 1998 & 2004, by his son, Per Brodal.
85Schall, J. (1991) Neural basis of saccadic eye movements in primates. Chapter 15 in Leventhal, A. Op. Cit.

provides an extensive bibliography on connectivity between the pulvinar and other regions of the brain, as well as the more general connectivity between regions. His figure 15.11 is believed to contain too many reciprocal paths between engines of the brain. Otherwise it shows a strong familial resemblance to [Figure 2.3.4-1] above. It may be that many of the antidromic circuits in these reciprocal paths are what are called supervisory circuits in communication. Supervisory circuits are low capacity paths used only to report the operational status of an engine to, in this case, the pulvinar. These paths will be discussed further in Sections 2.8 & 15.2.4. In Section 15.2.4, the critical role of the pretectum/pulvinar as supervisor of visual signal processing will be stressed. In the context of figure 15.11, the pretectum might be defined as incorporating his pregeniculate nucleus, his pulvinar and his central thalamus. Further discussion of the functional characteristics described in Schall will be found in Chapters 15 and 17.

2.3.5.5 The superior colliculus

The nuclei of the superior colliculus, SC, are of significant size and perform a wide range of complex signal manipulations. These manipulations are complex because the signals received from the POT are fully vectorized in saliency space while those received from within the POS are more retinotopic in character. The SC must process both types of signals as well as those received from the vestibular system. It is frequently proposed that the signals at the output of the SC are retinotopic. However, in general, they relate to the position of the elements of a scene relative to the line of fixation rather than the absolute position of the scene elements. These relative values are used to generate the signals that control the oculomotor muscles shown in the box labeled Plant in the above figure.

2.3.5.6 The cerebellum

The role of the cerebellum will be discussed in more detail in Chapters 11 and 15. It has not offered the morphologist much to work on. Its functional role is highly time sensitive and difficult to determine based on past investigatory techniques. Its role has previously been determined primarily from functional limitations following serious laceration of the element. Nolte has noted the recent proposals that the cerebellum is involved in cognitive functions. In this work, that assertion will be reworded to propose that the cerebellum is involved in perceptual and interpretive activities, in association with the pretectum/pulvinar, leading to cognition within the cerebral cortex. Brodal has provided an annotated topographic view of the cerebellum in humans (also reproduced in Brown86) as well as showing some of the main interconnections with other elements of the brain. The primary area associated with vision is believed to be the vermis. Schall provides references to the vision-related activities of the vermis.
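The fast two-dimensional correlator role attributed to the cerebellum in this work can be caricatured with a toy normalized pattern matcher (purely a sketch; the patch size, noise level, thresholds and function names are arbitrary assumptions, not a model of cerebellar circuitry):

```python
import numpy as np

def correlation_score(patch, template):
    """Normalized cross-correlation between an image patch and a
    stored template; 1.0 indicates a perfect match."""
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.mean(p * t))

rng = np.random.default_rng(0)
template = rng.standard_normal((16, 16))            # a stored pattern
match = template + 0.1 * rng.standard_normal((16, 16))   # same pattern, noisy
nonmatch = rng.standard_normal((16, 16))            # an unrelated pattern

print(correlation_score(match, template))     # near 1.0: report a match
print(correlation_score(nonmatch, template))  # near 0.0: no recognition
```

A recognizer of this sort reports a single scalar (or, with several templates, a short vector) per comparison, which is in the spirit of the vectorial match message described below rather than a retinotopic image.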

The major role of the cerebellum is in interpretation of the image projected onto the foveola, where it acts as a very fast two-dimensional correlator. In this role, it receives information from the 23,000 photoreceptors of the foveola that is time correlated, and it attempts to match previously encountered patterns. Individual distinct patterns are received continuously at intervals of only 10-30 milliseconds. When the cerebellum accomplishes a match, it reports the fact to the cerebral cortex (probably by way of the pretectum/pulvinar) via a message in vectorial form. If it fails to recognize an image, it may record the correlation factor obtained for future correlation purposes.

2.3.6 The morphological environment of the cerebral cortex

It is not often recognized that the cerebral cortex (new brain or neo-cortex) is not a three-dimensional or volumetric structure. The cerebral cortex is basically a large thin sheet (thickness equal to a few sheets of paper) that has been folded extensively to fit into the relatively small volume available and thereby accommodate other operational constraints involving signaling delays and moment-of-inertia requirements relative to the motion of the animal. The arrangement of the brain within the skull is a prime example of the fact that form follows functional requirements in the anatomy of an animal, contrary to the common view of morphologists that function follows form.

The entire surface of the sheet is available to support engines (large groups of neurons focused on a specific task) without regard to the folding process. While investigators have historically differentiated between the gyri (the plateaus between the folds) and the sulci (the folds), recent work has noted the importance of the tissue within the

86Brown, A. (1991) Nerve Cells and Nervous Systems. NY: Springer-Verlag pg 193

folds in neural activity. It has also become more widely known that the locations of the sulci are like fingerprints: they are not common at the detailed level. Only the locations of the main sulci between the individual lobes appear to be consistent within a species. Across species, they reflect the difference in packaging requirements. While the human cerebral cortex is bilateral, it is usually described as formed of five or six lobes. Most of these lobes exhibit a bilateral symmetry of their own. Only the temporal lobes do not, since they are a lateral pair that do not individually cross the line of symmetry. The limbic lobe is essentially hidden within the outer corona of the cerebral hemispheres.

The posterior portion of the occipital lobe (area 17 in Brodmann’s notation) exhibits a unique morphological surface that has given this region the alternate title of the striate cortex. However, this striate appearance also appears elsewhere in the brain. Since this area has long been known to be involved in vision, the extrastriate areas were originally thought not to participate in vision. It is now recognized that neural engines impacting the overall visual process are found throughout the brain. This is also true of the individual lobes of the cerebral cortex. Recently, the critical importance of area 7 of the cerebral cortex and the pulvinar to the visual process has deprecated the title “primary visual cortex” associated with area 17. The functional role of area 17 is not unlike that of the LGN. It performs relatively low level image information correlation. In this role, it exhibits a high level of retinotopicity. As the signals are passed from area 17 to areas 18-22, they progressively lose this retinotopicity as the information is translated into saliency vectors that are employed in cognition.

2.3.6.x The definition of extrastriate visual cortex

The character of the surface of area 17 of the occipital lobe is probably related to its correlation function. This low level function also calls for a high degree of retinotopicity. It may be that the striations (corrugations on a small scale) are another way to increase the amount of surface area available in a restricted space. In any case, it has been common to speak of the extrastriate area of the cerebral cortex in a morphological context. Spear has provided a definition for purposes of a specific discussion that is not wholly satisfactory in a general context87. He limits the extrastriate visual cortex to areas that respond to light and contain a retinotopic relationship to the original retinal image. This definition restricts the term primarily to the areas near area 17 of the occipital lobe and possibly some areas of the temporal lobes, depending on one’s criteria for the degree of retinotopicity.

There are many areas of the cortex that are critical to the visual process. These areas respond to light but operate entirely in a vector space devoid of retinotopicity. In some cases, they exhibit a topographical relationship to object space in the context of an inertial system but not to the coordinates of the retina. The term extrastriate is probably archaic at this time. It does not differentiate the functional areas of the cortex involved in vision from other areas. While providing an annotated caricature of the cat brain, Spear also shows the great disparity between the cat and human brain. It is safe to conclude that the cat brain does not exhibit or contain a highly developed pretectum/pulvinar and the associated analytical path found in the human brain.

Motter has discussed the ramifications associated with areas beyond the extrastriate cortex as defined by Spear88. His discussion was largely conceptual and was introductory to a discussion by Schall89. While primarily discussing the functional aspects of parts of the brain, it contains many additional morphological details.

2.4 The Visual Optical System

2.4.1 Overview

The optical systems of the eyes of animals have not received rigorous attention from the perspective of an optical

87Spear, P. Functions of extrastriate visual cortex in non-primate species. Chapter 13 in Leventhal, A. ed. The Neural Basis of Visual Function. Vol. 4 of Leventhal, A. general ed. Vision and Visual Dysfunction. Boca Raton, FL: CRC Press pg 339
88Motter, B. (1991) Beyond extrastriate cortex: The parietal visual system. Chapter 14 in Leventhal, A. ed. The Neural Basis of Visual Function. Vol. 4 of Leventhal, A. general ed. Vision and Visual Dysfunction. Boca Raton, FL: CRC Press pg 339
89Schall, J. (1991) Neural basis of saccadic eye movements in primates. Chapter 15 in Leventhal, A. Op. Cit.

engineer (to avoid confusion with the clinical use of the term optician). No record could be found of an actual ray trace of the visual system beyond some very simple traces based on Gaussian optics. The rules of Gaussian optics are not applicable to any visual system except possibly for some simple eyes in insects. There has also been little recognition of the relevance of the motion of the eye relative to object space in defining the perceived scene. This is because of the common underlying assumption that visual systems employ integrating detectors similar to photographic film. This conceptual problem is highlighted by Hubel, who addresses it by saying: “It is as if the visual system, after going to the trouble to make movement a powerful stimulus–wiring up cells so as to be insensitive to stationary objects–had then to invent microsaccades to make stationary objects visible.” (Hubel is speaking of the mechanism of tremor when he speaks of the phenomenon of microsaccades.)90

By putting the evolution of the eye into perspective, a different interpretation arises from that of Hubel, including his comments on page 69. Most animals do not have the capability to image a scene. Their visual system is tailored to satisfy their need for detecting a change in their visual field and taking evasive action if required. These animals are starers, not imagers. They have not evolved to the point where they need microsaccades, and their eyes generally do not incorporate such motion. It is only the chordates and the higher molluscs that are able to image a scene. The chordates accomplish this by mounting their eyes in a gimbal capable of pure rotation. It is the rotation of the line of sight with respect to the scene and a full field (vectorized) memory in the cortex that leads to imaging using a change detector. The higher molluscs do not have a gimballed eye. However, they have evolved a configuration that provides the same capability over a narrow field: a hinged eye.

2.4.1.1 General Discussion

The most basic description of the eye is that of a camera obscura. Camera is from the Latin for chamber or vault. A camera obscura is a chamber or vault with only one aperture in it, usually but not necessarily covered by a lens. This definition says nothing about the light-sensitive medium inside, if any. Early versions of the camera obscura were large enough to allow people to walk into them and observe an image on a wall or table with their eyes. No light-sensitive “films” were used in these cameras. Dwelling on this is appropriate because the eye is not a camera in the popular sense of a simple 35 mm camera with a light-integrating photosensitive surface. The photosensitive surface is in fact not a light integrator, like photographic film or a television camera, but a change detector more like the detectors used in homing-type military missiles. This fact will be developed more completely later.

Close examination of the animal eye quickly discloses its sophistication. The overall optical design of the animal eye is much more advanced than all but a very few optical designs by man. Figure 2.4.1-1 illustrates a variety of animal eyes from Prince91. Prince provides a wide-ranging discussion of the anatomical optimizations found in different species. The discussion includes the more sophisticated double and strip foveas of various types of hunters. He gives tabulations of animals’ double foveas and many other features. A recent book by Land & Nilsson provides a very wide view of animal vision92. Their Tables 3.1 and 3.2 are particularly useful in comparing vision among different species. Unfortunately, Table 3.2 only shows the sensitivity of humans under photopic conditions while the other animals are shown under scotopic conditions. The human eye is at least as sensitive as that of Limulus under scotopic conditions. They also provide a good summary of the forms of the pupil in different species.

Figure 2.4.1-1 A variety of animal eye profiles. Some show the extended fovea characteristic of their species. From Prince (1956)

90Hubel, D. (1988) Eye, brain, and vision. NY: Scientific American Library, pg. 81
91Prince, J. (1956) Comparative anatomy of the eye. Springfield, IL: Charles C. Thomas pg. 284

92Land, M. & Nilsson, D-E. (2002) Animal Eyes. Oxford: Oxford Univ. Press.

Figure 2.4.1-2, also from Prince, presents a caricature of crystalline lenses from a variety of animals, mostly from Mollusca. The variation among species shown in these figures is considerable and several conclusions can be inferred from careful study of them:

+ Having a lens in an eye is not necessary, but it does increase performance.
+ The lenses show a great deal of variation in shape, and in some species the shape can be changed slightly to provide accommodation.
+ The wide variety of shapes of the retinal surface implies a great deal of flexibility in the optical design and the materials used therein. This leads to the expectation that non-spherical optics and gradient index materials are used widely in the animal kingdom.

The simple spherical crystalline lens of the snail is reminiscent of the Baker aerial camera lens of World War II93. This lens provided an extremely wide field of view while maintaining a very high resolution.

The eye of Pecten (Mollusca Pelecypoda) is particularly interesting. It exhibits not only two separate retinas but also two separate optical systems, one dioptric and one catadioptric.

2.4.1.1.1 Shape of the eyes

The shape of the optical system of an animal tells a great deal about its state of evolution. In insects and the majority of molluscs, the optical system is hard mounted to the structure of the animal. It does not rotate independently. In the higher molluscs, the optical system is hinged to allow a small amount of motion relative to the other structures. In Chordata, all eyes are mounted in a gimbal. However, the degree of rotation of that gimbal may be quite low. This is the case in the majority of fish and even the sperm whale, as noted in the figure. This degree of rotation allows small saccades and microsaccades but not large saccades. To achieve large changes in the line of sight, the animal must swivel its head. It is only in the higher chordates that the eyes swivel through a large angle. For optimum packaging in a pointable system, the exterior of the total package must be nearly spherical. This implies that the image plane must be approximately spherical as well, not planar as in most of man’s designs.

2.4.1.1.2 Dimensions of the eyes

There is a popular mythology that giant squids live in the depths of the oceans and have similarly giant eyes. However, the data does not support this projection.

Figure 2.4.1-2 CR A variety of crystalline lenses, from both Arthropoda and Mollusca. From Prince (1956)

93Kingslake, R. (1983) Optical system design. NY: Academic Press pg. 275

Oyster94 has presented a graph, based on Hughes, showing the axial length of the eye versus body weight. The curve is asymptotic at less than 80 mm (3.15 inches). The eye of the baleen whale is only slightly larger than that of the elephant. Although the graph presents only the axial length of the eyes, it would be extraordinary to find a mollusc eye with a diameter significantly greater than its length. In addition, it would be extraordinary to find a mollusc eye with an f/# lower than f/2.0. If the giant squid has an eye as big in diameter as a “saucer,” the saucer is apparently quite small and the pupil of the eye is even smaller.

The dimensions of the human eye have been tabulated by a variety of investigators (but frequently not to the precision required for a complete optical analysis). These values are discussed below and tabulated in Appendix L. The data is less well documented for other species. Perry & Cowey have provided considerable information for the eyes of monkeys95.

2.4.1.1.3 Optical environment of the eyes

The literature does not generally recognize that there are two fundamentally different operating environments for animal eyes. A large number of animals live an aquatic life, where the index of refraction is nearly the same on both sides of the lenses of their eyes. The other group consists of the terrestrial animals. The index of refraction of the air on one side of their cornea is quite different from the index of the bodily fluids on the other side. Optics used by this second group are defined as immersion optics, from the common use of the term in the laboratory microscope.

There are two important conditions highlighted by this situation. First, optical rays passing through a surface separating two regions of different index of refraction do not follow a straight line (except for a perpendicular ray). Second, a single surface separating two such regions constitutes a lens. A lens is not defined by two surfaces arranged back to back and enclosing a material of a given index of refraction. A lens is a single non-planar surface separating two materials of different index. The curvature of the lens is defined with relationship to the index difference. A portion of a cylinder or sphere is convex if its center is on the side of the higher index material. Otherwise, it is concave. This will be important in discussing the photoreceptor cell in Chapter 4.

The above definition of a lens is critical to the understanding of the terrestrial animal eye. In general, it is the cornea and not the “lens” which is the primary optical element in the terrestrial eye. For the aquatic animal, it is the “lens” that is the primary optical element.
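The single-surface definition of a lens leads directly to the paraxial surface-power formula, P = (n2 − n1)/R. The short sketch below uses rough textbook-style values for the corneal radius and the indices involved (these numbers are illustrative assumptions, not the parameters tabulated in this work) to show why the cornea dominates in air but contributes almost nothing under water:

```python
def surface_power(n1, n2, radius_m):
    """Refractive power, in diopters, of a single curved surface
    separating media of index n1 and n2 (paraxial formula)."""
    return (n2 - n1) / radius_m

R = 0.0078                                  # anterior corneal radius, ~7.8 mm
n_air, n_water, n_aqueous = 1.000, 1.333, 1.336

print(surface_power(n_air, n_aqueous, R))   # ~43 D: the cornea dominates in air
print(surface_power(n_water, n_aqueous, R)) # ~0.4 D: nearly powerless under water
```

The two printed values make the closing point of this section quantitative: a terrestrial eye gets most of its power from the air-cornea surface, while an aquatic eye must obtain its power from the crystalline lens.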

2.4.1.2 Fundamentals of optical analysis

The analysis of an optical system cannot be done on a piecemeal basis. In the general case, the concepts and mechanisms involved in the interpretation of an optical system involve mathematics that cannot be factored along lines providing a single function that can be related to each optical element. The corollary to this fact is that simplified optical diagrams generally do not present a good approximation to the actual case. This is critically obvious when one compares the ray traces applicable to a real optical design with the ray traces applicable to a simplified, or Gaussian, approximation to that real optical design.

There have always been problems due to the limits of the technology of the day. Initially, the details of the geometry of the eye and the index of refraction of the optical materials were not known with sufficient accuracy to compute the optical performance of the eye. Furthermore, the complexity and sheer volume of the computations necessary to define the optical performance of the eye were simply beyond the capability of the investigators. In a similar vein, the diffraction limited performance of even the most modern microscope has limited the ability of the cytologist to define the structures found within the retina of the eye. When using the microscope to examine high contrast edges near the diffraction limit of the device, strange effects are observed that must be allowed for in the analysis of the results. They are frequently not discussed. Use of the modern digital computer and the electron microscope has raised the understanding of the physiological

94Oyster, C. (1999) The human eye. Sunderland, MA: Sinauer Associates, Inc. pg. 17
95Perry, V. & Cowey, A. (1985) The ganglion cell and cone distributions in the monkey’s retina. Vision Res. vol. 25, no. 12, pp 1795-1810

optics of the eye to new highs. Based on this level of understanding, reviewing the performance of the human eye in a modern context is appropriate.

2.4.1.2.1 Ray tracing

In the 1670s, Newton developed the fundamental laws of optics and the new mathematics called the calculus. He was able to demonstrate that these tools did allow the complete conceptual understanding of any optical system. In practice, they could only be used to analyze simple optical systems, primarily a simple sphere, a pure paraboloid and a single “thin” lens. This was due to the complex and voluminous mathematical calculations required. However, it was common practice then, as now, to leave the detailed and laborious calculations to the student.

Even to perform the calculations presented by Newton, he adopted a basic simplification that is still used to this day. He expanded the sine function into an algebraic series and then replaced the sine function by the first term in that series, i.e., let sin x = x. The resulting “first order” optics was adequate for what is known as the paraxial case in optics, where sin x is essentially given by x. This condition requires that rays of light passing through an optical system be both close to the optical axis and parallel to that axis. This class of first order optics, or paraxial optics, or Gaussian optics is widely used in teaching and occasionally in preliminary design of simple optical devices. Even with this simplification, optical analysis becomes very difficult if the optical surfaces are not spherical or do not conform to a conic section. Paraxial, or Gaussian, optics does not address the “third order” (Seidel) aberrations or the multitude of “fifth order” optical aberrations. These are the terms that address the bulk of the aberrations in an optical system, the distortions in the image, and the curvature of the focal plane. Since it is restricted to the paraxial case, Gaussian optics does not adequately deal with aperture stops, such as the iris.
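The cost of the sin x = x substitution is easy to quantify with a simple numerical check (the specific angles chosen below are arbitrary). For the wide-field eye, rays far from the axis quickly exceed the accuracy of the first-order approximation:

```python
import math

def paraxial_error(angle_deg):
    """Relative error introduced by the first-order replacement sin(x) -> x."""
    x = math.radians(angle_deg)
    return (x - math.sin(x)) / math.sin(x)

for deg in (1, 5, 10, 20, 40):
    print(f"{deg:2d} deg: {100 * paraxial_error(deg):.3f}% error")
```

At one degree off axis the approximation is essentially exact, but by 40 degrees (well within the field of the human eye) the error approaches ten percent, which is why Gaussian optics cannot describe the eye's wide-field performance.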
With an optical system like the eye that uses very sophisticated techniques, many of which man seldom uses to this day, it is important that the exact laws of optics be used in the analysis. Previously, attempts were made to develop “Standard Eyes” which emulated the human eye sufficiently well to allow simple calculations in optometry. Performing a complete ray trace of the optical system is now possible with the very sophisticated computer programs available and based on the improved parameters, both physical and geometric, of the eye.

To perform a complete ray trace, the software package must be able to handle gradient-index optical elements and anamorphic optical elements whose focal length varies with angle from the axis. Many simple programs can handle only simple anamorphic elements such as cylinders.

In the mathematical analysis of any optical system, certain important, or cardinal, points have been defined that aid the overall analysis immensely. In a complete analysis of any optical system, there are four cardinal points and an aperture stop that are always defined mathematically. These points define the focal length of the optical system referred to the entrance side of the system, F, the focal length referred to the exit side of the system, F’, and the intersections with the optical axis of the two planes that define the principal points, P & P’. Although these planes are not necessarily flat, they are usually shown flat in top level drawings of man-made optical systems.

Figure 2.4.1-3 shows these geometrical relationships as described by Malacara96. Note that the entrance pupil and the exit pupil of physical optics are not directly related to the pupil of physiology. The pupil of physiology, formed by the iris, is known in optics as the aperture stop. In complex systems, the entrance pupil, aperture stop and exit pupil may appear in any order relative to object space. Because the index of refraction of the fluids within the eye is not equal to that of air, the optical system used by air-breathing animals must be considered an immersed optical system. Because of the high curvature of the cornea compared to its diameter, this lens alone must be considered a thick lens. With multiple lenses separated by a finite distance, the lens group of the human eye must be considered a thick-lens system. These requirements, and the wide field of view provided by the optical system, require the use of complete (sometimes described as Newtonian or Maxwellian) optics, as opposed to Gaussian optics, to interpret the performance of the system. The apparent shape of the entrance pupil as viewed from the object may or may not be circular, depending on the alignment of the optical elements and the location in object space. This situation is described in detail on page 38 of Malacara.

96Malacara, D. (1988) Geometrical and Instrumental Optics, vol. 25 in Methods of Experimental Physics. NY: Academic Press pp 23-38

Figure 2.4.1-3 Principal rays and points in a non-immersed optical system. The b-ray is known as the principal ray. The a-ray is known as the limiting ray. When the index of refraction of the material to the right of the first lens differs from the index on the left, the system is known as an immersed optical system. In an immersed system, the b-ray on the right is not parallel to the b-ray on the left. This makes it very important to define whether eccentricity angles are measured in object space or image space.

Besides the mathematical description of the shape and position of every optical surface, knowing the index of refraction of every medium between these surfaces is necessary. In most man-made designs, the media containing the object and the image have the same index, usually that of air. The index of air is so close to one (1.00032) that it is usually given only in a footnote on a drawing. This is not always the case in vision; in terrestrial animals, the index of the material between the last optical surface and the image is usually 33% higher than the index of the air in object space. This has a profound effect on the operation of the overall design. Designs of this type can be described in analogy to an immersion microscope and will be defined as “immersion optics” in this work.
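As a hedged illustration of the immersion effect described above, the following sketch applies Snell’s law at a single air/humor boundary. The index values are the nominal figures quoted in this chapter, not a complete eye model:

```python
import math

N_AIR = 1.00032       # index of air (from the text)
N_HUMOR = 1.336       # nominal index of the ocular humors (from the text)

def refracted_angle_deg(incident_deg, n1=N_AIR, n2=N_HUMOR):
    """Exact Snell's-law refraction: n1*sin(t1) = n2*sin(t2)."""
    t1 = math.radians(incident_deg)
    return math.degrees(math.asin(n1 * math.sin(t1) / n2))

# A ray arriving at 1 arc-minute continues inside at roughly 1/1.336
# of that angle, not at 1 arc-minute as the paraxial teaching suggests.
one_arcmin = 1.0 / 60.0
inside = refracted_angle_deg(one_arcmin)
print(f"{one_arcmin:.5f} deg outside -> {inside:.5f} deg inside")
```

At small angles the exact result reduces to the simple ratio of the indices, which is the point made later about the one-arc-minute ray.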

A serious problem has arisen in the teaching of elementary optics to clinical investigators. It has been compounded by Miller97. On page 7.6, he and the latest editors of volume 1 have promulgated the idea that a ray approaching the human eye in air at one minute of arc from the axis continues within the eye at one minute of arc from the axis. This is not correct. The interior ray continues at an angle given by the external angle divided by the index of refraction of the vitreous humor relative to air. The figure they present is also misleading in showing a large E with different proportions than the E of Snellen’s eye chart. Their presentation has led to studies such as that of Hueter & Gruber, who discovered this factor due to the index but assigned it to the wrong situation98. They treat the terrestrial eye as the normal one and treat the aquatic eye as using immersion optics. Their experiment conveniently used air in place of the vitreous humor of the aquatic eye, thus treating it as an immersed optical system, with a different index material on each side of the lens group, when it is not.

Showing a paraxial ray and an oblique ray passing through the optical system is common in man-made optical designs, particularly de-centered designs. The oblique ray is usually drawn to help define the aperture stop of the system. In one sense, this element controls how much light can pass through the system. In another sense, it determines the widest field angle that the system can accommodate. The location of the principal planes is significantly affected by the shape of the lens(es), the indices of refraction on each side of the optical system and the position of the aperture stop.

Because of the geometry involved, the actual size of the aperture stop is usually not the size it appears to be when viewed from the entrance and/or exit side of the optical system. These two apparent sizes are represented by the entrance pupil and the exit pupil respectively. In wide field angle systems, the size and location of the entrance and exit apertures may be quite different from the actual aperture stop(s); they are frequently functions of the field angle. In such systems, describing the bundle of optical rays passing through the optical system is sometimes difficult. An oblique ray passing through the center of the entrance aperture is called a principal ray. A principal ray will appear to exit from the center of the exit aperture. This ray may or may not be located in the center of the bundle of rays. The ray at the center of a bundle of rays is called the chief ray. More terms will be defined in Appendix L for use in the full ray trace presented there.

97Miller, D. (1991) Optics and Refraction: a user-friendly guide. Vol 1, Textbook of Ophthalmology, Podos, S. & Yanoff, M. ed. NY: Gower Medical Publishing, pg. 7.6
98Hueter, R. & Gruber, S. (1980) Retinoscopy of aquatic eyes. Vision Res. vol. 20, pp 197-200

With the advent of ophthalmology in the 1800s, it was found unwieldy to use a full optical analysis of the eye, even conceptually, when the main point of interest was improving the on-axis, or nearly on-axis, performance of a patient’s eyes. The goal was convenience and simplicity. Therefore, certain approximations were widely adopted based on the work of Listing. He chose to:

+ use only first order, or Gaussian, optics, which in turn limited the applicability of his work to the paraxial region;
+ ignore the field stop, which played an insignificant role in the paraxial case; and
+ define two additional cardinal points that only apply in the paraxial case.

This work was carried forward by a number of workers. An optical description of the human eye drawn from the work of LeGrand is shown in Figure 2.4.1-4(a), with the index of refraction added by this author. The value of 1.336 for the index of refraction is a widely used nominal value for the eye. As discussed below, the cornea, aqueous humor and lens vary systematically from this value. LeGrand was well aware of the off-axis performance of the human eye, as documented by Lotmar99 and others. However, most subsequent authors have only referenced or discussed the first-order, or Gaussian, optics of LeGrand’s complete specification. Note that the image is shown as a straight line in the focal plane, corresponding to the straight object in object space.

The two additional terms, known as the nodal points, N & N’, were defined by a rearrangement of terms in the basic first order equation of paraxial optics, known as the lensmaker’s equation. The aim was to simplify the everyday use of this equation by ophthalmologists. Thus, the new cardinal points, called the nodal points, were defined to include the index of refraction of the vitreous humor in their value. The second of these new nodal points, the exit nodal point, lies at a distance from the exit focal point equal to the distance to the exit principal point divided by the index of refraction on the exit side of the optic. The other, or first, nodal point is the same distance from the second nodal point as the spacing between the original two principal points. Although these new cardinal points were artificial, they were convenient. They allowed the teacher or practitioner to define a new “chief ray,” for the paraxial case only, through the optical system. This chief ray leaves the exit nodal point at the same angle as it encountered the entrance nodal point. This chief paraxial ray is not a principal ray and does not pass through the principal points of the system.
Furthermore, this paraxial chief ray does not exist for off-axis ray bundles. It should be noted that the distance, f, now appears on both sides of the drawing. On the left side, it is known as the object focal length. On the right, it is known as the image nodal distance, to distinguish it from the actual image focal length shown as f’.
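The nodal-point construction just described can be sketched numerically. This is an illustrative paraxial calculation only; the principal-point spacing and focal length used are placeholder values loosely patterned on the human eye, not a standard-eye prescription:

```python
def nodal_points(p, p_prime, f_image, n_image, n_object=1.0):
    """Return axial positions (N, N') for a system with principal points
    at p and p_prime, image-side focal length f_image, and the given
    refractive indices on each side of the optic."""
    # For object space in air, the object focal length is f = f'*(n/n').
    f_object = f_image * (n_object / n_image)
    f_prime_pos = p_prime + f_image      # position of the exit focal point F'
    # N' sits ahead of F' by the object focal length; equivalently the
    # distance F'N' equals P'F' divided by the exit-side index, as stated
    # in the text.
    n_prime = f_prime_pos - f_object
    n_point = n_prime - (p_prime - p)    # the spacing NN' equals PP'
    return n_point, n_prime

# Illustrative numbers: principal points 0.25 mm apart, image focal
# length 22.3 mm, humor index 1.336 (all hypothetical placeholders).
N, N_prime = nodal_points(p=0.0, p_prime=0.25, f_image=22.3, n_image=1.336)
print(f"N = {N:.2f} mm, N' = {N_prime:.2f} mm behind the first principal point")
```

With these placeholder values the nodal points fall several millimetres behind the principal points, which is why the nodal-point shorthand is so seductive in paraxial teaching.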

Figure 2.4.1-4(b) illustrates the human eye in a more realistic form for research purposes. Even here, the full extent of the cornea is not shown. Truncating the cornea leaves the impression that it is a thin lens; it is not. The lens extends back at least 3 mm from the corneal pole and is clearly a thick lens. The demarcation between the air on the left and the humor on the right of the cornea is shown explicitly by the dotted line. The field stop, the iris, is shown with a 4.0 mm diameter opening typical of photopic conditions for young adults. It is 3.60 mm from the corneal pole in the un-accommodated condition. The resultant entrance and exit apertures are at 3.05 and 3.68 mm respectively100. The elliptical shape of the crystalline lens is shown explicitly, as is its variation in index of refraction. Also shown are the center, c, of the ocular orb and the resulting curved focal surface in image space. A requirement of the visual system is that the image surface formed by the optical system must closely correspond to the Petzval surface formed by the location of the entrance apertures of the photoreceptor cells of the retina. A necessary result of forming an image on a curved surface is the introduction of distortion. Whereas for small objects imaged on-axis the distortion is vanishingly small, it becomes significant for objects encompassing more than 10 degrees in object space, about five inches at arm’s length.

Nodal points are not shown since they only exist for the paraxial condition. The two focal lengths, f and f’, are shown with respect to the aperture stop in this figure. The principal points from which they are actually specified are within a fraction of a millimeter of the aperture stop and do not show on this diagram.

Note how small the diameter of the iris is relative to the diameter of the crystalline lens under photopic conditions. This is a very important feature of the human eye, as will be seen in Figure 2.4.1-4(c), (d) & (e).

Figure 2.4.1-4(c) shows an optical bundle from infinity containing a principal ray that is coincident with the optical axis passing through the human eye. Such a bundle will focus at the focal point of the optical system. This point may be portrayed as a point on a planar surface, as is usually done in Gaussian optics, or as a point on a more general surface. This surface is a more accurate portrayal of the focal surface of a complex (elliptical) optical system such as the eye. In the case of the eye, the back focal length varies from 22.5 mm on axis to approximately 16.5 mm at 45 degrees internal field angle (70 degrees external field angle).

Figure 2.4.1-4(d) shows two different optical bundles passing through the human eye, one from 90 degrees to the side and one from 60 degrees. In each case, the (longer) principal ray passes through the center of the entrance aperture. However, neither the principal ray nor any other ray in either bundle passes through the center of the exit surface of the crystalline lens, the approximate location of the image nodal point. A straight line drawn from either of the focal points due to these bundles and passing through the image nodal point would not pass through the aperture of the eye at all. The angle between the optical axis and the principal ray of each bundle is approximately 48 and 40.5 degrees respectively.

The limiting angle for light to enter the human eye and elicit a visual response from the ora serrata is approximately 104 degrees from the axis, approximately 14 degrees beyond a right angle, as shown by Hartridge101. This experimental fact is not compatible with the presentation of figure 2-7 in Newell102, which shows the eye limited in field of view to less than 90 degrees from the optical axis. Newell has apparently attempted to extend the paraxial model of the eye to angles far beyond its limit of one degree or less from the optical axis. It should also be noted that figure 6.6 in the 1990 edition of Davson103 shows the off-axis optical bundles bending in the opposite direction from that predicted by Snell’s Law and shown here. Plate 1(c) of Zeki is also quite misleading in its treatment of a peripheral ray approaching the eye at 45 degrees from the axis104. Such a ray does not proceed in a straight line through the lens group, nor is the internal ray parallel with the external ray.

It is useful to note that the image focal length in the human eye is a function of the incoming angle of the optical bundle in object space. This is primarily due to the variation in the index of refraction of the crystalline lens and the elliptical surfaces of both the lens and the cornea. Note also that the diameter of the optical bundle passing through the entrance aperture depends on the object space field angle. The ratio of these two values is nearly constant. This suggests that the f/# of the eye as a function of object field angle is nearly constant.

99Lotmar, W. (1971) Theoretical eye model with aspherics. J. Opt. Soc. Am. vol. 61, no. 11, pp. 1522-1529
100Davson, H. (1962) The Eye. Vol. 4 NY: Academic Press pp. 101-113
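The near-constancy of the f/# can be illustrated with the classical definition f/# = focal length / pupil diameter. The off-axis pupil diameter below is a hypothetical value chosen to scale with the quoted focal lengths, not a measured one:

```python
def f_number(focal_length_mm, pupil_diameter_mm):
    """Classical definition of the relative aperture of an optic."""
    return focal_length_mm / pupil_diameter_mm

# On axis: back focal length 22.5 mm, 4.0 mm photopic pupil.
# Off axis: focal length ~16.5 mm with a hypothetical proportionally
# smaller effective pupil of ~2.93 mm.
print(f_number(22.5, 4.0))
print(f_number(16.5, 2.93))
```

If the effective pupil shrinks in proportion to the focal length, the two f/# values agree to within about one percent, which is the constancy argued for in the text.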

These drawings make it obvious that the central portion of the crystalline lens is optimized to support near-axial vision. Peripheral portions are optimized primarily to support the off-axis performance of the eye under photopic conditions. Under less than photopic conditions, the iris opens to collect more light. This results in poorer on-axis performance due to spherical aberration, as shown in exaggerated form in Figure 2.4.1-4(e). The additional peripheral rays included in the ray bundle come to a focus nearer the image principal point than do the central rays of the bundle. A result is the well known Stiles-Crawford effect.

2.4.1.2.2 Diffraction effects

101Hartridge, XX (1919) J. Physiol. Vol. 53, plate xvii
102Newell, F. (1986) Op. Cit. fig 2-7
103Davson, H. (1990) Physiology of the Eye, 5th ed. NY: Pergamon Press
104Zeki, S. (1993) A Vision of the Brain. London: Blackwell Scientific Publications pg 68

Figure 2.4.1-4 (a) A paraxial presentation of the human eye. (b) A full field presentation of the human eye under photopic conditions, aperture = 4.0 mm. Note the variation in index of refraction of the lens. (c) Optical bundle with chief ray parallel to the optical axis. (d) Optical bundles with the chief ray at 60 and 90 degrees to the optical axis. (e) Optical bundle indicative of spherical aberration as aperture size exceeds 4.0 mm. The core bundle remains in focus. The peripheral rays intersecting the aperture at greater than 4.0 mm are brought to a focus before reaching the focal surface.

In precision microscopy at high magnification, clearly discerning dimensions that are approximately the same as, or smaller than, the wavelength of the illumination used is difficult. Although diffraction theory is usually taught using the point spread function, the equivalent line spread function is more important in biological microscopy. Here the Airy rings surrounding the Airy disk are replaced by Airy lines parallel to a linear contrast edge of zero width. The locations of these lines are given by a similar equation. The first line is located a small distance from the geometrical center of the line image. This distance is given by 0.72·λ/NA = 0.72·λ/(n’·sin u’), where NA is the numerical aperture of the microscope. Using 500 nm illumination and NA = 1.00, the first line is located 0.36 microns from the geometrical center of the image. Using NA = 0.3, the distance grows to 1.2 microns. These dimensions are similar to the dimensions of cell membranes and require that great care be taken in analyzing cell structures in visible light. Using ultraviolet light obviously reduces the size of the diffraction artifacts by a factor of two or so, if the optical system remains diffraction limited and exhibits acceptable transmission.
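Assuming the first-line distance scales as 0.72·λ/NA (so that diffraction artifacts shrink as the numerical aperture grows and the wavelength falls), the magnitudes involved can be tabulated directly:

```python
def first_airy_line_um(wavelength_nm, numerical_aperture):
    """Distance (micrometres) of the first Airy line from the geometric
    edge, using the assumed 0.72 * lambda / NA scaling."""
    return 0.72 * (wavelength_nm / 1000.0) / numerical_aperture

print(first_airy_line_um(500, 1.00))  # high-NA visible-light case
print(first_airy_line_um(500, 0.30))  # lower NA: larger artifact
print(first_airy_line_um(250, 1.00))  # UV roughly halves the artifact
```

With 500 nm light and NA = 1.0 the artifact sits at membrane-like dimensions, which is the core of the warning about interpreting cell structures in visible light.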

Because of this situation, it is extremely important that the investigator include a calibration target in the field of view of the microscope so that diffraction effects associated with the images can be evaluated. Because of the low contrast of many biological specimens, this is difficult. However, it is necessary for precision work intended for publication.

The important point is that most caricatures of photoreceptor cells show a cell wall along the length of the Outer Segment of a photoreceptor cell. However, photomicrographs do not show a smooth continuous membrane at this location. The caricatures show it because the conventional wisdom has been that the Outer Segment was an integral part of the photoreceptor cell and located within its outer membrane. Using electron microscopy, imaging the Outer Segments of the photoreceptors in much greater detail is possible. These images, with a first Airy line displaced only Angstroms from the image edge, do not show a membrane surrounding the Outer Segment. In the case of micrographs of Outer Segments broken while separating the retina from the RPE, studying a large group of Outer Segments at once is possible. In these pictures, no indication of a cell membrane surrounding the disk stack of individual Outer Segments can be seen. Also, no evidence of any ruptured parts of such outer membranes, which would be expected in such pictures, can be seen. In very high magnification electron micrographs, seeing the actual crystalline planes associated with the underlying structure is possible. There is no indication that these planes are covered by a smooth membrane. Chapter 4 presents imagery supporting these observations.

2.4.1.2.3 Aberration effects

Besides the impact of diffraction on geometrical optics, there are also the aberrations associated with differences in the uniformity, and the path length differences, within any transmissive material in the optical train. These differences usually manifest themselves as aberrations in focal position as a function of wavelength and position with respect to the optical axis. These aberrations can be related to a mathematical series. The two most common, first order, aberrations are known as longitudinal chromatic aberration, LCA, and transverse chromatic aberration, TCA.

Rynders, Navarro & Losada105 have recently studied the LCA, comparing subjective and objective measurements under conditions of cycloplegia and dilation. Their results show a rapid drop-off in the performance of the human eye, using both objective and subjective methods, due to both LCA and poor spatial resolution, at angles as small as 2.5 degrees from the fovea. Care was taken to use only monochromatic light at 458, 501.8, 543.5 & 632.8 nm, but their model of the eye was not defined explicitly. It appears their measurements were for field angles measured temporally from the fovea, and they assumed the pupil of the visual optical system always corresponded to the physical size of the iris opening. The latter assumption is not valid for a wide angle optical system such as the eye. Although they appear to have assumed the optical system consisted of optically homogeneous material in each of the optical elements, they did report anomalies for large artificial aperture sizes that would be consistent with a graded index optical system. They proposed that their anomalies could be explained by additional (presumably higher order) aberration effects without needing to postulate retinal effects. They also remark that beyond 10°, the aberrations associated with a 6-mm artificial pupil were so large (and the image quality so poor) that the merit function they used did not change noticeably, even for large amounts of artificial defocus.

Although they found best focus to remain relatively constant with wavelength, though poorer for the off-axis condition, the foveal LCA varied considerably with wavelength (about 1.25 diopters). For the foveal condition, this variation was approximately the same for both the 3 mm and 6 mm artificial pupils. At 2.5° from the fovea, the LCA was approximately the same for the 3 mm pupil. They did not present the 6 mm data. Their data are completely consistent with a graded index optical system of variable focal length with field angle, as found in the immersed (different index of refraction between air and the vitreous humor) optical system of the human eye.

2.4.1.2.4 Interference effects

Normally, interference is not a significant factor in vision. However, the recent introduction of interferometry to vision research has complicated the matter. This technique is not normally addressed adequately in undergraduate courses associated with the visual sciences. As a result, serious experimental difficulties can be encountered. An addendum has been prepared discussing this technique. When using interferometric techniques, the diffraction effects associated with a lens are not bypassed. When operating in the spatial domain, the overall performance of a lens system is a product of the interferometric function and the diffraction function.

105Rynders, M. Navarro, R. & Losada, M. (1998) Objective measurement of the off-axis longitudinal chromatic aberration in the human eye. Vision Res. vol 38, no. 4, pp. 513-522

2.4.1.2.5 Inertial effects

Deubel & Bridgeman have explored the distortions in the visual optical system under angular accelerations (saccades)106. The distortions can result in a deviation from the nominal optical axis of up to 0.5 degrees during and immediately following saccades. The effect appears to be primarily due to the elasticity of the accommodation mechanism supporting the lens.

2.4.1.2.6 Modes of optical analysis

The optical system of the eye must be considered under two separate conditions, just as the signal processing aspects of the eye must be. The conditions are the “off-axis” condition and the “near axis” condition.

+ The optical system of the human eye provides a very wide total viewing angle that is highly distorted spatially and of relatively poor spatial resolution due to significant aberrations.

+ The same optical system, over a relatively small angle near the optical axis, provides relatively good spatial uniformity and spatial resolution within a factor of 3:1 of the diffraction limit.

Man routinely achieves spatial resolutions within a factor of 1.5:1 or better of the diffraction limit in precision applications involving a similarly small field angle. However, the Airy disk associated with the physiological optical system is well matched to the acceptance characteristics of the photoreceptor optical system.

In summary, the use of Gaussian optics as an analytical tool in vision research is inadequate when applied to the animal eye. In the human eye, the Stiles-Crawford effect clearly involves optical rays beyond the realm of the paraxial criteria. In addition, the curvature of the optical image surface, and the resultant large geometric distortion in the images presented to the retina, cannot be treated using Gaussian optics.

When analyzing the animal eye, an optical designer would limit his use of paraxial analysis to less than one degree from the optical axis. An ophthalmologist, again seeking simplicity and convenience, might accept the errors involved in using paraxial rays out to angles of 2-5 degrees from the optical axis. This is especially true since these angles are required just to reach the fovea of the human eye relative to the optical axis of the system. [Figure 2.4.1-4(d)] provides a perspective on the field angle that can be accepted in paraxial analysis if accuracy is not a prime consideration. As Watkins107 has shown, to maintain five digit accuracy in the calculations, a low value in optics, the off-axis angle must not exceed 2 degrees and the pupil must not exceed 0.5 mm in diameter. For a 2 mm diameter pupil at the aperture stop, the angle cannot exceed 10.5 degrees. For an 8 mm pupil, the angle cannot exceed 47 degrees.

Unfortunately, there has been much abuse of the concept of the nodal points in both the scientific literature and in teaching at advanced levels. The problem is manifest in two areas. First, the student is led to believe that all rays passing through the eye approach the retina from the exit nodal point instead of from the exit principal point. Second, the student is led to believe that the paraxial analysis is descriptive of the wide angle performance of the eye. This situation is frequently carried to the extreme of showing a “chief ray” actually passing through the sclera to reach the entrance nodal point from a high field angle. Not only is Gaussian optics limited to rays within a few degrees of the optical axis at most, the aperture stop plays a major role in defining what rays may pass through what part of the individual optical elements.

The well-publicized schematic eyes of both Gullstrand (1911) and Le Grand (1946) are both defined in the paraxial context only and assume spherical optical elements of constant internal indexes of refraction108. Furthermore, the simplified versions of these two models, which both attempt to represent the optical system of the human eye by a single thin lens, ignore the aperture stop of the system completely. To achieve good focus over a nearly spherical surface, the magnification of the eye is highly variable with angle relative to the optical axis; the result is considerable geometric distortion in the image presented to the retina. Failure to recognize these limitations can lead to absurd performance predictions. Page 101 in Wyszecki & Stiles109 assumes a flat focal plane of great extent. A straight line in the object field subtending 1.0 radian is predicted to have a retinal image length of 16.683 mm, based on an image nodal distance of 16.683 mm.

Glasser & Campbell have recently provided excellent data on the aging properties of human eyes110. The data are sufficient to cause a basic change in the Standard Eye. Rather than discuss a standard in terms of “young eyes,” it is now possible to provide equations (at least through the fourth term in the optical equations) for the nominal eye as a function of age.

The use of visible light microscopy may provide misleading results when used to detect small features related to photoreceptor cells and having dimensions comparable to the diffraction limit of the microscope. A target in object space must be used to prove the feature noted is not a diffraction artifact of the instrumentation.

106Deubel, H. & Bridgeman, B. (1995) Fourth Purkinje image signals reveal eye-lens deviations and retinal image distortions during saccades. Vision Res. vol 35, no. 4, pp 529-538
107Watkins, see ref #9

2.4.1.3 State of the art in man-made optics
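The flat-focal-plane prediction criticized here is easy to reproduce. The sketch below uses the 16.683 mm image nodal distance quoted from Wyszecki & Stiles; the tan-based variant is included only to show how quickly flat-plane assumptions diverge from one another at wide angles, quite apart from the curved retina:

```python
import math

IMAGE_NODAL_DISTANCE_MM = 16.683  # value quoted from Wyszecki & Stiles

def small_angle_image_mm(object_angle_rad):
    """The simplification criticized above: image size = theta * distance."""
    return IMAGE_NODAL_DISTANCE_MM * object_angle_rad

def flat_plane_image_mm(object_angle_rad):
    """Flat-plane projection without the small-angle step: tan(theta) * distance."""
    return IMAGE_NODAL_DISTANCE_MM * math.tan(object_angle_rad)

# At 1 radian (~57 degrees) the two flat-plane estimates already
# disagree by more than 50%, and neither respects the curved retina.
print(small_angle_image_mm(1.0))  # the quoted 16.683 mm prediction
print(flat_plane_image_mm(1.0))
```

Neither number is physically meaningful for the real eye at such an angle; the point is that the paraxial machinery produces confident but absurd predictions well outside its domain.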

The state of the art in man-made optical design is based almost entirely on spherical optical surfaces with some corrections made with even simpler cylindrical optical elements. This is because of the ease of generating these surface shapes in the manufacturing plant, a situation recognized many years ago. Furthermore, until the availability of the modern digital computer, nearly all optical designs were based on the “thin lens” approximation. During the 1980’s, the computer provided the capability of developing more complex “thick lens” designs and aspheric designs, usually incorporating parabolic and occasionally elliptical correction components.

It also became possible in the 1980’s-90’s to create, in a limited way, optical materials with an index of refraction that varied within the individual optical element, usually in the radial dimension from the optical axis.

To achieve the excellent packaging efficiency found in the animal eye while also achieving the required functional performance, these techniques, which man has only recently begun to exploit, have been incorporated into the animal eye in very sophisticated ways. This will be shown in a paragraph below comparing the eye of the four-eyed fish with the human eye.

2.4.2 The Physiological Optical System

To understand much of the scientific literature concerning the physiological optics of vision, it is important to have a firm grasp on the laws of geometric (simplified) optics and the very significant approximations made in these laws by the vision community. To perform research in physiological optics and the performance of the visual system, it is important to understand the laws of physical (real) optics. A following subheading of this section will review these laws and relationships. They will then be applied to the actual optical system of human vision in the subsequent subheadings.

When studying the eye in detail, it is also important to understand the geometry of the optical surfaces. In the past, these surfaces have been considered spherical for lack of adequate data. More recently, they have been considered parabolic. However, the underlying morphogenesis and recent measurements confirm that these surfaces are elliptical. This may appear a small difference. However, it makes a major difference in the performance of such a wide angle optical system. Hogan, et al. have briefly discussed some of the actual dimensions of the surfaces of the lens group, but not at the detail or precision required for optical analysis111.

108Gullstrand recognized the variable index of the “lens” and chose to represent it by two nested concentric lenses.
109Wyszecki, Stiles
110Glasser, A. & Campbell, M. (1998) Presbyopia and the optical changes in the human crystalline lens with age. Vision Res. vol. 38, no. 2, pp 209-229
111Hogan, M. Alvarado, J. & Weddell, J. (1971) Histology of the Human Eye. Philadelphia, PA: W. B. Saunders pg 60

Koretz has recently reviewed the four primary problems in exploring, understanding and describing the lens of the human eye112. It is also important to note that the typical optical system cannot be represented by so-called thin lens (or Gaussian) theory. The typical animal lens system is a thick-lens system, as will be discussed below. The thin lens approximation is only appropriate in the field of consumer optometry.

The human physiological optics, along with that of most higher chordates, is anamorphic in order to image the world on a highly curved retina. This feature is accomplished by making the image focal length a strong function of the angle from the point of fixation to the target in object space. In the case of the human eye, the back focal length, measured within the vitreous humor, varies from 22.5 mm on axis to approximately 16.5 mm at 45 degrees internal field angle. A fortuitous variation of the optical pupil size with angle, in congruity with the change in focal length, maintains a nearly constant f/# for the system with angle. These variations also ensure a nearly constant illumination intensity on the retina per unit area, thereby avoiding the cos4 relationship expected in a conventional optical system at angles far from the axis. The degree of anamorphism in the human eye is quite high, as illustrated in [Figure 2.4.5-1]. While this would have a serious impact in a reimaging system, the brain does not reassemble an image of the real world. It is perfectly adequate for the brain to convert the location on the retina to a location in object space without regard to geometric fidelity.
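The cos4 falloff that the eye’s design avoids can be tabulated directly. This is the textbook relative-illuminance relation for a conventional flat-field system, not a model of the eye:

```python
import math

def cos4_relative_illuminance(field_angle_deg):
    """Relative image-plane illuminance in a conventional flat-field
    optical system: E(theta)/E(0) = cos^4(theta)."""
    return math.cos(math.radians(field_angle_deg)) ** 4

# A conventional lens has lost over 90% of its axial illuminance by
# 60 degrees; the eye's matched focal-length and pupil variation
# avoids this falloff.
for deg in (0, 30, 45, 60):
    print(f"{deg:2d} deg: {cos4_relative_illuminance(deg):.3f}")
```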

An overview of the performance of the physiological optics is presented in Figure 2.4.2-1. This figure provides a clearer concept than the conventional brief ray diagram. The outer field of view, in the horizontal plane, can be described as one of nearly equal sensitivity over most of a field of ±110 degrees, in the absence of the nose. A similar diagram can be drawn for the vertical plane, but with a smaller field angle. The energy received from the scene is re-radiated by the lens group in the direction of the retina. The re-radiated energy density is nearly constant over a total angle of over 90 degrees. This 90 degrees includes the entire hemisphere of the retina. Within this hemisphere are on the order of 100 million photoreceptors. Each photoreceptor has a narrow cone of acceptance for radiation. The axis of each of these cones is pointed at the center of the exit pupil of the lens group (recently reviewed by Kono, Enoch et al.113). Thus the energy from the far field is passed to the appropriate photoreceptor for further processing by the neurological portion of the visual system. The neural layer between the photoreceptors and the lens group also acts as a specialized optical element, especially in the area of the foveola. However, it is not easily shown in this figure. This optical element can act as a field flattener in mammals and frequently acts as a magnifier in at least some predatory birds.

Note that for an optical system to focus on a hemisphere, it must exhibit a variable focal length with respect to the angle from the optical axis.

112Koretz, J. (2002) Models of the lens and aging effects In Hung, G. & Ciuffreda, K. eds. Models of the Visual System NY: Kluwer Academic/Plenum Press, Chapter 2
113Kono, M. Enoch, J. Strada, E. et al. (2001) Stiles-Crawford effect of the first kind: assessment of photoreceptor alignments following dark patching Vision Res vol 41, pp 103-118

To achieve the wide field of view of the human physiological optics, the system consists of a first lens (the cornea) that is a negative meniscus, the same form commonly used in wide-angle man-made systems. In this form, the lens is actually thicker at its periphery than it is near its center. The cornea is followed by a typical biconvex lens. However, the index of refraction of this lens varies both radially and along the optical axis. The retina in mammals is considered “reversed.” Light must pass through the neural tissue of the retina to reach the photoreceptors. The outer segments of each photoreceptor are the sensitive elements. They are farthest from the pupil of the optical system. They are said to be phototropic in that every outer segment has its axis aligned to point directly at the pupil of the eye as seen from its location on the surface of the retina. On the other hand, the inner segments of the same photoreceptors are always found to be aligned perpendicular to the surface of the retina. Because of this configuration, the converging light from the pupil passes through the essentially transparent inner segments without any change and achieves a focus at the entrance to the outer segments (acting as waveguides). This configuration is the key mechanism associated with the Stiles-Crawford Effect (Section 17.3.7).

Figure 2.4.2-1 The human physiological optics described as a series of antennas. The area between the inner and outer lobes of the lens system will be detailed later. The inner lobe illuminates the inside of the entire hemisphere of the retina nearly equally. The photoreceptors of the retina each exhibit a narrow-angle cone of acceptance that is pointed directly at the exit pupil of the lens system. The optical effect of the neural surface between the optics and the photoreceptors is too slight to be shown here.

2.4.2.1 Spectral transmission of the lens group

The spectral transmission of the lens group varies considerably among phyla and even among families within a phylum. It is important to quantify this feature of an optical system before attempting to evaluate the spectral performance of the retina, or its elements, alone.

Douglas & McGuigan114 and Douglas115 have provided good data on the lenses of the teleost (bony) fish. While Douglas & McGuigan suggest the thickness of the lens among fish is a minor consideration, Douglas provides graphs showing the significance of this parameter. Both papers focus on the potential environmental impact of lens transmission in these fish. Charman has assembled much of the absorption data related to the physiological optics116. More recently, van den Berg & Tan have provided data primarily as a function of age in humans117. Their data is quite coarse. Within the range they examined, they note the spectral transmittance function is dominated by forward light scattering.

There is considerable variation from species to species in the short-wavelength portion of the spectrum. The long-wavelength portion is much more uniform. In the long-wavelength region, transmittance typically exceeds 95% out to at least 900 nm. It falls to 50% at 1000 nm, recovers to 80% at 1100 nm, and then falls below 10% at 1200 nm and beyond.

114Douglas, R. & McGuigan, C. (1989) The spectral transmission of freshwater teleost ocular media–an interspecific comparison and a guide to potential ultraviolet sensitivity Vision Res vol 29, no. 7, pp 871-879
115Douglas, R. (1989) The spectral transmission of the lens and cornea of the brown trout (Salmo trutta) and goldfish (Carassius auratus) Vision Res vol 29, no. 7, pp 861-869
116Charman, W. (1991) Limits on visual performance set by the eye’s optics and the retinal cone mosaic In Vision and Visual Dysfunction, vol. 5 Boca Raton, FL: CRC Press, Inc. Chapter 7
117van den Berg, T. & Tan, K. (1994) Light transmittance of the human cornea from 320 to 700 nm for different ages Vision Res vol 34, no. 11, pp 1453-1456

In the short-wavelength region, the net absorption is a strong function of the thickness of the lens and of the unit absorption per unit thickness as a function of wavelength. The thickness is a major factor in the equation for fish because the mature eye of many species incorporates a very thick lens to achieve their high f/# optical system. Among humans, the lens is relatively much thinner and the variation in lens thickness from birth to old age is quite small. Figure 2.4.2-2 presents the variation in spectral transmission among just the teleost fish118. Note that lens transmission alone has a significant impact on whether these fish would be considered trichromatic or tetrachromatic. A more extensive study of the transmission characteristic of the lens group in fish is presented by Thorpe et al.119. This paper helps to account for the large variation in the spectral responses of fish reported using non-invasive behavioral test techniques. The peak at 320 nm in some of these responses is due primarily to the presence of alcohols as a ligand in the protein material of the lens. Rutilus rutilus is representative of those teleosts whose lens group can support efficient sensing by the UV-chromophoric channel. Human lenses generally fall midway between curves 2 and 3 in this figure but vary significantly with age. In addition, the human lens group exhibits significant absorption in the macular pigment of the field lens.

Figure 2.4.2-2CR Relative transmission spectra of lenses from three species of teleost fish: 1. Rutilus rutilus, 2. Pelmatochromis kribensis, 3. Trichogaster trichopterus. All curves normalized to 100% transmission at 700 nm. From Douglas, Bowmaker & Kunz.

118Douglas, R. Bowmaker, J. & Kunz, Y. (1987) Ultraviolet vision in fish In Seeing contour and colour, Kulikowski, J. Dickinson, C. & Murray, I. ed. NY: Pergamon Press pp 601-616
119Thorpe, A. Douglas, R. & Truscott, R. (1993) Spectral transmission and short-wave absorbing pigments in the fish lens–I. Phylogenetic distribution and identity Vision Res vol 33, no. 3, pp 289-300

Figure 2.4.2-3 shows similar data for Chordata120. [xxx check reference spelling] The fundamental parameter in this figure is the thickness of the material forming the lens group. The larger the eye, the thicker the lens group and the poorer the transmission in the ultraviolet spectral region is likely to be. This parameter, rather than the basic architecture, is the primary limitation on the short-wavelength spectral range of the human eye. The sharp change in slope of the absorption characteristic of the human lens is frequently used to define what is called the Rayleigh region, wavelengths longer than the break point in the absorption curve of the lens near 440 nm. Burns & Elsner used a significantly different breakpoint to define their Rayleigh region121. They used an empirical wavelength of 540 nm based on the neurological network of this work. Although they did not detail it, their primary reason for picking this wavelength was to limit their laboratory measurements to the Q-channel of the chrominance channels of vision (defined in Chapter 11).

2.4.2.1.1 Spectral transmission of the lens in the human eye

Data for the human lens group and macula is discussed in Wyszecki & Stiles122. Section 16.3.3.1 of this work develops the absorption characteristic of a lens-like material consisting of proteins containing a significant amount of alcohol and aldehyde present as ligands. Figure 2.4.2-4 shows their summary of the data, as of 1982 (pages 108-109), with the theoretical model proposed by this work superimposed. While the ordinate scale would suggest absorption was the primary mechanism at work, it is not. The curve shows the calculated absorption for the protein described above plus the expected Rayleigh scattering at long wavelengths. The agreement is well within the experimental error expected during the 1940's & 50's. Note that the estimated absorption (solid line) is drawn horizontally beyond 650 nm primarily because of the limited sensitivity of the instrumentation. The dashed line continues at its current slope from 600 nm to 800 nm and beyond.

Figure 2.4.2-3 CR Absorbance spectra of chordate lenses. (1) human, (2) duck, (3) canary. From Debipriya, et al. (1999)

120Das, D. Wilkie, S. Hunt, D. & Bowmaker, J. (1999) Visual pigments and oil droplets in the retina of a passerine bird, the canary Serinus canaria: microspectrophotometry and opsin sequences Vision Res vol 39, pp 2801-2815
121Burns, S. & Elsner, A. (1993) Color matching at high illuminances: photopigment optical density and pupil entry J Opt Soc Am A vol 10(2), pp 221-230
122Wyszecki, G. & Stiles, W. (1982) Color Science NY: John Wiley & Sons pp 107-116

Said & Weale presented data similar to that above in an expanded form. Both Hart and Wyszecki & Stiles have reproduced their data in similar form123. However, neither Wyszecki & Stiles nor Hart described the experimental protocol used to obtain the data in sufficient detail. Most of the data appears to result from “difference spectra” measurements. Wald is also one of the sources of this data. In his original paper, Wald says that the average luminosity function was used to compute the difference spectra. This clearly limits the accuracy of the difference spectra obtained.

Figure 2.4.2-4 Various determinations of the optical density of human eye lenses relative to that at 700 nm. The solid curve represents a compromise drawn through the data by Wyszecki & Stiles. The dashed line represents a theoretical curve based on this work. At greater than 500 nm, the dashed curve represents Rayleigh scattering. At less than 420 nm, the dashed curve represents aldehyde ligand absorption in the protein of the lens.

123Hart, W. ed. (1992) Adler’s Physiology of the Eye, 9th ed. St Louis, MO: Mosby Year Book pg 710

Figure 2.4.2-5 shows the Said & Weale data for “living lenses” plus a curve for the absorption coefficient of the macula lutea from Ruddock. This author has analyzed the lens absorption data provided by Said & Weale in some detail (Section 16.3.3.1). It is clear from this analysis that the data can be described by the same functions used in the previous figure. Interpreting the optical density as a lack of transmission rather than just absorption, the overall responses can be described by a nearly vertical segment at 400 nm defined by the very high absorption of the aldehyde component of the protein forming the lens. Alternatively, the segment below a density of 0.12 appears to be limited by the noise floor of the instrumentation. In between these asymptotes, the response appears to represent Rayleigh scattering. This scattering is generated by atom-size quantum-mechanical resonators within the bulk material of the lens. These resonators are probably atoms de-bonded from the protein material by aging effects, most likely by high-energy photons entering the lens and disrupting bonds.

Based on this interpretation, the level of the Rayleigh scattering component rises vertically with the increase in atom-size resonators within the bulk material. The absorption of the aldehyde component continues to add to the total optical density as the Rayleigh scattering increases. Thus the overall curves, where not limited by the instrumentation floor, rise vertically as well. This suggests the free-hand curves drawn through the data points by Said & Weale should be replaced by virtually straight lines, as shown for the 63-year-old case. The transmission of the human lens appears to fall approximately 0.03 optical density units per decade (roughly 0.7% per year) due almost exclusively to an increase in Rayleigh scattering. The scattering is due to a linear increase in the number of scattering centers within the lens material. Weale originally proposed that the loss in optical transmission was caused by light-absorbing material uniformly permeating the lens matrix124. A more appropriate description of the material would be light-scattering material. While the loss in transmission is minimal, the loss in scene contrast because of the scattering of image light is quite significant in the older population.
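The conversion from the quoted density slope to a yearly percentage is simple arithmetic and can be checked directly; the only assumption here is that “decade” refers to ten years of age.

```python
def annual_transmission_loss(od_per_decade=0.03):
    """Convert an optical-density increase per decade of age into the
    equivalent fractional loss of transmission per year."""
    od_per_year = od_per_decade / 10.0   # a decade = ten years of age
    return 1.0 - 10.0 ** (-od_per_year)

print(f"~{100 * annual_transmission_loss():.2f}% of transmission lost per year")
```

The result is a fraction of one percent per year, consistent with the characterization of the direct transmission loss as minimal even though the accumulated scattering degrades scene contrast.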

Figure 2.4.2-5 Optical density of living human crystalline lenses plotted as a function of wavelength by age. Dashed line gives optical density of macular pigment. The data points are limited. Only one or two points of each curve are associated with the absorption by the aldehyde of the eye. The instrument floor limits the data at long wavelengths. The middle portion of the curves is limited by Rayleigh scattering. Rayleigh scattering is the dominant variable in this data set. The dash-dot line moves up with age. The lens transmission decreases approximately 0.03 density units per decade due primarily to the formation of new scattering centers. Data points and light lines from Said & Weale, 1959.

Griswold & Stark have recently presented data that quantifies the absorption of the human lens in the region of 310-400 nm (See Section 17.2). The data peaks at about 3.5 units of optical density. Since the effective path length for light passing through the lens is about 5 mm, the absorption in chordate eyes similar to that of the human is about 0.7 density units per mm of thickness. Most animals with a lens of less than one millimeter thickness should be able to see well into the ultraviolet spectrum and should be expected to be tetrachromatic.

Dillon et al. have provided additional data on the transmission of the older human eye125. Their data lacks data points and/or range bars but is in general conformance with the above measurements.
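The thickness scaling behind the tetrachromacy argument above follows from the Beer-Lambert relationship. A brief check using the 0.7 density units per mm quoted for the 310-400 nm band:

```python
def uv_transmittance(thickness_mm, od_per_mm=0.7):
    """Beer-Lambert scaling: total optical density is the unit density
    times the path length, and transmittance is 10^(-density)."""
    return 10.0 ** (-od_per_mm * thickness_mm)

# The ~5 mm human path gives a density of 3.5: essentially opaque
# in the 310-400 nm band.
print(f"5 mm path: T = {uv_transmittance(5.0):.1e}")
# A sub-millimetre lens still passes a usable ultraviolet fraction.
print(f"1 mm path: T = {uv_transmittance(1.0):.2f}")
```

A one-millimetre lens passes about 20% of the incident ultraviolet, while the five-millimetre human path passes only a few hundredths of one percent.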

124Weale, R. (1961) Notes on the photometric significance of the human crystalline lens Vision Res vol 1, pp 183-191
125Dillon, J. Zheng, L. Merriam, J. & Gaillard, E. (2004) Transmission of light to the aging human retina: possible implications for age related macular degeneration Exp Eye Res vol 79, pp 753-759

2.4.2.1.2 Spectral transmission of the macula lutea in the human eye

The macula lutea is a slightly yellowish tinge frequently observed by ophthalmologists in the area overlaying the fovea of the retina. While originally thought to be a separate film overlaying the region in some way, the experiments of Snodderly et al. have shown it is a characteristic of the neural lamina itself (Section 3.2.1.3.3). Wald provided an estimate of the transmission spectra of the macula lutea in 1959 based on averaging the in-vitro data from nine subjects126. Ruddock undertook a more detailed in-vivo analysis in 1963 with surprising results. He examined four subjects, including himself. Of the four, two exhibited no sign of a yellowish tinge and the other two showed tinges that were difficult to correlate. A smoothed rendition of the data from KHR is usually published in the literature as typical of the macula lutea. However, Ruddock’s original figure presents a totally different picture. Figure 2.4.2-6 presents the measured and theoretical absorption by the macula lutea. The measured data is for two subjects, KHR and JAS. While KHR shows a substantial macula lutea, JAS shows virtually no absorption that can be related to a macula lutea. It is difficult to analyze the data for the macula lutea for three reasons: there is very little data from only a few investigators, the data is inconsistent, and much of it is the result of difference-spectra experiments. The data from Ruddock127 shows a double peak while that of Wald generally shows a triple peak in the absorption spectrum (W&S, page 111). The Wald data, being an average, is less distinct than that of Ruddock. Analysis of Ruddock’s double peak, as illustrated in Hart, strongly suggests that the material of the macula lutea, if a distinct material, exhibits normal molecular absorption with two peaks. One is near 460 nm and the other is near 490 nm. The absorption spectrum shown is an emulation of the spectrum derived by Wald using materials with theoretical peaks at 440 nm and 487 nm. The Q of both of these materials is approximately 15. The assumption underlying the theoretical curve is that the absorption by the macula lutea is by individual resonant structures that are not quantum-mechanically related like the materials in the liquid crystalline state.
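The two-resonance emulation described above can be sketched numerically. The peak wavelengths (440 and 487 nm) and Q of approximately 15 are taken from the text; the Lorentzian line shape and the relative amplitudes are assumptions of this sketch, not values from the source.

```python
def resonance(wavelength_nm, peak_nm, q=15.0):
    """Single Lorentzian-shaped resonance of quality factor Q; the
    full width at half maximum is approximately peak/Q."""
    half_width = (peak_nm / q) / 2.0
    x = (wavelength_nm - peak_nm) / half_width
    return 1.0 / (1.0 + x * x)

def macula_od(wavelength_nm, a1=1.0, a2=0.8):
    """Sum of the two resonances at 440 nm and 487 nm assumed by the
    theoretical curve; the amplitudes a1 and a2 are free parameters
    (hypothetical values, not taken from the source)."""
    return (a1 * resonance(wavelength_nm, 440.0)
            + a2 * resonance(wavelength_nm, 487.0))

for wl in (420, 440, 460, 487, 520):
    print(f"{wl} nm: relative optical density = {macula_od(wl):.2f}")
```

Because the two resonances overlap, their summed heights differ even for equal amplitudes, which is the behavior noted below for absorptions of the same Q in close proximity.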

Wald attributed the material forming the macula lutea to the carotenoid xanthophyll. The presence of two absorption peaks in close proximity would be consistent with two distinct ligands associated with a single molecule or with a mixture of two molecules with individual ligands absorbing in these regions. These molecules could be members of the carotene family, as suggested by Wald, but there is presently no confirming evidence. Wald also claimed that the macula lutea is seen only in primates.

Ruddock gave the diameter of the macula lutea of KHR as 2.4 degrees (half-amplitude diameter) or 7.2 degrees (at a nominal zero point).

The theoretical macula absorption curve assumes two resonant structures of the same Q = 15. Absorptions of the same Q in close proximity have different heights when plotted as a function of wavelength. The curve of Ruddock can be matched to any arbitrary tolerance using the mathematical equation. However, the data is not sufficiently precise (few points from only one subject) to support such an effort. The author can provide the mathematical support for these analyses as a Mathcad file on request.

126Wald, G. (1945) Human vision and the spectrum Science vol 101, pp 653-658
127Ruddock, K. (1963) Evidence for macular pigmentation from colour matching data Vision Res vol 3, pp 417-xxx

2.4.2.2 Diffraction properties of the lens group (& neural retina)

Figure 2.4.2-6 Macula lutea absorption versus the theoretical equivalent. Data points; open circles for KHR, closed circles along bottom scale, JAS. Solid line; absorption reported by Ruddock for the human eye based on the KHR data. From Ruddock, 1963. Dashed curve; absorption calculated for two ligands centered on 440 nm and 487 nm.

When discussing the lens system of the eye, it is important to differentiate between the terms aspheric and non-spheric. A non-spheric lens is ideally cylindrical, parabolic, elliptic or hyperbolic in basic form. Any of these lenses can be made aspheric, i.e., made to deviate slightly (by only a few wavelengths of light) from its ideal form. The lens group, consisting of the cornea and “lens,” has been known to be aspheric since the 1930's. The cornea is an aspherized ellipsoidal lens. There were also hints from that time period that the material of the “lens” might not be optically isotropic. Stiles and Crawford studied the impact of axial light entering the eye through different zones of the aperture. Their results were presented beginning in 1933. Their extensive laboratory work has been a pillar in this corner of vision science for a long time128. Campbell and Campbell & Gregory129 provided some additional measurements in the late 1950's. Weale has also presented interesting data on this subject130. Although presented without a vertical scale, it can be compared with the original Stiles data in Wyszecki & Stiles. Vos, Walraven & Van Meeteren provided an empirical study of the light profiles at the fovea in 1976131. Charman has recently summarized much of the off-axis monochromatic aberration data132. Recently, Artal and associates have provided considerable new data using the latest instrumentation. Their work was summarized in 1996133. Liang et al., in a series of papers through 1997, have probably provided the most comprehensive description of the performance of the lens group (See Section 2.4.5.1). Their work used a Hartmann-Shack wavefront sensor with great success.

The titles of the above papers hint at the variation in approaches taken to this problem. Their content will not be presented here because some of them are not supported by a strong theoretical foundation. The data will be discussed more thoroughly in Chapters 11 & 17. The reported work has concentrated on measurements of the performance of the lens-retina combination at the fixation point. All of the work treated the optical system and the lens group in a simplistic manner (at least implicitly). The immersed nature of the human optical system, and hence the importance of the cornea as an optical element, has been ignored. Gaussian optics have been assumed and the fixation point has been assumed to be on the optical axis (again at least implicitly). Artal et al. have chosen to define a Campbell Effect that is not obviously different from the Stiles-Crawford Effect, in either its original or evolved forms.

128Wyszecki, G. & Stiles, W. (1982) Color Science, 2nd ed. NY: John Wiley pp 424-429
129Campbell, F. & Gregory, A. (1960) The spatial resolving power of the human retina with oblique incidence J Opt Soc Am vol 50, pg 831
130Weale, R. (1961) Notes on the photometric significance of the human crystalline lens Vision Res vol 1, pp 183-191
131Vos, J. Walraven, J. & Van Meeteren, A. (1976) Light profiles of the foveal image of a point source Vision Res vol 16, pp 215-219
132Charman, W. (1991) Op. Cit.
133Artal, P. Marcos, S. Iglesias, I. & Green, D. (1996) Optical modulation transfer and contrast sensitivity with decentered small pupils in the human eye Vision Res vol 36, pp 3575-3586

Stiles & Crawford originally reported a variation in luminance sensitivity of the fovea, at the fixation point, to light entering the eye through different zones of the aperture. The original premise for much of this work was that the optical group was a simple homogeneous system and any variation in sensitivity was due to a restricted cone of acceptance at the photoreceptors of the retina. Based on this premise, the term directional sensitivity was introduced to describe an effect that was assumed to be due to the directional sensitivity of the retina to incident radiation. The function was described by using an external artificial iris to control the zone of the lens aperture used to illuminate the retina. Later studies have generally shown that the measured effect was not due to the directional properties of the retina but due to the asphericity and in-homogeneity of the lens group. The effect actually described the relative quality of the optical ray bundle that arrived at the fixation point of the retina as a function of its point of passing through the iris. Thus, the term directional sensitivity has been retained but the meaning has changed. The directional sensitivity (more properly, the sensitivity to displacement of the optical bundle entering the eye) describes the effect of stimuli entering the eye through different parts of the eye pupil and being imaged on the same retinal area. Because of the need to use the foveola for accurate measurement, the data is necessarily limited to the off-optical-axis condition (nominally five degrees temporally in the horizontal plane and two degrees ventrally in the vertical plane as referenced to object space). At these levels of eccentricity, asymmetries in the reported Airy image on the retina are to be expected even for “perfect” eyes.
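For reference, the Stiles-Crawford effect of the first kind is commonly fitted with a function that is parabolic in log units of sensitivity versus pupil-entry position. This parameterization, and the typical value of rho, come from the general literature rather than from this section:

```python
def sce_sensitivity(r_mm, rho=0.05):
    """Relative luminous efficiency for a ray entering the pupil r_mm
    from the peak of the Stiles-Crawford function; rho ~ 0.05 per mm^2
    is a typical foveal value from the literature, not from this text."""
    return 10.0 ** (-rho * r_mm ** 2)

for r in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"pupil entry {r:.0f} mm off peak: sensitivity = {sce_sensitivity(r):.2f}")
```

Whether this falloff is attributed to photoreceptor acceptance cones or, as argued here, to the asphericity and inhomogeneity of the lens group, the fitted function itself is the same.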

The human lens is known to be highly anisotropic in index of refraction, both axially and radially. The first-order reasons for this can be seen in Figure 2.4.2-7 from Newell134. This image provides an interpretation of the spots in figure 6 of Navarro135 (discussed in Section 2.4.8.2.2). The actual data for the index is summarized in Section 2.4.8.2. Alternate representations of the human lens, from which the gradient index of refraction can be inferred, have been presented by Berman136 and by Zampighi137.

134Newell, F. (1986) Ophthalmology, 6th ed. St. Louis, MO: C. V. Mosby, pg 33
135Navarro, R. (2009) The Optical Design of the Human Eye: a Critical Review J Optom vol 2, pp 3-18
136Berman, Elaine (1991) Biochemistry of the Eye. NY: Plenum Press pg 202
137Zampighi, G. (2006) The Lens In Fishbarg, J. ed. The Biology of the Eye. NY: Elsevier pg 153

Figure 2.4.2-7 Coarse cytological structure of the human lens. Note the absence of nuclei in some layers of the lens. Newell describes the cells forming at the equator and migrate centrally [sic]. From Newell, 1986.

Koretz has provided an interesting micrograph credited to Dr. Kuszak138. It shows the highly organized structure of the cells of a small internal region of a human lens.

2.4.2.2.1 Variability of the lens to achieve focus

The shape of the crystalline lens is changed to accommodate changes in the distance to objects in the field of view. This change in shape, and a slight change in position along the optical axis, is provided by the ciliary muscle group attached to the lens. By contracting, this muscle changes both the shape and the position of the lens139. The range of this change is not well documented. Jennings & Charman give a value of 6.5 diopters for one subject140. Wyszecki & Stiles give values provided by Le Grand on the assumption that the lens does not move relative to the cornea. The range and consequences of errors in the absolute and differential values of the power of the lens system are addressed in Section 18.2.4.

2.4.2.3 Details of the human eye

[Figure 1.6.4-1] of the previous chapter has shown that the overall human optical system is quite complex and is best divided into several categories. This section will discuss primarily the physiological and photoreceptor optical systems. The physiological optical system actually consists of three dioptric elements: the cornea, the “lens” and the field lens. The field lens consists of the non-light-sensitive material of the retina (and the ellipsoid or “oil drop” of the IS if present) anterior to the OS. The aperture stop, known as the iris, also plays a major role in this part of the overall system. The photoreceptor optical system consists of the OS, which exhibits several properties related to an optical waveguide, and, when present, the ellipsoid in the IS that plays the role of a collimator to optimize the collection efficiency of the photoreceptor optical system.

138Koretz, J. (2002) Models of the lens and aging effects In Hung, G. & Ciuffreda, K. eds. Models of the Visual System NY: Kluwer Academic/Plenum Press, Chapter 2
139Young, J. (1975) The life of mammals Oxford: Clarendon Press pp 372-375
140Jennings, J. & Charman, W. (1978) Optical image quality in the peripheral retina Am J Optom Physiol Optics vol 55, no. 8, pp 582-590

2.4.2.3.1 Diffraction properties of the lens group

The human eye is a typical camera obscura. It is basically a sphere with an internal radius of about 12 mm. There is a two-element lens system in the aperture of the camera and a field lens immediately in front of the image plane located within the retina. The outer surface of the cornea is the first element of an immersed optical system, i.e., the entire remainder of the camera (up to the retina itself) is immersed in a fluid-like material of index 1.33. The outer surface forms a meniscus lens of 7.8 mm radius with an index of refraction of 1.3771. The values of 7.8 mm and 1.3771 are taken from Gullstrand’s unaccommodated full theoretical eye. Ho et al. provided a set of values (including standard deviations) in 2008 based on 221 right eyes obtained under different and more controlled conditions141. Their abstract included:

RESULTS: The means for r_ant, r_post, r_simK, and AP ratio were 7.75 ± 0.28 (SD) mm, 6.34 ± 0.28 mm, 7.75 ± 0.27 mm, and 1.223 ± 0.034, respectively. These parameters were normally distributed. The mean calculated keratometric index (Ncal) was 1.3281 ± 0.0018. Using the keratometric indices of 1.3281 (Ncal), 1.3315 (Gullstrand schematic eye), and 1.3375 (conventional), the mean arithmetic and absolute estimation errors for the total corneal power were 0.00 ± 0.24 diopter (D) and 0.17 ± 0.17 D, 0.43 ± 0.23 D and 0.45 ± 0.21 D, and 1.21 ± 0.24 D and 1.21 ± 0.24 D, respectively. The total corneal power was predicted to within ±0.50 D of the actual value in 95.0%, 60.2%, and 0.9% of eyes, respectively. The mean arithmetic and absolute estimation errors for the posterior corneal power using an AP ratio of 1.223 (this study) or 1.132 (Gullstrand schematic eye) were 0.00 ± 0.17 D and 0.13 ± 0.12 D and 0.47 ± 0.18 D and 0.47 ± 0.17 D, respectively. The posterior corneal power was estimated to within ±0.50 D of the actual value in 97.7% and 60.2% of eyes, respectively.
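The keratometric indices quoted above translate into corneal powers through the standard single-surface keratometric approximation, P = (n_k - 1)/r. The formula is the conventional clinical one, not a derivation from this text; the radius used is the mean anterior value from Ho et al.

```python
def keratometric_power(n_k, r_ant_mm):
    """Total corneal power from the single-surface keratometric
    approximation: P = (n_k - 1) / r, with r expressed in metres."""
    return (n_k - 1.0) / (r_ant_mm / 1000.0)

R_ANT_MM = 7.75  # mean anterior corneal radius from Ho et al. (mm)
for label, n_k in (("conventional", 1.3375),
                   ("Gullstrand", 1.3315),
                   ("Ho et al. Ncal", 1.3281)):
    print(f"{label:14s} n_k = {n_k}: P = {keratometric_power(n_k, R_ANT_MM):.2f} D")
```

The conventional index yields a power just over 43.5 D; the smaller calculated index of Ho et al. removes roughly the 1.2 D overestimate their abstract reports.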

The second optical element, of less optical power but adjustable by the muscles, is also immersed in this fluid. This second element exhibits a variation in index of refraction both along its axis and with distance from its axis, which is believed to contribute substantially to the reduction in spherical aberration for such a wide-angle system as the eye. The core of this lens has an index of 1.40; the index at the outer extreme is about 1.38. For distant vision, this lens has an anterior radius of curvature of about 10 mm and a posterior radius of curvature of about 6 mm. For close-in vision, both of these radii approximate 6 mm. Because it is an immersed optical system, its principal points are displaced toward the retina. The distance from the second principal point to the retina is nearly 23 mm, but the distance from the second nodal point to the retina is only 17 mm, and it is this latter distance that is usually taken as the effective focal length of the eye: the effective focal length is 17 mm in air and the actual focal length is 23 mm in the vitreous medium.
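The two focal lengths quoted above are related through the index of the image-space medium. A quick check, using the commonly quoted vitreous index of 1.336 (the text rounds the fluid index to 1.33):

```python
# Index of the image-space medium (vitreous); Gullstrand's value.
N_VITREOUS = 1.336

# Air-equivalent focal length: second nodal point to retina, per the text.
f_air_mm = 17.0

# In an immersed image space the posterior focal distance equals the
# air-equivalent focal length scaled by the image-space index.
f_vitreous_mm = N_VITREOUS * f_air_mm
print(f"posterior focal distance ~ {f_vitreous_mm:.1f} mm")
```

The product is about 22.7 mm, consistent with the "nearly 23 mm" quoted for the second-principal-point-to-retina distance.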

Although several radii were mentioned in the above paragraph, these surfaces are known to be elliptical and critical analysis should use the equations of ellipsoidal optics, particularly if the off-axis performance is important. One of the important features of an immersed optical system is its ability to change the angular subtense of an image in accordance with Snell’s Law. This accounts for the eye’s ability to accept an angular field of view of more than 135 degrees while only creating an image on the retina of some 100 degrees. Because of this important feature, it is inappropriate and very misleading for any college level text or atlas to treat the optical system of the eye as a “thin lens” and show an arrow being projected from the object plane to the image plane utilizing straight lines to connect the end points of the two arrows. This is especially true in ophthalmology texts where such simple diagrams appear

141Ho, J-D. Tsai, C-Y, Tsai, R. et al. (2008) Validity of the keratometric index: evaluation by the Pentacam rotating Scheimpflug camera J Cataract Refract Surg vol 34, pp 137-145 http://libir.tmu.edu.tw/bitstream/987654321/123/2/79.pdf

frequently to this day. The true situation was presented by Descartes in the year 1637142.

The optical power of the cornea is approximately 43 diopters and the lens contributes 16 diopters for distant vision, for a total of 59 diopters. For close vision, the lens power rises to as much as 26 diopters, for a total power of 70 diopters. The overall focal length of the eye thus changes from about 22.5 mm for distant vision to 19 mm for close vision. It is important to note (which most textbooks do not) that the principal optical element of the eye of land animals is the cornea. The secondary element is the "lens," which acts as an adjustable vernier to trim the overall optical power of the eye in a given situation. As noted earlier, the cornea is of the negative meniscus form, i.e., a physically thin (but not optically thin) lens with the center thickness less than the edge thickness. The high degree of curvature of both surfaces requires that the cornea be treated as a thick lens in optical design and analysis.

The second element, the lens, is a conventional biconvex lens in form. However, it is made of a material whose index of refraction varies with position. The variation in the index of refraction of the lens of the human eye has been tabulated. The lens can be represented by a multilayered structure, similar to an onion, with its axis parallel to the optical axis. Figures in Duke-Elder143 and in Hogan, et al.144 describe the physical geometry of the lens. The index varies with distance from the core along ellipsoidal surfaces. Campbell & Piers have documented the variation in index along the axis of the human lens, but only to 3-decimal accuracy or less (no error bars were provided)145 (Figure 2.4.2-8). Modern optical design programs can handle such variable-index materials; however, many of them are optimized to treat variable-index materials whose variance is either in the radius from the axis or in the distance along the axis.
These would have to be modified to handle an ellipsoidal situation like the vertebrate animal eye. This complex variation in the index in combination with the aperture stop of the eye accounts for several performance parameters of the overall eye (such as at least a major part of the overall Stiles-Crawford Effect).
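Returning to the angular compression noted earlier: it follows directly from Snell's law. The sketch below refracts the marginal ray at a single flat air/aqueous boundary with an assumed index of 1.336; it deliberately ignores corneal curvature and lens power, so it illustrates the compression mechanism rather than reproducing the exact 135-to-100 degree figures quoted above.

```python
import math

# First-order sketch of the angular compression produced by an immersed
# optical system: refraction at a single flat air/aqueous boundary.
# The real curved cornea and the lens are ignored, so this only shows
# the mechanism, not the exact field angles given in the text.
n_aqueous = 1.336  # assumed index of the aqueous humor

def internal_half_angle(external_half_angle_deg, n=n_aqueous):
    """Snell's law: sin(theta_ext) = n * sin(theta_int)."""
    s = math.sin(math.radians(external_half_angle_deg)) / n
    return math.degrees(math.asin(s))

ext = 135 / 2  # half of the 135 degree external field
# Compressed internal field, degrees (~87 for this oversimplified geometry):
print(2 * internal_half_angle(ext))
```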

Between the two lens elements is an adjustable aperture stop, commonly called the iris. This aperture is typically a circular hole varying in size from 2.0- to 7.0+ mm. Stiles146 has provided detailed data on the size of the iris versus light level and time. The time constant of the iris when opening is τo = 6.0 seconds; when closing, τc = 1.2 seconds. Using the focal length of 22.5 mm, these numbers give an F/# or optical speed of about F/11.2 to F/3.2. Using these numbers, the eye has a best on-axis resolution, defined by the diameter of its Airy disk, of about 1.9 microns in the green at 0.5 microns and F/3.2. The diameter grows to 6.8 microns at F/11.2. These spot sizes are based on a paraxial approximation of the optical system and assume a flat focal surface.
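These aperture figures can be reproduced in a few lines. Note that the quoted spot sizes (1.9 and 6.8 microns) correspond numerically to 1.22·λ·F/#, i.e. the radius of the first null of the Airy pattern; this sketch simply evaluates that expression at the two pupil extremes.

```python
# Sketch reproducing the aperture figures in the text: F-number from the
# 22.5 mm focal length and the 2-7 mm pupil, and the Airy spot at 0.5 um.
# The quoted spot sizes match 1.22 * lambda * F# (the first-null radius
# of the Airy pattern).
f_mm = 22.5
wavelength_um = 0.5

for pupil_mm in (7.0, 2.0):
    f_number = f_mm / pupil_mm
    airy_um = 1.22 * wavelength_um * f_number
    print(f"pupil {pupil_mm} mm -> F/{f_number:.1f}, Airy spot {airy_um:.1f} um")
# pupil 7.0 mm -> F/3.2, Airy spot 2.0 um
# pupil 2.0 mm -> F/11.2, Airy spot 6.9 um
```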

Figure 2.4.2-8 Refractive index profile of the human crystalline lens. Index profile along the optical axis of the dual gradient model. See text. From Campbell & Piers, xxx.

The depth of focus of the image formed at the focal plane is given by the acceptable image diameter multiplied by the F/# for the parafoveal or "near on-axis" condition. Outside this region, the quality of the image deteriorates seriously. Using similar F/#'s given by Miller, the image projected by the lens of the eye will be seriously out of focus when the focal plane surface differs from the inside surface of the eyeball by more than +/-3.0 microns at F/2.4 and +/-17 microns at F/8.5. Using a 4.0 mm aperture as typical of reading conditions, the

142Descartes, (1637) La Dioptrique, in Discours de la Méthode. See also Ottoson, D. (1983) The Physiology of the Nervous System. NY: Oxford Univ. Press pg. 347
143Duke-Elder, S. (1961) System of Ophthalmology. St Louis, MO: Mosby pg 311
144Hogan, M. Alvarado, J. & Weddell, J. (1971) Histology of the Human Eye. Philadelphia, PA: W. B. Saunders. pp. 642-643
145Campbell, M. & Piers, xxx (xxx) xxx cited in AAAS lecture in 2014 but without journal, etc.
146Wyszecki, G. & Stiles, W. (1982) “Color Science” NY: Wiley pg 105

two surfaces must coincide within +/-8 microns for satisfactory vision. Assuming the focal plane is flat for a moment, the curvature of the inside of the eye diverges from a flat surface by 8 microns at 1° 32', or about 600 microns from the axis. Liang et al. have provided detailed data on the other aberrations associated with the human eye, those not associated with focus errors (see Section 2.4.5.1).

2.4.2.2.2 Diffraction properties of the neural portion of the retina

The vertebrate eye, of which we are using the human eye as the model, includes a third optical element with two purposes;
+ in the human case, it is designed to maintain the plane of the overall optical system in congruence with the Petzval surface formed by the entrance apertures of the photoreceptor cells of the retina.
+ in some sophisticated vertebrate animal systems, primarily hunting birds, it has been further optimized to provide some degree of additional magnification in the region of the point (or line) of fixation.

The above feature was described briefly in Chapter 1.

With the sophisticated techniques employed in the vertebrate eye, particularly the location of the aperture stop, the variable index “lens” material and the presence of a field lens, a wide variety of eyeball shapes can be accommodated as seen in different vertebrate species.

2.4.2.2.3 Diffraction properties of the photoreceptor cell

The outer segment of the individual photoreceptor cell acts optically like a finite-diameter solid cylinder with a nearly flat end facing the lens of the eye. It operates as a waveguide for the energy supplied to it from the lens. The diffraction pattern of the photoreceptor cell is described by its radiation pattern, which is dependent on the waveguide mode excited. This pattern can vary significantly and its ramifications are described in Chapter 17 with regard to the various Stiles-Crawford Effects.

2.4.2.2.4 Vascular support to the oculus

The individual eye is served by two major vascular paths. The foveola and its immediate surround are supplied by the choroid artery within the optic nerve assembly. The more peripheral retina is served by the retinal artery, which enters the oculus separately.

2.4.3 Auxiliary Optical Techniques Used in Specific Visual Systems

There are five additional modifications of the basic lens system that are significant enough to describe here, since they illustrate the range of techniques available in the optical design of eyes. First is the iris and its variants among various visual systems. Second is the unusual elliptical optics used in some fish. Third is the auxiliary "contact lens" used by some birds and amphibia. The fourth involves the tapetum in a variety of modifications. The last concerns the dual retinas found in certain molluscs.

2.4.3.1 The Iris

The use of a lens in the aperture of a camera obscura greatly increases the light gathering capability of the system, since it allows a larger aperture without unacceptable image distortion. However, for photometric speeds faster than f/8, the lens itself must be compensated for its shortcomings. In the human eye, the lens is formed from two individual lenses, the cornea and the crystalline lens, which, along with the gradient index of the crystalline lens material, provide this compensation.

This raises the question of the purpose of the iris of the human eye. Because of its obvious changes in size, explaining its purpose has always seemed easy; it is to control how much light reaches the retina. This is a naive explanation; the iris only provides illumination control over a range of about 16:1, which is a very small part of the overall dynamic range of the eye, about 10 million to one. The actual primary function of the iris is to control optical aberrations when desirable147. Thus, under high illumination conditions, it closes and restricts the light rays passing through the lens to the least aberrated paths. This ensures the best overall system resolution, particularly on-axis. When light conditions are poorer, the iris opens, which does allow the collection of more light, but only at the expense of light rays following more aberrated paths through the optics. In this way, the system maintains its ultimate capability to provide alarm in the presence of external luminous intensity change, but at a cost of reduced image quality.

The sensitivity of the iris to light is subject to considerable discussion. Several studies have shown that pupil size is more strongly correlated with blue light intensity (e.g., Barbur et al., 1992) than with photopic luminance, with the effect becoming more prominent at lower luminance levels. Blue-rich light causes incrementally smaller pupil sizes than yellower light. Although it is sometimes assumed to be mediated by rod cell (scotopic) response, research indicates that pupil size may be dependent on blue-sensitive S-cones (Kimura and Young, 1999), a combination of rod and cone cell response with peak sensitivity at 490 nm (Bouma, 1962), or an L-cone minus M-cone mechanism (Tsujimura et al., 2001).

Beyond its role in optimizing performance by trading illuminance for aberration reduction, the iris in many animals is further adapted to maintain the highest practical resolution in one plane or another.
This additional optimization involves the shape of the closed iris. In closing, the iris affects the spatial resolution of the eye since it is the controlling factor in the f/# of the optical system. In the human, where there appears to be no need to preserve or emphasize the resolution of the visual system in any particular plane, the closed iris is circular. However, in many animals, there is a need to preserve the spatial resolution in one plane, even at the expense of resolution in an orthogonal plane. In these situations, the iris closes to a slit instead of a small circle.

In the cat family, the iris closes to a vertical slit. The iris of other animals may close to a horizontal slit or a dumbbell-shaped opening.

2.4.3.1.1 The Iris in the human

Although frequently portrayed in the literature as an aperture stop located immediately in front of the lens148, this is a misleading portrayal. The cornea of the human eye is the principal, most powerful, lens of the lens group. The iris is actually located between the two lenses. Its primary purpose is to optimize the spatial performance of the eye under varying illumination conditions. It limits aberrations under high illumination conditions by closing.

The gross transient performance of the iris is illustrated in Figure 2.4.3-1. A more detailed discussion of the transient response is presented in Section 7.4.7.4. Note that the iris is at minimum size throughout the majority of the photopic range (nominally at 1.5 log cd/m2 and above). This is important in understanding the primary role of the iris in controlling the quality of the optical image during photopic operation. It only begins to play a role in controlling the intensity of the illumination at the retina within the mesotopic and scotopic regions.

To understand the quality of the data supporting the solid line in this figure, DeGroot & Gebhard should be reviewed149. They stress the large variation in the data among the group of investigators whose results were used to assemble this curve. The solid line in fact has range bars as large as those of the dashed line.

147Campbell, F. & Gregory, A. (1960) Op. Cit.
148Oyster, C. (1999) Op. Cit. pg. 412
149DeGroot, S. & Gebhard, J. (1952) Pupil size as determined by adapting luminance J Opt Soc Am vol. 42, pp 492-495

Figure 2.4.3-1 Average pupil diameter as a function of background luminance. Solid curve is the average of several studies. Circles are the average of 12 observers from Spring & Stiles, 1948. From Geisler, 1989.

Because the data is hard to locate, Figure 2.4.3-2 illustrates the opening and closing performance of the human iris under forced conditions150. The available data is usually presented in tabular form without reference to an equation. However, it is seen from this figure that the human iris is simple and well-behaved from a mathematical standpoint. Its response is a first-order exponential for both opening and closing. Under normal conditions, the iris opens and closes very slowly in response to similar changes in the illumination level.
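The first-order exponential behavior just described can be sketched directly, using the opening and closing time constants quoted earlier (τo = 6.0 s, τc = 1.2 s). The 3 mm and 7 mm end-point diameters below are illustrative assumptions, not values taken from Reeves' data.

```python
import math

# Minimal sketch of the first-order exponential iris response described in
# the text. Time constants are those quoted earlier (tau_open = 6.0 s,
# tau_close = 1.2 s); the 3 mm and 7 mm end-point diameters are
# illustrative assumptions only.
TAU_OPEN_S, TAU_CLOSE_S = 6.0, 1.2

def pupil_diameter(t_s, d_start_mm, d_end_mm, tau_s):
    """First-order step response: d(t) = d_end + (d_start - d_end) * exp(-t/tau)."""
    return d_end_mm + (d_start_mm - d_end_mm) * math.exp(-t_s / tau_s)

# Opening in darkness, 3 mm -> 7 mm:
for t in (0.0, 6.0, 18.0):
    print(f"t={t:5.1f} s  d={pupil_diameter(t, 3.0, 7.0, TAU_OPEN_S):.2f} mm")
# t=  0.0 s  d=3.00 mm
# t=  6.0 s  d=5.53 mm
# t= 18.0 s  d=6.80 mm
```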

Figure 2.4.3-2 Dynamics of the human iris. Diameters are apparent size measured externally after a change in illumination involving 320 cd/m2 (white light). Solid curves from data of Reeves, 1920.

150Reeves, P. (1920) The response of the average pupil to various intensities of light. J. Opt. Soc. Am. Vol. 4, pg. 35 (also in Wyszecki & Stiles, pg 106.)

2.4.3.2 The Elliptical Eyeball

It is very interesting to examine the special optical requirements, and the design used to satisfy them, in the fish Anableps tetrophthalmus151 (Family Cyprinodontidae, Suborder Haplomi), which loiters at the surface of a body of water and needs to see well both above the horizon in the air environment (index of refraction 1.00) and below the surface in the water environment (index of refraction 1.33). It accomplishes this feat very effectively by using an elliptical eyeball. In the viewing region below the surface of the water, the eye operates as in other aquatic animals, a simple "non-immersed" optical system with the same index of refraction on both sides of the lenses; in the viewing region above the water's surface, the eye operates as in terrestrial animals, an "immersed" optical system with different indices for the optical media on the two sides of the lenses. Figure 2.4.3-3(A) shows this situation in caricature. (B) shows the details, including the different role played by the cornea. The cornea has almost no optical power when submerged. It has significant power when in air. An unfortunate conflict between the scientific terminology and the vernacular should not be overlooked here; the portion of the eye that is not immersed in water operates in the "immersed optical" mode.
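Why the cornea loses almost all of its power under water follows from the single-surface power formula P = (n_cornea − n_medium)/R. The corneal index (1.376) and 7.7 mm anterior radius below are standard schematic-eye assumptions, not values given in this text, and only the front surface is considered.

```python
# Sketch of why the cornea has almost no power when submerged:
# single-surface power P = (n_cornea - n_medium) / R, front surface only.
# The index 1.376 and the 7.7 mm radius are assumed schematic-eye values,
# not figures taken from this text.
n_cornea = 1.376
R_front = 7.7e-3  # m, anterior corneal radius

def corneal_surface_power(n_medium):
    return (n_cornea - n_medium) / R_front  # diopters

print(round(corneal_surface_power(1.000), 1))  # in air:   ~48.8 D
print(round(corneal_surface_power(1.333), 1))  # in water: ~5.6 D
```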

2.4.3.3 The Nictating Auxiliary Lens

Many animals are known to have an auxiliary inner eyelid. It is usually assumed to be an added protection for the cornea against the environment. However, it has frequently been found to have a variation in thickness in the region intercepting the optical rays passing through the iris of the eye. If it also has an optical index different from that of the media between the object being viewed and the cornea, it will form what would commonly be called a "contact lens" that is removable/insertable under muscular control. This situation is commonly found in amphibians, as well as in aquatic mammals, reptiles and birds. Hickman152 says such a nictating membrane is vestigial in man.

2.4.3.4 The tapetum

Figure 2.4.3-3 The optical system of Anableps tetrophthalmus. Main sketch and A from Kershaw (1983). B by author.

The tapetum, found posterior to the photoreceptive structure of the eye, serves a variety of purposes in different species. It does not seem important in the higher mammals. In most species, it acts as a contrast-enhancing light absorber that reduces the scattering of light within the eye.

In many nocturnal species, it acts as a sensitivity-increasing reflector by reflecting any unabsorbed light back through the retina. In this role, it is frequently asymmetrical with respect to the local horizon. It reflects light received from below the local horizon but absorbs light received from above the horizon. In a very few species, the retina has separated from the tapetum, as discussed in the next section, and the tapetum has become a distinct optical element in the overall visual system.

2.4.3.5 The Spatially Separated Retinas

As noted in Chapter 1, the case of Pecten is fascinating. It has evolved an eye with two separate retinas placed next to each other but separated from the tapetum and other elements at the posterior of the optical orb. The tapetum has become a reflective mirror in a catadioptric optical system consisting of the objective group and the tapetum. When the eye is immersed in sea water, the cornea is ineffective but the crystalline lens and the tapetum combine to form a

151Kershaw, D. (1983) Animal Diversity. London: University Tutorial Press pg. 328
152Hickman, C. op. cit. pg 783

catadioptric optical system bringing light to focus on one of the two retinas. When the eye is not immersed in sea water, the cornea of the objective group is effective, and the cornea and crystalline lens operate as a dioptric optical system with the other retina. This provides an animal living in an estuary with focused vision under both aquatic and terrestrial conditions.

2.4.4 The Field Lens and Image Plane of the Optical System

The literature of the 1930-60's contained considerable discussion about the structure and material of the region now labeled the fovea and foveola. It was fostered by the obvious coloration of this region when viewed through the iris. The early experiments were exploratory and the conclusions were primarily conceptual. As higher resolution pictures became available, different conclusions were drawn. These were frequently contradictory in detail. As an example, Miller says the retina is thinned in this area, while the book Science of Color153 says the retina is somewhat thickened in this area. Neither describes the diameter of the area under discussion.

By reviewing the pictures in this work as a function of their diameter, a number of conclusions can be drawn. In general, the total thickness of the retina is reduced in the foveola. There is frequently a small thickening of the overall retina in the area surrounding the fovea compared with the retina as a whole. At a more detailed level, the Outer Segments are frequently longer (thicker in terms of a layer) in the area of the foveola than in the surrounding regions.

Wald154 provided early spectral data concerning the macula lutea, which is frequently described as the yellow spot. His measurements showed the absorption to be as high as 60% in the region of 455 nm. His measurements were obtained by differencing the sensitivity of foveal and peripheral photoreceptors. This is a very primitive technique by modern standards. He proposed the cause of this absorption was the presence of a carotenoid, either lutein or leaf xanthophyll, C40H54(OH)2. The data points presented in his Figure 4 show only a casual relationship to xanthophyll. This was at a time when only a handful of carotenoids were known. Many other materials and/or mechanisms could provide similar differential sensitivities. Any of a variety of currently known carotenoids or retinoids could be involved.
The neural material displaced from the region in front of the IS could show an absorption spectrum in the short wavelength region. Its removal could result in a yellowish color being reported for the remaining material.

The Science of Color155 says “Light falling upon the macula lutea, or yellow spot, which is the yellow pigmented area surrounding and including the fovea, is absorbed to an extent that increases with decreasing wavelength to 450 nm, where the absorption starts falling again. The pigmented area is some 7 degrees in diameter and includes several identifiable regions, but its extent and distribution vary irregularly and significantly from one person to another.” This statement would be completely consistent with the artifact being due to a reduction in the amount of S-channel chromophore present in this area compared with adjacent areas. The peak at 450 nm is consistent with the data of Wald.

There are now a number of high resolution cross-sectional views of the central region of the retina for both man and other chordates. Many of these were obtained using different color illumination and highlight different features. This situation requires that care be taken in drawing conclusions about the animal optical system. The figures of Miller156 and Fine & Yanoff157 are notable.

153Science of Color. (1963) NY: Optical Society of America pg. 84
154Wald, G. (1945) Human vision and the spectrum. Science. Vol. 101, No. 2635 pp. 653-658
155Science of Color. (1963) NY: Optical Society of America pp. 104-105
156Miller, D. (ed) (1987) Clinical light damage to the eye. NY: Springer-Verlag.
157Fine, B. & Yanoff, M. (1979) Ocular histology. NY: Harper & Row, Medical Dept. pg. 96

Figure 2.4.4-1 Cross section of the fovea of M. mulatta in blue and green light. The significant absorption of blue light by the axons of the Inner plexiform layer (IP) and the photoreceptor axon layer (RA) is clearly shown. Green light is highly absorbed in the outer segment layer (OS) and the RPE (PE here). One degree of visual field equals approximately 246 microns in the Macaque. From Snodderly (1984).

Miller158 shows a histological cross-section of the fovea that does not show a separate layer identifiable as the macula lutea. Quoting Miller, “The composition of the yellow pigment seems to be a combination of carotenoids and is primarily located in the outer plexiform layer with some in the inner core segments.” It may also be an artifact of removing the soma of the bipolar and other cells from the foveal region. A possibility exists that it is due to a lower density of S-channel chromophores. Only further detailed experiment is likely to answer this question.

Recent imagery of Snodderly159 provides additional data that questions the conclusions of Wald about the macula lutea and the differencing methods he used. Figure 2.4.4-2 is a photomicrograph of the retina of M. mulatta in both green and blue light. This clearly shows the high absorption of the RPE (PE in the figure) and the OS for green light

158Miller, D. (1991) Optics and refraction--a user-friendly guide. NY: Gower Medical Publishing pp. 3.24-3.25
159Snodderly, D. Auran, J. & Delori, F. (1984) The macular pigment II. Spatial distribution in primate retinas. Invest Ophthal. Vis. Sci. vol. 25, pp. 674

of undefined wavelength. It also shows that the retina is highly absorptive of blue light. The light is highly absorbed by the axons in the IP layer and the RA (or fiber) layer. These axons are the dominant material in these two layers, particularly in the fovea. It is likely that it is the axons themselves that are the absorptive material resulting in the yellow reflectance of the macular area.

This figure can also be used to interpret the refractive properties of the neural layers. Several factors are involved in the matching of the entrance aperture of the individual photoreceptor cells and the imaging surface of the imaging optics. Note particularly the line formed by the junctions of the inner and outer segments of the photodetectors. At this scale, this is the Petzval Surface that the imaging optics must match. This line is not straight in a local sense--nor a curve of uniform radius at a larger scale. An explanation for this variation must be given in any overall discussion of the optical system of the eye.

The neural tissue of the retina has been arranged to minimize the optical scattering of light in the field plate due to the presence of miscellaneous structures of various indices of refraction. Making these changes has influenced the local average index of refraction. This necessarily changes the optical path difference associated with rays passing through this area. The result is that the rays will come to a focus in this local area at a surface anterior to the plane of focus for rays beyond this area. The image surface will exhibit a local curvature toward the 2nd principal point.
Compensating for this curvature is the principal reason that the OS elements in the fovea are longer than in the adjacent region. See reference to Brown, Watanabe & Murakami in Section 3.2. Section 1.7.1.4.3 describes the significantly more complex field plate formed by the neural laminate in the hunting birds.

2.4.3.5 The putative neural retina as a fiber optic plate

Franze et al. have recently attempted to identify the neural laminates of the retina with a fiber optic plate160. The plate is formed by a large number of Muller cells forming radial light pipes between the physical surface of the neural laminates at the vitreous boundary and the input to the photoreceptor cells. Their figure 3 does not appear to describe the photoreceptor cell adequately, since the applied light usually passes through the inner segments before reaching the outer segments (ROS). Their apparent reliance on figure 3 of Ashton & Tripathi leads to questions161. The figure in question is actually of the endothelium of the cornea and totally unrelated to the structure of the retina. A more appropriate image would be that of Roorda & Williams (Section 3.2.3) [xxx showing the apparently rounded and closely spaced inputs to the retina ]

The idea that the radial Muller cells form a light pipe does not appear to be compatible with the non-radial character of the light reaching the photoreceptor cells at 90 degrees from the fixation point in the normal eye. The photoreceptors are known to be oriented toward the pupil, and not radially, in this region. The failure of the photoreceptors to orient toward the pupil is a recognized disease condition in the clinical literature.

2.4.5 Summary of the Optical System up to the Petzval Surface

In summary, the non-aquatic eye is defined as an immersion-type system consisting of three optical groups. Two of these groups are used to form the nominal image needed to match the Petzval surface of the photoreceptors. The third group is a field group formed of neural tissue, used to increase the efficiency of photoreceptor absorption.

The objective group consists of a primary and strongest refracting lens known as the cornea, an aperture stop known as the iris, and a vernier refracting lens under muscle control known as the crystalline lens. The second optical group consists of the field plate. It is variable in thickness and consists of the neural material of the retina between the inner and outer limiting membranes. The portion of the field plate covering the fovea is generally called the macula lutea because of its distinctive color. This coloration appears to be due primarily to the removal of blue-absorbing neural material from the optical path. The result may have a slight operational effect by reducing the absorption of blue light on its way to the foveola.

The crystalline lens is known to have a variable index of refraction with distance from the optical axis; a condition that would reduce the spherical aberration considerably. The field lens may also be found to exhibit a variable index of refraction with distance from the fovea centralis to suppress

160Franze, K. Grosche, J. Skatchkov, S. et al. (2007) Muller cells are living optical fibers in the vertebrate retina PNAS vol 104(20), pp 8287-8292
161Ashton, N. & Tripathi, R. (1972) The argyrophilic mosaic of the internal limiting membrane of the retina Exp Eye Res vol 14, pp 49-52

spherical aberration further. Spherical aberration is not related to the shape of the Petzval Surface but to how well the light rays passing through different zones of the optical system are brought to focus at a given point on this surface.

The image produced by this optical system is probably diffraction limited over a very small area centered on the intersection of the optical axis and the retina. Beyond that region, the spread function of the optical system is probably matched to the size of the individual photoreceptor. To understand the operation of the visual system, it is mandatory that a complete (Maxwellian) optical analysis be used. The use of Gaussian optics, and nodal points, is not allowed. [Figure 2.4.1-3] stressed that the angle between the target in object space and the line of fixation is not the same as the angle in image space between the target image and the point of fixation. This is an important consideration in analyzing and interpreting the off-axis performance of the visual system.

2.4.5.1 On-axis performance

[material in remainder of this section should move to Chapter 17 part 2 ]

Liang & Williams have recently provided the most comprehensive data for the on-axis performance of the human eye. Figure 2.4.5-1 reproduces their data collected using a variety of measurement techniques. Most of the data was collected at, or corrected to, a 632.8 nm wavelength, and errors in focus and astigmatism have been removed. Frame (b) provides a definitive description of the Rayleigh limit of the eye and, indirectly, the focal length of the subject's eye. Note the distinctly different shape of the curves for the 2 and 3 mm pupils at low frequencies. These responses are approaching diffraction-limited performance. This is shown even more clearly in frame (a), where the 2 mm response is approaching a straight line. The eye is approaching diffraction-limited performance at this pupil size. The dashed line has been added to predict the performance of the human eye with a one mm pupil.
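For comparison with these measured curves, the diffraction-limited MTF of an aberration-free circular pupil is a standard closed-form result. The sketch below evaluates it at the 632.8 nm test wavelength used by Liang & Williams; it is a reference envelope, not a model of the real, aberrated eye.

```python
import math

# Standard diffraction-limited MTF of an aberration-free circular pupil:
# MTF(x) = (2/pi) * (acos(x) - x * sqrt(1 - x^2)), x = nu / nu_cutoff,
# with nu_cutoff = (pupil / lambda) * (pi / 180) in cycles per degree.
# Wavelength matches the 632.8 nm test conditions described in the text.
def cutoff_cpd(pupil_m, wavelength_m):
    return (pupil_m / wavelength_m) * math.pi / 180.0

def mtf(nu_cpd, pupil_m, wavelength_m):
    x = nu_cpd / cutoff_cpd(pupil_m, wavelength_m)
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

pupil, lam = 2e-3, 632.8e-9           # 2 mm pupil, HeNe wavelength
print(round(cutoff_cpd(pupil, lam)))   # ~55 cycles/degree cutoff
print(round(mtf(30, pupil, lam), 2))   # MTF at 30 cycles/degree ~ 0.34
```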

The aberrations in frame (a) are very significant. At a 7.3 mm pupil size, the eye exhibits nearly a full wavelength of aberration (expressed as a cumulative wavefront error due to all causes). This is far worse than a typical commercial criterion for a diffraction limited system of a cumulative error of λ/14. Only the 2 mm response exhibits a cumulative wavefront error of better than λ/4 (still well short of the commercial standard). The Liang & Williams paper contains values for each of the Zernike coefficients combining to give these relatively large wavefront errors. This tabular data was collected using a Hartmann-Shack wavefront sensor. Wavefront error maps are provided summarizing their data. They also show the degree of mirror symmetry in the errors associated with the two eyes of one subject.
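The λ/14 criterion can be connected to image quality through the Maréchal approximation relating RMS wavefront error to the Strehl ratio. This is a sketch only: the approximation strictly holds for small errors, so applying it to the much larger errors reported here gives no more than an order-of-magnitude indication.

```python
import math

# Marechal approximation: Strehl ratio S ~ exp(-(2*pi*sigma)^2), where
# sigma is the RMS wavefront error in waves. It shows why lambda/14 RMS
# is the conventional diffraction-limited criterion (S ~ 0.8). For the
# much larger errors quoted in the text the formula is only indicative.
def strehl(rms_waves):
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

print(round(strehl(1 / 14), 2))  # ~0.82: the classic diffraction-limited cutoff
print(round(strehl(1 / 4), 2))   # ~0.08: a quarter-wave RMS error
```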

Figure 2.4.5-1 Mean of the radially averaged MTF’s of the eye for different pupil sizes. Observed pupil sizes in millimeters.

Navarro et al. have also provided recent data on the optical performance of the human eye under a variety of conditions162. They used a helium-neon laser in an unequal-path Twyman-Green interferometer configuration with auxiliary viewing and display equipment. The subject viewed the focal point of the interferometer at a constant distance using natural accommodation (3 diopters). They maintained a 4 mm pupil size (technically the entrance pupil, observed externally via the optical axis of the test set) by controlling the background light level of the perimeter screen. As shown in [Figure 2.4.1-3(d)], they actually observed the projection of the pupil into the optical axis of their test set (the entrance aperture). The data was acquired as a result of two passes through the physiological optics. The authors took precautions to process their data as required by this technique. They also discuss a variety of tradeoffs involved with their test configuration.

Their method of measurement is entirely physical. The subject's only task is to ensure appropriate accommodation and minimal vignetting. He minimizes vignetting by positioning his head appropriately in an auxiliary experiment or task. The method does involve a considerable amount of computation, beginning with the aerial images recorded on a CCD camera. They illustrated the variability in the data in their figure 7. Their calculations were based on the Strehl resolution variant of the Strehl ratio. They chose to present their MTF data using a logarithmic ordinate scale. While unconventional, it does provide greater detail and precision at higher spatial frequencies and higher angles of eccentricity.

Figure 2.4.5-2 shows their data along with the theoretical MTF of the LeGrand Eye (F.L. = 22.2888 mm) at pupil sizes of two and four millimeters. As would be expected in a very wide angle optical system, the system is highly aberrated even for the on-axis condition. This condition is confirmed by the low Strehl ratios presented in their figure 5. The on-axis ratios were between 12 and 18 percent, depending on subject. As suggested in Section 2.4.3, the human eye operates at an observed pupil size of about 2 mm over most of the photopic range. As noted in Figure 2.4.1-3(c), the actual pupil may be slightly smaller than the apparent pupil (the entrance pupil) observed from

162Navarro, R. Artal, P. & Williams, D. (1993) Modulation transfer of the human eye as a function of retinal eccentricity. J. Opt. Soc. Am. A. vol. 10, no. 2, pp 201-212

outside the eye. It would be interesting to acquire data using the test set of Navarro et al. at a smaller pupil size. The aberrations could be reduced. However, the diffraction limit would be moved in to only 70 cycles per degree.

In their analysis, they attempted to fit the smoothed curves for each condition with the sum of two exponential terms (their eq. 1). However, they noted that an exponential does not go to zero at a finite spatial frequency. This fact is more obvious in their off-axis data.

Williams, Sekiguchi & Brainard also provided an on-axis MTF in 1993163. They attribute the MTF to a collaboration with Navarro (see above). They did not specify whether the data was collected using a psychophysical experiment on the Sekiguchi interferometer (single-pass) or a physical experiment on the Navarro interferometer (two-pass). The data was collected at a wavelength of 632.8 nm. In either case, it is not clear why their caption suggests “M and L cones” if the experiment did use this wavelength. The context of the article suggests the data was obtained from calculations of the MTF using contrast threshold data. If the data was obtained using the Navarro test set, it would have been collected based on Airy disk data. The shape and magnitude of the results are completely compatible with the previous figure except that it was acquired at a longer wavelength. Figure 2.4.5-3 illustrates their data plotted against the theoretical MTF of a LeGrand Eye for a three millimeter real pupil size and 632.8 nm light. The data is that obtained by averaging the results from three subjects. It was obtained using a three millimeter artificial pupil and paralyzed accommodation.

Figure 2.4.5-2 MTF of the human eye at its point of fixation using a logarithmic scale. Also shown are the theoretical MTF’s for the LeGrand Eye at a true pupil size of 2 and 4 mm. Original graph from Navarro et al., 1993.

The diffraction limit for the LeGrand Eye is calculated as 83 cycles per degree for a three mm pupil and 632.8 nm light.
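These diffraction limits follow from the standard cutoff expression for an aberration-free circular pupil, ν_c = D/λ in cycles per radian, converted to cycles per degree. The sketch below, which is not part of the original analyses and uses illustrative function names, reproduces the quoted values (small differences from the text's 69 and 139 figures are rounding only) and also evaluates the classical diffraction-limited MTF of a circular pupil.

```python
import math

def cutoff_cpd(pupil_mm, wavelength_nm):
    """Diffraction cutoff of a circular pupil, in cycles per degree:
    nu_c = D / lambda (cycles per radian), scaled by pi/180."""
    cycles_per_radian = (pupil_mm * 1e-3) / (wavelength_nm * 1e-9)
    return cycles_per_radian * math.pi / 180.0

def diffraction_mtf(nu_cpd, pupil_mm, wavelength_nm):
    """Diffraction-limited MTF of an aberration-free circular pupil."""
    s = nu_cpd / cutoff_cpd(pupil_mm, wavelength_nm)  # normalized frequency
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

print(round(cutoff_cpd(3.0, 632.8)))  # 83 cycles/degree (3 mm, 632.8 nm)
print(round(cutoff_cpd(2.0, 500.0)))  # 70 cycles/degree (2 mm, 500 nm)
print(round(cutoff_cpd(4.0, 500.0)))  # 140 cycles/degree (4 mm, 500 nm)
```

The measured curves in the figures can be compared against `diffraction_mtf` evaluated at the same pupil size and wavelength.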

The measured response still shows considerable aberration at the lower spatial frequencies. However, it is approaching the diffraction capability of the optics near 60 cycles per degree. The low frequency aberration is similar to that usually associated with a focus error. If this is the actual source of error, it is equivalent to an optical path difference (OPD) of about 3λ/4.
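As a rough consistency check, not found in the original papers, the Maréchal approximation relates RMS wavefront error to the Strehl ratio. Assuming pure defocus, for which the RMS error is the peak OPD divided by 2√3, a 3λ/4 focus error predicts a Strehl ratio near the 12 to 18 percent range reported earlier. The approximation is strictly valid only for small aberrations, so the result is indicative only.

```python
import math

def strehl_marechal(rms_error_waves):
    """Marechal approximation: S ~ exp(-(2*pi*sigma)^2), with sigma the
    RMS wavefront error in waves. Valid strictly for small aberrations."""
    return math.exp(-(2.0 * math.pi * rms_error_waves) ** 2)

# For pure defocus, RMS error = (peak OPD) / (2 * sqrt(3)).
peak_opd_waves = 0.75  # the ~3*lambda/4 focus error suggested above
rms = peak_opd_waves / (2.0 * math.sqrt(3.0))
print(f"Strehl ~ {strehl_marechal(rms):.2f}")  # ~0.16, within the reported 12-18%
```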

Williams et al. may have had difficulty with their definition of the Nyquist limit (the dashed vertical line in the figure), as opposed to the Nyquist frequency, under their interpretation of this data. They used the Geisler model of an ideal observer (See Section 19.2). They calculated this limit as 56 cycles per degree under that mosaic-based-imager model but measured a finite MTF at values greater than this. As a result, they show the last line segment in their figure as dashed. Part of the problem highlighted in their discussion involves the interpretation of the performance of their test set (See Section 17.6.4). Under the interpretation used here, “super-Nyquist resolution,” which is an unexplained mechanism in their interpretation, does not occur. Much of the discussion in their paper regarding this putative phenomenon, and their effort to introduce aliasing as part of the explanation for the observed results, can be ignored under this interpretation.

Figure 2.4.5-3 MTF of the human eye at its point of fixation using 632.8 nm light and a 3 mm artificial pupil (open circles). The ordinate is a linear scale. Also shown is the theoretical MTF for the LeGrand Eye calculated at 632.8 nm and a true natural pupil size of 3 mm. See text. Original data points reinterpreted from Williams et al., 1993.

The diffraction limit for the LeGrand Eye is calculated as 69 cycles per degree for a two mm pupil and 139 cycles per degree for a four mm pupil based on 500 nm light.

Rovamo, Kukkonen & Mustonen have provided data for the on-axis human eye for a variety of entrance pupil dimensions164. They base their work on a unique description of the human visual system as a 5-step image processor. Their rationalizations, when limited to the photopic region, are generally compatible with the conclusions of this work. Note that the vision researcher generally speaks of the pupil even when using an external aperture stop that is more correctly called an “entrance pupil.” The on-axis entrance pupil is generally 10% larger in diameter than the actual pupil of the physiological optical system. The difference between the size of the entrance pupil and the equivalent internal pupil can become quite large when speaking of off-axis performance. Their data shows the human visual system to be nearly diffraction limited only for an entrance pupil, or real pupil, of under 1.0 mm. Even with an entrance pupil of 1.5 mm, the eye exhibits significant degradation from the ideal. They did not address the structure of the retinal mosaic. It is not clear why they introduced the approximation for the MTF in their equation 8 when they have the precise description in their equation 11.

163Williams, D. Sekiguchi, N. & Brainard, D. (1993) Color, contrast sensitivity, and the cone mosaic. Proc. Natl. Acad. Sci. USA, vol. 90, pp 9770-9777

2.4.5.1.1 Aberrations in On-axis performance

Measurement of the eye’s wave-front aberration function is becoming an important goal of clinical optometry and ophthalmology. Atchison, Joblin & Smith have recently provided an overview and computational analysis of the aberrations usually associated with the Stiles-Crawford Effect of the 1st kind. They do speak in terms of an entrance pupil. However, they do not differentiate that term from the lens pupil of the real eye. Unfortunately, they limited their model and discussion to “centered optical systems” based on thin lenses and no gradient optical elements. The human visual system is not a centered system. The fovea, and the associated line of fixation, are de-centered as discussed in Section 2.2.2.2. They did not provide a comprehensive list of the parameters used in their analysis. They do reference Applegate & Lakshminarayanan, who have defined the centroid of the mechanism causing the Stiles-Crawford Effect165. While their goal is admirable and the work carefully done, it treats the Stiles-Crawford Effect as of retinal origin and attempts to equate it to an apodization. In fact, its origin is in the physiological optics and is due to the gradient in the index of refraction of the lens with distance from its centroid. For the on-axis condition, the effect of apodization is to reduce the importance of the aberrated rays passing through the periphery of the lens. This does reduce the apparent impact of the Stiles-Crawford Effect. Part of their conclusion is correct. The Stiles-Crawford Effect of the 1st kind is not effective in correcting for spherical aberration and defocus (see Section 17.3.7).

Jennings & Charman have provided a composite of the MTF data of several investigators that continues to suggest the human eye is aberration limited at a 3.8 mm iris opening and larger166. He et al. have recently provided measured wavefront errors for the in-vivo human eye at 543 nm167. Their technique is quite sophisticated and shows the considerable difference in the wavefront errors between individuals under identical conditions. A systematic asymmetry relative to the line of fixation is usually found. This asymmetry is suggestive of the normal off-center operation of the system.

164Rovamo, J. Kukkonen, H. & Mustonen, J. (1998) Foveal optical modulation transfer function of the human eye at various pupil sizes. J. Opt. Soc. Am. A. vol. 15, no. 9, pp 2504-2513
165Applegate, R. & Lakshminarayanan, V. (1993) Parametric representation of Stiles-Crawford functions. J. Opt. Soc. Am. A. vol. 10, pp 1611-1623
166Jennings, J. & Charman, W. (1978) Optical image quality in the peripheral retina. Am. J. Optom. Physiol. Optics. vol. 55, no. 8, pp 582-590, fig. 3
167He, J. Marcos, S. Webb, R. & Burns, S. (1998) Measurement of the wave-front aberration of the eye by a fast psychophysical procedure. J. Opt. Soc. Am. A. vol. 15, no. 9, pp 2449-2456

Salmon, Thibos & Bradley have gone farther and introduced a Shack-Hartmann wave-front sensor at a wavelength of 632.8 nm168. The in-vivo results are quite encouraging. Subtle variations in the wavefronts of individuals were determined.

2.4.5.1.2 On-axis performance versus light level

Visual acuity is reduced significantly at reduced light levels. Figure 2.4.5-4 shows this performance in a figure from Smith169. While part of the reduction is due to the increasing use of the peripheral areas of the lens as the pupil becomes larger, the signal-to-noise ratio of the signal applied to the individual photoreceptors of the retina also plays a role. Vision under scotopic conditions is typically signal-to-noise limited. This characteristic is described in figure 5.6 of Smith.

2.4.5.2 Off-axis performance

Smith has also provided a general description of the variation in visual acuity with angle in a log-log presentation. He notes that while the presentation is comprehensive in angle, the falloff is far more rapid than the shape of the curve might indicate. Miller & Newman are currently the authorities in this area170. Figure 2.4.5-5 shows their reproduction of a figure from Randall et al171.

Figure 2.4.5-4 Visual acuity as a function of object brightness in reciprocal minutes. The dashed and dotted lines show the effect of increased and decreased (respectively) surround brightness (1 millilambert is approximately the brightness of a perfect diffuser illuminated by 1 foot-candle). The open circle curve indicates the diameter of the pupil. From Smith, 2000.

168Salmon, T. Thibos, L. & Bradley, A. (1998) Comparison of the eye’s wave-front aberration measured psychophysically and with the Shack–Hartmann wave-front sensor. J. Opt. Soc. Am. A. vol. 15, no. 9, pp 2457-2465
169Smith, W. (2000) Modern Optical Engineering. NY: McGraw-Hill, pg 137
170Miller, N. & Newman, N. ed. (1998) Walsh and Hoyt’s Clinical Neuro-Ophthalmology. 5th ed, vol. 1. Baltimore, MD: Williams & Wilkins
171Randall, H. Brown, J. & Sloan, L. (1966) Peripheral visual acuity. Arch. Ophthalmol. vol. 75, pp 500-504

Figure 2.4.5-5 Reduction in visual acuity with visual field eccentricity. A composite of many investigators. The Ludvigh curve appears to be displaced vertically. From Randall et al., 1966.

Jennings & Charman have provided off-axis performance data based on skiagrams172. The skiagram is a two dimensional representation of equivalent refraction error as a function of peripheral angle. The data is derived from a Fourier Transform of a two-pass Airy disk. Their data shows clearly that the optical performance of the physiological optics is not aligned with the line of fixation of the eye. It also shows the off-axis performance of the optics parallels the density of the ganglion cells according to Weymouth.

Navarro et al. proceeded to measure the temporal off-axis performance of the human eye out to 60 degrees eccentricity from the line of fixation in object space. Figure 2.4.5-6 shows their data. The data continues to show significant aberration at all angles, in line with the low Strehl values noted in their figure 5. While they claim they maintained a natural pupil of 4 mm for all angles observed, the observed pupil size is not necessarily indicative of the true pupil size. As the angle from the fixation point increases, the effective size of the real pupil decreases, as does the focal length of the eye. The ratio remains nearly constant. However, the pupil becomes tilted relative to the principal ray. In addition, the gradient index characteristics of the lens, see [Figure 2.4.1-3(b)], contribute considerable asymmetry to the ray bundle passing through the lens. Because of these two facts, it is likely the theoretical performance of the eye at high angles from the line of fixation would have been lower than that of the theoretical 4 mm MTF shown. This would be consistent with the lower Strehl ratios of their figure 5 at higher off-axis angles.
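A first-order way to visualize the pupil tilt just described is to treat the projected pupil as an ellipse foreshortened by cos θ in the tangential direction, which reduces the tangential diffraction cutoff proportionally. The sketch below is a deliberate simplification that ignores the gradient-index asymmetry discussed above; the function name and parameters are illustrative only.

```python
import math

def tangential_cutoff_cpd(pupil_mm, wavelength_nm, eccentricity_deg):
    """Sketch: tangential diffraction cutoff for an entrance pupil
    foreshortened to an ellipse by cos(theta) at eccentricity theta.
    Ignores the gradient-index asymmetry of the real lens."""
    theta = math.radians(eccentricity_deg)
    effective_pupil_m = pupil_mm * 1e-3 * math.cos(theta)
    return effective_pupil_m / (wavelength_nm * 1e-9) * math.pi / 180.0

# The cutoff falls roughly as cos(theta) with eccentricity:
for ecc in (0, 20, 40, 60):
    print(f"{ecc:2d} deg: {tangential_cutoff_cpd(4.0, 632.8, ecc):5.1f} c/deg")
```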

172Jennings, J. & Charman, W. (1978) Op. Cit.

2.4.5.2.1 Aberrations in Off-axis performance

Recently, Navarro, Moreno & Dorronsoro have provided data on the off-axis aberrations of the human eye, in-vivo, using laser reflectometry at 543 nm173. The results include many spot diagrams. They are quite well organized and informative. They extend from the fixation point to 40 degrees along the horizontal plane in the temporal direction (the side of the retina away from the optic disk).

Figure 2.4.5-6 MTF of the human eye at angles relative to the line of fixation on a logarithmic scale. Average radial profiles (symbols) and results of curve fitting (continuous curves) for six eccentricities. Measured with a 4 mm +/– 0.25 mm pupil at 632.8 nm. Results for 10 and 40 degrees are available in the original paper. Theoretical MTF for a 4 mm circular pupil at 632.8 nm also shown. Data from Navarro et al., 1993.

173Navarro, R. Moreno, E. & Dorronsoro, C. (1998) Monochromatic aberrations and point-spread functions of the human eye across the visual field. J. Opt. Soc. Am. A. vol. 15, no. 9, pp 2522-2529

2.4.5.3 Gross distortion in the human eye

The human eye is highly anamorphic; the image it forms is greatly distorted from a geometrical imaging point of view. Figure 2.4.5-7 provides a caricature of the situation that is quite different from that of Guyton174. The scale, or magnification, of the eye varies nearly 4:1 with field angle. This would present a significant data processing task for the brain if the system were used in an imaging mode. However, it is shown elsewhere herein that the human eye, and that of all animals, is fundamentally a change detector with respect to time. In the chordates, this capability is combined with the continuous fine motion of the eye (tremor), also with respect to object plane position, to provide spatial contrast detection. What is important is that the spatial contrast sensitivity in the object plane is nearly constant even if the spatial distortion is high. In the operating environment, the eye still detects a sudden contrast change, due to either spatial contrast change (due to motion) or an absolute brightness change (in the absence of motion), and signals the brain; the eye rotates rapidly to bring the object of concern onto the fixation point of the retina for closer examination.

2.4.6 The Optical System at and beyond the Petzval Surface

The optical system beyond the Petzval surface is key to the high performance of the visual system compared to the equivalent performance of a photographic film-based system. Photographic film technology is only slowly improving and is not expected to approach the absorption efficiency of the animal photoreceptors, typically 90%. Film is currently in the range of 1-2%. It lacks the auxiliary optical structures used in the photoreceptor cells.

Figure 2.4.5-7 Caricature of the pattern imaged on the retina as a function of object size in object space. The inner cross corresponds roughly to the diameter of the fovea. Only the outline of these shapes is transmitted to the brain, along with a color descriptor applying to each side of each line.

The depth of focus calculated above illustrates two things;

+ the requirement that the Petzval surface formed by the entrance aperture of the photoreceptors of the retina be carefully matched to the image surface of the optical system if maximum performance is to be achieved

+ the necessity of absorbing the incident light within the depth of focus or of arranging for the light to be captured within the depth of focus and to be ‘funneled’ to the point of light absorption without serious attenuation.

A full optical ray trace of any eye has not been found in the literature. It needs to be performed. It must recognize the immersion aspect of the system, the variable index properties of the vernier lens, the field plate, and the possible role of the macula lutea as a precision field flattener. The macula lutea may also involve a variable index of refraction with radius from the fixation point.

As alluded to at many locations in the literature, the outer segments of the photoreceptors are long thin cylindrical features that lend themselves to providing a ‘waveguide’ function. However, to do this efficiently, they must exhibit certain features:

1. the inner material of the waveguide must have a higher index of refraction than the outer or

174Guyton, A. (1976) Textbook of Medical Physiology. 5th ed. Philadelphia PA: W. B. Saunders, pg. 816

surrounding material.

2. to avoid serious attenuation, the diameter of the waveguide must exceed one wavelength of the radiation to be transmitted throughout its length.

3. For maximum performance, the waveguide should not vary in diameter or index of refraction (unless variations in these two parameters are coordinated in a complex way).

4. The numerical aperture of the optical rays entering the waveguide must be matched to the diameter and the index of refraction of the waveguide.

Snyder, writing in Snyder & Menzel, has discussed some of these first order considerations from a graphical and mathematical perspective175. The analysis was extended by papers of Snyder & Pask176, and Stacey & Pask177,178. However, their analysis also treats the inner segment as a waveguide and discounts any lens action on the part of the ellipsoid. They were well aware of the steps they took to idealize the configuration of the photoreceptor cell for purposes of their analyses. They also use the term coherent to refer to the spatial coherence (collimation), not the spectral coherence, of the light.

Figure 2.4.6-1 illustrates four distinctly different potential configurations of the Petzval and matching focal plane interface. (a) shows the configuration used by Snyder, etc. The assumption is that light enters the photoreceptor cell at the end of the inner segment farthest from the outer segment. The recent micrographs of Sections 4.2 & 4.3 provide evidence suggesting that the inner segment does not act as a waveguide feeding the outer segment in most (if not all) of the retina. However, their analyses are applicable to the outer segment acting alone as a waveguide. (b) shows a configuration with the ellipsoid, with its significantly different index of refraction from the rest of the inner segment, acting as a field lens for its associated outer segment. Where active, the ellipsoid can act as a collimating lens for light impinging on the outer segment.
(c) shows the inner segment not participating at all in the optical schematic. In this case, the extrusion zone where the disks are formed may act as the end surface of the outer segment waveguide, or as a lens in contact with the first surface of the waveguide. (d) shows the configuration of (c) for the off-axis (or general) condition. The angle between the centerline of the inner segment and the centerline of the incident light exceeds 40 degrees for photoreceptors located at the equator of the eye. The angle can be even larger for photoreceptors in the extreme periphery. In this case, it is obvious that the inner segment, and particularly the extreme end of the inner segment, plays no role in the capture of the incident light.

The precise point of focus of the image plane is not shown in detail in this figure. In both (c) and (d), the extrusion cups (labeled parabolic surfaces) may act as collimating lenses for light entering the outer segment. Note, the extruded disks of the outer segment are not confined within a membrane. The transition in index of refraction between the disks and the inter-photoreceptor matrix (IPM) is a diffuse one.

It is interesting, regarding point 4, that the F/# is quite ‘fast’ in the eye and this makes it very difficult for a waveguide to capture all of the photons efficiently. The discussion of the Stiles-Crawford Effect in Section 17.3.7 involves this fact. It is also a fact that there are two structures associated with the photoreceptor cell that could aid in photon absorption by the OS. In many animal eyes there is a small ellipsoid near the junction of the inner and outer segments of the photoreceptors. This ellipsoid is in the exact optimum position to act as a collimating lens at the entrance to the photoreceptor waveguide. This would change the radiation bundle to a very ‘slow’ F/# and aid photon capture immensely. It could be considered a second individual field lens placed directly in front of its own photoreceptor. These small spherical lenses would form a two dimensional lenticular lens array as shown in Figure 2.4.6-1.

The optical bundle shown at the upper left in this figure corresponds to an F/2 optical system. The point of crossing of the two dashed lines represents the image plane of the optical system in the absence of diffraction. In the real case, the optical bundle would never reduce to a zero diameter at the Petzval surface. In the absence of an ellipsoid, the extrusion cup of the Inner Segment could also act as a condensing lens if the material in the cup had a higher index of refraction than the material of the IS itself. This change would only require the image surface of the optical system be moved slightly to the right to preserve optimum focus. This situation is shown for the lower

optical bundle.

Points 3 and 2 are easily met if the outer segments are cylindrical after extrusion by the mechanisms of the inner segments. These points are not easily satisfied by a conical shaped structure, particularly if the diameter of the cone decreases below a few wavelengths of the light at any point.

175Snyder, A. (1975) Photoreceptor optics: Theoretical principles. In Photoreceptor Optics, Snyder, A. & Menzel, R. ed. NY: Springer-Verlag, Section A.2
176Snyder, A. & Pask, C. (1973) The Stiles-Crawford Effect–explanation and consequences. Vision Res. vol. 13, pp 1115-1137
177Stacey, A. & Pask, C. (1994) Spatial frequency response of a photoreceptor and its wavelength dependence. I. Coherent sources. J. Opt. Soc. Am. A. vol. 11, no. 4, pp 1193-1198
178Stacey, A. & Pask, C. (1994) Spatial frequency response of a photoreceptor and its wavelength dependence. II. Partially coherent sources. J. Opt. Soc. Am. A. vol. 14, no. 11, pp 2893-2900

Figure 2.4.6-1 Alternate optical configurations applicable to the retina. The chromophoric material extends five times farther to the right than shown. Optical bundle (a) is positioned for the case of the inner segment acting as a waveguide, transitioning in the vicinity of the ellipsoid to the outer segment, also acting as a waveguide. For this case, the inner segment also extends five times farther to the left than shown. Optical bundle (b) is positioned for the ellipsoidal case; a lenticular array of small lenses lies in front of each photoreceptor Outer Segment. Each lens may be a single element, the ellipsoid or the parabolic element formed by the extrusion cup, or a two element system consisting of the ellipsoid and associated parabolic surface. (c) is positioned for the parabolic case. In the absence of either field lens, much of the light incident on the Outer Segments would follow the dashed lines. (d) shows the off-axis condition with the light path at an angle to the associated inner segment. The contrast of the resulting image would be significantly reduced.

It must be recognized that the longer wavelength of red light might be expected to lead to poorer performance of waveguides of a given diameter. However, this is apparently not the case, as will be shown later where the human eye is seen to perform according to theory out to wavelengths beyond 1.0 microns without any requirement to account for a “waveguide cutoff” effect. It must also be recognized that the outer segments will appear to be “resistively loaded” waveguides; i.e., from an antenna theory point of view, the photon absorptive material inside the waveguide is equally spaced and separated by a material of a lower index of refraction. The combination will act similarly to a resistive filter. On the other hand, abrupt changes in diameter or in the average index of refraction over multiple wavelength intervals will appear as “reactive loads” and generally hurt overall performance.
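The absence of a waveguide cutoff out to 1.0 micron can be made plausible with the normalized frequency (V number) of a cylindrical dielectric waveguide, V = (πd/λ)·NA, where NA = √(n₁² − n₂²). The indices used below are assumed, illustrative values, not measured properties of an outer segment or the IPM; under these assumptions V remains above the second-mode cutoff of 2.405 even at 1.0 micron, so guiding does not collapse at the red end of the spectrum.

```python
import math

def v_number(diameter_um, wavelength_um, n_core, n_clad):
    """Normalized frequency (V number) of a cylindrical dielectric waveguide.
    The fundamental mode has no cutoff; guiding weakens as V becomes small."""
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
    return math.pi * diameter_um / wavelength_um * numerical_aperture

# Hypothetical indices for an outer segment (core) and the IPM (cladding);
# illustrative assumptions only, not measured values.
for wl_um in (0.5, 0.7, 1.0):
    print(f"{wl_um} um: V = {v_number(2.0, wl_um, 1.41, 1.34):.2f}")
```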

Point 1. is easily met if the individual outer segments are separated by an interstitial medium that has a lower index of refraction than the segment over most of the length of the segment. Here, we are not talking about the vitreous humor that fills the optical cavity of the eye but the fluid between the outer segments and the retinal pigment epithelium or RPE, known as the inter-photoreceptor matrix, IPM. This material must have a lower index than the average for the Outer Segments because the OS consists of a mixture of this material and the very high index chromophoric material in liquid crystalline form.

The lowest photoreceptor in the figure shows the inner segment out of alignment with the outer segment as it is found in all normal eyes. The outer segments are aligned to point to the pupil for that part of the retina while the inner segments are always perpendicular to the inner surface of the retina.

It is now appropriate to discuss how the waveguide should be properly terminated. Without a proper termination, any energy reaching the end of a waveguide will be reflected back on itself resulting in “standing waves” in the waveguide. This may cause a non-uniform absorption characteristic for the overall device in the wavelength region of interest. An optical waveguide may be terminated in one of three ways:

1. By being left open to the surrounding low index of refraction medium

2. By interfacing with a higher index medium

3. By interfacing with a medium of essentially the same index.

Methods one and two lead to a reflection of the remaining energy back through the waveguide and could result in the red eye effect seen in most animals, although this effect would most likely lead to a “rainbow” effect. The red eye effect will be examined further later.
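The magnitude of the reflection at the termination can be gauged with the normal-incidence Fresnel reflectance, R = ((n₁ − n₂)/(n₁ + n₂))². The indices below are assumed, illustrative values only; the point of the sketch is that an index-matched termination reflects essentially nothing, while any index step sends a small fraction of the remaining energy back up the waveguide, where it can form the standing waves described above.

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence Fresnel power reflectance at an index step."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Illustrative (assumed) indices: outer segment vs. a lower-index medium,
# and outer segment vs. an index-matched termination.
print(f"open end: {fresnel_reflectance(1.41, 1.34):.1e}")
print(f"matched:  {fresnel_reflectance(1.41, 1.41):.1e}")
```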

The third situation avoids both the rainbow effect and selective distortion of the absorption spectrum. It is best achieved by interfacing to a material with an index similar to the outer segments themselves. It appears the retinal pigment epithelium (RPE) is a perfect candidate for this termination device. In fact, the way the RPE encloses the end of the outer segment and has an index similar to the outer segment makes it an ideal termination device from a radiation physics point of view. It is so good, it is safe to say that any outer segment that does not terminate at the RPE (or has a conical shape that reaches a diameter of less than a wavelength of light) is probably not optimally functional. It may be juvenile, but it is not optimally functional. As will be discussed in the section on phagocytosis, the RPE contains a considerable amount of the actual material from the outer segment.

2.4.6.1 The role of the spatial properties of the photoreceptor cell mosaic

Under the assumption that the biological visual system is fundamentally an imager, the statistical parameters of the photoreceptor mosaic are significant in determining the MTF of the physiological optical system of the complex eye (in both Mollusca and Chordata). Since the visual system is known to be blind in the absence of motion between an object in object space and the line of fixation of the stabilized eye, the complex eye is clearly not an imager. Therefore, the mosaic statistical properties are not significant to the overall MTF.

On the other hand, the size and shape of the individual photoreceptor cell does play a role in determining the MTF of the system operating as a change detector. The MTF due to the photoreceptor cell is given by the Fourier transform of the integral of the edge response of a photoreceptor cell as it intersects a high contrast knife edge at a 90 degree angle. This factor appears to be largely negligible compared to the other elements of the physiological optical system. The dimensions of these cells will be discussed in Section 3.2.2.1.

It is noted that considering the human eye as an imager has two major drawbacks (besides the fact it cannot image in the absence of motion between the scene and the line of fixation). First, computing the limiting MTF of the eye based on the parameters of the mosaic and the Nyquist criterion inevitably leads to a calculated limiting resolution that is routinely exceeded in laboratory measurements. This leads to the requirement for the questionable concept of super-Nyquist resolution179. No theoretical explanation of this concept has been presented by its proponents.
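The claim that the photoreceptor aperture contribution is largely negligible can be illustrated by idealizing the cell aperture as a uniform square, whose MTF is |sinc(dν)|. This idealization, the 290 micron-per-degree retinal scale, and the 2 micron aperture width are all assumed, schematic-eye values, not the author's measured parameters.

```python
import math

UM_PER_DEGREE = 290.0  # assumed retinal scale of a schematic human eye

def aperture_mtf(nu_cpd, aperture_um):
    """MTF of a uniform sampling aperture idealized as a square of the
    given width: |sinc(d * nu)|, with d converted to degrees."""
    d_deg = aperture_um / UM_PER_DEGREE
    x = math.pi * d_deg * nu_cpd
    return 1.0 if x == 0.0 else abs(math.sin(x) / x)

# Near the ~60 cycles/degree optical limit discussed earlier:
print(f"{aperture_mtf(60.0, 2.0):.2f}")  # ~0.74; modest next to the optics' roll-off
```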

179Williams, D. Sekiguchi, N. & Brainard, D. (1993) Color, contrast sensitivity, and the cone mosaic. Proc. Natl. Acad. Sci. USA, vol. 90, pp 9770-9777

Second, no concept can be found in the literature explaining how the individual photoreceptors of the retina generate a time varying signal at their output that incorporates the spatial frequency performance of the eye.

2.4.6.2 The optical elements of the Inner Segment

From the morphological perspective, it is difficult to adopt the position of the Snyder & Stacey team with regard to treating the inner segment as a waveguide leading to the outer segment. The bulk of the inner segment is frequently smaller in diameter than the outer segment. In this case, it would constitute an additional spectrally selective short-wavelength-pass filter in front of the outer segment. Based on the recent data presented in Chapter 4 showing that the ellipsoid can act as a relatively spherical lens at the wavelengths of light, it is likely that this is the primary optical role of the inner segment. In the absence of such an effective lens, the extrusion cup between the inner and outer segment may exhibit optical properties because of the relatively high index of refraction of some of its contents.

There is considerable discussion but very little data on the spectral performance of the ellipsoids (frequently labeled oil droplets) of the inner segments. Wolbarsht has shown the width of the absorption spectrum for these ellipsoids in turtle180. However, his figure is a replot of Liebman & Granda181 and the vertical scale is a relative one. His text does suggest the low absolute optical density of many of these ellipsoids. The width of the spectrum suggests the presence of multiple absorbers, not unlike the case of the macular absorption (also shown in the same paper) and the lens in the ultraviolet. The ellipsoids need not be colored to act as a lens.

2.4.6.3 The optical elements of the Outer Segment

Teasing adequate detailed information from the literature on the exact characteristics of the Outer Segments of any given retina, as they vary over the spatial extent of the full retina, is difficult. The goal would be to define the size and orientation of each photoreceptor by position in the field and whether there is an associated optical mechanism in the associated IS. If there is such a mechanism, its description would be useful: whether there is an ellipsoid or paraboloid present in the associated inner segment and the “nature” (chemical description, index of refraction, size, color, etc.) of these elements.

The photoreceptors in this foveal region are frequently described as having a diameter of nearly 2.0 microns and to be located on 2 micron centers. In addition, unlike other areas of the retina, an absolute minimum amount of circuitry overlays the fovea. Miller182 says "no nerve fiber layer, ganglion cell layer, inner plexiform layer, or inner nuclear layer is present. Only the outer plexiform and photoreceptor cells [sic] are present in the central fovea."
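For reference, the Nyquist frequency implied by the 2 micron centers quoted above can be sketched as follows. The 290 micron-per-degree retinal scale is an assumed, schematic-eye value; hexagonal packing or slightly larger spacings yield lower figures, such as the 56 cycles per degree computed by Williams et al. under their mosaic model.

```python
UM_PER_DEGREE = 290.0  # assumed retinal scale of a schematic human eye

def nyquist_cpd(center_spacing_um):
    """Nyquist frequency of a row of photoreceptors on the given centers."""
    spacing_deg = center_spacing_um / UM_PER_DEGREE
    return 1.0 / (2.0 * spacing_deg)

print(f"{nyquist_cpd(2.0):.1f}")  # ~72.5 cycles/degree for 2-micron centers
```

Laboratory measurements exceeding such a limit are the basis of the "super-Nyquist resolution" controversy discussed earlier.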

Figure 2.4.6-2 illustrates the shape and spacing of the disks of the photoreceptors of the retina in some detail. Note the considerable similarity in the outer segments in this figure. The optically sensitive part of each of the samples is cylindrical in shape; none is conical. Wald (1965) stressed this fact when he said "It has long been recognized that the outer segments of foveal cones are rod-shaped; they are attenuated cylinders about 50 microns long and about 1-2 microns wide." Continuing, he said "In primate cones, however, both parafoveal (Cohen, 1961) and foveal (Dowling, 1965), the layers are flattened sacs, as in rods, lacking only the specialized outer rim." The actual imagery of Outer Segments of a rhesus monkey from Dowling is shown in this figure. Whether there is a rim on both the cones and the rods will be left for the reader to determine. Dowling discusses the difference in spacing between the disks in the rod and cone as if it were statistically significant. He also labels the wavy vertical line as a membrane enclosing the Outer Segment. There is no way of demonstrating that this line is associated with the Outer Segment instead of the extrusion cup of the Inner Segment. If it should be associated with the cup, the spacing of the disks may not yet have achieved its final value. Note the spacing is slightly wider at the bottom of the cone frame and slightly wider at the top of the rod frame. No mention of the original location or the orientation of the tissue samples appeared in the original article. No information about the location or orientation of the chromophores associated with the disks is available at this magnification.

180Wolbarsht, M. (1976) The function of intraocular color filters. Fed. Proc. vol. 35, no. 1, pp 44-50
181Liebman, P. & Granda, A. Microspectrophotometric measurements of visual pigments in two species of turtle, Pseudemys scripta and Chelonia mydas. Vision Res. vol. 11, pp 105-114
182Miller, D. (1991) Optics & refraction. Vol. 1, ed. Podos, S. & Yanoff, M. NY: Gower Medical Publishing

The fact that the diameter of the outer segments is considerably smaller than the length of the segment, and that the depth of focus of the image plane is so small, does suggest that the photoreceptors act as light pipes, so that the light captured by the individual photodetector can be completely absorbed over a much larger distance than would otherwise be possible.

[xxx indent] It is hoped that by the end of this document, the use of the terms rods and cones will be properly circumscribed to their correct usage. The terms have only a colloquial meaning when used to describe the morphology of the photoreceptor cells. They were first used in the literature to describe the Outer Segments of the cell. This usage was found to be less than dependable. The terms were later used to describe the shape of the Inner Segments, i.e., the non-photosensitive portion, of the photoreceptor cells. This has also been found to be a poor descriptor of IS shapes. There is no physical dichotomy among the OS’s or the IS’s justifying such descriptors. See Section 3.1.5.1.

There is a continuum of inner segment shapes ranging from long thin ones to short chubby ones. The outer segments of chordates are consistently shown in the electron-microscopy literature as cylindrical with an aspect ratio of typically 25:1.

The terms have no consistent usage in discussing the visual performance of the eye. They do not relate to the spectral performance of the photoreceptors.

Figure 2.4.6-2 CR Details of the Outer Segments in the eye of a rhesus monkey. (a), foveal cone and (b), foveal rod from the same preparation. The arrow in lower left points to the circular end of a single disk. The vertical line on the left has not been shown to be a membrane enclosing the disk stack. Magnification, 162,000 with scale added in the original art. Art from Dowling (1965)

2.4.6.4 Location of the image surface relative to the photoreceptor cells

Williams has recently collected data on the location of the image surface (objective plane in his notation) relative to the outer limiting membrane (elm in his notation) and expressed it in terms of a dioptric error. The data in Figure 2.4.6-3 is not yet sufficient to expressly eliminate the inner segment as a functional waveguide, but it is close. Williams shows the best estimate of the Petzval surface as between 75 microns (foveal) and 50 microns (4-8 degrees from the fixation point) from the pigment epithelium layer. The 95% confidence level is ± 30 microns. The longer foveal distance is presumably because of the longer length of the outer segments in the fovea. These numbers could still accommodate an ellipsoid acting as a lens.

2.4.6.5 Modeling of the optics beyond the Petzval Surface

There are numerous reports of colored ellipsoidal elements in the optical path of animals between the exit pupil and the outer segments of the photoreceptor cells. However, these reports normally do not include human eyes. As shown in [Figure 2.2.2-6], the index of refraction map of the human photoreceptor suggests there is an ellipsoid associated with the human eye (even if it appears to be transparent). A proper model of the human optical system must account for this possibility and evaluate the situation in the laboratory. In the one case, the ellipsoid within the inner segment lies beyond the image surface and acts as a collimating element funneling the light into the outer segment of the same photoreceptor cell. This case would place an entirely different requirement on the front surface of the outer segment when viewed as a light pipe. If there is no effective ellipsoid in the human eye (even a transparent one), the image surface would be expected to lie beyond the entry surface of the outer segments. In this case, the optical acceptance angle of the outer segment may be a significant factor in the overall sensitivity of the visual system.

Figure 2.4.6-3 Location of the Petzval surface relative to the elements of the retina. From Williams, et al. 1994.

The appendix to Snyder & Pask provides an analysis of the acceptance angle and propagation capability of a small diameter waveguide. They used indices of refraction similar to those shown in The Standard Eye of this work. However, they did not recognize the presence of the higher index of the ellipsoid. Nor did they address the local indices within the outer segment required by a more detailed analysis of the absorption process.

Figure 2.4.6-4 shows their solution to the equation

ν = (π·d/λ)·(n² – ns²)^½,   n = index of the interior of the guide, ns = 1.340   Eq. 2.4.6-1

For a nominal 2.0 micron diameter outer segment and an average index for the outer segment of 1.41, ν = 6.91 & 3.95 for 400 nm and 700 nm respectively. Only one mode is sustained at wavelengths beyond 1.2 microns based on these values. This number may be related to the discontinuity in the infrared luminous efficiency data of Sliney addressed in Section 17.2.2.5.
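The quoted values can be checked directly from Eq. 2.4.6-1. The following sketch is an illustration only; the function name is mine, and the single-mode cutoff value ν = 2.405 for a cylindrical guide is standard waveguide theory rather than a figure from the text:

```python
import math

def waveguide_nu(d_um, wavelength_um, n_guide, n_surround=1.340):
    """Dimensionless waveguide parameter of Eq. 2.4.6-1:
    nu = (pi * d / lambda) * sqrt(n^2 - ns^2)."""
    return (math.pi * d_um / wavelength_um) * math.sqrt(n_guide**2 - n_surround**2)

# Nominal 2.0 micron outer segment, average index 1.41 (values from the text)
print(round(waveguide_nu(2.0, 0.400, 1.41), 2))  # ~6.89 at 400 nm (text: 6.91)
print(round(waveguide_nu(2.0, 0.700, 1.41), 2))  # ~3.94 at 700 nm (text: 3.95)

# Wavelength at which nu falls to the 2.405 single-mode cutoff
cutoff_um = math.pi * 2.0 * math.sqrt(1.41**2 - 1.340**2) / 2.405
print(round(cutoff_um, 2))  # ~1.15 microns, roughly the 1.2 micron figure quoted
```

The small discrepancies from the printed 6.91 & 3.95 presumably reflect rounding in the original calculation.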

Within the above range of four to seven, for the outer segment used here, the outer segments are less spectrally selective than in the model of the Snyder, Stacey & Pask group. The outer segments of the photoreceptors are able to support multiple modes associated with all wavelengths of visible light. See Section 4.3.4.2.1 for a further discussion.

[xxx Macular Sparing]

2.4.6.6 Measurements of the optics beyond the Petzval Surface

Because of the limited amount of modeling of the optics beyond the Petzval Surface, the available measured data cannot be interpreted with confidence.

Figure 2.4.6-4 The fraction of light accepted by a waveguide as a function of the dimensionless parameter ν. Their diameters ranged between ν = 2 & 5. From Snyder & Pask, 1973.

Burns, et al. have recently provided new measurements of the photon flux returned to the exit pupil (aperture stop as viewed from the inside of the eye)183. Unfortunately, the data are integrated functions and the analysis assumes the inner segment plays no role in the optical performance of the retina. No model of the photoreceptor cell was provided to support their analyses. These analyses are in conflict with the available detailed data (see Section 2.2.2.3).

In a related article, Burns, et al. have employed a bleaching technique to change the apparent sensitivity of the outer segments184. Unfortunately, their discussion of bleaching is largely conceptual; they do not specify the degree of bleaching employed, nor the change in sensitivity achieved. They also do not specify the precise time between the end of the light adaptation process and the measurement cycle. As seen in Chapter 17, the recovery process can change the sensitivity of the eye by more than a factor of ten in a matter of seconds.

183Burns, S. Wu, S. He, J. & Eisner, A. (1997) Variations in photoreceptor directionality across the central retina. J. Opt. Soc. Am. A. vol. 14, no. 9, pp 2033-2040
184Burns, S. Wu, S. Delori, F. & Eisner, A. (1995) Direct measurement of human-cone-photoreceptor alignment. J. Opt. Soc. Am. A. vol. 12, no. 10, pp 2329-2338

2.4.7 Summary of the Overall Optical System

The following table provides the best available values for the index of refraction found in the various optical elements of the animal eye.

Element                        Index    Notes
air                            1.00xx   Terrestrial animals
water                          1.33x    Aquatic animals (@ 1.0 atmos, 25 degrees centigrade)
corneal layer                  xxx
anterior humor                 xxx
lens                           xxx
vitreous humor                 xxx
macula lutea (if present)

The following values generally agree with Stacey & Pask185.

retinal plexiform              xxx
photoreceptors
  inner segment cell body      1.353
  paraboloid                   1.38
  ellipsoid                    1.39
  outer segment                1.43     Minimum average value; data from partially bleached material. Index within chromophoric material substantially higher
interstitial space             1.340
RPE                            xxx
tapetum (if present)

Measurements for the elements of the photoreceptor cells are necessarily the least accurate because of the extremely small sizes involved, the frequent presence of extraneous cell material in the measurement path and the limited accuracy of the method used. The most common method involves immersing the specimen in materials of known index to observe whether the specimen becomes invisible, and possibly interpolating the results after a series of tests.

Limiting further discussion to the human eye, the optical system consists of the cornea, the lens, the field lens and the re-collimating properties of the IS and OS. These elements are arranged with the most important area of the retina, the foveola, at an off-axis position. The re-collimating mechanism of the IS, consisting of the ellipsoid immersed in a lower optical index material, alleviates any directional sensitivity of the physical structure of the OS disk stack. As a result, the retina is insensitive to the direction of the incident illumination except in certain diseases where the photoreceptors are seriously misaligned relative to the Poynting vector of the incident radiation.

The cornea and lens are highly elliptical, non-spherical elements of nonhomogeneous composition, with both a radial and an axial variation in index of refraction. They exhibit a back focal length that is a strong variable in order to accommodate the curvature of the retinal surface. As a result, the characteristics of the Airy image formed by the lens group are highly dependent on the location of the image on the retina. The peak amplitude of the Airy image, relative to an ideal, is reflected in the peak brightness sensitivity reported by a subject. When the Airy image is formed on the off-axis foveola by using a de-centered auxiliary aperture near the pupil, the reduction in Airy image height is reported as the original Stiles-Crawford Effect. This reduction can be correlated with a similar loss in contrast sensitivity, a measure of peak signal to average background illumination.

The degradation in the width of the Airy image with position on the retina can be correlated with the spatial resolution profile by using the Fourier Transform. The Fourier Transform is required instead of the Laplace Transform because the Airy image is not symmetrical in the cases of interest. In addition, the use of an auxiliary aperture near the pupil introduces additional asymmetry.
This asymmetry frequently causes an additional loss in frequency response and image contrast in the overall system. Although the asymmetry is not shown, probably because of the lack of sensitivity in the instrumentation, the data of Weale (discussed in Section 2.4.2.3) are illustrative.

The space frame structure of the disks in the OS is critical to the proper operation of the total optical system. The spacing of the disks prevents a large change in the effective index of refraction at the entrance aperture of the OS. In the absence of the space frame, the chromophoric material would exhibit a very large difference in index of refraction relative to the IS. This would result in the surface of the retina at the entrance to the disks appearing as a highly reflective dielectric mirror. A loss of sensitivity on the order of 80% could be expected under such a condition relative to the sensitivity actually achieved.

The signal manipulation characteristics of the neural layers of the retina, in addition to the physical optical elements, play a significant role in the overall contrast sensitivity and spatial resolution perceived by the individual as a function of the location on the retina. Although the foveola is not located on the optical axis of the physiological optical system, it constrains the peak performance of the overall optical system.

185Stacey, A. & Pask, C. (1997) Op. Cit.
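The dielectric-mirror argument can be illustrated with the normal-incidence Fresnel reflectance formula. This is only a sketch: the chromophore index of 2.0 below is a placeholder assumption (the text says only "substantially higher"), and the stacked-interface behavior invoked in the comments is standard thin-film theory, not a value from this work.

```python
def fresnel_reflectance(n1, n2):
    """Normal-incidence power reflectance at a single planar dielectric interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# IS cell body (1.353) against the average outer segment index (1.43): a tiny step
print(fresnel_reflectance(1.353, 1.43))   # ~0.0008 per interface

# Against a hypothetical undiluted chromophore index of 2.0 (assumed, for illustration)
print(fresnel_reflectance(1.353, 2.0))    # ~0.037 per interface
```

Even a few percent per interface, repeated over the many disk surfaces in an OS and added coherently at quarter-wave spacing, would yield a reflectance approaching unity; the space-frame spacing of the disks keeps the effective index step small.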

2.4.7.1 The optical parameters of the human visual system

Stenstrom has provided a comprehensive study of the properties of the human eye186. This includes a table of the major dimensions of the human eye and the three-sigma variation in those dimensions. He also provides a longer list of parameters related to Gullstrand’s exact schematic eye. The range of accommodation as a function of age is also provided. He offers one of the few discussions of the brightness of an image relative to the luminance of an object. This discussion surfaces the problem with the definition of the Troland. He does not address the question of the gradients in the index of refraction of the optical elements.

LeGrand & El Hage have also provided a comprehensive listing of many of the “standard eyes” discussed above187. However, only the paraxial condition is addressed. Bennett & Rabbetts have provided a more mathematical, but still paraxial, treatment of vision188. Within the last decade, the optical science community has taken an interest in vision. A special issue of the “A” series of the Journal of the Optical Society of America has provided a gold mine of information on the physiological optics of the human189. The terminology of the vision researcher is seen to differ considerably from that of the optician because the human eye is a complex thick lens system with a “between the lens pupil.” Most vision researchers confuse an external entrance aperture, or entrance pupil, with the actual pupil, or aperture stop, of the optical system. This leads to inaccuracies. It generally leads to major errors when discussing off-axis performance.

2.4.7.2 The acuity of the human visual system

Defining the spatial performance of the visual system has long been an awkward field in the research environment compared to the clinical environment. Although the measurement has long been standardized in the clinical environment, its consistency has been mainly due to the protocol used; the precision of measurement has not been at the research level, even for research-oriented investigators. The problems have been basically three. First, the techniques used by different investigators have not been standardized. Second, the effect of differences in the stimulus used has not been appreciated. Third, lacking a model of the process, control of the required parameters has not been achieved. The failure to recognize that vision is a photon-flux-sensitive system instead of an energy-sensitive system has also introduced a problem, particularly with respect to calibration.
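The distinction between photon-flux and energy calibration can be made concrete. The sketch below is standard radiometry (the wavelengths and power level are chosen only for illustration); it converts a power reading from an energy-sensitive meter into a photon flux:

```python
# Photon flux N = P * lambda / (h * c): equal energy does NOT mean equal photon count.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s

def photon_flux(power_watts, wavelength_nm):
    """Photons per second delivered by a monochromatic beam of the given power."""
    return power_watts * (wavelength_nm * 1e-9) / (H * C)

# One microwatt at 400 nm versus 700 nm: the same energy-meter reading,
# but the long-wavelength beam delivers 75% more photons.
print(photon_flux(1e-6, 700) / photon_flux(1e-6, 400))  # 1.75
```

This is why two sources that read identically on an energy-sensitive meter can stimulate the spectral channels of vision quite differently.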

The fact that the data is highly dependent on the spatial form of the stimulus has not been generally recognized, causing considerable inconsistency in the literature. The dependence arises from the importance of the image correlation mechanisms within the visual system. The same results cannot be expected from using a Snellen Eye Chart, a series of parallel bars (square wave and sine wave give different results), or Gabor patches. Because of this, the Glossary contains three distinctly different definitions of the general term acuity. The simple term acuity is reserved for the clinical evaluation using a Snellen Chart. For investigations using bars, the preferred expression is grating acuity or vernier acuity. Alternately, spatial resolution is acceptable. However, in each of these cases, the specific aspect ratio of the stimulus pattern should be specified, e.g., the pitch of the bars and the length of the bars relative to the pitch.

186Stenstrom, S. (1964) Optics and the eye. Wash. DC: Butterworths. pg 104
187LeGrand, Y. & El Hage, S. (1980) Physiological Optics. NY: Springer-Verlag
188Bennett, A. & Rabbetts, R. (1984) Clinical Visual Optics, London: Butterworths
189— (1998) Measurement and correction of the optical aberrations of the human eye. J. Opt. Soc. Am. A, vol. 15, no. 9

As will be developed in Section 17.6.3, the maximum spatial performance is a direct function of the total photon flux per unit area applied to the retina (not the pupil, in precise work) in each of the spectral bands of vision. This flux level affects both the level of the peak performance and the maximum achievable spatial resolution. The contrast of the stimulus, as well as the adaptation level, also directly affect the performance of the system.

As will become obvious later, an incandescent light source, a TV monitor and a laser give considerably different results when they are calibrated using an energy-sensitive meter. Furthermore, describing the source as white, red or blue is entirely inadequate; the spectral performance of the source as it affects each spectral channel of vision under test must be considered. The problem of using TV monitors is particularly awkward since different monitors use significantly different “blue” phosphors. The NTSC (USA) and PAL (EU) standards call for different phosphors. The phosphors used in computer monitors are at the manufacturer’s discretion. Thus, specifying the model number of a monitor is completely insufficient. The phosphor type must be specified, and consulting the phosphor response information on the manufacturer’s website is desirable. Continental is particularly good at listing all of the phosphors they use. The data is indexed by model number.

The acuity of the visual system is not often presented in scientific nomenclature. A good figure appeared long ago. Figure 2.4.7-1 dates from 1942190. It presents the acuity of the eye in both red and blue light separately using a Landolt broken ring. The figure and the details of this test fixture are both shown in Pirenne191. This author does not support the interpretation of this figure provided in Pirenne. [xxx clean up and take out solid lines or de-emphasize. provide more discussion of figure]

Figure 2.4.7-1 Human visual acuity in red and blue light. The test object was a Landolt broken ring. The lower data (ordinate to the right) were obtained using light from the red end, the upper data (ordinates at the left) using light from the blue end of the spectrum. The filled circles represent measurements made within the retinal periphery in a subjectively colorless field. The half-filled circles are the measurements made with the parafovea in a subjectively colored field. All other circles are measurements made with the fovea. From Shlaer, Smith & Chase, 1942.

190Shlaer, Smith & Chase (1942) J. gen. Physiol. vol. 25, pg. 553
191Pirenne, M. (1967) Vision and the eye. London: Chapman & Hall. pg. 135

The data on the acuity of the visual system as a function of angle from the fixation point has been collected and discussed by Miller & Newman192. Figure 2.4.7-2 shows this data193. Note the considerable variation in the curves provided by different investigators. 20/50 versus 20/100 is a factor of two in acuity, assuming the intensity and contrast of the stimulus are maintained. Note also that the data is limited to less than 15 degrees eccentricity. The human eye performs out to angles beyond 90 degrees. To represent this performance, a logarithmic vertical scale would be required. Miller & Newman provided no discussion of the stimulus intensity or the color temperature of the sources used. They did provide a useful conversion table among the many notations used when discussing acuity (page 160). The common expression for acuity, 20/20, corresponds to a grating acuity of 30 cycles per degree.
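The conversion implicit in that last statement can be sketched as follows. The 30 cycles/degree equivalence at 20/20 is the figure given in the text; the linear scaling with the Snellen fraction is an assumed idealization that holds only for grating-equivalent acuity:

```python
def snellen_to_grating_cpd(numerator, denominator):
    """Grating-equivalent acuity in cycles/degree from a Snellen fraction,
    taking 20/20 as 30 cycles/degree."""
    return 30.0 * numerator / denominator

print(snellen_to_grating_cpd(20, 20))   # 30.0 cycles/degree
print(snellen_to_grating_cpd(20, 50))   # 12.0
print(snellen_to_grating_cpd(20, 100))  # 6.0 -- half of 20/50, the factor of two noted above
```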

192Miller, N. & Newman, N. (1998) Walsh and Hoyt’s Clinical Neuro-Ophthalmology, 5th ed. vol. one. Baltimore, MD: Williams & Wilkins pg 164
193Randall, H. Brown, J & Sloan, L. (1966) Peripheral visual acuity. Arch Ophthalmol. vol. 75, pp 500-504

Figure 2.4.7-2 DUPL Acuity as a function of eccentricity from the line of sight. From Randall, et al. 1966.

Similar data has been collected within the psychophysical community. Some of this is shown in Figure 2.4.7-3. Note the coarseness of the measurements relative to the horizontal axis. Anderson, et al. collected the data shown by the data points. Their test set employed a commercial high resolution monitor. The spectrum of the phosphor of this monitor is shown only in caricature in their figures. It is highly unlikely the phosphor spectrum was similar to that shown. They connected the data points from each subject by straight lines, based on the fact that the range bars associated with each data point were smaller than the size of the symbol used in the figure. The figure here draws only a single line through each of the achromatic and blue-yellow grating acuity data sets. This method highlights the variance between the data from the subjects. The peak value for the achromatic data is at about 30 cycles/degree. The peak value for the blue-yellow data is nominally 4 cycles/degree.

The peak value of 30 cycles/degree for the maximum performance of the visual system compares [xxx un-/favorably] with the accepted value based on the Snellen Chart.

Some recent data from Metha & Lennie is provided as an overlay to this figure194. It represents the measured grating acuity believed to be associated with the S-channel. It has been replotted onto a logarithmic scale. For a detailed discussion, the original figure should be reviewed. Their data was collected using a laser interferometer at 441.6 nm, near the peak of the S-channel absorption spectrum. Although they measured the light level, and determined the level providing the maximum grating acuity, their article does not provide the precise values used in their acuity versus eccentricity graphs. While Metha & Lennie rely upon an earlier project that designed the laser interferometer and interpreted its performance, this work offers a different interpretation of the performance of the interferometer as a test set and of the resulting data. The original project relied upon geometric (Gaussian) optics and proposed that the technique bypassed the spatial performance contribution of the physiological optics. The project appears to have ignored two additional relevant phenomena. It appears to have ignored the effects of diffraction associated with two coherent wavefronts passing through two nearby points in space and allowed to interact in the focal plane. It also ignored the fact that the physiological optical system is an immersed optical system (See Section 14.2.1 for an alternate interpretation of their tests). Metha & Lennie show their data peaking at 10 cycles/degree.

As in the previous discussion, the range bars associated with their data are frequently on the order of 2:1. Metha & Lennie say "(grating) acuity peaks at the fovea and falls smoothly with eccentricity, slightly more steeply in the nasal hemi-retina." This statement can only be considered a generalization based on their sparse data, the associated ranges and the location of the optical axis of the eye. Their statement can be reinterpreted, based on the fact that the line of sight and the optical axis do not coincide: the peak acuity is achieved slightly to the temporal side of the line of sight because of the location of the optical axis of the eye. This is supported by the difference in average value for their data points at +/– 2 and +/– 4 degrees. The values on the temporal side of the line of sight are higher.

The two solid bars in the figure indicate the peak on-axis grating acuity reported by Sekiguchi et al195. This data was collected with a new interferometric technique. The data, based on two lasers at 514 & 633 nm, provides a 25% higher peak value when the two laser signals are summed in-phase (their isoluminant condition) and a more than 200% higher peak when they are summed out-of-phase (their isochromatic condition). Note that except at one eccentricity, the data attributed to the grating acuity of the S-channel by Anderson, et al. and by Metha & Lennie vary by a factor of 2:1 or more (a factor of 4 at zero eccentricity). When the values of Sekiguchi are introduced, the spread is even larger. There are many contributions to this, including the difference in peak performance as a function of absolute photon flux intensity applied to the eye (See Section 17.6.3). A higher degree of care in experiment design and data collection is clearly necessary.

Note also that the shape and width of the data of Anderson, et al., which include the diffraction effects of the physiological optics of the eye, are essentially the same as those of Metha & Lennie, which ostensibly do not. An explanation is needed to defend the interpretation of these two groups concerning the role of physiological optics in their data. This work associates much of the shape of this data with the diffraction effects of the physiological optics. It also recognizes that the photoreceptor mosaic does not vary anywhere near the extent suggested by Anderson, et al. as a function of eccentricity (See Section 3.2.2.1). The fact that the grating acuity of the achromatic system and the putative S-channel alone can be overlaid by moving the curves vertically shows the great likelihood they are controlled by the same phenomenon, the diffraction performance of the physiological optics. The vertical motion is an indication of the absolute luminous intensity of the stimulus, and possibly the contrast of the stimulus, rather than an indication of the mosaic parameters. The impact of the absolute photon flux on the P/D Equation is present in spite of any brightness constancy associated with later elements in the visual process. That is, the spatial (and related temporal) frequency response of the visual system is a function of the absolute photon flux level. The overlay of the data of Metha & Lennie on that of Anderson, et al. would support the same conclusions. However, both sets are too coarse to perform a precise comparison.

Figure 2.4.7-3 Composite of grating acuity as a function of eccentricity. See text. Dashed line from Metha & Lennie, 2001. Solid lines and data points from Anderson, et al. 2002. Two heavy bars are from Sekiguchi, et al., 1993.

194Metha, A. & Lennie, P. (2001) Transmission of spatial information in S-cone pathways. Visual Neurosci. vol. 18, pp 961-972
195Sekiguchi, N. Williams, D. & Brainard, D. (1993) Aberration-free measurements of the visibility of isoluminant gratings. J. Opt. Soc. Am. A vol. 10, no. 10, pp 2105-2117

If the grating acuity were controlled or dominated by the parameters of the retinal mosaic, acquiring the data using narrower bins along the horizontal axis should surface irregularities in the shape of the S-channel performance versus eccentricity due to variations in the photoreceptor density. The above challenges to the interferometric approach introduced by Sekiguchi, et al. will be explored more completely in Section 14.2.1.
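As a rough cross-check on the diffraction argument, the incoherent cutoff spatial frequency of a circular pupil can be estimated. This is a textbook aperture-only approximation; the pupil diameter and wavelength below are chosen for illustration, and aberrations and the immersed nature of the eye's optics are ignored:

```python
import math

def diffraction_cutoff_cpd(pupil_mm, wavelength_nm):
    """Incoherent diffraction cutoff of a circular aperture: D/lambda cycles
    per radian in object space, converted to cycles per degree."""
    cycles_per_radian = (pupil_mm * 1e-3) / (wavelength_nm * 1e-9)
    return cycles_per_radian * math.pi / 180.0

# A 3 mm pupil at 555 nm cuts off near 94 cycles/degree; the measured
# peak performance (~30 cycles/degree at 20/20) sits well below this ideal limit.
print(round(diffraction_cutoff_cpd(3.0, 555.0)))  # ~94
```

The gap between the ideal cutoff and measured acuity is consistent with the view that the measured curves reflect the real, aberrated diffraction performance of the physiological optics rather than the mosaic alone.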

Navarro, et al. have recently provided more data that can be translated into the form presented above196. Their data is based on a Twyman-Green interferometer. They note the resemblance of the physiological optical system to a conventional wide angle lens system.

2.4.8 Ophthalmological models of the visual system

2.4.8.1 Generic descriptions

Figure 2.4.8-1 provides a generic optical schematic that includes all of the features available to animal visual systems. Many species do not use all of these features, but they are available in the basic conceptual design for the animal kingdom. The optics can be divided into three groups; the objective group, the field group and the photoreceptor group. It is the objective group that has been enshrined in the work of Gullstrand, LeGrand & others.

Beginning from the left, a light ray originating from the object initially encounters an “outer” eyelid, which is used as a shutter. Next is the “inner eyelid,” which may only act as an additional layer of protection against the exterior environment but is also known in some species to have optical power and be called a nictitating lens. The next surface is the cornea, which in most terrestrial species is the strongest optical element of the system; in aquatic species, it can be thought of as a mere window without any optical power. The next element is the iris. It is generally described as primarily an element for controlling the light level falling on the retina. However, its main role is controlling the quality of the image formed on the retina by controlling the diameter of the optical ray bundle passed through the outer reaches of the lens system. The next element is the “lens” or crystalline lens. This lens is only about 1/5th as strong as the cornea in terrestrial animals. It is used as a variable focal length lens that provides most of the accommodation. The literature suggests the cornea may also change shape to some degree and aid in the accommodation task. The above elements as a group are known as the objective lens (group).

Following the objective lens group is an important multipurpose element that will be called the “field plate.” This element lies immediately in front of the focal plane. In photography, such an element is usually used to flatten the focal plane. However, in vision it is used in several ways; as a spectral filter, as a magnifier and possibly as a “focal plane corrector.” The combination of the objective and field groups creates an image at the surface of best focus for the total optical system, commonly known as the focal plane. However, for the eye it is far from a flat plane; it is a highly curved surface (with a dimple around the fovea). A better name for the visual focal plane formed by the optics is clearly the focal surface.

196Navarro, R. Artal, P. & Williams, D. (1993) Modulation transfer of the human eye as a function of retinal eccentricity J. Opt. Soc. Am. A vol. 10, no. 2, pp 201-212

The lens and cornea are variable index optical elements. Their index of refraction varies with distance from the “geometric axis” or the geometric center of the element. Usually, this geometric axis is the same as the optical axis but this is not always true. For a narrow field of view optical system, this variation in the index can be used to correct chromatic and spherical aberration in the image. In a wide angle system such as most animal eyes, it is used to cause the focal length of the optical system to vary quite markedly with angle. The result is a focused image formed on the inside surface of a sphere, not a flat surface as in a camera. Since formation of this spherical image is of primary importance, the resultant aberrations can be quite severe. The iris plays a role in limiting these.

Figure 2.4.8-1 The generic schematic of the animal eye. It includes all of the elements found in various eyes of the three major phyla: Arthropoda, Mollusca & Chordata.

The above situation explains the actual primary purpose of the iris: it is to optimize the performance of the optical system. It does this by trading off the light collection efficiency of the optics against the image degradation associated with aberrations in the optical system. The technique is to restrict the size of the optical ray bundle passing through the optical system from a given object point when the illumination level is high. As will be seen later, the iris opens at low illumination levels. This increases the chromatic and spherical aberrations. The Stiles-Crawford effect (of the first kind) is directly related to the spherical aberration found in this optical configuration. The long wavelength spectral channel of the eye effectively shuts down at low illumination levels. This results in a narrowing of the spectral region sensed by the eye and less overall impact from the chromatic aberration resulting from the increased iris opening.

Behind the field plate are the three optical elements of the photoreceptor cell group. The first is a lenticular array of small lenses formed by the optical index variation of the materials inside the Inner Segments. This can be an array of single-element lenses using only the ellipsoids. Alternately, it can be considered an array of two-element lens groups using both the ellipsoid and the paraboloid, if present. These small lenses can provide a significant improvement in the optical capture efficiency of the photoreceptive material in the individual Outer Segments. The second optical element is the structure of the Outer Segment. When viewed as an optical element, it can be considered a cylinder of optical material with a rounded end. Describing its input illuminance acceptance angle is important. This requires a description of the diameter of the cylinder, the curvature of the input end, and knowledge of the index of refraction of the fluid making up the nearby IPM.
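Treating the Outer Segment as a step-index light guide, its acceptance half-angle follows from the numerical aperture in the usual fiber-optic way. The two indices used below (outer segment material and surrounding IPM fluid) are illustrative assumptions, not values established in this text:

```python
import math

# Sketch: acceptance half-angle of the Outer Segment modeled as a
# step-index light guide.  Assumed illustrative indices:
# guide (outer segment) ~1.41, surround (IPM fluid) ~1.34.

def acceptance_half_angle_deg(n_guide, n_surround):
    """Half-angle (degrees), measured in the surrounding medium,
    within which rays are captured by total internal reflection."""
    na = math.sqrt(n_guide**2 - n_surround**2)  # numerical aperture
    return math.degrees(math.asin(na / n_surround))

theta = acceptance_half_angle_deg(1.41, 1.34)
print(f"acceptance half-angle ~ {theta:.1f} degrees")
```

The key point of the paragraph above survives the rough numbers: the acceptance cone depends on the index *difference* between the Outer Segment and the nearby IPM, so the IPM index must be known before the acceptance angle can be stated.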
Beyond the photoreceptors is another optical element, the tapetum. This element has a diverse range of uses. In some animals, it is used as a contrast-enhancing device, absorbing energy not previously absorbed by the photoreceptors. In others, it is used as a reflective element that returns the energy not absorbed by the photoreceptors back through the same photoreceptors. In this case the tapetum acts as a sensitivity enhancer, increasing the overall absorption efficiency. To evaluate the performance of the tapetum as a single-element absorber or reflector, it is necessary to know the curvature of the exit end of the associated photoreceptor and the index of refraction of the IPM in the local area at the end of the photoreceptor cell. The tapetum can also be used as a large-scale mirror in a catoptric or catadioptric optical system. Here, it acts as a major optical element affecting the performance of all of the photoreceptors in a given retina.

Further detailed understanding of the optical system of the eye of a given species must await a complete ray trace of the eye involved. Without a ray trace analysis, further discussion must be considered primarily speculation. A ray trace based on the proposed research-oriented eye appears in Appendix L.

2.4.8.2 Existing ophthalmological models of the human eye

Many authors have prepared optical models of the human eye, particularly for use in medicine and optometry. Davson, up to his third edition, presented an excellent model and a good discussion of other models. The older available models, including the Gullstrand of 1911 (actually several), the LeGrand of 1946 and the Emsley of 1952, are all based on first-order equations, spherical optics and thin optical elements, with recognition of the gradient-index features of the lens. However, all use one or more average values for the index of the lens in the actual calculations. Most of these models also offered a simplified schematic, usually called a reduced eye, which is still a paraxial model attempting to represent the eye with only a single thin lens. These models are not detailed enough for the purposes of this work.

2.4.8.2.1 Models introduced during the 1970's & 1980's

During this time period, investigators began to develop wide-angle models of the human eye represented by non-spherical surfaces and incorporating the gradient index of the lens in their calculations. However, only limited computer programs were available outside of the aerospace-industrial complex for addressing these models in detail. Blaker197, Howcraft & Parker198, Kooijman199, Lotmar200, Navarro et al.201, Pomerantzeff et al.202 and Chan et al.203 all contributed to the development of wide-angle models of the human eye during this period. Charman has provided an extensive index to this material204.

It should be pointed out that most of these authors incorporate the words aspheric optics in their titles. However, in optical design, the more appropriate term would be non-spherical optics or, more specifically, elliptical optics (a class of conic sections). In optical design, an asphere is still a spherical surface modified only slightly (by less than a few wavelengths of light) in curvature as a function of radius from the axis, to aid in the correction of aberrations. An elliptical or parabolic surface, by contrast, is non-spherical in origin, i.e., usually a conic section; it may also be modified slightly to correct for aberrations. When so modified, such surfaces are spoken of as aspherized conics. The animal visual system employs non-spherical optics made of gradient-index transmissive elements. If it employs aspherizing of the non-spherical elements, this has not been reported in the literature.

Blaker has concentrated on the gradient index aspects of the lens and introduced the notation developed by Moore205

197. Blaker, J. (1980) Toward an adaptive model of the human eye. J. Opt. Soc. Am. vol. 70, pp. 220-223
198. Howcraft, M. & Parker, J. (1977) Aspheric curvatures for the human lens. Vision Res. vol. 17, pp. 1217-1223
199. Kooijman, A. (1983) Light distribution on the retina of a wide-angle theoretical eye. J. Opt. Soc. Am. vol. 73, pp. 1544-1550
200. Lotmar, W. (1971) Theoretical eye model with aspherics. J. Opt. Soc. Am. vol. 61, pp. 1522-1529
201. Navarro, R., Santamaria, J. & Bescos, J. (1985) Accommodation-dependent model of the human eye with aspherics. J. Opt. Soc. Am. A vol. 2, pp. 1273-1281
202. Pomerantzeff, O., Dufault, P. & Goldstein, R. (1983) Wide-angle optical model of the eye. In Breinin, G. & Siegel, I. eds. Advances in Diagnostic Visual Optics. NY: Springer-Verlag, pp. 13-21
203. Chan, D., Ennis, J., Pierscionek, B. & Smith, G. (1988) Determination and modeling of the 3-D gradient refractive indices in crystalline lenses. Applied Optics vol. 27, no. 5, pp. 926-931
204. Charman, W. (1991) Op. Cit. pp. 21-26
205. Moore, D. (1971) Design of singlets with continuously varying indices of refraction. J. Opt. Soc. Am. vol. 61, pp. 886-894

and implemented in at least one modern ray tracing program206. A similar notation was introduced much earlier by Gullstrand, based on a simpler construct by Matthiessen (1883). However, optical ray tracing remained in its infancy at that time and was generally limited to calculations by human “computers” with calculating machines or very expensive large-scale digital computers.

Glasser & Campbell have recently provided extensive data on the performance of the human lens as a function of age207. The data show progressive changes with age in both the focal length and the spherical aberration of human eyes. They also revisit the “lens paradox” that has arisen within simpler concepts of eye operation. Following this work, it is imperative that the Standard Eye exhibit an age-related component.

Unfortunately, as the references detail, most of the above investigators have been working outside the field of biological sciences and many of their models would qualify as “floating models.” As an example, Kooijman calculates the illumination falling on a surface tangent to the surface of the retina at various angles from the optical axis, on the assumption that this surface is the absorbing medium. However, the light-accepting ends of the photoreceptors are aligned to the source of incoming illumination; they are not parallel to the retinal surface. Therefore, the relevant surface area is the cross-section of the photoreceptor OS perpendicular to the incoming ray, not its projection parallel to the retinal surface. Similarly, Lotmar speaks of a (photoreceptor) acceptance angle as the angle between the ray from the principal point to the retinal surface and a tangent to that surface. He assumes that the photoreceptor OS is perpendicular to the retinal surface even at large angles from the optical axis.
Bedell & Enoch208 have shown that the photoreceptors of the eye are sheared over at large distances from the polar point in order to be aligned to the incident rays. Therefore, the photoreceptors all point toward the second principal point of the eye, except in a few interesting pathological cases.
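The quantitative cost of assuming a tangent-to-retina absorbing surface, rather than receptors pointed at the second principal point, is simply the cosine projection of the entrance aperture. A minimal sketch, with arbitrary units and illustrative incidence angles:

```python
import math

# Sketch: effective collecting area of a photoreceptor entrance aperture.
# If the receptor points at the second principal point (Bedell & Enoch),
# the chief ray meets it nearly head-on at any eccentricity, so the full
# aperture collects light.  A tangent-to-retina assumption instead sees
# the ray at the local incidence angle, scaling the area by cos(angle).

def effective_area(area, incidence_deg):
    """Projected collecting area for a ray at `incidence_deg` degrees."""
    return area * math.cos(math.radians(incidence_deg))

aperture = 1.0  # arbitrary units
for angle in (0, 20, 40, 60):  # illustrative local incidence angles
    print(f"{angle:2d} deg -> effective area {effective_area(aperture, angle):.3f}")
```

At 60 degrees of local incidence the tangent-surface assumption halves the apparent collecting area, which is the scale of error introduced into peripheral illuminance calculations such as Kooijman's.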

As a point of interest, figure 1 of Lotmar shows the Gullstrand-Le Grand model with a ray traveling in a straight line from the first surface of the cornea to the retina. This is far from the actual case. The so-called internal angle of the figure should be shown relative to the exit aperture of the lens (and need not be on-axis at that point). At the high incident angle shown, the exit aperture of the optical system is not described by the diameter and position of the iris.

Kooijman’s artist appears to have taken some license in his Figure 2, where the ray diagram is drawn for the paraxial case and the rays should be passing through the 2nd nodal point. Instead, it appears to be drawn for the full-field case, since the rays pass through the 2nd principal point.

From the perspective of an optical designer, only approximate data were available during this time period concerning the parameters of the animal eye, including the human eye. In the optical design of multi-element optical systems, it is normal to use parametric data with an accuracy of at least five significant digits, and usually six or more. The calculations are normally carried out on a computer to at least eight-digit accuracy (usually 32 or 64 binary bits) and then rounded to five or six digits for presentation. Watkins209 has documented that, when using paraxial ray analysis, calculations made to five-digit accuracy are only adequate for off-axis object angles of less than +/- 2 degrees and entrance aperture diameters of less than 0.5 mm. Note that the point of fixation in the human eye is more than 5 degrees from the optical axis.

The Gullstrand and LeGrand schematic eyes generally give the refractive index data to only three, sometimes four, decimal places at an unspecified wavelength, and give no dispersion data, i.e., the V-number or Abbe number associated with the refractive index. Without the dispersion data, discussing the chromatic aberration of the eye is very difficult. Navarro et al.210 have presented index data to four-decimal accuracy at four different wavelengths and from these have calculated preliminary dispersion values to three- or four-digit accuracy. They have also said these values are all approximations. They do not consider the gradient in the index of the crystalline lens of animal eyes, a considerable additional complexity.

There is an additional problem in the published literature associated with the values of the Gullstrand and LeGrand schematics. There are a great many secondary documents extant with different numbers for the same

206. Sigma 2100, available from Kidger Optics, www.kidger.com
207. Glasser, A. & Campbell, M. (1998) Op. Cit.
208. Bedell, H. & Enoch, J. (1980) An apparent failure of the photoreceptor alignment mechanism in a human observer. Arch. Ophthal. vol. 98, pp. 2023-2026
209. Watkins, R. (1972) A finite aperture model of the optical system of the human eye. Ph.D. thesis, Flinders University, Australia
210. Navarro, R., Santamaria, J. & Bescos, J. (1985) Accommodation-dependent model of the human eye with aspherics. J. Opt. Soc. Am. A vol. 2, pp. 1273-1281

parameters. Any investigator is cautioned to collect data from as many publications as possible and then interpret the data to decide which is most likely correct and adequate for his use. The most applicable values at present are those of Blaker. They supersede the Gullstrand and LeGrand tabulations because of their more realistic treatment of the gradient-index characteristics of the lens. This is the only set of data that explains the Stiles-Crawford Effects. See Appendix L.

2.4.8.2.2 Models and results introduced during the 1990's & 2000's

Beginning around 1990, the cost of computer processing time dropped significantly with the growth in the capability of desktop computers. This allowed a variety of non-optical scientists to begin ray-tracing still relatively simple models of the primate and human eyes to a higher degree of precision than ever before in the off-axis situation. These ray tracings typically still did not recognize the non-uniformity of the index of refraction of the lens or the fundamentally non-spherical character of the cornea. The studies of this time period were frequently limited to off-axis positions within 40 degrees of the axial condition.

Bakaraju et al. provided a comparison between the models available during this interval in 2008211. They also noted the transition from the use of multilayer structures, as approximations to various gradient variations in ocular material, to better gradient-based calculation models. These gradient-based models provide more rational results for off-axis conditions. Bakaraju et al. note this transition: “The off-axis aberrations of the unaccommodated form of Navarro, Santamaria, and Bescos (1985) model were later extensively (re)analyzed by Escudero-Sanz and Navarro (1999).” They also noted the work of Thibos and co-workers: “However, as pointed out several years earlier, the difficulty remains, that any reduced eye approach is limited in terms of its ability to truly represent real world vision since it cannot incorporate the sort of variation in refracting surfaces that occurs naturally (Le Grand & El Hage, 1980).” “Siedlecki, Kasprzak, and Pierscionek (2004) attempted to address this problem by combining the surface parameters of Kooijman (1983) with a lens having aspheric surfaces and a specific gradient index. Although they claimed superior image quality and low spherical aberration for their wide angle model, their work does not appear to have gained wide acceptance and is not among the popularly referenced examples in the literature.
In an effort to achieve ever more representative behaviour, other models have included accommodative changes for near (Goncharov & Dainty, 2007; Gullstrand, 1909; Le Grand & El Hage, 1980; Navarro et al., 1985), lenticular refractive index changes with age (Goncharov & Dainty, 2007; Norrby, 2005; Popiolek-Masajada & Kasprzak, 2002; Rabbetts, 1998; Smith, Atchison, & Pierscionek, 1992; Zadnik et al., 2003) and changing refractive error in a generic adult model (Atchison, 2006).”

“In a further refinement, Goncharov and Dainty (2007) incorporated a mathematical representation of a gradient-index (GRIN) lens and were able to reproduce the properties of two well-known schematic eye models, namely Navarro’s model (Escudero-Sanz & Navarro, 1999; Navarro et al., 1985) for off-axis aberrations and Thibos’s chromatic on-axis model (Thibos et al., 1992).”

Navarro dedicated a major part of his 2009 paper to the GRIN situation as it relates to the aging process. It is interesting to note that Navarro developed his GRIN model based on variations in the index of refraction of the layers of the “onion”-like lens (known since the work of Gullstrand). He did not address in detail the variation in the index with radial distance from the optical axis (a separate phenomenon). Specifically, he did not address this change in index relative to achieving the wide-angle capability of the overall optical design. His figure 6 indicates his tendency to model the lens as a “thin lens,” or group of thin lenses, rather than with the actual thick-lens character of the biological lens. A thin-lens design can only represent a very narrow field of view optical system. Section 2.4.1.2.1 describes why any detailed analysis of the biological optical system must be carried out using the equations appropriate for thick-lens optical systems. He does describe the global characteristics of the problem that are yet to be explored in detail. He notes significant differences in predicted performance between 2nd order (quadratic) and 6th order models of the lens.

Bakaraju et al. proceeded to explore the features of several of the above models using the 2007 edition of the optical design program ZEMAX, EE Edition, at 5 degrees off axis, the nominal location of the line of fixation. Their calculations extended up to the sixth-order errors in coma, etc., using a 3 mm pupil. Their effort was extensive and included many comments related to their professional familiarity with the ZEMAX program. Noting a feature of earlier work, Bakaraju et al. write, “Most of the wide angled models proposed give a reasonably good prediction of spherical aberration on axis. This is not unexpected, as the parameters are deliberately selected to

211. Bakaraju, R., Ehrmann, K., Papas, E. & Ho, A. (2008) Finite schematic eye models and their accuracy to in-vivo data. Vision Res. vol. 48, pp. 1681-1694

achieve this goal. For example, Atchison chose the posterior surface asphericity to satisfy the condition of optimal spherical aberration to agree with literature values. Goncharov & Dainty chose both the anterior and posterior lenticular asphericities to be variables in the optimization routine, again to satisfy the condition of real eye spherical aberration. This constitutes a reverse ray tracing approach where ocular parameters are adjusted to achieve a specific aberration profile.” They also noted, “All the schematic models we studied predicted relatively good off-axis image quality over a wide visual field that declined sharply to near zero levels beyond 40 degrees.” “By default the pupil is always uniformly illuminated in the software used for simulation. Moreover, all schematic modeling assumes a scatter free environment. Clearly, this is not true in the real eye, where image formation is strongly influenced by scattering. Finally, the Stiles-Crawford effect governs the illumination level at retinal eccentricities but was not accounted for (Atchison & Smith, 2000; Drasdo & Fowler, 1974; Stiles & Crawford, 1993).”

Bakaraju et al. conclude by suggesting a series of optimizations to the models they explored, compared to the parameters provided by the original modelers. They also assert, “The next generation of optical models is expected to be based on the latest results of ocular parameter measurements and to include the use of toroidal corneas, the Zernike description of the whole cornea surface rather than mean keratometric values, and also decentration and tilt of the cornea.” These new models must include the Stiles-Crawford effects, numbers 1, 2 and 3. Within their 40-degree area of study, they conclude, “Among the five selected models, Liou-Brennan & Atchison models gave a closer match with real eye data, in the aspects of RMS spherical aberrations, HOA & Coma and also peripheral refraction profiles.”
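The Stiles-Crawford effect of the first kind, which Bakaraju et al. note is missing from the simulation software, is commonly summarized as a pupil apodization function that is Gaussian in log units, eta(r) = 10^(−rho·r²). The sketch below uses that conventional parameterization; the value of rho is a commonly quoted foveal figure and is assumed here for illustration:

```python
# Sketch: Stiles-Crawford effect of the first kind as pupil apodization.
# Conventional form: eta(r) = 10 ** (-rho * r**2), where r is the pupil
# entry distance (mm) from the point of peak efficiency.
# rho ~ 0.05 mm^-2 is a commonly quoted foveal value (an assumption here).

def sce_efficiency(r_mm, rho=0.05):
    """Relative luminous efficiency of a ray entering the pupil r_mm
    millimeters from the point of peak efficiency."""
    return 10.0 ** (-rho * r_mm ** 2)

for r in (0.0, 1.0, 2.0, 3.0):
    print(f"entry point r = {r} mm  relative efficiency = {sce_efficiency(r):.2f}")
```

Because rays entering near the pupil margin are the most aberrated, this roll-off acts as a built-in weighting that suppresses the marginal-ray contribution any uniform-pupil simulation would overstate.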

Atchison offered a model tailored to a specific situation in 2006212: “These models include a gradient index lens and aspheric corneal, lens and retinal surfaces.” He discussed their multiple limitations and virtues.

Navarro readdressed the optical design of the human eye in 2009213. The paper contains some minor semantic difficulties due to its origin in the mind of a Spanish speaker (page 14, for example: “On the contrary, we have other three opposite features, . .”).

He begins, “The optical design of the eye is already given by nature (optimization through evolution), so its study can be seen as an inverse engineering problem: to unravel such design. Inverse problems are difficult in general and must be solved by successive approaches. Each approach consists of (1) some starting hypothesis based on previous knowledge; (2) a set of experimental data; and (3) a model relating those data and the hypothesis. The testing stage (4) compares model predictions to experimentally assessed optical performance. In the general case some agreement, and also important discrepancies, are obtained. The analysis of these discrepancies leads to the formulation of new hypotheses; therefore, a new approach should be undertaken. One interesting example is the intra-capsular mechanism of accommodation hypothesized by Gullstrand to explain discrepancies between geometry and power of his accommodated eye model (this example will be analyzed below.)”

He goes on, “It is crucial to realize that models and underlying hypotheses affect not only the way we understand the eye, but everything involved in the study of the eye; from measurement instruments, to data analysis and interpretation of results. Thus models make our ideas evolve and models progress with new experimental knowledge. This also affects clinical practice. To understand the optical design of the eye we need models of each component (cornea, lens) and from that we can construct a model of the complete optical system.
An initial overview of the literature shows a great variety of models, mainly attending to the following features:
- Reduced (single refractive surface)2,3 versus anatomical (cornea and lens),
- Monochromatic6 versus polychromatic7,8 (considering refractive index dispersion),
- Paraxial versus finite optical performance (optical and image quality),
- Homogeneous7,9 versus gradient index (GRIN) lens5,
- On-axis versus wide angle,
- Unaccommodated9 versus accommodative,
- Age-independent versus aging,
- Generic versus custom or personalized.
Other relevant aspects, such as intraocular scattering, have been incorporated only in a few models.”

The paper contains many individual pieces of measured data. However, his figure 11 of a wide-angle model of the eye is limited, although it does extend to 70 degrees. He illustrates three man-made wide-angle lenses, all based on spherical optics rather than the employment of a true conic section for the cornea as in the biological case. Only the

212. Atchison, D. (2006) Optical models for human myopic eyes. Vision Res. vol. 46, pp. 2236-2250
213. Navarro, R. (2009) The Optical Design of the Human Eye: a Critical Review. J. Optom. vol. 2, pp. 3-18

third case involves a curved image plane (of modest curvature). Navarro does not address the conic (ellipsoidal) form of the cornea.

His discussion of the retina is the most concise and profound that has been found in the literature: “Retina. The retina has a two-fold role in the optical design of the eye. As it will be explained later, the curvature of the retina seems to match closely the image curvature, which has a major impact in maintaining a reasonable peripheral image quality. In addition, each retinal cone from the mosaic of photoreceptors, behaves as an individual waveguide. These waveguides approximately point to the centre of the pupil and have a limited acceptance angle. As a consequence, the relative efficiency of the rays reaching one photoreceptor is maximum for the chief-ray (ray connecting the centre of the pupil with the centre of the waveguide) and decreases for peripheral rays (higher aperture angles). This is the well-known Stiles-Crawford effect, which is a sort of pupil apodization. The reduced luminous efficiency of peripheral rays attenuates aberrations of peripheral rays, as well as the effect of intraocular scattering. In summary, the retina is a fundamental component of the optical system of the eye and plays an essential role in the optimization of its performance, both on-axis by waveguiding and off-axis by means of its curvature.”

While describing the shortcomings of the optics of the eye in considerable detail, figure 14 of Navarro summarizes the actual situation in a useful way. It is reproduced here as Figure 2.4.8-2. Outside of the foveola, the resolution of the stage 0 optical system is far better than that of the overall system, by evolutionary design. The limitations are initially due to the photoreceptor sampling methodology of the visual modality. The off-axis “retinal resolution” is largely a result of the photoreceptor sampling methodology.
In the foveola, the limiting retinal resolution of the eye (based strictly on the nominal entrance geometry of the outer segment) is actually 3 arcsec rather than the nominal value suggested in this figure (Section 7.4.5).

Figure 2.4.8-2 Optical (stage 0) versus retinal (stage 1) resolution in the human eye. The performance of the optical system is focused on its wide angle capability in order to provide the alarm and awareness capability crucial to survivability. Its resolution is only critical with regard to the foveola and the analytical capability of the visual modality. See text. From Navarro, 2009.

Navarro concludes, “Our current knowledge and state of the art in measuring techniques often lead to paradoxical mismatches between experimental data and model predictions. The lens paradox is a clear classic example. It refers to emmetropia remaining rather constant with age, despite the marked increase in surface lens curvatures. Experimental evidences of accommodation, aging and presbiopia studies suggest that the aging lens may correspond to a sort of accommodated state (thicker and more curved lens), so that presbiopia could be interpreted as the decline of the ability to desaccommodate rather than to accommodate. If the aging lens is always accommodated, then the eye should be myopic and this is not the case (lens paradox.) Most explanations of this paradox rely on the change of the GRIN distribution with age, but there are other factors, such as lens growth and other potential changes of the optical system of the eye. It is clear that paradoxes like this must come from too simplistic or wrong models and assumptions, added to the lack of experimental data. The effects of aging (lens paradox, decline of optical quality, presbiopia) are showing to be especially difficult to model.”

After carrying on a soliloquy on the design of the animal eye, he closes with, “In conclusion, the optical system of the eye seems to combine smart design principles with outstanding flaws.” Navarro clearly does not appreciate the overall (end-to-end) design specification for the visual modality that has evolved; the implemented design meets that specification remarkably well. He adds, “Until now, such a modest quality could not be improved by surgery; indeed, any attempt of modification, and ageing itself, only yields a reduction of the optical performance.”

Campbell and associates were among the leaders of the investigatory biology community studying the optics of the eye from outside the professional optical design community during the 1990's. The capability they created is described in the abstract of one of their 1990 papers214: “A video frame grabbing technique with a computer-generated overlay is used to obtain a direct visualization of the position of the image of a point source, producing a Maxwellian view, in the natural entrance pupil of the human eye. With this simple technique, the image point can be located with an accuracy of at least 0.07 mm. Applications are demonstrated: (a) in instruments using a single Maxwellian view for measuring retinal resolution in white light or superimposed Maxwellian views for color vision experiments, (b) in a dual Maxwellian view apparatus used for measuring the Stiles-Crawford effect of the first kind or for measuring chromatic aberration, and (c) in a new semi-Maxwellian view apparatus for the evaluation of monochromatic and chromatic aberrations.” They are careful to differentiate between the optical axis and the line of fixation of the eye. A Google Scholar search for the year 1990 and beyond will provide a rich source of their papers providing data on a range of optical distortions associated with the human eye (generally within the foveola).

Their data (such as in Simonet & Campbell215) highlight the fact that the optical performance of the eye is frequently less than that found necessary in many man-made optical imaging applications. This is a truism. The eye was not designed to perform imaging for purposes of archival record keeping, or to fill a large field of view as found necessary in many television applications, where the individual members of the audience are only interrogating a small portion of the overall image at a given instant (in fact in a two-step process involving gazes and much shorter glimpses; see Section 19.10.3 on the PEEP procedure).

2.4.8.2.3 Existing ophthalmological models of the macaque eye

Several optical models of the macaque eye are available216. Most of these are based on Gaussian lens equations and are therefore limited to field angles of less than 1.2 degrees. Such models typically assume a spherical cornea and use measurements of three- or four-digit accuracy. As a result, they do not describe the monkey eye except for the on-axis condition. Lapuerta & Schein were well aware of the limitations of their analysis. Such a Gaussian description does not apply to the foveola which, as in humans, is on the order of six degrees from the optical pole of the retina.
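The restriction of Gaussian models to small field angles follows directly from the small-angle approximation, sin(theta) ≈ theta, that underlies paraxial ray tracing. A minimal sketch of how the relative error of that approximation grows with field angle (the specific angles printed are chosen only to match the values discussed in the text):

```python
import math

# Sketch: relative error of the paraxial approximation sin(theta) ~ theta.
# Gaussian (first-order) eye models implicitly make this substitution,
# so their validity shrinks rapidly as the field angle grows.

def paraxial_error(theta_deg):
    """Relative error of replacing sin(theta) by theta."""
    t = math.radians(theta_deg)
    return abs(t - math.sin(t)) / math.sin(t)

for angle in (1.2, 5.0, 20.0, 40.0):
    print(f"{angle:5.1f} deg  relative error = {100 * paraxial_error(angle):.3f} %")
```

At 1.2 degrees the substitution error is negligible against three- or four-digit input data, but at the roughly six-degree eccentricity of the foveola it already exceeds the precision of those measurements, and by 40 degrees it reaches several percent.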

2.4.8.3 Performance evaluation tools of Optometry

Normally, only the human visual system is evaluated for “resolution” using complex symbols, and normally this evaluation is limited to images presented to the fovea using a standard (Snellen) eye chart. However, this procedure gives little indication of the capability of the system to resolve detail at both foveal and off-foveal locations simultaneously. Anstis has provided a novel eye chart217 that is reproduced in Figure 2.4.8-3. It provides the viewer with an estimate of his ability to resolve detail as a function of location relative to the point of fixation. Care must be employed to ensure that a high-quality, preferably typeset, version of this figure is used to obtain quantitative results.

The reader is cautioned that Anstis repeats the conventional wisdom that the resolution of the eye degrades with distance from the fovea because of the increased size of the photoreceptors. If this were true, the photoreceptors sensing the largest letters would be sixteen times larger than those sensing the smallest letters. As shown by the data of Pirenne, discussed in Chapter 3, this is not the case. The loss in performance is due to the off-axis performance of a very wide-angle optical system using gradient optics.
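The Anstis chart embodies a simple linear scaling rule: each letter's size grows in proportion to its eccentricity, so that all letters are roughly equally legible during steady central fixation. A minimal sketch of that rule; the proportionality constant below is an arbitrary illustrative choice, not a value taken from Anstis:

```python
# Sketch: the linear size-for-eccentricity scaling embodied in an
# Anstis-style chart.  k is an arbitrary illustrative constant; the
# actual chart fixes it empirically so all rings are equally legible.

def letter_height_deg(eccentricity_deg, k=0.046):
    """Letter height (deg of visual angle) scaled linearly with
    eccentricity (deg) from the fixation point."""
    return k * eccentricity_deg

for ecc in (2.5, 5.0, 10.0, 20.0):
    print(f"{ecc:5.1f} deg eccentricity -> "
          f"{letter_height_deg(ecc):.2f} deg letter height")
```

The linearity is the point at issue in the caution above: a sixteen-fold range of letter sizes across the chart does not by itself imply a sixteen-fold range of photoreceptor sizes.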

214. Campbell, M. & Simonet, P. (1990) Video monitoring of the principal ray of a Maxwellian view for the measurement of optical aberrations, the Stiles-Crawford effect, retinal resolution, and for investigating color vision. Applied Optics vol. 29(10), pp. 1420-1426
215. Simonet, P. & Campbell, M. (1990) The optical transverse chromatic aberration on the fovea of the human eye. Vision Res. vol. 30(2), pp. 187~20
216. Lapuerta, P. & Schein, S. (1995) A four-surface schematic eye of macaque monkey obtained by an optical method. Vision Res. vol. 35(16), pp. 2245-2254
217. Anstis, S. (1974) A chart demonstrating variation in acuity with retinal position. Vision Res. vol. 14, pp. 589-592

Figure 2.4.8-3 A novel eye chart for evaluating both the on-axis and off-axis performance of the visual system simultaneously. Try to maintain fixation on the central symbol and determine what symbols and what circles of symbols you can recognize simultaneously. From Anstis (1974).

2.4.8.4 Performance evaluation tools of Ophthalmology

2.4.8.4.1 Precise terminology

As noted above and discussed further below, it is important to recognize that the human eye is not used as an optimized, on-axis, optical system. The fact that the foveola is not located on the optical axis assures that the visual system does not reach its theoretical potential in spatial performance. When examining the visual system more closely than is usually done, particularly in the case of disease, it is important to use the more precise terminology of ophthalmology. TABLE 2.4.8-1 summarizes many of the terms used to describe the spatial performance of the visual system. The root -tropia, derived from the Greek, is normally used to describe deviations from the nominal dimensions of the eye. In general, the clinical terms are older and less formally rooted in Greek. In a few cases these more generic terms are shown in the left-hand column in parentheses.

TABLE 2.4.8-1 TERMS USED TO DESCRIBE THE SPATIAL PERFORMANCE OF THE EYE

TERM            CONDITION                                    CLINICAL TERM   VERNACULAR

Emmetropia      Ideal refractive focus
Anisometropia   Difference in refraction between the eyes
Ametropia       An error of refraction on the retina
Hypermetropia   Focus behind the retina                      Hyperopia       Farsightedness
Hypometropia    Focus in front of the retina                 Myopia**        Nearsightedness
(Amblyopia)     Poor spatial performance at nominal          Amblyopia**     Lazy eye
                illum. w/o morphological cause
Heterotropia    Inability of eyes to converge                Strabismus
Esotropia       Eyes crossed abnormally                                      Cross-eyed
(Nystagmus)     Uncontrollable eye movements                 Nystagmus       Wandering eye
Myosis          Abnormal contraction of the iris
Low Vision      Uncorrectable poor acuity, 20/70
Legally Blind   20/200 corrected

**Myopia is a clinical term that will be shown to consist of two components in serious medical situations: a refractive error, generally hypometropia, and a neurological error commonly described as amblyopia.

2.4.8.5 Development of a detailed optical model for research purposes

It would be useful to have a detailed optical ray tracing of the human optical system. However, at the present time, the necessary parameters are not known with sufficient precision to achieve a formal ray tracing. To achieve a usable ray tracing, all of the relevant optical parameters need to be known to a precision of at least 5 decimal places. The best data available has been reviewed in Sections 2.4.1, 2.4.5 & 2.4.8. That data together does not form an adequate set. Llorente, et al. have recently made measurements using a 37-aperture wave aberration technique that may develop into a useful data set218. In precise laboratory experiments, it is important that the modulation transfer function appropriate to the specific optical configuration is used, e.g., a round aperture and test structure or slit configurations. The appropriate MTF’s

218 Llorente, L. et al. (2001) Ocular aberrations in infrared and visible light using a laser ray tracing technique. Xxx Probably ARVO convention Vol. 42, no. 4, 15 March, pg S392

are discussed in Wolfe219. [xxx touch on the interferometry of Section 14.2.1.4 ]
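The choice of MTF matters quantitatively. As a hedged illustration, the textbook diffraction-limited MTFs for a circular aperture and for a slit (one-dimensional) aperture can be sketched as follows; these are standard optics formulas of the kind tabulated in handbooks such as Wolfe, not measured parameters of the eye.

```python
import math

def mtf_circular(nu, nu_c):
    """Diffraction-limited MTF of a circular aperture.

    nu is spatial frequency; nu_c is the incoherent cutoff frequency,
    D / (wavelength * focal_length) for an aperture of diameter D.
    """
    x = nu / nu_c
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf_slit(nu, nu_c):
    """Diffraction-limited MTF of a slit (square) aperture: linear roll-off."""
    x = nu / nu_c
    return max(0.0, 1.0 - x)

# At half the cutoff frequency the two configurations already differ
# noticeably: circular ~0.39 versus slit 0.50.
```

This is why the test-structure geometry (round aperture versus slit) must be matched to the MTF used in data reduction.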

219 Wolfe, W. & Zissis (1978) The Infrared Handbook. Washington, DC: Office of Naval Research, Section 8-8 through 8-10

2.5 Motion and fixation in the operation of the eye

[[ Some of the following may belong in Chapter 4 ]]

2.5.1 Overview

The great importance of eye movements in the visual process of higher animals is generally not presented in texts on vision, although it is very well known that without such movement the eye becomes functionally blind (Yarbus220 in 1965 and Ditchburn221 in 1973 provided comprehensive reports and references). From another perspective, the differential motion between the eyes is seldom explored in depth. This frequently leads to superficial discussions of the nature of binocular and/or stereoscopic vision and how it is realized in the visual process. Similarly, texts could not be found that discuss the independent eye motions of the Chamaeleontidae family of vertebrates. Only superficial references could be found to the fact that the eyes of some animals provide binocular vision over only relatively limited sectors of their field of view. Further study of these animals could provide a significant level of data concerning how the computational optics of the visual system is configured in various animals.

Referring to the basic phylogenic tree, the planaria exhibited photosensitive spots that were primarily sensitive to sudden changes in illumination. Long-term, such as diurnal, changes were unimportant to such an animal. However, more sudden changes could have been important, and it even developed the ability to sense the coarse direction of such changes. All of the more advanced phyla developed more accurate methods of determining the direction of the illumination change. They employ either the replication of simple eyes to form compound eyes, or the replication of either a retinula or individual photoreceptor cells behind a single lens assembly to form a complex eye. In Arthropoda and Mollusca, the eyes were normally mounted directly to another part of the animal's body and exhibited little ability to change their own orientation. The animals of Chordata, on the other hand, adopted a different evolutionary approach. They developed an eye that was based on the replication of photoreceptors behind a single lens, a complex eye. In addition, it was mounted in a socket that gave the eye a high degree of rotational capability. Thus the chordate eye achieved high angular resolution over a significant instantaneous field of view and was also able to rapidly redirect that instantaneous field of view within a wider total field of awareness.

As will be developed in later chapters, it is much easier to understand how the animal uses the information collected by these, and all other, eyes once two ideas are discarded. These are the ideas that the eye is an imaging device and that a literal image map is transmitted to the brain. It will be shown that the eye is actually a change detector. It requires relative motion between the target in the field of view and the instantaneous line of sight of the eye. Furthermore, the eye(s) delivers only vectors to the brain and not a literal map. The brain reconstitutes a virtual map based on these vectors. This virtual map is not related to the instantaneous field of view of the eye(s) but is in fact related to the position of the head, or body, in earth centered inertial space.

2.5.2 Muscles and motions in Chordata

With the human eyeball located in its socket, rotation of the eyeball in the orbit is accomplished by six extrinsic muscles (generally striated, see below), Figure 2.5.2-1. Four of these--the external, internal, superior, and inferior recti--provide for the main movements in azimuth and elevation. These four have their origin in the circular ligament of Zinn at the rear of the orbit. A simple azimuthal movement is produced by the internal and external recti. The attachment points of the superior and inferior recti are such that they produce, besides elevation and depression of the axis of the eye, a certain amount of torsion or rolling movement. This torsion is compensated by the two oblique muscles, the inferior oblique working with the superior rectus, and the superior oblique working with the inferior rectus. Guyton222 provides a discussion of these muscles and their innervation. Leigh & Zee have provided a much more elaborate discussion of these muscles and their characteristics223. They include a very detailed figure credited to Miller & Demer.

220 Yarbus, A. (1967) The Role of Eye Movements in Vision. New York: Plenum Press (Russian text published in Moscow, 1965)
221 Ditchburn, R. (1973) Eye-movements and visual perception. Oxford: Clarendon Press
222 Guyton, A. (1976) Textbook of medical physiology. Phila. PA: W. B. Saunders pp. 818-825
223 Leigh, R. & Zee, D. (1999) The Neurology of Eye Movements, 3rd ed. NY: Oxford University Press, fig 9-1. Much of this text dates from the original 1983 edition.

Demer has recently proposed a paradigm shift in the operation of the oculomotor muscles involving a more complex pulley system operating in conjunction with each pair of muscles. It appears to explain how the eyes can satisfy Listing’s Law without requiring a sophisticated computational capability involving trigonometric functions224. Leigh & Zee discuss the detailed nature of the muscles of the eyes and point out their unusual, compound nature. They state that the “extraocular muscles differ anatomically, physiologically and immunologically from limb muscles.”225 They point out that some twitch fibers have a single endplate per fiber and can generate an all-or-none propagating response. There are also non-twitch fibers that show graded contractions to trains of electrical pulse stimuli, similar to those in amphibians. The unique arrangement of these fibers is illustrated in their figure 9-5. These two fiber types are separated, with the fatigue-resistant twitch fibers generally at the orbital end and the non-twitch fibers at the global end of the muscle. These characteristics will be important in the dynamics of the eyes discussed in Chapter 7.

Figure 2.5.2-1 CR The extraocular muscles of the eye and their innervation. From Guyton (1976)

The eye in humans and other higher mammals exhibits several distinct motions. Some of these are under conscious control but most are not. There are few comprehensive discussions of these motions in the literature. Where they occur, it is important to characterize the author's perspective. As an example, Leigh & Zee are clinically oriented. Their extensive definitions of the saccadic motions (their Table 3-1) actually describe responses to a stimulus that includes a large saccadic component. However, the definitions are more properly described as syndromes or complexes rather than classes of saccades. They include time delays and involve abnormal stimuli and training in abnormal responses (antisaccades). Their listing is also limited to clinically (typically bedside) observable eye movements. They do not include the finer motions defined as flicks and tremor. The perspective of those writing in Underwood is different226. They are describing the easily observable motions of the eyes while examining scenes and reading text. They also do not include the finer motions defined as flicks and tremor. These flicks and the microsaccades of tremor are the motions that are critical to the cognition of the symbols associated with reading. Chapter 7 will address these motions in detail. A summary of these motions should include the large and small saccadic motions, flick, tremor, and nystagmus.

The large saccadic motion is the motion associated with pointing the eye to bring an object in the far field to the fixation point of the eye. This motion can involve coordinated angular motions of just one or two degrees up to the full rotational capability of the eye. If needed, this rotation can be augmented by the angular rotation of the head. These motions can be considered voluntary and are frequently described as part of a voluntary fixation mechanism. However, the voluntary aspect is frequently overridden by involuntary motions commanded by the brain in response to fright or lack of important visual information about the environment. Peak velocities of over 400 degrees/sec are achieved in the human eye when rotating more than 30 degrees. Peak velocities of over 700 degrees/sec are reported for horizontal saccades227. In this work, large saccades will be arbitrarily defined as larger than three degrees, based on figure 3 of Nies et al. in

224Dimer, J. (2002) The orbital pulley system: a revolution in concepts of orbital anatomy. Ann. N.Y. Acad. Sci. vol. 956, pp 17-32 225Leigh, R. & Zee, D. (1999) Op. Cit. pg. 327 226Underwood, G. (1998) Eye Guidance in Reading and Scene Perception. NY: Elsevier 227Yee, R. Schiller, V. Lim, V. Baloh, F. Baloh, R. & Honrubia, V. (1985) Velocities of vertical saccades with different eye movement recording methods. Inv. Ophth. Vis. Sci pp. 938-944 Environment & Coordinates 2- 127

Becker et al.228 That work suggests that the useful field of the eye before a major re-fixation is on the order of three degrees (70% detection threshold contour). Motions in the range between three degrees and one degree will be arbitrarily defined as small saccades. Many investigators have plotted the small saccadic motions while the visual system is examining an object, or frequently a picture of an object in the laboratory. Guyton and xxx have provided maps of the simple case of the eye looking at a uniform spot in the laboratory. The map of this motion clearly suggests a probing of the image to find significant contrast changes representing transition points and lines. These authors do not discuss what the eye or visual system is doing while it is fixated on one of these locations for a significant time interval, xxx.

The next smallest saccadic motion, the flick, is not under voluntary control and is considered part of the involuntary fixation mechanism. It is the primary motion used to analyze imagery brought to the point of fixation by the large saccadic motions. The amplitude of these motions is highly variable, in the range of a few minutes of arc to one degree, depending as they do on image content. Each individual motion is accomplished in a very short time. The resulting power spectrum of these motions is difficult to measure but is believed to extend to a bandwidth of 30-50 Hertz.

The tremor of the eye is the primary mechanism used to convert the information collected by the photoreceptors, which are basically change detectors, into a perceived vector in the brain. It is these vectors that are associated with the meaning of a letter or fundamental shape. Tremor is very difficult to study because of its extremely small angular amplitude. This motion of each eye can be described in terms of its horizontal, vertical and torsional components. These components can be defined as microsaccades, but this term has been used variously in the literature, frequently to describe the larger small saccades and flicks. The amplitude of the tremor is between 20 and 40 arc seconds in object space. That corresponds to one or two photoreceptor diameters in the fovea. Similarly, the frequency of the tremor is reported to be between 30 and 90 Hertz (with reports to 150 Hertz). To obtain these measurements, Yarbus resorted to very carefully designed experiments. These involved the subject using a bite bar mounted on an inertial mass. The body of the subject was further constrained to avoid unnecessary motion. Sometimes, compensation and/or data reduction techniques were used to compensate for the heartbeat and related hydraulic motions of the subject. The resulting values are generally of the RMS type derived from power density measurements. The peak amplitude may be three times the RMS value.
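The correspondence between tremor amplitude in object space and photoreceptor diameter can be checked with simple small-angle arithmetic. In the sketch below, the 16.7 mm posterior nodal distance is an assumed schematic-eye value, not a figure taken from this text.

```python
import math

NODAL_DISTANCE_MM = 16.7  # assumed posterior nodal distance of the human eye

def arcsec_to_retina_um(arcsec, nodal_mm=NODAL_DISTANCE_MM):
    """Linear distance on the retina subtended by a small object-space angle."""
    radians = arcsec * math.pi / (180.0 * 3600.0)
    return radians * nodal_mm * 1000.0  # millimeters -> micrometers

# 20-40 arc seconds of tremor maps to roughly 1.6-3.2 micrometers on the
# retina, i.e. about one to two foveal cone diameters (~2 um), consistent
# with the correspondence stated in the text.
```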

The tremor of the eye is continuous during waking and much of sleeping, being overridden in an undefined manner during the large and small saccadic motions. Serious fatigue of the visual system, in the author's experience, manifests itself in a reduction in the amplitude of the tremor. This reduction in amplitude causes a loss of peripheral vision as the amplitude becomes smaller compared with the size of the individual photoreceptors and/or the data convergence function associated with them. As the fatigue becomes worse, the tremor amplitude is further reduced and the subject experiences “tunnel vision”--his vision is reduced to a zone around the fixation point correlated to the size of his fovea. It appears that the lack of a tremor is one sign a medical doctor uses to confirm the death of a subject. Lack of tremor causes the eye to appear much more distinct to an observer.

The tremor in Chordata, sometimes described as the micro-saccades, can be thought of as due to a “keep alive circuit.” In the absence of tremor, the eye is blind to fixed images in its field of view.

The term nystagmus appears frequently in the vision literature. Nystagmus is thought to be primarily a pathological motion of the eye resulting from disorders of the central nervous system. The condition is characterized by an oscillatory movement of the axes of the eyes during which the amplitude of oscillation is tens or hundreds of times greater than the amplitude of the tremor. Simultaneously, the frequency of the nystagmus is tens of times lower than the frequency of the tremor229. Many types of nystagmus appear in the literature. Nystagmus will not be discussed further in this work.

Examination of the motions of the eye suggests that they are essentially ballistic. The muscles introduce an impulse that starts the eye rotating. This rotation continues at a nearly constant rate until a second impulse is received to stop it. Only very detailed measurement would show any variation in the rate during a movement or any overshoot as part of a movement. A small amount of data regarding this motion appears in Richards230.
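The ballistic character of saccades is often summarized by an empirical "main sequence" relating saccade amplitude to peak velocity. The saturating-exponential form below is a common fit in the oculomotor literature; the constants V_MAX and A_63 are assumed round numbers chosen only to be consistent with the 400-700 degrees/sec peaks quoted earlier in this section, not measured values from this text.

```python
import math

V_MAX = 700.0  # assumed asymptotic peak velocity, degrees/sec
A_63 = 12.0    # assumed amplitude (degrees) giving ~63% of V_MAX

def peak_velocity(amplitude_deg):
    """Empirical main-sequence fit: peak velocity saturates with amplitude."""
    return V_MAX * (1.0 - math.exp(-amplitude_deg / A_63))

def duration_ms(amplitude_deg):
    """Crude ballistic estimate: amplitude divided by a mean rate taken
    as half the peak velocity (triangular velocity profile)."""
    return 1000.0 * amplitude_deg / (0.5 * peak_velocity(amplitude_deg))

# Under these assumptions, a 30-degree saccade exceeds 400 degrees/sec,
# and mid-sized saccades complete in a few hundredths of a second.
```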

228 Becker, W. Deubel, H. & Mergner, T. (1999) Current Oculomotor Research: Physiological and Psychological Aspects. NY: Kluwer Academic/Plenum Publishers, pg. 273
229 Yarbus, A. op. cit. pg. 120
230 Richards, W. (1968) Visual suppression during passive eye movement. J. Opt. Soc. Am. vol. 58, no. 8, pp. 1159-1160

2.5.3 Adaptation of ocular motion

The amplitude of the tremor in lower animals is an important adaptation to their niche environment. Many lower animals can partially control the tremor of their visual system. It appears that cats of all species reduce the amplitude of the tremor in their eyes when they go into the characteristic hunting posture (which includes inertial stabilization of the head during other body movements). For fixed targets, such reduced angular motion of the line of fixation de-emphasizes the animal's peripheral imaging capability while raising the emphasis on analysis of the target via the small saccadic motions. For targets moving relative to the background scene, the target motion causes an emphasis in the image analysis computations of the cat. Even while flying, many hunting birds, and particularly the kites (Elanus leucurus), inertially stabilize their heads and reduce the tremor to achieve very high sensitivity to the movement of small animals in their field of view. The kites have perfected a technique, called “stooping,” for descending feet first with the head remaining inertially stabilized to capture their prey. The jerky head motion of many birds may relate to the reestablishment of an inertially stabilized condition. It appears the apparent prominence of the eyes of owls is due to their stabilized condition.

The most widely discussed motion is the saccadic motion related to establishing a line of sight to a specific object or scene and the motions related to analyzing that scene. The main function of saccades is to change the point(s) of fixation. Saccades involve very short duration (hundredths of a second) motions of the eye at very high angular rates and, in animals with stereoscopic vision, almost identical angular motions by the two eyes. The large saccades, measured in degrees, are essentially voluntary. This is not true of the small saccades, which are usually measured in a few minutes of arc and are involuntary. Yarbus showed that these actual motions did not agree with the motions reported by the subject in psychophysical tests. The saccades are obviously very important, but not as important as the movements known as tremor.

Tremor is both the most important and the most difficult of the eye movements to study. The amplitude of the motion is very low and its frequency is very high. Further, it is frequently interrupted or masked by the larger saccadic motions. As noted by Yarbus, successful measurement requires careful attention to detail concerning vibration of the subject, the apparatus and even the building housing the laboratory. Yarbus frequently used differential measurement techniques to measure the difference in motion between the eye and a nearby upper incisor tooth. The amplitude of the tremor is usually about the angular dimensions of the eye receptors (20-40 seconds of arc), while its frequency varies from 30 to 90 Hertz. Ditchburn231 says the tremor is characterized by a continuous spectrum of frequencies up to 150 Hertz. The tremor is generated by the various eye muscles acting in coordination, i.e., the resulting motion in the object plane is a series of ellipses.

Ditchburn et al. in the 1950’s232 and Riggs et al., also working in the 1950’s233, developed the critical dependence of visual acuity on the fine motions of the eye. Yarbus went further to demonstrate complete loss of vision under a variety of situations where the relative motion between the scene and the eye was brought to zero, even investigating the situation where a zone of the visual field was brought to zero motion, and became functionally blind, while a surrounding region continued to perform normally. He also described how the higher cognitive centers of the eye-brain filled the “blind” area with a color representative of the surrounding area, similar to what is done in a modern computer program using the “fill” tool.

The conclusion that can be drawn from the above work on tremor and the minor saccades is that the eye is based on a change detection mode of operation (not an image integration mode). Thus, the individual photodetectors of the eye sense changes in brightness, either due to changes in illumination level of an area or the change in illumination related to the edge of an object compared with the background. Sudden changes in brightness are unusual in the natural world (the characteristics of fire being the major exception). In the natural world, the eye is absolutely dependent on relative motion between the image and the retina for operation. This motion is provided primarily by tremor.

231 Ditchburn, R. (1959) Problems of Visual Discrimination. 20th Thomas Young Oration, delivered to the society Nov. 12, 1959
232 Ditchburn, R. (1961) Eye movements in relation to perception of color, Visual Problems Colour 2:51
233 Riggs, L. & Niehl, E. (1960) Eye movements recorded during convergence and divergence. J. Opt. Soc. Amer. 50(9):913

2.5.4 Coordination of ocular motion

The coordination of ocular motion occurs in a variety of contexts. In some animals the individual eyes are not rotatable and muscular coordination is irrelevant. In some animals with no overlap of the visual fields due to eye placement, there is also no need for coordination. When there is a common field of view between the two eyes, coordination is normally required, both in muscular control and in data reduction. Where muscular control is needed, it may be merely to achieve binocular vision. This might be called first-order coordination. It ensures the images are properly overlaid in the data reduction computations. If a more sophisticated level of vision, stereoscopic vision, is wanted, a finer degree of coordination is required. This level of coordination employs parallax as a measure of depth in computational optics.

There are a number of animals, in several phyla, that exhibit totally independent eye movements. These include the chameleon in the family Chamaeleontidae, Sub-Order Sauria, Order Squamata, Class Reptilia of the Phylum Chordata, and the interesting family including the pipefish, sea horse and sea dragon. This family of 175 members is the Syngnathidae of the Sub-Order Catosteomi, Order Acanthopterygii, Super-order Teleostei of the Class Fish of the Phylum Chordata. The eye movements of these animals invariably include overlapping fields, resulting in the likelihood of stereoscopic vision in that portion of the total visual field on a part-time basis. At other times, no overlap of the fields of the two eyes occurs.

2.5.5 Sensitivity suppression during eye movements

There are references in the literature to the suppression of visual sensitivity during ocular motion. However, these have not been extensively quantified. The question has been raised as to whether such suppression occurs during cognitively induced saccades and/or saccades performing compensation for externally induced displacements. The latter might not involve the cortex. Richards lists references and has provided some data. His data shows desensitization is minimal for both active and passive eye movements234. For both types of movement at a test stimulus wavelength of 580 nm, the suppression was 0.26 log units (less than a factor of 2). The suppression at 460 nm was considerably less, 0.08 to 0.1 log units, and probably negligible.
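The log-unit figures above convert to linear sensitivity factors as 10 raised to the log-unit value; a minimal sketch:

```python
def log_units_to_factor(log_units):
    """A suppression of L log units is a sensitivity change of 10**L."""
    return 10.0 ** log_units

# 0.26 log units -> ~1.8x, indeed "less than a factor of 2";
# 0.08-0.10 log units -> ~1.2x, arguably negligible.
```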

2.6 The Temporal Input Environment of the eye

The temporal input to the eye can be discussed in two broad categories: naturally occurring situations and man-made situations, particularly those generated in the laboratory for use in vision experiments.

2.6.1 The natural environment

Recalling the natural visual environment is difficult for a member of an industrialized society. Some key features include:

+ Scene contrasts limited to less than 10:1 and normally less than 6:1 (unless passing from an enclosed area into sunlight).

+ Very little transverse motion in the field of view of the eye exceeding a velocity of 0.1 radian/sec.

+ Very little axial motion in the field of view of the eye. The rate of any actual axial motion is computed in the brain from the observed transverse motions related to the growth in object size within the field of view.

+ No sudden change in brightness levels except due to scintillation. The scintillation is usually related to the reflection of light from the sun caused by the motions of water, or water on a wind-blown substrate, or by the motions of an open flame.

+ Most of the motion within the field of view is due to the translational or rotational motions of the subject itself.

+ Essentially all temporal changes observed by a photoreceptor cell are at rates below 50 Hertz, except those due to tremor and atmospheric lightning.

234 Richards, W. (1968) Visual suppression during passive eye movement. J. Opt. Soc. Am. vol. 58, no. 8, pp. 1159-1160

2.6.2 The man-made visual environment

Man has introduced a wide range of artificial conditions that challenge the visual system. The following list provides a cursory overview of these challenges in order of impact.

+ Writing, where the characteristics and arrangement of the symbols used are chosen to be as compatible as possible with the data reduction capabilities of the eye.

+ Television, where the system is designed to have minimal impact on the visual system while transmitting an excellent facsimile of an image from a distance.

+ Sports, where the visual system is intentionally challenged, along with the motor system and both the analytical and coordination capabilities of the brain.

+ Participation in the operation of modern machinery, where the intent is to use the visual system efficiently to convey information to the brain.

+ Warfare, where the visual system is intentionally challenged in an aggressive manner.

+ Laboratory testing, where the intent is to determine the operating characteristics of the visual system under both nominal and stressed conditions.

Laboratory testing is frequently the most challenging of all these man-induced visual activities. This is particularly true in testing for the temporal frequency response of the visual system using either flashing sources of illumination or images moving rapidly in the transverse direction.

2.7 The unique environment of the photoreceptor cells

The operation of the photoreceptor cells of vision cannot be understood without an understanding of the complex environment in which they operate. The level of sophistication with which they are integrated into their environment is striking, if not poetic. Nearly every element in their environment and their cytology plays at least two distinctly different technical roles involving two distinctly different disciplines. As an example, the inter-photoreceptor matrix, IPM, not only transports the chemicals required to implement the chromophores of vision and the metabolic supplies required to power the cells, it also acts as an electrical return for the electrical signals generated, and its index of refraction appears to be critical to the proper operation of the optics of the eye.

2.7.1 Overview

2.7.1.1 Histology, optical signal absorption and electrical signal generation

Figure 2.7.1-1 shows the best available caricature of the critical elements of the photoreceptor cell as they are reported by a variety of investigators. The most useful sources were Papermaster et al.235, Richardson236 and the more recent Papermaster paper that provides an overall scale237. The composite is prepared after examining many images of cross-sections of the cell along various diameters. This is the only way to prepare a meaningful composite. It will be noted that no membrane is shown enclosing the Outer Segment. As discussed in Chapter 4, the presence of such a membrane is generally not consistent with the process of phagocytosis near the tip of the outer segment. Furthermore, many investigators dispute the presence of such a membrane in the region beyond the extension of the inner segment that essentially forms an extrusion die for the disks. There is no evidence of such a membrane in electron micrographs of fractured Outer Segments.

235 Papermaster, D. Schneider, B. & Besharse, J. (1985) Vesicular transport of newly synthesized opsin from the Golgi apparatus toward the rod outer segment. Invest. Ophthal. & Vis. Sci. vol. 26, Oct. pp. 1386-1404
236 Richardson, T. (1969) Cytoplasmic and ciliary connections between the inner and outer segments of mammalian visual receptors. Vision Res. vol. 9, pp. 727-731
237 Tam, B. Moritz, O. Hurd, M. & Papermaster, D. (2000) Identification of an outer segment targeting signal in the COOH terminus of rhodopsin using transgenic Xenopus laevis. J. Cell Biol. vol. 151, pp. 1369-1380

Figure 2.7.1-1 Cytology of the photoreceptor cell at the OS/IS interface. This caricature provides the best estimate available of the cytology of the cell. The disks of the OS are shown formed within the extrusion cup of the IS, which is filled with Rhodonine as a component of the Inter-photoreceptor matrix (IPM). After secretion into a pool at (D), the protein opsin is formed into a sheet (arrows) that is then looped into a continuous sheet that becomes coated on one side with Rhodonine. As the bends in the sheet are compressed prior to extrusion of the sheets, they break. The broken sheets become completely covered with Rhodonine and are extruded as finished disks. The precise location of the Outer Limiting Membrane (OLM) is unknown. The Activa forming the output amplifier of the cell is shown along the left side of the figure along with its podite connection to the IPM. The reticulum of the axon proceeds to the pedicel. The microtubules pass up through the ciliary collar and are distributed into grooves surrounding the disk stack formed by the extrusion process. The labels referring to the metabolic apparatus are from Papermaster. See text for details. Compare with figures of Papermaster (1985) and Richardson (1969).

2.7.1.1.1 Opsin generation and chromophore coating

Papermaster proposes that opsin is generated within the Inner Segment and secreted through the plasma membrane of that segment into the extrusion cup (calyx). The cup is filled with fluid from the inter-photoreceptor matrix.

There appear to be several scenarios as to how the disks are formed. In one scenario, the secreted opsin is formed into a sheet that is folded by the protrusion along the membrane wall near the ciliary collar. In this case, the chromophoric material present in the IPM coats one side of the opsin sheet. As the opsin is folded back and forth and moves toward the extruding section of the calyx, the sheet hardens. When compressed, it breaks into individual disks that become paired at the uncoated side. The chromophore then forms a continuous liquid crystalline layer on the outside of the newly formed two-layer disk as it is extruded. The extrusion process forms fissures in the disks. It is these fissures that are partly filled with the microtubules. The detailed discussion of the resulting disk geometry is in Chapter 4.

2.7.1.1.2 Chemical pathway

The chromophoric material, and the metabolic material required to support the operation of the dendrites laid into the fissures, is provided by the IPM from the RPE. The process of coating the disks appears to be aided by the flow of chromophoric material from the IPM through ports in the calyx. Once the disks are coated with chromophoric material in the calyx, the only chemistry required to support the dendrites involves the glutamate cycle materials that are also supplied by and returned to the RPE. Although the IPM may be in a viscous state, it probably is not liquid-crystalline, as this would interfere with the flow of the metabolic materials. Further discussion of the dynamics of this chemical activity is found in Chapter 7.

2.7.1.1.3 Signal pathway

The micrographs show the nine microtubules in the grooves of the disks and their passage through the ciliary collar into the body of the Inner Segment, where they appear to terminate. The microtubules pass through the ciliary collar in order to exit the plasma membrane of the photoreceptor cell and travel around the calyx before descending into the fissures of the Outer Segment. It is the portions of the microtubules that lie in the fissures that are properly described as the dendrites of the PC. These dendrites are functionally and electrically different from the remainder of the microtubules.

Below the point of termination of the microtubules, distinctly different long structures are seen proceeding toward the nucleus. It is proposed that these latter features are the reticulum of the conduit that eventually emerges as the axon. It is also proposed that the structure between the microtubules and this conduit is the output Activa of the cell. The poditic terminal of this Activa is shown in contact with the inter-photoreceptor matrix. However, this location is still subject to confirmation. The podite terminal may be in contact with the INM, which would suggest the OLM should be shown closer to the OS part of the figure. Further discussion of signaling within the PC is found in Chapter 4.

2.7.1.1.4 Optical pathway

The irradiance from the aperture of the eye approaches the OS from the bottom of this figure. As will be discussed in a following section, the optical bundle converges very rapidly on a point within the IS. The exact point is not well established; there are two principal options. First, the bundle could focus on a point just below the extrusion cup. In this case, the material in the cup presumably has a higher index of refraction than the bulk of the IS, resulting in the formation of a plano-convex lens between the flat face of the disk stack and the extrusion cup wall. In the second scenario, the mitochondria may exhibit a higher index of refraction than the bulk of the IS. They would then form a spherical lens in the optical path, normally named the ellipsoid when visible under light microscopy. Some authors have found an ellipsoid that is apparently filled with a fluid based on light microscopy. However formed, such a lens would have the same collimating power as that described above. The lens would significantly improve the absorption performance of the OS. The optical characteristics of the PC combined with the IPM will be discussed further in the next section.

2.8 The Signal Environment of the visual system

The signal environment of the visual system is complex, both as to the individual signals and the complicated signal distribution paths. Data is rapidly accumulating from many sources concerning the signal distribution network, and combining this material provides an ever better understanding.

2.8.1 The Major Signal Pathways of the Visual System

The following sections will discuss the primary signaling paths of the visual system external to the cortex. The plan view has traditionally been used to teach this subject. Unfortunately, it is now clear from the elevation view that a critically important path, the Pulvinar (a.k.a. the thalamo-parietal) pathway, has been completely omitted from these views. This alternate pathway is so important that it throws doubt on the wisdom of defining area 17 of the cortex as the primary visual cortex. The following material in this section is presented as an introduction; much greater detail will be found in a later chapter.

Figure 2.8.1-1 shows a block diagram modified from Goodale & Milner showing the major signal pathways of vision. The perigeniculate nucleus (PGN) has been separated from the adjacent and morphologically more obvious superior colliculus (SC) in this figure to stress its functional significance. The thalamo-parietal pathway is less well investigated than the other pathways because it is difficult to access in the laboratory. However, it is the key path associated with information extraction by the PGN/pulvinar couple of the brain. It is also the signaling path that supports blindsight and macular sparing when the occipital lobe is seriously damaged.

2.8.1.1 Plan view

Figure 2.8.1-2 provides a clear plan view, from below, of the visual pathways in the human external to the cortex. The cortical pathways will be discussed in detail in Chapter 15. Many authors have provided a simpler figure concentrating only on the upper part of this figure238,239.

The left portion of the figure shows a caricature of the visual fields of the eye. Note that the off-axis optical rays shown are not straight lines due to the immersion mode of operation used in human optics. The dashed axial rays are shown converging on a distant object. The common and potential stereographic field of the eyes is shown along with two extreme rays. In reality, the extreme rays are actually more than 90 degrees from the axial ray. The optical field of each eye is shown bifurcated, and this arrangement is also found in the optic nerve. Upon reaching the optic chiasm, these bifurcated bundles of nerves are rearranged into the bundles shown within the optic tracts. All of the nerves associated with the left fields of view proceed to the right half of the brain and vice versa. The secondary divisions in the optic tracts after this chiasm are very important. They separate the signals related to the fovea from the main neural path and route them to the Pretectum and the Superior Colliculus. These morphological elements are frequently shown as two pairs of elements but they are sub-elements of the tectum. The tectum is frequently shown as one morphological element of the mid-brain. In either case, they operate in close coordination as if there were only one Pretectum and Superior Colliculus from a signaling perspective.

Figure 2.8.1-1 Block diagram showing major visual signal paths. The weighting of the arrows does not represent the importance of the signal paths. See text. Modified from Goodale & Milner, 1992.
Perry & Cowey were surprised to find that 10% of the total number of nerves in the optic nerve proceeded to the pretectum of the thalamus rather than to the LGN (page 1805)240. Carpenter & Sutin went further and showed that no neurons from the macular area, described more precisely here as the foveola, went to the LGN241. They all went to what those authors describe as the superior colliculus and what is described here as the pretectum. The term pretectum is synonymous with the term perigeniculate nucleus (PGN) used in Chapter 15. They show the neurons from this region proceeding on to the calcarine sulcus of the occipital lobes.

238Daw, N. (1995) Visual Development. NY: Plenum Press pg. 8
239Dowling, J. (1992) Neurons and Networks. Cambridge, MA: Belknap Press of Harvard, pg. 350
240Perry, V. & Cowey, A. (1985) The ganglion cell and cone distributions in the monkey's retina. Vision Res. vol. 25, no. 12, pp. 1795-1810
241Carpenter, M. & Sutin, J. (1983) Human Neuroanatomy, 8th ed. Baltimore, MD: Williams & Wilkins

Figure 2.8.1-2 Plan view of the human visual system as seen from BELOW. The retina projects to the lateral geniculate body, the , and the suprachiasmatic nucleus (not shown). The figure is similar to one by Daw (1995). The optical rays are redrawn to illustrate both the stereo viewing field and the maximum viewing field. Note that the rays do not follow straight lines as they enter the eyes due to the immersion mode of the optics in terrestrial animals. The optic nerves are shown bifurcated. Note how they are rearranged at the chiasm so that all signals from the left field of vision proceed to the right side of the brain. Axons from the nasal retina cross in the chiasm, and axons from the temporal retina do not. Note also the additional bifurcation in the optic tracts with neurons proceeding to both the lateral geniculate nucleus, LGN, and the pretectum. The path to the superior colliculus is actually antidromic; neurons controlling the iris and lens return to the eye along this path. Additional neurons are shown leaving the LGN and proceeding to the striate cortex, Area 17, as the optic radiation. Note the presence of Meyer's Loop. Additional neurons also leave the pretectum and proceed (out of the plane of the paper) directly to Area 7 of the cortex via the Pulvinar Pathway.

The foveal nerves found in the optic tract proceed to the pretectum of the thalamus. The nerves in the optic tract related to the Pretectum are orthodromic. The Pretectum is part of the Auxiliary Optical System, AOS, of the brain. It plays a crucial role in the operation of the visual system. It performs an initial information extraction on the signals from the foveola. It sends part of this information to the Superior Colliculus and sends the other part directly to Area 7 of the cortex for further information extraction. These signals enter the cortex from the Pulvinar Pathway and do not pass through Area 17.

Area 17 has historically been called the primary visual cortex. However, its claim to this title is dubious, as will be discussed below. Area 7 and the associated Area 5 have not been studied as intensely as other areas of the cortex, partly because of their location. The initial areas connected to the Pretectum are located largely on the sagittal surfaces of the two hemispheres of the cortex.

The nerves of the optic tract related to the Superior Colliculus are antidromic. These are nerves carrying neuro-motor signals back to the eyes to control the iris and lens. These signals are formed in the Superior Colliculus from information derived from the signals received from the Pretectum, the inner ear, the Lateral geniculate nuclei and from the higher cognitive levels of the cortex.

The non-foveal nerves found in the optic tract proceed to the Lateral geniculate nuclei (LGN) as shown, where the corresponding semi-fields from each eye are compared for the purpose of deriving stereoptic signals to control the pointing of the eyes. These signals are passed to the Superior Colliculus as mentioned above. After this comparison and the associated merging of the fields, the remaining information is transmitted to Area 17 of the cortex for additional processing.

Although not obvious from the drawing, there are two time-dispersal filters used in the visual system to aid in signal processing and avoid the need for the brain to employ trigonometric functions in that processing. One is Meyer's loop. This arrangement is actually used to restore timing coherence to the signals going to the cortex. The timing coherence of the signals was originally distorted by a similar set of loops (designated Reyem's loop in this work). Reyem's loop is associated with the distance of the various ganglion cells of each retina from the exit point of the optic nerve. It is the lack of time coherence introduced by this mechanism that is used to calculate the stereopsis signals in the LGN.

After the signals from the LGN enter the cortex at Area 17, the striate cortex, they proceed forward toward Area 7 via Area 18 and Area 19. At each step, the signals become less and less correlated with object space. By the time the information reaches Area 7 by this route, it is highly vectorized. In this form, it presents part of what will be called a saliency vector.
This vector can be combined with the similar visual data extracted by the feature extraction engines of Area 7 to create a complete visual saliency map of the exterior environment. The vectors of this map can be combined with similar vectors from the other senses to provide complete vectors that when combined provide a complete saliency map describing the exterior environment.
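The timing-coherence argument involving Reyem's and Meyer's loops, described above, can be sketched numerically. The conduction velocity and path lengths below are hypothetical illustrative values, not measurements from the text; the sketch only shows how a complementary delay line restores coherence while leaving the raw retinal delays available to the LGN.

```python
# Sketch of the two time-dispersal filters described above. Reyem's loop
# (at the retina) gives each ganglion-cell signal a delay proportional to
# its path length to the optic-nerve exit; Meyer's loop adds a
# complementary delay before Area 17. All numbers are hypothetical.

CONDUCTION_VELOCITY_MM_PER_MS = 10.0  # assumed value, for illustration

def delay_ms(path_length_mm: float) -> float:
    """Propagation delay over a neural path at the assumed velocity."""
    return path_length_mm / CONDUCTION_VELOCITY_MM_PER_MS

retinal_paths_mm = [2.0, 6.0, 10.0]   # Reyem's loop: unequal path lengths
meyer_paths_mm = [10.0, 6.0, 2.0]     # Meyer's loop: complementary lengths

# The LGN taps the signals before Meyer's loop, so it sees the unequal
# delays and can use the differences for stereopsis computations.
reyem = [delay_ms(p) for p in retinal_paths_mm]

# The cortex receives the signals after both loops: delays are equalized.
total = [delay_ms(a + b) for a, b in zip(retinal_paths_mm, meyer_paths_mm)]

print("delays at LGN (ms):   ", reyem)  # unequal
print("delays at cortex (ms):", total)  # equal: coherence restored
```

With complementary path lengths, every channel accumulates the same total delay; this is the sense in which Meyer's loop undoes the dispersion introduced by Reyem's loop while the LGN still exploits the raw differences.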

In the process of computing these vectors and maps, the various feature extraction engines of the cortex rely upon the signals received via Area 17 to provide coarse surveillance information about the entire exterior visual scene. They rely upon the Pretectum and Area 7 for precise information about the exterior scene. Since the signals generated by the Pretectum originate only in the foveola of each eye, part of the process of completing these maps involves scanning the field of view with the eyes to allow the foveola to sequentially interrogate every area of specific threat or interest. This is performed by generating commands in the cortex instructing the Superior Colliculus to scan the field of view using motions known as the large saccades. It is the combination of the foveola, the Pretectum (in combination with the AOS) and Area 7 that performs the detailed analysis of visual information that man has come to depend on for recognition of fine detail (such as in the recognition of faces) as well as for reading. This analysis is performed using both the small saccades and tremor.

Thus, Area 7 can be described as the cognitive visual cortex of the brain and Area 17 can be described as the surveillance visual cortex. The question of which is primary will be left to the reader. Further details of the pathways, the coordinate systems used for the brain, and gross brain functions by location are available in Noback242.

2.8.1.1.1 The visual field of view

Perimetry is the study of the visual field of view. Harrington has provided the major reference on this subject243. More recently, the clinical technology associated with these studies has become highly automated244. This level of automation is a problem for the researcher since the protocols tend to obscure significant details. Anderson provides valuable information for understanding the operation of the Goldmann perimeter.

2.8.1.1 Signal path definition through sectioning

The general nature of the bifurcated and structured commissures known as the optic nerves and optic tracts has been known for a long time. Figure 2.8.1-3 shows the individual paths followed by various groups of nerves of the visual system.

242Noback, C. (1967) The human nervous system. NY: McGraw-Hill
243Harrington, D. (1981) The Visual Fields: A Textbook and Atlas of Clinical Perimetry. St. Louis, MO: C. V. Mosby
244Anderson, D. (1987) Perimetry with and without automation. St Louis, MO: C. V. Mosby

Figure 2.8.1-3 Plan view of the human visual signal pathways from ABOVE. Lesions at the points marked by the letters cause the visual field defects shown in the diagrams on the right. A modification and extension of Homan (1945). Similar to the previous figure but showing the effect of lesions interrupting the various nerve bundles. All signal paths shown transmit action potentials. The central path from each eye is from the fovea. The antidromic signal paths from the superior colliculi to the retinas are not shown. The paths from the superior colliculi to the oculomotor muscles are shown. The signals from the Pretectums go out of the plane of the paper to reach Area 7 of the cortex without passing through Area 17 (the striate cortex). Lesions of the occipital lobe of the cortex and of the optic radiation (as in D) do not affect foveal vision. Field of vision maps A through F on the right are as found in Homan. Field maps at G are predicted from this work.

The signal paths related to the LGN and Area 17 were described in detail by Homan in 1945245. His work has been reproduced in various modifications ever since. Some of the conclusions were determined from actual surgery but much of the information came to light through accidents. He did not address the AOS and its elements. Thus, his caricature of what is seen following specific lesions is reproduced as frames A through F on the right of the figure. These are keyed by letter to the lesion locations in the left frame. Note that Homan recognized the cutout in the field, represented in frames and lesions D, E, F & G. This cutout has usually been associated with the macula of the retina. The term macula is used primarily in introductory anatomy and is usually associated with an external coating of the neural layers of the retina giving it a distinctive color. As shown in Chapter 3, this explanation is deceiving.
The macula is an artifact of the varying density and content of the neural layers of the retina. It is not associated with a single coating or layer. In more advanced material, the macula is not normally associated with a functional zone of the photoreceptors; the equivalent functional zone of the histologist would include the foveola and fovea. This cutout is most often associated with the condition known as macular sparing in the clinical literature. In the psychophysical literature, it is also not well characterized because relatively large test objects are used to define the size of this feature. This is illustrated in figure 15-23 of Carpenter & Sutin, where the foveola is basically ignored246. In this work, the cutout will be equated to foveal sparing or foveola sparing because of the separate signal paths used by the photoreceptors of the fovea and the foveola. Foveal and foveola sparing are separate phenomena. Foveola sparing involves a visual diameter of only 1.18 degrees in object space while macular (full fovea) sparing would normally involve a five degree diameter, both centered on the line of fixation. Frame G, based on a lesion at location G, proposes that the cutout is due to a failure in the AOS or in the path leading from the foveola to Area 7 of the cortex via the Pretectum. See the glossary for a specific definition of the pretectum.
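The two sparing diameters quoted above (1.18 degrees for the foveola, five degrees for the full fovea) imply a large difference in the affected area of the visual field. A minimal sketch, using only the diameters from the text and a flat circular-patch approximation (an assumption made here for illustration):

```python
# The two sparing zones described above, both centered on the line of
# fixation: foveola sparing, ~1.18 deg diameter; macular (full fovea)
# sparing, ~5 deg diameter. Diameters are taken from the text; the
# small-angle circular-patch approximation is an assumption.
import math

def field_area_deg2(diameter_deg: float) -> float:
    """Area of a circular patch of visual field, in square degrees."""
    return math.pi * (diameter_deg / 2.0) ** 2

foveola = field_area_deg2(1.18)
macular = field_area_deg2(5.0)
print(f"foveola sparing zone: {foveola:.2f} deg^2")
print(f"macular sparing zone: {macular:.2f} deg^2")
print(f"area ratio: {macular / foveola:.1f}x")  # roughly 18x
```

The roughly eighteen-fold difference in area underlines why foveola sparing and macular sparing must be treated as separate phenomena.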

The loss of vision indicated by G may be only partial. It is more completely described as a loss in high acuity vision (associated with the PGN) with some retained low acuity vision (associated with the visual cortex), sometimes described as the “grandmother syndrome.” In the grandmother syndrome, the subject can still see faces or other objects imaged on his foveola but cannot identify the face, even if it is his grandmother. His only source of information extraction is via the low acuity LGN/occipital lobe path.

Page 225 of Harrington shows the minute field of view resulting from “complete” anopia with foveola sparing, caused by chloroquine poisoning247. It can be compared with page 224 showing anopia with macular sparing. Section 18.1.4 will discuss major losses in the visual field of view in detail.

The cutout has more recently been associated with the bifurcation between signals to the Pretectum (or perigeniculate nucleus, PGN) and signals to the LGN. Humphrey illustrated this alternate path in his discussion of blindsight248. Note that the recent text and atlas of neuroanatomy by Afifi & Bergman does not show this level of detail or address the role of the Pretectum in vision249. Those authors terminate their discussion with a brief reference to the Pulvinar pathway. However, their text does not recognize tremor, or the role of the Pretectum in tremor, as a normal and required motion of the eyes.

Miller & Newman have recently highlighted an additional order of detail in the discussion of sparing and related system failures250. The vertical dislocations in their images show that there are multiple failure modes that can impact the lateral and temporal areas of the fovea individually. This can occur in one or both eyes of an individual. The author has recently consulted on a case involving these failure modes. These failures lead to an extension of the above figure.

A failure in the Pulvinar pathway leads to serious pathological conditions. These can be divided into two distinct classes: those involving the Precision Optical System (those circuits related to the foveola distal to, and including, the pretectum), and those involving the output of the pretectum and the remainder of the Pulvinar pathway. Failures ahead of the Pretectum affect the Precision Optical Servomechanism and can lead to gross abnormalities in the

245Homan, J. (1945) A textbook of surgery, 6th ed. Springfield, IL: Charles C Thomas. Also in Eyzaguirre, C. & Fidone, S. (1975) Physiology of the nervous system, 2nd ed. Chicago, IL: Year Book Medical Publishers. pg. 139
246Carpenter, M. & Sutin, J. (1983) Op. Cit. pp. 530-533
247Harrington, D. (1981) The Visual Fields: A Textbook and Atlas of Clinical Perimetry. St. Louis, MO: C. V. Mosby
248Humphrey, xxx (1972) xxx New Scient. Vol 53, xxx
249Afifi, A. & Bergman, R. (1998) Op. Cit. pg. 476
250Miller, N. & Newman, N. ed. (1998) Walsh and Hoyt's Clinical Neuro-Ophthalmology. 5th ed. vol. 1, Baltimore, MD: Williams & Wilkins pg. 360

pointing and stabilization of the eyes. Failure anywhere along this path can lead to an inability to read and identify objects. Failures beyond location C frequently result in a multitude of unusual clinical conditions, including the category of blindsight in the vernacular of medicine. They will be discussed in greater detail in Chapter 18.

It should be noted that the division between the temporal and nasal areas of the perceived fields does not pass through the center of the foveola. The precise location of this demarcation and the sharpness of the dividing line at the histological level is of considerable interest in medicine. Patients report a variety of failures to merge the information from the different lines of demarcation illustrated above. Many of these failures appear to result from highly localized small strokes. Patients report fuzzy, sometimes dynamic, images along these various demarcation lines.

Stone et al. attempted to quantify the range of overlap along the vertical meridian251. They obtained excellent micrographs of the retina following sectioning of one optic tract. These clearly show a demarcation along the vertical azimuth passing through the foveola. They also provide an estimate of the gradient associated with this demarcation. Since they were unable to repair the damage and then section the other tract, they made the assumption that while the demarcation line was asymmetrical in the retina with respect to the center of the foveola, the merging process in the cortex was symmetrical about the foveola. They presented a total of four estimates of the overlap zone based on this technique. The zones were not of constant width. Their technique defined the width of the merge area as approximately twice the magnitude of the difference between the location of the center of the foveola and the location of the vertical demarcation line at the retina.
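The estimation rule attributed to Stone et al. above reduces to simple arithmetic, which can be made explicit. A minimal sketch, with a hypothetical offset value (their four reported estimates are not reproduced here):

```python
# Stone et al.'s rule, as described above: the width of the naso-temporal
# merge zone is roughly twice the offset between the center of the foveola
# and the vertical demarcation line at the retina. The sample offset is
# hypothetical, for illustration only.

def merge_zone_width_deg(foveola_center_deg: float,
                         demarcation_deg: float) -> float:
    """Approximate overlap-zone width from the asymmetry of the
    demarcation line about the foveola center."""
    return 2.0 * abs(demarcation_deg - foveola_center_deg)

# Hypothetical example: demarcation line displaced 0.25 deg from center
print(merge_zone_width_deg(0.0, 0.25))  # -> 0.5
```

The doubling reflects their symmetry assumption: the cortical merge is taken to be symmetrical about the foveola even though the retinal demarcation is not.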

The above graphical data derived from Homan and the data of Stone et al. are the result of physical interruption of the visual pathways. Whereas Stone et al. concentrate on the vertical demarcation, because it is easy to section the optic tract, there is an equally important horizontal demarcation line that can be highlighted by sectioning the optic radiation. As shown in E, these lines also pass through the area of the foveola. This has been confirmed through recent clinical experience (particularly from a subject named Earl). Based on another clinical case, there is also a perceived demarcation line between either the foveola or fovea and the surrounding quadrants of the visual field. In the case of Eugene, the diameter of the demarcation circle is about five degrees. That dimension would suggest that it relates to the demarcation between the full fovea and the surround. These demarcation lines will be discussed in Section 15.2.4 as they relate to the divisions between the analytical and awareness channels of vision. In a normally operating visual system, the signal processing is able to suppress the perception of these pathological features.

Pettigrew has also addressed the demarcation line between the nasal and temporal fields252. However, his caricatures are not drawn to scale. He shows an overlap in the hemifields of V1 of the occipital lobe roughly the size of the foveola (he quotes one degree based on Stone). However, he does not recognize that this overlap is at the low spatial resolution associated with the peripheral retina. The high resolution capability associated with the analytical channel and the foveola is not present in the projections in the occipital lobe. The presence of these two different projections of the foveola is portrayed in Figure 2.8.1-4 and several other figures of this work.

It is important to note that Pettigrew’s projections are in object space and do not represent the geometry of V1, V4 or the inferotemporal cortex. These geometries are quite different and show a considerable amount of computational anatomy (signal processing accomplished through rearrangement of the signal paths between major engines). The relevant geometries are presented in Chapter 15. It is also important to understand the criteria used in his figure. The maps shown for V1, V4 and the inferotemporal cortex represent the receptive fields of given neurons in those elements, not the projections of the capture area of individual photoreceptors onto the surface of those elements (Section 15.1.2).

House et al. have provided a useful graph of the visual pathways253. However, it is important to note that they do not show the central region of the foveola. Their inner region is that of the larger, and less critical, fovea. While they present more detail relative to the organization of the peripheral retinal paths, they do not show the paths related to the foveola at all.

251Stone, J. Leicester, J. & Sherman, S. (1973) The nasotemporal division of the monkey's retina. J. Comp. Neurol. vol. 150, pp. 333-348
252Pettigrew, J. (2001) Searching for the Switch: neural bases for perceptual rivalry alternations. Brain Mind vol. 2, pp. 85-118
253House, E. Pansky, B. & Siegel, A. (1979) A Systematic Approach to Neuroscience, 3rd ed. NY: McGraw-Hill pp. 174-175

2.8.1.1.1 Physical impact of gross sectioning of the optic nerve and optic tract

The literature includes many reports of complete sectioning of the optic nerve or optic tract and the subsequent loss of the ganglion cells of the retina254. The relationship between these two events is not always appreciated. As will be fully documented in Chapter 10, the axons of the retinal ganglion cells extend all of the way to their synapses with neurons within the mid-brain. While these neurons may be interrupted by Nodes of Ranvier for purposes of signal regeneration, the plasma membrane associated with these cells is continuous over these distances. If these membranes are ruptured, the entire cell can be expected to die. Lam & Bray confirm that this is the case.

2.8.1.1.2 Functional impact of differential sectioning of the optic nerve and optic tract

Data is very limited in this area, due partly to the difficulty of evaluating animal experiments. The representation in figure 15-24 of Carpenter & Sutin, referencing Noback and Laemle, is largely conceptual and does not recognize the signal processing that occurs in the retina prior to transferring the information over the optic nerve255. The central area in the figure has an arbitrary diameter of five degrees and does not correspond to the foveola.

Figure 2.8.1-4 provides an alternate representation to theirs. The inner two layers of the lateral geniculate nuclei are the magnocellular layers that process luminance information. The inner target circle has a radius of five degrees. Note the resulting very small area of the lateral geniculate nuclei devoted to the area equivalent to the foveola (1.18 degree radius). To achieve the performance required, the neurons of the foveola are routed to the separate perigeniculate nuclei. Thus there is a second bifurcation associated with each of the optic tracts. No attempt has been made to bifurcate each of the optic tracts into individual tracts serving the perigeniculate nuclei and the lateral geniculate nuclei. Only a few percent of the neurons in either tract are diverted to the perigeniculate nuclei. The neurons proceeding to the perigeniculate nuclei undergo less rearrangement, in the interest of computational anatomy, than do the neurons proceeding to the lateral geniculate nuclei.

The notation used in the figure is complex: SNQ, superior nasal quadrant; STQ, superior temporal quadrant; INQ, inferior nasal quadrant; ITQ, inferior temporal quadrant; SC, superior central region; IC, inferior central region. In addition to the above descriptors, the cross sections shown on the right for the right optic nerve and right optic tract employ a prefix to designate right or left and a suffix describing the stage 2 signal processing channel associated with the bundles of neurons within the optic nerves and optic tracts: R for luminance; O, P & Q for the chrominance channels; and Y for the channel associated with the foveola. The second character in each label is either N (nasal) or T (temporal). Note the difference in labels related to the dashed circle on the left of each cross section. All of the information carried by the right optic tract applies to the left portion of the visual field in object space.
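The labeling convention just described can be captured in a short decoding routine. This is a minimal sketch and only an illustrative aid, not part of the source material; the three-character ordering (side prefix, N/T second character, channel suffix) is an assumption based on the description above.

```python
# Decoder for the cross-section labels described above: a side prefix
# (R/L for right/left), a second character N (nasal) or T (temporal),
# and a channel suffix: R luminance, O/P/Q chrominance, Y foveola.
# Illustrative aid only; the ordering of the characters is assumed.

SIDES = {"R": "right", "L": "left"}
FIELDS = {"N": "nasal", "T": "temporal"}
CHANNELS = {"R": "luminance", "O": "chrominance", "P": "chrominance",
            "Q": "chrominance", "Y": "foveola"}

def decode_label(label: str) -> str:
    """Expand a three-character label such as 'RNY' into words."""
    side, field, channel = label[0], label[1], label[2]
    return f"{SIDES[side]}, {FIELDS[field]} field, {CHANNELS[channel]} channel"

print(decode_label("RNY"))  # -> right, nasal field, foveola channel
print(decode_label("LTR"))  # -> left, temporal field, luminance channel
```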

The orientation of the neuron bundles within the optic nerves and optic tracts is currently unknown. The orientation and arrangement of these bundles may vary along the length of these nerves in support of the computational anatomy mechanism. This mechanism plays an important role in the overall computational plan used to extract scene information within the engines of the cortex. A perspective on the mechanism as used to map the right visual field, not the retinal map, onto the left lateral geniculate body in the monkey is provided by Carpenter & Sutin (figure 15-25). This mapping is distinctly different from the mapping to the perigeniculate nucleus. The mapping to the perigeniculate nuclei is essentially rectilinear with respect to both the retina and object space. It is designed to ease the analytical feature extraction task of the perigeniculate nuclei. The mapping to the LGN is also distinctly different from the mapping to area 17 of the cerebral hemispheres (Section 15.2.3). The mapping to the LGN is designed to optimally extract the version and vergence signals needed to drive and control the oculomotor servomechanism. The mapping projected onto area 17 serves other purposes.

254. Lam, D. & Bray, G. (1992) Regeneration and Plasticity in the Mammalian Visual System. Cambridge, MA: The MIT Press, pg 34
255. Carpenter, M. & Sutin, J. (1983) Human Neuroanatomy, 8th Ed. Baltimore, MD: Williams & Wilkins, fig 15-24 & pp 529-532

Figure 2.8.1-4 Schematic of projections from the retina to the thalamus in vision. This representation begins at the retinas and does not include the physiological optics. As a result, the temporal quadrants of the retina project to the lateral geniculate and perigeniculate nuclei on the same side. The perigeniculate and lateral geniculate nuclei are shown separately. The two cross sections for the optic nerve and the optic tract are fundamentally different. The optic nerve contains both nasal and temporal information for one eye. The optic tract contains nasal content from one eye and temporal content from the other. Because of the coding of the spectral selective information from the peripheral photoreceptors into luminance and chrominance channels within the retina, the arrangement of the neurons within these nerves is not retinotopic. Although not shown, the neurons within these nerves are rearranged as part of the computational anatomy mechanism. Modified from Carpenter & Sutin, 1983.

A recent clinical condition in a human has confirmed that the neurons within the optic nerve are grouped according to their function, much like the neurons of the spinal cord. This grouping supports the easy rearrangement of the neurons at the optic chiasm and the secondary bifurcations. The chrominance channel neurons are grouped to simplify their routing to the parvocellular portion of the lateral geniculate nucleus. The luminance channel neurons are grouped to simplify their routing to the magnocellular portion of the LGN. The neurons related to the foveola are grouped in a small bundle to simplify their routing to the perigeniculate nucleus, PGN. The relative size of the bundles within the optic nerve is unknown. However, based on the following figure, the bundle originating at the foveola (and serving the Y channel) is probably two percent of the total. The percentage of the neurons supporting any image correlation function in the retina of humans is negligibly small. The ratio of chrominance channel (O, P & Q channels) to luminance channel (R) neurons is difficult to estimate because of the many factors involved. Having said this, Figure 2.8.1-5 caricatures the situation. Based on the policy of maximum protection for the most important neural paths, the Y channel neural bundle is likely to be located centrally. The physical orientation of the neurons is shown with respect to the field of view. Its orientation with respect to the cranium is not shown. This orientation may vary along the length of the optic nerve. Based on the earlier figure, the neuron bundles associated with the temporal field of view of the left eye (O, P & Q and also R) proceed to the left thalamus. The neuron bundles associated with the nasal field of view of the left eye proceed to the right thalamus (these are ipsilateral with respect to the field of view but not with respect to the retina).
It is less certain, but it appears the Y channel neurons also divide relative to the vertical centerline and follow paths similar to those of the other channels, but separate at the two secondary chiasms and proceed to the perigeniculate nucleus of the thalamus.

In the reported case, a glioma formed on the optic nerve. In the initial reports (from a child), the glioma caused the loss of all chromatic vision in the left eye. It appears the glioma was in the area where it could interfere with the O, P & Q chrominance channels. Additional information on this subject may be found in Section 18.5.3.

Figure 2.8.1-5 Cross section of optic nerve in caricature. The area of the channel Y bundle is estimated at two percent of the cross section devoted to neurons. The relative size of the efferent neurons is trivial. The role of the O channel is also trivial in humans. The relative size of the other bundles is unknown. The center of the nerve is shown devoted to the vascular system.

2.8.1.1.3 Failures in the visual field not due to physical failure of the signaling paths

Besides the visual field failures related to physical damage to the signaling pathways, there are a large number of visual field failures of neurological and hydraulic origin. Many of these are best described using maps of the visual field. Burde et al. have provided a number of these presentations with believed causes256. Figure 2.8.1-6 shows the result of damage to the occipital lobe of the cerebral cortex. The area of the foveola and fovea remains functional even though the bulk of the right homonymous field has been lost.

Figure 2.8.1-6 Macular sparing in the right homonymous field in spite of right homonymous hemianopia. The diameter of the test probe used to obtain this data was not specified. It was apparently large. From Burde et al., 1985.

The comments developed in Section 2.2.1.1 should be reviewed before using, or interpreting, this figure. The test probe used to create the above representation was apparently larger than the foveola as defined in this work and probably of similar size to, or larger than, the fovea. A more complete discussion of these types of displays is presented in Section 18.8.9 when discussing diseases of the eye.

Figure 2.8.1-7 shows the result of a right temporal lobectomy on the visual field.

256. Burde, R., Savino, P. & Trobe, J. (1992) Clinical Decisions in Neuro-Ophthalmology, 2nd Ed. St. Louis, MO: Mosby Year Book, pg 1

Figure 2.8.1-7 Effect of a right temporal lobectomy on the visual field [xxx explain notation]. From Burde et al., 1985.

2.8.1.2 Profile view of the visual system

Figure 2.8.1-8 shows the visual pathways of the human visual system in profile. This figure is similar to earlier versions found in the literature257,258 except for its explicit detailing of the secondary radiation along the Pulvinar Pathway to Area 7 of the cortex and the AOS (see inset). It stresses the role of the old brain as a communications hub, with various areas of the thalamus and the mid-brain playing important roles in both vision and the response to visual stimulation. Although not shown explicitly, signals from area 17 move forward through areas 18, 19, etc. until they reach the higher cognitive centers of the temporal and parietal lobes, including areas 7 and 7a.

The inset shows the LGN as part of the thalamus and the Pretectum as part of the mid-brain. These are relatively arbitrary morphological designations. The structure of the LGN is shown in caricature. Its detailed structure will be discussed below. Within the mid-brain, the AOS is labeled the Precision Optical System, POS, in the inset to this figure to more accurately portray its critical role as the primary servomechanism associated with the acquisition of all precision signals related to vision. In this figure, the Superior Colliculus is subdivided into the three individual control groups associated with the three motor muscle groups of the eyes. Also shown is the interconnection of the vestibular system and the Pretectum. This connection provides orientation of the information sent to Area 7 and also supports the calculation of the appropriate oculomotor commands in the presence of other motions of the skeletal system. Although data concerning Area 7 are still sparse, the inset shows command signals returning from layer 5 of Area 7a to the Pretectum/Superior Colliculus complex. At this time, different authors are correlating the cognitive visual engines of the brain with areas 5 and 7 and a poorly defined area 7a. This inconsistency may be partly due to the lack of repeatability in the folds of the cortex of different individuals. The inset also shows the separation in the signal paths between the foveal and non-foveal portions of the retina. As

257. Dowling, J. (1992) Op. Cit. pg. 350
258. Kandel, E., Schwartz, J. & Jessell, T. (2000) Principles of Neural Science. NY: McGraw-Hill, pg. 527

discussed in Chapter 15, recent experiments have demonstrated that the visual signals from the foveola appear in area 7 before similar data appear in area 17. This demonstrates that important parts of the visual signals do not pass through areas 17, 18 & 19.

2.8.1.3 Fundamental signaling architecture of the visual system

It has been known for a very long time that there are only a relatively small number of nerve fibers emanating from the eye relative to the number of photoreceptors. The question of how the information is encoded on these nerve fibers has generated much experimentation and discussion. Only recently have investigators begun to recognize the eye as an integral part of the brain itself. The retinas can be looked at as essentially information encoding engines, in contrast to the information extraction engines of Areas 7, 17, 18, 19, etc. The intermediate locations can be thought of as primarily switching hubs with some associated computational functions, or as intermediate computational hubs with some routing capability. It appears that only the cortex contains longer term memory. Based on these assumptions, it becomes much easier to understand the signal processing required within the eye, the signal projection between the eyes and the cortex, and the signal processing algorithms employed to communicate within the visual system.

Figure 2.8.1-8 Profile view of the human visual system. Signals from the optic tract separate at the secondary optic chiasm and proceed to the LGN and the Pretectum. These signals are arranged according to their source. Upon reaching the LGN, the luminance signals enter the Magnocellular region. The chrominance signals enter the Parvocellular region. After processing, these signals proceed to Area 17 of the cortex along the calcarine fissure via the Parvocellular and Magnocellular pathways. The signals from the foveola proceed to Area 7 of the cortex after processing in the Pretectum via the Pulvinar pathway. Efferent signals from Area 7a return to the Superior Colliculus via this same pathway.

Figure 2.8.1-9 shows the trunk diagram for the sensory portion of the visual system as it might be described by a telephone system engineer. The main constraint on the overall system is the requirement introduced into Chordata to provide a high degree of angular freedom between the eyes and the head. This requirement calls for a minimum size trunk between the eyes and the head. The entire signaling architecture of the chordate visual system is determined by this requirement. Where there are as many as 10⁸ photoreceptors in each retina, probably 10⁹ signal processing neurons in the remainder of the retina, and probably more than 10¹⁰ neurons in the cortex, there are only about one million nerves in each optic nerve. Each of these numbers is a ball-park figure taken from the literature. The cortex value is probably low by one or two orders of magnitude since it is based on light microscopy and does not consider pairs of axon segments and Nodes of Ranvier as separate neural circuits.
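The count estimates in the preceding paragraph can be restated as a short arithmetic check. All figures below are the text's ball-park values, not measured quantities; the convergence ratio is simply what those values imply.

```python
# Ball-park figures quoted in the text (orders of magnitude only).
photoreceptors_per_retina = 10**8
processing_neurons_per_retina = 10**9   # remainder of the retina
cortical_neurons = 10**10               # probably low by 1-2 orders of magnitude
optic_nerve_fibers = 10**6              # about one million per optic nerve

# Apparent convergence of photoreceptors onto optic nerve fibers.
convergence_ratio = photoreceptors_per_retina // optic_nerve_fibers

assert convergence_ratio == 100  # 100:1 on these ball-park numbers
```

The point made in this section is that this ratio describes fiber counts only; as argued below, it says nothing by itself about information loss.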

Figure 2.8.1-9 Fundamental signaling architecture of the human visual system. Sophisticated encoding algorithms are employed to allow the optic nerve to be minimized in diameter and mechanical stiffness while allowing the recovery of all of the required information from the photoreceptors in the cortex. Numbers indicate the ballpark estimate of the number of nerves passing through each trunk.

To retain a degree of generality at this point, an ultraviolet spectral capability is shown dashed in the figure. Most aquatic and small chordates can sense the ultraviolet visual spectrum. So, it appears, can aphakic humans. The retina is divided into foveola and non-foveola portions here. No data is available to establish conclusively whether the signals from the foveola are encoded with the other, non-foveola signals. It is clear that the signals from the photoreceptors of the foveola enjoy unique signaling channels to the cortex. In the figure, the signals from the approximately 20,000 photoreceptors in the foveola are transmitted to the Pretectum with a minimal amount of sophisticated encoding. This number of neural paths can be compared with the estimate of 10⁴ based on morphology in monkey by Perry & Cowey. The non-foveola signals are processed in a much more sophisticated manner that allows encoding all of the information in a form that can be totally recovered in the brain while temporarily reducing the number of discrete channels by a factor of at least 1000:1. This process appears to include a technique known as diversity encoding (see Chapter 14). The information is then decoded in stages as required to meet the needs of the LGN and the Pretectum before being relayed to the cortex. Several researchers have spoken of the convergence between the signal paths in the retina and those in the optic nerve as if this ratio applied to the information being carried by these paths. This assumption relies upon the analogy of these paths to a hard-wired telephone network rather than the ability of many networks to share trunks without any loss of information. A majority of the retina is used for surveillance purposes and need only send an alarm signal to the brain to reorient the line of fixation.
By employing both time-dispersal and spatial diversity encoding, the physiology of the eye is able to transmit all of the relevant information without loss, despite the morphologically based concept of convergence.
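The text defers the actual diversity-encoding algorithm to Chapter 14. The toy sketch below (hypothetical channel counts, simple round-robin time-division) illustrates only the general claim above: many source channels can share one trunk and still be recovered without loss, provided the decoder knows the schedule.

```python
def tdm_multiplex(channels):
    """Interleave samples from several source channels onto one trunk,
    one sample per channel per time slot (round-robin schedule)."""
    n = len(channels[0])
    assert all(len(c) == n for c in channels)
    return [c[t] for t in range(n) for c in channels]

def tdm_demultiplex(trunk, n_channels):
    """Recover every source channel from the shared trunk; lossless
    because the decoder knows the interleaving schedule."""
    return [trunk[i::n_channels] for i in range(n_channels)]

# Hypothetical example: 8 source channels, 5 samples each, one trunk line.
sources = [[ch * 10 + t for t in range(5)] for ch in range(8)]
trunk = tdm_multiplex(sources)
assert tdm_demultiplex(trunk, len(sources)) == sources  # nothing lost
```

The biological scheme is surely more elaborate, but the sketch makes the key distinction: the trunk count shrinks while the information content does not.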

2.8.1.4 Organization of switching and computation centers

The more recent literature describes the pretectum and the superior colliculus differently than the classic literature. The pretectum is typically divided functionally into the perigeniculate nucleus, PGN, and the pulvinar. The PGN is differentiated morphologically from the superior colliculus by describing it as an element on the brachium of the superior colliculus (the neural path leading to the superior colliculus from the LGN). The term pretectum is no longer used in literature published subsequent to 1990.

There has been growing recognition in recent years that the retina has the same structural architecture as other brain tissue. In general, this tissue consists of a thin multilayer laminate. Typically, its thickness is similar to that of a piece of paper. The morphological boundaries between the layers are not structural. They are usually defined by the morphological form of the neurons found in zones within the laminate. Most early authors defined either five or six laminates based on these criteria. More recently, investigators have defined sublaminates associated with several of the above-defined laminates. The general conclusion can be drawn that the brain, including the retinas, is made up of a single highly folded sheet of neural laminate of nominally six principal layers (based primarily on the perceived morphological differences between the neurons of each layer). Afifi & Bergman provide a table showing how the content of these layers is believed to differ259. The descriptors are morphological adjectives.

The overall neural sheet is subdivided into a very large number of discrete signal processing activities that will be defined as engines. The outer layer of the sheet, facing the skull, is passive relative to the signaling function. In general, the laminates closest to this outer layer are associated with the most sophisticated logic operations of the brain, and the more remote layers appear to be associated with inter-engine signaling.
This signaling takes on two forms. One involves long distance signaling, more than a millimeter or two, that employs encoding and the transmission of action potentials. These signals pass over the “long association fibers” of the brain. These pathways constitute individual groups of neurons and are frequently labeled fasciculi. The second relies upon analog signals, which require no encoding and decoding, for communications over lesser distances. Much of the bulk of the brain, the white matter, consists of the great number of neural pathways, grouped into long association fibers, that traverse between various engines of the brain and fill much of the space within the folded neural sheet.

Based on these guidelines, Figure 2.8.1-10 shows additional details of the visual switching that occurs along the major pathways between the eyes and the cortex. Note that all of the signal paths shown carry analog signal information. Whenever traveling more than a millimeter, the information is pulse-encoded and projected as action potentials to reduce delay and reduce energy requirements. The information content remains analog. All of the signal paths shown constitute groups of neurons and are orthodromic except for the signal path from Area 7A of the cortex to the Superior Colliculus. These groups are labeled fasciculi by the morphologist and pathways by the electrophysiologist. The un-processed spectral signals from the foveolas are shown leaving the retinas and traveling directly to the Pretectum (now the combined PGN and pulvinar). The initially processed luminance, R-channel, and chrominance, P- & Q-channel, signals from the non-foveola areas

259. Afifi, A. & Bergman, R. (1998) Functional Neuroanatomy. NY: McGraw-Hill, Health Professions Div., pg. 341

Figure 2.8.1-10 Data pathways, junction points and major feature extraction engines used to process visual signals, viewed from below. All signals are transmitted and processed as analog signals. For transmission between major junctions in the eyes, mid-brain and cortex, the signals are pulse encoded and projected as action potentials. The precision information from the foveola used for recognition is transmitted via the Pretectum (combined PGN and pulvinar) to Area 7 of the cortex via the Pulvinar pathway. The wide area scene information from the non-foveola retinas and used for surveillance is transmitted via the Parvo- and Magno-cellular pathways following the calcarine fissure. The brain, including the retinas, consists of one highly folded continuous sheet of nominally laminated neural material. The sheet consists of a large number of small areas defined as signal processing engines. All signals enter and leave the engines through the same surface of the laminate. Each Lateral Geniculate Nucleus, LGN, consists of two distinct sections. Each section appears to be formed of two interdigitated fan-folded layers of neural material. The curved lines in the figure represent a sandwich of these two layers (Note the continuity of each line as it folds). The inset shows one possible arrangement of the two laminated layers. See text for details.

of the left portion of the two retinas are shown traveling to the left Lateral Geniculate Nucleus. Because of the different distances the signals travel from different points in the retina and the relatively low transport velocity of these individual signals, they arrive at the LGN time-dispersed in accordance with their point of origin. The neurons within the neural sheet are known to be organized functionally in columns as well as in layers. This is true at least for the retina, the LGN and the cortex.

The low precision surveillance information from the LGN proceeds to Area 17 of the cortex along the calcarine fissure via what electrophysiologists call the Parvocellular and Magnocellular pathways. The time-dispersed character of the information, as found at the LGN, is removed by the variable path length associated with Meyer's loops. As received by the cortex, this low precision information is time correlated.

The higher precision, recognition oriented information is transmitted to Area 7 of the cortex via the Pulvinar pathway after processing in the Pretectum (PGN and pulvinar). The relatively raw spectrally related data received by the Pretectum directly from individual foveola photoreceptors is processed in two ways. One output derives signals for use in the precision servomechanism formed by a group of mid-brain nuclei known as the auxiliary optical system. This name was assigned based on the realization that these nuclei had some unknown function related to vision. A more appropriate name would be the Precision Optical System, POS. These signals are used to control the tremor and small saccades associated with the critical recognition of fine detail in the scene. The Superior Colliculus is the structure of the POS responsible for generating the oculomotor signals to create the tremor and small saccades. It also accepts command signals from Area 7A of the cortex and generates large saccades for purposes of changing the line of fixation of the eyes.
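The time-correlation role assigned above to Meyer's loops amounts to delay equalization. The sketch below uses made-up transit delays; it shows only the principle that padding the faster routes with a compensating delay makes all signals arrive together.

```python
# Hypothetical transit delays (ms) from three retinal regions to the LGN.
transit_delay_ms = {"near_fovea": 2.0, "mid_periphery": 5.0, "far_periphery": 9.0}

# Equalize by padding each path so its total delay matches the slowest path,
# the role the text assigns to the variable path lengths of Meyer's loops.
slowest = max(transit_delay_ms.values())
compensation_ms = {region: slowest - d for region, d in transit_delay_ms.items()}
arrival_ms = {region: transit_delay_ms[region] + compensation_ms[region]
              for region in transit_delay_ms}

assert set(arrival_ms.values()) == {9.0}  # all regions now time-correlated
```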
The second output is a highly vector oriented signal passed to the cortex describing the detailed features of the scene analyzed by the POS. This signal has no recognizable spatial relationship to the geometry of the input scene.

The PGN portion of the Pretectum analyzes the signals from the foveola synchronously with a signal received from the POS. This technique allows the PGN to extract both amplitude and phase information. The phase information is bipolar and provides a signal to the Superior Colliculus indicating when a contrast edge related to the scene has been crossed.
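Synchronous analysis against a reference signal is what a signal-processing engineer would call lock-in (synchronous) detection. The sketch below is generic and uses arbitrary test values, not biological parameters; it shows how correlating a signal with in-phase and quadrature references recovers both an amplitude and a bipolar phase.

```python
import math

def synchronous_detect(signal, ref_freq, sample_rate):
    """Correlate `signal` with cosine and sine references at `ref_freq`.
    Returns (amplitude, phase); the sign convention makes the phase of
    A*cos(w*t + phi) come out as phi."""
    n = len(signal)
    i_sum = sum(s * math.cos(2 * math.pi * ref_freq * k / sample_rate)
                for k, s in enumerate(signal))
    q_sum = sum(s * math.sin(2 * math.pi * ref_freq * k / sample_rate)
                for k, s in enumerate(signal))
    i, q = 2 * i_sum / n, -2 * q_sum / n
    return math.hypot(i, q), math.atan2(q, i)

# Arbitrary test tone: amplitude 3, phase +0.5 rad, 5 Hz sampled at 1 kHz.
fs, f = 1000, 5
tone = [3 * math.cos(2 * math.pi * f * k / fs + 0.5) for k in range(fs)]
amp, phase = synchronous_detect(tone, f, fs)
assert abs(amp - 3.0) < 1e-6 and abs(phase - 0.5) < 1e-6
```

Because the recovered phase is signed, a threshold on it can serve as the kind of bipolar edge-crossing indicator described above.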

The data delivered to the higher cognitive centers of the cortex via the PGN-pulvinar couple arrives considerably before the data routed via the LGN. This allows the cortex to make decisions before all of the data relevant to the complete scene has arrived. However, except for time critical information, which is processed immediately and converted into commands for transmission to the Superior Colliculus via Area 7A, the received data is processed without undue attention to time.

The cortex is primarily vector oriented. Although it receives spatially correlated information from the two LGNs at Area 17, the data loses this correlation rapidly as the data is processed and passed to more forward areas of the occipital lobe. The signaling paths, fasciculi, associated with this movement are shown as fish-hooks in the figure.

While a statement by Hubel is technically correct, it may leave an inadequate impression260. There is virtually no spatial correlation between the scene and the surface of the cortex in areas forward of Area 19. It is this uncorrelated data, along with the data from Area 7, that is further processed by the feature extraction engines of the cortex. The results of much of this processing are used to form a putative saliency vector. It is this vector that is passed to the cognitive centers of the frontal lobe of the cortex. There is some information that suggests this activity initially involves Area 6.

A simpler figure in Daw shows a familial resemblance to the above figure261. However, the nomenclature differs significantly. In this work, the magnocellular and parvocellular pathways only extend from the LGN to the cortex. Between the retina and the LGN, different and more explicit channel designations are shown. Since all of the signaling channels involve analog signals, the designations OFF and ON found in many psychophysical papers, and associated with the parvocellular pathways by Daw, are inappropriate. Due to the signal differencing involved in the chrominance channels, P & Q, either a positive-going or negative-going response to a stimulus can occur in the same neural path depending primarily on the spectral content of the stimulus or stimuli. Under normal circumstances, the signals in these channels never reach saturation. However, because of the logarithmic processing of the signals by the photoreceptor cells, the signals may approach saturation under very high contrast situations as found in most psychophysical experiments. As an example, under high contrast conditions, each path may appear to be ON (at one extreme voltage) to M-channel light and OFF (at the other extreme voltage) to L-channel light. The Daw figure shows a conceptual output for color and form information (movement and depth are calculated from form) but does not show a conceptual output for luminance information.

260. Hubel, D. (1988) Op. Cit. pg. 61
261. Daw, N. Op. Cit. pg. 16

Note that when used for its normal function, brain tissue has one electrically passive surface and a second surface through which all signaling passes. The surface that is electrically passive is the most involved chemically. This is largely the same in the retina. The passive surface could be considered to be the retinal pigment epithelium layer, or Bruch's membrane depending on definition, with the choroid playing a role similar to that of the skull. Most metabolic support involves the choroid supply and the RPE. In normal brain tissue, signals can be considered to enter a particular engine at its signaling surface and project to near the opposite surface before turning around and leaving by the same signaling surface. In the case of the retina, the signals projecting toward the signaling surface originate near the passive surface by other mechanisms. This allows the retina to be described using the same layer designations as the brain itself. Following the nomenclature of Afifi & Bergman referenced above, the suggested correlation between the cytoarchitectonic names and the common retinal names is:

Layer #   Morphological designation    Common designation in retina
          in cortex                    From Dowling            Used in this work

Afferent layers
  ---     (Photoreceptor related layers)
  I       Molecular                    fiber layer             Outer fiber layer
  II      External granular            pedicle layer           1st matrix layer
  III     Pyramidal                    Inner nuclear layer     Bipolar layer
  IV      Internal granular            ---                     2nd matrix layer

Efferent layers
  V       Ganglionic                   Ganglion layer          Ganglion layer
  VI      Multiform                    Optical fiber layer     Optical fiber layer

The correlation in the above table leaves much to be desired but it does provide a guide. Layer IV is the nominal termination point of all arriving association fibers. Layer V is the nominal source of all association fibers. These fibers are more than two millimeters long, are myelinated and carry pulse signals. Several sources in the literature suggest the axons of the ganglion cells are not myelinated until after they leave the retina through the Lamina Cribrosa. The importance of myelination to successful signal propagation over long distances places this view in doubt. The question may be one of degree. Only a few layers of myelination, instead of the 250 layers commonly found, provide major improvements in the functional performance of the axon. XXX have provided imagery purporting to show the myelination of axons within the retina262. They have also provided imagery showing a diseased condition where the myelination appears to have become much more voluminous within the retina.

Layers I, II & III perform the bulk of the analog signal processing within a given engine. Layer VI is the fiber layer supporting short, unmyelinated association fibers providing mostly analog interconnections.

The connections between the two eyes and each LGN are difficult to portray on paper because of the three dimensional nature of the structure. The figure shows the signals from the left eye contacting the inner surfaces of the layers of the LGN (relative to the center of curvature) and the signals from the right eye contacting the outer surface of the adjacent layer. As shown in the inset, signals from these two layers can be shared and compared by association fibers (shown dashed) traveling between the two layers. Two different types of output signals are generated. One set of signals is transferred to the Superior Colliculus for purposes of controlling stereopsis. The other set appears to constitute a merged signal representing one field of object space. This signal is transferred to the appropriate region of area 17.

262. Xxx (1991) xxx. In Swash, M. & Oxbury, J. Clinical Neurology, Vol. 1. NY: Churchill Livingstone, pp 389-405

2.8.1.4.1 The morphology of the LGN

It is interesting to consider the morphology of the LGN in the above context. Recognizing its role in the spatial comparison of signals between two clearly distinct sources, and its multiple layers of neural laminate, it is reasonable to look at the LGN as composed of two sheets of neural laminate that have been folded with their normally active surfaces in contact with each other and signals being passed across this juxtaposition. These electrophysiological options will be addressed in more detail in Chapter 15. The morphological form of the LGN may be unique within the neural system, or it may be typical of other nodular shaped features. Whereas it is usually shown as formed of two sections of individual stacked discrete structures, the above guidelines suggest an alternate configuration.

Hubel has been the most important investigator of the LGN in recent years263. He has noted the unexpected electrophysiological organization of the layers of the LGN, left, right, left, right, right, left as they relate to the eyes, beginning from the top of the structure. He noted: “It is not clear why the sequence reverses between the fourth and fifth layers; . . .” His work will be correlated with this work more completely in Chapter 15.

It is proposed that each section consists of two parts of the sheet placed back to back and that the total structure is the result of further folding to produce a multilayer section. Figure 2.8.1-11 illustrates this proposal. The upper part of the figure begins with a folded piece of the neural surface. The outside surface is the passive surface. There are feature extraction engines located along the inner surface. These have been numbered to reflect the final configuration. The tissue is folded until it resembles the lower part of the figure. The resultant fold, which is similar to the standard fold used in metal can fabrication, presents the precise order of surfaces noted by Hubel.
In the actual nucleus, there is an additional fold relative to the vertical center line that causes the structure to take on a domed shape.

The fact that there are six layers in each LGN is in complete agreement with this theory. The lower two layers compare the luminance, R-channel, information from the retina. The upper two pairs of layers compare the two chrominance channels, the P-channel and the Q-channel. The assignment of these two designations is arbitrary in the figure. No data was located specifying which pair of layers corresponded to the “red-green” layers.

The magno-cellular layers are actually formed from one fold of the structure consisting of two neural layers of the main sheet. Each layer within the dashed rectangle can be considered a separate feature extraction engine. The magno-cellular layers compare equally time-dispersed pairs of luminance information from the two eyes. The two pairs of layers forming the parvo-cellular layers perform similar processing on the information in the chrominance channels.

Figure 2.8.1-11 Proposed morphology of the lateral geniculate nucleus based on the assumption that the brain is formed from a single laminated sheet of neuron material. The asterisk is shown only for orientation purposes. The numerics are associated with individual engines formed on the surface of the sheet. Engines 5 & 6 are associated with the luminance, R-channel, and the other engines are associated with the chrominance, P- & Q-channels. The dorsal four layers are usually labeled the parvocellular (small cell) layers and the ventral two are usually labeled the magnocellular layers.

263. Hubel, D. (1988) Eye, Brain, and Vision. NY: Scientific American Library, pg. 66-67

It is suggested that the information from the same element of the scene in object space is provided to the adjacent areas of the composite sheet as suggested by the inset in the previous figure. In this case, there are short associative fibers, less than one millimeter long based on Hubel, crossing back and forth between the engines in the paired layers. Many of the neurons in the LGN may be biased to act as correlators and provide a correlation function instead of linear amplification. This is the standard approach in similar man-made circuits.

2.8.1.4.2 The electrophysiology of the LGN

Van Essen et al. quote Perry et al.264 with regard to the neurons terminating at the LGN in the macaque monkey. Omitting their characterization of the ganglion neurons of the retina, they say about 80% of the neurons project to the parvocellular (chrominance related) section of the LGN, with about 10% projecting to the magnocellular (luminance related) section. They do not account for the other 10%. However, this work suggests that about 2.5% of the total (the reticulocellular pathway) project to the PGN-pulvinar couple, where they project onto Area 7 of the cortex.

2.8.1.4.3 The purpose of the LGN

The purpose of the lateral geniculate bodies appears quite clear and critically important based on this work. Their primary role is to extract the necessary information to obtain convergence of the lines of fixation of the eyes at the desired range and then to merge the two images into one composite image. Two types of information are produced for further processing. Information concerning the stereoptic alignment of the two lines of fixation is processed and returned to the Superior Colliculus for further processing. The result is command signals to the oculomotor muscles to establish proper merging of the imagery with emphasis on the foveal area. Information resulting from the optimal mixing or correlation of the two inputs for all areas of the field of view, after stereo-alignment, is forwarded to area 17 of the cortex. In this concept, most or all of the signals to and from the LGN in this figure are routed out of the plane of the paper.

2.8.2 The building blocks of the neural system

Since at least the time of Cajal, the neural system has been defined in terms of morphology. This has resulted in the neuron being considered the fundamental building block of the system. This is not the case when the system is viewed from a functional perspective. From this perspective, the fundamental unit is the Activa and its associated conduits. Each of these circuit elements has unique and adjustable circuit parameters. The result is a myriad of specialized functional blocks. These blocks are assembled in various combinations and each combination is supported metabolically by a cell nucleus and a soma. Most signal processing neurons only contain one Activa. However, this is not true of the signal detection and signal projection neurons. The signal detection neurons, the photoreceptor cells, contain either two or ten distinct Activa circuits depending on definitions. The signal projection neurons contain a variable number based on their overall length. Each Node of Ranvier and its associated conduits is a distinct functional unit. Since these units recur nominally every two millimeters, the approximate number of functional units per projection neuron is given by the length of the neuron divided by two millimeters.
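The counting rule in the last sentence is simple enough to sketch. The following Python fragment merely restates the stated arithmetic (one functional unit per nominal two millimeters of projection neuron); the neuron lengths used in the example are hypothetical, not measured values.

```python
# Approximate number of functional units (Nodes of Ranvier plus their
# associated conduits) along a projection neuron, using the ~2 mm
# nominal spacing stated in the text. Example lengths are assumptions.

NODE_SPACING_MM = 2.0  # nominal Node of Ranvier recurrence interval

def functional_units(neuron_length_mm: float) -> int:
    """Estimate the number of distinct functional units in one neuron."""
    return max(1, round(neuron_length_mm / NODE_SPACING_MM))

# e.g. a hypothetical 50 mm projection neuron
print(functional_units(50.0))  # → 25
```

A short neuron still contains at least one such unit, hence the lower bound in the sketch.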

As developed in Chapter 8, each Activa is a three-terminal device. When associated with its supporting conduits, the overall circuit has six functionally distinct terminals, three for electrical biases and three signal terminals.

2.8.3 The Characteristics of visual signals

There are two distinct classes of signals within the visual system, and one class includes several types. The classes relate to the requirements of the system and are easily correlated with its morphology. The most widely studied signals are the pulse type signals commonly designated action potentials. These signals are only found within the signal projection stages of the process. Their waveform as a function of time does not vary significantly and is largely irrelevant to the study of the system. The time interval between the pulses is the carrier of the information relative to vision. The far more important signals of vision, and the rest of the neural system, are the analog waveforms employed within individual engines of the neural system. These waveforms are much more varied and directly associated with the signal information being processed. With the spatial dispersion of the various engines of the neural system and the great many interconnections between them, signal projection is required frequently. All association fiber signal paths employ pulse signaling. Signal projection requires the conversion of the analog signals to pulse signals and their subsequent return to analog form. This process is accomplished primarily in two morphologically labeled types of cells. The ganglion cells are configured topologically to provide the mechanism of signal encoding (conversion) at the sending location. Conversely, it appears that the stellate cells are configured to provide decoding at the receiving location. The ganglion cells are frequently associated with layer five and the stellate cells are usually associated with layer three or four of the neural sheet. The following material is a brief summary of the material in Section 10.10.1.

264Perry, V., Oehler, R. & Cowey, A. (1984) Xxx Neuroscience, vol. 12, pg. 1101

2.8.3.1 Quiescent signal levels of the system

The visual system, and the neural system generally, are fundamentally electrolytic systems. They employ conventional electronic techniques very similar to those developed by man, with one major exception. The systems are based on electrolytic rather than metallic conduction paths. The parallel between natural and man-made systems even extends to the active devices found in the two systems. The liquid-crystalline (or semi-liquid) semiconductor device of neurology, the Activa, is directly analogous to the semi-metallic semiconductor device known as the transistor. Therefore, it is possible to describe the voltages and currents of the neural system using the identical terminology found in conventional electronics. The principal voltage levels found within the visual circuits can easily be defined using this terminology. There are three principal voltages associated with the output of any active circuit device. These are:

1. the cutoff potential, or the potential under zero current conditions associated with the output terminal of the device.

2. the saturation potential, or the minimum potential achievable at the output terminal under maximum current conditions.

3. the quiescent potential, or the resting potential of the output terminal of the device.

Normally, a rise in the collector potential, nominally the axon potential, toward the cutoff potential of the circuit is called hyperpolarizing in the electrophysiological community. A fall in the collector potential toward the saturation potential is labeled depolarizing.

Similarly, a signal at the input terminal, typically but not exclusively a dendrite, that causes the output or collector potential to rise toward the cutoff potential is called hyperpolarizing in the literature. Conversely, a signal causing the potential to decrease toward the saturation potential is labeled depolarizing.

Lacking a working model of the circuitry of the visual system, early electrophysiologists adopted the practice of describing the response of the system in analogy to the input stimulus. This was done under the assumption that the visual system employed 2-state signaling mechanisms internally. This was extremely unfortunate. It is now quite clear that the system employs 3-state signaling in all of the chrominance channels of vision. As a result, descriptions like blue center ON-red surround OFF are less than adequate for purposes of electrophysiology.

To understand the operation of a neuron, one must understand that the active element in that device is a three-state device with three signal terminals. The expressions ON and OFF are not adequate to describe such a device. This is illustrated in the following table attempting to relate depolarization and hyperpolarization to the actual relevant conditions.

TABLE 2.8.3-1 STATE TABLE FOR THE POTENTIAL ASSOCIATED WITH A TYPICAL AXON

2-State Adjectives    Possible 3-State Actions

Depolarizing          Change from quiescent state toward saturation
                      Change from cutoff state toward quiescent condition

Hyperpolarizing       Change from saturation toward quiescent condition
                      Change from quiescent state toward cutoff
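The state table can be sketched as a small classifier. This is only a minimal illustration, assuming signed potentials in millivolts and the nominal levels quoted in this section (cutoff near -150 mV, saturation near -20 mV); the quiescent level chosen here is an arbitrary intermediate value.

```python
# Sketch of Table 2.8.3-1: label a change in axon potential with both
# the common 2-state adjective and the more specific 3-state action.
# All three reference potentials below are illustrative assumptions.

CUTOFF_MV = -150.0      # zero-current (cutoff) potential
SATURATION_MV = -20.0   # maximum-current (saturation) potential
QUIESCENT_MV = -80.0    # assumed resting potential for this example

def describe_change(v_start: float, v_end: float) -> str:
    """Classify a potential change per the 3-state scheme."""
    if v_end > v_start:  # moving toward saturation (less negative)
        adjective = "depolarizing"
        action = ("quiescent toward saturation" if v_start >= QUIESCENT_MV
                  else "cutoff toward quiescent")
    else:                # moving toward cutoff (more negative)
        adjective = "hyperpolarizing"
        action = ("saturation toward quiescent" if v_start > QUIESCENT_MV
                  else "quiescent toward cutoff")
    return f"{adjective}: {action}"

print(describe_change(-80.0, -40.0))   # → depolarizing: quiescent toward saturation
print(describe_change(-80.0, -120.0))  # → hyperpolarizing: quiescent toward cutoff
```

The point of the sketch is that the single adjective "depolarizing" maps to two quite different 3-state actions, which is exactly the ambiguity the table identifies.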

Under some conditions, particularly in the pulse circuits, the quiescent condition may correspond to either the saturation or cutoff condition. However, this is not common in the analog circuits. This has led to considerable confusion in the literature, as will be discussed below.

The cutoff potential of a neuron is determined by the source of electrical power associated with the Activa within that neuron. Except for specific specialized neurons, such as the photoreceptor cells, this potential is normally between -144 and -152 millivolts, depending on temperature. The saturation potential of a neuron is determined by the morphological, and consequently the electrical, size of the Activa within the neuron. It is typically near minus twenty millivolts relative to the surrounding interneural matrix, INM, and is less sensitive to temperature than the cutoff potential. The quiescent potential of the axon of a neuron can be at any level between the above values, depending on the potential applied to the input terminal. The photoreceptors typically exhibit a cutoff potential near -70 millivolts, as will be discussed more completely in Section 9.2.3.

2.8.3.1.1 Quiescent conditions of photoreceptor cells

The electrical parameters of the axon of the photoreceptor cells have been widely studied. It is important to note that the current flowing to the output terminal of a photoreceptor cell decreases in the presence of increased illumination. This is counterintuitive but easily demonstrated. Thus, turning OFF the illumination results in turning ON the current going to the pedicel of that cell. When speaking of the potential of the pedicel, the potential reaches its highest value at cutoff, which corresponds to maximum illumination. When illuminated, the output potential of the photoreceptor cell is normally at an average potential of about -50 millivolts, relative to the INM, regardless of the intensity of the average illumination.

2.8.3.1.2 Quiescent conditions of signal processing cells

The bipolar cells of the signal processing stage are basically summation circuits. They do not introduce amplification into the system. Changes in their input signal are reproduced at their output with the same polarity. Thus, an increase in illumination of the associated photoreceptor cells will result in an increase in the potential of the bipolar cell axons. However, an increase in illumination in one spectral channel of the retina and a decrease in an adjacent spectral channel can leave the output potential of the appropriate bipolar cell unchanged. Thus, the change from red ON-aqua OFF to red OFF-aqua ON may have no impact on the bipolar cell.
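The offsetting-change behavior described above can be illustrated with a trivial summation sketch. The signal values are arbitrary illustrative units, not measured potentials; only the summation (non-amplifying, same-polarity) behavior comes from the text.

```python
# Sketch of the bipolar cell as a pure summation circuit: equal and
# opposite changes in two input spectral channels leave the summed
# output unchanged, so the cell cannot register the swap.

def bipolar_output(red: float, aqua: float) -> float:
    """Non-amplifying summation of two input channel signals."""
    return red + aqua

baseline = bipolar_output(10.0, 10.0)
swapped = bipolar_output(15.0, 5.0)   # red up, aqua down by the same amount
print(baseline == swapped)  # → True: the summed output is unchanged
```

This is why the change from red ON-aqua OFF to red OFF-aqua ON can pass through a bipolar cell without any change in its axon potential.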

The problem becomes much worse when the outputs of the lateral cells are described. These cells are designed to be differential amplifiers. They operate in a biphase mode about a quiescent value. Their output reflects the ratio between the amount of violet and yellow stimulus present and the amount of red and aqua stimulus present. When these ratios are equal to one, the output of the lateral cells is at its quiescent value. This value does not correspond to an OFF condition under any interpretation of the circuit's parameters. Thus, where Hubel and others have described the balanced input condition as leading to an OFF condition in the signal channel, the more appropriate designation of the condition is one of quiescence. This is very important when attempting to understand the analog to pulse encoding process prior to signal projection to the brain.

2.8.3.1.3 Quiescent conditions of ganglion cells

The conditions found at the axons of ganglion cells are fundamentally different from those in analog circuits. These circuits are configured as driven monopulse oscillators.

2.8.3.2 Characteristics of the analog signals

It is absolutely critical to specify what types of cells, and what terminals of those cells, are being discussed when describing the analog signals they process. A typical lateral cell has a biphase output centered around a quiescent level of -70 to -80 millivolts. The signal can swing up to the nominal saturation level of -20 millivolts and down to the nominal cutoff potential of -150 millivolts. Which direction it swings depends on the polarity of the signal applied between its dendritic and poditic terminals. The bandwidth of a majority of the analog signals is limited by the bandwidth of the photoexcitation/de-excitation process of the photoreceptor cells.

2.8.3.3 Characteristics of the pulse signals

The axons of ganglion cells have a potential that is the same as their cutoff potential, typically -150 millivolts. When they generate a single pulse, their axon potential changes rapidly to their saturation potential, typically -20

millivolts, before returning to their quiescent potential. The complete cycle is typically accomplished in less than 1.5 milliseconds, depending on temperature. The resulting pulse is positive going and has a typical amplitude of more than 120 millivolts. Note that the resulting action potential never achieves a positive potential relative to the INM. Such a condition has often been inferred in the literature based on the assumption of a quiescent potential near -70 millivolts for the ganglion cell. That figure usually rests on the assumption that the ganglion cell resting potential was similar to that of a photoreceptor cell, which was easier to measure. It is not. The information is encoded on the pulses generated by the ganglion cells using time delay modulation. This is a form of phase modulation. Phase modulation is more precise than frequency modulation. It is the time interval between pulses that is the true value of the signal being transmitted. This interval can vary from a very long time in the dark to about 0.01 second under high instantaneous light levels relative to the average light level.
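The time-interval scheme just described can be sketched as a toy encoder/decoder pair. This is an illustration only: the dark interval and the encoding law are assumptions; only the roughly 0.01 second lower bound at high instantaneous light levels comes from the text.

```python
# Sketch of time-interval (phase) modulation: the signal value is
# carried by the interval between pulses, not by pulse shape or
# amplitude. Constants are illustrative assumptions.

DARK_INTERVAL_MS = 1000  # assumed very long interval in the dark
MIN_INTERVAL_MS = 10     # ~0.01 s at high instantaneous light levels

def encode_interval(signal: float) -> float:
    """Map a normalized signal (0..1) to a pulse-to-pulse interval;
    stronger signals shorten the interval (inverse modulation)."""
    signal = max(0.0, min(1.0, signal))
    return DARK_INTERVAL_MS - signal * (DARK_INTERVAL_MS - MIN_INTERVAL_MS)

def decode_intervals(pulse_times_ms: list[int]) -> list[int]:
    """Receiver side: recover the signal-bearing intervals by
    differencing successive pulse arrival times."""
    return [t1 - t0 for t0, t1 in zip(pulse_times_ms, pulse_times_ms[1:])]

print(encode_interval(0.0))           # → 1000.0 (dark)
print(encode_interval(1.0))           # → 10.0 (bright)
print(decode_intervals([0, 10, 30]))  # → [10, 20]
```

Note that decoding needs nothing more than the pulse arrival times, consistent with the claim that the pulse waveform itself carries little information.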

2.8.3.4 Characteristics of the currents found in visual systems

Because most early researchers in vision had a chemistry background, it was natural to think of the electrical currents found in the neural system as based on ionic flow in dilute solutions. Lacking any other model, and lacking detailed knowledge of the composition of biological membranes, this concept of ionic flow was extended to include these membranes. This foundation and its extension have not proven to be an adequate description of these currents. When the system is studied as a whole, it is found that there are significant portions of the system that are impermeable to ionic flow and other areas where the concentration levels are too high to support such flow. Most of these areas can be described as liquid crystalline in nature. A liquid crystalline structure does not support the physical transport of ions. It can support current flow consisting of electrons and holes. (Holes are defined as the absence of an electron from a normally occupied location in a crystalline lattice.) The movement of these electrons and holes is described in terms of quantum-mechanical processes. These processes introduce mechanisms unknown to the field of conventional chemistry. They form the foundation of semiconductor physics and the active semiconductor devices found in the neural system.

To understand the neural system, one must recognize that the currents within the neural system consist of both ionic and fundamental charges. The fundamental charges can be free electrons moving in a conducting medium or a form of bound charge moving within a lattice structure or a dielectric. When these additional forms of current and mechanisms controlling current movement are recognized within the context of biology, the operation of the neural system takes on a totally different character. It is found to be a much more complex electrical medium consisting of a great variety of semiconducting electrical materials and devices. It is also found to employ electromagnetic principles to convey signals over the projection neurons that do not rely upon the conductance of charge or ions at all.

2.8.3.5 The concept of antagonistic visual signals

Gouras reviews the fact265 that in 1874, Hering, an early psychophysiologist, originally proposed that there were separate neural mechanisms mediating the sensations of “brightness” and “darkness.” This distinction was entirely conceptual at that time. No actual separate neural mechanisms have been reported related to these two perceptions. The electrophysiological luminance signal is a monopolar signal representing the amplitude of the normalized response to a stimulus. If the stimulus is removed, the signal returns to its quiescent value. Hering also proposed antagonistic pairs of chrominance signals based on his conceptual determination that certain perceptual colors did not exist, e.g., purplish-yellow. Over time, the idea grew that the signals related to these sensations were antagonistic in an all-or-nothing sense. The actual visual signals are closely aligned to the axes Hering proposed, but they are analog in character. The terms ON and OFF visual neurons were apparently coined by Kuffler in 1953 following the electrophysiological recording of ganglion cells. Since then, the source of these ON and OFF mechanisms has been traced to what Gouras describes as “the first synaptic relay in the retina, between the cone photoreceptors and second order neurons, bipolar cells.” His figure 10.1 is clear that all cones in a given retina exhibit the same polarity of output under the same conditions. The ON and OFF responses are formed in the bipolar cells of his figure. This work takes a different approach and assigns the primary differencing role to the horizontal cells, in their role of creating various chromatic and polarization sensitive difference signals. A similar role is assigned to the amacrine cells in their role of creating spatial difference signals.

In this work, the bipolar cells function primarily, if not exclusively, as summation circuits. As will be developed later, whether these neurons can support a differencing function depends on how they are biased. If they are biased so that a bipolar neuron can accept a signal from a photoreceptor cell or horizontal cell at its third (or poditic) terminal, it can perform a signal inversion. The arborization of this third terminal may appear to be the same as that of the dendritic terminal. However, it leads to a functionally different signal at the axon. That axon signal is of opposite polarity to the norm.

The important points to note are: the ON and OFF signals do not originate in the photoreceptors, and they are not phasic. More important, the signals labeled OFF signals are only found in the differencing channels associated with chromatic, polarization or spatial encoding of the scene. These channels are distinctly separate from the luminance channels. Wherever an OFF signal is found in a differencing channel, there is an ON signal in a parallel luminance channel. However, these two channels are not complementary. The channel containing the “OFF” signal simultaneously supports an “ON” signal from another nearby photoreceptor. These two signals within the output of a single neuron may differ in chromatic, polarization or position properties and are used to convey information to the brain that is orthogonal to the luminance information. The two signals at the input to a differencing neuron may be considered antagonistic. However, they are not inhibitory in an all-or-nothing sense.

265Gouras, P. (1991) The Perception of Color. Boca Raton, FL: CRC Press, pp. 163-164

2.8.3.6 Extension of the Hering concept

In psychophysical experiments, the ON and OFF concept is often extended to provide a shorthand describing the center of a field of view and its surround. In those contexts, the expression ON may be applied to either the center or the surround, frequently with a preceding notation indicating the spectral sensitivity. Thus, the expressions L ON-surround or M OFF-center are defined. Generally, the colors of the lights used in these experiments have not been carefully selected to match or highlight the performance of the actual chromophoric channels of vision. As a result, the precision of the experimental data has been limited and the subsequent discussions have been inconclusive.

Over a long period of time, the above experiments have yet to determine a unique size for the “center” field that gives maximum differentiation between the center and the surround.

2.8.4 The form of Horizontal and Amacrine cells

Because of the previously undefined electrical topology of the horizontal and amacrine cells, it has been difficult to interpret the signal path through these neurons. The community has generally developed an architecture for the signal flow through the retina that includes signals doubling back on themselves. This doubling back is frequently described in terms of a signal from a pedicel of a photoreceptor cell entering a horizontal cell and being returned from the same terminal of the horizontal cell to the same pedicel in order to inhibit the transmission of a signal from that photoreceptor cell to other orthodromic cells in the retina. A communications theorist finds this a very unlikely scenario. It does not provide a minimal circuit solution in terms of Boolean algebra. The neural system is invariably a minimalist (simplest) system in any individual neural path.

As seen in this work, the horizontal and amacrine cells are three-terminal devices that create a difference signal. One morphological arm of each horizontal and amacrine cell is packaged to contain two functional elements, an input neurite and an output axon. In this sense, the morphological arm is like a cable carrying two separate wires. Within a given synaptic complex, the input neurite accepts a signal from a distal neuron and the output axon delivers a signal to a proximal neuron. Based on this structure for the neurons, a different architecture is found more appropriate. There is no signal feedback to the distal neuron based on its own output.

2.8.5 The electrical environment of the neural system

The animal neural system is a very complex, totally electrolytic signaling system. It employs many specialized techniques not found in common electronic circuits. However, all of the techniques are found within the niches of specialized electronic systems. The term electrolytic is used to emphasize the fact that the system employs both ionic and electronic charge transfer. The system cannot be understood by assuming only ionic or only electronic transfer.

The overall system is highly optimized and operates at impedance levels not normally found in man-made electronics. This makes precise evaluation of the system difficult. Evaluation is further complicated by the fact that the system does not employ any dissipative electrical elements. From an energy perspective, the system employs fundamentally reversible electro-chemical reactions that skirt the Second Law of Thermodynamics. Power consumption is measured in terms of a change in chemical constituents and not in terms of heat generated.

The complexity of the electrical environment of the retina has seldom been appreciated by investigators. They have generally assumed a linear time-invariant environment of nominal electrical impedance. This is far from the case.

The circuitry associated with the photoreceptor cell alone is far more complex than recognized in the literature. The method of photon detection employed results in a variable time delay in the signal impressed upon the amplifiers of the photoreceptor. To achieve the very large photometric dynamic range of the eye, each photoreceptor is configured to exhibit a large amount of feedback in its signal generating function. The magnitude of this feedback is state dependent. The effect of this feedback is to cause the amplifiers in this stage to exhibit a variable output impedance. Combining this impedance with a relatively fixed output capacitance results in a variable bandwidth in this signal path. The result is a signal from this stage that is unexpectedly complex and exhibits a variable bandwidth as a function of input intensity. To understand this signal, its temporal characteristics must be understood in detail.

For historical reasons, neurons have been considered to be based on two-terminal electrical networks. This is not the case. Neurons incorporate a three-terminal active device as a fundamental element. To support these devices, the individual neuron usually exhibits six clearly definable and measurable external electrical terminals. Recognizing this fact calls for an entirely different approach to the measurement of signals associated with the neural system and particularly the very complex retina.

Many first level hypotheses have been presented over the years on how the signal generated by absorption of a photon is transferred to the nervous system. These hypotheses have generally not been very convincing, very substantive or very detailed. The problem is a complex one because of the limited knowledge concerning the electrical properties of the inner segment and particularly the region involving the interface with the outer segment. There is a great deal of histological data, but very little data related to the signals.
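The variable-bandwidth mechanism described above (a state-dependent output impedance against a relatively fixed output capacitance) can be sketched with the standard single-pole relation f_c = 1/(2πRC). All numerical values and the assumed impedance law here are illustrative; only the qualitative mechanism comes from the text.

```python
# Sketch: feedback lowers the photoreceptor's effective output
# impedance as intensity rises; with a fixed output capacitance the
# corner frequency f_c = 1/(2*pi*R*C) rises with it. Values assumed.

import math

C_FARADS = 10e-12    # assumed fixed output capacitance (10 pF)
R_DARK_OHMS = 1e9    # assumed high effective impedance at low intensity

def corner_frequency_hz(relative_intensity: float) -> float:
    """Corner frequency for an assumed intensity-dependent impedance."""
    r_effective = R_DARK_OHMS / (1.0 + relative_intensity)
    return 1.0 / (2.0 * math.pi * r_effective * C_FARADS)

print(round(corner_frequency_hz(0.0), 1))   # dark → ~15.9 Hz
print(round(corner_frequency_hz(99.0), 1))  # bright → ~1591.5 Hz
```

Even with these arbitrary numbers, the sketch shows how two orders of magnitude of intensity can translate into two orders of magnitude of signal bandwidth in this stage.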
There is also a considerable problem in terminology. In the biophysical world, the term transduction has been used in many contexts. It appears to be a general term, much as in the electrical engineering field just at the time of the invention of the transistor. At that time, it became necessary to define more precise terms like transconductance, transimpedance, and even transresistance.

2.8.5.1 The electrical impedance of the layers of the retina

Obtaining signals from the retina in the laboratory for purposes of interpretation is quite difficult. Much of the available data is inferred from various non-invasive measurement techniques. The least complex and non-invasive method is psychophysical, interpreting the results of retinal stimulation by the response of the animal. This method is indirect in the sense that the subject has manipulated the retinal signals in several ways before evaluating them. These ways include computation and interpretation within the cortex. The computation algorithms associated with the cortex are essentially unknown to science at this time. Interpretation necessarily involves prior training and memory.

An alternate non-invasive technique for obtaining signals from the retina is electroretinography. This is also an indirect method of limited scope. The intent would be to extract waveforms representative of different portions of the signal chain related to the retina. Unfortunately, this method is akin to Tempest activities in conventional electronics. Tempest activities are designed to extract individual signals from a milieu of signals emanating from a specific device or area. Considerable a priori knowledge is needed of the expected signals to recover anything meaningful from this process. Up to the present, this technique has avoided the pulse signals emanating from the eye because there was no rationale for interpreting them. Low frequency analog signals have been recorded for a long time from electrodes placed against the cornea of a living eye and a second arbitrary location. Interpretation of the signals obtained from stimuli by this method has also been difficult because of the lack of a rationale.

Invasive techniques have provided good parametric data concerning the amplitude and frequency parameters of signals traveling on the Optic Nerve. However, the interpretation of these signals has not progressed. This has also been primarily due to a lack of a sophisticated rationale as to what to expect.

Probing the layers of the retina can involve contacting an individual neuron or the interstitial space between neurons. In either case, the impedance environment is quite difficult. In the case of a single photoreceptor cell, the impedance level of the pedicel is typically 1000 megohms. At this level, only specialized probes can be expected to be useful (typically the open gate of a MOS transistor). Even these probes will typically introduce enough shunt capacitance to limit the bandwidth of the recorded signals.
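The probe-loading problem can be made concrete with a worked example. The 1000 megohm pedicel impedance comes from the text; the probe shunt capacitance below is an illustrative assumption.

```python
# Worked example of probe loading at the pedicel: the source impedance
# and the probe's shunt capacitance form a low-pass RC filter with
# corner frequency f_c = 1/(2*pi*R*C). Capacitance value is assumed.

import math

R_OHMS = 1e9       # ~1000 megohm pedicel impedance (from the text)
C_FARADS = 5e-12   # assumed 5 pF probe shunt capacitance

f_cutoff_hz = 1.0 / (2.0 * math.pi * R_OHMS * C_FARADS)
print(f"{f_cutoff_hz:.1f} Hz")  # → 31.8 Hz
```

A corner frequency of a few tens of hertz is comparable to the bandwidth of the analog signals themselves, which is why only very low-capacitance probes (such as the open gate of a MOS transistor) are useful here.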
If the investigator attempts to interrogate the interstitial space between neurons, the impedance may be lower, but it must be recognized that the resultant signal is an average of the signal from multiple cells that may not all be responding similarly. The result is an equally complex or more complex signal than that obtained by single cell probing.

2.8.5.2 Characteristics of the signal encoding

The output signals traveling along the optic nerve are binary in character, but the exact modulation scheme has not been presented in the literature. The general assumption has usually been that the signals are in some way related to the amplitude of the input signal to the photoreceptors. However, the signals, being non-synchronous, have generally looked like noise pulses even in the absence of input. Looking at the literature a little closer, the assumption has usually been that the frequency of the pulses is directly related to the input signal amplitude (loosely described as pulse frequency modulation), i.e., a brighter input signal results in more pulses in the output signal.

An alternate hypothesis that is easily implemented in the electronics world would be that the modulation is of the type known as phase modulation, where the time between pulses carries the information. The specific modulation type is called time interval modulation. This modulation mode offers several attractive features in signaling. First, the basic signal can be impressed on a carrier frequency in either of two polarities. Thus, in the animal case, sudden increases in brightness or sharp changes in contrast are usually more important than sudden decreases. Therefore, the signal can be impressed onto the carrier in inverse modulation. Sudden increases in signal brightness result in shorter pulse-to-pulse intervals. Reductions in signal brightness result in longer pulse-to-pulse intervals. This results in the more important information being received sooner than the less important.

Second, lacking any input signal, the modulator can deliver a constant stream of output pulses at a nominally constant pulse-to-pulse interval (frequency). This is an effective way of providing what is called a supervisory signal that says the circuit is operational but not receiving any information to transmit.

The above hypothesis provides an interesting explanation for many waveforms in the literature, especially those displaying what is called a latency effect, the name implying a delay in the output before anything happens. Under the “inverse phase modulation of a carrier” hypothesis, the latency interval is a signal in itself. This interval is telling the receiver the rate of change of the signal. If the rate of change is low, the latency is long. The actual process is more complex and will be discussed in detail in Chapters 10 & 16.

Third, this type of signal can be generated, and regenerated (repeated), by one of the simplest types of oscillator--the relaxation oscillator. This type of oscillator is also extremely easy to inhibit or stimulate by one or more auxiliary input signals. The relaxation oscillator is the type used throughout the animal kingdom to generate and regenerate nervous signals.

2.9 Summary

The eye is a remarkably well optimized sensor/signal processing system that has evolved from nothing more than a light-sensitive spot on a flat surface of skin.

The optical geometry of the eye provides an image of limited quality over a very wide field of view, about 150 degrees for each eye. Simultaneously, it achieves quite high performance over a very limited area of about five degrees, about the width of a hand at arm's length. Its highest level of performance is achieved over an even smaller area, about the width of one finger at arm's length. The sensory system behind the optics takes maximum advantage of the available information from the optical system to serve the organism. In doing this, it employs different topologies and morphologies for common components in different regions and uses different signal processing methodologies in different areas.

The animal eye has successfully integrated these sophisticated techniques to achieve a remarkable level of overall performance. It is best understood if it is recognized that the design is based on elliptical, not spherical, optics and uses variable-index materials as a basic design characteristic. It also relies on a unique capability of the photoreceptors to achieve an extraordinary degree of automatic lightness compensation without requiring excessive signal or data processing. This method of adaptation also ensures a high degree of color constancy within the photopic luminance range. These techniques lead to a much higher level of optimization than man achieved in his designs during the closing years of the 20th Century.

An interesting engineering observation is that the lens is not attached to the ligaments by a narrow edge contact, as is required in most man-made spherical optical designs. The lens of the human eye in particular is quite compatible with a broad contact area, leading to low structural stress and high operating reliability in the accommodation function. It also leads to the idea that the lens surfaces are more elliptical than spherical.

The motions of the eye are more complex than generally reported in the literature, and their coordination with the other movements of the head is not well appreciated. They contribute significantly (in fact critically) to the overall operation of the visual system. The interrelationship between motion and vision will be developed further below.

The literature contains an immense amount of information about the geometry and morphology of the eye, but much of it is second hand, inconsistent and poorly documented. Scales and coordinates rarely appear in published pictures; frequently a scale is added to a photomicrograph or electron micrograph later, at the time of printing. This obscures the limited resolution capability of the original recording device. Frequently, freehand drawings (likenesses, impressions or cartoons, but not pictures) are offered that are oversimplified or that stress features out of proportion to their importance or physical dimensions. Frequently, a figure is borrowed and given a new caption that fails to capture all of the details and nuances of the original investigator's findings. The reader is cautioned to go back to original sources, not textbooks, to clarify any differences between what is written here and what he has encountered elsewhere. This will lead to more valid conclusions and, hopefully, progress in the overall field.


2. Environment, Coordinate Reference System and First Order Operation ...... 1
2.1 Physical Environment ...... 1
2.1.1 The Radiation Environment ...... 1
2.1.1.1 The Luminance Range ...... 2
2.1.1.2 Retinal Illumination ...... 5
2.1.1.3 The Color Spectrum ...... 6
2.1.1.3.1 The dispersion associated with naming colors ...... 6
2.1.1.3.2 Two-dimensional representations of color names ...... 7
2.1.1.3.3 One-dimensional representation of color names ...... 8
2.1.1.4 Historically “Unique and Primal Colors” ...... 10
2.1.1.5 A theoretical basis for color names ...... 11
2.1.1.6 New definition of “Unique Colors” ...... 12
2.1.1.6.1 Background ...... 12
2.1.1.6.2 Color definitions by Barnes ...... 14
2.1.1.6.3 Specific definitions for additive color in object space ...... 15
2.1.1.6.4 Specific definitions for subtractive color in object space ...... 15
2.1.1.7 Sources and illuminants ...... 17
2.1.1.7.1 Quantum count, radiant energy & chemical energy vs wavelength ...... 18
2.1.1.7.2 Illuminants ...... 20
2.1.1.7.3 Sources ...... 21
2.1.1.8 Instrumentation used in physiological optics ...... 21
2.1.2 The scene environment ...... 22
2.1.2.1 Reflectance properties ...... 22
2.1.2.1 Fine surface structure related to printing ...... 22
2.1.3 The thermo-mechanical environment ...... 22
2.1.3.1 Sensitivity to temperature ...... 23
2.1.4 The hydraulic environment ...... 23
2.2 The retinotopic coordinate systems of vision, Map of the Retina ...... 23
2.2.1 Coordinates external to the eye ...... 24
2.2.1.1 Perimetry presentations are coarse approximations ...... 30
2.2.1.2 Approximations of the binocular field ...... 30
2.2.2 Coordinates internal to the eye ...... 33
2.2.2.1 Optically important dimensions ...... 33
2.2.2.2 Retinal dimensions ...... 34
2.2.2.3 Photoreceptor dimensions and refractive indices ...... 37
2.2.2.4 Precision of optical dimensions ...... 38
2.2.3 Coordinates relative to perception ...... 39
2.2.4 Other parameters related to embryology and morphology of the eye ...... 40
2.3 The organization and mixed coordinate systems of the chordate brain ...... 40
2.3.1 The general organization of the brain ...... 40
2.3.2 Coordinate transformations within the visual system ...... 44
2.3.2.1 The transition from retinotopic to abstract mapping ...... 44
2.3.2.2 The transition from abstract to inertial-topic mapping ...... 45
2.3.3 Initial embryology of the visual system ...... 45
2.3.3.1 Initial embryology of the brain ...... 45
2.3.3.1.1 The initial brain as an open-ended tube ...... 47
2.3.3.2 Initial embryology of the eye ...... 48
2.3.3.3 Initial embryology of the retina ...... 48
2.3.4 The connections between elements of the brain ...... 48
2.3.4.1 The connection architecture of vision ...... 48
2.3.4.1.1 The block diagram ...... 48
2.3.4.1.2 The morphological picture up to the cerebral cortex ...... 50
2.3.4.2 The optic nerve between the eyes and the midbrain ...... 51
2.3.4.3 The connections between the midbrain and the cerebral cortex ...... 53
2.3.5 Ultimate morphological and topological organization of the brain ...... 53
2.3.5.1 The morphological environment of the midbrain ...... 53
2.3.5.2 Correlating the optic tectum ...... 53
2.3.5.3 The lateral geniculate nuclei ...... 54
2.3.5.4 The Pulvinar, analog of the Pretectum ...... 54
2.3.5.5 The superior colliculus ...... 56
2.3.5.6 The cerebellum ...... 56
2.3.6 The morphological environment of the cerebral cortex ...... 56
2.3.6.x The definition of extrastriate visual cortex ...... 57
2.4 The Visual Optical System ...... 57
2.4.1 Overview ...... 57
2.4.1.1 General Discussion ...... 58
2.4.1.1.1 Shape of the eyes ...... 60
2.4.1.1.2 Dimensions of the eyes ...... 60
2.4.1.1.3 Optical environment of the eyes ...... 61
2.4.1.2 Fundamentals of optical analysis ...... 61
2.4.1.2.1 Ray tracing ...... 62
2.4.1.2.2 Diffraction effects ...... 66
2.4.1.2.3 Aberration effects ...... 69
2.4.1.2.4 Interference effects ...... 69
2.4.1.2.5 Inertial effects ...... 70
2.4.1.2.6 Modes of optical analysis ...... 70
2.4.1.3 State of the art in man-made optics ...... 71
2.4.2 The Physiological Optical System ...... 71
2.4.2.1 Spectral transmission of the lens group ...... 73
2.4.2.1.1 Spectral transmission of the lens in the human eye ...... 75
2.4.2.1.2 Spectral transmission of the macula lutea in the human eye ...... 78
2.4.2.2 Diffraction properties of the lens group (& neural retina) ...... 79
2.4.2.2.1 Variability of the lens to achieve focus ...... 81
2.4.2.3 Details of the Human eye ...... 81
2.4.2.2.2 Diffraction properties of the neural portion of the retina ...... 84
2.4.2.2.3 Diffraction properties of the photoreceptor cell ...... 84
2.4.2.2.4 Vascular support to the oculus ...... 84
2.4.3 Auxiliary Optical Techniques Used in Specific Visual Systems ...... 84
2.4.3.1 The Iris ...... 84
2.4.3.1.1 The Iris in the human ...... 85
2.4.3.2 The Elliptical Eyeball ...... 88
2.4.3.3 The Nictating Auxiliary Lens ...... 88
2.4.3.4 The tapetum ...... 88
2.4.3.5 The Spatially Separated Retinas ...... 88
2.4.4 The Field Lens and Image Plane of the Optical System ...... 89
2.4.3.5 The putative neural retina as a fiber optic plate ...... 91
2.4.5 Summary of the Optical System up to the Petzval Surface ...... 91
2.4.5.1 On-axis performance ...... 92
2.4.5.1.1 Aberrations in On-axis performance ...... 95
2.4.5.1.2 On-axis performance versus light level ...... 96
2.4.5.2 Off-axis performance ...... 96
2.4.5.2.1 Aberrations in Off-axis performance ...... 98
2.4.5.3 Gross distortion in the human eye ...... 99
2.4.6 The Optical System at and beyond the Petzval Surface ...... 99
2.4.6.1 The role of the spatial properties of the photoreceptor cell mosaic ...... 102
2.4.6.2 The optical elements of the Inner Segment ...... 103
2.4.6.3 The optical elements of the Outer Segment ...... 103
2.4.6.5 Modeling of the optics beyond the Petzval Surface ...... 105
2.4.6.6 Measurements of the optics beyond the Petzval Surface ...... 106
2.4.7 Summary of the Overall Optical System ...... 107
2.4.7.1 The optical parameters of the human visual system ...... 108
2.4.7.2 The acuity of the human visual system ...... 108
2.4.8 Ophthalmological models of the visual system ...... 113
2.4.8.1 Generic descriptions ...... 113
2.4.8.2 Existing ophthalmological models of the human eye ...... 115
2.4.8.2.1 Models introduced during the 1970's & 1980's ...... 115
2.4.8.2.2 Models and results introduced during the 1990's & 2000's ...... 117
2.4.8.2.3 Existing ophthalmological models of the macaque eye ...... 121
2.4.8.3 Performance evaluation tools of Optometry ...... 121
2.4.8.4 Performance evaluation tools of Ophthalmology ...... 123
2.4.8.4.1 Precise terminology ...... 123
2.4.8.5 Development of a detailed optical model for research purposes ...... 123
2.5 Motion and fixation in the operation of the eye ...... 125
2.5.1 Overview ...... 125
2.5.2 Muscles and motions in Chordata ...... 125
2.5.3 Adaptation of ocular motion ...... 128
2.5.4 Coordination of ocular motion ...... 129
2.5.5 Sensitivity suppression during eye movements ...... 129
2.6 The Temporal Input Environment of the eye ...... 129
2.6.1 The natural environment ...... 129
2.6.2 The man-made visual environment ...... 130
2.7 The unique environment of the photoreceptor cells ...... 130
2.7.1 Overview ...... 130
2.7.1.1 Histology, optical signal absorption and electrical signal generation ...... 130
2.7.1.1.1 Opsin generation and chromophore coating ...... 131
2.7.1.1.2 Chemical pathway ...... 132
2.7.1.1.3 Signal pathway ...... 132
2.7.1.1.4 Optical pathway ...... 132
2.8 The Signal Environment of the visual system ...... 132
2.8.1 The Major Signal Pathways of the Visual System ...... 132
2.8.1.1 Plan view ...... 133
2.8.1.1.1 The visual field of view ...... 135
2.8.1.1 Signal path definition through sectioning ...... 135
2.8.1.1.1 Physical impact of gross sectioning the optic nerve and optic tract ...... 139
2.8.1.1.2 Functional impact of differential sectioning of the optic nerve and optic tract ...... 139
2.8.1.1.3 Failures in the visual field not due to physical failure of the signaling paths ...... 142
2.8.1.2 Profile view of the visual system ...... 143
2.8.1.3 Fundamental signaling architecture of the visual system ...... 144
2.8.1.4 Organization of switching and computation centers ...... 146
2.8.1.4.1 The morphology of the LGN ...... 150
2.8.1.4.2 The electrophysiology of the LGN ...... 151
2.8.1.4.3 The purpose of the LGN ...... 151
2.8.2 The building blocks of the neural system ...... 151
2.8.3 The Characteristics of visual signals ...... 151
2.8.3.1 Quiescent signal levels of the system ...... 152
2.8.3.1.1 Quiescent conditions of photoreceptor cells ...... 153
2.8.3.1.2 Quiescent conditions of signal processing cells ...... 153
2.8.3.1.3 Quiescent conditions of ganglion cells ...... 153
2.8.3.2 Characteristics of the analog signals ...... 153
2.8.3.3 Characteristics of the pulse signals ...... 153
2.8.3.4 Characteristics of the currents found in visual systems ...... 154
2.8.3.5 The concept of antagonistic visual signals ...... 154
2.8.3.6 Extension of the Hering concept ...... 155
2.8.4 The form of Horizontal and Amacrine cells ...... 155
2.8.5 The electrical environment of the neural system ...... 155
2.8.5.1 The electrical impedance of the layers of the retina ...... 156
2.8.5.2 Characteristics of the signal encoding ...... 157
2.9 Summary ...... 157

List of Figures

Figure 2.1.1-1 Table of equivalent light levels drawn from the literature ...... 3
Figure 2.1.1-2 Perceived Brightness versus luminance ...... 5
Figure 2.1.1-3 Comparison of the subtractive color and additive color concepts ...... 17
Figure 2.2.1-1 Coordinates external to the eye ...... 24
Figure 2.2.1-2 Coordinate system for the ocular showing planes and axes of rotation ...... 25
Figure 2.2.1-3 Field of view of the typical human showing the coordinate system ...... 26
Figure 2.2.1-4 Perimetry of the right eye modified to show 105 degree circle ...... 28
Figure 2.2.1-5 Perimetric plot of color vision in the visual field ...... 29
Figure 2.2.1-6 Monocular and binocular fields of vision in human ...... 31
Figure 2.2.1-7 The visual fields of monocular, binocular and stereoptic vision ...... 32
Figure 2.2.2-1 Section through the right eye of the human ...... 33
Figure 2.2.2-2 Image construction for a wide angle optical system with aperture stop ...... 33
Figure 2.2.2-3 The relative sizes and positions of features on the human retina ...... 35
Figure 2.2.2-4 Schematic representation of histological section through fovea of rhesus ...... 36
Figure 2.2.2-5 Retina seen through the ophthalmoscope in a normal human ...... 37
Figure 2.2.2-6 Optical parameters of the photoreceptor cell ...... 37
Figure 2.2.3-1 Conceptual model of the information handling capability of the eye ...... 39
Figure 2.3.1-1 Diagrammatic overview of the subdivision of the CNS significant in vision ...... 42
Figure 2.3.1-2 CR Caricature showing the position of the Pulvinar, LGN and Thalamus in relation to the old brain ...... 43
Figure 2.3.3-1 The morphogenesis and gross anatomy of the brain related to vision ...... 46
Figure 2.3.4-1 Reproduction of Figure 15.2.4-3 to illustrate the top level topology of the visual system ...... 49
Figure 2.3.4-2 The visual-sensory system viewed from the left side ...... 51
Figure 2.3.4-3 Caricature of the afferent neural paths between the eyes and the midbrain ...... 52
Figure 2.3.4-4 Organization of the oculomotor nucleus viewed from above ...... 53
Figure 2.3.5-1 Three dimensional view of the right human thalamus ...... 55
Figure 2.4.1-1 A variety of animal eye profiles ...... 58
Figure 2.4.1-2 CR A variety of crystalline lenses, from both Arthropoda and Mollusca ...... 60
Figure 2.4.1-3 Principal rays and points in a non-immersed optical system ...... 63
Figure 2.4.1-4 (a) A paraxial presentation of the human eye. (b) A full field presentation ...... 67
Figure 2.4.2-1 The human physiological optics described as a series of antennas ...... 72
Figure 2.4.2-2 CR Relative transmission spectra of lenses from three types of the teleost fish ...... 74
Figure 2.4.2-3 CR Absorbance spectra of chordate lenses ...... 75
Figure 2.4.2-4 Various determinations of the optical density of human eye lenses ...... 76
Figure 2.4.2-5 Optical density of living human crystalline lenses ...... 77
Figure 2.4.2-6 Macula lutea absorption versus the theoretical equivalent ...... 78
Figure 2.4.2-7 Coarse cytological structure of the human lens ...... 81
Figure 2.4.2-8 Refractive index profile of the human crystalline lens ...... 83
Figure 2.4.3-1 Average pupil diameter as a function of background luminance ...... 85
Figure 2.4.3-2 Dynamics of the human iris ...... 87
Figure 2.4.3-3 The optical system of Anableps tetrophthalmus ...... 88
Figure 2.4.4-1 CR Cross section of the fovea of M. mulatta in blue and green light ...... 90
eye ...... 93
Figure 2.4.5-2 MTF of the human eye at its point of fixation using a logarithmic scale ...... 94
Figure 2.4.5-4 Visual acuity as a function of object brightness ...... 96
Figure 2.4.5-5 DUPL Reduction in visual acuity with visual field eccentricity ...... 97
Figure 2.4.5-6 MTF of the human eye at angles relative to the line of fixation on a logarithmic scale ...... 98
Figure 2.4.5-7 Caricature of pattern imaged on the retina as a function of object size ...... 99
Figure 2.4.6-1 Alternate optical configurations applicable to the retina ...... 101
Figure 2.4.6-2 CR Details of the Outer Segments in the eye of a rhesus monkey ...... 104
Figure 2.4.6-3 Location of the Petzval surface relative to the elements of the retina ...... 105
Figure 2.4.6-4 The fraction of light accepted by a waveguide ...... 105
Figure 2.4.7-1 Human visual acuity in red and blue light ...... 109
Figure 2.4.7-2 DUPL Acuity as a function of eccentricity from the line of sight ...... 111
Figure 2.4.7-3 Composite of grating acuity as a function of eccentricity ...... 112
Figure 2.4.8-1 The generic schematic of the animal eye ...... 114
Figure 2.4.8-2 Optical (stage 0) versus retinal (stage 1) resolution in the human eye ...... 120
Figure 2.4.8-3 A novel eye chart for evaluating both the on-axis and off-axis performance ...... 121
Figure 2.5.2-1 CR The extraocular muscles of the eye and their innervation ...... 125
Figure 2.7.1-1 Cytology of the photoreceptor cell at the OS/IS interface ...... 131
Figure 2.8.1-1 Block diagram showing major visual signal paths ...... 133
Figure 2.8.1-2 Plan view of the human visual system as seen from BELOW ...... 133
Figure 2.8.1-3 Plan view of the human visual signal pathways from ABOVE ...... 136
Figure 2.8.1-4 Schematic of projections from the retina to the thalamus in vision ...... 140
Figure 2.8.1-5 Cross section of optic nerve in caricature ...... 141
Figure 2.8.1-6 Macular sparing in the right homonymous field in spite of right homonymous hemianopia ...... 142
Figure 2.8.1-7 Effect of a right temporal lobectomy on the visual field EXPLAIN NOTATION ...... 143
Figure 2.8.1-8 Profile view of the human visual system ...... 144
Figure 2.8.1-9 Fundamental signaling architecture of the human visual system ...... 145
Figure 2.8.1-10 Data pathways, junction points and major feature extraction engines ...... 147
Figure 2.8.1-11 Proposed morphology of the lateral geniculate nucleus ...... 150

(Active) SUBJECT INDEX (using advanced indexing option)

1010 ...... 144
3D ...... 25
3-D ...... 115
95% ...... 73, 105
action potential ...... 154
Activa ...... 131, 132, 151-153
adaptation ...... 3, 8, 19, 106, 109, 128, 157
amplification ...... 19, 151, 153
amygdala ...... 40, 41
analytical mode ...... 31
arborization ...... 155
area 6 ...... 148
area 7 ...... 48, 49, 53, 57, 133-137, 143, 144, 147, 148, 151
area 7a ...... 143, 144, 146, 148
astigmatism ...... 92
attention ...... 44, 57, 128, 148
awareness mode ...... 31
bifurcation ...... 137, 139
Black Body ...... 20, 21
bleaching ...... 106
blindsight ...... 133, 137, 138
BOLD ...... 13
Brachium ...... 43, 55, 146
bray ...... 48, 139
broadband ...... 14, 15
C.I.E. ...... 7, 8, 12, 17, 18, 20, 21
calibration ...... 2, 7, 69, 108
cerebellum ...... 55, 56
cerebrum ...... 41, 55
chord ...... 141
CIE ...... 6, 7, 11, 19, 30
CIE 1976 ...... 7, 11
colliculus ...... 43-45, 47, 49, 55, 56, 133-135, 143, 144, 146, 148, 149, 151
coma ...... 117, 118
commissure ...... 40, 43, 44, 48, 50, 51
compensation ...... 84, 127, 129, 157
computation ...... 31, 93, 146, 156
computational ...... 39, 95, 125, 126, 129, 138-140, 144
computational anatomy ...... 138-140
confirmation ...... 39, 132
consciousness ...... 41
continuum ...... 12, 104
cross section ...... 22, 90, 139, 141
cross-section ...... 34, 36, 90, 116
cyclopean ...... 32
database ...... 39
depth perception ...... 129
diencephalon ...... 40-42, 46
disparity ...... 57
DUPL ...... 97, 111
dynamic range ...... 3, 5, 85, 156
Edinger-Westphal ...... 53
evolution ...... 2, 58, 60, 118
expanded ...... 3, 11, 19, 42, 44, 62, 76
fasciculus ...... 146
feedback ...... 155, 156
focal points ...... 66
Fourier transform ...... 97, 102, 107
foveal sparing ...... 137
foveola sparing ...... 137
Gaussian ...... 13, 34, 58, 61, 62, 64, 65, 70, 72, 79, 92, 112, 121
glutamate ...... 132
Grandmother ...... 137
half-amplitude ...... 78
hole ...... 83
homogeneous ...... 69, 80
inferior colliculus ...... 47
latency ...... 157
lateral geniculate ...... 44, 47, 51-55, 133, 135, 139-141, 147, 148, 150, 151
lgn/occipital ...... 137
light adaptation ...... 106
limbic system ...... 45
Limulus ...... 59
liquid-crystalline ...... 132, 152
macular degeneration ...... 77
macular sparing ...... 106, 133, 137, 142
magno-cellular ...... 147, 150
medial geniculate ...... 54, 55
mesencephalon ...... 46, 47
mesotopic ...... 11, 13, 19, 85
metamers ...... 11
Meyer’s loop ...... 50, 133, 135
midbrain ...... 44, 51-54
modulation ...... 39, 79, 93, 95, 113, 123, 154, 157
monopulse ...... 153
morphogenesis ...... 45, 46, 48, 71
MRI ...... 43, 44
myelinated ...... 149
Myelination ...... 149
N2 ...... 105
narrow band ...... 13, 15, 20
neurite ...... 155
nodal points ...... 33, 34, 65, 70, 92
Node of Ranvier ...... 151
noise ...... 3, 77, 96, 157
Nyquist ...... 94, 95, 102
OCT ...... 130
orbital ...... 126
P/D equation ...... 113
pain ...... 3
parametric ...... 95, 116, 156
parietal lobe ...... 44, 50
parvocellular ...... 141, 144, 148, 150, 151
PEEP ...... 121
perceptual space ...... 12
perigeniculate ...... 133, 137, 139-141, 146
perigeniculate nucleus ...... 133, 137, 139, 141, 146
perimetry ...... 23, 27-30, 135, 137
pgn/pulvinar ...... 133
phylogenic tree ...... 24, 125
plasticity ...... 48, 139
poditic ...... 132, 153, 155
pons ...... 42, 51, 53
POS ...... 49, 53, 54, 56, 143, 148
POSS ...... 54
precision optical servomechanism ...... 137
Pretectal ...... 133
Pretectum ...... 44, 47-50, 52-54, 56, 57, 133-135, 137, 143-148
protocol ...... 11, 76, 108
pulse-to-pulse ...... 157
pulvinar ...... 42-44, 48-50, 53-57, 133, 134, 137, 143, 144, 146-148, 151
Pulvinar pathway ...... 48, 53, 133, 134, 137, 143, 144, 147, 148
quadrigemina ...... 47
quantum-mechanical ...... 18, 19, 21, 77, 154
Rayleigh region ...... 75
reading ...... 3, 83, 126, 135
roughness ...... 22
saliency map ...... 31, 135
servomechanism ...... 137, 139, 143, 148
signal-to-noise ...... 3, 96
signal-to-noise ratio ...... 3, 96
spatial dispersion ...... 151
spectral colors ...... 8
stage 0 ...... 19, 119, 120
stage 1 ...... 19, 120
stage 2 ...... 139
stage 4 ...... 55
stellate ...... 152
stereopsis ...... 135, 149
Stiles-Crawford ...... 5, 19, 66, 70, 72, 73, 79, 83, 84, 95, 100, 114, 117, 119
stress ...... 85, 133, 158
superior colliculus ...... 43-45, 47, 49, 55, 56, 133-135, 143, 146, 148, 149, 151
synapse ...... 52, 139
syndrome ...... 137
temporal lobe ...... 49
thalamic reticular nucleus ...... 40-42, 55
thalamus ...... 41-44, 47, 48, 53-56, 133, 134, 140, 141, 143
threshold ...... 19, 94, 127
topography ...... 1, 48
topology ...... 1, 49, 155
torsion ...... 125
transduction ...... 19, 156
translation ...... 6, 12
trans- ...... 156
tremor ...... 49, 58, 99, 126-129, 135, 137, 148
vestibular system ...... 47, 51, 56, 143
visual acuity ...... 36, 96, 97, 109, 110, 128
visual cortex ...... 53, 55, 57, 133-135, 137
waveguide ...... 34, 37, 38, 82, 84, 99-103, 105, 119
white matter ...... 146
xxx ...... 18, 44, 47, 54, 75, 78, 83, 104, 106, 107, 112, 123, 127, 137, 149, 151, 158
[xxx ...... 1, 3, 11, 55, 91, 109, 124