
Color and Geometrical Structure in Images
Applications in microscopy

Jan-Mark Geusebroek

This book was typeset by the author using LaTeX 2ε.

Cover: Victory Boogie Woogie, by Piet Mondriaan, 1942–1944, oil painting with pieces of plastic and paper. Reproduction and permission for printing kindly provided by Gemeentemuseum Den Haag.

Copyright © 2000 by Jan-Mark Geusebroek. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission from the author ([email protected]).

ISBN 90-5776-057-6

Color and Geometrical Structure in Images
Applications in microscopy

ACADEMIC DISSERTATION

for the degree of doctor at the Universiteit van Amsterdam, by authority of the Rector Magnificus, prof. dr J. J. M. Franse, to be defended in public before a committee appointed by the College voor Promoties, in the Aula of the University, on Thursday 23 November 2000 at 12:00

by

Jan-Mark Geusebroek

born in Amsterdam

Promotiecommissie (doctoral committee):
Prof. dr ir A. W. M. Smeulders
Dr H. Geerts
Prof. dr J. J. Koenderink
Prof. dr G. D. Finlayson
Prof. dr ir L. van Vliet
Prof. dr ir C. A. Grimbergen
Prof. dr ir F. C. A. Groen
Prof. dr P. van Emde Boas

Faculty: Natuurwetenschappen, Wiskunde & Informatica (Science, Mathematics & Computer Science), Kruislaan 403, 1098 SJ Amsterdam, The Netherlands

The investigations described in this thesis were carried out at the Janssen Research Foundation, Beerse, Belgium.

The study was supported by the Janssen Research Foundation.

Advanced School for Computing and Imaging

The work described in this thesis has been carried out at the Intelligent Sensory Information Systems group, within the graduate school ASCI. ASCI dissertation series number 54.

Contents

1 Introduction 1
  1.1 Part I: Color 2
  1.2 Part II: Geometrical Structure 4

2 Color and Scale 13
  2.1 Color and Observation Scale 14
    2.1.1 The Spectral Structure of Color 14
    2.1.2 The Spatial Structure of Color 16
  2.2 Colorimetric Analysis of the Gaussian Color Model 17
  2.3 Conclusion 19

3 A Physical Basis for Color Constancy 23
  3.1 Color Image Formation Model 25
    3.1.1 Color Formation for Reflection of Light 25
    3.1.2 Color Formation for Transmission of Light 27
    3.1.3 Special Cases 29
  3.2 Illumination Invariant Properties of Object Reflectance or Transmittance 30
  3.3 Experiments 32
    3.3.1 Overview 32
    3.3.2 Small-Band Experiment 35
    3.3.3 Broad-Band Experiment 36
    3.3.4 Colorimetric Experiment 36
  3.4 Discussion 38

4 Measurement of Color Invariants 43
  4.1 Color Image Formation Model 45
  4.2 Determination of Color Invariants 46
    4.2.1 Invariants for White but Uneven Illumination 46
    4.2.2 Invariants for White but Uneven Illumination and Matte, Dull Surfaces 48


    4.2.3 Invariants for White, Uniform Illumination and Matte, Dull Surfaces 49
    4.2.4 Invariants for Colored but Uneven Illumination 51
    4.2.5 Invariants for a Uniform Object 52
    4.2.6 Summary of Color Invariants 53
    4.2.7 Geometrical Color Invariants in Two Dimensions 54
  4.3 Measurement of Color Invariants 55
    4.3.1 Measurement of Geometrical Color Invariants 56
    4.3.2 Discriminative Power for RGB Recording 61
    4.3.3 Evaluation of Scene Geometry Invariance 63
    4.3.4 Localization Accuracy for the Geometrical Color Invariants 64
  4.4 Conclusion 66

5 Robust Autofocusing in Microscopy 73
  5.1 Material and Methods 74
    5.1.1 The Focus Score 74
    5.1.2 Measurement of the Focus Curve 75
    5.1.3 Sampling the Focus Curve 77
    5.1.4 Large, Flat Preparations 77
    5.1.5 Preparation and Image Acquisition 78
    5.1.6 Evaluation of Performance for High NA 81
  5.2 Results 82
    5.2.1 Autofocus Performance Evaluation 82
    5.2.2 Evaluation of Performance for High NA 83
    5.2.3 Comparison of Performance with Small Derivative Filters 85
    5.2.4 General Observations 85
  5.3 Discussion 86

6 Segmentation of Tissue Architecture by Distance Graph Matching 91
  6.1 Materials and Methods 93
    6.1.1 Hippocampal Tissue Preparation 93
    6.1.2 Image Acquisition and Software 93
    6.1.3 K-Nearest Neighbor Graph 94
    6.1.4 Distance Graph Matching 94
    6.1.5 Distance Graph Comparison 96
    6.1.6 Cost Functions 97
    6.1.7 Evaluation of Robustness on Simulated Point Patterns 98
    6.1.8 Algorithm Robustness Evaluation 99
    6.1.9 Robustness for Scale Measure 100
    6.1.10 Cell Detection 100
    6.1.11 Hippocampal CA Region Segmentation 100

  6.2 Results 101
    6.2.1 Algorithm Robustness Evaluation 101
    6.2.2 Robustness for Scale Measure 105
    6.2.3 Hippocampal CA Region Segmentation 105
  6.3 Discussion 107
  6.4 Appendix: Dynamic Programming Solution for String Matching 109

7 A Minimum Cost Approach for Segmenting Networks of Lines 115
  7.1 Network Extraction Algorithm 116
    7.1.1 Vertex Detection 116
    7.1.2 Line Point Detection 116
    7.1.3 Line Tracing 118
    7.1.4 Graph Extraction 119
    7.1.5 Edge Saliency and Basin Coverage 120
    7.1.6 Thresholding the Saliency Hierarchy 121
    7.1.7 Overview 122
    7.1.8 Error Analysis 122
  7.2 Illustrations 125
    7.2.1 Heart Tissue Segmentation 125
    7.2.2 Neurite Tracing 125
    7.2.3 Crack Detection 125
    7.2.4 Directional Line Detection 126
  7.3 Conclusion 127

8 Discussion 137
  8.1 Color 137
  8.2 Geometrical Structure 139
  8.3 General Conclusion 140

Samenvatting (Summary in Dutch) 143

Chapter 1

Introduction

When looking at Victory Boogie Woogie, by the Dutch painter Piet Mondrian, the blocks appear jumpy and unstable, as if they move [33]. As the painting hangs firmly fixed to the wall, the visual effect results from within the brain as it processes the incoming visual information. In fact, a visual scene entering the brain is fed into three subsystems [24, 34]. One subsystem segments the scene into parts by apparent color contrast. This subsystem gives us the ability to see the various colored patches as different entities. A second subsystem provides us with the color of the parts. This subsystem is used for identifying the patches based on their color. The third subsystem localizes objects in the world. It tells us where the patches are in the scene. In contrast to the other two, the latter system is color blind, judging the scene on intensity variations only. Cooperation between the first subsystem, segmenting the different colored parts, and the latter subsystem, localizing the different patches, results in ambiguity when the intensity of neighboring color patches is similar. This phenomenon is at work in Victory Boogie Woogie in the yellow stripes on a white background, as described by Livingstone [33]. Apart from the color appearance of the blocks, Mondrian arranged the blocks to form a pattern of perpendicular lines. The visual arrangement is sifted out by the third, monochromatic subsystem, which extracts the spatial organization of the scene. The lines are effectuated by an intensity contrast with the background. The yellow stripes have no such contrast, yet lines appear because the brain fills in the gaps. In Victory Boogie Woogie, Mondrian combined contrast and the geometrical arrangement of details to stimulate a visual sensation in the brain. Like Victory Boogie Woogie, this thesis deals with both color and spatial structure. Part I describes the spatial interaction between colors. Color is discussed in its physical environment of light. Consequently, the physics of light reflection is included in the human subsystem dealing with shape extraction. Part II describes the quantification of geometrical structure, specifically applied to microscopy, although some

of the concepts may have a broader application span. Tissue at the microscopical level often exhibits a regular pattern. Automatic extraction of such arrangements is considered, aiming at drug screening for pharmaceutical research. The two parts are mostly separated from one another, as is the case for perception. Using the two parts in conjunction in future research may yield synergy in color image processing.

1.1 Part I: Color

Color seems to be an inalienable property of objects. It is the object that has that color. However, the heart of the matter is quite different. Human perception actively assigns colors to an observed scene. There is a discrepancy between the physics of light and color as signified by the brain. One undeniable fact is that color perception is bootstrapped by a physical cause: it results from light falling onto the eye. Objects in the world respond to daylight by reflecting different parts of the incoming light spectrum. This specific component of reflection mainly instantiates the color appearance of the object. Another fact is that color perception results from experience. We assign the color of an orange its label as we have learned by experience, being capable to do so by the biological mechanism. Experience has led to the assignment of names to colors. It would have given language no advantage to label colors if we could not compare them with memory. A last contribution to color as we know it is evolution, which has shaped the actual mechanism of color vision. Evolution, such that a species adapts to its environment, has driven the use of color by perception. Color is one of the main cues for segmenting objects in a scene. The difference in color of the leaves that obscure oranges allows for easy detection of the fruit. Color has a high identification power for an object. Orange things in a tree are clearly not lemons, although the shape is similar. Color in combination with shading provides a clue for depth perception, hence geometry, of an object. For monochromatic vision, such clues are highly ambiguous. Hence, color perception gives primates an advantage over monochromatic species. In terms of physics, daylight is reflected by an object and reaches the eye. It is the reflectance ratio over the wavelengths of radiant energy that is an object property; hence, the reflection function of an orange indeed is a physical characteristic of the fruit. However, the amount of radiant energy falling onto the retina depends on both the reflectance function and the light source illuminating the object. Still, we observe an orange to be orange in sunlight or by candlelight, independent of shadow, frontal illumination, or oblique illumination. All these variables influence the energy distribution as it enters the eye, the variability being imposed by the physical laws of light reflection. Human color vision has adapted to include these physical laws, due to which we neglect the scene-induced variations. Observation of color by the human eye proceeds by absorption of light by three different receptors. The pigment sensitivities extend over a broad range of wavelengths, but they can be characterized by the spectral region for which sensitivity is maximum. The maximum absorption of the pigments is at short, middle, and long wavelengths, for which reason the receptors are named blue, green, and red cones. Before transmission of the image to the brain, the outputs of the receptors are combined in two ways. First, the outputs of the three cone types at each position on the retina are combined to represent three opponent color axes. One axis describes intensity, ranging from black to white. A second axis yields yellow to blue colors, and a third axis results in red to green colors. The combinations are known as opponent colors, first described by Hering [22]. A second combination yields the comparison of neighboring opponent color responses. Such a spatial comparison is performed within circular areas, called receptive fields.
The opponent colors are spatially compared, yielding black–white, yellow–blue, and red–green receptive fields. Receptive fields are found in primates at different sizes, and for different opponent pathways [4, 5, 10, 36, 57]. The existence of receptive fields implies that color is a spatial property. Hence, color perception is the result of contrast between opponent spectral responses. In computer vision, as opposed to machine vision [48], one would like to mimic human color perception to analyze and interpret color images. For example, in biological and medical research, tissue sections are evaluated by light microscopy. These sections are stained with standard methods, often more than one hundred years old, and especially designed to be discriminated by the human eye. Hence, segmentation of classically stained preparations can best be based on human color perception. However, the understanding of the human visual system has not yet reached the level needed to describe the world with the mathematical precision of computer vision [53]. A color image may be defined as a coherent measurement of the spatio-spectral energy density reflected by a scene. Hence, a color image is the result of the interaction between light source and scene, observed by a measurement probe. From a computer vision perspective, this definition raises two fundamental problems. First, how to combine spectral measurements and spatial structure? Common practice in color image processing is to use the color information without considering the spatial coherence [11, 14, 16, 17, 37, 42, 45]. Early attempts to include spatial scale in color observation are the work by Land [31], and the work of Di Zenzo [58] and Cumani [3]. Applications of color receptive fields in computer vision include [15, 19, 40, 50, 55]. Although these methods intuitively capture the structure of color images, no foundation for color observation is available. A solid physical basis for combining color information with spatial resolution would solve a fundamental problem: how to probe the spatio-spectral influx of information-bearing energy. A second fundamental question is how to integrate the physical laws of light reflection into color measurement. Modeling the physical process of color image formation provides a clue to the object-specific parameters [6, 25, 28, 29, 30, 39, 43, 46, 56, 59]. The question boils down to deriving the invariant properties of color vision [1, 9, 13,

16, 17, 20, 46]. By invariance we mean a property f of object t which receives value f(t) regardless of unwanted conditions W in the appearance of t [47]. For human color vision, the group of disturbing conditions W′ is categorized by shadow, highlights, light source, and scene geometry. Scene geometry is determined by the number of light sources, the light source directions, the viewing direction, and the object shape. The invariant class W′ is referred to as photometric invariance. For the observation of images, geometric invariance is of importance [12, 18, 26, 32, 49]. The group of spatial disturbing conditions is given by translation, rotation, and observation scale. Since the human eye projects the three-dimensional world onto a two-dimensional image, the group may be extended with projection invariance. Both photometric and geometric invariance are required for a color vision system to reduce the complexity intrinsic to color images. In this thesis, these two fundamental questions are considered, aiming at robust measurement of color invariants. Here, color invariance represents the combined photometric and geometric invariance class. The aim is to describe the local structure of color images in a systematic, irreducible, and complete sense. The problem is approached from a measurement theoretic viewpoint, by using aperture functions as given by linear scale-space theory. Robust color measurement is achieved by selecting the appropriate scale for the aperture function. Conventional scale-space theory observes the world without imposing a priori information. As a result, the spatial operators defined in scale-space theory are translation, rotation, and scale invariant. More importantly, classical scale-space apertures introduce no spurious details due to the measurement process [26, 27]. In Chapter 2, we use the general scale-space assumptions to formulate a theory of color measurement [7]. However, our visual system is the result of evolution. When concerned with color, evolution is guided by the physical laws of light reflection, imposing the effects of shadows, shading, and highlights [29, 30, 56]. Hence, human color perception is constrained by the physical laws of light. Chapter 3 describes the physics of color image formation, and makes a connection between color invariance derived from physics and color constancy as characteristic for human color perception. In Chapter 4, the physical laws for color image formation are exploited to derive a complete, irreducible system of color invariants.

1.2 Part II: Geometrical Structure

The second part of this thesis is concerned with the extraction of the geometrical arrangement of local structure. The processes of cell differentiation and apoptosis in growing tissue result in the clustering of cells forming functional parts [2, 8, 23]. Often, these functional parts exhibit a regular structure, which is the result of cell division and specialization. The minimization of occupied space, a natural constraint imposed by gravity [51], yields dense packing of cells into somewhat regular arrays, and hence regularly shaped cell patterns. The geometrical arrangement of structures in tissues may reveal differences between physiological and pathological conditions. Classical light microscopy is often used to observe tissue structure. The tissue of interest is cut into slices thin enough to observe the structures by transmission of light. Contrast is added to the slices by staining procedures, resulting in the highlighting of structures against a uniform background. The chemical state of cells is quantified after staining procedures. Tissue architecture is analyzed by the spatial arrangement of cells, neurites, blood vessels, fibers, and other cellular objects. The regularity of cell aggregates in tissues does not imply that the quintessence of the arrangement can be captured in an algorithm. Biological variety causes clusters to be irregular. Observation by light microscopy demands the extraction of sliced samples from the tissue. The deformation caused by cutting the three-dimensional structure into two-dimensional transections again results in spatial distortion of cluster regularity. These distortions impose high demands on the robustness of the algorithm. We consider the fundamental problem of geometric structure: how to capture the arrangement of local structures? For example, a tissue may be considered, at a very naive level, as a cluster of cells. Hence, a cell may be considered a local marker, whereas the arrangement of cells is characteristic for the tissue. Such arrangements impose a grammar of local structures. Graph morphology is the basic tool to describe these grammars [21, 35, 38, 41, 44, 52, 54]. Chapter 6 describes the extraction of architectures by example structures. For a regular architecture, a small sample of the geometric arrangement captures the essential information for automatic extraction. This fact is exploited in the segmentation of tissue architecture in histological preparations. The method is validated by comparing algorithm performance with the performance of an expert. Chapter 7 presents an algorithm for the extraction of line networks from local image structure. A network is given by knots and their interconnections. The extraction of knots and line points yields a localized description of the network. A graph-based method is applied to the extraction of cardiac myocytes from heart muscle sections. To derive tissue architecture related parameters, as described in Chapter 6 and Chapter 7, the tissue needs to be digitized into the computer. For tissue sections, often large compared to the microscope field of view, automatic acquisition involves a scanning process. During scanning, the microscope needs to be focused when the tissue surface is not planar, as is often the case. Since sufficiently accurate methods for autofocusing are not available, the second part of this thesis starts with Chapter 5, describing a robust method for focusing preparations in scanning light microscopy.

Bibliography

[1] E. Angelopoulou, S. Lee, and R. Bajcsy. Spectral gradient: A material descriptor invariant to geometry and incident illumination. In Proceedings of the Seventh IEEE International Conference on Computer Vision, pages 861–867. IEEE Computer Society, 1999.

[2] R. Chandebois. Cell sociology: A way of reconsidering the current concepts of morphogenesis. Acta Bioth., 25:71–102, 1976.

[3] A. Cumani. Edge detection in multispectral images. CVGIP: Graphical Models and Image Processing, 53(1):40–51, 1991.

[4] D. M. Dacey and B. B. Lee. The “blue-on” opponent pathway in primate retina originates from a distinct bistratified ganglion cell type. Nature, 367:731–735, 1994.

[5] D. M. Dacey, B. B. Lee, D. K. Stafford, J. Pokorny, and V. C. Smith. Horizontal cells of the primate retina: Cone specificity without spectral opponency. Science, 271:656–659, 1996.

[6] K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink. Reflectance and texture of real world surfaces. ACM Trans Graphics, 18:1–34, 1999.

[7] A. Dev and R. van den Boomgaard. Color and scale: The spatial structure of color images. Technical report, ISIS institute, Department of Computer Science, University of Amsterdam, Amsterdam, The Netherlands, 1999.

[8] K. J. Dormer. Fundamental Tissue Geometry for Biologists. Cambridge Univ. Press, London, 1980.

[9] M. D’Zmura and P. Lennie. Mechanisms of color constancy. J. Opt. Soc. Am. A, 3(10):1662–1672, 1986.

[10] S. Engel, X. Zhang, and B. Wandell. Colour tuning in human visual cortex measured with functional magnetic resonance imaging. Nature, 388:68–71, 1997.

[11] G. D. Finlayson. Color in perspective. IEEE Trans. Pattern Anal. Machine Intell., 18(10):1034–1038, 1996.

[12] L. Florack. Image Structure. Kluwer Academic Publishers, Dordrecht, 1997.

[13] D. H. Foster and S. M. C. Nascimento. Relational colour constancy from invariant cone-excitation ratios. Proc. R. Soc. London B, 257:115–121, 1994.

[14] B. V. Funt and G. D. Finlayson. Color constant color indexing. IEEE Trans. Pattern Anal. Machine Intell., 17(5):522–529, 1995.

[15] C. Garbay, G. Brugal, and C. Choquet. Application of colored image analysis to bone marrow cell recognition. Analyt. Quantit. Cytol., 3:272–280, 1981.

[16] R. Gershon, D. Jepson, and J. K. Tsotsos. Ambient illumination and the determination of material changes. J. Opt. Soc. Am. A, 3:1700–1707, 1986.

[17] T. Gevers and A. W. M. Smeulders. Color based object recognition. Pat. Rec., 32:453–464, 1999.

[18] L. J. Van Gool, T. Moons, E. J. Pauwels, and A. Oosterlinck. Vision and Lie’s approach to invariance. Image Vision Comput., 13(4):259–277, 1995.

[19] D. Hall, V. Colin de Verdière, and J. L. Crowley. Object recognition using coloured receptive fields. In Proceedings Sixth European Conference on Computer Vision (ECCV), volume 1, pages 164–177, LNCS 1842. Springer-Verlag, 26th June–1st July, 2000.

[20] G. Healey and A. Jain. Retrieving multispectral satellite images using physics-based invariant representations. IEEE Trans. Pattern Anal. Machine Intell., 18:842–848, 1996.

[21] H. J. A. M. Heijmans, P. Nacken, A. Toet, and L. Vincent. Graph morphology. J. Visual Communication Image Representation, 3:24–38, 1992.

[22] E. Hering. Outlines of a Theory of the Light Sense. Harvard University Press, Cambridge, MA, 1964.

[23] H. Honda. Geometrical models for cells in tissues. Int. Rev. Cytol., 81:191–248, 1983.

[24] D. H. Hubel. Eye, Brain, and Vision. Scientific American Library, New York, NY, 1988.

[25] D. B. Judd and G. Wyszecki. Color in Business, Science, and Industry. Wiley, New York, NY, 1975.

[26] J. J. Koenderink. The structure of images. Biol. Cybern., 50:363–370, 1984.

[27] J. J. Koenderink and A. J. van Doorn. Receptive field families. Biol. Cybern., 63:291–297, 1990.

[28] J. J. Koenderink and A. J. van Doorn. Illuminance texture due to surface mesostructure. J. Opt. Soc. Am. A, 13:452–463, 1996.

[29] P. Kubelka. New contribution to the optics of intensely light-scattering materials. Part I. J. Opt. Soc. Am., 38(5):448–457, 1948.

[30] P. Kubelka and F. Munk. Ein Beitrag zur Optik der Farbanstriche. Z. Techn. Physik, 12:593, 1931.

[31] E. H. Land. The retinex theory of color vision. Sci. Am., 237:108–128, 1977.

[32] T. Lindeberg. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[33] M. Livingstone. Art, illusion and the visual system. Sci. Am., 258:78–85, 1988.

[34] M. Livingstone and D. Hubel. Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240:740–749, 1988.

[35] R. Marcelpoil and Y. Usson. Methods for the study of cellular sociology: Voronoï diagrams and parametrization of the spatial relationships. J. Theor. Biol., 154:359–369, 1992.

[36] R. H. Masland. Unscrambling color vision. Science, 271:616–617, 1996.

[37] B. A. Maxwell and S. A. Shafer. Physics-based segmentation of complex objects using multiple hypotheses of image formation. Comput. Vision Image Understanding, 65(2):269–295, 1997.

[38] F. Meyer. Skeleton and perceptual graphs. Signal Processing, 16:335–363, 1989.

[39] K. D. Mielenz, K. L. Eckerle, R. P. Madden, and J. Reader. New reference spectrophotometer. Appl. Optics, 12(7):1630–1641, 1973.

[40] M. Mirmehdi and M. Petrou. Segmentation of color textures. IEEE Trans. Pattern Anal. Machine Intel., 22(2):142–159, 2000.

[41] A. Mojsilović, J. Kovačević, J. Hu, R. J. Safranek, and S. K. Ganapathy. Matching and retrieval based on the vocabulary and grammar of color patterns. IEEE Trans. Image Processing, 9(1):38–54, 2000.

[42] S. K. Nayar and R. M. Bolle. Computing reflectance ratios from an image. Pat. Rec., 26:1529–1542, 1993.

[43] M. Oren and S. K. Nayar. Generalization of the Lambertian model and implications for machine vision. Int. J. Computer Vision, 14:227–251, 1995.

[44] J. Palmari, C. Dussert, Y. Berthois, C. Penel, and P. M. Martin. Distribution of estrogen receptor heterogeneity in growing MCF–7 cells measured by quantitative microscopy. Cytometry, 27:26–35, 1997.

[45] G. Sapiro. Color and illuminant voting. IEEE Trans. Pattern Anal. Machine Intel., 21(11):1210–1215, 1999.

[46] S. A. Shafer. Using color to separate reflection components. Color Res. Appl., 10(4):210–218, 1985.

[47] A. W. M. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. Content based image retrieval at the end of the early years. submitted to IEEE Trans. Pattern Anal. Machine Intell.

[48] H. M. G. Stokman. Robust Photometric Invariance in Machine Color Vision. PhD thesis, University of Amsterdam, Amsterdam, The Netherlands, 2000.

[49] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[50] B. Thai and G. Healey. Modeling and classifying symmetries using a multiscale opponent color representation. IEEE Trans. Pattern Anal. Machine Intell., 20(11):1224–1235, 1998.

[51] D. W. Thompson. On Growth and Form. Cambridge University Press, London, England, 1971.

[52] A. Toet. Hierarchical clustering through morphological graph transformation. Pat. Rec. Let., 12:391–399, 1991.

[53] D. Travis. Effective Color Displays, Theory and Practice. Academic Press, 1991.

[54] L. Vincent. Graphs and mathematical morphology. Signal Processing, 16:365–388, 1989.

[55] S. G. Wolf, R. Ginosar, and Y. Y. Zeevi. Spatio-chromatic model for colour image processing. In Proceedings 12th IAPR International Conference on Pattern Recognition, volume 1, pages 599–601. IEEE, October 9–13, 1994.

[56] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, New York, NY, 1982.

[57] R. A. Young. The Gaussian derivative theory of spatial vision: Analysis of cortical cell receptive field line-weighting profiles. Technical Report GMR-4920, General Motors Research Center, Warren, MI, 1985.

[58] S. Di Zenzo. A note on the gradient of a multi-image. Comput. Vision Graphics Image Processing, 33:116–125, 1986.

[59] R. Zhou, E. H. Hammond, and D. L. Parker. A multiple wavelength algorithm in color image analysis and its applications in stain decomposition in microscopy images. Med. Phys., 23(12):1977–1986, 1996.

Part I

Color

Chapter 2

Color and Scale
The Spatial Structure of Color Images

appeared in the proceedings of the Sixth European Conference on Computer Vision, vol. 1, pp. 331–341, 2000.

“[Light] and color are field phenomena, not point phenomena.” – Edwin H. Land

There has been a recent revival in the analysis of color in computer vision. This is mainly due to the common knowledge that more visual information leads to easier interpretation of the visual scene. A color image is easier to segment than a grey-valued image, since some edges are only visible in the color domain and will not be detected in the grey-valued image. An area of large interest is searching for particular objects in images and image databases, for which color is a feature with a wide range of data values and hence high potential for discriminability. Color can thus be seen as an additional cue in image interpretation. Moreover, color can be used to extract object reflectance robust to a change in imaging conditions [4, 5, 14, 15]. Therefore, color features are well suited for the description of an object. Colors are only defined in terms of human observation. Modern analysis of color has started in colorimetry, where the spectral content of tri-chromatic stimuli is matched by a human observer, resulting in the well-known XYZ color matching functions [17]. However, from the pioneering work of Land [13] we know that a perceived color does not directly correspond to the spectral content of the stimulus; there is no one-to-one mapping of spectral content to perceived color. For example, a colorimetry purist will not consider brown to be a color, whereas computer vision practitioners would like to be able to define brown in an image when searching on colors. Hence, it is not only the spectral energy distribution coding color information, but also the spatial

configuration of colors. We aim at a physical basis for the local interpretation of color images. Common image processing sense tells us that the grey-value of a particular pixel is not a meaningful entity. The value 42 by itself tells us little about the meaning of the pixel in its environment. It is the local spatial structure of an image that has a close geometrical interpretation [10]. Yet representing the spatial structure of a color image is an unsolved problem. The theory of scale-space [10, 16] adheres to the fact that observation and scale are intertwined; a measurement is performed at a certain resolution. Differentiation is one of the fundamental operations in image processing, and one which is nicely defined [3] in the context of scale-space. In this chapter we discuss how to represent color as a scalar field embedded in a scale-space paradigm. As a consequence, the differential geometry framework is extended to the domain of color images. We demonstrate color invariant edge detectors which are robust to shadow and highlight boundaries. The chapter is organized as follows. Section 2.1 considers the embedding of color in the scale-space paradigm. In section 2.2 we derive estimators for the parameters in the scale-space model, and give optimal values for these parameters. The resulting sensitivity curves are colorimetrically compared with human color vision.

2.1 Color and Observation Scale

A spatio-spectral energy distribution is only measurable at a certain spatial resolution and a certain spectral bandwidth. Hence, physically realizable measurements inherently imply integration over the spectral and spatial dimensions. The integration reduces the infinitely dimensional Hilbert space of spectra at an infinitesimally small spatial neighborhood to a limited number of measurements. As suggested by Koenderink [11], general aperture functions, or the Gaussian and its derivatives, may be used to probe the spatio-spectral energy distribution. We emphasize that no essentially new color model is proposed here, but rather a theory of color measurement. The specific choice of color representation is irrelevant for our purpose. For convenience we first concentrate on the spectral dimension; later on we show the extension to the spatial domain.

2.1.1 The Spectral Structure of Color

From scale-space theory we know how to probe a function at a certain scale: the probe should have a Gaussian shape in order to prevent the creation of extra details in the function when observed at a higher scale (lower resolution) [10]. As suggested by Koenderink [11], we can probe the spectrum with a Gaussian. In this section, we consider the Gaussian as a general probe for the measurement of spatio-spectral differential quotients. Formally, let $E(\lambda)$ be the energy distribution of the incident light, where $\lambda$ denotes wavelength, and let $G(\lambda; \lambda_0, \sigma_\lambda)$ be the Gaussian at spectral scale $\sigma_\lambda$ positioned at $\lambda_0$. The spectral energy distribution may be approximated by a Taylor expansion at $\lambda_0$,

$$E(\lambda) = E^{\lambda_0} + \lambda E_\lambda^{\lambda_0} + \frac{1}{2}\lambda^2 E_{\lambda\lambda}^{\lambda_0} + \dots \qquad (2.1)$$

Measurement of the spectral energy distribution with a Gaussian aperture yields a weighted integration over the spectrum. The observed energy in the Gaussian color model, at infinitely small spatial resolution, approaches in second order to

$$\hat{E}^{\sigma_\lambda}(\lambda) = \hat{E}^{\lambda_0,\sigma_\lambda} + \lambda \hat{E}_\lambda^{\lambda_0,\sigma_\lambda} + \frac{1}{2}\lambda^2 \hat{E}_{\lambda\lambda}^{\lambda_0,\sigma_\lambda} + \dots \qquad (2.2)$$

where

$$\hat{E}^{\lambda_0,\sigma_\lambda} = \int E(\lambda)\, G(\lambda; \lambda_0, \sigma_\lambda)\, d\lambda \qquad (2.3)$$

measures the spectral intensity,

$$\hat{E}_\lambda^{\lambda_0,\sigma_\lambda} = \int E(\lambda)\, G_\lambda(\lambda; \lambda_0, \sigma_\lambda)\, d\lambda \qquad (2.4)$$

measures the first order spectral derivative, and

$$\hat{E}_{\lambda\lambda}^{\lambda_0,\sigma_\lambda} = \int E(\lambda)\, G_{\lambda\lambda}(\lambda; \lambda_0, \sigma_\lambda)\, d\lambda \qquad (2.5)$$

measures the second order spectral derivative. Further, $G_\lambda$ and $G_{\lambda\lambda}$ denote derivatives of the Gaussian with respect to $\lambda$. Note that, throughout the thesis, we assume scale normalized Gaussian derivatives to probe the spectral energy distribution.
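For concreteness, the spectral measurements of eqs. (2.3)–(2.5) can be approximated by numerical quadrature on a sampled spectrum. The following Python sketch is our illustration, not part of the thesis; the function names and the 5 nm sampling grid are assumptions, and the derivative probes are scale normalized as stated above.

```python
import numpy as np

def gaussian(lam, lam0, sigma):
    """Gaussian aperture G(lambda; lambda0, sigma_lambda)."""
    return np.exp(-0.5 * ((lam - lam0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def gaussian_d1(lam, lam0, sigma):
    """First derivative of G with respect to lambda, scale normalized by sigma."""
    return -((lam - lam0) / sigma) * gaussian(lam, lam0, sigma)

def gaussian_d2(lam, lam0, sigma):
    """Second derivative of G, scale normalized by sigma^2."""
    return (((lam - lam0) ** 2 - sigma ** 2) / sigma ** 2) * gaussian(lam, lam0, sigma)

def gaussian_color_measurements(lam, E, lam0=520.0, sigma=55.0):
    """Approximate eqs. (2.3)-(2.5) by trapezoidal integration of a sampled
    spectrum E over the wavelength grid lam (in nm)."""
    e0 = np.trapz(E * gaussian(lam, lam0, sigma), lam)
    e1 = np.trapz(E * gaussian_d1(lam, lam0, sigma), lam)
    e2 = np.trapz(E * gaussian_d2(lam, lam0, sigma), lam)
    return e0, e1, e2

# usage: a flat (equal-energy) spectrum gives near-zero first and second order responses
lam = np.arange(380.0, 781.0, 5.0)
print(gaussian_color_measurements(lam, np.ones_like(lam)))
```

The defaults $\lambda_0 = 520$ nm and $\sigma_\lambda = 55$ nm anticipate the optimum derived in section 2.2.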

Definition 1 (Gaussian Color Model) The Gaussian color model measures the coefficients $\hat{E}^{\lambda_0,\sigma_\lambda}$, $\hat{E}_\lambda^{\lambda_0,\sigma_\lambda}$, $\hat{E}_{\lambda\lambda}^{\lambda_0,\sigma_\lambda}$, ... of the Taylor expansion of the Gaussian weighted spectral energy distribution at $\lambda_0$ and scale $\sigma_\lambda$.

One might be tempted to consider higher (larger than two) order structure of the smoothed spectrum. However, the subspace spanned by the human visual system is of dimension 3, and hence higher order spectral structure cannot be observed by the human visual system.

Figure 2.1: The probes for spatial color consist of probing the product of the spatial and the spectral space with a Gaussian aperture.

2.1.2 The Spatial Structure of Color

Introduction of spatial extent in the Gaussian color model yields a local Taylor expansion at wavelength $\lambda_0$ and position $\vec{x}_0$. Each measurement of a spatio-spectral energy distribution has a spatial as well as a spectral resolution. The measurement is obtained by probing an energy density volume in a three-dimensional spatio-spectral space, where the size of the probe is determined by the observation scales $\sigma_\lambda$ and $\sigma_x$, see fig. 2.1. Hence, we do not consider spatial scale and spectral scale separately, but actually probe an energy density volume in the 3d spatio-spectral space, where the "size" of the volume is specified by the observation scales. We can describe the observed spatio-spectral energy density $\hat{E}(\lambda, \vec{x})$ of light as a Taylor series for which the coefficients are given by the energy convolved with Gaussian derivatives:

$$\hat{E}(\lambda, \vec{x}) = \hat{E} + \begin{pmatrix} \vec{x} \\ \lambda \end{pmatrix}^T \begin{bmatrix} \hat{E}_{\vec{x}} \\ \hat{E}_\lambda \end{bmatrix} + \frac{1}{2} \begin{pmatrix} \vec{x} \\ \lambda \end{pmatrix}^T \begin{bmatrix} \hat{E}_{\vec{x}\vec{x}} & \hat{E}_{\vec{x}\lambda} \\ \hat{E}_{\lambda\vec{x}} & \hat{E}_{\lambda\lambda} \end{bmatrix} \begin{pmatrix} \vec{x} \\ \lambda \end{pmatrix} + \dots \qquad (2.6)$$

where

$$\hat{E}_{\vec{x}^i \lambda^j}(\lambda, \vec{x}) = E(\lambda, \vec{x}) * G_{\vec{x}^i \lambda^j}(\lambda, \vec{x}; \sigma_\lambda, \sigma_x) \,. \qquad (2.7)$$

Here, $G_{\vec{x}^i \lambda^j}(\lambda, \vec{x}; \sigma_\lambda, \sigma_x)$ are the spatio-spectral probes, or color receptive fields. The coefficients of the Taylor expansion of $\hat{E}(\lambda, \vec{x})$ represent the local image structure completely. Truncation of the Taylor expansion results in an approximate representation, optimal in the least squares sense. For human vision, it is known that the Taylor expansion is spectrally truncated at second order [8]. Hence, higher order spectral derivatives do not affect color as observed by the human visual system. Therefore, three receptive field families should be considered: the luminance receptive fields as known from luminance scale-space [12], extended with a yellow-blue receptive field family measuring the first order spectral derivative, and a red-green receptive field family probing the second order spectral derivative. For human vision, the Taylor expansion for luminance is spatially truncated at fourth order [18].
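In practice, eq. (2.7) separates into a spectral projection per pixel (for instance via the RGB transform of eq. (2.12) below) followed by spatial convolution with Gaussian derivative kernels. A minimal Python sketch, assuming SciPy is available; the function name is ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatio_spectral_probe(channels, i, j, sigma_x):
    """Measure E_hat_{x^i lambda^j} of eq. (2.7): take the j-th spectral
    derivative channel (j = 0, 1, 2 for E_hat, E_lambda, E_lambdalambda)
    and convolve it with a spatial Gaussian derivative of order i along x.

    channels: list of three 2D arrays [E_hat, E_lambda, E_lambdalambda].
    """
    # order=(0, i) differentiates i times along the second (x) axis;
    # the factor sigma_x**i makes the spatial derivative scale normalized.
    return sigma_x ** i * gaussian_filter(channels[j], sigma=sigma_x, order=(0, i))

# usage: first order spatial derivative of the yellow-blue channel at sigma_x = 2 pixels
E0, E1, E2 = (np.random.rand(64, 64) for _ in range(3))
E_x_lambda = spatio_spectral_probe([E0, E1, E2], i=1, j=1, sigma_x=2.0)
```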

2.2 Colorimetric Analysis of the Gaussian Color Model

The eye projects the infinitely dimensional spectral density function onto a 3d "color" space. Not any 3d subspace of the Hilbert space of spectra equals the subspace that nature has chosen. Any subspace we create with an artificial color model should be reasonably close, in some metrical sense, to the spectral subspace spanned by the human visual system. Formally, the infinitely dimensional spectrum $e$ is projected onto a 3d space $c$ by $c = A^T e$, where $A^T = (X\ Y\ Z)$ represents the color matching matrix. The subspace in which $c$ resides is defined by the color matching functions $A^T$. The range $\mathcal{R}(A^T)$ defines which spectral distributions $e$ can be reached from $c$, and the nullspace $\mathcal{N}(A^T)$ defines which spectra $e$ cannot be observed in $c$. Since any spectrum $e = e_\mathcal{R} + e_\mathcal{N}$ can be decomposed into a part that resides in $\mathcal{R}(A^T)$ and a part that resides in $\mathcal{N}(A^T)$, we define

Definition 2 The observable part of the spectrum equals $e_\mathcal{R} = \Pi_\mathcal{R}\, e$, where $\Pi_\mathcal{R}$ is the projection onto the range of the human color matching functions $A^T$.

Definition 3 The non-observable (or metameric black) part of the spectrum equals $e_\mathcal{N} = \Pi_\mathcal{N}\, e$, where $\Pi_\mathcal{N}$ is the projection onto the nullspace of the human color matching functions $A^T$.

The projection on the range of $A^T$ is given by [1]

$$\Pi_\mathcal{R} : A^T \mapsto \mathcal{R}\left(A^T\right) = A \left(A^T A\right)^{-1} A^T \qquad (2.8)$$

and the projection on the nullspace by

$$\Pi_\mathcal{N} : A^T \mapsto \mathcal{N}\left(A^T\right) = I - A \left(A^T A\right)^{-1} A^T = \Pi_\mathcal{R}^\perp \,. \qquad (2.9)$$

Any spectral probe $B^T$ that has the same range as $A^T$ is said to be colorimetric with $A^T$, and hence differs only by an affine transformation. An important property of the range projector $\Pi_\mathcal{R}$ is that it uniquely specifies the subspace. Thus, we can rephrase the previous statement into:

Proposition 4 The human color space is uniquely defined by $\mathcal{R}\left(A^T\right)$. Any color model $B^T$ is colorimetric with $A^T$ if and only if $\mathcal{R}\left(A^T\right) = \mathcal{R}\left(B^T\right)$.

In this way we can tell whether a certain color model is colorimetric with the human visual system. Naturally, this is a formal definition. It is not well suited for a measurement approach, where the color subspaces are measured with a given precision. A definition of the difference between subspaces is given by [7, Section 2.6.3]:

Proposition 5 The largest principal angle $\theta$ between the color subspaces given by their color matching functions $A^T$ and $B^T$ equals

$$\sin \theta(A^T, B^T) = \left\| \mathcal{R}\left(A^T\right) - \mathcal{R}\left(B^T\right) \right\|_2 \,.$$
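The projectors of eqs. (2.8)–(2.9) and the principal-angle criterion of Proposition 5 translate directly into numerical code. A sketch with our own function names, assuming the color matching functions are sampled over wavelength and stacked as the columns of a matrix:

```python
import numpy as np

def range_projector(A):
    """Pi_R = A (A^T A)^{-1} A^T of eq. (2.8), for a matrix A holding one
    sampled color matching function per column."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

def largest_principal_angle(A, B):
    """sin(theta) = || R(A^T) - R(B^T) ||_2 of Proposition 5, returned in
    degrees; the matrix 2-norm is the largest singular value of the difference."""
    s = np.linalg.norm(range_projector(A) - range_projector(B), ord=2)
    return np.degrees(np.arcsin(min(s, 1.0)))
```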

Up to this point, we have established expressions describing the similarity between different subspaces. We are now in a position to compare the subspace of the Gaussian color model with that of the human visual system by using the XYZ color matching functions. Hence, the parameters of the Gaussian color model may be optimized to capture a spectral subspace similar to the one spanned by human vision, see fig. 2.2. Let the Gaussian color matching functions be given by $G(\lambda_0, \sigma_\lambda)$. We have 2 degrees of freedom in positioning the subspace of the Gaussian color model: the mean $\lambda_0$ and the scale $\sigma_\lambda$ of the Gaussian. We wish to find the optimal subspace that minimizes the largest principal angle between the subspaces, i.e.:

$$B(\lambda_0, \sigma_\lambda)^T = \left[ G(\lambda; \lambda_0, \sigma_\lambda) \;\; G_\lambda(\lambda; \lambda_0, \sigma_\lambda) \;\; G_{\lambda\lambda}(\lambda; \lambda_0, \sigma_\lambda) \right]$$
$$\sin\theta = \operatorname*{argmin}_{\lambda_0, \sigma_\lambda} \left\| \mathcal{R}\left(A^T\right) - \mathcal{R}\left(B(\lambda_0, \sigma_\lambda)^T\right) \right\|_2$$

An approximate solution is obtained for $\lambda_0 = 520$ nm and $\sigma_\lambda = 55$ nm. The corresponding angles between the principal axes of the Gaussian sensitivities and the 1931 and 1964 CIE standard observers are given in tab. 2.1. Figure 2.3 shows the different sensitivities, together with the optimal (least squares) transform from the XYZ sensitivities to the Gaussian basis, given by

$$\begin{pmatrix} \hat{E} \\ \hat{E}_\lambda \\ \hat{E}_{\lambda\lambda} \end{pmatrix} = \begin{pmatrix} -0.48 & 1.2 & 0.28 \\ 0.48 & 0 & -0.4 \\ 1.18 & -1.3 & 0 \end{pmatrix} \begin{pmatrix} \hat{X} \\ \hat{Y} \\ \hat{Z} \end{pmatrix} \,. \qquad (2.10)$$

Since the transformed sensitivities are a linear (affine) transformation of the original XYZ sensitivities, the transformation is colorimetric with human vision. The transform is close to the Hering basis for color vision [8], for which the yellow-blue pathway indeed is found in the visual system of primates [2].
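The reported optimum ($\lambda_0 = 520$ nm, $\sigma_\lambda = 55$ nm) can be reproduced in spirit by a plain grid search over the two free parameters, reusing the Gaussian probes and the principal-angle measure sketched earlier; the grid ranges below are illustrative assumptions:

```python
import numpy as np

def fit_gaussian_color_model(lam, A):
    """Search for the (lambda0, sigma_lambda) minimizing the largest principal
    angle between the Gaussian basis [G, G_lambda, G_lambdalambda] and the
    subspace of the sampled color matching functions A (columns X, Y, Z)."""
    best = (None, None, np.inf)
    for lam0 in np.arange(480.0, 561.0, 5.0):      # illustrative search range
        for sigma in np.arange(30.0, 91.0, 5.0):
            B = np.column_stack([gaussian(lam, lam0, sigma),
                                 gaussian_d1(lam, lam0, sigma),
                                 gaussian_d2(lam, lam0, sigma)])
            theta = largest_principal_angle(A, B)
            if theta < best[2]:
                best = (lam0, sigma, theta)
    return best  # expected near (520, 55) for the CIE standard observers
```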


Figure 2.2: Cohen's fundamental matrix $\mathcal{R}$ for the CIE 1964 standard observer (a), and for the Gaussian color model ($\lambda_0 = 520$ nm, $\sigma_\lambda = 55$ nm) (b).

An RGB camera approximates the CIE 1931 XYZ basis for colorimetry by the linear transform [9]

$$\begin{pmatrix} \hat{X} \\ \hat{Y} \\ \hat{Z} \end{pmatrix} = \begin{pmatrix} 0.62 & 0.11 & 0.19 \\ 0.3 & 0.56 & 0.05 \\ -0.01 & 0.03 & 1.11 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \,. \qquad (2.11)$$

The best linear transform from XYZ values to the Gaussian color model is given by (eq. 2.10). Hence, the product of (eq. 2.11) and (eq. 2.10) gives the desired implementation of the Gaussian color model in RGB terms,

$$\begin{pmatrix} \hat{E} \\ \hat{E}_\lambda \\ \hat{E}_{\lambda\lambda} \end{pmatrix} = \begin{pmatrix} 0.06 & 0.63 & 0.27 \\ 0.3 & 0.04 & -0.35 \\ 0.34 & -0.6 & 0.17 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix} \,. \qquad (2.12)$$

A better approximation to the Gaussian color model may be obtained for known camera sensitivities. Figure 2.4 shows an example image and its Gaussian color model components.
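As a practical note, eq. (2.12) amounts to one 3×3 matrix multiplication per pixel. A sketch with our function name, assuming a linear RGB input image:

```python
import numpy as np

# rows of eq. (2.12): E_hat, E_lambda, E_lambdalambda from (R, G, B)
M_RGB2E = np.array([[0.06,  0.63,  0.27],
                    [0.30,  0.04, -0.35],
                    [0.34, -0.60,  0.17]])

def rgb_to_gaussian_color(img):
    """Map a linear RGB image (H x W x 3) to the Gaussian color model
    components of eq. (2.12)."""
    out = img @ M_RGB2E.T  # per-pixel matrix multiplication
    return out[..., 0], out[..., 1], out[..., 2]  # E_hat, E_lambda, E_lambdalambda
```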

2.3 Conclusion

We have established the measurement of spatial color information from RGB images, based on the Gaussian scale-space paradigm. We have shown that the formation of color images yields a spatio-spectral integration process at a certain spatial and spectral resolution. Hence, measurement of color images implies probing a three-dimensional energy density at a spatial scale $\sigma_x$ and spectral scale $\sigma_\lambda$. The Gaussian aperture may be used to probe the spatio-spectral energy distribution.

Table 2.1: Angles between the principal axes for various color systems. For determining the optimal values λ0, σλ, the largest angle θ1 is minimized. The distance between the Gaussian sensitivities for the optimal values λ0 = 520 nm, σλ = 55 nm and the different CIE colorimetric systems is comparable. Note the difference between the CIE systems is 9.8◦.

       Gauss – XYZ 1931   Gauss – XYZ 1964   XYZ 1931 – 1964
θ1     26°                23.5°              9.8°
θ2     21.5°              17.5°              3.9°
θ3     3°                 3°                 1°


Figure 2.3: The Gaussian sensitivities at $\lambda_0 = 520$ nm and $\sigma_\lambda = 55$ nm (a). The best linear transformation from the CIE 1964 XYZ sensitivities (b) to the Gaussian basis is shown in (c). Note the correspondence between the transformed sensitivities and the Gaussian color model.


Figure 2.4: The example image (a) and its color components $\hat{E}$ (b), $\hat{E}_\lambda$ (c), and $\hat{E}_{\lambda\lambda}$ (d), respectively. Note that for the color component $\hat{E}_\lambda$ achromaticity is shown in grey, negative bluish values are shown in dark, and positive yellowish values in light. Further, for $\hat{E}_{\lambda\lambda}$ achromaticity is shown in grey, negative greenish in dark, and positive reddish in light.

We have achieved a spatial color model, founded in physics as well as in measurement science. The parameters of the Gaussian color model have been estimated such that a spectral subspace similar to that of human vision is captured. The Gaussian color model solves the fundamental problem of color and scale by integrating the spatial and color information. The model measures the coefficients of the Taylor expansion of the spatio-spectral energy distribution. Hence, the Gaussian color model describes the local structure of color images. As a consequence, the differential geometry framework is extended to the domain of color images. Spatial differentiation of expressions derived from the Gaussian color model is inherently well-posed, in contrast with the often ad hoc methods for color edge detection. Application areas include physics-based vision [5], image database searches [6], and object tracking.

Bibliography

[1] J. B. Cohen and W. E. Kappauf. Color mixture and fundamental metamers: Theory, algebra, geometry, application. Am. J. Psych., 98:171–259, 1985.

[2] D. M. Dacey and B. B. Lee. The “blue-on” opponent pathway in primate retina originates from a distinct bistratified ganglion cell type. Nature, 367:731–735, 1994.

[3] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever. Cartesian differential invariants in scale-space. Journal of Mathematical Imaging and Vision, 3(4):327–348, 1993.

[4] R. Gershon, D. Jepson, and J. K. Tsotsos. Ambient illumination and the deter- mination of material changes. J. Opt. Soc. Am. A, 3:1700–1707, 1986.

[5] T. Gevers and A. W. M. Smeulders. Color based object recognition. Pat. Rec., 32:453–464, 1999.

[6] T. Gevers and A. W. M. Smeulders. Content-based image retrieval by viewpoint- invariant image indexing. Image Vision Comput., 17(7):475–488, 1999.

[7] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins Press Ltd., London, 1996.

[8] E. Hering. Outlines of a Theory of the Light Sense. Harvard University Press, Cambridge, MA, 1964.

[9] ITU-R Recommendation BT.709. Basic parameter values for the HDTV standard for the studio and for international programme exchange. Technical Report BT.709 [formerly CCIR Rec. 709], ITU, 1211 Geneva 20, Switzerland, 1990.

[10] J. J. Koenderink. The structure of images. Biol. Cybern., 50:363–370, 1984.

[11] J. J. Koenderink and A. Kappers. Color Space. Utrecht University, The Netherlands, 1998.

[12] J. J. Koenderink and A. J. van Doorn. Receptive field families. Biol. Cybern., 63:291–297, 1990.

[13] E. H. Land. The retinex theory of color vision. Sci. Am., 237:108–128, 1977.

[14] K. D. Mielenz, K. L. Eckerle, R. P. Madden, and J. Reader. New reference spectrophotometer. Appl. Optics, 12(7):1630–1641, 1973.

[15] S. A. Shafer. Using color to separate reflection components. Color Res. Appl., 10(4):210–218, 1985.

[16] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[17] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, New York, NY, 1982.

[18] R. A. Young. The Gaussian derivative theory of spatial vision: Analysis of cortical cell receptive field line-weighting profiles. Technical Report GMR-4920, General Motors Research Center, Warren, MI, 1985.

Chapter 3

A Physical Basis for Color Constancy

Part of this work has appeared in the proceedings of the Second International Conference on Scale-Space Theory in Computer Vision, 1999, pp. 459–464.

“As organisms grew more intricate, their sense organs multiplied and became both more complex and more delicate. More messages of greater variety were received from and about the external environment. Along with that (whether as cause or effect we cannot tell), there developed an increasing complexity of the nervous system, the living instrument that interpreted and stored the data collected by the sense organs.” – Isaac Asimov

A well known property of human vision, known as color constancy, is the ability to correct for color deviations caused by a difference in illumination. Although the effect is a long standing research topic [13, 15, 21], the mechanism involved is only partly resolved. A common approach to investigate color constant behavior is by psychophysical experiments [1, 13, 14]. Despite the exact nature of such experiments, there are intrinsic difficulties in explaining the experimental results. For relatively simple experiments, the results may not explain in enough detail the mechanism underlying color constancy. For example, in [14] the same stimulus patch, either illuminated by the test illuminant or by the reference illuminant, was presented to the left and right eye. The subject was asked to match the appearance of the color under the reference illuminant to the color under the test illuminant. As discussed by the authors, the experiment is synthetic in that the visual scene lacks a third dimension. Although the results correspond to their predictions, they are unable to prove their theory on natural scenes, the scenes where shadow plays an important role. On the other hand,

for complex experiments, with inherently a large number of variables involved, the results do not describe color constancy in isolation from other perceptual mechanisms. In [1], a more natural scene is used, in that objects were placed in the experimentation room. The observer judged the appearance of a test patch mounted on the far wall of the room. The observer was asked to vary the color of the test patch so that it appeared achromatic. The color constancy reported is excellent, but the experiments could not be interpreted in enough detail to explain the results. Hence, a fundamental problem in experimental colorimetry is that the complex experimental environment necessary to examine color constancy makes it hard to draw conclusions. An alternative approach to reveal the mechanisms involved in color constancy is by considering the spectral image formation. Modeling the physical process of spectral image formation provides insight into the effect of different parameters on object reflectance [2, 3, 4, 5, 6, 19]. In this chapter, we aim at a physical basis for color constancy rather than a psychophysical one. Object reflectance is well modeled by Shafer [20], based on the older Kubelka-Munk theory [11, 12]. The Kubelka-Munk theory models the reflected and transmitted spectrum of a colored layer, based on a material dependent scattering and absorption function, under the assumption that light is isotropically scattered within the material. The theory has proven to be successful for a wide variety of materials and applications [8, 22]. The theory unites spectral color formation for both reflecting and transparent materials into one photometric model. Therefore, the Kubelka-Munk theory is well suited for determining material properties from color measurements. In Chapter 4, the use of the Kubelka-Munk model is demonstrated for the measurement of object reflectance from color images, under various general assumptions regarding imaging conditions. In this chapter, we concentrate on color constant measurement of object color under both reflection and transmission of light. When considering the estimation of material properties on the basis of local measurements, differential equations constitute a natural framework to describe the physical process of image formation. A well known technique from scale-space theory [10] is the convolution of a signal with a derivative of the Gaussian kernel to obtain the derivative of the signal. The Gaussian function regularizes the underlying distribution, resulting in robustness against noise. The standard deviation σ of the Gaussian determines the observation scale. Introduction of wavelength in the scale-space paradigm, as suggested by Koenderink [9], leads to a spatio-spectral family of Gaussian aperture functions. These color receptive fields are introduced in Chapter 2 as the Gaussian color model. The Gaussian color model provides a physical basis, compatible with colorimetry, for the measurement of color constant object properties. The color constancy problem is often posed as retrieving the unknown illuminant from a given scene [2, 3, 14, 19]. Different from that approach, features invariant to a change in illuminant can be developed [4, 5, 6]. In this chapter, we focus on differential expressions which are robust to a change in illumination color. The performance of these color invariants is demonstrated by experiments on spectral data. Additionally, robustness against changes in the imaging conditions, such as camera viewpoint, illumination direction, and object geometry, is achieved, as demonstrated in Chapter 4. The organization of the chapter is as follows. In section 3.1, color image formation is modeled by means of the Kubelka-Munk theory. Invariant differential expressions which meet the given constraints are derived from the model in section 3.2. Application of the Gaussian color model described in Chapter 2 implies measurement of the derived spatio-spectral differential invariants. Section 3.3 describes the experimental setup and color constancy results for the proposed method, compared to well-known methods from the literature. Finally, a confrontation between physics-based and perception-based color constancy is given in section 3.4.

3.1 Color Image Formation Model

In this section, image formation is modeled by means of the Kubelka-Munk theory [8, 22] for colorant layers. Under the assumption that light within the material is isotropically scattered, the material layer may be characterized by a wavelength dependent scatter coefficient and absorption coefficient. The class of materials for which the theory is useful ranges from dyed paper and textiles, opaque plastics, and paint films, up to enamel and dental silicate cements [8]. The model may be applied to both reflecting and transparent materials.

3.1.1 Color Formation for Reflection of Light

Consider a homogeneously colored material patch of uniform thickness $d$ and infinitesimal area, characterized by its absorption coefficient $k(\lambda)$ and scatter coefficient $s(\lambda)$. When illuminated by incident light with spectral distribution $e(\lambda)$, light scattering within the material causes diffuse body reflection (fig. 3.1), while Fresnel interface reflectance occurs at the surface boundaries. When the thickness of the layer is such that a further increase in thickness does not affect the reflected color, Fresnel reflectance at the back surface may be neglected. The incident light is partly reflected at the front surface, and partly enters the material, is isotropically scattered, and in part again passes the front-surface boundary. The reflected spectrum in the viewing direction $\vec{v}$, ignoring secondary scattering after internal boundary reflection, is given by [8, 22]:

$$E_R(\lambda) = e(\lambda) \left(1 - \rho_f(\lambda, \vec{n}, \vec{s}, \vec{v})\right)^2 R_\infty(\lambda) + e(\lambda)\, \rho_f(\lambda, \vec{n}, \vec{s}, \vec{v}) \qquad (3.1)$$

where $\vec{n}$ is the surface patch normal, $\vec{s}$ the direction of the illumination source, and $\rho_f$ the Fresnel front surface reflectance coefficient in the viewing direction.



Figure 3.1: Illustration of the photometric model. The object, refractive index $n_2$, is illuminated by $e(\lambda)$ (medium refractive index $n_1$), and light is reflected and scattered in the viewing direction.

The body reflectance

$$R_\infty(\lambda) = a(\lambda) - b(\lambda) \qquad (3.2)$$

k(λ) a(λ) = 1 + , b(λ) = a(λ)2 1 . (3.3) s(λ) − p Simplification is obtained by considering neutral interface reflection, assuming that the Fresnel reflectance coefficient has a constant value over the spectrum. For commonly used materials, interface reflection is constant with respect to wavelength within a few percent across the [8, 18]. Equation (3.1) reduces to

$$E_R(\lambda) = e(\lambda) \left(1 - \rho_f(\vec{n}, \vec{s}, \vec{v})\right)^2 R_\infty(\lambda) + e(\lambda)\, \rho_f(\vec{n}, \vec{s}, \vec{v}) \,. \qquad (3.4)$$

The influence of the Fresnel reflectance varies from perfectly diffuse body reflectance ($\rho_f = 0$, or Lambertian reflection) to total mirroring of the illuminating source ($\rho_f = 1$). Hence, the spectral color of $E_R$ is an additive mixture of the color of the light source and the perfectly diffuse body reflectance color. Because of projection of the energy distribution on the image plane, the vectors $\vec{n}$, $\vec{s}$ and $\vec{v}$ will depend on the position at the imaging plane. The energy of the incoming spectrum at a point $\vec{x}$ on the image plane is then related to

$$E_R(\lambda, \vec{x}) = e(\lambda, \vec{x}) \left(1 - \rho_f(\vec{x})\right)^2 R_\infty(\lambda, \vec{x}) + e(\lambda, \vec{x})\, \rho_f(\vec{x}) \qquad (3.5)$$

where the spectral distribution at each point $\vec{x}$ is generated off a specific material patch.
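For illustration, eqs. (3.2)–(3.4) are directly computable from sampled absorption and scattering coefficients. A minimal Python sketch; the function names and the assumption of arrays sampled on a common wavelength grid are ours:

```python
import numpy as np

def body_reflectance(k, s):
    """Kubelka-Munk body reflectance R_inf of eqs. (3.2)-(3.3), with the
    absorption k(lambda) and scattering s(lambda) coefficients sampled on a
    common wavelength grid."""
    a = 1.0 + k / s
    b = np.sqrt(a ** 2 - 1.0)
    return a - b

def reflected_spectrum(e, k, s, rho_f):
    """Reflected spectrum E_R of eq. (3.4), assuming neutral interface
    reflection (a constant Fresnel coefficient rho_f)."""
    return e * (1.0 - rho_f) ** 2 * body_reflectance(k, s) + e * rho_f
```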



Figure 3.2: Illustration of the photometric model. The object, with refractive index $n_2$, is illuminated by e(λ) (medium refractive index $n_1$). When the material is transparent, light is transmitted through the material, enters medium $n_3$, and is observed.

The major assumption made for the model of (eq. 3.5) is that locally planar surface patches are examined, for which the material is homogeneously colored. These constraints are imposed by the Kubelka-Munk theory, resulting in isotropic scattering of light within the material. The assumption is valid when the resolution is fine enough to consider locally uniformly colored patches, whereas individual staining particles are not resolved. Further, the thickness of the layer is assumed to be such that no light reaches the other side of the material. For everyday scenes, these assumptions seem to be justified. Concerning the Fresnel reflectance, the photometric model assumes a neutral interface at the surface patch. As discussed in [18, 20], deviations of $\rho_f$ over the visible spectrum are small for commonly used materials; therefore the Fresnel reflectance coefficient may be considered constant. The internally Fresnel-reflected light contributes little in many cases [22], and is ignored in the model.

3.1.2 Color Formation for Transmission of Light

Consider a homogeneously colored material patch of uniform thickness d and infinitesimal area, characterized by its absorption coefficient k(λ) and scatter coefficient s(λ). When illuminated by incident light with spectral distribution e(λ), absorption and scattering by the material determine its transmission color (fig. 3.2), while Fresnel interface reflectance occurs at both the front and back surface boundaries.

When the layer is thin, such that the material is transparent, the transmitted spectrum through the layer in the viewing direction $\vec{v}$, ignoring the effect of interreflections between the material surfaces, is given by [8, 22]:
\[
E_T(\lambda) = \frac{e(\lambda)\,\bigl(1 - \rho_f(\lambda, \vec{n}, \vec{s}, \vec{v})\bigr)\bigl(1 - \rho_b(\lambda, \vec{n}, \vec{s}, \vec{v})\bigr)\, b(\lambda)}{a(\lambda) \sinh[b(\lambda)\, s(\lambda)\, l(\vec{n}, \vec{s}, \vec{v})\, c] + b(\lambda) \cosh[b(\lambda)\, s(\lambda)\, l(\vec{n}, \vec{s}, \vec{v})\, c]} \tag{3.6}
\]
where again $\vec{n}$ is the material patch normal and $\vec{s}$ is the direction of the illumination source. Further, c is the staining concentration and l the distance traveled by the light through the material. The terms $\rho_f$ and $\rho_b$ denote the Fresnel front and back surface reflectance coefficients, respectively. The factors a and b depend on the absorption and scattering coefficients as given by (eq. 3.3).

Simplification is obtained by considering neutral interface reflection, assuming that the Fresnel reflectance coefficients have a constant value over the spectrum. In that case, the Fresnel reflectance affects the intensity of the transmitted light only. Further, by considering a small angle of incidence at the transparent layer, the path length $l(\vec{n}, \vec{s}, \vec{v}) = d$. Equation (3.6) reduces to
\[
E_T(\lambda) = \frac{e(\lambda)\,\bigl(1 - \rho_f(\vec{n}, \vec{s}, \vec{v})\bigr)\bigl(1 - \rho_b(\vec{n}, \vec{s}, \vec{v})\bigr)\, b(\lambda)}{a(\lambda) \sinh[b(\lambda)\, s(\lambda)\, d\, c] + b(\lambda) \cosh[b(\lambda)\, s(\lambda)\, d\, c]}\,. \tag{3.7}
\]
Because of projection of the energy distribution on the image plane, the vectors $\vec{n}$, $\vec{s}$, and $\vec{v}$ will depend on the position $\vec{x}$ at the imaging plane,
\[
E_T(\lambda, \vec{x}) = \frac{e(\lambda, \vec{x})\,\bigl(1 - \rho_f(\vec{x})\bigr)\bigl(1 - \rho_b(\vec{x})\bigr)\, b(\lambda, \vec{x})}{a(\lambda, \vec{x}) \sinh[b(\lambda, \vec{x})\, s(\lambda, \vec{x})\, d(\vec{x})\, c(\vec{x})] + b(\lambda, \vec{x}) \cosh[b(\lambda, \vec{x})\, s(\lambda, \vec{x})\, d(\vec{x})\, c(\vec{x})]} \tag{3.8}
\]
where the spectral distribution at each point $\vec{x}$ is generated by a specific transparent patch.

One of the assumptions made for the model of (eq. 3.8) is that locally planar material patches with parallel sides are examined, for which the material is homogeneously colored. The assumption is valid when the material is neither fluorescent nor in any sense optically active, and the resolution is fine enough to consider locally uniformly colored patches, while individual stain particles are not resolved. Again, these constraints are imposed by the Kubelka-Munk theory. Further, normal incidence of light at the layer is assumed, so that the optical path length through the layer approximates its thickness. In transmission light microscopy, the preparation and observation conditions fairly justify these assumptions. Concerning the Fresnel reflectance, the photometric model assumes a neutral interface at the transparent patch. As discussed in [18], deviations of $\rho_f$ and $\rho_b$ over the visible spectrum are small for commonly used materials. For example, the refractive index of the immersion oil often used in microscopy varies only 3.3% over the visible spectrum. Therefore, the Fresnel reflectance coefficients $\rho_f$ and $\rho_b$ may be considered constant over the spectrum. The contribution of internally Fresnel-reflected light is small in many cases [22], and is therefore ignored in the model.

3.1.3 Special Cases

Thus far, we have achieved a photometric model for spectral color formation which is applicable to both reflecting and transmitting materials, and valid for a wide variety of circumstances and materials. The following special cases can be derived.

For matte, dull surfaces, the Fresnel coefficient can be considered negligible, $\rho_f(\vec{x}) \approx 0$, for which $E_R$ (eq. 3.5) reduces to the Lambertian model for diffuse body reflection,

\[
E_R(\lambda, \vec{x}) = e(\lambda, \vec{x})\, R_\infty(\lambda, \vec{x}) \tag{3.9}
\]
as expected.

By introducing $c_b(\lambda) = e(\lambda) R_\infty(\lambda)$, $c_i(\lambda) = e(\lambda)$, $m_b(\vec{n}, \vec{s}, \vec{v}) = (1 - \rho_f(\vec{n}, \vec{s}, \vec{v}))^2$, and $m_i(\vec{n}, \vec{s}, \vec{v}) = \rho_f(\vec{n}, \vec{s}, \vec{v})$, (eq. 3.4) may be reformulated as

\[
E_R(\lambda) = m_b(\vec{n}, \vec{s}, \vec{v})\, c_b(\lambda) + m_i(\vec{n}, \vec{s}, \vec{v})\, c_i(\lambda) \tag{3.10}
\]
which corresponds to the dichromatic reflection model proposed by Shafer [20].

For light transmission, when the scattering coefficient is low compared to the absorption coefficient, $s(\lambda) \ll k(\lambda)$, $E_T$ (eq. 3.8) reduces to Bouguer's or Lambert-Beer's law for absorption [22],

\[
E_T(\lambda, \vec{x}) = e(\lambda, \vec{x})\,\bigl(1 - \rho_f(\vec{x})\bigr)\bigl(1 - \rho_b(\vec{x})\bigr) \exp\bigl(-k(\lambda, \vec{x})\, d(\vec{x})\, c(\vec{x})\bigr) \tag{3.11}
\]
as expected.

Further, a unified model for both reflection and transmission of light is obtained when considering Lambertian reflection and a uniform illumination for both cases. For matte, dull surfaces and a uniform illumination affected by shading, $E_R$ (eq. 3.5) reduces to a multiplicative (Lambertian) model for body reflection,

\[
E_R(\lambda, \vec{x}) = e(\lambda)\, i(\vec{x})\, R_\infty(\lambda, \vec{x}) \tag{3.12}
\]
where e(λ) is the colored but spatially uniform illumination and $i(\vec{x})$ denotes the intensity distribution due to the surface geometry. Similarly, for a uniformly illuminated transparent material, with intensity affected by shading and Fresnel reflectance, $E_T$ (eq. 3.8) may be rewritten as

\[
E_T(\lambda, \vec{x}) = e(\lambda)\, i(\vec{x})\, C(\lambda, \vec{x}) \tag{3.13}
\]
where e(λ) is the uniform illumination, $i(\vec{x})$ denotes the intensity distribution, including Fresnel reflectance at the front and back surfaces, and $C(\lambda, \vec{x})$ represents the total extinction coefficient, that is, the total absorption and scattering coefficient within the transparent layer.

A general model for spectral image formation, useful for both reflectance and transmission of light, may now be written as a multiplicative model,

\[
E(\lambda, \vec{x}) = e(\lambda)\, i(\vec{x})\, m(\lambda, \vec{x}) \tag{3.14}
\]
where $m(\lambda, \vec{x})$ denotes the material transmittance or reflectance function. Again, e(λ) is the colored but spatially uniform illumination and $i(\vec{x})$ denotes the intensity distribution. The validity of the model may be derived from the models (eq. 3.5) and (eq. 3.8). For reflectance of light, the model is valid for matte, dull surfaces, for which the Fresnel reflectance is negligible, and for isotropic light scattering within the material. For light transmission, the model is valid for neutral interface reflection, a small angle of incidence to the surface normal, and isotropic light scattering within the material. The model as such is used in the next sections to derive color invariant material properties.
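To make the multiplicative model tangible, the sketch below simulates (eq. 3.14) on a discretized spectrum. It is a minimal illustration, not the thesis experiment: the illuminant, the shading profile, and the two material transmittance functions are invented, smooth placeholders.

```python
import numpy as np

# Minimal sketch of the multiplicative formation model of (eq. 3.14):
#   E(lambda, x) = e(lambda) * i(x) * m(lambda, x).
# The illuminant, shading, and material spectra below are illustrative
# assumptions, not data from the experiments in section 3.3.

wavelengths = np.arange(390, 735, 5)        # nm; 69 samples, 5 nm apart
x = np.linspace(0.0, 1.0, 100)              # 1D spatial coordinate

e = 1.0 + 0.001 * (wavelengths - 520.0)     # smooth colored illuminant (assumed)
i = 0.5 + 0.5 * np.cos(np.pi * x) ** 2      # shading from scene geometry (assumed)

# Two homogeneously colored materials meeting at x = 0.5 (assumed spectra).
m_left = np.exp(-0.5 * ((wavelengths - 450.0) / 40.0) ** 2)
m_right = np.exp(-0.5 * ((wavelengths - 600.0) / 40.0) ** 2)
m = np.where(x[None, :] < 0.5, m_left[:, None], m_right[:, None])

# Spatio-spectral energy distribution E(lambda, x), shape (69, 100).
E = e[:, None] * i[None, :] * m
```

Both the illumination color e(λ) and the shading i(x) multiply every material spectrum; this is exactly the structure the invariants of the next section are designed to cancel.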

3.2 Illumination Invariant Properties of Object Reflectance or Transmittance

Any method for finding invariant color properties relies on a photometric model and on assumptions about the physical variables involved. For example, hue is known to be insensitive to surface orientation, illumination direction, intensity and highlights, under a white illumination [6]. Normalized rgb is an object property for matte, dull surfaces illuminated by white light. When the illumination color varies or is not white, other object properties which are related to constant physical parameters should be measured. In this section, expressions for determining material changes in images will be derived, robust to a change in illumination color over time. Therefore, the photometric model derived in section 3.1 is taken into account. Consider the photometric reflection model (eq. 3.14) and an illumination with locally constant color,

\[
E(\lambda, \vec{x}) = e(\lambda)\, i(\vec{x})\, m(\lambda, \vec{x}) \tag{3.15}
\]
where e(λ) represents the illumination spectrum. The assumption allows for the extraction of expressions describing material changes independent of the illumination. Without loss of generality, we restrict ourselves to the one-dimensional case; two-dimensional expressions may be derived according to Chapter 4.

Differentiation of (eq. 3.15) with respect to λ results in
\[
\frac{\partial E}{\partial \lambda} = i(x)\, m(\lambda, x) \frac{\partial e}{\partial \lambda} + i(x)\, e(\lambda) \frac{\partial m}{\partial \lambda}\,. \tag{3.16}
\]
Dividing (eq. 3.16) by (eq. 3.15) gives the relative differential,
\[
\frac{1}{E(\lambda, x)} \frac{\partial E}{\partial \lambda} = \frac{1}{e(\lambda)} \frac{\partial e}{\partial \lambda} + \frac{1}{m(\lambda, x)} \frac{\partial m}{\partial \lambda}\,. \tag{3.17}
\]
The result consists of two terms, the former depending on the illumination color and the latter depending on material properties. Since the illumination color is constant with respect to x, differentiation with respect to x yields a material property only,

\[
\frac{\partial}{\partial x}\left\{ \frac{1}{E(\lambda, x)} \frac{\partial E}{\partial \lambda} \right\} = \frac{\partial}{\partial x}\left\{ \frac{1}{m(\lambda, x)} \frac{\partial m}{\partial \lambda} \right\}\,. \tag{3.18}
\]
Within the Kubelka-Munk model, assuming matte, dull surfaces or transparent layers, and assuming a single light source, $N_{\lambda x}$ determines changes in object reflectance or transmittance,

\[
N_{\lambda x} = \frac{1}{E(\lambda, x)} \frac{\partial^2 E}{\partial \lambda\, \partial x} - \frac{1}{E(\lambda, x)^2} \frac{\partial E}{\partial \lambda} \frac{\partial E}{\partial x} \tag{3.19}
\]
which determines material changes independent of the viewpoint, surface orientation, illumination direction, illumination intensity, and illumination color. The expression results from differentiation of (eq. 3.18). The expression given by (eq. 3.19) is the fundamental lowest order illumination invariant. Any spatio-spectral derivative of (eq. 3.19) inherently depends on the body reflectance or object transmittance only. According to [17], a complete and irreducible set of differential invariants is obtained by taking all higher order derivatives of the fundamental invariant,

\[
N_{\lambda x \lambda^m x^n} = \frac{\partial^{m+n}}{\partial \lambda^m\, \partial x^n}\left\{ \frac{1}{E(\lambda, x)} \frac{\partial^2 E}{\partial \lambda\, \partial x} - \frac{1}{E(\lambda, x)^2} \frac{\partial E}{\partial \lambda} \frac{\partial E}{\partial x} \right\} \tag{3.20}
\]
for m ≥ 0, n ≥ 0.

Application of the chain rule for differentiation yields the higher order expressions in terms of the spatio-spectral energy distribution. For instance, the spectral derivative of $N_{\lambda x}$ is given by

\[
N_{\lambda\lambda x} = \frac{E_{\lambda\lambda x} E^2 - E_{\lambda\lambda} E_x E - 2 E_{\lambda x} E_\lambda E + 2 E_\lambda^2 E_x}{E^3} \tag{3.21}
\]
where $E(\lambda, x)$ is written as E for simplicity and indices denote differentiation. Note that these expressions are valid everywhere $E(\lambda, x) > 0$. These invariants may be interpreted as the spatial derivatives of the normalized spectral slope $N_\lambda$ and curvature $N_{\lambda\lambda}$ of the reflectance function $R_\infty$. Expressions for higher order derivatives are straightforward.

A special case of (eq. 3.20) holds for Lambert-Beer absorption (eq. 3.11) and slices of locally constant thickness. Under these circumstances, ratios of invariants from the set N,
\[
N' = \frac{N_{m,n}}{N_{p,q}} \tag{3.22}
\]
for m, p ≥ 1 and n, q ≥ 0, are independent of the slice thickness. The property is proven by considering differentiation of (eq. 3.11) with respect to λ, and division by

(eq. 3.11), which results in

\[
\frac{1}{E_T(\lambda, x)} \frac{\partial E_T}{\partial \lambda} = \frac{1}{e(\lambda)} \frac{\partial e}{\partial \lambda} - d\, c(x) \frac{\partial k}{\partial \lambda}\,. \tag{3.23}
\]
Differentiation of the expression with respect to x yields

\[
\frac{\partial}{\partial x}\left\{ \frac{1}{E_T(\lambda, x)} \frac{\partial E_T}{\partial \lambda} \right\} = -d\, c(x) \frac{\partial^2 k}{\partial \lambda\, \partial x} - d\, \frac{\partial k}{\partial \lambda} \frac{\partial c}{\partial x}\,. \tag{3.24}
\]
By taking ratios of higher order derivatives, the constant thickness d is eliminated.

Summarizing, we have derived a complete set of color constant expressions determining object reflectance or transmittance. The expressions are invariant to a change of illumination over time. The major assumption underlying the proposed invariants is a single colored illumination, effectuating a spatially constant illumination spectrum. For an illumination color varying slowly over the scene with respect to the spatial variation of the object reflectance or transmittance, simultaneous color constancy is achieved by the proposed invariant.

We have proven that spatial differentiation is necessary to achieve color constancy when pre-knowledge about the illuminant is not available. Hence, any color constant system should perform both spectral and spatial comparison in order to be invariant against illumination changes, which confirms the theory of relational color constancy as proposed in [4]. Accurate estimates of spatio-spectral differential quotients can be obtained by applying the Gaussian color model as described in Chapter 2.
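Continuing the sketch above, (eq. 3.19) can be evaluated on the discretized E(λ, x) with Gaussian-weighted differential quotients, in the spirit of the Gaussian color model of Chapter 2. The scipy-based derivative estimation and the scale choices are illustrative assumptions.

```python
from scipy.ndimage import gaussian_filter

# Gaussian-weighted differential quotients of E(lambda, x); axis 0 is
# lambda, axis 1 is x. The scales (in samples) are illustrative choices.
sigma = (1.0, 2.0)

Es  = gaussian_filter(E, sigma, order=(0, 0))   # smoothed E
El  = gaussian_filter(E, sigma, order=(1, 0))   # dE/dlambda
Ex  = gaussian_filter(E, sigma, order=(0, 1))   # dE/dx
Elx = gaussian_filter(E, sigma, order=(1, 1))   # d2E/dlambda dx

# Fundamental illumination invariant of (eq. 3.19), valid where E > 0.
N_lx = Elx / Es - (El * Ex) / Es ** 2

# Analytically, e(lambda) and i(x) cancel in N_lx, so the response should
# concentrate at the material transition (x = 0.5) of the sketch above.
```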

3.3 Experiments

3.3.1 Overview

The transmission of 168 patches from a calibration grid (IT8.7/1, Agfa, Mortsel, Belgium) was measured (Spectrascan PR-713PC, Photo Research, Chatsworth, CA) from 390 nm to 730 nm, resampled at 5 nm intervals. The patches include achromatic colors, skin-like tints, and full colors (fig. 3.3). Each patch i will be represented by its spectral transmission $\hat{m}_i$.

For the case of daylight, incandescent, and halogen light, the emission spectra are known to be a one-parameter function of color temperature. For these important classes of illuminants, the spectral energy distributions $e_k(\lambda)$ were calculated according to the CIE method as described in [22]. Daylight illuminants were calculated in the range of 4,000K up to 10,000K color temperature in steps of 500K. The 4,000K and 10,000K illuminants represent extremes of daylight, whereas 6,500K represents average daylight. Emission spectra of halogen and incandescent lamps are equivalent


Figure 3.3: The CIE 1964 chromaticity diagram of the colors in the calibration grid used for the experiments, illuminated by average daylight D65.

to blackbody radiators, generated from 2,000K up to 5,000K according to [22, Section 1.2.2]. For the case of fluorescent light, illuminants F1–F12 are used, as given by [7]. These are 12 representative spectral power distributions for fluorescent lamps.

Considering (eq. 3.14), the spectrum $s_i^k(\lambda)$ transmitted by a planar patch i under illuminant k is given by

\[
s_i^k(\lambda) = e_k(\lambda)\, m_i(\lambda) \tag{3.25}
\]

where $m_i(\lambda)$ is the spectral transmittance and $e_k(\lambda)$ the illumination spectrum.

Color values are calculated by weighted summation over the transmitted spectrum $s_i^k$ at 5 nm intervals. For the CIE 1964 XYZ sensitivities, the XYZ value is obtained by [22, Section 3.3.8]

\[
X = \frac{1}{k_{10}} \sum_\lambda \bar{x}_{10}(\lambda)\, e_k(\lambda)\, m_i(\lambda)
\]
\[
Y = \frac{1}{k_{10}} \sum_\lambda \bar{y}_{10}(\lambda)\, e_k(\lambda)\, m_i(\lambda)
\]
\[
Z = \frac{1}{k_{10}} \sum_\lambda \bar{z}_{10}(\lambda)\, e_k(\lambda)\, m_i(\lambda) \tag{3.26}
\]
where $k_{10}$ is a constant to normalize $Y_w = 100$, $Y_w$ being the intensity of the light source. Similarly, for the Gaussian color model (see Chapter 2) we have

\[
E = \Delta\lambda \sum_\lambda G(\lambda; \lambda_0, \sigma_\lambda)\, e_k(\lambda)\, m_i(\lambda)
\]
\[
E_\lambda = \sigma_\lambda\, \Delta\lambda \sum_\lambda G_\lambda(\lambda; \lambda_0, \sigma_\lambda)\, e_k(\lambda)\, m_i(\lambda)
\]
\[
E_{\lambda\lambda} = \sigma_\lambda^2\, \Delta\lambda \sum_\lambda G_{\lambda\lambda}(\lambda; \lambda_0, \sigma_\lambda)\, e_k(\lambda)\, m_i(\lambda) \tag{3.27}
\]
where Δλ = 5 nm. Further, $\sigma_\lambda$ = 55 nm and $\lambda_0$ = 520 nm to be colorimetric with human vision (see Chapter 2).

Color constancy is examined by evaluating edge strength under different simulated illumination conditions. Borders are formed by combining all patches with one another, yielding 14,028 different color combinations. A ground truth is obtained by taking a perfect white light illuminant; the reference boils down to an equal energy spectrum. The ground truth represents the patch transmission function up to multiplication by a constant a,

\[
s_i^{\mathrm{ref}}(\lambda) = a\, m_i(\lambda)\,. \tag{3.28}
\]

The difference in edge strength for two patches illuminated by the test illuminant and the reference illuminant indicates the error in color constancy. We define the color constancy ratio as

\[
\epsilon_k = 1 - \left| \frac{d_k(i, j) - d_{\mathrm{ref}}(i, j)}{d_{\mathrm{ref}}(i, j)} \right| \tag{3.29}
\]
where $d_k$ is the color difference between two patches i, j under the test illuminant k, and $d_{\mathrm{ref}}$ is the difference between the same two patches under the reference illuminant, that is, equal energy illumination. The color constancy ratio $\epsilon_k$ measures the deviation in edge strength between two patches i, j due to illuminant k, relative to the edge strength under the reference illuminant.

Three experiments are performed. One experiment evaluates the performance of the proposed invariant (eq. 3.19) under ideal conditions. That is, it evaluates $N_{\lambda x}$ at scale $\sigma_\lambda$ = 5 nm, for multiple small-band measurements with Δλ = 5 nm covering the visual spectrum. A second experiment assesses the influence of broad-band filters on color constancy. The first experiment is repeated, but now for $\sigma_\lambda$ = 55 nm filters, again with Δλ = 5 nm covering the visual spectrum. The final experiment evaluates color constancy for a colorimetric system detecting color differences. Three broad-band measures ($\sigma_\lambda$ = 55 nm) are taken at $\lambda_0$ = 520 nm. The proposed invariant is evaluated against the performance for color constancy of the von Kries transform [21] and the uv color space [22].
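The quantities entering this comparison are simple to compute. Below is a sketch of the Gaussian-weighted sampling of (eq. 3.27) and the constancy ratio of (eq. 3.29); the Gaussian normalization and derivative sign conventions are standard choices assumed here, and the function names are hypothetical.

```python
import numpy as np

def gaussian_color_measurements(wl, spectrum, lam0=520.0, sigma=55.0, dl=5.0):
    """Gaussian-weighted measurements E, E_lambda, E_lambdalambda of
    (eq. 3.27); wl in nm, spectrum sampled at dl = 5 nm intervals.
    Standard Gaussian normalization is assumed."""
    u = (wl - lam0) / sigma
    g = np.exp(-0.5 * u ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    g1 = -u / sigma * g                      # dG/dlambda
    g2 = (u ** 2 - 1.0) / sigma ** 2 * g     # d2G/dlambda2
    E = dl * np.sum(g * spectrum)
    El = sigma * dl * np.sum(g1 * spectrum)
    Ell = sigma ** 2 * dl * np.sum(g2 * spectrum)
    return E, El, Ell

def constancy_ratio(d_test, d_ref):
    """Color constancy ratio epsilon_k of (eq. 3.29): a value of 1 means the
    edge strength under the test illuminant equals that under the reference."""
    return 1.0 - abs(d_test - d_ref) / d_ref
```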

Table 3.1: Results for the small-band experiment for invariant $N_{\lambda x}$ with $\sigma_\lambda$ = 5 nm and 69 spectral samples, Δλ = 5 nm apart. The average percentage constancy ε̄ over 14,028 color edges is given, together with the standard deviation σ.

Daylight               Blackbody              Fluorescent
T [K]    ε̄ [%] (σ)     T [K]    ε̄ [%] (σ)     Ill.    ε̄ [%] (σ)
4000     99.9 (0.2)    2000     99.9 (0.1)    F1      99.4 (0.5)
4500     99.9 (0.2)    2500     99.9 (0.1)    F2      99.1 (0.8)
5000     99.9 (0.2)    3000     99.9 (0.0)    F3      98.7 (1.1)
5500     99.9 (0.2)    3500     99.9 (0.0)    F4      98.2 (1.6)
6000     99.9 (0.2)    4000     100.0 (0.0)   F5      99.5 (0.5)
6500     99.9 (0.2)    4500     100.0 (0.0)   F6      99.0 (0.9)
7000     99.9 (0.1)    5000     100.0 (0.0)   F7      99.5 (0.5)
7500     99.9 (0.1)                           F8      99.1 (0.7)
8000     99.9 (0.1)                           F9      98.8 (1.0)
8500     99.9 (0.1)                           F10     95.6 (1.7)
9000     99.9 (0.1)                           F11     94.4 (2.0)
9500     99.9 (0.1)                           F12     93.2 (2.4)
10000    99.9 (0.1)

3.3.2 Small-Band Experiment

For each patch transmission, 69 Gaussian weighted samples were taken every 5 nm with $\sigma_\lambda$ = 5 nm. The invariant $N_{\lambda x}$ was calculated between each combination of two patches for each central wavelength $\lambda_0$ of the filters. For the experiment, color difference is defined by

\[
d_s = \sqrt{ \sum_{\lambda_c} \left( N_{\lambda x}^{\lambda_c}(i, j) \right)^2 } \tag{3.30}
\]

where $\lambda_c$ denotes the central wavelength of the c-th filter ($\sigma_\lambda$ = 5 nm), and $N_{\lambda x}^{\lambda_c}(i, j)$ the edge strength (eq. 3.19) between patches i and j for filter c. Color constancy is determined by (eq. 3.29), using $d_s$ as the measure for color difference.

The results for the experiment are shown in tab. 3.1. Average constancy for daylight and blackbody illuminants is 99.9 ± 0.2%, which yields perfect color constancy. For the fluorescent illuminants, average constancy is 97.9 ± 1.3%, almost perfectly color constant. The small error is caused by the spectral spikes in the fluorescent emission spectra, smoothed to the filter size of $\sigma_\lambda$ = 5 nm. The experiment demonstrates that perfect illumination invariance can be achieved by using the proposed invariants and a spectrophotometer.

Table 3.2: Results for the broad-band experiment for invariant $N_{\lambda x}$ with $\sigma_\lambda$ = 55 nm and 69 spectral samples, Δλ = 5 nm apart. The average percentage constancy ε̄ and standard deviation σ are given over the 14,028 different edges.

Daylight               Blackbody              Fluorescent
T [K]    ε̄ [%] (σ)     T [K]    ε̄ [%] (σ)     Ill.    ε̄ [%] (σ)
4000     97.3 (2.3)    2000     93.2 (5.3)    F1      89.6 (9.2)
4500     99.8 (1.9)    2500     95.8 (3.3)    F2      86.1 (11.3)
5000     99.2 (1.6)    3000     97.1 (2.2)    F3      84.0 (12.0)
5500     99.5 (1.5)    3500     97.9 (1.5)    F4      82.2 (12.3)
6000     99.7 (1.3)    4000     98.4 (1.1)    F5      89.1 (9.3)
6500     99.8 (1.3)    4500     98.7 (0.8)    F6      85.1 (11.7)
7000     99.9 (1.2)    5000     98.9 (0.7)    F7      94.7 (7.1)
7500     99.0 (1.2)                           F8      95.4 (6.8)
8000     99.1 (1.2)                           F9      94.0 (7.6)
8500     99.1 (1.2)                           F10     88.0 (9.8)
9000     99.1 (1.2)                           F11     87.5 (9.8)
9500     99.1 (1.2)                           F12     86.8 (9.8)
10000    99.1 (1.2)

3.3.3 Broad-Band Experiment

This experiment investigates the influence of broad-band filters by repeating the previous experiment, but now for $\sigma_\lambda$ = 55 nm. Hence, 69 largely overlapping Gaussian weighted samples of the transmission spectrum are obtained.

The results (tab. 3.2) show a constancy for daylight of 98.7 ± 1.5%. For blackbody radiators, a constancy of 97.1 ± 2.6% is achieved. These numbers are close to the results obtained for small-band filters. For fluorescent illuminants the error increases to 15% (average constancy 88.5 ± 9.9%) when using broad-band filters. Hence, approximation of derivatives with broad-band filters is valid under daylight and blackbody illumination.

3.3.4 Colorimetric Experiment

For the colorimetric experiment, Gaussian weighted samples are taken at λ0 = 520 nm and σλ = 55 nm. Color difference is defined by

\[
d_N = \sqrt{ N_{\lambda x}(i, j)^2 + N_{\lambda\lambda x}(i, j)^2 } \tag{3.31}
\]
where $N_{\lambda x}(i, j)$ (eq. 3.19) and $N_{\lambda\lambda x}(i, j)$ (eq. 3.21) measure the total chromatic edge strength between patches i and j. Color constancy is determined by (eq. 3.29), using $d_N$ as the measure for color difference.

For comparison, the experiment is repeated with the CIE XYZ 1964 sensitivities for observation. Color difference is defined by the Euclidean distance in the CIE 1976 u′v′ color space [22, Section 3.3.9],

\[
d_{uv} = \sqrt{ (u'_i - u'_j)^2 + (v'_i - v'_j)^2 } \tag{3.32}
\]
where i, j represent the different patches. Color constancy is determined by (eq. 3.29), using $d_{uv}$ as the measure for color difference. Note that for the u′v′ color space no information about the light source is included. Further, u′v′ space is similar to uv space up to a transformation of the achromatic point. The additive transformation of the white point makes uv space a color constant space, and differences in u′v′ are equal to differences in uv space. Hence, $d_{uv}$ is an illumination invariant measure of color difference.

As a well-known reference, the von Kries transform for chromatic adaptation [21] is evaluated in a similar experiment. The von Kries method is based on Lambertian reflection, assuming that the (known) sensor responses to the illuminant may be used to eliminate the illuminant from the measurement. For the experiment, von Kries adaptation is applied to the measured color values, and the result is transformed to the equal energy illuminant [7]. Thereafter, the color difference between patches i and j taken under the test illuminant is calculated according to (eq. 3.32). Comparison to the color difference between the same two patches under the reference illuminant is obtained by (eq. 3.29), using the von Kries transformed u′v′ distance as the measure for color difference.

Results for the color constancy measurements are given for daylight illumination (tab. 3.3), blackbody radiators (tab. 3.4), and fluorescent light (tab. 3.5).

Average constancy over the different phases of daylight is 91.8 ± 6.1% for the proposed invariant. Difference in u′v′ color space performs similarly, with an average of 91.9 ± 6.3%. The von Kries transform is 5% more color constant, at 96.0 ± 3.3%. As expected, the von Kries transform has a better performance, given that the color of the illuminant is taken into account.

For blackbody radiators, the proposed invariant is on average 88.9 ± 12.5% color constant. The proposed invariant is more color constant than u′v′ differences, which average 82.4 ± 15.1%. Again, the von Kries transform is even better, with an average of 93.4 ± 6.8%. For these types of illuminants, often running at a low color temperature, variation due to illumination color is drastically reduced by the proposed method. The proposed method is less color constant than von Kries adaptation, which requires knowledge of the color of the light source. In comparison to u′v′ color differences, the proposed invariant offers better performance for low color temperature illuminants.

Color constancy for fluorescent illuminants is on average 85.0 ± 11.8% for the proposed invariant, 84.7 ± 10.5% for the u′v′ difference, and 89.4 ± 8.8% for the von Kries transform.

Table 3.3: Results for the different colorimetric experiments with daylight illumination, ranging from 4,000K to 10,000K color temperature. Average percentage constancy ε̄ and standard deviation σ for the proposed invariant N, the von Kries transform, and the u′v′ difference.

         N              von Kries      u′v′
T [K]    ε̄ [%] (σ)     ε̄ [%] (σ)     ε̄ [%] (σ)
4000     92.2 (5.6)    96.1 (3.2)    86.9 (10.0)
4500     94.5 (4.2)    97.9 (1.8)    91.1 (7.1)
5000     94.9 (2.8)    99.2 (0.7)    94.5 (4.6)
5500     94.1 (1.8)    98.9 (1.0)    96.6 (2.0)
6000     93.2 (2.7)    97.9 (1.7)    97.6 (1.8)
6500     92.5 (4.0)    96.9 (2.4)    96.1 (2.7)
7000     91.8 (5.2)    96.1 (2.9)    94.3 (3.8)
7500     91.2 (6.2)    95.4 (3.4)    92.7 (4.9)
8000     90.6 (7.0)    94.8 (3.8)    91.2 (6.0)
8500     90.1 (7.6)    94.3 (4.2)    89.9 (6.9)
9000     89.6 (8.2)    93.8 (4.5)    88.8 (7.7)
9500     89.2 (8.7)    93.4 (4.8)    87.8 (8.4)
10000    88.8 (9.1)    93.0 (5.1)    86.9 (9.1)

As already pointed out for tab. 3.2, the large integration filters are not capable of offering color constancy for the class of fluorescent illuminants. The use of broad-band filters limits the applicability to smooth spectra, for which the Gaussian weighted differential quotients as derived in Chapter 2 are accurate estimates. For outdoor scenes, halogen illumination, and incandescent light, the illumination spectra may be considered smooth, as shown by comparing the experimental results of tab. 3.2 with those of tab. 3.1.
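For reference, the u′v′ comparison of (eq. 3.32) amounts to the following computation; the XYZ to u′v′ mapping is the standard CIE 1976 UCS projection, and the helper names are hypothetical.

```python
import numpy as np

def xyz_to_uv_prime(X, Y, Z):
    """CIE 1976 UCS chromaticity coordinates u', v' from tristimulus XYZ."""
    denom = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / denom, 9.0 * Y / denom

def d_uv(xyz_i, xyz_j):
    """Euclidean u'v' color difference of (eq. 3.32) between two patches."""
    ui, vi = xyz_to_uv_prime(*xyz_i)
    uj, vj = xyz_to_uv_prime(*xyz_j)
    return np.hypot(ui - uj, vi - vj)
```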

3.4 Discussion

This chapter presents a physics-based background for color constancy, valid for both light reflectance and light transmittance. To achieve that goal, the Kubelka-Munk theory is used as a model for color image formation. By considering spatial and spectral derivatives of the formation model, object reflectance properties are derived independent of the spectral energy distribution of the illuminant. Knowledge about the spectral power distribution of the illuminant is not required for the proposed invariant, as opposed to the well-known von Kries transform for color constancy [21].

The robustness of our invariant (eq. 3.19) is assured by using the Gaussian color model, introduced in Chapter 2. The Gaussian color model is considered an adequate approximation of the human tri-stimulus sensitivities.

Table 3.4: Results for the different colorimetric experiments with blackbody radiators from 2,000K to 5,000K color temperature. Average percentage constancy ε̄ and standard deviation σ for the proposed invariant N, the von Kries transform, and the u′v′ difference.

         N              von Kries      u′v′
T [K]    ε̄ [%] (σ)     ε̄ [%] (σ)     ε̄ [%] (σ)
2000     75.6 (24.5)   85.6 (12.4)   65.8 (24.9)
2500     82.5 (16.5)   89.0 (9.4)    72.3 (20.6)
3000     87.1 (11.2)   91.9 (6.8)    78.3 (16.3)
3500     90.8 (7.5)    94.3 (4.7)    83.7 (12.4)
4000     93.7 (4.9)    96.3 (3.0)    88.4 (8.9)
4500     96.0 (3.0)    97.9 (1.7)    92.5 (6.0)
5000     96.9 (1.7)    99.1 (0.7)    95.9 (3.4)

Table 3.5: Results for the colorimetric experiments with representative fluorescent illuminants. Average percentage constancy ε̄ and standard deviation σ for the proposed invariant N, the von Kries transform, and the u′v′ difference.

         N              von Kries      u′v′
Ill.     ε̄ [%] (σ)     ε̄ [%] (σ)     ε̄ [%] (σ)
F1       82.4 (14.4)   89.4 (7.9)    88.6 (7.9)
F2       82.7 (12.4)   87.8 (7.9)    82.9 (7.5)
F3       79.9 (13.5)   85.5 (9.8)    76.4 (11.4)
F4       77.2 (15.4)   83.6 (11.6)   71.4 (14.9)
F5       81.1 (15.4)   88.1 (8.6)    87.4 (8.9)
F6       80.6 (13.7)   85.9 (8.9)    79.7 (8.8)
F7       90.2 (7.8)    95.2 (3.7)    93.7 (3.9)
F8       93.6 (3.1)    97.8 (1.6)    94.6 (4.4)
F9       93.3 (4.4)    95.3 (3.6)    90.1 (7.8)
F10      87.2 (9.1)    91.1 (8.8)    91.7 (9.2)
F11      87.1 (10.1)   88.3 (11.2)   85.5 (12.6)
F12      84.9 (13.6)   85.0 (13.8)   74.9 (18.7)

The Gaussian color model measures the intensity and the first and second order derivatives of the spectral energy distribution, combined in a well-established spatial observation theory. Application of the Gaussian color model in color constancy ensures compatibility with colorimetry, while inherently physically sound and robust measurements are derived.

From a different perspective, color constancy was considered in [1, 14]. The background there is experimental colorimetry, where subjects are asked to match the reference and test illumination conditions. As a consequence, their experiments do not include shadow and shading. The result of their approach shows approximate color constancy under natural illuminants. However, their approach is unable to cope with color constancy in three-dimensional scenes, where shadow plays an important role. The advantage of our physical approach over an empirical colorimetric approach is that invariant properties are deduced from the image formation model. Our proposed invariant (eq. 3.19) is designed to be insensitive to intensity changes due to the scene geometry.

The proposed invariant (eq. 3.19) is evaluated by experiments on spectral data of 168 transparent patches, illuminated by daylight, blackbody, and fluorescent illuminants. Average constancy is 90 ± 5% for daylight, 90 ± 10% for blackbody radiators, and 85 ± 10% for fluorescent illuminants. The performance of the proposed method is slightly less than that of the von Kries transform. Average constancy for von Kries on the 168 patches is 95 ± 3% for daylight, 95 ± 5% for blackbody radiators, and 90 ± 10% for fluorescent illuminants. This is explained by the fact that the von Kries transform requires explicit knowledge of material and illuminant, and even then the difference is small. There are many circumstances where such knowledge of material and illuminant is missing, especially in image retrieval from large databases, or when calibration is not practically feasible, as is frequently the case in light microscopy. The proposed method requires knowledge about the material only, and hence is applicable under a larger set of imaging circumstances.

As an alternative for color constancy under an unknown illuminant, one could use Luv color space differences [22] instead of the proposed method. We have evaluated color constancy for both methods. The proposed invariant offers performance similar to u′v′ color differences. This is remarkable, given the different backgrounds against which the methods are derived. Whereas u′v′ is derived from colorimetric experiments, hence from human perception, the proposed invariant N is derived from measurement theory (the physics of observation) and physical reflection models. Apparently, it is the physical cause of color, and the environmental variation in physical parameters, to which the human visual system adapts.

As pointed out in [14], mechanisms responding to cone-specific contrast offer a better correspondence with human vision than a system that estimates illuminant and reflectance spectra. The research presented here raises the question whether the illuminant is estimated at all in pre-attentive vision. The physical model presented demands spatial comparison in order to achieve color constancy, thereby confirming relational color constancy as a first step in color constant vision [4, 16]. Hence, low-level mechanisms such as the color constant edge detection reported here may play a role in front-end vision.

Bibliography

[1] D. H. Brainard. Color constancy in the nearly natural image: 2. Achromatic loci. J. Opt. Soc. Am. A, 15:307–325, 1998.

[2] M. D’Zmura and P. Lennie. Mechanisms of color constancy. J. Opt. Soc. Am. A, 3(10):1662–1672, 1986.

[3] G. D. Finlayson. Color in perspective. IEEE Trans. Pattern Anal. Machine Intell., 18(10):1034–1038, 1996.

[4] D. H. Foster and S. M. C. Nascimento. Relational colour constancy from invariant cone-excitation ratios. Proc. R. Soc. London B, 257:115–121, 1994.

[5] B. V. Funt and G. D. Finlayson. Color constant color indexing. IEEE Trans. Pattern Anal. Machine Intell., 17(5):522–529, 1995.

[6] T. Gevers and A. W. M. Smeulders. Color based object recognition. Pat. Rec., 32:453–464, 1999.

[7] R. W. G. Hunt. Measuring Colour. Ellis Horwood Limited, Hertfordshire, England, 1995.

[8] D. B. Judd and G. Wyszecki. Color in Business, Science, and Industry. Wiley, New York, NY, 1975.

[9] J. J. Koenderink and A. Kappers. Color Space. Utrecht University, The Netherlands, 1998.

[10] J. J. Koenderink and A. J. van Doorn. Receptive field families. Biol. Cybern., 63:291–297, 1990.

[11] P. Kubelka. New contribution to the optics of intensely light-scattering materials. Part I. J. Opt. Soc. Am., 38(5):448–457, 1948.

[12] P. Kubelka and F. Munk. Ein Beitrag zur Optik der Farbanstriche. Z. Techn. Physik, 12:593, 1931.

[13] E. H. Land. The retinex theory of color vision. Sci. Am., 237:108–128, 1977.

[14] M. P. Lucassen and J. Walraven. Color constancy under natural and artificial illumination. Vision Res., 37:2699–2711, 1996.

[15] L. T. Maloney and B. A. Wandell. Color constancy: a method for recovering surface spectral reflectance. J. Opt. Soc. Am. A, 3:29–33, 1986.

[16] S. M. C. Nascimento and D. H. Foster. Relational color constancy in achromatic and isoluminant images. J. Opt. Soc. Am. A, 17(2):225–231, 2000.

[17] P. Olver, G. Sapiro, and A. Tannenbaum. Differential invariant signatures and flows in computer vision: A symmetry group approach. In B. M. ter Haar Romeny, editor, Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[18] M. Pluta. Advanced Light Microscopy, volume 1. Elsevier, Amsterdam, 1988.

[19] G. Sapiro. Color and illuminant voting. IEEE Trans. Pattern Anal. Machine Intell., 21(11):1210–1215, 1999.

[20] S. A. Shafer. Using color to separate reflection components. Color Res. Appl., 10(4):210–218, 1985.

[21] J. von Kries. Influence of adaptation on the effects produced by luminous stimuli. In D. L. MacAdam, editor, Sources of Color Vision. MIT Press, Cambridge, MA, 1970.

[22] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. Wiley, New York, NY, 1982.

Chapter 4

Measurement of Color Invariants

submitted∗ to IEEE Transactions on Pattern Analysis and Machine Intelligence.

“Attaching significance to invariants is an effort to recognize what, because of its form or colour or meaning or otherwise, is important or significant in what is only trivial or ephemeral.” – H. W. Turnbull

It is well known that color is a powerful cue in the distinction and recognition of objects. Segmentation based on color, rather than just intensity, provides a broader class of discrimination between material boundaries. Modeling the physical process of color image formation provides a clue to the object-specific parameters [6, 8, 19]. To reduce some of the complexity intrinsic to color images, parameters with known invariance are of prime importance. Current methods for the measurement of color invariance require a fully sampled spectrum as input data, usually derived by a spectrometer. Angelopoulou et al. [1] use the spectral gradient to estimate surface reflectance from multiple images of the same scene, captured with different spectral narrow-band filters. The assumptions underlying their approach require a smoothly varying illumination. Their method is able to accurately estimate surface reflectance independent of the scene geometry. Stokman and Gevers [20] propose a method for edge classification from spectral images. Their method aims at detecting edges and assigning to each one of the types: shadow or geometry edge, highlight edge, or material edge. Under the assumption of spectral narrow-band filters, and for a known illumination spectrum, they prove their method to be accurate in edge classification. These approaches

∗Part of this work has appeared in the proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2000, vol. 1, pp. 50–57.

hamper broad use, as spectrometers are both slow and expensive. In addition, they do not easily provide two-dimensional spatial resolution. In this chapter we aim at a broad range of color invariants measured from RGB cameras.

To that end, differential geometry is adopted as the framework for feature detection and segmentation of images. Its impact in computer vision is overwhelming, but mostly limited to grey-value images [5, 15, 21]. Embedding the theory in the scale-space paradigm [11, 13] resulted in well-posed differential operators robust against noisy measurements, with the Gaussian aperture as the fundamental operator. Only a few papers are available on color differential geometry [7, 18], which are mainly based on the structure tensor proposed by Di Zenzo [23]. In that work, an expression for the color gradient is derived by analysis of the eigensystem of the color structure tensor. In [2], curvature and zero-crossing detection is investigated for the directional derivative of the color gradient. For these geometrical invariants no physical model is taken into account, yielding measurements which are highly influenced by the specific imaging circumstances, such as shadow, illumination, and viewpoint. We consider the introduction of wavelength into the scale-space paradigm, as suggested by Koenderink [12]. This leads to a spatio-spectral family of Gaussian aperture functions, introduced in Chapter 2 as the Gaussian color model. Hence, the Gaussian color model may be considered an extension of the differential geometry framework into the spatio-spectral domain. We apply the spatio-spectral scale-space to the measurement of photometric and geometric invariants.

Chapter 3 discusses the use of the Shafer model [19], effectively based on the older Kubelka-Munk theory [14], to measure object reflectance independent of illumination color. The Kubelka-Munk theory models the reflected spectrum of a colored body [10, 22], based on a material-dependent scattering and absorption function, under the assumption that light is isotropically scattered within the material. The theory has proven to be successful for a wide variety of materials and applications [10]. Therefore, the Kubelka-Munk theory is well suited for determining material properties from color measurements. We use the Kubelka-Munk theory for the definition of object reflectance properties, for a wide range of assumptions regarding imaging conditions.

The measurement of invariance involves a balance between constancy of the measurement regardless of the disturbing influence of the unwanted transform on the one hand, and retained discriminating power between truly different states of the objects on the other. As a general rule, for features allowing ignorance of a larger set of disturbing factors, less discriminative power can be expected. We refer to such features as broad features. Hence, both the invariance and the discriminating power of a method should be investigated simultaneously. Only this allows assessment of the practical performance of the proposed method. In this chapter we extensively investigate invariant properties and discriminative power.

The chapter is organized as follows. Section 4.1 describes a physical model for image formation, based on the Kubelka-Munk theory. The first contribution of this chapter is a complete set of invariant expressions derived for basically three different imaging conditions (section 4.2).
A second important contribution considers the robust measurement of invariant expressions from RGB images (section 4.3). Further, section 4.3 demonstrates the performance of the features in terms of invariance and discriminative power between differently colored patches, which may be considered a third contribution.

4.1 Color Image Formation Model

In Chapter 3, image formation is modeled by means of the Kubelka-Munk theory [10, 22] for colorant layers. Under the assumption that light within the material is isotropically scattered, the material layer may be characterized by a wavelength-dependent scatter coefficient and absorption coefficient. The model unites both reflectance of light and transparent materials. The class of materials for which the theory is useful ranges from dyed paper and textiles, opaque plastics, and paint films, up to enamel and dental silicate cements [10].

In the sequel we will derive color invariant expressions under various imaging conditions. Therefore, an image formation model adequate for reflectance of light in real-world scenes is considered. We consider the Kubelka-Munk theory as a general model for color image formation. The photometric reflectance model resulting from the Kubelka-Munk theory is given by (see Chapter 3)
\[
E(\lambda, \vec{x}) = e(\lambda, \vec{x})\,\bigl(1 - \rho_f(\vec{x})\bigr)^2 R_\infty(\lambda, \vec{x}) + e(\lambda, \vec{x})\,\rho_f(\vec{x}) \tag{4.1}
\]
where $\vec{x}$ denotes the position at the imaging plane and λ the wavelength. Further, $e(\lambda, \vec{x})$ denotes the illumination spectrum and $\rho_f(\vec{x})$ the Fresnel reflectance at $\vec{x}$. The material reflectivity is denoted by $R_\infty(\lambda, \vec{x})$. The reflected spectrum in the viewing direction is given by $E(\lambda, \vec{x})$. When redefining symbols $c_b(\lambda, \vec{x}) = e(\lambda, \vec{x}) R_\infty(\lambda, \vec{x})$, $c_i(\lambda, \vec{x}) = e(\lambda, \vec{x})$, $m_b(\vec{x}) = (1 - \rho_f(\vec{x}))^2$, and $m_i(\vec{x}) = \rho_f(\vec{x})$, (eq. 4.1) reduces to
\[
E(\lambda, \vec{x}) = m_b(\vec{x})\, c_b(\lambda, \vec{x}) + m_i(\vec{x})\, c_i(\lambda, \vec{x}) \tag{4.2}
\]
which is the dichromatic reflection model by Shafer [19].

Concerning the Fresnel reflectance, the photometric model assumes a neutral interface at the surface patch. As discussed in [17, 19], deviations of $\rho_f$ over the visible spectrum are small for commonly used materials; therefore the Fresnel reflectance coefficient may be considered constant.

The following special case can be derived. For matte, dull surfaces, the Fresnel coefficient can be considered negligible, $\rho_f(\vec{x}) \approx 0$, for which $E(\lambda, \vec{x})$ (eq. 4.1) reduces to the Lambertian model for diffuse body reflection,
\[
E(\lambda, \vec{x}) = e(\lambda, \vec{x})\, R_\infty(\lambda, \vec{x}) \tag{4.3}
\]
as expected.

4.2 Determination of Color Invariants

Any method for finding invariant color properties relies on a photometric model and on assumptions about the physical variables involved. For example, hue is known to be insensitive to surface orientation, illumination direction, intensity, and highlights under a white illumination [8]. Normalized rgb is an object property, but only for matte, dull surfaces and only when illuminated by white light. When the illumination color is not white, other object properties should be measured.

In this section, expressions for determining invariant properties in color images will be derived for three different imaging conditions, taking into account the photometric model derived in section 4.1. The imaging conditions are assumed to be the 5 relevant out of the 8 combinations of: a. white or colored illumination, b. matte, dull object or general object, and c. uniformly stained object or generally colored object. Further specializations, such as uniform illumination or a single illumination spectrum, may be considered. Note that each essentially different condition of the scene, object, or recording circumstances results in different suited invariant expressions. For notational convenience, we first concentrate on the one-dimensional case; two-dimensional expressions will be derived later when introducing geometrical invariants.

4.2.1 Invariants for White but Uneven Illumination

Consider the photometric reflection model (eq. 4.1). For white illumination, the spectral components of the source are approximately constant over the wavelengths. Hence, a spatial component i(x) denotes the intensity variations, resulting in

\[
E(\lambda, x) = i(x)\left\{ \rho_f(x) + \bigl(1 - \rho_f(x)\bigr)^2 R_\infty(\lambda, x) \right\}\,. \tag{4.4}
\]
The assumption allows the extraction of expressions describing object reflectance independent of the Fresnel reflectance. Let indices of λ and x indicate differentiation; from now on we drop (λ, x) from E(λ, x) when this causes no confusion.

Lemma 6 Within the Kubelka-Munk model, assuming dichromatic reflection and white illumination,
\[
H = \frac{E_\lambda}{E_{\lambda\lambda}}
\]
is an object reflectance property independent of viewpoint, surface orientation, illumination direction, illumination intensity, and Fresnel reflectance coefficient.

Proof: Differentiating (eq. 4.4) with respect to λ twice results in

\[
E_\lambda = i(x)\bigl(1 - \rho_f(x)\bigr)^2 \frac{\partial R_\infty(\lambda, x)}{\partial \lambda}
\]
and

\[
E_{\lambda\lambda} = i(x)\bigl(1 - \rho_f(x)\bigr)^2 \frac{\partial^2 R_\infty(\lambda, x)}{\partial \lambda^2}\,.
\]
Hence, their ratio depends on derivatives of the object reflectance function $R_\infty(\lambda, x)$ only, which proves the lemma. □

To interpret H, consider the local Taylor expansion at λ0 truncated at second order,

\[
E(\lambda_0 + \Delta\lambda) \approx E(\lambda_0) + \Delta\lambda\, E_\lambda(\lambda_0) + \tfrac{1}{2} \Delta\lambda^2\, E_{\lambda\lambda}(\lambda_0)\,. \tag{4.5}
\]

The extremum of $E(\lambda_0 + \Delta\lambda)$ is at the Δλ for which the first order derivative is zero,
\[
\frac{d}{d\lambda}\left\{ E(\lambda_0 + \Delta\lambda) \right\} = E_\lambda(\lambda_0) + \Delta\lambda\, E_{\lambda\lambda}(\lambda_0) = 0\,. \tag{4.6}
\]

Hence, for ∆λ near the origin λ0,

\[
\Delta\lambda_{\max} = -\frac{E_\lambda(\lambda_0)}{E_{\lambda\lambda}(\lambda_0)}\,. \tag{4.7}
\]

In conclusion, the property H is related to the hue (i.e., $\arctan(\Delta\lambda_{\max})$) of the material. For $E_{\lambda\lambda}(\lambda_0) < 0$ the result is at a maximum and describes a Newtonian (prism) color, whereas for $E_{\lambda\lambda}(\lambda_0) > 0$ the result is at a minimum and indicates a non-Newtonian (slit) color.

Of significant importance is the derivation of a complete set Ψ of functionally independent (irreducible) differential invariants $\Psi_i$. Completeness states that all possible independent invariants for the unwanted distortion are present in the set Ψ. Following Olver et al. [16], the basic method for constructing a complete set of differential invariants is to use invariant differential operators. A differential operator is said to be invariant under a given distortion if it maps differential invariants to higher order differential invariants. Hence, by iteration, such an operator produces a hierarchy of differential invariants of arbitrarily large order n, given a lowest order invariant. The lowest order invariant is referred to as the fundamental invariant. Summarizing, given a lowest order color invariant, a differential operator may be defined to construct complete, irreducible sets of color invariants under the same imaging conditions by iteration.

Proposition 7 A complete and irreducible set of color invariants, up to a given differential order, is given by all derivatives of the fundamental color invariant.

In the sequel we will define the generating differential operator, given the lowest order fundamental invariant.

The expression given by Lemma 6 is a fundamental lowest order invariant. As a result of Proposition 7, differentiation of the expression for H with respect to x or λ results in object reflectance properties under a white illumination. Note that H is ill-defined when the second order spectral derivative vanishes. We therefore prefer to compute differentials of arctan(H), a monotonic function of H, for which the spatial derivatives yield better numerical stability.
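As a numerical aside, this preference is cheap to implement once $E_\lambda$ and $E_{\lambda\lambda}$ are measured; the sketch below uses np.arctan2, an implementation assumption that keeps the angle defined where $E_{\lambda\lambda}$ vanishes and H itself is ill-defined.

```python
import numpy as np

def hue_invariant(El, Ell):
    """arctan(H) = arctan(E_lambda / E_lambdalambda), evaluated per pixel.
    El and Ell are first and second order spectral derivative measurements
    (equal-shape arrays). arctan2 is a numerical convenience, not part of
    the derivation."""
    return np.arctan2(El, Ell)
```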

Corollary 8 Within the Kubelka-Munk model, a complete and irreducible set of in- variants for dichromatic reflection and a white illumination is given by m+n ∂ Eλ H m n = arctan (4.8) λ x ∂λm∂xn E ½ µ λλ ¶¾ for m, n 0. ≥ Application of the chain rule for differentiation yields the higher order expressions in terms of the spatio-spectral energy distribution. For illustration, we give all expres- sions for first spatial derivative and second spectral order. The hue spatial derivative is given by

\[
H_x = \frac{E_{\lambda\lambda} E_{\lambda x} - E_\lambda E_{\lambda\lambda x}}{E_\lambda^2 + E_{\lambda\lambda}^2} \tag{4.9}
\]
admissible for $E_\lambda^2 + E_{\lambda\lambda}^2 > 0$. In the sequel we also need an expression for the color saturation S,
\[
S = \frac{1}{E(\lambda, x)} \sqrt{E_\lambda^2 + E_{\lambda\lambda}^2}\,. \tag{4.10}
\]

4.2.2 Invariants for White but Uneven Illumination and Matte, Dull Surfaces

A class of tighter invariants may be derived when the object is matte and dull. Consider the photometric reflection model (eq. 4.4) for matte, dull surfaces with low Fresnel reflectance, $\rho_f(\vec{x}) \approx 0$,
\[
E = i(x)\, R_\infty(\lambda, x)\,. \tag{4.11}
\]
These assumptions allow the derivation of expressions describing object reflectance independent of the intensity distribution.

Lemma 9 Within the Kubelka-Munk model, assuming matte, dull surfaces and a white illumination,
\[
C_\lambda = \frac{E_\lambda}{E}
\]
is an object reflectance property independent of the viewpoint, surface orientation, illumination direction, and illumination intensity.

Proof: Differentiation of (eq. 4.11) with respect to λ and normalization by (eq. 4.11) results in an equation depending on an object property only,

\[
\frac{E_\lambda}{E} = \frac{1}{R_\infty(\lambda, x)} \frac{\partial R_\infty(\lambda, x)}{\partial \lambda}
\]
which proves the lemma. □

The property $C_\lambda$ may be interpreted as describing object color regardless of intensity. As a result of Proposition 7, all normalized higher order spectral derivatives of $C_\lambda$, and their spatial derivatives, result in object reflectance properties under white illumination. The normalization by E is to be evaluated at the spectral wavelength of interest, and is therefore considered locally constant with respect to λ.

Corollary 10 Within the Kubelka-Munk model, a complete and irreducible set of invariants for matte, dull surfaces, under a white illumination is given by

\[
C_{\lambda^m x^n} = \frac{\partial^n}{\partial x^n}\left\{ \frac{E_{\lambda^m}}{E} \right\} \tag{4.12}
\]
for m ≥ 1, n ≥ 0.

Specific first spatial and second spectral order expressions are given by
\[
C_{\lambda\lambda} = \frac{E_{\lambda\lambda}}{E}\,, \qquad
C_{\lambda x} = \frac{E_{\lambda x} E - E_\lambda E_x}{E^2}\,, \qquad
C_{\lambda\lambda x} = \frac{E_{\lambda\lambda x} E - E_{\lambda\lambda} E_x}{E^2}\,. \tag{4.13}
\]
Note that these expressions are valid everywhere E > 0. These invariants may be interpreted as the spatial derivatives of the intensity normalized spectral slope $C_\lambda$ and curvature $C_{\lambda\lambda}$.
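In a sampled setting, the expressions of (eq. 4.13) translate directly into array arithmetic; a minimal sketch, assuming the Gaussian-derivative measurements are already available as equal-shape arrays, with a hypothetical function name.

```python
def c_invariants(E, El, Ell, Ex, Elx, Ellx):
    """Invariants of (eq. 4.13) for matte, dull surfaces under white
    illumination. Inputs are spatio-spectral derivative measurements
    (equal-shape numpy arrays); valid where E > 0."""
    C_ll = Ell / E                          # normalized spectral curvature
    C_lx = (Elx * E - El * Ex) / E ** 2     # spatial derivative of slope
    C_llx = (Ellx * E - Ell * Ex) / E ** 2  # spatial derivative of curvature
    return C_ll, C_lx, C_llx
```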

4.2.3 Invariants for White, Uniform Illumination and Matte, Dull Surfaces

For uniform illumination, consider again the photometric reflection model (eq. 4.11) for matte, dull surfaces, and a white and uniform illumination with intensity i,

\[
E(\lambda, x) = i\, R_\infty(\lambda, x)\,. \tag{4.14}
\]
The assumption of a white and uniformly illuminated object may be achieved under well defined circumstances, such as the photography of art. These assumptions allow the derivation of expressions describing object reflectance independent of the intensity level.

Lemma 11 Within the Kubelka-Munk model, assuming matte, dull surfaces, planar objects, and a white and uniform illumination,

\[
W_x = \frac{E_x}{E}
\]
determines changes in object reflectance independent of the illumination intensity.

Proof: Differentiation of (eq. 4.14) with respect to x and normalization by (eq. 4.14) results in

\[
\frac{E_x}{E} = \frac{1}{R_\infty(\lambda, x)} \frac{\partial R_\infty(\lambda, x)}{\partial x}\,.
\]
This is an object reflectance property. □

The property Wx may be interpreted as an edge detector specific for changes in spectral distribution. Under common circumstances, a geometry dependent intensity term is present, hence Wx does not represent pure object properties but will include shadow edges where present.

As a result of Proposition 7, all normalized higher order derivatives of $W_x$ yield object reflectance properties under a white and uniform illumination. The normalization by E is to be evaluated at the spatial and spectral point of interest; hence it is considered locally constant.

Corollary 12 Within the Kubelka-Munk model, a complete and irreducible set of invariants for matte, dull surfaces, planar objects, under a white and uniform illumination is given by

\[
W_{\lambda^m x^n} = \frac{E_{\lambda^m x^n}}{E} \tag{4.15}
\]
for m ≥ 0, n ≥ 1.

Specific expressions for E > 0 up to first spatial and second spectral order are given by

\[
W_{\lambda x} = \frac{E_{\lambda x}}{E}\,, \qquad W_{\lambda\lambda x} = \frac{E_{\lambda\lambda x}}{E}\,. \tag{4.16}
\]
These invariants may be interpreted as the intensity normalized spatial derivatives of the spectral intensity E, spectral slope $E_\lambda$, and spectral curvature $E_{\lambda\lambda}$.

4.2.4 Invariants for Colored but Uneven Illumination

For colored illumination, when the spectral energy distribution of the illumination does not vary over the scene, the illumination may be decomposed into a spectral component e(λ) representing the illumination color, and a spatial component i(x) denoting variations in intensity due to the scene geometry. Hence, for matte, dull surfaces, $\rho_f \rightarrow 0$,
\[
E = e(\lambda)\, i(x)\, R_\infty(\lambda, x)\,. \tag{4.17}
\]
The assumption allows us to derive expressions describing object reflectance independent of the illumination.

Lemma 13 Within the Kubelka-Munk model, assuming matte, dull surfaces and a single illumination spectrum,

\[
N_{\lambda x} = \frac{E_{\lambda x} E - E_\lambda E_x}{E^2}
\]
determines changes in object reflectance independent of the viewpoint, surface orientation, illumination direction, illumination intensity, and illumination color.

Proof: Differentiation of (eq. 4.17) with respect to λ results in
\[
E_\lambda = i(x)\, R_\infty(\lambda, x) \frac{\partial e(\lambda)}{\partial \lambda} + e(\lambda)\, i(x) \frac{\partial R_\infty(\lambda, x)}{\partial \lambda}\,.
\]
Dividing by (eq. 4.17) gives the relative differential,

\[
\frac{E_\lambda}{E} = \frac{1}{e(\lambda)} \frac{\partial e(\lambda)}{\partial \lambda} + \frac{1}{R_\infty(\lambda, x)} \frac{\partial R_\infty(\lambda, x)}{\partial \lambda}\,.
\]
The result consists of two terms, the former depending on the illumination color only and the latter depending on the body reflectance only. Differentiation with respect to x yields

\[
\frac{\partial}{\partial x}\left\{ \frac{E_\lambda}{E} \right\} = \frac{\partial}{\partial x}\left\{ \frac{1}{R_\infty(\lambda, x)} \frac{\partial R_\infty(\lambda, x)}{\partial \lambda} \right\}\,.
\]
The right-hand side depends only on object properties. This proves the lemma. □

The invariant $N_{\lambda x}$ may be interpreted as the spatial derivative of the spectral change of the reflectance function $R_\infty(\lambda, x)$, and therefore indicates transitions in object reflectance. Hence, $N_{\lambda x}$ determines material transitions regardless of illumination color and intensity distribution.

As a result of Proposition 7, further differentiation of $N_{\lambda x}$ results in object reflectance properties under a colored illumination.

Corollary 14 Within the Kubelka-Munk model, a complete and irreducible set of invariants for matte, dull surfaces, and a single illumination spectrum, is given by

\[
N_{\lambda^m x^n} = \frac{\partial^{m+n-2}}{\partial \lambda^{m-1}\, \partial x^{n-1}}\left\{ \frac{E_{\lambda x} E - E_\lambda E_x}{E^2} \right\} \tag{4.18}
\]
for m ≥ 1, n ≥ 1.

The third order example is the spectral derivative of Nλx for E(λ, x) > 0,

\[
N_{\lambda\lambda x} = \frac{E_{\lambda\lambda x} E^2 - E_{\lambda\lambda} E_x E - 2 E_{\lambda x} E_\lambda E + 2 E_\lambda^2 E_x}{E^3}\,. \tag{4.19}
\]

4.2.5 Invariants for a Uniform Object

For a uniformly colored planar surface, the reflectance properties are spatially constant. Hence the reflectance function $R_\infty$ and Fresnel coefficient $\rho_f$ are independent of x,
\[
E = e(\lambda, x)\left\{ \rho_f + (1 - \rho_f)^2 R_\infty(\lambda) \right\}\,. \tag{4.20}
\]
For a single illumination source, expressions describing interreflections may be extracted, i.e., the reflected spectrum of surrounding materials.

Lemma 15 Within the Kubelka-Munk model, assuming dichromatic reflection, a single illumination source, and a uniformly colored planar surface,

\[
U_{\lambda x} = \frac{E_{\lambda x} E - E_\lambda E_x}{E^2}
\]
determines interreflections of colored objects, independent of the object spectral reflectance function.

Proof: Differentiating (eq. 4.20) with respect to λ results in

\[
E_\lambda = \left\{ \rho_f + (1 - \rho_f)^2 R_\infty(\lambda) \right\} \frac{\partial e(\lambda, x)}{\partial \lambda} + e(\lambda, x)\,(1 - \rho_f)^2 \frac{\partial R_\infty(\lambda)}{\partial \lambda}\,.
\]
Normalization by (eq. 4.20) results in

\[
\frac{E_\lambda}{E} = \frac{1}{e(\lambda, x)} \frac{\partial e(\lambda, x)}{\partial \lambda} + \frac{(1 - \rho_f)^2}{\rho_f + (1 - \rho_f)^2 R_\infty(\lambda)} \frac{\partial R_\infty(\lambda)}{\partial \lambda}\,.
\]
Differentiation with respect to x results in

\[
\frac{\partial}{\partial x}\left\{ \frac{E_\lambda}{E} \right\} = \frac{\partial}{\partial x}\left\{ \frac{1}{e(\lambda, x)} \frac{\partial e(\lambda, x)}{\partial \lambda} \right\}
\]
which depends on the illumination only. Differentiation yields the lemma. □

The property $U_{\lambda x}$ may be interpreted as describing edges due to interreflections and specularities. When ambient illumination is present, casting a different spectral distribution, the invariant describes shadow edges due to the combined ambient and incident illumination.

Note that the expression of Lemma 15 is identical to the expression of Lemma 13. Consequently, changes in object reflectance cannot be distinguished from interreflections in single images. Further differentiation of $U_{\lambda x}$ yields interreflections when assuming a uniformly colored planar surface. The result is identical to (eq. 4.19).

4.2.6 Summary of Color Invariants

In conclusion, within the Kubelka-Munk model, various sets of invariants are derived, as summarized in tab. 4.1. The class of materials for which the invariants are useful ranges from dyed paper and textiles, opaque plastics, and paint films, up to enamel and dental silicate cements [10]. The invariant sets may be ordered by broadness of invariance, where broader sets allow ignorance of a larger set of disturbing factors than tighter sets.

Table 4.1: Summary of the various color invariant sets and their invariance to specific imaging conditions. Invariance is denoted by “+”, whereas sensitivity to the imaging condition is indicated by “–”. Note that the reflected spectral energy distribution E is sensitive to all the conditions cited.

     viewing    surface      highlights   illumination   illumination   illumination   inter-
     direction  orientation               direction      intensity      color          reflection
H    +          +            +            +              +              –              –
N    +          +            –            +              +              +              –
U    +          +            –            +              +              +              –
C    +          +            –            +              +              –              –
W    –          –            –            –              +              –              –
E    –          –            –            –              –              –              –

The table offers the solution of using the narrowest set of invariants for known imaging conditions, since H ⊂ N = U ⊂ C ⊂ W ⊂ E. In the case that recording circumstances are unknown, the table offers a broad-to-narrow hierarchy. Hence, an incremental strategy of invariant feature extraction may be applied. Combining invariants opens up the way to edge type classification, as suggested in [9]. The vanishing of edges for certain invariants indicates whether their cause is shading, specular reflectance, or a material boundary.

4.2.7 Geometrical Color Invariants in Two Dimensions

So far, we have established color invariant descriptors based on differentials in the spectral and spatial domain, in one spatial dimension. When applied in two dimensions, the result depends on the orientation of the image content. In order to obtain meaningful image descriptions it is crucial to derive descriptors which are invariant with respect to translation, rotation, and scaling. For the grey-value luminance L, geometrical invariants are well established [5]. Translation and scale invariance is obtained by examining the (Gaussian) scale-space, which is a natural representation for investigating the scaling behavior of image features [11]. Florack et al. [5] extend the Gaussian scale-space with rotation invariance by considering local gauge coordinates in a systematic manner. The coordinate axes w and v are aligned with the gradient and isophote tangent directions, respectively. Hence, the first order gradient gauge invariant is the magnitude of the luminance gradient,

Lw = √(Lx² + Ly²) .   (4.21)

Note that the first order isophote gauge invariant is zero by definition. The second order invariants are given by

Lvv = ( Lx²Lyy − 2LxLyLxy + Ly²Lxx ) / Lw²   (4.22)

related to isophote curvature,

Lvw = ( LxLy(Lyy − Lxx) − (Lx² − Ly²)Lxy ) / Lw²   (4.23)

related to flow-line curvature, and

Lww = ( Lx²Lxx + 2LxLyLxy + Ly²Lyy ) / Lw²   (4.24)

related to isophote density. Note that the Laplacian operator ∆L = Lxx + Lyy is an invariant and hence equal to

∆L = Lvv + Lww .
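As an aside for implementation, the gauge invariants (eq. 4.21)–(eq. 4.24) translate directly into Gaussian derivative filtering. The following is a minimal sketch, assuming NumPy/SciPy; the small eps term regularizing the division by Lw² is an implementation detail of the sketch, not part of the theory.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gauge_invariants(L, sigma=1.0, eps=1e-10):
        # Gaussian derivatives of the luminance L; `order` is per axis (y, x).
        Lx  = gaussian_filter(L, sigma, order=(0, 1))
        Ly  = gaussian_filter(L, sigma, order=(1, 0))
        Lxx = gaussian_filter(L, sigma, order=(0, 2))
        Lyy = gaussian_filter(L, sigma, order=(2, 0))
        Lxy = gaussian_filter(L, sigma, order=(1, 1))

        Lw2 = Lx**2 + Ly**2
        Lw  = np.sqrt(Lw2)                                             # eq. 4.21
        Lvv = (Lx**2*Lyy - 2*Lx*Ly*Lxy + Ly**2*Lxx) / (Lw2 + eps)      # eq. 4.22
        Lvw = (Lx*Ly*(Lyy - Lxx) - (Lx**2 - Ly**2)*Lxy) / (Lw2 + eps)  # eq. 4.23
        Lww = (Lx**2*Lxx + 2*Lx*Ly*Lxy + Ly**2*Lyy) / (Lw2 + eps)      # eq. 4.24
        return Lw, Lvv, Lvw, Lww

Up to the regularization, Lvv + Lww reproduces the Laplacian Lxx + Lyy, which provides a simple numerical check.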

On the basis of these spatial results, we combine (eq. 4.21)–(eq. 4.24) with the color invariants for the 1D case established before. The resulting first order expressions are given in tab. 4.2. Two or three measures for edge strength are derived, one for each spectral differential order; the only exception is H.

Table 4.2: Summary of the first order geometrical invariants for the various color invariant sets. See tab. 4.1 for invariant class.

  H      Hw = √(Hx² + Hy²)
            = 1/(Eλ² + Eλλ²) · √( (EλλEλx − EλEλλx)² + (EλλEλy − EλEλλy)² )

  Cλ     Cλw = √(Cλx² + Cλy²)
            = 1/E² · √( (EλxE − EλEx)² + (EλyE − EλEy)² )

  Cλλ    Cλλw = √(Cλλx² + Cλλy²)
            = 1/E² · √( (EλλxE − EλλEx)² + (EλλyE − EλλEy)² )

  W      Iw = √(Wx² + Wy²)
            = 1/E · √(Ex² + Ey²)

  Wλ     Wλw = √(Wλx² + Wλy²)
            = 1/E · √(Eλx² + Eλy²)

  Wλλ    Wλλw = √(Wλλx² + Wλλy²)
            = 1/E · √(Eλλx² + Eλλy²)

  Nλ     Nλw = √(Nλx² + Nλy²)
            = 1/E² · √( (EλxE − EλEx)² + (EλyE − EλEy)² )

  Nλλ    Nλλw = √(Nλλx² + Nλλy²)
            = 1/E³ · √(A² + B²)
         where A = EλλxE² − EλλExE − 2EλxEλE + 2Eλ²Ex
           and B = EλλyE² − EλλEyE − 2EλyEλE + 2Eλ²Ey

Total edge strength due to differences in the energy distribution may be defined by the root squared sum of the edge strengths under a given imaging condition. A summary of the total edge strength measures, ordered by degree of invariance, is given in tab. 4.3. For completeness, the spatial second order derivatives in two dimensions are given in tab. 4.4 and tab. 4.5. The derivation of higher order invariants is straightforward. Usually many derivatives are involved, raising some doubt on the sustainable computational accuracy of the result.
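The total edge strength measures of tab. 4.3 are equally direct to compute once the spatio-spectral images are available. Below is a sketch, assuming NumPy/SciPy and float arrays E, El (Eλ) and Ell (Eλλ) on a common grid; the eps guard against division by zero is an addition of the sketch, not of the theory.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def total_edge_strengths(E, El, Ell, sigma=1.0, eps=1e-10):
        # Gaussian spatial derivatives at scale sigma; `order` is per axis (y, x).
        def dx(f): return gaussian_filter(f, sigma, order=(0, 1))
        def dy(f): return gaussian_filter(f, sigma, order=(1, 0))
        Ex, Ey = dx(E), dy(E)
        Elx, Ely = dx(El), dy(El)
        Ellx, Elly = dx(Ell), dy(Ell)

        Ew = np.sqrt(Ex**2 + Elx**2 + Ellx**2 + Ey**2 + Ely**2 + Elly**2)
        Ww = Ew / (E + eps)                      # W-type invariants share the 1/E factor
        # C-type components (equal to the first order N components), tab. 4.2:
        Clx = (Elx*E - El*Ex) / (E**2 + eps)
        Cly = (Ely*E - El*Ey) / (E**2 + eps)
        Cllx = (Ellx*E - Ell*Ex) / (E**2 + eps)
        Clly = (Elly*E - Ell*Ey) / (E**2 + eps)
        Cw = np.sqrt(Clx**2 + Cllx**2 + Cly**2 + Clly**2)
        # Second order N components, eq. 4.19 applied in x and y:
        Nllx = (Ellx*E**2 - Ell*Ex*E - 2*Elx*El*E + 2*El**2*Ex) / (E**3 + eps)
        Nlly = (Elly*E**2 - Ell*Ey*E - 2*Ely*El*E + 2*El**2*Ey) / (E**3 + eps)
        Nw = np.sqrt(Clx**2 + Nllx**2 + Cly**2 + Nlly**2)
        # Hue gradient magnitude, tab. 4.2:
        Hx = (Ell*Elx - El*Ellx) / (El**2 + Ell**2 + eps)
        Hy = (Ell*Ely - El*Elly) / (El**2 + Ell**2 + eps)
        Hw = np.sqrt(Hx**2 + Hy**2)
        return Ew, Ww, Cw, Nw, Hw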

4.3 Measurement of Color Invariants

Up to this point we have considered invariant expressions describing material properties under some general assumptions. They are derived from expressions exploring the infinitely dimensional Hilbert space of spectra at an infinitesimally small spatial neighborhood. As shown in Chapter 2, the spatio-spectral energy distribution is measurable only at a certain spatial extent and a certain spectral bandwidth. Hence, physical measurements imply integration over the spectral and spatial dimensions. In this section we exploit the Gaussian color model as presented in Chapter 2 to define measurable color invariants.

Table 4.3: Summary of the total edge strength measures for the various color invariant sets, ordered by degree of invariance. The edge strength Ew is not invariant to any change in imaging conditions. See tab. 4.1 for invariant class.

  E      Ew = √(Ex² + Eλx² + Eλλx² + Ey² + Eλy² + Eλλy²)

  W      Ww = √(Wx² + Wλx² + Wλλx² + Wy² + Wλy² + Wλλy²)

  C      Cw = √(Cλx² + Cλλx² + Cλy² + Cλλy²)

  N      Nw = √(Nλx² + Nλλx² + Nλy² + Nλλy²)

  H      Hw = √(Hx² + Hy²)

4.3.1 Measurement of Geometrical Color Invariants

Measurement of the geometrical color invariants is obtained by substituting the Gaussian basis, as derived from the RGB measurements (see Chapter 2), into the invariant expressions derived in section 4.2. Measured values for the geometrical color invariants given in tab. 4.2 and tab. 4.4 are obtained by substituting the measured values Eˆ, Eˆλ and Eˆλλ, at given scale σx, for E, Eλ and Eλλ. In this section, we demonstrate the color invariant properties for each of the assumed imaging conditions by applying the invariants to an example image. The invariants regarding a uniform object are not demonstrated separately, since the expressions are included in the invariants for colored illumination.
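As an illustration of this substitution step, the following sketch derives the measured basis from an RGB image. The linear RGB-to-(Eˆ, Eˆλ, Eˆλλ) matrix quoted below is the Gaussian color model transform reported for a standard RGB camera (see Chapter 2); its coefficients are repeated here as an assumption of the sketch, as is the hue convention Hˆ = arctan(Eˆλ/Eˆλλ).

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Assumed RGB -> (E, E_lambda, E_lambdalambda) transform of the
    # Gaussian color model for a standard RGB camera.
    RGB2E = np.array([[ 0.06,  0.63,  0.27],
                      [ 0.30,  0.04, -0.35],
                      [ 0.34, -0.60,  0.17]])

    def measured_basis(rgb, sigma_x=1.0):
        # rgb: float array of shape (H, W, 3); returns the measured basis
        # ^E, ^E_lambda, ^E_lambdalambda regularized at spatial scale sigma_x.
        E, El, Ell = np.tensordot(rgb, RGB2E.T, axes=1).transpose(2, 0, 1)
        return tuple(gaussian_filter(c, sigma_x) for c in (E, El, Ell))

    # Usage (input image is hypothetical):
    # E, El, Ell = measured_basis(rgb.astype(float), sigma_x=1.0)
    # H = np.arctan2(El, Ell)   # hue-like invariant ^H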

Measurement of invariants for white illumination The invariant Hˆ is representative for hue or dominant color of the material, disre- garding intensity and highlights. The pseudo invariant Sˆ (eq. 4.10) denotes the purity of the color, and therefore is sensitive to highlights since at these points color is de- saturated. An example is shown in fig. 4.1. The invariant Hˆw represents the hue gradient magnitude, detecting color edges independent of intensity and highlights, as demonstrated in fig. 4.1. Common expressions for hue are known to be noise sensitive. In the scale frame, Gaussian regularization offers a trade-off between noise and detail sensitivity. The influence of noise on hue gradient magnitude Hw for various σ~x is shown in fig. 4.2. The influence of noise on the hue edge detection is drastically reduced for larger observational scale σ~x. 4.3. Measurement of Color Invariants 57 y y E E x x E E 2 λ ´ E λλ y y 6 E E E − λy λλy x x E +2 E E E E E λ λ xy y λx λx E E E E E E 2 λ 2 +2 +2 + E − E E λλx x y y +2 E λλx E E E 2 E E x − λx λx E ´ E λλy λy sets. E E λ E E 2 2 t xy E 2 − − − − λλy E 4 ´ y E E λy E E E λλ E 2 λλ 3 xy xy λλ arian E λλxy E E E E +4 v E − λλx E + λ λ E + 3 3 2 in λ E λ E E E 2 y x E E E xy E ∂ − E λy − − E ³ ∂ E λ − λ E E E 2 E λ x x E ´ color E E E λλy λxy ³ λx 2 λλ E λxy E ) λy λy E E − E E E 2 + λλ E +4 2 − − λ λλx − E y E arious E E E E E ´³ y v ³ xy y y λ E E E E E 2 λλ x λλx − E E λλ the λx λx E + E E E + λx 2 λ λλ 2 − + + E for E E 2 2 E E ³ 2 E E λλ es + + E ( E E 2 λxy λλxy xy λxy λλxy λxy λλxy E − E E E E E E E ativ ======deriv xy xy λxy λxy λxy H W λλxy λλxy λλxy C N W C N W order 2 x E 2 λ 2 λx E second 6 E ) 2 x 2 − E − E x ) λλx λλ E xx 2 2 E x x E E spatial E E 2 λ λλ λ λ λλxx +2 λλx E E E E E E E the + 2 λ x +2 +2 +2 E − E E λx 4 E E of 2 λ − x E x x ´ E 2 E E λ ´ E E λλx λ 2 λλ E λxx 2 E 2 λλ E λx λx 2 E λxx E E )( 3 E E E − + E E 2 2 λx 2 2 + λ λλ E xx 3 3 ∂ E − − 2 − λ λλx ∂ E E E E ( ³ E E xx E E Summary ´ +8 xx ³ E λ E xx xx E E 2 λλ 2 x E E λλ − E E λλ λ λ 4.4: E + E E E − λx 2 λ λλ 2 − − − E E E 2 2 E E ³ 2 able E E λλ T + + E ( E E 2 λxx λλxx xx λxx λλxx λxx λλxx E − E E E E E E E ======xx xx λxx λxx λxx H W λλxx λλxx λλxx C N W C N W λ λλ λ λλ λ λλ H C C W W W N N 58 Chapter 4. 
Table 4.5: Summary of the second order geometrical invariants for the various color invariant sets. [table not reproduced here]


Figure 4.1: Example of the invariants associated with Hˆ. The example image is shown in (a), invariant Hˆ in (b), the derived expression Sˆ in (c), and gradient magnitude Hˆw in (d). Intensity changes and highlights are suppressed in the Hˆ and Hˆw images. The Sˆ image shows a low purity at color borders, due to mixing of colors on the two sides of the border. For all pictures, σx = 1 pixel and the image size is 256 × 256.


Figure 4.2: The influence of additive white noise on the gradient magnitude Hˆw. Independent zero-mean Gaussian noise is added to each of the RGB channels, SNR = 5 (a), and Hˆw is determined for σx = 1 (b), σx = 2 (c) and σx = 4 (d), respectively. Note the noise robustness of the hue gradient Hˆw for larger σx.

Measurement of invariants for white illumination and matte, dull surfaces

The invariants Cˆλ and Cˆλλ represent normalized color; consequently, their spatial derivatives measure the normalized color gradients. Cˆλw may be interpreted as the color gradient magnitude for transitions in the first order spectral derivative, whereas Cˆλλw detects edges related to the second order spectral derivative. An example of the normalized colors and their gradients is shown in fig. 4.3.


Figure 4.3: Examples of the normalized colors Cˆλ denoting the first spectral derivative (a), Cˆλλ denoting the second spectral derivative (b), and their gradient magnitudes Cˆλw (c) and Cˆλλw (d), respectively. Note that intensity edges are suppressed, whereas highlights are still present.


Figure 4.4: Examples of the gradient magnitudes Iˆw (a), Wˆλw (b) and Wˆλλw (c), respectively. Note that all images show edges due to intensity differences and highlights. Iˆw shows purely intensity or shadow edges, while Wˆλw and Wˆλλw show color edges.

Measurement of invariants for white and uniform illumination and matte, dull surfaces

The invariant Iˆw denotes intensity or shadow edges, whereas the invariants Wˆλw and Wˆλλw represent color edges. Wˆλw may be interpreted as the gradient magnitude for the first spectral derivative. A similar interpretation holds for Wˆλλw, but here edges caused by the second spectral derivative are detected. An example of the gradients is shown in fig. 4.4.


Figure 4.5: Examples of the gradient magnitudes Nˆλw (a) and Nˆλλw (b). Note that intensity edges are suppressed. Further, note that the assumptions underlying this invariant do not account for highlights and interreflections, as is seen in the figure.

Measurement of invariants for colored illumination

The invariants Nˆλw and Nˆλλw may be interpreted as the reflectance function gradient magnitudes for the spectral first and second order derivatives, respectively. Hence, material edges are detected independent of illumination intensity and illumination color. An example of the gradients Nˆλw and Nˆλλw is shown in fig. 4.5. In Chapter 3, illumination color invariance is investigated for the proposed edge strength, resulting in a significant reduction of chromatic variation due to illumination color. For a more elaborate discussion of the subject, see Chapter 3.

Total color gradients

The expressions for the total gradient magnitude are given by Eˆw, Wˆw, Cˆw, Nˆw, and Hˆw. The proposed edge strength measures may be ordered by degree of invariance, yielding Eˆw as measure of spectral edge strength; Wˆw as measure of color edge strength, disregarding intensity level; Cˆw as measure of chromatic edge strength, disregarding intensity distribution; Nˆw as measure of chromatic edge strength, disregarding illumination; and Hˆw as measure of hue edge strength, disregarding intensity and highlights. An example of the proposed measures is shown in fig. 4.6.

4.3.2 Discriminative Power for RGB Recording

In order to investigate the discriminative power of the proposed invariants, edge detection between 1013 different colors of the PANTONE† color system is examined. The 1013 PANTONE colors‡ are recorded by an RGB camera (Sony DXC-930P) under a 5200 K daylight simulator (Little Light, Grigull, Jungingen, Germany).

†PANTONE is a trademark of Pantone, Inc.
‡We use the PANTONE edition 1992–1993, Groupe BASF, Paris, France.


Figure 4.6: Examples of the total color edge strength measures. a. Wˆw, invariant for a constant gain or intensity factor; note that this image shows intensity, color, and highlight boundaries. b. Cˆw and c. Nˆw, both invariant for shading. d. Hˆw, invariant for shading and highlights. The effect of intensity and highlights on the different invariants is in accordance with tab. 4.1.

Purely achromatic patches are removed from the dataset, leaving 1000 colored patches. In this way, numerically unstable results for set Hˆ are avoided. Color edges are formed by combining each of the patches with all others, yielding 499,500 different edges. Edges are defined virtually, by computing the left-hand part of the filter on one patch and the right-hand side on one of the other patches. The total edge strength measures for invariants Eˆ, Wˆ, Cˆ, Nˆ, and Hˆ (tab. 4.3) are measured for each color combination at scales σx = {0.75, 1, 2, 3} pixels, hence evaluating the total performance of each set of invariants. Discrimination between colors is determined by evaluating the ratio of discriminatory contrast between patches to within-patch noise,

DNRc(i, j) = cˆij / √( max_k { (1/N) Σx,y cˆk²(x, y) } )   (4.25)

where cˆ denotes one of the edge strength measures for Eˆ, Wˆ, Cˆ, Nˆ, or Hˆ, respectively. Further, cˆij denotes the edge strength between patches i and j, and cˆk denotes the response of the edge detector to noise within patch k. Hence, for detector cˆ, the denominator in expression (eq. 4.25) expresses the maximum response over the 1000 patches due to noise, whereas the numerator expresses the response due to the color edge. Two colors are defined to be discriminable when DNR ≥ 3, effectuating a conservative threshold.

The results of the experiment are shown in tab. 4.6. For colors uniformly distributed in color space, for the configuration used and spatial scale σx = 0.75, about 970 colors can be distinguished from one another (Eˆ).

Table 4.6: For each invariant, the number of colors is given which can be discriminated from one another in the PANTONE color system (1000 colors). The number refers to the colors still distinguishable with the conservative criterion DNR ≥ 3, given the hardware and spatial scale σx. For σx ≥ 2, Eˆ and Wˆ discriminate between all patches, hence the results are saturated.

       σx = 0.75   σx = 1   σx = 2   σx = 3
  Eˆ      970        983      1000     1000
  Wˆ      944        978      1000     1000
  Cˆ      702        820       949      970
  Nˆ      631        757       962      974
  Hˆ      436        461       452      462

For invariant Wˆ, performance reduces to 950 colors. A further decrease holds for Cˆ and Nˆ, which distinguish approximately 700 and 630 colors, respectively. The lowest discriminative power is achieved by invariant set Hˆ, which discriminates approximately 440 colors. When the spatial scale σx increases, discrimination improves. A larger spatial scale yields a better reduction of noise, hence a more accurate estimate of the true color is obtained. The results shown for σx ≥ 2 are saturated for Eˆ and Wˆ; hence, a larger set of colors can be discriminated than shown here. Note that for σx ≥ 2 the performance of Cˆ is comparable to the performance of Nˆ, again indicating saturation. Note also that the power of discrimination, expressed as the number of discriminable colors, decreases with the degree of invariance. These are very encouraging results given a standard RGB camera rather than a spectrophotometer. To discriminate 450 to 950 colors, while maintaining invariance, on just two patches in the image is helpful for many practical image retrieval problems.
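The criterion (eq. 4.25) amounts to a handful of array operations. A minimal sketch, assuming the pairwise edge strengths cˆij and the within-patch noise responses cˆk(x, y) have already been measured:

    import numpy as np

    def discriminable(edge_strength, noise_patches, threshold=3.0):
        # edge_strength: array of c_ij values for the patch pairs;
        # noise_patches: iterable of per-patch detector responses c_k(x, y).
        # Denominator of eq. 4.25: maximum RMS noise response over the patches.
        noise_rms = np.sqrt(max(np.mean(c**2) for c in noise_patches))
        dnr = edge_strength / noise_rms
        return dnr >= threshold      # conservative DNR >= 3 criterion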

4.3.3 Evaluation of Scene Geometry Invariance

In this section, illumination and viewing direction invariance is evaluated by experiments on a collection of real-world surfaces. Colored patches from the CUReT§ database are selected [3]. The database consists of planar patches of common materials, captured under various illumination and viewing directions. Hence, recordings of 27 colored material patches, each captured under 205 different illumination and viewing directions, are considered. Color edges are formed by combining the patch for each imaging condition with the others, yielding 205 × (205 − 1)/2 = 20,910 different edges per material. Edges are defined virtually, by computing the left-hand part of the filter on one patch and the right-hand side on one of the other patches.

§http://www.cs.columbia.edu/CAVE/curet/

The total edge strength measures for invariants Eˆ, Wˆ, Cˆ, Nˆ, and Hˆ (tab. 4.3) are obtained for each material at scale σx = 3 pixels. The root squared sum over the measured edge strengths indicates the sensitivity to scene geometry for the material and edge strength measure under consideration. For the spectral edge strength Eˆ, the edge strength was normalized to the average intensity over all viewing conditions. In this way, comparison between the various measured edge strengths is possible.

The results are shown in tab. 4.7. A high value of Eˆ indicates influence of scene geometry on material surface reflectance. By construction of the database, which contains planar patches, the value for Wˆ approximates the value for Eˆ. Exceptions are surfaces with rough texture, exhibiting shadow edges larger than the measurement scale (straw, cracker a). Further, the selected center point in the 205 recordings does not correspond to one identical point on the material patches, causing an error for non-uniformly colored patches (orange peel, peacock feather) or patches exhibiting intensity variations (rabbit fur, brick b, moss). The measured errors for Cˆ and Nˆ are approximately similar. White light is used for the recordings, hence both Cˆ and Nˆ reduce the measurement variation due to changes in intensity. Exceptions are scattering materials with a fine texture relative to the measurement scale σx = 3 pixels (velvet, rug b), causing highlights to influence the measured surface reflectance. Overall, for Cˆ and Nˆ, the variation in edge strength due to illumination and viewing direction is reduced drastically. Even for these non-Lambertian real-world surfaces, invariant sets Nˆ and Cˆ are highly robust against changes in scene geometry. For Hˆ, results are influenced by numerical stability. Highly saturated materials (velvet, artificial grass, lettuce leaf, soleirolia plant, moss) result in a small error since Eˆλ² + Eˆλλ² ≫ 0. An exception is again the non-uniformly colored orange peel. Note that the error due to highlights for velvet is much smaller than that measured for Cˆ and Nˆ. For materials with lower saturation, errors become larger. Overall, the influence of illumination and viewing direction is slightly reduced for Hˆ.

In conclusion, the table demonstrates the error to be expected for real, commonly non-Lambertian surfaces. These results demonstrate the usefulness of the various invariant sets for material classification and recognition based on surface reflectance properties.

4.3.4 Localization Accuracy for the Geometrical Color Invariants

Rotational and translational invariance remains to be evaluated. The independence of the derived expressions of the coordinate system is shown mathematically in [4]; it is demonstrated by the examples shown in fig. 4.6. The measurement problem related to rotation and translation invariance is the accuracy of edge localization between different colors. In order to investigate the localization accuracy of the proposed invariants, the edge location is evaluated between 1000 different colors of the PANTONE system. The uncoated patches as described in the preceding section (section 4.3.2) are used to form 499,500 different color edges.

Table 4.7: Results for the scene geometry invariance evaluation on the CUReT dataset [3]. The root squared sum in measured total edge strength over 205 recordings under different viewing and illumination directions is given for each of the materials. A high value for Eˆ indicates a large influence of scene geometry on surface reflectance for the considered material. A low value for the variation of invariants Wˆ, Cˆ, Nˆ or Hˆ relative to Eˆ indicates robustness against scene geometry for the invariant under consideration. The table offers an indication of the error to be expected in estimating surface reflectance for real materials.

  material                recordings     E     W/E   C/E   N/E   H/E
  Velvet                     205       102.7   1.01  0.86  1.45  0.03
  Pebbles                    205        25.0   1.07  0.15  0.15  0.59
  Artificial Grass           205        29.1   0.97  0.22  0.23  0.28
  Roof Shingle               205        27.4   1.04  0.30  0.30  2.00
  Cork                       205        34.5   0.93  0.20  0.19  0.59
  Rug b                      205        28.2   1.01  0.45  0.38  0.58
  Sponge                     205        29.0   1.09  0.21  0.21  0.62
  Lambswool                  205        29.6   1.09  0.18  0.17  0.53
  Lettuce Leaf               205        41.3   1.02  0.13  0.15  0.26
  Rabbit Fur                 205        26.0   1.15  0.19  0.18  0.68
  Roof Shingle (zoomed)      205        34.2   0.98  0.19  0.19  0.97
  Human Skin                 205        39.7   0.99  0.10  0.09  0.47
  Straw                      205        33.7   1.13  0.12  0.12  0.38
  Brick b                    205        77.6   0.55  0.06  0.06  0.30
  Corduroy                   205        27.3   1.14  0.07  0.07  0.47
  Linen                      205        39.1   0.90  0.11  0.11  0.62
  Brown Bread                205        29.7   1.06  0.19  0.19  0.54
  Corn Husk                  205        33.3   1.05  0.10  0.11  0.41
  Soleirolia Plant           205        39.4   0.98  0.15  0.17  0.24
  Wood a                     205        34.8   0.94  0.19  0.18  0.58
  Orange Peel                205        73.3   0.73  0.30  0.28  0.36
  Wood b                     205        34.3   0.97  0.13  0.13  0.36
  Peacock Feather            205        61.3   0.66  0.16  0.16  0.70
  Tree Bark                  205        37.0   0.97  0.16  0.16  0.48
  Cracker a                  205        28.7   1.22  0.19  0.19  0.71
  Cracker b                  205        25.3   1.01  0.30  0.27  0.75
  Moss                       205        46.9   0.84  0.13  0.15  0.20

The total edge strength measures for invariants Eˆ, Wˆ, Cˆ, Nˆ, and Hˆ (tab. 4.3) are measured for each color combination at scales σx = {1, 2, 4} pixels, hence evaluating the total performance of each set of invariants. For color pairs that can be distinguished (see section 4.3.2), the edge position between different patches is determined by tracing the maximum response along the edge in the resulting edge strength image. The average deviation between the measured edge location and the real edge location is considered a good measure of the localization accuracy. The root mean squared error in edge location is determined over all color pairs for each of the total edge strength measures. The results of the experiment are shown in tab. 4.8.

Table 4.8: Results for the edge localization experiment relative to pixel size. For each invariant, the root mean squared error in measured edge position over the color pairs from the PANTONE system is given.

       σx = 1   σx = 2   σx = 4
  Eˆ    0.22     0.28     0.36
  Wˆ    0.51     1.03     1.82
  Cˆ    0.66     1.49     2.44
  Nˆ    0.65     1.45     2.38
  Hˆ    2.46     1.63     0.70

For the invariants Eˆ, Wˆ, Cˆ, and Nˆ, localization accuracy degrades for higher spatial scale σx. This is a well-known property of Gaussian smoothing in the intensity domain. The invariants all result in a larger localization error than Eˆ, due to a severe reduction in edge contrast. The localization error for Cˆ is almost identical to the error for Nˆ, as expected. Note that the localization error remains within the spatial scale σx. For the invariant Hˆ, the edge strength is normalized by the squared sum of the spectral derivatives (eq. 4.9). Hence, localization accuracy improves for higher spatial scale due to a better estimation of the local chromaticity. In conclusion, edge localization accuracy is slightly reduced for the invariant sets in comparison to Eˆ. However, precision remains within the spatial differential scale σx. The results show Hˆ to be noise sensitive for small spatial scales σx < 2.

4.4 Conclusion

We have derived geometrical color invariant expressions describing material properties under three independent assumptions regarding the imaging conditions: a. white or colored illumination, b. matte, dull object or general object, c. uniformly stained object or generally colored object. The reflectance model under which the invariants remain valid is useful for a wide range of materials [10]. Experiments on an example image showed the invariant sets C and N to be successful in disregarding shadow edges, whereas the set H is shown to be successful in discounting both shadow edges and highlights. In Chapter 3 the degree of illumination color invariance for set Nˆ is investigated.

We showed the discriminative power of the invariants to be orderable by broadness of invariance. The highest discriminative power is obtained by set Wˆ (950 colors out of 1000), which has the tightest set of disturbing conditions, namely overall illumination intensity or camera gain. Discrimination degrades for set Cˆ (700 colors), which is invariant for shading effects. Set Nˆ, invariant for shading and illumination color, discriminates between 630 colors, whereas set Hˆ, invariant for shadows and highlights, has the lowest discriminative power (440 colors). Discriminative power increases when considering a larger spatial scale σx, thereby taking a larger neighborhood into account for determining the color value. Hence, a larger spatial scale results in a more accurate estimate of color at the point of interest, increasing the accuracy of the result. The aim of the chapter is reached in that a high color discrimination resolution is achieved while maintaining constancy against disturbing imaging conditions, both theoretically and experimentally.

We have restricted ourselves in several ways. We have derived expressions up to the second spatial order, and investigated their performance only for the spatial gradient. The derivation of higher order derivatives is straightforward, and may aid in corner detection [21]. Usually many derivatives are involved here, raising some doubt on the sustainable accuracy of the result. Consequently, a larger spatial scale may be necessary to increase the accuracy of measurements involving higher order derivatives. Further, we have only considered spectral derivatives up to second order, yielding compatibility with human color vision. For a spectrophotometer, measurements can be obtained at different positions λ0, for different scales σλ, and for higher spectral differential order, thereby exploiting the generality of the Gaussian color model.

We provided different classes of color invariants under general assumptions regarding the imaging conditions. We have shown how to reliably measure color invariants from RGB images by using the Gaussian color model. The Gaussian color model extends the differential geometry approach from grey-value images to multi-spectral differential geometry. Further, we experimentally proved the color invariants to be successful in discounting shadows and highlights, resulting in accurate measurements of surface reflectance properties. The presented framework for color measurement is well-defined on a physical basis; hence it is theoretically better founded, as well as experimentally better evaluated, than existing methods for the measurement of color features in RGB images.

Bibliography

[1] E. Angelopoulou, S. Lee, and R. Bajcsy. Spectral gradient: A material descriptor invariant to geometry and incident illumination. In Proceedings of the Seventh IEEE International Conference on Computer Vision, pages 861–867. IEEE Computer Society, 1999.

[2] A. Cumani. Edge detection in multispectral images. CVGIP: Graphical Models and Image Processing, 53(1):40–51, 1991.

[3] K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink. Reflectance and texture of real world surfaces. ACM Trans. Graphics, 18:1–34, 1999.

[4] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever. Scale and the differential structure of images. Image and Vision Computing, 10(6):376–388, 1992.

[5] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever. Cartesian differential invariants in scale-space. Journal of Mathematical Imaging and Vision, 3(4):327–348, 1993.

[6] R. Gershon, D. Jepson, and J. K. Tsotsos. Ambient illumination and the deter- mination of material changes. J. Opt. Soc. Am. A, 3:1700–1707, 1986.

[7] T. Gevers, S. Ghebreab, and A. W. M. Smeulders. Color invariant snakes. In P. H. Lewis and M. S. Nixon, editors, Proceedings of the Ninth British Machine Vision Conference, pages 659–670. University of Southampton, 1998.

[8] T. Gevers and A. W. M. Smeulders. Color based object recognition. Pat. Rec., 32:453–464, 1999.

[9] T. Gevers and H. Stokman. Reflectance based edge classification. In Proceedings of Vision Interface, pages 25–32. Canadian Image Processing and Pattern Recognition Society, 1999.

[10] D. B. Judd and G. Wyszecki. Color in Business, Science, and Industry. Wiley, New York, NY, 1975.

[11] J. J. Koenderink. The structure of images. Biol. Cybern., 50:363–370, 1984.

[12] J. J. Koenderink and A. Kappers. Color Space. Utrecht University, The Netherlands, 1998.

[13] J. J. Koenderink and A. J. van Doorn. Receptive field families. Biol. Cybern., 63:291–297, 1990.

[14] P. Kubelka and F. Munk. Ein Beitrag zur Optik der Farbanstriche. Z. Techn. Physik, 12:593, 1931.

[15] T. Lindeberg. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[16] P. Olver, G. Sapiro, and A. Tannenbaum. Differential invariant signatures and flows in computer vision: A symmetry group approach. In B. M. ter Haar Romeny, editor, Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[17] M. Pluta. Advanced Light Microscopy, volume 1. Elsevier, Amsterdam, 1988.

[18] G. Sapiro and D. L. Ringach. Anisotropic diffusion of multivalued images with applications to color filtering. IEEE Trans. Image Processing, 5(11):1582–1586, 1996.

[19] S. A. Shafer. Using color to separate reflection components. Color Res. Appl., 10(4):210–218, 1985.

[20] H. Stokman and T. Gevers. Detection and classification of hyper-spectral edges. In Proceedings of the Tenth British Machine Vision Conference, pages 643–651. CRI Repro Systems Ltd., 1999.

[21] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[22] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantita- tive Data and Formulae. Wiley, New York, NY, 1982.

[23] S. Di Zenzo. A note on the gradient of a multi-image. Comput. Vision Graphics Image Processing, 33:116–125, 1986.

Part II

Geometrical Structure

Chapter 5

Robust Autofocusing in Microscopy

appeared in Cytometry, vol. 39, pp. 1–9, 2000. European patent application filed under no. 99201795.4 on June 4, 1999.

“The way the Nutri-Matic machine functioned was very interesting. When the Drink button was pressed it made an instant but highly detailed examination of the subject’s taste buds, a spectroscopic analysis of the subject’s metabolism and then sent tiny experimental signals down the neural pathways to the taste centres of the subject’s brain to see what was likely to go down well. However, it invariably produced a plastic cup filled with a liquid which was almost, but not quite, entirely unlike tea.” in The Hitch Hiker’s Guide to the Galaxy, by Douglas Adams.

Along with the introduction of high throughput screenings, quantitative microscopy is gaining importance in pharmaceutical research. Fully automatic acquisition of microscope images in an unattended operation, coupled to an automatic image analysis system, allows for the investigation of morphological changes. Time lapse experiments reveal the effect of drug compounds on the dynamics of living cells. Histochemical assessment of fixed tissue sections is used to quantify pathological modification. A critical step in automatic screening is focusing. Fast and reliable autofocus methods for the acquisition of microscope images are indispensable for routine use on a large scale. Autofocus algorithms should be generally applicable to a large variety of microscopic modes and to a large variety of preparation techniques and specimen types. Although autofocusing is a long standing topic in the literature [4, 5, 7, 8, 9, 10, 11, 19], no such generally applicable solution is available. Methods are often designed for one kind of imaging mode, and they have been tested under well-defined circumstances. The assumptions made for determining the focal plane in fluorescence microscopy

are not compatible with those in phase contrast microscopy, and this holds true throughout. We consider the design of a method which is generally applicable in light microscopy.

From Fourier optics [13] it has been deduced that well-focused images contain more detail than images out of focus. A focus score is used to measure the amount of detail. The focus curve can be estimated by sampling the focus score for different levels of focus. Some examples of focus curves are shown in fig. 5.2. Best focus is found by searching for the optimum in the focus curve. In a classical approach, the value of the focus score is estimated for a few focus positions [8, 19, 2]. Evaluation of the scores indicates where on the focus curve to take the next sample. Repeating the process iteratively should ensure convergence to the focal plane. A major drawback is that such an optimization procedure presupposes a. a uni-modal focus function, and b. a broad-tailed extremum to obtain a wide focus range, neither of which holds true in general. In reality, the focus curve depends on the microscope setup, imaging mode and preparation characteristics [16]. When the assumed shape of the focus curve does not match the real focus curve, or when local extrema emerge, convergence to the focal plane is not guaranteed.

Groen et al. [5] specify criteria for the design of autofocus procedures. We adopt these criteria of good focusing: a. accuracy, b. reproducibility, c. general applicability, d. insensitivity to other parameters. Under insensitivity to other parameters we consider robustness against noise and optical artifacts common to microscopic image acquisition. Further, we reject the criterion of unimodality of the focus curve, which cannot be achieved in practice [16, 15]. As a consequence, the range or broadness of the extremum in the focus curve is of less relevance.

In this report, an autofocus method is presented which is generally applicable in different microscopic modes. The aim was to develop a method especially suited for an unattended operational environment, such as high throughput screenings. Therefore, the method should be robust against confounding factors common in microscopy, such as noise, optical artifacts and dust on the preparation surface. To evaluate the performance of the autofocus method, experiments have been conducted in screening applications.

5.1 Material and Methods

5.1.1 The Focus Score

From Fourier optics, measurement of the focus score can best be based on the energy content of a linearly filtered image [5, 13]. From [5, 8, 16] it can be deduced that an optimal focus score is produced by the gradient filter. Scale-space theory [20] leads to the use of the first order Gaussian derivative to measure the focus score.

The σ of the Gauss filter determines the scale of prominent features. The focus function becomes

F(σ) = (1/NM) Σx,y ( [f(x, y) ∗ Gx(x, y, σ)]² + [f(x, y) ∗ Gy(x, y, σ)]² )
     = (1/NM) Σx,y ( fx² + fy² )   (5.1)

where f(x, y) is the image grey value, Gx(x, y, σ) and Gy(x, y, σ) are the first order Gaussian derivatives in the x- and y-direction at scale σ, NM is the total number of pixels in the image, and fx, fy are the image derivatives at scale σ in the x- and y-direction, respectively.

Often, a trade-off between noise sensitivity and detail sensitivity can be observed for a specific microscope set-up. For example, in fluorescence microscopy the signal to noise ratio (SNR) is often low, and relatively smooth images are examined. For phase contrast microscopy, the SNR is high, and small details (the phase transitions) have to be detected. The accuracy of autofocusing depends on the signal to noise ratio as propagated through the focus score filter [19]. Therefore, the σ of the Gaussian filter should be chosen such that noise is maximally suppressed, while the response to the details of interest in the image is preserved. For bar-like structures, the value of σ should conform to [17]

σ ≈ d / (2√3)   (5.2)

where the thickness of the bar is given by d. Assuming that the smallest detail to be focused may be considered bar shaped, (eq. 5.2) gives an indication of the minimal value of σ. Note that the filter response degrades for smaller values, whereas a very large value smooths all details to noise level.
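A minimal sketch of the focus score (eq. 5.1), assuming NumPy/SciPy; the helper implements the scale selection rule (eq. 5.2) for a bar-like detail of thickness d pixels.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def focus_score(f, sigma):
        # f: grey-value image as a float array; sigma: scale of the details of interest.
        fx = gaussian_filter(f, sigma, order=(0, 1))   # first order derivative in x
        fy = gaussian_filter(f, sigma, order=(1, 0))   # first order derivative in y
        return np.mean(fx**2 + fy**2)                  # eq. 5.1: (1/NM) sum(fx^2 + fy^2)

    def sigma_for_bar(d):
        # eq. 5.2: minimal sigma for a bar-like detail of thickness d pixels.
        return d / (2.0 * np.sqrt(3.0))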

5.1.2 Measurement of the Focus Curve

Consider a system consisting of the following hardware: 1) a microscope with a scanning stage and position controller for both the axial and lateral directions, 2) a camera placed on the microscope recording its field of view, 3) a video digitizer connected to a computer system, writing the camera output into the computer's memory at video rate. The computer system is able to send positioning commands to the stage controller. Examples of such systems will be given later.

The focal plane of the microscope is assumed to be within a pre-defined interval ∆z around the start z-position z. The scanning stage is moved down to the position zmin = z − ∆z/2. Backlash correction is applied by sending the stage further down than necessary, and raising it again to the given position [10]. In this way, focus positions are always reached from the same direction. As a result, mechanical tolerance in the cog-wheels is eliminated.

At t = 0 ms, the stage controller starts raising the stage to traverse the complete focus interval ∆z. During the stage movement through focus, successive images of the preparation are captured at 40 ms intervals (video rate). The focus score of each captured image is calculated. The image buffer is re-used for the next video frame, necessitating only two image memory buffers to be active at any time. One of the buffers is used for the focus score calculation of the previously captured image, while the other is used for capturing the next image. The calculation of the focus score should thus be performed within one video frame time. As soon as the stage has reached the end of the focus interval, timing is stopped at t = td ms. An estimate of the focus curve is thus obtained for the complete focus interval. The global optimum in the estimate of the focus curve represents the focal plane. Now, each z-position is related to the time at which the corresponding image was captured. When linear movement of the stage is assumed, the position at which the image at time ti is taken corresponds to

zi = (ti / td) ∆z + zmin   (5.3)

where td represents the travel duration, ∆z the focus interval, and zmin the start position (the position at t = 0 ms). Since the focus curve is parabolic around the focal plane [10, 11, 19], high focus precision can be achieved by quadratic interpolation. Assuming linear stage movement, z = vt + zmin, the focus curve around the focal plane can be approximated by

s(t) = c + bt + at²   (5.4)

The exact focus position is obtained by fitting a parabola through the detected optimum and its neighboring measurements. Consider the detected optimum s(to) = so at time t = to. The time axis may be redefined such that the detected optimum is at time t = 0. Then, the neighboring scores are given by (sn, tn) and (sp, tp), respectively. Solving for a, b and c gives

c = so ,   b = ( sptn² − sotn² + sotp² − sntp² ) / ( tn²tp − tntp² ) ,   a = ( sotn − sptn − sotp + sntp ) / ( tn²tp − tntp² )   (5.5)

The peak of the parabola, and thus the elapsed time to the focus position, is given by

tf = −b/(2a) + to = ( sotn² − sptn² − sotp² + sntp² ) / ( 2 ( sotn − sptn − sotp + sntp ) ) + to   (5.6)

The focal plane is at position

zf = (tf / td) ∆z + zmin   (5.7)

to which the stage is moved, taking the backlash correction into account.
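The sweep-and-interpolate procedure of (eq. 5.3)–(eq. 5.7) can be sketched as follows, assuming frames uniformly spaced in time and linear stage movement; with symmetric neighbours, the parabola peak of (eq. 5.5)–(eq. 5.6) reduces to the three-point expression used below.

    import numpy as np

    def focal_plane(scores, dz, z_min, frame_ms=40.0):
        # scores: focus scores of the n frames captured during the sweep; under
        # linear stage movement, frame i maps to time i*frame_ms and, via
        # eq. 5.3, to z_i = (t_i / t_d) * dz + z_min.
        s = np.asarray(scores, dtype=float)
        n = len(s)
        t_d = (n - 1) * frame_ms                    # sweep duration
        i = int(np.argmax(s))                       # global optimum of the focus curve
        t_f = i * frame_ms
        if 0 < i < n - 1:
            sn, so, sp = s[i - 1], s[i], s[i + 1]   # neighbours at t_n = -h, t_p = +h
            denom = sn + sp - 2.0 * so
            if denom != 0.0:
                # Eqs. 5.5-5.6 with symmetric neighbours reduce to this offset.
                t_f += 0.5 * frame_ms * (sn - sp) / denom
        return (t_f / t_d) * dz + z_min             # eq. 5.7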

5.1.3 Sampling the Focus Curve

The depth of field of an optical system is defined as the axial distance from the focal plane over which details can still be observed with satisfactory sharpness. The thickness of the slice which can be considered in focus is then given by [14, 26]

zd = λ / ( 2n ( 1 − √(1 − (NA/n)²) ) )   (5.8)

where n is the refractive index of the medium, λ the wavelength of the light used, and NA the numerical aperture of the objective. The focus curve is sampled at the Nyquist rate when measured at zd intervals [18]. The parabolic fitting ensures that the focus position is centered within thick specimens, i.e. specimens much larger than zd. Common video hardware captures frames at a fixed rate. Thus the sampling density of the focus curve can only be influenced by adjusting the stage velocity to travel zd µm per video frame time.

In order to calculate the focus score within video frame time on current sensors and computer systems, a simplification of the focus function (eq. 5.1) is considered. For biological preparations, details are distributed isotropically over the image. The response of the filter in one direction is adequate for determination of the focal plane. Further computation time can be saved by estimating the filter response from a fraction of the scan lines in the image. Then, the focus function is given by

F(σ) = (L/NM) Σx,y [ f(x, y) ∗ Gx(x, y, σ) ]² .   (5.9)

For our purpose, every sixth row (L = 6) is used. A recursive implementation of the Gaussian derivative filter is used [22], for which the computation time is independent of the value of σ. The calculation time is kept under 40 ms for all computer systems we used in the experiments, even when the system is running other tasks simultaneously. Comparison between the focus curve calculated in two dimensions for the whole image (eq. 5.1) and the response of (eq. 5.9) reveals only marginal differences for all experiments.
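A sketch combining (eq. 5.8) and (eq. 5.9), assuming NumPy/SciPy; the recursive Gaussian derivative of [22] is replaced here by SciPy's FIR approximation, which is adequate for illustration although its run time is not independent of σ.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def depth_of_field(NA, wavelength_um=0.53, n=1.0):
        # eq. 5.8; the defaults assume 530 nm light in air (n = 1). For
        # NA = 0.6 this gives z_d = 1.33 um, as listed in tab. 5.1.
        return wavelength_um / (2.0 * n * (1.0 - np.sqrt(1.0 - (NA / n)**2)))

    def fast_focus_score(f, sigma, L=6):
        # eq. 5.9: derivative in one direction only, on every L-th scan line;
        # the mean over the sampled pixels equals (L/NM) * sum(fx^2).
        fx = gaussian_filter1d(f[::L, :].astype(float), sigma, axis=1, order=1)
        return np.mean(fx**2)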

5.1.4 Large, Flat Preparations

For the acquisition of multiple aligned images from large, flat preparations, the variation in focus position is assumed to be small but noticeable at high magnification. Proper acquisition of adjacent images can be obtained by focusing a few fields. Within the preparation, the procedure starts by focusing the first field. Fields surrounding the focused field are captured, until the next field to capture is a given distance away from the initially focused field. The deviation from best focus is then corrected by focusing over a small interval. The preparation is scanned, keeping track of the focus position at fields further away than a given distance from the nearest of all the previously focused fields. The threshold distance for which focusing is skipped depends on the preparation flatness and magnification, and has to be optimized empirically for efficiency. Fields that have been skipped for focusing are positioned at the focus level of the nearest focused field. Small variations in focus position while scanning the preparation are thus corrected during acquisition.
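As an illustration of this strategy, a sketch follows; autofocus and capture are hypothetical callbacks standing in for the stage and camera control of the actual system, distances are expressed in field units, and the parameter defaults are illustrative only.

    import numpy as np

    def scan_preparation(fields, autofocus, capture, max_dist=3.0, small_dz=50.0):
        # fields: (x, y) stage positions in field units, ordered along the scan path;
        # autofocus(x, y, z0, dz) and capture(x, y, z) are hypothetical callbacks.
        x0, y0 = fields[0]
        z0 = autofocus(x0, y0, z0=None, dz=None)   # full-interval focus on the first field
        focused = [(x0, y0, z0)]
        capture(x0, y0, z0)
        for x, y in fields[1:]:
            # Nearest previously focused field determines whether to refocus.
            fx, fy, fz = min(focused, key=lambda f: np.hypot(x - f[0], y - f[1]))
            if np.hypot(x - fx, y - fy) > max_dist:
                fz = autofocus(x, y, z0=fz, dz=small_dz)  # refocus over a small interval
                focused.append((x, y, fz))
            capture(x, y, fz)   # skipped fields reuse the nearest focus level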

5.1.5 Preparation and Image Acquisition

The autofocus algorithm is intensively tested in the following applications: a. quantitative neuronal morphology, b. time-lapse experiments of cardiac myocyte dedifferentiation, c. immunohistochemical label detection in fixed tissue, d. C. Elegans GFP-VM screening, e. acquisition of smooth muscle cells, and f. immunocytochemical label detection in fixed cells. Each of these applications is described below. The software package SCIL Image version 1.4 [21] (TNO-TPD, Delft, The Netherlands) is used for image processing, extended with the autofocus algorithm and functions for automatic stage control and image capturing. All preparations are observed on Zeiss inverted microscopes (Carl Zeiss, Oberkochen, Germany), except for the immunohistochemical label detection, which is observed with a Zeiss Axioskop. The wavelength of the light used is 530 nm, unless stated differently. For automatic position control, the microscopes are equipped with a scanning stage and a MAC4000 or (comparable) MC2000 controller (Märzhäuser, Wetzlar, Germany). At power on, the stage is calibrated and an initial focus level is indicated manually. Backlash correction is determined empirically. For each application, the focus interval ∆z is determined by evaluating the variability in the z-position between focus events.

Quantitative Neuronal Morphology in Bright-field Mode

Morphological changes of neurons are automatically quantified as described in [12]. Briefly, PC12 cells were plated in poly-L-lysine (Sigma, St. Louis, MO) coated 12-well plates. In each well 5 × 10⁴ cells were seeded. After 24 hours the cells were fixed with 1% glutaraldehyde for 10 minutes. Then the cells were washed twice with distilled water. The plates were dried in an incubator.

The plates are examined in bright-field illumination mode; for details see tab. 5.1. The camera used is an MX5 (Adaptec, Eindhoven, The Netherlands) 780 × 576 video frame transfer CCD with pixel size 8.2 × 16.07 µm², operating at room temperature with auto gain turned off. Adjacent images are captured by an Indy R4600 132 MHz workstation (Silicon Graphics, Mountain View, CA), resulting in an 8 × 8 mosaic image for each well. Prior to the acquisition of the well, autofocusing at the center of the scan area is performed. The smallest details to focus are the neurites, which are about 3 pixels thick, yielding σ = 1.0 (eq. 5.2). The wavelength of the illumination is about 530 nm, resulting in a 23.4 µm depth of field (eq. 5.8). The effective stage velocity differs somewhat due to rounding to the controller's built-in speeds. Due to the low magnification, backlash correction is not necessary.

Cardiac Myocyte Dedifferentiation in Phase Contrast Mode

Cardiac myocytes were isolated from adult rat hearts (ca. 250 g) by collagenase perfusion as described in [3]. The cell suspension containing cardiomyocytes and fibroblasts was seeded on laminin coated plastic petri dishes, supplied with M199 and incubated for one hour. Thereafter, unattached and/or dead cells were washed away by rinsing once with M199. The petri dishes were filled with M199 + 20% fetal bovine serum and incubated at 37 °C.

The petri dishes are examined in phase contrast mode; for details see tab. 5.1. During the experiment, the ambient temperature is maintained at 37 °C. Time-lapse recordings (15 hours) are made in 6 manually selected fields, one in each of the 6 petri dishes. The scanning stage visits the selected fields at 120 second intervals. Fields are captured using a CCD camera (TM-765E, Pulnix, Alzenau, Germany) and added to JPEG compressed digital movies (Indy workstation with Cosmo compressor card, SGI, Mountain View, CA), one for each selected field. Autofocusing is applied once per cycle, successively refocusing all the fields in 6 cycles. The smallest details to focus are the cell borders.

Immunohistochemical Label Detection in Bright-field Mode

Sections of the amygdala of mice injected with a toxic compound were cut at 15 µm thickness through the injection site. They were subsequently immunostained for the presence of the antigen, using a polyclonal antibody (44-136, Quality Control Biochemicals Inc., Hopkinton, MA), and visualized using the chromogen DAB.

Four microscope slides (40 brain slices) at once are mounted on the scanning stage and observed in bright-field illumination mode, see tab. 5.1. Adjacent images are captured (Meteor/RGB frame-grabber, Matrox, Dorval, Quebec, Canada, in an Optiplex GXi PC with Pentium 200 MHz MMX, Dell, Round Rock, TX) by use of an MX5 CCD camera (Adaptec, Eindhoven, The Netherlands). As a result, mosaics of complete brain slices are stored on disk. Prior to acquisition, autofocusing at approximately the center of the brain slice is performed, the smallest details to focus being tissue structures. Due to the low magnification, backlash correction is not necessary.

C. Elegans GFP-VM Screening in Fluorescence Mode

Individual C. Elegans worms transgenic for GFP expressing vulval muscles (GFP-VM) were selected from stock, and one young adult hermaphrodite (P0) was placed in each of the 60 center wells of a 96-well plate (Costar, Acton, MA) filled with natural growth medium, and incubated for five days at 25 °C to allow the F1 progeny to reach the adult stage. Before image acquisition, fluorescent beads (F-8839, Molecular Probes, Eugene, OR) are added to the wells as background markers for the focus algorithm.

The well plate is examined in fluorescence mode, see tab. 5.1. A FITC filter (B, Carl Zeiss, Oberkochen, Germany) in combination with a 100 W Xenophot lamp is used to excite the GFP. Images are captured (O2 R5000 180 MHz workstation, Silicon Graphics, Mountain View, CA) using an intensified CCD camera (IC-200, PTI, Monmouth Junction, NJ). Each of the selected wells is scanned and the adjacent images, completely covering the well, are stored on disk. The variability in the z-position between the centers of the wells turned out to be within 250 µm, which is taken as the focus interval for initial focusing. After autofocusing on the well center, the deviation from best focus while scanning the well is corrected over one-fifth of the initial focus interval. Focusing all fields further than 3 fields away from a focused field was sufficient to keep track of the focal plane. The diameter of the fluorescent spheres is 15 µm (30 pixels), which is much larger than zd. Since the spheres are homogeneously stained, the smallest detail to consider in the z-direction is a cylindrically shaped slice through the spheres, where the cylinder height is determined by the horizontal resolution. Therefore, the stage velocity is reduced to approximately one third of the sphere diameter during focusing.

Acquisition of Smooth Muscle Cells in Phase Contrast Mode

Smooth muscle cells were enzymatically isolated from the circular muscle layer of guinea-pig ileum by a procedure adapted from [1]. Dispersed cells were suspended in a HEPES buffered saline containing 1 mM CaCl2. Aliquots (200 µl) of the cell suspension were distributed over test tubes and maintained at 37 °C for 30 minutes. Then, 800 µl of medium containing the compound to be tested was added and the cells were incubated for 30 seconds. The reaction was stopped by the addition of 1% glutaraldehyde.

A drop of each cell suspension is brought onto a microscope glass slide and observed in phase contrast mode (see tab. 5.1). A region containing sufficient cells is selected manually and adjacent images are captured (Indy R4600 132 MHz workstation, Silicon Graphics, Mountain View, CA) using an MX5 CCD camera (Adaptec, Eindhoven, The Netherlands). Autofocusing is performed at approximately the center of the selected area, the smallest details being the elongated cells.

Immunocytochemical Label Detection in Fluorescence Mode

Human fibroblasts were seeded in a 96-well plate (Costar, Acton, MA) at 7000 cells per well, in 2% FBS/Optimem. Cells were immunostained according to [6] with the primary antibody rabbit anti-human NF-κB (p65) (Santa Cruz Biotechnology, Santa Cruz, CA) and a secondary Cy3-labeled sheep anti-rabbit antibody (Jackson, West Grove, PA).

Table 5.1: Summary of the experimental setup and parameter settings for the various experiments. The value for sigma (eq. 5.2) is given together with the smallest structure (d) in pixels. The focus interval ∆z and depth of field zd are given in µm. The effective velocity used during focusing is given by veff in µm per 40 ms.

  application             mode      obj (NA)       σ (d)      ∆z     zd          veff
  Quant neuronal morph    bright    5× (0.15)      1.0 (3)    500    23.4        24.7
  Cardiac myocyte dediff  phase     32× (0.4)      1.0 (4)    100    3.2         2.5
  Immunohist label det    bright    2.5× (0.075)   1.0 (3)    1,000  94          98.7
  C. Elegans screening    fluoresc  40× (0.6)      8.5 (30)   50     1.33        4.94
  Acq smooth muscle       phase     10× (0.3)      1.0 (4)    500    5.75        4.94
  Immunocyt label det     fluoresc  40× (0.6)      8.5 (30)   250    1.13/1.50   4.94

Further, nuclear counterstaining with Hoechst 33342 (Molecular Probes, Eugene, OR) was applied. Well plates are examined in fluorescence mode, see tab. 5.1. A DAPI-FITC-TRITC filter (XF66, Omega Optical, Brattleboro, VT) in combination with a 100 W Xenophot lamp is used to excite the cells (emission of the nuclei at 450 nm, of the immuno signal at 600 nm). Adjacent images are captured (O2 R5000 180 MHz workstation, Silicon Graphics, Mountain View, CA) using an intensified CCD camera (IC-200, PTI, Monmouth Junction, NJ). Autofocusing is performed at approximately the center of the scan area, the smallest details being the nuclei. Cell thickness is about 5–15 µm, much larger than zd. Therefore, during focusing, the stage velocity is reduced to approximate the cell thickness.

5.1.6 Evaluation of Performance for High NA

The performance of the autofocus algorithm is evaluated objectively by comparing its random focus error with that of human observers. For this purpose, 2 µm epon sections of dog left ventricle cardiac myocytes, stained with periodic acid Schiff and toluidine blue, are observed with a Zeiss Axioplan. A high-NA objective (40×, NA 1.4, oil immersion) is used, for which the depth of field is zd = 0.36 µm (eq. 5.8). Autofocusing is considered non-trivial under these circumstances. Unfocused, arbitrarily selected fields (20 in total) are visited and manually focused by two independent experienced observers. Focus positions are recorded for both observers. Similarly, the focus positions found by the autofocus algorithm are recorded (σ = 1.0, backlash correction 15 µm, ∆z = 25 µm). Comparison of the random error between the observers, and between observer and autofocus, gives an objective evaluation of autofocus performance.


Figure 5.1: Focus function as measured for the smooth muscle cells in phase contrast mode. The focus score (arbitrary units) of one representative field is plotted as a function of the z-position. The peaks are caused by phase transition effects; the focal plane for the cell bodies is at −75 µm.

5.2 Results

5.2.1 Autofocus Performance Evaluation

The focus algorithm was not able to focus accurately on the smooth muscle cells. Figure 5.1 shows a representative focus curve measured with σ = 1.0. Measurement of the focus curve at other scales resulted in similar curves. The peaks are caused by phase transitions occurring when scanning through focus. For different focus positions, bright halos appear around the cells due to light diffraction [15]. The area of the cell bodies is small compared to the size of the halos, and thus the relevant image information content is too low. These circumstances caused the failure of the focus algorithm to accurately focus on the cell bodies.

For the other applications, fig. 5.2 shows the average focus curves, not considering complete failures. The variation in focus score is mainly due to the different number of cells or amount of tissue present in each field. For the time lapse of the cardiac myocytes (fig. 5.2b), the variation in focus score is caused by the dedifferentiation of the cardiac myocytes over time. The variation in focus score for the immunohistochemical label detection (fig. 5.2c) is caused by contrast differences between slices. Further, for the quantitative neuronal morphology (fig. 5.2a), the measured focus curve with the lowest maximum score (peak at 0.004) is for a field containing only some dead cells. Note the local maximum beneath focus, caused by a 180° phase shift in the point spread function of the optical system [25].

Table 5.2 shows a summary of autofocus performance. All fields were accurately focused according to an experienced observer, except for a few complete failures. Focus could not be determined on empty fields, as is the case for 14 failures in the C. Elegans GFP-VM screening.

Table 5.2: Summary of the results for the various experiments. The total number of focus events is denoted by # events. The time needed for focusing is given by t_foc in seconds, and as a percentage of the total acquisition time t_acq.

application            | mode     | # events | # fail (% correct) | t_foc [s] | t_foc/t_acq (t_acq)
-----------------------|----------|----------|--------------------|-----------|--------------------
Quant neuronal morph   | bright   | 180      | 0 (100%)           | 1.7       | 7.5% (4.5 min)
Cardiac myocyte dediff | phase    | 75       | 0 (100%)           | 2.8       | –
Immunohist label det   | bright   | 100      | 2 (98%)            | 1.5       | 7% (3 min)
C. Elegans screening   | fluoresc | 1800     | 14 (> 99%)         | 1.1       | 12% (4.5 hours)
Immunocyt label det    | fluoresc | 300      | 2 (> 99%)          | 2.8       | 14% (20 min)

failed on 2 fields, which did not contain enough contrast for focusing. Further, for 2 fields in the immuno signal of the immunocytochemical label detection, the camera was completely saturated (bloomed) due to preparation artifacts, causing the autofocus algorithm to fail. For the C. Elegans GFP-VM screening, total acquisition time for a 96-well plate was 4.5 hours for 28,000 images, which is reasonable given the time needed for preparation.

In summary, failure is caused by a shortage of relevant image information content. The proposed algorithm was completely successful in determining the correct focus position for the thoroughly stained preparations of the quantitative neuronal morphology, even for fields containing only a few dead cells. Further, complete success was achieved for the cardiac myocyte dedifferentiation. Despite the morphological changes in image content during the experiment, none of the time lapse movies was out of focus at any time. A high success rate was obtained for the immunohistochemical label detection, failing for 2 fields containing not enough contrast. For the fluorescence applications, the images were highly degraded by the presence of random noise (SNR ≤ 10 dB) due to fluorescent bacteria (C. Elegans screening), camera noise, and structural noise caused by earth loops in combination with the extremely sensitive CCD camera. Nevertheless, a high success rate was achieved.

5.2.2 Evaluation of Performance for High NA

Comparison between observer 1 and observer 2 resulted in an average error of 0.070 µm, whereas autofocus versus observer 1 resulted in an error of 0.423 µm. Hence, the autofocus method as implemented is slightly biased. The root mean squared error was 0.477 µm between observers, and 0.494 µm between autofocus and observer, both of which are in the range of the depth of field for the objective used. The maximum error between observers was 1.27 µm, and for autofocus versus observer 1.12 µm, both within the slice thickness of 2 µm. Concluding, even for high NA objectives, autofocus performance is comparable to that of experienced observers.
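The error statistics used in this comparison (bias, root mean squared error, and maximum error over paired focus positions) can be computed as below. This is a sketch of ours with illustrative names, not code from the thesis.

```python
import numpy as np

# z_obs and z_auto would hold the focus positions (in µm) recorded for the
# same 20 fields by, e.g., an observer and the autofocus algorithm.
def focus_error_stats(z_obs, z_auto):
    d = np.asarray(z_obs) - np.asarray(z_auto)
    bias = d.mean()                    # average (signed) error
    rms = np.sqrt((d ** 2).mean())     # root mean squared error
    return bias, rms, np.abs(d).max()  # and the maximum absolute error
```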


Figure 5.2: Average focus score (arbitrary units) as a function of the z-position, measured for different applications. a. Quantitative neuronal morphology. b. Cardiac myocyte dedifferentiation. c. Immunohistochemical label detection. d. C. Elegans GFP-VM screening. e. Immunocytochemical label detection, nuclei, and f. immuno signal, respectively. The measured focus curves indicated by “max” and “min” represent the focus events resulting in the highest and lowest maximum score, respectively, indicating variability and the influence of noise on the estimate of the focus score.

5.2.3 Comparison of Performance with Small Derivative Filters

In order to evaluate the effect of the scale σ on the estimate of the focus score, experiments with σ = 0.5 are performed. For the quantitative neuronal morphology, accurate focusing with σ = 0.5 was not possible for 1 out of 24 fields. In this case, the algorithm focused on the reversed phase contrast image. Application of the small scale in focusing of the cardiac myocyte dedifferentiation failed whenever fungal contamination at the medium surface occurred, which was taken as the focal plane. Taking σ = 1.0 solved this problem, that is, by focusing persistently on the myocytes. Focusing with σ = 0.5 on the immunohistochemical label detection resulted in focusing on dust particles at the glass surface for 5 out of 24 fields. For the fluorescence applications, accurate focusing was not possible with σ = 0.5, due to the small signal to noise ratio (SNR ≤ 10 dB). Experiments with σ = 0.75 resulted in inaccurate focusing for 18 out of 30 fields for the C. Elegans GFP-VM screening. Further, the algorithm was not able to focus accurately on 13 out of 30 fields for the nuclei in the immunocytochemical label detection, and failed for 17 out of 30 fields on the immuno signal. Repeating these experiments with the values of σ as given in tab. 5.1 resulted in accurate focus for all fields.

5.2.4 General Observations

The effect of the scale σ is robustness against noise and artifacts. A larger scale resulted in robustness against phase reversion (quantitative neuronal morphology), fungal contamination at the medium surface (cardiac myocyte dedifferentiation), dust on the glass surface (immunohistochemical label detection), and noise (the fluorescence applications). The performance of small differential filters, as used in [2, 5, 19, 16], is poor, given the number of inaccurately focused images for σ = 0.5 or σ = 0.75.

For the different applications, the chosen focus interval was effectively used for about 30%, i.e., the top of the measured focus curve was commonly within one-third of the focus interval centered at the origin. The focus interval should not be taken too narrow, to ensure that the focal plane is inside the interval regardless of the manual placement of the preparations. An effective use of 30% of the interval for 95% of the focus events seems an acceptable rule of thumb.

The time needed for the autofocus algorithm varied from 1.5 up to 2.8 seconds for current sensors and computer systems, which is in the same time range as experienced observers. Focus time is completely determined by the depth of field and the video frame time, which can both be considered as given quantities, and by the size of the focus interval. Therefore, further reduction of focus time can only be achieved by a smaller focus interval, on the condition that the variability in preparation position is limited. When positional variability is low or well known, the focus interval ∆z can be reduced to exactly fit the variability. For the applications given, focus time can be reduced by up to a factor of 3 in this way.

Failure of the autofocus algorithm due to a shortage of image content can be well predicted. If the focal plane is inside the focus interval, there should be a global maximum in the estimate of the focus curve. Comparing the maximum focus score s_o with the highest of the focus scores at the ends of the focus interval, s_e = max(s(0), s(t_d)), which are certainly not in focus, determines the signal content with respect to noise. When the maximum score does not significantly exceed the focus scores at the ends of the interval, that is, when (s_o − s_e)/s_e < α, the found focus position should be rejected. In this case, focusing can better be based on a neighboring field. For the reported results, a threshold of α = 10% safely predicts all failures.
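As an illustration, this rejection rule takes only a few lines of code. The sketch below is ours, not the original implementation; function and variable names are illustrative.

```python
# Minimal sketch of the failure-prediction rule above.
def accept_focus(scores, z_positions, alpha=0.10):
    """Return the focus z-position, or None when the peak does not rise
    significantly above the focus scores at the interval ends."""
    scores = list(scores)
    s_o = max(scores)                      # global maximum of the focus curve
    s_e = max(scores[0], scores[-1])       # interval ends, certainly out of focus
    if (s_o - s_e) / s_e < alpha:          # shortage of image content: reject,
        return None                        # better to focus on a neighboring field
    return z_positions[scores.index(s_o)]
```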

5.3 Discussion

The success of automatic morphological screenings stands or falls with the accuracy of the autofocus procedure. Although focusing is trivial for a trained observer, automatic systems often fail to focus images in different microscopic modalities. Autofocus procedures are often optimized for one specific preparation, visualized in one microscopic imaging mode. This report presents a method for autofocusing in multi-mode light microscopy. The objective was to develop a focus algorithm which is generally applicable in microscopy, and robust against confounding factors common in microscopy.

Defocused images inherently have less information content than well focused images [5, 8]. Focus functions based on this criterion, such as the Gaussian derivative filter used in the presented method, by definition respond to the best focus position with a local maximum. Reliable focusing, without taking a priori information into account, is possible whenever the best focus response becomes the global maximum. This criterion is fulfilled when the information content due to the signal is higher than that of noise and of the optical artifacts inherent to some modes of microscopic image formation. Sampling of the focus curve at the Nyquist rate over the complete focal range guarantees detection of the global maximum. Consequently, the present autofocus method is generally applicable in any microscopic mode, whenever the amount of detail in the preparation is of larger influence than artifacts and noise.

The effectiveness of the proposed method has been evaluated experimentally for the following specimens: neuronal cells in bright-field, cardiac myocytes in phase contrast, neuronal tissue sections in bright-field, fluorescent beads and GFP-VM expressing C. Elegans nematodes, smooth muscle cells in phase contrast, and immunocytochemically fluorescent labeled fibroblasts. The method was not able to focus the smooth muscle cells accurately, due to a lack of relevant image information content. For the other experiments, 2830 fields were focused with an overall success rate of 99.4%, and the remaining 0.6% of failures could be safely predicted. For each new specimen and microscope set-up, it suffices to set the parameters for the scale σ, the focus interval ∆z, and the focus speed, which can be derived from the size of the structures in the specimen, the light used, and the objective NA. In addition, for the scanning of large preparations, the distance after which focus has to be corrected and the fraction of the focus interval to correct for should be set.

In contrast to other autofocus methods, the proposed algorithm is robust against confounding factors like: a. noise, b. optical artifacts inherent to a particular mode of microscopic image formation, such as halos in phase-contrast microscopy, and c. artifacts such as dust and fungal contamination, lying at a different focus level than the preparation. Focusing is performed within 2 or 3 seconds, which is in the same time range as trained observers. Moreover, even for high NA objectives, autofocus accuracy is comparable to that of experienced observers. For high magnification imaging of thick specimens, the method can easily be combined with focal plane reconstruction techniques [23, 24]. No constraints have been imposed on the focus curve other than that the global maximum indicates the focal plane. Hence, the method is generally applicable in light microscopy. The reliability of the proposed autofocus method allows for unattended operation on a large scale.

Bibliography

[1] K. N. Bitar and G. M. Makhlouf. Receptors on smooth muscle cells: Characterization by contraction and specific antagonists. J. Physiology, 242:G400–407, 1982.

[2] F. R. Boddeke, L. J. van Vliet, H. Netten, and I. T. Young. Autofocusing in microscopy based on the OTF and sampling. Bioimaging, 2:193–203, 1994.

[3] L. Ver Donck, P. J. Pauwels, G. Vandeplassche, and M. Borgers. Isolated rat cardiac myocytes as an experimental model to study calcium overload: the effect of calcium-entry blockers. Life Sci., 38:765–772, 1986.

[4] L. Firestone, K. Cook, K. Culp, N. Talsania, and K. Preston Jr. Comparison of autofocus methods for automated microscopy. Cytometry, 12:195–206, 1991.

[5] F. C. A. Groen, I. T. Young, and G. Ligthart. A comparison of different focus functions for use in autofocus algorithms. Cytometry, 6:81–91, 1985.

[6] T. Henkel, U. Zabel, K. van Zee, J. M. Müller, E. Fanning, and P. A. Baeuerle. Intramolecular masking of the nuclear location signal and dimerization domain in the precursor for the p50 NF-κB subunit. Cell, 68:1121–1133, 1992.

[7] E. T. Johnson and L. J. Goforth. Metaphase spread detection and focus using closed circuit television. J. Histochem. Cytochem., 22:536–545, 1974.

[8] E. Krotkov. Focusing. Int. J. Computer Vision, 1:223–237, 1987.

[9] S. J. Lockett, K. Jacobson, and B. Herman. Application of 3D digital deconvolution to optically sectioned images for improving the automatic analysis of fluorescent-labeled tumor specimens. Proc. SPIE, 1660:130–139, 1992.

[10] D. C. Mason and D. K. Green. Automatic focusing of a computer-controlled microscope. IEEE Trans. Biomed. Eng., 22:312–317, 1975.

[11] M. L. Mendelsohn and B. H. Mayall. Computer-oriented analysis of human chromosomes-III: Focus. Comput. Biol. Med., 2:137–150, 1971.

[12] R. Nuydens, C. Heers, A. Chadarevian, M. de Jong, R. Nuyens, F. Cornelissen, and H. Geerts. Sodium butyrate induces aberrant tau phosphorylation and programmed cell death in human neuroblastoma cells. Brain Res., 688:86–94, 1995.

[13] A. Papoulis. The Fourier Integral and Its Applications. McGraw-Hill, New York, 1960.

[14] M. Pluta. Advanced Light Microscopy, volume 1. Elsevier, Amsterdam, 1988.

[15] M. Pluta. Advanced Light Microscopy, volume 2. Elsevier, Amsterdam, 1989.

[16] J. H. Price and D. A. Gough. Comparison of phase-contrast and fluorescence digital autofocus for scanning microscopy. Cytometry, 16:283–297, 1994.

[17] C. Steger. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Machine Intell., 20:113–125, 1998.

[18] N. Streibl. Depth transfer by an imaging system. Opt. Acta, 31:1233–1241, 1984.

[19] M. Subbarao and J. K. Tyan. Selecting the optimal focus measure for autofocusing and depth-from-focus. IEEE Trans. Pattern Anal. Machine Intell., 20:864–870, 1998.

[20] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[21] R. van Balen, D. Koelma, T. K. ten Kate, B. Mosterd, and A. W. M. Smeulders. ScilImage: a multi-layered environment for use and development of image processing software. In H. I. Christensen and J. L. Crowley, editors, Experimental Environments for Computer Vision & Image Processing, pages 107–126. World Scientific Publishing, 1994.

[22] L. J. van Vliet, I. T. Young, and P. W. Verbeek. Recursive Gaussian derivative filters. In Proceedings ICPR ’98, pages 509–514. IEEE Computer Society Press, 1998.

[23] H. S. Wu, J. Barba, and J. Gil. A focusing algorithm for high magnification cell imaging. J. Microscopy, 184:133–142, 1996.

[24] T. T. E. Yeo, S. H. Ong, Jayasooriah, and R. Sinniah. Autofocusing for tissue microscopy. Image Vision Comput., 11:629–639, 1993.

[25] I. T. Young, J. J. Gerbrands, and L. J. van Vliet. Fundamental Image Processing. Delft University of Technology, Delft, 1995.

[26] I. T. Young, R. Zagers, L. J. van Vliet, J. Mullikin, F. Boddeke, and H. Netten. Depth-of-focus in microscopy. In Proceedings 8th SCIA, pages 493–498, 1993.

Chapter 6

Segmentation of Tissue Architecture by Distance Graph Matching

appeared in Cytometry, vol. 35, pp. 11–22, 1999.

“Cell and tissue, shell and bone, leaf and flower, are so many portions of matter, and it is in obedience to the laws of physics that their particles have been moved, moulded and conformed. They are no exceptions to the rule that God always geometrizes. Their problems of form are in the first instance mathematical problems, their problems of growth are essentially physical problems, and the morphologist is, ipso facto, a student of physical science.” – D’Arcy W. Thompson.

Quantitative morphological analysis of fixed tissue plays an increasingly important role in the study of biological and pathological processes. Specific detection issues can be approached by classical staining methods, enzyme histochemical analysis, or immunohistochemical processes. The tissue can be characterized not only by the properties of individual cells, such as staining intensity or expression of specific proteins, but also by the geometrical arrangement of the cells [4, 6, 10]. Interesting tissue parameters are derived from the topographical relationship between cells. For instance, topographical analysis in tumor grading can significantly improve routine diagnosis [3, 8, 16]. Studies of growing cancer cell lines have revealed a non-random distribution of cells [5, 14]. Partitioning of epithelial tissue by cell topography is used for quantitative evaluations [17]. We propose a new method for the partitioning of tissues. As an example, the structural integrity of hippocampal tissue after ischemia will be examined.


As a first step, tissue parts of interest have to be segmented into cell clusters. Segmentation of cell clusters can be based on the distances between the centers of gravity of the cells. The recognition of tissue architecture is then reduced to determining the borders of point patterns. The problem, stated in this way, can be solved by the application of neighbor graphs and their partitioning.

The Voronoï graph is often applied as a modeling tool for point patterns [3, 5, 13, 16]. The Voronoï graph is defined by polygons Z(p), where each polygon defines the area for which all points are closer to marker p than to any other marker [22]. A polygon Z(p) is called the zone of (geometrical) influence of p. The neighboring markers of p are defined by the set of all markers for which the zone of influence touches that of p. Such a tessellation of the plane depends on the spatial distribution of the cell markers. Cluster membership is determined by the evaluation of geometrical feature measurements on the zones of influence [1]. Rodenacker et al. [17] used the Voronoï graph for partitioning epithelial tissue. Segmentation was obtained by propagating the neighbors from the basal layer of the epithelial tissue to the surface. Borders between basal, intermediate, and superficial areas were determined by examining the occupied surface of propagation. In this way, every third of the total area of the Voronoï graph was assigned to one of the regions, yielding three regions with approximately similar areas in terms of zones of influence. As discussed elsewhere [5, 12], the Voronoï graph is sensitive to detection errors. Removal or insertion of one object will change the characteristics of the Voronoï graph. A second drawback is that the Voronoï graph is ill-defined at cluster borders. This makes the Voronoï graph unsuited for robust segmentation of tissue architecture.

Another option for the recognition of point patterns is a modification of the Voronoï graph: the k-nearest neighbor graph [11, 15, 19]. The neighbors of a point p are ordered as the nearest, second-nearest, and up to kth-nearest neighbor of p. The k-nearest neighbor graph is defined by connecting each point to its k nearest neighboring points [22]. The strength of each connection is weighted by the distance between the points. Similarity between k-nearest neighbor graphs is determined by comparing the graphs extracted from detected point patterns with prototype k-nearest neighbor graphs. In Schwarz and Exner [19], the distance distribution to one of the nearest neighbors was used for the separation of clusters from a background of randomly disposed points. The main drawback is that not all patterns can be discriminated by considering only one specific k-nearest neighbor distance. Lavine et al. [11] used sequences of sorted interpoint distances extracted from noisy point images to match the image with one of a set of prototype patterns. Similarity between prototype and point set is based on a rankwise comparison. From the two sorted interpoint distance vectors, the corresponding (relative) difference vector is calculated. The number of components exceeding a given threshold is used for discrimination between patterns. A major disadvantage of the rankwise comparison is that all components have to be detected. When the nearest neighbor is missed in the detection, the first one in rank is compared with the second one. Thus, failure to detect one cell results in poor similarity.
Automatic segmentation of tissue architecture is difficult because biological variability and tissue preparation have a major influence on the tissue at hand. The detection and classification of individual cells in the tissue is prone to error. Although most authors [3, 5, 12] were aware of the lack of robustness in the quantification of tissue architecture, little effort was made to incorporate the uncertainty of cell detection in tissue architecture quantification methods. Lavine et al. [11] showed that the k-nearest neighbor graph is well-suited for point pattern recognition under spatial distortions, but the method used is not able to anticipate cell detection errors.

In this chapter we present a robust method for tissue architecture segmentation, based on the k-nearest neighbor graph. A sequence comparison algorithm is used to allow for missing or extra detected cells in the detected point set. Uncertainty in cell classification is incorporated into the matching process. Experiments show that the robustness of the method presented is superior to that of existing methods. The method is demonstrated by segmentation of the CA region in rat hippocampi, where the structural integrity of the CA1 cell layer is affected by ischemia. The correlation between manual scoring and automatic analysis of CA1 preservation is shown to be excellent.

6.1 Materials and Methods

6.1.1 Hippocampal Tissue Preparation

Rat brains were fixed by intracardiac perfusion with diluted Karnovsky's fixative (2% formaldehyde, 2.5% glutaraldehyde in Sörensen's phosphate buffer; pH 7.4). They were immersed overnight in the same fixative. Coronal vibratome sections of the dorsal hippocampus were prepared stereotaxically 3.6 mm caudally to the bregma (Vibratome 1000, TPI, St. Louis, MO). Slices (200 µm) were postfixed with 2% osmium tetroxide, dehydrated in a graded ethanol series, and routinely embedded in Epon. Epon sections were cut at 2 µm and stained with toluidine blue.

6.1.2 Image Acquisition and Software

Images were captured by a CCD camera (MX5, Adimec, Eindhoven, The Netherlands), which is a 780 × 576 video frame transfer CCD with a pixel size of 8.2 × 16.07 µm², operating at room temperature with auto gain turned off. The camera was mounted on top of an Axioskop in bright-field illumination mode (Carl Zeiss, Oberkochen, Germany). The microscope was equipped with a scanning stage for automatic position control (stage and MC2000 controller, Märzhäuser, Wetzlar, Germany). The scanning stage was calibrated for a 10× magnification, and adjacent 512 × 512 images were captured to ensure that complete hippocampi were scanned. Typical composite image sizes were 6144 × 4096 pixels, or 4.94 × 3.30 mm². For image processing, the software package SCIL-Image version 1.4 (TNO-TPD, Delft, The Netherlands) was used on an O2 workstation (SGI, Mountain View, CA). The package was extended with the distance graph matching algorithm.

Figure 6.1: Example of a k-nearest neighbor graph. The nodes represent cells in tissue, while the edges represent their relation. The relations in this graph are given by the two nearest neighboring cells, and edges are weighted by the distance between the cells.

6.1.3 K-Nearest Neighbor Graph

Consider an image of a tissue containing cells. Detection of the cells in the image will result in m markers at possible cell locations. Let V be the set of m detected cell markers, V = {v_1, v_2, . . . , v_m}. The elements of V are called vertices or nodes. A graph G(V, E) (fig. 6.1) defines how elements of V are related to one another. The relation between the vertices is defined by the set of edges E, in which the elements e_ij connect the vertices v_i to v_j. A weighted graph is defined by the graph G(V, E) in which a value is assigned to each edge e_ij.

The k-nearest neighbor graph of a node v is defined as the subset of the k vertices closest to v. The edges between v and the neighboring vertices are weighted by the Euclidian distance, or N_v^k = {d_1, d_2, . . . , d_k | d_i = dist(v, v_i), d_i < d_{i+1}}. Taking k = 1 for all v ∈ V results in the nearest neighbor graph, in which each cell is connected to its closest neighbor. The average edge length in the k-nearest neighbor graph gives a measure of the scale of the pattern of cells. Division of all distances d_i in a k-nearest neighbor graph by the average of all distances in the graph, d̄, normalizes the graph for scale, i.e., d̃_i = d_i/d̄.
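A compact implementation of the distance sets N_v^k and the scale normalization might look as follows; this is our sketch, assuming the detected cell markers are given as an array of 2D coordinates.

```python
import numpy as np

def knn_distance_sets(markers, k):
    """For an (m, 2) array of cell markers, return an (m, k) array holding the
    sorted distances d_1 < ... < d_k of each node to its k nearest neighbors,
    normalized by the average distance d-bar in the graph."""
    diff = markers[:, None, :] - markers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # all pairwise Euclidian distances
    np.fill_diagonal(dist, np.inf)             # a node is not its own neighbor
    knn = np.sort(dist, axis=1)[:, :k]         # N_v^k for every node v
    return knn / knn.mean()                    # scale normalization: d_i / d-bar
```

The prototype set P of the next section is then simply the row of this array belonging to one manually selected, typical pattern cell.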

6.1.4 Distance Graph Matching

Point patterns of interest were extracted from the k-nearest neighbor graph. As an example, consider fig. 6.2. A regularly structured tissue was assumed, consisting of cells regularly distributed over the tissue. Such a point pattern reveals an equally spaced layout everywhere within the tissue borders. The surrounding of each cell belonging to the pattern can be modeled by the neighborhood of one single cell (fig. 6.2). The k-nearest neighbor graph of a typical pattern cell gives a characterization of the point pattern of interest. After selection of a typical cell, the pattern is given by a prototype k-nearest neighbor graph, with distance set P = {p_1, p_2, . . . , p_k}, where p_i denotes the prototype distances. Acceptance or rejection of a detected object as belonging to the cell cluster of interest is based on comparison of the observed k-nearest neighbor distances N_v^k to the prototype defined by the characteristic distances to the neighbors in P.

Figure 6.2: Extraction of tissue architecture. A typical relationship around a cell is obtained from an example of the tissue of interest (a). The prototype k-nearest neighbor graph is derived from the distances to cells (b). All prototypes shown are considered equal, to fit deformed tissue parts. Further freedom is given by a certain elasticity of the edges in the prototype graph. Extraction of the tissue architecture proceeds by fitting the prototype graph on each cell and its neighborhood in the tissue (c). Within the similar tissue parts, the graph will fit. Outside these regions, matching is limited to only one or two edges. In order to safeguard against cell detection errors, not all edges in the prototype have to fit the cellular neighborhood.

6.1.5 Distance Graph Comparison

The difference between observation and prototype set is expressed by the replacements necessary to match the prototype with the observation. This is referred to as the dissimilarity between sets [18]. For example, consider for simplicity the discrete observed set {3, 10, 11, 15, 20, 20, 21, 25} and prototype {5, 5, 10, 10, 20, 20}. When disregarding the last distances in the observation (21, 25), two substitutions (3 → 5, 11 → 10), one insertion (5), and one deletion (15) transform the observed distance set into the prototype. So there are four modifications between prototype and observation. The extra distances at the end of the observed set are necessary for expanding the comparison when elements are deleted in the beginning of the set. Without these extra elements, the deletion of one item at the beginning of the set implies the addition of an item at the end of the set. There is no need for such an addition when there is a cell at the correct distance. Therefore, the number of elements l in the observation should be larger than the prototype length k to allow for expansion in the comparison.

A cost is assigned to each type of replacement. Let c_i be the cost for insertion, c_d the cost for deletion, c_s the cost for substitution, and c_m the cost for matching. In the example, 11 is closer to 10 than 3 is to 5, which can be reflected in their respective matching costs. The minimum total cost t necessary to transform the observed set into the prototype gives the similarity between the sets. The minimum cost is obtained by using a string matching algorithm [18] (see Appendix). The lowest possible value for the cost t is obtained when both sets are equal. The number of replacements is zero, and thus the cost is zero. An upper bound for the cost necessary to match two sets is obtained when all elements are replaced. In this case, either all elements are inserted at the beginning of the set, or all elements are substituted, depending on the respective costs. The upper bound is then given by t_upper = k min(c_i, c_s). Normalization of the minimum total cost gives a correspondence measure, indicating how well the observed pattern matches the prototype, i.e.,

\[ \mathcal{C} = \frac{t_{\mathrm{upper}} - t}{t_{\mathrm{upper}}} \times 100\%. \tag{6.1} \]

in P , the cost tbackgr related to matching P with B is less than the upper bound for the minimum cost. Then, discrimination between the two patterns is enhanced by normalizing the correspondence to the cost given by matching P with background B, or

\[ \mathcal{C}' = \frac{t_{\mathrm{backgr}} - t}{t_{\mathrm{backgr}}} \times 100\%. \tag{6.2} \]

Note that C′ can be negative for patterns which correspond neither to the foreground prototype nor to the background prototype. The extension to multiclass problems can be made by considering the prototype P for the class of interest, and prototypes B_1, B_2, . . . , B_n for the remaining classes. Matching P with each of the prototypes B_i gives the correspondences between the pattern of interest and the other patterns. The pattern B_i which is most similar to P results in the lowest matching cost, which should be used for normalization.
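The string matching itself (see the Appendix referred to above) is an edit-distance computation. The sketch below is our reading of it, not the original code: trailing observed distances are free, so the prototype is aligned against a prefix of the observation; the default costs simply count edits, reproducing the worked example of section 6.1.5.

```python
def match_cost(observed, prototype,
               c_i=lambda p: 1.0,          # insertion cost for a missing cell
               c_d=lambda d: 1.0,          # deletion cost for a spurious detection
               c_m=lambda d, p: 0.0 if d == p else 1.0):  # match/substitution
    """Minimum total cost t to transform the observed distance set into the
    prototype; distances left over at the end of the observation are free."""
    l, k = len(observed), len(prototype)
    # t[i][j]: cost of matching the first i observed with the first j prototype
    # distances, computed by dynamic programming.
    t = [[0.0] * (k + 1) for _ in range(l + 1)]
    for j in range(1, k + 1):
        t[0][j] = t[0][j - 1] + c_i(prototype[j - 1])
    for i in range(1, l + 1):
        t[i][0] = t[i - 1][0] + c_d(observed[i - 1])
        for j in range(1, k + 1):
            t[i][j] = min(t[i - 1][j] + c_d(observed[i - 1]),        # delete
                          t[i][j - 1] + c_i(prototype[j - 1]),       # insert
                          t[i - 1][j - 1] + c_m(observed[i - 1],
                                                prototype[j - 1]))   # match
    return min(t[i][k] for i in range(l + 1))  # trailing observations are free

# Worked example of section 6.1.5: four modifications are needed.
t = match_cost([3, 10, 11, 15, 20, 20, 21, 25], [5, 5, 10, 10, 20, 20])
t_upper = 6 * 1.0                  # t_upper = k · min(c_i, c_s)
C = (t_upper - t) / t_upper * 100  # eq. 6.1: here t = 4, so C ≈ 33%
```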

6.1.6 Cost Functions

The total cost depends on the comparison between each of the individual elements of N_v^k and P, and thus on the replacements necessary to match them. The replacement operations are given by insertion (cost c_i), deletion (cost c_d), substitution (cost c_s), and match (cost c_m). The cost for matching, c_m, is zero when the two distances are equal.

The difference between two distances is defined as their relative deviation, or δ = |d_i − p_j|/p_j. Here, d_i denotes the observed distance and p_j the prototype distance with which to compare. Robustness against spatial distortion is obtained by allowing a percentage deviation α in the comparison of distances [11]. In this case, two distances are considered equal as long as their relative deviation is smaller than the tolerance α. A minimum value for α is given by the distance measurement error. When the deviation between two distances is larger than α, their correspondence is included in the matching cost. The correspondence C then depends on the total distance deviation between the compared elements. The matching cost is taken linearly proportional to the distance deviation, or

\[ c_m = \begin{cases} 0 & \text{if } \delta \leq \alpha \\ \dfrac{c_s}{c_s - \alpha}\,(\delta - \alpha) & \text{if } \alpha < \delta < c_s \\ c_s & \text{otherwise.} \end{cases} \tag{6.3} \]

The cost for matching is c_s if δ ≥ c_s, which is equivalent to a substitution operation.

For our case, the cell detector properties determine the cost for insertion. For a sensitive detector, the probability of missing a cell is low. As a consequence, the cost for insertion should be high compared to deletion. Alternatively, a low-sensitivity cell detector will overlook cells, but fewer artifacts will be detected. Thus, the cost for insertion should be low relative to deletion. The insertion cost is therefore tuned to the cell detector performance, or

\[ \frac{c_i}{c_d} \propto \frac{\#A}{\#M}. \tag{6.4} \]

Here, #A denotes the estimate of the average number of artifacts detected as cells, and #M denotes the estimate of the average number of missed cells.

The deletion cost is derived from object features. A probability distribution can be obtained from well-chosen measurements, e.g., the contour ratio, on a test set of objects. Afterwards, the probability P(v_i) of object v_i being a cell is extracted from the measured distribution. When an object has a low probability of being a cell, the object should be deleted. Therefore, rather than considering a fixed deletion cost, the probability of an object being a cell determines the deletion cost for that object, or

\[ c_d(v_i) \propto P(v_i). \tag{6.5} \]

As a result, the correspondence measure for the object under examination is only slightly affected by the deletion of artifacts. The rejection of detected objects as artifacts can be based on both the cell probability P(v_i) and the correspondence C of the object to the cluster prototype.
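In code, the three cost functions could be plugged into the match_cost sketch above as follows. The proportionality constants are free parameters of the application, and all names here are ours.

```python
def c_match(d_obs, d_proto, alpha=0.1, c_s=0.35):
    """Eq. 6.3: free within tolerance alpha, then linear in the relative
    deviation delta, saturating at the substitution cost c_s."""
    delta = abs(d_obs - d_proto) / d_proto
    if delta <= alpha:
        return 0.0
    if delta < c_s:
        return c_s / (c_s - alpha) * (delta - alpha)
    return c_s

def c_insert(c_del, n_artifacts, n_missed):
    """Eq. 6.4: insertion cost tuned to the detector, relative to deletion."""
    return c_del * n_artifacts / n_missed

def c_delete(p_cell, c_del=1.0):
    """Eq. 6.5: deleting an unlikely cell (low P(v)) is cheap."""
    return c_del * p_cell
```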

6.1.7 Evaluation of Robustness on Simulated Point Patterns

Four algorithms, based on the Voronoï graph, the nearest neighbor distance, the method of Lavine et al. [11], and the proposed distance graph matching, were tested in simulations. The segmentation performance was measured as a function of the input distortion. The input consisted of a foreground point pattern embedded in a background pattern, distorted by some random process.

For the simulations, two arbitrarily chosen patterns were generated. A hexagonal point pattern was embedded in a random point pattern with the same density, and the same pattern was placed in a hexagonal pattern with half the density (fig. 6.3). Artificial distortion was added to the sets by consecutive random removal, addition, and displacement of points, as sketched below. The distortion was regulated from 0% up to a maximum, resulting in a noisy realization of the ideal patterns. By removing points, the algorithm is tested for robustness against missing cells. Addition of points reveals the robustness of the algorithm against false cell detections. Robustness against spatial distortion is examined by means of point displacement. Each of the four methods was tested for robustness against the given distortions. The combination of removal and displacement of points shows robustness against touching cells. The other combinations show the interaction of distortions on robustness.

The segmentation performance indicates how well the foreground pattern was discriminated from the background points. It was measured as a function of the distortion.
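The distortion process is straightforward to reproduce; the sketch below is ours, with d_nn the nearest-neighbor distance of the undistorted foreground pattern and three fractions controlling the distortion level.

```python
import numpy as np

rng = np.random.default_rng(0)

def distort(points, remove=0.0, add=0.0, shift=0.0, d_nn=1.0):
    """Consecutive random removal, addition, and displacement of points;
    `points` is an (n, 2) array, the fractions run from 0 (no distortion) up.
    100% shift displaces points by up to half the nearest-neighbor distance."""
    pts = points[rng.random(len(points)) >= remove]       # random removal
    lo, hi = points.min(axis=0), points.max(axis=0)
    extra = rng.uniform(lo, hi, size=(int(add * len(points)), 2))
    pts = np.vstack([pts, extra])                         # random addition
    return pts + rng.uniform(-1, 1, pts.shape) * shift * d_nn / 2
```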

Figure 6.3: Point patterns used for the experiments. a. A regular hexagonal pattern inside a hexagonal pattern with half the density. b. A regular hexagonal pattern inside a random pattern with the same density.

The performance of the various algorithms was measured as one minus the ratio of the false negatives combined with the ratio of false positives, or

\[ \mathcal{P} = 1 - \frac{\#F_b}{\#\mathrm{Truth}_f} - \frac{\#B_f}{\#\mathrm{Truth}_b}. \tag{6.6} \]

Here, #F_b denotes the number of foreground markers classified as background, #B_f denotes the number of background markers classified as foreground, and #Truth_f and #Truth_b denote the true number of foreground and background markers, respectively, in the distorted data set.
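Eq. 6.6 translates directly into code; the boolean label arrays below are illustrative.

```python
import numpy as np

def performance(truth_fg, pred_fg):
    """Eq. 6.6 for boolean marker labels (True = foreground): one minus the
    false-negative ratio minus the false-positive ratio."""
    truth_fg, pred_fg = np.asarray(truth_fg), np.asarray(pred_fg)
    f_b = (truth_fg & ~pred_fg).sum()   # foreground classified as background
    b_f = (~truth_fg & pred_fg).sum()   # background classified as foreground
    return 1.0 - f_b / truth_fg.sum() - b_f / (~truth_fg).sum()
```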

6.1.8 Algorithm Robustness Evaluation

For the experiments, the area of the influence zones in the Voronoï graph was thresholded [7] in order to partition the test point patterns. The thresholds were chosen such that 10% distortion on the distance to the nearest neighbors was allowed for the undistorted foreground pattern. This amounts to calculating the minimum and maximum area for scaled versions of the pattern, with scaling factors 0.9 and 1.1.

With regard to the nearest-neighbor distance, thresholds were taken such that 10% perturbation in the nearest-neighbor distance was allowed, determined in the undistorted foreground pattern.

The method given by Lavine et al. [11] was tested for k ∈ {5, 10, 15, 20, 25}. Implementation of this method was achieved by using the distance graph matching algorithm. Examples of both the foreground and the background pattern were used for discrimination. Costs for insertion and deletion were taken as infinity (c_i = c_d = ∞); thus, only substitutions or matches were allowed. The allowed perturbation in the distances was set at 10% (c_s = α = 0.1). The correspondence C′ (eq. 6.2) was thresholded at 50%.

Experiments for the proposed distance graph matching method were taken with prototype length k ∈ {5, 10, 15, 20, 25}. In order to allow the string matching to expand, the number of observed elements considered for matching was twice the length of the prototype set (l = 2k). Examples of both the foreground and the background pattern were used for discrimination. Substitution of cells was not allowed, except as a deletion followed by an insertion operation. This can be achieved by taking the cost for substitution equal to the sum of the costs for insertion and deletion (c_s = c_i + c_d = c). The costs for insertion and deletion were taken as equal. The allowed perturbation in the distances was taken to be 10% (c = α = 0.1). The correspondence C′ (eq. 6.2) was thresholded at 50%. This way, parameters were set to permit a fair comparison between the four methods for tissue architecture segmentation.

6.1.9 Robustness for Scale Measure

In order to investigate the influence of distortions on the scale normalization measure, the measure was tested in the simulations. The normalization factor d̄, the average neighbor distance, was calculated under addition, removal, and displacement of points. The percentage error with respect to the initial scale measure, d̄ for 0% distortion, was measured as a function of the distortion. The number of neighbors k considered for the calculation of the scale measure was taken to be {1, 5, 10, 15}.

6.1.10 Cell Detection

Cell domes were extracted from the hippocampal images by grey-level reconstruction [2], resulting in a grey-value image containing the tops of all mountains when considering the input image as a grey-level landscape. From the dome image, saturated transparent parts were removed, and the remaining objects were thresholded. The results contained cell bodies, neurite parts, and artifacts. An opening was applied to remove the neurite parts. After labeling, the center of gravity of each object was calculated and used for the determination of the k-nearest neighbor graphs. The reciprocal contour ratio (1/cr) was used as a measure for cell probability (eq. 6.5).
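This pipeline can be approximated with scikit-image standing in for the original SCIL-Image routines; the dome height, threshold, and structuring-element size below are illustrative choices of ours, and the removal of saturated transparent parts is omitted.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import binary_opening, disk, reconstruction

def detect_cells(image, h=30.0, radius=3, threshold=10.0):
    """image: float grey-value image; returns the (n, 2) centers of gravity
    of the detected objects."""
    domes = image - reconstruction(image - h, image)        # grey-level reconstruction
    mask = binary_opening(domes > threshold, disk(radius))  # remove neurite parts
    return np.array([r.centroid for r in regionprops(label(mask))])
```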

6.1.11 Hippocampal CA Region Segmentation

Segmentation of the CA region was obtained by supervised selection of an example region. An arbitrary section, unaffected by ischemia, was taken and, after cell detection, one of the cells in the CA1 region was manually selected. The neighborhood of the selected cell was used as a prototype for segmentation. No counter (background) example was taken. Each of the four algorithms was used for segmentation of the CA region. Parameters for segmentation were derived from the example neighborhood to permit a fair comparison between the methods.

Thresholds for the area of the influence zones in the Voronoï graph were derived from the example, such that 35% distortion on the distance to the nearest neighbor was allowed. For the nearest-neighbor method, thresholds were taken such that 35% distortion on the nearest-neighbor distance in the example was allowed.

The method of Lavine et al. [11] was implemented by using the distance graph matching algorithm. Costs for insertion and deletion were taken as infinity (c_i = c_d = ∞), allowing only substitutions or matches with 35% tolerance (c_s = α = 0.35). The deletion cost for individual objects was adjusted by the cell probability, derived from the contour ratio. For the graph matching, 15 neighbors were taken into account. A cell was considered a cluster cell when the similarity between the distance graph and the prototype was at least 50%.

For the distance graph matching method, substitution of cells was not allowed, achieved by setting c_s = c_i + c_d = c. The substitution cost was tuned to allow for 35% distortion in the distances, of which the last 25% was included in the correspondence measure (c = 0.35, α = 0.1). After visual examination of the detector performance, the insertion cost was set at twice the deletion cost. The deletion cost for individual objects was adjusted by the cell probability, derived from the contour ratio. For the distance graph matching, 15 neighbors were taken into account. Matching was allowed to expand to twice the number of neighbors in the prototype (l = 2k). A cell was considered a cluster cell when the similarity between the distance graph and the prototype was at least 50%.

6.2 Results

6.2.1 Algorithm Robustness Evaluation

Figure 6.4 shows the performance of the algorithms on the simulated point patterns, where 0% performance corresponds to random classification of the markers. The distortion for removal and addition of points is given as the percentage of points removed or added. For displacement of points, the distortion is given as the percentage of displacement, up to half the nearest neighbor distance (100%) of the undistorted hexagonal foreground pattern. When the distortion in displacement reaches 100%, the hexagonal pattern has become a random pattern, indistinguishable from the random background pattern (fig. 6.4f). The optimum performances which can be reached for the three types of distortion are shown in fig. 6.4b,d,f. In those cases, the segmentation result corresponds to correct classification of all (remaining) markers. The results of the combined experiments are examined for interaction between the different kinds of distortions, and their relation with the individual performances.

The behavior of the algorithms under all distortions remains similar for both test patterns. This suggests that the performance of the different methods is insensitive to the type of test pattern. For addition and displacement of points, the minimum and maximum performance over the 25 simulation trials remains within 20% of the average. For removal of points, the minimum and maximum performance was within 40% of the average for the Voronoï, Lavine et al. [11], and distance graph matching methods. The nearest neighbor method shows a deviation of 60% from the average for removal of points, which is due to the normalization of the performance measure to the number of markers (eq. 6.6).

Figure 6.4a–d reveals that thresholding the area of influence in the Voronoï graph is inadequate for determining cluster membership when cell detection is not reliable. No point can be removed or added without changing the Voronoï partition for all (Voronoï) neighbors surrounding the removed or added point. A second drawback is the high initial error of 20% and 35%, respectively. Under displacement of points (fig. 6.4e,f), segmentation based on the Voronoï graph is shown to be robust. Figure 6.4f reveals the bias (100% distortion, 10% performance) for the Voronoï graph at the image border. Points near the image border are all (correctly) classified as background due to their deviation from the normal area of influence, resulting in a better than random classification for the indistinguishable fore- and background. Experiments for the Voronoï method performed with thresholding the deviation on the nearest-neighbor distance at 5% give only marginally better performances (data not shown). For the combination of displacement and removal, the resulting segmentation error showed both factors to be additive below 15% removal (data not shown). Similarly, for the displacement and addition of points, the combined error was shown to be the addition of the errors caused by applying each distortion separately. The performance under removal and addition of points is only slightly better than the addition of the individual errors.

Segmentation based on the nearest-neighbor distance behaves like the optimum when distorted by removal of points (fig. 6.4a,b). Under the condition of addition of points (fig. 6.4c,d), performance is as bad as with the Voronoï method.
Since 10% distortion on the nearest neighbor distances is allowed, the method performs well up to 10% displacement (fig. 6.4e,f). As shown elsewhere [19], segmentation based on one of the other k-nearest neighbors is able to improve the discrimination between patterns. The behavior of the method under distortions for higher k remains similar to the results shown for k = 1. The performance for the combinations removal-addition and removal-displacement was completely determined by addition and displacement (data not shown), respectively. As can be expected from fig. 6.4a,b, the influence of removal of points may be neglected. For the combination of addition and displacement of points, the effect on the segmentation error is the addition of the errors caused by each distortion separately.

For the method of Lavine et al. [11], the results are shown for k = 10. The initial segmentation error between the test point patterns (fig. 6.4a,b) is smaller than with both the Voronoï and the nearest-neighbor method. Taking more neighbors into account clearly results in better discrimination between point patterns. The performance under removal of points degrades faster than for the nearest-neighbor segmentation (fig. 6.4a,b), while the performance for addition of points (fig. 6.4c,d) degrades less severely for small distortions. The tolerance for spatial distortion is improved in comparison to the nearest-neighbor method. Analysis based on larger neighborhood sizes (k ∈ {15, 20, 25}) shows that the performance for removal and addition of points degrades faster, whereas the performance improves under the condition of displacement of points. Additionally, the initial error increases by a few percent. For k = 5, segmentation performance is comparable, except for the initial error, which increases by a few percent. The error due to both the combinations removal-displacement and addition-displacement was shown to be almost perfectly additive (data not shown). For the combination of addition and removal of points, the error due to removal is counteracted by the addition of points for large distortions.

The distance graph matching method performs slightly better than the method of Lavine et al. [11] for removal of points (fig. 6.4a,b). Under the condition of point addition, the distance graph matching method is clearly superior. The initial error in the discrimination between the hexagonal foreground and background is zero for both the distance graph method and that of Lavine et al. [11]. For the discrimination between hexagonal foreground and random background, the initial performance of the distance graph matching is better than that of the method of Lavine et al. [11]. Performance for a small neighborhood size (k = 5) is comparable to the performance of the method of Lavine et al. [11]. For large neighborhood sizes (k ≥ 15), performance for removal and addition degrades faster, but remains better than with the method of Lavine et al. [11]. Under displacement of points, the performance increases for high k. Additionally, the initial error increases by a few percent. The performance for the combined distortion from addition and displacement of points was shown to be completely determined by the point displacement (data not shown). For removal and addition, the error due to removal was reduced by the random addition of points for severe distortions. The combination of removal and displacement was shown to be better than the addition of the respective errors.
From these experiments, it can be concluded that both thresholding the area of influence in the Voronoï graph and thresholding the distance to one of the nearest neighbors are not suitable for robust segmentation of tissue architecture. The experiments undertaken show the instability of the Voronoï graph under detection errors. The Voronoï graph is certainly useful for the determination of neighbors [16], but more robust parameters can be estimated from the Euclidian distance between these neighbors [23]. The proposed distance graph matching algorithm indeed has a better performance under detection errors than the method of Lavine et al. [11]. Therefore, the distance graph matching method is more suitable for use in the partitioning of tissue architecture.

Figure 6.4: Average segmentation performance plotted as a function of the distortion. Each point represents the average performance over 25 trials for the given percentage of distortion. For the method of Lavine et al. [11] and the distance graph matching method, results for k = 10 are shown. a. Point removal, hexagonal background. b. Point removal, random background. c. Point addition, hexagonal background. d. Point addition, random background. e. Point displacement, hexagonal background. f. Point displacement, random background. (The curves compare the Voronoï, nearest neighbor, Lavine, and distance graph methods with the optimum.)


Figure 6.5: Influence of removal, addition, and displacement of points on the scale normalization measure d̄ for k = 10. Average percentage error over 25 trials.

6.2.2 Robustness for Scale Measure

Robustness of the scale normalization was tested on both artificial data sets. Results for k = 10 on the hexagonal–hexagonal data set are shown in fig. 6.5. The result for k = 1 degrades for addition and displacement of points, while removal of points is more stable. The results for k = 5 and k = 15 are almost identical to the results shown for k = 10. The results for the hexagonal–random data set are almost identical to the hexagonal–hexagonal results for k ∈ {5, 10, 15}. The experiment shows that the average k-nearest neighbor distance is useful for normalizing for scale when k is taken large enough.

6.2.3 Hippocampal CA Region Segmentation

The new method of distance graph matching was tested on the segmentation of the CA region in rat hippocampi (fig. 6.6a), based on the preservation of the CA1 structure after ischemia [9]. Here, the correlation between manual and automatic counting of the preserved cells in the CA1 region is shown. An example of the cell detection is shown in fig. 6.6b. As a result of the distance graph matching, all cells in the CA and hilus region were extracted from the image (fig. 6.6c). Only cluster cells are preserved in the segmented image. The CA1 region (fig. 6.6) is that part inside the CA region, starting orthogonally at the end of the CA inside the hilus, and ending where the CA region becomes thicker before the U-turn. Manual counting was performed on 2–4 slices for each animal, resulting in a total number of preserved neurons counted in a total length of CA1 region (cells/mm) per animal.

To demonstrate the usefulness of the proposed segmentation method, the correlation between these manual countings and automatic counting is shown. Due to the ambiguous definition of the CA1 region, manual indication of the CA1 region in the hippocampus image was necessary. For each hippocampus, three points were obtained, indicating the start (S), middle (M), and end (E) of the CA1 region. The segmented cells between start and end point, and within a reasonable distance from the line segment SME connecting the three points, were classified as belonging to the CA1 region. The average number of cells per unit length was calculated for the obtained cell cluster. The cluster length was taken to be the length of line segment SME.

Figure 6.6: Example of the segmentation of cell clusters in the hippocampus of a rat. a. Hippocampus image as acquired by the setup. b. The resulting image from the cell detection. c. Cell clusters after the distance graph matching. The line segment SME in figure (a) indicates the CA1 region. The length of the CA1 region is derived from the length of the line SME. All segmented cells in figure (c) between points S and E are considered part of the CA1 region.

Figure 6.7 shows the correlation between the manual and automatic counting for each of the algorithms tested. Results obtained with segmentation based on the Voronoï graph and on the nearest-neighbor distance do not correlate well with manual counting. The method of Lavine et al. [11] is biased (mean error −12.9) and results in a mean squared error of 405.0 [20]. For the distance graph matching algorithm, the mean error is 0.1 and the mean squared error is 174.8.
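The automatic count per millimeter used in this comparison can be reproduced along the lines below; the tolerance `tol` (the "reasonable distance" from SME) and the pixel calibration are illustrative parameters of our sketch.

```python
import numpy as np

def dist_to_segment(p, a, b):
    """Euclidian distance from points p (n, 2) to the segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)   # projection onto the segment
    return np.linalg.norm(ap - np.outer(t, ab), axis=1)

def ca1_density(cells, S, M, E, tol, px_per_mm):
    """Count the cells within `tol` of the polyline S-M-E, per mm of SME length."""
    d = np.minimum(dist_to_segment(cells, S, M), dist_to_segment(cells, M, E))
    n = int((d <= tol).sum())
    length_mm = (np.linalg.norm(M - S) + np.linalg.norm(E - M)) / px_per_mm
    return n / length_mm
```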

6.3 Discussion

The geometrical arrangement of cells in tissues may reveal differences between physiological and pathological conditions based on structure. This intuitive notion does not imply that the arrangement can easily be captured in an algorithm. Quantification of tissue architecture, when successful and objectively measurable, opens the way to better assessment of treatment response. Before deriving parameters from tissue architecture, partitioning of the tissue into its parts of interest is necessary.

We present a method for the segmentation of homogeneous tissue parts based on cell clustering. The objective is to develop a method which is robust under spatial distortions intrinsic to the acquisition of biological preparations, such as squeezing of the tissue, as well as taking a two-dimensional transection through a three-dimensional block. These manipulation artefacts lead to two major confounding factors: a. distortion in the cell density, and b. errors in cell detection. Distortion in cell density is reflected in the distance between cells. Irregularity or spatial distortion in the cell positions, and thus distortion in the neighbor distances, is inherent to tissues. Squeezing of tissue, or local nonrigid deformations, results in structural changes in cell density and thus changes in neighbor distances. Small changes in transection angle cause loss of cells in regions of the tissue. A second source of error in cell detection is the classification of artifacts as cells; conversely, cells may be overlooked during detection, causing lack of proper definition of local tissue architecture. When neighboring cells touch one another, they are often erroneously detected as one single cell. The method also deals with the uncertainty in cell classification often encountered in the automatic processing of tissues. Errors in the assignment of cells on cluster borders should be minimal to prevent influence of cluster shape on the segmentation result. The quantitative method enables reliable classification of areas by type of tissue.


Figure 6.7: Correlation between the average number of cells per mm CA1 length per animal counted manually, and the number of segmented cells per estimated mm CA1 length per animal. The dashed line indicates y = x. a. Voronoï method. b. Nearest-neighbor method. c. Method of Lavine et al. [11]. d. Distance graph matching method.

In contrast to other cell pattern segmentation methods, the proposed distance graph matching algorithm meets the various demands formulated above. Detection errors such as missing cells or artifact detections are corrected by insertion and deletion operations, respectively. Deviation of the distances to neighboring cells is incorporated by allowing some tolerance in distance matching. Local deformation of the tissue has only minor influence as long as the deviation in distances remains within tolerance. The total sum of errors, combined with the deviation in distances, indicates how well the cell and its environment fit the prototype environment. A possible drawback of the algorithm is its insensitivity to orientation. It is possible for two different patterns to have the same distance graphs. Under these circumstances, segmentation is not possible by any algorithm based on interpoint distances.

Including cell probability in the matching process further improves segmentation performance. The interplay between the probability indicating that the object is or is not a cell, and the fit of the object in the cluster prototype, allows a better rejection of artifacts, while cluster cell classification is less affected. Cell confidence levels can be derived from the evaluation of the probability distribution of cell features such as contour ratio. In order to remain independent of microscope and camera settings, the cell features chosen should not depend on scale, absolute intensity, etc.

The selection of an example often involves a supervised (i.e., interactive) procedure. The design of such a procedure requires adherence to several principles [21]. Among other requirements, reproducibility under the same intention is considered the most important for our purpose. As a consequence, any prototype selection algorithm should only consider cells in conformity with the expert's intention.

Application of the method to the detection of the CA structure in rat hippocampi showed that even narrow elongated structures, only a few cells thick, can be well segmented using the proposed distance graph matching. Results obtained semiautomatically correlate well with manual counts of preserved cells in the CA1 region, as long as there are enough cells left to discern regular clusters. The other segmentation methods tested, based on the area of influence in the Voronoï graph, the distance to the nearest neighbor, and the method of Lavine et al. [11], resulted in poor correlation between the automatic segmentation and the counts by the expert. For the case of CA region determination, the proposed method proved to be compatible with the perception of the pathologist. We have not applied the method to other tissue segmentation problems.

For the recognition of tissue architecture, the proposed distance graph matching algorithm has proven to be a useful tool. The method reduces the nonbiological variation in the analysis of tissue sections and thus improves confidence in the result. The present method can be applied to any field where regular patterns have to be recognized, as long as the directional distribution of neighbors may be neglected.

6.4 Appendix: Dynamic Programming Solution for String Matching

The dynamic programming solution for matching the observation with the prototype is given in fig. 6.8. The graph searches for a small set (horizontal) inside a larger set (vertical). The graph represents horizontally the prototype set P = {p1, p2, . . . , pk} and vertically the input set N = {d1, d2, . . . , dl}. Each node C[i, j] in the graph represents the comparison between the ith element from the prototype and the jth element from the input set.

The directional edges in the graph determine which operations (deletion, insertion, or matching/substitution) are necessary to obtain the observed and prototype distance at the same position in the comparison string.


Figure 6.8: The dynamic programming solution for string matching.

For instance, each valid path from “start” to node C[2,3] describes the operations necessary to end up with a set where the third element in the observation is considered as the second one. A horizontal step represents insertion of a prototype element; the same observed distance is compared to the next prototype element. A vertical step implies deletion of an observation; the next observed distance is compared to the same prototype element. Matching or substitution is represented by a diagonal step.

A cost is assigned to each edge. Using an edge to reach a particular node implies the addition of the edge penalty to the total cost involved in reaching that node. Horizontal edges have cost ci; vertical edges cost cd. The cost for diagonal edges depends on the comparison between the elements connected to the node from which the arrow starts. The cost is zero when the elements match (cm = 0), cs when the observed element is substituted for the prototype (when cs ≤ cm), or cm for making them match otherwise.

The cost to reach a particular node is the sum of all costs incurred when taking some valid path from “start” to the node considered. The minimum cost to reach that node corresponds to the path with the least total cost over all possible paths. When considering only the previous nodes, i.e., all nodes from which the one under consideration can be reached, the problem can be reformulated as a recurrence relation. In this case, the minimum cost path is given by the least of the minimum cost paths to the previous nodes, increased by the cost of traveling to the node of interest. Comparison begins at the “start” node, and each column is processed consecutively from top to bottom. In this manner, the minimum cost paths to the previous nodes are already determined when arriving at a particular node. The minimum cost to reach the node under consideration is then given by:

    C[i, j] = min { C[i, j−1] + cd,  C[i−1, j] + ci,  C[i−1, j−1] + cm,  C[i−1, j−1] + cs } .   (6.7)

The initial value at “start” is zero; the cost from “start” to the first node is also zero. The costs assigned to nonexisting edges (at the border of the graph) are considered infinite. The “term” nodes at the bottom and right side of the graph are used for collecting the costs of matching the last element in the observation (bottom) or the last element from the prototype (right). The term node C[k+1, l+1] describes the cost associated with matching the input set exactly to the prototype. The only interest is in finding the prototype in a (larger) number of observed distances, for which the cost is given by node C[k+1, k+1]; this is the first node where the observation is exactly transformed into the prototype. When additional insert and delete operations on the observed set result in a smaller matching cost, that path should be taken as the minimum cost path. Therefore, the minimum total cost is given by the minimum of the term nodes from C[term, k+1] to C[term, l+1].

The order of the string matching algorithm is O(l × k) [18]. Here, k is the number of neighbors in the prototype, and l is the number of neighbors taken from the observation. When the cost for matching is constant, as is the cost for substitution, algorithms with lower complexity are known for comparing ordered sequences.
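
A minimal sketch of the recurrence (eq. 6.7) in Python, assuming constant costs and placeholder predicates for the match and substitution tests (the distance-tolerance tests of this chapter); this is our illustration, not the thesis code.

    import numpy as np

    def match_prototype(prototype, observed, cd=1.0, ci=1.0, cm=1.0, cs=0.5,
                        matches=lambda a, b: a == b,
                        substitutable=lambda a, b: False):
        """Minimum cost of editing a prefix of `observed` into `prototype`."""
        k, l = len(prototype), len(observed)
        C = np.full((k + 1, l + 1), np.inf)
        C[0, 0] = 0.0                              # the "start" node
        for j in range(1, l + 1):
            C[0, j] = C[0, j - 1] + cd             # delete leading observations
        for i in range(1, k + 1):
            C[i, 0] = C[i - 1, 0] + ci             # insert prototype elements
            for j in range(1, l + 1):
                p, d = prototype[i - 1], observed[j - 1]
                diag = 0.0 if matches(p, d) else \
                    (cs if substitutable(p, d) else cm)
                C[i, j] = min(C[i, j - 1] + cd,        # deletion of an observation
                              C[i - 1, j] + ci,        # insertion of a prototype element
                              C[i - 1, j - 1] + diag)  # match / substitution
        # minimum over the terminal nodes C[term, k+1] .. C[term, l+1] of the text
        return C[k, k:].min()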

Bibliography

[1] N. Ahuja and M. Tuceryan. Extraction of early perceptual structure in dot patterns: Integrating region, boundary, and component gestalt. Comput. Vision Graphics Image Process., 48(3):304–356, 1989.

[2] S. Beucher and F. Meyer. The morphological approach to segmentation: The watershed transformation. In E. R. Dougherty, editor, Mathematical Morphology in Image Processing, chapter 12, pages 433–481. Marcel Dekker, New York, 1993.

[3] G. Bigras, R. Marcelpoil, E. Brambilla, and G. Brugal. Cellular sociology applied to neuroendocrine tumors of the lung: Quantitative model of neoplastic architecture. Cytometry, 24:74–82, 1996.

[4] R. Chandebois. Cell sociology: A way of reconsidering the current concepts of morphogenesis. Acta Bioth., 25:71–102, 1976.

[5] F. Darro, A. Kruczynski, C. Etievant, J. Martinez, J. L. Pasteels, and R. Kiss. Characterization of the differentiation of human colorectal cancer cell lines by means of Voronoï diagrams. Cytometry, 14:783–792, 1993.

[6] K. J. Dormer. Fundamental Tissue Geometry for Biologists. Cambridge Univ. Press, London, 1980.

[7] C. Duyckaerts, G. Godefroy, and J. J. Hauw. Evaluation of neuronal numerical density by Dirichlet tessellation. J. Neurosci. Methods, 51:47–69, 1994.

[8] M. Guillaud, J. B. Matthews, A. Harrison, C. MacAulay, and K. Skov. A novel image cytometry method for quantification of immunohistochemical staining of cytoplasmic antigens. Analyt. Cell. Pathol., 14:87–99, 1997.

[9] M. Haseldonckx, J. Van Reempts, M. Van de Ven, and L. Wouters. Protection with lubeluzole against delayed ischemic brain damage in rats. Stroke, 28:428–432, 1997.

[10] H. Honda. Geometrical models for cells in tissues. Int. Rev. Cytol., 81:191–248, 1983.

[11] D. Lavine, B. A. Lambird, and L. N. Kanal. Recognition of spatial point patterns. Pattern Rec., 16:289–295, 1983.

[12] R. Marcelpoil and Y. Usson. Methods for the study of cellular sociology: Voronoï diagrams and parametrization of the spatial relationships. J. Theor. Biol., 154:359–369, 1992.

[13] G. A. Meijer, J. A. M. Beliën, P. J. van Diest, and J. P. A. Baak. Image analysis in clinical pathology. J. Clin. Pathol., 50:365–370, 1997.

[14] J. Palmari, C. Dussert, Y. Berthois, C. Penel, and P. M. Martin. Distribution of estrogen receptor heterogeneity in growing MCF–7 cells measured by quantitative microscopy. Cytometry, 27:26–35, 1997.

[15] C. R. Rao and S. Suryawanshi. Statistical analysis of shape of objects based on landmark data. Proc. Natl. Acad. Sci. USA, 93:12132–12136, 1996.

[16] E. Raymond, M. Raphael, M. Grimaud, L. Vincent, J. L. Binet, and F. Meyer. Germinal center analysis with the tools of mathematical morphology on graphs. Cytometry, 14:848–861, 1993.

[17] K. Rodenacker and P. Bischoff. Quantification of tissue sections: Graph theory and topology as modelling tools. Pattern Rec. Lett., 11:275–284, 1990.

[18] D. Sankoff and J. B. Kruskal. Time Warps, String Edits and Macromolecules: The Theory and Practice of Sequence Comparison. Addison-Wesley, Reading, 1983.

[19] H. Schwarz and H. E. Exner. The characterization of the arrangement of feature centroids in planes and volumes. J. Microscopy, 129:155–169, 1983.

[20] L. B. Sheiner and S. L. Beal. Some suggestions for measuring predictive performance. J. Pharmacokinet. Biopharm., 9(4):503–512, 1981.

[21] A. W. M. Smeulders, S. Delgado Olabarriaga, R. van den Boomgaard, and M. Worring. Design considerations for interactive segmentation. In R. Jain and S. Santini, editors, Visual Information Systems 97, pages 5–12, San Diego, 1997. Knowledge Systems Institute.

[22] L. Vincent. Graphs and mathematical morphology. Signal Processing, 16:365– 388, 1989.

[23] F. Wallet and C. Dussert. Multifactorial comparative study of spatial point pattern analysis methods. J. Theor. Biol., 187:437–447, 1997.

Chapter 7

A Minimum Cost Approach for Segmenting Networks of Lines

submitted to the International Journal of Computer Vision.

Alice came to a fork in the road. ‘Which road do I take?’ she asked. ‘Where do you want to go?’ responded the Cheshire cat. ‘I don’t know,’ Alice answered. ‘Then,’ said the cat, ‘it doesn’t matter.’ – Lewis Carroll.

The detection of lines in images is an important low-level task in computer vision. Successful techniques are available for the detection of curvilinear structures [4, 6, 12]. They are applied in pharmaceutical research, where interesting tissue parameters can be obtained by the extraction of bloodvessels, neurites, or tissue layers. Furthermore, the extraction of roads, railroads, rivers, and channels from satellite or aerial images can be used to update geographic information systems.

A higher level of information is obtained by connecting the lines into networks. Applications include the roads between crossings, the highways connecting cities, the railway system between stations, and the neurite system connecting neurons, all yielding organizational information about the network under consideration. Extraction of line networks rests on the detection of the connections, the vertices in the network, as well as their interconnecting curves. Linking line points over the interconnections is an ill-defined problem, since the curves are likely to contain gaps and branches. A more attractive approach is to find the minimum cost path between vertices, the path which contains most line evidence. The vertices can then be used to guide the line tracking, and network extraction reduces to tracing lines between vertices.


In this chapter, we consider the robust extraction of networks of lines by the application of minimum cost graphs. The design objective is robustness against gaps in lines, which we consider the most prominent source of error in network extraction. We propose a robust measure of edge saliency, which indicates the confidence for each connection.

7.1 Network Extraction Algorithm

A network consists of vertices interconnected by lines.

Definition 16 (Network of Lines) A network of lines is defined by a set of vertices indicating line end points, and the corresponding set of lines representing interconnections, where none of the lines cross.

The definition above implies vertices at crossings. The network can be segmented by tracing the lines between vertices. Therefore, four steps are considered: a. the detection of line points, b. the detection of vertices, c. finding the optimal paths between neighboring vertices, yielding the lines, and d. the extraction of the network graph from the set of vertices and lines. A flow diagram is given in fig. 7.1. Post-processing may include pruning of the graph to remove false branches, and the assignment of confidence levels to the resulting graph. Graph confidence is given by the saliency of the detected lines, and by the basin coverage, indicating how much line evidence is covered by the graph. If the network graph covers all line evidence, no lines are missed; if not all line evidence is covered, lines may have been missed during extraction. Hence, basin coverage together with edge saliency indicate missed lines and spurious lines in the network graph. Each of these steps is described in further detail below.

7.1.1 Vertex Detection

For specific applications, the network vertices are geometrical structures which are easier to detect than the interconnecting lines. Often, these are salient points in the image. We assume these structures to be detected as landmarks to guide the line tracing algorithm. For a general method, one may rely on the detection of saddlepoints, T-junctions, and crossings to obtain vertices [9, 13].

7.1.2 Line Point Detection

Theoretically, in two dimensions, line points are detected by considering the second order directional derivative in the gradient direction [12]. For a line point, the first order directional derivative perpendicular to the line vanishes, while the second order directional derivative exhibits an extremum.


Figure 7.1: Flow diagram for network extraction. a. Action flow diagram, b. the corresponding data flow. Graph extraction results in the network graph, line saliency indicating confidence for the extracted lines, and basin coverage indicating missed lines.

Hence, the second order directional derivative perpendicular to the line is a measure of line contrast. The second order directional derivatives are calculated by considering the eigenvalues of the Hessian,

    H = ( fxx  fxy )
        ( fxy  fyy )    (7.1)

given by

    λ± = fxx + fyy ± √( (fxx − fyy)² + 4 fxy² )   (7.2)

where f(x, y) is the grey-value function and indices denote differentiation. After ordering the eigenvalues by magnitude, |λ+| > |λ−|, λ+ yields the second order directional derivative perpendicular to the line. Bright lines are observed when λ+ < 0 and dark lines when λ+ > 0 [10]. For both types of lines, the magnitude |λ+| indicates line contrast. Note that this formulation is free of parameters.

In practice, one can only measure differential expressions at a certain observation scale σ [5, 7]. By considering Gaussian weighted differential quotients, fx^σ = Gx(σ) ∗ f, a measure of line contrast is given by

    R(x, y, σ) = (σ² / b^σ) |λ+^σ|   (7.3)

where σ, the Gaussian standard deviation, denotes the scale for observing the eigenvalues, and where line brightness b is given by

    b^σ = f^σ       if λ+^σ ≤ 0,
    b^σ = W − f^σ   otherwise.    (7.4)

Line brightness is measured relative to black for bright lines, and relative to the white level W (255 for an 8-bit camera) for dark lines. The original expression (eq. 7.2) is of dimension [intensity/pixel²]. Multiplication by σ², which is of dimension [pixel²], normalizes line contrast (eq. 7.3) for the differential scale. Normalization by line brightness b results in a dimensionless quantity. As a consequence, the value of R(.) lies within [0 . . . 1].

The response of the second order directional derivative |λ+| depends not only on the image data, but is also affected by the Gaussian smoothing scale σ. By analyzing the response to a given line profile as a function of scale, one can determine the optimal scale for line detection. For a bar-shaped line profile of width w, the response of R(.) (eq. 7.3) as a function of the quotient q = w/σ is plotted in fig. 7.2. The response of R(.) is biased towards thin lines, and gradually degrades for larger w. For a thin line, w → 0, the response equals line contrast, whereas for a large value of w relative to σ the response vanishes. Hence, the value of σ should be large enough to capture the line width. For optimal detection of lines, the value of σ should at least equal the width of the thickest line in the image,

    σ ≥ w .   (7.5)

When line thickness varies, one can set the value of σ to the size of the thickest line ŵ to be expected,

    σ = ŵ .   (7.6)

In this case, the response is slightly biased towards thin lines.

The differential expression (eq. 7.3) is a point measure, indicating whether a given pixel belongs to a line structure or not. The result is not the line structure itself, but a set of points accumulating evidence for a line. In the sequel we discuss how to integrate line evidence to extract line structures.
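
The line contrast measure (eqs. 7.2–7.4) could be computed, for instance, as in the following sketch, assuming an 8-bit white level W = 255 and Gaussian derivative filters from scipy; the function and variable names are ours, not the thesis implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def line_contrast(f, sigma, W=255.0):
        """Line contrast R (eq. 7.3) from the Hessian eigenvalues (eq. 7.2)."""
        f = f.astype(float)
        fxx = gaussian_filter(f, sigma, order=(0, 2))  # second derivative in x
        fyy = gaussian_filter(f, sigma, order=(2, 0))  # second derivative in y
        fxy = gaussian_filter(f, sigma, order=(1, 1))
        root = np.sqrt((fxx - fyy) ** 2 + 4.0 * fxy ** 2)
        lam_a, lam_b = fxx + fyy + root, fxx + fyy - root
        # order by magnitude: lambda_+ is the eigenvalue of largest |.|
        lam = np.where(np.abs(lam_a) >= np.abs(lam_b), lam_a, lam_b)
        smoothed = gaussian_filter(f, sigma)
        b = np.where(lam <= 0, smoothed, W - smoothed)  # line brightness (eq. 7.4)
        return sigma ** 2 * np.abs(lam) / np.maximum(b, 1e-6)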

7.1.3 Line Tracing

Consider a line and its two endpoints S1 and S2. Of all possible paths Ξ between S1 and S2, the path which integrates most line evidence is considered the best connection between the vertices. Therefore, we reformulate line tracing as a minimum cost optimization problem. First, let r(x, y, σ) be a cost function depending on R(.) (eq. 7.3),

    r(x, y, σ) = ε / (ε + R(x, y, σ))   (7.7)

and let us define the path integral, taking σ for granted, to be

    c(S1, S2) = min_Ξ ∫_{S1}^{S2} r(x(p), y(p)) dp .   (7.8)


Figure 7.2: Response of R(.) (eq. 7.3) at the centerline as function of relative line width q = w/σ.

Here, (x(p), y(p)) is the Cartesian coordinate of the path, parameterized by the linear path coordinate p. The path integral c(S1, S2) yields the integrated cost (eq. 7.7) over the best defined path in terms of line contrast R(.). For high line contrast the line is well defined, and the cost r(.) is determined by 1/R(.) ≈ 0. For a low value of R(.), the cost approximates 1, such that the Euclidean shortest path is traced. Hence, the constant term ε in (eq. 7.7) determines the trade-off between following the maximum line contrast and taking the shortest route. The value of ε is typically very small compared to the line contrast, e.g. ε = 0.001, which ensures that plateaus are crossed. Note that line extraction does not introduce additional parameters.
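
As an illustration, the cost image and a single minimum cost path could be computed with off-the-shelf tools as sketched below; route_through_array is scikit-image's generic minimum cost path routine, used here as a stand-in for the thesis implementation, and line_contrast refers to the sketch above.

    import numpy as np
    from skimage.graph import route_through_array

    def trace_line(image, s1, s2, sigma=3.0, eps=1e-3):
        """Minimum cost path between vertices s1, s2, given as (row, col)."""
        R = line_contrast(image, sigma)       # sketch above
        r = eps / (eps + R)                   # cost (eq. 7.7): ~0 on lines, ~1 off
        path, cost = route_through_array(r, s1, s2,
                                         fully_connected=True, geometric=True)
        return np.asarray(path), cost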

7.1.4 Graph Extraction

Now consider an image containing vertices S = {S1, S2, . . . , Sn}. In our case, lines connect neighboring vertices. The aim is to extract the network graph G = (S, E) with vertices S and edges E, the interconnecting lines given by the minimum cost paths. As there will be no crossing paths (see section 7.1.1), the graph G may be found by a local solution. Hence, we concentrate on connecting neighboring vertices. Neighbors are defined by assigning a zone of influence to each vertex, where each region Z(Si) defines the area for which all points p are closer to Si than to any other vertex [15],

    Z(Si) = { p ∈ IR², ∀A ∈ S \ {Si} : c(p, Si) < c(p, A) } .   (7.9)

Here, distance is measured with respect to the cost c(p, Si) (eq. 7.8).

The regions of influence correspond to the catchment basins of the topographical watershed algorithm [11]. The neighbors of Si are defined as the set of all vertices whose zone of influence touches that of Si. Hence, neighboring vertices share an edge in the topographical watershed segmentation. The minimum cost path Ψij between Si and Sj runs over the edge shared by Si and Sj.

The graph G is computed by applying the topographical watershed transform. First, the grey-weighted distance transform is applied to the cost image given by r (eq. 7.7), with the vertices S as mask. The grey-weighted distance transform propagates the costs from the masks over their influence areas, resulting in a wavefront collision at places where two zones of influence meet. The collision events result in the edges between neighboring vertices, yielding the watershed by topographic distance. The minimum cost path between two neighboring vertices runs over the minimum in their common edge. Therefore, each edge between two neighboring vertices is traced for its minimum. Steepest descent on each side of this saddlepoint results in the minimum cost path between the vertices. Tracing the steepest descents for all borders between the zones of influence results in the network graph G.

The described algorithm requires one distance transform to find the zones of influence. Hence, the order of the algorithm is determined by the grey-weighted distance transform, which is of order O(N²), N being the image dimension [14]. Note that the graph algorithm is free of parameters.
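
A sketch of the zones of influence, using scikit-image's watershed as a stand-in for the grey-weighted distance transform of the text; the neighbor-pair extraction scans basin borders and is our own simplification.

    import numpy as np
    from skimage.segmentation import watershed

    def zones_of_influence(cost, vertices):
        """Label each pixel by its nearest vertex in terms of the cost image."""
        markers = np.zeros(cost.shape, dtype=int)
        for label, (row, col) in enumerate(vertices, start=1):
            markers[row, col] = label
        labels = watershed(cost, markers)     # catchment basins of the cost
        # neighboring vertices: label pairs adjacent across a basin border
        h = np.stack([labels[:, :-1], labels[:, 1:]]).reshape(2, -1)
        v = np.stack([labels[:-1, :], labels[1:, :]]).reshape(2, -1)
        pairs = {(min(a, b), max(a, b))
                 for a, b in np.concatenate([h, v], axis=1).T if a != b}
        return labels, sorted(pairs)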

7.1.5 Edge Saliency and Basin Coverage

A natural measure of edge saliency is the integrated line contrast (eq. 7.3) over the edge,

    s(S1, S2) = ∫_{S1}^{S2} R(x(p), y(p)) dp   (7.10)

where S1, S2 are the start and end node, and where (x(p), y(p)) is the path. Note that s(.), like R, is a dimensionless and parameter-free quantity. A confidence measure indicating how well the edge is supported by the image data is given by the average saliency over the line,

    s̄(S1, S2) = (1/l) s(S1, S2)   (7.11)

where l is the path length. Again, s̄ is a dimensionless quantity with range [0 . . . 1], a high value indicating a well-defined line.

Each basin in the minimum cost graph is surrounded by a number of connected paths forming the basin perimeter. An indication of segmentation confidence for a basin B may be obtained by comparing the average saliency over the surrounding lines with the average line contrast inside the graph basin. The average saliency over the basin perimeter is given by

    s̄_B(B) = (1/l) ∮_B R(x(p), y(p)) dp   (7.12)

l being the basin perimeter length, and p representing the linear path coordinate. A high value, in the range [0 . . . 1], indicates a well-outlined basin. The average line contrast within basin B is measured by

    c̄_B(B) = (1/A(B⊖)) ∫∫_{B⊖} R(x, y) dx dy .   (7.13)

B⊖ is the basin eroded by a band of thickness σ. Erosion is applied to prevent the detected line points, smoothed by the Gaussian at scale σ, from influencing the basin contrast. In (eq. 7.13), A(.) is the area of the eroded basin. The value of c̄_B increases when line structures are present inside the basin, possibly due to a missed line in the graph. Coverage of the graph G is defined by the ratio of the line contrast remaining inside the basins to the line contrast covered by the graph edges,

    c̄(B) = 1 − c̄_B(B) / s̄_B(B) .   (7.14)

When all line points are covered by the basin perimeter, c̄ will be close to one. For a basin containing a missed line, the average line contrast over the basin will be high. When a spurious edge outlines the basin, the summed contrast over the edges will be low, yielding a lower coverage value.
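
The saliency and coverage measures (eqs. 7.10–7.14) could be evaluated along a traced path and over a labeled basin as sketched below, assuming a path given as (row, column) pixels; the trapezoidal integration and the names are ours.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def path_saliency(R, path):
        """Integrated (eq. 7.10) and average (eq. 7.11) contrast along a path."""
        rows, cols = np.asarray(path).T
        steps = np.hypot(np.diff(rows.astype(float)), np.diff(cols.astype(float)))
        contrast = R[rows, cols]
        s = np.sum(0.5 * (contrast[1:] + contrast[:-1]) * steps)
        return s, s / max(steps.sum(), 1e-9)

    def basin_coverage(R, basin_mask, perimeter_saliency, sigma):
        """Coverage (eq. 7.14) from the contrast left inside the eroded basin."""
        eroded = binary_erosion(basin_mask, iterations=max(1, int(round(sigma))))
        c_b = R[eroded].mean() if eroded.any() else 0.0   # eq. 7.13
        return 1.0 - c_b / max(perimeter_saliency, 1e-9)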

7.1.6 Thresholding the Saliency Hierarchy

The graph G is constructed such that neighboring vertices are connected, regardless of the absence of interconnecting lines. For a spurious connection the saliency will be low, since evidence of a connecting line is lacking. Pruning of the graph for spurious lines may therefore be achieved by thresholding on saliency. Pruning G by saliency imposes a hierarchy on the graph, ranging from the graph G with all edges included, down to the graph consisting of the single best defined edge in terms of contrast. The threshold parameter indicates the saliency level in the hierarchy. Note the introduction of a parameter, indicating the application dependent hierarchy level of the graph. We propose two methods to prune edges by saliency. First, global pruning may proceed by removing all ill-defined lines for which

    s̄(S1, S2) < α .   (7.15)

The resulting graph consists of the most contrasting lines, lines with contrast below the threshold being removed. The method is applicable when a clear distinction between lines and background is present.

For the case of a textured background, a local pruning method based on local comparison of edge saliency may be applied. Pruning of low confidence edges is achieved by removing all edges for which an alternative path, via other vertices, can be found with higher confidence. Path confidence between S1 and Sn via vertices Si is defined by the average saliency over the n − 1 edges,

    s̄(S1, S2, . . . , Sn) = (1/l) Σ_{i=1}^{n−1} s(Si, Si+1) .   (7.16)

Here, l is the total path length. The direct path between S1 and Sn is pruned when

    min s̄(S1, . . . , Sn) < α s̄(S1, Sn)   (7.17)

where the minimum is taken over all alternative paths between S1 and Sn. Locally ill-defined lines are removed from the graph, the degree of removal being given by α. For α = 0, no lines are removed, whereas for α = 1 all lines are removed for which a detour via another vertex yields higher saliency. Hence, short ill-defined paths are pruned when longer well-defined paths exist between the vertices. The method is applicable for a textured background, and when enough connections are present to determine alternative routes between neighboring vertices.

7.1.7 Overview

The algorithm is illustrated in fig. 7.3. The figure shows the extraction of cell borders from heart tissue, with the extracted vertices superimposed (fig. 7.3a). Line contrast, calculated according to (eq. 7.3), is shown in fig. 7.3b. The traced minimum cost paths are shown in fig. 7.3c. Most of the lines are correctly detected, together with some spurious lines. Locally pruning the graph results in fig. 7.3d; here, all edges which are not supported by the image data are removed. Figures 7.3e and 7.3f show the edge saliency and the area coverage, where black indicates c̄ = 1 and white indicates c̄ = 0.

In summary, we have proposed a one-parameter algorithm for the extraction of line networks. The parameter indicates the saliency level in a hierarchical graph. The graph tessellates the image into regions, where each edge runs over the minimum cost path between vertices. The resulting graph is labeled by edge saliency and area coverage, both derived from line contrast.

7.1.8 Error Analysis

The robustness of the proposed algorithm can best be evaluated by considering the different types of errors that may occur in forming the network graph. Table 7.1 gives an overview of possible errors and their consequences for the network graph G. The columns represent the consequences an error may have on the network graph G; the rows list the errors which may result from the vertex and line detection. In the sequel we discuss the sensitivity of the proposed algorithm to these types of errors.

When the image contains textured regions, the texture may cause a high response for the line point detection. Hence, the algorithm will falsely respond to the texture as being an unbroken line and find an optimal path, as illustrated in fig. 7.4a. Further, when spurious line structures are present in the image data without being part of the network, distortions may occur when such a line is near other interconnections. In that case, the best path between vertices may run via the spurious line. An example is shown in fig. 7.4b, where text interferes with dashed line structures. For such missed lines, basin coverage degrades. As the spurious line structure is not part of the network, this sensitivity is unwanted.

Gaps in lines, or lines slightly off the vertex, illustrated by fig. 7.4c, have no consequences except that saliency degrades. When a line structure is of too low contrast to contribute enough to form a line, the line may be pruned after confidence thresholding. An example of a missed line is shown in fig. 7.4d. As a consequence, coverage degrades, thereby indicating the event of a missed line.

For the case of a falsely detected vertex off a line (no example available), the vertex will be connected to the network. Saliency of the spurious lines will be low, as line evidence is missing from the image. Hence, pruning of the network by saliency is likely to resolve such errors. Spurious or missed vertices at lines have, except for the insertion or deletion of a vertex, respectively, no consequences for the extracted network. An example of spurious vertices is given in fig. 7.4e. The measure of saliency is invariant under insertion and deletion of vertices. This is proven by considering the path integral (eq. 7.10). Insertion of a vertex Sx on the path S1, S2 results in

    s(S1, Sx) + s(Sx, S2) = ∫_{S1}^{Sx} R(x(p), y(p)) dp + ∫_{Sx}^{S2} R(x(p), y(p)) dp
                          = ∫_{S1}^{S2} R(x(p), y(p)) dp
                          = s(S1, S2)

which is of course equal to the original saliency. Invariance under vertex deletion follows from the reverse argument. Since the average line contrast within the graph basins is not affected by insertion or deletion of vertices at edges, coverage (eq. 7.14) is likewise invariant under vertex insertion or deletion at lines.

More critical is overlooking a vertex at a fork or line end. An example of a missing vertex is shown in fig. 7.4f. In both cases, an edge is missed in the resulting graph, and coverage degrades as not all line points are covered by the graph edges. When a vertex at a line end is missed, the line may be connected to a different vertex,

Table 7.1: Types of errors, general for the extraction of networks of lines, and their consequences. Columns denote events in graph construction, whereas rows represent detection errors. Wanted sensitivity of the proposed algorithm is indicated by “+”, unwanted sensitivity to errors by “–”. Robustness of the proposed method to errors is indicated by “□”.

Error type                  vertex  vertex  edge    edge    edge       saliency    coverage
                            insert  delete  insert  delete  deviation  (eq. 7.10)  (eq. 7.14)
spurious line               □       □       □       □       –          □           –
gap in line                 □       □       □       □       □          +           □
line off vertex             □       □       □       □       □          +           □
missed line                 □       □       □       –       □          □           +
spurious vertex, off line   –       □       –       □       □          +           □
spurious vertex, at line    –       □       □       □       □          □           □
missed vertex, at line      □       –       □       □       □          □           □
missed vertex, at fork      □       –       □       –       □          □           +
missed vertex, at line end  □       –       □       –       –          □           +

causing the minimum cost path to cross the background. Pruning of the network by saliency is likely to resolve this error.

Besides the errors general to the extraction of networks of lines, the proposed algorithm generates errors specific to minimum cost path based methods. By definition, only one path between two vertices can be of minimum cost. Any other path connecting the same vertices will be removed, as illustrated in fig. 7.5a. As a consequence, an edge is missed in the graph, and basin coverage degrades, indicating the event of a missed line.

Further, when a better defined path exists in the neighborhood of the traced path, the algorithm tends to take a shortcut via the better defined path, as shown in fig. 7.5b. In that case, coverage degrades to indicate the missed line, whereas saliency increases due to the better defined route.

In conclusion, the proposed method is robust against: a. gaps in lines, b. lines slightly off their vertex, c. spurious lines, and d. spurious vertices at lines. The algorithm is sensitive to: a. missed lines, b. spurious vertices off lines, and c. missed vertices at forks. For missed vertices, the resulting graph is degraded. For missed lines, the graph may be degraded, and the confidence of the area in which the missed line is situated may be too high. Specific to the algorithm are the sensitivity to shortcuts and the inability to trace more than one line between connections.

7.2 Illustrations

7.2.1 Heart Tissue Segmentation

Figure 7.3 illustrates the application of the proposed algorithm to the extraction of cells from heart tissue. The tissue consists of cardiac muscle cells, the dark textured areas, and bloodvessels, the white discs. Cell borders are transparent lines surrounding all cardiac muscle cells. Due to the dense packing of cells, bloodvessels are squeezed between the cells. The cell borders appear as bright lines connecting the bloodvessels. Further, the dense packing causes gaps in the lines at places where the light microscopic resolving power is too low to examine the cell border.

In the cardiac muscle cell application, the bloodvessels are considered as initial vertices. The vessels are detected by dome extraction [3] (fig. 7.3a). The extracted network graph, together with basin saliency and coverage, is shown in fig. 7.3d,e,f. The heart tissue segmentation is a successful application in that a large number of cells is correctly segmented by the proposed algorithm. Individual cell parameters may be estimated after selecting those cells with high saliency and coverage. The number of cells extracted from the tissue is in the same range as for qualitative studies based on interactively outlining the cells by experts [1, 2, 16]. Hence, the algorithm enables the quantitative assessment of morphological changes in heart tissue at a cellular level.

7.2.2 Neurite Tracing

A second example (fig. 7.6) shows the interactive segmentation of neurites. The neurite starting points at the cell bodies are interactively indicated, and used as initial markers for the network segmentation algorithm. The resulting network is shown in fig. 7.6b. In this case, pruning of lines is not possible, since no alternative routes between the markers are present. Paths between cells which are not connected are removed by thresholding the saliency (eq. 7.15). Note that no errors are caused by the lack of line structure indicated in fig. 7.6a. The overall saliency of the result is s̄ = 0.44, indicating that the line contrast spans almost half the dynamic range of the camera. Coverage is c̄ = 0.95, indicating that 95% of the line structures present in the image are covered by the network graph. Hence, the result is considered highly accurate.

7.2.3 Crack Detection

An example of general line detection is shown in fig. 7.7, where cracks in ink are traced at high magnification. The image shows an ink line, at such a magnification that the ink completely covers the image. Cracks in the ink form white lines, due to the transmission of light, against a background of black ink. Note that no natural markers are present.

For the general case of line detection, saddlepoint detection may be used to extract markers. The saddlepoints on bright lines are detected by

    fx^σ = 0,  fy^σ = 0,  λ+^σ < 0,  fxx^σ fyy^σ − (fxy^σ)² < −α .   (7.18)

Here, α indicates salient saddlepoints, and is typically small, to suppress spurious saddlepoints due to noise. The saddlepoints are used as markers for the network extraction algorithm. The detected saddlepoints are highlighted in fig. 7.7b. The result of the proposed algorithm, with the saddlepoints as vertices, is shown in fig. 7.7c. The average saliency is thresholded (eq. 7.15) to remove paths which cross the background. The overall saliency of the graph is 0.313, and the coverage 0.962. The cracks are successfully extracted by the proposed algorithm, except that the crack ends are missing where end markers are absent. In that case, the detected cracks are too short.

Since no natural markers are present, the algorithm should be robust against marker insertion or deletion at lines. Figure 7.7d shows the result after random removal of half the markers in fig. 7.7c. Errors in the result include a shortcut via a more contrasting line. Further, line ends are pruned due to the absence of markers. Note that the saliency is only marginally affected by the new situation, 0.316 instead of 0.313, and that the coverage is likewise reduced only marginally, from 0.962 to 0.960. Hence, the algorithm is robust against variations in the threshold value α for saddlepoint detection (eq. 7.18).
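
A sketch of the saddlepoint marker detection (eq. 7.18), approximating the zero crossings of the gradient by a small magnitude threshold grad_tol (our addition); alpha is the saliency threshold of the text.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saddle_markers(f, sigma, alpha=1.0, grad_tol=0.5):
        """Boolean mask of saddlepoint markers on bright lines (eq. 7.18)."""
        f = f.astype(float)
        fx = gaussian_filter(f, sigma, order=(0, 1))
        fy = gaussian_filter(f, sigma, order=(1, 0))
        fxx = gaussian_filter(f, sigma, order=(0, 2))
        fyy = gaussian_filter(f, sigma, order=(2, 0))
        fxy = gaussian_filter(f, sigma, order=(1, 1))
        root = np.sqrt((fxx - fyy) ** 2 + 4.0 * fxy ** 2)
        lam_a, lam_b = fxx + fyy + root, fxx + fyy - root
        lam = np.where(np.abs(lam_a) >= np.abs(lam_b), lam_a, lam_b)
        det = fxx * fyy - fxy ** 2              # negative at saddlepoints
        return ((np.abs(fx) < grad_tol) & (np.abs(fy) < grad_tol)
                & (lam < 0) & (det < -alpha))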

7.2.4 Directional Line Detection

Characteristic for the proposed algorithm is that line evidence is accumulated along the line. When line evidence is absent, the algorithm optimizes the shortest path to the neighboring line parts to continue integration. As a result, when large gaps are present, the algorithm may find an alternative route by crossing the background to a neighboring line, tracking that line, and jumping back to the original line after the gap. The problem may be solved by including line orientation information in the algorithm.

To proceed, we consider directional filtering for the detection of line contrast. Consider (eq. 7.3), which was measured by isotropic Gaussian filters of scale σ. For the directional filtering, we consider anisotropic Gaussian filters of scales σl and σs, for the longest and shortest axis, respectively, and of orientation θ. Hence, line contrast is given by

    R′(x, y, σl, σs, θ) = (σl σs / b^{σl,σs}) |λ+^{σl,σs,θ}| ,   (7.19)

where b^{σl,σs} is given by (eq. 7.4). The scale σs depends on line width as given by (eq. 7.6), whereas σl is tuned to adequately capture line direction. Hence, σl should be large enough to bridge small gaps, but not too large, to prevent errors when line curvature is high. In practice, an aspect ratio of σs = ŵ and σl = 3σs is often sufficient.

Now that we have established how to filter in a particular direction, the filter needs to be tuned to the line direction. Two options are considered. First, eigenvector analysis of the Hessian yields the principal line direction. One could apply a first non-directional pass to obtain the line direction, as described by Steger [12]. A second pass then tunes the filter at each pixel to the line direction to obtain line contrast. Note that the filter orientation may be different for each position in the image plane. Instead of tuning the filter, sampling the image at different orientations may be applied. One applies (eq. 7.19) for different orientations. When the filter is correctly aligned with the line, the filter response is maximal, whereas a filter perpendicular to the line yields a low response. Hence, taking the per pixel maximum line contrast over the orientations yields directional filtering.

The proposed method is applied to the example of a dashed line pattern given in fig. 7.8a, taken from [8]. The example is taken from the hardest class, the “complex” patterns. The grey dots represent interactively selected markers, indicating crossings and line end points. Orientation filtering is applied at 0°, 30°, 60°, 90°, 120°, and 150°, after which the maximum line contrast per pixel is taken over the sampled orientations. The result after graph extraction and saliency thresholding is shown in fig. 7.8b. The crude sampling of orientation space causes some of the lines to be noisy; a finer sampling would enhance the result. Further, one line part is missed, due to a shorter line connecting the same markers. The text present in the example causes the algorithm to follow parts of the text instead of the original line. Note that isotropic line detection does not adequately extract the graph (fig. 7.8c).
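
A sketch of such orientation sampling: the image is rotated so that a candidate line direction becomes horizontal, filtered with an anisotropic Gaussian (σs across, σl along the line), and the per-pixel maximum response over the sampled orientations is kept. Replacing the anisotropic Hessian eigenvalue of (eq. 7.19) by the cross-line second derivative, and omitting the brightness normalization, are simplifications on our part.

    import numpy as np
    from scipy.ndimage import gaussian_filter, rotate

    def directional_contrast(f, sigma_s, sigma_l,
                             angles=(0, 30, 60, 90, 120, 150)):
        """Per-pixel maximum directional response over sampled orientations."""
        f = f.astype(float)
        best = np.zeros_like(f)
        for theta in angles:
            g = rotate(f, -theta, reshape=False, mode='nearest')
            # second derivative across the (now horizontal) line,
            # smoothing along it with the long-axis scale sigma_l
            resp = -gaussian_filter(g, (sigma_s, sigma_l), order=(2, 0))
            resp = rotate(resp, theta, reshape=False, mode='nearest')
            best = np.maximum(best, sigma_s * sigma_l * resp)
        return best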

7.3 Conclusion

The extraction and interpretation of networks of lines from images yields important organizational information about the network under consideration. We present a one-parameter algorithm for the extraction of line networks from images. The parameter indicates the extracted saliency level from a hierarchical graph. Input for the algorithm is the domain specific knowledge of interconnection points. The algorithm results in the network graph, together with edge saliency and catchment basin coverage. The proposed method assigns a robust measure of saliency to each minimum cost path, based on the average path cost. Edges with a low saliency compared to alternative routes are removed from the graph, leading to an improved segmentation result. The correctness of the network extraction is indicated by the edge saliency and area coverage. Hence, confidence in the final result can be based on the overall network saliency.

Design issues are robustness against the general errors summarized in tab. 7.1. The proposed method is robust against: a. gaps in lines, b. lines slightly off their vertex, c. spurious lines, and d. spurious vertices at lines. The algorithm is sensitive to: a. missed lines, b. spurious vertices off lines, and c. missed vertices at forks. Thresholding on saliency reduces the errors caused by spurious vertices. Missed lines are signaled by the measure of coverage (eq. 7.14), indicating how much of the line evidence is covered by the network graph. Specific to the algorithm are the sensitivity to shortcuts and the inability to trace more than one line between connections. Any algorithm based on minimum cost paths is sensitive to these types of errors.

We restricted ourselves to locally defined line networks, where lines connect neighboring vertices. For globally defined networks, like electronic circuits, the algorithm can be adapted to yield a regional or global solution. In that case, several distance transforms have to be applied, at the cost of a higher computational complexity. The pruning of the network and the measure of saliency are again applicable in the global case.

Incorporation of directional line information into the algorithm results in a better estimation of line contrast, and hence improves graph extraction. Eigenvector analysis of the directional derivatives yields an estimate of the local direction of the line. The directional information may be included by considering an anisotropic metric for the line contrast filtering. Experiments showed a better detection of the network graph for dashed line detection. The example given is considered a complex configuration according to [8]. A disadvantage is the longer computation time, due to the anisotropic filtering pass.

The proposed method results in the extraction of networks from connection point to connection point. The routing from a starting connection to its final destination depends on the functionality of the network, and is not considered in this chapter. Correct interpretation of the network in the presence of distortion obviously requires information on the function of the network.

For the extraction of line networks the proposed method has proven to be a useful tool. The method is robust against gaps in lines and against spurious vertices at lines, which we consider the most prominent sources of error in line detection. Hence, the proposed method enables reliable extraction of line networks. Furthermore, the method indicates detection confidence, thereby supporting error-proof interpretation of the network functionality. The proposed method is applicable to a broad variety of line networks, including dashed lines, as demonstrated by the illustrations. Hence, the proposed method yields a major step towards general line tracking algorithms.

Bibliography

[1] J. Ausma, M. Wijfels, F. Thoné, L. Wouters, M. Allessie, and M. Borgers. Structural changes of atrial myocardium due to sustained atrial fibrillation in the goat. Circulation, 96:3157–3163, 1997.

[2] C. A. Beltrami, N. Finato, M. Rocco, G. A. Feruglio, C. Puricelli, E. Cigola, F. Quaini, E. H. Sonnenblick, G. Olivetti, and P. Anversa. Structural basis of end-stage failure in ischemic cardiomyopathy in humans. Circulation, 89:151–163, 1994.

[3] S. Beucher and F. Meyer. The morphological approach to segmentation: The watershed transformation. In E. R. Dougherty, editor, Mathematical Morphology in Image Processing, chapter 12, pages 433–481. Marcel Dekker, New York, 1993.

[4] L. D. Cohen and R. Kimmel. Global minimum for active contour models: A minimal path approach. Int. J. Computer Vision, 24:57–78, 1997.

[5] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever. Scale and the differential structure of images. Image and Vision Computing, 10(6):376–388, 1992.

[6] J. Illingworth and J. Kittler. A survey of the Hough transform. Computer Vision Graphics Image Process., 44:87–116, 1988.

[7] J. J. Koenderink. The structure of images. Biol. Cybern., 50:363–370, 1984.

[8] B. Kong, I. T. Phillips, R. M. Haralick, A. Prasad, and R. Kasturi. A benchmark: Performance evaluation of dashed-line detection algorithms. In R. Kasturi and K. Tombre, editors, Graphics Recognition Methods and Applications, pages 270–285. Springer-Verlag, 1996.

[9] T. Lindeberg. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[10] C. Lorenz, I. C. Carlsen, T. M. Buzug, C. Fassnacht, and J. Weese. A multiscale line filter with automatic scale selection based on the Hessian matrix for medical image segmentation. In Scale Space Theories in Computer Vision, pages 152–163. Springer-Verlag, 1998.

[11] F. Meyer. Topographic distance and watershed lines. Signal Processing, 38:113–125, 1994.

[12] C. Steger. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Machine Intell., 20:113–125, 1998.

[13] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[14] P. W. Verbeek and J. H. Verwer. Shading from shape, the Eikonal equation solved by grey-weighted distance transformation. Pattern Rec. Lett., 11:681–690, 1990.

[15] L. Vincent. Graphs and mathematical morphology. Signal Processing, 16:365– 388, 1989.

[16] H. W. Vliegen, A. van der Laarse, J. A. N. Huysman, E. C. Wijnvoord, M. Mentar, C. J. Cornelisse, and F. Eulderink. Morphometric quantification of myocyte dimensions validated in normal growing rat hearts and applied to hypertrophic human hearts. Cardiovasc. Res., 21:352–357, 1987.


Figure 7.3: Example of line detection on heart tissue (a), observed by transmission light microscopy. The dark contours show the segmented blood vessels, superimposed on the original image. Line contrast R(.) is shown in (b), the minimum cost graph in (c). The final segmentation (d) after local pruning of spurious edges for α = 0.9 (eq. 7.17). The estimated saliency (e) (eq. 7.11) and area coverage (f) (eq. 7.14), dark representing high confidence in the result.



Figure 7.4: Examples of failures in the line detection. a. The detection of a spurious line due to a textured region. b. The deviation of a line due to spurious line structures, the text, in the image. c. A gap in a line; the line is correctly detected by the algorithm without errors (result not shown). d. A missing connection due to lack of line evidence. e. Extra vertices on the line do not influence the algorithm performance. f. A missing vertex at a fork, resulting in a missed line in the network graph.



Figure 7.5: Examples of failures specific to algorithms based on minimum cost paths. a. The missing of a line due to the double linking of vertices, for which only the best connection is preserved. b. A shortcut along a better defined line to optimally connect two vertices.


Figure 7.6: Extraction of a neurite network (a); note the gaps present in the neurites. The traced network is shown in (b). The dots represent the interactively indicated neurite start points at the cell bodies.


Figure 7.7: Extraction of a general line network. a. A high magnification image of ink, completely covering the image, distorted by white cracks through which light is transmitted; no natural markers are present. b. The saddlepoints at the bright lines (eq. 7.18). c. The detected lines, overall saliency s̄ = 0.313, coverage c̄ = 0.962. d. The result for half the number of markers, overall saliency s̄ = 0.316, coverage c̄ = 0.960. Note the shortcut and the removal of line ends.


Figure 7.8: Extraction of a dashed line network (a), taken from [8]; markers are interactively selected at line crossings and line end points, indicated by grey dots. The extracted network is shown in (b). Errors made include some shortcuts, and the missing of the bent line part due to the presence of a second, shorter connection between the markers. Note the difference with the isotropic result for scale σl (c). The scale σl was taken so as to integrate line evidence over the gaps in the dashed lines. The deviation from the centerline in the isotropic case is the result of the large integration scale compared to the line width.

Chapter 8

Discussion

8.1 Color

In this thesis, we have developed a theory for the measurement of color in images, founded in physics as well as in measurement science. The thesis considers a physical basis for the measurement of spatio-spectral energy distributions, integrated with the laws of light reflection. The Gaussian color model solves the fundamental problem of color and scale by integrating the spatial and color information. The differential geometry framework [10, 4] is extended to the domain of color images. As a consequence, we have given a physical basis for opponent colors, and a physical basis for color receptive fields. Furthermore, it was concluded that the Gaussian color model is the natural representation for investigating the scaling behavior of color image features. The Gaussian color model describes the differential structure of color images. Selection of scale enables robust and accurate measurement of color value, even under noisy circumstances.

Color perception is mainly constrained by the number of spectral measurements, the spectral resolution. Due to the limited space available on the retina, evolution was forced to trade off between the number of different spectral receptors and their spatial distribution. For humans, spectral vision is limited to three color samples, and a tremendous number of spatial samples. Therefore, the Gaussian color model measures the intensity, first order, and second order derivatives of the incoming spectral energy distribution. Daylight has driven evolution to set the central wavelength of color vision at about 520 nm, and the spectral range at about 330 nm. For any colorimetric system, measurement is constrained by these parameters.

A second achievement of the thesis is the integration of the physical laws of spectral image formation into the measurement of color invariants. We define a complete framework for the extraction of photometric invariant properties. The framework for color measurement is grounded in the physics of observation.

Hence it is theoretically better founded, as well as experimentally better evaluated, than existing methods for the measurement of color features in RGB-images. The framework can be applied to any vision problem where the reflection laws differ from those of everyday vision. Among other imaging circumstances, application areas are satellite imaging [3], vision in bad weather [7], and underwater vision.

The physical model presented in Chapter 3 demands spatial comparison in order to achieve color constancy. The model confirms relational color constancy as a first step in color constant vision systems [2, 8]. The subdivision of human perception into edge detection based on color contrast, cooperating with a subsystem for assigning colors to the segmented visual scene, may yield an overall performance which is highly color constant. Hence, spatial edge detection based on color contrast plays an important role in color constancy. Most of the color invariant sets presented in Chapter 3 and Chapter 4 have spatial edge detection as their lowest order expression. Edge detection is confirmed by Livingstone and Hubel [5] and by Foster [2] to be of primary importance in human vision. To cite Livingstone and Hubel about one of the three visual subsystems:

    Although neurons in the early stages of this system are color-selective, those at higher levels respond to color-contrast borders but do not carry information about what colors form the border.

They conclude that the subsystem is important in seeing stationary objects in great detail, given the slow time course and high resolution of the subsystem. From a physical perspective, these results are evident given the invariants derived in Chapter 4.

We show in Chapter 4 that the discriminative power of the invariants can be ordered by the broadness of the group of invariance. A broad to narrow hierarchy of the invariance groups considered is given in section 4.2.6:

        viewing    surface      highlights  illumination  illumination  illumination  inter-
        direction  orientation              direction     intensity     color         reflection
  H         +          +            +            +             +             –            –
  N         +          +            –            +             +             +            –
  C         +          +            –            +             +             –            –
  W         –          –            –            –             +             –            –
  E         –          –            –            –             –             –            –

Invariance is denoted by +, whereas sensitivity to the imaging condition is indicated by –. The discriminative power of the invariance groups is given in Section 4.3.2:

        σx = 0.75   σx = 1   σx = 2   σx = 3
  Ê         970       983      1000     1000
  Ŵ         944       978      1000     1000
  Ĉ         702       820       949      970
  N̂         631       757       962      974
  Ĥ         436       461       452      462

Each entry gives the number of colors, out of 1,000 patches, still distinguished by the invariant, and is an absolute number given the hardware and the spatial scale σx. For the proposed color invariants, discriminative power increases when considering a larger spatial scale σx, thereby taking a larger neighborhood into account when determining the color value. Hence, a larger spatial scale results in a more accurate estimate of the color at the point of interest.

The aim of the thesis is reached in that high color discrimination resolution is achieved while maintaining constancy against disturbing imaging conditions, both theoretically and experimentally. The proposed invariance groups describe the local structure of color images in a systematic, irreducible, and complete sense. The invariance groups incorporate the physics of light reflection as well as the constraints imposed by human color perception.
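The role of the spatial scale σx can be made concrete in a short sketch. Assuming the definitions of Chapter 4, in which Ĥ = arctan(Êλ/Êλλ) and Ĉλ = Êλ/Ê, each Gaussian color model channel is measured at scale σx before the invariants are evaluated pointwise; the function name and the eps guard against division by zero in dark regions are ours.

import numpy as np
from scipy.ndimage import gaussian_filter

def invariants_at_scale(E, El, Ell, sigma_x, eps=1e-6):
    # Measure each channel of the Gaussian color model at spatial scale
    # sigma_x; a larger sigma_x averages over a larger neighborhood and
    # yields a more accurate color estimate at the point of interest.
    E_s = gaussian_filter(E, sigma_x)
    El_s = gaussian_filter(El, sigma_x)
    Ell_s = gaussian_filter(Ell, sigma_x)
    H = np.arctan2(El_s, Ell_s)   # hue-like invariant, also insensitive to highlights
    C = El_s / (E_s + eps)        # normalized color, white but uneven illumination
    return H, C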

8.2 Geometrical Structure

Characterization of histological or pathological conditions can be based on the topographical relationship between tissue structures. Capturing the arrangement of local structure enables the extraction of global tissue architecture. Such an extraction procedure should be insensitive to distortions intrinsic to the acquisition of biological preparations.

In this thesis, a graph-based approach for the robust extraction of tissue architecture is established. A central design issue is robustness against errors common in the preparation of biological tissues, such as taking transections through a three-dimensional block, and errors in the detection of cells, blood vessels, and cell borders. Biological variation, which causes the architecture to be irregular, is taken as a design issue rather than as a source of error [1, 6, 9, 11]. As demonstrated in Chapter 6, these design considerations enabled the recognition of tissue architecture.

In both Chapter 6 and Chapter 7, the extraction of geometrical arrangements is based on local structure. Tissue architecture is derived from the local relationships between markers. Confidence in the final result is estimated by the saliency of the detected structures and the goodness of fit to the quintessence of the architecture.
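As an illustration of the graph-based approach, the sketch below builds a Delaunay neighborhood graph over detected markers, for instance cell centroids, and prunes edges much longer than the median edge length. The Delaunay triangulation is the dual of the Voronoï diagram used in the cited work [1, 6]; the pruning factor and function name are illustrative choices, not the graph-morphological operators of Chapter 6 and Chapter 7.

import numpy as np
from scipy.spatial import Delaunay

def neighborhood_graph(points, length_factor=2.0):
    # Neighborhood graph over an (n, 2) array of marker positions. Long
    # edges, which typically stem from missed detections or from the open
    # boundary of a tissue section, are pruned for robustness.
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:                 # vertex indices per triangle
        for i, j in ((a, b), (b, c), (a, c)):
            edges.add((min(i, j), max(i, j)))
    length = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
    cutoff = length_factor * np.median(list(length.values()))
    return {e for e in edges if length[e] <= cutoff}

Statistics of the retained edges, such as their length and orientation distributions, may then serve to classify areas by type of tissue.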

Robust extraction of tissue architecture reduces the nonbiological variation in the analysis of tissue sections, and thus improves confidence in the result. The quantitative methods based on local structure enable reliable classification of areas by type of tissue. Combining the methodologies proposed in this thesis enables effective analysis and interpretation of histologically stained tissue sections. The proposed frameworks allow for fully automatic screening of drug targets in pharmaceutical research [12].

8.3 General Conclusion

This thesis makes a contribution to the field of color vision. The constraints imposed by human color vision are incorporated in the physical measurement of spatio-spectral energy distributions. The spatial interaction between colors is derived from the physics of light reflection. Hence, the proposed framework for color measurement enables the interpretation of color images from both a physical and a perceptual viewpoint.

The second contribution of the thesis is the assessment of spatial arrangement. The methodology presented is applied to the segmentation of biological tissue sections observed by light microscopy. The proposed concepts can be utilized in other application areas.

As demonstrated by Mondriaan, the combination of color and spatial organization captures the essential visual information, in that the subsystems dealing with shape and with localization are both engaged. Hence, combining color and spatial structure, a road yet to be traveled and the way to go, will resolve the perceptual organization of images: Victory Boogie Woogie.

Bibliography

[1] F. Darro, A. Kruczynski, C. Etievant, J. Martinez, J. L. Pasteels, and R. Kiss. Characterization of the differentiation of human colorectal cancer cell lines by means of Voronoï diagrams. Cytometry, 14:783–792, 1993.

[2] D. H. Foster and S. M. C. Nascimento. Relational colour constancy from invariant cone-excitation ratios. Proc. R. Soc. London B, 257:115–121, 1994.

[3] G. Healey and A. Jain. Retrieving multispectral satellite images using physics-based invariant representations. IEEE Trans. Pattern Anal. Machine Intell., 18:842–848, 1996.

[4] T. Lindeberg. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[5] M. Livingstone and D. Hubel. Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240:740–749, 1988.

[6] R. Marcelpoil and Y. Usson. Methods for the study of cellular sociology: Voronoï diagrams and parametrization of the spatial relationships. J. Theor. Biol., 154:359–369, 1992.

[7] S. Narasimhan and S. Nayar. Chromatic framework for vision in bad weather. In Proceedings of the Conference on Computer Vision and Pattern Recognition, volume 1, pages 598–605. IEEE Computer Society, 2000.

[8] S. M. C. Nascimento and D. H. Foster. Relational color constancy in achromatic and isoluminant images. J. Opt. Soc. Am. A, 17(2):225–231, 2000.

[9] E. Raymond, M. Raphael, M. Grimaud, L. Vincent, J. L. Binet, and F. Meyer. Germinal center analysis with the tools of mathematical morphology on graphs. Cytometry, 14:848–861, 1993.

[10] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, Boston, 1994.

[11] H. W. Venema. Determination of nearest neighbours in muscle fibre patterns using a generalised version of the Dirichlet tessellation. Pattern Recognit. Lett., 12:445–449, 1991.

[12] K. Ver Donck, I. Maillet, I. Roelens, L. Bols, P. Van Osta, T. Bogaert, and J. Geysen. High density C. elegans screening. In Proceedings of the 12th International C. elegans Meeting, page 871, 1999.

Samenvatting∗

Kleur en Geometrische Structuur in Beelden
Toepassingen in microscopie

This thesis treats both color and geometrical structure. Color is approached from theoretical measurement science, where color is the result of a local, spatio-spectral aperture measurement. Differential calculus is then used to derive features that are invariant under everyday illumination conditions. The features are extensively tested in experiments for invariance and discriminative power. The experiments demonstrate the high discriminative power of the various invariants, and thereby illustrate their invariant properties. New in this thesis is the coupling between the physical measurement of color and the perception of color by humans.

Furthermore, this thesis treats the quantification of geometrical structures, specifically applied in light microscopy, although the developed methodology is more broadly applicable. Graph-based mathematical-morphology methods are developed for the segmentation of regular point and line patterns, such as those present in brain and heart tissue. The developed methods have been successfully applied to the quantification of morphological parameters. The methods can be employed in the search for potential drug candidates in pharmaceutical research.

∗Summary in Dutch


Dankwoord†

This thesis is the completion of a learning process, and has therefore inherently been influenced by many people. I have learned a great deal from my supervisor, Arnold, who helped me through the difficult passages in the text, after he had already earned his keep with a few remarks on the research itself . . . I especially appreciated the "pushing" of the www weekends to the Ardennes, and later the Eifel, the Wadden islands, and the Zeeland coast, and I will look forward to them in the future.

The research was largely carried out at Janssen Pharmaceutica, under the supervision of Hugo, who gave me the freedom to go entirely my own way, yet kept up the pressure to actually follow that road to its end. I greatly appreciated this freedom, for which Frans also deserves praise, given the great patience he had until "finally" something emerged that was also applicable. The collaboration with Frans and Peter has, in my opinion, led to a number of good applications of image processing, in particular of scale-space methods, in biological and pharmaceutical research, partly driven by the "technology pull" of Kris, Luc, and Rony. Although I often held on to my own modest "Dutch" opinion, the discussions with Frans and Peter have certainly changed my insights.

Although Janssen has a very large research department, as a computer science PhD student you nevertheless feel somewhat lost among the biologists, with Luc as the only exception, who boxed my ears with statistics many a time. The visits to Amsterdam every two to three weeks were therefore very welcome and an inexhaustible source of inspiration. The impact of the discussions on those days with Rein, Theo, Anuj, Geert, Dennis, Harro, Erik, and Arnold (Jonk) can clearly be traced in this thesis. My sincere thanks for this; it gave me a substantial boost.

A clear shortcoming of being a computer science PhD student is a limited knowledge of biology. After many stories from my Belgian fellow PhD students at Janssen about NF-κB receptors, biochemical pathways, and apoptosis of NGF-deprived PC12 cell lines, some of it has become clear to me. Rony, Gwenda, Gerrit, Christopher, and the students of the other departments: thank you for teaching me the biological background needed to hold my own in a pharmaceutical company. Of course, I also owe a word of thanks here to Astrid, Peter, and Jos.

†Acknowledgement


The graphics department of Janssen took care of the figures in this thesis, as well as the printing of the thesis. Many times I alarmed Lambert with strange image formats, formulas, and encapsulated PostScript figures. I also owe thanks to Jozef and Bob for producing figures and posters, very successful once they were displayed on the poster board. Thanks also to Marcel and Luc for the opportunity to carry out this research at Janssen, and for arranging the funding for this book.

I have experienced both Janssen and the UvA as pleasant environments in which to do research, partly due to the good atmosphere in both groups. My thanks go to all colleagues of the former Life Sciences department, in particular Mirjam, Gerd, Eddy, Roger, Koen, Marc, Guy, and Greet, and to everyone in the ISIS group, in particular Benno, Marcel, Silvia, Carlo, Kees, Wilko, Edo, Herke-Jan, Frank, Joost, Tat, Andy, and Jeroen.

Astrid, how can I ever thank you . . .