
Layered Imagery in Chuck Close Portraiture

Trân-Quân Luong

Master of Science

School of Computer Science

McGill University

Montreal, Quebec

April 2005

A thesis submitted to McGill University in partial fulfilment of the requirements of the degree of Master of Science in Computer Science

© Trân-Quân Luong, MMV

ABSTRACT

This thesis explores layered imagery in the portraiture of Chuck Close. Portraits that exhibit qualities of layered imagery appear as an integrated whole from a distance, but as the observer approaches the canvas, the figure of the portrait disintegrates into atomic elements that stand on their own. We present a discussion of the perceptual principles that permit the effects of layered imagery, and apply these principles to the creation of systems that emulate Close's blobby and fingerprint styles. We discuss the textural aspects of his portraits that contribute to the effect of layered imagery, and detail methods to reproduce these effects. Finally, we present novel stylised renderings inspired by Close's artwork.

RÉSUMÉ

Ce mémoire explore les figures hiérarchisées dans les portraits de Chuck Close. Lorsqu'un portrait possédant des qualités de figures hiérarchisées est perçu de loin, il apparaît comme une entité unie. Cependant, lorsque l'observateur s'approche de la toile, la figure du portrait se désintègre en éléments atomiques possédant des intérêts distincts. Nous présentons une discussion des principes perceptuels permettant les effets de figures hiérarchisées et appliquons ces principes à la création de systèmes imitant deux styles de Chuck Close. Nous discutons des qualités texturales contribuant à l'effet de figures hiérarchisées et expliquons ensuite des méthodes pour reproduire ces effets. Nous présentons finalement de nouveaux rendus inspirés par les peintures de Chuck Close.

ACKNOWLEDGEMENTS

Many thanks to my supervisor Allison Klein for her wonderful leadership and direction. With her remarkable patience and ability to see goodness when I saw disaster, Professor Klein has taught me the values that make a great scientist.

I would also like to thank the members of the graphics laboratory. Our numerous discussions have, in one way or another, contributed to this research.

I am also indebted to my family and friends for their love and support. I thank my parents and sister for providing an environment that facilitated this research. Thanks are kindly due to Soukia Savage for the photographs found in this thesis, and to Alexandre Grégoire for insightful discussions on art history. I am finally thankful for the help and support of Stephanie Troeth. More than anyone, she has endured the hardships of this research alongside me, and for that, I am eternally grateful.

TABLE OF CONTENTS

ABSTRACT

RÉSUMÉ

ACKNOWLEDGEMENTS

LIST OF FIGURES

LIST OF ALGORITHMS

1 Introduction

2 Background
  2.1 Principles of Perception
    2.1.1 Visual Processing Framework
    2.1.2 Chrominance and Luminance
  2.2 Layered Imagery in Art History
  2.3 Related Work in Computer Graphics
    2.3.1 Blobby Style Related Work
    2.3.2 Fingerprint Style Related Work

3 Blobby Style
  3.1 Observations Regarding Close's Blobby Style
  3.2 Blobby Methodology
    3.2.1 Shape and Joins Assignment
    3.2.2 Texture Strips
  3.3 Experimental Results & Discussion
    3.3.1 Beyond the Blobby Style

4 Fingerprint Style
  4.1 Observations Regarding Close's Fingerprint Style
  4.2 Fingerprint Methodology
    4.2.1 Position, Orientation and Print Darkness
    4.2.2 Precision Map
  4.3 Experimental Results & Discussion
    4.3.1 Beyond the Fingerprint Style

5 Conclusion

Contributions of Authors

References

LIST OF FIGURES

1.1 Close's blobby and fingerprint styles
2.1 Five stages of visual processing
2.2 Effect of isoluminant colour selection
2.3 The Flaying of Marsyas
2.4 Woman with Parasol and A Sunday on La Grande Jatte
3.1 Inspirations for blobby style
3.2 Circular, vertical and horizontal marks
3.3 Elongated left-diagonal mark
3.4 Edge map
3.5 Join assignment
3.6 Blobby texture strips
3.7 Effect of shape and join assignment
3.8 Experimental results of blobby filter
3.9 Effect of variations in grid size of blobby filter
3.10 Effect of variations in edge detection sensitivity of blobby filter
3.11 Andy Warhol mosaic texture strips
3.12 Novel image mosaics
4.1 Inspirations for fingerprint style
4.2 Luminance difference maps
4.3 Fingerprint luminance ramp
4.4 Effect of orienting fingerprints
4.5 Fingerprint style at intermediary stages
4.6 Fingerprint texture
4.7 Effect of position, orientation and print darkness
4.8 Experimental results of fingerprint filter
4.9 Effect of variations in texture size of fingerprint filter
4.10 Effect of variations in luminance threshold of fingerprint filter
4.11 Colour rendering of fingerprint style
4.12 Novel stylised renderings

LIST OF ALGORITHMS

3.1 Shape Assignment
3.2 Join Assignment
3.3 Blobby Algorithm
4.4 Print Placement
4.5 Fingerprint Algorithm

CHAPTER 1

Introduction

Portraits are pervasive in the cultural history of mankind. From depictions of Roman emperors to photographs of a newborn baby, a human's remarkable ability to relate to another person's face has ensured that the art of portraiture has crossed geographical, temporal and cultural boundaries. This art form has evolved significantly over the centuries. Once reserved for high dignitaries who could afford to have their pictures painted by trained artists, portraits are nowadays within the reach of all, thanks in large part to the advent of modern photography.

Portraits can simply be defined as the depiction of a person, and can be expressed through various media including painting, sculpture and photography. Simple pictures such as the ones found in passports can be thought of as portraits, but the truly inspiring ones, those that have secured a place in the history of art and photography, are those that not only possess artistic qualities but also provide an insight into the subject's personality. Once we see past the superficial level of physical depiction, the great portraits of this world reveal more as we look closer.

In contemporary history, American artist Chuck Close has established himself as a master in the art of portraiture [31, 29]. Close initially found his voice in the early 60's in the art of photorealism, painting portraits "rendered in pore-by-pore detail" [19]. Basing his work on reference photography, Close attempted to transfer every detail with uncanny precision onto immense canvases, sometimes as large as 7 feet wide by 9 feet high. In later years, Close's style became progressively more abstract, but his portraits maintained their gigantic proportions. He has demonstrated over the decades a remarkable ability to create portraits using numerous media and techniques, including spitbite aquatint, mezzotint, pulp-paper collage and scribble etching. Over the last four decades, Close has established himself as a preeminent figure of American contemporary art, and he is considered by several art critics to be one of the most influential artists of modern times [31, 29].

Inspired by Chuck Close and his artwork, we explore in this thesis two unique portraiture styles developed by this master painter. The first style, his most recent one, is composed by arranging a large collection of abstract shapes next to each other such that, when one looks from afar, they reveal the head of a subject. However, as the observer comes closer, each individual tile detaches from the face and collapses into a distinct piece of work. Figure 1.1 (a) is a self portrait of Chuck Close using this technique, which we refer to as the "blobby" style.

The second set of portraits we examine are Close's paintings composed by superimposing thousands of fingermarks on a canvas. As was the case with his blobby images, these fingerprinted portraits bear a remarkable resemblance to a photograph when viewed from a distance, but as one inspects the image more carefully, one notices that the portrait is made up of a multitude of fingermarks. Although each fingermark is a distinguishable entity, when put together, the numerous prints combine to convey texture, shape and illumination, allowing the observer to perceive a face. Figure 1.1 (b) is an example of Close's fingerprint style.

Figure 1.1: (a) Self Portrait, 2000, an example of Close's blobby style. (b) Georgia, 1985, an example of Close's fingerprint style.

Although the two portrait styles described above appear quite different from one another, they share in their essence a crucial feature. Both styles are based on what we call layered imagery. We define layered imagery as a visual composition that possesses distinct and interesting elements at various viewing distances. The elements of interest can manifest themselves in several ways, including visual and semantic significance. When glanced at from a distance, the two types of portraits we study in this thesis reveal distinct faces, but when confronted from up close, each element becomes a unique entity. In the case of Close's blobby style, the diamond-shaped cells hold visual appeal through their distinctive shapes and colours. Close's fingerprints, on the other hand, introduce a deeper semantic meaning when one stops to wonder what it means for Close to use his own body as a tool and to apply his unique finger signature over the entire canvas.

In order to further develop our understanding of Close's work and layered imagery in general, we devote an important part of our thesis to a discussion of the perceptual principles that contribute to the striking effect of layered imagery. To validate these principles, we apply them in the creation of two Chuck Close filters. In this thesis, we focus mainly on the changes in tone that convey shape, since they are for the most part responsible for the effect of layered imagery. The contributions of this thesis are thus:

• A discussion of the principles of visual perception that permit the effect of layered imagery.

• An exploration of the textural aspect of a system that mimics Close's blobby style.

• The creation of a novel automated system that emulates Close's fingerprint style.

Researchers in physics, computer science, biology and psychology have in recent decades made important discoveries regarding the human perceptual system. It was later observed by researchers in the science of art that some of these discoveries were already well known and exploited by artists [13, 11, 23, 15]. Whether artists come to possess this knowledge through actual experiments or simply by following their instincts, it remains that artists have, over the centuries, gathered significant knowledge about the characteristics that create a visually appealing composition. By taking inspiration from Close's work, we therefore hope to obtain a greater insight into the perceptual qualities that allow Close's paintings to produce the effect of layered imagery. Moreover, the painting styles we explore provide us with guidance in artistic expression and give us a target for developing artistic effects which we would not have thought of otherwise.

The blobby and fingerprint filters presented in this thesis were guided by two decisions. First, to create our Chuck Close-like images, we wish to use 2D images as input. While an alternative approach might be to use 3D models [12, 18, 34, 35], providing a 3D model is generally not feasible for most people. In contrast, photographs are easily available, and people are comfortable with this medium. Providing a source image as input to our program is therefore quick, easy and inexpensive.

Second, we were motivated by the philosophy that the use of our system should minimize human intervention. Many non-photorealistic rendering (NPR) techniques require a human to perform tasks such as the manual placement of strokes, the specification of stroke orientation or the annotation of important regions of the image [7, 25, 26, 3]. Although the approach of letting humans perform operations they can easily accomplish and letting computers execute more tedious tasks has its merits, we have put considerable effort in our work into minimizing human interaction. Our motivation is twofold. First, we seek to develop tools for an audience that may not have formal training in art. Our filters should therefore be simple to use. Second, by forcing an algorithmic solution, we hope to observe more carefully the work of Close and extract better guidelines to develop our filters.

The rest of this thesis is organised as follows. In Chapter 2, we survey examples of layered imagery in art history and delve into seminal works in the NPR literature that help in the creation of our Chuck Close filters, paying particular attention to the development of textures. We also examine the basis of the human visual processing pathway, from the retinal image to the cognitive stage, and attempt to explain the basic principles that make layered imagery so fascinating. The next two chapters are dedicated to the algorithms we have created to mimic Close's work. In Chapter 3, we first extract the essence of Close's blobby style and then detail the algorithms to reproduce it. We take a similar approach in Chapter 4 and demonstrate a way to transform an image into a fingerprint stylised rendering. Finally, we conclude in Chapter 5 with a review of our research and present avenues for future work.

CHAPTER 2

Background

In this chapter, we first explore some basic principles of visual perception. Then, we discuss how layered imagery is perceived, initially at a physical and then at a cognitive level. In the subsequent section, we briefly explore the use of layered imagery in art history and its progression from Ancient Greece to Chuck Close. We conclude by providing a literature review of Computer Graphics research related to our own.

2.1 Principles of Perception

2.1.1 Visual Processing Framework

To understand some of the factors that make layered imagery so appealing, we must first study the basic principles of perception, from the moment light hits our retina until the final stages of the cognitive process that tells us what it is we are seeing. Because we experience visual percepts for the better part of our waking life and do so effortlessly, we tend to dismiss vision as a trivial task, but visual perception is a complex process and much of it remains to be explained.

Based on the seminal work of David Marr [17] at MIT in the early 80's, Stephen E. Palmer describes a general theoretical framework for visual processing [22]. The process begins with the retinal image that is formed in our eyes due to light reflected from external objects. Through the combined work of cones, rods and other dedicated cells, we fuse the retinal images from our two eyes and extract lines, edges and object contours. This entire process is referred to as image-based processing and occurs in 2D. From the data that has been gathered, we derive surfaces from the scene and begin to reconstruct elements of the environment. This stage is generally referred to as surface-based processing, and our brain can, from that moment, begin to form hypotheses regarding the perceived surfaces. From our knowledge of the world, we then refine, in the object-based stage, our previous hypotheses and attempt to recover the 3D structure of objects. In the final stage, we classify these objects into categories, a process which helps determine the function they serve [22].

Figure 2.1: Five stages of visual processing, from [22]: the retinal image feeds image-based, surface-based, object-based and, finally, category-based processing.

It is important to understand that although each process in Palmer's framework is based on the input of the previous stage, the processing pipeline is not unidirectional. There is, in effect, an important feedback loop from later stages back to their predecessors. The image-based process, for instance, can establish an initial set of edges and provide that data to the surface-based process, but a hypothesis about the existence of certain surfaces will in turn be fed back to the image-based process, which can then refine its original assessment. Illusory contours, in which we perceive edges that are not present (such as in the Kanizsa triangle), can be partly explained by this feedback mechanism [24, 22]. Although these complex interactions are a cause of great confusion to vision scientists, it is this interplay between stages of processing that permits the interesting effect of layered imagery.

In our appreciation of layered imagery, the four stages beyond the retinal stage are of particular interest, but because much of the latter two stages is still not well understood at a computational level, we primarily discuss the image-based and surface-based representations in the discussions of our algorithms. This is not to say that the object- and category-based processes are insignificant. The selection of the reference image, for instance, is a decision that is crucial to the success of the Chuck Close filters. Automatically selecting a photo that will engage the viewer at a deep cognitive level is, however, outside the scope of our work.

As discussed in Chapters 3 and 4, many of the perceptual effects that we distill from Close's artwork are elements that facilitate the transition in our visual system from the image-based to the surface-based representation. Because the basic elements Close employs in the composition of his portraits are visually distinct entities, Close provides subtle hints so that the observer can still reconstruct the figure of the portrait. The discovery and successful application of these hints in our filters are the central foci of this thesis.

2.1.2 Chrominance and Luminance

Underlying the theoretical framework of visual processing is the crucial difference between chrominance and luminance. In her book Vision and Art: The Biology of Seeing, Margaret Livingstone writes:

The elements of art have long been held to be color, shape, texture, and line. But an even more fundamental distinction is between color and luminance. Color (in addition to reproducing objects' surface properties) can convey emotion and symbolism, but luminance (what you see in a black and white photograph) alone defines shape, texture, and line. [15]

What Livingstone describes as luminance in the previous passage is what artists better know as value or brightness. If we are working in the RGB colour-space, we can approximate a colour's luminance value using a weighted sum of its R, G, and B components. Equation 2.1 from [5] gives the luminance of a colour $\alpha$:

\[
\mathrm{Luminance}(\alpha) = \begin{bmatrix} 0.299 & 0.587 & 0.114 \end{bmatrix} \begin{bmatrix} \alpha_{\mathrm{red}} \\ \alpha_{\mathrm{green}} \\ \alpha_{\mathrm{blue}} \end{bmatrix} \tag{2.1}
\]
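As a concrete illustration, Equation 2.1 is simply a dot product; the snippet below is a minimal Python sketch of it (the function name and array conventions are ours, not part of the thesis's implementation).

import numpy as np

def luminance(rgb):
    """Approximate luminance of an RGB colour (or image) with components
    in [0, 1], using the weights of Equation 2.1."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.asarray(rgb)[..., :3] @ weights

# A saturated red and a mid grey end up with comparable luminance:
# luminance([1.0, 0.0, 0.0]) -> ~0.299, luminance([0.3, 0.3, 0.3]) -> 0.3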

As Livingstone states, physiologically, humans see the world in two different ways. Initially, colour and luminance are processed separately by our visual system, and then this information is integrated to give us our perceived final image [6, 15].

The luminance processing pathway is, evolutionarily speaking, much older than our chrominance pathway, and as such, luminance is more critical in our perception of the world. Livingstone asserts that luminance is responsible for "our perception of depth, three-dimensionality, movement (or the lack of it), and spatial organization" [15].

Thus, the luminance properties of an object are crucial in the transition from image-based to surface-based representation. It is luminance that helps our visual system perceive the shape of objects. Therefore, if we ensure that luminance fidelity is maintained in our stylized renderings, then we can preserve the shape of figures in our portraits.

Thus, by constraining luminance fidelity but exploiting colour variations, we can develop interesting artistic effects. When two fields are isoluminant (i.e. possessing the same luminance values), they look the same to our luminance processing pathway, while potentially looking quite different to the colour processing pathway. This creates a perceptual tension that skilled artists can exploit for astonishing effects. For example, the sun in Monet's painting Impression Sunrise is often said by viewers to pulse or shimmer (Figure 2.2) because the sun's luminance is the same as that of the surrounding clouds even though the sun's hue is quite different [15].

Figure 2.2: Skilled artists can use isoluminant colours to produce perceptual tension between the luminance and colour processing pathways of the human visual system. In the sky of Impression Sunrise, Claude Monet selects colours with radically different hues (a) but near-constant luminance (b). The result is a sun that appears to shimmer in the morning mist [15].

In our emulation of Close's blobby style, we exploit a similar technique. Since it is luminance that defines shape and textures, we can be more liberal in our selection of colours. Thus, as long as luminance fidelity between the source and generated portrait is maintained sufficiently, the viewer will still perceive the figure in the portrait. Similarly, in the fingerprint style, we seek to maintain luminance fidelity so as to preserve the shapes and textures that are present in the reference portrait.
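To make the idea concrete, the sketch below generates a colour with an arbitrary hue but approximately the same luminance as a target colour by simple rescaling. This is only an illustrative stand-in for the geometric isoluminant picking technique of [16, 27]; the function name and constants are our own.

import colorsys
import random
import numpy as np

WEIGHTS = np.array([0.299, 0.587, 0.114])

def isoluminant_variant(target_rgb, rng=random.Random(0)):
    """Pick a random hue, then rescale it so its luminance matches the
    target colour's (components in [0, 1]).  Clipping can break exact
    isoluminance for very bright targets; a sketch, not the real picker."""
    target_y = float(np.dot(target_rgb, WEIGHTS))
    candidate = np.array(colorsys.hsv_to_rgb(rng.random(), 0.8, 0.8))
    candidate_y = float(np.dot(candidate, WEIGHTS))
    return np.clip(candidate * (target_y / candidate_y), 0.0, 1.0)

# e.g. isoluminant_variant([0.6, 0.5, 0.4]) returns a differently hued
# colour whose luminance is close to that of the warm grey target.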

2.2 Layered Imagery in Art History

The introduction and wide popularity of photomosaics in recent years might lead one to believe that layered imagery is a novel art form. However, the practice of creating artwork out of smaller distinct elements has existed for centuries. An ancient example of layered imagery is the art of decorative mosaics. By carefully arranging thousands of tiles of coloured glass, stone, or pottery fragments, mosaic artists could recreate a larger picture or pattern. Well established in Ancient Greece, the practice of mosaicing flourished during the Roman Empire and spread to all regions of the world.

During the Renaissance, talented painters were those who showed finesse in their handling of the paintbrush. Classical painters were trained to apply brushstrokes such that one could not perceive the paint on the canvas. The stroke of the painter had to disappear from the canvas such that the viewer's experience of the painting was independent of his distance. The early work of Renaissance painter Titian possessed this quality of delicacy, but in paintings he later produced (Figure 2.3), Titian deliberately exposed the brush stroke. Art historian Vasari writes:

[...] these recent works, on the other hand, are dashed off in bold strokes, broadly applied in great patches in such a manner that they cannot be looked at closely but from a distance appear perfect [32].

Titian was among the first painters to intentionally reveal the strokes of his paintings.

He was thus introducing effects of layered imagery into his artwork. From a distance, the viewer could contemplate an integrated scene, but up close, one could see nothing but Titian's broad brush strokes [32].

Figure 2.3: Titian, The Flaying of Marsyas, 1575-76.

Titian's technique of revealing the brush strokes on the canvas eventually became widely popular in the late 19th-century movement of Impressionism. Artists such as Monet, Renoir and Degas painted scenes which, when viewed from a distance, communicated a figurative image conveying movement and the essence of the fleeting moment (Figure 2.4 (a)). Upon closer inspection, however, the viewer could easily perceive the seemingly unrelated broken brush strokes that compose the scene.

Painters of the Divisionist era such as Seurat and Signac (Figure 2.4 (b)) followed in the footsteps of the Impressionists, borrowing the idea of disjoint elements (dots in the case of the Divisionists), which were juxtaposed to create the overall effect of a coherent scene while simultaneously maintaining the individuality of the basic elements [11].

Figure 2.4: (a) Claude Monet, Woman with Parasol, 1875. (b) Georges Seurat, A Sunday on La Grande Jatte, 1884-86.

As a more modern example, Chuck Close has demonstrated, through various painting styles (including the two we explore in this thesis), his command of layered imagery. Of particular interest is Close's masterful manipulation of the grid size in his blobby style. Over the years, Close has refined the size of his paintings and adjusted the size of the grid employed such that the duality of the portrait and its building blocks is obvious. When one views a Divisionist painting, the use of dots is not necessarily striking, but in Close's artwork, the existence of distinct atomic elements is clearly apparent. Intrigued by the apparent duality of Close's paintings, the viewer then engages in the "Chuck Close Dance"¹, stepping back and forth to perceive at one moment the whole image and at another the detail of individual cells.

¹ Audioguide to Chuck Close Prints: Process and Collaboration, The Metropolitan Museum of Art, New York, April 18, 2004.

As shown in this brief tour through art history, layered imagery is not a new art form, but one that has existed for centuries. From decorative mosaics to the recent work of Chuck Close, the artworks we have described all possess features of layered imagery: a coherent scene when viewed from a distance, and distinguishable, unique elements that form the composition when viewed from up close.

2.3 Related Work in Computer Graphics

The quest for perfect photorealism has for many decades been a driving goal in the development of Computer Graphics. However, since the early 90's, a growing group of researchers have pursued the opposite goal. Instead of aiming for increased realism and complexity, researchers in the field of NPR have attempted to produce renderings that possess the stylistic qualities of traditional artistic media.

In Haeberli's seminal work [7], the author suggests that one can convert an image into many different representations, each rendition conveying different visual information. Haeberli believes that an image can be manipulated to emphasize some of its aspects by taking into account stroke location, colour, size and direction of brush stroke. In a general manner, Haeberli's goal of transforming a source image into another related representation is close to our own. In our work, we take inspiration from a 2D photo and produce a new representation of it. Additionally, Haeberli's desire to steer the viewer's attention to particular regions of the image is similar to one of our goals.

Hertzmann introduces in [8] an expressive and versatile system that allows the creation of various artistic styles. By making use of a multi-scale approach, brushes of varying sizes and curved strokes that align themselves with image boundaries, the system renders Impressionist-, Expressionist- and Pointillist-like images. In the two systems we develop, we also seek to align the primitive elements of our composition with the edges present in the reference image. Furthermore, in Close's fingerprint style, we also employ a multi-scale approach reminiscent of [8] to modulate the size of prints according to the level of detail in the image.

In the development of our blobby and fingerprint style filters, we employ two different techniques. The next two sections therefore separately discuss related work in Computer Graphics.

2.3.1 Blobby Style Related Work

In [4], Finkelstein and Range describe a method to generate image mosaics by selecting images whose colours and shape approximate the edges of the larger image using the multiresolution querying method of [10]. Once an image has been selected to represent an area, its colours are adjusted to more closely approximate the reference image. Along with Silvers's work on photomosaics [28] (a proprietary technique), the system of Finkelstein and Range on image mosaics is most closely related to our blobby filter.

One method to develop a blobby style filter would be to create a large library of images, either by extracting grid cells from Close's paintings or by pre-rendering several thousand such images, and then employing traditional photomosaicing techniques. The systems of [28, 4], however, use colour proximity as a primary selection criterion and seek to minimize modifications to the image library. This contrasts with our approach. We explicitly modulate our textures with colours that are chosen to be far from those of the target. Furthermore, traditional photomosaicing techniques cannot capture important features of Close's portraiture style that we detail in section 3.2.

Hertzmann develops in [9] a framework to process images based on a machine learning technique. Provided an unfiltered and filtered pair of images, the system learns to recreate the effect of the filter using texture synthesis techniques. One method to develop our blobby filter would be to use the machine learning strategy of [9]. This technique, however, is not easily feasible since we do not possess a library of source-destination pairs. Furthermore, [9] has difficulty maintaining and replicating large-scale structures such as those found in Chuck Close's grid cells.

Of particular importance in Close's blobby style is his use of disparate colours in each grid cell. Although individual colours are far from those of the reference image, when viewed from a distance, the combination of all colours resolves to the colour of the source image. For the algorithm detailed in this thesis, we employ the geometric isoluminant colour picking technique of [16, 27], which allows for the selection of colours with widely varying hue and saturation, while simultaneously maintaining luminance value.

17 2.3.2 Fingerprint Style Related Work

In the NPR literature, the fingerprint style of Chuck Close is most closely related to stroke-based illustrations [35, 3, 26, 34, 25]. In this family of techniques, primitive strokes made of lines and textures are combined to create a larger image. Strothotte and Schlechtweg [30] note that these strokes have the two following properties:

1. "They depict both tone and texture simultaneously."

2. "They work together to express tone and texture."

As such, although each stroke must be visible by itself, the properties of each stroke (placement, orientation, darkness, etc.) must contribute towards the depiction of the overall tone and texture. Tone and texture in turn provide cues for the viewer to facilitate the transition from image-based to surface-based representation.

Winkenbach and Salesin are among the first to introduce stroke-based techniques in the form of pen-and-ink illustrations [34, 35]. Their approach, however, cannot be used for our filter since it is based on 3D models. Salisbury et al. [25], on the other hand, demonstrate an image-based pen-and-ink technique which carefully depicts tone and texture. In [26], they introduce the notion of orientable texture to their system. This technique and the previous ones described, however, rely heavily on user input. Because a goal of our work is to minimise human intervention, the systems discussed so far cannot be used.

Artistic rendering techniques that use halftoning [21, 33, 20] seek to recreate grey-scale images on bilevel devices. The impression of increased intensity range is achieved by the spatial integration performed by our visual system over a small area. To determine whether a pixel should be black or white, artistic halftoning techniques rely on dither matrices that possess interesting patterns. In our fingerprint system, we do not limit ourselves to bilevel intensities, but like halftoning techniques we also exploit the spatial integration of the eye (albeit on a slightly larger scale). Halftoning methods would not be adequate for our fingerprint filter since they do not allow control of the location, orientation and size of fingerprints.

An interactive method introduced by Durand et al. [3] seeks to decouple the tedious rendition of strokes from high-level attributes such as tone, smudging and level of detail. The tedious and technical aspects of stylized rendering are left to the computer system while the user controls the expressiveness of drawings. The authors introduce the notion of a precision map that locally dictates the level of detail necessary in a region of the image. In our fingerprinting technique, we also exploit the idea of a precision map. Again, however, because the system of Durand et al. relies heavily on human interaction, it does not satisfy our premise. Furthermore, the tone of individual strokes is fixed to a constant value, which is not suitable for Close's style.

In the field of non-photorealistic rendering, DeCarlo and Santella explore the notion of hierarchical regions of importance in [2]. In their system, elements that are noted to have higher significance are rendered at a higher level of detail. To determine the location of regions of interest, the authors track the movements of the viewer's gaze on the source image. Related to the notion of regions of importance, [26, 3] provide an interactive method for a user to specify regions that require emphasis.

Because we do not possess the apparatus to track the eye movements of an observer and want to minimize human interaction, we provide a mechanism inspired by [8] which automatically determines a precision map.

CHAPTER 3

Blobby Style

In this chapter, we explore a novel image filter that renders portraits in the blobby style of Chuck Close. As described by Livingstone [15], our visual processing mechanism is composed of two distinct pathways, one based on luminance and the other on chrominance. An important property of luminance is that it defines "shape, texture and line" [15]. Because we are only concerned with the effect of the luminance pathway in this thesis, most of our effort is directed toward the selection of textures that guide the viewer in establishing the figure of the portrait. By selecting textures that bear a meaningful relation to the reference image, we can ease and improve the transition from image-based to surface-based representation.

Part of the magic in Close's work is the knowledge that these paintings are created by hand, with no aids other than Close's creativity and talent. A Chuck Close original will always be unique and valuable. However, the creation of an automatic filter for this style allows us to both better understand Close's work and to produce our own portraits, an appealing quality since most of us will never be the subject of an original Close painting.

We begin in section 3.1 by identifying some conventions in Close's work, then introduce the algorithms employed to mimic Close's artwork in section 3.2. Finally, we present in section 3.3 experimental results and a discussion of the techniques developed.

3.1 Observations Regarding Close's Blobby Style

Figure 3.1: Portraits by Chuck Close that have inspired our work. (a) Self Portrait, 2000. (b) Lyle, 2003.

Close's portraits are created by photographing the subject, subdividing both the source photograph and the destination canvas into diamond-shaped grid cells, and then representing each cell in the source photograph as a collection of coloured, concentric blobs. The canvases Close uses can be quite large, sometimes as big as 7 feet wide by 9 feet high. When viewed from far away, they resemble the original photos due to spatial integration of colour and the perceptual tendency to concentrate on the larger, emerging shape of the face [23]. However, when viewed from a near distance, Close's use of colour, isoluminance and texture clutters the visual field [15], invoking "a competition between the face and its constituent blocks to engage our perception of shape from shading" [23].

In order to create a faithful Chuck Close image filter, we must first carefully examine his artwork and detect the elements of his painting that make his style unique. By doing so, we also note the qualities of his portraits that contribute to the effect of layered imagery. We first observe that his paintings are composed of several diamond-shaped cells, which Close calls marks. These marks are tightly packed next to each other to form a regular tiling pattern; we can therefore overlay a grid of fixed width and height over his portraits. The size of his grids varies from portrait to portrait, but Close seems to have converged over the years to a grid size of 24 marks wide by 29 marks high.

Marks are individually painted by overlaying quads and concentric blobs. Although every mark is unique, we can categorise the overall shape they suggest into five groups: circular, vertical, horizontal, left-diagonal and right-diagonal. Figure 3.2 demonstrates close-ups of circular, vertical and horizontal marks. The shape of each mark is assigned in a manner that suggests edges found in the reference image. If there is no strong edge in a source cell, the target mark usually has a circular shape.

Otherwise, the target mark holds elongated vertical, horizontal, or diagonal blobs.

Furthermore, we observe that Close occasionally merges certain grid cells so that they form an elongated shape (Figure 3.3). This operation is performed when adjoining cells have similar average colours, share a flat edge, and possess a diagonal shape. In general, two to three cells at a time may be joined this way, forming a single, larger diagonal blob or, occasionally, a "vee". Moreover, within large areas of little or no edge information (such as the background), Close often joins cells in this manner to eliminate uniformity in the visual field. Vertical and horizontal blobs are never joined.

Figure 3.2: Most marks in Close's paintings have a default circular shape (a). Other marks are made to convey a vertical (b) or horizontal (c) shape.

Figure 3.3: Two cells are merged to form an elongated left-diagonal mark.

Finally, we note that the individual blobs that compose each mark possess distinct colours. These colours can vary wildly from one another, but when they are blended together through optical mixing, they suggest the underlying colour of the diamond tile. To determine the colour of each layer, Close often starts by filling the grid cell (the outer quad) with an arbitrary colour, and then successively adds layers to move closer to the target colour [29]. These quads and blobs often alternate in luminance and/or colour. However, a key point is that each target cell has many fewer luminance levels than colours. Typically, each mark has only two or three luminance levels [16] but can have three to ten colour levels. This effectively increases the discrepancy between the shapes perceived by the luminance and colour visual processing pathways. Colour clustering analysis of paintings by Close [27] reveals that his portraits are in general composed of five colour layers, the value we use in this thesis.

The rules previously distilled are summarized as follows:

1. Each mark is generally composed of five blob layers.

2. The colours of each layer blend to suggest the colour of the underlying tile, while simultaneously maintaining luminance fidelity.

3. The shape of each mark is determined by the underlying edge information.

4. Adjoining cells with similar colour, shape and edge information can be merged to form a single mark.

5. Random joins are created in uniform areas of the image to break the visual field.

In order to create a filter that emulates Close's style, we must therefore reproduce these features. We approximate these properties using the steps described in the following section.

3.2 Blobby Methodology

A system to recreate Close's blobby style consists of two main components: shape selection and colour selection. The process of shape selection is first performed to determine which textures to apply. Colour picking is then executed based on the characteristics of the previously selected textures. We describe shape and joins assignment in this thesis and defer many of the details regarding colour selection to [16, 27].

The first step of shape selection examines the reference image's edge information and determines the shape (circular, vertical, horizontal, etc.) that each grid cell should be assigned. In a second step, each cell is inspected and joined with its neighbour where necessary. Unique marks are then created by applying superimposed textures based on a set of pre-rendered texture strips. At run-time, textures are modulated with colours that maintain luminance fidelity with the source image.

3.2.1 Shape and Joins Assignment

Selecting whether a mark should have a non-circular shape or whether it should be joined with a neighbour is an important process in the creation of blobby style images. Although the effect of a single mark is negligible, the significance of shape selection is witnessed when all marks are viewed collectively. The purpose of non-circular marks is to suggest the presence of underlying edges in the reference image, thus helping the viewer to determine object contours. This in turn eases the transition from the image-based to the surface-based representation, an important step in visual processing, as discussed in section 2.1.1.

In Close's artwork, the majority of marks have a circular shape, but certain marks emphasize a vertical, horizontal or diagonal component. Closer observation reveals that these generally occur in regions where an edge is present. These non-circular shapes help highlight the silhouette and facial features such as the mouth, eyes and nose. We therefore begin by filtering the source image with a Canny edge detector [1] to obtain an edge map (Figure 3.4 (b)).

Figure 3.4: (a) Source image. (b) Edge map.
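The edge map can be produced with any standard implementation of the Canny detector; the sketch below uses OpenCV as a convenient stand-in. The wrapper function, its name and the mapping from a single "sensitivity" knob to the two hysteresis thresholds are our own illustrative choices, not the thesis's actual settings. Section 3.3 revisits the effect of this sensitivity parameter.

import cv2

def edge_map(gray_u8, sensitivity=0.5):
    """Binary edge map via the Canny detector.  A larger `sensitivity`
    lowers the hysteresis thresholds and admits more (and noisier) edges."""
    high = int(255 * (1.0 - 0.8 * sensitivity))   # e.g. sensitivity 0.5 -> high = 153
    low = high // 2                               # common rule of thumb: low = high / 2
    return cv2.Canny(gray_u8, low, high)

# Usage (the filename is hypothetical):
# edges = edge_map(cv2.imread("portrait.png", cv2.IMREAD_GRAYSCALE), sensitivity=0.5)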

Once edge detection is completed, we superimpose a tiling pattern over the edge map. For each cell i, we determine an orientation vector v_i by making use of the vertical and horizontal gradients. We then trivially derive the orientation θ_i using the x and y components of v_i. If the magnitude of v_i is below a fixed threshold, the cell is assigned the default circular shape. Otherwise, θ_i is quantized into one of four orientations (rule 3). Algorithm 3.1 describes the procedure for shape assignment.

Algorithm 3.1 Shape Assignment
1: Perform edge detection on the source image
2: for all cells c_i do
3:   Create a vector v_i using the vertical and horizontal derivatives
4:   if magnitude of v_i < threshold then
5:     c_i.shape ← circle
6:   else
7:     Determine the orientation θ_i of v_i
8:     Quantize θ_i into one of 4 orientations λ (vertical, horizontal, left- or right-diagonal)
9:     c_i.shape ← λ
10:  end if
11: end for
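A minimal Python rendering of Algorithm 3.1 is sketched below, assuming a greyscale image with values in [0, 1]. Using the mean gradient per cell and the particular threshold are simplifications of ours, and the diagonal labels depend on the image coordinate convention.

import numpy as np

SHAPES = ("horizontal", "right_diagonal", "vertical", "left_diagonal")

def assign_shapes(gray, grid_w=24, grid_h=29, threshold=0.05):
    """Assign a shape label to each cell from the mean image gradient
    inside that cell (cf. Algorithm 3.1)."""
    gy, gx = np.gradient(gray)                    # vertical and horizontal derivatives
    h, w = gray.shape
    shapes = np.empty((grid_h, grid_w), dtype=object)
    for r in range(grid_h):
        for c in range(grid_w):
            ys = slice(r * h // grid_h, (r + 1) * h // grid_h)
            xs = slice(c * w // grid_w, (c + 1) * w // grid_w)
            vx, vy = gx[ys, xs].mean(), gy[ys, xs].mean()
            if np.hypot(vx, vy) < threshold:
                shapes[r, c] = "circle"
            else:
                # The gradient points across an edge; the mark should be
                # elongated along the edge, i.e. perpendicular to the gradient.
                edge_angle = (np.degrees(np.arctan2(vy, vx)) + 90.0) % 180.0
                shapes[r, c] = SHAPES[int((edge_angle + 22.5) // 45) % 4]
    return shapes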

Cells that appear to form an elongated shape in diagonal directions must then be joined to produce an effect similar to Close's. To link right-diagonal cells, we scan each cell line-by-line starting from the top of the image and look at its south-west neighbour. If the neighbour is also of right-diagonal shape and has a similar average colour, then the cells are merged together. As can be observed in the portraits of Close, the length of his elongated shapes never exceeds three cells. With the linking procedure previously described, it is possible to generate much longer lists; we must therefore subdivide long lists into smaller pieces. To accomplish this task, we randomly partition lists into groups of two or three cells, being careful not to end a list with a single cell. In Figure 3.5, the red cells (a) must be partitioned into smaller lists. The first solution proposed (b) is acceptable, but the second one (c) is not, since it leaves a lonely cell.

Figure 3.5: (a) The red list must be partitioned. (b) is an acceptable partitioning, but (c) is not because it leaves a lonely cell.

Algorithm 3.2 describes how right-diagonal cells are joined. A similar method is performed for left-diagonal shapes. Finally, we randomly sample cells to see if they are in large, flat areas. If they are, we randomly join them as left- or right-diagonal shapes according to rule 5.

Algorithm 3.2 Join Assignment
1: for all cells c_i, from top to bottom, left to right do
2:   if c_i.shape is right-diagonal then
3:     Find the south-west neighbour c_i(sw)
4:     if c_i(sw).shape equals c_i.shape then
5:       Join c_i(sw) and c_i
6:     end if
7:   end if
8: end for
9: for all joined groups g_i do
10:   if length of g_i > 3 then
11:     Break g_i into groups of 2 or 3, avoiding lonely cells
12:   end if
13: end for
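The only delicate part of the second loop of Algorithm 3.2 is splitting a long run without leaving a lonely cell; a minimal sketch of that step is given below (the function name and its fixed random seed are illustrative, not the thesis's code).

import random

def partition_run(n, rng=random.Random(0)):
    """Split a run of n >= 2 joined cells into groups of 2 or 3, never
    leaving a lonely group of one cell (cf. Figure 3.5)."""
    groups = []
    while n > 3:
        g = 2 if n == 4 else rng.choice([2, 3])   # taking 3 from 4 would strand a cell
        groups.append(g)
        n -= g
    groups.append(n)                              # final group of 2 or 3
    return groups

# e.g. partition_run(7) -> [3, 2, 2], [2, 3, 2] or [2, 2, 3]; never [..., 1]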

Once shape and join assignment is complete, we proceed to the assignment of texture strips and render the final results.

3.2.2 Texture Strips

In our system, a texture strip is a combination of five sub-textures (rule 1), one sub-texture for each desired colour layer. Sub-textures, in turn, are bilevel images in which white marks the region to be colour modulated. By gathering a collection of texture strips that convey varying shapes, we can build a texture strip library. Unlike traditional photomosaicing systems [28, 4], our library need not be very large. As a bare minimum, the library only requires one texture strip for each type of shape. By increasing the size of the library, however, we augment the diversity of marks and avoid repetitive elements. Figure 3.6 shows examples of texture strips included in our Chuck Close texture library.

Figure 3.6: (a) Circular texture strip. (b) Diagonal texture strip. (c) Vertical texture strip. The sub-textures are composed of two quads and three more shapes that convey the underlying edge information. Sub-textures are ordered from outermost to innermost.

Once a texture strip has been assigned to each cell, sub-textures are successively rendered over their respective cells as alpha-blended texture-mapped quads, from outermost to innermost. Because we use a fixed-size library of textures, the vertex coordinates of the quads are perturbed slightly in order to mimic a hand-crafted look.
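The layering itself amounts to painting each colour-modulated sub-texture over the previous ones. Our system does this on the graphics card with texture-mapped quads; the sketch below is a CPU equivalent written only to make the compositing order explicit (the function name and the white starting cell are our assumptions).

import numpy as np

def render_mark(sub_textures, colours):
    """Composite one mark from a texture strip: each binary sub-texture
    (H x W, ordered outermost to innermost) paints its colour wherever
    the mask is 1, on top of what has been drawn so far."""
    h, w = sub_textures[0].shape
    cell = np.ones((h, w, 3))                     # start from a white cell
    for mask, colour in zip(sub_textures, colours):
        m = mask[..., None].astype(float)
        cell = (1.0 - m) * cell + m * np.asarray(colour, dtype=float)
    return cell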

The colours we apply to each sub-texture are determined by the isoluminant picking technique of [16, 27]. Briefly summarized, this technique picks colours that are far apart but blend to suggest a target colour. The benefit of this approach is that it chooses colours that have the same luminance as the target colour. Note, however, that even in black and white, our textural approach still conveys the shape of the face. Figure 3.7 shows the result of the above technique using various texture strip libraries. In Figure 3.7 (b), all marks have been assigned the default circular shape.

By introducing vertical, horizontal and diagonal marks in (c), we can delineate the lines and edges of the portrait more clearly. Joining diagonal marks as seen in (d) reinforces the suggestion of lines, which in turn improves our perception of the figure.

In practice, we find that certain texture strips can be reused for different shape assignments. A texture strip for horizontal marks, for instance, can be reused for vertical marks by appropriately adjusting the ordering of vertex coordinates when performing texture mapping. In a similar manner, texture strips that convey a circular shape can be rotated to create three more circular texture strips. By applying these simple techniques, we add to the diversity of our texture library and preserve texture memory on the graphics card.
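For a CPU-side library of binary masks (rather than GPU textures), the same reuse boils down to transposes and rotations. A small sketch, assuming the library is a dictionary keyed by shape name (a data layout of our own choosing):

import numpy as np

def expand_strip_library(strips):
    """Grow a small texture-strip library by reuse: transposing a
    horizontal strip yields a vertical one, and rotating a circular
    strip yields three more circular variants."""
    out = dict(strips)
    out["vertical"] = [mask.T for mask in strips["horizontal"]]
    out["circle_rotations"] = [[np.rot90(mask, k) for mask in strips["circle"]]
                               for k in (1, 2, 3)]
    return out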

Figure 3.7: (a) Source image. (b) Destination image using default circular marks. By introducing vertical, horizontal and diagonal marks (c) and joining diagonal marks (d), we delineate more clearly the lines and edges of the portrait.

Finally, we put together all the techniques previously described and state in Algorithm 3.3 the final steps performed to produce our blobby filter.

Algorithm 3.3 Blobby Algorithm
1: Grid off the source image
2: for all cells c_i do
3:   Perform shape assignment (Algorithm 3.1)
4: end for
5: Perform join assignment (Algorithm 3.2)
6: for all cells c_i do
7:   Assign a texture strip according to c_i.shape
8:   Determine the colour of each sub-texture according to [16]
9:   Render c_i with each layer's sub-texture modulated by its corresponding colour
10: end for

3.3 Experimental Results & Discussion

We present in this section various results obtained using the technique described in section 3.2. We demonstrate the effect of key parameters such as edge detection sensitivity and grid size. Finally, we present a novel type of image mosaic that combines ideas from photomosaics [28] and Chuck Close.

The images presented in Figure 3.8 are direct results of the filter we have developed to emulate Close's blobby style. The grid size was set to 24 marks wide by 29 marks high, values based on Close's own numbers. The images were also cropped to have the same aspect ratios as Close's paintings. The entire computation is fast, taking less than 2 seconds to generate each image on a Pentium 4 at 1.3 GHz with 512 MB of RAM, using source images of size 1024 × 1280.

Figure 3.8: (a) and (b) Source images. (c) and (d) Destination images created using the blobby filter we have developed.

A first and natural experiment to perform is to vary the size of the grid, as demonstrated in Figure 3.9. As expected, the smaller the number of marks (Figure 3.9 (a)), the coarser the subject's figure appears. When the number of cells decreases, each mark becomes larger. Shape and colour information is therefore averaged over a bigger surface area. One can compare this effect to applying a large convolution kernel to the source image.
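To make the averaging analogy concrete, the sketch below computes the mean colour of each grid cell; a coarser grid pools colour over more pixels, much like a larger box filter. This is an illustration of the effect only, not part of our filter's pipeline.

import numpy as np

def cell_averages(image, grid_w, grid_h):
    """Mean colour of every grid cell of an H x W x C image; fewer,
    larger cells average over bigger areas of the source."""
    h, w, channels = image.shape
    means = np.zeros((grid_h, grid_w, channels))
    for r in range(grid_h):
        for c in range(grid_w):
            cell = image[r * h // grid_h:(r + 1) * h // grid_h,
                         c * w // grid_w:(c + 1) * w // grid_w]
            means[r, c] = cell.reshape(-1, channels).mean(axis=0)
    return means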

As we increase the number of grid cells (Figure 3.9 (c) and (d)), the size of each mark decreases and finer details of the figure are revealed. One could argue that the latter results look better, but with more grid cells, the marks are harder to see, and the striking effect of layered imagery is decreased. As a degenerate case, marks could attain the size of a pixel, at which point the stylistic rendition disappears.

In addition to the effect of grid size, one must also consider the distance of the observer from the portrait. According to experiments performed by Pelli [23], the integration of flat marks into a figure depends not only on the number of marks across the face, but also on the width of the face in degrees of visual angle. The emergence of the figure is thus more apparent as one moves away from the painting.

Figure 3.9: Variations in grid size. (a) 12 by 15. (b) 24 by 29. (c) 36 by 45. (d) 48 by 58. As we increase the number of grid cells, finer details of the figure are revealed at the expense of artistic appeal.

Having explored the effect of grid size, we now look at variations in edge detection sensitivity. Edge detection algorithms (including Canny's, which we employ) are at best moderately successful: they are fairly sensitive to noise and rarely capture all the edges that a human would detect. These imperfections stem from the fact that edge detectors operate purely on the image-based representation and do not benefit from the feedback of the surface-, object- and category-based representations of the visual processing system. Because edge algorithms are so imprecise, we must carefully select the edge threshold sensitivity of our filter so that it captures as closely as possible the edges that a human would pick. If edge detection is not sufficiently sensitive, our system will assign too many circular marks. On the other hand, if edge detection is too sensitive, incorrect edges will be introduced and shape assignment will not be accurate. Figure 3.10 demonstrates the effect of varying edge detection sensitivity. If we decrease edge sensitivity, we obtain (a), in which case most marks are assigned a circular shape. Increasing edge sensitivity, as in (c) and (d), does not produce visually pleasing results, since non-circular shapes are assigned in uniform areas of the source image.

3.3.1 Beyond the Blobby Style

Finally, we present a new type of image filter, derived from the work we have presented and from photomosaics [28]. We first categorize small images by shape (as in rule 3 of section 3.2.1), and then coarsely segment these images to create cell layers (Figure 3.11). Each layer receives a colour based on its relative area within the cell. The colours chosen can either all be isoluminant or can have a smaller number of luminance levels than colours, as in the previous section. In the images shown, we used the latter approach, effectively applying the colour picking algorithm of [16, 27] to the segmented image layers. We finally adjust the tiling pattern to form a standard grid, in keeping with Silvers's [28] style of image mosaics. Results can be seen in Figure 3.12. Because a large library of source imagery is not required for our technique, this is an easy-to-use method for generating complex image mosaics.

Both tile selection and colour picking are fast, so image generation generally takes less than 2 seconds on a machine similar to the one previously described.

Figure 3.10: Variations in edge detection sensitivity. (b) shows the sensitivity value we have used thus far, while (a) demonstrates a decrease in sensitivity. (c) and (d) demonstrate the effect of increasing edge detection sensitivity.

Figure 3.11: Sample texture strips from the Andy Warhol mosaic. The human faces correspond to circular shape, while the soup can functions as a diagonal shape.

Figure 3.12: Novel image mosaics created using different texture strip libraries. (a) Flower theme. (b) Bug theme. (c) Sea theme. (d) Andy Warhol theme.

CHAPTER 4

Fingerprint Style

In this chapter, we continue our exploration of layered imagery in portraits of Chuck Close by examining paintings created through the accumulation of fingerprints. From a distance, each painting conveys a near-perfect representation of a face, but from up close, the canvases reveal thousands of fingerprints that appear unordered.

As a philosophical point, it is interesting to note that by the very nature of the approach Close uses to create these fingerprint portraits, it is impossible to reproduce his paintings. Close's fingermarks are unique and as such, each print he lays on the canvas reaffirms that the portrait is his creation. Nevertheless, we seek inspiration from his artwork and attempt to emulate his style. In this process, we acquire a deeper understanding of his technique and create a filter which can be used by all.

As with our approach in Chapter 3, we begin in section 4.1 by identifying some conventions in Close's fingerprint artwork, then introduce the algorithms employed to mimic Close's style in section 4.2. Finally, we present in section 4.3 experimental results and a discussion of the techniques developed.


Figure 4.1: Portraits by Chuck Close that have inspired our work. (a) Georgia, 1985. (b) Leslie, 1986.

4.1 Observations Regarding Close's Fingerprint Style

In the previous chapter, we examined a style in which Close uses a fixed grid that sets the location and size of each mark. Locally, each grid cell is unique and distinct from its neighbour, but when viewed globally, each cell contributes to an integrated whole composition. The use of a grid constrains Close to marks of a regular size, since each mark must sit within a fixed-size cell.

In his fingerprinting style, however, Close breaks free of the grid and allows himself to lay marks wherever he deems necessary. By giving himself such liberty, Close acquires a new artistic freedom. One would expect that by dropping the grid, Close would also abandon the restriction of a fixed-size mark, but he does not and maintains marks of standard size. He explains:

42 "1 was looking for an incremental unit that was automatic, not deter­

mined by taste or anything else. A stock, standard unit. The size of the

thumb and the size of the index finger." [31]

And so, as with his blobby portraits, Close constrains himself to marks of fixed size in the creation of his artwork. By limiting the tools of his creation, Close allows his creativity to be unleashed in new directions.

We now begin making more specific observations that lead to the development of our algorithm. As in section 3.1, we examine the artwork of Chuck Close closely and extract rules that act as the foundation of our algorithm.

First, we observe that given a sufficiently small subwindow, the luminance of the reference photograph and of the fingerprinted portrait are nearly equal. Second, we notice that Close uses two methods to achieve changes in tone. In one method, he applies different levels of pressure on the canvas to achieve variations in darkness. In brighter areas of the face, such as the sclera of the eyes, Close applies only a light pressure of his finger to imprint a mark. In darker areas, however, Close applies more pressure to leave more ink on the canvas. The other method Close employs to create darker regions is to overlay several prints in the same location. This is most evident in the hair, where numerous prints are made so that the shape of the finger mark is barely perceptible.

Close justifies the use of fingermarks because they provide him with a "standard unit". Close employs two such units, his thumb and his index finger. We observe upon close inspection of his paintings that the sizes of marks are not uniform over the entire painting: Close carefully chooses when to use his thumb and when to use his index finger. In areas of low detail, such as the clothes, large prints are applied, whereas in regions where Close desires more detail, he uses smaller and more precise prints (presumably his index). Because humans rely most heavily on facial features to perform face recognition, Close puts more detail near the eyes and mouth to increase realism.

We also notice that Close pays particular attention to the arrangement of prints along the silhouette of the person and in regions where there is a strong underlying edge (such as the separation of hair and face). In order to clearly delineate distinct regions, Close is careful never to place a print that straddles an edge. Instead, he reinforces the suggestion of an underlying edge by aligning several prints so that their own edges form a larger one. Since it is more difficult to create the impression of an edge with large prints, Close often uses smaller prints when he comes close to an edge. This technique can be most notably observed around the contour of the face and along the facial features.

The observations we have made with respect to Close's fingerprint style are summarized as follows:

1. Luminance fidelity is maintained between the source and destination images over small windows.
2. Darker regions in the reference image can be created by darker prints or by overlaying several marks.
3. The level of detail varies across the image depending on the significance of the region.
4. Fingermarks do not straddle edges and are neatly aligned along lines.

In the next section, we see how we can apply these rules to create an automatic filter.

4.2 Fingerprint Methodology

4.2.1 Position, Orientation and Print Darkness

Of utmost importance in the design of our system is tonal precision. If our rendered image does not maintain luminance fidelity with respect to the reference image, then overall texture, shape and illumination cannot be conveyed effectively. Lacking important cues for shape-from-texture and shape-from-shading, the transition from image-based to surface-based representation will be less effective and the emergence of the figure from the portrait will either be less obvious or fail completely. Our system must therefore ensure that it maintains luminance consistency between the source and target image in order for the effect of layered imagery to be apparent.

In order to preserve luminance fidelity, our first concern is to determine the positions where strokes should be applied. In our filter, strokes are first applied where the difference between the source and target image luminance is largest. Put differently, regions that need to be darker are treated first. In order to determine where strokes should be applied, we first introduce the notion of a luminance difference map. This map is trivially obtained by computing the luminance difference between the source and target image at each pixel.

Taking this luminance difference map and convolving it with a Gaussian kernel (of size similar to the thumbprint), we can determine an ordering of positions in which strokes should be applied by sorting each candidate pixel position by decreasing luminance difference value. We denote this ordered list as L. By treating candidate positions from largest to smallest luminance difference value, the working canvas converges towards the luminance of the source image. Every region of the image thus maintains luminance fidelity.

Figure 4.2: Luminance difference map computed at three iterations of the printing process. (a) Early, (b) middle, (c) near the end of the process. Light areas denote regions of high luminance difference.
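A minimal sketch of this ordering step is given below; it assumes NumPy and SciPy, luminance values in [0, 1], and a Gaussian whose width is tied to a hypothetical print_size parameter, so it illustrates the idea rather than our exact implementation.

# Hedged sketch: smoothed luminance-difference map and ordered candidate list L.
import numpy as np
from scipy.ndimage import gaussian_filter

def candidate_list(source_lum, canvas_lum, print_size):
    """Return (x, y) positions sorted by decreasing smoothed luminance difference."""
    diff = canvas_lum - source_lum                        # positive where the canvas is still too light
    diff = gaussian_filter(diff, sigma=print_size / 2.0)  # kernel roughly the thumbprint size
    order = np.argsort(diff, axis=None)[::-1]             # largest difference first
    ys, xs = np.unravel_index(order, diff.shape)
    return list(zip(xs.tolist(), ys.tolist()))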

Selecting a pixel ψ at location (ψ_x, ψ_y) of largest luminance difference value does not, however, signify that we should necessarily apply a print at that location. It could be that for all other pixels surrounding ψ, the working canvas has already reached the average luminance of the source image, and so the difference at ψ is imperceptible due to spatial integration. We must therefore evaluate pixel ψ and its surroundings to determine whether a print P_ψ centered at ψ is necessary.

The size of the area we survey to determine whether P_ψ is a valid print is relative to the size of the print to apply. Thus, for the rectangle determined by (x_1, y_1) and (x_2, y_2) that bounds a print P_ψ, we define its area luminance on an image I as the average luminance over that rectangle:

AreaLuminance(P_ψ, I) = ( Σ_{x=x_1..x_2} Σ_{y=y_1..y_2} I(x, y) ) / ( (x_2 - x_1 + 1)(y_2 - y_1 + 1) )    (4.1)

To determine whether P_ψ is a valid print position, we evaluate the area luminance of P_ψ on the source image S (AreaLuminance(P_ψ, S)) and on the working canvas W (AreaLuminance(P_ψ, W)). Finally, we verify the necessity of placing a print at P_ψ by evaluating Inequality 4.2.

AreaLuminance(P_ψ, W) - AreaLuminance(P_ψ, S) > ε    (4.2)

If Inequality 4.2 is false, then we reject P_ψ, remove it from list L (for optimization purposes), and evaluate the next candidate pixel. Otherwise, we apply a print centered at (ψ_x, ψ_y). Algorithm 4.4 demonstrates the procedure for determining the location where we should apply a print on a working canvas W, given the source image S.

Algorithm 4.4 Print Placement
1: Compute luminance difference map
2: Create list L of pixels ordered by highest luminance difference
3: while !empty(L) do
4:   P_ψ ← pop(L)
5:   if AreaLuminance(P_ψ, W) - AreaLuminance(P_ψ, S) > ε then
6:     Print fingermark centered at (ψ_x, ψ_y)
7:     Break
8:   end if
9: end while
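For reference, a hedged Python sketch of Algorithm 4.4 follows; it assumes NumPy, luminance values in [0, 1], and a hypothetical stamp_print routine standing in for the texture-stamping step, so it is an illustration of the placement test rather than our actual implementation.

# Hedged sketch of Algorithm 4.4: pop candidates until Inequality 4.2 holds, then print.
import numpy as np

def area_luminance(img, x, y, half):
    """Average luminance over the print-sized window centred at (x, y)  (Equation 4.1)."""
    h, w = img.shape
    x1, x2 = max(0, x - half), min(w, x + half + 1)
    y1, y2 = max(0, y - half), min(h, y + half + 1)
    return float(img[y1:y2, x1:x2].mean())

def place_one_print(source, canvas, candidates, half, stamp_print, eps=0.10):
    """Pop candidates from L until one satisfies Inequality 4.2; stamp a fingermark there."""
    while candidates:
        x, y = candidates.pop(0)
        if area_luminance(canvas, x, y, half) - area_luminance(source, x, y, half) > eps:
            stamp_print(canvas, x, y)   # darken the working canvas with the fingermark texture
            return (x, y)
    return None                         # no valid position remains in this region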

When applying prints on the working canvas W, the values of the luminance map only change in regions that are darkened. In order to avoid recomputing the luminance difference map for every print that is applied, we grid W into tiles of size roughly equivalent to the fingerprint texture and perform Algorithm 4.4 over each tile instead of over the entire image. By proceeding in this manner, we increase the performance of the system and ensure an even distribution of marks over the canvas. The iterative addition of fingermarks is stopped when each tile is within ε of the reference image luminance. Figure 4.3 shows a luminance ramp using the placement technique we have just described.
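The sketch below illustrates the corresponding per-tile bookkeeping under the same assumptions (NumPy arrays, luminance in [0, 1], a tile size roughly equal to the fingermark, and the threshold ε of Inequality 4.2); Algorithm 4.4 would then be run only on the tiles it yields.

# Hedged sketch of the tiling scheme and stopping criterion (NumPy arrays assumed).
def tiles_to_darken(source_lum, canvas_lum, tile_size, eps=0.10):
    """Yield (x, y) tile origins whose mean luminance is still more than eps too light."""
    h, w = source_lum.shape
    for ty in range(0, h, tile_size):
        for tx in range(0, w, tile_size):
            s = source_lum[ty:ty + tile_size, tx:tx + tile_size]
            c = canvas_lum[ty:ty + tile_size, tx:tx + tile_size]
            if c.mean() - s.mean() > eps:   # tile not yet within eps of the source
                yield (tx, ty)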


Figure 4.3: (a) Luminance ramp. (b) Luminance ramp obtained using our print placement technique.

An immediate improvement to the method described is to rotate the stroke such that fingerprints are aligned with edges of the image. By doing so, we augment the suggestion of lines present in the reference image and provide our visual system with better cues to perform shape-from-texture. Systems such as [7, 26] allow the user to specify the orientation of strokes. However, since we seek to minimize human interaction in our work, we use an automated method similar to the one described by Litwinowicz [14] to determine the orientation of fingermarks.

To determine the orientation map, which specifies the orientation value for all pixels ψ, we compute the vertical and horizontal derivatives of our source image independently. We then compute the orientation at (ψ_x, ψ_y) by taking the arc-tangent of the current pixel's vertical component over its horizontal one. Because the vertical and horizontal derivatives have near-zero values in uniform regions of the source image, an orientation cannot be determined at those locations, and so we discard such orientations from our orientation map. Additionally, we also discard gradient vectors whose magnitude is below a set threshold, since the orientation they provide is not valuable. Because we want prints in regions of uniform luminance to have an orientation that agrees with neighbouring prints, we perform a fire-front interpolation between all orientations that remain. Figure 4.4 demonstrates the result of this technique.
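A minimal sketch of this computation is given below, assuming NumPy; the magnitude threshold is illustrative, and the fire-front interpolation over discarded pixels is omitted.

# Hedged sketch of the orientation map from image derivatives (NumPy assumed).
import numpy as np

def orientation_map(source_lum, mag_thresh=0.05):
    """Per-pixel orientation (arctangent of vertical over horizontal derivative) and validity mask."""
    gy, gx = np.gradient(source_lum)        # vertical and horizontal derivatives
    theta = np.arctan2(gy, gx)              # orientation at each pixel
    valid = np.hypot(gx, gy) > mag_thresh   # discard near-zero / weak gradients
    theta[~valid] = np.nan                  # to be filled by fire-front interpolation
    return theta, valid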


Figure 4.4: (a) Source image. Prints placed without orientation (b), and with orientation (c). By orienting prints along edges, we provide better cues to the visual system to perform shape-from-texture.

Now that we have established how to determine print location and orientation, we address the issue of print darkness (rule 2). In systems such as [8, 3], strokes are simply texture mapped with a constant tone throughout the entire image, relying instead on the density of strokes to affect luminance. By rule 2, however, we would like to apply prints of varying tones. This is easily performed at run-time by modulating our texture with grey-scale values of increasing darkness. In our system, we start with light strokes and print progressively darker ones. Proceeding in this order favours the use of light prints in place of dark ones. Figure 4.5 shows snapshots of the working canvas at various time intervals, using the thumbprint texture of Figure 4.6.
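The sketch below shows one plausible realisation of this modulation; it assumes the fingermark texture is an ink-density map in [0, 1], that prints only ever darken the canvas, that twenty tone steps are used, and that border clipping is ignored, all of which are simplifying assumptions rather than a description of our exact compositing.

# Hedged sketch of tone-modulated stamping: light prints first, darker ones later.
import numpy as np

def darkness_schedule(n_steps=20, lightest=0.85, darkest=0.0):
    """Greyscale print tones, from light to maximal darkness (0 = black)."""
    return np.linspace(lightest, darkest, n_steps)

def stamp(canvas, texture_alpha, x, y, tone):
    """Composite one fingermark of the given tone onto the canvas at (x, y); prints only darken."""
    th, tw = texture_alpha.shape
    region = canvas[y:y + th, x:x + tw]
    blended = tone * texture_alpha + region * (1.0 - texture_alpha)
    region[:] = np.minimum(region, blended)   # never lighten what is already darker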


Figure 4.5: Progression of image rendition after (a) 5, (b) 10, (c) 15 time steps. The final image takes 20 iterations.

When this iterative process has ended, we complete our tonal approximation by applying prints of maximal darkness until no more prints can be placed without breaking the luminance constraint of Inequality 4.2. Figure 4.7 shows results obtained by combining the position, orientation and print darkness techniques we have described.

Figure 4.6: Our fingerprint texture.


Figure 4.7: (a) Source image. (b) Result obtained using the position, orientation and print darkness techniques of section 4.2.1.

4.2.2 Precision Map

In Close's fingerprint portraits, we notice that the amount of detail varies across the image. Regions that require little detail are rendered with large fingermarks. Regions of high detail, on the other hand, such as the eyes, the mouth and, to a certain degree, the person's hair, are rendered with smaller prints. By using smaller prints, Close increases realism. In order to emulate these variations in fingermark size that reflect the desired level of detail, we introduce the notion of a precision map. The precision map reveals regions that require more detail. Moreover, it also includes salient edges of the image, in particular those that compose the person's silhouette. The precision map is thus introduced to simultaneously satisfy rules 3 and 4.

By close observation, we notice that regions requiring a higher level of detail correspond to regions of high edge density. Thus, we can easily derive a precision map based on the edge map. For each pixel in the precision map, its value should be proportional to the number of edge pixels that surround it. The precision map can therefore be thought of as the convolution of a Gaussian kernel over the edge map.

In our system, the precision map is derived differently. Instead of computing the edge density around each pixel, for ease of implementation we take the opposite approach and compute the influence of each edge pixel with respect to its surrounding neighbours. For every pixel that contains an edge, we print a texture centered at its location. The result is a functionally equivalent map, in which regions of high edge density correspond to a high level of detail in the precision map.
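A minimal sketch of the Gaussian-convolution view of the precision map is shown below, assuming SciPy (our system instead stamps a small texture at every edge pixel, which is functionally equivalent); the kernel width is illustrative.

# Hedged sketch of the precision map: blurred edge density, inverted so that
# 0 = high detail and 1 = low detail (SciPy assumed).
import numpy as np
from scipy.ndimage import gaussian_filter

def precision_map(edge_map, print_size):
    density = gaussian_filter(edge_map.astype(float), sigma=print_size / 2.0)
    density /= max(float(density.max()), 1e-6)   # normalise edge density to [0, 1]
    return 1.0 - density                         # high edge density -> low precision value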

Each pixel ψ in the precision map M possesses a detail value in the range [0, 1], where 1 means the lowest level of detail and 0 the highest. The size of a print P_ψ is then determined by Equation 4.3.

P_ψ,size = 0.5 + (M_ψ · 0.5)    (4.3)

A standard print (low detail) therefore has a scale factor of 1, whereas in regions that require more detail (i.e., a precision value of 0), the print is scaled to half its size.

In our system, we uniformly scale the size of the texture for simplicity. It would be just as easy to create a set of strokes of varying sizes. The fingerprint library could include, for instance, textures of a thumb, index and pinky finger.
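In code, Equation 4.3 reduces to a one-line scaling of the default print size; the sketch below assumes a NumPy precision map indexed as [y, x] and a hypothetical base_size parameter giving the default fingermark size in pixels.

def print_size_at(precision_map, x, y, base_size):
    """Equation 4.3: full-size prints in low-detail regions, half-size where detail is needed."""
    return base_size * (0.5 + 0.5 * precision_map[y, x])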

Since our precision map is based on the premise that edge density correlates with the desired level of detail, an alternative way to generate it would be to transform the source image into the spectral domain via a wavelet transform and determine regions of high frequencies. Although it remains to be seen whether the automated precision map creation method described here performs as well on non-portrait images, it perfectly suits our needs in the mimicry of Close's fingerprint style.

Having addressed all rules of Close's fingerprint style, we state the final steps to produce a fingerprinted portrait in Algorithm 4.5.

Algorithm 4.5 Fingerprint Algorithm
1: Grid off the source image
2: Generate edge map
3: Generate orientation map O
4: Generate precision map M
5: for progressively darker tones do
6:   Compute luminance difference map
7:   for all grid cells do
8:     Create list L of pixels ordered by highest luminance difference
9:     while !empty(L) do
10:      P_ψ ← pop(L)
11:      if AreaLuminance(P_ψ, W) - AreaLuminance(P_ψ, S) > ε then
12:        Print fingermark centered at (ψ_x, ψ_y) with orientation O_ψ and size M_ψ
13:        Break
14:      end if
15:    end while
16:  end for
17: end for
18: for all grid cells do
19:   repeat
20:     Perform Algorithm 4.4
21:   until Luminance fidelity is within ε
22: end for

4.3 Experimental Results & Discussion

In this section, we present experimental results based on the technique described in section 4.2. We explore the significance and effect of certain key parameters of the system and discuss adjustments that can be made to improve the mimicry of Close's fingerprint style.

The first images we show are direct results from the algorithm described in the previous section. The two destination images were rendered with the exact same parameter values. Run time varies between five and ten minutes, depending on image and texture size. The stylized rendition of the Arno river (Figure 4.8 (d)) is particularly interesting because it reveals that our assumption that image precision correlates with edge density also holds for images other than portraits. Should the assumption fail for other kinds of images, the automated method we have developed can still serve as a starting point for the user to adjust.

The system we present allows the user to control two important parameters: the default size of the fingermark texture and the threshold ε that determines luminance fidelity. We first discuss the selection of texture size. In his renditions of portraits using his fingerprint style, Chuck Close has used canvases of different sizes, some twice as large as others. Because the size of his thumbprint is fixed, using a larger canvas makes the marks he lays smaller relative to the size of the image. Since the size of the reference image is fixed in our filter, we can simulate a similar effect by scaling the size of the thumbprint texture up or down. Figure 4.9 demonstrates renditions in which we have varied the size of the texture. Similar to the effect of varying grid size in Close's blobby style, as the size of the mark decreases, the portrait becomes more realistic. The use of smaller textures results in a higher level of detail across the image, and luminance fidelity is more accurate. These assets, however, come at the cost of artistic interest, since it becomes less obvious that the portrait is composed of fingermarks. One must therefore carefully balance image fidelity and artistic expression. In our experiments, we have found that thumbprints that were about the size of the iris provided the most interesting results.


Figure 4.8: (a) and (c) Source images. (b) and (d) Results rendered using our complete fingerprint filter.


Figure 4.9: Variations in the size of the default texture. As we decrease the size of the fingermark, the image becomes more realistic at the expense of artistic appeal.

In addition to controlling the size of the texture, the user can also determine the value of the threshold ε from Inequality 4.2. If ε has a large value, several print candidates are rejected by Inequality 4.2. As we decrease the value of ε, more prints are added to the canvas and we obtain higher luminance fidelity. Figure 4.10 demonstrates results for decreasing values of ε. In the experiments we have performed, an ε value of 0.10 was found to be most satisfactory.

Finally, we discuss two issues of Close's fingerprint portraits that cannot be addressed by our filter. Professional photographers know that when viewing a portrait we instinctively look at the specular highlight present in the eye to determine the location of light sources.¹ We feel this highlight adds to the photorealism of the painting when viewed from a distance. Since the specular highlight is typically much smaller than the size of a fingermark, a print placed in the region of the eye will often cover the location of the specular highlight. To avoid this problem, we could manually insert highlights into the resulting images. However, we have not done so in any of the results presented in this thesis.

A peculiar aspect of Close's fingerprint portraits is the unusual lighting in his reference photos. Unlike a traditional photographic portrait, in which a single spotlight shines directly on the face of the subject, Close seems to favour photos in which two light sources are directed towards the cheeks of the subject (Figure 4.1). The central part of the figure is then left darker than its sides. If we were to view photos with such unusual lighting, the effect would be striking, but when viewing these fingerprint portraits, our attention is drawn to the duality of the image and we seem to forget about the unnatural lighting of the portrait. Unless the source photograph captures the particular lighting Close favours, it is not currently possible for our system to simulate illumination from the sides.

¹ Soukia Savage, personal communication, Montreal, 2004.



Figure 4.10: Variations in luminance threshold ε. As we decrease ε, more prints are added to the canvas and we obtain higher luminance fidelity.

4.3.1 Beyond the Fingerprint Style

Using the work we have previously developed, we introduce enhancements that allow the creation of stylized renderings that go beyond Close's fingerprint style.

A first enhancement we perform is to add colour to our rendered images. One could think of it as printing the fingermarks using different ink colours. In order to determine the appropriate colour to use, we simply sample the colour of the source image. At run-time, we modulate the colour of the thumbprint texture accordingly.

Figure 4.11 shows results as presented previously and with the addition of colour.
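A hedged sketch of this colour modulation follows; it assumes NumPy, treats the fingermark texture as an ink-density map, samples the source colour under the print's anchor pixel, and omits border clipping, so it illustrates the idea rather than our exact implementation.

# Hedged sketch: tint each fingermark with the colour sampled from the source image.
import numpy as np

def stamp_colour(canvas_rgb, texture_alpha, source_rgb, x, y):
    """Composite one fingermark at (x, y), coloured by the source pixel at that anchor."""
    colour = source_rgb[y, x].astype(float)            # sampled ink colour
    th, tw = texture_alpha.shape
    region = canvas_rgb[y:y + th, x:x + tw].astype(float)
    alpha = texture_alpha[..., None]                   # broadcast over the RGB channels
    blended = alpha * colour + (1.0 - alpha) * region
    canvas_rgb[y:y + th, x:x + tw] = blended.astype(canvas_rgb.dtype)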

In addition to creating colour renditions, we can also generate images that are based on textures other than fingermarks. By performing a procedure similar to the one described in section 4.2, but replacing the fingerprint texture with another one, we can obtain new types of images that are also visually appealing. Because the algorithm we perform to create these new images is similar to Close's fingerprint style, the effect of layered imagery is preserved and luminance fidelity is maintained.


Figure 4.11: (a) Rendering without colour. (b) Rendering with colour.

Figure 4.12 shows a few results obtained using this method. As can be noted, although the source image is the same, the end results are quite different from Close's fingerprint style.


Figure 4.12: Novel stylised renderings created by using the textures shown in inset.

The analysis and implementation of Close's fingerprint style and the new extensions we have presented are, to our knowledge, a unique contribution of this thesis.

CHAPTER 5

Conclusion

In this thesis, we explored some of the salient components of layered imagery used in the portraiture styles of Chuck Close. Portraits that possess qualities of layered imagery exhibit a visual duality that is affected by the distance of the viewer from the canvas. From a distance, the painting appears as an integrated whole, but as one approaches, the figure of the portrait disintegrates into atomic elements that stand on their own. The components that form the portrait can add to the visual interest of the painting or introduce a new level of semantic significance.

Based on Palmer's general framework of visual processing [22], we discussed some of the principles of visual perception that permit the effects of layered imagery.

We observed that for a portrait to successfully convey the figure of a person, the elements of the image must provide cues to facilitate shape-from-shading and shape-from-texture. By doing so, the visual system can perform a more effective transition from image-based to surface-based representation.

We then emphasized the differences, as noted by Livingstone, between the chrominance and luminance pathways [15]. More specifically, we highlighted the significance of luminance in the perception of "shape, texture, and line" and noted that artists have long known of this crucial distinction. We then set out to create stylistic renditions of 2D images based on the assumption that, if luminance fidelity was sufficiently maintained, we could exploit other aspects of the image to create the effect of layered imagery.

Taking inspiration from renowned American artist Chuck Close, we developed systems that emulate two of Close's famous painting styles. We first explored Close's blobby style. We observed features unique to his portraits and used these observations as guiding principles in the creation of our blobby image filter. In particular, we focussed our attention on shape and join assignment. Although our work cannot aspire to replace Close's brilliant brushstroke, we demonstrated renderings that bear significant resemblance to Close's paintings. Inspired by Close's blobby style and photomosaics, we also introduced new types of image mosaics that require small texture libraries and are quick to render.

In addition to Close's blobby paintings, we also created a filter that emulates his fingerprint style. Similar to the approach we previously took, we carefully observed his portraits and distilled rules that guided us in the creation of our filter. We first designed a system that automatically places and orients prints according to varying tone levels, being careful to maintain luminance fidelity. We then refined this technique by introducing an automated method to generate precision maps.

By specifying the size of fingerprints, precision maps determine regions that require increased realism. Finally, we presented new stylized renderings based on a similar approach, but this time by making use of colours and new textures.

The results we presented in this thesis are similar in spirit to Close's own paintings, but enhancements could be made to improve our emulation of his work. In Close's blobby style, for instance, the shapes he employs in regions of facial features closely follow the contours of the mouth, nose and eyes. This effect is most noticeable around the iris, where circular shapes appear warped such that the arrangement of several neighbouring cells takes the shape of the iris. This effect cannot be captured by our system since we do not treat facial features separately. "Vee" joins, as discussed in section 3.1, have also not been addressed. These adjustments, though minor, would improve our mimicry of Close's work.

In order to improve our renderings of Close's fingerprint style, we could employ a small library of textures consisting of several fingermarks. The shape of each print would thus appear less regular than in the results we presented. Another intriguing idea to explore would be to vary luminance fidelity across the image, a process akin to what we have done with the size of fingermarks. It may be that in regions that are less significant to the recognition of the face, we can safely decrease luminance fidelity to create additional visual effects.

Using the basic principles of layered imagery discussed in this thesis, one can invent several more rendering styles. As is the case with traditional artists, researchers in NPR are not necessarily constrained by the restrictions of realism. Ultimately, the creation of interesting and intriguing visual compositions that exhibit qualities of layered imagery is only limited by our imagination.

Contributions of Authors

The authors of Isoluminant Color Picking for Non-Photorealistic Rendering [16] shared the research work presented in the paper. Allison Klein and Jason Lawrence were involved in theoretical research and helped write the paper. Ankush Seth and this author shared the work of implementation and production of results. Ankush Seth was responsible for the development regarding colour selection. This author was responsible for the textural aspects of the "blobby" filter, the novel image mosaics and all development unrelated to the colour aspects of the paper.

References

[1] John F. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8(6):679-698, 1986.

[2] Doug DeCarlo and Anthony Santella. Stylization and abstraction of photographs. In SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pages 769-776. ACM Press, 2002.

[3] Fredo Durand, Victor Ostromoukhov, Mathieu Miller, Francois Duranleau, and Julie Dorsey. Decoupling strokes and high-level attributes for interactive traditional drawing. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pages 71-82. Springer-Verlag, 2001.

[4] Adam Finkelstein and Marisa Range. Image mosaics. In EP '98/RIDT '98: Proceedings of the 7th International Conference on Electronic Publishing, Held Jointly with the 4th International Conference on Raster Imaging and Digital Typography, pages 11-22. Springer-Verlag, 1998.

[5] James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice, 2nd ed. in C. Addison-Wesley, 1997.

[6] E. Bruce Goldstein. Sensation and Perception. Brooks/Cole Publishing, 1999.

[7] Paul Haeberli. Paint by numbers: abstract image representations. In SIGGRAPH '90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 207-214. ACM Press, 1990.

[8] Aaron Hertzmann. Painterly rendering with curved brush strokes of multiple sizes. In SIGGRAPH '98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 453-460. ACM Press, 1998.

[9] Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. Image analogies. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 327-340. ACM Press, 2001.

[10] Charles E. Jacobs, Adam Finkelstein, and David H. Salesin. Fast multiresolution image querying. In SIGGRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 277-286. ACM Press, 1995.

[11] Martin Kemp. The science of art: optical themes in western art from Brunelleschi to Seurat. Yale University Press, 1990.

[12] Michael A. Kowalski, Lee Markosian, J. D. Northrup, Lubomir Bourdev, Ronen Barzel, Loring S. Holden, and John F. Hughes. Art-based rendering of fur, grass, and trees. In SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 433-438, New York, NY, USA, 1999. ACM Press/Addison-Wesley Publishing Co.

[13] Ellen Wardwell Lee. The aura of neo-impressionism: the W.J. Holliday Collection. Indianapolis Museum of Art, 1983.

[14] Peter Litwinowicz. Processing images and video for an impressionist effect. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 407-414. ACM Press/Addison-Wesley Publishing Co., 1997.

[15] Margaret Livingstone. Vision and Art: The Biology of Seeing. Harry N. Abrams, 2002.

[16] Tran-Quan Luong, Ankush Seth, Allison Klein, and Jason Lawrence. Isoluminant color picking for non-photorealistic rendering. In Graphics Interface, 2005.

[17] David Marr. Vision: a computational investigation into the human representation and processing of visual information. W.H. Freeman, 1982.

[18] Barbara J. Meier. Painterly rendering for animation. In SIGGRAPH '96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 477-484, New York, NY, USA, 1996. ACM Press.

[19] James R. Mellow. 'Largest' mezzotint, by Close, shown. New York Times, January 13, 1973.

[20] Victor Ostromoukhov. Digital facial engraving. In SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 417-424. ACM Press/Addison-Wesley Publishing Co., 1999.

[21] Victor Ostromoukhov and Roger D. Hersch. Artistic screening. In SIGGRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 219-228. ACM Press, 1995.

[22] Stephen E. Palmer. Vision science: photons to phenomenology. MIT Press, 1999.

[23] Denis G. Pelli. Close encounters - an artist shows that size affects shape. Science, 285:844-846, 1999.

[24] V. S. Ramachandran, D. Ruskin, S. Cobb, D. Rogers-Ramachandran, and C. W. Tyler. On the perception of illusory contours. Vision Research, 34(23):3145-3152, 1994.

[25] Michael P. Salisbury, Sean E. Anderson, Ronen Barzel, and David H. Salesin. Interactive pen-and-ink illustration. In SIGGRAPH '94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 101-108. ACM Press, 1994.

[26] Michael P. Salisbury, Michael T. Wong, John F. Hughes, and David H. Salesin. Orientable textures for image-based pen-and-ink illustration. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 401-406. ACM Press/Addison-Wesley Publishing Co., 1997.

[27] Ankush Seth. Isoluminant color picking and its applications. Master's thesis, McGill University, 2005.

[28] Robert Silvers and Michael Hawley. Photomosaics. Henry Holt and Co., Inc., 1997.

[29] Robert Storr. Chuck Close. Museum of Modern Art, 1998.

[30] Thomas Strothotte and Stefan Schlechtweg. Non-photorealistic computer graphics: modeling, rendering, and animation. Morgan Kaufmann Publishers Inc., 2002.

[31] Terrie Sultan. Chuck Close Prints: Process and Collaboration. Princeton University Press, 2003.

[32] Giorgio Vasari. Lives of the painters, sculptors, and architects. Everyman's Library. Alfred A. Knopf, Inc., 1996.

[33] Oleg Veryovka and John W. Buchanan. Halftoning with image-based dither screens. Graphics Interface '99, pages 167-174, June 1999.

[34] Georges Winkenbach and David H. Salesin. Computer-generated pen-and-ink illustration. In SIGGRAPH '94: Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 91-100. ACM Press, 1994.

[35] Georges Winkenbach and David H. Salesin. Rendering parametric surfaces in pen and ink. In SIGGRAPH '96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 469-476. ACM Press, 1996.