Art and Science in Visualization

Victoria Interrante

1 Introduction

Visualization research and development involves the design, implementation and evaluation of techniques for creating images that facilitate the understanding of a set of data. The first step in this process, visualization design, involves defining an appropriate representational approach, determining the vision of what one wants to achieve. Implementation involves deriving the methods necessary to realize the intended results – developing the algorithms required to create the desired visual representation. Evaluation, or the objective assessment of the impact of specific characteristics of the visualization on application-relevant task performance, is useful not only to quantify the usefulness of a particular technique but, more powerfully, to provide insight into the means by which a technique achieves its success, thus contributing to the foundation of knowledge upon which we can draw to create yet more effective visualizations in the future. In this chapter, I will discuss the art and science of visualization design and evaluation, illustrated with case study examples from my research. For each application, I will describe how inspiration from art and insight from visual perception can provide guidance for the development of promising approaches to the targeted visualization problems. As appropriate I will include relevant details of the algorithms developed to achieve the referenced implementations, and where studies have been done, I will discuss their findings and the implications for future directions of work.

1.1 Seeking Inspiration for Visualization from Art and Design

Visualization design, from the creation of specific effective visual representations for particular sets of data to the conceptualization of new, more effective paradigms for information representation in general, is a process that has the characteristics of both an art and a science. General approaches to achieving visualizations that ‘work’ are not yet straightforward or well-defined, yet there are objective metrics that we can use to determine the success of any particular visualization solution. In this section I will discuss ways in which the practices and products of artists and designers can help provide inspiration and guidance to our efforts to develop new, more effective methods for communicating information through images.

Design, as traditionally practiced, is a highly integrative activity that involves distilling ideas and concepts from a great variety of disparate sources and assembling them into a concrete form that fulfills a complex set of objectives. It is an inherently creative process that defies explicit prediction or definition yet whose results are readily amenable to comprehensive evaluation. Across disciplines, from graphic arts to architecture, the art of design is primarily learnt through practice and review and the careful critical study of work by others, and expertise is built up from the lifelong experience of “training one’s eyes”. Providing a “good environment for design” is critical to enabling and facilitating the process of design conceptualization. Creative insights are difficult to come by in a vacuum – designers typically surround themselves in their work area with sketches, images, models, references, and other material that has the potential to both directly and indirectly provide inspiration and guidance to the task at hand. In addition, designers rely heavily on the ability to quickly try out ideas, abstracting, sketching out and contemplating multiple alternative approaches before settling upon a particular design approach. In visualization research, we can take a similar approach to the problem of design conceptualization – drawing inspiration from the work of others and from the physical world around us, and experimenting with new combinations and variants of existing techniques for mapping data to images. We can also benefit from establishing fertile design environments that provide rich support for design conceptualization and varied opportunities for rapid experimentation. Finally, it can sometimes be useful to work with traditional materials to create approximate mockups of potential design approaches that allow one to preview ideas to avoid investing substantial effort in software development in what ultimately prove to be unproductive directions.

Turning now from the process of design to the product: in a wide variety of fields, from art to journalism to landscape architecture, there is a long history of research in visual literacy and visual communication through drawings, paintings, photographs, sculpture and other traditional physical media that we have the potential to learn from and use in developing new visualization approaches and methods to meet the needs of our own specific situations.

In computer graphics and visualization, as in art, we have complete control not only over what to show but also over how to show it. Even when we are determined to aim for a perfectly photorealistic representation of a specified model, as in photography we have control over multiple variables that combine to define the ‘setting of the scene’ that creates the most effective result. In many cases this includes not only selecting the viewpoint and defining the field of view, setting up the lighting, and determining the composition of the environment, but also extends to choosing the material properties of the surfaces of the objects that we wish to portray. For practical reasons of computational efficiency, or because of the limitations of available rendering systems, we often choose to employ simplified models of lighting and shading, which can also be considered a design decision. In addition, we may choose to use non-physically-based ‘artificial’ or ‘artistic’ enhancement to emphasize particular features in our data, and we may selectively edit the data to remove or distort portions of the model to achieve specific effects.

Through these means we have the potential to interpret physical reality, to distill the essential components of a scene, accentuate the important information, minimize the secondary details, and hierarchically guide the attentional focus. In different media, different methods are used to draw the eye to or away from specific elements in an image, and in each medium different styles of representation can be used to evoke different connotations of meaning.

When seeking to develop algorithms to generate simplified representations of data or models, it’s useful to consider where artists tend to take license with reality. Artists have similar motivations to avoid the tedium and difficulty of accurately representing every detail in a photorealistic manner, but at the same time they need to indicate enough detail, with enough fidelity, to meet the expectations of the viewer and communicate the subject effectively. Numerous texts on methods of illustration present various artists’ insights on this subject [e.g. Loomis, Watson, Guptill, McCloud]. Vision scientists have also considered this question, from the point of view of seeking to understand how the brain processes various aspects of visual input, and it is interesting to note the connection between the findings in perception and common practices in artistic representation. For example, people have been found to be highly insensitive to the colors of shadows [Cavanagh], willing to interpret as shadows any patches whose luminance and general geometry are consistent with that interpretation, regardless of hue, and broadly tolerant of inconsistencies among shadows cast from different objects in a scene [Ostrovsky et al.], despite the significant role that cast shadows play in indicating objects’ positions in space [Kersten]. Although there is much about the processes of vision and perception that remains unknown, research in visual perception has the potential to make explicit some of the intuition that artists rely upon to create images that ‘work’.

1.2 Drawing Insight for Visualization Design from Research in Visual Perception

In addition to seeking inspiration from art for the design of effective methods for conveying information through images, it is possible to use fundamental findings in human visual perception to gain insight into the ‘science behind the art’ of creating successful visual representations. This is valuable because, while informal inspection can often (though not always) reveal how well a particular visualization meets the needs of a specific application, or support a comparative assessment of alternative visualization solutions for a particular problem, it is much less straightforward to achieve a comprehensive understanding of the reasons that one approach is more successful than another, and more difficult still to uncover the theoretical basis for why certain classes of approaches are likely to yield better results than others. With a fundamental understanding of the strengths, weaknesses, abilities, limitations and basic functional mechanisms of the human visual system, we are better equipped to predict what sorts of approaches are likely to work and which are not, which can be of immense benefit in guiding our research efforts in the most promising directions and helping us avoid dead ends.

Mining the vision research literature for insights into a particular visualization problem can be a daunting task. The field of visual perception is broad and deep and has a very long and rich history, with research from decades past remaining highly relevant today. The application domains targeted in visualization are typically far more complex than the carefully controlled domains used in perception studies, and extreme caution must be exercised in hazarding to generalize or extrapolate from particular findings obtained under specific, restricted conditions. Also, the goal in vision research – to understand how the brain derives understanding from visual input – is not quite the same as the goal in visualization – to determine how best to portray a set of data so that the information it contains can be accurately and efficiently understood. Thus it is seldom possible to obtain comprehensive answers to visualization questions from just a few key articles in the vision/perception literature but is more often necessary to distill insights and understanding from multiple previous findings, and then supplement this knowledge through additional studies.

Vision scientists and visualization researchers have much to gain from successful collaboration in areas of mutual interest. Through joint research efforts, interdisciplinary teams of computer scientists and psychologists have already begun to find valuable answers to important questions in computer graphics and visualization, such as determining the extent to which accurately modeling various illumination and shading phenomena, such as cast shadows and diffuse interreflections, can facilitate a viewer’s interpretation of spatial layout [Hu, Madison]. In the remainder of this chapter I will describe in more concrete detail the application of inspiration from art and insights from visual perception to visualization design and evaluation in the context of selected case study examples from my research.

2 Case Study 1: Effectively Portraying Dense Collections of Overlapping Lines

Numerous applications in fields such as aerospace engineering involve the computation and analysis of three-dimensional vector fields. Developing techniques to effectively visualize such data presents significant challenges. One promising approach has been to provide insight into portions of the data using bundles of streamlines. However, special steps must be taken to maintain the legibility of the display as the number of streamlines grows. Stalling et al. [97] proposed an elegant technique, based on principles introduced by Banks [94], for differentially shading one-dimensional streamlines along their length according to the local orientation of the line with respect to the view direction and a specified light source. Still, the problem of preventing the local visual coalescing of similarly oriented adjacent or proximate overlapping lines remained. Attempts to visually segregate the individual lines by rendering them in a spectrum of different colors rather than in the same shade of grey met with only partial success, because it was still difficult to accurately and intuitively appreciate the existence and the magnitude of the depth separation between overlapping elements in the projected view. The problem is that, in the absence of indications to the contrary, objects are generally perceived to lie on the background over which they are superimposed [Gibson]. In some situations, cast shadows can be used to help disambiguate depth distance from height above the ground plane [Yonas], but this technique is most effective when applied to simple configurations in which individual objects can be readily associated with their shadows. A more robust solution to the problem of effectively portraying clusters of overlapping and intertwining lines (figure 1) is inspired by examples from art, and explained by research in visual perception.

Figure 1: Two overlapping lines of roughly equivalent luminance but differing hue, shown in three subtly different depictions.
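The line-shading principle referenced above can be summarized in a minimal sketch: since a one-dimensional line has no unique surface normal, the normal used for shading is taken, within the plane of normals of the line, to be the vector that maximizes each lighting term, which yields closed-form diffuse and specular factors in terms of the dot products of the tangent with the light and view vectors. The function name, coefficients and exponent below are illustrative assumptions, not the published implementation.

    import numpy as np

    def illuminated_line_shading(T, L, V, ka=0.1, kd=0.6, ks=0.3, n=32):
        # T: unit line tangent; L: unit light direction; V: unit view
        # direction.  The maximizing normal gives L.N = sqrt(1-(L.T)^2)
        # and a specular cosine expressible from L.T and V.T alone.
        LT, VT = np.dot(L, T), np.dot(V, T)
        diffuse = np.sqrt(max(0.0, 1.0 - LT * LT))              # max of L.N
        spec_cos = diffuse * np.sqrt(max(0.0, 1.0 - VT * VT)) - LT * VT
        specular = max(0.0, spec_cos) ** n                      # max of (V.R)^n
        return ka + kd * diffuse + ks * specular

    # Example: a line running along x, lit and viewed from straight above
    print(illuminated_line_shading(np.array([1.0, 0.0, 0.0]),
                                   np.array([0.0, 0.0, 1.0]),
                                   np.array([0.0, 0.0, 1.0])))  # -> 1.0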

For thousands of years, artists have used small gaps adjacent to occluding edges as a visual device to indicate the fact that one surface should be understood to be passing behind another. This convention can be observed in visual representations dating as far back as the Paleolithic paintings within the caves of Lascaux (figure 2).

Figure 2: Since prehistoric times, artists have used gaps to indicate the passing of one surface behind another, as shown in this photograph taken in a Smithsonian museum exhibit reproducing the creation of the second ‘Chinese Horse’ in the Painted Gallery of the cave of Lascaux. Photo by Tomás Filsinger.

Recent research in visual perception [Nakayama] helps explain why the technique of introducing gaps to indicate occlusion is so effective. In ordinary binocular vision, when we view one surface in front of another using both of our eyes, we see different portions of the farther surface occluded by the nearer surface in the views from each eye (figure 3). The perception of these inter-ocularly unpaired regions, which Leonardo da Vinci likened to ‘shadows cast by lights centered at each eye’, has been shown to be interpreted by the visual system as indicating the presence of a disparity in depth between the two surfaces; the phenomenon is called ‘da Vinci stereopsis’ in deference to his early recognition of it, as reported by Wheatstone (1838).

Figure 3: When one surface is viewed in front of another using both eyes, a different portion of the more distant surface is occluded in the view from each eye. Psychologists have found evidence that the presence of these interocularly unpaired regions evokes a perception of depth disparity by our visual system [Nakayama and Shimojo].

The use of gaps to clarify depth ordering in computer-generated line drawings dates back to the earliest days of computer graphics – one of the first ‘haloed’ line drawing algorithms was presented by Appel et al. at SIGGRAPH in 1979. In 1997, we [Interrante and Grosch] developed an algorithm for using ‘visibility-impeding halos’ to clarify the depiction of what were effectively clusters of streamlines in volume-rendered images of 3D line integral convolution textures (figure 4). Our decision to take this approach, and the direction of the particular path that we followed in developing it, was guided both by the inspiration from examples in art and by the insights from research in visual perception.

Figure 4: A side-by-side comparison, with and without visibility-impeding halos, illustrating the effectiveness of this technique for indicating the presence of depth discontinuities and facilitating appreciation of the extent of depth disparities between overlapping lines in the 3D flow. Data courtesy of Dr. Chester Grosch.

Our implementation was based on a simple modification to the basic LIC algorithm [Cabral and Leedom, Stalling and Hege] allowing the efficient computation of a matched pair of streamline and surrounding halo textures. We automatically defined a subtle and smoothly continuous 3D visibility-impeding ‘halo’ region that fully enclosed each streamline in the original 3D texture by performing the LIC simultaneously over two input textures containing identically located spots of concentric sizes. Because the streamline tracing only had to be done once for the pair of volumes, the overhead associated with the creation of the halos was kept to a minimum. Halos were implemented, during raycasting volume rendering, by decreasing the contribution to the final image of any voxel encountered, after a halo was previously entered and subsequently exited, by an amount proportional to the largest previously encountered halo opacity. (Because each line was necessarily everywhere surrounded by its own halo, it was important to allow the voxels lying between the entrance and exit points of the first-encountered halo to be rendered in the normal fashion.) It should be noted that this particular implementation assumes a black background, and will fail to indicate the existence of depth discontinuities between lines that pass closely enough that their halos overlap in 3-space, even if the lines themselves do not actually intersect.
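The compositing rule just described can be sketched as follows. This is an illustrative reconstruction rather than the original code; the sample format, grayscale intensities, and the attenuation constant are assumptions made for brevity.

    def composite_ray_with_halos(samples, attenuation=0.9):
        # `samples` is a front-to-back list of (intensity, line_opacity,
        # halo_opacity) tuples along one ray.  Once the ray has entered
        # and then exited a halo region, every later sample's contribution
        # is reduced in proportion to the largest halo opacity seen so
        # far, so farther lines dim where they pass behind nearer ones.
        color_acc, alpha_acc = 0.0, 0.0
        in_halo, exited_halo, max_halo = False, False, 0.0
        for intensity, line_a, halo_a in samples:
            inside = halo_a > 0.0
            if in_halo and not inside:
                exited_halo = True          # the first halo has been traversed
            in_halo = inside
            if inside:
                max_halo = max(max_halo, halo_a)
            a = line_a
            if exited_halo:                 # dim everything lying farther back
                a *= 1.0 - attenuation * max_halo
            color_acc += (1.0 - alpha_acc) * a * intensity
            alpha_acc += (1.0 - alpha_acc) * a
            if alpha_acc > 0.99:            # early ray termination
                break
        return color_acc, alpha_acc

Note how the voxels between the entrance and exit of the first-encountered halo are composited normally, as required, since each line is enclosed by its own halo.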

3 Case Study 2: Using Feature Lines to Emphasize the Essential 3D Structure of a Form

The goal of an effective graphic representation is to facilitate understanding. When creating computer-generated visual representations of models or surfaces, we desire to portray the data in a way that allows its most important features to be easily, accurately and intuitively understood. In applications where the information that we need to effectively communicate is the 3D shape of a rendered surface, there is particular value in seeking both inspiration from the practices of artists and insight from research in visual perception.

3.1 Nonphotorealistic Rendering in Scientific Visualization

A photographic depiction will capture the exact appearance of an object as it is actually seen, with subtle, complex details of coloration and texture fully represented at the greatest possible level of detail and accuracy. Despite the option to use photography, however, a number of scientific disciplines have historically retained artists and illustrators to prepare hand-drawn images of objects of study [Ridgway, Loechel, Hodges]. One of the reasons for this is that in a drawing it is possible to create an idealized representation of a subject, in which structural or conceptual information is clarified in a way that may be difficult or impossible to achieve in even the best photograph. In some fields, such as cartography [Imhof], and archaeology [Addington, Dillon], specific stylizations are used to encode particular information about important features of the subject within the graphical representation.

When drawings are prepared for visualization purposes, in many cases they are created using photographs of the subject as reference. By comparing these drawings with the photographs from which they were derived, we can gain insight into the selective process used by the artist to identify and emphasize the important features in the data while minimizing the visual salience of extraneous detail. We can also observe at what points and in what respects the artist takes special care to remain faithful to details in the original depiction and where he chooses to exercise artistic license. Through these observations, and by consulting the relevant literature in the instruction of illustration techniques, we can derive inspiration for the development of visualization algorithms that aspire to achieve similar effects.

Searching for deeper insight into the process of determining how to design effective pictorial representations, it can be useful to carefully consider all aspects of the questions of not only how, but also why and when a hand-drawn interpretation of a subject is preferable to a photograph. In some fields, such as zoology [Allen], it is not uncommon to find a wide range of representational modalities – from simple outline drawings to detailed, shaded, colored drawings, to photographs – used at different times, for different purposes. Apart from the influence of various practical concerns, the definition of an optimal representation will depend on the particular purpose for which it is intended to be used.

The critical issue, I believe, is that drawings are intended to be idealizations of actual physical reality. Recognizing a subject from a drawing can require some translation. The success of the representation hinges on the extent to which it captures precisely the necessary information for successful comprehension, which not only depends on the vision of the artist in defining what that information is and on his skill in portraying it faithfully, but also on the needs and experience level of the observer. With photographs, there is no interpretation required – every subtle detail of color and texture, shading and shadow, is exactly represented, for some specific configuration of subject, viewpoint and lighting. Psychologists and others have conducted numerous studies to examine questions of whether, when, and under what circumstances, it might be possible to increase perceptual efficiency by reducing the visual complexity of a pictorial representation [Ryan and Schwartz, Fussel and Haaland, Hirsh and McConathy, Biederman and Ju]. Results are mixed, with the superiority of performance using drawings vs. photographs depending both on the task and on the quality and level of detail in the drawing. However, a common theme in all cases is the potential danger inherent in using highly simplified, schematic, or stylized representations, which can have the opposite effect to facilitating perception, instead increasing cognitive load by imposing an additional burden in interpretation. Studies have found that face recognition from outline drawings is exceptionally poor [Davies]; but that adding “mass”, even through simple bi-level shading, improves recognition rates considerably [Bruce]. Face recognition rates from outline drawings can also be improved by distorting outline drawings in the style of a caricature [Rhodes].

Carefully constructed nonphotorealistic representations have significant potential advantages for use in visualization. They make it possible to increase the dynamic range of the attentional demands on the observer, by allowing greater differentiation in the salience of the visual representation, and they enable guiding the observer’s attentional focus through highlighting critical features and de-emphasizing the visual prominence of secondary details. However, the success of such efforts depends critically on the ability to correctly define what to emphasize and how to emphasize it, in order to best support the task objectives.

Nonphotorealistic representations also have the potential to facilitate visualization objectives by allowing greater differentiation in the specificity of the visual representation. By introducing the possibility of ambiguity, through representational methods that are optimized for indication rather than specification, one has the potential to portray things in a way that facilitates flexible thinking. This is critical for applications involving design development, such as architecture, where there is a need for visualization tools that facilitate the ability to work with ideas at an early stage of conceptual definition, when a precise physical instantiation of the model has not yet been determined, and one needs to foster the envisioning of multiple possibilities [Anderson et al]. Nonphotorealistic rendering also allows the expression of multiple styles, potentially establishing various ‘moods’ that can influence the subjective context within which information is perceived and interpreted. Finally, in some applications, ambiguity has the potential to be successfully used as a visual primitive to explicitly encode the level of accuracy or confidence in the data [Interrante 00].

3.2 Critical Features of Surface Shape that can be Captured by Lines

Gifted artists, such as Pablo Picasso, have demonstrated the ability to capture the essence of a form in just a few critical strokes [e.g. Picasso]. In visualization, if we would like to achieve a similar effect, we need to determine an algorithm for automatically defining the set of feature lines that we wish to represent. Both inspiration from art and insight from research in visual perception can be useful in helping to guide these efforts.

According to the Gestalt theory of visual perception, the process of visual understanding begins with the separation of figure from ground. The lines that define this separation are the silhouettes, and their use is ubiquitous in artists’ line drawings. The silhouette can be imagined as the boundary of the shadow that would be cast by an object onto a planar background from a parallel light source oriented in the direction of the line of sight, with the locus of silhouette points indicating the outline of the object. Closely related to the silhouettes are the set of lines called contours [Koenderink 90]. These are formed by the locus of points where the surface normal is orthogonal to the line of sight, and they typically correspond to boundaries across which there is a C0 discontinuity in depth. On a polygonally-defined object, the contour curves can be identified as the locus of all edges shared by a front-facing polygon and a back-facing polygon, plus all boundary edges. Complicating efforts to create effective visualizations using highlighted contour edges is the problem that their definition is viewpoint-dependent. Under conditions of stereo viewing, the set of surface points belonging to the contour curve defined by the view from the left eye will not in general be the same as the set of surface points belonging to the contour curve defined by the view from the right eye. If models are rendered in such a way that implies a correspondence between highlighted contour curves shown in each view, the result will be to impede the stereo matching process and interfere with the observer’s ability to accurately perceive the object’s 3D shape [Interrante 96]. In single images, however, highlighting silhouette and contour edges can be an effective way to emphasize the basic structure of the form [Saito and Takahashi]. There is abundant evidence from psychophysical research that our visual system is adept at inferring information about the 3D shape of an object from the dynamic deformation of its silhouette as it rotates in depth [Wallach/O’Connell] and from the curvature characteristics of the outline of its 2D projection in static flat images [Richards].
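As a concrete illustration of the polygonal-mesh definition above, the following sketch collects every edge shared by a front-facing and a back-facing triangle, plus all boundary edges. It assumes a triangle mesh, an orthographic view, and numpy arrays; the function name and array layout are conventions introduced here for illustration.

    import numpy as np

    def contour_edges(vertices, faces, view_dir):
        # vertices: (n,3) float array; faces: (m,3) int array;
        # view_dir: direction from the eye into the scene.
        v = np.asarray(vertices, dtype=float)
        f = np.asarray(faces, dtype=int)
        normals = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
        front = normals @ np.asarray(view_dir) < 0.0   # faces toward the viewer
        edge_faces = {}
        for fi, (a, b, c) in enumerate(f):
            for e in ((a, b), (b, c), (c, a)):
                edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
        contour = []
        for edge, fs in edge_faces.items():
            if len(fs) == 1:                           # boundary edge
                contour.append(edge)
            elif len(fs) == 2 and front[fs[0]] != front[fs[1]]:
                contour.append(edge)                   # front/back transition
        return contour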

Although silhouettes and contours are important shape descriptors, alone they are not ordinarily sufficient to unambiguously describe the essential shape of a 3D object. In addition to using silhouette and contour curves, artists often also include in their line drawings lines that indicate other sorts of visual discontinuities. These include discontinuities in shape, shading, color, texture, and function. Recognizing this, methods have been developed for automatically generating a line drawing style representation from a 2D image based on local functions of the pixel intensities [e.g. Pearson/Robinson]. Similarly, view-dependent methods have been developed to identify shape-emphasizing feature lines in projected 3D data based on the detection of C1 or C2 discontinuities in depth distance (from the viewpoint to the first encountered surface) between neighboring image points [Saito and Takahashi].
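A minimal image-space sketch of the depth-discontinuity idea, assuming a depth buffer is available as a 2D array, might flag both large first differences (C0 depth discontinuities) and large second differences (C1 creases); the threshold values below are illustrative and data dependent.

    import numpy as np

    def depth_feature_lines(depth, t0=0.1, t1=0.05):
        # `depth` holds eye-space depths per pixel.  First differences
        # mark profile edges; second differences mark internal creases.
        gy, gx = np.gradient(depth)
        first = np.hypot(gx, gy)
        lap = np.abs(np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
                     np.roll(depth, 1, 1) + np.roll(depth, -1, 1) - 4.0 * depth)
        return (first > t0) | (lap > t1)               # boolean edge mask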

In our research, we have sought to identify supplementary characteristic lines that can be used to highlight additional perceptually or intrinsically significant shape features of arbitrary 3D objects, independent of any particular viewpoint. For several reasons, the most promising candidates for this purpose are the valley and sharp ridge lines on a smoothly curving object and the sharp creases, either convex or concave, on a polygonally defined surface. These lines derive importance first of all because they give rise to a variety of visual discontinuities. Prominent shading discontinuities are likely to be found at sharp creases on any object; on smoothly curving forms, specular highlights are relatively more likely to be found along or near ridges, where the surface normals locally span a relatively larger range of orientations, and sharp valleys have a higher probability of remaining in shadow. Miller [94] demonstrated impressive results using “accessibility shading” to selectively darken narrow concave depressions in polygonally-defined models. Equally important as their association with shading discontinuities, however, is the perceptual significance that crease lines, and valley lines in particular, derive from their ability to specify the structural skeleton of a 3D form in an intuitively meaningful way. Since 1954 [Attneave], psychologists have found evidence that the points of curvature extrema play a privileged role in observers’ encodings of the shape of an object’s contour. Recent research in object perception and recognition has suggested that people may mentally represent objects as being composed of parts [Biederman85, Beusmans], with the objects perceived as subdividing into parts along their valley lines [Hoffman, Braunstein]. Drawing upon this inspiration from art, and insight from visual perception, we developed two distinct algorithms for highlighting feature lines on 3D surfaces for the purposes of facilitating appreciation of their 3D shapes. The first is an algorithm for highlighting valley lines on smoothly curving isointensity surfaces defined in volumetric data [Interrante 95] (figure 5).

Figure 5: By selectively highlighting valley lines, we aim to enhance the perception of important shape features on a transparent surface in a visualization of 3D radiation therapy treatment planning data. Top: the skin rendered as a fully opaque surface. Middle: the skin rendered as a fully transparent surface, in order to permit viewing of the internal structures. Bottom: the skin rendered as a transparent surface with selected opaque points, located along the valley lines. The bottom view is intended to facilitate perception/recognition of the presence of sensitive soft tissue structures that should be avoided by radiation beams in a good treatment plan. Data courtesy of Dr. Julian Rosenman. Middle and lower images © IEEE.

The basic algorithm works as follows. On a smoothly curving surface, valley lines are mathematically defined as the locus of points where the normal curvature is locally minimum in the direction of least negative curvature [Koenderink90]. One can determine, for each point on a surface, whether it should be considered to lie on a ridge or valley line by computing the principal directions and principal curvatures of the surface at that point and then checking to see if the magnitude of the strongest principal curvature assumes an extreme value, compared to the magnitudes of the corresponding principal curvatures at the surrounding points on the surface, in the corresponding principal direction. To compute the principal directions and principal curvatures at any point in a volumetric dataset, one can begin by taking the grey-level gradient [Hoehne], in our case computed using a Gaussian-weighted filter over a 3x3x3 voxel neighborhood, to indicate the surface normal direction $\vec{e}_3$. Choosing any two arbitrary orthogonal directions $\vec{e}_1$ and $\vec{e}_2$ that span the tangent plane defined by $\vec{e}_3$ gives an orthogonal frame at the point. It is then straightforward to obtain a principal frame by computing the Second Fundamental Form

$A = \begin{pmatrix} \omega_1^{13} & \omega_2^{13} \\ \omega_1^{23} & \omega_2^{23} \end{pmatrix}$,

where $\omega_j^{i3}$ is determined by the dot product of $\vec{e}_i$ and the first derivative of the gradient in the $\vec{e}_j$ direction, and then diagonalizing it to obtain

$D = \begin{pmatrix} \kappa_1 & 0 \\ 0 & \kappa_2 \end{pmatrix}$ and $P = \begin{pmatrix} u_1 & u_2 \\ v_1 & v_2 \end{pmatrix}$, where $A = PDP^{-1}$ and $\kappa_1 > \kappa_2$.

The principal directions $\vec{e}_1'$ and $\vec{e}_2'$ are then given by $u_1\vec{e}_1 + v_1\vec{e}_2$ and $u_2\vec{e}_1 + v_2\vec{e}_2$ respectively. In our implementation, we highlighted what we determined to be perceptually significant valley lines by increasing the opacities of candidate valley points by an amount proportional to the magnitude of the principal curvature at the point, if that curvature exceeded a fixed minimum value.
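The computation can be sketched numerically as follows. Simple central differences stand in for the Gaussian-weighted gradient filter, and neighboring gradients are sampled at the nearest grid points; this is an approximation for illustration, not the original implementation.

    import numpy as np

    def principal_frame(volume, p, h=1.0):
        # volume: 3D scalar array; p: interior grid point (i, j, k).
        def gradient(i, j, k):
            return np.array([volume[i+1, j, k] - volume[i-1, j, k],
                             volume[i, j+1, k] - volume[i, j-1, k],
                             volume[i, j, k+1] - volume[i, j, k-1]]) / (2.0 * h)

        def unit_normal(q):
            g = gradient(*np.rint(q).astype(int))
            return g / np.linalg.norm(g)

        p = np.asarray(p, dtype=float)
        e3 = unit_normal(p)                                # surface normal
        seed = np.array([1.0, 0, 0]) if abs(e3[0]) < 0.9 else np.array([0, 1.0, 0])
        e1 = np.cross(e3, seed); e1 /= np.linalg.norm(e1)  # arbitrary tangent basis
        e2 = np.cross(e3, e1)

        # Change of the unit normal along e1 and e2, projected onto (e1, e2)
        dn = [(unit_normal(p + e) - unit_normal(p - e)) / 2.0 for e in (e1, e2)]
        A = np.array([[np.dot(dn[0], e1), np.dot(dn[1], e1)],
                      [np.dot(dn[0], e2), np.dot(dn[1], e2)]])
        A = (A + A.T) / 2.0                                # symmetrize against noise

        kappa, P = np.linalg.eigh(A)                       # ascending eigenvalues
        k1, k2 = kappa[1], kappa[0]                        # kappa_1 > kappa_2
        ep1 = P[0, 1] * e1 + P[1, 1] * e2                  # first principal direction
        ep2 = P[0, 0] * e1 + P[1, 0] * e2
        return (k1, k2), (ep1, ep2)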

Our second algorithm [Ma] was developed for applications involving the visualization of surface mesh data (figure 6). In this case the goal was to determine which mesh edges were most important to show in order to convey a full and accurate impression of the 3D shape of the surface in the absence of shading cues. In addition to silhouette and contour edges, our algorithm marked for display selected internal crease edges where the angle between neighboring triangles was locally relatively sharp, in comparison with the angles subtended across other edges in the immediate vicinity.

Figure 6: By highlighting feature lines on a surface mesh, we may enhance appreciation of the essential structure of the form, a goal that assumes particular importance under conditions where the use of surface shading is problematic. Clockwise from top left: the full original mesh; the silhouette and contour edges only; the silhouette and contour edges highlighted in a surface rendering in which polygon color is defined purely as a function of the value of a scalar parameter that is being visualized over the mesh; same as previous, except that the feature line set is augmented by crease edges determined by our algorithm to be locally perceptually significant. Data courtesy of Dimitri Mavriplis.

4 Case Study 3: Clarifying the 3D Shapes, and Relative Positions in Depth, of Arbitrary Smoothly Curving Surfaces via Texture

The final case study I describe in this chapter concerns the development of visualization techniques intended to facilitate the accurate and intuitive understanding of the 3D shapes and relative positions in depth of arbitrary smoothly curving, transparent surfaces that are not easily characterized by a small number of feature lines. Examples of such surfaces arise in many applications in visualization, from medical imaging to molecular modeling to aerospace engineering, where scientists seek insight into their data through the visualization of multiple level surfaces in one or more 3D scalar distributions. In striving to accomplish this goal I again found great value in drawing upon both inspiration from the practices used by accomplished artists and insight from fundamental research in visual perception.

4.1.1 Cues to 3D Shape and Depth

As a first step in determining how to most effectively convey the 3D shapes and depths of smoothly curving transparent surfaces in computer-generated images, it is useful to briefly consider the questions of 1) how we ordinarily infer shape and depth information from visual input and 2) why the shapes and depths of transparent surfaces can be difficult to adequately perceive, even in everyday experience. Since a full discussion of shape and depth perception is beyond the scope of this chapter, I focus here on the most important, relevant aspects of scene construction that are typically under the control of the visualization developer: viewpoint and projection, lighting, and the definition of surface material properties.

After occlusion, which unambiguously specifies the depth order relationships of overlapping surfaces, linear perspective is one of the strongest available pictorial cues to depth. Under linear perspective, lines that are parallel in 3D appear in a projected image to converge towards one or more vanishing points as they recede into the distance; objects accordingly appear smaller with increasing distance from the viewpoint, and increasingly skewed with increasing eccentricity from the center of projection [Kennedy/Juricevic]. In an orthographic projection, or a perspective projection subtending a very narrow field of view, these convergence cues and size gradients are forfeited. The selection of an appropriate vantage point with respect to an object is also a consideration of some importance. In choosing preferred views for objects, observers appear to use task-dependent strategies [Perret/Harries]. Non-generic viewpoints, in which accidental alignments or occlusions of particular object features fortuitously occur, have the greatest potential to be misleading [Freeman].

Lighting is a complex and important factor that can influence the perception of a scene in multifaceted ways, as is well understood in fields related to the cinema and stage. For visualization purposes, in addition to effects of shadows on depth perception, which were mentioned earlier, we need to consider how best to control lighting parameters to facilitate an accurate perception of shape from shading. It has long been recognized that people are accustomed to things being lit from “above” [Luckiesh], and that among the deleterious effects of directing light toward a surface from below is an increased chance of fostering depth reversal, where convexities are perceived as concavities and vice versa [Ramachandran]. Recent research suggests more specifically a bias toward the assumption that lighting is coming from the above-left, possibly attributable to cerebral lateralization [Mamassian02]. Somewhat less dramatic but also significant are the shape-enhancing effects of defining lighting to be at an oblique angle, as opposed to head-on [Johnson/Passmore], whereby shading gradients on surfaces receding in depth are enhanced.

It is perhaps in the definition of surface material properties that the greatest hitherto untapped potential lies for facilitating shape and depth perception in visualizations of surface data. Before addressing this topic, however, I would like to discuss the remaining issue of why transparent surfaces are so difficult to adequately perceive, which will help suggest how material properties might be selected to best advantage.

Plain transparent surfaces clearly provide impoverished cues to shape and depth. Shape-from-shading information is available only through the presence of specular highlights, which, in binocular vision, are perceived not to lie on the curved surface of an object but to float behind the surface if the object is locally convex, and in front of the surface if it is locally concave [Blake/Bulthoff]. Cues to the presence of contours, where the surface normal is orthogonal to the line of sight, are provided by the distorting effects of refraction; however this comes at a significant cost to the clarity of visibility of the underlying material. When artists portray transparent surfaces, they similarly tend to rely heavily on the use of specular highlights (Lucas Cranach) [Friedlander], specular reflection (Don Eddy) [Kuspit] and/or refractive distortion (Janet Fish) [Katz].

While it would be misleading to downplay the potential merits of employing a fully physically correct model of transparency for visualization applications – refraction provides a natural way to emphasize silhouettes and contours, and might also provide good cues to the thickness of a transparent layer – it is at the same time clear that the visualization goal of enabling the effective simultaneous understanding of multiple layers will not be met simply by achieving photorealism in the surface rendering. Something must be done to supplement the shape and depth cues available in the scene. The solution that we came up with in our research was to ‘stick something onto’ the surface, in the form of carefully designed, subtle, uniformly distributed texture markings that can provide valuable cues to the surface shape, along with explicit cues to the surface depth, in a way that is not possible through reliance on specular highlights.

Before moving on to the discussion of surface textures, a final point in regard to the rendering of transparent surfaces bears mentioning. There are several alternative models for surface transparency, corresponding to different types of transparent material. The first model (figure 7a) corresponds to the case where you have an opaque material, such as a gauze curtain, that is very finely distributed, so that over a finite area it is both partly present and partly absent. This is the type of transparency represented by the ‘additive model’: $I = \alpha_f I_f + (1 - \alpha_f) I_b$, where $\alpha_f$ is the (wavelength-independent) opacity of the transparent foreground material and $I_f$ is its intensity, while $I_b$ is the intensity of the background material. If a surface rendered using this model is folded upon itself multiple times, the color in the overlap region will, in the limit, converge to the color of the material; shadows cast by this material will be black. A second model (figure 7b) corresponds to the case where you have a semitransparent material that impedes the transmission of certain wavelengths of light. This type of transparency can be approximated by a ‘multiplicative model’: $I = \alpha_f I_b$, where $\alpha_f$ is the (wavelength-dependent) transmissivity of the foreground material. Multiple layers of this material will, in the limit, converge to looking black; shadows cast by this material will be colored. Other approaches, incorporating both additive and multiplicative combinations of foreground and background material, are also possible.
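In code, each model reduces to a one-line compositing rule, and a small demonstration (with illustrative values) confirms the limiting behaviors noted above.

    def additive(alpha_f, I_f, I_b):
        # 'Additive' (gauze-like) transparency: the foreground is opaque
        # but only fractionally present; alpha_f is wavelength independent.
        return alpha_f * I_f + (1.0 - alpha_f) * I_b

    def multiplicative(alpha_f, I_b):
        # 'Multiplicative' (colored-glass) transparency: alpha_f is the
        # per-wavelength transmissivity of the foreground layer.
        return [a * b for a, b in zip(alpha_f, I_b)]

    # Folding the additive material over itself converges to its own color...
    I = 0.0
    for _ in range(20):
        I = additive(0.5, 0.8, I)
    print(round(I, 4))                        # -> approaches 0.8

    # ...while stacked multiplicative layers converge toward black.
    I = [1.0, 1.0, 1.0]
    for _ in range(20):
        I = multiplicative([0.9, 0.3, 0.3], I)
    print([round(c, 4) for c in I])           # red channel fades slowest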

Figure 7: Examples of two different models of transparency. Left: additive transparency, exhibited by materials such as gauze that are intrinsically opaque but only intermittently present; Right: multiplicative or subtractive transparency, exhibited by materials such as colored glass that selectively filter transmitted light.

4.1.2 Using Texture on Surfaces to Clarify Shape

Having determined to attempt to clarify the 3D shapes of external transparent surfaces through the addition of sparsely distributed texture markings, the question now becomes: what sort of markings should we add? If we could define the ideal texture pattern to apply to an arbitrary smoothly curving surface in order to enable its 3D shape to be most accurately and effectively perceived, what would the characteristics of that texture pattern be? To answer this question we again turn for inspiration to the observation of the practices of artists and illustrators, and for insight to the results of research in psychology on the perception of shape from texture.

In stipple drawings, artists carefully control the density of pen markings in order to achieve a desired distribution of tone. With line drawings, in addition to achieving variations in tone, there is the additional concern of carefully controlling the directions of the lines in order to emphasize the surface shape. Although there is no precise or universally recognized convention for defining the directions of pen strokes in line drawings, artists and illustrators have frequently noted the importance of using stroke direction appropriately, and have variously cited advantages, for effectively conveying shape, in using lines which “follow the contours of the form” [Zweifel] or which run “at right angles to the length of the form” [Sullivan].

The significance of texture’s effect on shape (slant) perception was first emphasized in the research literature and formally studied by James Gibson [1950]. Using two different wallpaper patterns on large flat boards, viewed through a circular aperture, he found that slant perception was not only significantly more accurate under either texture condition than under the control condition of no texture, but also that accuracy was greatest in the case of the more “regular” texture pattern. In subsequent studies, comparing the effects of different aspects of ‘texture regularity’, researchers found evidence that regularity in element size, element shape, and element placement all had a positive effect in improving slant perception accuracy [Flock]. Ultimately, it was determined that linear convergence cues (from perspective projection) play the dominant role in slant perception from texture [Attneave, Braunstein]. Looking at the effects of texture on shape perception in the case of curved surfaces, Cutting and Millard [1984] found evidence primarily for the importance of ‘texture compression’ cues, manifested as the changes in the projected shapes of circular texture elements. In later studies, Cumming et al. [1993] found support for these findings. Evaluating the relative impacts of veridical size, density, and compression gradients on curvature perception accuracy under stereo viewing conditions, they found that perceived curvature was least when element compression was held constant, while the presence or absence of appropriate gradients in element size and/or spacing had little effect on curvature perception accuracy.
These results are important because they provide clear evidence that the particular characteristics of a surface texture pattern can significantly affect shape perception, even in the presence of robust, veridical cues to shape from stereo. Still, it remains unclear what sort of texture we should choose to apply to a surface in order to facilitate perception of its shape.

Stevens [1981], informally observing images from the Brodatz [1966] texture album pasted onto a cylindrical form and viewed monocularly under conditions that minimized shading cues, reported obtaining a compelling impression of surface curvature only in the cases of the wire mesh and rattan textures, the most regular, synthetic patterns. However, observing an extensive variety of line-based texture patterns projected onto a complicated, doubly curved surface, obliquely oriented so as to exhibit contour occlusions, Todd and Reichel [1990] note that a qualitative perception of shape from texture seems to be afforded under a wide range of texturing conditions.

Computer vision algorithms for the estimation of surface orientation from texture generally work from assumptions of texture isotropy, or texture homogeneity. Rosenholtz and Malik [1997] found evidence that human observers use cues provided by deviations from both isotropy and homogeneity in making surface orientation judgments. Stone [1993] notes that particular problems are caused for perception by textures which are not ‘homotropic’ (in which the dominant direction of the texture anisotropy varies across the texture pattern). Op artists such as Bridget Riley [ref] have exploited this assumption to create striking illusions of relief from patterns of waving lines.

Mamassian and Landy [1998] found that observers’ interpretations of surface shape from simple line drawings are consistent with the following biases under conditions of ambiguity: to perceive convex, as opposed to concave, surfaces; to assume that the viewpoint is from above; and to interpret lines as if they were oriented in the principal directions on a surface. Knill [2001] suggests that texture patterns with oriented components, which under the assumption of texture pattern homogeneity are constrained to follow parallel geodesics on developable surfaces, may provide more perceptually salient cues to surface shape than isotropic patterns. Finally, Li and Zaidi [2000, 2001] have shown that observers can reliably discriminate convex from concave regions in front-facing views of a vertically oriented sinusoidally corrugated surface only when a perspective projection is used and the texture pattern contains patterns of oriented energy that follow the first principal direction. However, in more recent studies, considering a wider range of viewpoints, they have found indications that the texture conditions necessary to ensure the veridical perception of convexity vs. concavity are more complicated than previously believed [Li 2002].

Because of historical limitations in the capabilities of classical texture mapping software and algorithms, with few exceptions nearly all studies investigating the effect of surface texture on shape perception that have been conducted to date have been restricted either to the use of developable surfaces – which can be rolled out to lie flat on a plane – or to the use of procedurally defined solid texture patterns, whose characteristics are in general independent of the geometry of the surfaces to which they are applied.
For several years we have believed that important new insights into texture’s effect on shape perception might be gained through studies conducted under less restrictive surface and texture pattern conditions. In the final part of this section I will describe the algorithms that we derived for the controlled synthesis of arbitrary texture patterns over arbitrary surfaces, and the results of the studies we have recently undertaken in pursuit of a deeper understanding of how we might best create and apply custom texture patterns to surfaces in scientific datasets in order to most effectively facilitate accurate perception of their 3D shapes.

In our first studies [Interrante et al. 97], involving indirect judgments of shape and distance perception under different texture conditions on layered transparent surfaces extracted from radiation therapy treatment planning datasets (figure 8), we created a variety of solid texture patterns by scan-converting individual texture elements (spheres, planes, or rectangular prisms) into a 3D volume at points corresponding to evenly distributed locations over a pre-identified isosurface. We found clear evidence that performance was better in the presence of texture, but we did not find a significant main effect of texture type. As expected, “sticking something onto the surface” helped, but the question of how best to define helpful texture markings remained open. Unfortunately, none of the textures we were able to achieve using this discrete element approach yet resembled anything one might find in an artist’s line drawing.

Figure 8: A variety of sparse textures applied to the same external transparent surface. Clockwise from upper left: spots of various sizes (with the larger spots it is easier to infer the local surface normal direction, but aesthetically they are more annoying to look at); short strokes locally oriented in the first principal direction, with length proportional to the first principal curvature (upper right); short strokes of random lengths, randomly oriented (lower right); lines parallel to various object space axes.

Shortly afterward, we developed an improved method for synthesizing a more continuous surface texture pattern that everywhere followed the first principal direction over an arbitrary doubly curved surface [Interrante 97]. To achieve this texture pattern we began by scattering a number of discrete high-intensity point elements evenly throughout an otherwise empty 3D volume, according to an approximate Poisson distribution. The location of each point element was determined by dividing the volume into uniformly sized cells and randomly selecting a location within each cell, under the restriction that no location could be selected that was within a pre-determined distance from any previously chosen point. In a separate process, we precomputed the vector field of the first principal directions of the isolevel surfaces defined at every point in the 3D volume dataset, using the principal direction computation approach described in section 3.2 of this chapter. Finally, we used 3D line integral convolution to ‘comb out’ the distributed point texture along the lines of curvature defined by the principal direction vector field (figure 9), to obtain a single solid texture that could be applied to any isolevel surface in the volume data (figure 10).
In order to avoid artifacts in the pattern at the points where the first and second principal directions switched places, we forced the filter kernel length to reduce to zero in the vicinity of these umbilic points.
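The point-scattering step can be sketched as follows; the cell size, minimum distance, and brute-force distance check are illustrative simplifications of the actual implementation.

    import random

    def scatter_points(shape, cell=8, min_dist=6.0, seed=1):
        # One candidate point per uniform cell, rejected if it falls
        # within min_dist of an already accepted point -- an approximate
        # Poisson-disc distribution.  `shape` is the (x, y, z) volume
        # size; the O(n^2) check is acceptable for a small sketch.
        rng = random.Random(seed)
        accepted = []
        for cx in range(0, shape[0], cell):
            for cy in range(0, shape[1], cell):
                for cz in range(0, shape[2], cell):
                    p = (cx + rng.uniform(0, cell),
                         cy + rng.uniform(0, cell),
                         cz + rng.uniform(0, cell))
                    if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= min_dist ** 2
                           for q in accepted):
                        accepted.append(p)
        return accepted

    points = scatter_points((32, 32, 32))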

Figure 9: Left: a texture of evenly distributed points over three slices in a volume; Right: the results after advecting the texture according to the first principal direction vector field using 3D line integral convolution.

Figure 10: A 3D line integral convolution texture applied to a transparent isointensity surface in a 3D radiation therapy treatment planning dataset. Data courtesy of Dr. Julian Rosenman.

Although these results were encouraging, important tasks remained. The first was to objectively evaluate the relative effectiveness of the new LIC-based principal direction texturing approach, and in particular to rigorously examine the impact of texture orientation on shape perception in the case of complicated doubly curving surfaces. The second was to pursue development of a flexible, robust principal direction texturing method that could be applied to polygonal datasets (e.g. figure 11). The two principal challenges in that regard were to define a method for obtaining accurate estimates of the principal directions at the vertices of an arbitrary polygonal mesh, and to determine how to synthesize an arbitrary texture pattern over an arbitrary doubly curved surface in a way that avoids both seams and stretching, and such that the dominant orientation in the texture pattern everywhere locally follows the first principal direction vector field.

Figure 11: A line drawing of a brain dataset, generated by Steve Haker [Girshick et al.], in which tiny straight strokes are oriented in the first principal direction at vertices in the surface mesh.

While several methods have previously been proposed for estimating principal directions and principal curvatures at points on meshes [e.g. Taubin, Desbrun], we have found in practice that all exhibit unexplained large errors in some cases. Recently, we set out to investigate the sources of these errors, and in the process developed a new method for principal direction estimation that appears to produce better results [Goldfeather]. Unfortunately, space does not permit a full description of that approach here; the essential insights, however, are these: 1) large errors can, and do, occur at points that are far from being umbilic; 2) errors are most problematic when the underlying mesh parameterization is not regular; and 3) we can use the vertex locations and (approximate) surface normal information available at the points neighboring a vertex to achieve a least-squares cubic surface fit to the mesh at that point, which appears to offer better potential for an accurate fit than when the surface is restricted to be quadratic.

The open questions that remain are: how to robustly resolve problems that arise due to the first and second principal directions switching places on either side of critical points; how to gracefully determine an appropriate texture or stroke direction across patches of umbilic points, where the principal directions are undefined; and how to balance the concern of emphasizing shape with the concern of minimizing the salience of extraneous detail. Not all surface perturbations are worth drawing attention to, and, depending on the application, it may be desirable to enforce certain smoothness criteria before using a computed principal direction vector field for texture definition purposes.
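The least-squares cubic fit idea can be sketched at a single vertex as follows. The local parameterization, the use of both position and normal constraints, and the function signature are assumptions made for illustration; a robust implementation must also handle the degenerate cases noted above.

    import numpy as np

    def principal_dirs_at_vertex(v, n, neighbors, neighbor_normals):
        # Express neighbors in a local frame (u, w, n) at vertex v, fit
        # z = a/2 u^2 + b uw + c/2 w^2 + (cubic terms), and read the
        # Weingarten matrix [[a, b], [b, c]] off the quadratic part.
        seed = np.array([1.0, 0, 0]) if abs(n[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(n, seed); u /= np.linalg.norm(u)
        w = np.cross(n, u)
        rows, rhs = [], []
        for q, nq in zip(neighbors, neighbor_normals):
            d = q - v
            x, y, z = d @ u, d @ w, d @ n
            rows.append([x*x/2, x*y, y*y/2, x**3, x*x*y, x*y*y, y**3])
            rhs.append(z)
            # Normal constraints: -nu/nn ~ dz/dx and -nw/nn ~ dz/dy,
            # assuming neighbor normals are not tangent to the frame.
            nu, nw, nn = nq @ u, nq @ w, nq @ n
            rows.append([x, y, 0, 3*x*x, 2*x*y, y*y, 0]); rhs.append(-nu / nn)
            rows.append([0, x, y, 0, x*x, 2*x*y, 3*y*y]); rhs.append(-nw / nn)
        coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        a, b, c = coef[:3]
        W = np.array([[a, b], [b, c]])                  # Weingarten matrix
        kappa, P = np.linalg.eigh(W)
        e1 = P[0, 1] * u + P[1, 1] * w                  # first principal direction
        e2 = P[0, 0] * u + P[1, 0] * w
        return (kappa[1], kappa[0]), (e1, e2)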

In the first of our most recent experiments intended to gain insights into methods for using texture effectively for shape representation [Interrante et al. 00], we investigated the effect of the presence and direction of luminance texture pattern anisotropy on the accuracy of observers’ judgments of 3D surface shape. Specifically, we sought to determine 1) whether shape perception is improved, over the default condition of an isotropic pattern, when the texture pattern is elongated in the first principal direction; and 2) whether shape perception is hindered, relative to the default condition of an isotropic pattern, when the texture is elongated in a constant or varying direction other than the first principal direction.

We had five participants, using a surface attitude probe [Koenderink 92], make judgments about local surface orientation at 49 evenly spaced points on each of six different smoothly curving surfaces, under each of four different luminance texture conditions. All of the texture patterns for this study were created via 3D line integral convolution, using either a random/isotropic vector field (rdir), a first principal direction vector field (pdir), a vector field following a constant uniform direction (udir), or a vector field following a sinusoidally varying path (sdir). Sample stimuli are shown in figure 12. The experiment was repeated under two different viewing conditions, flat and stereo.

Charts summarizing the results are shown in figure 13. In the flat viewing condition, we found that performance was significantly better in the cases of the pdir and rdir patterns than in the cases of the sdir and udir patterns. Accuracy was significantly improved in the stereo vs. flat viewing condition, for all texture types. Performance remained marginally better in the cases of the isotropic and principal direction patterns than under the other two texture conditions, but significance was achieved only in the rdir case. These results are consistent with the hypothesis that texture pattern anisotropy can impede surface shape perception when the elongated markings are oriented in a way that is different from the principal direction, but they do not support the hypothesis that principal direction textures will facilitate shape perception to a greater extent than isotropic patterns.

Figure 12: Representative examples of the sample stimuli used in our first recent experiment investigating the effect of texture orientation on the accuracy of observers’ surface shape judgments. From left to right: Isotropic (rdir), Uniform (udir), Swirly (sdir), and Principal Direction (pdir).

Figure 13: Pooled results (mean angle error) for all subjects, all surfaces, by texture type. Left: flat presentation; the differences {pdir, rdir} < {sdir, udir} were significant at the 0.05 level. Right: stereo presentation; only the differences rdir < {sdir, udir} were significant at the 0.05 level.

In a follow-up study, we repeated the experiment using displacement textures instead of luminance textures, and found the same pattern of results. However, two important questions were raised by this work. First: why does shape perception seem to be most accurate in the principal direction orientation condition, when there is little ecological justification for a texture pattern being oriented in the principal directions across a doubly curved surface? Is it because, from a generic viewpoint, the contours traced by a principal direction texture have the greatest potential to reveal the surface curvature, while the contour traced by the texture flow along any other direction at that point, for the same view, will be intrinsically flatter, which may represent a loss of shape information that is not recoverable? Second: on an arbitrary doubly curved surface there are two orthogonal directions in which the normal curvature generically assumes its non-zero extrema. Although these directions can be reliably classified into two types, the first principal direction and the second principal direction, there is no clear algorithm for determining which of the two a singly-oriented directional texture should follow at any point in order to minimize artifacts due to the apparent turning of the texture pattern in the surface. Is it possible that the effectiveness of the pdir textures used in this first experiment was compromised by these 'corner' artifacts, and that we might more effectively facilitate shape perception using an orthogonally bi-directional principal direction oriented pattern, i.e. one with 90-degree rotational symmetry? To investigate these questions we needed to conduct further studies, and to develop a more general texture synthesis method capable of achieving a wider variety of oriented patterns over surfaces.
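The intuition behind the first question can be made concrete with Euler's formula, a standard result from differential geometry: if $\kappa_1$ and $\kappa_2$ are the principal curvatures at a point, the normal curvature in the tangent direction making angle $\theta$ with the first principal direction is

\[
\kappa_n(\theta) = \kappa_1 \cos^2\theta + \kappa_2 \sin^2\theta ,
\]

so $\kappa_n$ attains its extreme values $\kappa_1$ (at $\theta = 0$) and $\kappa_2$ (at $\theta = \pi/2$) precisely in the principal directions, and a stroke following any other direction traces a contour whose normal curvature is strictly intermediate.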

Inspired by Efros and Leung's [1999] algorithm for synthesizing unlimited quantities of a texture pattern that is nearly perceptually identical to a provided 2D sample, we developed a fast and efficient method for synthesizing a fitted texture pattern, without visible seams or projective distortion, over a polygonal model, such that the texture pattern orientation is constrained, at a per-pixel level, to be aligned with a specified vector field [Gorla et al.]. Our method works by partitioning the surface into a set of equally sized patches and then using a two-pass variant of the original Efros and Leung method to synthesize a texture for each patch, taking care to maintain pattern continuity across patch boundaries, and performing the texture lookup, at each point, in an appropriately rotated copy of the original texture sample in order to achieve the desired local pattern orientation.
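To make the flavor of the underlying synthesis step concrete, here is a toy, heavily simplified Efros and Leung style sketch in Python. The names and simplifications are ours: it operates on a flat grayscale image and omits the surface partitioning, the two-pass scheme, and the per-pixel sample rotation described above.

```python
import numpy as np

def synth_texture(sample, out_h, out_w, win=7):
    """Toy Efros-Leung style synthesis: each unfilled pixel receives the
    center value of the sample window whose already-filled neighborhood
    matches best (L2 distance).  The original algorithm instead samples
    randomly among all windows within a tolerance of the best match."""
    half = win // 2
    sh, sw = sample.shape
    # Collect every full win x win window of the sample.
    wins = np.array([sample[i - half:i + half + 1, j - half:j + half + 1]
                     for i in range(half, sh - half)
                     for j in range(half, sw - half)])
    out = np.zeros((out_h, out_w))
    filled = np.zeros((out_h, out_w), dtype=bool)
    out[:win, :win] = sample[:win, :win]          # seed with a corner patch
    filled[:win, :win] = True
    for i in range(out_h):                        # scanline fill order
        for j in range(out_w):
            if filled[i, j]:
                continue
            nb = np.zeros((win, win))
            mask = np.zeros((win, win))
            for di in range(-half, half + 1):     # gather known neighbors
                for dj in range(-half, half + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < out_h and 0 <= x < out_w and filled[y, x]:
                        nb[di + half, dj + half] = out[y, x]
                        mask[di + half, dj + half] = 1.0
            d = (((wins - nb) * mask) ** 2).sum(axis=(1, 2))
            out[i, j] = wins[np.argmin(d)][half, half]
            filled[i, j] = True
    return out
```

The surface version performs this same neighborhood matching, but looks the neighborhood up in a copy of the sample rotated to the local target orientation, which is what lets one pattern follow an arbitrary vector field over the model.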

Using this system to render a new set of textured surface stimuli, we undertook a second experiment [Interrante et al. 02] intended to evaluate the information carrying capacities of two different base texture patterns (one singly oriented and one doubly oriented), under three different orientation conditions (pdir, udir and sdir), and two different viewing conditions (upright and backward slanting). In a four-alternative forced-choice task, over the course of 672 trials, three participants were asked to identify the quadrant in which two simultaneously displayed B-spline surfaces, illuminated from different random directions, appeared to differ in their shapes. We found that participants were consistently able to perceive smaller shape differences more reliably when the surfaces were textured with a pattern whose orientation followed one of the principal directions than when the surfaces were textured either with a pattern that gradually swirled in the surface or with one that followed a constant uniform direction in the tangent plane. We did not find a significant effect of texture type (performance was only marginally better overall in the two-directional case), or of surface orientation (performance was only marginally better overall in the tilted vs. front-facing case), nor evidence of an interaction between texture type and surface orientation. Sample stimuli and summary results are shown in figures 14-15. These findings support the hypothesis that anisotropic textures not aligned with the first principal direction may support shape perception more poorly, for a generic view, than principal direction oriented anisotropic patterns, which can provide cues to the maximal amount of surface normal curvature in a local region. However, this study did not yield much insight into the potential effects on shape perception of principal direction texture type.

Figure 14: Representative stimuli used in our second experiment to investigate the relative extents to which differently oriented patterns have the potential to mask the perceptibility of subtle shape differences. The texture conditions are, from left to right: principal direction, swirly direction, uniform direction.

Figure 15: Summary results of our second experiment. Accuracy increased with increasing magnitude of shape difference in all cases, but increased at a faster rate under the principal direction texture condition. Error bars represent 95% confidence intervals.

In our third recent experiment [Kim et al. 03], we focused on the question of whether some principal direction texture patterns might be more effective for conveying shape than others and, if so, what the particular characteristics of those patterns might be. Five participants each adjusted surface attitude probes to provide surface orientation estimates at two different locations on each of five different surfaces, under each of four different texture conditions. With five repeated measures, this yielded a total of 200 trials per participant. We compared performance under the control condition of no texture to performance under three different texture type conditions: a high contrast, one-directional line pattern that everywhere followed the first principal direction (1dir); a lower contrast, one-directional line integral convolution pattern that similarly followed the first principal direction (lic); and a medium-high contrast, two-directional grid pattern that was everywhere in alignment with both principal directions (2dir). All patterns had equivalent mean luminance. Sample stimuli are shown in figure 16. We used the statistical software package 'MacAnova', developed by Prof. Gary Oehlert of the Department of Statistics at the University of Minnesota, to perform a three-way, within-subjects mixed analysis of variance (ANOVA) to evaluate the statistical significance of the results. We found significant main effects of probe location (p = 0.0000264) and texture type (p = 0.0002843), and a significant two-way interaction between texture type and probe location (p < 0.00000001). We did not find a significant main effect of subject id (p = 0.18) nor a significant interaction between subject and texture type (p = 0.62). We used Tukey's HSD ("Honestly Significant Difference") method to perform post-hoc pairwise comparisons of the mean angle errors under the different texture conditions. The following differences were statistically significant at the 0.01 level: 2dir < 1dir, 2dir < none, 1dir < none, and lic < none. The difference between performance in the 2dir and lic conditions was not statistically significant at the 0.01 level, nor was the performance difference between the lic and 1dir conditions. Charts summarizing these results are shown in figure 17.

Figure 16: A test surface from our third experiment, in the control condition of no texture (far left), and under the three studied principal direction texture conditions (continuing left to right): 1dir, lic and 2dir.

Figure 17: Cumulative results of our third experiment.
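As an aside for readers who wish to run this style of analysis themselves, the sketch below shows roughly equivalent steps in Python with the statsmodels package. The data file and column names are hypothetical, and the simple OLS formulation only approximates the within-subjects mixed ANOVA that MacAnova performed.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per trial, with the angle error
# and the factors from the study (texture condition, probe location, subject).
df = pd.read_csv("angle_errors.csv")   # columns: error, texture, probe, subject

# Fixed-effects ANOVA with the texture-by-probe interaction; treating
# subject as a fixed blocking factor is a simplification of the mixed design.
model = smf.ols("error ~ C(texture) * C(probe) + C(subject)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey's HSD post-hoc pairwise comparisons across texture conditions.
print(pairwise_tukeyhsd(df["error"], df["texture"], alpha=0.01))
```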

Through all of the efforts summarized in this section, we have realized that determining the characteristics of a texture pattern that is best able to facilitate surface shape perception is not as straightforward an undertaking as it at first might seem. Conducting controlled experiments is a delicate and time-consuming business, and success is never guaranteed. However, through our efforts we have been able to answer many important questions about the suitability of principal direction oriented patterns for shape representation, and the open questions that remain provide a welcome challenge for future work.

5 Conclusions

The process of creating an effective visual representation of a set of data is both an art and a science, requiring extensive efforts in visualization design, implementation and evaluation. For visualization design, there are significant potential benefits in seeking inspiration from previous graphical work in art, illustration, visual communication and design, and in seeking insights from research in vision and visual perception. The process of implementation – figuring out how to develop the algorithms necessary to translate our vision of the results we want to achieve into a reality – though dealt with only lightly in this chapter, is of extreme importance, and it is here that the field of visualization has historically seen its greatest successes. Evaluation, whether through informal observation or, more rigorously, through controlled observer experiments, can be critical in clarifying our understanding of the strengths and weaknesses of alternative visualization approaches, and in assessing the practical merits of a particular visualization approach for a specific task. Most importantly, evaluation helps us not only to understand what works and what doesn't, and by how much, but also to gain insight into why. Based on this insight, we are better equipped to return to the design stage and work on developing yet more effective approaches to meet our visualization objectives.

References

[1] Lucile R. Addington. Lithic Illustration: Drawing Flaked Stone Artifacts for Publication, The University of Chicago Press, 1986.
[2] Gerald R. Allen and D. Ross Robertson. Fishes of the Tropical Eastern Pacific, University of Hawaii Press, 1994.
[3] Lee Anderson, James Esser and Victoria Interrante. A Virtual Environment for Conceptual Design in Architecture, 9th Eurographics Workshop on Virtual Environments / 7th International Workshop on Immersive Projection Technology, May 2003, to appear.
[4] Arthur Appel, F. James Rohlf and Arthur J. Stein. The Haloed Line Effect for Hidden Line Elimination, Proceedings of SIGGRAPH '79, pp. 151-157.
[5] Fred Attneave and Richard K. Olson. Inferences About Visual Mechanisms from Monocular Depth Effects, Psychonomic Science, 4, 1966, pp. 133-134.
[6] Fred Attneave. Some Informational Aspects of Visual Perception, Psychological Review, 61(3), 1954, pp. 183-193.
[7] David C. Banks. Illumination in Diverse Codimensions, Computer Graphics Proceedings, Annual Conference Series, 1994, pp. 327-334.
[8] Irving Biederman and Ginny Ju. Surface versus Edge-Based Determinants of Visual Recognition, Cognitive Psychology, 20(1), January 1988, pp. 38-64.
[9] Irving Biederman. Human Image Understanding: Recent Research and a Theory, in Human and Machine Vision, Azriel Rosenfeld, ed., Academic Press, 1985, pp. 13-57.
[10] Andrew Blake and Heinrich Bülthoff. Shape from Specularities: Computation and Psychophysics, Philosophical Transactions of the Royal Society of London, B, 331, 1991, pp. 237-252.
[11] Myron L. Braunstein and John W. Payne. Perspective and Form Ratio as Determinants of Relative Slant Judgments, Journal of Experimental Psychology, 81(3), 1969, pp. 584-590.
[12] Myron L. Braunstein, Donald D. Hoffman and Asad Saidpour. Parts of Visual Objects: An Experimental Test of the Minima Rule, Perception, 18(6), 1989, pp. 817-826.
[13] Phil Brodatz. Textures: A Photographic Album for Artists and Designers, Dover, 1966.
[14] Vicki Bruce, Elias Hanna, Neal Dench, Pat Healey and Mike Burton. The Importance of 'Mass' in Line Drawings of Faces, Applied Cognitive Psychology, 6(7), December 1992, pp. 619-628.
[15] Brian Cabral and Leith (Casey) Leedom. Imaging Vector Fields Using Line Integral Convolution, Computer Graphics Proceedings, Annual Conference Series, 1993, pp. 263-270.
[16] Patrick Cavanagh and Yvan G. Leclerc. Shape from Shadows, Journal of Experimental Psychology: Human Perception and Performance, 15(1), February 1989, pp. 3-27.
[17] Bruce G. Cumming, Elizabeth B. Johnston and Andrew J. Parker. Effects of Different Texture Cues on Curved Surfaces Viewed Stereoscopically, Vision Research, 33(5/6), 1993, pp. 827-838.
[18] James E. Cutting and Robert T. Millard. Three Gradients and the Perception of Flat and Curved Surfaces, Journal of Experimental Psychology: General, 113(2), 1984, pp. 198-216.
[19] Graham Davies, Hadyn Ellis and John Shepherd. Face Recognition Accuracy as a Function of Mode of Representation, Journal of Applied Psychology, 63(2), April 1978, pp. 180-187.
[20] Mathieu Desbrun, Mark Meyer, Peter Schröder and Alan H. Barr. Discrete Differential Geometry Operators in nD, preprint, July 22, 2000.
[21] Brian D. Dillon, ed. The Student's Guide to Archaeological Illustrating, Institute of Archaeology, University of California, Los Angeles, 1981.
[22] Alexei A. Efros and Thomas K. Leung. Texture Synthesis by Non-Parametric Sampling, Proceedings of the International Conference on Computer Vision, 2, 1999, pp. 1033-1038.
[23] Howard R. Flock and Anthony Moscatelli. Variables of Surface Texture and Accuracy of Space Perceptions, Perceptual and Motor Skills, 19, 1964, pp. 327-334.
[24] William T. Freeman. The Generic Viewpoint Assumption in a Framework for Visual Perception, Nature, 368(6471), April 1994, pp. 542-545.
[25] Max J. Friedländer and Jakob Rosenberg. The Paintings of Lucas Cranach, Cornell University, 1978.
[26] Diana Fussel and Ane Haaland. Communicating with Pictures in Nepal: Results of Practical Study Used in Visual Education, Educational Broadcasting International, 11(1), March 1978, pp. 25-31.
[27] James J. Gibson. The Perception of the Visual World, Houghton-Mifflin, 1950.
[28] James J. Gibson. The Perception of Visual Surfaces, American Journal of Psychology, 63, 1950, pp. 367-384.
[29] Ahna Girshick, Victoria Interrante, Steven Haker and Todd LeMoine. Line Direction Matters: An Argument for the Use of Principal Directions in 3D Line Drawings, First International Symposium on Non-Photorealistic Animation and Rendering, June 2000, pp. 43-52.
[30] Jack Goldfeather and Victoria Interrante. A Novel Cubic-Order Algorithm for Approximating Principal Direction Vectors, ACM Transactions on Graphics, in re-review.
[31] Gabriele Gorla, Victoria Interrante and Guillermo Sapiro. Texture Synthesis for 3D Shape Representation, IEEE Transactions on Visualization and Computer Graphics, 2003, to appear.
[32] Arthur L. Guptill. Rendering in Pen and Ink, Watson-Guptill Publications, 1976.
[33] Jack M. H. Beusmans, Donald D. Hoffman and Bruce M. Bennett. A Description of Solid Shape and its Inference from Occluding Contours, Journal of the Optical Society of America, A, 4(7), July 1987, pp. 1155-1167.
[34] Karl Heinz Höhne and Ralph Bernstein. Shading 3D-Images from CT Using Gray-Level Gradients, IEEE Transactions on Medical Imaging, 5(1), 1986, pp. 45-47 (with a correction in 5(3):165).
[35] Kathy Hirsh and Deirdre A. McConathy. Picture Preferences of Thoracic Surgeons, Journal of BioCommunications, Winter 1986, pp. 26-30.
[36] Elaine R. S. Hodges. The Guild Handbook of Scientific Illustration, Van Nostrand Reinhold, 1989.
[37] Donald D. Hoffman and Whitman A. Richards. Parts of Recognition, Cognition, 18(1-3), December 1984, pp. 65-96.
[38] Helen Hu, Amy A. Gooch, William B. Thompson, Brian E. Smits, John J. Rieser and Peter Shirley. Visual Cues for Imminent Object Contact in Realistic Virtual Environments, Proceedings of IEEE Visualization 2000, pp. 179-185.
[39] Eduard Imhof. Cartographic Relief Presentation, De Gruyter, 1982.
[40] Victoria Interrante and Chester Grosch. Visualizing 3D Flow, IEEE Computer Graphics and Applications, 18(4), July 1998, pp. 49-53.
[41] Victoria Interrante and Sunghee Kim. Investigating the Effect of Texture Orientation on the Perception of 3D Shape, SPIE Conference on Human Vision and Electronic Imaging VI, SPIE 4299, January 2001, pp. 330-339.
[42] Victoria Interrante, Henry Fuchs and Stephen Pizer. Enhancing Transparent Skin Surfaces with Ridge and Valley Lines, Proceedings of IEEE Visualization '95, pp. 52-59.
[43] Victoria Interrante, Henry Fuchs and Stephen Pizer. Conveying the 3D Shape of Smoothly Curving Transparent Surfaces via Texture, IEEE Transactions on Visualization and Computer Graphics, 3(2), April-June 1997, pp. 98-117.
[44] Victoria Interrante, Sunghee Kim and Haleh Hagh-Shenas. Conveying 3D Shape with Texture: Recent Advances and Experimental Findings, Human Vision and Electronic Imaging VII, SPIE 4662, January 2002, pp. 197-206.
[45] Victoria Interrante. Harnessing Rich Natural Textures for Multivariate Visualization, IEEE Computer Graphics and Applications, 20(6), November 2000, pp. 6-11.
[46] Victoria Interrante. Illustrating Transparency: Communicating the 3D Shape of Layered Transparent Surfaces via Texture, PhD Dissertation, UNC-Chapel Hill, 1996.
[47] Victoria Interrante. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution, Computer Graphics Proceedings, Annual Conference Series, 1997, pp. 109-116.
[48] Jan J. Koenderink, Andrea J. van Doorn and Astrid M. L. Kappers. Surface Perception in Pictures, Perception & Psychophysics, 52(5), 1992, pp. 487-496.
[49] Alan Johnson and Peter J. Passmore. Shape from Shading I: Surface Curvature and Orientation, Perception, 23(2), February 1994, pp. 169-189.
[50] Vincent Katz. Janet Fish: Paintings, Harry N. Abrams, New York, 2002.
[51] John M. Kennedy and Igor Juricevic. Foreshortening Gives Way to Forelengthening, Perception, 31(7), July 2002, pp. 893-894.
[52] Dan Kersten, Pascal Mamassian and David C. Knill. Moving Cast Shadows Induce Apparent Motion in Depth, Perception, 26(2), February 1997, pp. 171-192.
[53] Sunghee Kim, Haleh Hagh-Shenas and Victoria Interrante. Showing Shape with Texture: Two Directions Seem Better than One, Human Vision and Electronic Imaging VIII, SPIE 5007, January 2003, pp. 332-339.
[54] David C. Knill. Contour into Texture: Information Content of Surface Contours and Texture Flow, Journal of the Optical Society of America, A, 18(1), January 2001, pp. 12-35.
[55] Jan J. Koenderink. Solid Shape, MIT Press, 1990.
[56] Donald Kuspit. Don Eddy: The Art of Paradox, Hudson Hills Press, 2002.
[57] Andrea Li and Qasim Zaidi. Perception of Three-Dimensional Shape from Texture is Based on Patterns of Oriented Energy, Vision Research, 40(2), January 2000, pp. 217-242.
[58] Andrea Li and Qasim Zaidi. Information Limitations in Perception of Shape from Texture, Vision Research, 41(12), June 2001, pp. 1519-1533.
[59] Andrea Li and Qasim Zaidi. Limitations on Shape Information Provided by Texture Cues, Vision Research, 42(7), March 2002, pp. 815-835.
[60] William E. Loechel. Medical Illustration: A Guide for the Doctor-Author and Exhibitor, Charles C. Thomas, 1964.
[61] Andrew Loomis. Creative Illustration, The Viking Press, 1947.
[62] Matthew Luckiesh. Light and Shade and Their Applications, Van Nostrand, 1916.
[63] Kwan-Liu Ma and Victoria Interrante. Extracting Feature Lines from 3D Unstructured Grids, Proceedings of IEEE Visualization '97, pp. 285-292.
[64] Cindee Madison, William Thompson, Daniel Kersten, Peter Shirley and Brian Smits. Use of Interreflection and Shadow for Surface Contact, Perception & Psychophysics, 63(2), February 2001, pp. 187-194.
[65] Pascal Mamassian and Ross Goutcher. Prior Knowledge on the Illumination Position, Cognition, 81(1), August 2001, pp. B1-B9.
[66] Pascal Mamassian and Michael S. Landy. Observer Biases in the 3D Interpretation of Line Drawings, Vision Research, 38(18), September 1998, pp. 2817-2832.
[67] Scott McCloud. Understanding Comics: The Invisible Art, Harper Perennial, 1994.
[68] Gavin Miller. Efficient Algorithms for Local and Global Accessibility Shading, Computer Graphics Proceedings, Annual Conference Series, 1994, pp. 319-326.
[69] Olivier Monga and Serge Benayoun. Using Partial Derivatives of 3D Images to Extract Typical Surface Features, Computer Vision & Image Understanding, 61(2), March 1995, pp. 171-189.
[70] Ken Nakayama and Shinsuke Shimojo. Da Vinci Stereopsis: Depth and Subjective Contours from Unpaired Image Points, Vision Research, 30(11), 1990, pp. 1811-1825.
[71] Yuri Ostrovsky, Patrick Cavanagh and Pawan Sinha. Perceiving Illumination Inconsistencies in Scenes, AI Memo #2001-029, MIT, November 2001.
[72] Don E. Pearson and John A. Robinson. Visual Communication at Very Low Data Rates, Proceedings of the IEEE, 73(4), April 1985, pp. 795-812.
[73] David I. Perrett and Mark H. Harries. Characteristic Views and the Visual Inspection of Simple Faceted Objects and Smooth Objects: 'Tetrahedra and Potatoes', Perception, 17(6), June 1988, pp. 703-720.
[74] Pablo Picasso. Study of a Bull's Head, 5 November 1952.
[75] Vilayanur S. Ramachandran. Perceiving Shape from Shading, Scientific American, 259(2), August 1988, pp. 76-83.
[76] Gillian Rhodes, Susan Brennan and Susan Carey. Identification and Ratings of Caricatures: Implications for Mental Representations of Faces, Cognitive Psychology, 19(4), October 1987, pp. 473-497.
[77] Whitman Richards, Jan Koenderink and Donald Hoffman. Inferring Three-Dimensional Shapes from Two-Dimensional Silhouettes, Journal of the Optical Society of America, A, 4(7), July 1987, pp. 1168-1175.
[78] John L. Ridgway. Scientific Illustration, Stanford University Press, 1938.
[79] Ruth Rosenholtz and Jitendra Malik. Surface Orientation from Texture: Isotropy or Homogeneity (or Both)?, Vision Research, 37(16), August 1997, pp. 2283-2293.
[80] T. A. Ryan and Carol B. Schwartz. Speed of Perception as a Function of Mode of Representation, American Journal of Psychology, 69, 1956, pp. 60-69.
[81] Takafumi Saito and Tokiichiro Takahashi. Comprehensible Rendering of 3-D Shapes, Computer Graphics, 24(4), 1990, pp. 197-206.
[82] Detlev Stalling and Hans-Christian Hege. Fast and Resolution Independent Line Integral Convolution, Computer Graphics Proceedings, Annual Conference Series, 1995, pp. 249-256.
[83] Detlev Stalling, Malte Zöckler and Hans-Christian Hege. Fast Display of Illuminated Field Lines, IEEE Transactions on Visualization and Computer Graphics, 3(2), April/June 1997, pp. 118-128.
[84] Kent A. Stevens. The Information Content of Texture Gradients, Biological Cybernetics, 42, 1981, pp. 95-105.
[85] James V. Stone. Shape from Local and Global Analysis of Texture, Philosophical Transactions of the Royal Society of London, B, 339, 1993, pp. 53-65.
[86] Edmund J. Sullivan. Line: An Art Study, Chapman & Hall, 1922.
[87] Gabriel Taubin. Estimating the Tensor of Curvature of a Surface from a Polyhedral Approximation, Proceedings of the 5th International Conference on Computer Vision (ICCV '95), June 1995, pp. 902-907.
[88] James T. Todd and Francene D. Reichel. Visual Perception of Smoothly Curved Surfaces from Double-Projected Contour Patterns, Journal of Experimental Psychology: Human Perception and Performance, 16(3), 1990, pp. 665-674.
[89] Hans Wallach and D. N. O'Connell. The Kinetic Depth Effect, Journal of Experimental Psychology, 45(4), April 1953, pp. 205-217.
[90] Ernest W. Watson. The Art of Pencil Drawing, Watson-Guptill Publications, 1968.
[91] Charles Wheatstone. On Some Remarkable, and Hitherto Unobserved, Phenomena of Binocular Vision, Philosophical Transactions of the Royal Society of London, 128, 1838, pp. 371-394.
[92] Albert Yonas, Lynn T. Goldsmith and Janet L. Hallstrom. Development of Sensitivity to Information Provided by Cast Shadows in Pictures, Perception, 7(3), 1978, pp. 333-341.
[93] Frances W. Zweifel. A Handbook of Biological Illustration, University of Chicago Press, 1961.