
Creating Continuous Design Spaces for Interactive Genetic Algorithms with Layered, Correlated, Pattern Functions

DISSERTATION

Presented in Partial Fulfillment of the Requirements for

the Degree Doctor of Philosophy in the

Graduate School of The Ohio State University

By

Matthew Lewis, B.A., B.S.E., M.S.

*****

The Ohio State University

2001

Dissertation Committee:

Richard Parent, Adviser
Wayne E. Carlson
John R. Josephson

Approved by:
Adviser, Department of Computer and Information Science

UMI Number: 3031222


© Copyright by

Matthew Lewis

2001

ABSTRACT

Interactive evolutionary design (IED) is a design paradigm that can be used to generate content by means of artificial evolution. Traditional evolutionary design research relies on objectively computable fitness functions to evaluate the quality of individuals in a population of potential solutions to a design problem. IED systems rely instead on subjective human judgment to determine fitness.

Most implementations of IED systems demonstrate significant signature. The term signature refers to the lack of visual diversity in the populations and individuals generated by IED systems. Signature is primarily the result of the solution space representation. Frequently, primitives and techniques are used which are not sufficiently general, or are biased towards specific visual qualities. In practice, such systems are only able to access a small region of a problem domain’s ideal potential solution space. Alternatively, too general a representation is used, resulting in the need to search too large a region of solution space. This makes it impractical for an interactive system to be used to find fit individuals.

One of the most common uses of IED is the generation of nonrepresentational images, usually either for artistic purposes or for use as textures or surface shaders. Focusing on this problem domain, continuous pattern functions are introduced and used as a new genetic primitive in an evolutionary design context. Abstracted from pattern-based procedural texturing techniques, continuous pattern functions are defined in order to produce a wide range of patterns and forms for generating images and surfaces. Their flexibility enables greater visual diversity and control of visual attributes than has previously been demonstrated in IED image systems. Formal graphic design knowledge is integrated into continuous pattern functions to further increase the visual diversity of generated populations. Finally, layer-based cloning methods are introduced to address the “synchronization problem” of smoothly facilitating feature correlation.

This is dedicated to my wife, Beth.

ACKNOWLEDGMENTS

I have reached this point because of many, many people who have had an impact

on my life and education. First, thanks must be given to my advisor Rick Parent.

Computer Graphics is a huge field and Rick has stuck by me patiently through my

many changing research interests. Thanks also to the other members of my committee, Wayne Carlson and John Josephson. They have been generous with their time and advice. It has been a privilege to work with each of them.

My gratitude also goes to Wayne for all of the wonderful opportunities he has given me over the years I’ve worked at ACCAD. I’ll always appreciate and remember the concern he expressed about whether I’d finish my Ph.D. while considering whether to hire me for a staff position. Thanks also to Maria Palazzi for her ongoing trust and encouragement. Her tireless enthusiasm and drive are a constant source of inspiration.

My fellow Supercomputer Graphics Research Specialists, Steve Spencer, Steve May, and Pete Carswell have helped and taught me a great deal over the years. They, along with the rest of the ACCAD staff, Aline, Barb, Bob, Charlotte, Elaine, Jeff, Midori, Mike, Phil, Ruedy, Suba, Traci, and Viki, have all made this a fantastic environment to work in.

There are many, many students that I have spent countless hours with at CIS and ACCAD over the past decade. I have learned so much teaching, working, and hanging out with you all. Just a few of these include: Alex, Barb, Beth, Brandon, Breit, Carl, Charles, Chris, Clarke, Craig, Dave, Ed Sindelar, Ed Swan, Erika, Ferdr,

Flip, Hae-jeon, Heath, Ian, Janet, John, Jon, Julie, Karan, Kevin G, Kevin R, Kevin

S, KT, Leslie, Liza, Markus, Matt B, Matt C, Meg, Melissa, Michelle, Miho, Moon,

Muqeem, Nathania, Pete H, Pete S, Rick, Roxie, Scott K, Scott S, Sonia, Todd, Tom,

Tony, Tonya, Torsten, Vita, Wobbe, and Wooksang.

Very special thanks to Matt Beitler, Cory Bowman, Peter Gerstmann, Cheryl

Klepser, Neal McDonald, Greg Rapp, Dave Reed, Todd Sines, and Lawson Wade for

many long hours spent contemplating life’s mysteries. Each of you has provided

inspiration and support in many different ways.

A number of teachers and faculty have given me opportunities, challenges, insights, and friendship. Without people like them I never would have made it this far. Some of them include Norm Badler, Mary Coif, Chuck Csuri, Carol Gigliotti, Mary Gregg,

Ken Gross, Bettie Kelly, Ruth King, Laura Lisbon, Tom Saiers, and Bob Schwartz.

Antoine Durr, Gary Greenfield, Ray Davis, and the staff of Side Effects have provided advice and assistance with numerous technical challenges.

Of course my family are most responsible for who I am today. Infinite thanks to my mother, father, Amy, and Jennifer for their unwavering love, encouragement, and support. Vincent and Simon are two amazing fountains of endless love, potential, and surprise (how is it that I managed to be chosen to be Daddy for the most incredible boys on the face of the earth?). Most of all I must express eternal gratitude to Beth for her patience, intelligence, and love. Thanks for choosing me.

VITA

March 21, 1969 ...... Born - Doylestown, PA, USA

1991 ...... B.A. Philosophy

1991 ...... B.S.E. Computer Science Engineering

1993 ...... M.S. Computer Information Science

1993-present ...... Graphics Research Specialist, The Advanced Computing Center for the Arts and Design (ACCAD), The Ohio State University.

PUBLICATIONS

Research Publications

Lewis, Matthew. “Visual Aesthetic Evolutionary Design”, on CD accompanying Creative Evolutionary Systems, Peter J. Bentley and David W. Corne, eds. Morgan Kaufmann, 2001.

Lewis, Matthew. A Comparison of Parametric Contour Spaces for Interactive Genetic Algorithms. OSU-ACCAD-6/01-TR1, The Ohio State University, Advanced Computing Center for the Arts and Design, 2001.

Lewis, Matthew. “Aesthetic Evolutionary Design with Data Flow Networks” in Proceedings of Generative Arts 2000, Milan, Italy, ed. Celestino Soddu, December, 2000.

Lewis, Matthew. An Implicit Surface Prototype for Evolving Human Figure Geometry. OSU-ACCAD-11/00-TR2, The Ohio State University, Advanced Computing Center for the Arts and Design, 2000.

Lewis, Matthew. Evolving Human Figure Geometry. OSU-ACCAD-5/00-TR1, The Ohio State University, Advanced Computing Center for the Arts and Design, 2000.

Lewis, Matthew. Sanbaso: A Web Based VRML Humanoid Animation Tool. OSU-ACCAD-10/97-TR1, The Ohio State University, Advanced Computing Center for the Arts and Design, 1997.

Carlson, Wayne, Stephen Spencer, Margaret Geroch, Matthew Lewis, Keith Bedford, David Welsh, John Kelley, and Arun Welch. Visualization of Results from Distributed, Coupled Supercomputer-Based Mesoscale Atmospheric and Lake Models Using the NASA ACTS. OSU-ACCAD-7/95-TR1, The Ohio State University, Advanced Computing Center for the Arts and Design, 1995.

Lewis, Matthew. Texture Mapping and Image-Based Polygon Coloring in VRML Environments. OSU-ACCAD-7/95-TR2, The Ohio State University, Advanced Computing Center for the Arts and Design, 1995.

Lewis, Matthew. Automatic Animation Direction. Computer Graphics Research Laboratory Quarterly Progress Report No. 38. Department of Computer and Information Science, University of Pennsylvania, 1991.

FIELDS OF STUDY

Major Field: Computer and Information Science

Studies in:
Computer Graphics (Prof. Richard Parent)
Artificial Intelligence (Prof. John R. Josephson)
Art (Prof. Robert Schwartz)

TABLE OF CONTENTS

Page

Abstract ...... ii
Dedication ...... iv
Acknowledgments ...... v
Vita ...... vii
List of Tables ...... xiii
List of Figures ...... xiv

Chapters:

1. Introduction ...... 1

1.1 Background ...... 2
1.1.1 Computer Graphics Design Software ...... 2
1.1.2 Evolutionary Design ...... 6
1.1.3 Interactive Evolutionary Design ...... 8
1.2 Motivation ...... 17
1.3 Thesis ...... 19
1.4 Document Overview ...... 19
1.4.1 Type-face Conventions ...... 20

2. Interactive Evolutionary Design ...... 21

2.1 Traditional Evolutionary Algorithms ...... 21
2.1.1 Optimization ...... 22
2.1.2 Common Qualities ...... 24
2.1.3 Individuals ...... 25
2.1.4 Populations ...... 27
2.1.5 Parent Selection ...... 27
2.1.6 Crossover ...... 28
2.1.7 Mutation ...... 30
2.1.8 Fitness ...... 32
2.1.9 Convergence ...... 34
2.2 IED Techniques and Limitations ...... 36
2.2.1 Solution Spaces ...... 37
2.2.2 Generating Offspring ...... 39
2.2.3 Display ...... 41
2.2.4 Interactive Evaluation ...... 42
2.2.5 Convergence ...... 44
2.3 Signature ...... 45
2.3.1 Sources of Signature ...... 46
2.3.2 Signature Reduction ...... 47

3. Continuous Pattern Functions ...... 50

3.1 Features ...... 52
3.1.1 One-dimensional Features ...... 54
3.1.2 Two-dimensional Features ...... 57
3.1.3 Three-dimensional Features ...... 66
3.1.4 ...... 69
3.2 Patterns ...... 75
3.2.1 Bombing ...... 76
3.2.2 Global Perturbations ...... 79
3.3 Layers ...... 80
3.4 Summary ...... 82

4. Formal Design Concept Representation ...... 83

4.1 Design Principle Implementation ...... 85
4.2 Value Biases ...... 87
4.2.1 Continuous Choice ...... 87
4.2.2 Multiple Sub-range Remapping ...... 88
4.3 Unity ...... 89
4.3.1 Repetition ...... 90
4.3.2 Continuation ...... 91
4.3.3 Proximity ...... 93
4.3.4 Variety ...... 94
4.3.5 Combining Techniques ...... 95
4.4 Emphasis and Focal Point ...... 96
4.4.1 Contrast ...... 97
4.4.2 Isolation ...... 99
4.4.3 Placement ...... 99
4.4.4 Combining Techniques ...... 100
4.5 Balance ...... 101
4.5.1 Symmetric Balance ...... 102
4.5.2 Asymmetric Balance ...... 105
4.6 Rhythm ...... 109
4.7 Shape ...... 111
4.8 Color ...... 113
4.8.1 Color Representation ...... 113
4.8.2 Palettes ...... 114
4.8.3 Value ...... 114
4.8.4 Saturation ...... 116
4.8.5 Hue ...... 117
4.9 Trait Palettes ...... 124
4.10 Summary ...... 127

5. Layer Synchronization ...... 128

5.1 Compositing Methods ...... 130
5.1.1 Value Layers ...... 130
5.1.2 Color Layers ...... 131
5.2 Attribute Synchronization ...... 133
5.3 Trait Localization ...... 135
5.4 Global and Layer Level Parameters ...... 139

6. lED Example Domains ...... 140

6.1 Images ...... 141
6.2 UV Shaders ...... 143
6.3 Height Fields ...... 146
6.4 Solid Shaders ...... 147

7. Conclusion ...... 162

7.1 Summary ...... 162
7.2 Contributions ...... 165
7.3 Future Work ...... 167

Appendices:

A. Example Image Chromosome Map ...... 170

Bibliography ...... 175

LIST OF TABLES

Table Page

3.1 Worley’s bevel parameters ...... 55
3.2 2D feature blend bias and gain constants ...... 64
3.3 3D feature blend bias and gain constants ...... 68

LIST OF FIGURES

Figure Page

3.1 Example primitive functions ...... 53
3.2 Simple feature ...... 53
3.3 1D feature parameters ...... 56
3.4 1D feature examples ...... 58
3.5 2D feature primitives ...... 59
3.6 Single star point ...... 60
3.7 2D feature shape blending ...... 63
3.8 Blend curve for torus to 8pt-star transition ...... 65
3.9 3D feature primitive ...... 66
3.10 3D feature primitives ...... 67
3.11 Blend curve for tangle to pillow transition ...... 68
3.12 3D feature shape blending ...... 70
3.13 Noise sample ...... 71
3.14 Features perturbed by noise ...... 73
3.15 2D noise ...... 74
3.16 Single trait bombing examples ...... 78
3.17 Adding global noise to a pattern’s attributes ...... 80
3.18 Combining two 1D layers ...... 81

4.1 Increasing the value of the unity-by-repetition gene ...... 90
4.2 Increasing the value of the unity-by-continuation gene ...... 91
4.3 Increasing the value of the unity-by-proximity gene ...... 93
4.4 Increasing the value of the unity-by-variety gene ...... 94
4.5 Increasing the value of the unity-by-combination gene ...... 95
4.6 Increasing the value of the emphasis-by-contrast gene ...... 97
4.7 Increasing the value of the emphasis-by-isolation gene ...... 98
4.8 Increasing the value of the emphasis-by-placement gene ...... 99
4.9 Increasing the value of the emphasis-by-combination gene ...... 100
4.10 Feature Symmetry ...... 102
4.11 Global Symmetry ...... 104
4.12 Size vs. quantity asymmetric balance ...... 106
4.13 Size vs. complexity asymmetric balance ...... 107
4.14 Quantity vs. complexity asymmetric balance ...... 109
4.15 Rhythm ...... 110
4.16 Shaping ...... 111
4.17 Value Design Genes ...... 115
4.18 Saturation Gene ...... 116
4.19 Hue Remapping ...... 117
4.20 Warm/Cool Gene ...... 119
4.21 Color Schemes ...... 120

5.1 Layer Correlation ...... 129
5.2 Displacement Direction ...... 130
5.3 Combining Layer Values ...... 132
5.4 Mixing Color Layers ...... 133
5.5 Clone Layering ...... 136
5.6 Trait Regions ...... 137
5.7 Layer vs. Global Genes ...... 138

6.1 2D Texture Images ...... 149
6.2 Designer Images ...... 150
6.3 UV Sphere Bump Map Images ...... 151
6.4 Individual UV textures ...... 152
6.5 Regular pattern UV textures ...... 153
6.6 Csuri Images ...... 154
6.7 Light Properties ...... 155
6.8 UV Sphere Displacement Images ...... 156
6.9 UV Sphere Displacement Detail Images ...... 157
6.10 Populations of Game Environment Maps ...... 158
6.11 Evolved Game Environment ...... 159
6.12 Evolved Game Environment ...... 160
6.13 Solid Shaders ...... 161

CHAPTER 1

INTRODUCTION

In the field of computer graphics, trained designers use a myriad of software

tools to craft visual content. Models are sculpted, surfaces are textured and lit,

and characters are animated. These are all extremely time consuming tasks requiring

great expertise with a dizzying array of software.

In recent years, common desktop computers have become able to represent extremely complex virtual environments containing thousands of objects and seemingly endless amounts of animation. This capability for complexity opens a bottomless pit of demand for new content design. The fact that home computers can now handle this level of representation also creates the demand for software which allows untrained users to design the virtual spaces in which they can now work and play.

One technique which has been used for creating large numbers of entities while also facilitating non-expert design is called interactive evolutionary design (IED). IED allows designers to create by selection, rather than construction. The computer first creates a random population of solutions to some design problem. A human designer then judges these solutions for quality. Based on the designer’s input, the computer then generates a new population of solutions, using parts of the selected higher quality individuals to construct the next generation’s individuals. This eventually results in increasingly improved designs. As the selection and creation process is iterated, creative high quality solutions gradually evolve.
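The selection-and-breeding cycle just described can be sketched in a few lines of Python. The fixed-length genome of floats, the population size of nine, and the hardcoded stand-in for the designer’s selections are illustrative assumptions, not details of any particular IED system:

```python
import random

# Hypothetical genome: a fixed-length list of floats in [0, 1],
# standing in for any parametric design representation.
GENOME_LEN = 8

def random_individual(rng):
    return [rng.random() for _ in range(GENOME_LEN)]

def crossover(a, b, rng):
    """One-point crossover: the child takes a prefix from one parent."""
    point = rng.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rng, rate=0.1):
    """Randomly replace each gene with small probability."""
    return [rng.random() if rng.random() < rate else g for g in genome]

def next_generation(selected, size, rng):
    """Breed a new population from the user-selected individuals."""
    return [mutate(crossover(rng.choice(selected), rng.choice(selected), rng), rng)
            for _ in range(size)]

rng = random.Random(1)
population = [random_individual(rng) for _ in range(9)]
# In a real IED system a human would judge the rendered designs;
# here two "preferred" individuals are simply hardcoded.
selected = [population[2], population[5]]
population = next_generation(selected, 9, rng)
```

In a full system this loop would repeat for a dozen or so generations, with each genome decoded into a rendered design for the user to judge.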

While IED has been used successfully, primarily by specific artists and designers within limited domains, there are a number of practical issues that have stalled its success as a generic design tool. Depending on which techniques are used to implement the IED system components, it may take too long to obtain a quality solution. In addition, the solutions that are obtained often suffer from a strong signature, meaning that most of the solutions evolved by a given system or technique seem unacceptably similar. The intent of this work is to introduce several new methods that begin to address these issues.

1.1 Background

This research is based on several bodies of work. A great number of techniques exist for easing the burden of designing computer graphics content. An overview of these methods is first discussed. This is followed by a brief survey of evolutionary design algorithms in general. Finally, previous work with interactive evolutionary design is reviewed.

1.1.1 Computer Graphics Design Software

Computer graphics (CG) content for games, film, and video is most frequently designed by expert users of complex commercial 3D animation software packages [2] [9] [10] [150]. Using such software typically requires fairly in-depth knowledge of 3D graphics concepts like normals, texture coordinates, transformation ordering, etc. It is not uncommon for users to learn how to create CG content by taking several art and design courses over a period of several years at the college level. Many users complete an entire undergraduate or even graduate degree program in computer animation before seeking professional work designing CG content. This section presents an overview of research that attempts to enable both novice and expert users to more easily create complex visual elements.

Simple Design Interfaces

In addition to the design by choosing paradigm used by this research, several other approaches have been employed to allow inexperienced users to design complex CG content through more intuitive interfaces. Some of these other approaches could be categorized as design by sketching, design by inference, design by scanning, or design by assembly. Many of these methods rely on the interface’s similarity to natural “real-world” activities, attempting to free the user from having to learn any new, arcane interfaces or procedures.

The design by sketching approach to simple interfaces allows the user to create visual elements via a direct manipulation interface. Examples include sculpting objects from virtual clay [148], or “painting” textures directly onto the surfaces of objects [67]. Complex physical animation has been created interactively by mapping a user’s direct control (e.g., via a mouse) to an intuitive combination of a character’s degrees of freedom [88].

Simple interfaces relying on a design by inference paradigm allow the user to create a rough indication of the properties of a desired design which then serve as constraints to be satisfied by the computer. For geometric modeling, software has been developed which allows even children to create very complex 3D models from just a few quickly drawn shape contours [162]. Object geometry and surface reflectance properties can be inferred from a relatively small number of photographs [39]. Desired constraint relationships between parts in a scene can be specified and maintained [14]. Procedural textures can be extrapolated from scanned samples [24] [69]. Complex ink stroke patterns can be brushed onto a surface based on a set of example strokes and interactively specified surface directional information [146]. Physically-based animation can be created by specifying only desired boundary and intermediate configurations [32] [128].

Realistic walking animation can be generated from interactively placed footprints [58].

Design by scanning allows a user to create CG content by simply applying a digitizing device to the real world phenomenon to be represented. Objects can be scanned to acquire both geometry and texture [35]. Motion can be digitized using readily-available motion capture equipment [118].

Finally, simple interfaces increasingly rely on design by assembly in order to allow naive users to build complex environments. The user interactively combines pre-built components in some novel way. The Alphaworld persistent virtual environment, for example, has been constructed primarily by allowing users to assemble their own persistent structures in a multiuser environment from “prefab” building materials [1]. There are several packages for assembling avatars for use in virtual environments by choosing parts, textures, animations, etc. [19] [34]. Some packages also allow animations to be created by choosing and combining sets of motions from motion libraries [33] [34].

Procedural Interfaces

In addition to the interactive evolutionary design methods presented here, the CG field has a rich history of using procedural methods for creating content. In general these techniques are focused on providing interfaces that enable professional designers to create complex content that would be impractical to craft by hand. While to some extent all computer graphics involves the use of the computer to procedurally produce visual data, the majority of the techniques involve varying degrees of direct interactive “sculpting” of data.

Some examples of generative approaches include sweep operators which allow for a

wide range of extrusion and rotation-based surfaces [155], and L-systems [129] which

use a rule-based grammar to produce realistic plants and other branching structures.

Fractals [107] [109] are useful for modeling natural phenomena such as mountains,

hills, bodies of water, clouds, even entire planets. Particle systems can be used for

creating complex effects like fire, smoke, and airborne water [134].
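The rule-based rewriting at the heart of an L-system can be sketched very compactly. The two-symbol grammar below is the classic “algae” example often used to introduce L-systems, not a grammar taken from [129]:

```python
# Minimal L-system: repeatedly rewrite every symbol using production rules.
def rewrite(axiom, rules, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}
result = rewrite("A", rules, 4)  # -> "ABAABABA"
```

Interpreting the resulting symbols as drawing commands (branch, turn, leaf) is what turns such strings into plant-like geometry.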

Behavioral animation techniques create simple behaviors for flocking, herding,

and schooling [135], rich reactive behaviors for simulating animals like dogs [22] or

dinosaurs [49], or even complex action scripting for human characters [11] [49] [124].

CG authoring software is increasingly allowing for the creation of encapsulated

objects [99]. This technique allows a more technically oriented designer to create a

CG item such as a model or shader with complex internal workings and behavior,

but with a simple parametric interface. This high-level interface can then be used by a more artistically trained designer who does not need to know any of the details of the implementation and can thus better focus on the final animation, colors, lighting, etc. [2] [150] [169].

Texture and Surface Design

Values for color and other visual properties (e.g., displacement, light reflectance, etc.) for use in images and object surfacing are typically obtained by painting, scanning, or a combination of these. Image painting programs and scanners are by far the most common means of producing images, and are quite frequently used to build textures for surfacing objects. Paint programs have a wealth of image processing filters to manipulate scanned and painted images. Nearly all CG programs have a set of procedural surface materials such as wood or marble. Less common are interfaces which allow users to construct their own procedural image or surface algorithms.

There are several programming environments which allow procedural shaders to be written, RenderMan® by Pixar being perhaps the most well known [125]. These function by allowing a programmer to write algorithms in a familiar C-like language which compute values for a given location in an image or on a surface. Shadetree allows someone knowledgeable about shader writing techniques to assemble shaders interactively by connecting function nodes in a graph structure [28].
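The essence of such a shader is a pure function of surface location that returns a color. The sine-based stripe pattern below is a minimal illustrative sketch in Python, not actual RenderMan shading language:

```python
import math

def stripe_shader(u, v, frequency=10.0):
    """Return an RGB color for surface location (u, v) in [0, 1] x [0, 1]."""
    # Stripe weight in [0, 1], varying sinusoidally across u.
    t = 0.5 + 0.5 * math.sin(2 * math.pi * frequency * u)
    dark = (0.2, 0.1, 0.0)   # arbitrary colors chosen for illustration
    light = (0.9, 0.8, 0.6)
    return tuple(d + t * (l - d) for d, l in zip(dark, light))

color = stripe_shader(0.0, 0.0)  # midpoint of the two colors at u = 0
```

A renderer would call such a function once per visible surface point; the v parameter is unused here but kept to show the general signature.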

A final method of surfacing is currently an active research area. A discrete sample

of a desired texture (e.g., a 300 by 300 pixel image) is analyzed. Larger regions

of surfaces with texture that has the visual attributes of the sample can then be generated. The different methods used largely vary in the sorts of visual qualities

they are capable of reproducing and the artifacts that result. A couple of recent papers provide excellent comparisons of the different approaches [43] [72].

1.1.2 Evolutionary Design

While this research relies entirely on interactive evolutionary design techniques, there has been a vast amount of prior work in evolving designs using more traditional non-interactive evolutionary techniques. Bentley and Reynolds have both assembled excellent surveys of the field. Some interesting highlights are briefly mentioned to give some indication of the breadth of evolutionary design (ED) research.

Architectural spaces have been evolved using symbiotic enclosure and space rep­ resentations [31]. Two-dimensional floor plans are evolved based on desired floor plan constraints (e.g., desired number of rooms and total area) by identifying successful combinations of low-level genes and recombining them into higher level genes [140].

Feasible LEGO® bridge and crane designs have been evolved using physical simulation [47]. Bridge designs were also produced using both aesthetic psychovectors [50] and formal grammars [141]. Parmee discusses at length the application of evolutionary search to engineering design problems [120]. Shapes have been evolved using representations ranging from FFDs and FEM [174] to voxels [13]. ED has been used in domains as disparate as airplane wings [80], spider webs [86], robots [96], analog circuits [85], and even web ad banners [51].

Motion has also been a frequent target of evolutionary design algorithms. Locomoting creatures [66] [112] [154], competitive behavior [48] [153], pursuit and evasion behavior [29], and human arm motion [104] have all been evolved.

Finally, Bentley describes an impressive generic system called GADES which can be used for evolving designs that satisfy physical constraints for domains ranging from coffee tables and race cars to hospital floor plans. It is interesting to note that Bentley’s typical evolution trials ran for 500 generations, with population sizes around 180 [17]. As will be shown, one of the important constraints of the IED problem is the limitation of running typically less than one or two dozen generations, with population sizes commonly around nine, sixteen, thirty-six, or sometimes sixty-four, depending on the domain.

1.1.3 Interactive Evolutionary Design

For some problem spaces, it is nearly impossible to produce the objective fitness function necessary for automating the design process with genetic techniques. In domains where a human designer must make judgments interactively, this is particularly challenging. When a user interactively assigns fitness values to individuals in a population, the process is often referred to as interactive evolutionary design (IED).

Design Domains

Following Dawkins’ biologically inspired Biomorphs program which evolved 2D branching drawings resembling insects or plants [37], Todd and Latham [165] and

Sims [151] were the first to evolve computer graphics content via interactive selection.

Sims introduced the concept of using hierarchical expressions as a genetic representation for creating 2D and 3D textures. Todd and Latham demonstrated how to produce complex 3D iterative and recursive sculptures using a parametric vector-based genetic representation.
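A Sims-style hierarchical expression genotype can be modeled as a nested tree of operators over the pixel coordinates. The tiny operator set below is an illustrative subset, not Sims’ actual function library:

```python
import math

def evaluate(node, x, y):
    """Recursively evaluate an expression tree at pixel coordinates (x, y)."""
    if node == "x":
        return x
    if node == "y":
        return y
    if isinstance(node, float):
        return node                       # numeric constant leaf
    op, *args = node                      # interior node: (operator, child, ...)
    vals = [evaluate(a, x, y) for a in args]
    if op == "add":
        return vals[0] + vals[1]
    if op == "mul":
        return vals[0] * vals[1]
    if op == "sin":
        return math.sin(vals[0])
    raise ValueError(f"unknown operator: {op}")

# One genotype; evaluating it over every pixel yields a grayscale image.
genotype = ("sin", ("mul", 8.0, ("add", "x", "y")))
value = evaluate(genotype, 0.25, 0.5)    # value at one pixel
```

Genetic programming then operates on such genotypes by mutating and exchanging subtrees, so offspring expressions can grow or shrink in complexity.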

Since then, interactive evolution has been applied to numerous specific design domains. Applications have included the evolution of architecture [21] [30] [117] [139], clothing [81] [111], lighting [6], plants [101] [167], consumer products [17] [44] [61] [143] [156], tools [83], colors [42], numerical database regularities [172], HTML style sheets [105], dynamical system rules [152], human bodies [91] [92] and faces [12] [27] [114] [119] [143], and character motion [5] [95] [149] [170].

Probably the largest application area has been the evolution of 2D images. A wide array of techniques such as morphing [60] [171], Mondrian-like¹ subdivision and coloring [71], image processing [126] [130], turtle graphics [40], symmetric polar shapes [76], neural networks [97] [119], and simple polygons [115] have all been used.

¹Piet Mondrian (1872-1944) was a Dutch painter best known for his abstract grid paintings.

Many others have implemented variations of the techniques employed by Sims for

evolving images [63] [110] [138] [168]. Rowbottom provides a detailed overview of the

unique functionality of a number of these systems [142]. Links to most of them have

been collected online as well [93].

Generic three dimensional models have been evolved using a number of techniques including expression-based deformation of a polygonal sphere’s vertices [74], iterative application of polygon modeling functions like extrusion and face division [102], branching polygonal primitives [144], vertex displacement from reaction-diffusion [87], and surfaces created by rotating profile curves [160].

There have been a few systems which evolve three-dimensional shapes using implicit surfaces specified by algebraic equations that are relevant to the primitives used in this dissertation [15] [36] [79]. The implicit equations of primitive shapes (e.g., spheres, tori) are represented by expression trees which are then evolved to form somewhat more complex shapes. The expression hierarchy representation used is very similar to the genetic programming method used by Sims for generating images.

The 3D modeling work by Nishino, Takagi, and Utsumiya also has similarities to the research presented here. Implicit primitives and deformations are combined using traditional genetic algorithm techniques [113]. The primary difference is that they typically use a small fixed number of primitives (e.g., three) and a few specific deformations like twisting. The examples generated are primarily simple forms like green peppers and twisted bottles.

Takagi recently completed a massive survey of systems employing interactive evolutionary computation, including a very large number of publications available only in Japanese [161]. In addition to the application areas mentioned above, he provides references to examples of interactive evolutionary computation applied to domains such as component layout, speech and music, force feedback, database retrieval, data mining, robot control, and several others.

Generic Design

There have been a few generic systems for solving general lED problems as well.

Rowley created an X-windows based system which combined a toolkit for displaying and judging image-based individuals with a general system for mutating and crossbreeding expression-based representations. Different “modules” could be written for different problem spaces, with each module providing domain-specific implementations of mutation, crossbreeding, and image generation. Modules for evolving Sims-style expression and fractal based images were produced [145].

Pontecorvo is developing a commercial system for generic evolutionary design called Emergent Design Workbench [127]. One of his primary focuses is the facilitation of product design by the consumer [45][52]. The author presents a framework for a generic system which uses data flow networks for both solution space and genetic algorithm representations [90]. Schnier and Gero demonstrate how representations of different “styles” can be combined and evolved [147].

Todd and Latham created a generic evolutionary design system called PC Mutator capable of interfacing with existing external Windows-based design software such as paint programs. Once parametric models are built (e.g., for designing cartoon faces), PC Mutator sends commands via DDE or OLE to the external application to create a population of instances of the model. The external application generates images of the individuals, which are then passed back to PC Mutator for display. Subjective fitness determination, mutation, and crossover take place in PC Mutator and the cycle is repeated [163][166].

Marks describes a system for finding parametric design solutions for domains that are too computationally complex for the interactive requirements of IED [98]. It involves “overnight” precomputation of a hierarchy of design solutions. It attempts to compute a maximally diverse set of parameter settings so that the hierarchy of precomputed results can then be presented to a user with no interactive computation time. There is an implicit requirement for a relatively low number of parameters to enable convergence.

Comparing Control in GAs and GP

There are a wide variety of different techniques for implementing evolutionary design systems. This will be the primary focus of the next chapter. The two primary methods are genetic algorithms (GAs) and genetic programming (GP). They differ primarily in the representations they use for storing the genetic information which describes an individual, and in the evolution operations that are then performed to mate and mutate individuals. Note that the term genetic algorithms is used by some more generally to encompass all evolutionary algorithm techniques, including genetic programming. Here the narrower meaning of GA commonly used by evolutionary algorithm researchers is used, referring mainly to algorithms which encode individuals using fixed-length representations [70].

Genetic algorithms typically rely on a fixed-length vector of parameter values. Genetic programming uses a hierarchical graph structure, with leaf/terminal nodes usually corresponding to variables or constants which serve as arguments for functions stored in intermediate graph nodes. While GAs mate individuals by swapping sub-vectors, GP performs mating by swapping graph subtrees. Mutation involves making small changes to randomly chosen genes in both cases, but in GP, mutations may also cause sub-trees to expand or contract, and functions at intermediate nodes may be randomly changed to a different function.
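The contrast between the two representations and their operators can be sketched concretely. The following Python fragment is illustrative only (the function names and operator probabilities are my own, not from any particular system): a GA genotype is a flat list of parameter values, while a GP genotype is a nested function tree.

```python
import random

# GA genotype: a fixed-length vector of parameter values.
def ga_crossover(mom, dad, point):
    """Mate by swapping sub-vectors at a single cut point."""
    return mom[:point] + dad[point:]

def ga_mutate(genes, rate=0.1, step=0.05):
    """Make small random changes to randomly chosen genes."""
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in genes]

# GP genotype: a function tree, here nested tuples such as
# ('add', ('x',), ('sin', ('y',))); leaves are 1-tuples.
def gp_mate(tree, graft):
    """Mate by subtree swapping: replace one randomly chosen node of
    `tree` with `graft`, a subtree taken from the other parent."""
    if len(tree) == 1 or random.random() < 0.3:
        return graft
    i = random.randrange(1, len(tree))
    return tree[:i] + (gp_mate(tree[i], graft),) + tree[i + 1:]
```

Note that gp_mate can change the size and shape of the offspring's genotype, while ga_crossover cannot; this structural freedom is the source of both GP's flexibility and its control difficulties.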

A few people have commented previously on the comparative difficulty of controlling the results of genetic programming-based methods, as compared to genetic algorithms, in IED domains. Margaret Boden (as quoted by Greenfield) says that Sims’ GP-based approach is “undisciplined” and “cannot be used to explore or refine an image space in a systematic way.” Whereas of Latham and Todd’s GA-based approach, she says that systematic exploration and refinement is possible, if only parameters are mutated (as opposed to also mutating structure) [23][62].

Todd and Latham agree. They mention Sims’ approach to structure mutation and marriage, saying that while it has yielded “very interesting results,” “...the structure matching algorithm used in marriage can give rather wild results. Therefore, we do not consider that this implementation yet gives enough control for artistic applications [165].” Todd also said he felt that “structure mutation is too uncontrollable for most practical applications” and that the artwork shown in their book “came from hand-crafted structures [164].”

Musgrave concludes his discussion of his experiences with Sims’ approach by comparing genetic programming implementations with those based on genetic algorithms. He says that GP-based systems “tend to be simultaneously more chaotic, hard to control, and creative [108].” In an attempt to address this problem, the research presented here attempts to combine the degree of control afforded by genetic algorithms with genetic programming’s potential for creativity.

Using Design Knowledge to Bias Solution Space

A number of authors have discussed methods of biasing movement through the solution space to more “desirable” subspaces. Sims mentions the usefulness of biasing genes to improved distributions, in a parametric genotype context [151]. Greenfield has mentioned that Sims clamps his saturation values to a minimum to avoid muddiness in his images. Greenfield also points out the relatively high dependence on neighborhood image processing nodes in Sims’ function set [62][64].

Pontecorvo suggests his commercial system will support designers interactively modifying their space representation by allowing them to combine low-level parameters into “bundles” to form a new high-level parameter [127].

A technique called steering is sometimes used with parametric systems. Steering biases mutations in the direction most recently “traveled” in solution space. The intent is to predict the user’s preferences based on previous user selections. Interface elements can allow the user to control the amount of steering, since oversteering can greatly reduce population diversity [142][166].

Discussion of Signature

Musgrave points out that the genetic functions (or “bases” as he calls them) used tend to determine the individual “looks” towards which each system is biased. As examples he refers to Sims’ “characteristic fractal patterns” [151], Rooke’s “deterministic fractal functions that are generally iterations on the complex plane” [138], and his own “natural looking” base functions which were “originally honed for the modeling of natural phenomena such as mountains, clouds, and water [108].”

As was mentioned before, it was Rowbottom who pointed out that “...virtually all of these systems produce output which carries a certain ‘signature’ that identifies the program far more than the artist [142].”

Evolving Shader Attributes

The surface color and displacement of computer graphics objects is frequently generated using shaders. One of the more common specifications for shaders is Pixar’s RenderMan [8]. Shaders are small programs, often written in a C-like language, that determine an appropriate value for a given location based on the values of a number of user-controlled parameters. Shaders are commonly used to generate surfaces that can be procedurally defined, like brick walls or marble.
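As a rough illustration of the idea (written here in Python rather than the RenderMan Shading Language, with made-up parameter names), a procedural marble-like pattern can be computed per surface point from a few user-controlled parameters:

```python
import math

def marble_value(s, t, freq=4.0, turb_amp=1.0):
    """Return a grayscale value in [0, 1] for surface coordinates (s, t).

    A cheap stand-in for turbulence (summed sine octaves) perturbs a
    sine-wave 'vein' pattern. Real shaders evaluate expressions like
    this, for color and displacement, at every shading sample.
    """
    turb = sum(abs(math.sin((s + t) * freq * 2 ** o)) / 2 ** o
               for o in range(4))
    return 0.5 + 0.5 * math.sin(freq * math.pi * s + turb_amp * turb)
```

Evolving shaders amounts to evolving the structure and parameters of expressions of this kind.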

To provide a concrete example of some of the issues that this dissertation addresses, this section discusses an IED system by Ibrahim called Genshade which is capable of evolving RenderMan shaders [54][75]. This system was chosen because of the relatively large number of example images (a few hundred) produced with it that are available online. The vast majority of work created by IED systems is produced only by the system’s author or, at most, one or two other users. As a result, the output of a given system must generally be judged based on only a handful of images. In addition to the many images available on the Genshade web site, a web page containing fifty-four images produced using Genshade by nearly two dozen art students from SCAD (The Savannah College of Art and Design) provides a unique opportunity to examine a large quantity of output from a genetic programming-based IED system [56].

Aspects of Genshade are similar to some of the work presented here in that RenderMan is used to create images, and various shader writing techniques are often combined in the genetic representation (although as Ibrahim’s dissertation software has become a commercial package, most details of the techniques used remain unpublished). Genshade’s genetic representation uses a hierarchy of function nodes, like Sims’ image representation, rather than the fixed-length parameter vectors used in this dissertation. Genshade is specific to shader generation, rather than generic N-dimensional parametric design.

The Genshade examples shown on the SCAD student work page illustrate a strong bias towards marble-like, turbulence-based surface and displacement shaders, which was confirmed by discussions with a SCAD faculty member and one of the students [7][82]. An informal evaluation of the fifty-four images on the page reveals that over three-quarters of the shaders used have either raw turbulence or marble-like noise layers as their primary visual characteristic. Enabling the author of the design solution space to deal with strong signatures like this by facilitating a balance between the frequency of appearance of different visual traits is one of the major aims of this dissertation’s research.

There are several other pages of examples of the system’s output presented on the system’s web site which illustrate other common attributes of expression-hierarchy-based IED systems. One page displays a title of “Selected Shader Database”. While it contains many beautiful shaders, only two or three of the nearly two hundred different shaders show any patterns with some degree of regularity [57]. Regular patterns are one of the primary building blocks of shader authoring [100].

Another trait that is often missing from GP-based IED systems is correlation between features in composited layers. For example, most of the shaders shown have both local color features (e.g., dots and stripes) and displacement features (e.g., bumps and ridges) on their surface. There is rarely any correspondence between the color features and the displacement features; i.e., this system would appear to be unlikely to produce red bumps. Many IED systems calculate separate values for individual color channels (HSV or RGB) independently as well, so there is often no correlation between features in different color channels.

There is actually no explicit notion of “features” in GP-based systems. Repetition usually emerges from periodic primitive functions such as mod() or sin(). Mod can generate grids of tiles, but they generally remain regular grids except when globally distorted. The output of one function node is typically just pixel values which “bubble up” as if through a sequence of image processing filters. Because there is no concept of features at higher levels in the hierarchical genotype, “features” which emerge at one level are likely to fall apart as they pass through pixel-level manipulations in higher-level nodes. When GP-based systems composite layers of visual properties with functions like addition or by using the maximum value, it is rare that features in composited layers will have any correspondences^.
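For instance, a minimal Python sketch (names illustrative, not from any GP system) shows how mod() yields a regular tile grid whose “features” are nothing more than per-pixel values:

```python
def tile_value(x, y, tiles=4, border=0.1):
    """Return 1.0 inside a tile and 0.0 on the 'grout' lines, for x, y
    in [0, 1]. Every tile is identical: the grid stays regular unless
    the coordinates themselves are globally distorted, and no node at
    a higher level knows that individual tiles exist."""
    fx = (x * tiles) % 1.0   # position within the current tile column
    fy = (y * tiles) % 1.0   # position within the current tile row
    inside = border < fx < 1 - border and border < fy < 1 - border
    return 1.0 if inside else 0.0
```

Any node that consumes this output sees only a stream of 0.0/1.0 values, so downstream manipulations cannot treat a tile as a unit.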

Additionally, the shaders representing the system show little trait localization.

This means that a shader with features like blue spots, small bumps, or noise wrinkles is likely to have spots, bumps, or wrinkles with roughly the same attributes (e.g., size, color, distribution) across the entire surface. In the vast majority of the shaders presented, different areas of the surface are mostly interchangeable.

^An exception is an “echoing” property of self-similarity which creates sub-features on certain types of features.

There is one final quality of expression-hierarchy-based systems that the approach presented here will improve. It is illustrated by a set of pictures demonstrating the mating of shaders in Genshade [55]. Four different populations resulting from the mating of multiple parents are shown, each with at least two dozen offspring. It is surprisingly difficult to find any offspring exhibiting traits of both parents. In each population, from one quarter to one half of the offspring are identical to one of the parents, except for an occasional uniform color shift. Up to a quarter of the remaining offspring resemble neither parent. The majority of the other offspring look like a mutation of one of the parents, with no resemblance to the other. This is one of the primary deficiencies of GP-based IED systems.

Above all, evolutionary design in general requires that traits be passed from parents to their offspring. Without this, no evolution can occur. If children do not inherit visual traits from both of their parents, then mating degenerates into mutation with large step sizes, which is nearly equivalent to a random walk in design space.

1.2 Motivation

The ubiquity of the web, 3D computer games, and affordable CG hardware and software is increasing the demand for tools which allow individuals with little or no artistic experience to create images and textures for use in their own creations. While IED systems are often attractive to researchers for their seeming ease of implementation and the “something for nothing” allure of generative design, they rarely make it out of the research lab and into the hands of designers because of the challenges involved in making the systems practical and usable. While they often initially yield visually interesting new results, the depth of novelty is usually fairly shallow, and it quickly becomes apparent that the IED systems’ output contains fairly severe signature.

When the representation is very narrow, as is often the case in artist-created IED systems where a specific “look” is very much desired, convergence to very complex solutions is usually very fast. However, visual diversity is frequently relatively limited. This is often fine for a given artist’s needs. A good example of this might be IED image evolving systems which use fractals as a primary generative primitive. The fractal images which result are often complex and beautiful, but naturally such a system typically doesn’t generate any images which do not contain fractal complexity.

As representations are made more general and low-level, convergence usually suffers. An extreme example of this might result from a shift towards a pixel-based (or voxel-based) representation. Most results again have a high signature, now due to the high amount of random noise produced by the system. Finding representations which balance generality and noise appears to be a difficult task.

The key to the usability of IED for generic design tasks lies in this relationship between signature and convergence. In traditional non-interactive evolutionary design systems which produce results with more easily quantifiable qualities, this is usually viewed as a problem of balancing local minima avoidance and convergence rate. However, the (usually) aesthetic evaluation requirements of IED impose several unique limitations on the evolutionary techniques that can be employed.

Traditionally, a majority of IED systems which have sought to generate a wide range of possible forms and complexity have relied on genetic programming (GP) techniques for evolution, rather than on genetic algorithms (GAs). While the fixed representations of GAs have been shown to be of great use for searching very narrow domains (e.g., the branching structures of Dawkins or Sims^ [37][151]), GP has been thought to be a requirement for obtaining a broader range of possible form. While generally much more limited than GP-based systems, GAs have the advantage of being easier to control, analyze, extend, and tune than GP systems.

1.3 Thesis

The work presented here addresses the problems discussed above in image and shader domains by introducing a new class of building blocks called continuous pattern functions. Because the parametric design spaces constructed with these functions can be explored with GAs, they can be developed, analyzed, tuned, and searched more rapidly than systems which rely on genetic programming approaches. When formal visual design parameters are represented continuously and integrated into pattern functions, high-level attributes such as “symmetry”, “uniformity”, and “balance” can be more readily evolved and passed to offspring, resulting in greater visual diversity. Methods for synchronizing patterns in different layers are presented. This has the effect of increasing convergence speed while reducing signature. It is hoped that by providing new tools for constructing more efficient design spaces, the implementation and use of IED systems will become significantly simpler and more practical.

1.4 Document Overview

This document is structured as follows:

• Chapter one has provided an overview and background of the research problem

and the proposed approach to examining and addressing it.

^In addition to the GP-based image work discussed previously, Sims (like Dawkins) also evolved plant-like forms using a small set of parameters like branching frequency, angle, scaling factor, etc.

• Chapter two discusses the interactive evolutionary design paradigm in detail.

The more traditional evolutionary design techniques are first surveyed and a comparison to the IED problem’s practical constraints is then considered. Requirements for making solution spaces more efficient, in terms of both satisfactory convergence and signature reduction, are examined.

• Chapter three introduces continuous pattern functions. These will be used as

generic building blocks for constructing solution spaces for representing images

and shader attributes.

• Chapter four reviews the formal visual design principles that will be represented.

The integration of these concepts into pattern functions will be explained.

• Chapter five demonstrates how pattern function layers can be smoothly synchronized to provide correlation between feature properties in different layers.

• Chapter six presents images of populations and individuals which illustrate

different design domains implementing the above techniques.

• Chapter seven contains the conclusion and a discussion of potential areas for

future work.

1.4.1 Type-face Conventions

font used          purpose

italicized roman   titles and introduction of a new term

slanted roman      names of application programs and systems

courier            program code and program variable names

CHAPTER 2

INTERACTIVE EVOLUTIONARY DESIGN

Interactive evolutionary design interfaces at their best can enable designers to discover solutions that they might never have considered when using a direct manipulation interface. At their worst, they often either limit their user to a very small solution domain, or else force the user to wander endlessly through a vast high-dimensional space of poor solutions, with very little chance of ever discovering high-fitness solutions. In order to investigate the strengths and weaknesses of IED systems, the many implementation options that have been developed in the much more general field of evolutionary algorithms are first examined. The unique characteristics of the IED problem in the broader context of evolutionary algorithms are then considered.

The different challenges for obtaining satisfactory convergence in both EA and IED are discussed. This chapter concludes with an examination of sources of signature in IED.

2.1 Traditional Evolutionary Algorithms

Evolutionary algorithms (EAs) allow a wide range of problem spaces to be searched for high-quality solutions. Evolutionary algorithms excel at certain tasks for a number of reasons. They can simultaneously search a widely-sampled region of a cost surface and deal with very large numbers of parameters. They are extremely well suited for parallel computation. Evolutionary algorithms can optimize parameters with extremely complicated cost surfaces, jump out of local minima, and provide a list of optimal solutions, rather than just one [68].

The next section discusses optimization techniques in general. This is followed by an overview of evolutionary computation techniques. While there are many excellent books providing overviews of EAs (and GAs in particular), one of the more accessible is by Haupt [68], which several of the following sections reference.

2.1.1 Optimization

Evolutionary algorithms represent a specific approach to a class of problems in­ volving optimization. Any search of a problem space for a better solution than the current best can be viewed as an optimization problem. Haupt presents a concise list of categories of optimization problems [68]:

1. Single vs. Multidimensional: Search spaces can be simple or very complex.

2. Static vs. Dynamic: Search spaces can change over time.

3. Discrete vs. Continuous: There can be either a finite or an infinite number of

solutions.

4. Constrained vs. Unconstrained: Parameter values may or may not be bounded.

5. Determinate vs. Indeterminate: The search space may contain probabilistic

components.

There are numerous minimum-seeking algorithms that can be used to find solutions to optimization problems. They vary in computation and implementation complexity, as well as in efficiency for handling different types of solution spaces. The different algorithms often used can be categorized as follows:

1. Exhaustive Search: The “brute force” approach has sampling and computation

issues.

2. Analytic: This is an “exact” method, but it is frequently not computationally

feasible.

3. Nelder-Mead Downhill Simplex Method: The vertices of a simple shape in search

space are iteratively refined.

4. Line Minimization: Steps are iteratively taken in some direction in a problem

space according to different algorithms:

(a) Coordinate Search: Steps are taken along a parameter grid.

(b) Rotating Coordinate Axes: The parameter grid is rotated while “walking”.

(c) Steepest descent: Steps are taken along the gradient.

(d) Conjugate Gradient Direction: Steps alternate between the gradient direc­

tion and the direction perpendicular to the gradient.

(e) Quasi-Newton with Hessian: Second derivative gradient information is

used.

5. Simulated Annealing: Current values are alternately “heated” and then “cooled” by adding small random amounts and then letting the system resettle once again into a stable state. The “heating” amplitude is gradually reduced over time.

6. Evolutionary Algorithms: Solution surfaces are explored widely in parallel, with attention gradually narrowed to combinations of the “best” directions. Gaining the ability to quantify quality intelligently is one of the primary challenges.

When searching cost surfaces, methods using local gradient information (i.e., the exploitation-based approaches, such as many of the above) require a much higher sampling rate than the exploration-based methods (e.g., EAs) [68].
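To make item 5 above concrete, here is a minimal simulated-annealing sketch in Python; the cooling schedule, step sizes, and function names are arbitrary illustrative choices, not a definitive implementation.

```python
import math
import random

def anneal(cost, x0, steps=2000, t0=1.0):
    """Minimize `cost` starting from x0. A random perturbation
    ('heating') is scaled by the temperature t; worse moves are accepted
    with probability exp(-delta / t), and the heating amplitude shrinks
    as the system cools."""
    x, best = x0, x0
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9       # linear cooling schedule
        cand = x + random.gauss(0.0, 0.5) * t   # perturbation shrinks over time
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand                            # accept the move
        if cost(x) < cost(best):
            best = x                            # track the best value seen
    return best
```

The early high-temperature phase lets the search escape local minima, much as an EA's broad initial sampling does, while the late low-temperature phase exploits the best region found.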

2.1.2 Common Qualities

As was previously discussed, many design problems can be viewed as a multi-dimensional space with each point mapping to a specific solution. Evolutionary algorithms sample large subsets of this space, initially at random, then gradually refine their search until an adequate solution is found. There is a heuristic assumption that the best solutions will be found in regions of the search space containing high proportions of good solutions, and that these regions can be found by judicious and robust sampling of the space [25].

While other problem-space searching methods (e.g., many of the optimization techniques presented above) may also be appropriate for many domains, evolutionary algorithms are often used because of their ease of implementation, ability to be parallelized, and robustness. However, the exact kinds of fitness spaces for which simulated evolution is capable of producing good results are still not well understood [73]. Evolutionary algorithms strike a balance in exploration and exploitation between random search and hill-climbing approaches. While random search will always find an optimal solution, it is likely to take an unreasonable amount of time. By comparison, hill-climbing tends to become trapped in local minima very rapidly. In balancing between these two approaches, the primary challenge for evolutionary algorithms is to avoid premature convergence to local minima [25].

While there are several sub-categories of evolutionary algorithms, the two most commonly used are genetic algorithms (GAs) and evolutionary programming (EP), also known as genetic programming (GP). GAs divide their recombination and evaluation stages into two separate spaces. Genetic operators are applied in a recombination space, while fitness is evaluated in an evaluation space. An interpretation function maps between these two spaces. In GP, these two spaces are usually united. Genotypes are frequently programs and their fitness is evaluated by executing them and judging their performance [4]. Bentley provides an excellent generalization of the different EA techniques called GAEA (Generalized Architecture for Evolutionary Algorithms) [16].

2.1.3 Individuals

The representation of the data used to generate a possible solution is referred to as its genotype. Once produced, the collection of specific traits composing an individual is referred to as its phenotype. Different styles of evolutionary algorithms employ different genotype representations. One of the more interesting properties of evolutionary algorithms is that an understanding of the internal structure of evolving individuals is usually not necessary. It is often the case that the individuals ultimately chosen for their high fitness are totally inscrutable [3].

In most genetic algorithm systems, a genotype consists of an array of parameter values, often referred to as a chromosome. Each parameter is called a gene and each gene’s value is called an allele. Parameter values are often bound (via clamping) to hard limits or to a discrete set of legal values (traditionally they are binary). Some parameter values may be normalized so that they do not inappropriately influence the fitness function⁴. Alternatively, one can integrate different weights into the cost function to more finely control the relative importance of individual parameters. Continuous parameters are sometimes used instead of binary (discrete) parameters, since they may require less storage for large numbers than binary, and can provide more accuracy for the cost function or other computations [68].
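The terminology above can be illustrated with a small Python sketch; the class and method names are my own, not taken from any GA library.

```python
def clamp(value, lo, hi):
    """Bind an allele to hard limits."""
    return max(lo, min(hi, value))

class Chromosome:
    """A GA genotype: an array of real-valued genes, each clamped to a
    per-gene (lo, hi) bound pair."""
    def __init__(self, alleles, bounds):
        self.bounds = bounds
        self.genes = [clamp(a, lo, hi)
                      for a, (lo, hi) in zip(alleles, bounds)]

    def normalized(self):
        """Map each gene into [0, 1] so that no single parameter can
        dominate the cost function by sheer magnitude."""
        return [(g - lo) / (hi - lo)
                for g, (lo, hi) in zip(self.genes, self.bounds)]
```

A fitness (cost) function would then be evaluated on the phenotype generated from these genes, optionally weighting the normalized values.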

The degree to which the parameters are dependent on one another is referred to as epistasis. Low epistasis refers to highly independent parameters. Under these conditions, minimum-seeking algorithms (e.g., hill climbing) tend to work best. Very high epistasis with many interdependent parameters makes it extremely hard for most algorithms to find high-fitness solutions. In this case, random search tends to work well. Genetic algorithms are the best solution given medium epistasis [68]. The complete absence of epistasis makes it very difficult to make minor, high-level changes, such as scaling a form along a major axis, or evolving symmetric features [16].

In evolutionary algorithms employing a genetic programming (GP) approach, genotypes are represented with a directed acyclic graph, with internal nodes corresponding to functions and leaf nodes containing atomic values. Because these genotypes are not a fixed size (as most GAs’ genotypes are), they can evolve not only details but also structure [16][84]. When high-level functions are used to generate programs, smaller, simpler genotypes result. Lower-level functions, however, while yielding bigger graphs, provide greater flexibility [3]. The “syntax” of a GP language is chosen to minimize the number of syntactically invalid solutions [132].

⁴Also known as the cost function.

2.1.4 Populations

Changes in evolutionary populations over time are the result of four primary factors: mutation, gene flow, genetic drift, and natural selection. Mutation can occur spontaneously or as the result of external factors. Gene flow occurs when new organisms are introduced into the population. Genetic drift can occur over time due to chance. Finally, natural selection occurs when the most fit individuals are selected for mating [68].

One can often use domain-specific heuristic methods for choosing an initial population to improve initial fitness [65]. Care must be taken, however, that the heuristics do not bias the system immediately into local minima. One must also decide whether uniform or random sampling is more appropriate for the initial population, based on the problem domain [68]. Smaller population sizes yield improved initial performance, while larger population sizes tend to improve long-term performance [38]. Changing population size to suit the current needs is feasible.

2.1.5 Parent Selection

In order to mate individuals to produce the next generation, pairs of parents are selected from the current population. A crossover rate parameter determines how many of the individuals in the population are eligible for mating. A crossover rate that is too high will not allow enough good building blocks to accumulate in a single chromosome. A low crossover rate, on the other hand, will not explore enough of the cost surface [68].

There are numerous methods for selecting parents for mating. Often all individuals are evaluated for fitness and then ranked. A number of the best are then selected for the mating pool. Another method selects the mating pool by evaluating the fitness of pairs of individuals, allowing the better of each pair to enter the pool. This tournament selection approach avoids having to sort the population by fitness, which can be very beneficial for large populations. Once chosen for the mating pool, members can be selected to mate at random, or with their chance of selection proportionate to either rank or fitness.
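The tournament variant can be sketched in a few lines of Python (the pool size and pairing scheme here are illustrative choices):

```python
import random

def tournament_select(population, fitness, pool_size):
    """Fill a mating pool by repeatedly drawing random pairs and letting
    the fitter of each pair enter the pool; the population never needs
    to be fully sorted by fitness."""
    pool = []
    for _ in range(pool_size):
        a, b = random.sample(population, 2)
        pool.append(a if fitness(a) >= fitness(b) else b)
    return pool
```

Selection pressure can be tuned by drawing larger tournaments: the more contestants per draw, the more the pool favors the fittest individuals.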

Another technique, thresholding, involves first determining the fitness of all chromosomes, then permitting those with a fitness above a certain threshold to survive into the next generation. Earlier generations will have very few members survive while later generations will have many more. A major advantage of this technique is that populations again do not need to be sorted [68].

When using a steady-state GA, only a few individuals are changed each generation. Two parents are chosen proportionally based on fitness, then crossover and possibly mutation are performed to generate two offspring. These offspring are then placed into the population, replacing two of the poorer individuals (again selected based on their fitness) [159].

2.1.6 Crossover

By combining individuals’ genetic material, mating explores regions of the solution space bounded by the individuals. Mating is generally implemented by combining the genotypes of two parents using one of a large number of related techniques. This process is usually referred to as crossover. Part of one parent’s chromosome is first copied to the offspring, then the copying process “crosses over” to the corresponding position in the other parent’s chromosome to copy the remaining genes. In single-point crossover, this crossing between parents might only happen once, but other techniques may involve crossing two, three, or an arbitrary number of times.

An extreme example, uniform crossover, can be implemented by creating a bit mask the same length as a chromosome. Each bit in the mask has its value set randomly to one or zero. Each gene is then copied from the first parent if the bit is set, or from the other if it is not [159]. While this totally disjoint method is better for combinatorial problems [53] [159], problem spaces in which related traits are adjacent in the chromosome benefit from single point crossover mating^. The optimal number of crossover points in an EA system is likely to be dependent on a combination of the degree of epistasis and the distance in the chromosome between related genes.
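The two crossover variants can be sketched as follows (hypothetical Python, with genotypes represented as flat lists of genes):

```python
import random

def single_point_crossover(p1, p2):
    """Cross once at a random cut point.

    Genes that are adjacent in the chromosome tend to travel together,
    which is why this variant suits spaces where related traits are adjacent.
    """
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def uniform_crossover(p1, p2):
    """Per-gene random bit mask: each gene is drawn independently
    from one parent or the other, per the scheme described above [159]."""
    mask = [random.random() < 0.5 for _ in p1]
    child_a = [a if m else b for m, a, b in zip(mask, p1, p2)]
    child_b = [b if m else a for m, a, b in zip(mask, p1, p2)]
    return child_a, child_b
```

In both variants the two children are complementary: every gene position is inherited from exactly one parent in each child.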

Crossover works best in an environment where there is some correlation between an individual’s fitness and the expected ability of its components. Lacking such a relationship, the environment is called “deceptive”. For example, performing random crossover might yield a very low percentage of viable offspring, or a majority of parents might be incompatible [4].

Often EAs may disallow any offspring which do not follow one or more constraints. These are referred to as hard constraints. It is often better to penalize these constraint-violating “illegal” offspring in the fitness function, but allow their genetic material to stay in the gene pool. Doing so implies the existence of soft constraints [16]. To reach an optimal region of the problem space, a region containing illegal individuals might have to be traversed, and this might not be possible if illegal individuals are never permitted to be created [53].

^Although De Jong reported that, for his problem domains, the type of crossover used made very little difference [38].

Crossover can sometimes be improved so that it always generates new offspring: potential crossover points can be reduced to places where parents differ. By never producing offspring that are identical to their parents, no opportunities for exploring the solution space are wasted [25].

Finally, different methods can be used for blending parameters during crossover, instead of just copying. Some of these include blending by random percentage, blending each parameter by the same percentage, or blending via complex linear or heuristic crossover [68]. The benefit of averaging gene values is dependent on whether the system is in the exploring or refining stages of evolution. If exploring, averaging is a poor strategy because offspring will not have any of the traits of either parent, and diversity is eroded. When refining, averaging can be beneficial for blending two similar parents.
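A hedged sketch of gene averaging during crossover; the weight `alpha` is an illustrative parameter, not a value drawn from [68]:

```python
import random

def blend_crossover(p1, p2, alpha=None):
    """Average each gene pair with weight alpha (random per mating if None).

    As noted above, this suits the *refining* stage between similar parents;
    during exploration it erodes diversity, since offspring always sit
    strictly between their parents in parameter space.
    """
    if alpha is None:
        alpha = random.random()
    child_a = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
    child_b = [(1 - alpha) * a + alpha * b for a, b in zip(p1, p2)]
    return child_a, child_b
```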

2.1.7 Mutation

After offspring have been generated via crossover, it is often useful to apply a small amount of mutation to some percentage of the genes in the population. This can add diversity back into the population. The number and amount of mutations that a population can tolerate while maintaining its current quality is extremely dependent on the distribution of high-fitness solutions in the search space. A very “flat” space with only small, infrequently occurring highly fit regions will probably not benefit from mutation. Random changes to genes are unlikely to improve fitness in such a space. In the case of genotypes that are highly fit, their fitness is even likely to be greatly decreased. Spaces with frequent, large, high quality regions are much more likely to tolerate high quantities of mutation. When performing mutation, it is important that small changes in a genotype correspond to small changes in a phenotype [16]. Otherwise it becomes very difficult to fine-tune solutions.

In a traditional GA, mutation is implemented by making a few random changes to the values in the genotype. In genetic programming (GP), mutation occurs by changing a few of the function nodes, or by randomly trimming or growing subtrees.

Mutation rate and amount are often dynamically adjusted. Early in the evolutionary process, mutation might be set high to allow a broad initial global search, whereas later, when an acceptable goal is being reached, small local mutation may be all that is necessary.

Mutation rate is often set between one and twenty percent [68]. The mutation is not equally distributed among individuals: to mutate five percent of the population, rather than mutating five percent of the individuals, five percent of the total number of genes in the population are generally changed. While a high mutation rate can be good for ultimately finding a more global minimum, a low mutation rate has in some cases been better for raising average overall fitness [38]. A population can quickly die out if the mutation rate becomes too high. While the rate of evolution can increase with the mutation rate, if it becomes too high the system can become unstable, and population fitness will plummet [132]. While most mutations lower the fitness of an individual, sometimes they may raise it. The more important result of mutation is that it usually adds diversity. Mutation can be the primary means of escaping local minima.
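The per-gene convention described above (mutating a fraction of all genes rather than a fraction of all individuals) might be sketched like this, assuming real-valued genes and Gaussian perturbation (both assumptions for illustration):

```python
import random

def mutate_population(population, rate=0.05, scale=0.1):
    """Mutate roughly `rate` of all genes in the population.

    Each gene independently mutates with probability `rate`, so about
    rate * (total gene count) genes are perturbed, spread unevenly across
    individuals. A small Gaussian perturbation keeps genotype changes
    small, which (for a well-behaved representation) keeps the
    corresponding phenotype changes small.
    """
    mutated = []
    for individual in population:
        mutated.append([
            gene + random.gauss(0.0, scale) if random.random() < rate else gene
            for gene in individual
        ])
    return mutated
```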

Additional mutation operators applicable to hierarchical genotypes have also been implemented. Compression collapses a subtree into a new atomic module. Expansion expands a module back into its subtree. This allows the GP to co-evolve its representation language, along with its genotype. Parametric GAs can also use this technique by marking and locking protected positions and propagating the protected regions to offspring [3].

A technique called steering is sometimes used with mutation, in which the current “velocity” in parameter space is favored when mutating. This allows new individuals to be generated that are likely to lie in a beneficial, rather than random, direction [166]. Another technique called elitism refers to allowing the very best individuals to transfer into the next generation, untouched by mutation.
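Elitism can be sketched in a few lines; `breed` stands in for whatever crossover-plus-mutation pipeline the system uses (a hypothetical callable, not part of any cited system):

```python
def next_generation_with_elitism(population, fitness, breed, n_elite=1):
    """Carry the best n_elite individuals over unchanged, fill the rest with offspring.

    Because the elites bypass crossover and mutation entirely, the
    best-so-far fitness can never decrease from one generation to the next.
    """
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:n_elite]
    offspring = [breed(population) for _ in range(len(population) - n_elite)]
    return elites + offspring
```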

Domain knowledge can often be used when implementing recombination operators (e.g., crossover and mutation) to improve the chances of producing fit offspring [65]. This implies knowing what traits “good” or “bad” offspring are likely to have, and biasing the recombination operators appropriately. Too much bias usually leads to predictable solutions. Too little results in low average population fitness.

2.1.8 Fitness

The calculation of an individual’s fitness is determined by the qualities being optimized, which are frequently specific to the problem domain. Generally a single value is computed by evaluating selected traits of the individual’s phenotype. Occasionally, multiple metrics are calculated simultaneously. Often these are then combined into a single fitness value by weighted summation.

Simplifying fitness functions to ignore irrelevant parameters can provide great increases in performance. To deal with particularly expensive cost functions, one can attempt to store values as they are computed, and search before recomputing the values in the future. This is only useful if the search takes less time than the recomputation. Individuals that are passed down from generation to generation can have their values stored with their genotypes so that they don’t need to be recomputed [68]. Note that caching fitness values will not work when given a dynamic fitness function, as is often the case with subjective evaluation (e.g., when the evaluator grows tired of seeing the same individual and rates it lower than in previous generations).
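The genotype-keyed caching idea might look like this in Python (an illustrative sketch; it assumes hashable genotypes and a static fitness function, per the caveat just given about subjective evaluation):

```python
def make_cached_fitness(expensive_fitness):
    """Wrap a costly fitness function with a genotype-keyed cache.

    Only sound when the fitness function is static: under subjective
    (interactive) evaluation, scores drift between generations, so
    cached values would go stale.
    """
    cache = {}
    calls = {"count": 0}            # exposed so the saving can be measured

    def fitness(genotype):
        key = tuple(genotype)       # genotypes must be hashable to serve as keys
        if key not in cache:
            calls["count"] += 1
            cache[key] = expensive_fitness(genotype)
        return cache[key]

    return fitness, calls
```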

Fitness can sometimes be computed using a multi-level approach in which frequent inexpensive evaluations are performed, with only occasional expensive, full-accuracy evaluations calculated. This quickly eliminates most of the obviously poor individuals.

It has been suggested that the need for such approaches to limited computation resources should lead to studies in “the ecology of GAs”. The cost of reproduction for an individual (in terms of environmental impact, for example) could be tied to that individual’s chance of reproducing. This would encourage the evolution of individuals which require fewer computational resources [137].

Others have suggested approaches for hierarchical fitness calculation. Parallel GAs, for example, involve independent populations simultaneously evolving, with occasional migration between them. Eby describes an improvement in which a hierarchy of lower resolution parent searches pass their most fit solutions down to higher resolution child searches [41]. The coarse resolution is used to quickly find useful subcomponents, while the finer grain space is used to refine the combinations of subcomponents.

2.1.9 Convergence

Eventually evolutionary algorithms tend to converge on a stable population of individuals. This state can be detected by keeping track of population statistics such as the average fitness, standard deviation, and best cost found. Any or all of these indicators can serve as convergence tests [68].
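These statistics-based convergence tests might be sketched as follows (the thresholds are illustrative assumptions, not values from [68]):

```python
import statistics

def has_converged(fitness_history, stdev_floor=1e-3, stall_generations=5):
    """Detect convergence from population statistics.

    `fitness_history` is a list of per-generation fitness lists. The
    population is flagged as converged if the latest generation's fitness
    standard deviation is near zero (diversity exhausted), or if the best
    fitness has not improved over the last `stall_generations` generations.
    """
    latest = fitness_history[-1]
    if statistics.pstdev(latest) < stdev_floor:
        return True
    bests = [max(generation) for generation in fitness_history]
    if len(bests) > stall_generations:
        recent, earlier = bests[-stall_generations:], bests[:-stall_generations]
        if max(recent) <= max(earlier):   # best cost found has stalled
            return True
    return False
```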

Premature loss of diversity can rapidly lead to the search converging on a sub-optimal solution (i.e., a local minimum) [25]. Smaller populations often converge quickly into local minima, while larger populations take much longer to converge, but manage to search a much wider region of the problem space (usually finding a better solution). Population size can be changed over time to provide some control over convergence [46].

As genetic drift occurs, useful alleles can disappear from the population. This can be combatted by increasing the population size. Raising the mutation rate to restore diversity is not a good solution because the mutation operations will tend to ruin high-performance alleles [25].

One method of maintaining diversity is to check when creating a new individual that it is not identical to any of the other individuals already in the population [68]. The feasibility of doing this is obviously largely dependent on the population size.

Another solution for avoiding loss of diversity is to predict rapid convergence and attempt to avoid it before it occurs. Detecting when one individual is having too many offspring, and then limiting the number of children is one approach. This can be determined by measuring the percentage of individuals having offspring. Computing the number of offspring based on rank, not on relative fitness, slows the rate of evolution [25].

Evolutionary algorithms converge by slowly building and combining useful subcomponents or building blocks. Facilitating the identification and exchange of building blocks with high fitness is the primary challenge when designing an evolutionary algorithm. To encourage the growth and dominance of quality building blocks, their growth rate cannot be too fast (leading to convergence in a local minimum) or too slow (convergence won’t occur in a reasonable amount of time). Building block competition is statistical, so larger population sizes yield better selections. Difficult problems in innovation are challenging specifically because building blocks are problematic to create within the given domain [59].

In general, there always exists a race between innovation and selection. At any given time in an EA, copies of the best individual are slowly taking over the population. One of two things can happen: either a better individual is eventually created (which then begins to slowly dominate the population) or the current best individual completely takes over, filling the population with copies of itself. In the latter case, no more innovation is then possible since all diversity has been eliminated from the gene pool. If this dominating individual does not represent a global optimum, then premature convergence to a local optimum has resulted. The ideal solution is what has been referred to as “steady-state innovation”. In this case, a better individual is always created before the current “best” takes over the population and prematurely eliminates diversity [59]. Ray suggests that in some problem spaces, it may be desirable and feasible to actually facilitate non-convergence, by manipulating the environment so that there are always higher peaks to climb in the solution space [133].

2.2 IED Techniques and Limitations

As was discussed earlier, interactive evolutionary design systems allow non-expert designers to “discover” interesting design solutions through an exploration-based, rather than construction-based, interface.

A typical IED process begins with an initial population of arbitrary size. In addition to controlling the size of the current population, the interface components allow for the control of standard evolution parameters such as mutation rate, population size, and crossover frequency. The user can interactively select any number of the “best” individuals in the current population for further breeding. The selected individuals then form the mating pool for the next generation. Each offspring is created by combining the genes of a randomly chosen pair of parents from the mating pool. When offspring are produced, varying degrees of mutation (random adjustment of selected gene values) can be performed. The selection and generation process can be continued until the population ultimately converges or a satisfactory individual is found.
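One generation of this interactive loop might be sketched as follows (hypothetical Python; `crossover` and `mutate` are caller-supplied operators, and no fitness function appears because the user's selections are the fitness judgment):

```python
import random

def ied_generation(user_selected, offspring_count, crossover, mutate):
    """One interactive generation: the user's picks survive and breed the rest.

    The user-selected individuals form the mating pool; each offspring
    combines a randomly chosen pair of parents from that pool, then
    receives some mutation before being displayed for the next round.
    """
    offspring = []
    for _ in range(offspring_count):
        mom, dad = random.choice(user_selected), random.choice(user_selected)
        child, _ = crossover(mom, dad)
        offspring.append(mutate(child))
    return user_selected + offspring  # survivors plus new candidates to display
```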

In many domains, particularly those involving aesthetic decisions, it is extremely difficult (and maybe even impossible) to produce a fitness function that can predict a user’s subjective preferences. While some efforts have been made to quantify aesthetic preferences in very limited domains (e.g., the vase profile work of Birkhoff [18] [158] [157]), using human judgment to evaluate fitness is a useful technique for aesthetic problem spaces.

Claims are sometimes made when people develop IED systems that “any image” could conceivably be evolved with the system. Aside from the question of how long it might take to find a specific image interactively, Whitelaw raises the interesting question of whether the Mona Lisa is actually in a given IED image space. While it would certainly be contained within an extremely low level representation (e.g., a pixel based representation^), most systems are high level enough that there is perhaps a reason to question this possibility. It would regardless be a difficult claim to prove one way or the other. Epistasis in the representation likely plays an important role in determining flexibility: it is likely in a sufficiently high-level system that necessary values can’t be obtained while others are maintained.

Interactive evolutionary design and the infrastructure required to support it raise a unique set of practical issues distinguishing it from automated evolutionary algorithms. The differences result primarily from the relatively low number of individuals that a human can evaluate in a reasonable amount of time.

While natural evolution relies on large populations and numbers of generations, IED systems do not usually have this luxury. They must make up for it by employing techniques which maximize population fitness using the smallest possible populations for the fewest number of generations. The following section will primarily examine these differences between interactive and non-interactive evolutionary design.

2.2.1 Solution Spaces

One of the primary questions for the designer of an aesthetic solution space representation is how fit vs. how general to make the individuals in the space. Spaces can be constructed so that most individuals have a minimum fitness, but this usually results in a lack of flexibility and thus an increase in signature. A representation which will allow individuals to have extremely low fitness will likely take a very long time to converge to something interesting, but will also allow for a broader range of

^Simon’s “Every Icon” artwork explores this concept [77].

possibilities. Problem spaces that are hand crafted to contain a large distribution of “highly fit” regions yield few surprises. Methods for striking a reasonable balance must be found.

IED systems rely on minimum and maximum values for determining acceptable value ranges. The setting of these boundaries determines many of the visual attributes represented in the solution space. For some attributes, it may make sense to create genes which allow these boundaries to be modified. A narrower range allows the design space author more control over what is created, while a wider range potentially permits more surprises. By using genes that allow the boundaries to vary, both surprises and constrained solutions become likely.

The distribution of “typical” values within these ranges is also of importance. “Interesting” occurs at the boundaries of acceptability, not at the center. Unique design by definition requires a few extreme attribute values. On the other hand, it is not desirable to force all properties to extremes. This can be demonstrated by comparing a population of designs with totally average values, a population of individuals with some extreme values, and a population whose individuals all have extreme values. The populations with all average and all extreme values seem uniformly similar. The population in which a few genes have extreme values for each individual emphasizes the differences between members. The user will have no reason to have a preference between members of a population with no differences, or for a population with totally arbitrary differences.

2.2.2 Generating Offspring

As was previously mentioned, there are many different methods for selecting parents for mating. The best choice of crossover technique similarly varies based on representation and problem space attributes. As with parent selection, determining the appropriate crossover techniques based on the current design stage can prove beneficial. There may also be an inherent domain dependence, depending on the relative independence of neighboring genes.

Crossover

When individuals are mated, ideally an IED system will cause visual traits to be inherited from both parents, rather than from only one or none. In GP-based systems this is often a problem. When hierarchical genotypes are mated, a subtree of one individual’s genotype is grafted into an arbitrary function node of the other individual. As the subtree becomes an argument for a completely different function, the visual interpretation of the “genes” is likely to be completely different. This may be less of a problem if predominantly higher-level “filter” GP node primitives are used, because their effect is likely to still be apparent in offspring. Lower level nodes are more likely to be recontextualized in offspring into roles that are totally unrelated to their prior usage in the parent genotype.

Crossover can be less problematic in a GA-based system, where a direct mapping between genes and specific visual properties is more likely. Something more akin to “he has his father’s hair but his mother’s eyes” can result when gene values drawn from one parent or the other are applied to the identical interpretations in the generation of the offspring.

Mutation

Mutation is most successfully employed in “sloppy” problem spaces containing a high percentage of non-brittle, high fitness solutions. When a relatively large percentage of the solution space contains adequate solutions, and many solutions in the same neighborhood have a high degree of fitness, mutation can often be used as the primary evolution operator.

As subsequent generations are produced, the user often reduces mutation (as well as crossover rates) to slow movement through design space from generation to generation. The search is gradually narrowed into a specific region upon which the user’s interest is focused.

When the user finds a reasonably satisfactory model, the model’s genotype can sometimes be manually refined until the user is pleased with the results. The possibility of doing this is generally dependent on the degree of epistasis in the problem domain, as well as the intuitiveness of the different values in the genotype.

Typically the user is either exploring or refining. Exploration can tolerate a great deal of discontinuity, since by definition it involves jumping around in the space. Refining one’s current solutions generally requires small controlled steps through space in which the solutions remain recognizable but are modified in a hopefully beneficial direction.

In order for the user to be able to specify small mutation amounts to refine the best-so-far individuals, it is a necessity that the solution space be continuous. If the space is defined so that phenotypes change abruptly for very small changes in gene values, then it will not be possible for the user to investigate variations on a given individual for refinement. Even for parameters which are implicitly discrete, such as the existence of a feature, or a selection between choices, it is advantageous to provide a means of continuously shifting from one state to the next, via blending or scaling.

In an interactive system, some effort must also be made to coordinate the degree of smoothness in different dimensions. Since it is usually not practical to have the user specify separate mutation rates for each dimension in design space, a single parameter is usually set by the user to determine the speed of travel in the parametric domain. If the rate of visual change along some axes is significantly greater than along others, it becomes much more difficult for the user to interactively control the degree of visual mutation in the offspring.
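Coordinating per-dimension smoothness under a single user-facing mutation parameter might be sketched like this, assuming a per-gene `sensitivity` estimate (how much visual change a unit change in that gene produces) is available — an assumption made purely for illustration:

```python
import random

def scaled_mutate(genotype, user_rate, sensitivity):
    """Scale one user-supplied mutation amount per dimension.

    Dividing the user's single rate by each axis's visual sensitivity
    equalizes the *visual* step size across dimensions, so one mutation
    slider behaves consistently even when axes differ in responsiveness.
    """
    return [
        gene + random.gauss(0.0, user_rate / sensitivity[i])
        for i, gene in enumerate(genotype)
    ]
```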

2.2.3 Display

In typical IED systems, only a relatively small population can be successfully displayed at one time. Typically at most several dozen individuals are presented, instead of the hundreds or thousands sometimes used in traditional GAs. The primary reasons for this involve the size and resolution of available display devices, the time it takes to generate individuals, and the user’s ability to evaluate and compare a large number of entities at once. While it is always possible to allow a user to evaluate an arbitrary number of individuals sequentially, this number will always be limited by a user’s patience and memory.

Multidimensional individuals which can not be evaluated at a glance further complicate matters. While individuals with two-dimensional forms (e.g., images) can be displayed in a grid and evaluated at a glance, objects with three or more relevant dimensions are likely to require animation or interaction. It is also probable that they will need to be evaluated in sequence rather than in a side-by-side comparison.

Objects that have important structure at many different resolutions or scales are even more challenging to judge. An individual that is given a high fitness score when viewed at a low resolution might be found to have highly undesirable attributes when considered at higher levels of detail. Likewise, individuals that are dismissed for low apparent fitness when viewed at a low resolution could turn out to have highly desirable qualities under closer inspection. The practical problem is that populations can not be generated or evaluated conveniently with all individuals at high resolutions.

At most only a few dozen generations will also typically be examined (again, instead of the hundreds or thousands potentially processed by non-interactive GAs). This is again due to practical limits on user patience, endurance, and attention.

Appropriate population size should be a variable trait depending on the user’s current activity. If the user is truly exploring the parameter space in search of something interesting, a larger, low resolution population works well. As the user then begins refining entities by exploring the parametric neighborhood of a few high fitness selections, a smaller population is probably necessary in order to increase the resolution of the individuals.

2.2.4 Interactive Evaluation

In an interactive evolutionary design system, after an initial population of models is generated, the resulting entities must be presented to the user for evaluation. Most IED systems present the entire population in a 2D grid, though they can also be displayed sequentially if their structure requires it (e.g., geometries containing relevant interior structure, animations, or individuals requiring high display resolutions).

Users typically specify fitness by selecting those members of the population determined to be the most pleasing (or otherwise “interesting”). Interface components are often provided to control the speed and type of evolution. Factors such as the mutation amount and frequency, crossover frequency, population size, and so forth can usually be manually adjusted.

The user judging the individuals’ fitness is typically permitted to select one or more individuals as the “best”. This binary simplification (i.e., an individual is either good or bad) is strikingly different from the usually precise fitness scores of traditional evolutionary design systems. While allowing more grades of quality to be applied by the user can provide the system with better ranking information, this must be balanced against the cost of evaluation time. Users will certainly not tolerate being forced to rank thirty individuals on a ten point scale before each generation can be produced.

There are many interface decisions to be made with regard to interactive fitness evaluation. Effective evolution of complex objects requires the generation of many populations. For this to occur in an IED system, the presentation of the individuals must be quick and allow for very fast evaluation by the human user. If individuals can be displayed simultaneously instead of sequentially, then they can be more easily compared to one another. Only at most a few dozen generations will typically be examined, instead of the thousands that are often processed by traditional GAs. This is simply due to user patience and attention.

The interface interactions required to display individuals with relevant temporal or spatial variability should be minimized. If low detail representations are available which can be displayed quickly, allowing the user to rapidly eliminate most of the population, all the better. Variable resolution and viewing angle should be facilitated where necessary.

Being able to display individuals at multiple levels of detail is critical for the use of larger populations. Displaying bigger populations usually necessitates a low-resolution display of individuals. While this initial look can allow a user to quickly weed out the worst population members, determining which members have the highest fitness often requires viewing a higher-resolution representation of select individuals. It can also be easier to display larger populations if the obviously unfit can be procedurally identified and eliminated before being displayed. This will make the best use of the limited screen space.

2.2.5 Convergence

While convergence in a traditional evolutionary algorithm with an automated fitness function usually involves the detection of a stable population, IED convergence occurs when the user has found one or more individuals that are “good enough”.

It is a curious property of subjective evaluation that the context greatly affects the perceived fitness. An individual that is quite interesting in one population is usually significantly less interesting when surrounded by individuals very similar to itself (e.g., its offspring). Furthermore, individuals that are ranked highly when first viewed may be rated more poorly when later viewed, as expectations change and they lose their novelty value [68]. This is a significant difference from traditional GAs, in which a fitness function can usually be relied upon to generate a consistent objective scoring of an individual from generation to generation.

Once convergence of properties has begun, the user often is presented with a population containing many similar offspring, which have properties for which the user has been selecting. There will usually be a few individuals that are radically different from the rest. There is a danger that if a user always selects the most unique individual(s) in a given population with no regard for previous selections, they will have difficulty converging. This approach will, in effect, keep them exploring the design space in large jumps, rather than switching into a “refine” mode and more carefully exploring a limited sub-region. Knowledge of how the mutation and mating algorithms work can provide additional control for the user, informing them what to do in different situations in order to improve fitness [62].

2.3 Signature

The body of work evolved using a given IED system often shows a strong signature [142]. Signature, in this context, refers to the lack of visual diversity and of generality in the designs presented to the user. The system’s signature is also usually apparent in the individuals evolved with the system. Most work produced using a given IED implementation shares extremely strong visual characteristics, identifying the work as having been produced by that system, regardless of user.

Since it is often the case that many systems have been produced for use by a single user to create art, this inherent “style” can be a good thing. But when the system is intended for use by multiple users, and a desire is expressed to allow the users to evolve designs according to their preferences, then biases towards a specific signature and away from the user’s ideal choices are to be avoided if possible.

There are several approaches to reducing signature. The ideal solution is to create a design space that is sufficiently general to contain all desired possible solutions, but no more general than that. This is often accomplished from specific to general, in a method reminiscent of some AI learning algorithms. A representation is chosen which contains just one desired solution. This is obviously too narrow a representation. So another desired solution is identified and described, and the space is generalized to contain both solutions (as well as those that lie in between). The process can be continued until the model is “sufficiently” general. As long as each generalization step is performed in a continuous rather than discrete fashion (a prerequisite for successful evolution), each will add a huge number of intermediate design solutions.

Once this “minimally generalized” space is created, because of the vastness of the intermediate spaces, it is likely that it will still be much too general in practice for an IED interface to converge in a satisfactory period of time. Further methods of reducing the degree of signature, allowing for a wider search of the potential solution space, will be discussed below.

2.3.1 Sources of Signature

Ultimately, signature results from choices made when determining design space representations. Graphics primitives, functions, and techniques are used which are not sufficiently general and therefore are only able to access a small region of a problem domain’s potential solution space. Alternatively, too general a representation is occasionally used, in which case too large a region of solution space must be searched, making it impractical for an interactive system to find fit individuals.

One class of sources can be categorized as signature-from-specificity. That is, a set of specific traits are noticeably present in the vast majority of individuals presented and evolved. Another category of sources can be called signature-from-generality. In these cases, the representation is too low-level for organized structures to emerge in a reasonable amount of time. The visual similarity between most individuals is due to the dominant evidence of the overly general low-level representation.

Specific sources of signature in IED systems include the following:

1. An order-from-chaos, bottom-up approach is frequently employed, instead of creating chaos from order in a controllable fashion.

2. Specific strong visual traits dominate populations (e.g., the ubiquitous turbulence and absent patterns in the Genshade system).

3. Traits are composited in layers in an uncorrelated manner.

4. Features rarely if ever have unique and interesting sub-features.

5. Common high-level principles of design (e.g., feature symmetry) are absent.

6. Visual traits are usually present globally in the design rather than localized to a specific region.

7. All individuals use all available attributes to varying degrees (i.e., using “every crayon in the box”).

The next section gives a brief overview of the approaches to addressing these sources. The following three chapters will give details of the methods implemented.

2.3.2 Signature Reduction

Minimizing signature is a challenging task in IED systems. The signature is often the result of the non-generality of the chosen representation for individuals. For example, an image generation system which represents images by having a separate color gene for each pixel is in theory capable of representing any image. In practice, however, the vast majority of images that can be produced interactively in a reasonable amount of time will appear to consist of random static or "noise". Likewise, if images were represented by random placement of collections of fixed-size circles, the production of any images containing a square (or even straight edges) would be very unlikely. Most images would be strongly recognizable as having been developed by that particular system, regardless of who evolved the work.

Most IED system implementations have high visual complexity when a small number of evolved individuals are examined. But after viewing a couple dozen evolved individuals it is discovered that they lack the "relevant visual diversity" that would be necessary for the system to serve as a general authoring tool for a given domain.

Signature is often acceptable in a generative system developed for use by a single artist or designer. It is often expected that artists have a strong, consistent "look" to their work. But such a quality is undesirable in any broad design tool intended for multiple users. When the same design space is used by others to generate content, the results, having the same signature, are judged by others to look more similar to the work of the original developer than to anything else.

If the system generates colored shapes but is incapable of generating curvy blue things (e.g., the representation involves randomly moving and connecting six vertices to form a polygon of some shade of red/orange/yellow), then that would be a form of signature. Alternatively, if an artist who favors curvy blue things were to build an IE system which only generates curvy things, for example, by translating a few spline control points and choosing a shade of blue, this system would also have a high signature. This can be demonstrated by observing that when someone else uses the system, despite their aesthetic preferences, they only ever generate curvy blue things. All the output would likely be identifiable as coming from that system because of the system's signature.

Corresponding to the list of sources in the previous section, the following techniques can be used to reduce signature when creating design spaces:

1. Entropy¹ should be increased gradually and in a controlled fashion to allow the design space author control over the balance of chaos and order.

2. The probability of the appearance of specific strong visual traits should be controllable by the design space author.

3. The traits found in separate composited layers should have a controllable chance of being correlated.

4. There should be the possibility of features having unique and interesting sub-features.

5. The usage of common high-level principles of design (e.g., feature symmetry) should be possible.

6. Visual traits should sometimes be localized to finite regions.

7. The "palettes" of traits used should vary from individual to individual within a population.

Details of implementing these approaches to signature reduction will be the subject of the next three chapters.

¹Entropy can be viewed as the steady disorganization of a system.

CHAPTER 3

CONTINUOUS PATTERN FUNCTIONS

Solution space design is a critical factor in determining the degree of signature of the individuals likely to be discovered with an interactive evolutionary design system.

If the space is too high-level, with too few parameters, the potential diversity of the possible designs will be highly constrained. Examples of this might include systems which allow predesigned components to be mixed and matched in "unique" combinations. For example, a house design space might have parameters allowing different rooms to be variably positioned relative to one another.

A representation that is very low-level, on the other hand, may theoretically contain a much more diverse set of individuals. But if highly fit individuals are too rare in the solution space, then they are unlikely to be found with an IED system, given the practical constraints of low population sizes and few generations. Again, in a hypothetical house design domain, consider a design representation consisting of a 3D grid of bricks with a set of boolean parameters specifying whether each individual brick is present or not in the design. While a rough approximation of all possible house designs might exist in the space (given a large enough grid), it is extremely unlikely that an IED user would ever find anything more than indistinguishable "clouds" of random bricks in any reasonable amount of time.

This chapter describes a representation method that provides a more general building block than most domain-specific parametric-component-based methods, while still attempting to be more high-level than uniform discrete atomic-collection style approaches. The techniques presented here allow the designer of the solution space to create a broad variety of structured forms, control the degree and probability of emergent visual structure, and give the user greater control over the evolutionary search process.

The design solution that ultimately results is constructed from layers of continuous patterns made of features. In this chapter, features are presented first, followed by patterns. The chapter then concludes with a brief introduction to layers. A continuous pattern function F takes a vector of normalized real arguments G and a sample point p, and returns a vector of real values A:

F : G × p → A

G = {⟨g0, g1, ..., gm−1⟩ | 0 ≤ gi ≤ 1}

A = {⟨a0, a1, ..., an−1⟩ | ai ∈ ℝ}

In practice, a vector from G contains dozens or even hundreds of elements. The input vectors from G contain normalized genes. The resulting vectors from A contain values which determine the visual attributes at the sample point p of a specific design.
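As a concrete illustration, the mapping F : G × p → A can be sketched as a small function. Everything here (the two genes, the sinusoidal height attribute, the function name) is hypothetical and chosen only to show the interface of a continuous pattern function, not an implementation from this work:

```python
import math
from typing import List

def pattern_function(genes: List[float], p: float) -> List[float]:
    """A minimal continuous pattern function F : G x p -> A.

    `genes` is a vector of normalized values in [0, 1]; `p` is a 1D
    sample point.  The returned attribute vector [height, brightness]
    is purely illustrative.
    """
    assert all(0.0 <= g <= 1.0 for g in genes)
    g_freq, g_amp = genes[0], genes[1]
    # Height varies continuously with both the genes and the sample point,
    # so small steps in G or p yield small changes in the attributes.
    height = g_amp * (0.5 + 0.5 * math.sin(2.0 * math.pi * g_freq * 10.0 * p))
    brightness = 1.0 - height
    return [height, brightness]
```

The essential property for evolutionary search is visible in the signature: nearby gene vectors and nearby sample points produce nearby attribute vectors.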

The concepts introduced here are inspired by the computer graphics area of procedural texture authoring. Procedural textures are often referred to as shaders. Both surface color and displacement are frequently calculated procedurally when rendering 3D objects. For example, a simple road shader could be created by combining a base grey layer with a layer containing a pattern of edge and center stripes (such a shader exists [175]). Irregular additions might include variation in color, localized oil discolorations, skid marks, pot holes, and cracks. Each individual crack, hole, or stripe could be considered a feature, and each could be made completely unique.

Storing pixel and/or geometry values for all features for an arbitrarily long stretch of road would be extremely expensive. When possible, the use of an implicit procedural representation is greatly preferred. Such representations (examples of which are presented through the remainder of this chapter) can be stored, evaluated, and displayed at arbitrary resolutions with no additional storage requirements. By contrast, explicit representations require the storage of multiple discrete resolutions. They also frequently exhibit aliasing artifacts when sampled at resolutions other than those precomputed.

Procedural textures are often constructed using functional composition of building blocks such as abs, sin, floor, clamp, step, and pulse [121]. A wide variety of complex features can be created from a relatively small function set. Examples of a few of these building-block functions are shown in equations 3.1-3.3 and in figure 3.1:

clamp(a, b, x) = { a if x < a;  b if x > b;  x otherwise }   (3.1)

step(a, x) = { 0 if x < a;  1 otherwise }   (3.2)

pulse(a, b, x) = step(a, x) − step(b, x)   (3.3)
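In a shading language these primitives are built in; a direct transcription of equations 3.1-3.3, sketched here in Python purely for illustration, might read:

```python
def clamp(a, b, x):
    # Eq. 3.1: constrain x to the interval [a, b].
    return a if x < a else (b if x > b else x)

def step(a, x):
    # Eq. 3.2: 0 below the threshold a, 1 at or above it.
    return 0.0 if x < a else 1.0

def pulse(a, b, x):
    # Eq. 3.3: 1 on [a, b), 0 elsewhere.
    return step(a, x) - step(b, x)
```

For example, pulse(0.25, 0.75, x) is 1.0 only while x lies in the central half of the unit interval; composing such primitives yields stripes, square waves, and similar building blocks.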

3.1 Features

This section explains different approaches toward representing features for use within continuous pattern functions. A pattern function F typically has a vector of default attribute values Ad associated with it. For example, default surface attributes

(a) clamp(a, b, x)   (b) step(a, x)   (c) pulse(a, b, x)

Figure 3.1: Example primitive functions

Figure 3.2: Simple feature

might be the color white, zero displacement height, or zero specularity:

F(G, p) = Ad + f(G, p)   (3.4)

A sample point p and an array of parameter values G (i.e., genes) are passed to f to determine whether there is a feature at p and what the feature's attributes are. A feature corresponds to a local continuous region of the function's domain in which the values change from Ad in some parametrically controlled way (e.g., figure 3.2). Methods for building the function f that determines the presence and attributes of a feature at p will be discussed at length in section 3.2.

The attributes of each feature are determined by a subset of G called the feature genes. One of the primary challenges in feature design is to simultaneously maximize flexibility and fitness. A feature should be capable of representing a wide range of shapes, for example, but it should also not degenerate into unstructured noise through the vast majority of the parametric space.

Several examples of approaches for representing features are presented in this chapter. None is presented as the only way to represent shape. Different domains have different visual and spatial requirements. It is hoped that the methods shown here illustrate how the design of solution domains with different visual requirements might be approached.

3.1.1 One-dimensional Features

A one-dimensional feature and design representation is used first to illustrate a number of important IED and visual design concepts in a clear and simple manner.

Two-dimensional and three-dimensional features will then be examined.

Bottom Slope   Top Slope   Plateau?   Shape
1.0            1.0         Yes        (profile image)
0.0            0.0         Yes        (profile image)
0.0            1.0         Yes        (profile image)
-1.0           -1.0        Yes        (profile image)
0.0            0.0         No         (profile image)

Table 3.1: Worley's bevel parameters [177]

The method shown here to produce a generic 1D feature can be viewed as an extension of Worley's generalized bump map edge bevel [177]. Worley presents a parametric contour consisting of a mirrored Hermite spline with its shape determined by four parameters: ridge_width, bottom_slope, top_slope, and plateau_width. The ridge_width parameter controls the width of the base of the bevel, while plateau_width controls the width of the flat region on the top. The remaining two parameters control the continuity at the bevel base and the peak, as well as the shape of the interpolation (table 3.1).

Worley mentions the importance of keeping his parameters simple so that his users can intuitively and easily manipulate the shape without too much effort. An IED algorithm, however, is perfectly suited for handling a larger number of parameters, while hiding the underlying complexity from the user. A parametric feature profile

Figure 3.3: Four vectors with controllable direction and magnitude shape the side Bezier curves. Additional parameters determine plateau and base width, and shift the peak/plateau left or right.

controlled by an IED system can be much more complex than one requiring manual setting of parameter values.

To construct 1D features for IED, the possibility of asymmetry is first added. Separate parameter sets are created for the left and right contour edges. The side curves are then created from Bezier curves, instead of Hermites, for greater control of the shaping (figure 3.3). A top bias parameter is implemented to offset the plateau or peak to the left or right within the feature. Additional parameters control the distance and direction of the intermediate control points relative to the end points, allowing for even more variability in shape.

Each of the four vectors (from curve end point to neighboring intermediate point) is controlled by two normalized parameters. The first parameter d controls direction, while the second, m, controls magnitude. The set of eight parameters can be used to shape each of the feature's side Bezier curves in a normalized space with the left curve's endpoints fixed at (0,0) and (1,1) and the right curve's endpoints at (0,1)

and (1,0). The four vectors pointing from the curve endpoints to the intermediate control points (in the normalized endpoint space) can be found as follows:

v1x = m1(1 − d1),   v1y = m1 d1   (3.5)

v2x = 1 − m2 d2,   v2y = 1 − m2(1 − d2)   (3.6)

v3x = m3 d3,   v3y = 1 − m3(1 − d3)   (3.7)

v4x = 1 − m4(1 − d4),   v4y = m4 d4   (3.8)

Above, v1 and v2 specify the locations of the heads of the two vectors controlling the shape of the left Bezier, while v3 and v4 yield the heads of the vectors for the feature's right Bezier curve. The diagonal constraint imposed by the above ensures that the feature remains a valid function. After the curves' shapes have been determined in the normalized space, the curves can be scaled and translated according to the values of the feature height, feature width, plateau width, and plateau shift genes.

Examples of possible feature shapes are shown in figure 3.4.
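The control-vector construction above can be sketched in a few lines. The function name and tuple layout are illustrative, and the formulas follow equations 3.5-3.8 as reconstructed from the printed text:

```python
def side_curve_vectors(d, m):
    """Heads of the four Bezier control vectors (cf. eqs. 3.5-3.8).

    d and m each hold four normalized direction and magnitude genes.
    The left curve's endpoints are fixed at (0,0) and (1,1); the right
    curve's endpoints are at (0,1) and (1,0).
    """
    d1, d2, d3, d4 = d
    m1, m2, m3, m4 = m
    v1 = (m1 * (1.0 - d1), m1 * d1)              # anchored at (0, 0)
    v2 = (1.0 - m2 * d2, 1.0 - m2 * (1.0 - d2))  # anchored at (1, 1)
    v3 = (m3 * d3, 1.0 - m3 * (1.0 - d3))        # anchored at (0, 1)
    v4 = (1.0 - m4 * (1.0 - d4), m4 * d4)        # anchored at (1, 0)
    return v1, v2, v3, v4
```

With all eight genes at 0.5 the control points sit on the endpoint diagonals, giving a symmetric profile; pushing individual genes toward 0 or 1 skews the contour.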

3.1.2 Two-dimensional Features

Common shader practices illustrate how to combine one-dimensional functions to produce two-dimensional feature shapes. The implicit method employed in generating textures relies on determining whether a given point in the domain being sampled is in the interior or exterior of a feature. A common method of producing features for shaders is to produce an appropriate in-out test for each desired feature shape. Some common 2D feature shapes include discs, stars, rectangles, and lines [100] [121].

For the evolutionary design needs presented here, it is important that we are able to smoothly interpolate from one shape to another. There are many possible representations for 2D shapes that allow interpolation. The method introduced in the

Figure 3.4: 1D feature examples

previous section could be used in this higher dimension by creating a circle from spline curves and parameterizing the movement of the curves' control points in any number of ways [94]. An additional method is given here which provides a continuous mapping between a shape parameter and different feature shapes using implicit equations. This representation has the advantage of allowing for smooth interpolation by interpolating the equations [20].

The remainder of this section describes how a few parameters can be used to control the shape of a 2D feature's boundary contour. The implementation discussed here illustrates how to interpolate smoothly between a number of sample shapes. These are again not intended to provide a definitive set of all possible shapes, but rather to illustrate the creation of a continuous shape parameterization. Different design domains can use these techniques to implement different shape parameterizations.

The set of 2D shapes used in this work are shown in figure 3.5. Equations for the

(a) line   (b) circle   (c) circle with hole
(d) 6-point star   (e) square   (f) rods
(g) pillow   (h) hourglass   (i) 8-point star

Figure 3.5: 2D feature primitives

Figure 3.6: It can be determined whether or not a given sample (s, t) is inside a star by mapping the sample into one half of a point of the star and determining whether the distance to the sample from the center c is greater than the distance to the star's edge.

shapes follow [116]:

circle(x, y) = x² + y² − r²   (3.9)

diamond(x, y) = |x| + |y| − r   (3.10)

hourglass(x, y) = x⁴ − x² + y²   (3.11)

line(x, y) = max(|10x| − r, |y| − r)   (3.12)

pillow(x, y) = x⁴ + y⁴ − (x² + y²)   (3.13)

rods(x, y) = max(max(|x| − r, |y| − r), x⁴ + y⁴ − 2x² − 2y² − x²y² + 1)   (3.14)

square(x, y) = max(|x| − r, |y| − r)   (3.15)

torus(x, y) = (x² + y² + r0² − r1²)² − 4r0²(x² + y²)   (3.16)
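A few of these in-out tests are easy to transcribe. The sketch below (with an assumed radius r = 0.5) shows the sign convention of such implicit shape functions, with negative values in the interior:

```python
def circle(x, y, r=0.5):
    # Negative inside the disc, zero on the contour, positive outside.
    return x * x + y * y - r * r

def diamond(x, y, r=0.5):
    # L1-norm distance test for a diamond of "radius" r.
    return abs(x) + abs(y) - r

def square(x, y, r=0.5):
    # Max-norm distance test for an axis-aligned square.
    return max(abs(x) - r, abs(y) - r)

def inside(shape, x, y):
    # The implicit value is negative in the shape's interior.
    return shape(x, y) < 0.0
```

The point (0.4, 0.4), for instance, is inside the square but outside the diamond, since |0.4| + |0.4| − 0.5 > 0.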

A star shape can also be produced, requiring a somewhat more complicated definition, in which each (x, y) sample is mapped into one half of a point² of an np-pointed star.

²One point of a star is divided in half with a line from the tip of the point to the center of the star.

The equations below are based on the algorithm presented by Peachey [121]:

r = √((s − cs)² + (t − ct)²)   (3.17)

as = (atan(s − cs, t − ct) + π) mod ap   (3.18)

ar = { as if as < ap/2;  ap − as otherwise }   (3.19)

Ln = normalize(r · (−sin(ar), cos(ar)))   (3.20)

dt = Ln · e   (3.21)

rst = { (e × (p0 − c)) / dt if dt ≠ 0;  rmax otherwise }   (3.22)

star(s, t) = (rst − r) / rst   (3.23)

The radius of the star is rmax, the radius to the base of a star point is rmin, the tip and base of the star point are p0 and p1, and e is the normalized vector from p0 to p1 (see figure 3.6). The distance from the star's center (cs, ct) to the point being sampled (s, t) is r, ap is the angle covered by one point of the star (e.g., 2π divided by the number of points), as is the angle of the sample within one point of the star, ar is that angle reflected around the center axis of the star point, and Ln is a normalized vector perpendicular to the ray from the center to the sample. The value of dt, the dot product of this vector and the star edge, is used to calculate the intersection of the ray through the sample and the edge. This can then be used to compute the length of the vector from the star's center, through the sample point, to the star's border.

Note that the mod in equation 3.18 is not an integer mod but rather a real-valued mod which returns the real remainder of its first argument divided by its second (real-valued) argument. This is used to prevent discontinuities in the design space. This real-valued mod is used throughout the work presented here.

For each of the feature shapes presented, the sign of the value returned by its equation determines whether the coordinate being examined is inside or outside the shape. Typically positive values inside the feature are desired, so negation may be necessary. The magnitude of the value returned gives information about the distance from the shape's contour edge, which can be used in a domain-dependent way (e.g., for opacity or height).

As was mentioned previously, implicitly defined shapes can be smoothly interpolated by interpolating their functions. The rate at which most of the shapes interpolate visually is not linear, however. Wade's symmetric versions [173] of Perlin's bias and gain functions [123] are used to empirically construct a blending function with two parameters to control the rate at which shape blending takes place:

bias(b, t) = t^(log b / log 1/2)   (3.24)

symBias(b, t) = { bias(b, t) if b ≥ 1/2;  1 − bias(1 − b, 1 − t) otherwise }   (3.25)

gain(g, t) = { bias(1 − g, 2t)/2 if t < 1/2;  1 − bias(1 − g, 2 − 2t)/2 otherwise }   (3.26)

symGain(g, t) = { symBias(1 − g, 2t)/2 if t < 1/2;  1 − symBias(1 − g, 2 − 2t)/2 otherwise }   (3.27)

isoBlend(s1, s2, t, b, g) = lerp(s1, s2, symBias(b, symGain(g, t)))   (3.28)

The bias function allows a value t ∈ [0, 1] to be "pushed" smoothly toward either extreme as b ∈ [0, 1] varies from 0.5 toward zero or one. A lack of symmetry between the downward and upward biases is corrected by the symBias function, which reflects the downward bias. A gain function pushes t ∈ [0, 1] either away from or toward the "center" value of 0.5 as the gain parameter g varies from zero to one (the symGain version is again more symmetric in its effect). By composing bias and gain in the isoBlend function, the intermediate shape transitions can be intuitively pushed closer to either shape with the bias control. The speed at which the interpolations ease in and ease out at either end can also be tuned using the gain parameter.

Figure 3.7: Pairs of implicit shapes can be blended by interpolating the values returned by their implicit functions. The above series shows a blend through a series of ten shapes with two intermediates for each pair. The blend proceeds from the top left, left to right, top to bottom.

From       To         Bias    Gain
rods       pillow     0.9     0.5
pillow     square     0.75    0.5
square     circle     0.5     0.5
circle     diamond    0.5     0.5
diamond    6pt-star   0.25    0.5
6pt-star   hourglass  0.75    0.5
hourglass  torus      0.975   0.15
torus      8pt-star   0.0075  0.2
8pt-star   line       0.3     0.5
line       rods       0.5     0.5

Table 3.2: 2D feature blend bias and gain constants
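Perlin's bias and gain have well-known closed forms. The sketch below illustrates the blending machinery with the plain (asymmetric) bias and gain rather than Wade's symmetric variants, so it only approximates the behavior described above:

```python
import math

def bias(b, t):
    # Perlin's bias: remaps t in [0, 1]; bias(0.5, t) == t, and
    # bias(b, 0.5) == b, which is what makes the control intuitive.
    return t ** (math.log(b) / math.log(0.5))

def gain(g, t):
    # Perlin's gain, built from bias; gain(0.5, t) == t.
    if t < 0.5:
        return bias(1.0 - g, 2.0 * t) / 2.0
    return 1.0 - bias(1.0 - g, 2.0 - 2.0 * t) / 2.0

def lerp(a, b, t):
    return a + (b - a) * t

def iso_blend(s1, s2, t, b, g):
    # Blend two implicit-function values, with bias/gain shaping the
    # rate of the transition (cf. eq. 3.28, minus the symmetric variants).
    return lerp(s1, s2, bias(b, gain(g, t)))
```

Setting b or g to 0.5 makes the corresponding remapping an identity, so the blend degenerates to plain linear interpolation.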

Since the speed at which the values returned by the shape functions approach zero can vary greatly, attempts must be made to smooth the interpolation between shapes. Bias and gain functions can be used to shift the shape parameter towards one of the neighboring shapes so that intermediate parameter values produce visually intermediate shapes.

Constants found empirically to improve the distribution of intermediates, as seen in figure 3.7, are shown in table 3.2. One of the more dramatic blend curves is plotted in figure 3.8. If a linear interpolation between the torus and star were used, then through almost the entire transition the torus shape would be maintained. A value for bias was determined through experimentation that would produce a shape that was visually between the torus and star halfway through the interpolation. The gain was then tweaked to hold this intermediate state longer.

Appropriate bias and gain values were then determined for each of the neighboring pairs in the shape transition series. The shapes were ordered in a way that provided a more gradual transition between related forms. It would certainly be possible to allow transitions between any of the shapes if appropriate bias and gain values were determined for each possible (n²) pair, but this is not a necessity to illustrate the techniques used in this work.

Figure 3.8: Blend curve for torus to 8pt-star transition

An additional possible approach to the pairwise blending scheme is to blend N shapes by varying the percentage that each contributes to the final form. This would have the advantage of allowing a much broader range of shape combinations, instead of only allowing transitions between "neighboring" shapes for which intermediate forms had been carefully hand crafted. This advantage is also the approach's primary disadvantage: by allowing arbitrary combinations, there is great potential for a majority of the resultant shape combinations to be "unacceptable" noise. The problem of how to maintain the high quality of hand-crafted methods, but with the unpredictability and potential variety of random combination approaches, is the focus of the next section and the next two chapters.

Figure 3.9: 3D feature primitive

3.1.3 Three-dimensional Features

Three-dimensional features (figure 3.9) are frequently used in solid shaders. The attributes of the surface of a geometric object are determined by “dipping” the object into a three-dimensional space containing 3D features. The surface points adopt the attributes of the surrounding features. As in the previous section, implicit surfaces provide an extremely convenient shape representation that is easily interpolated.

(a) tetrahedron   (b) cube   (c) hourglass
(d) octahedron   (e) pillow   (f) rods
(g) tangle   (h) top   (i) torus

Figure 3.10: 3D feature primitives

From         To           Bias   Gain
rods         tangle       0.85   0.5
tangle       pillow       0.9    0.3
pillow       cube         0.75   0.5
cube         sphere       0.5    0.5
sphere       octahedron   0.5    0.5
octahedron   top          0.9    0.5
top          hourglass    0.1    0.5
hourglass    torus        0.975  0.15
torus        tetrahedron  0.01   0.3
tetrahedron  rods         0.6    0.8

Table 3.3: 3D feature blend bias and gain constants

Figure 3.11: Blend curve for tangle to pillow transition

The set of implicitly defined shapes of varying topology used in this work was defined using equations 3.29-3.38. The shapes created are shown in figure 3.10 [20][116]:

tetrahedron(x, y, z) = (x² + y² + z² − ak²)² − b((z − k)² − 2x²)((z + k)² − 2y²),  k = 5, a = 0.95, b = 0.8   (3.29)

cube(x, y, z) = max(|x| − r, |y| − r, |z| − r)   (3.30)

hourglass(x, y, z) = y⁴ − y² + x² + z²   (3.31)

octahedron(x, y, z) = |x| + |y| + |z| − r   (3.32)

pillow(x, y, z) = x⁴ + y⁴ + z⁴ − (x² + y² + z²)   (3.33)

rods(x, y, z) = max(max(|x| − r, |y| − r, |z| − r), x⁴ + y⁴ + z⁴ − x² − y² − z² − x²y² − x²z² − y²z² + 1)   (3.34)

sphere(x, y, z) = x² + y² + z² − r²   (3.35)

tangle(x, y, z) = x⁴ − 5x² + y⁴ − 5y² + z⁴ − 5z² + 11.8   (3.36)

top(x, y, z) = y⁴(x² + z²) + c²(x² + z²) − c²a²,  a = 1, c = 0.1   (3.37)

torus(x, y, z) = (x² + y² + z² + r0² − r1²)² − 4r0²(x² + y²)   (3.38)

3.1.4 Noise

While the above formulations are capable of creating a very wide range of shapes, they are (by definition) geometric and regular. When irregularity is desired in a procedural texture, a continuous noise function (figure 3.13) is often used. Noise is perfect for the needs of interactive evolutionary design because it can be controllably and continuously added to regular forms to make them smoothly become more irregular.

Noise is a function which, in most implementations, outputs a single value based on its inputs. Small changes in the input "seeds" yield a correspondingly small change

Figure 3.12: Pairs of 3D implicit shapes can be blended by interpolating the values returned by their implicit functions. The above series shows a blend through a series of ten shapes with two intermediates for each pair. The blend proceeds from the top left, left to right, top to bottom.

Figure 3.13: Noise sample

in the output value. The rate at which the output value changes is usually controlled by multiplying the input seeds by a desired frequency [122].

Noise's property of being continuous is extremely important for its use in IED. As discussed in the previous chapter, one of the requirements of IED is that small changes in the parametric design space should result in small changes in the individual designs. If non-continuous random values were used, this would not be the case, as a small mutation might result in an arbitrarily complex change in the individual. Using noise makes it more likely that small steps in parameter space result in small changes in the final design.

The values produced by the noise function are frequently used to perturb another set of values. Noise can be added to a single value or a multidimensional coordinate. Typically a varying value such as the position of the sample is passed to noise as a "seed" value, multiplied by some frequency f and offset within the noise function by some value o. The consistent use of this seed value ensures reproducible results. The value returned by noise (usually either in [0..1] or [−1..1]) is then scaled by some amplitude a:

v′ = v + a · noise(px fx + ox)   (3.39)

v′ = v + a · noise(px fx + ox, py fy + oy)   (3.40)

v′ = v + a · noise(px fx + ox, py fy + oy, pz fz + oz)   (3.41)

Figure 3.14 shows how high and low frequency noise can be used to deform features in various dimensions.
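Equation 3.39 can be sketched with any continuous noise function. Here a minimal hash-based 1D value noise serves as an assumed stand-in for a Perlin-style noise (it is not the implementation used in this work):

```python
import math

def _hash(i):
    # Deterministic pseudo-random value in [0, 1) for lattice point i.
    i = (int(i) * 2654435761) & 0xFFFFFFFF
    i ^= i >> 16
    return (i & 0xFFFF) / 65536.0

def noise1(x):
    # 1D value noise: smoothly interpolated lattice hashes, in [0, 1).
    i = math.floor(x)
    f = x - i
    t = f * f * (3.0 - 2.0 * f)  # smoothstep fade for C1 continuity
    return _hash(i) * (1.0 - t) + _hash(i + 1) * t

def perturb(v, p, freq, offset, amp):
    # Eq. 3.39: v' = v + a * noise(p * f + o)
    return v + amp * noise1(p * freq + offset)
```

Because noise1 is continuous, nearby sample points receive nearby offsets, so a feature deforms smoothly rather than shattering.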

Different frequencies of noise can be summed to create a hierarchy of visual complexity for a given attribute. Usually each successive octave of noise is created with twice the frequency and half the amplitude of the previous octave. Combining these different scales of noise is often referred to generally as "turbulence", although sometimes a distinction is drawn between methods involving summing octaves of noise (fractalsum or fBm, fractional Brownian motion) versus summing octaves of the absolute value of noise (turbulence). The former yields a wispier structure, while the latter is lumpier (figure 3.15) [8][121]. Musgrave shows how the number of octaves can be made real-valued to eliminate discontinuities in his terrain [106][109]. This technique is equally important here to maintain continuity in the evolutionary solution space:

fractalsum(x, n) = Σ(i=0 to n−1) 2⁻ⁱ · noise(2ⁱx)   (3.42)

turbulence(x, n) = Σ(i=0 to n−1) |2⁻ⁱ · noise(2ⁱx)|   (3.43)

The above functions yield results that are self-similar at different scales and throughout the space. This universal self-similarity is not a requirement, however.
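A sketch of fractalsum and turbulence with Musgrave-style real-valued octaves follows. The hash-based value noise and the linear fade of the final partial octave are assumptions for illustration; the key point is that the result varies continuously in n:

```python
import math

def _hash(i):
    # Deterministic pseudo-random value in [0, 1) for lattice point i.
    i = (int(i) * 2654435761) & 0xFFFFFFFF
    i ^= i >> 16
    return (i & 0xFFFF) / 65536.0

def snoise1(x):
    # Signed 1D value noise in [-1, 1).
    i = math.floor(x)
    f = x - i
    t = f * f * (3.0 - 2.0 * f)
    return 2.0 * (_hash(i) * (1.0 - t) + _hash(i + 1) * t) - 1.0

def fractalsum(x, n):
    # Eq. 3.42, with the fractional part of n fading in the last octave
    # so the spectrum changes continuously as n evolves.
    total = 0.0
    for i in range(int(n)):
        total += (2.0 ** -i) * snoise1((2.0 ** i) * x)
    frac = n - int(n)
    if frac > 0.0:
        i = int(n)
        total += frac * (2.0 ** -i) * snoise1((2.0 ** i) * x)
    return total

def turbulence(x, n):
    # Eq. 3.43: identical, but each octave contributes its absolute value.
    total = 0.0
    for i in range(int(n)):
        total += abs((2.0 ** -i) * snoise1((2.0 ** i) * x))
    frac = n - int(n)
    if frac > 0.0:
        i = int(n)
        total += frac * abs((2.0 ** -i) * snoise1((2.0 ** i) * x))
    return total
```

Moving n from 2.0 to 2.001 changes the output only by the tiny weight given to the newly introduced octave, which is exactly the continuity property needed for evolution.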

Figure 3.14: Features perturbed by noise. (a) 1D feature; (b) 1D feature with low frequency noise; (c) 1D feature with high frequency noise; (d) 2D feature; (e) 2D feature with low frequency noise; (f) 2D feature with high frequency noise; (g) 3D feature; (h) 3D feature with low frequency noise; (i) 3D feature with high frequency noise.

Figure 3.15: 2D fractal noise. (a) Fractalsum; (b) Turbulence.

Musgrave presents multifractals as a means of varying the frequency and amplitude of additive noise in different regions. For example, he uses this to create smooth low foothills and spiky mountain peaks in different regions of the same surface [106].

The amount that the amplitude is scaled between octaves is called gain, while the amount the frequency changes between octaves is referred to as lacunarity. If gain = 1/lacunarity then the noise is called "1/f noise" [8]. There is no reason that gain and lacunarity need be set at 0.5 and 2 as they are in the equations above. Worley suggests never using a gain of exactly 0.5, but instead using a number with lots of digits like 0.485743, in order to avoid alignment artifacts [177].

There are many different methods for computing noise. The different methods primarily vary in their computation times and degree of visual artifacts. Peachey, Perlin, and Apodaca and Gritz present extensive details on a variety of noise variations [8][121][123].

3.2 Patterns

The features presented above function as primitives for building patterns in this section. Creating patterns from features is the next step in constructing fields of visual traits. Returning to procedural texturing techniques, a floating-point version of a modulus function can be used to make other simple functions periodic. Given a function f(x) defined on [0, p], a periodic version of f(x) with period p can be constructed [8]:

mod(a, b) = b · (a/b − ⌊a/b⌋)   (3.44)

fp(x) = f(mod(x, p))   (3.45)

pattern(x, p) = feature(mod(x, p))   (3.46)

pattern(x, y, p) = feature(mod(x, p), mod(y, p))   (3.47)

pattern(x, y, z, p) = feature(mod(x, p), mod(y, p), mod(z, p))   (3.48)

Pattern functions can be created using this technique with any of the feature generation functions defined in the earlier sections of this chapter. We can use this implementation of mod to create an arbitrary number of copies of any feature function. The mod function partitions the domain into cells or tiles of size p, each of which contains a copy of the feature. This ability to create a field of features of arbitrary complexity as implicit functions, requiring almost no storage, is an extremely powerful paradigm which is used throughout this work. The following subsections describe some of the ways that patterns of simple repeating features can gain visual complexity.
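Equations 3.44 and 3.46 can be sketched directly; the raised-cosine bump used as the feature here is an arbitrary illustrative choice:

```python
import math

def real_mod(a, b):
    # Eq. 3.44: real-valued remainder of a/b, always in [0, b).
    return b * (a / b - math.floor(a / b))

def feature(x, width=1.0):
    # An arbitrary 1D bump on [0, width]: a raised cosine.
    t = x / width
    return 0.5 - 0.5 * math.cos(2.0 * math.pi * t)

def pattern(x, period=1.0):
    # Eq. 3.46: tile the feature; every cell of width `period` holds a copy.
    return feature(real_mod(x, period), period)
```

The pattern repeats with the given period and works for negative x as well, since real_mod(-0.25, 1.0) yields 0.75 rather than a negative remainder.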

3.2.1 Bombing

As was done with individual features above, we can use noise to smoothly make the patterns less regular in a number of ways. An important technique for individualizing the features of a pattern is called bombing. Bombing involves using the index of the feature to determine a single noise value to be used for modifying some property of that feature [100][121]:

whichTile(x, f) = ⌊x·f⌋   (3.49)

positionBomb(x, f, a) = x + a · noise(whichTile(x, f))   (3.50)

The whichTile function returns a unique integer for each tile. A function like positionBomb can then use this tile index as a noise seed to modify some value (in this case, the x position of a feature). Figure 3.16 shows examples of bombing several attributes in 1D, 2D, and 3D domains.

Since a single noise value is foimd for all points on the feature, the entire feature can be modified uniformly (e.g., scaled, rotated, colored...) The primary benefit is that each feature can be made unique for a given visual property. Depending on the frequency of the noise used, neighboring features may change their properties gradually® or quite erratically, based on their relative spatial location.

Some common feature properties for bombing include size, position, color, and existence. All of the feature parameters defined (e.g., the 2D shape parameter) can be bombed as well. Note that changing some traits like the size or position can actually move part of a feature (if not all of it) into a neighboring cell. This results in a clipping of the feature at the cell boundary unless all neighboring cells are checked for features as well [121].

^Recent implementations of RenderMan include a cellnoise function intended for this use. It returns a single uniformly distributed noise value between integer values. While the use of cellnoise can provide a substantial speed improvement over calls to noise, there is no way to control the magnitude of the discontinuities between cells. Bombing using the noise functions allows us to control the continuity between the values used by neighboring cells. More on this in the next chapter.

Also notice that in the 3D cases where features leave the boundaries of the sampling volume, holes result, as in figure 3.16(g). If these holes are not desired, it is simple to use an intersection boolean with an implicit cube that is the same size as the sampling volume. Computing a boolean intersection between two implicit surfaces f and g simply requires taking the maximum value of the two equations at each sample point:

intersection(f, g, x, y, z) = max(f(x, y, z), g(x, y, z)) (3.51)

For IED, it is important to create continuous versions of bombing functions, instead of the more commonly used discrete choice functions. This was seen already in the feature shape functions in previous sections. A continuous version of the existence bombing function used in figure 3.16(h) would also be appropriate. Existence bombing is frequently implemented by testing whether a value returned by noise for that feature cell is less than some threshold. However, this binary test can lead to large visual discontinuities during small evolution steps. An improved version might quickly but smoothly scale the feature down to zero, below some threshold value:

smooth(x) = (3 − 2x)x² (3.52)

smoothstep(a, b, x) = { 0 if x < a; 1 if x > b; smooth((x − a)/(b − a)) otherwise } (3.53)

exist(x, t) = smoothstep(t − 0.1, t + 0.1, noise(whichTile(x, f))) (3.54)

The smooth function returns a smooth interpolation from zero to one, with a slope of zero at both zero and one. Smoothstep also smoothly interpolates from zero to one,

(a) Width (b) Height (c) Position (d) Shape (e) Orientation (f) Value (g) Noise Offset (h) Existence (i) Scale

Figure 3.16: Bombing the values of different attributes in regular 1D, 2D, and 3D patterns.

but the transition from zero to one is scaled to occur between the values of a and b. The exist function can use this to make features disappear continuously by smoothly scaling them down over some user-defined range. The feature is scaled down to zero if the value returned by noise for the feature index in question is below the threshold t.
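Equations 3.52 through 3.54 translate directly into Python. To keep the sketch self-contained, exist takes the per-tile noise value as an argument rather than recomputing it:

```python
def smooth(x):
    # Equation 3.52: cubic ease with zero slope at both 0 and 1.
    return (3.0 - 2.0 * x) * x * x

def smoothstep(a, b, x):
    # Equation 3.53: 0 below a, 1 above b, smooth cubic in between.
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return smooth((x - a) / (b - a))

def exist(noise_value, t):
    # Equation 3.54: continuous existence bombing. Features whose per-tile
    # noise value falls below threshold t are scaled away smoothly over
    # the band [t - 0.1, t + 0.1] instead of popping on and off.
    return smoothstep(t - 0.1, t + 0.1, noise_value)
```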

When bombing, it is often desirable to have noise return a random number with a uniform chance of being any number in the range being produced. A common method of doing this is to use uniform distributed noise (UDN) [100]:

udn(x, L, R) = smoothstep(0.25, 0.75, noise(x)) · (R − L) + L (3.55)

Uniform distributed noise takes the values noise returns (which are clustered around 0.5) and pushes them towards zero and one. The resulting distribution can be tuned by replacing 0.25 and 0.75 with values more precisely chosen for the given implementation of noise, but these values yield adequate results.

Worley describes another more general and accurate approach for producing a fractal noise function F with a more uniform distribution of values. He normalizes the sum of the octaves of fractal noise, and then remaps this value via a histogram-based equalization [177].

3.2.2 Global Perturbations

In addition to bombing, attributes of the entire pattern can be modified continuously through a global application of noise, in the same manner shown with perturbing individual features in section 3.1.4. By using the global sample position as a seed to noise, continuity through the entire field can be maintained:

v′ = v + a · noise(x · freq, y · freq, z · freq) (3.56)

(a) Sample Position (b) Frequency

Figure 3.17: Adding global noise to a pattern’s attributes

Figure 3.17 shows noise being applied to the position in space being sampled, and to the pattern frequency value.

3.3 Layers

Once individual patterns of features have been created using the techniques discussed in this chapter, the patterns can be combined for greater complexity. Figure 3.18 shows a fairly regular sawtooth pattern being combined with an irregular pattern of narrow spikes.

The method used to combine patterns depends largely on the visual attributes needed for the target design domain. Compositing options include summing, averaging, or taking the maximum value of the individual layers. For example, we might need primarily “tall” features in a color domain, but want features with height proportional to their width for a surface height (“displacement mapping”) domain. More detailed examples of combining layers in different contexts are shown in chapter five.


(a) Layer 1 (b) Layer 2 (c) Combined

Figure 3.18: Combining two 1D layers

The maximum number of layers possible determines the length of the individuals’ chromosomes. If a maximum of L layers is possible, and the properties of a single layer can be described using N genes, then for a given design space, all individuals will have a chromosome of length c + (N · L). The constant c represents a small number of genes describing layer-independent properties of an individual. A sample gene map is provided in appendix A.

Note that when individuals are mated, the crossover rate plays a large part in determining the chances of layers being inherited intact. The lower the crossover rate, the better the chance of copying all of the genes of one or more layers into the offspring.

Also note that while no genetic operator has been used here to allow “mating” between layers within a single individual, one could certainly be introduced into the offspring generation process. This would be similar to mutation, in that a percentage chance could be specified that one or more layers within an individual would have their corresponding genes mixed, likely using crossover. This could occur either before or after mating and mutation.
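Such an intra-individual operator might look like the following sketch, which uses uniform crossover between two randomly chosen layers. The function name, the uniform-crossover choice, and the single-pair restriction are all assumptions, since the text only outlines the idea:

```python
import random

def mate_layers(chromosome, c, N, L, chance, rng=random):
    # With the given percentage chance, pick two layers of one individual
    # and mix their corresponding genes via uniform crossover.
    genes = list(chromosome)
    if L >= 2 and rng.random() < chance:
        i, j = rng.sample(range(L), 2)
        for g in range(N):
            if rng.random() < 0.5:  # swap this corresponding gene pair
                a = c + i * N + g
                b = c + j * N + g
                genes[a], genes[b] = genes[b], genes[a]
    return genes
```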

3.4 Summary

In this chapter a number of methods have been presented for designing layers of patterns in one, two, and three dimensions from collections of features. These layered patterns are created to produce a broad variety of possible visual qualities.

It has been shown how the visual properties can be made to change smoothly as the controlling parameters are adjusted.

In the next chapter, limitations of the methods presented so far are identified and addressed. The patterns presented here, while fairly general, are unlikely to yield many of the structured visual design traits commonly found in manual design. Additional parameters are constructed to bias the solution space, increasing the possibility of structured design discovery.

CHAPTER 4

FORMAL DESIGN CONCEPT REPRESENTATION

The previous chapters have introduced interactive evolutionary design and continuous layered feature patterns. This chapter will address one of the primary sources of signature in interactive evolutionary design by facilitating the discovery of design solutions which evidence usage of common principles of formal visual design.

One of the major problems of interactive evolutionary design is achieving satisfactory convergence speed. Assuming interesting solutions are in the design space, can they be found in a reasonable amount of time? The answer to this question depends largely on the method used to create the design-problem space.

The primary use of interactive evolutionary design is finding solutions to aesthetic design problems. It is often the case that the design problem space is such that one is unlikely to stumble across solutions having high quality formal graphic design traits. Symmetry, unity, emphasis, balance, or rhythm are found infrequently unless they are explicitly represented (or at minimum encouraged) in the interpretation of the chromosome, the evolutionary operators, or the distribution of traits in the design space.

It is certainly possible that after viewing several populations of many individuals, we might find one or two with a desirable visual trait (and then breed for it). However, IED frequently involves a balance of biasing individuals to have certain visual traits, without forcing these traits into all individuals.

Visual design is a field devoted to conveying information and evoking responses in a viewer via the creation and arrangement of visual elements such as text, images, shapes, colors, etc. While design involves numerous unquantifiable properties such as association by similarity and contemporary trends, there are a number of underlying “formal” principles which are frequently used by designers in an attempt to manipulate the viewer’s interpretation of the features in a given design.

There is no way to represent all of a designer’s techniques without a computationally infeasible model of both the viewer’s and the designer’s knowledge. However, a sizeable number of formal visual traits can be sufficiently represented in useful ways.

Most of these terms will be described here in the language of 2D imagery (e.g., lines, shapes, color fields, patterns). Many of these concepts can be extended to include other visual domains, such as 3D form and motion.

The following sections in this chapter survey formal visual design principles, discuss possible methods of representation, and show simple implementations within the implicit pattern function architecture to illustrate each concept. Design parameters (represented as normalized genes) are used to control the degree to which each design principle is present in a given region of solution space. It will also be shown how these genes can be arranged hierarchically, allowing for both high- and low-level adjustment of attributes.

The design concepts under consideration are often described in the literature in terms of the presence or absence of combinations of the lower-level visual traits introduced in the previous chapter. For example, an increase in the value of a gene

controlling “unity” might decrease the amount of noise, add more regularity, use less of a mixture of hard and soft edges, decrease the size and complexity of a color palette, etc.

While there are many terms used for most of these design concepts, the terminology used here is primarily drawn from an introductory design book by Lauer appropriately named Design Basics [89]. Another discussion of these concepts in a computer graphics lighting context can be found in the writing of Calahan [26]. Other interesting sources for design concepts include Meggs [103] and Rand [131].

4.1 Design Principle Implementation

As previously stated, very low-level parameterizations of shape space have a low chance of yielding interesting designs within the practical constraints of an interactive evolutionary design system. Attempts to include domain-specific high-level parameters within a solution space often introduce limitations in the parameter’s space and its distribution, increasing signature.

If the goal is to increase the likelihood of the appearance of desirable formal design qualities, then those qualities will need to be represented within the solution space in a continuous, probabilistic fashion. This section will describe methods for implementing and using formal design parameters in order to enable and encourage the emergence of these qualities within IED solutions.

While it would be unreasonable to make any attempt to completely represent the complexities and subtleties of the highly subjective field of visual design^, there are

^Visual design is much broader than the few areas dealt with here. The principles presented below are a subset of formal graphic design techniques. The term visual design will be used as a convenient shorthand.

numerous low-level “rules” which designers frequently employ that do lend themselves to procedural representation. Symmetry is perhaps one of the simpler examples. Symmetry is unlikely to evolve accidentally at interactive rates within the bounds of a low-level representation, and yet it is also not something that one would want present in all design solutions. Symmetry will need to be represented in a continuous rather than “all or nothing” fashion so that the parametric space contains solutions that become gradually more or less symmetric.

Note that we are not creating a direct manipulation design tool with a guaranteed “symmetry dial” yielding a direct correlation between visual symmetry and the user’s interaction with the dial. Rather, we need to make the solution space that is defined by the interpretation of the genes contain regions which are more (or less) likely to contain symmetric solutions. Note also that no attempt is being made to guarantee the removal of design properties, although this is often a side effect of reducing the amplitude of a design gene. The effort rather is to create genes which increase the likelihood of their presence.

The implementation of these design parameters will involve the creation of higher-level parameters which are used by the system to modulate the effects of their lower-level counterparts. Examples are given in the following sections. A goal of creating high-level parameters is to enable the system to smoothly modify high-level traits of an entire scene, simultaneously changing a given property in an object’s shape, texture, motion, and environment.

4.2 Value Biases

To implement most of the design principles discussed in this section, it will be necessary to bias the controlling gene values toward certain sub-regions of solution space. This biasing could also be viewed as a reshaping or stretching of the solution space in such a way as to more smoothly distribute areas of high fitness.

The biased remapping of gene values into sub-regions of the solution space could also be considered a means of creating continuous dynamic constraints. That is, constraints which can be gradually enabled or disabled.

In the previous chapter a few practical examples have already been demonstrated for remapping gene values to improve fitness. The shaping parameters of the one-dimensional features had bias and gain applied to produce a better distribution of classes of contours. Bias and gain were also used to make the rate of visual transition between shapes more linear. A few additional generic techniques for biasing values are introduced in the remainder of this section.

4.2.1 Continuous Choice

When interpolating between a number of choices (such as between different two or three-dimensional feature shapes) it is sometimes desirable to be able to bias the results towards a discrete number of selections. Without this bias the result is likely to be an interpolated value between these selections a majority of the time. To address this need, a function like the following can be used:

smoothInts(v, w) = ⌊v⌋ + smoothstep(⌊v⌋ + w, ⌈v⌉, v) (4.1)

Using smoothInts biases the value of v towards integer values w percent of the time and towards interpolated values 1 − w percent of the time^. The smoothstep function was defined in equation 3.53. The parameter w is in [0, 1] and controls the percentage chance of a discrete integer being selected instead of an interpolated value. The equation assumes that n individual choices correspond to integer values, and that fractional values correspond to proportional interpolations between those choices.

4.2.2 Multiple Sub-range Remapping

It is frequently the case that a mapping from a normalized gene value into a single range provides results that are too similar. A biased remapping of the value into two or more sub-ranges can often yield a more visually diverse range of offspring. If a range is merely specified between minimum and maximum values, then often only intermediate “average” values are produced in practice.

A common example of this can be seen when selecting periodic function frequencies. It is sometimes desirable to have a visual property present in part of a visual field and not in another. This may require a relatively low frequency. But it is also sometimes advantageous for a given visual property to appear and disappear many times within the visual field, requiring higher frequencies as well.

If an appropriate low-frequency range is [1, 3] and an appropriate high-frequency range is [90, 120], then it does not suffice to remap the normalized gene value to the range [1, 120]. Given a uniform distribution of random gene values, the desirable low-frequency range would only appear in less than two percent of our space’s representatives.

^The ease-in/ease-out smoothstep function may be replaced with a linear interpolation if a more proportional but less continuous interpolation is desired.

While it might be possible in the above case to use bias and gain functions to remap the ranges, perhaps a few intermediate sub-ranges would produce desirable results as well. Recall that we also need to maintain continuity between the ranges of desirability so that evolution doesn’t change anything too abruptly.

A simple solution is to create a lookup table^ with linear interpolation between keys:

lkspline(t, k0, v0, …, kn−1, vn−1) = { lerp(vi, vi+1, (t − ki)/(ki+1 − ki)) if ∃i : ki ≤ t ≤ ki+1; v0 else if t < k0; vn−1 else if t > kn−1 } (4.2)

This will return a value linearly interpolated between the values at the keys to either side of t.
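A Python version of equation 4.2, taking the keys and values as parallel lists for convenience:

```python
def lerp(a, b, t):
    # Linear interpolation between a and b.
    return a + (b - a) * t

def lkspline(t, keys, values):
    # Equation 4.2: lookup table with linear interpolation between keys,
    # clamping to the first/last value outside the key range.
    if t <= keys[0]:
        return values[0]
    if t >= keys[-1]:
        return values[-1]
    for i in range(len(keys) - 1):
        if keys[i] <= t <= keys[i + 1]:
            u = (t - keys[i]) / (keys[i + 1] - keys[i])
            return lerp(values[i], values[i + 1], u)
```

The frequency example above might use lkspline(g, [0.0, 0.45, 0.55, 1.0], [1.0, 3.0, 90.0, 120.0]) so that roughly half of the gene range maps into [1, 3] and half into [90, 120], with a continuous (if rapid) transition between them; the 0.45/0.55 knot placement here is illustrative, not from the text.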

4.3 Unity

Visual unity refers to the harmonizing of different visual elements. When the components of an entity exhibit unity, they are connected by the viewer in some way. An entity presenting unity is seen as more “comforting” but also perhaps “boring”, “uninteresting”, or “safe” depending on context. A design lacking unity could be perceived as “exciting”, “dangerous”, or “random” [89].

Unification of features is frequently accomplished by proximity or repetition of forms. Continuation is another common technique in which implied lines are created to unify elements. Unity can also be obtained by providing a uniform “emphasis on variety” where the visual field is uniformly different throughout [89]. While the unity of a given design solution is determined by many factors, it often relates to the

^The name lkspline came from the linear key spline function found in SideFx’s Houdini®.


Figure 4.1: Increasing the value of the unity-by-repetition gene

relationships, similarities, and connections between the various features. Unity can be increased or decreased by adjusting the degree of these relationships.

4.3.1 Repetition

Repetition of design elements can be an important factor in the creation of unity. Smoothly reducing the amount of noise used to differentiate and individualize features unifies the design via association by similarity. As the value of the unity-by-repetition gene is increased, the number of features is also slightly increased. Additionally, the amplitude of global noise is reduced, further decreasing the individuality of features (figure 4.1).

Each bombing and global noise amplitude Ai is decreased proportionally to the increase in the value of the unity-by-repetition gene Ur. The degree of scaling can be adjusted by a constant ci. The value of each ci can be equal, or it can be individually tuned based on the relative visual impact of its corresponding attribute (some traits are more subtle than others). The number of features Nf is increased by wf, the percentage of the difference between the original and maximum number of features:

Ai = Ai₀ (1 − ci Ur) (4.3)

Nf = Nf₀ + wf Ur (Nf_max − Nf₀) (4.4)

Figure 4.2: Increasing the value of the unity-by-continuation gene

As the unity-by-repetition gene Ur is increased, the first equation scales down each noise amplitude, while the second equation increases the number of features Nf towards the maximum value.
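Equations 4.3 and 4.4 can be sketched together; representing the amplitudes and tuning constants as parallel lists is a convenience of this example:

```python
def unity_by_repetition(amps, n_feat, n_max, c, w_f, Ur):
    # Equation 4.3: scale each bombing/global-noise amplitude down as the
    # unity-by-repetition gene Ur rises (one tuning constant per amplitude).
    new_amps = [A * (1.0 - ci * Ur) for A, ci in zip(amps, c)]
    # Equation 4.4: push the feature count toward its maximum by the
    # fraction w_f of the remaining difference.
    new_n = n_feat + w_f * Ur * (n_max - n_feat)
    return new_amps, new_n
```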

4.3.2 Continuation

Continuation is another unifying tool. The creation or destruction of implied lines can be extremely useful in unifying a design. One means of creating implied lines is to change a parameter continuously. This can be accomplished by reducing the bombing frequencies Bi so that the attributes of features change more gradually across the visual field (figure 4.2). Positional bombing amplitudes Ai are also reduced to better space the features.

If this is done uniformly to all bombed parameters, then the features will become unnecessarily simple. A solution is to choose a subset or palette of traits to have their frequencies reduced. For each attribute, a palette weight wi can be computed to determine whether (and how much) that attribute’s frequency should be reduced.

Trait palettes will be further discussed in section 4.9. The attributes selected are chosen using uniform distributed noise (udn) so that the selection will change smoothly throughout the parameter space:

Ai = Ai₀ (1 − ci Uc) (4.5)

n = udn(i, pspace) (4.6)

paletteWeight(mn, mx, n) = linstep(mn, mx, n) (4.8)

wi = paletteWeight(0.25, 0.5, n) (4.9)

Bi = Bi₀ − symBias(b1, Uc) symBias(b2, wi) (Bi₀ − Bi_min) (4.10)

As the unity-by-continuation gene Uc increases, the normalized position bombing amplitudes Ai are decreased. The noise value n is used to calculate the palette weights wi. In the above example, the paletteWeight function is passed values of 0.25 and 0.5 for the step transition. This means that on average, half of the attributes will have full participation in the palette, a quarter will be modified to some degree, and the remaining quarter will remain untouched. The final equation modifies bombing frequencies Bi towards the minimum, based on both Uc and the computed paletteWeight wi.

The two constants b1 and b2 can be used to control the influence of both Uc and the paletteWeight. Using a value of b1 = 0.9 causes a given bombing frequency to rapidly plummet to the minimum as the unity-by-continuation gene Uc increases. Using a value of b2 = 0.9 causes the frequency Bi to be fully affected if the paletteWeight indicates nearly any degree of activation.

Figure 4.3: Increasing the value of the unity-by-proximity gene

4.3.3 Proximity

Another important factor affecting unity is the relative proximity of design features. A proximity gene can be used to cause features to gather together, forming a visual group at a given location. This grouping can be accomplished by specifying an attraction point towards which all of the features can be moved based on their distance. The distance d (measured in tiles) that a given feature sample s should be translated is found as follows:

d = Up · distance(ci, ca) (4.11)

In the above equation, Up is the value of the unity-by-proximity gene. The center of the cell^ containing the sample s is ci. The position to which features are being attracted is ca. In the one-dimensional case shown, ci is just the index of the cell containing s, plus 0.5. Adding 0.5 to the cell index ensures that distance is calculated from the center of the sample’s cell. The center of attraction, ca, is determined by multiplying the normalized attraction gene by the number of cells. For higher dimensions, an additional attraction gene should be used for each axis.

The equation translates each feature towards the point of attraction by an amount proportional to its distance from the attraction point (figure 4.3). This calculation

^The words “cell” and “tile” will be used interchangeably.


Figure 4.4: Increasing the value of the unity-by-variety gene

should be conducted after features have had their positions bombed; otherwise the distance calculation may not reflect the cell’s actual global position.
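A 1D sketch of the proximity translation; the signed-distance convention, pulling each feature toward the attractor, is an assumption about how equation 4.11 is applied:

```python
def unity_by_proximity(cell_index, n_cells, attraction_gene, Up):
    # Cell centers sit at index + 0.5; the attraction center is the
    # normalized attraction gene scaled by the number of cells.
    ci = cell_index + 0.5
    ca = attraction_gene * n_cells
    # Equation 4.11: translate by Up times the (signed) distance, in tiles.
    return Up * (ca - ci)
```

With Up = 1 every feature lands exactly on the attraction point; intermediate values gather the features proportionally.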

4.3.4 Variety

The principle of unity-by-variety creates a perception of unity from the consistency of individuality among features throughout the visual field. Implementation requires increasing the frequency and amplitude of the bombing of shape attributes in order to increase the differences between shapes. At the same time, the amplitudes of attributes which tend to localize traits in sub-regions of the visual field are reduced (e.g., existence and position bombing, and global noise). This reduction causes visual properties to vary more uniformly throughout the visual field (figure 4.4).

The following equations are used to increase shape bombing amplitudes Si, to decrease the amplitudes Ai of positional bombing, scale bombing, and global noise, to decrease the frequency Fv of value bombing towards its minimum, and finally to


Figure 4.5: Increasing the value of the unity-by-combination gene

increase bombing frequencies Bi towards their maximum:

Si = Si₀ + ci Uv (1 − Si₀) (4.12)

Ai = Ai₀ (1 − ci Uv) (4.13)

Fv = Fv₀ − cv Uv (Fv₀ − Fv_min) (4.14)

Bi = Bi₀ + ci Uv (Bi_max − Bi₀) (4.15)

4.3.5 Combining Techniques

A single unity gene U is used to simultaneously modify the value of each of the above unity genes Ui. This has the effect of increasing or decreasing the degree of unity in the visual field through a combination of the above techniques (figure 4.5).

The exception is that unity-by-variety and unity-by-repetition directly conflict, working to increase and decrease the individuality of features, respectively. The solution is to severely dampen the value of the one with the lesser amplitude. This is done smoothly with the previously discussed symBias function so that there will be no visual jumps as their values cross in the design space:

Ui = Ui₀ + ci U (1 − Ui₀) (4.16)

xorBias(b, m, n): n = n · symBias(b, n/m) if m > n; otherwise m = m · symBias(b, m/n) (4.17)

The xorBias function can be called with the values of the unity-by-variety and unity-by-repetition genes Uv and Ur, and a low bias value (e.g., 0.1) so that the value of the lower of the two is biased toward zero.
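A sketch of the xorBias idea follows. The Schlick-style bias curve standing in for symBias is an assumption, since symBias itself is defined elsewhere in the dissertation:

```python
def symbias(b, t):
    # Schlick-style bias curve: identity at b = 0.5, pulled toward 1 for
    # larger b (an assumed form for the symBias used in the text).
    return t / ((1.0 / b - 2.0) * (1.0 - t) + 1.0)

def xor_bias(b, m, n):
    # Dampen whichever of the two gene values is smaller, pushing it
    # toward zero with a low bias b so the conflicting genes
    # (unity-by-variety vs. unity-by-repetition) do not both act.
    if m > n:
        return m, n * symbias(b, n / m)
    return m * symbias(b, m / n), n
```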

The influence on each Ui can be individually weighted with the ci constants. For example, in practice, a significantly lower value has been used for cp (the constant for unity-by-proximity) since it has such a relatively strong visual impact compared to the other unity methods. Alternatively, the palette selection method (equation 4.8) could again be used to smoothly choose which subset of approaches should be used to increase visual unity.

This sort of hierarchy of parameters was used in the author’s previous work on human body geometry evolution [91][92]. Doing so allowed for high-level attributes such as height or weight or musculature to be determined by high-level genes, with local refinement then provided by low-level genes controlling the relative size of individual body regions.

4.4 Emphasis and Focal Point

It is almost always the case that a design will consist of certain features that are more visually important than others. The most important elements become focal points. There is usually a hierarchy of visual importance in a visual design, with different components receiving different degrees of emphasis [89].

A designer will use varying techniques to attract the eye to areas of greater importance. These techniques will also be used to move the eye around the visual design, to add interest [89]. Focal points are created by modifying properties of a localized set of features so that their attributes differ from those of the surrounding features.

Figure 4.6: Increasing the value of the emphasis-by-contrast gene

It is the magnitude of this difference that is the primary factor in determining the degree of emphasis.

4.4.1 Contrast

A common means of developing emphasis is to create a contrast between the qualities of the features in a localized focal point region and the surrounding features. To implement this in a pattern function, the degree of shape bombing must be reduced for all features. It is important that the features outside the focal point are sufficiently similar to one another so that they can be distinguished from the features inside the focal point (figure 4.6).

Gene values determine the center and radius of the focal point (pc and pr). In higher dimensions, additional genes could be used to determine the shape of the region as well (e.g., genes to control whether the focal point boundary contour is square, circular, or irregular).

If a feature is determined to be inside the focal point, a subset of the feature’s shape parameters are selected (using the paletteWeight method) to have their values changed. The magnitude of the change to each shape parameter Si is determined by the value of the emphasis-by-contrast gene Ec. The values of the selected properties


Figure 4.7: Increasing the value of the emphasis-by-isolation gene

are pushed towards their opposite extreme value:

linpulse(a, b, f, x) = linstep(a − f, a, x) − linstep(b, b + f, x) (4.18)

focalPoint = linpulse(pc − pr, pc + pr, pr/2, x) (4.19)

= % % (4.20)

wi = paletteWeight(i, min, max) (4.21)

Si = pushValue(Si₀, Ec wi) (4.22)

The linpulse function creates a localized focal point region with a value of one, with transitions on either side to zero via a linear falloff^. This focalPoint value is passed, along with each parameter value to be modified, to the pushValue function. The pushValue function remaps the normalized parameter value towards the opposite extreme based on the paletteWeight for that parameter (equation 4.8) and the degree to which the feature is in the focal point. As features move out of the focal point, they are smoothly made to contrast less with their neighboring features.

Figure 4.8: Increasing the value of the emphasis-by-placement gene

4.4.2 Isolation

Features can also be emphasized by spatially isolating them (figure 4.7). Given feature and focal point centers fc and pc, the signed distance between them is d^. Features which are outside the radius r of the focal point are translated away from the focal point by an amount t. The falloff of their translation with distance is controlled by an exponent term n. The translation is scaled by both a constant k1 and the normalized emphasis-by-isolation gene EI:

d = (fc − pc) (4.23)

4.4.3 Placement

The values of properties of other features can also be made to converge to the value of focal features (figure 4.8). In this case, the emphasized feature becomes an

^In higher dimensions, the Euclidean distance between the feature center and the focal point would be passed to linpulse to determine whether the feature is inside the focal point, instead of the single feature center coordinate that is used here in the one-dimensional domain.

^In higher dimensions, the length and direction of the vector v from the focal point to the feature center would be substituted to move each feature: t =

Figure 4.9: Increasing the value of the emphasis-by-combination gene

inflection point in the function of the features’ parameter values:

d = |fc − pc| (4.25)

dmax = max((1 − pc), pc) (4.26)

t = symBias(b, d/dmax) (4.27)

s = 1.0 + Ep (smin + t (smax − smin) − 1) (4.28)

The distance d from each feature’s center to the focal point and the maximum distance dmax are used to compute a value for t. In this example, t then determines the amount to scale the height of each feature based on its distance from the focal point. While the 1D example shown here illustrates a converging amplitude, in more complex domains this technique can be used to control arbitrary properties such as color or feature rotation.

4.4.4 Combining Techniques

A single gene can be used to increase each of the previously discussed emphasis methods, using a common focal point to emphasize a specific location (figure 4.9.) To maintain design space continuity, the focal points of each method, p_cj, can be rapidly converged to create one common focal point:

p_a = (1/3) Σ_{j=1..3} p_cj   (4.29)

p_cj = p_cj - symBias(0.75, E)(p_cj - p_a)   (4.30)

E_j = E_j + kE(1 - E_j)   (4.31)

First, the average focal point p_a is found from the focal points of the three different emphasis methods. Each method's focal point p_cj is then moved towards p_a by an amount proportional to the value of the emphasis gene E. Bias is used to make the focal points merge rapidly. Finally, the amplitude E_j of each of the different emphasis techniques is increased, again proportionally to E, the combined-emphasis gene's value.
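The convergence step (equations 4.29-4.31) can be sketched as follows. symBias is passed in as a parameter since its definition is not part of this excerpt, and reading equation 4.30 as a move toward the average is a reconstruction:

```python
def converge_focal_points(focal_points, E, sym_bias):
    # eq. 4.29: average of the emphasis methods' focal points
    p_a = sum(focal_points) / len(focal_points)
    w = sym_bias(0.75, E)  # bias makes the points merge rapidly
    # eq. 4.30 (reconstructed): each focal point moves toward the average
    return [p - w * (p - p_a) for p in focal_points]

def boost_emphasis(E_j, E, k=0.5):
    # eq. 4.31: raise each method's amplitude in proportion to E
    return E_j + k * E * (1.0 - E_j)
```

With full emphasis (E = 1) and an identity stand-in for symBias, all three focal points collapse onto their average.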

4.5 Balance

Balance is a term used to describe the "distribution of visual weight". Two of the primary types of balance are symmetrical and asymmetrical balance [89].

Balance often results in an equalization of emphasis between different visual elements and their properties, such as color or size. Differences in separate properties can be balanced, such as a balance between small, complex objects and large, simple shapes. Other examples might include tiny shapes with a large amount of detail balanced with large objects with little detail, or a large centered object balanced with a smaller object that is further from the center. Balance is often dependent on the concept of "equal eye attraction" [89].

Implementing this procedurally requires subjective quantification of visual properties. The techniques shown here are not intended to be the one "correct" way of quantifying these designed traits, but rather to show how such quantification can be implemented and tuned as desired by the creator of the design solution space.

Others include radial and crystallographic balance.

Figure 4.10: Feature Symmetry: The center feature of the five shown is the "original". As the value of the gene controlling feature symmetry decreases (as is shown in the left two features) the right side of the feature becomes increasingly like the left side of the feature. As the symmetry gene's value increases (as in the right two features) the feature's left side becomes more like its right.

While asymmetric balance in particular is a difficult property to ensure parametrically, there are several procedural techniques that can be used to increase the likelihood of balance emerging.

4.5.1 Symmetric Balance

Symmetric balance will be considered at two levels: the feature level, and the global level.

Feature Symmetry

As mentioned above, the use of symmetric attributes is one of the simpler and more common means of attaining balance. Symmetry requires the reflection of values about an axis. Meeting the requirement for continuity of change through parametric space requires interpolation between reflected and non-reflected values.

For many feature parameters it is extremely unlikely that left-right symmetry will emerge spontaneously given the interaction limitations of IED. The existence of a symmetry gene allows the solution space designer to directly manipulate the odds of symmetry emerging.

A symmetry gene can be implemented for each pair of parameters with a left-right relationship to gradually force symmetry by allowing the value of either the left or the right parameter to override the value of the other (figure 4.10.) The symmetry gene is implemented such that its left sub-range forces a left override, its right sub-range forces a right override, and middle values enable asymmetric results:

s_r = 1 - s   (4.32)

L = 1 if s < 0.5, 0 otherwise   (4.33)

t = 1 - L + B_f(2L - 1)   (4.34)

a = linstep(0.5, 1, t)   (4.35)

v = lerp(v, v_r, a)   (4.36)

The final value v at a sample s for a given feature can be calculated by interpolating between the non-symmetric value at s and the value v_r found at the reflected sample location s_r. The degree of interpolation a depends on the value of the gene controlling balance-by-feature-symmetry, B_f, and on the side of the feature containing the sample being considered. Lower values of B_f cause the right side of the feature to become more like the left. Higher values of B_f cause the left to become more like the right. Independent of the side, the activation parameter t determines the amplitude of the dissolve. The parameter L specifies whether the sample is on the left side of the feature or not (having the value one if so, zero if not.)

In higher dimensions, genes to control top-bottom and front-back symmetry can also be created.

Figure 4.11: Global Symmetry: The center image is the original. As the value of the gene controlling global symmetry decreases, the right side becomes increasingly like the left. As the gene's value increases, the design's left side becomes more like its right side (e.g., the bottom two features.)
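A minimal sketch of the feature-symmetry dissolve (equations 4.32-4.36), assuming linstep behaves as described for equation 4.7; value_at is a hypothetical function that samples the feature's value at a normalized coordinate:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def linstep(lo, hi, x):
    # eq. 4.7 behavior: 0 below lo, 1 above hi, linear ramp in between
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def symmetric_value(value_at, s, B_f):
    s_r = 1.0 - s                         # eq. 4.32: reflected sample
    L = 1.0 if s < 0.5 else 0.0           # eq. 4.33: left-side flag
    t = 1.0 - L + B_f * (2.0 * L - 1.0)   # eq. 4.34: activation
    a = linstep(0.5, 1.0, t)              # eq. 4.35: dissolve amount
    return lerp(value_at(s), value_at(s_r), a)  # eq. 4.36
```

With B_f = 1 a left-side sample takes on the value of its right-side reflection; with B_f = 0.5 both sides keep their asymmetric values.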

Global Symmetry

Global symmetry is implemented in a similar fashion to feature symmetry. As the value of the gene controlling global symmetry decreases, the visual field's right side becomes increasingly like the left (e.g., the top two features in figure 4.11.) As the gene's value increases, the design's left side becomes more like its right side (e.g., the bottom two features in figure 4.11.) As with previous properties, the system can be implemented such that either all, or a subset, of the features' properties can be made symmetric. The exact same equations that are used for feature symmetry above can be used globally, with the only change being that the coordinate s is now a global coordinate, rather than a cell coordinate.

4.5.2 Asymmetric Balance

A more interesting but also more challenging means of attaining visual balance is through the use of asymmetric balance. This requires that degrees of emphasis be balanced. For example, if there is a large number of small features on one side of a visual field, this could be balanced by a few large features on the other. Examples of differing emphasis factors may include size balanced with detail frequency, color with position, quantity balanced with size, etc.

Size vs. Quantity

Figure 4.12: Size vs. quantity asymmetric balance: From left to right, top to bottom, the four designs show the maintenance of visual balance by adjusting the balance between relative size and the number of features. More, smaller features on the right balance fewer, larger features on the left.

Numerous combinations of asymmetric relationships can be implemented. Example implementations of three are shown in figures 4.12 through 4.14. The first is a relationship between the quantity and size of features (figure 4.12), which can be implemented using the following equations:

a = |2B - 1|   (4.37)

f_1 = N_f + a(N_max - N_f)   (4.38)

f_2 = N_f - a(N_f - 1)   (4.39)

N_f = f_1   if (L > 0.5 and B < 0.5) or (L < 0.5 and B > 0.5)
      f_2   otherwise   (4.40)

h_n = h_o(f_2/f_1)   if (L > 0.5 and B < 0.5) or (L <= 0.5 and B > 0.5)
      h_o - a(h_o - 1)   otherwise   (4.41)

r_f = r_o(h_n N_f)/(h_o N_o)   (4.42)

B is the normalized asymmetric-balance-by-size-vs-number gene. The equation calculating a remaps B = [0, 0.5] to [1, 0] and B = [0.5, 1] to [0, 1]. The variables f_1 and f_2 indicate the desired frequency (i.e., number of features) on the high-frequency and low-frequency sides, respectively. N_f then is the number of features on the side of the sample currently being considered. L again specifies whether the sample in question is on the left or right side, h_o and h_n are the original and new feature heights, and r_f is the new feature radius.

Figure 4.13: Size vs. complexity asymmetric balance: From left to right, top to bottom, the four designs show the maintenance of visual balance by adjusting the balance between relative size and feature complexity. The large simple features on the left balance the small complex features on the right.

The new feature height is calculated based on the change in the number of features on the side with many small features, while the height becomes normalized to the value 1.0 on the side with a few big features. The radius is adjusted to maintain the relative height/width proportion of the feature.
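The size-vs-quantity balance described above (equations 4.37-4.42) can be sketched as follows. Note that the maximum feature count N_max and the exact forms of equations 4.38 and 4.42 are reconstructions from the surrounding prose, so this is an illustrative sketch rather than the dissertation's exact implementation:

```python
def size_vs_quantity(B, L, N_f, h_o, r_o, N_max=8):
    a = abs(2.0 * B - 1.0)           # eq. 4.37: gene's distance from 0.5
    f1 = N_f + a * (N_max - N_f)     # eq. 4.38: many-feature side (N_max assumed)
    f2 = N_f - a * (N_f - 1)         # eq. 4.39: few-feature side
    many = (L > 0.5 and B < 0.5) or (L < 0.5 and B > 0.5)
    if many:                         # eqs. 4.40-4.41: pick count and height
        N_new, h_n = f1, h_o * (f2 / f1)
    else:
        N_new, h_n = f2, h_o - a * (h_o - 1.0)
    # eq. 4.42 (reconstructed): keep height/width proportion as cells narrow
    r_f = r_o * (h_n * N_new) / (h_o * N_f)
    return N_new, h_n, r_f
```

At B = 0.5 the gene is neutral, so both sides keep their original count, height, and radius.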

Size vs. Complexity

A second example of asymmetric balance is illustrated with size and complexity in figure 4.13. For features that are on the side being made smaller and more complex, the following equations can be used:

d_i = a(M_i - v_i)   (4.43)

d_h = a(h_o - m)   (4.44)

r_f = r_f h_n/h_o   (4.45)

The amount to increase the amplitude and frequency of noise on the small, complex side is given by d_i, which is the difference between the original parameter value v_i and the maximum value M_i, scaled by a (given by equation 4.37, only now using the asymmetric-balance-by-size-vs-complexity gene.) The amount to decrease size on the small, complex side is given by d_h, the difference between the original height h_o and the minimum height value m, again scaled by a.

For features that are on the side being made larger and simpler, the following can be used:

d_i = -v_i a   (4.46)

d_h = a(1 - h_o)   (4.47)

d_f = a(1 - f_i)   (4.48)

Noise and bombing amplitudes are moved to zero to simplify the features. Feature height is pushed toward a normalized "large" size of one. Noise frequencies f_i are reduced to a low value of one so that any remaining noise only provides a few inflection points.

Quantity vs. Complexity

Once the equations to increase or decrease feature properties such as quantity, complexity, or size have been implemented for either side of an asymmetric balance relationship, they can be recombined with other properties. For example, equations 4.37-4.40 controlling feature quantity can be combined with equations 4.43 and 4.46, which control complexity, to form the quantity vs. complexity balance in figure 4.14. On one side the number of features is increased and the features are made simpler. On the other side, the number of features is decreased but the features that remain are made more complex.

Figure 4.14: Quantity vs. complexity asymmetric balance: From left to right, top to bottom, the four designs show the maintenance of visual balance by adjusting the balance between the number of features and the feature complexity. A greater number of simple features on the left balances fewer complex features on the right.

4.6 Rhythm

Repetition of properties can greatly affect perception of a given design. While low-frequency rhythms can yield a relaxing or calming effect, high-frequency patterns can seem jumpy, exciting, or lively. Alternating rhythms (e.g., ABABAB...) add to a sense of regularity, while progressive rhythms which change in a regular manner (e.g., ABCBABCBA...) can convey a sense of shrinking and growing [89].

The creation of rhythmic patterns can be implemented by adjusting property values based on wave patterns that can be shaped parametrically. For example, a sawtooth/triangle wave with a “shift” parameter q which varies the position of the peak within one cycle is defined by the following:

saw(q, x) = x'/q   if x' < q
            (1 - x')/(1 - q)   otherwise,   where x' = x mod 1   (4.49)

Figure 4.15: Rhythm can be created from patterns of trait changes. The top image has a sawtooth pattern applied to the height parameter, while the bottom one has a sinusoidal pattern modifying the features' heights. The original features are in the middle for comparison.

Within one cycle, the function interpolates from zero to one from the beginning of the cycle, to q. It then decreases from one back to zero from q to the end of the cycle.

A single gene g is used to determine the shape of the wave. As the value of g increases, the following function smoothly interpolates from a left sawtooth wave to a triangle wave for 0 < g < 0.25, then from a triangle to a sine wave and back to a triangle wave for 0.25 < g < 0.75, and finally from a triangle wave to a right sawtooth wave for 0.75 < g < 1:

w = saw(g/0.5, fx)   if g < 0.25
    lerp(saw(0.5, fx), sin(2π fx), 1 - 4|g - 0.5|)   if g < 0.75
    saw(2(g - 0.75) + 0.5, fx)   otherwise   (4.50)
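A sketch of the shaped wave described above; the saw definition follows the prose description of its rise and fall, and the blend weight 1 - 4|g - 0.5| in the middle branch is reconstructed so that the sine contribution peaks at g = 0.5:

```python
import math

def saw(q, x):
    # rises 0 -> 1 over [0, q], falls 1 -> 0 over [q, 1] within each cycle
    x = x % 1.0
    return x / q if x < q else (1.0 - x) / (1.0 - q)

def lerp(a, b, t):
    return a + (b - a) * t

def wave(g, f, x):
    # eq. 4.50: left sawtooth -> triangle -> sine -> triangle -> right sawtooth
    if g < 0.25:
        return saw(g / 0.5, f * x)
    elif g < 0.75:
        return lerp(saw(0.5, f * x), math.sin(2.0 * math.pi * f * x),
                    1.0 - 4.0 * abs(g - 0.5))
    else:
        return saw(2.0 * (g - 0.75) + 0.5, f * x)

def height_scale(R_g, w):
    # eq. 4.51: the rhythm activation gene R_g scales the wave's influence
    return 1.0 + R_g * w
```

At g = 0.5 the wave is a pure sine; at the extremes of g it degenerates into left- and right-leaning sawtooth waves.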


Figure 4.16: The shaping gene is changed from organic to rectilinear (left to right, top to bottom).

This wave can then be used to modify the values of any subset of visual attributes.

Scaling the height of the features is shown in figure 4.15, using the following equation to compute s_y (the amount to scale the height) from the above wave function w, adjusted by the value of a rhythm activation gene R_g:

s_y = 1.0 + R_g w   (4.51)

4.7 Shape

The shape of a designed object is often described using terms such as natural, idealized, or abstract [89]. Most of these are qualities that involve the modification of a "correct" or "realistic" representation of an object. However, it is not possible to implement parametric controls to influence the relative "realism" of a design without an extensive knowledge base of the properties of the object being represented.

There are some shape description terms, however, that rely entirely on formal properties. For example, "rectilinear" shapes with hard, straight edges and angles can be compared in their visual effect to "organic" curvilinear forms. While the former connote artificial or manufactured forms, the latter seem more natural, organic, or biomorphic [89].

The rectilinearity of edges and forms is determined by the continuity of functions. Continuity of the change in value determines the relative organic, natural quality of a line. As value changes are made to change gradually, rather than suddenly (or not at all), forms become more organic and less artificial.

To make features appear more organic in the one-dimensional example domain, they are scaled to the full width of a cell and their top plateau is removed. The features' peaks are recentered so as to not make any sharp corners in a sudden peak-to-base transition. The tangents of the features' side splines are pushed to zero, yielding smooth continuity at both the peak and base. While the amplitude of noise is turned up, the noise frequency is turned down, yielding large smooth curves (figure 4.16.)

To make features more rectilinear, the plateau value is pushed to the width of the feature, giving the feature a square top. The amplitude of noise is reduced to zero, yielding blocky features of varying height and width. Two and three dimensional features can be made more organic or rectilinear by biasing their base shape either toward circles and spheres or toward squares and cubes.

In general, any desired visual property that can be defined in terms of sub-ranges of existing parameters (e.g., pointy, flat, bulbous, holey, jagged, fractal) can be produced by identifying the region of the solution space having that property and biasing properties toward it as the value of the gene controlling the presence of the property increases.

4.8 Color

Use of color in images and on object surfaces has an extremely significant impact on design qualities such as emphasis and balance. Color representation is a very complex subject which will only be touched upon from an introductory design perspective. However, the framework presented in this section is sufficiently general that a much more complicated perception-based representation could be integrated as desired.

4.8.1 Color Representation

In computer graphics, color is most commonly represented as either red, green, and blue channels (RGB color space) or as hue, saturation, and value channels (HSV color space). The former is usually used for internal representations of pixel values, while the latter is more often used as an intuitive way for artists and designers to think about color in interactive interfaces.

The translation from evolved values to colors has been referred to as the most important control technique in the interactive evolutionary design of images [62].

Musgrave also discusses the problem of mapping evolved values to color [107]. There are two primary approaches to mapping evolved values to color. In whichever space is chosen, the first makes each separate color channel a function of independent gene values. The second uses a function of gene values to index into a color lookup table.

The former method generates results that are like looking through uncorrelated filter layers. Using independent HSV channels often leads to "rainbows" as well, as the hue interpolates through a consistent ordering of colors. Using color tables is more in line with the color palettes that artists and designers create and use, but procedurally creating good color tables ends up being the principal difficulty with this method [64] [107]. The following sections will address these issues.

Saturation is also sometimes referred to as intensity or chroma.

4.8.2 Palettes

To create a color palette, N_c random colors are first generated using uniformly distributed noise for hue, saturation, and value, with genes providing noise offsets for each. Saturation and value are biased upwards to reduce the frequency of dark and muddy palettes. As the gene values change through the design space, each individual color changes gradually and independently. This allows color palettes to be evolved. The number of colors in the palette can be fixed or controlled by a gene as well.

Once a random palette has been generated, other genes can modify the palette in high-level ways. This makes the system more likely to produce palettes containing common color relationships (e.g., a palette containing complementary hues) that would be much less likely to evolve at interactive speeds in an unbiased design system.

The following sections will describe the example palette manipulation genes that have been implemented.

4.8.3 Value

The relative range of light and dark in an image can have a profound impact on the effect of a given design. If an image is mostly dark, it is referred to as low key, whereas a primarily light image can be called high key. Also, if the range of values used is relatively narrow (be it high or low key), a subdued, calming effect can be achieved. A wide contrast in values is considered more dynamic and exciting [89].

Figure 4.17: Value Design Genes: The top row shows the value key gene being changed. An unadjusted random palette is in the center. The colors become darker or lighter as the gene's value decreases or increases, respectively. The bottom row shows the value contrast gene being changed. As the contrast gene decreases, the value range shrinks toward an average value. As the contrast gene increases, the brighter colors become much brighter, and the darker colors become much darker. The value of each gene increases from left to right in both rows.

Figure 4.18: Saturation Gene: The random palette in the middle becomes less saturated in the palettes to its left, in which the value of the gene has been decreased. As the saturation gene's value is increased, the palettes to the right show the colors increasing in intensity.

The overall values in the palette can be biased towards a high or low key based on a gene g_k, or towards a narrow or wide range based on a gene g_r, as follows:

V_r = symGain(g_r, V_o)   if g_r > 0.5
      2g_r(symGain(g_r, V_o) - 0.5) + 0.5   otherwise   (4.52)

V = lerp(clamp(2(g_k - 0.5), 0, 1), clamp(2g_k, 0, 1), V_r)   (4.53)

The first equation increases or reduces contrast by either pushing the values away from the center, or by remapping them towards the center. The second equation remaps the values into a narrower high or low range (figure 4.17.)
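A sketch of the value key and contrast genes (equations 4.52-4.53), with Perlin's gain curve standing in for symGain, which is not defined in this excerpt:

```python
import math

def bias(b, t):
    # Perlin's bias curve; b = 0.5 is the identity
    return t ** (math.log(b) / math.log(0.5))

def gain(g, t):
    # Perlin's gain: g = 0.5 is identity; g > 0.5 pushes values away from 0.5
    if t < 0.5:
        return bias(1.0 - g, 2.0 * t) / 2.0
    return 1.0 - bias(1.0 - g, 2.0 - 2.0 * t) / 2.0

def lerp(a, b, t):
    return a + (b - a) * t

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def adjust_value(V_o, g_k, g_r):
    # eq. 4.52: widen (g_r > 0.5) or narrow (g_r < 0.5) the value range
    if g_r > 0.5:
        V_r = gain(g_r, V_o)
    else:
        V_r = 2.0 * g_r * (gain(g_r, V_o) - 0.5) + 0.5
    # eq. 4.53: remap toward a dark (low g_k) or light (high g_k) key
    return lerp(clamp(2.0 * (g_k - 0.5), 0.0, 1.0),
                clamp(2.0 * g_k, 0.0, 1.0), V_r)
```

Setting both genes to 0.5 leaves the original value untouched; pushing g_k to zero drives the whole palette toward black.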

4.8.4 Saturation

Raising or lowering the overall saturation of a palette can make it more vibrant or more subdued [89] (figure 4.18.) The saturation of the colors in the palette can be biased up or down based on a gene g as follows:

saturation(s) = lerp(0, 2g, s)   if g < 0.5
                lerp(2(g - 0.5), 1, s)   otherwise   (4.54)
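In Python, the saturation adjustment above is a simple pair of interpolations; g = 0.5 leaves the saturation unchanged:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def adjust_saturation(s, g):
    # eq. 4.54: g -> 0 desaturates the palette, g -> 1 pushes toward full
    if g < 0.5:
        return lerp(0.0, 2.0 * g, s)
    return lerp(2.0 * (g - 0.5), 1.0, s)
```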

Figure 4.19: Hue Remapping: A linear spline is used to remap hue so that complementary colors are opposite on the color wheel. In the image on the left, red is opposite from cyan, green is opposite from violet, and yellow is opposite from blue. After remapping, in the right image, red is opposite from green, blue is opposite from orange, and yellow is opposite from violet.

4.8.5 Hue

Hue Remapping

It is rare that an artist or designer will form a palette from a totally random collection of hues without considering (often intuitively) their relationships. There are a number of color schemes that are often employed which rely on the relative positions of colors on a standard artist's color wheel, in which complementary colors are located opposite one another. This property is not present when hue from the HSV color space is mapped to a color wheel. An adequate yet simple solution is to use a periodic spline to remap the hues to appropriate values so that, for example, red is located at a value of zero, and green (red's complement) is at 0.5.

On a complementary color wheel, if red is considered to be at zero and green is at 180 degrees, then orange-yellow would be at ninety degrees, and violet-blue would be at 270 degrees. Figure 4.19 shows the results of a simple remapping of the hues to this "complementary" color space using the empirically produced linear spline from the following equations:

hue = linSpline(comp, -0.025, 0.125, 0.3, 0.75, 0.975)   (4.55)

comp = linSpline(hue, 0.042, 0.429, 0.611, 0.75, 1.0417)   (4.56)

The first set of spline keys was determined through quick trial and error to sufficiently satisfy the complementary relationship. The second set of keys, which inverts the previous function, was found by computing the inverse values at each quarter position around the color wheel. More perceptually accurate values and better interpolations could certainly be used to refine this mapping, but again, this will suffice to illustrate the following palette relationships, and as a basis for interactive evolutionary design.

Warm and Cool Colors

Colors are often classified as being warm or cool, although the terms are usually used relatively (i.e., whether one color is "warmer" or "cooler" than another.) Generally speaking, the closer a color is to red-orange, the warmer it is, while a color that is closer to blue-green is described as being cooler. The relative warmth of a color strongly affects perceived emphasis and depth. Warmer colors tend to attract attention and advance, while cooler colors tend to recede into the background [89].

The hues h in the palette can be made warmer or cooler based on the value of a gene g using the following equations:

warmCool(h) = lerp(0, g, 2h)   if g < 0.5 and h < 0.5
              lerp(1 - g, 1, 2(h - 0.5))   if g < 0.5 and h > 0.5
              lerp(g - 0.5, 1.5 - g, h)   if g > 0.5   (4.57)

h_r = (0.1 + warmCool((h - 0.1) mod 1)) mod 1   (4.58)

A red-orange at 0.1 was selected as the warmest color and blue-green at 0.6 was chosen as the coolest. These can be adjusted by the creator of the design space as desired. The above equations, when applied to each color in the palette, will push them toward the cool or warm region of the color wheel, depending on the value of g and the relative position of the hue of each color being modified (figure 4.20.)

Figure 4.20: Warm/Cool Gene: The random palette in the middle image becomes warmer in the images to the left as the value of the warm-cool gene is decreased. Each color in the palette gradually becomes more red-orange. As the gene's value is increased in the images to the right, the palettes become cooler, and thus more blue-green.

Color Schemes

There are several different color schemes, or harmonies, commonly used for different design effects (figure 4.21.) A monochromatic scheme uses a single hue and is considered "quiet" and "restful". An analogous scheme uses neighboring colors on the color wheel; it produces a more "harmonious" design. Complementary schemes employ opposite hues on the color wheel and are "vibrant" and "lively". Triadic schemes use three equally spaced hues on the wheel, and are also considered "vibrant" and "lively" [89].

Figure 4.21: Color Schemes: Five color schemes are shown. From top to bottom, they are: monochrome, analogous, discord, complementary, and triadic. The activation gene's value increases from left to right for each scheme.

Color discord is created when there is a relatively wide space on the color wheel between dominant colors, but not a wide enough space for the colors to be complementary. This produces a clashing result, conveying "anti-harmony". This discord is nullified if there is a large value change as well, instead of just a change in hue [89]. While color discord is not technically a "color harmony", it is included in the palette scheme section as an additional method of palette manipulation.

Genes have been created so that the color palette can be biased with some activation level towards one of these schemes. One gene controls the scheme that is selected. The schemes are ordered by similarity (e.g., monochrome is similar to analogous) so that the mapping from the gene value to the schemes can be done smoothly. Intermediate stages between schemes are computed by calculating the effect of adjacent schemes and interpolating the results. The smoothInts function (equation 4.1) is employed to bias the color scheme selection gene towards individual schemes, rather than to intermediate values. As the value of an activation gene increases, the color palette increasingly takes on the properties of the scheme indicated by the scheme selection gene.

In the representation implemented here, the schemes progress from monochrome to analogous, then to discord, complementary, and finally triadic. An additional gene specifies a single hue upon which each scheme is based. For example, if this color is green (0.5 on the remapped complementary color wheel), as in figure 4.21, then in a complementary scheme the other dominant color is red. The analogous scheme allows neighboring yellow-green and blue-green hues into the palette. The triadic scheme pushes each palette color to the closest of green, violet, or orange (violet and orange are each 120 degrees from the base color green on the color wheel.) The discord scheme biases the palette to colors that are roughly 50 degrees to either side of green: a greenish yellow and a greenish blue. Value and saturation are also increased to raise the degree of color discord.

The base color is first remapped to be at 0.5. Each color h in the palette is then repositioned to maintain its relative distance from the base color. This makes it so that the color complementary to the base color is always found at zero (and thus also one.) The two triadic colors are then reliably at 0.167 and 0.834 (120 degrees around the color wheel from the base color at 0.5).

The remapping of the palette colors for the monochromatic and analogous color schemes is implemented using the scheme activation gene g_a as follows:

monochromatic(h) = lerp(0.5g_a, 1 - 0.5g_a, h)   (4.59)

analogous(h) = lerp(0.375g_a, 1 - 0.375g_a, h)   (4.60)
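Equations 4.59 and 4.60 in Python; with the scheme fully activated, monochromatic collapses every hue onto the remapped base color at 0.5, while analogous only squeezes hues into its neighborhood:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def monochromatic(h, g_a):
    # eq. 4.59: all hues collapse toward the remapped base color at 0.5
    return lerp(0.5 * g_a, 1.0 - 0.5 * g_a, h)

def analogous(h, g_a):
    # eq. 4.60: hues squeeze into the neighborhood of the base color
    return lerp(0.375 * g_a, 1.0 - 0.375 * g_a, h)
```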

In the monochromatic case, the values of all colors are pushed towards the “base” color which has been remapped to 0.5. The top row of figure 4.21 illustrates this. As mentioned above, the base color is green, thus the colors in the random palette in the left column become progressively greener from left to right, until all of the colors in the right column are green, with different values and intensities.

The transition from a monochromatic to an analogous color scheme only requires a widening of the palette hue range. For the analogous color scheme (shown in the second row of figure 4.21), the colors are pushed into the range of colors neighboring the base color. Since the base color is green, the colors in the random palette on the left gradually remap themselves into the blue-green to green-yellow range surrounding green. As the activation gene's value increases, orange-red shades become green-yellow, violet-red shades become green-blue, and yellow and blue become green.

As mentioned above, color discord is accomplished by using colors that are widely spaced but not complementary. An angle of slightly more than ninety degrees on the color wheel was subjectively chosen, but as with most of these judgments, the tastes of the designer of the parametric space can be used to adjust this accordingly:

discord(h) = lerp(0.361g_a, 0.5 - 0.139g_a, 2h)   if h < 0.5
             lerp(0.5 + 0.139g_a, 1 - 0.361g_a, 2(h - 0.5))   otherwise   (4.61)

All colors in the palette are pushed toward two colors on either side of the base color that are spaced far enough apart to not qualify as analogous, but not quite far enough to appear complementary. In figure 4.21, color discord is shown in the middle row. The same random colors on the left are pushed toward the closer of the clashing cyan and yellow-green hues (both fully saturated) as the images progress toward the right. Transitioning smoothly from analogous to discord schemes gradually pushes the colors near the base out toward the discord colors.

In the fourth row in figure 4.21, a complementary palette is formed by pushing any colors in the same half of the color wheel as the base color towards the base (e.g., yellow and blue get pushed towards a base color of green.) Colors on the opposite side of the wheel get pushed to the base's complement, which is the color on the opposite side of the wheel (e.g., for a base color of green, violet and orange are both pushed toward red):

c_1 = lerp(0.5g_a, 1 - 0.5g_a, 2(h - 0.5) + 0.5)   (4.62)

c_2 = lerp(0, 0.25 - 0.25g_a, 4h)   (4.63)

c_3 = lerp(0.75 + 0.25g_a, 1, 4(h - 0.75))   (4.64)

complementary(h) = c_1   if |h - 0.5| < 0.25
                   c_2   if h < 0.25
                   c_3   if h > 0.75   (4.65)

Finally, for a triadic color scheme, colors are pushed towards the closest of either

the base color, or either of the two colors 120 degrees from the base color:

U = lerp(0.333 + 0.167g., 0.667 - 0.1675a, 3(/i - 0.333)) (4.66)

U = lerp(0.1675a, 0.333 - 0.1675a, Zh) (4.67)

h = lerp(0.667 + 0.1675a, 1 - 0.1675a, 3(/i - 0.667)) (4.68) triadic(h) = < ' fi if |h - 0.5| < 0.167 to if \h - 0.1671 < 0.167 (4.69) ^ (3 if |h - 0.8331 < 0.167 So in the bottom row of figure 4.21 each of the random colors on the left is gradually pushed toward the closest on the complementary color wheel of green, red- orange, or magenta.
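The triadic remapping (equations 4.66-4.69) can be sketched in Python. Because the published constants are rounded to three decimals, the function is only approximately the identity when the activation gene is zero:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def triadic(h, g_a):
    # eqs. 4.66-4.69: push each hue toward the nearest of the base color
    # (0.5) or the two hues 120 degrees away (0.167 and 0.833)
    if abs(h - 0.5) < 0.167:
        return lerp(0.333 + 0.167 * g_a, 0.667 - 0.167 * g_a,
                    3.0 * (h - 0.333))
    if abs(h - 0.167) < 0.167:
        return lerp(0.167 * g_a, 0.333 - 0.167 * g_a, 3.0 * h)
    return lerp(0.667 + 0.167 * g_a, 1.0 - 0.167 * g_a, 3.0 * (h - 0.667))
```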

4.9 Trait Palettes

Developing continuous pattern functions for evolutionary design is an interactive process of implementing new visual traits, followed by empirically tuning the parameters (primarily their ranges and value distributions) to balance the probability of the traits' appearance. When another visual trait is added to the representation, its presence often necessitates a readjustment of prior traits' parameters to rebalance the relative probability of the traits' appearance.

This is similar to what is known as muddiness in poor painting: if too many paints are mixed together, everything tends to look "muddy". Likewise, if too many of the techniques presented here for modifying shapes, colors, or features are used in the majority of the designs generated, everything will have the same "muddy" collection of attributes, ultimately making most designs produced look similar in their dominance by noise.

What is needed for each design is a trait palette which determines which traits are to be enabled and to what degree. We want this palette to change continuously through the design space (i.e., no sudden discontinuities in a phenotype's visual attributes for small movements in design space.) This can be implemented by making sure each trait has a valid amplitude parameter (or activation gene), and that each trait is biased towards nonexistence with a probability proportional to the number of potential traits and the visual impact of the trait.

For example, in a given design space, perhaps it is determined that the frequent presence of "holes" is one of the primary visual traits causing signature. To reduce signature, the likelihood of the presence of holes should be adjusted to make them possible (so they can be selected) but not probable. This will make selecting and evolving objects without holes also an option.

The chance of a trait being activated should be inversely proportional to its visual effect. For example, when the unity-by-proximity property described previously is used to pull features together into a tight grouping, it creates an extremely strong visual property which totally dominates many more subtle visual attributes. Therefore the chances of this property being activated should be correspondingly reduced.

This sort of problem, involving small adjustments to a large number of interrelated parameters to find optimal values, is itself the sort of task that genetic algorithms are often used to solve. Finding an approach for interactively evaluating populations of populations is an interesting problem.

The simplest implementation of trait palettes uses one activation gene for each trait with a single global threshold value for all traits. Those traits whose activation genes' values drift above the threshold are activated with a strength proportional to the amount the threshold is exceeded. A better but more challenging solution is to weight all trait thresholds on a trait-by-trait basis, according to their visual impact and relationship to other traits. The method used here is between these two approaches. Groups of related traits (e.g., feature shape attribute bombing, or global noise attributes) are all given a uniform threshold that can be easily adjusted while designing the parametric space. The activation thresholds of very visually strong traits (e.g., color scheme activation) are then adjusted individually.

Activation can be implemented by passing the normalized gene's value to the linstep function (equation 4.7). The linstep function can be used to return zero below the threshold value, and also to remap the gene's [0, 1] domain to the remaining range above the activation threshold.
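This activation scheme can be sketched as follows. The Python below is only an illustration (the dissertation's shaders are written for RenderMan), and the function names are stand-ins for the linstep of equation 4.7:

```python
def linstep(lo, hi, x):
    """Ramp linearly from 0 at lo to 1 at hi, clamped outside that range."""
    if hi == lo:
        return 0.0 if x < lo else 1.0
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def activate(gene, threshold):
    """Return zero below the threshold; remap [threshold, 1] to [0, 1] above it."""
    return linstep(threshold, 1.0, gene)
```

With a threshold of 0.5, for example, a gene value of 0.75 yields an activation strength of 0.5, while any gene value at or below the threshold leaves the trait disabled entirely.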

However, some activation genes, instead of increasing from zero to one, require directionality. There is no activation if the gene has a value near 0.5, a "left" activation as the gene's value goes toward zero, and a "right" activation as the gene's value increases to one. Examples of this include the genes for shifting the peak of a one-dimensional feature left or right, or the symmetry genes which force the left side to become more similar to the right or vice versa. To implement this directional activation, the linpulse function (equation 4.18) can be used along with the activation gene g, and low and high thresholds L and H (both in [0, 0.5]), as follows:

v = 1 - linpulse(0.5 - L, 0.5 + L, H - L, g)    (4.70)

dualActivate(g, L, H) = v · sign(g - 0.5)    (4.71)

This will return zero if g is in [0.5 - L, 0.5 + L], negative one if g is in [0, 0.5 - H], and positive one for g in [0.5 + H, 1]. It will also smoothly interpolate the activation level between these amounts.
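Equations 4.70 and 4.71 can be sketched in Python as follows. The exact form of linpulse (equation 4.18) is not reproduced in this section, so the version below is an assumption: a function that is one inside [lo, hi] and falls linearly to zero over a band of width fuzz on either side.

```python
def linstep(lo, hi, x):
    """Ramp linearly from 0 at lo to 1 at hi, clamped outside that range."""
    if hi == lo:
        return 0.0 if x < lo else 1.0
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

def linpulse(lo, hi, fuzz, x):
    """Assumed form of equation 4.18: 1 inside [lo, hi], falling
    linearly to 0 over a band of width fuzz on either side."""
    return linstep(lo - fuzz, lo, x) * (1.0 - linstep(hi, hi + fuzz, x))

def sign(x):
    """-1, 0, or +1 according to the sign of x."""
    return (x > 0) - (x < 0)

def dual_activate(g, L, H):
    """Equations 4.70-4.71: zero activation for g near 0.5, ramping to
    -1 ('left') as g approaches 0 and +1 ('right') as g approaches 1."""
    v = 1.0 - linpulse(0.5 - L, 0.5 + L, H - L, g)
    return v * sign(g - 0.5)
```

With L = 0.1 and H = 0.3, a gene at 0.5 produces no activation, a gene at 0 or 1 produces full left or right activation, and values in between are interpolated linearly.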

4.10 Summary

Formal design property representations are important tools for biasing design solution spaces towards producing qualities that would commonly be developed by human designers but which would be unlikely to be found within the practical constraints of an interactive evolutionary design system. Implementing continuous activation genes for these properties is critical for causing small changes in the genotype to result in proportionally modest changes in generated designs.

Given the ability to create spaces from feature patterns that now implement visual design properties, the next chapter shows how patterns in different layers can be practically correlated to create relationships between composited feature properties.

The resultant complex design spaces are then demonstrated in a number of image and shader domains in chapter six.

CHAPTER 5

LAYER SYNCHRONIZATION

As has been illustrated in the previous chapters, constructing a parametric description of a visual design implies the building of a solution space to represent visual attributes. In order to increase the complexity of the designs that can be evolved, layers of feature patterns can be combined (or composited) in different ways. As the features from one layer interact with those in other layers, correlations can form which provide visual interest. In most interactive evolutionary design systems, it is unlikely that complex correspondences between features in different layers will occur in a practical amount of time. Figure 5.1 compares the effect of combining layers containing unrelated and related features.

In this chapter, different options for combining layers are first discussed. Synchronization of feature attributes in different layers is then demonstrated. A technique is presented for creating different sets of visual attributes in different sub-regions of the visual field. Finally, a distinction is made between global and layer-based genes.


Figure 5.1: Features in layers with no gene correspondence rarely align in any organized manner. Above, a layer interpreted as palette color and a layer interpreted as a displacement map (top row) are combined to form the lower left image. The lack of feature correlation can be compared to the lower right image, in which a copy of the color layer is also interpreted as a displacement layer.

Figure 5.2: Amplitude can be continuously modified so that layers interpreted as displacements can gradually change bumps into indentations and vice versa.

5.1 Compositing Methods

5.1.1 Value Layers

As pattern functions are layered, the individual samples can be combined via a number of means including summation, using the maximum value, or performing a linear interpolation. Different methods result in different visual properties. The appropriate choice is usually domain dependent.

Taking the maximum value of two layers to combine them can be thought of as a boolean union, A ∪ B. Using the minimum value is perhaps less generally useful and corresponds to a boolean intersection, A ∩ B. Summation can also be a useful method for combining displacements of different frequencies and scales. Care must be taken in some domains to either clamp or normalize the amplitude to maintain a reasonable value. When one layer is subtracted from another, depending on the relative size of the features, the sign of the feature can determine whether it forms a positive or negative shape. By allowing the height of a displacement to smoothly vary from positive to negative, displacements can be made concave or convex (figure 5.2).
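The per-sample combination methods above might be sketched as follows (Python for illustration only; the function names are hypothetical, and samples are assumed to be normalized to [0, 1]):

```python
# Hypothetical per-sample combiners for two value-layer samples a and b.

def combine_max(a, b):
    """Boolean-union-like combination: a feature survives if it is
    present in either layer."""
    return max(a, b)

def combine_min(a, b):
    """Boolean-intersection-like combination."""
    return min(a, b)

def combine_sum(a, b, clamp=True):
    """Summation, useful for combining displacements of different
    frequencies and scales; clamping keeps the amplitude reasonable."""
    s = a + b
    return min(s, 1.0) if clamp else s
```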

Blending values by averaging can combine attributes of both layers into one while maintaining a normalized range of values. However, in many domains averaging can have a "muddying" effect of removing desirable extreme characteristics by pushing attributes to average values, thus removing distinctive qualities from each layer. Figure 5.3 shows two value layers interpreted as displacements being combined first with summation and then using the maximum value. If the potential for multiple effects is desired, both values can be calculated (i.e., the maximum and the sum). A gene can be added to determine which effect is used. The two values can be interpolated based on the gene value to ensure design space continuity.
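The gene-selected choice between two compositing effects could be sketched like this (a hypothetical illustration in Python; the point is that the gene interpolates continuously between the two computed values rather than switching discretely):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def combine(a, b, method_gene):
    """Interpolate between two compositing effects (here, the maximum
    and the clamped sum) according to a gene in [0, 1], so that small
    changes to the method gene never cause a discontinuous jump."""
    return lerp(max(a, b), min(a + b, 1.0), method_gene)
```

A gene of 0 yields the pure maximum, a gene of 1 yields the pure (clamped) sum, and intermediate values blend the two effects.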

5.1.2 Color Layers

Colors (c_t and c_b) are often blended using an alpha (or opacity) value a and simple linear interpolation:

c = c_t(1 - a) + c_b · a    (5.1)

Deciding which color space to use when interpolating colors is an interesting problem in domains using color. Different IED systems use different methods, with each giving a fairly distinctive signature. As was mentioned in the previous chapter, interpolations in HSV space often lead to "rainbows", as the interpolated value passes through a number of hues. Linear paths between points in RGB space often have nonintuitive intermediate qualities. If either space is used to interpolate colors from a fixed color palette, then non-palette colors are likely to result. If the palette color interpolation is conducted in the linear "palette space" then only palette colors will result. However, "rainbow" effects similar to those found with HSV interpolation are once again likely.
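The difference between componentwise RGB mixing and "palette space" mixing can be sketched as follows (Python for illustration; the palette-indexing scheme shown is an assumption, not the dissertation's exact implementation):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def mix_rgb(c1, c2, t):
    """Componentwise RGB interpolation: as in figure 5.4, pure (1,0,0)
    red and pure (0,1,0) green mix to (0.5,0.5,0) dark yellow."""
    return tuple(lerp(x, y, t) for x, y in zip(c1, c2))

def mix_palette(palette, i1, i2, t):
    """'Palette space' interpolation: blend positions within a fixed
    palette rather than color components, so only palette colors (or
    blends of adjacent entries) can result."""
    pos = lerp(i1, i2, t)              # fractional palette position
    lo = int(pos)
    hi = min(lo + 1, len(palette) - 1)
    return mix_rgb(palette[lo], palette[hi], pos - lo)
```

Mixing the first and last entries of a three-color palette at the halfway point lands exactly on the middle palette entry, which is the behavior shown in the final image of figure 5.4.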


Figure 5.3: The two independent layers in the first column are formed using the features' value and opacity information. The middle column shows the first column reinterpreted as displacements. These displacement layers are then combined in the final column. The top right image shows the result of using the maximum value, while the lower right image shows a summation.

Figure 5.4: In each image the color palette running down the left edge is mixed with the identical color palette which runs across the top. The first two images show a hue ramp with hue ranging from zero to one (in HSV space). In the first image, the ramp is mixed in RGB space. At each point in the image each color component is independently linearly interpolated. For example, the row of pixels containing pure (1,0,0) red on the left mixes with the column containing pure (0,1,0) green, yielding a (0.5,0.5,0) dark yellow pixel. The second image mixes the same hue palettes, but in HSV space. Each resulting hue is located halfway between the two mixing hues. The next two images show random palettes mixed in RGB and HSV, respectively. The final image shows the same random palette mixed in "palette space", with each resulting color being halfway between the two mixing colors in the random palette.

5.2 Attribute Synchronization

A significant weakness of many interactive evolutionary design systems is the lack of correlation between the features in different layers. For example, stripes or spots in one color layer typically have no positional correspondence with any ridges or bumps that may be in a displacement layer. Additionally, in two composited color layers it would be nearly impossible to interactively evolve large blue marks in a lower layer with smaller red marks on top of them in a higher layer. This correspondence between features in different layers is a common attribute of human design. In fact, it has been said for texture generation that the coordination of color and bump mapping is the "most important element" of impressive texture design [177].

In most GP-based IED systems, there is no relationship between ancestor and child nodes for frequencies, noise offsets, random seeds, or transformations. Nodes representing the previously mentioned compositing operators (i.e., maximum, addition, and linear interpolation) act as layer compositing operators. They combine the values found in their child nodes, resulting in a new layer containing attributes from both child layers.

It is possible that a GP mutation operator could copy one of a node’s subtrees to one of the same node’s other child slots. This copying creates an exact correspondence between the two child layers. This is likely to be an uncommon occurrence, since it is not explicitly encouraged. Subsequent indiscriminate mutation of the copied layer’s sub-tree nodes is likely to destroy the correspondences between the layers. Thus, in the rare case that a copy of a layer is composited with itself, the layers are likely to be either identical or uncorrelated.

Placing an identical layer on top of a lower layer is never a good thing since it is likely to simply obscure or remove the lower layer, while requiring additional computation. What is more useful is to composite a modified layer which retains correspondences (mainly spatial) with its parent layer. The method implemented here for creating correspondence between layers is the addition of a set of cloning genes to each layer. These genes control how many additional copies of the layer should be composited on top of the parent layer, and how much each such layer should be modified. This results in the benefit of additional detail being added to the layer’s features. Since these detail features are based on the features in the original layer, they can share visual attributes of the parent layer.

Care must be taken to control the mutation of the cloned layers. It may be desirable to have some of the copies' feature attributes change radically (e.g., shape and color.) However, very slight mutations to a few of the layer's genes can make the copied layer change significantly. This can disrupt any correspondence between feature positions from the original layer to the copy. To address this problem, a clone mask is created. This is an array of weights (mostly zero or one) that is the same length as the genes for one layer. When an array of mutation amounts is generated to mutate the genes of a cloned layer, the mutation array can be multiplied by the clone mask to ensure that none of the most critical genes are modified. The mutation weights can be further tuned so that a few of the attributes are only modified by a small amount (e.g., position.)

Finally, the size of the features is encouraged to shrink by some amount in each cloned layer so that the cloned features do not totally obscure the original features as they are composited. A gene controls the amount by which the size of the features in each clone layer is reduced.
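The clone-mask idea can be sketched as follows (a minimal Python illustration; the function and parameter names are hypothetical stand-ins for the implementation described above):

```python
import random

def mutate_clone(genes, clone_mask, scale=1.0, rng=random):
    """Mutate a cloned layer's genes through a clone mask: a per-gene
    weight array (mostly zeros and ones) multiplies the random mutation
    amounts, so the most critical genes (e.g. feature position) are left
    untouched or changed only slightly."""
    mutated = []
    for g, w in zip(genes, clone_mask):
        amount = rng.uniform(-1.0, 1.0) * scale * w     # masked mutation amount
        mutated.append(min(max(g + amount, 0.0), 1.0))  # keep genes in [0, 1]
    return mutated
```

A weight of zero freezes a gene completely, a weight of one allows full mutation, and intermediate weights allow only small adjustments, preserving the spatial correspondence between the clone and its parent layer.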

5.3 Trait Localization

Designs evolved by IED systems usually contain visual attributes that appear uniformly throughout the visual field. It is rare to find one set of visual attributes appearing in one sub-region of the design, and a completely different set of attributes in another region. While specific qualities certainly vary somewhat from local region to local region, the overall characteristics of the design are usually describable in a region-independent manner. For example, a given design could be described as, "a base of red and brown tones with green splotches of various sizes on a low frequency

Figure 5.5: The four images illustrate the construction of a design using two independent layers and two cloned layers. The first image is the base layer of features (over a background color field.) The next image shows the addition of a cloned layer. The third image shows a new independent layer. The final image adds another cloned layer.

fractal bumpy surface." While some regions may have more or fewer splotches, or more or less redness, the overall description still holds in general throughout.

When human designers create a design, this uniformity of attributes is often not present. One region might have a certain set of attributes, but a neighboring region might have a set of visual qualities that is totally unrelated in palette, complexity, frequencies, sizes, etc. To create the potential for these sorts of significant regionalized differences within the parametric design spaces described here, a small set of region genes can be added.

Voronoi noise (also called Worley noise) can be used to subdivide an N-dimensional space into discrete regions [8]. Voronoi noise works by first randomly jittering a regular grid of feature points. At any given sample location, the distance to the closest feature point is then calculated. The sets of samples sharing a closest feature point create a network of adjoining cell-like regions. The position of each shared feature


Figure 5.6: Voronoi noise can be used to divide a region into unique cells or regions. The points in each region share a common closest feature point (shown in the upper left image mapped to color) which can be used to localize a set of visual qualities. Using this technique, the design can be broken into finite spatial regions sharing a set of visual properties. Noise of varying frequencies and amplitudes can be used to make the cell boundaries more (or less) irregular.

Figure 5.7: Genes for controlling visual attributes can be created both on a per-layer basis and on a global basis. The technique for localizing traits within finite regions described in the previous section can also be applied to the features of a single layer, as is shown here.

point can be used as a seed for generating a set of attributes to be shared throughout the region.
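The Voronoi cell lookup just described might be sketched as follows (Python for illustration; the per-cell hashing scheme and function names are assumptions, not the dissertation's exact implementation):

```python
import math, random

def voronoi_cell(x, y, jitter=1.0, seed=0):
    """2D Voronoi (Worley) cell lookup: one feature point is jittered
    within each grid cell, and the feature point closest to the sample
    is returned. Every sample in a cell-like region shares that point,
    so its position can seed a region-wide set of visual attributes."""
    cx, cy = math.floor(x), math.floor(y)
    best, best_d = None, float("inf")
    for i in range(cx - 1, cx + 2):        # search the 3x3 cell neighborhood
        for j in range(cy - 1, cy + 2):
            # deterministic per-cell random offset
            r = random.Random(seed * 73856093 + i * 19349663 + j * 83492791)
            fx = i + 0.5 + jitter * (r.random() - 0.5)
            fy = j + 0.5 + jitter * (r.random() - 0.5)
            d = (fx - x) ** 2 + (fy - y) ** 2
            if d < best_d:
                best, best_d = (fx, fy), d
    return best
```

With the jitter gene at zero the feature points sit on a regular grid and the cells are square; increasing the jitter produces the crystalline cells described below.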

By using these region-based values as a seed to generate a mutation array for each region, gene values can be mutated differently in each region. Different collections of visual qualities in different regions result, as is shown in figure 5.6. By controlling the percentage of genes that are mutated, as well as the amount by which they are mutated, the degree of difference between regions can be continuously controlled.

Additional genes control the shaping of the boundaries between regions. One gene controls the degree of feature point jittering, varying from a regular grid to crystalline cells. Another set of genes adds noise to the sample coordinates, which causes the cell boundaries to become curved. The noise genes control the frequency and amplitude of the curvature.

5.4 Global and Layer Level Parameters

In the previous two chapters, traits such as noise, color palettes, and bombing were used to manipulate the visual attributes of the patterns within a single layer. Some parameters, such as global noise and the above regionalization of visual properties, were defined to affect all layers equally. For many of these properties it is advantageous to introduce both high- and low-level versions of the genes. The layer-level genes modify the property within one layer. The global-level genes affect the selected property in all layers simultaneously.

For example, the palettes of all of the layers could be made "warmer" simultaneously, or the palette of just one layer could be made warmer. The layer genes further tune the properties of each individual layer. Limiting feature qualities to finite regions can be implemented on both a global and layer basis. Figure 5.7 shows the changing of visual properties by "region" as was discussed in the previous section. But in this example the mutated qualities are limited to the features of a single layer, as opposed to all layers.

Enabling visual traits to be adjusted on a per-layer basis further increases the diversity that it is possible to represent in a solution space. Providing high-level genes as well increases the likelihood of high-level visual properties being applied to an entire design (e.g., making the entire design “warmer”, or perhaps more “organic”.)
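One simple way such a global/layer gene pairing could work is sketched below. This is an assumption for illustration; the dissertation does not specify the exact combination rule, and the convention that 0.5 means "no change" is borrowed from the directional-activation genes of chapter four:

```python
def effective_gene(global_gene, layer_gene):
    """Hypothetical combination of a global-level and a layer-level
    version of the same trait gene: both are centered at 0.5 = 'no
    change', the global gene shifts the trait in every layer at once
    (e.g. making all palettes 'warmer'), and each layer gene adds its
    own per-layer offset on top."""
    offset = (global_gene - 0.5) + (layer_gene - 0.5)
    return min(max(0.5 + offset, 0.0), 1.0)
```

Under this rule a layer gene can locally cancel or amplify a global shift, while leaving the other layers' effective values untouched.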

CHAPTER 6

IED EXAMPLE DOMAINS

Each of the previous chapters introduced a set of techniques that can be used for constructing image and shader attribute design spaces from layered continuous pattern functions. Because of the methods used to construct these solution spaces, they are interactively searchable using simple, standard genetic algorithms.

This chapter presents images which serve as examples demonstrating a range of qualities attainable by implementing the techniques discussed in the previous chapters. Most of the illustrations will show populations and individuals that were bred to demonstrate the potential for generating a particular set of visual traits. The domains shown will include two-dimensional images, UV shader attributes applied to spheres, bump and displacement mapping, light reflectance properties, height fields in virtual environments, and three-dimensional solid shader attributes. The remaining chapter will summarize the findings, contributions, and directions for future work.

Each population image shown was produced with the aid of Houdini and RenderMan. For each individual, the geometry and genotype are written to a RenderMan RIB file. A render job is started on a processor for each individual. (Alternatively, the entire population can be rendered as a single image using multiple processors.) The samples for each pixel in the image of each individual are calculated by passing the UV coordinates at the sample location to a RenderMan shader along with the individual's genotype. Once each individual is rendered, they are tiled together into a single mosaic grid image. This grid image is then texture-mapped onto a grid of selectable objects for the next round of interactive selection.

6.1 Images

By far the most common aesthetic use of interactive evolutionary design is the generation of artistic, non-representational images. In this implementation, to produce an image for an individual, a single Bezier patch is placed filling the camera's view so that image coordinates correspond with the patch's UV coordinates. At each sample location, the sample's coordinates and the gene values are passed to a RenderMan shader. The shader uses a layered continuous pattern function to compute a color value for the sample. Bump mapping is also sometimes activated depending on the value of the relevant genes.

The examples shown in figure 6.1 illustrate individuals and populations evolved by the author. The images were selected to show a wide range of regularity and randomness. Different degrees of color variation, structure, and layer interaction can be observed. The populations were saved after several iterations of selection and evolution, so many of the individuals within a population can be seen to share traits with their siblings. Note that these populations were all still early enough in the evolutionary process that they have not yet converged to a limited set of traits.

"*The image of each individual is texture mapped onto a four-sided pyramid with its point fac­ ing the camera. When an individual is interactively selected, the edges of the selected object are highlighted causing an “X” to be formed over the selected individual. ''There are usually many samples computed per pixel for anti-aliasing purposes.

141 The individuals at the bottom of ffgure 6T, in addition to being ones that the

author found visually interesting, were chosen to illustrate different visual traits. The

first on the left illustrates the potential for the emergence of complex forms interacting

in layers. The second image shows a restrained analogous color palette as well as an

example of a subdued pattern of varying yet related detailed structures.

The third image illustrates a regular pattern with different structure in different

regions, with subtle local variations and irregular details. The fourth image shows a

complex color palette and interesting form interactions between layers with a wide

variety of shape and detail at different scales. The final image is another good example

of an irregular pattern.

By contrast, the images in figure 6.2 show a population and individuals evolved by designer Peter Gerstmann. Gerstmann selected individuals based on how interesting he found the forms, colors, and relationships in each image. In the population shown, some degree of convergence can be observed to limited unsaturated palettes of two or three hues, combinations of soft watercolor-like washes on paper-cutout shapes, wandering black lines, and combinations of large soft shapes with small, tight details.

Gerstmann expressed great satisfaction with the images he generated during his time using the system. He requested that a set of individuals he evolved be rendered at higher resolutions so that he could use them in his slide-show screensaver on his personal computer. Having a professional visual designer wish to be able to regularly view the images he had evolved using this system provided a great deal of personal satisfaction.

6.2 UV Shaders

The techniques used to create images in the previous section need be modified only slightly to be used as shaders to color and displace the surfaces of 3D objects with UV texture coordinates defined on their surfaces. Spheres are commonly used as a quick means for illustrating shader properties because they render quickly and show the effect of light hitting the surface at different angles. A sphere's "U" texture coordinate typically runs around the sphere latitudinally from zero to one, forming a seam where one meets zero. The "V" coordinate also runs from zero to one from pole to pole.

UV mapping on a spherical primitive tends to have a few visual artifacts. In addition to the seam (which can usually be hidden from the viewer by rotating it away from the camera), latitudinal stretching can result if a "square" texture is wrapped around a sphere, since the texture's width must cover twice the surface distance of its height. This was resolved by doubling the base feature frequency along the U direction.
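The aspect correction described above amounts to the following (a trivial Python sketch; the function name is a stand-in):

```python
def sphere_feature_freq(base_freq):
    """A sphere's U range covers roughly twice the surface distance of
    its V range, so doubling the base feature frequency along U keeps
    features roughly isotropic on the surface."""
    return (2.0 * base_freq, base_freq)   # (u_frequency, v_frequency)
```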

Longitudinal "pinching" of a regular pattern is sometimes an undesirable artifact as well, if consistent feature size is desired. The pinching results from the decreasing distance covered on the sphere by equal steps in UV coordinate space as a pole is approached. Though this artifact was not addressed, it could be reduced if it were determined to be a significant problem, using an extension of the "uniform line density" methods discussed by Johnston [78]. Using a solid shader with 3D features also removes these artifacts, but potentially at the expense of others, as will be seen in section 6.4.

Populations and individuals shown in figures 6.3, 6.4, 6.5, and 6.8 were evolved by the author. The top two populations in figure 6.3 were selected to show a diverse range of visual traits. In the lower left image a population of regular patterns was evolved. The lower right image shows a population that has started to converge towards green, symmetric "ink-blot" designs. Higher resolution images of some of the author's favorite individuals from this domain are shown in figure 6.4.

Figure 6.5 shows selected individuals that were evolved when the gene controlling feature position bombing was manually "turned off", forcing regular patterns (see the lower left image in figure 6.3 as well.) This illustrates the ability to manually control the visual properties being evolved.

Figure 6.6 shows spheres evolved by computer artist Charles ("Chuck") Csuri. It was interesting to note the role that experience played in using an IED system. The population shown was one of the final populations of Chuck's evolutionary run. He had primarily been selecting forms and line qualities he had liked, despite the individuals' limited color palettes. When he arrived at the population shown, most of the color diversity had been bred out of the population, leaving him with little genetic material for complex color palettes. Experience with the system encourages one to foster diversity in order to avoid premature convergence.

Csuri's usage of the system pointed out another important property of IED systems: the amount of time people are willing to wait for the next generation to be displayed is a critical component of their willingness to use the tool. Everyone's tolerance differs. While the author was willing to wait a few minutes for a generation (and some researchers have been known to wait overnight), Csuri seemed a bit tired by the several-minute wait between populations. This perhaps could have been alleviated somewhat if he had been able to use the system himself on his own computer at his own pace (working on other tasks while waiting.) Unfortunately the interface in its current state (as is typical of such research projects) requires the author to monitor the distributed rendering progress, occasionally nudge processes, and perform a handful of ordered data filtering tasks in order to select and display the next generation. Interface development remains an area open for much improvement.

Returning to the evolution of UV shaders, additional parametric surface properties can be added and evolved as desired. Figure 6.7 shows the addition of genes for controlling roughness and metallic properties. As with other properties, light reflectance qualities can vary between features in different layers. The roughness gene determines the size and falloff of the specular highlight. Layers with "rough" features have wide diffuse highlights (if any), while smoother surfaces produce smaller, brighter highlights. The degree to which a given feature appears to be metallic can be controlled by making the highlight color blend with the surface color. So, for example, a yellowish-brown material can be made to appear more gold or bronze by making the highlight become more yellowish-brown, instead of white.

Additional traits such as these do not need to be limited to surface material qualities discussed so far. Any arbitrary property can be parameterized including the amount that an image file is referenced and blended, environment map-based or raytraced reflectance or transparency, glowing, or emissiveness. Even the growth of particle system-based hair can be an evolvable surface trait.

One of the factors determining the practical evolvability of individual traits is the time required to render each individual at a sufficient resolution. For example, figure 6.8 shows spheres which use true displacement mapping instead of the bump mapping shown in the previous examples. The number of samples that must be computed when rendering a detailed displacement shader, to avoid surface tearing artifacts, is significantly higher. Techniques like particle systems and raytracing which add significant computation time may require lower resolution images and smaller populations to attain tolerable population generation times.

That is, the highlight blends with the hue and saturation of the surface color; the highlight's value remains at full intensity.

6.3 Height Fields

If the 2D images presented in section 6.1 are used to offset the heights of a flat grid of vertices, height fields can be created. As with bump and displacement mapping, the images' color value can be used to determine the magnitude of the offset. The ranges and activation thresholds of different properties can be adjusted to bias the system to produce different properties. For example, the chance of creating subregions with different sets of local properties can be increased (section 5.3) to encourage different terrain types (e.g., rolling hills transitioning into rectangular "buildings".) Feature frequency and noise levels can be reduced to make the formation of larger continuous structures more likely. This also makes it more probable that the terrain will be navigable. The domain author is free to adjust the gene value remapping biases and boundaries as he or she sees fit. Figure 6.10 shows two populations of designs.
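The image-to-height-field conversion can be sketched as follows (Python for illustration; the function and parameter names are hypothetical, and the actual system builds its geometry in Houdini):

```python
def height_field(image, x_scale=1.0, y_scale=1.0, z_scale=1.0):
    """Lift a flat grid of vertices by a 2D image's values: `image` is a
    row-major list of rows of values in [0, 1]; each sample becomes one
    (x, y, z) vertex, with the image value setting the height offset."""
    verts = []
    for j, row in enumerate(image):
        for i, v in enumerate(row):
            verts.append((i * x_scale, j * y_scale, v * z_scale))
    return verts
```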

Two designs were selected and used to generate the game environments shown in figures 6.11 and 6.12.

When an individual 2D image design is selected in the evolution interface, a 3D interactive representation of the geometry is displayed in an adjoining window (shown in the upper right image in figures 6.11 and 6.12) which can be rotated and viewed from any direction. This allows the user to preview the translation from 2D to 3D, perhaps revealing hidden advantages or disadvantages that were not as obvious from the "overhead" view (e.g., the "ramps" from low ground to high ground in figure 6.12.)

The lower images in each figure show views of the environment after conversion to a game format. When an environment geometry is selected and exported, a script can be run which converts the polygonal geometry into brushes. These brushes can be imported into a game environment authoring tool where lights and player starting positions are added. The environment is then "compiled", which involves computing visibility and lighting information for the different regions of the space. Afterwards the environment can be loaded into the game and explored interactively from a first person view.

Figure 6.11 shows an image that was selected for its gentle rolling hills and the unique "spine" that divides the top of one hill with a high wall. By contrast, figure 6.12 was chosen to illustrate the potential for generating flat overlapping and adjoining surfaces at different heights. This is a common trait of many environments in the very common "death-match" and "platform" gaming genres.

6.4 Solid Shaders

In section 6.2, evolved 2D features were "wrapped" around each sphere using the sphere's UV coordinates. An alternative method for shading three-dimensional object surfaces is to generate a 3D texture space, or solid shader, which is referenced using the surface's object or world space (XYZ) coordinates instead of parametric (UV) coordinates. The effect is one of having immersed the object into the texture space.

Its surface takes the color and displacement properties of the point in space at which the surface sample is located.
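The XYZ-based lookup described above can be sketched as follows. This is an illustration, not the system's shader code: a hypothetical hash-based jitter stands in for the evolved feature parameters, and each unit cell holds one spherical feature.

```python
import math

# Illustrative sketch of the solid-shader idea: a surface point is shaded
# by sampling a 3D grid of features at its (x, y, z) position, not its
# (u, v) parametric position. The hashing scheme below is a hypothetical
# stand-in for evolved per-cell feature attributes.

def cell_feature_center(ix, iy, iz):
    """Deterministic pseudo-random feature center inside the unit cell."""
    h = (ix * 73856093) ^ (iy * 19349663) ^ (iz * 83492791)
    r = lambda k: ((h >> k) & 0xFF) / 255.0
    return (ix + r(0), iy + r(8), iz + r(16))

def solid_shade(x, y, z, radius=0.4):
    """Return 1.0 inside the spherical feature of the containing cell, else 0.0."""
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    cx, cy, cz = cell_feature_center(ix, iy, iz)
    d = math.dist((x, y, z), (cx, cy, cz))
    return 1.0 if d < radius else 0.0

# The same lookup works for any surface point, whatever the object's UVs:
print(solid_shade(0.5, 0.5, 0.5))
```

Because the lookup depends only on position in space, there is no pole pinching; but when `radius` grows past the cell size, features are clipped at cell walls, which is exactly the discontinuity artifact discussed below.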

The features represented are based on a three-dimensional grid of the three-dimensional features in section 3.1.3. Figure 6.13 shows spheres using evolved solid shader attributes. The author evolved the population and the images to show a range of visual attributes.

Note that there is no longer any "pinching" of features toward the poles of the sphere. There is a serious artifact, however, when a majority of the three-dimensional features become larger than the cell containing them: the grid of cells becomes immediately apparent because of the discontinuities at the cell walls. As with longitudinal pinching in UV shaders, these discontinuities can sometimes serve as an interesting and desirable visual trait, but they are also a fairly strong signature. One partial solution would be to create 3D features with hollow interiors instead of only using the solid features described in chapter three. These would also enable the possibility of lines on the surface, instead of only solid shapes. Another solution would be to strongly bias the 3D features to remain within their cells.

Most of the 2D compositional design principles from chapter four cannot be easily applied to solid shaders. The visible 2D features on the surface of the textured object are formed by the potentially complex intersection between the object's surface and the interior of the 3D features. Neither the shape nor the position of these 2D surface features is regular or controllable. As a result, any of the principles which involve forcing features to all become similar in shape, or to reposition themselves on the surface relative to the other features, are not feasible.

As an example, consider a concave 3D feature which might have a number of intersections with an arbitrary 3D object, forming multiple 2D features on the object's surface.

Figure 6.1: Image populations and individuals evolved by the author are shown.


Figure 6.2: The population of images above was evolved by designer Peter Gerstmann. Beneath the population, higher resolution images selected by Gerstmann are shown.


Figure 6.3: Populations of UV shader attributes mapped to spheres with bump mapping are shown.


Figure 6.4: Detail of several individual UV textures.


Figure 6.5: Regular pattern UV textures: Individual visual properties can be manually biased as desired. For these shaders the chance of feature position bombing was reduced to zero.


Figure 6.6: The population of spheres above was evolved by computer artist Charles Csuri. Beneath the population, higher resolution images selected by Csuri are shown.

Figure 6.7: The population of spheres above demonstrates the addition of genes to control lighting properties such as roughness and metal.


Figure 6.8: Populations of UV shader attributes mapped to spheres using displacement.


Figure 6.9: UV shaders mapped to spheres with true displacement.

Figure 6.10: These are populations of images evolved to be used as height fields. They can then be converted into geometry suitable for use in games and virtual environments.

Figure 6.11: The evolved image on the top left was reinterpreted as a height field. The values in the image were used to raise the vertices of a triangle grid. A polygon reduction algorithm was then run to reduce the triangle count. The bottom images are two views of the environment imported into Quake III Arena™ (a product of id Software, Inc.)

Figure 6.12: The evolved value image on the top was reinterpreted as a height field. The values in the image were used to raise the vertices of a triangle grid. A polygon reduction algorithm was then run to reduce the triangle count. The bottom images are two views of the environment imported into Quake III Arena™ (a product of id Software, Inc.)


Figure 6.13: These spheres show solid shader attributes. The use of 3D features can avoid some of the artifacts associated with 2D UV coordinate-based surface shaders.

CHAPTER 7

CONCLUSION

7.1 Summary

Genetic algorithms traditionally require many generations of large populations to work well. The practical constraints imposed by interactive evolutionary design systems require that fewer generations of smaller populations be explored. IED systems have had little success as general design tools because of the challenges involved in making the systems practical, usable, and flexible. While previous systems yield visually interesting results initially, the depth of novelty is shallow. It quickly becomes apparent that IED systems' output contains unacceptably severe signature.

Traditional non-interactive evolutionary systems are usually used to search for highly constrained solutions in solution spaces with very small areas of high fitness. IED is typically useful in unconstrained design domains with large areas of high fitness. While many aesthetic domains fit into this category, the fertility of design spaces cannot be relied upon entirely. Some implementations of IED systems attempt to address the problem by using high-level representations and design spaces that primarily contain "good" solutions. This usually leads to few surprises and high signature. When parameter spaces are instead made low-level to increase the representational potential and generality of the system, then convergence becomes extremely unlikely, given population size and generation limitations.

Systems relying on specific high-level representations converge quickly to very complex solutions, but with a relatively limited degree of visual diversity. Representations that are more low-level contain higher-fitness solutions, but have a very hard time providing satisfactory convergence. Finding a balance between signature and convergence rate requires the use of new combinations of representational primitives and techniques.

The use of genetic programming techniques has dominated IED system implementations because of the potential diversity promised by nearly infinite structural recombinations. Genetic algorithms with fixed-length "flat" genotypes have been viewed as more limited than GP systems in their representational flexibility. However, because of their structural limitations, GAs do have the advantage of being easier to control, analyze, and tune than GP-based systems.

The work presented here introduces a new class of solution space building blocks called continuous pattern functions. These pattern functions are used to construct parametric design spaces suitable for evolving image and shader attribute designs, using only simple GAs. Techniques for compositing and synchronizing patterns in multiple layers increase visual diversity and reduce signature.

These continuous pattern functions are based on techniques drawn from computer graphics procedural texture generation. By layering, compositing, and synchronizing these patterns to construct solution spaces, forms can be much more general than prior high-level representations, but with potential for far fewer "noise" results than prior low-level representations. More importantly, the fixed-length genotype representation allows for control over visual attributes by permitting intuitive manual adjustment of trait activation genes, range biases, and boundaries.

Because the pattern functions used here are general in their representational power, without further constraints or biases they can yield a high percentage of low fitness "noisy" or "muddy" results. As with many low-level representations, it is unlikely a user will stumble across many of the structural relationships that are extremely common in visual design: balance, unity, symmetry, repetition, and so forth. Explicit formal design knowledge representation is used to address this problem. When formal visual design parameters are represented and integrated into pattern functions, high-level attributes such as "symmetry", "uniformity", and "balance" will emerge in individuals. Their presence yields greater visual diversity and thus reduced signature.

To demonstrate the practical application of the concepts in this research, a generic GA-based IED system called Metavolve was integrated into the Houdini 3D modeling and animation system [90] [150]. Prior to the image and shader work presented in this dissertation, Metavolve was used to evolve cartoon faces, color combinations, and human figure geometry.

Several RenderMan shaders were produced for use with Metavolve which implement the techniques presented in this research. Metavolve passes genotypes to the shaders, which are then rendered on patches or spheres. Populations are displayed

This refers to individuals that have degenerated into a complete lack of structure. Such individuals, while technically "unique", are indistinguishable, like different screen shots of television static.

in an interactive window allowing selection of high-fitness individuals. The results of this work have been presented in images illustrating the evolution of the visual traits identified for signature reduction. The system uses a GA for search which is easily tuned to control the boundaries and biases of the search space.

Unconstrained IED systems for generating images and surface attributes must be judged by their ability to generate individuals and populations with the visual qualities that users will find interesting. The primary goal of this work is to show that a body of knowledge from the shader authoring domain can be applied to the IED problem in order to give system architects more control over visual signature. This in turn will increase the diversity of the individuals presented to users.

The populations and individuals presented in this work exhibit traits that are generally absent in previous systems, including layer synchronization, formal design properties, irregular patterns, and correlated feature properties. Providing new techniques for constructing more visually diverse design spaces will make the implementation of useful IED systems significantly more practical.

7.2 Contributions

This research combines computer graphics techniques and design domain knowledge and applies them to an established research problem in an innovative way. The interdisciplinary nature of the required approach has limited the amount of previous work in this vein. There are few evolutionary algorithms researchers with extensive knowledge of surface and visual design techniques. There are even fewer visual designers with evolutionary algorithm research backgrounds. The value of the techniques' use was demonstrated through examples of images evolved not only by the author, but also by a well-known computer artist and a professional multimedia designer.

The primary contributions include the following:

• Layered feature pattern generation techniques are used as the primary basis for building interactive evolutionary design spaces.

A set of tricks and techniques has evolved over the past decade within the culture of the film effects and animation production world for generating realistic materials to surface virtual worlds in feature films. This dissertation introduces these techniques to the problem domain of IED. Approaches are demonstrated for implementing and combining these techniques continuously. This continuity allows visual properties to change smoothly rather than in discrete jumps: a necessity if the user is to control mutation in order to refine a population of designs. The construction of a continuous parametric design space allows it to be searched using simple genetic algorithms. This in turn allows design space authors to easily control the probabilities and biases of individual visual qualities (i.e., control signature).
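The core mechanism can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Metavolve's actual API: genes live in [0, 1], a creep mutation perturbs them smoothly, and an author-tunable remapping with range and bias converts each gene into a visual parameter.

```python
import random

# Illustrative sketch of continuous genes with author-controlled remapping.
# `remap` and `mutate` are hypothetical names for this example.

def remap(gene, lo, hi, bias=1.0):
    """Map a [0, 1] gene into [lo, hi]; bias > 1 favors the low end of the range."""
    return lo + (gene ** bias) * (hi - lo)

def mutate(genotype, rate=0.1, amount=0.05, rng=random):
    """Gaussian creep mutation: perturb some genes slightly, clamped to [0, 1]."""
    out = []
    for g in genotype:
        if rng.random() < rate:
            g = min(1.0, max(0.0, g + rng.gauss(0.0, amount)))
        out.append(g)
    return out

# e.g. a "feature frequency" trait biased toward low frequencies:
genotype = [0.5, 0.2, 0.9]
print(remap(genotype[0], lo=1.0, hi=16.0, bias=2.0))  # 0.5**2 * 15 + 1 = 4.75
child = mutate(genotype, rate=1.0, amount=0.02)        # a slightly varied design
```

Because `remap` is continuous, a small mutation of a gene produces a small, smooth change in the corresponding visual property, which is what makes user-directed refinement workable.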

• Methods for representing and applying formal design knowledge are presented to improve the visual diversity of IED systems using these techniques.

The use of explicit representations of common design approaches, such as structured color palettes and increasing localized emphasis of features, achieves design results that would not be evolved in previous IED systems. Adjusting the biases in the design space allows the space author to explicitly control the chance of these constraints being used within populations.

• Layer synchronization techniques for feature correlation are demonstrated.

The introduction of feature cloning allows for correlations and hierarchies of detail in complex ways that would not be possible using the traditional GP-based methods previously used in this problem domain. Correlating feature attributes is made possible by the explicit representation of features, rather than the usual emergent feature representation. The synchronization of different feature attributes in this domain (e.g., color, displacement, light reflectance) provides a greater degree of visual richness.

Additional contributions include a new method for designing real-time game environments that requires no knowledge of 3D modeling techniques. Also, the signature problem is discussed in terms of visual attributes. Sources of visual signature were identified, and examples of approaches for reducing and controlling them were presented.

7.3 Future Work

There are many areas where this work could be extended and refined. The 3D features used with solid shaders could be rendered with ray marching techniques and extracted using marching cubes algorithms. Initial attempts at this yielded results that were too slow to be practical, but many potential optimizations have yet to be investigated. Evolving architecture with complex interior spatial qualities would be a very interesting use of evolved layered pattern geometry.

Swapping genetic material between different layers using new mutation operators could yield more interesting interlayer correspondences. The values currently interpreted as color and displacement could be interpreted as vector flow fields. These could then be used to evolve complex structured particle systems, perform image processing, or sculpt objects via deformation. Evolving structured motion curves for animation is another potentially interesting interpretation of evolved values. There are numerous shading and image processing techniques that could be integrated to extend the potential visual diversity of the features and patterns.

Additional research that might follow from this work includes the following:

• Predictive methods could be found to help systems converge faster on acceptable solution regions based on collected user selection data (i.e., improved steering).

• Attempts could be made to quantify signature. This would permit comparison between different generative design systems and techniques. This would likely involve determining how much of a system's output could be viewed before it stops looking "unique".

• The extent to which aesthetic knowledge could be collected, represented, and used in a generative fashion could be investigated.

• Improved interfaces for allowing users to intuitively tune the ranges, biases, and activation levels that define the solution space of visual traits could be developed.

• Intuitive refinement of individuals using direct manipulation of gene values could be facilitated (i.e., "genetic engineering").

• Descriptive terminology for attributes could be collected and used as an alternative evaluation method via more verbose user critiques (e.g., "bright", "squiggly", "ugly").

• IED could be investigated from the perspective of experimental perceptual psychology.

• The usefulness of group evaluation as a means of "parallel processing" interactive fitness determination could be investigated.

APPENDIX A

EXAMPLE IMAGE CHROMOSOME MAP

Below is a sample chromosome map showing the layout of both global and layer-based genes in a typical image design space. In this example, genes one through nine represent global properties of the design. The remaining 148 genes define the properties of one layer. An additional 148 genes should be appended to the genotype for each additional layer to be represented in the space. For example, if it is desired that the design space described below be able to represent images with a maximum of three layers, then 9 + (148 · 3) = 453 genes would be used.
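The layout arithmetic above can be expressed directly. This is a small helper sketch; the function names are illustrative, not part of the system described here.

```python
# Sketch of the genotype layout described above: 9 global genes followed
# by 148 genes per layer. Gene numbers are 1-based, as in the table below.

GLOBAL_GENES = 9
LAYER_GENES = 148

def genotype_length(max_layers):
    """Total gene count for a design space supporting up to `max_layers` layers."""
    return GLOBAL_GENES + LAYER_GENES * max_layers

def gene_number(layer, local_index):
    """1-based position of a layer gene; layer 0's first gene is gene 10."""
    return GLOBAL_GENES + LAYER_GENES * layer + local_index

print(genotype_length(3))  # 9 + (148 * 3) = 453
print(gene_number(0, 1))   # 10: the first layer-local gene of layer 0
```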

#    Name          Description
1    gRegions[0]   global region activation
2    gRegions[1]   global region jitter amplitude
3    gRegions[2]   global region grid size
4    gRegions[3]   global region jitter offset
5    gRegions[4]   global region noise amplitude
6    gRegions[5]   global region noise frequency
7    numLayers     number of layers
8    bgMat[0]      background smoothness
9    bgMat[1]      background metal
10   regions[0]    layer region activation
11   regions[1]    layer region jitter amplitude
12   regions[2]    layer region grid size
13   regions[3]    layer region jitter offset
14   regions[4]    layer region noise amplitude
15   regions[5]    layer region noise frequency
16   clone[0]      number of clone layers
17   clone[1]      cloning mutation amplitude
18   clone[2]      cloning mutation offset
19   clone[3]      feature radius scaling
20   clone[4]      displacement adjustment
21   pc            trait palette chance
22   bgvA          background value noise (amplitude)
23   bgvF          background value noise (frequency)
24   bgvO          background value noise (offset)
25   bgvT          background value noise (turbulence)
26   bgBaseColor   background base palette color
27   freqS         number of feature columns
28   freqT         number of feature rows
29   fbh           feature base height
30   fbr           feature base radius
31   fbc           feature base palette color
32   fuz           feature edge falloff width
33   ufreq         uniform number of rows and columns
34   palo[0]       color palette seed offset
35   palo[1]       color palette seed offset
36   palo[2]       color palette seed offset
37   palo[3]       color palette seed offset
38   palo[4]       color palette seed offset
39   palo[5]       color palette seed offset
40   palo[6]       color palette seed offset
41   palo[7]       color palette seed offset
42   palo[8]       color palette seed offset
43   palo[9]       color palette seed offset
44   palo[10]      color palette seed offset
45   palo[11]      color palette seed offset
46   palo[12]      color palette seed offset
47   palo[13]      color palette seed offset
48   palo[14]      color palette seed offset
49   palo[15]      color palette seed offset
50   fsc           feature base shape choice
51   uniformScale  uniform scale features
52   fnvA          jitter feature sample color value (amplitude)
53   fnvF          jitter feature sample color value (frequency)
54   fnvO          jitter feature sample color value (offset)
55   fnvT          jitter feature sample color value (turbulence)
56   fntA          jitter feature sample positions (amplitude)
57   fntF          jitter feature sample positions (frequency)
58   fntO          jitter feature sample positions (offset)
59   fntT          jitter feature sample positions (turbulence)
60   fnoA          jitter feature sample opacity (amplitude)
61   fnoF          jitter feature sample opacity (frequency)
62   fnoO          jitter feature sample opacity (offset)
63   fnoT          jitter feature sample opacity (turbulence)
64   gnvA          jitter global sample color value (amplitude)
65   gnvF          jitter global sample color value (frequency)
66   gnvO          jitter global sample color value (offset)
67   gnvT          jitter global sample color value (turbulence)
68   gntA          jitter global sample position (amplitude)
69   gntF          jitter global sample position (frequency)
70   gntO          jitter global sample position (offset)
71   gntT          jitter global sample position (turbulence)
72   grA           rotate global coords
73   beA           feature existence bombing (amplitude)
74   beF           feature existence bombing (frequency)
75   beO           feature existence bombing (offset)
76   btA           feature position bombing (amplitude)
77   btF           feature position bombing (frequency)
78   btO           feature position bombing (offset)
79   bsxA          feature width bombing (amplitude)
80   bsxF          feature width bombing (frequency)
81   bsxO          feature width bombing (offset)
82   bsyA          feature height bombing (amplitude)
83   bsyF          feature height bombing (frequency)
84   bsyO          feature height bombing (offset)
85   bvA           feature color value bombing (amplitude)
86   bvF           feature color value bombing (frequency)
87   bvO           feature color value bombing (offset)
88   bcA           feature palette color bombing (amplitude)
89   bcF           feature palette color bombing (frequency)
90   bcO           feature palette color bombing (offset)
91   boA           feature opacity bombing (amplitude)
92   boF           feature opacity bombing (frequency)
93   boO           feature opacity bombing (offset)
94   brA           feature rotation bombing (amplitude)
95   brF           feature rotation bombing (frequency)
96   brO           feature rotation bombing (offset)
97   bnvA          feature sample color value bombing (amplitude)
98   bnvF          feature sample color value bombing (frequency)
99   bnvO          feature sample color value bombing (offset)
100  bnvT          feature sample color value bombing (turbulence)
101  bncA          feature sample color palette bombing (amplitude)
102  bncF          feature sample color palette bombing (frequency)
103  bncO          feature sample color palette bombing (offset)
104  bncT          feature sample color palette bombing (turbulence)
105  bntA          feature sample position bombing (amplitude)
106  bntF          feature sample position bombing (frequency)
107  bntO          feature sample position bombing (offset)
108  bntT          feature sample position bombing (turbulence)
109  bnoA          feature sample opacity bombing (amplitude)
110  bnoF          feature sample opacity bombing (frequency)
111  bnoO          feature sample opacity bombing (offset)
112  bnoT          feature sample opacity bombing (turbulence)
113  bscA          feature shape choice bombing (amplitude)
114  bscF          feature shape choice bombing (frequency)
115  bscO          feature shape choice bombing (offset)
116  buzA          feature falloff width bombing (amplitude)
117  buzF          feature falloff width bombing (frequency)
118  buzO          feature falloff width bombing (offset)
119  uProx[0]      unity by proximity (activation)
120  uProx[1]      unity by proximity (U position)
121  uProx[2]      unity by proximity (V position)
122  uRep          unity by repetition
123  uCont         unity by continuity
124  uVar          unity by variety
125  unity         unity by combination
126  eCont[0]      emphasis by contrast (activation)
127  eCont[1]      emphasis by contrast (U position)
128  eCont[2]      emphasis by contrast (V position)
129  eCont[3]      emphasis by contrast (radius)
130  eIso[0]       emphasis by isolation (activation)
131  eIso[1]       emphasis by isolation (U position)
132  eIso[2]       emphasis by isolation (V position)
133  eIso[3]       emphasis by isolation (radius)
134  ePlace[0]     emphasis by placement (activation)
135  ePlace[1]     emphasis by placement (U position)
136  ePlace[2]     emphasis by placement (V position)
137  ePlace[3]     falloff shape (concave vs. convex)
138  emphasis      emphasis by combination
139  bFsym         balance by symmetry (feature)
140  bGsym         balance by symmetry (global)
141  bAsymSN       balance by asymmetry (size vs. number)
142  bAsymSC       balance by asymmetry (size vs. complexity)
143  bAsymNC       balance by asymmetry (number vs. complexity)
144  rhy[0]        rhythm (activation)
145  rhy[1]        rhythm (frequency)
146  rhy[2]        rhythm (shape)
147  rhy[3]        rhythm (phase)
148  shOR          shaping: organic vs. rectilinear
149  vkey          value: high/low key
150  vrng          value: range/contrast
151  cWC           warm vs. cool
152  csat          saturation
153  cscm[0]       color scheme (activation)
154  cscm[1]       color scheme (tonality)
155  cscm[2]       color scheme (choice)
156  mat[0]        feature smoothness
157  mat[1]        feature metal

BIBLIOGRAPHY

[1] Activeworlds.com, Inc. Active Worlds. http://www.activeworlds.com, 2001.

[2] Alias|Wavefront. Maya. http://www.aliaswavefront.com, 2000.

[3] Peter John Angeline and Jordan Pollack. Coevolving high-level representations. Technical Report 92-PA-COEVOLVE, Laboratory for Artificial Intelligence, The Ohio State University, July 1993.

[4] Peter John Angeline, Gregory M. Saunders, and Jordan B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5:54-65, 1993.

[5] Riccardo Antonini. Implementing an avatar gesture development tool as a creative evolutionary collaborative system. In P. J. Bentley and D. Corne, editors, Proceedings of the AISB'99 Symposium on Creative Evolutionary Systems (CES). Morgan Kaufmann, 1999.

[6] Ken Aoki and Hideyuki Takagi. 3-D CG lighting with an interactive GA. In 1st Int'l Conf. on Conventional and Knowledge-based Intelligent Electronic Systems (KES'97), Adelaide, Australia, May 21-23, 1997.

[7] Miho Aoki. Personal communication, June 2001.

[8] Anthony A. Apodaca and Larry Gritz. Advanced RenderMan: Creating CGI for Motion Pictures. Morgan Kaufmann, 2000.

[9] Autodesk, Inc. 3D Studio MAX. http://www.discreet.com, 2000.

[10] Avid Technology, Inc. SOFTIMAGE. http://www.softimage.com, 2001.

[11] Norman Badler et al. Real time virtual humans. In International Conference on Digital Media Futures, Bradford, UK, April 1999.

[12] Ellie Baker. Evolving line drawings. In Proceedings of the Fifth International Conference on Genetic Algorithms. Morgan Kaufmann, 1993.

[13] P. Baron et al. A voxel based approach to evolutionary shape optimisation. In Proceedings 4th AISB Workshop on Evolutionary Computing, 1997.

[14] Ronen Barzel and Alan H. Barr. A modeling system based on dynamic constraints. Computer Graphics, 22:179-188, 1988.

[15] Edward J. Bedwell and David S. Ebert. Artificial evolution of algebraic surfaces. Proceedings Implicit Surfaces '99, 1999.

[16] Peter J. Bentley. Evolutionary Design by Computers. Morgan Kaufmann, 1999.

[17] Peter J. Bentley. From coffee tables to hospitals: Generic evolutionary design. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 18, pages 405-423. Morgan Kaufmann, 1999.

[18] George D. Birkhoff. Aesthetic Measure. Harvard University Press, Cambridge, 1933.

[19] Blaxxun Interactive. Avatar Studio. http://www.blaxxun.com, 2001.

[20] Jules Bloomenthal, editor. Introduction to Implicit Surfaces. Morgan Kaufmann, 1997.

[21] Beth Blostein. Procedural generation of alternative formal and spatial configurations for use in architecture and design. Unpublished, 1995.

[22] B. Blumberg, P. Todd, and P. Maes. No bad dogs: Ethological lessons for learning. In From Animals To Animats, Proceedings of the Fourth International Conference on the Simulation of Adaptive Behavior. MIT Press, 1996.

[23] Margaret A. Boden. Agents and creativity. Communications of the ACM, 37(7):117-121, July 1994.

[24] J. S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In Computer Graphics, pages 361-368. ACM SIGGRAPH, 1997.

[25] Lashon B. Booker. Improving search in genetic algorithms. In L. Davis, editor, Genetic Algorithms and Simulated Annealing, chapter 5, pages 61-73. Pitman, 1987.

[26] Sharon Calahan. Storytelling through lighting, a computer graphics perspective. In Anthony A. Apodaca and Larry Gritz, editors, Advanced RenderMan: Creating CGI for Motion Pictures, chapter 13, pages 337-382. Morgan Kaufmann, 2000.

[27] Craig Caldwell and Victor S. Johnston. Tracking a criminal suspect through "face-space" with a genetic algorithm. In Proceedings of the Fourth International Conference on Genetic Algorithms, pages 416-421, 1991.

[28] Cinema Graphics, Inc. ShadeTree. http://www.cinegrfx.com, 2001.

[29] Dave Cliff and Geoffrey F. Miller. Co-evolution of pursuit and evasion II: Simulation methods and results. In P. Maes, M. Mataric, J.-A. Meyer, J. Pollack, and S. W. Wilson, editors, From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (SAB96), pages 506-515. MIT Press Bradford Books, 1996.

[30] Paul Coates. Using genetic programming and L-systems to explore 3D design worlds. In R. Junge, editor, CAADFutures '97. Kluwer Academic, Munich, 1997.

[31] Paul Coates, Terence Broughton, and Helen Jackson. Exploring three-dimensional design worlds using Lindenmayer systems and genetic programming. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 14, pages 323-341. Morgan Kaufmann, 1999.

[32] M. F. Cohen. Interactive spacetime control for animation. Computer Graphics, 26(2):309-315, 1992.

[33] Credo Interactive, Inc. Life Forms, http://www.lifeforms.com, 2001.

[34] Curious Labs, Inc. Poser, http://www.curiouslabs.com, 2001.

[35] Cyberware. 3d scanners, http://www.cyberware.com, 2001.

[36] Sumit Das, Terry Franguidakis, Michael Papka, Thomas A. DeFanti, and Daniel J. Sandin. A genetic programming application in virtual reality. In Proceedings of the First IEEE Conference on Evolutionary Computation, volume 1, pages 480-484, Orlando, Florida, USA, 27-29 1994. IEEE Press.

[37] Richard Dawkins. The Blind Watchmaker. Penguin Books, 1986.

[38] Kenneth A. De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Systems. PhD thesis, University of Michigan, Ann Arbor, 1975.

[39] Paul E. Debevec. Pursuing reality with image-based modeling, rendering, and lighting. In Keynote paper for the Second Workshop on 3D Structure from Multiple Images of Large-scale Environments and applications to Virtual and Augmented Reality (SMILE2), Dublin, Ireland, June, 2000.

[40] Gregory Dudek. Genetic art. http://www.cim.mcgill.ca/~dudek/ga.html, 2001.

[41] David Eby, R. C. Averill, William F. Punch III, and Erik D. Goodman. The optimization of flywheels using an injection island genetic algorithm. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 7, pages 167-190. Morgan Kaufmann, 1999.

[42] Claudia Eckert, Ian Kelly, and Martin Stacey. Interactive generative systems for conceptual design: An empirical perspective. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 13:303-320, 1999.

[43] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH 2001 Proceedings, 2001.

[44] Emergent Design. Chair farm. http://www.emergent-design.com/dyn/chair.html, 2000.

[45] Emergent Design. Emergent Design, http://www.emergent-design.com, 2000.

[46] D. B. Fogel. An introduction to simulated evolutionary optimization. IEEE Transactions on Neural Networks, 5(1):3-14, 1994.

[47] Pablo Funes and Jordan Pollack. Computer evolution of buildable objects. In Peter J. Bentley, editor. Evolutionary Design by Gomputers, chapter 17, pages 387-403. Morgan Kaufmann, 1999.

[48] Pablo Funes, Elizabeth Sklar, Hughes Juille, and Jordan Pollack. The internet as a virtual ecology; Coevolutionary arms races between human and artificial populations. Technical Report CS-97-197, Volen Center for Complex Systems, Brandeis University, 1997.

[49] John Funge, Xioyuan Tu, and Demetri Terzopoulos. Cognitive modeling: Knowledge, reasoning and planning for inteUigent characters. In SIGGRAPH 99 Proceedings, 1999.

[50] Hitoshi Furuta, Kenji Maeda, and Eiichi Watanabe. Application of genetic algorithm to aesthetic design of bridge structrues. Microcomputers in Civil Engineering, 10:415-421, 1995.

[51] Richard Gatarski. Evolutionary banners: An experiment with automated ad­ vertising design. In Proceedings of COTIM ’99, 1999.

[52] Richard Gatarski and Michael S. Pontecorvo. Breed better designs: the gener­ ative approach. Designjoumalen, 6(1), 1999.

[53] Mitsuo Gen and Jong Ryul Kim. Ga-based approach to reliability testing. In Peter J. Bentley, editor. Evolutionary Design by Computers, chapter 8 , pages 191-218. Morgan Kaufmann, 1999.

178 [54} Genetic Graphics, Inc. GensEade. http://www.cmegrfic.coin/genshade/, 2001.

[55] Genetic Graphics, Inc. Interactive running. http://www.cinegr&c.com/genshade/gallery/int/int.htinl, 2 0 0 1 .

[56] Genetic Graphics, Inc. SCAD: The First Large GenShade Network. http://www.cinegrfx.com/genshade/gallery/scad/scad.html, 2001.

[57] Genetic Graphics, Inc. Selected shader database. http://www.cinegrfx.com/genshade/gallery/gui-db/gui-db.html, 2001.

[58] Michael Girard and Anthony A. Maciejewski. Computational modeling for the computer animation of legged figures. In Computer Graphics, proceedings of SIGGRAPH 85, ACM SIGGRAPH, New York, NY, pages 263-270, 1985.

[59] David E. Goldberg. The race, the hurdle, and the sweet spot: Lessons from genetic algorithms for the automation of design innovation and creativity. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 4, pages 105-118. Morgan Kaufmann, 1999.

[60] Janine Graf and Wolfgang Banzhaf. Interactive evolution of images. In D. B. Fogel, editor, Proceedings of the Fourth Annual Conference on Evolutionary Programming, pages 53-65, 1995.

[61] I. J. Graham, R. L. Wood, and K. Case. Evolutionary form design: The application of genetic algorithmic techniques to computer aided product design. In Proceedings of the 15th National Conference on Manufacturing Research (NCMR), "Advances in Manufacturing Technology" Vol. 13, September 1999.

[62] Gary Greenfield. New directions for evolving expressions. In BRIDGES: Mathematical Connections in Art, Music and Science, July 28-30, 1998.

[63] Gary Greenfield. Mathematical building blocks for evolving expressions. In BRIDGES: Mathematical Connections in Art, Music and Science, July 28-31, 2000.

[64] Gary Greenfield. Personal communication, May 9, 2001.

[65] John J. Grefenstette. Incorporating problem specific knowledge into genetic algorithms. In Genetic Algorithms and Simulated Annealing, pages 42-60. Morgan Kaufmann Publishers, 1990.

[66] L. Gritz and J. K. Hahn. Genetic programming for articulated figure motion. Journal of Visualization and Computer Animation, 6(3):129-142, July 1995.

[67] Pat Hanrahan and Paul Haeberli. Direct WYSIWYG Painting and Texturing on 3D Shapes. Computer Graphics, 24(4):215-223, 1990.

[68] Randy L. Haupt and Sue Ellen Haupt. Practical Genetic Algorithms. Wiley, New York, 1998.

[69] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of SIGGRAPH '95, pages 229-238, 1995.

[70] Joerg Heitkoetter and David Beasley. The HitchHiker's Guide to Evolutionary Computation: A list of Frequently Asked Questions (FAQ). USENET: comp.ai.genetic. Available via anonymous FTP from rtfm.mit.edu:/pub/usenet/news.answers/ai-faq/genetic/, 1994.

[71] Jano van Hemert. Mondriaan art by evolution. http://www.wi.leidenuniv.nl/~jvhemert/mondriaan, 2000.

[72] Aaron Hertzmann et al. Image analogies. In SIGGRAPH 2001 Proceedings, 2001.

[73] W. Daniel Hillis. Co-evolving parasites improve simulated evolution as an optimization procedure. In ALIFE II, pages 313-324, 1990.

[74] Andrew Hobden. Genetic algorithms for graphics textures. Technical Report EPCC-SS94-03, Edinburgh Parallel Computing Centre (EPCC), 1994.

[75] Alaa Eldin M. Ibrahim. Genshade: an evolutionary approach to automatic and interactive prodecural texture generation [sic]. PhD thesis, Architecture, Texas A&M University, 1998.

[76] Eiji Ito and Shun Ishizaki. Creative design support system using evolutionary computation. In The Second International Conference on Cognitive Science, 1999.

[77] John F. Simon, Jr. Every icon. http://www.numeral.com, 2001.

[78] Scott Johnston. Nonphotorealistic Rendering with RenderMan. In Anthony A. Apodaca and Larry Gritz, editors, Advanced RenderMan: Creating CGI for Motion Pictures, chapter 16, pages 441-480. Morgan Kaufmann, 2000.

[79] Mark W. Jones. Direct Surface Rendering of General and Genetically Bred Implicit Surfaces. In Proceedings of the 17th Annual Conference of Eurographics (UK Chapter), Cambridge, pages 37-46, 1999.

[80] A. J. Keane and N. Petruzzelli. Aircraft wing design using GA-based multi-level strategies. In Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, A.I.A.A., Long Beach, 2000.

[81] Hee-Su Kim and Sung-Bae Cho. Genetic algorithm with knowledge-based encoding for interactive fashion design. In Lecture Notes in Artificial Intelligence, pages 404-414. Springer-Verlag, 2000.

[82] Jamie Kirschenbaum. Personal communication, May 14, 2001.

[83] Junji Kotani and Masafumi Hagiwara. An evolutionary design-support-system with structural representation. In IEEE International Conference on Industrial Electronics, Control and Instrumentation 2000 (IECON2000), Nagoya, Japan, pages 672-677, 2000.

[84] John Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, 1992.

[85] John Koza, Forrest H. Bennett III, David Andre, and Martin A. Keane. The design of analogue circuits by means of genetic programming. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 16, pages 365-385. Morgan Kaufmann, 1999.

[86] T. Krink and F. Vollrath. Analysing spider web-building behaviour with rule-based simulations and genetic algorithms. Journal of Theoretical Biology, 185:321-331, 1997.

[87] Shigeru Kuriyama and Toyohisa Kaneko. Morphogenic synthesis of free-form shapes. In Proceedings of First Iteration, pages 106-115, 1999.

[88] Joseph F. Laszlo, M. van de Panne, and E. Fiume. Interactive control for physically-based animation. In Proceedings of SIGGRAPH 2000 (New Orleans, Louisiana, July 23-28, 2000), Computer Graphics Proceedings, Annual Conference Series, ACM SIGGRAPH, pages 201-209, 2000.

[89] David A. Lauer. Design Basics. Holt, Rinehart and Winston, Ft. Worth, 1990.

[90] Matthew Lewis. Aesthetic evolutionary design with data flow networks. In Proceedings of Generative Art 2000, Milan, Italy, 2000.

[91] Matthew Lewis. Evolving human figure geometry. Technical Report OSU-ACCAD-5/00-TR1, ACCAD, The Ohio State University, May 2000.

[92] Matthew Lewis. An implicit surface prototype for evolving human figure geometry. Technical Report OSU-ACCAD-11/00-TR2, ACCAD, The Ohio State University, November 2000.

[93] Matthew Lewis. Visual aesthetic evolutionary design links. http://www.cgrg.ohio-state.edu/~mlewis/aed.html, 2000.

[94] Matthew Lewis and Richard Parent. A comparison of parametric contour spaces for interactive genetic algorithms. Technical Report OSU-ACCAD-6/01-TR1, ACCAD, The Ohio State University, June 2001.

[95] Ik Soo Lim and Daniel Thalmann. Pro-actively interactive evolution for computer animation. In Proceedings of Eurographics Workshop on Animation and Simulation '99 (CAS '99), Milan, Italy, pages 45-52. Springer, 1999.

[96] H. Lipson and J. B. Pollack. Automatic design and manufacture of robotic lifeforms. Nature, 406:974-978, 2000.

[97] Henrik Hautop Lund, Luigi Pagliarini, and Orazio Miglino. Artistic design with genetic algorithms and neural networks. In J. T. Alander, editor, Proceedings of INWGA, University of Vaasa, Vaasa, 1995.

[98] Joe Marks et al. Design galleries: A general approach to setting parameters for computer graphics and animation. In SIGGRAPH 97 Proceedings, 1997.

[99] Stephen F. May. Encapsulated Models: Procedural Representations for Computer Animation. PhD thesis, Ohio State University, 1998.

[100] Stephen F. May. RManNotes. http://www.cgrg.ohio-state.edu/~smay/RManNotes, 2001.

[101] Jon McCormack. Interactive Evolution of L-System Grammars for Computer Graphics Modelling. In David Green and Terry Bossomaier, editors, Complex Systems: From Biology to Computation. IOS Press, Amsterdam, 1993.

[102] Frank McGuire. The Origins of Sculpture: Evolutionary 3D Design. IEEE Computer Graphics, pages 9-11, January 1993.

[103] Philip B. Meggs. Type & Image: The Language of Graphic Design. Van Nostrand Reinhold, New York, 1989.

[104] David Paul Miller. The generation of human-like reaching motion for an arm in an obstacle-filled 3-D static environment. PhD thesis, Ohio State University, 1993.

[105] N. Monmarché, G. Nocent, M. Slimane, and G. Venturini. Imagine: a tool for generating HTML style sheets with an interactive genetic algorithm on genes frequencies. In IEEE International Conference on Systems, Man, and Cybernetics (SMC'99), volume 3, Tokyo, Japan, October 12-15, pages 640-645, 1999.

[106] F. Kenton Musgrave. A brief introduction to fractals. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 10, pages 275-292. Academic Press, 1998.

[107] F. Kenton Musgrave. Fractal solid textures: Some examples. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 11, pages 293-324. Academic Press, 1998.

[108] F. Kenton Musgrave. Genetic textures. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 15, pages 373-384. Academic Press, 1998.

[109] F. Kenton Musgrave. Procedural fractal terrains. In David Ebert, editor, Tex­ turing and Modeling: a Procedural Approach, chapter 12, pages 325-340. Aca­ demic Press, 1998.

[110] F. Kenton Musgrave. Genetic programming, genetic art. http://www.wizardnet.com/musgrave/mutatis.html, 2001.

[111] Yasuto Nakanishi. Applying evolutionary systems to design aid system. In ALIFE V Poster presentations, pages 147-154, 1996.

[112] J. Thomas Ngo and Joe Marks. Spacetime constraints revisited. Computer Graphics, 27(Annual Conference Series):343-350, 1993.

[113] Hiroaki Nishino, Hideyuki Takagi, and Kouichi Utsumiya. A Digital Prototyping System for Designing Novel 3D Geometries. In 6th International Conference on Virtual Systems and Multimedia (VSMM2000), Ogaki, Gifu, Japan, pages 473-482, 2000.

[114] Kenichi Nishio et al. Fuzzy fitness assignment in an interactive genetic algorithm for a cartoon face search. In Elie Sanchez, Takanori Shibata, and Lotfi A. Zadeh, editors, Genetic Algorithms and Fuzzy Logic Systems: Soft Computing Perspectives, volume 7. World Scientific, 1997.

[115] Christian Niss and Andreas Müller. Evolution of color and shape. http://www2.informatik.uni-erlangen.de/IMMD-II/Persons/jacob/Evolvica/Java/ColorAndShape/cshape.html, 2001.

[116] Tore Nordstrand. Surfaces. http://www.uib.no/People/nfytn/surfaces.htm, 2001.

[117] Una-May O'Reilly and Girish Ramachandran. A preliminary investigation of evolution as a form design strategy. In C. Adami, R. Belew, H. Kitano, and C. Taylor, editors, Artificial Life VI, Los Angeles, June 26-29. MIT Press, 1998.

[118] Oxford Metrics Limited. Vicon motion capture systems. http://www.vicon.com, 2001.

[119] Luigi Pagliarini, Henrik Hautop Lund, Orazio Miglino, and Domenico Parisi. Artificial life: A new way to build educational and therapeutic games. In Proceedings of Artificial Life V. MIT Press/Bradford Books, 1996.

[120] Ian Parmee. Exploring the design potential of evolutionary search, exploration, and optimisation. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 5, pages 119-143. Morgan Kaufmann, 1999.

[121] Darwyn Peachey. Building procedural textures. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 2, pages 7-96. Academic Press, 1998.

[122] Ken Perlin. An image synthesizer. ACM Computer Graphics, 19(3), 1985.

[123] Ken Perlin. Noise, hypertexture, antialiasing, and gestures. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 9, pages 209-274. Academic Press, 1998.

[124] Ken Perlin and Athomas Goldberg. Improv: A system for scripting interactive actors in virtual worlds. In SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series, pages 205-216, 1996.

[125] Pixar Animation Studios. PhotoRealistic RenderMan. http://www.pixar.com, 2001.

[126] Riccardo Poli and Stefano Cagnoni. Evolution of pseudo-colouring algorithms for image enhancement with interactive genetic programming. In Proceedings of the Second International Conference on Genetic Programming, GP'97, pages 269-277. Morgan Kaufmann, 1997.

[127] Michael Steven Pontecorvo. Designing the undesigned: Emergence as a tool for design. In Proceedings of Generative Art 1998, Milan, Italy, 1998.

[128] Jovan Popovic, Steven M. Seitz, Michael Erdmann, Zoran Popovic, and Andrew Witkin. Interactive manipulation of rigid body simulations. In Computer Graphics (Proceedings of SIGGRAPH 2000), Annual Conference Series, pages 209-218, 2000.

[129] Przemyslaw W. Prusinkiewicz and Aristid Lindenmayer. The Algorithmic Beauty of Plants. Springer-Verlag, 1990.

[130] QBeo, Inc. PhotoGenetics. http://www.qbeo.com, 2001.

[131] Paul Rand. Paul Rand: A Designer’s Art. Yale University Press, 1985.

[132] Thomas S. Ray. Evolution and optimization of digital organisms. In Keith R. Billingsley, Ed Derohanes, and Hilton Brown III, editors, Scientific Excellence in Supercomputing: The IBM 1990 Contest Prize Papers. The Baldwin Press, The University of Georgia, Athens, GA 30602, 1991.

[133] Thomas S. Ray. Some thoughts on evolvability. http://www.hip.atr.co.jp/~ray/pubs/evolvability/, 1999.

[134] W. T. Reeves. Particle systems - a technique for modeling a class of fuzzy objects. Computer Graphics, 17(3):359-376, 1983.

[135] Craig Reynolds. Flocks, herds, and schools: A distributed behavioral model. In SIGGRAPH ’87, pages 25-34, 1987.

[136] Craig Reynolds. Evolutionary computation and its application to art and design. http://www.red3d.com/cwr/evolve.html, May 2001.

[137] Gordon Robinson, Mohammed El-Beltagy, and Andy Keane. Optimization in mechanical design. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 6, pages 147-165. Morgan Kaufmann, 1999.

[138] Steven Rooke. The Evolutionary Art of Steven Rooke. http://www.azstarnet.com/~srooke/, 2001.

[139] M. A. Rosenman. An exploration into evolutionary models for non-routine design. In D. Dasgupta and Z. Michalewicz, editors, Evolutionary Algorithms in Engineering Applications, pages 69-86. Springer-Verlag, 1997.

[140] Mike Rosenman and John Gero. Evolving designs by generating useful complex gene structures. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 15, pages 345-364. Morgan Kaufmann, 1999.

[141] Gerald P. Roston and Robert H. Sturges, Jr. Using the genetic design methodology for structure configuration. Microcomputers in Civil Engineering, 11:175-183, 1996.

[142] Andrew Rowbottom. Evolutionary art and form. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 11, pages 261-277. Morgan Kaufmann, 1999.

[143] Duncan Andrew Rowland. Computer Graphic Control over Human Face and Head Appearance, Genetic Optimisation of Perceptual Characteristics. PhD thesis, Michigan State University, 1998.

[144] Duncan Andrew Rowland. Genetic sculpture park. http://io.mindlab.msu.edu/~dunk/spark/, 2001.

[145] Timothy Rowley. A toolkit for visual genetic programming. Technical Report GCG-74, The Geometry Center, University of Minnesota, August 1994.

[146] Michael P. Salisbury, Michael T. Wong, John F. Hughes, and David H. Salesin. Orientable textures for image-based pen-and-ink illustration. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pages 401-406, 1997.

[147] Thorsten Schnier and John S. Gero. From Frank Lloyd Wright to Mondrian: Transforming evolving representation. In I. C. Parmee, editor, Adaptive Computing in Design and Manufacture, pages 207-219. Springer-Verlag, Berlin, 1998.

[148] SensAble Technologies, Inc. FreeForm modeling system. http://www.sensable.com, May 16, 2001.

[149] Mitsuhiro Shibuya, Hajime Kita, and Shigenobu Kobayashi. Integration of multi-objective and interactive genetic algorithms and its application to animation design. In IEEE International Conference on Systems, Man, and Cybernetics (SMC'99), Tokyo, Japan, volume 3, pages 647-651, 1999.

[150] Side Effects Software, Inc. Houdini. http://www.sidefx.com, 2001.

[151] Karl Sims. Artificial evolution for computer graphics. ACM Computer Graphics, 25(4):319-328, 1991.

[152] Karl Sims. Interactive evolution of dynamical systems. In Francisco J. Varela and Paul Bourgine, editors, Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, pages 171-178, Paris, France, 11-13 December 1992. MIT Press.

[153] Karl Sims. Evolving 3D Morphology and Behavior by Competition. In R. Brooks and P. Maes, editors, Artificial Life IV Proceedings, pages 28-39. MIT Press, 1994.

[154] Karl Sims. Evolving virtual creatures. Computer Graphics, 28(Annual Conference Series):15-22, July 1994.

[155] John M. Snyder. Generative Modeling for Computer Graphics and CAD: Symbolic Shape Design Using Interval Analysis. Academic Press, San Diego, 1992.

[156] Celestino Soddu. Argenic design: Chairs. http://soddu2.dst.polimi.it/stoc_sed.htm, 1999.

[157] Tomas Staudek. How can exact aesthetics recognize good design? (manuscript). http://www.fi.muni.cz/~toms/documents/design.ps.gz.

[158] Tomas Staudek. On Birkhoff's Aesthetic Measure of Vases. Technical Report FIMU-RS-99-06, Faculty of Informatics, Masaryk University, September 1999.

[159] Gilbert Syswerda. Uniform crossover in genetic algorithms. In Proceedings of the Third International Conference on Genetic Algorithms and their Applications, 1989.

[160] P. Tabuada, P. Alves, J. Gomes, and A. Rosa. 3D Artificial Art by Genetic Algorithms. In Proceedings of the Workshop on Evolutionary Design at Artificial Intelligence in Design - AID '98, pages 18-21, 1998.

[161] Hideyuki Takagi. Interactive Evolutionary Computation: Fusion of the Capabilities of EC Optimization and Human Evaluation. Proceedings of the IEEE, 89(9):1275-1296, September 2001.

[162] Takeo Igarashi, Satoshi Matsuoka, and Hidehiko Tanaka. Teddy: A Sketching Interface for 3D Freeform Design. In ACM SIGGRAPH '99, Los Angeles, pages 409-416, 1999.

[163] Stephen Todd. Personal communication, August 7, 2000.

[164] Stephen Todd. Personal communication, May 14, 2001.

[165] Stephen Todd and William Latham. Evolutionary Art and Computers. Aca­ demic Press, 1992.

[166] Stephen Todd and William Latham. The mutation and growth of art by computers. In Peter J. Bentley, editor, Evolutionary Design by Computers, chapter 9, pages 221-250. Morgan Kaufmann, 1999.

[167] Christopher Traxler and Michael Gervautz. Using Genetic Algorithms to Improve the Visual Quality of Fractal Plants Generated with CSG-PL-Systems. Technical Report TR-186-2-96-04, Institute of Computer Graphics and Algorithms, Vienna University of Technology, 1996.

[168] Tatsuo Unemi. SBART2.4: Breeding 2D CG Images and Movies, and Creating a type of Collage. In The Third International Conference on Knowledge-based Intelligent Information Engineering Systems, Adelaide, Australia, August, pages 288-291, 1999.

[169] Steve Upstill. The RenderMan Companion. Addison Wesley, Reading, MA, 1990.

[170] Jeffrey Ventrella. Disney meets Darwin: The evolution of funny animated fig­ ures. In Computer Animation ’95 Proceedings, Geneva, Switzerland, 1995.

[171] Jeffrey Ventrella. Tweaks. http://www.ventrella.com, 2000.

[172] Gilles Venturini et al. On using interactive genetic algorithms for knowledge discovery in databases. In T. Baeck, editor, 7th International Conference on Genetic Algorithms (ICGA '97), East Lansing (U.S.A.), July 19-23, pages 693-703, 1997.

[173] Lawson Wade. Personal communication, October 27, 2000.

[174] Hirokazu Watabe and Norio Okino. A study on genetic shape design. In Stephanie Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms, Urbana-Champaign (IL), pages 445-450, 1993.

[175] WebNation. twoJane.blacktop - A procedural shader for a street surface. http://www.webnation.com/vidirep/panels/renderman/shaders/, 2001.

[176] Mitchell Whitelaw. Breeding aesthetic objects: Art and artificial evolution. In P. J. Bentley and D. Corne, editors, Proceedings of the AISB'99 Symposium on Creative Evolutionary Systems (CES). Morgan Kaufmann, 1999.

[177] Steven Worley. Practical methods for texture design. In David Ebert, editor, Texturing and Modeling: a Procedural Approach, chapter 3, pages 97-122. Academic Press, 1998.
