Evolutionary Programming and Evolution Strategies:
Similarities and Differences

Thomas Bäck, Günter Rudolph, Hans-Paul Schwefel

University of Dortmund
Department of Computer Science, Chair of Systems Analysis
P.O. Box, Dortmund, Germany

Abstract

Evolutionary Programming and Evolution Strategies, rather similar representatives of a class of probabilistic optimization algorithms gleaned from the model of organic evolution, are discussed and compared to each other with respect to similarities and differences of their basic components as well as their performance in some experimental runs. Theoretical results on global convergence, step size control for a strictly convex quadratic function, and an extension of the convergence rate theory for Evolution Strategies are presented and discussed with respect to their implications on Evolutionary Programming.

1 Introduction

Developed independently from each other, three main streams of so-called Evolutionary Algorithms, i.e. algorithms based on the model of natural evolution as an optimization process, can nowadays be identified: Evolutionary Programming (EP), developed by L. J. Fogel et al. in the US; Genetic Algorithms (GAs), developed by J. Holland, also in the US; and Evolution Strategies (ESs), developed in Germany by I. Rechenberg and H.-P. Schwefel.

These algorithms are based on an arbitrarily initialized population of search points which, by means of randomized processes of selection, mutation, and sometimes recombination, evolves towards better and better regions of the search space. Fitness of individuals is measured by means of an objective function to be optimized, and several applications of these algorithms to real-world problems have clearly demonstrated their capability to yield good approximate solutions even in case of complicated multimodal topological surfaces of the fitness landscape (for overviews of applications, the reader is referred to the conference proceedings or to the annotated bibliography).

Until recently, development of these main streams was completely independent from each other. Since then, however, contact between the GA-community and the ES-community has been established, confirmed by collaborations and scientific exchange during regularly alternating conferences in the US (International Conference on Genetic Algorithms and their Applications, ICGA, since 1985) and in Europe (International Conference on Parallel Problem Solving from Nature, PPSN, since 1990). Contact between the EP-community and the ES-community, however, has been established for the first time just in 1992. For algorithms bearing so much similarity as ESs and EP do, this is a surprising fact.

Similar to a paper comparing ES and GA approaches, the aim of this article is to give an introduction to ESs and EP and to look for similarities and differences between both approaches. A brief overview of the historical development of ESs as well as an explanation of the basic algorithms are given in section 2. In section 3 the basic EP algorithm is presented. Section 4 then serves to discuss theoretical results from ESs and their possible relations to the behaviour of an EP algorithm. Section 5 presents a practical comparison of both algorithms, using a few objective functions with different topological shapes. Finally, an overview of similarities and differences of both approaches is summarized in section 6.

E-mail: baeck@ls11.informatik.uni-dortmund.de, rudolph@ls11.informatik.uni-dortmund.de, schwefel@ls11.informatik.uni-dortmund.de

2 Evolution Strategies

Similar to EP, ESs are also based on real-valued object variables and normally distributed random modifications with expectation zero. According to Rechenberg, first experimental applications to parameter optimization, performed during the middle of the sixties at the Technical University of Berlin, dealt with hydrodynamical problems like shape optimization of a bended pipe and a flashing nozzle. The algorithm used was a simple mutation-selection scheme working on one individual, which created one offspring by means of mutation. The better of parent and offspring is selected deterministically to survive to the next generation, a selection mechanism which characterizes this two-membered or (1+1)-ES. Assuming inequality constraints g_j : IR^n -> IR of the search space, an objective function f : M (a subset of IR^n) -> IR, where the feasible region M = {v in IR^n | for all j: g_j(v) >= 0} is defined by the inequality constraints g_j, a minimization task, and individual vectors x(t) in IR^n, where t denotes the generation counter, a (1+1)-ES is defined by the following algorithm:

Algorithm 1 ((1+1)-ES)

    t := 0;
    initialize P(0) := {x(0)}
        such that for all j: g_j(x(0)) >= 0;
    while termination criterion not fulfilled do
        mutate:   P'(t): x'(t) := x(t) + z(t),
                  where each component z_i(t) has probability density
                  p(z_i(t)) = (2*pi*sigma^2)^(-1/2) * exp(-z_i(t)^2 / (2*sigma^2));
        evaluate: P'(t): f(x'(t)), f(x(t));
        select:   P(t+1) from P'(t) and P(t):
                  if f(x'(t)) <= f(x(t)) and for all j: g_j(x'(t)) >= 0
                  then x(t+1) := x'(t)
                  else x(t+1) := x(t);
        t := t + 1;
    od
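Algorithm 1 can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it assumes an unconstrained problem (the sphere model), a fixed standard deviation, and illustrative parameter values.

```python
import random

def sphere(x):
    """Sphere model: f(x) = sum of x_i^2, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(f, x0, sigma=0.1, generations=2000, seed=1):
    """Minimal (1+1)-ES: one parent, one Gaussian-mutated offspring,
    deterministic selection of the better of the two."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(generations):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:          # the offspring replaces the parent only if not worse
            x, fx = child, fc
    return x, fx

x, fx = one_plus_one_es(sphere, [5.0] * 10)
print(fx)  # far below the starting value f(x0) = 250
```

Even with a fixed sigma the scheme makes steady progress on this unimodal function; the step size control discussed next is what yields convergence rates of linear order.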

To each component of the vector x(t) the same standard deviation sigma is applied during mutation. The variation of sigma, i.e. the step size control of the algorithm, is done according to a theoretically supported rule which is due to Rechenberg. For the objective functions f1, a linear corridor of width 2b:

    f1(x) = F(x1) = c0 + c1*x1 ;  for all i in {2, ..., n}: -b <= x_i <= b

and the sphere model f2:

    f2(x) = c0 + c1 * (x_1^2 + ... + x_n^2) = c0 + c1*r^2

he calculated the optimal expected convergence rates, from which the corresponding optimal success probabilities p_opt ~ 0.184 and p_opt ~ 0.270 can be derived for f1 and f2, respectively. This forms the basis of Rechenberg's 1/5 success rule:

    The ratio of successful mutations to all mutations should be 1/5. If it is greater, increase; if it is less, decrease the standard deviation.

For this algorithm Schwefel suggested to measure the success probability p by observing this ratio during the search and to adjust sigma(t) according to

    sigma(t) = sigma(t-n) * c   if p < 1/5
    sigma(t) = sigma(t-n) / c   if p > 1/5
    sigma(t) = sigma(t-n)       if p = 1/5

For the constant c he proposed to use c = 0.817. Doing so yields convergence rates of linear order in both model cases.
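The 1/5 success rule can be sketched as follows. This is a hedged illustration, not the reference implementation: the objective (sphere model), the initial sigma, and the generation budget are illustrative choices; the adaptation interval of n mutations and c = 0.817 follow the rule above.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def es_one_fifth(f, x0, sigma=1.0, c=0.817, generations=3000, seed=1):
    """(1+1)-ES with the 1/5 success rule: every n generations the observed
    success ratio is compared with 1/5 and sigma is multiplied or divided by c."""
    rng = random.Random(seed)
    n = len(x0)
    x, fx, successes = list(x0), f(x0), 0
    for t in range(1, generations + 1):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc <= fx:
            x, fx, successes = child, fc, successes + 1
        if t % n == 0:                 # adapt once per block of n mutations
            p = successes / n
            if p < 1 / 5:
                sigma *= c             # too few successes: smaller steps
            elif p > 1 / 5:
                sigma /= c             # many successes: larger steps
            successes = 0
    return fx, sigma

fx, sigma = es_one_fifth(sphere, [10.0] * 10)
print(fx, sigma)
```

Because sigma is rescaled in proportion to the distance from the optimum, the error decreases geometrically, in line with the linear-order convergence mentioned above.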

The multimembered ES introduces the concepts of population, recombination, and self-adaptation of strategy parameters into the algorithm. According to the selection mechanism, the (mu+lambda)-ES and the (mu,lambda)-ES are distinguished, the first case indicating that mu parents create lambda offspring individuals by means of recombination and mutation, and that the best mu individuals out of parents and offspring are selected to form the next population. For a (mu,lambda)-ES (with lambda > mu) the mu best individuals are selected from the lambda offspring only. Each individual is characterized not only by a vector x of object variables but also by an additional vector of strategy variables. The latter may include up to n different variances c_ii = sigma_i^2 (i in {1, ..., n}) as well as up to n*(n-1)/2 covariances c_ij (i in {1, ..., n-1}, j in {i+1, ..., n}) of the generalized n-dimensional normal distribution having the probability density function

    p(z) = ( det(A) / (2*pi)^n )^(1/2) * exp( -(1/2) * z' A z )

Altogether, up to w = n*(n+1)/2 strategy parameters can be varied during the optimum search by means of a selection-mutation-recombination mechanism. To assure positive definiteness of the covariance matrix, the algorithm uses the equivalent rotation angles alpha_j instead of the coefficients c_ij. The resulting algorithm reads as follows:

Algorithm 2 ((mu+lambda)-ES, (mu,lambda)-ES)

    t := 0;
    initialize P(0) := {a_1(0), ..., a_mu(0)} in I^mu,
        where I = IR^(n+w) and a_k = (x, sigma, alpha);
    evaluate P(0);
    while termination criterion not fulfilled do
        recombine: a'_k(t) := r(P(t))    for all k in {1, ..., lambda};
        mutate:    a''_k(t) := m(a'_k(t)) for all k in {1, ..., lambda};
        evaluate:  P'(t) := {a''_1(t), ..., a''_lambda(t)}:
                   {f(x''_1(t)), ..., f(x''_lambda(t))};
        select:    P(t+1) := if (mu,lambda)-ES
                   then s_(mu,lambda)(P'(t))
                   else s_(mu+lambda)(P'(t) and P(t));
        t := t + 1;
    od

Then the mutation operator m must be extended according to (dropping time counters t):

    m(a) = m((x, sigma, alpha)) = (x', sigma', alpha') = a'

performing component-wise operations as follows:

    sigma'_i = sigma_i * exp( tau' * z + tau * z_i )
    alpha'_j = alpha_j + beta * z'_j
    x'       = x + z''

This way, the mutations z'' of the object variables are correlated according to the values of the vector alpha', and sigma' provides a scaling of the linear metrics. The alterations z, z_i, and z'_j are again normally distributed with expectation zero and variance one, and the constants tau (proportional to (2*sqrt(n))^(-1/2)), tau' (proportional to (2n)^(-1/2)), and beta are rather robust exogenous parameters. z, scaled by tau', is a global factor identical for all i in {1, ..., n}, whereas z_i is an individual factor sampled anew for all i in {1, ..., n}, allowing for individual changes of the mean step sizes sigma_i.
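The log-normal self-adaptation of step sizes can be sketched as follows. This is a simplified illustration that omits the rotation angles (i.e. uncorrelated mutations with n step sizes); the proportionality constants for tau and tau' are set to one, which is an assumption, not a value from the paper.

```python
import math
import random

rng = random.Random(1)

def mutate(x, sigma):
    """ES mutation with log-normal self-adaptation of n step sizes:
    sigma'_i = sigma_i * exp(tau_g * z + tau * z_i),  x'_i = x_i + sigma'_i * N(0,1).
    Rotation angles (correlated mutations) are omitted here."""
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # individual learning rate
    tau_g = 1.0 / math.sqrt(2.0 * n)            # global learning rate
    z = rng.gauss(0.0, 1.0)                     # one global sample per mutation
    new_sigma = [s * math.exp(tau_g * z + tau * rng.gauss(0.0, 1.0))
                 for s in sigma]
    new_x = [xi + s * rng.gauss(0.0, 1.0) for xi, s in zip(x, new_sigma)]
    return new_x, new_sigma

x, sigma = mutate([0.0] * 5, [1.0] * 5)
print(x, sigma)
```

Note that the multiplicative log-normal update keeps every sigma_i strictly positive by construction, a property that becomes relevant in the comparison with meta-EP in section 3.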

Concerning recombination, different mechanisms can be used within ESs, where in addition to the usual recombination of two parents, global mechanisms allow for taking into account up to all individuals of a population during the creation of one single offspring individual. The recombination rules for an operator r : I^mu -> I creating an individual a' = (x', sigma', alpha') in I are given here representatively for the object variables:

    x'_i =  x_(S,i)                                        (no recombination)
            x_(S,i) or x_(T,i)                             (discrete)
            x_(S,i) + u * (x_(T,i) - x_(S,i))              (intermediate)
            x_(S_i,i) or x_(T_i,i)                         (global discrete)
            x_(S_i,i) + u_i * (x_(T_i,i) - x_(S_i,i))      (global intermediate)

Indices S and T denote two arbitrarily selected parent individuals, and u is a uniform random variable on [0,1]. Besides completely missing recombination, the different variants indicated are discrete recombination, intermediate recombination, and the global versions of the latter two, respectively; in the global case the partners S_i and T_i are sampled anew for each component i. Empirically, discrete recombination on object variables and intermediate recombination on strategy parameters have been observed to give best results.
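Two of the recombination variants above can be sketched in code. This is an illustrative implementation under the stated rules; the population here is a toy example.

```python
import random

rng = random.Random(1)

def discrete(parents):
    """Discrete recombination: each component is copied from one of two
    randomly chosen parents S and T."""
    s, t = rng.sample(parents, 2)
    return [si if rng.random() < 0.5 else ti for si, ti in zip(s, t)]

def global_intermediate(parents):
    """Global intermediate recombination: for every component a fresh pair
    of parents is drawn and blended with a uniform weight u_i."""
    n = len(parents[0])
    child = []
    for i in range(n):
        s, t = rng.sample(parents, 2)
        u = rng.random()
        child.append(s[i] + u * (t[i] - s[i]))
    return child

pop = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(5)]
print(discrete(pop), global_intermediate(pop))
```

Each intermediate component lies between the two sampled parent values, so the offspring always stays inside the component-wise range of the population.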

3 Evolutionary Programming

Following the description of an EP algorithm as given by Fogel, and using the notational style from the previous section, an EP algorithm is formulated as follows:

Algorithm 3 (EP)

    t := 0;
    initialize P(0) := {x_1(0), ..., x_mu(0)} in I^mu,
        where I = IR^n;
    evaluate P(0): F(x_k(0)) = G(f(x_k(0)));
    while termination criterion not fulfilled do
        mutate:   x'_k(t) := m(x_k(t))  for all k in {1, ..., mu};
        evaluate: P'(t) := {x'_1(t), ..., x'_mu(t)}:
                  {F(x'_1(t)), ..., F(x'_mu(t))};
        select:   P(t+1) := s(P'(t) and P(t));
        t := t + 1;
    od

Besides a missing recombination operator, fitness evaluation, mutation, and selection are different from the corresponding operators in ESs. Fitness values F(x_k) are obtained from objective function values f(x_k) by scaling them to positive values (function G) and possibly by imposing some random alteration. For mutation, the standard deviation for each individual's mutation is calculated as the square root of a linear transformation of its own fitness value, i.e. for mutation m(x) = x', for all i in {1, ..., n}:

    x'_i   = x_i + sigma_i * z_i
    sigma_i = ( beta_i * F(x) + gamma_i )^(1/2)

Again, the random variable z_i has probability density p(z_i) = (2*pi)^(-1/2) * exp(-z_i^2 / 2). This way, for each component of the vector x a different scaling of mutations can be achieved by tuning the parameters beta_i and gamma_i, which however are often set to one and zero, respectively.

The selection mechanism s : I^(2*mu) -> I^mu reduces the set of 2*mu parent and offspring individuals to a set of mu parents by performing a kind of q-tournament selection. In principle, for each individual x_j from P'(t) and P(t), q individuals are selected at random from P'(t) and P(t) and compared to x_j with respect to their fitness values. Then it is counted for each of the q selected individuals whether x_j outperforms that individual, resulting in a score w_j between 0 and q. When this is finished for all individuals, the individuals are ranked in descending order of the rank values w_j, and the mu individuals having highest ranks w_j are selected to form the next population. Using a more formal notation, rank values w_j are obtained as follows:

    w_j = sum over i in {1, ..., q} of 1_(IR+)( F(x_(u_i)) - F(x_j) )

u_i denotes a uniform integer random variable on the range of indices {1, ..., 2*mu}, which is sampled anew for each comparison; the indicator function 1_A(x) is one if x is in A and zero otherwise, and IR+ = {x in IR | x >= 0}.
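The q-tournament scheme above can be sketched as follows. This is a hedged illustration assuming minimization of the (already positively scaled) fitness values, a toy population, and opponents drawn with replacement.

```python
import random

rng = random.Random(1)

def ep_select(pop, fitness, mu, q=10):
    """EP q-tournament selection: each individual scores a win against a
    random opponent whenever its fitness is not worse (minimization);
    the mu highest-scoring individuals survive."""
    scores = []
    for k in range(len(pop)):
        wins = 0
        for _ in range(q):
            opponent = rng.randrange(len(pop))
            if fitness[k] <= fitness[opponent]:
                wins += 1
        scores.append((wins, k))
    scores.sort(key=lambda wk: -wk[0])          # rank by score, best first
    return [pop[k] for _, k in scores[:mu]]

pop = [[float(i)] for i in range(20)]           # toy population
fit = [x[0] ** 2 for x in pop]                  # minimize x^2
survivors = ep_select(pop, fit, mu=10)
print(survivors)
```

Since the best individual can never lose a comparison, it always attains the maximum score q and therefore always survives, which illustrates the elitist property of this selection scheme noted below.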

Intuitively, the selection mechanism implements a kind of probabilistic (mu+mu)-selection, which becomes more and more deterministic as the external parameter q is increased, i.e. the probability that the selected set of individuals is the same as in a deterministic (mu+mu)-selection scheme tends to unity as q increases. The selection scheme guarantees survival of the best individual, since this is assigned a guaranteed maximum fitness score of q.

In general, the standard EP algorithm imposes some parameter tuning difficulties on the user concerning the problem of finding useful values for beta_i and gamma_i in case of arbitrary high-dimensional objective functions. To overcome these difficulties, D. B. Fogel developed an extension called meta-EP that self-adapts n variances c_1, ..., c_n per individual, quite similar to ESs. Then mutation m(a) = a', applied to an individual a = (x, c), produces a' = (x', c') according to (for all i in {1, ..., n}):

    x'_i = x_i + c_i^(1/2) * z_i            = x_i + sigma_i * z_i
    c'_i = c_i + zeta * c_i^(1/2) * z'_i    = sigma_i^2 + zeta * sigma_i * z'_i

where the second identities hold because of c_i = sigma_i^2, and zeta denotes an exogenous parameter. To prevent variances from becoming negative, Fogel proposes to set c_i := epsilon whenever a negative variance would occur by means of the modification rule. However, while the log-normally distributed alterations of standard deviations in ESs automatically guarantee positivity of sigma_i, the mechanism used in meta-EP is expected to cause variances to be set to epsilon rather often, thus essentially leading to a reduction of the dimension of the search space when epsilon is small. It is surely interesting to perform an experimental investigation on the strengths and weaknesses of both self-adaptation mechanisms.

In addition to standard deviations, the R-meta-EP algorithm as proposed by D. B. Fogel also incorporates the complete vector of n*(n-1)/2 correlation coefficients rho_ij = c_ij / (sigma_i * sigma_j) (i in {1, ..., n-1}, j in {i+1, ..., n}), representing the covariance matrix, into the genotype for self-adaptation, quite similar to correlated mutations in ESs. This algorithm, however, was implemented and tested by Fogel only for small n, and currently the extension to larger n is not obvious, since positive definiteness and symmetry of the covariance matrix must be guaranteed on the one hand, while on the other hand it is necessary to find an implementation capable of producing any valid correlation matrix. For correlated mutations in ESs, the feasibility of the correlation procedure has recently been shown by Rudolph.

4 Theoretical Properties of Evolution Strategies

4.1 Problem statement and method of formulation

Before summarizing convergence results of ES-type optimization methods, some basic definitions and assumptions are to be made.

Definition 1
An optimization problem of the form

    f* = f(x*) = min{ f(x) | x in M, M a subset of IR^n }

is called a regular global optimization problem iff
    (A1) f* > -infinity,
    (A2) x* lies in the interior of M, and
    (A3) the lower level sets L_(f*+eps) have positive Lebesgue measure for every eps > 0.

Here f is called the objective function, M the feasible region, f* the global minimum, x* the global minimizer or solution, and L_a = {x in M | f(x) <= a} the lower level set of f. The first assumption makes the problem meaningful, whereas the second assumption is made to facilitate the analysis. The last assumption skips those problems where optimization is a hopeless task.

Furthermore, let us rewrite the ES-type algorithm given in a previous section in a more compact form:

    X_(t+1) = X_t + Y_t * 1_( L_(f(X_t)) )( X_t + Y_t )

where the indicator function 1_A(x) is one if x is in A and zero otherwise, and where Y_t is a random vector with some probability density p_Y(y) = p_Z(y - x_t). Usually, p_Z is chosen as a spherical or elliptical distribution.

Definition 2
A random vector z of dimension n is said to possess an elliptical distribution iff it has the stochastic representation z = r * Q' * u, where the random vector u is uniformly distributed on a hypersphere surface, stochastically independent of a nonnegative random variable r, and where the (k x n) matrix Q has rank(Q) = k. If Q is (n x n) and Q = I, then z is said to possess a spherical symmetric distribution.

From the above definition it is easy to see that an ES-type algorithm using a spherical or elliptical distribution is equivalent to a random direction method with some step size distribution. In fact, if z is multinormally distributed with zero mean and covariance matrix C = sigma^2 * I, then the step size r has a (scaled) chi_n-distribution (see Fang et al. for more details). If the matrix Q and the distribution of r are fixed during the optimization, we shall say that the algorithm has a stationary step size distribution; otherwise the distribution is called adapted.
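The decomposition of a spherically symmetric mutation into a step length and a uniform direction can be sketched directly; this is an illustration of Definition 2 for the multinormal case, not code from the paper.

```python
import math
import random

rng = random.Random(1)

def mutation_vector(n, sigma=1.0):
    """Spherically symmetric mutation z = r * u: a multinormal N(0, sigma^2 I)
    sample decomposed into its step length r (chi-distributed, scaled by sigma)
    and its direction u, which is uniform on the hypersphere surface."""
    z = [sigma * rng.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(sum(zi * zi for zi in z))
    u = [zi / r for zi in z]
    return r, u

r, u = mutation_vector(10)
print(r, [round(ui, 3) for ui in u])
```

Normalizing a standard normal vector is the usual way to obtain a uniform direction, which is why the ES mutation is equivalent to a random direction method with a particular step size distribution.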

4.2 Global convergence property

The proof of global convergence has been given by several authors independently. Although the conditions are slightly different among the papers, each proof is based on the so-called Borel-Cantelli Lemma, whose application makes a convergence proof for an elitist GA trivial.

Theorem 1
Let p_t = P{ X_t in L_(f*+eps) } be the probability to hit the level set L_(f*+eps) at step t. If

    sum over t of p_t = infinity

then f(X_t) -> f* almost surely for t -> infinity, or equivalently

    P{ lim (t -> infinity) f(X_t) = f* } = 1

for any starting point x_0 in M.

Lemma 1
Let C be the support of the stationary density p_Z. If the level sets L_a are bounded for all a > f*, M is contained in C, and M is bounded, then p_t >= p_min > 0 for all t, so that lim inf (t -> infinity) p_t > 0 and the condition of theorem 1 holds.

Of course, theorem 1 is only of academic interest, because we have no unlimited time to wait. However, if the condition of theorem 1 is not fulfilled, then one may conclude that the probability to obtain the global minimum for any starting point x_0 in M tends to zero with increasing t, as pointed out by Pinter.

4.3 Convergence rates

The attempt to determine the convergence rate of algorithms of this type was initiated by Rastrigin and continued by Schumer and Steiglitz and by Rechenberg. Each of them calculated the expected progress with respect to the objective value or the distance to the minimizer of special convex functions. Since the latter measure is not well-defined in general, we shall use the following definition:

Definition 3
The value E(f(X_t)) - f* is said to be the expected error at step t. An algorithm has a sublinear convergence rate iff the expected error is O(t^(-b)) with b > 0, and a geometrical convergence rate iff it is O(r^t) with r in (0,1).

Whereas the proof of global convergence can be given for a broad class of problems, the situation changes for proofs concerning convergence rates. Seemingly, the only chance is to restrict the analysis to a smaller class of problems that possesses some special properties. This has been done by Rappl, and it generalizes the results on convex problems mentioned above.

Theorem 2
Let f be an (l,Q)-strongly convex function, i.e. f is continuously differentiable and with l > 0, Q >= 1 there holds for all x, y in M:

    l * ||x - y||^2  <=  < grad f(x) - grad f(y), x - y >  <=  Q * l * ||x - y||^2

and let the mutation vector have stochastic representation Z = R * U, where R has nonvoid support contained in IR+. Then the expected error of the algorithm decreases for any starting point x_0 in M with the following rates:

    E(f(X_t)) - f* =  O(t^(-2/n))          (p_Z stationary)
                      O(r^t), r in (0,1)   (p_Z adapted)

In order to adapt the step sizes, one may choose R_t proportional to ||grad f(X_t)||. Moreover, Rappl has shown that the step size can be adapted via a success/failure control similar to the proposal of Rechenberg: the idea is to decrease the step size by some factor if there was a failure and to increase the step size by another factor if there was a success; then geometrical convergence follows for a suitable choice of these factors.

It should be noted that even with geometrical convergence, the expected number of steps to achieve a certain accuracy may differ immensely (e.g., consider the number of steps needed if r is close to zero in contrast to r close to one). Therefore, the search for optimal step size schedules should not be neglected.

Example 1
Let f(x) = ||x||^2 with M = IR^n be the problem. Clearly, f is strongly convex:

    < grad f(x) - grad f(y), x - y > = 2 * ||x - y||^2

The mutation vector Z is chosen to be multinormally distributed with zero mean and covariance matrix C = sigma^2 * I. Consequently, for the distribution of the objective function values we have that f(x_t + Z) / sigma^2 follows a noncentral chi-square distribution with n degrees of freedom and noncentrality parameter delta = ||x_t||^2 / sigma^2. Using the fact that this distribution, standardized by its mean (n + delta) and standard deviation (2*(n + 2*delta))^(1/2), tends to N(0,1) for n -> infinity, the limiting distribution of the normalized variation of objective function values

    V = ( f(x_t) - f(x_t + Z) ) / f(x_t)

becomes

    n * V  ->  2*s*N - s^2,   with s = sigma * n / ||x_t||

and N a unit normal random variable. Since the algorithm accepts only improvements, we are interested in the expectation of the random variable V+ = max{V, 0}; evaluating the resulting integral yields

    n * E(V+) = (2*s / (2*pi)^(1/2)) * exp(-s^2/8) - s^2 * Phi(-s/2)

where Phi denotes the cdf of a unit normal random variable. The expectation becomes maximal for s* ~ 1.224 (see figure 1), such that n * E(V+) ~ 0.405 and

    sigma* = s* * ||x_t|| / n ~ 1.224 * ||x_t|| / n = 1.224 * (f(x_t))^(1/2) / n = 0.612 * ||grad f(x_t)|| / n

where 0.612 corresponds to the value also given by Rechenberg, which can be converted to the notations of Fogel and of Rappl, respectively. Obviously, if we modify f to f_a(x) = ||x||^2 + 1, then a control proportional to (f_a(x_t))^(1/2) will fail to provide geometrical convergence: one has to subtract some constant from f_a, which depends on the value of the unknown global minimum. Similarly, when optimizing f_b(x) = ||x - x*||^2, a control proportional to ||x_t|| will fail, whereas the control proportional to ||grad f(x_t)|| will succeed in all cases, due to its invariance with respect to the location of the unknown global minimizer and the value of the unknown global minimum. In addition, the dependence on the problem dimension n is of importance: omitting this factor, geometrical convergence can still be guaranteed, but it will be very slow compared to the optimal setting (see figure 1).

Figure 1: Normalized expected improvement n * E(V+) versus normalized standard deviation s = sigma * n / ||x_t||.
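The difference between a stationary and an adapted step size distribution in example 1 can be demonstrated numerically. This is a hedged sketch: the dimension, budget, and stationary sigma are illustrative choices; the adapted control uses sigma = 0.612 * ||grad f(x)|| / n from the example.

```python
import math
import random

rng = random.Random(1)

def run(n=10, steps=3000, adapted=True):
    """(1+1)-ES on f(x) = ||x||^2 with either a stationary step size or the
    gradient-norm step size control sigma = 0.612 * ||grad f(x)|| / n."""
    x = [1.0] * n
    fx = sum(xi * xi for xi in x)
    sigma = 0.1
    for _ in range(steps):
        if adapted:
            grad_norm = 2.0 * math.sqrt(fx)      # ||grad f(x)|| = 2 * ||x||
            sigma = 0.612 * grad_norm / n
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sum(yi * yi for yi in y)
        if fy <= fx:
            x, fx = y, fy
    return fx

f_adapted = run(adapted=True)
f_stationary = run(adapted=False)
print(f_adapted, f_stationary)
```

The adapted control keeps the normalized step size near its optimum s* and converges geometrically, while the stationary variant stalls once the fixed sigma becomes large relative to the remaining distance to the optimum.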

4.4 (1+lambda)-Evolution Strategies

Obviously, theorems 1 and 2 can be applied to this type of algorithm. However, as pointed out by Schwefel, there are some differences concerning the convergence rates: the larger the number lambda of offspring, the better the convergence rate. We shall discuss this relationship in the next subsection. Wardi proposed a variant in which lambda is a random variable depending on the amount of improvement, i.e. new offspring are generated as long as the improvement is below a certain decreasing limit, which is used to adjust the step sizes as well.

4.5 (1,lambda)-Evolution Strategies

For this type of algorithm, theorem 1 cannot be applied. Although it is possible to show that the level set L_(f*+eps) will be hit with some probability, there is no guarantee of convergence; a (1,1)-ES, for example, is simply a random walk. A possible way to avoid nonconvergence may be achieved by restricting the probability of accepting a worse point. In fact, if this probability is decreasing with a certain rate over time, global convergence can be assured under some conditions (see Haario and Saksman). With this additional feature, a (1,lambda)-ES without recombination is equivalent to special variants of so-called Simulated Annealing algorithms that are designed for optimization in IR^n; this relationship is investigated in Rudolph (preprint).

Despite the lack of a theoretical guarantee of global convergence, it is possible to calculate the convergence rates for some special problems. This has been done by Schwefel and by Scheel for the same problem as in the previous example, using a (1,lambda)-ES.

Example 2
Similarly to example 1, for large n the optimal setting of sigma can be calculated. For the (1,lambda)-ES the best of lambda independent trials is accepted, so that the limiting normalized improvement becomes n*V = 2*s*M_lambda - s^2, with M_lambda the maximum of lambda independent unit normal variables; its expectation 2*s*c_lambda - s^2 is maximal for

    sigma* = c_lambda * ||x_t|| / n = c_lambda * ||grad f(x_t)|| / (2*n)

Then the expected improvement is E(V) ~ c_lambda^2 / n, with

    c_lambda = (lambda / (2*pi)^(1/2)) * integral of u * exp(-u^2/2) * Phi^(lambda-1)(u) du

The value of c_lambda has been tabulated by Schwefel. However, a closer look at this expression reveals that it is equivalent to the expected value of the maximum of lambda iid unit normal random variables: let X_i ~ N(0,1) for i = 1, ..., lambda. Then M_lambda = max{X_1, ..., X_lambda} is a random variable with cdf P{M_lambda <= x} = F_lambda(x) = Phi^lambda(x); consequently, c_lambda = E(M_lambda). According to Resnick,

    P{ (M_lambda - b_lambda) / a_lambda <= x } = F_lambda( a_lambda * x + b_lambda )  ->  G(x) = exp(-e^(-x))

for lambda -> infinity, with

    a_lambda = (2 * log lambda)^(-1/2)
    b_lambda = (2 * log lambda)^(1/2) - ( log log lambda + log 4*pi ) / ( 2 * (2 * log lambda)^(1/2) )

Let Y have cdf G(x); then M_lambda is approximately a_lambda * Y + b_lambda, and due to the linearity of the expectation operator,

    c_lambda = E(M_lambda) ~ a_lambda * E(Y) + b_lambda
             = gamma * (2 * log lambda)^(-1/2) + (2 * log lambda)^(1/2) - ( log log lambda + log 4*pi ) / ( 2 * (2 * log lambda)^(1/2) )

where gamma ~ 0.5772 denotes Euler's constant. Now it is possible to derive an asymptotic expression for the speedup, assuming that the evaluations of the lambda trial points are performed in parallel. Let E(t_1) and E(t_lambda) denote the expected number of steps to achieve a given accuracy for a (1+1)-ES and a (1,lambda)-ES, respectively. Then for the expected speedup there holds

    E(t_1) / E(t_lambda) ~ log(1 - c_lambda^2 / n) / log(1 - 0.405 / n) ~ c_lambda^2 / 0.405 ~ 2 * log lambda / 0.405 = O(log lambda)

Thus, the speedup is only logarithmic. It can be shown that this asymptotic bound cannot be improved by a (mu,lambda)-ES.

4.6 Open questions

Theorem 2 provides convergence rate results and step size adjustment rules for strongly convex problems. Rappl has shown that the conditions of theorem 2 can be weakened such that the objective function is required to be only almost everywhere differentiable, and such that the level sets possess a bounded asphericity, especially close to the minimizer.

The algorithm of Patel et al., called pure adaptive search, requires only convexity to assure geometrical convergence. However, this algorithm uses a uniform distribution over the lower level sets for sampling a new point; naturally, this distribution is unknown in general. Recently, Zabinsky and Smith have given the impressive result that this algorithm converges geometrically even for Lipschitz-continuous functions with several local minima. This raises hope to design an Evolutionary Algorithm that converges to the global optimum of certain nonconvex problem classes with a reasonable rate.

Obviously, convergence rates are closely connected to the adaptation of the sampling distribution. Incorporating distribution parameters within the evolutionary process may be a possible solution; this technique can be subsumed under the term self-adaptation. First attempts to analyse this technique have been done by Vogelsang. In this context, recombination may play an important role, because it can be seen as an operation that connects the more or less local mutation distributions of single individuals to a more global mutation distribution of the whole population. However, nothing is known theoretically about recombination up to now.

5 Experimental Comparison

Just to achieve a first assessment of the behaviour of both algorithms, experiments were run on the sphere model f1(x) = ||x||^2 and the generalized variant of a multimodal function by Ackley:

    f2(x) = -20 * exp( -0.2 * ( (1/n) * sum of x_i^2 )^(1/2) )
            - exp( (1/n) * sum of cos(2*pi*x_i) ) + 20 + e

test functions that represent the class of strictly convex unimodal and of highly multimodal topologies, respectively. The global optimum of f2 is located at the origin with a function value of zero. Three-dimensional topology plots of both functions are presented in figure 2.

A comparison was performed for a moderate dimension n, with an interval for the x_i defining the feasible region for initialization. The parameterizations of both algorithms were set as follows:

- Evolution Strategy: a (mu,lambda)-ES with self-adaptation of n standard deviations, no correlated mutations, discrete recombination on object variables, and global intermediate recombination on standard deviations.

- Evolutionary Programming: meta-EP with self-adaptation of n variances, population size mu, and tournament size q for selection.

All results were obtained by running ten independent experiments per algorithm and averaging the resulting data; the number of function evaluations performed per run differed between the sphere model and Ackley's function. The resulting curves of the actually best objective function value, plotted over the number of function evaluations, are shown in figure 3.

The clear difference of convergence velocity on f1 indicates that the combination of relatively strong selective pressure, recombination, and self-adaptation as used in ESs is helpful on that topology; performance of the ES can still be improved by reducing the number of strategy parameters to just one standard deviation. Convergence reliability on Ackley's function turns out to be rather good, since both strategies locate the global optimum in each of the ten runs, with small average final best objective function values for EP and ES, respectively. This behaviour is due to the additional degree of freedom provided by self-adaptation of n strategy parameters.

Figure 2: Three-dimensional plots of the sphere model (left) and Ackley's function (right).

6 Conclusions

As it turned out in the preceding sections, Evolutionary Programming and Evolution Strategies share many common features, i.e. the real-valued representation of search points, the emphasis on normally distributed random mutations as the main search operator, and, most importantly, the concept of self-adaptation of strategy parameters on-line during the search. There are, however, some striking differences, most notably the missing recombination operator in EP and the softer, probabilistic selection mechanism used in EP. The combination of these properties seems to have some negative impact on the performance of EP, as indicated by the experimental results presented in section 5.

Further investigations of the role of selection and recombination, as well as of the different self-adaptation methods, are surely worthwhile, just as a further extension of the theoretical investigations of these algorithms. As demonstrated in section 4, some theory available from research on Evolution Strategies can well be transferred to Evolutionary Programming and is helpful for assessing strengths and weaknesses of the latter approach.
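The generalized Ackley function used in the experiments can be written down directly; the coefficients 20 and 0.2 are the commonly used values for this function.

```python
import math

def ackley(x):
    """Generalized Ackley function; global minimum f(0) = 0 at the origin."""
    n = len(x)
    rms = math.sqrt(sum(xi * xi for xi in x) / n)
    mean_cos = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * rms) - math.exp(mean_cos) + 20.0 + math.e

print(ackley([0.0] * 30))   # 0 (up to floating point) at the global optimum
print(ackley([1.0] * 30))
```

The exponential of the cosine term creates a regular grid of local minima superimposed on the unimodal bowl of the first term, which is what makes the function a useful multimodal test case.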

References

D. H. Ackley. A Connectionist Machine for Genetic Hillclimbing. Kluwer Academic Publishers, Boston, 1987.

N. Baba. Convergence of random optimization methods for constrained optimization methods. Journal of Optimization Theory and Applications.

Thomas Bäck, Frank Hoffmeister, and Hans-Paul Schwefel. Applications of evolutionary algorithms. Report of the Systems Analysis Research Group (LS XI, SYS), University of Dortmund, Department of Computer Science.

Richard K. Belew and Lashon B. Booker, editors. Proceedings of the Fourth International Conference on Genetic Algorithms and their Applications, University of California, San Diego, USA. Morgan Kaufmann Publishers, 1991.

Joachim Born. Evolutionsstrategien zur numerischen Lösung von Adaptationsaufgaben. Dissertation A, Humboldt-Universität, Berlin.

K.-T. Fang, S. Kotz, and K.-W. Ng. Symmetric Multivariate and Related Distributions. Monographs on Statistics and Applied Probability. Chapman and Hall, London and New York, 1990.

David B Fogel An analysis of evolutionary pro John H Holland Adaptation in natural and arti

grammingInFogel and Atmar pages cial systems The UniversityofMichigan Press

Ann Arb or

David B Fogel Evolving Articial Intel ligence

PhD thesis University of California San Diego

NL Johnson and S Kotz Distributions in Statis

tics Continuous Distributions Houghton Mif

in Boston

David B Fogel and Wirt Atmar editors Proceed

ings of the First Annual Conference on Evolution

Reinhard Manner and Bernard Manderick editors

ary Programming La Jolla CA February

Paral lel Problem Solving from Nature Elsevier

Evolutionary Programming So ciety

Science Publishers Amsterdam

L J Fogel A J Owens and M J Walsh Arti

NR Patel RL Smith and ZB ZabinskyPure

cial Intel ligence through Simulated Evolution John

adaptive searchinmonte carlo optimization Math

WileyNewYork

ematical Programming

J J Grefenstette editor Proceedings of the First

International Conference on Genetic Algorithms

J Pinter Convergence prop erties of sto chastic opti

and Their Applications Hillsdale New Jersey

mization pro cedures Math Operat Stat Ser Op

Lawrence Erlbaum Asso ciates

timization

J J Grefenstette editor Proceedings of the Second

G Rappl Konvergenzraten von Random Search

International Conference on Genetic Algorithms

Verfahren zur globalen Optimierung Dissertation

and Their Applications Hillsdale New Jersey

HSBw Munc hen Germany

Lawrence Erlbaum Asso ciates

G Rappl On linear convergence of a class of ran

H Haario and E Saksman Simulated annealing

dom search algorithms Zeitschrift f angew Math

dv Appl Prob pro cess in general state space A

Mech ZAMM

LA Rastrigin The convergence of the random Frank Homeister and Thomas Back Genetic al

search metho d in the extremal con gorithms and evolution strategies Similarities and trol of a many

dierences In Schwefel and Manner pages parameter system Automation and Remote Corn

trol

Figure Exp erimental runs of an Evolution Strategy and Evolutionary Programming on f left and f right

 

Ingo Rechenb erg Evolutionsstrategie Optimierung Workshop PPSN Ivolume of Lecture Notes in

technischer Systeme nach Prinzipien der biolo Computer Science Springer Berlin

gischen Evolution FrommannHolzb o og Verlag

FJ Solis and RJB Wets Minimization byran

Stuttgart

dom searchtechniques Math Operations Research

SL Resnick Extreme values regular variation

and point processesvolume of AppliedProbability

A Torn and A Zilinskas Global Optimization

Springer New York

volume of Lecture Notes in Computer Science

Gun ter Rudolph On correlated mutations in evo

Springer Berlin and Heidelb erg

lution strategies In Manner and Manderick

J Vogelsang Theoretische Betrachtungen zur

pages

Schrittweitensteuerungen in Evolutionsstrategien

J David Schaer editor Proceedings of the Third

Diploma thesis UniversityofDortmund Chair of

International Conference on Genetic Algorithms

System Analysis July

and Their Applications San Mateo California

Y Wardi Random search algorithms with sucient

June Morgan Kaufmann Publishers

t for minimization of functions Mathematics descen

A Scheel Beitrag zur Theorie der Evolutionsstrate

of Operations Research

gie Dissertation TU Berlin Berlin

ZB Zabinsky and RL Smith Pure adaptive

MA Schumer and K Steiglitz Adaptive step size

search in global optimization Mathematical Pro

random search IEEE Transactions on Automatic

gramming

Control

AA Zhigljavsky Theory of global random search

Numerische Optimierung von HansPaul Schwefel

volume of Mathematics and its applications So

ComputerModel len mittels der Evolutionsstrategie

viet Series Kluwer AA Dordrecht The Nether

volume of Interdisciplinary systems research

lands

Birkhauser Basel

HansPaul Schwefel Numerical Optimization of

Computer Models Wiley Chichester

HansPaul Schwefel and Reinhard Manner editors

Paral lel Problem Solving from Nature Proc st