Adaptation in Evolutionary Computation: A Survey

Robert Hinterding, Zbigniew Michalewicz, and Agoston E. Eiben

R. Hinterding is with the Department of Computer and Mathematical Sciences, Victoria University of Technology, PO Box 14428 MMC, Melbourne 3000, Australia. Email: [email protected]
Z. Michalewicz is with the Department of Computer Science, University of North Carolina, Charlotte, NC 28223, USA, and with the Institute of Computer Science, Polish Academy of Sciences, ul. Ordona 21, 01-237 Warsaw, Poland. Email: [email protected]
A.E. Eiben is with the Department of Computer Science, Leiden University, Leiden, The Netherlands. Email: [email protected]

Abstract -- Adaptation of parameters and operators is one of the most important and promising areas of research in evolutionary computation; it tunes the algorithm to the problem while solving the problem. In this paper we develop a classification of adaptation on the basis of the mechanisms used, and of the level at which adaptation operates within the algorithm. The classification covers all forms of adaptation in evolutionary computation and suggests further research.

I. Introduction

As evolutionary algorithms (EAs) implement the idea of evolution, and as evolution itself must have evolved to reach its current state of sophistication, it is natural to expect adaptation to be used not only for finding solutions to a problem, but also for tuning the algorithm to the particular problem.

In EAs, we not only need to choose the algorithm, representation, and operators for the problem, but we also need to choose parameter values and operator probabilities for the evolutionary algorithm so that it will find the solution and, what is also important, find it efficiently. This process of finding appropriate parameter values and operator probabilities is a time-consuming task, and considerable effort has gone into automating it.

Researchers have used various ways of finding good values for the strategy parameters, as these can affect the performance of the algorithm in a significant way. Many researchers experimented with various problems from a particular domain, tuning the strategy parameters on the basis of such experimentation (tuning "by hand"). Later, they reported their results of applying a particular EA to a particular problem, stating:

  "For these experiments, we have used the following parameters: population size = 80, probability of crossover = 0.7, etc."

without much justification of the choice made.

Note that (a run of) an EA is an intrinsically dynamic, adaptive process. The use of rigid, i.e. constant, parameters is thus in contrast to the general evolutionary spirit. Besides, there are also technical drawbacks to the traditional approach:
• parameter tuning costs a lot of time;
• the users' mistakes in setting the parameters can be sources of errors and/or sub-optimal performance;
• the optimal parameter value may vary during the evolution.

Therefore it is a natural idea to try to modify the values of strategy parameters (by strategy parameters we mean the parameters of the EA, not those of the problem) during the run of the algorithm. It is possible to do this by using some (possibly heuristic) rule, by taking feedback from the current state of the search, or by employing some self-adaptive mechanism. Note that these changes may affect a single component of a chromosome, the whole chromosome (individual), or even the whole population. Clearly, by changing these values while the algorithm is searching for the solution of the problem, further efficiencies can be gained.

Self-adaptation, based on the evolution of evolution, was developed in Evolution Strategies to adapt mutation parameters to suit the problem during the run. The method was very successful in improving the efficiency of the algorithm for some problems. This technique has been extended to other areas of evolutionary computation, but fixed representations, operators, and control parameters are still the norm.

Other research areas based on the inclusion of adapting mechanisms are the following.
• Representation of individuals (as proposed by Shaefer [32]; the Dynamic Parameter Encoding technique of Schraudolph & Belew [29] and the messy genetic algorithms of Goldberg et al. [16] also fall into this category).
• Operators. It is clear that different operators play different roles at different stages of the evolutionary process. The operators should adapt (e.g., adaptive crossover, Schaffer & Morishima [27], Spears [34]). This is true especially for time-varying fitness landscapes.
• Control parameters. There have been various experiments aimed at adaptive probabilities of operators [7], [22], [35], [36]. However, much more remains to be done.

In this paper we develop a comprehensive classification of adaptation and give examples of its use. The classification is based on the mechanism of adaptation and on the level (in the EA) at which it occurs. Such a classification can be useful to the evolutionary computation community, since many researchers use the terms "adaptation" or "self-adaptation" in an arbitrary way; in a few instances some authors (including ourselves!) used the term "self-adaptation" where there was a simple (deterministic and heuristic) rule for changing some parameter of the process.

The paper is organised as follows: in the next section we develop a classification of adaptation in Evolutionary Algorithms (EAs). Section III looks at the types of adaptation, whereas Section IV looks at the levels of adaptation. Section V discusses the combination of types and levels of adaptation, and Section VI presents the discussion and conclusion.

II. Classification of Adaptation

The action of determining the variables and parameters of an EA to suit the problem has been termed adapting the algorithm to the problem, and in EAs this can be done while the algorithm is searching for a problem solution.

We give a classification of adaptation in Table I; this classification is based on the mechanism of adaptation (adaptation type) and on the level inside the EA at which adaptation occurs (adaptation level). These classifications are orthogonal and encompass all forms of adaptation within EAs. Angeline's classification [1] is from a different perspective and forms a subset of our classification.

TABLE I
Classification of adaptation in EAs

  Type:          Static   Dynamic
  Level                   Deterministic   Adaptive   Self-adaptive
  ----------------------------------------------------------------
  Environment    S        E-D             E-A        E-SA
  Population     S        P-D             P-A        P-SA
  Individual     S        I-D             I-A        I-SA
  Component      S        C-D             C-A        C-SA

The Type of adaptation consists of two main categories: static (no change) and dynamic, with the latter divided further into deterministic (D), adaptive (A), and self-adaptive (SA) mechanisms. In the following section we discuss these types of adaptation.

The Level of adaptation consists of four categories: environment (E), population (P), individual (I), and component (C). These categories indicate the scope of the changed parameter; we discuss these levels of adaptation in Section IV.

Whether examples are discussed in Section III or in Section IV is completely arbitrary. An example of adaptive individual-level adaptation (I-A) could have been discussed in Section III as an example of adaptive dynamic adaptation, or in Section IV as an example of individual-level adaptation.

III. Types of Adaptation

The classification of the type of adaptation is made on the basis of the mechanism of adaptation used in the process; in particular, attention is paid to the issue of whether or not feedback from the EA is used.

A. Static

Static adaptation is where the strategy parameters have a constant value throughout the run of the EA. Consequently, an external agent or mechanism (e.g., a person or a program) is needed to tune the desired strategy parameters and choose the most appropriate values. Typically this happens by running numerous tests and trying to find a link between parameter values and EA performance. This method is commonly used for most of the strategy parameters.

De Jong [9] put considerable effort into finding parameter values which were good for a number of numeric test problems using a traditional GA. He determined experimentally recommended values for the probability of using single-point crossover and bit mutation. Grefenstette [17] used a GA as a meta-algorithm to optimise some of the parameter values.

B. Dynamic

Dynamic adaptation happens if there is some mechanism which modifies a strategy parameter without external control. The class of EAs that use dynamic adaptation can be sub-divided further into three classes, where the mechanism of adaptation is the criterion.

B.1 Deterministic

Deterministic dynamic adaptation takes place if the value of a strategy parameter is altered by some deterministic rule; this rule modifies the strategy parameter deterministically, without using any feedback from the EA. Usually a time-varying schedule is used, i.e. the rule is applied when a set number of generations have elapsed since the last time the rule was activated.

This method of adaptation can be used to alter the probability of mutation so that it changes with the number of generations. For example:

  \[ p_m = 0.5 - 0.3 \cdot \frac{g}{G} , \]

where g is the generation number, running from 1 to G. Here the mutation probability p_m will decrease from 0.5 to 0.2 as the number of generations increases to G.
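For illustration only, such a schedule could be coded as follows (a minimal sketch of ours, not taken from any of the cited papers; the bit-flip mutation operator is merely an assumed example, since the schedule is independent of the particular mutation operator used):

```python
import random

def mutation_probability(g: int, G: int) -> float:
    # p_m = 0.5 - 0.3 * (g / G): decays linearly from 0.5 to 0.2.
    return 0.5 - 0.3 * (g / G)

def mutate(bits, g, G):
    # Flip each bit independently with the generation-dependent rate.
    p_m = mutation_probability(g, G)
    return [1 - b if random.random() < p_m else b for b in bits]
```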

Early examples of this approach are the varying mutation rates used by Fogarty [13] and by Hesser & Männer [18] in GAs. This method of adaptation was also used in defining a mutation operator for floating-point representations [24]: non-uniform mutation. For a parent \vec{x}, if the element x_k is selected for this mutation, the result is \vec{x}' = (x_1, \ldots, x'_k, \ldots, x_n), where

  \[ x'_k = \begin{cases} x_k + \Delta(t,\, right(k) - x_k) & \text{if a random binary digit is 0,} \\ x_k - \Delta(t,\, x_k - left(k)) & \text{if a random binary digit is 1.} \end{cases} \]

The function Δ(t, y) returns a value in the range [0, y] such that the probability of Δ(t, y) being close to 0 increases as t increases (t is the generation number). This property causes the operator to search the space uniformly at first (when t is small), and very locally at later stages.

Deterministic dynamic adaptation was also used for changing the objective function of the problem. For constrained problems it can be applied by increasing the penalties for violated constraints with evolution time [21], [25]. Joines & Houck [21] used the following formula:

  \[ F(\vec{x}) = f(\vec{x}) + (C \cdot t)^{\alpha} \sum_{j=1}^{m} f_j(\vec{x}) , \]

whereas Michalewicz & Attia [25] experimented with

  \[ F(\vec{x}, \tau) = f(\vec{x}) + \frac{1}{2\tau} \sum_{j=1}^{m} f_j^2(\vec{x}) . \]

In both cases, the functions f_j measure the violation of the j-th constraint.
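For concreteness, the Joines & Houck evaluation is a one-liner (our sketch; here f computes the objective, violations returns the list of constraint-violation measures f_j(x) ≥ 0, and the default values of C and α are illustrative rather than taken from [21]):

```python
def penalised_eval(x, f, violations, t, C=0.5, alpha=2.0):
    # F(x) = f(x) + (C*t)^alpha * sum_j f_j(x); the penalty term
    # grows deterministically with the generation counter t.
    return f(x) + (C * t) ** alpha * sum(violations(x))
```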

Eiben & Ruttkay [10] described an implementation of an evolutionary algorithm for constraint satisfaction problems where the penalty coefficients of violated constraints were increased after each run and used in a following run on the same problem.

B.2 Adaptive

Adaptive dynamic adaptation takes place if there is some form of feedback from the EA that is used to determine the direction and/or magnitude of the change to the strategy parameter. The assignment of the value of the strategy parameter may involve credit assignment, and the action of the EA may determine whether or not the new value persists or propagates throughout the population.

Early examples of this type of adaptation include Rechenberg's '1/5 success rule' in Evolution Strategies, which was used to vary the step size of mutation [26]. This rule states that the ratio of successful mutations to all mutations should be 1/5; hence if the ratio is greater than 1/5 the step size is increased, and if the ratio is less than 1/5 the step size is decreased.

An example for GAs is Davis's 'adaptive operator fitness', in which feedback on the success of a larger number of reproduction operators is utilised to adjust their probability of being used [8]. Julstrom's adaptive mechanism regulates the ratio between crossovers and mutations based on their performance [22]. An extensive study of this kind of "learning-rule" mechanism was done by Tuson & Ross [36].

Adaptation was also used to change the objective function by increasing or decreasing penalty coefficients for violated constraints. For example, Bean & Hadj-Alouane [4] designed a penalty function where one of its components takes feedback from the search process. Each individual is evaluated by the formula:

  \[ F(\vec{x}) = f(\vec{x}) + \lambda(t) \sum_{j=1}^{m} f_j^2(\vec{x}) , \]

where λ(t) is updated every generation t in the following way:

  \[ \lambda(t+1) = \begin{cases} (1/\beta_1) \cdot \lambda(t) & \text{if } \vec{b}(i) \in \mathcal{F} \text{ for all } t-k+1 \le i \le t, \\ \beta_2 \cdot \lambda(t) & \text{if } \vec{b}(i) \in \mathcal{S} - \mathcal{F} \text{ for all } t-k+1 \le i \le t, \\ \lambda(t) & \text{otherwise,} \end{cases} \]

where \vec{b}(i) denotes the best individual, in terms of the function eval, in generation i, \mathcal{F} is the feasible region of the search space \mathcal{S}, and β₁, β₂ > 1 with β₁ ≠ β₂ (to avoid cycling). In other words, the method (1) decreases the penalty component λ(t+1) for generation t+1 if all best individuals in the last k generations were feasible, and (2) increases penalties if all best individuals in the last k generations were infeasible. If the best individuals in the last k generations include both feasible and infeasible solutions, then λ(t+1) is not changed.
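This update is easy to express directly (our sketch; feasible_history records, for the last k generations, whether the best individual b(i) was feasible, and the β values are illustrative, not taken from [4]):

```python
def update_penalty(lam: float, feasible_history: list,
                   beta1: float = 3.0, beta2: float = 2.0) -> float:
    # feasible_history[i] is True iff the best individual of that
    # generation was feasible; it covers the last k generations.
    if all(feasible_history):        # all feasible: relax the penalty
        return lam / beta1
    if not any(feasible_history):    # all infeasible: strengthen it
        return lam * beta2
    return lam                       # mixed record: leave unchanged
```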

Recent work of Eiben & van der Hauw on solving (discrete) constraint satisfaction problems is also based on an adaptive penalty technique, one that periodically increases the penalty of those constraints that are violated. This mechanism greatly improved GA performance on 3-SAT problems [12] and on graph 3-colouring problems [11].

Other examples include the adaptation of the probabilities of eight operators for the adaptive planner/navigator [38], where the feedback from the evolutionary process includes, through the operator performance index, the effectiveness of operators in improving the fitness of a path, their operation time, and their side effects on future generations.

B.3 Self-adaptive

The idea of the evolution of evolution can be used to implement the self-adaptation of parameters. Here the parameters to be adapted are encoded onto the chromosome(s) of the individual and undergo mutation and recombination. These encoded parameters do not affect the fitness of individuals directly, but "better" values will lead to "better" individuals, and these individuals will be more likely to survive and produce offspring, and hence to propagate these "better" parameter values.

Schwefel [30], [31] developed this method to self-adapt the mutation step size and the mutation rotation angles in Evolution Strategies. A theoretical analysis of σ-control (and the 1/5-rule) by Beyer can be found in [5]. Self-adaptation was extended to EP by Fogel et al. [14] and to GAs by Bäck [3], Hinterding [19], and Smith & Fogarty [33].
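As an illustration, the uncorrelated variant of this scheme with one step size per component can be written as follows (our simplification of the scheme in [30], [31]: the rotation angles and the usual pair of global and local learning rates are omitted, and the single learning rate τ = 1/√n is only a common setting):

```python
import math
import random

def self_adapt_mutate(x, sigma, tau=None):
    # Each individual carries its own step sizes sigma; they are
    # mutated log-normally first, then used to perturb the object
    # variables, so "better" step sizes hitchhike on fitter offspring.
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(n)
    new_sigma = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigma]
    new_x = [xi + si * random.gauss(0.0, 1.0)
             for xi, si in zip(x, new_sigma)]
    return new_x, new_sigma
```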

The parameters to self-adapt can be parameter values that control the operation of the EA, values that control the operation of reproduction or other operators, or probabilities of using alternative processes. As these are numeric quantities, this type of self-adaptation has been used mainly for the optimisation of numeric functions. This has been the case when single-chromosome representations are used (which is the overwhelming case), as otherwise numerical and non-numerical representations would need to be combined on the same chromosome. Examples of self-adaptation for non-numerical problems are Fogel et al. [15], where the relative probabilities of five mutation operators for the components of a finite state machine are self-adapted, and Hinterding [?], where a multi-chromosome GA is used to implement self-adaptation in the Cutting Stock Problem with contiguity. Here self-adaptation is used to adapt the probability of using one of the two available mutation operators, and the strength of the group mutation operator.

IV. Levels of Adaptation

We can also define at what level within the EA and the solution representation adaptation takes place. We define four levels: environment, population, individual, and component. These levels of adaptation can be used with each of the types of adaptation, and a mixture of levels and types of adaptation can be used within an EA.

A. Environment Level Adaptation

Environment level adaptation is where the response of the environment to the individual is changed. This covers cases such as when the penalties in the fitness function change, when weights within the fitness function change, and when the fitness of an individual changes in response to niching considerations (some of these were discussed in the previous section, in the context of types of adaptation).

Darwen & Yao [6] explore both deterministic and adaptive environmental adaptation in their paper comparing fitness sharing methods.

B. Population Level Adaptation

In EAs some (or, in simple EAs, all) of the parameters are global; modifying these parameters when they apply to all members of the population is population level adaptation. Dynamic adaptation of these parameters is in most cases deterministic or adaptive; no cases of population level self-adaptation have been seen yet. The example of deterministic modification of the mutation rate given above is deterministic population level adaptation, and Rechenberg's '1/5 success rule' is an example of adaptive population level adaptation.

Population level adaptation also covers cases where a number of populations are used, in a parallel EA or otherwise. Lis [23] uses feedback from a number of parallel populations to dynamically adapt the mutation rate: the feedback from populations with different mutation probabilities was used to adjust the mutation probabilities of all the populations up or down. Schlierkamp-Voosen & Mühlenbein [28] use competition between sub-populations to determine which populations will lose or gain individuals. Hinterding et al. [20] use feedback from three sub-populations with different population sizes to adaptively change some or all of the sub-population sizes.

C. Individual Level Adaptation

Individual level adaptation adjusts strategy parameters that are held within individuals and whose values affect only that individual. Examples are the adaptation of the mutation step size in ESs, EP, and GAs, and the adaptation of crossover points in GAs [27], [37].

Arabas et al. [2] describe a method for adapting population size by defining the age of individuals; the size of the population after a single iteration is

  \[ PopSize(t+1) = PopSize(t) + N(t) - D(t) , \]

where D(t) is the number of chromosomes which die off during generation t and N(t) is the number of offspring produced during generation t (for details, see Michalewicz [24]). The number of offspring N(t) is proportional to the size of the population at generation t, whereas the number of individuals "to die", D(t), depends on the ages of individual chromosomes. There are several strategies one can use for allocating ages to individuals [2]; all of them require feedback from the current state of the search.
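A minimal sketch of one iteration of this bookkeeping follows (ours, not from [2]; we assume the simplest age strategy, in which an individual dies once its age reaches an allotted lifetime, which is only one of the strategies discussed there):

```python
def next_population_size(pop_size, n_offspring, ages, lifetimes):
    # PopSize(t+1) = PopSize(t) + N(t) - D(t), where D(t) counts the
    # individuals whose age has reached their allotted lifetime.
    deaths = sum(1 for age, life in zip(ages, lifetimes) if age >= life)
    return pop_size + n_offspring - deaths
```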

p opulations with di erent p opulation sizes to adaptively

Adaption provides the opp ortunity to customise the evo-

change some or all of the sub-p opulation sizes.

lutionary algorithm to the problem and to mo dify the con-

guration and the strategy parameters used while the prob-

C. Individual Level Adaption

lem solution is sought. This enables us to not only incorp o-

Individual level adaptation adjusts strategy parameters

rate domain information and multiple repro duction op era-

held within individuals and whose value a ects only that

tors into the EA more easily, but can allow the algorithm

individual. Examples are: the adaptation of the mutation

itself to select those values and op erators which give b etter

step size in ESs, EP, and GAs; the adaptation of crossover

results. Also these values can b e mo di ed during the run

p oints in GAs [27] and [37].

of the EA to suit the situation during that part of the run.

Arabas et al.[2] describ e a metho d for adapting p opu-

Information ab out which of the op erators available are

lation size by de ning age of individuals; the size of the

most suitable to a particular problem is not easily deter-

p opulation after single iteration is

mined, adaptation can b e used here to provide feedback or

P opS iz e(t + 1) = P opS iz e(t) + N (t) D (t),

to determine when they should b e used.

More research on the combination of the typ es and levels where D (t) is the numb er of chromosomes which die o

of adaptation needs to b e done as this could lead to signi - during generation t and N (t) is the numb er of o spring

5

Solving from Nature { PPSV IV, volume 1141 of Lecture Notes cant improvements to nding go o d solutions and the sp eed

in Computer Science, Berlin, 1996. Springer. pp. 420{429.

of nding them.

References

[1] P.J. Angeline. Adaptive and self-adaptive evolutionary computation. In M. Palaniswami, Y. Attikiouzel, R.J. Marks II, D. Fogel, and T. Fukuda, editors, Computational Intelligence, A Dynamic System Perspective. IEEE Press, 1995. pp. 152-161.
[2] J. Arabas, Z. Michalewicz, and J. Mulawka. GAVaPS -- a genetic algorithm with varying population size. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE Press, 1994. pp. 73-78.
[3] T. Bäck. Self-adaptation in genetic algorithms. In Proceedings of the First European Conference on Artificial Life, Cambridge, 1992. MIT Press. pp. 263-271.
[4] J.C. Bean and A.B. Hadj-Alouane. A dual genetic algorithm for bounded integer programs. TR 92-53, Department of Industrial and Operations Engineering, The University of Michigan, 1992.
[5] H.-G. Beyer. Toward a theory of evolution strategies. Evolutionary Computation, 1(2):165-188, 1993.
[6] P. Darwen and X. Yao. Every niching method has its niche: Fitness sharing and implicit sharing compared. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature -- PPSN IV, volume 1141 of Lecture Notes in Computer Science. Springer, Berlin, 1996. pp. 398-407.
[7] L. Davis. Adapting operator probabilities in genetic algorithms. In Proceedings of the 3rd International Conference on Genetic Algorithms. Morgan Kaufmann, 1989. pp. 61-69.
[8] L. Davis, editor. Handbook of Genetic Algorithms. Van Nostrand Reinhold, 1991.
[9] K.A. De Jong. An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Doctoral dissertation, University of Michigan, 1975. Dissertation Abstracts International, 36(10), 5140B. (University Microfilms No. 76-9381).
[10] A.E. Eiben and Zs. Ruttkay. Self-adaptivity for constraint satisfaction: Learning penalty functions. In Proceedings of the 3rd IEEE International Conference on Evolutionary Computation. IEEE Press, 1996. pp. 258-261.
[11] A.E. Eiben and J.K. van der Hauw. Graph coloring with adaptive evolutionary algorithms. Technical Report TR-96-11, Leiden University, August 1996. Also available as http://www.wi.leidenuniv.nl/~gusz/graphcol.ps.gz.
[12] A.E. Eiben and J.K. van der Hauw. Solving 3-SAT by GAs adapting constraint weights. In Proceedings of the 4th IEEE International Conference on Evolutionary Computation. IEEE Press, 1997.
[13] T. Fogarty. Varying the probability of mutation in the genetic algorithm. In J.D. Schaffer, editor, Proceedings of the 3rd International Conference on Genetic Algorithms, pages 104-109. Morgan Kaufmann, 1989.
[14] D.B. Fogel, L.J. Fogel, and J.W. Atmar. Meta-evolutionary programming. In R.R. Chen, editor, Proc. of the 25th Asilomar Conf. on Signals, Systems, and Computers, San Jose, 1991. Maple Press. pp. 540-545.
[15] L.J. Fogel, P.J. Angeline, and D.B. Fogel. An evolutionary programming approach to self-adaptation on finite state machines. In J.R. McDonnell, R.G. Reynolds, and D.B. Fogel, editors, Proceedings of the Fourth Annual Conference on Evolutionary Programming, Massachusetts, 1995. MIT Press. pp. 355-365.
[16] D.E. Goldberg, K. Deb, and B. Korb. Do not worry, be messy. In Proceedings of the 4th International Conference on Genetic Algorithms. Morgan Kaufmann, 1991. pp. 24-30.
[17] J.J. Grefenstette. Optimization of control parameters for genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics, 16(1):122-128, 1986.
[18] J. Hesser and R. Männer. Towards an optimal mutation probability for genetic algorithms. In H.-P. Schwefel and R. Männer, editors, Proceedings of the 1st Conference on Parallel Problem Solving from Nature, number 496 in Lecture Notes in Computer Science, pages 23-32. Springer-Verlag, 1990.
[19] R. Hinterding. Gaussian mutation and self-adaption in numeric genetic algorithms. In IEEE International Conference on Evolutionary Computation. IEEE Press, 1995. pp. 384-389.
[20] R. Hinterding, Z. Michalewicz, and T.C. Peachey. Self-adaptive genetic algorithm for numeric functions. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature -- PPSN IV, volume 1141 of Lecture Notes in Computer Science, Berlin, 1996. Springer. pp. 420-429.
[21] J.A. Joines and C.R. Houck. On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs. In Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE Press, 1994. pp. 579-584.
[22] B.A. Julstrom. What have you done for me lately? Adapting operator probabilities in a steady-state genetic algorithm. In Proceedings of the Sixth International Conference on Genetic Algorithms. Morgan Kaufmann, 1995. pp. 81-87.
[23] J. Lis. Parallel genetic algorithm with dynamic control parameter. In Proceedings of the 1996 IEEE Conference on Evolutionary Computation, Piscataway, NJ, 1996. IEEE Press. pp. 324-329.
[24] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, New York, 3rd edition, 1996.
[25] Z. Michalewicz and N. Attia. Evolutionary optimization of constrained problems. In A.V. Sebald and L.J. Fogel, editors, Proceedings of the Third Annual Conference on Evolutionary Programming. World Scientific, 1994. pp. 98-108.
[26] I. Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart, 1973.
[27] J.D. Schaffer and A. Morishima. An adaptive crossover distribution mechanism for genetic algorithms. In Proceedings of the 2nd International Conference on Genetic Algorithms. Lawrence Erlbaum Associates, 1987. pp. 36-40.
[28] D. Schlierkamp-Voosen and H. Mühlenbein. Adaption of population sizes by competing subpopulations. In Proceedings of the 1996 IEEE Conference on Evolutionary Computation, Piscataway, NJ, 1996. IEEE Press. pp. 330-335.
[29] N. Schraudolph and R. Belew. Dynamic parameter encoding for genetic algorithms. Machine Learning, 9(1):9-21, 1992.
[30] H.-P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, volume 26 of Interdisciplinary Systems Research. Birkhäuser, Basel, 1977.
[31] H.-P. Schwefel. Evolution and Optimum Seeking. Sixth-Generation Computer Technology Series. Wiley, 1995.
[32] C.G. Shaefer. The ARGOT strategy: Adaptive representation genetic optimizer technique. In Proceedings of the 2nd International Conference on Genetic Algorithms. Lawrence Erlbaum Associates, 1987. pp. 50-55.
[33] J. Smith and T. Fogarty. Self-adaptation of mutation rates in a steady-state genetic algorithm. In Proceedings of the 3rd IEEE International Conference on Evolutionary Computation. IEEE Press, 1996. pp. 318-323.
[34] W.M. Spears. Adapting crossover in evolutionary algorithms. In J.R. McDonnell, R.G. Reynolds, and D.B. Fogel, editors, Proceedings of the Fourth Annual Conference on Evolutionary Programming. The MIT Press, 1995. pp. 367-384.
[35] M. Srinivas and L.M. Patnaik. Adaptive probabilities of crossover and mutation in genetic algorithms. IEEE Transactions on Systems, Man, and Cybernetics, 24(4):17-26, 1994.
[36] A. Tuson and P. Ross. Cost based operator rate adaptation: An investigation. In H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel, editors, Proceedings of the 4th Conference on Parallel Problem Solving from Nature, number 1141 in Lecture Notes in Computer Science, pages 461-469. Springer, Berlin, 1996.
[37] T. White and F. Oppacher. Adaptive crossover using automata. In Y. Davidor, H.-P. Schwefel, and R. Männer, editors, Proceedings of the 3rd Conference on Parallel Problem Solving from Nature, number 866 in Lecture Notes in Computer Science, pages 229-238. Springer-Verlag, 1994.
[38] J. Xiao, Z. Michalewicz, and L. Zhang. Evolutionary planner/navigator: Operator performance and self-tuning. In Proceedings of the 3rd IEEE International Conference on Evolutionary Computation. IEEE Press, May 20-22, 1996. pp. 366-371.