Multimodal Optimization Using Crowding Differential Evolution with Spatially Neighbors Best Search


JOURNAL OF SOFTWARE, VOL. 8, NO. 4, APRIL 2013, pp. 932-938. © 2013 ACADEMY PUBLISHER. doi:10.4304/jsw.8.4.932-938

Dingcai Shen, State Key Laboratory of Software Engineering, Computer School, Wuhan University, Wuhan, China; School of Computer and Information Science, Hubei Engineering University, Xiaogan, China. Email: [email protected]
Yuanxiang Li, State Key Laboratory of Software Engineering, Computer School, Wuhan University, Wuhan, China

Abstract—Many practical applications require finding more than one optimal solution. Existing evolutionary algorithms (EAs) were originally designed to search for the unique global optimum of the objective function. The present work proposes an improved niching-based scheme, named the spatially neighbors best search technique, combined with crowding-based differential evolution (SnbDE), for multimodal optimization problems. Differential evolution (DE) is known for its simple implementation and its efficiency in global optimization, and numerous DE variants have been developed to solve diverse optimization problems. The proposed method adopts DE with the DE/best/1/bin scheme, in which the best individual is searched for in the neighborhood of the considered individual in order to control the balance between exploitation and exploration. The results of an empirical comparison provide clear evidence that SnbDE outperforms canonical crowding-based differential evolution, and that it is efficient and effective in locating and maintaining multiple optima of benchmark functions for multimodal optimization problems.

Index Terms—Differential evolution, Multimodal optimization, Niching, Crowding

I. INTRODUCTION

Most existing evolutionary algorithms (EAs) aim to find a single global optimum of an optimization problem, and a large body of work has sought to improve their ability to do so effectively and efficiently. Among the commonly used evolutionary algorithms, differential evolution (DE) [1] is one of the most popular because of its simplicity and efficiency. As with other EAs, DE is a population-based stochastic optimization technique for solving continuous optimization problems, especially real-parameter optimization [2], and its main purpose, like that of other such techniques, is to find the single global optimum. However, in real-world applications there often exist multiple optima (global or local), and it is often necessary to locate all or most of these solutions in order to have more options. For example, in mechanical design we may choose a less fit solution as our final choice because the optimal solution is hard to fabricate due to physical or spatial restrictions, or for ease of maintenance, reliability, and so on.

Multimodal problems are usually considered difficult for canonical EAs because of the existence of multiple global or local optima. Numerous techniques have been applied to EAs to enable them to find and maintain multiple optima across the whole search space. These techniques can be classified into two major categories [3][4]: (i) iterative methods [5][6], which apply the same optimization algorithm repeatedly to acquire the multiple optima of a multimodal optimization problem; and (ii) parallel subpopulation models (explicit or implicit), which obtain multiple solutions by dividing a population into several groups, species, or niches that evolve in parallel, such as Multinational GA [7], the species conservation technique [3][8][9], crowding [10][11][12], AFMDE [13], fitness sharing [14], and so on. Canonical crowding and fitness sharing are widely used, but they risk losing niches during the evolution.

In this study, a new multimodal optimization method employs a spatially near best search strategy with crowding-based differential evolution (SnbDE) to find multiple optimal solutions simultaneously. The advantage of SnbDE is that it weakens the effect of the parameter named the crowding factor (CF), whose inappropriate setting may cause replacement errors. Numerical experiments on a set of widely used multimodal benchmark functions show that SnbDE is able to find effective solutions to all tested functions.

The rest of this paper is outlined as follows. The next section presents relevant work, including the crowding scheme used for multimodal optimization problems and a brief review of differential evolution and its commonly employed variants. The proposed SnbDE is described in detail in Section 3. In Section 4, the proposed algorithm is compared with canonical crowding-based DE [12] on commonly used multimodal optimization benchmark functions. Finally, Section 5 presents the conclusions and some future work.

II. BACKGROUND AND RELATED WORK

This section presents a brief review of differential evolution and of its extension with the crowding scheme for solving multimodal optimization problems.

A. Differential Evolution

Differential evolution (DE) was first proposed by Storn and Price [1] in 1995 and has gained significant popularity since its original definition because of its simplicity, ease of implementation, and efficiency. Like other popular EAs, DE is a population-based algorithm designed to solve optimization problems over a continuous search space.

Without loss of generality, we refer to the maximization problem, both global and local, of an objective function f(x) throughout this paper, where x is a vector of D-dimensional variables in the feasible solution space, denoted as Ω = ∏_{i=1}^{D} [L_i, U_i]. At the initialization step, a population x_i^0 = (x_{i,1}^0, x_{i,2}^0, ..., x_{i,D}^0), i = 1, 2, ..., NP, is randomly generated from the feasible solution space, where NP is the population size.

Following initialization, the mutation operation creates a mutant vector v_i^t = (v_{i,1}^t, v_{i,2}^t, ..., v_{i,D}^t) for each individual x_i^t in the current population. The five widely used mutation strategies can be described as follows.

1) DE/rand/1:
    v_i^t = x_{r1}^t + F · (x_{r2}^t − x_{r3}^t)    (1)

2) DE/best/1:
    v_i^t = x_{best}^t + F · (x_{r1}^t − x_{r2}^t)    (2)

3) DE/current-to-best/1:
    v_i^t = x_i^t + F · (x_{best}^t − x_i^t) + F · (x_{r1}^t − x_{r2}^t)    (3)

4) DE/best/2:
    v_i^t = x_{best}^t + F · (x_{r1}^t − x_{r2}^t) + F · (x_{r3}^t − x_{r4}^t)    (4)

5) DE/rand/2:
    v_i^t = x_{r1}^t + F · (x_{r2}^t − x_{r3}^t) + F · (x_{r4}^t − x_{r5}^t)    (5)

where x_{best}^t denotes the fittest individual of generation t; r1, r2, r3, r4, r5 are integers randomly drawn from the discrete set {1, 2, ..., NP}, all different from the index of the considered individual; and F > 0 is the mutation factor used to scale the difference vectors.

The crossover operation is subsequently applied to each component j (j = 1, 2, ..., D) of the mutant individual:

    u_{i,j}^t = v_{i,j}^t   if rand_j(0,1) ≤ Cr or j = j_rnd,
    u_{i,j}^t = x_{i,j}^t   otherwise.    (6)

where rand_j(0,1) is a uniform random variable drawn from the interval [0,1], j_rnd is a random integer in {1, 2, ..., D}, and Cr ∈ [0,1] is the crossover constant, also called the crossover rate or recombination factor.

The selection operator is performed at the last step of DE to carry the most promising trial individuals into the next generation:

    x_i^{t+1} = u_i^t   if f(u_i^t) > f(x_i^t),
    x_i^{t+1} = x_i^t   otherwise.    (7)

The DE algorithm repeats the above process until a specified number of iterations has been reached or the maximum number of evaluations has been exceeded. The pseudo-code of the canonical differential evolution algorithm is shown in Algorithm 1.

Algorithm 1: The canonical Differential Evolution algorithm (DE/best/1)
 1: iter ⇐ 0
 2: Initialize population P with NP randomly generated individuals
 3: Evaluate individual P_i, i = 1, 2, ..., NP
 4: while (termination criteria are not satisfied) do
 5:   Find the best-fit individual P_best of the current population
 6:   for i = 1 to NP do
 7:     Select r1, r2 ∈ {1, 2, ..., NP} with r1 ≠ r2 ≠ i
 8:     Copy individual P_i to S_i
 9:     n = rand(1, D)
10:     for j = 1 to D do
11:       if rand(0,1) < Cr or j = D then
12:         S_i[n] = P_best[n] + F · (P_{r1}[n] − P_{r2}[n])
13:       end if
14:       n ⇐ (n + 1) mod D
15:     end for
16:     Evaluate individual S_i
17:     if f(S_i) < f(P_i) then
18:       S_i ⇐ P_i
19:     end if
20:     if f(S_i) > f(P_best) then
21:       P_best ⇐ S_i
22:     end if
23:   end for
24:   iter ⇐ iter + 1
25: end while

B. Crowding

De Jong [15] first presented the concept of crowding to deal with multimodal optimization problems. Crowding is inspired by the natural phenomenon of competition among similar members for limited resources, and the technique aims to preserve the diversity of the population. To achieve this goal, a subset of the population is randomly chosen, and the most similar individual in that subset is replaced by the offspring if the offspring is fitter. Similarity is judged by a distance metric in either the genotypic or the phenotypic search space; in real-valued optimization problems, the Euclidean distance between two points is most widely used. The number of individuals considered for replacement is decided by the parameter named the crowding factor (CF).

Crowding-based DE (CDE) [12] has proven effective in locating peaks and maintaining the found peaks throughout the run. CDE produces many niches around the found optima (global or local); ideally, all individuals are gathered around the global optima when the algorithm terminates. However, because the basins of attraction of various multimodal optimization problems differ in size, some suboptima may vanish during the process of evolution.

DE, as a population-based stochastic search algorithm, must balance two crucial and contradictory aspects, exploration and exploitation [19], in order to guarantee acceptable success on a given task. For multimodal problems, exploration impels the algorithm to probe every area of the feasible search space so as to locate the global optima, while the ability of exploitation
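As a concrete illustration, the five mutation strategies (1)-(5) of Section II.A can be written down directly. The sketch below is ours, not from the paper; the helper names and the plain-list vector representation are assumptions made for clarity.

```python
# Vector helpers over plain Python lists (illustrative, not from the paper).
def _add(a, b): return [x + y for x, y in zip(a, b)]
def _sub(a, b): return [x - y for x, y in zip(a, b)]
def _scale(F, a): return [F * x for x in a]

def mutant(pop, scheme, i, best, r, F):
    """Mutant vector v_i for strategies (1)-(5).  pop is the current
    population, best the index of the fittest individual, r a list of
    distinct random indices different from i, F the mutation factor."""
    x = pop
    if scheme == "rand/1":             # eq. (1)
        return _add(x[r[0]], _scale(F, _sub(x[r[1]], x[r[2]])))
    if scheme == "best/1":             # eq. (2)
        return _add(x[best], _scale(F, _sub(x[r[0]], x[r[1]])))
    if scheme == "current-to-best/1":  # eq. (3)
        v = _add(x[i], _scale(F, _sub(x[best], x[i])))
        return _add(v, _scale(F, _sub(x[r[0]], x[r[1]])))
    if scheme == "best/2":             # eq. (4)
        v = _add(x[best], _scale(F, _sub(x[r[0]], x[r[1]])))
        return _add(v, _scale(F, _sub(x[r[2]], x[r[3]])))
    if scheme == "rand/2":             # eq. (5)
        v = _add(x[r[0]], _scale(F, _sub(x[r[1]], x[r[2]])))
        return _add(v, _scale(F, _sub(x[r[3]], x[r[4]])))
    raise ValueError(scheme)
```

All five strategies share the same shape: a base vector (random or best) plus one or two scaled difference vectors, which is why they are conventionally named base/number-of-differences.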
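The DE/best/1/bin loop of Algorithm 1 can likewise be sketched in a few lines. This is a minimal reading of the algorithm, assuming a maximization problem as in the paper; the function and parameter names (de_best_1_bin, np_size, and so on) are ours, not the authors'.

```python
import random

def de_best_1_bin(f, bounds, np_size=30, F=0.5, Cr=0.9, max_iter=200, seed=0):
    """Canonical DE with the DE/best/1/bin scheme, maximizing f over the
    box given by bounds = [(L_1, U_1), ..., (L_D, U_D)]."""
    rng = random.Random(seed)
    D = len(bounds)
    # Random initial population inside the feasible region (Algorithm 1, line 2).
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_iter):
        best = max(range(np_size), key=lambda k: fit[k])   # line 5
        for i in range(np_size):
            r1, r2 = rng.sample([k for k in range(np_size) if k != i], 2)
            trial = pop[i][:]                 # line 8: copy P_i to S_i
            j_rnd = rng.randrange(D)          # index forced to cross over
            for j in range(D):
                if rng.random() < Cr or j == j_rnd:
                    # best/1 mutation, eq. (2), fused with binomial crossover
                    trial[j] = pop[best][j] + F * (pop[r1][j] - pop[r2][j])
            ft = f(trial)
            if ft > fit[i]:                   # selection, eq. (7)
                pop[i], fit[i] = trial, ft
    best = max(range(np_size), key=lambda k: fit[k])
    return pop[best], fit[best]
```

On a simple unimodal function such as f(x) = -(x - 1)^2 this sketch converges to the single optimum; it is exactly this single-optimum behavior that the crowding and niching machinery of the paper is designed to change.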
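The crowding replacement described in Section II.B can be sketched as follows, assuming Euclidean distance in phenotype space and maximization; crowding_replace and its signature are illustrative, not from the paper.

```python
import math
import random

def crowding_replace(pop, fit, trial, trial_fit, cf, rng):
    """One crowding step: the trial solution competes with the most
    similar (Euclidean-nearest) of CF randomly sampled individuals and
    replaces it only if fitter.  Returns the replaced index, or None."""
    candidates = rng.sample(range(len(pop)), cf)
    nearest = min(candidates, key=lambda k: math.dist(pop[k], trial))
    if trial_fit > fit[nearest]:
        pop[nearest] = trial
        fit[nearest] = trial_fit
        return nearest
    return None
```

With cf equal to the population size NP every offspring competes with its true nearest neighbor, as in crowding DE [12]; a small cf samples only part of the population, which is what makes the replacement errors mentioned in the introduction possible.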