
Metaheuristic Optimization, Machine Learning, and AI Virtual Workshop
March 8-12, 2021

SPEAKER TITLES/ABSTRACTS

Christian Blum
Spanish National Research Council
“Hybrid Metaheuristics”

During the last decade, research on metaheuristics in the context of combinatorial optimization has experienced an interesting shift towards the hybridization of metaheuristics with other optimization techniques. At the same time, the focus of research has changed from being rather algorithm-oriented to being more problem-oriented: the emphasis nowadays is on solving the problem at hand, not so much on promoting a particular metaheuristic. This has led to an enormously fruitful cross-fertilization of different areas of optimization, algorithmics, mathematical modelling, operations research, statistics, simulation, and other fields. This cross-fertilization has resulted in a multitude of powerful hybrid algorithms obtained by combining metaheuristics with mathematical programming, dynamic programming, constraint programming, and lately also with machine learning. Nowadays nearly all high-performance optimization techniques in combinatorial optimization are hybrids. In this talk, I will provide a short glimpse of some recent developments in this field.

Ray-Bing Chen
National Cheng Kung University
“Particle Swarm Stepwise (PaSS) Algorithm for Information Criterion Based Variable Selections”

A new stochastic search algorithm is proposed for solving information-criterion-based variable selection problems. The idea behind the proposed algorithm is to search for the best model under a pre-specified information criterion using multiple search particles. These particles simultaneously explore the candidate model space and communicate with each other to share search information. A new stochastic stepwise procedure updates each particle’s model during the search by adding or deleting variables. The proposed algorithm can also be used to generate variable selection ensembles efficiently. Several examples are used to demonstrate the performance of the proposed algorithm. A parallel version of the algorithm is also introduced to reduce computation time.

Carlos Coello Coello
CINVESTAV-IPN
“An Overview of Evolutionary Multi-Objective Optimization”

Multi-objective optimization refers to solving problems having two or more (often conflicting) objectives at the same time. Such problems are ill-defined, and their solution is not a single point but a set of solutions representing the best possible trade-offs among the objectives. Evolutionary algorithms are particularly suitable for solving multi-objective problems because they are population-based and require little domain-specific information to conduct the search. Due to these advantages, the development of the so-called multi-objective evolutionary algorithms (MOEAs) has significantly increased in the last 15 years. In this talk, we will provide a general overview of the field, including the main algorithms in current use as well as some of their many applications.
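As a concrete illustration of the trade-off concept at the heart of MOEAs, the minimal sketch below shows the standard Pareto-dominance test and the resulting non-dominated filter. This is a generic illustration, not code from the talk, and assumes all objectives are to be minimized.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a population down to its non-dominated set, i.e. the
    current approximation of the Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example with two conflicting objectives, e.g. (cost, weight):
designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 2.0)]
print(nondominated(designs))  # (3.0, 8.0) is dominated by (2.0, 7.0)
```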
Abhishek Gupta
Agency for Science, Technology and Research (A*STAR)
“Transfer and Multi-task Evolutionary Computation”

Optimization is at the heart of problem-solving in domains spanning the natural sciences, engineering, operations research, and even machine learning algorithms. In all of these domains, the problems encountered are often repetitive in nature, e.g., optimizing products or processes to operate under different environments, or solving partial differential equations (which can be seen as residual minimization problems) under varying initial/boundary conditions. However, traditional optimization solvers, including those of evolutionary computation, are designed to handle only a single task at a time, starting from a state of zero prior knowledge. Unlike humans, who continually learn with experience, the capabilities of such solvers do not grow despite repeatedly solving similar problems. In this talk, I shall present recent advances towards a new breed of probabilistic model-guided (evolutionary) optimization algorithms that are able to learn across tasks. Instead of limiting the updates of the search process to information acquired from a single optimization problem, information is transferred and adaptively reused across multiple tasks, leading to faster convergence. Algorithmic realizations of this idea shall be discussed, with diverse applications ranging from rapid training of physics-informed neural networks to manufacturing systems optimization.

Yaochu Jin
University of Surrey
“Data-driven Bayesian Evolutionary Optimization”

Bayesian optimization has been shown to be successful in handling black-box, computationally expensive optimization problems. However, traditional Bayesian optimization is limited to low-dimensional, single-objective problems. This talk presents some recent advances in extending Bayesian optimization to high-dimensional, multi-objective optimization by integrating machine learning with evolutionary algorithms, two techniques in artificial intelligence. Advanced machine learning methods such as ensembles and deep neural networks are adopted to replace the Gaussian process in order to reduce time complexity, and evolutionary algorithms are employed to solve high-dimensional (up to 100 decision variables), multi-objective (up to 20 objectives) expensive optimization problems.

Seongho Kim
Wayne State University/Karmanos Cancer Institute
“Multi-Objective Optimization on Phase II Single-Arm Trial Designs”

Simon’s two-stage minimax and optimal designs are widely used for Phase II single-arm trials with a binary endpoint. The initial step in finding these designs is to construct a set of feasible solutions constrained by the desired type I and II error rates. The minimax and optimal designs are then the feasible solutions with the smallest total sample size and the smallest expected total sample size under the null response rate, respectively. That is, these designs are not optimized with regard to the type I and II error rates, and the estimated error rates therefore often deviate from the desired ones. We here develop a new design, called the Pareto-optimal design. The developed designs use multi-objective optimization (MOO) to find the Pareto frontier in order to make the estimated error rates as close as possible to the desired rates, which also increases the probability of early termination under the null response rate. Furthermore, we will show applications of genetic algorithm (GA)-based MOO to find Pareto-optimal designs.
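To make the quantities in this abstract concrete, the sketch below computes the operating characteristics of a given Simon two-stage design (n1, r1, n, r): the attained type I and II error rates, the probability of early termination, and the expected sample size under the null. It is a generic illustration of the feasibility constraints being optimized, not the Pareto-optimal search itself, and the example parameters are the commonly tabulated optimal design for p0 = 0.1, p1 = 0.3 with nominal error rates of 0.10.

```python
from scipy.stats import binom

def simon_properties(n1, r1, n, r, p0, p1):
    """Operating characteristics of a Simon two-stage design.

    Stage 1: enrol n1 patients; stop for futility if responses <= r1.
    Stage 2: enrol n - n1 more patients; declare the treatment active
    (reject H0) if the total number of responses exceeds r.
    """
    def reject_prob(p):
        # Sum over the stage-1 outcomes x1 that continue to stage 2.
        # For x1 > r the second factor is 1, since binom.cdf of a
        # negative argument is 0.
        return sum(
            binom.pmf(x1, n1, p) * (1.0 - binom.cdf(r - x1, n - n1, p))
            for x1 in range(r1 + 1, n1 + 1)
        )

    pet0 = binom.cdf(r1, n1, p0)          # early-termination prob. under H0
    en0 = n1 + (1.0 - pet0) * (n - n1)    # expected total sample size under H0
    alpha = reject_prob(p0)               # attained type I error rate
    beta = 1.0 - reject_prob(p1)          # attained type II error rate
    return alpha, beta, pet0, en0

print(simon_properties(n1=12, r1=1, n=35, r=5, p0=0.1, p1=0.3))
```

A Pareto-optimal search of the kind described above would treat (alpha, beta) and the expected sample size as competing objectives rather than mere feasibility constraints.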
Jinglai Li
University of Birmingham
“Entropy Estimation via Normalizing Flow”

Entropy estimation is an important problem in information theory and statistical science. Many popular entropy estimators suffer from estimation bias that grows quickly with dimensionality, rendering them unsuitable for high-dimensional problems. In this work we propose a transform-based method for high-dimensional entropy estimation, which consists of two main ingredients. First, by modifying the k-NN-based entropy estimator, we propose a new estimator that enjoys small estimation bias for samples close to a uniform distribution. Second, we design a normalizing-flow-based mapping that pushes samples toward a uniform distribution, and we derive the relation between the entropy of the original samples and that of the transformed ones. As a result, the entropy of a given set of samples is estimated by first transforming them toward a uniform distribution and then applying the proposed estimator to the transformed samples. Numerical experiments demonstrate the effectiveness of the method for high-dimensional entropy estimation problems.

Dietmar Maringer
University of Basel
“Meta-Heuristics in Finance”

Many quantitative problems in finance come with demanding optimization problems that defy traditional numerical methods. This has often led to models that rely on simplifying assumptions, and it is not always clear how much these simplifications deteriorate the quality of the results. Metaheuristics are therefore a welcome extension to the financial economist’s toolbox: they can deal with non-convex and discontinuous search spaces and many types of constraints, which adds substantial flexibility and allows for more realistic models. In combination with machine learning techniques, the scope of solvable quantitative and computational problems has increased further. This talk presents how meta-heuristics and related methods can be employed in financial modelling and decision making, and provides examples for topics such as portfolio optimization, algorithmic trading, and price modelling.

Soumya Mohanty
University of Texas Rio Grande Valley
“Particle Swarm Optimization in Statistical Regression: Adaptive spline fitting and other case studies”

We discuss some instances of parametric and nonparametric regression where optimization roadblocks were successfully removed with particle swarm optimization (PSO). In nonparametric regression, which constitutes the principal part of the talk, we discuss adaptive spline fitting with free knots and how PSO overcomes the high-dimensional, non-convex optimization barrier it presents. This allows the virtues of free knot placement, such as better fit quality and unified handling of smooth and (a certain class of) non-smooth curves, to be realized. Besides standard benchmarks, we present applications of the adaptive spline fitting method to real-world problems drawn from the newborn…
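For readers unfamiliar with PSO itself, here is a minimal global-best PSO of the kind that could drive a free-knot search. The Rastrigin test function stands in for the non-convex knot-placement objective and is purely an illustrative assumption, not the objective used in the talk.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Global-best PSO minimizing f over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pval = np.apply_along_axis(f, 1, x)               # personal-best values
    g = pbest[np.argmin(pval)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Stand-in objective: the Rastrigin function, a standard non-convex benchmark.
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best, val = pso(rastrigin, dim=5)
print(best, val)
```

The coefficients (w = 0.72, c1 = c2 = 1.49) are standard constriction-type settings; in an adaptive spline application the objective would instead be the fit residual as a function of the free knot positions.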