
International Journal of Fluid Machinery and Systems
DOI: http://dx.doi.org/10.5293/IJFMS.2017.10.3.240
Vol. 10, No. 3, July-September 2017
ISSN (Online): 1882-9554

Original Paper

Effects of Latin hypercube sampling on surrogate modeling and optimization

Arshad Afzal1, Kwang-Yong Kim2 and Jae-won Seo3

1Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, India, [email protected]
2,3Department of Mechanical Engineering, Inha University, 253 Yonghyun-Dong, Incheon, 402-751, Republic of Korea

Abstract

Latin hypercube sampling (LHS) is a widely used design-of-experiment technique for selecting the design points at which simulations are run to construct a surrogate model. The exploration/exploitation properties of surrogate models depend on the size and distribution of the design points in the chosen design space. The present study evaluated the performance characteristics of various surrogate models as functions of the LHS procedure (sample size and spatial distribution) for a diverse set of optimization problems. The analysis was carried out for two types of problems: (1) thermal-fluid design problems (optimizations of a convergent-divergent micromixer coupled with pulsatile flow, and of boot-shaped ribs), and (2) analytical test functions (the six-hump camel back, Branin-Hoo, Hartman 3, and Hartman 6 functions). Three surrogate models, namely response surface approximation, Kriging, and radial basis neural networks, were tested. The important findings are illustrated using box plots. The surrogate models were analyzed in terms of global exploration (accuracy over the domain space) and local exploitation (ease of finding the global optimum point). Radial basis neural networks showed the best overall global exploration characteristics as well as the strongest tendency to find the approximate optimal solution for the majority of the tested problems.
To build a surrogate model, it is recommended to use an initial sample size equal to 15 times the number of design variables. The study provides useful guidelines on the effects of the initial sample size and distribution on surrogate construction and subsequent optimization using an LHS sampling plan.

Keywords: Latin hypercube sampling, Optimization, Surrogate model, Cross-validation, Global optimization.

1. Introduction

Design optimization has been widely applied in many engineering fields [1-8]. Optimization problems can be classified as either single-objective (a unique global optimum solution) or multi-objective (non-dominated solutions/Pareto-optimal set) problems. Optimization algorithms generally require a large number of function evaluations to yield the optimum design(s). Thus, in many applications, design optimization with high-fidelity analysis is impractical because of the large computational cost; for example, in thermo-fluid engineering, where the non-linear Navier-Stokes equations must be solved, such an analysis usually requires a great deal of computing time. In this respect, surrogate modeling is an effective tool for reducing computational time by approximating the objective function(s) from numerical simulations of a limited number of designs. Surrogate-based optimization has been used by many researchers over the last two decades, especially in the field of thermo-fluid engineering [1-6]. Detailed reviews of surrogate modeling techniques and their applications can be found in Queipo et al. [9], Forrester and Keane [10], and Samad and Kim [6]. Barthelemy and Haftka [11] also reviewed applications of surrogate modeling to structural optimization problems. Samad et al.
[5] comparatively investigated the predictive capabilities of different surrogate models, such as response surface approximation (RSA), Kriging (KRG), and radial basis neural network (RBNN) models, for optimizing a transonic axial compressor blade. Afzal and Kim [12-15] used RSA, KRG, and RBNN for function approximation in single- and multi-objective shape optimizations of micromixer designs. Besides optimization, other aspects of surrogate models have also been studied: Jin et al. [16] compared different surrogate models in terms of their accuracy, robustness, and efficiency in approximating mathematical functions. Simpson et al. [17] surveyed various metamodeling techniques for approximating deterministic computer analysis codes and provided recommendations for their appropriate use. These studies revealed discrepancies among different surrogates in solving a given design problem; no single surrogate was found to be the most effective for all problems.

(Received August 30 2016; revised January 19 2017; accepted for publication February 23 2017; review conducted by Minsuk Choi. Paper number O16028C. Corresponding author: Arshad Afzal, [email protected])

To build a surrogate model, numerical simulations at a sufficient number of design points are required to ensure acceptable accuracy. A design-of-experiment (DOE) procedure can be used to construct a surrogate model economically with a minimum number of numerical simulations. Several DOE methods can be used to select the design sites for simulation; full factorial, fractional factorial, central composite design, Latin hypercube sampling (LHS), and quasi-Monte Carlo (QMC) designs are commonly used. For some DOE procedures, such as full factorial, fractional factorial, and central composite designs, the sample size is fixed for a given number of factors (i.e., design variables) and increases with the number of factors; this may lead to an unmanageable sample size.
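To make the scaling issue concrete, the run counts of the fixed-size designs mentioned above can be computed directly. The following is a minimal sketch (the function names are illustrative, and the central composite count assumes the standard layout with a single centre point):

```python
def full_factorial_size(levels, k):
    """Runs in a full factorial design: every combination of the
    chosen levels across all k factors, i.e. levels**k."""
    return levels ** k

def central_composite_size(k):
    """Runs in a standard central composite design with one centre
    point: 2**k factorial corners + 2*k axial points + 1."""
    return 2 ** k + 2 * k + 1

# Run counts grow rapidly with the number of design variables k,
# whereas an LHS plan of any chosen size N is valid for any k.
for k in range(2, 7):
    print(k, full_factorial_size(3, k), central_composite_size(k))
```

For instance, a three-level full factorial design in six variables already requires 729 simulations, which is why sample-size-flexible plans such as LHS are attractive for expensive flow simulations.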
On the other hand, LHS and QMC allow flexibility in determining the sample size.

A careful review of the literature shows that there are two approaches to determining DOE samples for surrogate model construction. The first, known as "single-stage sampling," generates all the sample points at once to construct the surrogate; it is also called the "one-shot" approach. The other is "adaptive sampling," where additional points are added in several stages to repeatedly update the surrogate model until a desired level of accuracy is obtained. Efficient global optimization (EGO) has been widely used by many researchers to obtain a reasonable function approximation with a balance between local exploitation and global exploration properties. However, most of these studies were limited to Kriging-based optimization [18-21]. Regis [18] used a trust-region-like approach for Kriging-based optimization that selects new points by maximizing the expected improvement function over a small region in the vicinity of the current best solution instead of over the entire domain. Wang et al. [19] performed a comparative study of expected improvement (EI)-assisted optimization with different surrogates, namely KRG, RBNN, support vector regression (SVR), and linear Shepard (SHEP), on numerical test problems. Their optimization starts with a fixed initial sample size equal to 3D (D = number of design variables) for small-scale test problems and 10D for large-scale ones, and the EI guides the selection of the next candidate sampling point. In their tests, Kriging-based EGO was found to be the most robust method. Irrespective of the approach, the surrogate model always depends on the initial number and distribution of the sample points.
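For reference, the EI criterion used by these adaptive strategies is the standard expected-improvement expression for a surrogate prediction with mean mu and standard deviation sigma at a candidate point, given the best objective value f_min observed so far. A minimal sketch (for a minimization problem, using the closed form popularized by the EGO literature):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Expected improvement over the current best value f_min for a
    (Gaussian) surrogate prediction with mean mu and std. dev. sigma:
    EI = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma.
    A large EI flags points that are either promising (low mu, local
    exploitation) or uncertain (high sigma, global exploration)."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, no expected improvement
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # Phi(z)
    return (f_min - mu) * cdf + sigma * pdf
```

The next infill point is the candidate maximizing this quantity, which is how the EI balances the exploitation/exploration trade-off discussed above.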
Although some previous studies [22-23] looked into this matter, the question "what are the optimum number and distribution of initial sample points for surrogate model construction?" has not been clearly answered. Therefore, considering the increasing application of surrogate-based optimization to engineering problems, it is important to investigate the effects of the number and distribution of DOE sample points in the design space on the accuracy of the available surrogate models. Among the various DOE procedures, LHS has been widely used owing to its good space-filling properties, its flexibility in sample size, and its modest sample-size requirements. However, the accuracy of a surrogate model based on LHS depends on the number of sample points, which is selected by the designer, and on their distribution in the design-variable space; the distribution of sample points differs between runs of the LHS procedure. In contrast to previous studies [18-19, 23], which focused on the addition of infill sample points guided by the EI or similar strategies, the main concern of the present research is the effect of the initial sample size and distribution in the LHS procedure on surrogate modeling and optimization. Thus, the main objective of the present study is to investigate the combined effect of the number and distribution of LHS points on the accuracy of the surrogate model for different problems and for different surrogate models, namely RSA, KRG, and RBNN. The performances of the surrogate models were evaluated in terms of accuracy and the efficiency of finding the global optimum point.

2. Latin Hypercube Sampling

LHS [24-25] is used to select design (sample) points in a continuous design space bounded by the lower and upper bounds of the design variables. The approach generates random sample points while ensuring that all portions of the design space are represented.
Using McKay et al.’s notation [24], a sample of size N can be constructed by dividing the range of each factor (input variable) into N strata of equal marginal probability 1/N and sampling once from each stratum. Further, the
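The stratified construction just described can be sketched in a few lines of Python (a minimal sketch on the unit hypercube; the random pairing of strata across dimensions is precisely why repeated runs of the procedure, with the same sample size, yield different spatial distributions):

```python
import random

def latin_hypercube(n, k, seed=0):
    """Draw n points in [0, 1)^k by Latin hypercube sampling: each
    axis is divided into n strata of equal marginal probability 1/n,
    each stratum is sampled exactly once, and the strata are randomly
    paired across the k dimensions."""
    rng = random.Random(seed)
    sample = [[0.0] * k for _ in range(n)]
    for j in range(k):
        perm = list(range(n))
        rng.shuffle(perm)  # random stratum order for dimension j
        for i in range(n):
            # one uniform draw inside stratum perm[i] of dimension j,
            # i.e. the interval [perm[i]/n, (perm[i]+1)/n)
            sample[i][j] = (perm[i] + rng.random()) / n
    return sample

# Points are then rescaled linearly to the actual variable bounds.
points = latin_hypercube(n=10, k=2)
```

By construction, projecting the sample onto any single design variable recovers exactly one point per stratum, which is the space-filling guarantee that distinguishes LHS from plain random sampling.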