Latin Hypercube Sampling and Support Vector Regression-Assisted Analysis of Electromagnetic Scattering Characteristics of Targets

Dong-Hai Xiao, Li-Xin Guo, Wei Liu, Wan-Qiang Qin, Mu-Yu Hou
School of Physics and Optoelectronic Engineering, Xidian University, Xi'an 710071, China

Abstract—Reliable and efficient analysis of electromagnetic scattering from targets is important for target recognition and stealth design. In this paper, a method combining sampling design with machine learning is proposed. Latin hypercube sampling is adopted to sample backscattering data in hopes of improving sampling efficiency, and support vector regression (SVR) is utilized to train the model so as to obtain an efficient and accurate nonlinear regressor. With the aid of the multilevel fast multipole method (MLFMM), the feasibility of the proposed method is validated by the simulation data of a SLICY model. It also shows the potential of sampling design in the analysis of electromagnetic scattering from a complex target.

Index Terms—Latin hypercube sampling (LHS); support vector regression (SVR); electromagnetic scattering; sampling design; machine learning

I. INTRODUCTION

How to efficiently and accurately acquire the electromagnetic scattering characteristics of targets has always been a research hotspot in computational electromagnetics. It plays a vital role in target recognition and stealth structure design. Traditional numerical methods, such as the method of moments (MOM), have high accuracy, but their computational efficiency is limited [1]. To tackle this difficulty, a large number of acceleration algorithms have emerged in the past decades [2],[3]. The core idea of these algorithms is to reduce computational complexity and realize parallel computing. In recent years, machine learning has made remarkable achievements in many fields, such as image recognition [4], medical diagnosis [5] and natural language processing [6]. Researchers have also extended machine learning to computational electromagnetics. For instance, Du et al. [7] proposed a machine learning scheme for the analysis of polarimetric bistatic scattering from a finite dielectric cylinder, and experimental results demonstrated its feasibility. Li et al. [8] proposed a 3D Poisson's equation solver based on a deep learning technique, taking advantage of the power of CNNs in approximating highly nonlinear functions and predicting the potential distribution of the electrostatic field. On the other hand, Latin hypercube sampling (LHS) [9] is widely used in regression analysis because it produces samples with high representativeness in the domain. It ensures that each value (or range of values) of a variable is represented in the samples and eliminates the confounding effect of various experimental factors without increasing the number of subjects in the experiment [10]. Kent R. Davey [11] proposed a hybrid approach to magnetic field optimization problems combining Latin hypercube sampling and pattern search and achieved a remarkable acceleration effect. Fei Kang et al. [12] developed a system probabilistic stability evaluation method for slopes based on Gaussian process regression (GPR) and Latin hypercube sampling (LHS) and obtained very reliable performance.

In this paper, SVR is selected as the regression model for its good applicability to small datasets [13], and Latin hypercube sampling (LHS) and uniformly spaced sampling (USS) are applied to obtain the required datasets for two variables: the zenith angle $\theta$ and the azimuth angle $\phi$. Experimental results show that LHS can effectively improve the accuracy of regression compared with USS.

II. METHODOLOGY

A. Latin Hypercube Sampling (LHS)

Latin hypercube sampling (LHS) is essentially a form of stratified sampling. The detailed procedure for producing an s-dimensional Latin hypercube sample of size N can be described as follows [14]: 1) divide the continuous range of each variable into N equally spaced intervals, and mark the interval $[(i-1)/N,\, i/N]$ with the label $i$; 2) randomly arrange the labels $(1,\ldots,N)$ for each dimension, so that $\pi_{1j},\ldots,\pi_{Nj}$ denotes a random permutation of the coordinates of the $j$-th dimension; 3) repeat this independently for the $s$ dimensions to obtain a random matrix $(\pi_{ij})_{N\times s}$; 4) the final Latin hypercube sample $(C_{ij})_{N\times s}$ can then be expressed as

$C_{ij} = \big(\pi_{ij} - \tfrac{1}{2} + u_{ij}\big)/N, \quad i = 1,\ldots,N,\; j = 1,\ldots,s$    (1)

where $u_{ij}$ denotes an i.i.d. sample uniformly distributed on $[-0.5, 0.5]$ and independent of $\pi_{ij}$. Fig. 1 shows a Latin hypercube sample with N = 7 and s = 2. Obviously, each row and each column of the constructed grid contains only one sample. That is, each component is represented in an undifferentiated and fully stratified manner, which eliminates the confounding effect of various experimental factors.

Figure 1. A Latin hypercube sample with N = 10, s = 2 for X = [X1, X2] distributed uniformly on the unit square.
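As a concrete illustration of steps 1)-4) and Eq. (1), the following minimal Python sketch generates a Latin hypercube sample on the unit hypercube and then maps it to the angular domain used later in this paper; the function name and the assumed angular ranges (zenith $\theta$ from $0^\circ$ to $90^\circ$ and azimuth $\phi$ from $0^\circ$ to $360^\circ$, covering the upper half-space) are illustrative and not taken from the original text.

import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    # Eq. (1): C_ij = (pi_ij - 1/2 + u_ij) / N, where each column of pi is an
    # independent random permutation of the labels 1..N and u_ij is i.i.d.
    # uniform on [-0.5, 0.5], i.e. a jitter within each stratum.
    rng = np.random.default_rng() if rng is None else rng
    pi = np.column_stack([rng.permutation(n_samples) + 1 for _ in range(n_dims)])
    u = rng.uniform(-0.5, 0.5, size=(n_samples, n_dims))
    return (pi - 0.5 + u) / n_samples

# Example: 1387 samples of (theta, phi) over the upper half-space (assumed ranges).
C = latin_hypercube(1387, 2)
theta = C[:, 0] * 90.0    # zenith angle in degrees
phi = C[:, 1] * 360.0     # azimuth angle in degrees

Because each column of the permutation matrix visits every one of the N strata exactly once, every stratum of every variable receives exactly one sample, which is the stratification property illustrated in Fig. 1.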
B. Support Vector Regression (SVR)

The support vector machine (SVM) was first used to solve the problem of pattern recognition [15]. Since Vapnik introduced the $\varepsilon$-insensitive loss function (Fig. 2) into support vector machines, the algorithm has also been used to solve the problem of nonlinear regression, where it is called support vector regression (SVR) [13].

Figure 2. The $\varepsilon$-insensitive loss function.

The solution of SVR is obtained by transforming the original minimization problem into its dual maximization problem. For the sample set $(\mathbf{x}_i, y_i)$, $i = 1, 2, \ldots, \ell$, the dual problem can be expressed as follows:

$\max_{\boldsymbol{\alpha},\hat{\boldsymbol{\alpha}}}\; \sum_{i=1}^{\ell}\big[y_i(\alpha_i-\hat{\alpha}_i)-\varepsilon(\alpha_i+\hat{\alpha}_i)\big]-\frac{1}{2}\sum_{i,j=1}^{\ell}(\alpha_i-\hat{\alpha}_i)(\alpha_j-\hat{\alpha}_j)\,k(\mathbf{x}_i,\mathbf{x}_j)$
$\text{s.t.}\;\; \sum_{i=1}^{\ell}(\alpha_i-\hat{\alpha}_i)=0, \quad \alpha_i,\hat{\alpha}_i\in[0,C]$    (2)

where $\alpha_i$ and $\hat{\alpha}_i$ denote the Lagrange multipliers, and $k(\mathbf{x}_i,\mathbf{x}_j)$ represents the underlying kernel function, such as the linear kernel function, the Gaussian kernel function and so on. The Gaussian kernel function earns our preference because of the nonlinearity of the scattering characteristics. Thus the SVR approximation can be obtained by

$f(\mathbf{x}) = \sum_{i=1}^{\ell}(\alpha_i-\hat{\alpha}_i)\,k(\mathbf{x}_i,\mathbf{x}) + b.$    (3)

C. Datasets

In order to make a comprehensive investigation into the presented method, two datasets of monostatic scattering from metal objects were prepared for the experimental study, both involving a SLICY model (Fig. 3). One was sampled with LHS, and the other with uniformly spaced sampling (USS) in which the interval of both $\theta$ and $\phi$ was $5^\circ$. Considering the computational efficiency and fidelity, the multilevel fast multipole method (MLFMM) was employed to calculate the monostatic radar cross section (RCS) of the perfectly conducting objects, and the frequency was set to 1 GHz. In addition, sampling to build the datasets was carried out only in the upper half-space, in view of the practical significance of the aforementioned model. More details about these datasets can be found in TABLE I.

Figure 3. The SLICY model.

TABLE I
DATASETS INFORMATION

Datasets | Sampling mode | Number | Polarization
U1387H   | USS           | 1387   | HH
L1387H   | LHS           | 1387   | HH

III. RESULTS AND DISCUSSIONS

In this section, two subgoals are established to comprehensively analyze the advantages of the method: 1) to investigate the regression precision of the proposed method; 2) to analyze the error distribution.

A. Performance Verification

The SVR model is trained on the two datasets U1387H and L1387H, respectively. Each dataset is divided into a training set and a test set in a 9:1 ratio, so each training set contains 1248 HH-polarized monostatic scattering samples of the SLICY model. K-fold cross-validation (CV) is applied to adjust the hyper-parameters, and K is set to 7 since $\ln(1248) \approx 7.13$, as recommended in [16]. Grid search, one of the commonly used search algorithms for hyper-parameters, is also employed.
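The training and evaluation pipeline just described (9:1 split, Gaussian-kernel SVR, grid search with 7-fold cross-validation) can be sketched in Python with scikit-learn as follows; the load_dataset helper, the feature-scaling step and the particular hyper-parameter grid are assumptions made for illustration and are not specified in the paper.

import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

# X: (1387, 2) array of (theta, phi) angles; y: (1387,) monostatic RCS in dBsm
# computed by MLFMM. load_dataset is a hypothetical helper for the stored data.
X, y = load_dataset("L1387H")

# 9:1 split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# Gaussian (RBF) kernel SVR with hyper-parameters tuned by 7-fold CV grid search.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0], "svr__epsilon": [0.05, 0.1, 0.5]}
search = GridSearchCV(model, grid, cv=7, scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
print("RMSE:", np.sqrt(mean_squared_error(y_test, y_pred)), "dBsm")
print("R^2 :", r2_score(y_test, y_pred))

Since $\phi$ is periodic, one could alternatively encode it as $(\sin\phi, \cos\phi)$ before scaling; the original text does not specify the feature representation, so the raw angles are used in this sketch.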
Fig. 4 and Fig. 5 show the test results of training SVR with the datasets obtained by the different sampling methods in HH polarization, where $\hat{y}$ and $y$ represent the predicted value and the ground truth, respectively. It can be seen that the RMSE of the SVR trained on U1387H is 1.343 dBsm while that on L1387H is 1.182 dBsm, and the $R^2$ of the former is 0.949, lower than that of the latter, 0.951. This means that LHS can effectively improve the regression accuracy of SVR for electromagnetic scattering regression tasks.

Figure 4. Test results of training SVR with U1387H.
Figure 5. Test results of training SVR with L1387H.

B. Error Distribution Analysis

Error analysis is of positive significance for identifying the causes of residual error and guiding further improvement. Therefore, it is necessary to analyze the spherical distribution of the regression errors in our experiments. Here, the error between the predicted value $\hat{y}$ and the ground truth $y$ is defined as

$\mathrm{err} = y - \hat{y}.$    (4)

Fig. 6 and Fig. 7 show the spherical distribution of the regression errors in HH polarization.

Figure 6. The spherical distribution of regression errors where the SVR model is trained with U1387H.
Figure 7. The spherical distribution of regression errors where the SVR model is trained with L1387H.

ACKNOWLEDGMENT

This research was supported by the National Natural Science Foundation of China (Grant No. 61871457, Grant No. 61431010) and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 61621005).

REFERENCES

[1] C. C. Lu and C. C. Weng, "A multilevel algorithm for solving a boundary integral equation of wave scattering," Microwave & Optical Technology Letters, vol. 7, no. 10, pp. 466-470, 2010.
[2] J. Guan, S. Yan, and J. M. Jin, "An OpenMP-CUDA implementation of multilevel fast multipole algorithm for electromagnetic simulation on multi-GPU computing systems," IEEE Transactions on Antennas & Propagation, vol. 61, no. 7, pp. 3607-3616, 2013.
[3] Z. Peng, R. Hiptmair, Y. Shao, and B. Mackie-Mason, "Domain decomposition preconditioning for surface integral equations in solving challenging electromagnetic scattering problems," IEEE Transactions on Antennas & Propagation, vol. 64, no. 1, pp. 210-223, 2015.
[4] J. Shuiwang, Y. Ming, and Y. Kai, "3D convolutional neural networks for human action recognition," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 1, pp. 221-231, 2013.