Volume III, Issue III, March 2016    IJRSI    ISSN 2321-2705

Wind Power Forecasting Using Extreme Learning Machine

¹Asima Syed, ²S.K. Aggarwal
¹Electrical and Renewable Energy Engineering Department, Baba Ghulam Shah Badshah University, India
²Electrical Engineering Department, Maharishi Markandeshwar University, India

Abstract─ Wind generation is one of the most rapidly growing sources of renewable energy. The uncertainty in wind power generation is large due to the variability and intermittency of wind speed and the operating requirements of power systems. Wind power forecasting is one of the inexpensive and direct methods of alleviating the negative influences of the intermittency of wind power generation on the power system. This paper proposes an extreme learning machine (ELM) based forecasting method for wind power generation. ELM is a learning algorithm for training single-hidden layer feedforward neural networks (SLFNs). The performance of the proposed model is assessed using wind power data from three wind power stations of Ontario, Canada as the test case systems. The effectiveness of the proposed model is compared with the persistence benchmark model.

Keywords─ Extreme learning machine (ELM); single-hidden layer feedforward network (SLFN); wind power forecasting; wind power.

NOMENCLATURE

b        Biases of ELM
b̂        Approximated biases
H        Hidden layer output matrix of ELM
H†       Moore-Penrose generalized inverse of ELM's hidden layer output matrix
f(·)     Output function of ELM
Ñ        Number of ELM's hidden nodes
ŵ        Approximated input weights
N        Number of training samples
n        Dimension of the input vector of ELM
m        Dimension of the output vector of ELM
i, j, l  Common indices
g(·)     Activation function of ELM
w        Input weights of ELM
β        Output weights of ELM
β̂        Approximated output weights

I. INTRODUCTION

Wind generation is an increasingly exploited form of renewable energy. However, there is great variation in wind generation due to the erratic nature of the earth's atmosphere, and this poses a number of intricacies that act as a restraining factor for this energy source. Wind generation varies with time and location due to fluctuations in wind speed. Therefore, unexpected variation in the output of a wind farm may escalate the operating costs of the electricity system by increasing the requirements for primary reserves, as well as pose potential risks to the reliability of the electricity supply [1]. A grid operator's priority is to forecast the variations of wind power production so as to schedule the spinning reserve capacity and to regulate grid operations [2], [3]. This makes wind power forecasting tools very important. Besides transmission system operators (TSOs), forecasting tools are required by various end-users, including energy traders, energy service providers (ESPs) and independent power providers (IPPs), to provide inputs for tasks such as energy trading, economic scheduling and security assessment. Researchers therefore focus on developing forecasting tools that predict wind power production with adequate accuracy. Several approaches, such as the persistence method, the physical modeling approach, statistical models, and soft computing models (SCMs), are employed in the literature for short-term deterministic wind power forecasting (WPF). The persistence method, also called the "naive predictor", is employed as a benchmark for comparing other forecasting tools for short-term WPF [4].
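For reference, the persistence method carries the most recent observation forward to every look-ahead hour. A minimal Python sketch (the function name and array layout are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def persistence_forecast(history, horizon):
    """Naive predictor: every look-ahead step repeats the last
    observed wind power value, p_hat(t + k) = p(t)."""
    return np.full(horizon, history[-1])

# Example: hourly wind power observations (MW), 12-hour-ahead forecast.
observed = np.array([41.5, 39.8, 40.2])
print(persistence_forecast(observed, horizon=12))  # twelve copies of 40.2
```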
The physical modeling approach involves Numerical Weather Prediction (NWP) for wind forecasting, which utilizes various weather data and operates by evaluating complex mathematical models [5]. Statistical models, such as exponential techniques, autoregressive moving average models and grey predictors, employ historical data to tune the model parameters, and the error is minimized when the patterns match the historical ones within a stipulated bound [6]-[8]. SCMs, such as the back-propagation neural network [9], probabilistic neural network, radial basis function neural network [10], cascade correlation neural network, fuzzy ARTMAP [11], support vector machines [12], Gaussian processes [13], and fuzzy logic, are widely used in WPF. Evolutionary optimization techniques, such as the genetic algorithm (GA) [14], particle swarm optimization (PSO) [15] and the firefly algorithm [16], are employed for optimizing neural networks (NNs). Hybrid intelligent algorithms that combine fuzzy logic and NNs for WPF, such as neuro-fuzzy methods [17] and the adaptive neuro-fuzzy inference system (ANFIS) [18], are also gaining popularity.

In this paper, a new forecasting approach is proposed based on the extreme learning machine (ELM), a unique learning algorithm for training single-hidden layer feedforward neural networks (SLFNs). Conventionally, all the parameters of feedforward neural networks need to be tuned, and thus there exists a dependency between the parameters of different layers. For the past decades, gradient-based methods have been used extensively for training feedforward NNs; they are generally very slow due to inappropriate learning steps, may converge to local minima, and require numerous iterative learning steps to attain improved learning performance. In contrast, ELM randomly chooses the input weights and analytically determines the output weights by simple matrix computations, which makes it an extremely fast learning algorithm compared with common algorithms such as back-propagation [19]. Apart from reaching the smallest training error, ELM also tends to reach the smallest norm of weights, which distinguishes it from traditional learning algorithms. The proposed model is assessed using data from three wind power stations of Ontario, Canada, namely the Amaranth, Port Alma and Kingsbridge wind farms. The performance of the proposed model is evaluated by comparison with the persistence benchmark model, and the results reveal a significant improvement over the persistence model.

The rest of this paper is organized as follows. Section II describes the ELM learning algorithm for SLFNs, Section III briefs the benchmark reference model, Section IV presents the numerical results and discussions, and Section V delineates the conclusions.

II. EXTREME LEARNING MACHINE

The extreme learning machine (ELM) for single-hidden layer feedforward networks (SLFNs) randomly chooses the input weights and analytically determines the output weights through simple matrix computations [20]. For N arbitrary distinct samples (x_i, t_i), where x_i = [x_{i1}, x_{i2}, …, x_{in}]^T ∈ R^n and t_i = [t_{i1}, t_{i2}, …, t_{im}]^T ∈ R^m, standard SLFNs with Ñ hidden neurons and activation function g(x) are mathematically modeled as

$$f(x_j; w, b, \beta) = \sum_{i=1}^{\tilde N} \beta_i\, g(w_i \cdot x_j + b_i), \qquad j = 1, \ldots, N \tag{1}$$

where w_i = [w_{i1}, w_{i2}, …, w_{in}]^T is the weight vector connecting the i-th hidden neuron and the input neurons, β_i = [β_{i1}, β_{i2}, …, β_{im}]^T is the weight vector connecting the i-th hidden neuron and the output neurons, and b_i is the threshold of the i-th hidden neuron. w_i · x_j denotes the inner product of w_i and x_j. The output neurons are chosen to be linear.

That the standard SLFNs with Ñ hidden neurons and activation function g(·) can approximate these N samples with zero error means that there exist β_i, w_i and b_i such that

$$\sum_{i=1}^{\tilde N} \beta_i\, g(w_i \cdot x_j + b_i) = t_j, \qquad j = 1, \ldots, N \tag{2}$$

The above N equations can be written compactly as

$$H\beta = T \tag{3}$$

where

$$H(w_1, \ldots, w_{\tilde N},\, b_1, \ldots, b_{\tilde N},\, x_1, \ldots, x_N) =
\begin{bmatrix}
g(w_1 \cdot x_1 + b_1) & \cdots & g(w_{\tilde N} \cdot x_1 + b_{\tilde N}) \\
\vdots & \ddots & \vdots \\
g(w_1 \cdot x_N + b_1) & \cdots & g(w_{\tilde N} \cdot x_N + b_{\tilde N})
\end{bmatrix}_{N \times \tilde N} \tag{4}$$

$$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_{\tilde N}^T \end{bmatrix}_{\tilde N \times m}
\qquad \text{and} \qquad
T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m} \tag{5}$$

H is called the hidden layer output matrix of the neural network [21], [22].
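As a concrete illustration of equation (4), the following sketch forms H for a batch of inputs. The sigmoid activation is an assumption for illustration; the paper does not fix a particular g(·):

```python
import numpy as np

def hidden_layer_output(X, W, b):
    """H of equation (4): H[j, i] = g(w_i . x_j + b_i).

    X: (N, n) input samples; W: (N_tilde, n) input weights;
    b: (N_tilde,) hidden biases. A sigmoid g(.) is assumed here.
    """
    Z = X @ W.T + b                  # Z[j, i] = w_i . x_j + b_i
    return 1.0 / (1.0 + np.exp(-Z))  # elementwise activation g(.)
```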
The input weights w_i and the hidden layer biases b_i are selected randomly and in fact need not be tuned. Once arbitrary values have been assigned to these parameters at the beginning of learning, H remains unchanged. In most cases the number of hidden neurons is much smaller than the number of distinct training samples, Ñ ≪ N, so H is a non-square matrix and there may not exist w_i, b_i, β_i such that Hβ = T. Thus, one may need to find specific ŵ_i, b̂_i, β̂_i (i = 1, …, Ñ) such that

$$\bigl\| H(\hat w_1, \ldots, \hat w_{\tilde N}, \hat b_1, \ldots, \hat b_{\tilde N})\hat\beta - T \bigr\|
= \min_{w_i, b_i, \beta} \bigl\| H(w_1, \ldots, w_{\tilde N}, b_1, \ldots, b_{\tilde N})\beta - T \bigr\| \tag{6}$$

This is equivalent to minimizing the cost function of the traditional gradient-based learning algorithms employed in back-propagation learning:

$$E = \sum_{j=1}^{N} \Bigl\| \sum_{i=1}^{\tilde N} \beta_i\, g(w_i \cdot x_j + b_i) - t_j \Bigr\|^2 \tag{7}$$

For fixed, randomly chosen input weights w_i and hidden layer biases b_i, training an SLFN is simply equivalent to evaluating a least-squares solution β̂ of the linear system Hβ = T:

$$\bigl\| H(w_1, \ldots, w_{\tilde N}, b_1, \ldots, b_{\tilde N})\hat\beta - T \bigr\|
= \min_{\beta} \bigl\| H(w_1, \ldots, w_{\tilde N}, b_1, \ldots, b_{\tilde N})\beta - T \bigr\| \tag{8}$$

The smallest-norm least-squares solution of the above linear system is

$$\hat\beta = H^{\dagger} T \tag{9}$$

where H† is the Moore-Penrose generalized inverse of the matrix H.

IV. NUMERICAL RESULTS AND DISCUSSIONS

TABLE I gives the average MAE over five years of applying the persistence and ELM models to the Amaranth wind farm for different forecast horizons. Fig. 1 illustrates the performance comparison of the two forecasting models with the MAE measure as a function of look-ahead time. Fig. 1 reveals that the ELM forecasting model outperforms the persistence model from the second look-ahead hour onwards. Moreover, the MAE of the persistence model reaches 39.021 for the 12-hour-ahead forecast, whereas the MAE of the ELM model reaches only 25.29, which asserts a significant improvement. Furthermore, the proposed model is tested on two more wind farms of Ontario, Canada; TABLE II and TABLE III give the numerical results obtained for the Port Alma and Kingsbridge wind farms, respectively.
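Pulling equations (1)-(9) together with the MAE metric reported above, the following is a compact end-to-end sketch of ELM training and prediction. The sigmoid activation, the uniform random initialization, and the synthetic data are illustrative assumptions, not details from the paper; NumPy's `np.linalg.pinv` supplies the Moore-Penrose inverse of equation (9):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """ELM training: random (w_i, b_i), then beta_hat = H_pinv @ T (eq. 9)."""
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_features))  # input weights w_i
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # hidden biases b_i
    H = sigmoid(X @ W.T + b)                                 # eq. (4)
    beta = np.linalg.pinv(H) @ T                             # eq. (9)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate f(x_j) = sum_i beta_i g(w_i . x_j + b_i), eq. (1)."""
    return sigmoid(X @ W.T + b) @ beta

# Illustrative usage with synthetic data (not the Ontario wind data):
X = np.random.rand(200, 4)                 # e.g., lagged wind power inputs
T = np.sin(X.sum(axis=1, keepdims=True))   # stand-in forecasting target
W, b, beta = elm_train(X, T, n_hidden=30)
mae = np.mean(np.abs(elm_predict(X, W, b, beta) - T))  # MAE, as in Section IV
print(f"training MAE: {mae:.4f}")
```

Because the only fitted quantity is the linear output layer, training reduces to one matrix pseudoinverse, which is the source of ELM's speed advantage over iterative back-propagation noted in the introduction.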