
Prediction of Solar Irradiance and Photovoltaic Solar Energy Product Based on Cloud Coverage Estimation Using Machine Learning Methods

Seongha Park 1,*, Yongho Kim 1, Nicola J. Ferrier 1,2, Scott M. Collis 2,3, Rajesh Sankaran 1,2 and Pete H. Beckman 1,2

1 Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL 60439, USA; [email protected] (Y.K.); [email protected] (N.J.F.); [email protected] (R.S.); [email protected] (P.H.B.)
2 Northwestern Argonne Institute of Science and Engineering, Northwestern University, Evanston, IL 60208, USA; [email protected]
3 Environmental Science Division, Argonne National Laboratory, Lemont, IL 60439, USA
* Correspondence: [email protected]

Abstract: Cloud cover estimation from images taken by sky-facing cameras can be an important input for analyzing current weather conditions and estimating photovoltaic power generation. The constant change in position, shape, and density of clouds, however, makes the development of a robust computational method for cloud cover estimation challenging. Accurately determining the edge of clouds, and hence the separation between clouds and clear sky, is difficult and often impossible. Toward determining cloud cover for estimating photovoltaic output, we propose using machine learning methods for cloud segmentation. We compare several methods, including a classical regression model, deep learning methods, and boosting methods that combine results from the other machine learning models. To train each of the machine learning models with various sky conditions, we supplemented the existing Singapore whole sky imaging segmentation database with hazy and overcast images collected by a camera-equipped Waggle sensor node. We found that the U-Net architecture, one of the deep neural networks we utilized, segmented cloud pixels most accurately. However, the accuracy of segmenting cloud pixels did not guarantee high accuracy of estimating solar irradiance. We confirmed that the cloud cover ratio is directly related to solar irradiance. Additionally, we confirmed that solar irradiance and solar power output are closely related; hence, by predicting solar irradiance, we can estimate solar power output. This study demonstrates that sky-facing cameras with machine learning methods can be used to estimate solar power output. This ground-based approach provides an inexpensive way to understand solar irradiance and estimate production from photovoltaic solar facilities.

Keywords: cloud cover estimation; solar irradiance estimation; solar power product estimation; deep learning; machine learning; ensemble

Citation: Park, S.; Kim, Y.; Ferrier, N.J.; Collis, S.M.; Sankaran, R.; Beckman, P.H. Prediction of Solar Irradiance and Photovoltaic Solar Energy Product Based on Cloud Coverage Estimation Using Machine Learning Methods. Atmosphere 2021, 12, 395. https://doi.org/10.3390/atmos12030395

Academic Editor: Valentine Anantharaj
Received: 2 February 2021; Accepted: 12 March 2021; Published: 18 March 2021

1. Introduction

Clouds have been widely studied in a variety of fields. The shape and distribution of clouds are important for modeling climate and weather, understanding interactions between aerosols and clouds, and developing environmental forecasting models including radiation and cloud properties [1,2].
Detecting and understanding cloud cover have also been investigated for estimating and forecasting solar irradiance and predicting photovoltaic power generation [3]. Across all these problem domains, the magnitude of cloud coverage is important, along with factors such as wind direction, wind speed, and temperature. In this work, we focus primarily on the impact of cloud cover on photovoltaic power generation.

Images from Earth-observing satellites have been used predominantly to analyze cloud types and solar irradiance over a large area. In order to improve the estimation accuracy, various cloud detection and segmentation methods have been developed. For example, multispectral imagers have been added to Earth-observing satellites for surface imaging across multiple wavelengths, and the intensity of reflection at each wavelength is used to distinguish different cloud types [4,5]. Temperature distribution from satellite images is also used to map cloud-covered areas [6]. Wong et al. [6] numerically calculated solar irradiance from cloud probability, using cloud height information from light detection and ranging (LiDAR) sensors together with meteorological information such as wind direction, wind speed, and temperature. Satellites have been utilized primarily to detect multilevel clouds over large areas, with subregioning of clouds [7] and superpixel methods [8] used to improve the accuracy of cloud detection. For hyperlocal analysis of clouds, however, the use of satellite images is unrealistic, expensive, and delayed. Instead, images from sky-facing cameras on the ground are better suited for estimation and prediction of solar irradiance and weather conditions at a neighborhood scale [9–11]. Recently, edge-computing devices along with sensors have been considered for solar irradiance forecasting at time scales of hours to a day in a local area [12]. Using a high-dynamic-range sky imager and a feature-based cloud advection model (computed by using image analysis software), Richardson et al. [12] extracted red and blue features from the sky images and used those features to segment cloud regions. Other methods used to segment cloud regions include the red-to-blue ratio (RBR) [13], clearness index (CI) [3], sky cover indices [6,12,14], and opacity [15]. Additionally, the segmented cloud regions have been grouped by using superpixels to estimate solar irradiance [13].

Methods adopting red-to-blue differences and their ratio work well when the boundaries of clouds can be easily identified, for example, on a sunny day with some thick white clouds. However, the approach does not yield good results for thin clouds or overcast conditions. Another approach is to utilize various combinations of color channels, for example, including green intensity in addition to RBR to distinguish cloud pixels and classify clouds into seven cloud classes [16]. Explorations in this area have also extended the color components by including the normalized RBR and alternative representations of the RGB color model, such as hue, saturation, and lightness (HSL); hue, saturation, and value (HSV); and the CIELAB color space [13,17–19].
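To make the color-based approach above concrete, the following sketch thresholds the red-to-blue ratio of each pixel of a sky image to produce a binary cloud mask. It is a minimal illustration rather than the method of any cited work: the function names and the 0.75 threshold are assumptions, and practical systems tune the threshold per camera and sky condition.

```python
import numpy as np

def rbr_cloud_mask(rgb_image: np.ndarray, threshold: float = 0.75) -> np.ndarray:
    """Segment cloud pixels with a simple red-to-blue ratio (RBR) threshold.

    rgb_image : H x W x 3 array of RGB values (uint8 or float).
    threshold : illustrative RBR cutoff (an assumption, not a published value);
                pixels with R/B above it are labeled cloud.
    Returns a boolean H x W mask (True = cloud).
    """
    img = rgb_image.astype(np.float64)
    red, blue = img[..., 0], img[..., 2]
    # Guard against division by zero in very dark pixels.
    rbr = red / np.maximum(blue, 1e-6)
    return rbr > threshold
```

Bright, thick clouds scatter red light strongly relative to the deep blue of clear sky, so their RBR is high; thin clouds and overcast skies push the ratio toward intermediate values, which is exactly the failure mode noted above and one motivation for the learned segmentation methods examined in this work.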
In generating the Singapore Whole Sky Imaging Segmentation (SWIMSEG) dataset that we utilized for training, a total of 16 color channels and their combinations were considered in order to select the most appropriate color channels for segmenting clear-sky and cloud regions [18]. Machine learning techniques have also been utilized for cloud pixel segmentation. Regression models with diverse color components have been adopted in [16–20], and various combinations of color components have been shown to improve the accuracy of cloud cover estimation. However, overcoming mis-segmentation under overcast conditions or for thin clouds remains a continuing challenge.

In addition to color-based segmentation, cloud detection and segmentation based on neural network methods have been actively studied. Convolutional neural networks (CNNs) are useful for classifying cloud type [21] and for differentiating cloud pixels from ground pixels, such as vegetation, snow, and urban spaces [22,23], in satellite images. In one other effort, images annotated with the keyword "sky" were collected from the photo-sharing site Flickr [24] and were used to segment cloud pixels [25]. These snapshots also contained non-sky components such as buildings and other topographical features, a consequence of the original subject of interest of the photographs being a person or object at eye level. Onishi et al. [25] created additional training images to overcome the shortage of data by overlaying cloud graphics from a numerical weather simulation on the sky segments of the Flickr images. Deep neural networks thus have clear potential for segmenting clouds to estimate cloud coverage; however, utilizing deep learning methods for cloud segmentation using ground-based images remains largely underexplored.

In the work presented here, we further examine machine learning methods for cloud cover, solar irradiance, and solar power estimation. We utilize three deep neural networks for estimating cloud cover: a fully convolutional network (FCN) [26], a U-shaped network (U-Net) [27], and DeepLab v3 [28]. In addition to these three deep neural networks, we utilize color-based segmentation and a boosting method, and we compare the cloud segmentation performance of these methods. The cloud cover estimation results are then used to predict solar irradiance, and the correlations between cloud cover, solar irradiance, and solar power output are analyzed.
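As a minimal sketch of the deep-learning route, the PyTorch code below defines a small U-Net-style encoder-decoder for per-pixel cloud segmentation and a helper that converts the predicted mask into a cloud cover ratio. The channel widths, depth, names (TinyUNet, predict_cloud_cover), and the 0.5 decision threshold are illustrative assumptions, far smaller than the FCN, U-Net, and DeepLab v3 models evaluated in this paper.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Small U-Net-style encoder-decoder for binary (cloud vs. sky) segmentation."""

    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)   # 32 skip channels + 32 upsampled
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # per-pixel cloud logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # logits, shape N x 1 x H x W

def predict_cloud_cover(model: nn.Module, image: torch.Tensor) -> float:
    """Cloud cover ratio (0-1) from one RGB image tensor of shape 3 x H x W.

    Assumes H and W are multiples of 4 so pooling and upsampling align.
    """
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))
        mask = torch.sigmoid(logits) > 0.5   # illustrative decision threshold
    return mask.float().mean().item()
```

Trained with a per-pixel binary cross-entropy loss (e.g., nn.BCEWithLogitsLoss) against annotated masks such as those in SWIMSEG, a model of this shape yields a cloud cover ratio that can then serve as the input for the solar irradiance and solar power estimates discussed in the following sections.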