Subcontractor: Dr. Alex Cronin, University of Arizona
NREL Contract 99043, “Study Degradation Rates of Photovoltaic (PV) Modules Deployed in Arizona”
NREL Technical Monitor: Dr. Sarah Kurtz
Description: Fourth Monthly Report
Authors: Steve Pulver and Alex Cronin
Date: September 21, 2009

Our next conference call is scheduled for September 24 at 3 pm Colorado time. As before, the NREL group will call Alex Cronin’s office number: (520) 626-3348.

Our goal this month was to study the uncertainty in the reported degradation rates. We have done this by comparing several methods of analysis. One finding (which we expected) is that using several PV systems as a reference is better than using only one. This finding is now supported by statistical error bars on the best-fit degradation rates. Additional evidence to support this finding comes from scatter plots comparing several methods.

We have also done a PTC regression analysis for each of 17 systems using TEP PV data and AZMET weather data. For this report we re-numbered the systems with an “analysis #” to make batch analysis easier. (The list of “system numbers” has gaps; not every integer is represented.)

Table I relates the analysis number used in this report to the system number in previous reports. The table also shows additional systems (1, 7, and 16), a result of additional data we received from Tucson Electric Power in September.

Table I. Analysis numbers, system numbers, module types, and data file names.

analysis #   System number   Module Type        Data File Name
0            1               Sharp NE-Q5E2U     SOLSHARP_AUROR
1            2               Kyocera KC150G-A   SOLKYOC_TR
2            3               BP3150U            SOLAR BP 150_TR
3            4               Uni-Solar 64W      SOLAR UNISOLAR_TR
4            5               Sanyo 167W         SOLAR SANYO167_SB
5            7               ASE 300-DGF/17     SOLAR-TR_ASE
6            8               BP SX140S          SOLAR BP 140_TR
7            9               ASE 300-DGF/50     SOLAR-SB-ASE
8            10              GSE GG-112         SOLAR GSE_TR
9            11              Shell ST40         SOL_SHELL_40
10           12              Sanyo HIP-J54BA2   SOLAR SANYO180_SB
11           14              BP MST50           SOLMST_BEACON
12           15              Shell SQ150-PC     SOLSHELL150_TR
13           NA              Front ASE          SOLAR_OH3
14           NA              Back ASE           SOLAR_OH4
15           16              Astro Api-MCB      SOLASTRO_TR
16           17              Solarex MST-43     SOLMST_SOLECTRIA

Fig. 1 shows degradation rates calculated with a variety of methods. These results were calculated from the ‘Performance Ratio’ (PR) data, where the output of each system is divided by its sticker rating, resulting in units of [kWh/kW], or more simply [h]. These data were then integrated to get a daily PR value in [kWh/kW] per day, which we denote by the variable E below. Days missing more than an hour of data were excluded from the calculations.
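As a minimal sketch of this daily integration (assuming the hourly PR samples sit in a pandas DataFrame with a datetime index and a hypothetical column name 'pr'; neither name is from the report):

```python
import pandas as pd

def daily_pr(hourly: pd.DataFrame) -> pd.Series:
    """Integrate hourly PR samples [kWh/kW per hour] into a daily value E
    [kWh/kW per day], dropping days missing more than one hour of data."""
    counts = hourly['pr'].resample('D').count()  # hours recorded each day
    totals = hourly['pr'].resample('D').sum()    # daily integral of PR
    return totals[counts >= 23]                  # keep days with >= 23 of 24 hours
```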

Each method used to generate a best-fit degradation rate, k, relies on the arbitrary assumption that the uncertainty in the daily PR data is σ_i = 1 (kWh/kW per day). [This sounds like a high value to Alex, but some assumption was necessary to begin to define chi-squared.] This allows us to compare the different methods relative to each other. A more accurate method of assessing the uncertainties in the raw data will be needed to get the correct uncertainty in k.

[Figure 1]

Figure 1. The vertical axis shows k, the rate of change; for reference, k = -0.05 means a 5% decrease per year. The horizontal axis shows the analysis number, in the order defined in Table I.

The main conclusion from Figure 1 is that the ‘RawDegradation’ method defined below produces large scatter and large uncertainty, whereas the ‘sinefit’ and ‘DegradationWRTAvg’ methods are better (i.e., they report smaller uncertainties). These methods are defined below. The legend of Fig. 1 shows the variable names used when solving for the degradation rate; each method is listed here with its definition of χ².

Method I. ‘RawDegradation’: linear fit to the daily PR value.

    \chi^2 = \sum_{i=1}^{N} \frac{\left(a + k\,t(i) - E(i)\right)^2}{\sigma_i^2}

Fit parameters: a and k. The k parameter is shown in Figure 1.
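A minimal sketch of this fit (numpy only; the names t_years, E, and fit_method1 are illustrative, and sigma is the assumed 1 kWh/kW per day from above). With a uniform sigma, minimizing χ² is ordinary least squares, and the error bar on k follows from the parameter covariance:

```python
import numpy as np

def fit_method1(t_years, E, sigma=1.0):
    """Method I sketch: chi-squared linear fit E ~ a + k*t, with the report's
    assumed uniform uncertainty sigma on each daily PR value."""
    A = np.column_stack([np.ones_like(t_years), t_years])  # design matrix
    (a, k), *_ = np.linalg.lstsq(A, E, rcond=None)         # minimizes chi^2
    cov = sigma**2 * np.linalg.inv(A.T @ A)                # parameter covariance
    return a, k, np.sqrt(cov[1, 1])                        # k and its 1-sigma error
```

With sigma fixed at 1, the absolute size of the returned error bar is only meaningful for comparing methods, as noted above.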

Method II. ‘DegradationWRT7’: linear fit to the daily PR value relative to system 7. This system was selected due to its relatively good fit in the first method.

    \chi^2 = \sum_{i=1}^{N} \frac{\left(a + k\,t(i) - E(i)/E_7(i)\right)^2}{\sigma_i^2}

Fit parameters: a and k. (We assume E and E_7 are of comparable size when assigning σ_i.)

Method III. ‘DegradationWRTAvg’: linear fit to the daily PR value relative to the average of all the systems.

    \chi^2 = \sum_{i=1}^{N} \frac{\left(a + k\,t(i) - E(i)/\mathrm{Avg}(E(i))\right)^2}{\sigma_i^2}

Fit parameters: a and k. Here Avg(E(i)) is the average over only the systems with nonzero E(i).
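A sketch of building the Method III time series, assuming a hypothetical (n_systems × n_days) array E_all of daily PR values (Method II is analogous, with E_7 alone in the denominator). The resulting series can then be fit with the same linear routine as Method I:

```python
import numpy as np

def relative_pr(E_all, j):
    """Method III sketch: daily PR of system j divided by the average over
    all systems, excluding systems reporting zero that day (per the report)."""
    masked = np.where(E_all > 0, E_all, np.nan)  # drop zero-output systems
    avg = np.nanmean(masked, axis=0)             # Avg(E(i)) over systems
    return E_all[j] / avg                        # series to fit with a + k*t
```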

Method IV. ‘sinefit’: linear plus sine fit to the daily PR value relative to the average of all the systems.

    \chi^2 = \sum_{i=1}^{N} \frac{\left(a + k\,t(i) + b\cos(2\pi t(i)/(1\,\mathrm{yr}) + \phi) - E(i)/\mathrm{Avg}(E(i))\right)^2}{\sigma_i^2}

Fit parameters: a, k, b, and φ.
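A sketch of the Method IV fit using scipy's curve_fit; the starting values p0 below are illustrative guesses, not values from the report:

```python
import numpy as np
from scipy.optimize import curve_fit

def model_iv(t, a, k, b, phi):
    """Linear trend plus an annual cosine term; t is in years, so the
    one-year period reduces the argument to 2*pi*t."""
    return a + k * t + b * np.cos(2 * np.pi * t + phi)

# Hypothetical usage, where r = E(i)/Avg(E(i)) is the relative daily PR:
# popt, pcov = curve_fit(model_iv, t_years, r, p0=[1.0, -0.01, 0.05, 0.0])
# k, k_err = popt[1], np.sqrt(pcov[1, 1])
```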

Method V. ‘DegradationWRT1.5%Avg’: linear plus sine fit to the daily PR value relative to the average of the systems that had uncertainty less than 0.015 in Method I. This was a total of twelve systems, as seen in Fig. 3.

    \chi^2 = \sum_{i=1}^{N} \frac{\left(a + k\,t(i) + b\cos(2\pi t(i)/(1\,\mathrm{yr}) + \phi) - E(i)/\mathrm{Avg}_{1.5}(E(i))\right)^2}{\sigma_i^2}

Fit parameters: a, k, b, and φ.

[Figure 2]

Figure 2. Correlations between the results from different methods. Methods I and II are less consistent than Methods III, IV, and V.

[Figure 3: bar chart of the uncertainty in k from Method I for each system, ordered from smallest to largest (analysis #s 13, 14, 7, 5, 1, 3, 10, 2, 9, 6, 8, 4, 11, 12, 15, 16, 0); vertical scale 0.0000 to 0.0450]

Figure 3. Uncertainty in k from Method I, for each system. This was useful to determine a cutoff for which systems to ignore in Method V.
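Since Figure 3 orders the systems by their Method I uncertainty, the Method V cutoff can be checked directly against the Method I column of Table 2 below; this snippet reproduces the count of twelve systems:

```python
# Method I uncertainties in k, copied from the Method I column of Table 2,
# indexed by analysis number 0-16.
k_err1 = [0.0406, 0.0128, 0.0134, 0.0128, 0.0145, 0.0119, 0.0137, 0.0099,
          0.0142, 0.0135, 0.0133, 0.0178, 0.0190, 0.0093, 0.0095, 0.0304,
          0.0329]
selected = [j for j, err in enumerate(k_err1) if err < 0.015]
print(len(selected), selected)  # 12 systems: 1-10 plus 13 and 14
```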

The average uncertainty for each method is shown in Table 2. We see that using no reference denominator (Method I), or a single system as the denominator (Method II), results in a higher uncertainty in the degradation rate. Using any of Methods III, IV, or V reduces the uncertainty by a factor of 2 relative to Method I.

Table 2. Uncertainty in k for each system and method.

System#   Method I   Method II   Method III   Method IV   Method V
0         0.0406     0.0246      0.0178       0.0184      0.0186
1         0.0128     0.0080      0.0063       0.0064      0.0064
2         0.0134     0.0081      0.0064       0.0064      0.0065
3         0.0128     0.0086      0.0070       0.0071      0.0071
4         0.0145     0.0086      0.0068       0.0069      0.0070
5         0.0119     0.0075      0.0059       0.0060      0.0061
6         0.0137     0.0079      0.0064       0.0064      0.0065
7         0.0099     0.0074      0.0070       0.0071      0.0072
8         0.0142     0.0075      0.0059       0.0060      0.0060
9         0.0135     0.0080      0.0064       0.0065      0.0065
10        0.0133     0.0086      0.0069       0.0070      0.0070
11        0.0178     0.0092      0.0074       0.0078      0.0080
12        0.0190     0.0113      0.0093       0.0097      0.0099
13        0.0093     0.0084      0.0068       0.0068      0.0069
14        0.0095     0.0094      0.0071       0.0071      0.0072
15        0.0304     0.0174      0.0132       0.0135      0.0137
16        0.0329     0.0047      0.0140       0.0143      0.0145
Average   0.0170     0.0097      0.0083       0.0084      0.0085

For additional clarity, Fig. 4 reproduces the data in Fig. 1 with Methods I and II removed.

[Figure 4]

Figure 4. Degradation rates, k, for different systems as determined from the three best methods (Methods III, IV, and V).

Note: the overall error bars do get smaller if we revise the initial assumption about the measurement uncertainties in the raw data.

Fig. 5 shows degradation rates from the PTC regression. The hourly AZMET meteorological data were linearly interpolated. The irradiance that a latitude-tilt detector would measure was estimated from the AZMET horizontal-detector data. The regression parameters were calculated for each month, using only hours when the estimated irradiance was above a specified cutoff. (The legend in Fig. 5 is incorrect: the green cutoff was 500 W/m2 and the red cutoff was 200 W/m2.) Months with fewer than 20 data points were not included in the final calculation of degradation rates.
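A sketch of this monthly regression, assuming a hypothetical hourly DataFrame df with columns 'P' (measured PR), 'E' (estimated latitude-tilt irradiance, W/m2), 'W' (wind speed), and 'T' (temperature), already linearly interpolated onto a common datetime index (e.g. with df.interpolate(method='time')); the column names and function name are not from the report:

```python
import numpy as np
import pandas as pd

def monthly_ptc_fits(df, cutoff=500.0, min_points=20):
    """Fit P ~ a*E + b*E^2 + c*W + d*T separately for each calendar month,
    using only hours with estimated irradiance above the cutoff, and
    skipping months with fewer than 20 qualifying points (per the report)."""
    fits = {}
    bright = df[df['E'] > cutoff]
    for month, g in bright.groupby(pd.Grouper(freq='M')):
        if len(g) < min_points:
            continue
        X = np.column_stack([g['E'], g['E'] ** 2, g['W'], g['T']])
        coeffs, *_ = np.linalg.lstsq(X, g['P'].to_numpy(), rcond=None)
        fits[month] = coeffs  # (a, b, c, d) for that month
    return fits
```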

[Figure 5]

The parameters calculated for system 10 during a particular month, using the 500 W/m2 cutoff, are used to show the predicted PR over several days in Fig. 6.

    \mathrm{Predicted} = \begin{cases} aE + bE^2 + cW + dT, & E > 200 \\ 0, & E < 200 \end{cases}
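A direct transcription of this piecewise prediction (same hypothetical names as in the sketch above; the 200 W/m2 cutoff is the one quoted for the red curve):

```python
import numpy as np

def predicted_pr(E, W, T, a, b, c, d, cutoff=200.0):
    """Piecewise PTC prediction: a*E + b*E**2 + c*W + d*T when E > cutoff,
    and 0 otherwise, as in the equation above."""
    return np.where(E > cutoff, a * E + b * E ** 2 + c * W + d * T, 0.0)
```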

The predicted values don’t seem to completely capture the spikes seen in the actual values. This is probably more a result of having only hourly data than of the distance to the AZMET station or of having only horizontal-plane irradiance data. Also, the predicted value drops below zero at one point, meaning E was greater than 200 there, which suggests that a time lag in the AZMET data might need to be included.

[Figure 6]

Table 3. Degradation rates from Method III and from the PTC regression.

analysis #   k, Method III   k, PTC method   k, Method III (adjusted)   Difference
0             0.0379          0.0246          0.0346                     0.0100
1             0.0123         -0.0072          0.0090                     0.0163
2             0.0112         -0.0024          0.0079                     0.0103
3             0.0328          0.0057          0.0295                     0.0238
4             0.0066         -0.0065          0.0033                     0.0098
5             0.0086          0.0213          0.0053                    -0.0160
6            -0.0061         -0.0146         -0.0094                     0.0052
7            -0.0051         -0.0235         -0.0084                     0.0151
8             0.0052         -0.0087          0.0019                     0.0107
9            -0.0084         -0.0193         -0.0117                     0.0077
10            0.0112         -0.0060          0.0079                     0.0139
11           -0.0005         -0.0016         -0.0038                    -0.0022
12            0.0244          0.0044          0.0211                     0.0167
13            0.0033         -0.0096          0.0000                     0.0096
14            0.0039         -0.0079          0.0006                     0.0085
15            0.0136         -0.0050          0.0103                     0.0154
16            0.0102          0.0004          0.0069                     0.0064

Average of the PTC column: -0.0033. Standard deviation of the Difference column: 0.0087.
(The adjusted column is the Method III rate shifted by the average PTC rate of -0.0033.)

Table 3 compares the results from Method III to the PTC method. The results from Method III, degradation relative to the average, are adjusted by the average degradation of all the systems in the PTC method (-0.0033). Ideally, the final Difference column would be all zeroes. We see that the standard deviation of the difference between our method and the PTC method is 0.0087. This is the same order of magnitude as the errors we predicted for Method III, but the uncertainty in the results from the PTC method may be just as significant.
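The adjustment and the quoted statistics can be reproduced directly from the columns of Table 3:

```python
import numpy as np

# k values copied from Table 3 (analysis numbers 0-16).
k3 = np.array([0.0379, 0.0123, 0.0112, 0.0328, 0.0066, 0.0086, -0.0061,
               -0.0051, 0.0052, -0.0084, 0.0112, -0.0005, 0.0244, 0.0033,
               0.0039, 0.0136, 0.0102])
k_ptc = np.array([0.0246, -0.0072, -0.0024, 0.0057, -0.0065, 0.0213, -0.0146,
                  -0.0235, -0.0087, -0.0193, -0.0060, -0.0016, 0.0044, -0.0096,
                  -0.0079, -0.0050, 0.0004])
adjusted = k3 + k_ptc.mean()       # shift relative rates by the mean PTC rate
diff = adjusted - k_ptc            # the Difference column of Table 3
print(round(k_ptc.mean(), 4))      # -0.0033, the average PTC degradation
print(round(diff.std(ddof=1), 4))  # 0.0087, the quoted standard deviation
```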

[Figure 7: scatter plot of k from the PTC regression (vertical axis, labeled “k from Pulver PTC”) versus k from Pulver Method III (horizontal axis), with points labeled by analysis number]

Figure 7. A scatter plot showing degradation rates determined by PTC regression and Method III.

A few questions for us to discuss:

What uncertainty in [h] should be assumed with the raw data? (this affects the error bars in k that are reported)

How else should we assess the uncertainty in degradation rates?
- We could use sunny days only.
- We could use three years only.
- We could fit the sine term to specific subsets (discrete years) of the data.

Why did the PTC regression report so many positive rates k?
- Perhaps the light sensor at the AZMET station is aging faster than the PV systems?

Did we make the adjustment for PV panel angle correctly when using the AZMET irradiance data for the PTC regression?