
Article

Evaluation of PROBA-V Collection 1: Refined Radiometry, Geometry, and Cloud Screening

Carolien Toté 1,*, Else Swinnen 1, Sindy Sterckx 1, Stefan Adriaensen 1, Iskander Benhadj 1, Marian-Daniel Iordache 1, Luc Bertels 1, Grit Kirches 2, Kerstin Stelzer 2, Wouter Dierckx 1, Lieve Van den Heuvel 1, Dennis Clarijs 1 and Fabrizio Niro 3

1 Flemish Institute for Technological Research (VITO), Remote Sensing Unit, Boeretang 200, B-2400 Mol, Belgium; [email protected] (E.S.); [email protected] (S.S.); [email protected] (S.A.); [email protected] (I.B.); [email protected] (M.-D.I.); [email protected] (L.B.); [email protected] (W.D.); [email protected] (L.V.d.H.); [email protected] (D.C.)
2 Brockmann Consult, Max-Planck-Strasse 2, D-21502 Geesthacht, Germany; [email protected] (G.K.); [email protected] (K.S.)
3 ESA-European Space Research Institute (ESA-ESRIN), Via Galileo Galilei, 00044 Frascati, Italy; [email protected]
* Correspondence: [email protected]; Tel.: +32-14-336844

 Received: 22 August 2018; Accepted: 27 August 2018; Published: 30 August 2018 

Abstract: PROBA-V (PRoject for On-Board Autonomy–Vegetation) was launched in May 2013 as an operational continuation to the vegetation (VGT) instruments on-board the Système Pour l’Observation de la Terre (SPOT)-4 and -5 satellites. The first reprocessing campaign of the PROBA-V archive from Collection 0 (C0) to Collection 1 (C1) aims at harmonizing the time series, thanks to improved radiometric and geometric calibration and cloud detection. The evaluation of PROBA-V C1 focuses on (i) qualitative and quantitative assessment of the new cloud detection scheme; (ii) quantification of the effect of the reprocessing by comparing C1 to C0; and (iii) evaluation of the spatio-temporal stability of the combined SPOT/VGT and PROBA-V archive through comparison to METOP/advanced very high resolution radiometer (AVHRR). The PROBA-V C1 cloud detection algorithm yields an overall accuracy of 89.0%. Clouds are detected with very few omission errors, but there is an overdetection of clouds over bright surfaces. Stepwise updates to the visible and near infrared (VNIR) absolute calibration in C0 and the application of degradation models to the short-wave infrared (SWIR) calibration in C1 result in sudden changes between C0 and C1 Blue, Red, and NIR top-of-canopy (TOC) reflectance in the first year, and more gradual differences for SWIR. Other changes result in some bias between C0 and C1, although the root mean squared difference (RMSD) remains well below 1% for TOC reflectance and below 0.02 for the normalized difference vegetation index (NDVI). Comparison to METOP/AVHRR shows that the recent reprocessing campaigns on SPOT/VGT and PROBA-V have resulted in a more stable combined time series.

Keywords: PROBA-V; reprocessing; data quality; calibration; cloud detection; temporal consistency; surface reflectance; NDVI

1. Introduction

PROBA-V (PRoject for On-Board Autonomy–Vegetation) was launched on 7 May 2013. The main objective of PROBA-V is to be an operational mission that provides continuation to the data acquisitions of the vegetation (VGT) instruments on-board the Système Pour l’Observation de la Terre (SPOT)-4 and -5 satellites, and as such to operate as a “gap filler” between the decommissioning of the VGT instrument on SPOT-5 and the start of operations of the Sentinel-3 constellation [1]. SPOT/VGT data

are widely used to monitor environmental change and the evolution of vegetation cover in different thematic domains [2], hence the relevance to the continuity of the service. On board PROBA-V, the optical instrument provides data at 1 km, 300 m, and 100 m resolution, consistent with the 1 km resolution data products of SPOT/VGT [3]. To support the existing SPOT/VGT user community, the PROBA-V mission continues provision of projected segments (Level 2A, similar to the SPOT/VGT P-products), daily top-of-canopy (TOC) synthesis (S1-TOC), and 10-day synthesis (S10-TOC) products. In addition, top-of-atmosphere (TOA) daily synthesis (S1-TOA) products and radiometrically/geometrically corrected data (Level 1C) products in raw resolution (up to 100 m) are provided for scientific use [4,5]. Since PROBA-V has no onboard propellant, the overpass time (10:45 h at launch) will decrease as a result of increasing atmospheric drag [6].

In previous years, vegetation monitoring applications were built on PROBA-V data, e.g., on cropland mapping [7], crop identification [8], estimation of biophysical parameters [9,10], and crop yield forecasting [11]. A method was also developed to assimilate PROBA-V 100 m and 300 m data [12]. For the Copernicus Global Land Service, PROBA-V is one of the prime sensors for operational vegetation products [13,14]. Access to and exploitation of the SPOT/VGT and PROBA-V data is facilitated through the operational Mission Exploitation Platform (MEP) [15].

In 2016–2017, the first reprocessing campaign of the PROBA-V archive (Collection 0, C0) was performed, aiming at improving the time series and harmonizing its content, thanks to improved radiometric and geometric calibration and cloud detection. The resulting archive is the PROBA-V Collection 1 (C1). This paper discusses the changes in and the evaluation of the PROBA-V C1.

(Remote Sens. 2018, 10, 1375; doi:10.3390/rs10091375)
We first describe the modifications in the PROBA-V processing chain (Section 2), and the materials and methods used (Section 3). Then we evaluate PROBA-V C1 (Section 4), focusing on three aspects: first, the new cloud detection scheme is qualitatively and quantitatively evaluated (Section 4.1); secondly, C1 is compared to C0 in order to quantify the effect of the changes applied in the reprocessing (Section 4.2); finally, the combined archive of SPOT/VGT and PROBA-V is compared with an external time series derived from METOP/advanced very high resolution radiometer (AVHRR) in order to evaluate the temporal stability of the combined archive (Section 4.3).

2. Description of PROBA-V C1 Modifications

2.1. Radiometric Calibration

The main updates to the instrument radiometric calibration coefficients of the sensor radiometric model are summarized in this section. Due to the absence of on-board calibration devices, the radiometric calibration and stability monitoring of the PROBA-V instrument rely solely on vicarious calibration. The OSCAR (Optical Sensor CAlibration with simulated Radiance) Calibration/Validation facility [16], developed for the PROBA-V mission, is based on a range of vicarious methods such as lunar calibration, calibration over stable desert sites, deep convective clouds (DCC), and Rayleigh scattering. In [17], Sterckx et al. describe the various vicarious calibration methods, the radiometric sensor model, and the radiometric calibration activities performed during both the commissioning and the operational phase, including cross-calibration against SPOT/VGT2 and Envisat/MERIS. Only the updates to the absolute calibration coefficients (A) and equalization coefficients (g) introduced in C1 are discussed here.

2.1.1. Inter-Camera Adjustments of VNIR Absolute Calibration Coefficients

The PROBA-V instrument is composed of three separate cameras, with an overlap of about 75 visible and near infrared (VNIR) pixels between the left and the center camera and between the center and right cameras, respectively. For the camera-to-camera bias assessment, the overlap region between two adjacent cameras was exploited [17]. In order to improve the consistency between the three PROBA-V cameras, adjustments to the absolute calibration coefficients of the VNIR strips were implemented within the C0 near-real time (NRT) processing chain to correct for a band-dependent camera-to-camera bias (Figure 1). More specifically, on 26 June 2014, a 1.8% reduction in radiance for the blue center strip and a 1.2% increase in radiance for the blue right strip were applied. Furthermore, on 23 September 2014, a reduction of 2.1% to the red center radiance, a 1.1% increase to the blue left radiance, and a 1.3% increase to the near infrared (NIR) left radiance were performed. Finally, on 25 October 2014, the right NIR radiance was increased by 1%. In order to have a consistent adjustment of the camera-to-camera bias along the full mission, it was decided to apply these coefficients from the start of the mission as part of the C1 reprocessing, therefore allowing removal of the step-wise corrections introduced in the NRT processing (see Figure 1).
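As a rough illustration of the difference between the two collections, the adjustments above can be modeled as multiplicative radiance factors that C0 applies step-wise from their NRT implementation dates, while C1 applies them over the whole mission. This is a sketch only; the function name and lookup tables are hypothetical, with the factors taken from the percentages quoted above.

```python
from datetime import date

# Multiplicative radiance adjustments per (band, camera), from the
# percentages quoted in the text (hypothetical helper, not IQC code).
C1_FACTORS = {
    ("BLUE", "CENTER"): 0.982,  # 1.8% radiance reduction
    ("BLUE", "RIGHT"):  1.012,  # 1.2% increase
    ("RED",  "CENTER"): 0.979,  # 2.1% reduction
    ("BLUE", "LEFT"):   1.011,  # 1.1% increase
    ("NIR",  "LEFT"):   1.013,  # 1.3% increase
    ("NIR",  "RIGHT"):  1.010,  # 1.0% increase
}

# Dates on which each adjustment entered the C0 NRT processing chain.
C0_START = {
    ("BLUE", "CENTER"): date(2014, 6, 26),
    ("BLUE", "RIGHT"):  date(2014, 6, 26),
    ("RED",  "CENTER"): date(2014, 9, 23),
    ("BLUE", "LEFT"):   date(2014, 9, 23),
    ("NIR",  "LEFT"):   date(2014, 9, 23),
    ("NIR",  "RIGHT"):  date(2014, 10, 25),
}

def bias_factor(band, camera, acq_date, collection):
    """Inter-camera adjustment applied to the TOA radiance of a strip."""
    key = (band, camera)
    if key not in C1_FACTORS:
        return 1.0
    if collection == "C1":
        # C1: applied consistently from the start of the mission
        return C1_FACTORS[key]
    # C0: step-wise, only from the NRT implementation date onwards
    return C1_FACTORS[key] if acq_date >= C0_START[key] else 1.0
```

For an acquisition in early 2014, C0 and C1 therefore differ by up to about 2%, which is what Figure 1 plots as the percentage difference in TOA radiance between C1 and C0.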

Figure 1. Percentage difference in the TOA radiance between C1 and C0 as a function of the acquisition date.

2.1.2. Degradation Model for SWIR Absolute Calibration Coefficients

Calibration over the Libya-4 desert site is used by the Image Quality Center (IQC) of PROBA-V to monitor the stability of the spectral bands and cameras of the instrument [17,18]. The approach relies on comparing the cloud-free TOA reflectance as measured by PROBA-V with modeled TOA reflectance values calculated following Govaerts et al. [19]. The long-term monitoring of the ratio
(model/measurement) over these spectrally and radiometrically-stable desert sites allows for the estimation of the detector responsivity changes with time. For the VNIR strips, the detector response degradation is not significant during the first three years of the mission and well within the accuracy of the approach [17,18]. In contrast, for the short-wave infrared (SWIR) detectors a more significant degradation is observed: between −0.9% and −1.5% per year. PROBA-V has nine SWIR strips: each PROBA-V camera has a SWIR detector consisting of three linear detectors or strips of 1024 pixels, referred to as SWIR1, SWIR2, and SWIR3. In C0, the SWIR detector degradation was corrected using a step-wise adjustment of the SWIR absolute calibration coefficients (A_SWIR). In C1, this is replaced by a degradation model: a linear trending model is fitted to the OSCAR desert vicarious calibration results obtained for the nine different SWIR strips (Figure 2).
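The C1 approach can be sketched as a straight-line fit to the time series of desert-calibration ratios. The data below are synthetic, chosen to mimic the reported responsivity loss of roughly 1% per year, and all variable names are illustrative; only the idea of replacing step-wise coefficient updates with a fitted linear trend comes from the text.

```python
import numpy as np

# Synthetic desert-calibration ratios mimicking a responsivity change of
# about -1.2% per year (the text reports -0.9% to -1.5% for SWIR).
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # years since launch
ratios = 1.0 - 0.012 * years + np.random.default_rng(0).normal(0.0, 1e-3, 6)

# C1 degradation model: a linear trend fitted to the OSCAR results
slope, intercept = np.polyfit(years, ratios, deg=1)

def a_swir(t_years, a_initial=1.0):
    """Time-dependent absolute calibration coefficient for one strip."""
    return a_initial * (intercept + slope * t_years)

# Ratio of the actual to the initial coefficient (plotted as dA in Figure 2)
delta_a = a_swir(2.0) / a_swir(0.0)
```

A smooth model of this kind avoids the discontinuities that the C0 step-wise coefficient updates introduced into the time series.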

Figure 2. Temporal evolution of changes to the A_SWIR for the nine SWIR strips (three per PROBA-V camera) for C0 (red) and C1 (green), for the left, center, and right PROBA-V camera. ΔA is the ratio of the actual absolute calibration coefficient to the initial (at the start of the mission) absolute calibration coefficient.

2.1.3. Improvement of Multi-Angular Calibration of the Center Camera SWIR Strips

The distribution of the cameras with butted SWIR detectors and the lack of on-board diffusers for flat-fielding purposes makes in-flight correction for non-uniformities in the field-of-view of the instrument a challenging issue. In order to better characterize and correct for non-uniformities, PROBA-V performed a 90° yaw maneuver over the Niger-1 desert site on 11 April 2016. In this 90° yaw configuration, the detector array of the center camera runs parallel to the motion direction and a given area on the ground is subsequently viewed by the different pixels of the same strip (Figure 3). Improved low-frequency multi-angular calibration coefficients for the SWIR strips of the center camera have been derived from this yaw maneuver data. Figure 4 shows the changes to the equalization coefficients for the three SWIR strips of the center camera. The equalization updates were applied to the instrument calibration parameters (ICP) in C0 since June 2016, while for C1 the equalization coefficients of the center SWIR strips are corrected for the whole archive.
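The effect of such an equalization update can be illustrated as follows: TOA reflectance scales inversely with the equalization coefficient, so a C1/C0 coefficient ratio below 1 raises the reflectance (cf. the Figure 4 caption). All values here are invented for illustration, not the actual yaw-derived coefficients.

```python
import numpy as np

# Illustrative equalization update for one center-camera SWIR strip
# (1024 pixels). dg is the C1/C0 coefficient ratio derived from the
# yaw maneuver; the values here are invented, not the actual ones.
g_c0 = np.full(1024, 1.00)              # C0 per-pixel equalization
dg = np.linspace(0.99, 1.01, 1024)      # yaw-derived ratio (cf. Figure 4)
g_c1 = g_c0 * dg                        # updated C1 coefficients

dn = np.full(1024, 500.0)               # raw detector counts (illustrative)
refl_c0 = dn / g_c0
refl_c1 = dn / g_c1                     # where dg < 1, reflectance increases
```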

Figure 3. Quicklook image of the yaw maneuver: diagonal lines represent the same target on ground imaged by all the linear array detectors.

Figure 4. Changes to the g_i,SWIR of the three SWIR strips of the center camera. Δg is the ratio of the equalization coefficients used in C1 as derived from the yaw experiment to the equalization coefficients used in C0: a value lower than 1 results in an increase in the TOA reflectance and vice versa.

2.1.4. Dark Current and Bad Pixels

The dark current correction of PROBA-V C0 data acquired before 2015 was based on dark current acquisitions over oceans during nighttime with a very long integration time of 3 s. This resulted in detector saturation and/or non-linearity effects for some SWIR pixels with a very high dark current.
In C1, the dark current values of these saturated pixels are replaced with the dark current value retrieved from dark current acquisitions performed with a lower integration time.

Furthermore, the ICP files are corrected for a bug found in the code for the final *.xml formatted ICP file generation. Before January 2015, this bug caused the assignment of the dark current to the wrong pixel ID in the C0 ICP files. Finally, in reprocessing mode, dark current values in the ICP files are based on the dark current acquisitions of the applicable month, while in the near-real time processing the dark current values are based on the previous month.

A minor change is made to the date of the assignment of the status ‘bad’ in the ICP files. For the reprocessing, an ICP file is generated at the start of each month. A pixel which became bad during that month is declared ‘bad’ in the C1 ICP file at the start of that month, aligned with the starting date of an ICP update.
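A minimal sketch of the saturated-pixel replacement, assuming dark current scales linearly with integration time; the saturation level, the short integration time, and the array values are assumptions for illustration, not the operational parameters.

```python
import numpy as np

# Pixels whose 3 s ocean-night acquisition saturated get a dark-current
# value rescaled from a short-integration acquisition instead.
SATURATION = 4095.0                 # assumed saturation level (DN)
T_LONG, T_SHORT = 3.0, 0.1          # integration times (s); 0.1 s is assumed

dc_long = np.array([120.0, 4095.0, 300.0, 4095.0])   # 3 s acquisition (DN)
dc_short = np.array([4.0, 95.0, 10.0, 130.0])        # short acquisition (DN)

saturated = dc_long >= SATURATION
# rescale the short-integration dark current to the 3 s integration time
dc_c1 = np.where(saturated, dc_short * (T_LONG / T_SHORT), dc_long)
```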

2.2. Geometric Calibration

As with any operational sensor, PROBA-V exhibits perturbations relative to the ideal sensor model. These perturbations are caused by optical distortions, thermally related focal plane distortions, or exterior orientation inaccuracies. Since the start of the PROBA-V nominal operations, in-orbit geometric calibration is applied in order to ensure a high geometric quality of the end products. The exterior orientation (i.e., boresight misalignment angles) and interior orientation deformations at different conditions (e.g., temperature, time since exit from eclipse) are continuously monitored and estimated, with the objective to update the ICP files when necessary. Geometric calibration is performed by an autonomous in-flight software package, using the Landsat Global Land Survey (GLS) 2010 [20] Ground Control Point (GCP) dataset.

In C0 and C1, there are slight differences in the implementation date of consecutive updates to geometric ICP files. Table 1 gives an overview of the creation date and validity period for geometric ICP file updates and the associated geometric error reduction. The geometric calibration updates result in an average geometric error reduction of 64.5% from January 2014 onwards. In the period before, no error reduction is expected since the data suffer from a random geometric error caused by an issue in the onboard star tracker. In the nominal processing, updated ICP files are applied from the ‘creation’ date onwards. In the reprocessing workflow, updates are applied from the ‘start validity’ date. This difference causes small geometric shifts between C0 and C1 in the period between the ‘start validity’ and ‘creation’ date.

Table 1. Overview of creation date and validity period for updated geometric ICP files for the period November 2013–November 2016. In C0, the validity period runs from ‘creation date’ till ‘end validity’, while in C1 the validity period runs from ‘start validity’ to ‘end validity’. The geometric error reduction is the difference between new and old geometric error, normalized over the old geometric error (in %).

Creation Date      Start Validity     End Validity       Geometric Error Reduction (%)
                                                         Blue    Red     NIR      SWIR
8 September 2016   1 September 2016   –                  −26.9   −33.7   −24.9    −23.9
16 February 2016   8 February 2016    1 September 2016   −80.2   −85.7   −91.8    −77.4
25 January 2016    19 January 2016    8 February 2016    −38.9   −46.0   −35.6    −29.3
2 November 2015    27 October 2015    19 January 2016    −72.3   −67.9   −59.0    −60.1
6 October 2015     3 October 2015     27 October 2015    −34.9   −42.7   −36.7    −34.9
9 July 2015        4 July 2015        3 October 2015     −68.2   −74.1   −54.3    −30.3
6 May 2015         20 April 2015      4 July 2015        −74.7   −84.0   −102.8   −93.7
24 March 2014      12 March 2014      20 April 2015      −87.5   −90.8   −90.2    −92.5
7 January 2014     1 January 2014     12 March 2014      −75.3   −93.5   −110.3   −98.6
9 November 2013    1 November 2013    1 January 2014     N/A     N/A     N/A      N/A
28 November 2013   16 October 2013    1 November 2013    N/A     N/A     N/A      N/A
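The error-reduction figures in Table 1 follow the definition in its caption: the difference between the new and old geometric error, normalized by the old error, in percent. A one-line helper makes the sign convention explicit (the example error values are invented):

```python
def error_reduction_pct(old_error, new_error):
    """Geometric error reduction as defined in the Table 1 caption:
    (new - old) / old, expressed in percent. Negative values mean
    the updated ICP file reduced the geometric error."""
    return (new_error - old_error) / old_error * 100.0

# e.g. an error dropping from 0.40 to 0.10 pixels (invented values)
r = error_reduction_pct(0.40, 0.10)   # about -75
```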

2.3. Cloud Detection

In PROBA-V C0, cloud detection was performed as in the SPOT-Vegetation processing chain [21]: the cloud cover map was derived as a union of two separate cloud masks, derived from the Blue and SWIR bands by using a simple thresholding strategy [4]. One of the main motivations for the C1 reprocessing was to address the cloud detection issues in the C0 dataset as reported by several users: (i) an overall under-detection of clouds, in particular for optically thin clouds, with resulting remaining cloud contamination in the composite products; and (ii) a systematic false detection over bright targets. In order to tackle the observed drawbacks, a completely new cloud detection algorithm was designed for C1, moving from a static threshold technique to an adaptive threshold approach. Similar to the old algorithm, the output of the new cloud masking algorithm in C1 is a binary mask in which every PROBA-V pixel is marked as ‘cloud’ or ‘clear’. The new algorithm introduces major changes in two aspects (Figure 5): (i) the use of ancillary maps as reference data; and (ii) the application of customized tests, according to the land cover status of the pixel. The tests include threshold tests on TOA Blue, TOA SWIR and band ratios, and similarity checks based on the Spectral Angle Distance (SAD) between the pixel spectrum and reference spectra [22].

Figure 5. Flowchart of the cloud detection scheme in PROBA-V C1. A1, A2, and A3 represent auxiliary data. The land cover status in A1 acts as a switch to activate/neglect thresholding and similarity tests. The threshold tests T1 use data from A2. The reference spectra from A3 are used in the similarity tests. Similarity tests S8, S9, S10, and S14 are common to all pixels.

The PROBA-V C1 cloud detection algorithm is illustrated in Figure 5. Three types of auxiliary data (A1–A3 in Figure 5) are used as input: (i) monthly land cover status maps at 300 m resolution derived from the ESA Climate Change Initiative (CCI) Land Cover project [23]; (ii) monthly background surface reflectance climatology built from the MERIS full resolution 10-year archive, complemented with albedo maps from the GlobAlbedo project [24]; and (iii) reference PROBA-V spectra for specific land/cloud cover types.

Firstly, the land cover status maps label each pixel for each month of the year into one of the following classes: ‘land’, ‘water’, ‘snow/ice’, and ‘other’. For each of these classes, a customized set of cloud detection tests was defined. In a second step, monthly background surface reflectance climatology reference maps for the blue band, generated from the MERIS archive in the frame of the


ESA CCI Land Cover project [23] are used. The monthly averages are derived from the seven-daily surface reflectance time series of the two blue MERIS bands (413 and 443 nm), over the period 2003–2012, with a spatial resolution of 300 m. Data gaps were completed with broadband (300–700 nm) albedo values provided by the GlobAlbedo project [24] at a spatial resolution of 5 km.

For pixels having a status equal to ‘land’, ‘water’, or ‘other’, a first separation between cloudy and clear pixels is done by a simple thresholding applied to the difference between the actual blue reflectance and the reference value (indicated by T1 in Figure 5). Following the thresholding test for the reflectance in the blue band, a series of customized tests is designed for each distinct pixel status, including thresholding on SWIR reflectance and band ratios (T2 to T6). The tests may be active or inactive, depending on the pixel status, and the thresholds are tuned for each status value. Finally, similarity tests are applied to check the SAD between the current pixel and a set of pre-defined reference spectra, extracted as average spectra of a large number of PROBA-V pixels belonging to the same type of surface (e.g., deep sea water) and generated from the training database (see below). Out of the 14 similarity checks (S1 to S14), 4 are common to all observed spectra and use equal thresholds across pixel statuses (S8, S9, S10, and S14). The remaining similarity checks may be active or not, with tuned thresholds, depending on the pixel status. For the complete list of the tests and their corresponding thresholds, the reader is referred to the PROBA-V Products User Manual [6]. After all tests are completed, the algorithm outputs the computed cloud flag.
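The cascade described above can be sketched as follows. The statuses, thresholds, reference spectrum, and the SAD acceptance limit are placeholders, not the tuned values from the Products User Manual; only the structure (adaptive blue-band test, status-dependent SWIR test, then similarity checks) follows the text.

```python
import math

# Placeholder per-status test configuration (not the tuned values).
THRESHOLDS = {
    "land":  {"t1_blue_diff": 0.10, "t2_swir": None},
    "water": {"t1_blue_diff": 0.06, "t2_swir": 0.08},
}
CLOUD_REF = [1.0, 1.0, 1.0, 1.0]   # flat, bright "cloud" spectrum (placeholder)

def sad(a, b):
    """Spectral angle between two spectra, in radians."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def cloud_flag(spectrum, status, blue_climatology):
    """Return 'cloud' or 'clear' for one pixel (Blue, Red, NIR, SWIR)."""
    th = THRESHOLDS[status]
    # T1: observed blue reflectance vs. the monthly climatology reference
    if spectrum[0] - blue_climatology > th["t1_blue_diff"]:
        return "cloud"
    # T2: SWIR threshold, active only for some statuses
    if th["t2_swir"] is not None and spectrum[3] > th["t2_swir"]:
        return "cloud"
    # similarity check: small spectral angle to a reference cloud spectrum
    if sad(spectrum, CLOUD_REF) < 0.10:
        return "cloud"
    return "clear"
```

The adaptive element is that the T1 threshold is applied to the difference from a per-pixel, per-month climatology rather than to the raw reflectance, which is what distinguishes the C1 scheme from the static thresholds of C0.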
Thresholds were tuned to maximize the overall accuracy of the algorithm, while keeping a good balance between missed clouds and clear pixels erroneously marked as clouds, with respect to a training dataset of over 43,000 manually labeled pixels classified in three cloud cover classes: ‘totally cloudy’ (6000 spectra), ‘semi-cloudy’ (16,277 spectra), and ‘clear sky’ (21,200 spectra). The training database was extracted from four daily global PROBA-V composites for the following days: 21 March 2014, 21 June 2014, 21 September 2014, and 21 December 2014. Each pixel was also assigned to one of the following land cover classes: ‘dark water (ocean)’, ‘dark water (inland)’, ‘turbid water’, ‘desert/bare soil’, ‘vegetation’, ‘wetland’, ‘salt’, ‘urban’, and ‘snow/ice’. Training pixels were well distributed over the globe and all land cover classes were represented in the three cloud cover classes.
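As a rough illustration of the two kinds of tests described above (a threshold on the blue-band difference, and spectral angle distance (SAD) similarity checks), consider the following sketch. The threshold values, reference spectrum, and function names are placeholders for illustration, not the tuned operational values from the Products User Manual:

```python
import numpy as np

def sad(spectrum, reference):
    """Spectral angle distance (radians) between two reflectance spectra."""
    s, r = np.asarray(spectrum, float), np.asarray(reference, float)
    cos_angle = s.dot(r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def is_cloudy(blue_toa, blue_ref, spectrum, cloud_spectrum,
              t1=0.15, s_thresh=0.10):
    """Toy two-stage test: flag a pixel whose blue TOA reflectance exceeds
    the clear-sky reference by more than t1 (cf. test T1); otherwise fall
    back on a SAD similarity check against a mean cloud spectrum.
    Thresholds t1 and s_thresh are illustrative, not the tuned values."""
    if blue_toa - blue_ref > t1:
        return True
    return sad(spectrum, cloud_spectrum) < s_thresh
```

In the operational algorithm, which tests are active and which thresholds apply depend on the pixel status; this sketch only shows the shape of one threshold test followed by one similarity test.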

2.4. Other Differences between C0 and C1 and Issues Solved

During the C0 nominal processing, two bugs were detected and fixed. The bug fixes are applied to the complete C1 archive. The first bug fix (implemented in C0 on 16 July 2015) limits the impact of on-board compression errors. Before the fix, entire segments of spectral band data were omitted after a decompression error, which happened on a random basis every few days. The fix limits the amount of missing data in the final L1C, L2A, and synthesis product lines to the block where the decompression error occurred. The second bug fix (implemented in C0 on 10 February 2016) is related to the satellite attitude data check, which caused some segment data to be wrongfully marked as ‘no data’. All data lost in C0 before the implementation date have been successfully compiled into the C1 collection. An issue related to leap second implementation was identified in the C0 nominal processing: an incorrect leap second value was applied during the period 23 April 2015 until 29 June 2015. All telemetry data (satellite position, velocity, and attitudes) during this period were consequently timestamped with 1 s inaccuracy, leading to an on-ground geolocation error of about 6 km. This was corrected in C1, resulting in geometric shifts between the two collections for this period. Finally, metadata of the delivered C1 products are compliant with the Climate and Forecast (CF) Metadata Conventions (v1.6) [25]. In C0, metadata were only compliant with these conventions starting from 6 January 2016.

3. Materials and Methods

3.1. Data Used

3.1.1. PROBA-V Collection 1 Level 2A

For the validation of the cloud detection, 61 Level 2A segment products (i.e., projected TOA reflectances) were used, with acquisition dates on 21 March 2015, 21 June 2015, 21 September 2015, and 21 December 2015. Note that these dates are different from the ones used for training dataset collection, which allows for a completely independent validation of the cloud screening algorithm.

3.1.2. PROBA-V Collection 0 and Collection 1 Level 3 S10-TOC

For the evaluation of the PROBA-V Collection 1 archive, a 37-month period (1 November 2013 until 30 November 2016) of S10 TOC reflectance and NDVI of PROBA-V C0 and C1 at 1 km was considered, i.e., 111 10-daily composites. Unclear observations are flagged as ‘cloud/shadow’, ‘snow/ice’, ‘water’, or ‘missing’, based on the information contained in the status map (SM).
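The masking of unclear observations can be sketched as follows. The NDVI formula is the standard definition; the status-map label values used here (0 = clear, with other codes for cloud/shadow, snow/ice, water, and missing) are illustrative placeholders, not the actual PROBA-V SM bit encoding:

```python
import numpy as np

def masked_ndvi(red, nir, status, clear_label=0):
    """Compute NDVI = (NIR - Red)/(NIR + Red) from TOC reflectance and
    set every observation whose status label is not 'clear' to NaN."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndvi = (nir - red) / (nir + red)                  # standard NDVI
    ndvi[np.asarray(status) != clear_label] = np.nan  # keep 'clear' only
    return ndvi
```

For example, a clear pixel with Red = 0.05 and NIR = 0.45 yields NDVI = 0.8, while a flagged pixel is excluded from all subsequent statistics.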

3.1.3. SPOT/VGT Collection 2 and Collection 3 Level 3 S10-TOC

For the comparison with an external dataset from METOP/AVHRR (see below), the PROBA-V C0 and C1 are extended backward in time, using S10 TOC reflectance and NDVI at 1 km resolution of the SPOT/VGT Collection 2 (C2) and Collection 3 (C3). After the end of the SPOT/VGT mission in May 2014, the data archive from both the VGT1 and VGT2 instruments was reprocessed, aiming at improved cloud screening and correcting for known artefacts such as the pattern in the VGT2 blue band and the Sun–Earth distance bug in the TOA reflectance calculation [2]. SPOT/VGT-C3 was proven to be more stable over time compared to the previous SPOT/VGT-C2 archive. The time series from SPOT/VGT-C2 and SPOT/VGT-C3 considered here runs from January 2008 until December 2013. In the comparison with METOP/AVHRR, the switch from SPOT/VGT to PROBA-V data is set at January 2014.

3.1.4. METOP/AVHRR

The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Polar System (EPS) consists of a series of polar orbiting meteorological satellites. METOP has a stable local overpass time at around 9:30 h. The AVHRR instrument onboard METOP is used to generate 10-daily NDVI by the LSA-SAF (http://landsaf.meteo.pt). The AVHRR S10 TOC surface reflectance and NDVI are processed in a very similar way to those of PROBA-V, with the same water vapor and ozone inputs, and a similar atmospheric correction and compositing method [26]. There are, however, differences in calibration, cloud detection, and overpass time stability. Global data from METOP-A (launched in 2006) for the period January 2008–November 2016 are used in the comparison.

3.2. Sampling

3.2.1. Validation Database for Cloud Detection

The validation database for cloud detection contains almost 53,000 pixels (Figures 6 and 7). All pixels are manually collected, classified, and labeled by a cloud pixel expert as (i) cloud: totally cloudy (opaque clouds), semi-transparent cloud, or other turbid atmosphere (e.g., dust, smoke); or (ii) clear: clear sky water, clear sky land, or clear sky snow/ice. The semi-transparent clouds were further differentiated through visual assessment into three density classes, which makes it possible to understand which categories of semi-transparent clouds are captured by the cloud detection algorithm during the validation process: (i) thick semi-transparent cloud; (ii) average or medium dense semi-transparent cloud; and (iii) thin semi-transparent cloud.


Figure 6. Spatial distribution of pixels in the cloud detection validation database.

Cloudy
  Totally cloudy (opaque clouds): 12,395
  Semi-transparent clouds: 17,509
  Other turbid atmosphere (e.g., dust, smoke): 750
Clear
  Clear sky land: 11,522
  Clear sky water: 5,422
  Clear sky snow/ice: 4,042

Figure 7. Distribution of surface types within the cloud detection validation database.

3.2.2. Global Subsample

In order to reduce processing time, a global systematic subsample is taken over the global images by taking the central pixel in each arbitrary window of 21 by 21 pixels. For the pairwise comparison between C0 and C1, pixels identified in both SM as ‘clear’ are selected, and only observations with an identical observation day (OD) are considered. In order to discriminate between the three different PROBA-V cameras, additional sampling is based on thresholds on the viewing zenith angle (VZA) and viewing azimuth angle (VAA) of each VNIR observation (Table 2). As a result, C0 and C1 surface reflectance and NDVI that are derived from identical clear observations are compared.

Table 2. Thresholds on VNIR VZA and VAA to discriminate between left, center, and right cameras.

             LEFT            CENTER    RIGHT
VZA (VNIR)   >20°            <18°      >20°
VAA (VNIR)   <90° OR >270°   –         between 90° and 270°
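The camera discrimination described above can be sketched as a small classifier. This is an illustrative reading of the Table 2 thresholds, assuming VZA in degrees, VAA in [0°, 360°), and the left/right azimuth assignment as given there; the function name is ours:

```python
def classify_camera(vza_deg, vaa_deg):
    """Assign a VNIR observation to a PROBA-V camera using the Table 2
    thresholds. Returns 'LEFT', 'CENTER', 'RIGHT', or None when the
    observation falls in the 18°-20° buffer between thresholds."""
    if vza_deg < 18.0:
        return "CENTER"
    if vza_deg > 20.0:
        # Azimuth separates the two oblique cameras (assumed mapping).
        if vaa_deg < 90.0 or vaa_deg > 270.0:
            return "LEFT"
        if 90.0 < vaa_deg < 270.0:
            return "RIGHT"
    return None  # excluded from the pairwise sampling
```

The 18°-20° gap between the center and oblique thresholds deliberately leaves ambiguous viewing geometries out of the sample.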

3.3. Methods

3.3.1. Validation of the Cloud Detection Algorithm

The cloud detection algorithm is validated through visual inspection of the cloud masks and through pixel-by-pixel comparison with the validation database. The accuracy of the method is assessed by measuring the agreement between the cloud algorithm output and the validation dataset. A confusion matrix is used to derive the overall accuracy (OA), user’s accuracy (UA), producer’s accuracy (PA), commission error (CE), and omission error (OE). The OA is computed as the ratio of the sum of all correctly-classified pixels to the sum of all validation pixels. The UA for a specific class is the ratio between correctly-classified pixels and all pixels classified as that class, where CE = 100% − UA. The PA for a specific class is the ratio between correctly-classified pixels and all ground truth pixels of that class, and OE = 100% − PA. In order to assess the overall classification, Krippendorff’s α is derived [27], which accounts for agreement by chance and is scaled from zero (pure chance agreement) to one (no chance agreement) [28]. For the visual inspection, false color composites of image subsets are overlaid with the cloud (and snow/ice) mask.
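The confusion-matrix metrics can be computed as in the sketch below (plain Python; rows are the detection result and columns the validation labels, following the layout of Table 4; Krippendorff’s α is omitted here for brevity):

```python
def confusion_metrics(matrix, labels=("clear", "cloud")):
    """Derive OA and per-class UA/CE/PA/OE (in %) from a square confusion
    matrix given as nested lists: matrix[i][j] counts pixels detected as
    labels[i] and validated as labels[j]."""
    n = len(labels)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    oa = 100.0 * correct / total
    per_class = {}
    for i, lab in enumerate(labels):
        row_sum = sum(matrix[i])                       # detected as lab
        col_sum = sum(matrix[j][i] for j in range(n))  # validated as lab
        ua = 100.0 * matrix[i][i] / row_sum
        pa = 100.0 * matrix[i][i] / col_sum
        per_class[lab] = {"UA": ua, "CE": 100.0 - ua,
                          "PA": pa, "OE": 100.0 - pa}
    return oa, per_class
```

Applied to the all-surfaces matrix of Table 4A, [[13095, 1655], [2934, 24140]], this reproduces OA = 89.0%, UA = 88.8% for ‘clear’, and PA = 93.6% for ‘cloud’.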

3.3.2. Spatio-Temporal Variation of Validation Metrics

In order to evaluate the temporal evolution of the intercomparison of surface reflectance and NDVI between two datasets, a number of validation metrics (Table 3) are calculated for each time step. The linear relationship between two datasets is identified based on a geometric mean regression (GMR) model [29]. Other metrics are the root-mean-squared difference (RMSD), the root systematic and unsystematic mean product difference (RMPDs and RMPDu), all three expressed in the same unit as the datasets themselves (% for TOC reflectance or unitless for NDVI), and the mean bias error (MBE). To perform a combined assessment of the spatial and temporal variability of the metrics, Hovmöller diagrams are based on data intercomparison for each time step and for each latitude band of 6°.

Table 3. Validation metrics used to compare two datasets.

Abbreviation    Metric                                                       Formula ¹
GMR slope       Geometric mean regression slope                              b = sign(R) · σY/σX
GMR intercept   Geometric mean regression intercept                          a = Ȳ − b·X̄
R²              Coefficient of determination                                 R² = (σX,Y / (σX·σY))²
MSD             Mean squared difference                                      MSD = (1/n) Σᵢ (Xi − Yi)²
RMSD            Root-mean-squared difference                                 RMSD = √[(1/n) Σᵢ (Xi − Yi)²]
RMPDu           Root of the unsystematic or random mean product difference   RMPDu = √[(1/n) Σᵢ |X̂i − Xi|·|Ŷi − Yi|]
RMPDs           Root of the systematic mean product difference               RMPDs = √(MSD − MPDu)
MBE             Mean bias error                                              MBE = (1/n) Σᵢ (Xi − Yi)

¹ σX and σY are the standard deviations of X and Y, σX,Y is the covariance of X and Y, R is the correlation coefficient, sign() takes the sign of the variable between the brackets, X̂i and Ŷi are estimated using the GMR model fit, MPDu = RMPDu², and n is the number of samples.
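A minimal NumPy sketch of the Table 3 metrics (using population statistics, i.e., ddof = 0; variable names are ours):

```python
import numpy as np

def validation_metrics(x, y):
    """Compute GMR slope/intercept, MSD, RMSD, RMPDu, RMPDs, and MBE for
    two paired samples, following the Table 3 definitions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx, sy = x.std(), y.std()                  # population std devs
    r = np.corrcoef(x, y)[0, 1]
    b = np.sign(r) * sy / sx                   # GMR slope
    a = y.mean() - b * x.mean()                # GMR intercept
    y_hat = a + b * x                          # GMR estimates of Y
    x_hat = (y - a) / b                        # GMR estimates of X
    msd = np.mean((x - y) ** 2)
    rmsd = np.sqrt(msd)
    mpdu = np.mean(np.abs(x_hat - x) * np.abs(y_hat - y))
    rmpdu = np.sqrt(mpdu)
    rmpds = np.sqrt(msd - mpdu)                # MSD = MPDs + MPDu
    mbe = np.mean(x - y)
    return {"b": b, "a": a, "MSD": msd, "RMSD": rmsd,
            "RMPDu": rmpdu, "RMPDs": rmpds, "MBE": mbe}
```

By construction, the systematic and unsystematic components add up to the MSD, so RMPDs² + RMPDu² = RMSD².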

4. Results and Discussion

4.1. Cloud Detection

4.1.1. Qualitative Evaluation

Overall, visual inspection of the cloud masking result shows good detection of opaque and semi-transparent clouds. Figure 8 shows six examples that demonstrate the behavior of the algorithm over different surfaces (vegetated areas, bare surface, different water types, and snow/ice) and with different cloud types (opaque clouds, small cumulus clouds, semi-transparent clouds). A number of flaws in the cloud detection algorithm are illustrated: overdetection of clouds (Figure 8A,C,D); turbid waters detected as ‘cloud’ (Figure 8A,B,F); underdetection of semi-transparent clouds over water surfaces (Figure 8E); and bright coastlines detected as cloud (Figure 8F). Other issues (not shown here) are that sun glint or ice on water surfaces is sometimes incorrectly detected as cloud. Finally, the qualitative evaluation has revealed that thick clouds are sometimes wrongly classified as snow/ice. In some cases, this is combined with saturation of one or more PROBA-V bands. The snow detection algorithm has not changed between C0 and C1.

Figure 8. Qualitative evaluation of the cloud detection algorithm through visual inspection over six different sites (A–F). False color composites of image subsets (left) are overlaid with the cloud mask in cyan.

4.1.2. Quantitative Evaluation

Table 4 shows the confusion matrices comparing the cloud detection algorithm result with the validation database, for all surfaces, and separately for land and water surfaces, respectively. The ‘cloud’ class includes opaque clouds and semi-transparent clouds. Pixels flagged as ‘snow/ice’ or ‘missing’ are excluded from the analysis.


Table 4. Confusion matrices for (A) all surfaces, (B) land, and (C) water. The ‘cloud’ class includes semi-transparent clouds.

A. All surfaces, OA = 89.0%, α = 0.764

                      Validation database
Cloud detection    Clear     Cloud     UA       CE
Clear              13,095    1,655     88.8%    11.2%
Cloud              2,934     24,140    89.2%    10.8%
PA                 81.7%     93.6%
OE                 18.3%     6.4%

B. Land, OA = 89.7%, α = 0.771

                      Validation database
Cloud detection    Clear     Cloud     UA       CE
Clear              8,633     782       91.7%    8.3%
Cloud              2,273     18,069    88.8%    11.2%
PA                 79.2%     95.9%
OE                 20.8%     4.1%

C. Water, OA = 87.3%, α = 0.741

                      Validation database
Cloud detection    Clear     Cloud     UA       CE
Clear              4,462     873       83.6%    16.4%
Cloud              661       6,071     90.2%    9.8%
PA                 87.1%     87.4%
OE                 12.9%     12.6%

The OA yields 89.0% (over all surfaces), 89.7% (over land), and 87.3% (over water), indicating the cloud detection algorithm is slightly more accurate over land than over water. This is also reflected in Krippendorff’s α, which reaches 0.764 (all surfaces), 0.771 (land), and 0.741 (water). For ‘cloud’, the PA is very high over all surfaces (93.6%) and over land (95.9%), indicating clouds are detected with very few omission errors, related to the fact that the cloud detection scheme is cloud conservative. For these cases, the OE for ‘clear’ is relatively high. Over water, by contrast, the OE for ‘clear’ and ‘cloud’ is similar. Additionally, it has been investigated how the algorithm behaves with respect to semi-transparent clouds. Figure 9 shows how the three density classes of semi-transparent clouds are classified by the algorithm, i.e., whether a semi-transparent cloud is classified as cloud or as clear, depending on whether it was thin, medium, or thick. The light colors indicate the percentage of pixels classified as ‘cloud’, the dark color indicates the percentage of pixels classified as ‘clear’. The figure shows that the thick semi-transparent clouds are almost all classified as ‘cloud’, while a larger portion of the thin semi-transparent clouds is classified as ‘clear’.

Figure 9. Classification of semi-transparent clouds as ‘cloud’ or ‘clear’ (numbers in the plot indicate the occurrence, the y-axis the percentage), according to their cloud density class.

4.1.3. Effect on S10 Product Completeness

The adapted cloud screening has implications for the amount of clear vs. unclear observations over land in the TOC-S10 C0 and C1 archives. For the period November 2013–November 2016, the average amount of missing observations (due to illumination conditions or bad radiometric quality) remains unchanged between C0 and C1 (Table 5). However, a larger amount of clouds/shadows is detected in C1 (+11.8%), and as a consequence there are fewer clear observations (−5.4%). Fewer pixels are detected as snow/ice (−6.5%).

Table 5. Status map labelling in S10-TOC 1 km in PROBA-V C0 and C1 (% land pixels, November 2013–November 2016) and difference between C0 and C1.

Label             PROBA-V C0      PROBA-V C1      Difference
                  S10-TOC 1 km    S10-TOC 1 km
Clear             79.3%           74.0%           −5.4%
Not clear         20.7%           26.0%
  Missing         6.3%            6.3%            −0.0%
  Cloud/shadow    2.3%            14.1%           +11.8%
  Snow/ice        12.1%           5.6%            −6.5%

4.2. Comparison between PROBA-V C0 and C1

The reprocessing causes differences between PROBA-V C0 and C1, but the magnitude of the difference is dependent on the time period. The temporal evolution of MBE and RMSD between PROBA-V C0 and C1 S10-TOC reflectance is shown in Figure 10. The MBE for the VNIR bands remains in the range (−0.4%, 0.4%) but confirms sudden changes in the period October 2013–October 2014, which are in agreement with the updates applied to the VNIR absolute calibration in C0. The temporal profile of the MBE for the NIR band shows a peak in the second dekad of November 2016. This larger difference between C1 and C0 is linked to a problem in C0 in the water vapor data used as input for the atmospheric correction of the images acquired on 20 November 2016. The issue was solved in C1. The MBE for the SWIR band clearly shows the degradation model application in C1, instead of the sudden changes in C0. The difference between C1 and C0 peaks in September 2015 with an MBE up to −0.6%. The temporal evolution of the RMSD shows periods with relatively larger difference between C1 and C0, related to the different implementation date of updates to the geometric calibration and the incorrect leap second implementation in C0 (23 April 2015 until 29 June 2015).
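The per-composite statistics behind Figures 10 and 11 amount to grouping the paired C0/C1 samples by composite date and camera, then computing MBE and RMSD per group. A sketch, with an illustrative record layout of our own choosing:

```python
from collections import defaultdict
from math import sqrt

def mbe_rmsd_per_composite(records):
    """Group paired C0/C1 samples by (date, camera) and compute MBE
    (C0 minus C1) and RMSD for each group. Each record is a tuple
    (date, camera, c0_value, c1_value); the layout is illustrative."""
    groups = defaultdict(list)
    for date, camera, c0, c1 in records:
        groups[(date, camera)].append(c0 - c1)
    out = {}
    for key, diffs in groups.items():
        n = len(diffs)
        mbe = sum(diffs) / n
        rmsd = sqrt(sum(d * d for d in diffs) / n)
        out[key] = {"MBE": mbe, "RMSD": rmsd}
    return out
```

The same aggregation, keyed on (date, 6° latitude band) instead of (date, camera), yields the inputs for the Hovmöller diagrams of Section 4.3.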


Figure 10. Temporal evolution of MBE (top, C0 minus C1) and RMSD (bottom) per band and per camera for S10-TOC reflectance, November 2013–November 2016. Dark green vertical lines indicate dates of absolute VNIR calibration updates. Grey shading indicates periods of different geometric calibration. The yellow shading indicates the period for which the leap second issue was fixed.


Figure 11 illustrates the temporal evolution of the global difference between C0 and C1 S10-TOC NDVI for the same period. The updates of the RED and NIR absolute calibration parameters are clearly reflected in the temporal evolution of the MBE. Changes in the ICP files cause the RMSD to fluctuate roughly between 0.005 and 0.015. The different implementation date of updates to the geometric calibration and the corrected leap second implementation have a relatively large effect on the RMSD, with peaks up to 0.02.


Figure 11. Temporal evolution of MBE (left) and RMSD (right) between C0 and C1 per camera for S10-TOC NDVI, November 2013–November 2016. Dark green vertical lines indicate dates of absolute VNIR calibration updates. Grey shading indicates periods of different geometric calibration. The yellow shading indicates the period for which the leap second issue was fixed.

4.3. Comparison to METOP/AVHRR

The previous sections focused on the effect of the reprocessing on the PROBA-V dataset. Now the combined SPOT/VGT and PROBA-V time series are compared to an external dataset: the spatio-temporal patterns of the differences between the combined SPOT/VGT and PROBA-V time series before and after their last reprocessing and the external dataset from METOP/AVHRR allow us to evaluate the effect of the reprocessing campaigns on the temporal stability. Of course, there are intrinsic differences between combined TOC reflectance or NDVI datasets derived from SPOT/VGT and PROBA-V on the one hand, and METOP/AVHRR on the other hand, most importantly linked to differences in overpass time, spectral response functions, radiometric calibration, and image processing (e.g., cloud detection and atmospheric correction). It is to be noted that differences also exist between SPOT/VGT and PROBA-V, although the PROBA-V mission objective was to provide continuation to the SPOT/VGT time series: there are small differences in spectral response functions, both sensors were not radiometrically intercalibrated, and there is an important overpass time lag of about 45 min between SPOT/VGT at the end of its lifetime and PROBA-V.

Figure 12 illustrates the spatio-temporal behavior of the validation metrics between the combined series of SPOT/VGT-C3 and PROBA-V-C1 and METOP/AVHRR for the three common spectral bands (red, NIR, and SWIR) and the NDVI. The Hovmöller plots indicate a low inter-annual variation, hence a stable bias between SPOT/VGT-C3–PROBA-V-C1 and METOP/AVHRR. The switch from SPOT/VGT to PROBA-V is however visible in the spatio-temporal plots, with relatively higher differences for Red and NIR after January 2014. This is related to the relatively larger differences in overpass time between PROBA-V and METOP-A, hence larger differences in illumination angles, which impacts especially the red and NIR directional reflectance [30]. Overall, the intra-annual and spatial variation are linked to differences in vegetation cover within the year (i.e., seasonality of vegetation densities at mid latitudes) and over the globe (e.g., high vegetation densities in the tropics). For red reflectance, the lowest MBE and RMPDs are observed in the tropics and in mid-latitude summers, related to lower red reflectance when vegetation densities are high. The opposite pattern is observed for NIR reflectance, which increases with vegetation cover. For SWIR reflectance, spatio-temporal patterns are overall less pronounced. The NDVI shows the highest bias in mid- and high-latitude winter periods, which is possibly related to the low accuracy of the atmospheric correction applied at high solar zenith angles [31,32].

Figure 12. Hovmöller diagrams of the MBE (left, METOP/AVHRR minus VGT-C3/PROBA-V-C1) and the RMPDs (right) between the METOP/AVHRR and the combined series of VGT-C3 and PROBA-V-C1 reflectance bands (in %) and the NDVI (unitless) (January 2008–November 2016).

In order to evaluate the effect of the reprocessing on spatio-temporal stability, the combined series of the former SPOT/VGT-C2 and PROBA-V-C0 archives is compared to METOP/AVHRR (Figure 13). The figures show a much larger spatio-temporal instability, and the switch from SPOT/VGT to PROBA-V is more pronounced.


Figure 13. Hovmöller diagrams of the MBE (left, METOP/AVHRR minus VGT-C2/PROBA-V-C0) and the RMPDs (right) between the METOP/AVHRR and the combined series of VGT-C2 and PROBA-V-C0 reflectance bands (in %) and the NDVI (unitless) (January 2008–November 2016).

5. Conclusions

This paper provides an overview of the modifications in the PROBA-V processing chain, and the related impacts on the new PROBA-V-C1 archive. The evaluation of the reprocessing is based on (i) qualitative and quantitative evaluation of the new cloud detection scheme; (ii) the relative comparison between PROBA-V-C0 and PROBA-V-C1 TOC-S10 surface reflectances and NDVI; and (iii) comparison of the combined SPOT/VGT and PROBA-V series with an external dataset from METOP/AVHRR.

The new cloud detection algorithm shows good performance, with an overall accuracy of 89.0%. The detection is slightly less performant over water (87.3%) than over land (89.7%). Since the algorithm is cloud conservative, clouds are detected with very few omission errors, and many small clouds, cloud borders, and semi-transparent clouds are correctly identified. However, both the qualitative and quantitative evaluation have shown that there is an overdetection of clouds over bright surfaces (e.g., bright coastlines, sun glint, or ice on water surfaces and turbid waters). The adaptation of the cloud detection algorithm has resulted in fewer clear observations in C1 compared to C0 (−5.4%). A larger amount of clouds/shadows is detected (+11.8%), and fewer pixels are labeled as snow/ice (−6.5%).

The temporal profile of the MBE between PROBA-V-C0 and PROBA-V-C1 blue, red, and NIR TOC reflectances shows sudden changes in the period October 2013–October 2014, related to stepwise updates to the VNIR absolute calibration applied in C0, but the overall MBE remains within the range (−0.4%, 0.4%). The updates of the red and NIR absolute calibration parameters are also clearly visible
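The accuracy figures quoted above follow standard confusion-matrix definitions. A minimal sketch, assuming a binary cloud/clear labeling against a manually classified reference (the function name is hypothetical, not from the paper):

```python
import numpy as np

def cloud_mask_scores(reference, predicted):
    """Standard confusion-matrix scores for a binary cloud mask (1 = cloud).
    OA: overall accuracy; OE: omission error (missed clouds);
    CE: commission error (false cloud detections)."""
    reference = np.asarray(reference, dtype=bool)
    predicted = np.asarray(predicted, dtype=bool)
    tp = np.sum(reference & predicted)    # clouds correctly detected
    fn = np.sum(reference & ~predicted)   # clouds missed
    fp = np.sum(~reference & predicted)   # clear pixels flagged as cloud
    tn = np.sum(~reference & ~predicted)  # clear pixels correctly kept
    oa = (tp + tn) / reference.size
    oe = fn / (tp + fn)   # 1 - producer's accuracy
    ce = fp / (tp + fp)   # 1 - user's accuracy
    return oa, oe, ce
```

A cloud-conservative algorithm keeps the omission error low at the cost of a higher commission error, which is consistent with the overdetection reported over bright surfaces.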


in the temporal evolution of the difference between C0 and C1 S10-TOC NDVI. The MBE remains within the range (−0.015, +0.015) and the RMSD varies between 0.005 and 0.02. The application of degradation models to the SWIR calibration in C1 results in more gradual differences between C0 and C1 SWIR TOC reflectance, with an MBE of up to −0.6%. The different implementation dates of updates to the geometric calibration, the incorrect leap second implementation in C0 (23 April 2015 until 29 June 2015), and a problem with the water vapor input data in C1 (20 November 2016) result in periods with relatively larger RMSD between C0 and C1, although the RMSD remains well below 1%.

The spatio-temporal patterns of the bias between the combined series of SPOT/VGT-C3 and PROBA-V-C1 and an external dataset of red, NIR, and SWIR TOC reflectance and the NDVI derived from METOP-A/AVHRR indicate a relatively low inter-annual variation. Although the switch from SPOT/VGT to PROBA-V is visible, the spatio-temporal behavior of the metrics shows that the recent reprocessing campaigns on SPOT/VGT (to C3) and PROBA-V (to C1) have resulted in a much more stable combined time series. The overpass time evolution for both (drifting) sensors causes bidirectional reflectance distribution function (BRDF) effects due to differences in illumination and viewing geometry.

Author Contributions: Conceptualization, E.S. and C.T.; Methodology, E.S., C.T. and W.D.; Formal Analysis, C.T., G.K. and K.S.; Data Curation, L.V.d.H.; Writing—Original Draft Preparation, C.T., S.S., S.A., I.B., M.-D.I., L.B., G.K., K.S. and W.D.; Writing—Review & Editing, C.T., E.S., S.S., D.C. and F.N.; Supervision, D.C. and F.N.; Project Administration, D.C. and F.N.

Funding: This research was funded by Federaal Wetenschapsbeleid (Belspo), contract CB/67/8, and the European Space Agency, contract 4000111291/14/I-LG. The cloud validation work was conducted in the framework of the IDEAS+ project, funded by the European Space Agency.

Acknowledgments: The authors thank Michael Paperin (Brockmann Consult), who compiled and provided the manually-classified dataset for the validation of the cloud detection.

Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:

AVHRR  Advanced very high resolution radiometer
BRDF  Bidirectional reflectance distribution function
C0  Collection 0
C1  Collection 1
C2  Collection 2
C3  Collection 3
CCI  Climate Change Initiative
CE  Commission error
DCC  Deep convective clouds
EPS  EUMETSAT Polar System
EUMETSAT  European Organisation for the Exploitation of Meteorological Satellites
GCP  Ground Control Point
GLS  Global Land Survey
GMR slope  Geometric Mean Regression slope
GMR intercept  Geometric Mean Regression intercept
ICP  Instrument calibration parameters
IQC  Image Quality Center
MEP  Mission Exploitation Platform
MSD  Mean squared difference
NIR  Near infrared
NRT  Near-real time

OA  Overall accuracy
OD  Observation day
OE  Omission error
OSCAR  Optical Sensor CAlibration with simulated Radiance
PA  Producer's accuracy
PROBA-V  PRoject for On-Board Autonomy–Vegetation
R2  Coefficient of determination
SAD  Spectral angle distance
RMSD  Root-mean-squared difference
S  Similarity check
S1-TOA  Top-of-atmosphere daily synthesis product
S1-TOC  Daily top-of-canopy synthesis product
S10-TOC  10-day synthesis product
SM  Status map
SPOT  Système Pour l'Observation de la Terre
SWIR  Short-wave infrared
T  Threshold test
TOA  Top-of-atmosphere
TOC  Top-of-canopy
UA  User's accuracy
VAA  Viewing azimuth angle
VGT  Vegetation
VNIR  Visible and near infrared
VZA  Viewing zenith angle

References

1. Mellab, K.; Santandrea, S.; Francois, M.; Vrancken, D.; Gerrits, D.; Barnes, A.; Nieminen, P.; Willemsen, P.; Hernandez, S.; Owens, A.; et al. PROBA-V: An operational and technology demonstration mission-Results after commissioning and one year of in-orbit exploitation. In Proceedings of the 4S (Small Satellites Systems and Services) Symposium, Porto Pedro, Spain, 26–30 May 2014.
2. Toté, C.; Swinnen, E.; Sterckx, S.; Clarijs, D.; Quang, C.; Maes, R. Evaluation of the SPOT/VEGETATION Collection 3 reprocessed dataset: Surface reflectances and NDVI. Remote Sens. Environ. 2017, 201, 219–233. [CrossRef]
3. Maisongrande, P.; Duchemin, B.; Dedieu, G. VEGETATION/SPOT: An operational mission for the Earth monitoring; presentation of new standard products. Int. J. Remote Sens. 2004, 25, 9–14. [CrossRef]
4. Dierckx, W.; Sterckx, S.; Benhadj, I.; Livens, S.; Duhoux, G.; Van Achteren, T.; Francois, M.; Mellab, K.; Saint, G. PROBA-V mission for global vegetation monitoring: Standard products and image quality. Int. J. Remote Sens. 2014, 35, 2589–2614. [CrossRef]
5. Sterckx, S.; Benhadj, I.; Duhoux, G.; Livens, S.; Dierckx, W.; Goor, E.; Adriaensen, S.; Heyns, W.; Van Hoof, K.; Strackx, G.; et al. The PROBA-V mission: Image processing and calibration. Int. J. Remote Sens. 2014, 35, 2565–2588. [CrossRef]
6. Wolters, E.; Dierckx, W.; Iordache, M.-D.; Swinnen, E. PROBA-V Products User Manual v3.0; VITO: Mol, Belgium, 2018.
7. Lambert, M.J.; Waldner, F.; Defourny, P. Cropland mapping over Sahelian and Sudanian agrosystems: A knowledge-based approach using PROBA-V time series at 100-m. Remote Sens. 2016, 8, 232. [CrossRef]
8. Roumenina, E.; Atzberger, C.; Vassilev, V.; Dimitrov, P.; Kamenova, I.; Banov, M.; Filchev, L.; Jelev, G. Single- and multi-date crop identification using PROBA-V 100 and 300 m S1 products on Zlatia Test Site, Bulgaria. Remote Sens. 2015, 7, 13843–13862. [CrossRef]
9. Shelestov, A.; Kolotii, A.; Camacho, F.; Skakun, S.; Kussul, O.; Lavreniuk, M.; Kostetsky, O. Mapping of biophysical parameters based on high resolution EO imagery for JECAM test site in Ukraine. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.

10. Baret, F.; Weiss, M. Algorithm Theoretical Basis Document—Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), Fraction of Green Vegetation Cover (FCover)-I2.01. GIO-GL Lot1, GMES Initial Operations. 2017. Available online: https://land.copernicus.eu/global/sites/cgls.vito.be/files/products/GIOGL1_ATBD_FAPAR1km-V1_I2.01.pdf (accessed on 25 February 2018).
11. Meroni, M.; Fasbender, D.; Balaghi, R.; Dali, M.; Haffani, M.; Haythem, I.; Hooker, J.; Lahlou, M.; Lopez-Lozano, R.; Mahyou, H.; et al. Evaluating NDVI Data Continuity Between SPOT-VEGETATION and PROBA-V Missions for Operational Yield Forecasting in North African Countries. IEEE Trans. Geosci. Remote Sens. 2016, 54, 795–804. [CrossRef]
12. Kempeneers, P.; Sedano, F.; Piccard, I.; Eerens, H. Data assimilation of PROBA-V 100 m and 300 m. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3314–3325. [CrossRef]
13. Sánchez, J.; Camacho, F.; Lacaze, R.; Smets, B. Early validation of PROBA-V GEOV1 LAI, FAPAR and FCOVER products for the continuity of the copernicus global land service. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. Arch. 2015, XL-7/W3, 93–100. [CrossRef]
14. Lacaze, R.; Smets, B.; Baret, F.; Weiss, M.; Ramon, D.; Montersleet, B.; Wandrebeck, L.; Calvet, J.C.; Roujean, J.L.; Camacho, F. Operational 333 m biophysical products of the copernicus global land service for agriculture monitoring. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. Arch. 2015, XL-7/W3, 53–56. [CrossRef]
15. Goor, E.; Dries, J.; Daems, D.; Paepen, M.; Niro, F.; Goryl, P.; Mougnaud, P.; Della Vecchia, A. PROBA-V Mission Exploitation Platform. Remote Sens. 2016, 8, 564. [CrossRef]
16. Sterckx, S.; Livens, S.; Adriaensen, S. Rayleigh, deep convective clouds, and cross-sensor desert vicarious calibration validation for the PROBA-V mission. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1437–1452. [CrossRef]
17. Sterckx, S.; Adriaensen, S.; Dierckx, W.; Bouvet, M. In-Orbit Radiometric Calibration and Stability Monitoring of the PROBA-V Instrument. Remote Sens. 2016, 8, 546. [CrossRef]
18. Sterckx, S.; Adriaensen, S. Degradation monitoring of the PROBA-V instrument. GSICS Q. 2017, 11, 5–6. [CrossRef]
19. Govaerts, Y.; Sterckx, S.; Adriaensen, S. Use of simulated reflectances over bright desert target as an absolute calibration reference. Remote Sens. Lett. 2013, 4, 523–531. [CrossRef]
20. Gutman, G.; Masek, J.G. Long-term time series of the Earth's land-surface observations from space. Int. J. Remote Sens. 2012, 33, 4700–4719. [CrossRef]
21. Lissens, G.; Kempeneers, P.; Fierens, F.; Van Rensbergen, J. Development of cloud, snow, and shadow masking algorithms for VEGETATION imagery. In Proceedings of the IEEE IGARSS Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment, Honolulu, HI, USA, 24–28 July 2000.
22. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H. The spectral image processing system (SIPS)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [CrossRef]
23. Kirches, G.; Krueger, O.; Boettcher, M.; Bontemps, S.; Lamarche, C.; Verheggen, A.; Lembrée, C.; Radoux, J.; Defourny, P. Land Cover CCI-Algorithm Theoretical Basis Document, Version 2; UCL-Geomatics: London, UK, 2013.
24. Muller, J.-P.; López, G.; Watson, G.; Shane, N.; Kennedy, T.; Yuen, P.; Lewis, P.; Fischer, J.; Guanter, L.; Domench, C.; et al. The ESA GlobAlbedo Project for mapping the Earth's land surface albedo for 15 years from European sensors. Geophys. Res. Abstr. 2011, 13, EGU2011-10969.
25. Eaton, B.; Gregory, J.; Drach, B.; Taylor, K.; Hankin, S.; Caron, J.; Signell, R.; Bentley, P.; Rappa, G.; Höck, H.; et al. NetCDF Climate and Forecast (CF) Metadata Conventions. Available online: http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.pdf (accessed on 26 March 2018).
26. Eerens, H.; Baruth, B.; Bydekerke, L.; Deronde, B.; Dries, J.; Goor, E.; Heyns, W.; Jacobs, T.; Ooms, B.; Piccard, I.; et al. Ten-Daily Global Composites of METOP-AVHRR. In Proceedings of the Sixth International Symposium on Digital Earth: Data Processing and Applications, Beijing, China, 9–12 September 2009; pp. 8–13. [CrossRef]
27. Krippendorff, K. Reliability in Content Analysis. Hum. Commun. Res. 2004, 30, 411–433. [CrossRef]
28. Kerr, G.H.G.; Fischer, C.; Reulke, R. Reliability assessment for remote sensing data: Beyond Cohen's kappa. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.

29. Ji, L.; Gallo, K. An agreement coefficient for image comparison. Photogramm. Eng. Remote Sens. 2006, 72, 823–833. [CrossRef]
30. Hagolle, O. Effet d'un Changement d'Heure de Passage sur les Séries Temporelles de Données de L'Instrument VEGETATION; CNES: Toulouse, France, 2007.
31. Proud, S.R.; Fensholt, R.; Rasmussen, M.O.; Sandholt, I. A comparison of the effectiveness of 6S and SMAC in correcting for atmospheric interference of Meteosat Second Generation images. J. Geophys. Res. 2010, 115, D17209. [CrossRef]
32. Proud, S.R.; Rasmussen, M.O.; Fensholt, R.; Sandholt, I.; Shisanya, C.; Mutero, W.; Mbow, C.; Anyamba, A. Improving the SMAC atmospheric correction code by analysis of Meteosat Second Generation NDVI and surface reflectance data. Remote Sens. Environ. 2010, 114, 1687–1698. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).