
Communication

Optical Design of Imaging Spectrometer Based on Linear Variable Filter for Nighttime Light Remote Sensing

Yunqiang Xie 1,2,3 , Chunyu Liu 1,3,*, Shuai Liu 1,3 and Xinghao Fan 1,2,3

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; [email protected] (Y.X.); [email protected] (S.L.); [email protected] (X.F.)
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
* Correspondence: [email protected]

Abstract: Nighttime light remote sensing has unique advantages in reflecting human activities, and thus has been used in many fields including estimating population and GDP, analyzing light pollution, and monitoring disasters and conflict. However, the existing nighttime light remote sensors have many limitations because they are subject to one or more shortcomings such as coarse spatial resolution, restricted swath width and lack of multi-spectral data. Therefore, we propose an optical system of an imaging spectrometer based on a linear variable filter. The imaging principle, optical specifications, optical design, imaging performance analysis and tolerance analysis are illustrated. The optical system, with a focal length of 100 mm, F-number of 4 and a 43° field of view in the range of 400–1000 nm, is presented, and excellent image quality is achieved. The system can obtain multi-spectral images of eight bands with a spatial resolution of 21.5 m and a swath width of 320 km at the altitude of 500 km. Compared with the existing nighttime light remote sensors, our system possesses the advantages of high spatial and high spectral resolution, wide spectrum band and wide swath width simultaneously, greatly making up for the shortage of the present systems. The result of the tolerance analysis shows our system satisfies the requirements of fabrication and alignment.

Keywords: nighttime light remote sensing; imaging spectrometer; optical system; high resolution

Citation: Xie, Y.; Liu, C.; Liu, S.; Fan, X. Optical Design of Imaging Spectrometer Based on Linear Variable Filter for Nighttime Light Remote Sensing. Sensors 2021, 21, 4313. https://doi.org/10.3390/s21134313

Academic Editor: Gwanggil Jeon
Received: 21 April 2021; Accepted: 21 June 2021; Published: 24 June 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Nighttime light remote sensing can obtain various light information on the earth's surface on cloudless nights, most of which is sent out by artificial light sources. In contrast to common remote sensing satellite imageries, which are mainly used to monitor the physicochemical characteristics of the earth, nighttime light remote sensing imageries have a higher correlation with human activities and possess the advantages of spatiotemporal continuity, independence and objectivity. Therefore, they have been widely used in social science fields such as gross national product estimation, greenhouse gas emissions, poverty indexing, urbanization monitoring and evaluation, urban agglomeration evolution analysis and light pollution analysis. In addition to reflecting the lights from urban areas, nighttime light remote sensing can also capture the lights of fishing boats, natural gas combustion, forest fires, etc., so it can also provide important support for many other research fields such as fishery monitoring, major event estimation and natural disaster risk assessment [1–5].

Up to now, the Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS) data and the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument onboard the Suomi National Polar-orbiting Partnership (NPP) satellite are the two most common sources of nighttime light data [6]. The DMSP/OLS has a rich database and can detect nighttime light radiance from 1.54 × 10⁻⁹ to 3.17 × 10⁻⁷ W·cm⁻²·sr⁻¹ with a swath width of 3000 km [7,8]. However, it has a coarse spatial resolution of 2700 m


and can only obtain panchromatic images, limiting its use in fine-scale applications [9]. Other shortcomings such as the lack of on-board calibration, blooming effects and saturation in urban cores further limit its application [10]. The NPP-VIIRS has a radiometric range of 3 × 10⁻⁹ to 2 × 10⁻² W·cm⁻²·sr⁻¹ and a swath width of 3000 km [11–13]. As the successor to the DMSP/OLS, it offers a certain degree of upgrade, such as a finer resolution of 740 m and on-board radiometric calibration [8,14,15]. However, the resolution is still very low and only panchromatic images can be obtained [16]. The Luojia 1-01 satellite, launched by Wuhan University in China, has a spatial resolution of 130 m and a swath width of 250 km [17]. However, like its predecessors, it cannot capture spectral images [18]. On 9 January 2017, Chang-guang Satellite Technology Co., Ltd. launched the JL1-3B satellite. It provides significant improvements over the previous satellite remote sensors in terms of increased spatial resolution (0.92 m) and multispectral bands including 430–512 nm, 489–585 nm and 580–720 nm [19]. However, its swath width is only 11 km, and the number of wavebands is too small to provide more detailed information. Imaging information with high spatial resolution and high spectral resolution in a wide waveband is imperative to identify the types of light sources and their spatial arrangement [20]. Since the light from different areas such as residential, commercial and war areas shows different spatial and spectral characteristics, high-resolution datasets have unique advantages over coarse-resolution imagery in monitoring, analyzing, evaluating and predicting the changes on the earth's surface, and thus help to quantify and track the dynamics of human activities and their impacts on the environment and economy [21,22]. In sum, a nighttime light remote sensor that can capture imageries with high spectral and spatial resolution in a wide waveband and swath width is worth studying.
In this paper, we present an optical system of an imaging spectrometer based on a linear variable filter for nighttime light remote sensing that can obtain multi-spectral images of eight spectral bands in the range of 400–1000 nm with a spatial resolution of 21.5 m and a swath width of 320 km. Compared with imaging spectrometers based on prism-grating and prism-grating-prism modules, which consist of two optical subsystems, namely the telescopic system and the spectroscopic system, our system possesses the advantages of compactness, light weight and low cost, although its spectral resolution is lower. For example, the imaging spectrometer based on a prism-grating-prism module developed by Xue consists of a telescope system and a spectral imaging system [23]. That imaging spectrometer contains 24 lenses and has a total length of 300 mm. The imaging spectrometer based on a prism-grating module by Chen contains 22 lenses [24]. In contrast, our system contains 15 lenses, and its total length is 250 mm. Although spectral shift may occur in our spectrometer due to the inherent limitation of the linear variable filter, it can be greatly alleviated by spectral calibration. Compared with the existing nighttime light remote sensors, our system meets the requirements of high spatial and high spectral resolution, wide working band and wide swath width simultaneously, greatly making up for the shortage of the present systems.

For an imaging spectrometer for nighttime light remote sensing, it is necessary to maintain the illumination uniformity of the image plane to ensure sufficient SNR [25–27]. By analyzing the energy distribution of the focal plane and the pupil aberration, we apply pupil coma to improve the illumination of the edge field of view. Since the waveband of our system is very wide, it is essential to correct the chromatic aberration and secondary spectrum to get excellent image quality. In order to obtain an apochromatic system, some special materials such as crystals and liquids are usually used [28,29].
However, the disadvantages of these materials, such as fragility, instability and poor availability, prevent their application in aerospace. Fortunately, the Buchdahl dispersion model provides an effective method to select materials for apochromatic systems [30]. In the design process of our system, the chromatic aberration and secondary spectrum are corrected by properly choosing the glass of certain lenses with the dispersion vector method based on the Buchdahl dispersion model. Other aberrations such as astigmatism and distortion are reduced by adjusting the position of the aperture stop and the shape and bending factors of certain lenses. After iterative optimization in the software ZEMAX, we obtain the final optical system with excellent image quality. We have carried out the tolerance analysis and the result shows that the system is qualified for practical application.

2. Imaging Principle

A schematic diagram of the imaging spectrometer's optical system and its imaging principle is shown in Figure 1. The system consists of an optical system and a filter which is placed in proximity to the CMOS, with the coated side towards the pixels. The imaging spectrometer is a push-broom system and the working principle of this type of system is shown in Figure 1a. The light from the ground target is imaged by the optical system. After passing through the filter, the light forms an image on the focal plane and is finally received by the CMOS, as shown in Figure 1b, where the lights with different colors represent different fields of view. We can therefore obtain multiple linear images of different wavebands which can be expanded into plane images using the push-broom imaging mode of the spectrometer.

The data captured by the imaging spectrometer is referred to as a three-dimensional data cube denoted by two-dimensional space and one-dimensional spectrum [31]. The data cube contains spectral and spatial information of the targets within the whole scanning range of the imaging spectrometer, as shown in Figure 1c. Each target in the scanning range corresponds to a column in the data cube which can be converted into a spectral curve for spectrum characteristics analysis, as shown in Figure 1d,e.


Figure 1. Schematic diagram of the imaging spectrometer's optical system and its push-broom imaging principle. (a) Push-broom imaging principle, (b) Structure of the imaging spectrometer, (c) Three-dimensional data cube, (d) Data column in the data cube, (e) Spectrum curve of the object.

3. Optical Specifications

The imaging spectrometer is mounted on a low-orbit satellite at the altitude of 500 km and the technical index requires that the ground sample distance (GSD) is less than 21.5 m. Such a sampling frequency is sufficient to distinguish the adjacent street lamps on most roads [32]. Besides, it can help to distinguish large buildings by detecting all the artificial


light on them such as billboards and neon lights [33]. The focal length of the optical system f′ can be given by

f′ = α·H/GSD   (1)

where α is the size of the CMOS pixel and H is the detection distance. The CMOS image sensor GSENSE5130 produced by Gpixel Inc, with 5056 × 2968 pixels and a 4.25 µm × 4.25 µm pixel size, is chosen as the detector of our system [34]. It is calculated that the focal length is 98.8 mm. In order to have abundant allowance, we choose 100 mm as the focal length of our system. It should be noted that the ground sample distance of 21.5 m corresponds to the nadir pixels. For the marginal pixels, the corresponding ground sample distance will be slightly larger.

The swath width L of our imaging spectrometer is required to be more than 320 km, sufficient to cover two cities in one shot. Besides, such a wide coverage can reduce the number of revisits when capturing a large area.
It can be calculated by the following equation that the width of the image plane l is 64 mm:

l = f′·L/H   (2)

Since the width of the image sensor GSENSE5130 is 21.488 mm, less than that of the image plane, we use three image sensors to splice the image plane in the form of Figure 2. In order to avoid the chink between adjacent sensors in the assembly process, an overlap area of 50 pixels will be set between them. In this case, the swath width can be calculated as 320.2 km by

L = GSD_min·(P1 + P2 + P3)   (3)

where P1 = 5006, P2 = 5056 and P3 = 5006 are the effective numbers of pixels of the three sensors, and GSD_min is the ground sample distance calculated by Equation (1) when the focal length is 100 mm.
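The first-order calculations above, Equations (1)–(4), can be reproduced in a short script; this is a sketch in which all numbers are taken from the text:

```python
# First-order specification calculations for the spectrometer (sketch).
import math

alpha = 4.25e-6          # CMOS pixel size [m]
H = 500e3                # orbit altitude / detection distance [m]
GSD = 21.5               # required ground sample distance [m]

# Equation (1): required focal length f' = alpha * H / GSD
f_req = alpha * H / GSD
print(f"required focal length: {f_req * 1e3:.1f} mm")   # ~98.8 mm

f = 100e-3               # chosen focal length with allowance [m]

# Equation (2): image-plane width for a 320 km swath, l = f' * L / H
L_swath = 320e3
l = f * L_swath / H
print(f"image plane width: {l * 1e3:.0f} mm")           # 64 mm

# Equation (3): swath of three spliced sensors with 50-pixel overlaps
GSD_min = alpha * H / f  # ground sample distance at f = 100 mm
P1, P2, P3 = 5006, 5056, 5006
print(f"swath width: {GSD_min * (P1 + P2 + P3) / 1e3:.1f} km")  # ~320.2 km

# Equation (4): half-field angle from the 77 mm circumcircle diameter
d = 77e-3
omega = math.degrees(math.atan(d / (2 * f)))
print(f"half-field angle: {omega:.2f} deg")             # ~21.06 deg
```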

Figure 2. Schematic diagram of image plane spliced by three image sensors.

Through geometry calculation, we can get that the circumcircle diameter of the spliced image plane is 77 mm. Then the half-field angle ω of the optical system can be calculated as follows:

ω = arctan(d/(2f′)) = 21.06°   (4)

where d is the circumcircle diameter of the spliced image plane. We set ω to 21.5° to have abundant allowance.

In order to match the detector in size, the size of the linear variable filter we choose is 33.2 mm × 33.2 mm × 1.2 mm and the effective coating area is 21.5 mm × 12.7 mm. The corresponding relationship between the filter and the detector pixels is shown in Figure 3. The area of the panchromatic band corresponds to the first 98 rows of pixels of the sensor and its size is about 0.42 mm × 12.7 mm. The next 70 rows of pixels correspond to the transition region of the filter, whose size is about 0.3 mm × 12.7 mm. The remaining area of the filter, corresponding to the next 2800 rows of pixels, is the spectral gradient region in the range of 400–1000 nm. The central wavelength varies along the longitudinal direction of the filter. The full width at half maximum (FWHM) of the filter is around 5% of the center wavelength and the transmittance is above 90%. Accurate co-registration of bands and different rows of pixels will be realized by spectral calibration. After co-registration of bands and detectors, spectral images can be obtained by integrating the data of corresponding pixels in the push-broom mode. The system can obtain eight spectral images and one panchromatic image at one time and thus can achieve accurate detection [35]. The bandwidth of each spectral image can be set between 20 nm and 40 nm according to practical need. However, since the FWHM of the filter increases with the central wavelength, the spectral images in the waveband near 1000 nm will have wider bandwidth. It should be noted that the spectral bands can be reset on orbit because of the use of the linear variable filter.
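The row-to-wavelength correspondence described above can be sketched as follows. This is a hypothetical model: a strictly linear wavelength gradient across the 2800 spectral rows is assumed here for illustration, whereas the real mapping is determined by spectral calibration.

```python
# Hypothetical filter/detector correspondence: rows 1-98 panchromatic,
# rows 99-168 transition region, rows 169-2968 spanning 400-1000 nm.
PAN_ROWS = 98
TRANSITION_ROWS = 70
GRADIENT_ROWS = 2800
LAMBDA_MIN, LAMBDA_MAX = 400.0, 1000.0  # nm

def center_wavelength(row):
    """Approximate center wavelength (nm) seen by a 1-based detector row,
    or a label for the non-spectral regions (assumed linear gradient)."""
    if row <= PAN_ROWS:
        return "panchromatic"
    if row <= PAN_ROWS + TRANSITION_ROWS:
        return "transition"
    i = row - PAN_ROWS - TRANSITION_ROWS - 1      # 0 .. 2799
    return LAMBDA_MIN + i * (LAMBDA_MAX - LAMBDA_MIN) / (GRADIENT_ROWS - 1)

def fwhm(lam):
    """FWHM is ~5% of the center wavelength, so bands near 1000 nm are wider."""
    return 0.05 * lam

print(center_wavelength(169))   # 400.0
print(center_wavelength(2968))  # 1000.0
print(fwhm(1000.0))             # 50.0
```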


FigureFigure 3. ( 3.a) (Thea) The corresponding corresponding relationship relationship between between filter filter and and detector. detector. (b) ( schematicb) schematic of the of the structure. structure.

In order to ensure the resolution and accuracy of measurement, the modulation transfer function (MTF) of the system should be greater than 0.12 at the Nyquist frequency, which can be calculated as 118 lp/mm by

N = 1000/(2α)   (5)

where N is the Nyquist frequency and α is the size of the CMOS pixel. The MTF can be calculated as follows [36,37]:

MTF = MTF_design · MTF_man · MTF_sample   (6)

where MTF_design is the design value of the MTF of the optical system, MTF_man is the MTF of optical system manufacturing, which can be set as 0.8 based on engineering experience, and MTF_sample is the MTF of the detector, whose value is 0.5. After calculation, MTF_design should be greater than 0.3.

A high signal-to-noise ratio (SNR) of the imaging spectrometer is necessary for nighttime light remote sensing, as it enables the system to distinguish the target from the background. The SNR [38] is defined as the ratio of signal power to noise power, which can be calculated by

SNR = Se/Ne = Se/√(Se + σR² + De)   (7)

where Se is the number of signal electrons, Ne is the number of noise electrons, σR = 1.6 e⁻ is the readout noise, and De = 3 e⁻/s/pix is the dark current. The number of signal electrons within the integration period is obtained by:

Se = π·Ad·M·t_int/(4F²·h·ν) · L0·τ0·η   (8)

where Ad is the area of a single pixel, M is the integration number of our CMOS, whose range is from 1 to 186, t_int is the integration time of the system, L0 is the radiance at the entrance pupil, which can be obtained with the software MODTRAN, and τ0 is the transmittance of the optical system. We expect to use 15 lenses made of optical glass with a transmittance of 0.995 at a thickness of 10 mm in the spectral range of 500–1000 nm. The transmittance in the range of 400–500 nm will be lower; we must improve the quality of the corresponding images by increasing the integration number or by a specific image processing algorithm. All surfaces of the lenses will be coated with an anti-reflective film which guarantees a transmittance of more than 0.995 for each surface. Since the average transmittance of the linear variable filter is 0.9, the transmittance of the optical system can be calculated as 0.718. We set τ0 to 0.7 to have abundant allowance. η is the quantum efficiency of the detector, F is the F-number of the optical system, h is Planck's constant, and ν is the frequency of the light. The values of the variables used to calculate the SNR are listed below: Ad = 4.25 µm × 4.25 µm, t_int = 0.003 s, η = 65%. The technical index requires that the SNR of the system be more than 10 when the radiance of the target is 1 × 10⁻⁷ W/cm²/sr in each imaging spectral band, to distinguish most human activities [22]. The atmospheric transmittance per wavelength in the spectral range of 400–1000 nm obtained from the software MODTRAN is shown in Figure 4. The main parameters used to calculate the transmittance are shown in Table 1. Ignoring the bands with extremely low transmittance, we take the average transmittance as 85%. According to Figure 4, the wavelength range from 400 nm to 450 nm should also be ignored when the radiance of the object is quite low due to the poor atmospheric transmittance. Calculated by the above formulas, when the F-number and integration number are 4 and 86, respectively, the SNR is 10.
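The SNR model of Equations (7) and (8) can be sketched as below. The working wavelength and the at-pupil radiance (the target radiance attenuated by the 85% average atmospheric transmittance) are illustrative assumptions here, so the exact value depends on the band considered; the paper obtains L0 from MODTRAN.

```python
# Hedged sketch of the SNR model in Equations (7)-(8).
import math

h = 6.626e-34            # Planck's constant [J*s]
c = 3.0e8                # speed of light [m/s]

def signal_electrons(L0, M, lam=550e-9, Ad=(4.25e-4)**2, t_int=0.003,
                     F=4.0, tau0=0.7, eta=0.65):
    """Equation (8): signal electrons. L0 in W/cm^2/sr, Ad in cm^2.
    The 550 nm default wavelength is an illustrative assumption."""
    nu = c / lam
    return math.pi * Ad * M * t_int * L0 * tau0 * eta / (4 * F**2 * h * nu)

def snr(L0, M, dark=3.0, read_noise=1.6, **kw):
    """Equation (7): SNR with shot, readout and dark-current noise terms."""
    Se = signal_electrons(L0, M, **kw)
    return Se / math.sqrt(Se + read_noise**2 + dark)

# A 1e-7 W/cm^2/sr target attenuated by ~85% atmospheric transmittance,
# at the maximum analysed integration number:
print(snr(0.85e-7, 86))
```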
When the integration number is 86, the stability of the system should be controlled within 0.001 degree, which is feasible for existing satellites. Besides, image motion compensation needs to be applied to avoid the image blurring caused by the relative motion between the object and the detector. Spatial calibration is also necessary to achieve high-precision detection. In order to balance the performance against the volume and weight of the system, we set the F-number to 4. The technical specifications of the optical system are listed in Table 2.

Table 1. Main parameters of MODTRAN used to calculate transmittance.

Parameters              Value
Model atmosphere        Mid-latitude summer
Water vapor column      1.0 g/cm²
CO2 mixing ratio        330.0 ppmv
Aerosol model used      Rural, VIS = 23 km

Table 2. Technical Specifications of the optical system.

Parameters          Value
Wavelength range    400–1000 nm
Focal length        100 mm
F-number            4
Field of view       ±21.5°
MTF                 >0.3 @ 118 lp/mm
Distortion          <0.05%
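As a quick check of the Nyquist frequency and design-MTF budget behind these specifications (Equations (5) and (6)):

```python
# Nyquist frequency and design-MTF budget (Equations (5)-(6)).
alpha_um = 4.25                       # pixel size [um]
N = 1000 / (2 * alpha_um)             # Equation (5), [lp/mm]
print(f"Nyquist frequency: {N:.0f} lp/mm")          # ~118

MTF_required = 0.12                   # required system MTF at Nyquist
MTF_man, MTF_sample = 0.8, 0.5        # manufacturing and detector terms
MTF_design = MTF_required / (MTF_man * MTF_sample)  # from Equation (6)
print(f"required design MTF: {MTF_design:.2f}")     # 0.30
```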

Figure 4. The atmospheric transmittance per wavelength in the spectral range of 400–1000 nm.

4. Image Plane Illumination Analysis

Relative illumination is the ratio of the illumination at the edge of the image plane to that at the center point. Nighttime light remote sensing requires that the relative illumination of the image plane should be greater than 80% for all fields of view to ensure sufficient SNR. However, the image plane illumination decreases as the field of view increases. Since the half field of view of our system is 21.5°, it is necessary to maintain the illumination uniformity of the image plane. In order to take effective measures to solve this problem, it is essential to determine the relation between the image-plane illumination and other parameters of the optical system.

Figure 5 shows the transfer relation between object and image in an optical system. Here ds is a radiation element on the object plane, which is imaged on the image plane as ds′ by the optical system. The radiation flux [39] received by the system from ds is

Φ = πL·sin²U·ds   (9)

where L is the radiance of the object and U is the aperture angle in object space. If the transmittance of the system is τ, the radiation flux of ds′ is

Φ′ = Φ·τ   (10)

Thus, the illumination of ds′ is

E′ = πτL·sin²U·ds/ds′ = πτL·sin²U/β²   (11)

where β is the lateral magnification of the optical system.

For the off-axis field of view, according to the geometric principle, the relationship between the on-axis and off-axis aperture angles can be obtained as

sin²U_m = sin²U·cos⁴ω   (12)

Thus, the illumination of an off-axis point on the image plane is

E′ = πτL·sin²U·cos⁴ω/β²   (13)


Figure 5. Transfer relation between object and image in optical system.

As analyzed above, the illumination is proportional to the fourth power of the cosine of the half field of view angle. Therefore, for an optical system with a large field of view, the illumination at the edge of the image plane is much lower than that at the center, and we have to adjust the system to improve the uniformity of illumination on the image plane.

According to Equation (13), when the half field of view increases, in order to keep the detection ability of the system unchanged, we can only reduce the lateral magnification to improve the illumination. Barrel distortion is the kind of aberration that compresses the edge of the image plane and thus makes the lateral magnification decrease with the increase in the field of view. Barrel distortion could then be employed to improve the illumination of the edge field of view. However, for an imaging spectrometer based on a linear variable filter, too much distortion will lead to severe spectrum crosstalk between different optical channels. Thus, using barrel distortion to promote the uniformity of image plane illumination is not desirable, and we have to find other ways.

Unlike a paraxial optical system, pupil aberrations increase sharply in an optical system with a large field of view [40]. Pupil aberrations in such a system make the location and size of the entrance pupil at the off-axis field of view different from those at the paraxial field of view. Among the pupil aberrations, pupil spherical aberration and pupil coma have the greatest influence on the entrance pupil. Pupil spherical aberration makes the location of the entrance pupil at the edge of the field

of view deviate from the ideal position, thus it mainly affects the calibration of the chief ray and has less influence on the relative illumination. Pupil coma can make the beam width of the off-axis field at the entrance pupil larger than that of the center field when both of them completely fill the stop aperture [41]. Such vignetting effect of pupil aberration can ensure that the off-axis object has a larger aperture than the on-axis object, so it is very beneficial to improve the relative illumination. Thus, we will improve the image plane illumination of the edge field of view by taking advantage of pupil coma.
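As a quick numerical check of the cos⁴ω falloff in Equation (13), the relative illumination at the edge of the system's 43° full field of view (half field ω = 21.5°) can be evaluated. The sketch below assumes the edge falloff is governed by the cos⁴ law alone, i.e. before any pupil-coma correction:

```python
import math

def relative_illumination(half_fov_deg: float) -> float:
    """Relative edge illumination E(omega)/E(0) under the cos^4 law."""
    w = math.radians(half_fov_deg)
    return math.cos(w) ** 4

# Half field of view of the designed system: 43 deg / 2 = 21.5 deg.
edge = relative_illumination(21.5)
print(f"cos^4 falloff at 21.5 deg: {edge:.3f}")  # ~0.749
```

An uncorrected system would therefore reach only about 75% relative illumination at the edge, which is why the design deliberately exploits pupil coma to push the realized value above 85% (Figure 12a).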

5. Optical System Design and Optimization

5.1. Initial Optical Structure Design

According to the technical specifications of our system calculated in Section 3, we selected an inverse telephoto structure with 13 lenses as the starting point for further design. The first element of the system is set as a negative meniscus to compress the angle of the light in the off-axis field of view relative to the optical axis to reduce the diameter of the other elements. Besides, taking a negative meniscus lens as the first element

contributes to correcting chromatic aberration and generates pupil coma, which helps to improve the relative illumination of the image plane. Due to the potential for spectral shift of the linear variable filter, the system should possess a telecentric light path in image space to ensure that the chief ray in each field of view can be perpendicularly incident on the filter and focal plane. In order to realize the telecentric characteristic of the system, a positive lens group is arranged in front of the focal plane to compress the emergent light so that it is parallel to the optical axis. Two separated positive and negative lens groups are introduced to the system to correct the field curvature. Since the working band of the system is very wide, it is important to eliminate chromatic aberration and the secondary spectrum. In order to achieve excellent chromatic correction, a cemented doublet composed of a low-dispersion glass and a short flint glass is usually used. However, a cemented doublet is structurally unstable and is rarely used in aerospace cameras. Thus, we choose two air-spaced doublets, which serve the same purpose, in the middle of the system for apochromatism. In order to increase the transmittance of the system, we chose glasses with high transmittance in the range of 400–1000 nm and controlled the angle between the light and the normal of each surface to be as small as possible to reduce the reflection of the light. For the purpose of increasing the optimization degrees of freedom, we added two lenses to the system. We then optimized the system with the optical design software ZEMAX. After several iterations of optimization, the initial optical system was obtained, as shown in Figure 6. This is a quasi-symmetric system with a telecentric light path in image space. The MTF and longitudinal aberration of the system are shown in Figure 7. It can be seen that the MTF of the marginal field of view is very low, which means a poor image quality.
We take the chromatic aberration at the 0.707 pupil zone as the major reference, because achromatism at this zone makes the chromatic aberration at the full pupil close to that of the paraxial area and thus achieves the best achromatic effect. In the figure of the longitudinal aberration curves, the wavelengths of 400 nm, 700 nm and 1000 nm do not converge at a point at the 0.707 pupil zone, and the chromatic aberration of the system at this zone is 0.0718 mm, which means that the system does not achieve apochromatism. To enhance the apochromatic ability, we need to replace the inappropriate glass in the system. Therefore, the material selection method is the key to the subsequent design. In this paper, we will use the Buchdahl model to analyze the dispersion characteristics of materials and select proper glass for our system.

5.2. Buchdahl Dispersion Model

Dispersion is defined as the variation of refractive index with wavelength, and thus an accurate description of the refractive index of glass materials is the premise of studying dispersion. The most commonly used model to calculate the refractive index of glass materials is

N²(λ) = A₀ + A₁·λ² + A₂·λ⁻² + A₃·λ⁻⁴ + A₄·λ⁻⁶ + A₅·λ⁻⁸   (14)

where N(λ) is the refractive index at wavelength λ, and Aᵢ are the coefficients of the glass material. The dispersive properties of glass can be modeled using the Taylor series expansion, which is expressed as

∆N(λ) = a₀ + a₁·∆λ + a₂·(∆λ)² + a₃·(∆λ)³ + …   (15)

where ∆N(λ) = N(λ) − N(λ₀), ∆λ = λ − λ₀, and λ₀ represents the base wavelength. It can be seen that Equation (14) describes the relationship between the square of the refractive index and the wavelength, which makes the calculation process of Equation (15) very complicated. Besides, Equation (15) converges very slowly even for a small value of ∆λ, leading to a large number of terms that must be calculated to obtain an accurate result [42]. For the theoretical analysis of dispersive properties, Equations (14) and (15) are too cumbersome for practical application. In order to simplify the calculation process and ensure the accuracy, a fast convergent formula of the refractive index is essential.
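To make Equation (14) concrete, the sketch below evaluates it with the classic catalog coefficients for Schott BK7; this glass does not appear in the design and the coefficients are quoted purely for illustration (each glass in Tables 3 and 4 would have its own set):

```python
import math

# Illustrative Schott-style coefficients (classic catalog values for BK7;
# not a glass used in this design).
A = [2.2718929, -1.0108077e-2, 1.0592509e-2, 2.0816965e-4,
     -7.6472538e-6, 4.9240991e-7]

def schott_index(lam_um: float) -> float:
    """Evaluate Equation (14): N^2 = A0 + A1*l^2 + A2*l^-2 + ... + A5*l^-8."""
    n_sq = (A[0] + A[1] * lam_um**2 + A[2] * lam_um**-2 +
            A[3] * lam_um**-4 + A[4] * lam_um**-6 + A[5] * lam_um**-8)
    return math.sqrt(n_sq)

print(f"n(0.5876 um) = {schott_index(0.5876):.4f}")  # ~1.5168 at the d-line
```

The six-term polynomial in λ² must be evaluated (and refitted) for every wavelength of interest, which is exactly the computational burden the Buchdahl model below avoids.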

Figure 6. Optical configuration of the initial system.


Figure 7. (a) MTF; (b) longitudinal aberration of the initial system.

The Buchdahl model [43] simplifies the dispersion equation by changing the wavelength λ into the chromatic coordinate ω, which is defined as

ω = (λ − λ₀)/(1 + α(λ − λ₀))   (16)

where α is a constant that makes the Buchdahl refractive index equation converge fast. It can be calculated by

α = 1/(λ₀ − λ*)   (17)

Here the unit of λ₀ is µm, and λ* = 0.174 µm, which is the Hartmann formula constant and can be used to model all types of glass [44].

By replacing the wavelength with the chromatic coordinate, Buchdahl's equation for the refractive index is given by

N(ω) = N₀ + v₁ω + v₂ω² + … + vᵢωⁱ   (18)

where vᵢ are the Buchdahl dispersion coefficients, which characterize the dispersion of the glass; their values are unique to each glass. Buchdahl's equation for the refractive index converges very fast. When used to model the dispersive properties of glass, a quadratic equation is sufficient to achieve satisfactory accuracy. Thus, only v₁ and v₂ need to be determined, and the calculation process can be found in reference [43].
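To make the change of variable in Equations (16) and (17) concrete, the sketch below maps the system's 400–1000 nm working band onto the chromatic coordinate. The base wavelength λ₀ = 0.7 µm is an assumption chosen as the middle of the band, not a value stated in the design:

```python
LAMBDA_STAR = 0.174  # Hartmann formula constant, in micrometers

def chromatic_coordinate(lam_um: float, lam0_um: float) -> float:
    """Equation (16): omega = (l - l0) / (1 + alpha * (l - l0))."""
    alpha = 1.0 / (lam0_um - LAMBDA_STAR)  # Equation (17)
    d = lam_um - lam0_um
    return d / (1.0 + alpha * d)

lam0 = 0.7  # assumed base wavelength (um), middle of the 400-1000 nm band
for lam in (0.4, 0.7, 1.0):
    print(f"lambda = {lam:.1f} um -> omega = {chromatic_coordinate(lam, lam0):+.3f}")
```

With this choice the whole band maps to roughly ω ∈ [−0.70, +0.19], so |ω| stays well below 1 and the power series in Equation (18) converges quickly, which is why a quadratic truncation suffices.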


5.3. Method of Glass Selection for Apochromatism

As a function of wavelength, the dispersion power [45] of an optical glass is defined as

D(λ) = ∆N(λ)/(N₀ − 1)   (19)

Through proper variable transformation, the dispersion power can be expressed by the Buchdahl model as

D(λ) = ∆N/(N₀ − 1) = Σᵢ₌₁ⁿ ηᵢ·ωⁱ,  ηᵢ = vᵢ/(N₀ − 1)   (20)

where ηᵢ is called the dispersion coefficient and n is the order of the equation. The lateral chromatic aberration of an optical system with the object at infinity can be expressed by the Buchdahl model as

TAchc = −y₁·D₀ = −y₁·Σᵢ₌₁ᵐ (yᵢ²·φᵢ)/(y₁²·φ₀)·Dᵢ(λ)   (21)

where TAchc is the lateral chromatic aberration, D₀ is the dispersion power of the whole system, yᵢ is the ray height at the i-th lens, φᵢ is the focal power of the i-th lens, φ₀ is the focal power of the whole system, and m is the number of lenses in the optical system. Substituting the quadratic Buchdahl model into Equations (20) and (21), we can get the following equations:

D₀ = η₁₀ω + η₂₀ω² = Σᵢ₌₁ᵐ αᵢ·(η₁ᵢω + η₂ᵢω²)

αᵢ = (yᵢ²·φᵢ)/(y₁²·φ₀)   (22)

ηⱼ₀ = Σᵢ₌₁ᵐ αᵢ·ηⱼᵢ  (j = 1, 2)

where η₁₀ and η₂₀ are the primary and secondary dispersion coefficients of the whole system, η₁ᵢ and η₂ᵢ are the primary and secondary dispersion coefficients of the i-th lens, and αᵢ is a scale factor. If we take the dispersion properties of each glass as a vector (η₁ᵢ, η₂ᵢ) in the coordinate system denoted by η₁ and η₂, then the vector (η₁₀, η₂₀) representing the dispersion of the whole system can be calculated by

G⃗₀ = Σᵢ₌₁ᵐ αᵢ·G⃗ᵢ   (23)

where G⃗₀ = (η₁₀, η₂₀) and G⃗ᵢ = (η₁ᵢ, η₂ᵢ). We can then take the modulus of G⃗₀ as the criterion to measure the apochromatic ability of the optical system. In order to obtain an apochromatic optical system, we should adjust the scale factors and change the glass in certain lenses to shorten the modulus of the vector G⃗₀.

5.4. Dispersion Vector Analysis and Glass Replacement

We have conducted ray tracing on our initial optical system, and the focal power, ray height, refractive index at the reference wavelength, scale factor of the dispersion vector and dispersion coefficients of each lens are listed in Table 3. The dispersion coefficient coordinates of each lens and the dispersion vector of the whole system in the two-dimensional coordinate system (η1, η2) are shown in Figure 8. The dispersion vector of the whole system is G⃗₀ = (−0.0194, 0.0352) and its modulus is 0.0402. It can be seen from Figure 8 that the sum dispersion vector of the negative lenses is far away from that of the positive lenses, which leads to a large modulus of G⃗₀ and thus an excessive chromatic aberration. In order to reduce the modulus of G⃗₀, we need to replace the glass material of some lenses. In consideration of the need to maintain the stability of the focal power, the glass far from the origin in the dispersion vector coordinate system cannot be replaced. However, the glass close to the origin has little effect on the chromatic aberration of the system. Thus, we should change the glass in the middle region of Figure 8. It can be seen that BAF5, H-ZBAF21, H-F51 and H-ZF3 lie in the middle of Figure 8. However, H-ZF3 and H-ZBAF21 correspond to the second lens and the fifteenth lens of the system, respectively. In our system, the second lens cooperates with the first lens to compress the incident light and reduce the primary aberration, and thus has a great influence on the light path. Therefore, the material of the second lens cannot be replaced. As mentioned in Section 5.1, the fifteenth lens is arranged to compress the emergent light to realize the telecentric characteristic of the system. If the material of the fifteenth lens is changed, the light path will be affected, leading to the loss of the telecentric characteristic. BAF5 and H-F51 correspond to the eighth and twelfth lenses, and they are both part of the air-spaced doublets used to reduce chromatic aberration, as mentioned in Section 5.1. Therefore, we choose to replace the glass materials of the eighth and twelfth lenses. The new glass should be close to the original ones in the coordinate system (η1, η2) so that their dispersion coefficients do not differ too much, which prevents the increase of other aberrations and an excessive change of focal power.

Figure 9 shows the dispersion coefficients of all glasses produced by CDGM. According to the principles above, glass materials TF3 and F7 are selected to replace BAF5 and H-F51. After the materials' replacement, we optimized the system with the software ZEMAX, and the ray-tracing parameters of the optimized system are listed in Table 4. The dispersion vector of the optimized optical system is shown in Figure 10a, from which we can see that the modulus of the vector G⃗₀ is 0.0326, much smaller than that of the former system. The longitudinal aberration curves of the system are shown in Figure 10b. It can be seen that the wavelengths of 400 nm, 700 nm and 1000 nm converge at a point at the 0.707 pupil zone. The chromatic aberration and secondary spectrum of the system at the 0.707 pupil zone are 0.06 µm and 0.1 µm, respectively, much less than those of the initial system. Thus, the system actualizes the apochromatism.

[Figure 8 plots the glass dispersion coefficients in the (η1, η2) plane; the sum vectors read from the figure are G⃗₊ = (0.0873, −0.6684) for the positive lenses, G⃗₋ = (−0.1067, 0.7036) for the negative lenses, and G⃗₀ = (−0.0194, 0.0352) for the whole system.]

Figure 8. Dispersion coefficient of each lens and the dispersion vector of the initial system.



Table 3. Ray-tracing parameters of the initial system.

Lens    1          2          3          4          5          6          7
Glass   H-K51      H-ZF3      H-FK71     H-ZF6      H-ZBAF21   H-ZF6      H-ZPK1A
φj      −0.006834  0.01047    −0.00721   −0.008528  0.009377   −0.006079  0.0192
Yj      12.539     12.636     11.958     11.936     13.251     14.274     14.412
N0      1.5190     1.7069     1.4542     1.7435     1.7151     1.7435     1.6135
αj      −0.6856    1.0666     −0.6578    −0.7752    1.0505     −0.7902    2.5444
η1      −0.0549    −0.0995    −0.0362    −0.1060    −0.0797    −0.1060    −0.0509
η2      −0.0042    0.0232     −0.0041    0.0291     0.0122     0.0291     −0.0022

Lens    8          9          10         11          12         13         14         15
Glass   BAF5       H-ZF72A    H-FK61     H-ZPK5      H-F51      H-QK1      H-ZF52     H-ZBAF21
φj      −0.02546   0.006623   0.01426    0.0188      −0.02664   −0.009781  0.009608   0.004847
Yj      14.252     14.664     14.754     14.298      12.628     5.836      4.816      4.321
N0      1.5995     1.9027     1.4942     1.5889      1.6317     1.4672     1.8316     1.7151
αj      −3.2995    0.9087     1.9805     2.4522      −2.7105    −0.2125    0.1422     0.0577
η1      −0.0697    −0.1476    −0.04      −0.0465     −0.0869    −0.0504    −0.1201    −0.0797
η2      0.0087     0.0606     −0.0036    0.00017124  0.0150     −0.0126    0.0438     0.0122
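The vector sum of Equation (23) can be checked directly against the ray-tracing data of Table 3. The sketch below recomputes the system dispersion vector from the listed αj, η1 and η2 values; the recomputed modulus matches the 0.0402 quoted in Section 5.4, while the two component sums come out as (0.0352, −0.0194), i.e. the quoted (−0.0194, 0.0352) with the components interchanged, so the printed pair should be read against the η1/η2 row labels of the table and the axes of Figure 8:

```python
import math

# (alpha_j, eta1_j, eta2_j) for the 15 lenses of the initial system (Table 3).
lenses = [
    (-0.6856, -0.0549, -0.0042), (1.0666, -0.0995, 0.0232),
    (-0.6578, -0.0362, -0.0041), (-0.7752, -0.1060, 0.0291),
    (1.0505, -0.0797, 0.0122),   (-0.7902, -0.1060, 0.0291),
    (2.5444, -0.0509, -0.0022),  (-3.2995, -0.0697, 0.0087),
    (0.9087, -0.1476, 0.0606),   (1.9805, -0.04, -0.0036),
    (2.4522, -0.0465, 0.00017124), (-2.7105, -0.0869, 0.0150),
    (-0.2125, -0.0504, -0.0126), (0.1422, -0.1201, 0.0438),
    (0.0577, -0.0797, 0.0122),
]

# Equation (23): G0 = sum_i alpha_i * G_i, with G_i = (eta1_i, eta2_i).
g0 = (sum(a * e1 for a, e1, _ in lenses),
      sum(a * e2 for a, _, e2 in lenses))
modulus = math.hypot(*g0)

print(f"G0 components: ({g0[0]:+.4f}, {g0[1]:+.4f})")
print(f"|G0| = {modulus:.4f}")  # ~0.0402, as stated in Section 5.4
```

The same recomputation on Table 4 is the quantitative way to confirm that the TF3/F7 substitution shrinks the modulus toward the quoted 0.0326.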

Table 4. Ray-tracing parameters of the optimized system.

Lens    1          2          3          4          5          6          7
Glass   H-K51      H-ZF3      H-FK71     H-ZF6      H-ZBAF21   H-ZF6      H-ZPK1A
φj      −0.005773  0.009655   −0.008114  −0.008616  0.01       −0.003131  0.018
Yj      12.637     12.660     12.289     12.447     13.773     14.137     15.779
N0      1.5190     1.7069     1.4542     1.7435     1.7151     1.7435     1.6135
αj      −0.5836    0.9796     −0.7757    −0.8450    1.2009     −0.3961    2.8370
η1      −0.0549    −0.0995    −0.0362    −0.1060    −0.0797    −0.1060    −0.0509
η2      −0.0042    0.0232     −0.0041    0.0291     0.0122     0.0291     −0.0022

Lens    8          9          10         11          12         13         14         15
Glass   TF3        H-ZF72A    H-FK61     H-ZPK5      F7         H-QK1      H-ZF52     H-ZBAF21
φj      −0.021     0.003113   0.013      0.016       −0.023     −0.009421  0.007063   0.005341
Yj      15.490     15.556     15.581     14.938      13.025     6.211      4.738      4.298
N0      1.6062     1.9027     1.4942     1.5889      1.6285     1.4672     1.8316     1.7151
αj      −3.1897    0.4769     1.9979     2.2602      −2.4701    −0.2301    0.1004     0.0625
η1      −0.0719    −0.1476    −0.04      −0.0465     −0.0849    −0.0504    −0.1201    −0.0797
η2      −0.0014    0.0606     −0.0036    0.00017124  0.0163     −0.0126    0.0438     0.0122


Figure 9. Dispersion coefficients of the glasses produced by CDGM.




[Figure 10a plots the glass dispersion coefficients of the optimized system in the (η1, η2) plane; the sum vectors read from the figure are G⃗₊ = (0.0584, −0.6100), G⃗₋ = (−0.0634, 0.6423) and G⃗₀ = (−0.005, 0.0323).]

Figure 10. (a) Dispersion coefficient of each lens and the dispersion vector of the optimized system; (b) Longitudinal aberration curves of the optimized system.

5.5. Final System

The secondary spectrum is only related to the focal length of the whole system and the relative partial dispersion and Abbe number of the glass. Therefore, when the focal length and the glass in the system are fixed, the secondary spectrum will not increase when optimizing the structure parameters of the system. The position of the stop aperture plays an important role in determining the lens size and the correction of distortion and field curvature, and thus, we have to adjust it for a better image quality. Besides, we should also alter the shape and bending factors of the lenses to control the distortion of the system at a low level. There must be enough distance between different elements to provide adequate clearance for mechanical spacers and to correct the coma and astigmatism. In order to improve the optical transmittance of the system, all surfaces of the lenses will be coated with an anti-reflective film. The thickness of each lens is controlled within a reasonable range to reduce the weight and improve machinability. We then optimized the system using the software ZEMAX after setting the operands to constrain the geometric parameters and optical characteristics. The total axial length of the final system is about 249 mm, and the weight is 1.03 kg. The transmittance of the final system (including the linear variable filter) is 0.723 and 0.463 in the spectral ranges of 500–1000 nm and 400–500 nm, respectively. As mentioned in Section 3, we will improve the quality of the images in the range of 400–500 nm by increasing the integration number or by a specific image processing algorithm. The optical layout is shown in Figure 11.

Figure 11. Optical layout of the final system.

6. Image Performance and Tolerance Analysis

6.1. Image Performance

We have known from Section 4 that the relative illumination of the image plane decreases with the fourth power of the cosine value of the half field of view. Our system


has a large field of view, and the illumination uniformity must be taken into account. As analyzed in Section 4, we apply the pupil coma to improve the illumination of the off-axis field of view. The relative illumination curve is shown in Figure 12a, and it can be seen that the relative illumination of the edge field of view is above 85%, better than the design requirement.


Figure 12. (a) Relative illumination of the image plane; (b) Field curvature and distortion of the final system; (c) MTF of the final system; (d) Spot diagram of the final system.

As an imaging spectrometer, the distortion must be controlled within a reasonable range. Figure 12b shows the curves of the distortion and field curvature of the system, from which we can see that the maximum distortion is less than 0.05% and the distortion correction is excellent, ensuring high-precision detection of the system. The field curvature of all bands is less than 0.08 mm, meaning a good correction.

The MTF represents the image quality of the optical system. As shown in Figure 12c, the MTF of the full field of view exceeds 0.4 at 118 lp/mm, better than the design requirement. Therefore, the system possesses a good contrast ratio, and the image quality on axis and off axis exhibits good consistency. The spot diagram is shown in Figure 12d. It can be seen that the root mean square (RMS) diameter is less than a single pixel size (4.25 µm) over the whole field of view, meaning a high resolution of the system.
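The 118 lp/mm figure used throughout Section 6 is the detector Nyquist frequency implied by the 4.25 µm pixel; a one-line check:

```python
pixel_um = 4.25  # detector pixel pitch, in micrometers

# Nyquist frequency = 1 / (2 * pixel pitch), expressed in line pairs per mm.
nyquist_lp_mm = 1000.0 / (2.0 * pixel_um)
print(f"Nyquist frequency: {nyquist_lp_mm:.1f} lp/mm")  # ~117.6, rounded to 118
```

Evaluating the MTF at exactly this spatial frequency ties the optical performance requirement directly to the sampling limit of the focal plane.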

6.2. Tolerance Analysis

Tolerance analysis is used to systematically analyze the influence of slight errors or chromatic dispersion on optical system performance. The purpose of tolerance analysis is to determine the types and values of the errors and introduce them into the optical system to analyze whether the system performance meets the practical requirement. In our system, we take the average diffraction MTF at 118 lp/mm as the evaluation criterion and allocate the tolerances as in Table 5. According to the result of tolerance analysis, the average


diffraction MTF at 118 lp/mm is above a value of 0.26 with a probability of 90%, which means that the system meets the requirement of practical fabrication and alignment.

Table 5. Tolerance Allocation.

Tolerance Items                 Value
Radius (fringes)                1
Thickness (mm)                  0.01
Decenter X/Y (mm)               0.01
Tilt X/Y (degrees)              0.0042
S + A irregularity (fringes)    0.2
Index of refraction             0.0003
Abbe number (%)                 0.5
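The "90% probability" statement above is the kind of result a Monte Carlo tolerance run produces. The sketch below is purely illustrative: it draws random perturbations within the Table 5 bounds and evaluates a hypothetical quadratic sensitivity model for the MTF drop; the sensitivity coefficients are invented for demonstration and the real analysis uses the ZEMAX model of the system:

```python
import random

random.seed(0)

# Tolerance bounds from Table 5 (units as listed there).
BOUNDS = {"radius": 1.0, "thickness": 0.01, "decenter": 0.01,
          "tilt": 0.0042, "irregularity": 0.2}
# Hypothetical sensitivities of the MTF to each unit-normalized error
# (illustrative numbers only, NOT derived from the actual design).
SENS = {"radius": 0.02, "thickness": 0.03, "decenter": 0.04,
        "tilt": 0.03, "irregularity": 0.02}
NOMINAL_MTF = 0.40  # nominal average MTF at 118 lp/mm

def one_trial() -> float:
    """Perturb every tolerance uniformly within its bound and model the
    MTF drop as an RSS of the weighted, normalized errors."""
    drop_sq = 0.0
    for name, bound in BOUNDS.items():
        err = random.uniform(-bound, bound) / bound  # normalized to [-1, 1]
        drop_sq += (SENS[name] * err) ** 2
    return NOMINAL_MTF - drop_sq ** 0.5

mtfs = sorted(one_trial() for _ in range(10_000))
p10 = mtfs[len(mtfs) // 10]  # 90% of trials stay above this value
print(f"90%-probability MTF floor (toy model): {p10:.3f}")
```

Reading off the 10th percentile of the trial results is exactly how a statement like "MTF above 0.26 with a probability of 90%" is obtained from a real Monte Carlo tolerance run.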

7. Discussion and Conclusions

We have designed an apochromatic optical system with a wide field of view for spectral remote sensing of nighttime light. Compared with the existing nighttime light remote sensors, our system meets the requirements of high spatial and high spectral resolution, wide working band and wide swath width simultaneously. It can obtain the spectral information of eight bands in the range of 400 nm to 1000 nm with a ground sample distance of 21.5 m while the radiance of the target is above 1 × 10⁻⁷ W/cm²/sr. Besides, the swath coverage of our system reaches 320 km. We first determined the parameters of the system according to the practical requirement. Then we designed the initial structure of the optical system. In the optimization process, we applied the pupil coma to improve the relative illumination of the edge field of view. In order to achieve apochromatism, we employed the vector operation based on the Buchdahl model to analyze the dispersion characteristics of glass and replaced the glass materials of certain lenses in the system. After the final optimization, the system possesses excellent image quality. The RMS spot sizes of different fields of view are all less than 4.25 µm, within one-pixel size. The MTF is above 0.4 at the Nyquist frequency of 118 lp/mm. The field curvature and distortion are also well corrected. The result of tolerance analysis shows that the system meets the requirement of fabrication and alignment.

The scheme of combining three lenses together can also enlarge the field of view. Such a scheme can simplify the process of optical design. However, the volume and weight of the whole system will increase significantly, and a higher stability of the platform will be required. In contrast, our single-lens scheme possesses the advantages of compactness and light weight. Besides, it needs less balancing weight, promoting the stability of the whole system.
Another scheme is to introduce aspheric surfaces into the optical system so that fewer glass elements are needed, which improves the transmittance. However, considering that the price of an aspheric lens is about ten times that of a spherical lens, the cost would increase sharply. A reflective optical system has a transmittance close to 1, but it is also expensive, and realizing the telecentric characteristics with it is difficult.
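The transmittance argument above can be made concrete with the normal-incidence Fresnel loss, T = (1 − R)^(2N) for N elements with per-surface reflectance R = ((n − 1)/(n + 1))². The element count and coating reflectance below are illustrative assumptions, not the actual figures of this design.

```python
def surface_reflectance(n_glass):
    """Normal-incidence Fresnel reflectance of an uncoated air-glass interface."""
    return ((n_glass - 1.0) / (n_glass + 1.0)) ** 2

def system_transmittance(r_surface, n_elements):
    """Transmittance from surface reflection losses alone: T = (1 - R)^(2N)."""
    return (1.0 - r_surface) ** (2 * n_elements)

n_el = 13  # hypothetical element count for illustration
r_uncoated = surface_reflectance(1.52)           # ~4.3% per surface
print(f"uncoated:             {system_transmittance(r_uncoated, n_el):.2f}")
print(f"AR-coated (R = 0.5%): {system_transmittance(0.005, n_el):.2f}")
```

The gap between the two figures (roughly 0.32 versus 0.88 for these assumptions) is why reducing the element count, whether by aspheres or by mirrors, translates directly into transmittance, which matters for a low-light instrument.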

Author Contributions: Conceptualization, Y.X.; Formal analysis, Y.X.; Funding acquisition, C.L.; Methodology, Y.X.; Project administration, C.L.; Resources, S.L.; Supervision, C.L.; Validation, S.L. and X.F.; Writing—original draft, Y.X.; Writing—review and editing, C.L. All authors have read and agreed to the published version of the manuscript.

Funding: This paper was supported by the National Key Research and Development Program of China (Grant No. 2016YFB0501200).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to technical secrets.

Conflicts of Interest: The authors declare no conflict of interest.
