
Article

Optimal PSF Estimation for Simple Optical System Using a Wide-Band Sensor Based on PSF Measurement

Yunda Zheng 1,2, Wei Huang 1,*, Yun Pan 2 and Mingfei Xu 1

1 State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China; [email protected] (Y.Z.); [email protected] (M.X.)
2 University of Chinese Academy of Sciences, Beijing 100049, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-135-0089-5918

Received: 30 August 2018; Accepted: 26 September 2018; Published: 19 October 2018

Abstract: Simple optical system imaging is a method to simplify optical systems by removing aberrations using image deconvolution. The point spread function (PSF) used in deconvolution is an important factor that affects the image quality. However, it is difficult to obtain optimal PSFs. The blind estimation of PSFs relies heavily on the information in the image, and measured PSFs are often misused because real sensors are wide-band. We present an optimal PSF estimation method based on PSF measurements. Narrow-band PSF measurements at a single depth are used to calibrate the optical system; these enable the simulation of a real lens. We then simulate the PSFs in the wavelength pass range of each color channel all over the field. The optimal PSFs are computed from these simulated PSFs. The results indicate that the use of the optimal PSFs significantly reduces the artifacts caused by the misuse of PSFs and enhances the image quality.

Keywords: imaging sensors; point spread function; image restoration; deconvolution; simple optical system

1. Introduction

Optical aberration is an important factor in image-quality degradation. Designers of modern optical imaging systems must combine lenses of different glass materials to correct and balance aberrations [1], which makes optical systems cumbersome and expensive. Fortunately, spatially varying deconvolution can reduce many aberrations [2]. The use of deconvolution methods in image post-processing enables the constraints in optical system design to be relaxed, so the optical system can be simplified [3]. Deconvolution is the key part of this strategy, and the point spread function (PSF) is an important factor that affects the deconvolution result [4,5]. If the PSF is obtained inaccurately, the restored image is likely to have severe artifacts. The PSF measurement can be affected by the demosaicing algorithm for color filter arrays (CFAs) and by sensor noise. PSFs are usually not large; their size can be as small as 3 × 3 pixels if the field of view (FOV) is well focused. The Bayer filter array [6] is the CFA used in most single-chip digital image sensors. As shown in Figure 1, the Bayer filter pattern is 25% red, 25% blue and 50% green, which means that 75% of the red, 75% of the blue and 50% of the green information is estimated by the demosaicing algorithm [7,8]. This produces a significant error in the PSF measurement.

Sensors 2018, 18, 3552; doi:10.3390/s18103552; www.mdpi.com/journal/sensors

Figure 1. The Bayer arrangement of color filters.

Furthermore, the color filters of real image sensors are wide-band, and PSFs vary with the wavelength [9]. Because objects have various spectra, the PSFs of the image received by sensors are not fixed. Measuring PSFs directly can lead to their misuse. The spectral response of a color image sensor is shown in Figure 2 [10]. The blue channel can even receive 532-nm light. Assuming that the spectrum of the object is mostly 532-nm light, the blur in the blue channel would be caused by the 532-nm PSF. If the 450-nm PSF is used as the PSF of the blue channel in deconvolution to reduce the blur, the misuse of the PSF will lead to a poor restored result, as shown in Figure 3.

Figure 2. Spectral response of the three different color filters of a commercially available color image sensor array, the Aptina MT9M034. The cutoff frequency of an IR-cut filter that is typically employed in cameras is shown with the dashed line.

Blind deconvolution estimates PSFs from blurred images directly. In principle, blind estimation will not have the problem of misusing PSFs owing to wide-band sensors. However, the blind estimation of PSFs relies heavily on the information in the blurred image. The PSF would be wrongly estimated if the information in the FOV of the optical system is not adequate. Moreover, blind deconvolution, which estimates blur and image simultaneously, is a strongly ill-posed problem, and can be unreliable as a result.



Figure 3. (a) Image blurred by the 532-nm point spread function (PSF); (b) image restored from (a) using the 450-nm PSF. The 450-nm PSF is shown on the top right; (c) image restored from (a) using the 532-nm PSF. The 532-nm PSF is shown on the top right.

We present an optimal PSF estimation method based on PSF measurements. Narrow-band PSF measurements and image-matching technology are used to calibrate the optical system, which makes it possible to simulate a real lens. Then, we simulate the PSFs in the wavelength pass range of each color channel all over the FOV. The optimal PSFs are computed according to these simulated PSFs, which can reduce the blur caused by the PSFs of every wavelength in the channel without introducing severe artifacts. A flowchart of the method is illustrated in Figure 4.

Figure 4. Flowchart of our proposed method.

By utilizing the PSF measurements, our proposed method can obtain the PSFs of arbitrary wavelengths all around the FOV accurately, and can reduce the interference of noise and the demosaicing algorithm. The use of optimal PSFs avoids the artifacts caused by the misuse of PSFs owing to the use of wide-band sensors, which makes the results of image restoration robust.

The paper is organized as follows. The related works are presented in Section 2. In Section 3, the optical system calibration based on PSF measurements is described. Section 4 demonstrates the estimation of the optimal PSFs. We show the results of the optical system calibration and the restored images from real-world images captured by our simple optical system in Section 5. Finally, the conclusions are presented in Section 6.

2. Related Work

Many recent works have been proposed to estimate and remove blur due to camera shake or motion [11–15]. Most of them used blind estimation methods based on blurred images. However, optical blur has received less attention. According to the properties of optical blur, some works introduced priors or assumptions into the blind method to estimate the optical blur kernel [16–19]. However, the optical blur is spatially varying, so some fields of the blurred image may not have sufficient information to estimate the PSFs. Some works created calibration boards. After obtaining blurred images and clear images of the calibration boards, PSFs were computed using deconvolution methods [3,20–22]. Further, Kee et al. [22] developed two models of optical blur to fit the PSFs, so that the models can predict spatially varying PSFs.


The most closely related work is [23]. Instead of estimating PSFs from blurred images, Shih et al. used measured PSFs to calibrate the lens prescription, and computed fitted PSFs by simulation. However, real sensors are wide-band; the misuse of PSFs was not considered in that work.

Kernel fusion [24] resembles the optimal PSF computation of our work, but it is actually different. Kernel fusion fused several blind kernels estimated by other papers and generated a better one as a result. In our work, the real kernel is a linear combination of multiple PSFs with unknown proportions. We compute an optimal PSF which can handle the blurs caused by all these multiple PSFs stably.

3. PSF Measurement and Optical System Calibration

Real optical systems differ from designed optical systems owing to manufacturing tolerances and fabrication errors. Thus, the optical system needs to be calibrated before PSF simulation. Fortunately,
for a simple optical system, the PSF errors caused by tolerances are very small after sensor-plane compensation. We applied many random tolerance perturbations within a reasonable range to the designed optical system using CODE V optical design software, including the radius tolerance, thickness tolerance, element decentration tolerance, element tilt tolerance and refractive index tolerance. After the sensor-plane compensation, the changes of the PSFs are much smaller than the errors introduced by the assumption that there is a constant PSF in each patch in spatially varying deconvolution. Thus, we replaced the optical system calibration with the sensor-plane calibration.

PSF measurements and image-matching technology are used in the sensor-plane calibration. We compute the match between measured PSFs and simulated PSFs at different sensor-plane positions. The best-matching position is the sensor-plane position after sensor-plane compensation.
The PSF measurement setup is shown in Figure 5a, and consists of a self-designed two-lens optical system with a Sony IMX185LQJ-C image sensor, an optical pinhole, light-emitting diode (LED) lamps, and three narrow-band filters whose pass ranges are around 650 nm, 532 nm and 450 nm. The LED lamp, the optical pinhole, and the narrow-band filter are installed in a black box.

Figure 5. (a) Experimental setup of the PSF measurement; and (b) the self-designed lens and three narrow-band filters.

The measurements only need to be done for a single object plane. Once the sensor-plane position is calibrated, we can simulate the PSFs of any object plane. We first adjust the position of the sensor to make the image clear and fix it. Then, the measured PSF can be captured by the camera. We measure the PSFs of different FOVs by moving the black box perpendicular to the optical axis. The exposure time is controlled to avoid the saturation of the light intensity.

The above experiment is used to measure the different-field PSFs of 650 nm, 532 nm and 450 nm at a fixed and unknown sensor-plane position. Then, we simulate different-field PSFs of 650 nm, 532 nm and 450 nm at different sensor-plane positions using CODE V to compute the match with the measured PSFs.


Before the sensor-plane matching, we need to find the simulated PSF that is in the same field as the measured PSF. As shown in Figure 6, the PSF array is a set of spatially varying simulated PSFs of 650 nm at one sensor-plane position. The single PSF outside the array is a measured PSF of one field. Optical systems have rotational symmetry with respect to the optical axis. In the simulation, the intersection of the optical axis and the sensor is the image center. However, in the real optical system, the intersection deviates from the image center. According to the rotational symmetry of the optical system, if the distance from the real intersection to the measured PSF is close to the distance from the image center to the simulated PSF, their fields match.

We first calibrate the realistic intersection using the least-squares method. In optical systems, a PSF that does not lie on the optical axis of the lens has reflection symmetry with respect to the meridional plane. The symmetric axis of the PSF goes through the intersection of the optical axis and the sensor. We find these symmetric axes of the measured PSFs. Considering
the error, the point with the least sum of distances from these symmetry axes is the best choice for the intersection.

The distances from the simulated PSFs to the image center form a set S. We find the element in S closest to the distance from the intersection of the optical axis and the sensor to the measured PSF. If their difference is less than ∆d, which represents the acceptable field error, the simulated PSF and the measured PSF are considered to be in the same field. If their difference is more than ∆d, which means that there is no simulated PSF in the same field as the measured PSF, the measured PSF is abandoned. After field matching, we rotate the simulated PSFs in the different sensor planes to make them have the same tilt angle as the measured PSF.
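The field-matching rule above can be outlined in code. This is an illustrative sketch, not the authors' implementation; all names (`measured_pos`, `intersection`, `delta_d`, and so on) are hypothetical:

```python
import numpy as np

def match_field(measured_pos, intersection, simulated_positions,
                image_center, delta_d):
    """Return the index of the simulated PSF whose field matches the
    measured PSF, or None if no candidate is within the field error delta_d."""
    # Radial distance of the measured PSF from the calibrated intersection
    # of the optical axis and the sensor.
    r_measured = np.hypot(measured_pos[0] - intersection[0],
                          measured_pos[1] - intersection[1])
    # Radial distances of the simulated PSFs from the image center (the set S).
    pos = np.asarray(simulated_positions, dtype=float)
    r_simulated = np.hypot(pos[:, 0] - image_center[0],
                           pos[:, 1] - image_center[1])
    best = int(np.argmin(np.abs(r_simulated - r_measured)))
    if abs(r_simulated[best] - r_measured) > delta_d:
        return None  # no simulated PSF in the same field: abandon the measurement
    return best
```

Exploiting the rotational symmetry this way reduces field matching to a one-dimensional comparison of radial distances, which is why only the intersection point needs to be calibrated first.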

Figure 6. Diagram of PSF field matching.

The measured PSFs are affected by the demosaicing algorithm and sensor noise, but their sizes and shapes are hardly affected. Therefore, a template matching approach can be used to find the real sensor plane. We compute the maximum of the normalized cross-correlation matrix of the simulated PSFs and the measured PSF as the matching degree [25]. The sensor plane in which the matching degree of the simulated PSF and the measured PSF is highest is the calibration result. The normalized cross-correlation matrix is expressed as follows:

\gamma(x, y) = \frac{\sum_{s,t} \left[ w(s,t) - \bar{w} \right] \left[ f(x+s, y+t) - \bar{f}_{xy} \right]}{\left\{ \sum_{s,t} \left[ w(s,t) - \bar{w} \right]^2 \sum_{s,t} \left[ f(x+s, y+t) - \bar{f}_{xy} \right]^2 \right\}^{1/2}}, (1)

where w is the measured PSF, and f is the simulated PSF; \bar{w} is the average value of w, and \bar{f}_{xy} is the average value of f in the overlapping region of w and f.
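Equation (1) can be computed directly; a minimal pure-NumPy sketch of the matching degree (the maximum of the normalized cross-correlation), assuming the simulated PSF `f` is at least as large as the measured PSF `w`:

```python
import numpy as np

def matching_degree(w, f):
    """Maximum of the normalized cross-correlation (Equation (1)) between a
    measured PSF w and a simulated PSF f, both 2-D arrays."""
    wh, ww = w.shape
    w0 = w - w.mean()                     # w(s,t) - w_bar
    best = -1.0
    for x in range(f.shape[0] - wh + 1):
        for y in range(f.shape[1] - ww + 1):
            patch = f[x:x + wh, y:y + ww]
            p0 = patch - patch.mean()     # f(x+s,y+t) - f_bar_xy over the overlap
            denom = np.sqrt((w0 ** 2).sum() * (p0 ** 2).sum())
            if denom > 0:
                best = max(best, (w0 * p0).sum() / denom)
    return best
```

Because both the template and each patch are mean-subtracted and normalized, the matching degree is insensitive to the overall brightness and exposure of the measured PSF, which is why it tolerates the residual noise and demosaicing error mentioned above.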


There is a potential ambiguity in that more than one sensor plane may have a high matching degree. However, for the best sensor plane, the matching degrees are high in all fields. We therefore average the matching degrees over all fields to resolve the ambiguity.
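The ambiguity-resolution step reduces to an average over fields; a sketch, where `degrees_by_plane` is a hypothetical mapping from candidate sensor-plane position to the per-field matching degrees:

```python
import numpy as np

def best_sensor_plane(degrees_by_plane):
    """Pick the candidate sensor-plane position whose matching degrees,
    averaged over all measured fields, are highest."""
    return max(degrees_by_plane, key=lambda p: np.mean(degrees_by_plane[p]))
```

A single field with an accidentally high correlation cannot dominate the mean, so the averaged criterion selects the plane that matches well everywhere.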

4. Optimal Point Spread Function Estimation

After calibration, the PSFs of every wavelength in all fields can be simulated accurately. However, real sensors are wide-band: each color channel of a real sensor allows light in a large wavelength range to pass through. In the daily use of cameras, the complex illumination and the multiple reflectances of objects make it difficult to obtain the spectrum received by the sensor, so it is difficult to calculate the real PSF by weighting. We estimate an optimal PSF from the simulated PSFs in the wavelength pass range of the color channel, which can reduce the blur caused by each wavelength's PSF in the channel without introducing severe artifacts.

In optical systems, the latent sharp image i is blurred by the PSF k_λ. The observed image b can be expressed as follows:

b = i ∗ k_λ + n, (2)

where ∗ is the convolution operator, and n is the additive noise. The effect of noise can be suppressed significantly by employing regularization in state-of-the-art deconvolution methods, so we will not consider noise in the following discussion. In the frequency domain, Equation (2) can be written as follows:

B = I × K_λ, (3)

where the terms in capital letters are the Fourier transforms of the corresponding terms in Equation (2). If we use a new PSF k_o in the deconvolution to restore the image, the new resultant image i′ in the frequency domain can be expressed as follows:

I′ = I × K_λ / K_o. (4)

Therefore, the new resultant image can be seen as the latent sharp image blurred by F⁻¹(K_λ/K_o), where F⁻¹ represents the inverse Fourier transform. In optical systems, the Fourier transform of the PSF is the optical transfer function (OTF), which can be considered as a combination of the modulation transfer function (MTF) and the phase transfer function (PTF):

OTF = MTF × e^{i·PTF}. (5)
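Equations (3) and (5) can be checked numerically with toy data: blurring by a PSF is a pointwise product of Fourier transforms, and the OTF factors into a modulus (MTF) and a phase (PTF). A sketch assuming circular boundary conditions; the arrays are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
i = rng.random((32, 32))                 # toy latent sharp image
k = np.zeros((32, 32))
k[:3, :3] = 1.0 / 9.0                    # simple 3x3 box PSF (circular placement)

# Equation (3): B = I × K, so the blurred image is the inverse FFT of the product.
b = np.real(np.fft.ifft2(np.fft.fft2(i) * np.fft.fft2(k)))

# Direct circular convolution gives the same blur (Equation (2), noise-free).
idx = np.arange(32)
b_direct = sum(i[np.ix_((idx - s) % 32, (idx - t) % 32)] / 9.0
               for s in range(3) for t in range(3))
assert np.allclose(b, b_direct)

# Equation (5): OTF = MTF × exp(i·PTF).
otf = np.fft.fft2(k)
mtf, ptf = np.abs(otf), np.angle(otf)
assert np.allclose(mtf * np.exp(1j * ptf), otf)
```

For a normalized PSF the MTF equals 1 at zero spatial frequency, since the DC term of the FFT is the sum of the PSF.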

Therefore, the equivalent OTF of the optical system after deconvolution is as follows:

OTF_e = K_λ / K_o = (MTF_λ / MTF_o) × e^{i(PTF_λ − PTF_o)}, (6)

where MTF_λ and PTF_λ are the MTF and PTF of k_λ, respectively; MTF_o and PTF_o are the MTF and PTF of k_o, respectively. The equivalent MTF and PTF of the optical system after deconvolution are as follows:

MTF_e = MTF_λ / MTF_o, PTF_e = PTF_λ − PTF_o. (7)

MTF is related to contrast reduction. Ideally, the MTF is a constant 1, which means that the modulation of the image is the same as the modulation of the object. In practice, some of the information of the object is lost in a real optical system, and the MTF is less than 1. If the MTF is more than 1, the modulation increases, and the intensity of the pixels is easily saturated. As shown in

Figure 3b, this saturation is unacceptable. Thus, for an image that is blurred by a series of PSFs, that is, k_λ1, k_λ2, ..., k_λn, MTF_e should be close to 1 and should not exceed 1. The MTF_o should be as follows:

MTF_o(ν) = max({MTF_λi(ν)}_{i=1..n}), (8)

where ν is the spatial frequency, and MTF_λi represents the MTF of k_λi.

PTF indicates the shape and size information of the PSF. The ideal PSF is spot-like or disk-like: the energy is concentrated, and the PTF is generally a constant 0. For a set of PSFs, there are various shapes and sizes. The PTFs are close in some spatial frequency areas and differ greatly in other spatial frequency areas. In the close areas, we calculate the average of these PTFs as the PTF of the optimal PSF to make PTF_e(ν) close to 0 for the entire set of PSFs. In the other areas, there is no fixed value of the PTF that can restore PTF_λ for all the wavelengths in the pass range of the color channel. We set PTF_o to 0 in those areas to avoid artifacts caused by incorrect recovery, although the blur caused by PTF_λ in those spatial frequency areas will remain.

The optimal PSF k_o can be expressed as follows:

k_o = F⁻¹(MTF_o × e^{i·PTF_o}). (9)
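Equations (8) and (9) suggest the following construction. This is a sketch, not the authors' code: `phase_tol` is an assumed threshold (in radians) for deciding where the PTFs count as "close", since the paper does not state a numeric criterion:

```python
import numpy as np

def optimal_psf(psfs, phase_tol=0.5):
    """Build the optimal PSF k_o (Equations (8) and (9)) from a list of
    same-shaped PSFs k_λ1 ... k_λn."""
    otfs = np.stack([np.fft.fft2(k) for k in psfs])
    mtfs, ptfs = np.abs(otfs), np.angle(otfs)

    # Equation (8): MTF_o(ν) = max_i MTF_λi(ν), so MTF_e never exceeds 1.
    mtf_o = mtfs.max(axis=0)

    # PTF_o: average of the PTFs where they are close across wavelengths,
    # and 0 elsewhere to avoid artifacts from incorrect phase recovery.
    # (Using the plain spread of the phases here is a simplification that
    # ignores 2π wrap-around.)
    close = np.ptp(ptfs, axis=0) < phase_tol
    ptf_o = np.where(close, ptfs.mean(axis=0), 0.0)

    # Equation (9): k_o = F^{-1}(MTF_o · e^{i·PTF_o}).
    return np.real(np.fft.ifft2(mtf_o * np.exp(1j * ptf_o)))
```

As a sanity check, feeding the same PSF in twice should return that PSF unchanged, since all the per-wavelength MTFs and PTFs then coincide.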
5. Result

In this section, we show the results of the optical system calibration, and we compare the restored results from real-world images captured by our simple optical system.

The sensor-plane matching curve is shown in Figure 7. Where the abscissa is 1.38 mm, the matching degree is the highest, at 0.858. Note that the upper limit of the matching degree is 1. This means that the matched simulated PSFs are very similar to the measured PSFs. Figure 8 shows the comparison of the measured PSFs and the matched simulated PSFs. The sizes and shapes of the matched simulated PSFs in all fields are close to those of the measured PSFs. We can see that the simulated PSFs have more details and are more accurate, because they are not affected by noise or demosaicing.

Figure 7. Sensor-plane matching curve. The abscissa is the distance from the simulated sensor plane to the designed sensor plane, and the units are in mm. The ordinate is the matching degree of the simulated PSFs and the measured PSF.



Figure 8. Comparison of the measured PSFs and the matched simulated PSFs. The measured PSFs are on the top, and the matched simulated PSFs are on the bottom.

We simulated the PSFs for wavelengths ranging from 450 nm to 650 nm at 10 nm intervals using CODE V. According to the spectral sensitivity characteristics of the Sony IMX185LQJ-C image sensor, the wavelength pass ranges of the color channels were extracted. The pass ranges of the red, green and blue channels are 570–650 nm, 470–640 nm and 450–530 nm, respectively. After computing the optimal PSFs, the blurred image was divided into 7 × 13 rectangular overlapping patches and restored using the deconvolution method in [26]. For comparison, we also restored the patches of the image using blind-estimated PSFs [13] and selected wavelength PSFs. The deconvolution method employed was the same as in our proposed method, and all the deconvolution parameters were the same. The selected wavelengths are 620 nm, 530 nm and 470 nm, which respectively have the highest spectral sensitivities in the red, green and blue channels of the Sony IMX185LQJ-C image sensor.

Figures 9–11 show the restored results obtained from real-world images captured by our simple optical system. The resolution is 1920 × 1080 pixels. The results obtained using selected wavelength PSFs and blind-estimated PSFs include noticeable artifacts; these methods can hardly handle images captured by our simple optical system with a wide-band sensor. Compared to the other methods, our proposed method produces fewer artifacts, and the image quality is more stable. Furthermore, our method is several dozen times faster than the blind method [13], because we only need simple calculations to estimate the PSFs.

We have not compared our results with the restored image using measured PSFs because it is difficult and time-consuming to measure PSFs in all fields accurately. As shown in Figure 8, the measured PSFs are significantly affected by noise and demosaicing; the restored results would be worse than those obtained using selected wavelength PSFs.
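The per-channel optimal PSF computation can be illustrated with a short sketch. We assume, as one plausible reading of the method, that each channel's wide-band PSF is formed as a spectral-sensitivity-weighted combination of the simulated narrow-band PSFs over that channel's pass range; the function and variable names below are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def optimal_psf(simulated_psfs, sensitivities):
    """Combine simulated narrow-band PSFs into one wide-band channel PSF.

    simulated_psfs : dict mapping wavelength (nm) -> 2-D PSF array
    sensitivities  : dict mapping wavelength (nm) -> relative spectral
                     sensitivity of the colour channel at that wavelength
    """
    acc, total = None, 0.0
    for wl, psf in simulated_psfs.items():
        w = sensitivities.get(wl, 0.0)        # weight by spectral sensitivity
        acc = psf * w if acc is None else acc + psf * w
        total += w
    psf = acc / total                         # sensitivity-weighted average
    return psf / psf.sum()                    # normalise to unit energy

# Example with the red channel's pass band (570-650 nm, 10 nm steps);
# the PSFs and sensitivity curve here are stand-in values.
rng = np.random.default_rng(0)
wavelengths = range(570, 660, 10)
sim = {wl: rng.random((5, 5)) for wl in wavelengths}
sens = {wl: 1.0 - abs(wl - 620) / 100 for wl in wavelengths}  # peak near 620 nm
psf_red = optimal_psf(sim, sens)
```

The normalisation step keeps the combined PSF energy-preserving, so deconvolution with it does not change the overall image brightness.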

Figure 9. Results for the periodic table image. The top-left image is the captured blurred image. The right three images on the top row are the restored results using three sets of spatially varying PSFs. The bottom two rows are the details of the top-row images. The bottom-right corners of the detail images of the restored results show the used local PSFs.


Figure 10. Results for the dictionary image. The top-left image is the captured blurred image. The right three images on the top row are the restored results using three sets of spatially varying PSFs. The bottom two rows are the details of the top-row images. The bottom-right corners of the detail images of the restored results show the used local PSFs.

Figure 11. Results for the Mickey image. The top-left image is the captured blurred image. The right three images on the top row are the restored results using three sets of spatially varying PSFs. The bottom two rows are the details of the top-row images. The bottom-right corners of the detail images of the restored results show the used local PSFs.
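The patch-wise restoration pipeline (a 7 × 13 grid of overlapping patches, each deconvolved with its own local PSF) can be sketched as follows. For self-containment we substitute a basic frequency-domain Wiener filter for the hyper-Laplacian deconvolution of [26], so this is a structural illustration only; `restore_patchwise`, its parameters and the simple paste-back stitching are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wiener_deconv(patch, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution of one patch with its local PSF."""
    H = np.fft.fft2(psf, s=patch.shape)        # transfer function of the PSF
    G = np.fft.fft2(patch)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)  # regularised Wiener estimate
    return np.real(np.fft.ifft2(F))

def restore_patchwise(img, psf_grid, rows=7, cols=13, overlap=16):
    """Divide img into rows x cols patches, deconvolve each with its local
    PSF from psf_grid, and paste the central region back into the output."""
    h, w = img.shape
    ph, pw = h // rows, w // cols
    out = np.zeros_like(img, dtype=float)
    for i in range(rows):
        for j in range(cols):
            y0, x0 = i * ph, j * pw
            # extend the patch by `overlap` pixels to hide boundary ringing
            ys, xs = max(y0 - overlap, 0), max(x0 - overlap, 0)
            ye, xe = min(y0 + ph + overlap, h), min(x0 + pw + overlap, w)
            rest = wiener_deconv(img[ys:ye, xs:xe], psf_grid[i][j])
            out[y0:y0 + ph, x0:x0 + pw] = rest[y0 - ys:y0 - ys + ph,
                                               x0 - xs:x0 - xs + pw]
    return out
```

In practice the overlapping regions would be blended (e.g. with a windowed average) rather than simply cropped, but the grid layout and per-patch local PSF lookup are the essential structure.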

6. Conclusions

In this paper, we present an optimal PSF estimation method that is based on PSF measurements. By performing PSF measurements and PSF simulations, the proposed method can obtain PSFs more accurately. Considering the use of wide-band sensors, the method can restore images of simple optical systems stably without severe artifacts. The experiment carried out using real-world images captured by a self-designed two-lens optical system shows the effectiveness of our method.


In the optical system calibration part, the proposed method ignores the error caused by residual tolerance after sensor-plane compensation. The method is suitable for lenses with high machining accuracy. However, for low-precision lenses, there is a need for a more precise optical system calibration method.

Author Contributions: Y.Z., W.H. and M.X. conceived and designed the experiments; Y.Z. and Y.P. performed the experiments, analyzed the data and wrote the paper.

Funding: This research received no external funding.

Acknowledgments: The project was supported by the State Key Laboratory of Applied Optics.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Meyer-Arendt, J.R. Introduction to Classical and Modern Optics; Prentice Hall: Upper Saddle River, NJ, USA, 1995.
2. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Scholkopf, B. Non-stationary correction of optical aberrations. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 659–666.
3. Heide, F.; Rouf, M.; Hullin, M.B.; Labitzke, B.; Heidrich, W.; Kolb, A. High-quality computational imaging through simple lenses. ACM Trans. Graph. 2013, 32, 149. [CrossRef]
4. Ahi, K.; Shahbazmohamadi, S.; Asadizanjani, N. Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging. Opt. Lasers Eng. 2017, 104, 274–284. [CrossRef]
5. Ahi, K. A method and system for enhancing the resolution of terahertz imaging. Measurement 2018. [CrossRef]
6. Bayer, B.E. Color Imaging Array. U.S. Patent No. 3,971,065, 5 March 1975.
7. Gunturk, B.K.; Glotzbach, J.; Altunbasak, Y.; Schafer, R.W.; Mersereau, R.M. Demosaicking: Color filter array interpolation. IEEE Signal Process. Mag. 2005, 22, 44–54. [CrossRef]
8. Ramanath, R.; Snyder, W.E.; Bilbro, G.L. Demosaicking methods for Bayer color arrays. J. Electron. Imaging 2002, 11, 306–315. [CrossRef]
9. Ahi, K. Mathematical modeling of THz point spread function and simulation of THz imaging system. IEEE Trans. THz Sci. Technol. 2017, 7, 747–754. [CrossRef]
10. Sahin, F.E. Novel Imaging Systems for Intraocular Retinal Prostheses and Wearable Visual Aids. Ph.D. Thesis, University of Southern California, Los Angeles, CA, USA, 2016.
11. Levin, A.; Fergus, R.; Durand, F.; Freeman, W. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
12. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W. Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25, 787–794. [CrossRef]
13. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the 2011 Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
14. Cho, S.; Lee, S. Fast motion deblurring. In Proceedings of the ACM SIGGRAPH Asia 2009, Yokohama, Japan, 16–19 December 2009; Volume 28, pp. 1–8.
15. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the European Conference on Computer Vision (ECCV), Crete, Greece, 5–11 September 2010; Volume 4, pp. 157–170.
16. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Scholkopf, B. Blind correction of optical aberrations. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; pp. 187–200.
17. Yue, T.; Suo, J.; Wang, J.; Cao, X.; Dai, Q. Blind optical aberration correction by exploring geometric and visual priors. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1684–1692.

18. Gong, X.; Lai, B.; Xiang, Z. A L0 sparse analysis prior for blind Poissonian image deconvolution. Opt. Express 2014, 22, 3860–3865. [CrossRef] [PubMed]
19. Fang, H.; Yan, L.; Liu, H.; Chang, Y. Blind Poissonian images deconvolution with framelet regularization. Opt. Lett. 2013, 38, 389. [CrossRef] [PubMed]

20. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
21. Brauers, J.; Seiler, C.; Aach, T. Direct PSF estimation using a random noise target. In Proceedings of the SPIE 7537, Digital Photography VI, San Jose, CA, USA, 17–21 January 2010; p. 75370B. [CrossRef]
22. Kee, E.; Paris, S.; Chen, S.; Wang, J. Modeling and removing spatially-varying optical blur. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 8–12 April 2011; pp. 1–8.
23. Shih, Y.; Guenter, B.; Joshi, N. Image enhancement using calibrated lens simulations. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; Volume 7575, pp. 42–56.
24. Mai, L.; Liu, F. Kernel fusion for better image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 371–380.
25. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2008.
26. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Proceedings of the Advances in Neural Information Processing Systems 22 (NIPS 2009), Vancouver, BC, Canada, 7–10 December 2009; pp. 1–9.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).