

Article

Sampling Based on Kalman Filter for Shape from Focus in the Presence of Jitter Noise

Hoon-Seok Jang 1,†, Mannan Saeed Muhammad 2,†, Guhnoo Yun 3 and Dong Hwan Kim 3,*

1 Center for Imaging Media Research, Korea Institute of Science and Technology, Seoul 02792, Korea
2 College of Information and Communication, Natural Sciences Campus, Sungkyunkwan University, Suwon 16419, Korea
3 Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul 02792, Korea
* Correspondence: [email protected]; Tel.: +82-2-958-6719
† These authors contributed equally as first authors to this work.

Received: 10 July 2019; Accepted: 7 August 2019; Published: 9 August 2019

Abstract: Recovering the three-dimensional (3D) shape of an object from two-dimensional (2D) information is one of the major domains of computer vision applications. Shape from Focus (SFF) is a passive optical technique that reconstructs the 3D shape of an object using 2D images with different focus settings. When a 2D image sequence is obtained with a constant step size in SFF, mechanical vibrations, referred to as jitter noise, occur in each step. Since the jitter noise changes the focus values of the 2D images, it causes erroneous recovery of the 3D shape. In this paper, a new filtering method for estimating optimal image positions is proposed. First, the jitter noise is modeled as a Gaussian or speckle function; secondly, the focus curves acquired by one of the focus measure operators are modeled as a Gaussian function for application of the filter. Finally, the Kalman filter, as the proposed method, is designed and applied for removing the jitter noise. The proposed method is evaluated using image sequences of synthetic and real objects. The performance is assessed through various metrics to show the effectiveness of the proposed method in terms of reconstruction accuracy and computational complexity. The Root Mean Square Error (RMSE), correlation, Peak Signal-to-Noise Ratio (PSNR), and computational time of the proposed method are improved on average by about 48%, 11%, 15%, and 5691%, respectively, compared with conventional filtering methods.

Keywords: shape from focus (SFF); jitter noise; focus curve; Kalman filter

1. Introduction

Inferring the three-dimensional (3D) shape of an object from two-dimensional (2D) images is a fundamental problem in computer vision applications. Many 3D shape recovery techniques have been proposed in the literature [1–5]. The methods can be divided into two categories based on the optical reflective model. The first includes active techniques, which use projected rays. The second consists of passive techniques, which utilize reflected light rays without projection. The passive methods can further be classified into Shape from X, where X denotes the cue used to reconstruct the 3D shape, such as Stereo [6], Texture [7], Motion [8], Defocus [9], and Focus [10].

Shape from Focus (SFF) is a passive optical method that utilizes a series of 2D images with different focus levels for estimating the 3D information of an object [11]. For SFF, a focus measure is applied to each image of the sequence to evaluate the focus quantity at every point. The best focused position is acquired by maximizing the focus measure values along the optical axis. Many focus measures have been reported in the literature [12–16]. The initial depth map, obtained through any of the focus measure operators, has the problem of information loss between consecutive frames due to the discreteness of the predetermined sampling step size.


To solve this problem, a refined depth map is acquired using approximation techniques, as reported in the literature [17–22]. As an important issue of SFF, when images are obtained by translating the object plane with a constant step size, mechanical vibration, referred to as jitter noise, occurs in each step, as shown in Figure 1 [21].

Figure 1. Image acquisition for Shape from Focus.

Since this noise changes the focus values of the images by oscillating along the optical axis, the accuracy of 3D shape recovery is considerably degraded. Unlike ordinary image noise [23,24], this noise is not detectable by simply observing the images.

Many filtering methods for removing the jitter noise have been reported [25–27]. In [25,26], Kalman and Bayes filtering methods for removing Gaussian jitter noise have been proposed, respectively. In [27], a modified Kalman filtering method for removing Lévy noise has been presented.

In this paper, a new filtering method for removing the jitter noise is proposed as an extended version of [25]. First, the jitter noise is modeled as a Gaussian or speckle function to reflect more types of noise that can occur in SFF. At the second stage, the focus curves acquired by one of the focus measure operators are modeled as a Gaussian function for the application of the filter and a clearer performance comparison of the various filters. Finally, the Kalman filter, as the proposed method, is designed and applied. The Kalman filter is a recursive filter that tracks the state of a linear dynamic system containing noise, and is used in many fields such as computer vision, robotics, radar, etc. [28–33]. In many cases, this algorithm is based on measurements made over time. More precise results can be expected than from using only the measurements at the current moment. As the filter recursively processes input data, including noise, an optimal statistical prediction of the current state can be performed. The proposed method is evaluated using image sequences of synthetic and real objects. The performance is analyzed through various metrics to show its effectiveness in terms of reconstruction accuracy and computational complexity. The Root Mean Square Error (RMSE), correlation, Peak Signal-to-Noise Ratio (PSNR), and computational time of the proposed method are improved by an average of about 48%, 11%, 15%, and 5691%, respectively, compared with conventional filtering methods.

In the remainder of this paper, Section 2 presents the concept of SFF and a summary of previously proposed focus measures as background. Sections 3 and 4 provide the modeling of the jitter noise and the focus curves, respectively. Section 5 explains the Kalman filter as the proposed method in detail. Experimental results and discussion are presented in Section 6. Finally, Section 7 concludes this paper.


2. Related Work

2.1. Shape from Focus

In SFF methods, images with different focus levels (such that some parts are well focused and the rest of the parts are defocused with some blur) are obtained by translating the object plane at a predetermined step size along the optical axis [11]. By applying a focus measure, the best focused frame for each object point is acquired to find the depth of the object with an unknown surface. The distance of the corresponding object point is computed by using the camera parameters for the frame, and utilizing the lens formula as follows:

$$\frac{1}{f} = \frac{1}{u} + \frac{1}{v} \quad (1)$$

where, f is the focal length, and u and v are the distances of the object and image from the lens, respectively. Figure 2 shows the image formation in the optical lens. The object point at the distance u is focused to the image point at the distance v.
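As a quick numerical check, the lens formula can be solved for the in-focus image distance; a minimal Python sketch (the function name is ours, for illustration only):

```python
# Thin-lens formula, Eq. (1): solve for the image distance v given the
# focal length f and the object distance u (all in the same units).
def image_distance(f: float, u: float) -> float:
    return 1.0 / (1.0 / f - 1.0 / u)

# Example: f = 50 mm and u = 200 mm give v = 1 / (1/50 - 1/200) ~ 66.7 mm.
```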

Figure 2. Image formation in optical lens.

2.2. Focus Measures

A focus measure operator calculates the focus quality of each pixel in the image sequence, and is evaluated locally. As the image sharpness increases, the value of the focus measure increases. When the image sharpness is maximum, the best focused image is attained. Some of the popular gradient-based, statistics-based, and Laplacian-based operators are briefly given in [12].

First, there are Modified Laplacian (ML) and Sum of Modified Laplacian (SML) as Laplacian-based operators. When the Laplacian is used in textured images, the x and y components of the Laplacian operator may cancel out and provide no response. ML is calculated by adding the squared second derivatives for each pixel of the image I as:

$$F_{ML}(x,y) = \left(\frac{\partial^2 I(x,y)}{\partial x^2}\right)^2 + \left(\frac{\partial^2 I(x,y)}{\partial y^2}\right)^2 \quad (2)$$

If the image has rich textures with high variability at each pixel, the focus measure can be evaluated for each pixel. In order to improve robustness for weak-textured images, SML is computed by adding the ML values in a W × W window as:

$$F_{SML}(i,j) = \sum_{x \in W}\sum_{y \in W}\left[\left(\frac{\partial^2 I(x,y)}{\partial x^2}\right)^2 + \left(\frac{\partial^2 I(x,y)}{\partial y^2}\right)^2\right] \quad (3)$$

where, i and j are the x and y coordinates of the center pixel in a W × W window, respectively.

Next, there is Tenenbaum (TEN) as a gradient-based operator. TEN is calculated by adding the squared responses of the horizontal and vertical Sobel operators. For robustness, it is also computed by adding the TEN values in a W × W window as:

$$F_{TEN}(i,j) = \sum_{x \in W}\sum_{y \in W}\left[\left(G_x(x,y)\right)^2 + \left(G_y(x,y)\right)^2\right] \quad (4)$$

where, G_x(x,y) and G_y(x,y) are images acquired through convolution with the horizontal and vertical Sobel operators, respectively.

Finally, there is Gray-Level Variance (GLV) as a statistics-based operator. It has been proposed on the basis of the idea that the variance of gray levels in a sharp image is higher than in a blurred image. GLV for a central pixel in a W × W window is calculated as:

$$F_{GLV}(i,j) = \frac{1}{N^2}\sum_{x \in W}\sum_{y \in W}\left\{\left(I(x,y) - \mu\right)^2\right\} \quad (5)$$

where, μ is the mean of the gray values in a W × W window.
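For concreteness, the three operators can be sketched in Python as follows; this is a minimal illustration assuming grayscale floating-point images and the NumPy/SciPy stack, not the exact implementation used in the experiments:

```python
import numpy as np
from scipy import ndimage

def modified_laplacian(img):
    # Eq. (2): squared second derivatives in x and y.
    dxx = ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=1)
    dyy = ndimage.convolve1d(img, [1.0, -2.0, 1.0], axis=0)
    return dxx ** 2 + dyy ** 2

def sml(img, W=7):
    # Eq. (3): sum of ML responses over a W x W window
    # (uniform_filter computes the local mean, so multiply by W * W).
    return ndimage.uniform_filter(modified_laplacian(img), W) * W * W

def ten(img, W=7):
    # Eq. (4): sum of squared Sobel responses over a W x W window.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return ndimage.uniform_filter(gx ** 2 + gy ** 2, W) * W * W

def glv(img, W=7):
    # Eq. (5): local gray-level variance, E[I^2] - E[I]^2 in a W x W window.
    mu = ndimage.uniform_filter(img, W)
    return ndimage.uniform_filter(img ** 2, W) - mu ** 2
```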
3. Noise Modeling

When a sequence of 2D images is obtained by translating the object at a constant step size along the optical axis, mechanical vibrations, referred to as jitter noise, occur in each step. In this manuscript, two probability density functions are used for modeling the jitter noise. First, the jitter noise is modeled as a Gaussian function with mean μ_n and standard deviation σ_n, as shown in Figure 3.

Figure 3. Noise modeling through Gaussian function.

μ_n represents the position of each image frame without the jitter noise, and σ_n represents the amount of jitter noise that occurred in each image frame. σ_n is determined by checking the depth of field and the corresponding image position. The depth of field is affected by magnification and other factors. σ_n is selected as σ_n ≤ 10 μm through repeated experiments with the real objects used in this manuscript.

Second, the jitter noise is modeled as a speckle function as follows [34,35]:

$$f(\zeta) = \frac{1}{2\sigma_n^2} \times e^{-\frac{\zeta}{2\sigma_n^2}} \quad (6)$$

where, ζ is the amount of jitter noise before or after filtering.
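Both noise models can be sampled directly; a minimal sketch, assuming the noise-free frame positions mu_n are known (the generator seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_jitter(mu_n, sigma_n):
    # Gaussian model: observed position ~ N(mu_n, sigma_n^2).
    return mu_n + rng.normal(0.0, sigma_n, size=np.shape(mu_n))

def speckle_jitter(mu_n, sigma_n):
    # Speckle model, Eq. (6): exponential density with mean 2 * sigma_n**2.
    return mu_n + rng.exponential(2.0 * sigma_n ** 2, size=np.shape(mu_n))
```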
4. Focus Curve Modeling

In order to filter out the jitter noise, the focus curve obtained by one of the focus measure operators is modeled by Gaussian approximation with mean z_f and standard deviation σ_f [11]. This focus curve modeling is shown in Figure 4.

Figure 4. Gaussian fitting of the focus curve.

The related equation for this method is given as:

$$F(z) = F_p \times e^{-\frac{1}{2}\left(\frac{z - z_f}{\sigma_f}\right)^2} \quad (7)$$

where, z is the position of each image frame, F(z) is the focus value at z, z_f is the best-focused position of the object point, σ_f is the standard deviation of the approximated focus curve (by Gaussian function), and F_p is the amplitude of the focus curve. Taking the natural logarithm of (7), (8) is obtained:

$$\ln(F(z)) = \ln(F_p) - \frac{1}{2}\left(\frac{z - z_f}{\sigma_f}\right)^2 \quad (8)$$

Using (8), the initial best-focused position z_i obtained through one of the focus measure operators, the positions below and above it, z_{i-1} and z_{i+1}, and their corresponding focus values F_i, F_{i-1}, and F_{i+1}, (9) and (10) are obtained:

$$\ln(F_i) - \ln(F_{i-1}) = -\frac{1}{2\sigma_f^2}\left(\left(z_i - z_f\right)^2 - \left(z_{i-1} - z_f\right)^2\right) \quad (9)$$

$$\ln(F_i) - \ln(F_{i+1}) = -\frac{1}{2\sigma_f^2}\left(\left(z_i - z_f\right)^2 - \left(z_{i+1} - z_f\right)^2\right) \quad (10)$$

Using (10), (11) is acquired as follows:

$$\frac{1}{\sigma_f^2} = \frac{\ln(F_i) - \ln(F_{i+1})}{-\frac{1}{2}\left(\left(z_i - z_f\right)^2 - \left(z_{i+1} - z_f\right)^2\right)} \quad (11)$$

Applying (11) to (9), (12) is obtained:

$$\ln(F_i) - \ln(F_{i-1}) = \frac{\left(\left(z_i - z_f\right)^2 - \left(z_{i-1} - z_f\right)^2\right) \times \left(\ln(F_i) - \ln(F_{i+1})\right)}{\left(z_i - z_f\right)^2 - \left(z_{i+1} - z_f\right)^2} \quad (12)$$

Assuming Δz = z_{i+1} − z_i = z_i − z_{i−1} = 1 and utilizing (12), (13) is acquired as:

$$z_f = \frac{\left(\ln(F_i) - \ln(F_{i+1})\right)\left(z_i^2 - z_{i-1}^2\right) - \left(\ln(F_i) - \ln(F_{i-1})\right)\left(z_i^2 - z_{i+1}^2\right)}{2\left(\left(\ln(F_i) - \ln(F_{i-1})\right) + \left(\ln(F_i) - \ln(F_{i+1})\right)\right)} \quad (13)$$

Using (11) and (13), (14) is obtained as:

$$\sigma_f^2 = -\frac{\left(z_i^2 - z_{i-1}^2\right) + \left(z_i^2 - z_{i+1}^2\right)}{2\left(\left(\ln(F_i) - \ln(F_{i-1})\right) + \left(\ln(F_i) - \ln(F_{i+1})\right)\right)} \quad (14)$$

Utilizing (7), (13), and (14), F_p is acquired as follows:

$$F_p = \frac{F_i}{e^{-\frac{1}{2}\left(\frac{z_i - z_f}{\sigma_f}\right)^2}} \quad (15)$$

Substituting (13), (14), and (15) into (7), the final focus curve obtained by Gaussian approximation is acquired. Since jitter noise is considered in this paper, Equation (7) is modified as follows:

$$F_n(z) = F_p \times e^{-\frac{1}{2}\left(\frac{(z + \zeta) - z_f}{\sigma_f}\right)^2} \quad (16)$$

where, ζ is the previously modeled (jitter) noise, approximated by a Gaussian or speckle function. Using the proposed filter (in the next section), this noise is filtered to obtain a noise-free focus curve.
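The closed-form fit in (13)–(15) is straightforward to implement; a minimal sketch, assuming unit frame spacing (Δz = 1) and strictly positive focus values around the initial peak:

```python
import numpy as np

def gaussian_fit(z_i, F_im1, F_i, F_ip1):
    # Three-point Gaussian fit around the initial best-focused frame z_i,
    # with focus values F_im1, F_i, F_ip1 at z_i - 1, z_i, z_i + 1.
    a = np.log(F_i) - np.log(F_im1)
    b = np.log(F_i) - np.log(F_ip1)
    z_im1, z_ip1 = z_i - 1.0, z_i + 1.0
    # Eq. (13): refined best-focused position z_f.
    z_f = (b * (z_i**2 - z_im1**2) - a * (z_i**2 - z_ip1**2)) / (2.0 * (a + b))
    # Eq. (14): variance of the fitted Gaussian focus curve.
    sigma_f2 = -((z_i**2 - z_im1**2) + (z_i**2 - z_ip1**2)) / (2.0 * (a + b))
    # Eq. (15): amplitude of the focus curve.
    F_p = F_i / np.exp(-0.5 * (z_i - z_f) ** 2 / sigma_f2)
    return z_f, sigma_f2, F_p
```

With unit spacing, (13) reduces to z_f = z_i + (a − b)/(2(a + b)), the familiar three-point peak interpolation.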
5. Proposed Method

Various filters can be used for removing the jitter noise. In this manuscript, the Kalman filter is used as an optimal estimator and is designed accordingly. It is a recursive filter, which tracks the state of a linear dynamic system that contains noise, and is based on measurements made over time. More accurate estimation results can be obtained than by using only the measurements at the current moment. The Kalman filter, which recursively processes input data including noise, can predict the optimal current state statistically [36–39]. The application of the Kalman filter to the SFF system is shown in Figure 5.

Figure 5. Application of Kalman filter to SFF system.

The system is defined by the position of each image frame in a 2D image sequence. The system state is changed by the jitter noise, which is the measurement noise, in the microscope. The optimal estimate of the system state is obtained by removing the jitter noise through the Kalman filter.

The entire Kalman filter algorithm can be divided into two parts: prediction and update. The prediction refers to the prediction of the current state, and the update means that a more accurate prediction can be made through the values from the present state to the observed measurement. The prediction of the state and its variance is represented as follows:

$$S = T \times S + C \times U \quad (17)$$

$$V = T \times V \times T' + N_p \quad (18)$$

where, S is the "estimate of the system state", T is the "transition coefficient of the state", U is the "input", C is the "control coefficient of the input", V is the "variance of the state estimate", and N_p is the "variance of the process noise". In the SFF system, S is represented as the position of each image frame in the 2D image sequence estimated by the Kalman filter, and C, U, and N_p are all set to 0, since there is no control input in the SFF system, and only the jitter noise, as the measurement noise, is considered in this manuscript.

Next, the computation of the Kalman gain is given for updating the predicted state as follows:

$$G = V \times A' \times \left(A \times V \times A' + N_m\right)^{-1} \quad (19)$$

where, G is the "Kalman gain", A is the "observation coefficient", and N_m is the "variance of the measurement noise". In the SFF system, N_m is defined as the variance of the previously modeled jitter noise. Finally, the update of the predicted state on the basis of the observed measurement is provided by:

$$S = S + G \times (O - A \times S) \quad (20)$$

$$V = V - G \times A \times V \quad (21)$$

where, O is the "observed measurement". In the SFF system, O is represented as the position of each image frame in the image sequence before filtering. The parameters that are not yet set, T and A, are all set to 1 for simplicity. For the start of the algorithm, S and V are initialized as:

$$S = A^{-1} \times O \quad (22)$$

$$V = A^{-1} \times N_m \times \left(A'\right)^{-1} \quad (23)$$

Through the Kalman filter algorithm, the optimal position S of each image frame in the image sequence is estimated. The pseudo code for the Kalman filter algorithm is shown in Algorithm 1.

Algorithm 1 Computing optimal position of each image frame and remaining jitter noise

1: procedure Optimal position S & remaining jitter noise ζ
2:   S ← O  ▷ Set initial position of image frame with observed position
3:   V ← N_m  ▷ Initialize variance of position of image frame to variance of jitter noise
4:   for i = 1 → N do  ▷ Total number of iterations of Kalman filter
5:     G ← V · (V + N_m)^{−1}  ▷ Compute Kalman gain
6:     V ← V − G · V  ▷ Correct variance of position of image frame
7:     S ← S + G · (O − S)  ▷ Update position of image frame
8:     ζ ← S − μ_n  ▷ Compute remaining jitter noise
9:   end for
10: end procedure
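A minimal Python sketch of Algorithm 1 is given below. It assumes that each iteration receives a fresh noisy observation of the same frame position (consistent with the convergence over iterations shown later in Figures 8 and 9); observe() is a hypothetical stand-in for reading one jittered stage position:

```python
def kalman_frame_position(observe, var_m, N=100):
    # observe: hypothetical callable returning one noisy frame position.
    # var_m: variance of the modeled jitter noise (sigma_n ** 2).
    O = observe()
    S = O        # Eq. (22) with A = 1: initialize state with the observation
    V = var_m    # Eq. (23) with A = 1: initialize variance with noise variance
    for _ in range(N):
        G = V / (V + var_m)    # Eq. (19): Kalman gain (A = 1)
        V = V - G * V          # Eq. (21): correct the state variance
        O = observe()          # next noisy measurement of the position
        S = S + G * (O - S)    # Eq. (20): update the state estimate
    return S
```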

The difference between the true position μ_n, which is the position of each image frame without the jitter noise, and the optimal position S, is assigned to ζ. This algorithm is repeated for all image frames in the image sequence. After acquiring a filtered image sequence, a depth map is obtained by maximizing the focus measure obtained by using the previously modeled focus curve, for each pixel in the image sequence. A list of frequently used symbols and notations is shown in Table 1.

Table 1. List of frequently used symbols and notation.

Notation Description

μ_n   Position of each image frame without jitter noise
σ_n   Standard deviation of jitter noise
z_f   Best focused position through Gaussian approximation in each object point
σ_f   Standard deviation of Gaussian focus curve
ζ     The amount of jitter noise before or after filtering
S     Position of each image frame after Kalman filtering
O     Position of each image frame before Kalman filtering
N     Total number of iterations of filters
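Putting the pieces together, the per-pixel depth refinement on the filtered sequence can be sketched as follows; a minimal illustration assuming a focus volume fm of shape (frames, height, width) computed with one of the focus measures of Section 2.2, unit frame spacing, and a strict focus peak at every pixel:

```python
import numpy as np

def depth_map(fm):
    # Initial best-focused frame index per pixel.
    i = np.argmax(fm, axis=0)
    i = np.clip(i, 1, fm.shape[0] - 2)            # keep both neighbors in range
    rows, cols = np.indices(i.shape)
    F_im1 = fm[i - 1, rows, cols]
    F_i = fm[i, rows, cols]
    F_ip1 = fm[i + 1, rows, cols]
    a = np.log(F_i) - np.log(F_im1)
    b = np.log(F_i) - np.log(F_ip1)
    # Eq. (13) with unit spacing: refined peak z_f = z_i + (a - b) / (2(a + b)).
    return i + (a - b) / (2.0 * (a + b))
```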

6. Results and Discussion

6.1. Image Acquisition and Parameter Setting

For experiments, four objects were used, as shown in Figure 6, consisting of one simulated and three real objects.

Figure 6. 10th frame of experimented objects: (a) Simulated cone, (b) Coin, (c) Liquid Crystal Display-Thin Film Transistor (LCD-TFT) filter, (d) Letter-I.

First, a simulated cone image sequence consisting of 97 images, with dimensions of 360 × 360 pixels, was acquired. These images were generated using camera simulation software [40].

The real objects used for experiments were: coin, Liquid Crystal Display-Thin Film Transistor (LCD-TFT) filter, and letter-I. The coin images were magnified images of Lincoln's head from the back of the US penny. The coin sequence consisted of 80 images, with dimensions of 300 × 300 pixels. The LCD-TFT filter images consisted of microscopic images of an LCD color filter. The image sequence of the LCD-TFT filter had 60 images, with dimensions of 300 × 300 pixels each. The third image sequence consisted of letter-I, engraved on a metallic surface. It consisted of 60 images, with dimensions of 300 × 300 pixels each. The real objects were acquired through a microscopic control system (MCS) [18]. The system consists of a personal computer integrated with a frame grabber board (Matrox Meteor-II) and a CCD camera (SAMSUNG CAMERA SCC-341) mounted on a microscope (NIKON OPTIPHOT-100S). Computer software obtains images by translating the object plane through a stepper motor driver (MAC 5000), possessing a 2.5 nm minimum step size. The coin and letter-I images were obtained under 10× magnification, while the LCD-TFT filter images were acquired under 50× magnification.

In parameter setting, the standard deviation of the jitter noise for each object was assumed to be ten times the sampling step size of each image sequence, i.e., 254 mm, 6.191 μm, 1.059 m, and 1.529 μm for the simulated cone, coin, LCD-TFT filter, and letter-I, respectively. For comparison of 3D shape recovery results, a local 7 × 7 window for the focus measure operators was used. The total number of iterations N of the Kalman filter was set as 100.

For performance comparison, the Bayes filter and particle filter were employed [41–46]. The depth estimation through the Bayes filter is presented in Figure 7.

In Figure 7, z_0 is defined as the total number of 2D images obtained for SFF, and p_j(i) is presented as follows:

$$p_j(i) = \frac{1}{\sqrt{2\pi\sigma_n^2}}\, e^{-\frac{\left(z(j) - r(i)\right)^2}{2\sigma_n^2}}, \quad 1 \le i \le M,\ 1 \le j \le N \quad (24)$$

where, p_j(i) is a Gaussian probability density function, z(j) is the position of each image frame changed by the jitter noise, r(i) is the possible positions of each image frame in the presence of the jitter noise, σ_n is the standard deviation of the previously modeled jitter noise, M is the total length of r(i) with intervals of 0.01, and N is the total number of iterations of the Bayes filter. The reason why 3σ_n is set as the range of r is that 3σ_n makes z(j) lie in the range of r with a probability of 99.7%, due to the Gaussian probability density function. The recursive Bayesian estimation was applied to all 2D image frames obtained for SFF. After the filtered image sequence was acquired, an optimal depth map was obtained using the previously modeled focus curve.
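A grid-based sketch of this recursive Bayesian estimation, using the same Gaussian likelihood as Eq. (24); the function below is illustrative, assuming a sequence of noisy observations of one frame position:

```python
import numpy as np

def bayes_frame_position(mu, sigma_n, observations):
    # Candidate positions r over +/- 3*sigma_n around the ideal position mu,
    # with 0.01 intervals as in the text.
    r = np.arange(mu - 3.0 * sigma_n, mu + 3.0 * sigma_n, 0.01)
    belief = np.full(r.size, 1.0 / r.size)        # uniform prior over the grid
    for z_j in observations:                      # N iterations of the filter
        likelihood = np.exp(-0.5 * ((z_j - r) / sigma_n) ** 2)  # Eq. (24)
        belief *= likelihood
        belief /= belief.sum()                    # normalize the posterior
    return r[np.argmax(belief)]                   # MAP estimate of the position
```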

Figure 7. Depth estimation through Bayes filter.

The particle filter algorithm is mainly divided into two steps: generating the weights for each of the particles, and resampling for acquiring new estimated particles. In the first step, the weights are based on the probability of the given observation for each of the particles as:

$$p_w(i) = \frac{1}{\sqrt{2\pi\sigma_n^2}}\, e^{-\frac{\left(z - z_p(i)\right)^2}{2\sigma_n^2}}, \quad 1 \le i \le P \quad (25)$$

where, p_w(i) is a Gaussian probability density function, z is the observed position of each image frame changed by the jitter noise, z_p(i) is the vector of particles, and P is the number of particles the SFF system generates. In this manuscript, z_p(i) is initialized by randomly selecting values on the x-axis from the previously modeled jitter noise, and P is set as 1000. After the weights are normalized, resampling, as the second step, is needed for acquiring new estimated particles. The new estimated particles are obtained by sampling the cumulative distribution of the normalized p_w(i), randomly and uniformly. Through this sampling, the particles with higher weights are selected. This particle filter algorithm is repeated N times, as the total number of iterations of the particle filter. The optimal position of each image frame is the mean of the final estimated particles z_p(i) obtained through resampling in iteration N. After the filtered image sequence is acquired through application of the particle filter to all 2D image frames, the optimal depth map is obtained in the same way as the depth estimation in Figure 7.
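A minimal sketch of these two steps, assuming an initial particle array drawn from the modeled jitter density and a fixed observed position z_obs (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_frame_position(z_obs, sigma_n, particles, N=100):
    for _ in range(N):
        # Step 1: weight each particle by the observation likelihood,
        # Eq. (25), then normalize the weights.
        w = np.exp(-0.5 * ((z_obs - particles) / sigma_n) ** 2)
        w /= w.sum()
        # Step 2: resample from the cumulative distribution of the weights,
        # randomly and uniformly, so higher-weight particles survive.
        idx = rng.choice(particles.size, size=particles.size, p=w)
        particles = particles[idx]
    # Optimal position: mean of the final resampled particles.
    return particles.mean()
```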

6.2. Experimental Results

Figure 8 presents the performance comparison of the filters in the 97th frame of the simulated cone using various iterations in the presence of Gaussian jitter noise.

Figure 8. Performance of the filters: (a) Iteration—50, (b) Iteration—100, (c) Iteration—150, (d) Iteration—200.

Figure 9 provides a performance comparison of the filters at the 100th iteration using various frames of the simulated cone in the presence of Gaussian jitter noise.

These figures are closely enlarged views around the last iteration. "Kalman output" is the estimated position through the Kalman filter, "Bayesian output" is the estimated position through the Bayes filter, "Particle output" is the estimated position through the particle filter, and "True position" is the position without the jitter noise. It is clear from these figures that the Kalman output converged to the True position better than the Bayesian output and the Particle output, which means that the Kalman filter outperformed the other filters in the experiments.


Figure 9. Performance of the filters: (a) Frame number—10, (b) Frame number—30, (c) Frame number—50, (d) Frame number—70.

Figure 10 shows the Gaussian approximation of the focus curves of the experimented objects in the presence of Gaussian jitter noise.

"Without Noise" is the Gaussian approximation of the focus curve without the jitter noise, "After Kalman Filtering" is the Gaussian approximation of the focus curve after Kalman filtering, "After Bayesian Filtering" is the Gaussian approximation of the focus curve after Bayes filtering, and "After Particle Filtering" is the Gaussian approximation of the focus curve after particle filtering. It is clear from Figure 10 that the optimal position with the highest focus value in After Kalman Filtering is closer to the optimal position in Without Noise than the optimal positions in the focus curves obtained using the other filtering techniques.

For performance evaluation of 3D shape recovery, three metrics were used in the case of the simulated cone, since the synthetic object had an actual depth map, as in Figure 11 [47].



Figure 10. Gaussian approximation of focus curves: (a) Simulated cone (60, 60), (b) Coin (120, 120), (c) LCD-TFT filter (180, 180), (d) Letter-I (240, 240).

Figure 11. Actual depth map of simulated cone.

The first metric is the Root Mean Square Error (RMSE), which is a commonly used measure when dealing with the difference between estimated and actual values, as follows:

$$RMSE = \sqrt{\frac{1}{XY}\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\left(d(x,y) - \hat{d}(x,y)\right)^2} \quad (26)$$

where, d(x,y) and d̂(x,y) are the actual and estimated depth maps, respectively, and X and Y are the width and height of the 2D images used for SFF, respectively. The second metric is correlation, which shows the strength of the linear relationship between two variables as:

$$Correlation = \frac{\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\left(d(x,y) - \overline{d(x,y)}\right)\left(\hat{d}(x,y) - \overline{\hat{d}(x,y)}\right)}{\sqrt{\left(\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\left(d(x,y) - \overline{d(x,y)}\right)^2\right)\left(\sum_{x=0}^{X-1}\sum_{y=0}^{Y-1}\left(\hat{d}(x,y) - \overline{\hat{d}(x,y)}\right)^2\right)}} \quad (27)$$

where, d̄(x,y) and d̂̄(x,y) are the means of the actual and estimated depth maps, respectively. The third metric is the Peak Signal-to-Noise Ratio (PSNR), which is the ratio of the maximum possible power of a signal to the power of the noise. It is usually represented on a logarithmic scale as:

$$PSNR = 10\log_{10}\left(\frac{d_{max}^2}{MSE}\right) \quad (28)$$

where, d_max is the maximum depth value in the depth map and MSE is the Mean Square Error, which is the square of the RMSE. The lower the RMSE, and the higher the correlation and the PSNR, the higher the accuracy of the 3D shape reconstruction.

Tables 2–4 provide the quantitative performance of 3D shape recovery of the simulated cone using three focus measures, SML, GLV, and TEN, before and after filtering in the presence of Gaussian jitter noise.
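For reference, the three metrics are direct to compute; a minimal sketch, assuming the actual and estimated depth maps are equally sized NumPy arrays:

```python
import numpy as np

def rmse(d, d_hat):
    # Eq. (26): root mean square error between actual and estimated depth.
    return np.sqrt(np.mean((d - d_hat) ** 2))

def correlation(d, d_hat):
    # Eq. (27): normalized cross-correlation of the two depth maps.
    a, b = d - d.mean(), d_hat - d_hat.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def psnr(d, d_hat):
    # Eq. (28): peak signal-to-noise ratio, with d_max from the depth map.
    mse = np.mean((d - d_hat) ** 2)
    return 10.0 * np.log10(d.max() ** 2 / mse)
```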

Table 2. Comparison of focus measure operators with proposed method for simulated cone in the presence of Gaussian noise by using RMSE (Root Mean Square Error). SML: Sum of Modified Laplacian; GLV: Gray-Level Variance; TEN: Tenenbaum.

Focus Measure Operators     SML       GLV       TEN
Before Filtering            9.2629    12.4038   15.2304
After Particle Filtering    9.1993    10.9459   11.2293
After Bayesian Filtering    7.3260    8.2659    8.4961
After Kalman Filtering      7.3169    8.1400    8.3652

Table 3. Comparison of focus measure operators with proposed method for simulated cone in the presence of Gaussian noise by using correlation.

Focus Measure Operators     SML      GLV      TEN
Before Filtering            0.7831   0.7430   0.7121
After Particle Filtering    0.7925   0.8157   0.7914
After Bayesian Filtering    0.9536   0.9427   0.9200
After Kalman Filtering      0.9541   0.9438   0.9206

Table 4. Comparison of focus measure operators with proposed method for simulated cone in the presence of Gaussian noise by using PSNR.

Focus Measure Operators     SML       GLV       TEN
Before Filtering            20.1276   17.8644   16.0812
After Particle Filtering    20.4604   18.9504   18.7284
After Bayesian Filtering    22.3481   21.3897   21.1510
After Kalman Filtering      22.3588   21.5229   21.2859

The general performance order of the focus measures is that SML is the best, followed by GLV, and finally TEN. In Before Filtering, it is difficult to distinguish the performance of the focus measures due to the jitter noise. However, in After Bayesian Filtering and After Kalman Filtering, it is shown in Tables 2–4 that the performance order of the focus measures is almost as described above. The particle filter, which is suited to nonlinear systems, does not remove the jitter noise well in a linear SFF system. It is seen in Tables 2–4 that the performance order of the focus measures in After Particle Filtering is slightly different from the one presented above.

Tables 5–7 provide the quantitative performance of 3D shape recovery of the simulated cone using three focus measures, SML, GLV, and TEN, before and after filtering in the presence of speckle noise. The performance order of the focus measures for each filtering technique is almost the same as when Gaussian jitter noise is present. However, in the presence of speckle noise, After Kalman Filtering and After Bayesian Filtering have poorer performance in terms of RMSE and PSNR. This is because these two filters estimate the position of each 2D image after assuming the jitter noise to be a Gaussian function. It is evident from Tables 2–7 that the best overall performance is that of the Kalman filter, the proposed method, which provides optimal estimation results in a linear system. The Bayes filter comes second, and finally the particle filter, which estimates the optimal value in a nonlinear system.

Table 5. Comparison of focus measure operators with proposed method for simulated cone in the presence of speckle noise by using RMSE.

Focus Measure Operators     SML       GLV       TEN
Before Filtering            21.6257   19.5639   19.1356
After Particle Filtering    18.2074   18.1522   18.3674
After Bayesian Filtering    16.9866   17.4630   17.9747
After Kalman Filtering      16.9458   17.4064   17.9344

Table 6. Comparison of focus measure operators with proposed method for simulated cone in the presence of speckle noise by using correlation.

Focus Measure Operators     SML      GLV      TEN
Before Filtering            0.8133   0.8661   0.8570
After Particle Filtering    0.8941   0.9083   0.8873
After Bayesian Filtering    0.9518   0.9496   0.9316
After Kalman Filtering      0.9527   0.9504   0.9325

Table 7. Comparison of focus measure operators with proposed method for simulated cone in the presence of speckle noise by using PSNR (Peak Signal-to-Noise Ratio).

Focus Measure Operators     SML       GLV       TEN
Before Filtering            12.2884   13.3517   13.3074
After Particle Filtering    13.2806   13.8092   13.5967
After Bayesian Filtering    14.0878   13.8476   13.6162
After Kalman Filtering      14.1087   13.8758   13.6389

Tables8 and9 present the time taken to estimate the position of one image frame by using the filters for the experimented objects in the presence of Gaussian and speckle noise, respectively.

Table 8. Computation time of filters for the experimented objects in the presence of Gaussian noise.

Experimented Objects   Particle Filter   Bayes Filter   Kalman Filter
Simulated cone         0.651821          0.112929       0.006780
Coin                   0.699162          0.125152       0.009283
LCD-TFT filter         0.624884          0.112957       0.007769
Letter-I               0.688404          0.112758       0.007259

Table 9. Computation time of filters for the experimented objects in the presence of speckle noise.

Experimented Objects   Particle Filter   Bayes Filter   Kalman Filter
Simulated cone         0.637766          0.113022       0.008112
Coin                   0.677874          0.124471       0.009683
LCD-TFT filter         0.625544          0.116263       0.007738
Letter-I               0.636709          0.112466       0.009179

The computation times in Tables 8 and 9 are expressed in seconds. It is evident that the computation time of the Kalman filter was about 14 times better than that of the Bayes filter and about 80 times better than that of the particle filter.

Figures 12–14 show the qualitative performance of 3D shape reconstruction of the experimented objects using three focus measures, SML, GLV, and TEN, before and after filtering in the presence of Gaussian noise. Figures 15 and 16 provide the qualitative performance of 3D shape reconstruction of the experimented objects using the same three focus measures, before and after filtering in the presence of speckle noise. In Before Filtering and After Particle Filtering, it can be seen that the performance of 3D shape reconstruction was very poor due to unremoved or poorly removed jitter noise. However, in After Bayesian Filtering and After Kalman Filtering, it is evident that the performance of the 3D shape recovery was greatly improved due to the elimination of most of the jitter noise. These experimental results show that filtering the jitter noise using the Kalman filter makes the 3D shape reconstruction faster and more accurate.

Figure 12. 3D shape recovery of simulated cone, before and after filtering, using SML, GLV, and TEN in the presence of Gaussian noise. (a) Before filtering for SML; (b) Before filtering for GLV; (c) Before filtering for TEN; (d) After particle filtering for SML; (e) After particle filtering for GLV; (f) After particle filtering for TEN; (g) After Bayesian filtering for SML; (h) After Bayesian filtering for GLV; (i) After Bayesian filtering for TEN; (j) After Kalman filtering for SML; (k) After Kalman filtering for GLV; (l) After Kalman filtering for TEN.

Figure 13. 3D shape recovery of coin, before and after filtering, using SML, GLV, and TEN in the presence of Gaussian noise. (a) Before filtering for SML; (b) Before filtering for GLV; (c) Before filtering for TEN; (d) After particle filtering for SML; (e) After particle filtering for GLV; (f) After particle filtering for TEN; (g) After Bayesian filtering for SML; (h) After Bayesian filtering for GLV; (i) After Bayesian filtering for TEN; (j) After Kalman filtering for SML; (k) After Kalman filtering for GLV; (l) After Kalman filtering for TEN.


Figure 14. 3D shape recovery of LCD-TFT filter, before and after filtering, using SML, GLV, and TEN in the presence of Gaussian noise. (a) Before filtering for SML; (b) Before filtering for GLV; (c) Before filtering for TEN; (d) After particle filtering for SML; (e) After particle filtering for GLV; (f) After particle filtering for TEN; (g) After Bayesian filtering for SML; (h) After Bayesian filtering for GLV; (i) After Bayesian filtering for TEN; (j) After Kalman filtering for SML; (k) After Kalman filtering for GLV; (l) After Kalman filtering for TEN.

Figure 15. 3D shape recovery of simulated cone, before and after filtering, using SML, GLV, and TEN in the presence of speckle noise. (a) Before filtering for SML; (b) Before filtering for GLV; (c) Before filtering for TEN; (d) After particle filtering for SML; (e) After particle filtering for GLV; (f) After particle filtering for TEN; (g) After Bayesian filtering for SML; (h) After Bayesian filtering for GLV; (i) After Bayesian filtering for TEN; (j) After Kalman filtering for SML; (k) After Kalman filtering for GLV; (l) After Kalman filtering for TEN.

Figure 16. 3D shape recovery of letter-I, before and after filtering, using SML, GLV, and TEN in the presence of speckle noise. (a) Before filtering for SML; (b) Before filtering for GLV; (c) Before filtering for TEN; (d) After particle filtering for SML; (e) After particle filtering for GLV; (f) After particle filtering for TEN; (g) After Bayesian filtering for SML; (h) After Bayesian filtering for GLV; (i) After Bayesian filtering for TEN; (j) After Kalman filtering for SML; (k) After Kalman filtering for GLV; (l) After Kalman filtering for TEN.

7. Conclusions

For SFF, an object is translated at a constant step size along the optical axis. When an image of the object is captured in each step, mechanical vibrations occur, which are referred to as jitter noise.

In this manuscript, the jitter noise is modeled as a Gaussian function with mean μ_n and standard deviation σ_n for simplicity. Then, the focus curves obtained by one of the focus measure operators are also modeled as a Gaussian function, with mean z_f and standard deviation σ_f, for the application of the proposed method. Finally, a new filter is proposed to provide optimal estimation results in a linear SFF system with jitter noise, utilizing the Kalman filter to eliminate the jitter noise in the modeled focus curves. Through experimental results, it was found that the Kalman filter provided significantly improved 3D reconstruction of the experimented objects compared with before filtering, and that the 3D shapes of the experimented objects were recovered more accurately and faster than with other existing filters, such as the Bayes filter and the particle filter.

Author Contributions: Conceptualization, H.-S.J. and M.S.M.; Methodology, H.-S.J. and M.S.M.; Software, H.-S.J. and G.Y.; Validation, H.-S.J.; Writing—original draft preparation, H.-S.J.; Writing—review and editing, M.S.M.; Supervision, D.H.K.; Funding acquisition, D.H.K.

Funding: This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00677, Development of Robot Hand Manipulation Intelligence to Learn Methods and Procedures for Handling Various Objects with Tactile Robot Hands).

Acknowledgments: We thank Tae-Sun Choi for his assistance with useful discussion.

Conflicts of Interest: The authors declare no conflict of interest.


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).