
Letter

CMOS Depth Image Sensor with Offset Pixel Aperture Using a Back-Side Illumination Structure for Improving Disparity

Jimin Lee 1, Sang-Hwan Kim 1, Hyeunwoo Kwen 1, Juneyoung Jang 1, Seunghyuk Chang 2, JongHo Park 2, Sang-Jin Lee 2 and Jang-Kyoo Shin 1,*

1 School of Electronics Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Korea; [email protected] (J.L.); [email protected] (S.-H.K.); [email protected] (H.K.); [email protected] (J.J.)
2 Center for Integrated Smart Sensors, KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Korea; [email protected] (S.C.); [email protected] (J.P.); [email protected] (S.-J.L.)
* Correspondence: [email protected]; Tel.: +82-53-950-5531

 Received: 10 August 2020; Accepted: 7 September 2020; Published: 9 September 2020 

Abstract: This paper presents a CMOS depth image sensor with an offset pixel aperture (OPA) using a back-side illumination structure to improve disparity. The OPA method is an efficient way to obtain depth with a single image sensor without additional external factors. Two types of apertures (i.e., left-OPA (LOPA) and right-OPA (ROPA)) are applied to the pixels. The depth information is obtained from the disparity caused by the phase difference between the LOPA and ROPA images. In a CMOS depth image sensor with OPA, disparity is important information. Improving disparity is an easy way of improving the performance of the CMOS depth image sensor with OPA. Disparity is affected by pixel height. Therefore, this paper compares two CMOS depth image sensors with OPA using front-side illumination (FSI) and back-side illumination (BSI) structures. As FSI and BSI chips are fabricated via different processes, two similar chips were used for measurement by calculating the ratio of the OPA offset to pixel size. Both chips were evaluated for chief ray angle (CRA) and disparity in the same measurement environment. Experimental results were then compared and analyzed for the two CMOS depth image sensors with OPA.

Keywords: offset pixel aperture; CMOS depth image sensor; back-side illumination structure; improving disparity

1. Introduction

Recent studies on image sensors have focused on obtaining depth information. Since the development of cameras, there has been consistent and continuous research on the advancement of technology for capturing two-dimensional (2D) images, higher-dimensional images, etc. However, despite the improved quality of 2D imaging technology, depth information is not captured. Due to the inability of conventional 2D image sensors to capture the depth information of a scene, the quality of 2D images is lower compared with the actual images seen by the human eye. Three-dimensional (3D) imaging technology complements 2D imaging technology: it can provide more stereoscopic image information to observers through depth information, which is not present in 2D images. Presently, a wide variety of studies are being conducted on 3D image sensor technology. Typical 3D imaging methods include time of flight (ToF), structured light, and stereo vision [1–17]. The ToF method is used to calculate the return time of incident light on an object and determine the distance using a high-power infrared (IR) source. Structured light imaging is a method of illuminating an object with a specific light pattern and analyzing it algorithmically to determine the object distance.


Finally, a stereo camera is a setup that uses a plurality of cameras to obtain an image according to the incidence angle, which is used to obtain the distance. The above-mentioned methods rely on external elements to extract depth information to realize a 3D image. The manufacturing cost and the power consumption of 3D image sensors using these methods are relatively high because two or more cameras, structured light sources, or IR sources are required. However, depth image sensors adopting the offset pixel aperture (OPA) technique can extract depth information with a simple structure and without external elements. Previously, a sensor fabricated using the OPA technique was reported with a front-side illumination (FSI) structure [18–20]. Generally, in depth image sensing, disparity tends to be inversely proportional to pixel height. However, CMOS depth image sensors with OPA that have the FSI structure are limited in how far the pixel height can be reduced because of the multiple metal layers on the front side of the pixel. Therefore, a back-side illumination (BSI) structure is applied to OPA CMOS depth image sensors to improve disparity. The OPA CMOS depth image sensor with the BSI structure has a pixel height that is considerably lower than that of the one with the FSI structure. If the pixel height is low, the difference in the peak angle of response between the left-OPA (LOPA) and the right-OPA (ROPA) increases, and the disparity increases.

The rest of this paper is organized as follows. Disparity is described in Section 2. The design of the pixel structures is described in Section 3. Measurement of disparity based on distance and measurement of the chief ray angle (CRA) with OPA-based depth image sensors are described in Section 4. Finally, Section 5 concludes the paper.

2. Disparity

Depth information is the most important factor for realizing 3D images. A universalized 2D image sensor uses only the x- and y-axis information of the object to generate data. Therefore, the image data from the 2D camera are formed without distance information about the actual object. To implement the 3D method, depth information must be applied to the planar image data. The depth information of the image is important data for detecting the distance of the object. Therefore, when the depth information is extracted and applied to the image, it is possible to know the distance of the object. The OPA depth image sensor described in this paper used disparity to extract the depth information. In the OPA method, an aperture is applied to the inside of the pixel to obtain disparity information. The aperture integrated inside the pixel exhibits optical characteristics equivalent to those of the camera lens aperture. Figure 1 shows that the effects of the OPA and the lens aperture of a camera are equivalent. O_1 is the offset value of the camera lens aperture and O_2 is the offset value of the OPA.

Figure 1. Equivalent effect between OPA and camera lens aperture.


Figure 2 shows a cross-sectional view of the pixel of the CMOS depth image sensor with OPA. The equivalent offset f-number (F_OA) is inversely proportional to the pixel height and is given by:

\[ F_{OA} = \frac{f}{O_{CA}} = \frac{h}{n\,O_P} = \frac{h}{2\,n\,O_2}, \tag{1} \]

where f is the focal length; O_CA is the same as 2 × O_1; h is the pixel height; and O_P is the offset length between the LOPA and the ROPA. n is the refractive index of the microlens and planarization layer. Since O_CA is proportional to O_P divided by the pixel height, F_OA is proportional to the pixel height divided by O_P. If the F_OA value is less than the f-number of the camera lens, light is not transmitted accurately to the OPA, and thus the disparity cannot be obtained correctly. Therefore, F_OA is set to a number greater than the f-number of the camera lens.
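To make the relationship concrete, the sketch below evaluates Equation (1) for the two chips described later in this paper (pixel heights and O_2 values from Sections 3 and 4). The refractive index n is an assumed placeholder, and this idealized geometric estimate need not match the effective F_OA values reported in Section 4, which are derived from the measured disparity via Equation (6).

```python
# A minimal sketch of Equation (1): F_OA = h / (2 * n * O_2).
# h and O_2 are taken from this paper; the refractive index n of the
# microlens/planarization layer is an assumed placeholder.

def offset_f_number(h_um: float, o2_um: float, n: float) -> float:
    """Equivalent offset f-number, F_OA = h / (2 * n * O_2)."""
    return h_um / (2.0 * n * o2_um)

n = 1.5  # assumed refractive index

print(offset_f_number(4.435, 0.65, n))  # FSI chip: h = 4.435 um, O2 = 0.65 um
print(offset_f_number(1.16, 0.24, n))   # BSI chip: h = 1.16 um,  O2 = 0.24 um
# The lower BSI pixel height yields the smaller F_OA, i.e., larger disparity.
```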

Figure 2. Cross-sectional view of the pixel of the CMOS depth image sensor with OPA.

Figure 3 shows the effect of object distance and image distance on disparity. a is the object distance in a non-focusing state, and a_0 is the object distance in a focusing state. b is the image distance in a non-focusing state, and b_0 is the image distance in a focusing state. d is the distance between the sensor and the focus in a non-focusing state. The disparity is defined as follows:

\[ \frac{1}{a} + \frac{1}{b} = \frac{1}{f} \;\rightarrow\; b = \frac{a f}{a - f}, \tag{2} \]

\[ \frac{1}{a_0} + \frac{1}{b_0} = \frac{1}{f} \;\rightarrow\; b_0 = \frac{a_0 f}{a_0 - f}, \tag{3} \]

\[ d_{dis} = \frac{d}{b}\,O_{CA} = \frac{f^2 (a - a_0)}{a (a_0 - f)} \times \frac{1}{F_{OA}}, \tag{4} \]

where d_dis is the disparity, which can be calculated from the lens equation. In particular, as the pixel height decreases, F_OA decreases, and thus the disparity increases.
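As an illustration of Equations (2)-(4), the sketch below computes the disparity for object distances around a focused distance of 100 cm, the setup used in Section 4. The focal length is an assumed placeholder, so the outputs illustrate the trend rather than reproduce the measured results.

```python
# A minimal sketch of Equation (4): disparity from the thin-lens relation.
# All distances are in mm; the focal length f is an assumed placeholder.

def disparity_mm(f: float, a: float, a0: float, f_oa: float) -> float:
    """d_dis = f^2 * (a - a0) / (a * (a0 - f)) * (1 / F_OA)."""
    return f**2 * (a - a0) / (a * (a0 - f)) / f_oa

f = 16.0      # assumed camera lens focal length, mm
a0 = 1000.0   # focused object distance, mm (100 cm, as in Section 4)
f_oa = 5.7    # effective F_OA of the BSI chip reported in Section 4

for a in (860.0, 1000.0, 1140.0):  # 86 cm, 100 cm (focus), 114 cm
    print(a, disparity_mm(f, a, a0, f_oa))
# The sign of d_dis flips as the object crosses the focused distance a0,
# and |d_dis| grows as F_OA shrinks, matching the discussion above.
```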

Figure 3. Effect of object distance and image distance on disparity.



3. Design of the Pixel Structure

The disparity varies depending on the size of F_OA, as expressed in the equation described in the previous section. Disparity increases when the pixel height decreases or the offset value of the OPA increases. Therefore, when designing a pixel using the OPA method, it is essential to accurately set the pixel height and the offset value of the OPA. Two different sensors were used for measurement. Figure 4a,b show a cross-sectional illustration of the pixel height differences depending on the fabrication process. As shown in Figure 4a, the pixel height of the CMOS depth image sensor with OPA using the FSI structure used for measurement in this work was 4.435 µm. The CMOS depth image sensor with OPA using the FSI fabrication process has multiple metal layers for signal transmission and power voltage on the front side of the pixel.


Figure 4. Cross-sectional illustration of pixel height differences depending on the fabrication process. (a) CMOS depth image sensor with OPA using an FSI structure. (b) CMOS depth image sensor with OPA using a BSI structure.

These metal layers are an essential element for driving the sensor. Therefore, the decrease in pixel height for this type of sensor is limited. As mentioned above, to increase disparity, the offset value of the OPA must be increased or the pixel height must be reduced. The increase in the offset value of the OPA is limited: it is only possible to increase the offset value within the pixel area, and if the offset value of the OPA is increased, the opening region of the OPA is reduced accordingly, so there is a risk that the sensitivity will decrease. Therefore, decreasing the pixel height improves the characteristics of the CMOS depth image sensor with OPA without this risk, as described previously. If the BSI structure is applied to the CMOS depth image sensor with OPA, light is transferred through the microlens to the back side of the pixel, even though the multiple metal layers are applied to the front side of the pixel. Therefore, the pixel height can be considerably reduced. As shown in Figure 4b, the pixel height of the CMOS depth image sensor with OPA using the BSI structure used in this measurement was 1.16 µm. A drawback of the BSI structure applied to reduce the pixel height is the crosstalk of light passing through the color filters. In particular, there is a possibility that a light component that has passed through the color filter may flow into the photodiode of an adjacent pixel, resulting in a decrease in color response. This can be complemented by forming a deep trench isolation (DTI) region between adjacent photodiodes.



4. Measurement Results and Discussion

This section describes the results of the measurements using two chips with different structures. The color patterns of the two chips form an RGBW pattern, as shown in Figure 5. If the LOPA and ROPA channels are separated to calculate the disparity, the resolution of the OPA pixel images will be lower than the resolution of the entire image. The entire pixel area can be composed of only LOPA and ROPA patterns, but the sensitivity of the image sensor is then reduced. Therefore, for each 2 × 2 area, one OPA pixel is integrated into the OPA image sensor with a color pattern. In other words, 2D quality improves as the density of OPA pixels decreases, but there is a trade-off in which the disparity resolution and depth accuracy decrease. The two types of chips used in the measurement were manufactured via different fabrication processes. Therefore, the pixel pitch is different. The pixel pitch of the CMOS depth image sensor with OPA using the FSI structure was 2.8 µm × 2.8 µm. The offset value of the OPA was 0.65 µm, and the OPA opening width was 0.5 µm. To compare the two chips, a BSI chip with similar conditions must be selected. The ratio of the OPA offset to the pixel size is 46.42% in the CMOS depth image sensor with OPA using the FSI structure. The pixel pitch of the CMOS depth image sensor with OPA using the BSI structure was 1 µm × 1 µm. For the CMOS depth image sensor with OPA using the BSI structure, the offset value that gives a ratio similar to that of the FSI structure was 0.24 µm. The ratio of the OPA offset to the pixel size was 48% and the OPA opening width was 0.26 µm.
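As a quick sanity check of the offset ratios quoted above, the sketch below assumes the ratio is defined as the LOPA-to-ROPA offset length (2 × O_2) divided by the pixel pitch; under that assumed definition it reproduces the quoted 46.42% and 48% figures up to rounding.

```python
# Check of the OPA offset-to-pixel ratios quoted above, assuming the ratio
# is (2 * O_2) / pixel pitch. This definition is inferred, not stated.

def offset_ratio_percent(o2_um: float, pitch_um: float) -> float:
    return 2.0 * o2_um / pitch_um * 100.0

print(offset_ratio_percent(0.65, 2.8))  # FSI: ~46.4% (paper quotes 46.42%)
print(offset_ratio_percent(0.24, 1.0))  # BSI: 48.0% (paper quotes 48%)
```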

Figure 5. Color pattern of two chips with different structures.

CRA and disparity measurements were performed using the above-mentioned chips. In this measurement, the CRA is the ray angle that passes through the center of an OPA. The response of the sensor can be estimated from the incidence angle of the light source, as shown in Figure 6. Therefore, the angle responses of the LOPA and the ROPA can be confirmed using CRA measurements. Figure 6 shows the measurement environment of the CRA. A connected collimator was designed to transmit light in parallel to the sensor. The angle of the incident light is an important factor in the CRA measurement environment. Therefore, it is essential to align the center of the collimator and the center of the sensor vertically and horizontally. As shown in Figure 6, the angle of the sensor is controlled after aligning the center of the sensor to the center of the collimator. This has the same effect as changing the angle of the incident light in the measurement environment. The CRA measurement range was −30° to 30°, and the measurement was performed at 2° intervals.



Figure 6. Measurement environment of the CRA.

Figure 7 shows the CRA measurement results of the CMOS depth image sensor with OPA using the FSI structure. For the LOPA, the maximum output was obtained when the incidence angle was −24°. The maximum output of the ROPA was obtained when the incidence angle was 20°. Therefore, the peak angle response difference between the LOPA and the ROPA in the CMOS depth image sensor with OPA using the FSI structure was 44°. In this result, the origin was shifted because the position of the microlens was also shifted due to an error in the fabrication process.
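The peak-angle difference quoted here can be extracted from a CRA sweep as sketched below. The Gaussian response curves are synthetic stand-ins shaped to mimic the FSI result in Figure 7; real data would come from the sensor output at each tilt angle.

```python
# Sketch of extracting the LOPA/ROPA peak-angle difference from a CRA sweep.
import numpy as np

angles = np.arange(-30, 31, 2)  # sweep from -30 deg to +30 deg in 2 deg steps

# Synthetic response curves peaking near -24 deg (LOPA) and +20 deg (ROPA),
# mimicking the FSI measurement; actual curves come from the chip output.
lopa = np.exp(-0.5 * ((angles + 24) / 12.0) ** 2)
ropa = np.exp(-0.5 * ((angles - 20) / 12.0) ** 2)

peak_lopa = angles[np.argmax(lopa)]
peak_ropa = angles[np.argmax(ropa)]
print(peak_ropa - peak_lopa)  # 44 deg peak angle response difference
```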

Figure 7. CRA measurement results of the CMOS depth image sensor with OPA using the FSI structure.

Figure 8 shows the CRA measurement results of the CMOS depth image sensor with OPA using the BSI structure. For the LOPA, the maximum output was obtained when the incidence angle was −26°. The maximum output of the ROPA was obtained when the incidence angle was 26°. Therefore, the peak angle response difference between the LOPA and the ROPA in the CMOS depth image sensor with OPA using the BSI structure was 52°. Following the results of the CRA measurement, the peak angle response of the CMOS depth image sensor with OPA using the BSI structure was 8° higher than that with the FSI structure.




Figure 8. CRA measurement results of the CMOS depth image sensor with OPA using the BSI structure.

Figure 9 shows a cross-sectional view of the correlation between the pixel height, the offset value of the OPA, and the response angle. The formula for the response angle (α) is given below:

\[ \alpha = \frac{n\,O_2}{h}, \tag{5} \]

where h is the pixel height; O_2 is the offset value of the OPA; and n is the refractive index of the microlens and planarization layer or the SiO2 layer. The response angle of the OPA pixel as expressed in the formula can be modified by adjusting the offset value of the OPA or the pixel height. Therefore, the range of the peak-to-peak response angle of the CMOS depth image sensor with the BSI structure is wider than that of the CMOS depth image sensor with OPA using the FSI structure.
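Equation (5) can be evaluated directly for the two chips, as sketched below with an assumed refractive index. This idealized small-angle estimate explains the trend (the lower BSI pixel height widens the response angle), even though the measured peak separations of 44° and 52° also reflect microlens placement and fabrication effects.

```python
# A minimal sketch of Equation (5): alpha = n * O_2 / h (small-angle form,
# alpha in radians). n is an assumed placeholder refractive index.
import math

def response_angle_deg(n: float, o2_um: float, h_um: float) -> float:
    return math.degrees(n * o2_um / h_um)

n = 1.5  # assumed refractive index of the microlens/planarization layer
fsi = response_angle_deg(n, 0.65, 4.435)  # FSI chip
bsi = response_angle_deg(n, 0.24, 1.16)   # BSI chip
print(fsi, bsi)          # BSI angle is larger because h is much smaller
print(2 * fsi, 2 * bsi)  # idealized LOPA-ROPA peak separation estimates
```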

Figure 9. Cross-sectional view of the correlation between pixel height, offset value of OPA, and response angle.

Disparity is the most important factor in obtaining depth information with the CMOS depth image sensor with OPA to realize the 3D image. Improving the disparity also improves the depth information resolution. Disparity measurements were performed to confirm that the disparity of the CMOS depth image sensor with OPA using the BSI structure, which has a low pixel height, was improved compared to that of the CMOS depth image sensor with OPA using the FSI structure. Figure 10 shows the measurement environment for the disparity evaluation; the object is a black-and-white pattern because its monotonic boundary area makes it easy to measure disparity. The measurement environments for both sensors were configured identically. First, the camera lens of the CMOS depth image sensors with OPA (using the FSI and BSI structures, respectively) was placed 100 cm away from the object and focused. Then, the disparity according to the distance was measured from 86 to 114 cm at 1-cm intervals using a step motor rail. It is possible to extract the disparity over a range of ±14 cm around the focal point, but if the sensor is too far from the focal point, blurring of the image becomes significant and it is difficult to calculate an accurate disparity value.



Figure 10. Measurement environment for disparity evaluation.

Figure 11 shows the disparity measurement results for the OPA depth image sensors with the FSI structure and the BSI structure, respectively. Based on the position of 100 cm, which is the focal point on the x-axis of the graph, the results are shown from −14 cm to 14 cm at 1-cm intervals. The disparity was obtained by separating the LOPA and ROPA images from the entire images and calculating the difference between the images in the boundary area of the object. The disparity increases because of the blurring of the images due to the distance of the sensors from the focal point. Therefore, the disparity at the farthest point from the focal point was compared in the measurement results. For the CMOS depth image sensor with OPA using the BSI structure, the disparity was 6.91 pixels when the distance between the sensor and the focal point was −14 cm. When the distance between the sensor and the focal point was 14 cm, the disparity was 3.53 pixels. The disparity for the CMOS depth image sensor with OPA using the FSI structure was 1.73 pixels when the distance between the sensor and the focal point was −14 cm. When the distance between the sensor and the focal point was 14 cm, the disparity was 0.89 pixels. Accordingly, the disparity of the CMOS depth image sensor with OPA using the BSI structure was larger by 5.18 pixels at −14 cm and 2.63 pixels at 14 cm from the focal point.

Figure 11. Disparity measurement results of the CMOS depth image sensor with OPA using the FSI structure and the BSI structure.

Following the disparity measurement results, the disparity of the CMOS depth image sensor with OPA using the BSI structure, which has a low pixel height, was increased. As can be seen from Equations (1) and (4), as the pixel height decreases, the F_OA value decreases, and the disparity value increases.

The disparity obtained as described above can be converted into distance information. First, it is important to derive the effective F_OA value to convert the disparity into distance information. The formula for F_OA is given below:

\[ F_{OA} = \frac{f^2 (a - a_0)}{a (a_0 - f) \times \dfrac{d_{dis} \times d_{pixel}}{1000}}, \tag{6} \]

where f is the focal length; a is the object distance in a non-focusing state; and a_0 is the object distance in a focused state. d_dis is the disparity, and d_pixel is the distance between the LOPA and ROPA images, which depends on the pixel pitch. The OPA CMOS depth image sensor with the BSI structure had an effective F_OA value of 5.7 and the OPA CMOS depth image sensor with the FSI structure had an effective F_OA value of 6.9. At present, it is difficult to directly compare the disparity of the BSI structure with that of the FSI structure because the fabrication processes and pixel pitches of the two structures are different. Therefore, the F_OA value of the OPA CMOS depth image sensor with the FSI structure was adjusted to provide conditions (excluding the pixel height of the FSI structure) that are consistent with the BSI structure. The adjusted F_OA value of the OPA CMOS depth image sensor with the FSI structure was 18.63. These set conditions were calculated and compared with the measured distance values for the FSI structure and the BSI structure. The formula for the object distance is given below:

\[ a = \frac{f^2 a_0}{f^2 - F_{OA} (a_0 - f) \times \dfrac{d_{dis} \times d_{pixel}}{1000}} \tag{7} \]

The results of the distance measured using the equation above and the measured disparity are shown in Figure 12. When the actual distance was 86 cm, the measured distance with the BSI structure was 87.40 cm and the measured distance with the FSI structure was 89.28 cm. When the actual distance was 114 cm, the measured distance with the BSI structure was 106.18 cm and the measured distance with the FSI structure was 104.88 cm. The results revealed that the measured distance result of the OPA CMOS depth image sensor with the BSI structure was further improved.

Figure 12. Results of the measured distance calculated from the measured disparity.

A comparison of the disparity tendencies calculated under similar conditions with a different focal point was also performed for a clearer analysis. Figure 13 shows the results of the calculated disparity with different focal points using the effective F_OA values. The focal point was 40 cm and the calculation range was from 40 to 80 cm. The calculated results reveal that the disparity of the CMOS depth image sensor with OPA using the BSI structure increased by 22.97 pixels at 80 cm.

Figure 13. Results of the calculated disparity with different focal points using effective F_OA values.

Disparity resolution is an important characteristic change due to the decrease in pixel height obtained by applying the BSI structure. Figure 14 shows the results of the disparity resolution between the FSI and BSI structures due to the difference in pixel height under the same conditions. For the calculation conditions, the focal point was 40 cm, and the disparity range was from −5 to 5 pixels at 1-pixel intervals. The disparity resolution results revealed that when the disparity increased by one pixel, the disparity resolution of the BSI sensor was 0.43 cm and the disparity resolution of the FSI sensor was 1.42 cm. Thus, the disparity resolution of the BSI sensor was improved compared to that of the FSI sensor. The summary of the two chips with different structures is given in Table 1.
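The per-pixel resolution figures above can be derived from Equation (7) as the change in recovered distance for a one-pixel disparity step, as sketched below. The focal length is again an assumed placeholder, so the printed values show the BSI-versus-FSI trend rather than reproducing 0.43 cm and 1.42 cm exactly.

```python
# Sketch: disparity resolution as the distance change per one-pixel step,
# derived from Equation (7). The focal length is an assumed placeholder.

def object_distance(f, a0, d_dis_px, d_pixel_um, f_oa):
    """Equation (7); f and a0 in mm, pixel pitch in um."""
    return f**2 * a0 / (f**2 - f_oa * (a0 - f) * (d_dis_px * d_pixel_um / 1000.0))

f, a0 = 16.0, 400.0  # assumed lens (mm); focal point at 40 cm as in Figure 14

for label, f_oa, pitch_um in (("BSI", 5.7, 1.0), ("FSI", 6.9, 2.8)):
    step_cm = (object_distance(f, a0, 1, pitch_um, f_oa)
               - object_distance(f, a0, 0, pitch_um, f_oa)) / 10.0
    print(label, round(step_cm, 3), "cm per pixel of disparity")
# The lower F_OA and finer pitch of the BSI chip give the finer resolution.
```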

Figure 14. Results of the disparity resolution between the FSI and BSI structures due to the difference in pixel height under the same conditions.


Table 1. Summary of the two chips with different structures.

Parameter                               Front-Side Illumination Structure    Back-Side Illumination Structure
Pixel size                              2.8 µm × 2.8 µm                      1 µm × 1 µm
Fabrication process                     FSI process                          BSI process
Pixel resolution                        1632 (H) × 1124 (V)                  4656 (H) × 3504 (V)
Color pattern                           RGBW pattern                         RGBW pattern
Pixel height                            4.435 µm                             1.16 µm
Effective equivalent offset f-number    6.9                                  5.7
Peak response angle                     44°                                  52°
Disparity (−14 cm) *                    1.73 pixels                          6.91 pixels
Disparity (14 cm) *                     0.89 pixels                          3.53 pixels

* This number denotes the sensor position from the focal point.

5. Conclusions

In this study, we examined two OPA-based CMOS depth image sensors with different illumination structures (FSI and BSI) to improve disparity. The CMOS depth image sensors with OPA used for the measurement were designed and manufactured via different fabrication processes. The previously reported CMOS depth image sensor with OPA was designed via the FSI fabrication process. The difference in pixel height caused by the presence or absence of a color filter was compared in the FSI structure. As the pixel height of the OPA sensor without the color filter was lower than that of the OPA sensor with the color filter, the CRA increased. However, the decrease in pixel height is limited by the structural characteristics of FSI. The pixel height of the CMOS depth image sensor with OPA using the BSI structure can be reduced without affecting the multiple metal layers. The pixel height of the CMOS depth image sensor with OPA using the BSI structure used for measurement in this work was lower than that of the one with the FSI structure by 3.275 µm (1.16 µm versus 4.435 µm). The color filter patterns of the FSI chip and the BSI chip were the same RGBW pattern, chosen for improved sensitivity. The two chips used for measurement did not have the same pixel size and fabrication process, so it is difficult to make a quantitative comparison. Due to the multiple metal layers and the high pixel height, the signal-to-noise ratio (SNR) of the CMOS image sensor with OPA using the FSI structure was inferior to that of the BSI structure under the same conditions. If the SNR of the image is not good, the disparity between the LOPA and the ROPA cannot be acquired accurately. Therefore, the disparity resolution is expected to improve with a better SNR.

The results of the measurements show that the CRA and disparity values increase as the pixel height decreases. In particular, the results of the CRA measurement show that the peak angle response of the CMOS depth image sensor with OPA using the BSI structure is 52°, and the peak angle response of the CMOS depth image sensor with OPA using the FSI structure is 44°. Accordingly, the peak angle response value of the sensor with the BSI structure was extended by 8° because the incidence angle of the microlens increased as the pixel height decreased. The disparity of the CMOS depth image sensor with OPA using the BSI structure was greater than that of the CMOS depth image sensor with OPA using the FSI structure by 5.18 pixels at −14 cm and 2.63 pixels at 14 cm from the focal point. This is because the pixel height and the F_OA value of the sensor with the BSI structure are decreased. Assuming all other conditions except the pixel height are the same, the disparity resolution increases as F_OA decreases. Therefore, high-quality depth information can be obtained with a single sensor by applying the BSI structure to the CMOS depth image sensor with OPA. In conclusion, the CMOS depth image sensor with OPA using the BSI structure can be easily applied to portable devices or other fields to improve depth information quality as well as to achieve low fabrication costs and low power consumption with a simple structure and without external elements. In future experiments, we plan to analyze the characteristics of various color patterns such as RBWW and monochrome patterns as well as RGBW patterns.

Author Contributions: Conceptualization, S.C., J.P., S.-J.L., and J.-K.S.; methodology, S.C., J.P., and S.-J.L.; software, J.L. and S.-J.L.; validation, S.C., J.P., S.-J.L., and J.-K.S.; data curation, J.L.; formal analysis, J.L., S.-H.K., H.K., J.J., S.C., J.P., S.-J.L., and J.-K.S.; investigation, J.L., S.-H.K., H.K., and J.J.; writing—original draft preparation, J.L. and J.-K.S.; writing—review and editing, J.L. and J.-K.S.; visualization, J.L., S.-H.K., H.K., and J.J.; supervision, S.C., J.P., S.-J.L., and J.-K.S.; project administration, S.C., J.P., S.-J.L., and J.-K.S.; funding acquisition, S.C., J.P., S.-J.L., and J.-K.S. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the Center for Integrated Smart Sensors funded by the Ministry of Science and ICT as Global Frontier Project (CISS-2016M3A6A6931333), and the BK21 Plus project funded by the Ministry of Education, Korea (21A20131600011).

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Kim, Y.M.; Theobalt, C.; Diebel, J.; Kosecka, J.; Miscusik, B.; Thrun, S. Multi-view image and ToF sensor fusion for dense 3D reconstruction. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, 27 September–4 October 2009; pp. 1542–1549. [CrossRef]
2. Ximenes, A.R.; Padmanabhan, P.; Lee, M.; Yamashita, Y.; Yaung, D.N.; Charbon, E. A 256 × 256 45/65 nm 3D-stacked SPAD-based direct TOF image sensor for LiDAR applications with optical polar modulation for up to 18.6 dB interference suppression. In Proceedings of the 2018 IEEE International Solid-State Circuits Conference (ISSCC), San Francisco, CA, USA, 11–15 February 2018; pp. 96–98. [CrossRef]
3. Xu, K. Integrated Silicon Directly Modulated Light Source Using p-Well in Standard CMOS Technology. IEEE Sens. J. 2016, 16, 6184–6191. [CrossRef]
4. Park, B.; Park, I.; Choi, W.; Chae, Y. A 64 × 64 APD-Based ToF Image Sensor with Background Light Suppression up to 200 klx Using In-Pixel Auto-Zeroing and Chopping. In Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, 9–14 June 2019; pp. C256–C257. [CrossRef]
5. Chen, A.R.; Akinwande, A.I.; Lee, H.-S. A CMOS-based microdisplay with calibrated backplane. In Proceedings of the 2005 IEEE International Solid-State Circuits Conference, Digest of Technical Papers (ISSCC), San Francisco, CA, USA, 10 February 2005; Volume 1, pp. 552–617. [CrossRef]
6. Fukuda, T.; Ji, Y.; Umeda, K. Accurate Range Image Generation Using Sensor Fusion of TOF and Stereo-based Measurement. In Proceedings of the 2018 12th France-Japan and 10th Europe-Asia Congress on Mechatronics, Tsu, Japan, 10–12 September 2018; pp. 67–70. [CrossRef]
7. Yasutomi, K.; Okura, Y.; Kagawa, K.; Kawahito, S. A Sub-100 µm-Range-Resolution Time-of-Flight Range Image Sensor With Three-Tap Lock-In Pixels, Non-Overlapping Gate Clock, and Reference Plane Sampling. IEEE J. Solid-State Circ. 2019, 54, 2291–2303. [CrossRef]
8. Silberman, N.; Fergus, R. Indoor scene segmentation using a structured light sensor. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 601–608. [CrossRef]
9. Park, B.; Keh, Y.; Lee, D.; Kim, Y.; Kim, S.; Sung, K.; Lee, J.; Jang, D.; Yoon, Y. Outdoor Operation of Structured Light in Mobile Phone. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2392–2398.
10. Benxing, G.; Wang, G. Underwater Image Recovery Using Structured Light. IEEE Access 2019, 7, 77183–77189. [CrossRef]
11. Yao, H.; Ge, C.; Xue, J.; Zheng, N. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light. Sensors 2017, 17, 805. [CrossRef] [PubMed]
12. Liberadzki, P.; Adamczyk, M.; Witkowski, M.; Sitnik, R. Structured-Light-Based System for Shape Measurement of the Human Body in Motion. Sensors 2018, 18, 2827. [CrossRef] [PubMed]
13. Kobayashi, M.; Yamazaki, K.; Caballero, F.G.; Osato, T.; Endo, T.; Takemura, M.; Nagasaki, T.; Shima, T. Wide Angle Multi-Shift with Monocular Vision. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 4–6 January 2020; pp. 1–4. [CrossRef]
14. Koyama, S.; Onozawa, K.; Tanaka, K.; Saito, S.; Kourkouss, S.M.; Kato, Y. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera. Opt. Express 2016, 24, 18035–18048. [CrossRef] [PubMed]

15. Kumazawa, I.; Kai, T.; Onuki, Y.; Ono, S. Measurement of 3D-velocity by high-frame-rate sensors to extrapolate 3D position captured by a low-frame-rate stereo camera. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 295–296. [CrossRef] 16. Ohashi, A.; Tanaka, Y.; Masuyama, G.; Umeda, K.; Fukuda, D.; Ogata, T.; Narita, T.; Kaneko, S.; Uchida, Y.; Irie, K. Fisheye stereo camera using equirectangular images. In Proceedings of the 2016 11th France-Japan & 9th Europe-Asia Congress on Mechatronics (MECATRONICS)/17th International Conference on Research and Education in Mechatronics (REM), Compiegne, France, 15–17 June 2016; pp. 284–289. [CrossRef] 17. Huang, S.; Gu, F.; Cheng, Z.; Song, Z. A Joint Calibration Method for the 3D Sensing System Composed with ToF and Stereo Camera. In Proceedings of the 2018 IEEE International Conference on Information and (ICIA), Wuyishan, China, 11–13 August 2018; pp. 905–910. [CrossRef] 18. Choi, B.-S.; Bae, M.; Kim, S.H.; Lee, J.; Oh, C.W.; Chang, S.; Park, J.H.; Lee, S.-J.; Shin, J.-K. CMOS image sensor for extracting depth information using offset pixel aperture technique. In Proceedings of the 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Houston, TX, USA, 14–17 May 2018; pp. 3–5. 19. Lee, J.; Choi, B.-S.; Kim, S.-H.; Lee, J.; Lee, J.; Chang, S.; Park, J.; Lee, S.-J.; Shin, J.-K. Effects of Offset Pixel Aperture Width on the Performances of Monochrome CMOS Image Sensors for Depth Extraction. Sensors 2019, 19, 1823. [CrossRef][PubMed] 20. Yun, W.J.; Kim, Y.G.; Lee, Y.M.; Lim, J.Y.; Kim, H.J.; Khan, M.U.K.; Chang, S.; Park, H.S.; Kyung, C.M. Depth extraction with offset pixels. Opt. Express 2018, 26, 15825–15841. [CrossRef][PubMed]

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).