International Journal of Applied Engineering Research ISSN 0973-4562 Volume 12, Number 10 (2017) pp. 2409-2421 © Research India Publications. http://www.ripublication.com

Review on Three-Dimensional (3-D) Acquisition and Range Techniques

Maged Aboali1, Nurulfajar Abd Manap2, Abd Majid Darsono3, and Zulkalnain Mohd Yusof4

Machine Learning and Signal Processing (MLSP) Research Group, Center of Telecommunication Research & Innovation (CETRI), Fakulti Kejuruteraan Elektronik Dan Kejuruteraan Komputer (FKEKK), Universiti Teknikal Malaysia Melaka (UTeM) Malaysia.

1ORCID: 0000-0003-2214-549X

Abstract

With the several developments in visualization, sensing, perception, and transmission systems, 3D information has come to be considered a main part of real-world applications such as multimedia, medicine, architecture, manufacturing, computer vision, and security. Acquiring scene depth information is one of the fundamental requirements of many of these applications. Although the extraction of 3D information has been studied for more than three decades in different fields, including robotics, computer graphics, and computer vision, it remains a challenging and fundamental issue. Systems offering high accuracy, higher-quality information, and a high frame rate of data processing have been a unique research opportunity for numerous researchers seeking to handle huge data volumes and provide convenient solutions. Therefore, several 3D systems of various types, satisfying multiple objectives, have been proposed and developed to meet these requirements. In the last few years, however, special attention has been devoted to time-of-flight (TOF) technology, which offers accuracy, efficiency, and a low-priced and robust alternative for data acquisition compared to other technologies. This paper presents the 3D imaging techniques and acquisition systems concerned with range measurement. The differences in their principles and characteristics are discussed, considering the various factors that make a method low-cost, accurate, and fast, so that future researchers can select the most suitable technique for developing their systems.

Keywords: 3D Image Acquisition Systems, Time-of-Flight (TOF) Camera, Scene Depth Data, Range Measurement Techniques

INTRODUCTION

Knowledge of three-dimensional (3D) data and range imaging information is an essential element across a variety of applications, including computer graphics, medicine, multimedia, robot vision, automatic navigation, automotive safety, computer interaction, tracking, and action recognition. Hence, acquiring 3D geometric data from our real and complex environment has been an active subject of research for years. In spite of the significant achievements made during the last decades in these different fields, however, developing lower-priced systems capable of providing full range data and high-resolution distance information, even in real time, remains a unique opportunity for numerous researchers and developers.

On the other hand, for current intricate technologies, 3D perception has become a state-of-the-art concept for touching and dealing with the real world, where the visual image is the most appropriate means of 3D perception and data capture as well as of extracting 3D information from the scene. Thus, in order to capture the entire shape of desired objects, massive amounts of data and samples must be acquired, for which a convenient, accurate, and high-speed system is necessary for dependable image acquisition as well as reliable processing. The fundamental sensing and ranging devices could not meet all these requirements for the full variety of applications, especially where demands for high precision, reliability, and robustness, together with power, cost, and safety limitations, still exist. Consequently, diverse systems, sensors, and techniques have been proposed and developed to overcome these issues with respect to several characteristics, including physical size, robustness, weight, interfacing options, reliability, processing requirements, resolution, price, and power consumption [1]-[7].

In addition, vision is the most powerful human sense; it provides a remarkable measure of information about the complex three-dimensional world around us. It enables us to interact and communicate with our surrounding environment intelligently, to study the positions and identities of objects and the relationships between them, without the need for physical contact. Therefore, the effort and interest devoted to constructing machines and systems with a sense of vision, able to interact with and respond to their situation, have been one of the thrusts of the vision research community [8].

3D vision means visualizing objects, surfaces, and things in their real shapes, dimensions, and geometries, in a way comparable to the human visualizing process. This principle has long been widely investigated in order to understand 3D structures and to build 3D systems able to extract the 3D coordinate information of objects from their geometries and reconstruct their profiles. At its inception, the interest was in producing a 2D image of 3D real-world objects; with increasing demands and improvements in quality of life, however, came the realization of how to obtain a real-time image, or to convert these 2D images into 3D images that look exactly like the real scene, giving such systems a sense of enhanced naturalism and realism over traditional 2D systems.


Recently, with the rapid improvements in the photonics, optics, and electronics fields, 3D vision systems have come to form a major part of this landscape, and there is real evidence that a huge future market will be established as a result of the advances in 3D systems and imaging technologies [9]-[12]. Imaging and visualization systems and techniques based on (3-D) data have advanced considerably due to the need to display, process, record, and capture high-quality 3-D images, and reliable, fast acquisition of 3D information has become a major requirement for future development.

This review paper proceeds as follows. In Section 1, an introduction to 3D vision and range acquisition is presented. Section 2 reviews common 3D systems used to perceive objects and achieve depth measurements. Section 3 discusses the various techniques of scene depth estimation and 3D acquisition in terms of their respective performance, configuration, features, limitations, and solutions, including the time-of-flight technique and technology, which is starting to have an impact on a wide range of research and commercial applications; this discussion illustrates the interest in using time-of-flight technologies to push several research fields, such as computer vision, computer graphics, and human-machine interaction (HMI), towards more efficient performance. In Section 4, a clear comparison between the different techniques, important for identifying an efficient 3D measurement system, and the conclusions of the paper are presented.

REVIEW OF COMMON (3D) SYSTEMS UTILIZED TO PERCEIVE OBJECTS AND ACHIEVE RANGE MEASUREMENTS

For a long time, and in response to the major requirements of future development and quality-of-life improvements, an enormous number of advanced technologies, systems, and methods appropriate for perceiving objects and gaining data and depth in three dimensions (3D) have been proposed, extensively investigated, and developed, with improvements in their integrity and accuracy and declines in their costs [13]. These systems and technologies exploit different resources, including electromagnetic waves, sound waves, ultrasound, lasers, and light rays, in a variety of systems including laser, radar, sonar, ultrasonoscope, and imaging systems.

Among these systems is Laser Detection and Ranging (LADAR), a system for transmitting, detecting, processing, and receiving an electromagnetic wave reflected from targets. The laser sensors of these systems provide a range image constructed from a set of point measurements taken from exactly one viewpoint. The system is designed with a high capability of perceiving hard targets as its main feature; however, it requires complex components, and the overall process of collecting range data is expensive and time-consuming. The laser range finder (LRF) operates on the time-of-flight principle, transmitting a laser pulse onto the object surface and receiving the reflected pulse in order to measure the time between sending and receiving [14]. In the 3D laser scanner, the concept of operation is similar to LADAR; however, the system was created for the purpose of 3D digital measurement, visualization, and documentation, and it is capable of capturing 3D digital information and obtaining images with high accuracy and speed [15]. Another typical system is Light Detection and Ranging (LIDAR), a system for terrain mapping and forest and vegetation measurement; LIDAR has the ability to detect multiple reflections from a single laser pulse [16]. Radio Frequency Identification (RFID) is an automatic tracking and identification technology that utilizes small tags (transponders) attached to a physical object, where the tags contain stored information; the system can be used for automatic tracking of people, machines, cash, airport baggage, trains, shipping, cars, containers, automobiles, and more [17]. Radio Detection and Ranging (RADAR) is a detection technology in which radio waves are transmitted and reflected back by objects in order to obtain the angle, velocity, range, and properties of targets; it has various types and uses, including the detection of ships, guided missiles, aircraft, spacecraft, and weather formations [18]. Thermal imaging is another technology for sensing objects, based on differences in temperature; these systems are able to detect the heat given off by the desired targets, and a wide range of applications utilize the concept, such as the mining industry, firefighting, aviation, and military and medical applications [24]. The Global Positioning System (GPS) is a space-based system that determines an accurate position and provides timely information for a desired location anywhere on Earth under all weather conditions; GPS serves commercial, civil, and military users with high capability worldwide [25]. Ultrasound technologies, based on sound waves, are another approach to distance measurement and object detection: they transmit a high-frequency sound pulse using a transducer probe, the main part of an ultrasound machine, and the reflected waves are then received back by the same transducer probe [26]. Table 1 reviews these common (3-D) sensing systems, which have the potential to perceive and detect objects, together with their different applications.

Moreover, other unique resources for perceiving and capturing 3D information about objects in our physical environment are (3D) cameras; these systems have found their way into a wide set of applications [21]. Among the various methods utilized for 3D imaging, stereo vision, structured light, and time-of-flight are the most common principles [22], and the stereo vision principle is certainly the most widespread and direct technique for gaining three-dimensional (3D) data and range information [23]. The advance of (3-D) imaging and visualization techniques and technologies started with the introduction of the so-called "mirror stereoscope" by Charles Wheatstone in 1828, which made it possible to view two "flat" pictures drawn from slightly differing viewpoints. By 1844, David Brewster had made the "closed-box stereoscope", which could take stereo photographs in (3-D) [19]. In 1855 the "stereo animation camera", the Kinematoscope, was produced, which could create 3-D motion pictures [20]. William Friese-Greene in 1890 introduced an instrumentation of stereoscopy that allowed watching two separate films at right angles to each other. In 1915, (3-D) glasses with two differently colored lenses were introduced, capable of directing a separate image to each eye, for the first 3-D movie [20]. Hammond, in the mid-1920s, presented anaglyph imaging (the stereoscopic shadow), which presented a three-dimensional view in theater seats [21].


The Technicolor "3-strip color image" system came out in 1935 and was used for the first color 3D movie [22]. The advance continued forward, and in 1970 stereovision was innovated. Table 2 illustrates the several advances in (3-D) imaging and visualization techniques and technologies up to the innovation of stereovision.

Table 1: A review of common (3-D) sensing systems and technologies which have the potential to perceive and detect objects in close proximity to heavy equipment.

Technologies | Application | References
Laser Detection and Ranging | Designed mainly for the aim of hard-target detection. | [23]
Laser Range Finder | Surveying applications, industrial applications, military purposes, traffic safety. | [14]
3D Laser Scanner | 3D digital measurement, visualization, and documentation. | [15]
Light Detection and Ranging | A typical system for terrain mapping, forest and vegetation measurement. | [16]
Radio Detection and Ranging | Has different types and uses including detecting ships, guided missiles, aircraft, spacecraft, weather formations, and more. | [18]
Radio Frequency Identification | Automatic tracking for people, machines, cash, airport baggage, trains, shipping, cars, containers, automobiles, etc. | [17]
Thermal Imaging | Utilized mostly in the mining industry, firefighting, aviation, military, and medical applications. | [24]
Global Positioning System | Provides accurate position and time information for a desired location; serves commercial, civil, and military users with high capability worldwide. | [25]
Ultrasound Technology | Used mainly in medical devices, checking for invisible flaws, and more. | [26]

Table 2: Advances in (3-D) systems up to the innovation of stereo vision.

Year | Type of 3D System | Application | References
1828 | Mirror stereoscope, by Charles Wheatstone | Viewing two "flat" pictures drawn from slightly differing viewpoints | [27]
1844 | Closed-box stereoscope, by David Brewster | Taking stereo photographs in 3D | [19]
1844-1890 | Stereo animation camera (the Kinematoscope) | Ability to create 3-D motion pictures | [20]
1890 | Instrumentation of stereoscopy, by William Friese-Greene | A viewer made to watch two separate films at right angles to each other | [28], [29]
1915 | 3-D glasses with two differently colored lenses | Directing a separate image to each eye for the first 3D movie | [20]
1920s | Anaglyph imaging (stereoscopic shadow), by Hammond | Creating anaglyphs that present a three-dimensional view in theater seats | [21]
1935 | Technicolor (3-strip color image) system | Using Technicolor for the first color 3D movie | [22], [30]
1952 | Natural Vision process | Using the 3-D color feature in producing the movie Bwana Devil (USA) | [20]
1960 | Space-Vision 3D technology | Taking two pictures and printing them over each other on a single strip | [22]
1970 | Stereovision | 3D innovation setting two pictures squeezed together next to each other on a single strip | [31]


OVERVIEW OF COMMON 3D ACQUISITION AND RANGE MEASUREMENT TECHNIQUES

Table 3: 3D imaging techniques classification [32]–[35].

Technique | Direct | Indirect | Active | Passive | Range | Triangulation
Stereo vision | ✓ | - | - | ✓ | - | ✓
Time-of-flight | ✓ | - | ✓ | - | ✓ | -
Structured light | ✓ | - | ✓ | - | - | ✓
Photogrammetry | ✓ | - | - | ✓ | - | ✓
Interferometry | ✓ | - | ✓ | - | ✓ | -
Laser triangulation | ✓ | - | ✓ | - | - | ✓
Shape from shading | - | ✓ | - | ✓ | - | -
Shape from focus | - | ✓ | ✓ | ✓ | - | -
Shape from shadow | - | ✓ | - | ✓ | - | -
Shape from photometry | - | ✓ | ✓ | - | - | -
Texture gradients | - | ✓ | - | ✓ | - | -

The presence of an object can be sensed and detected, and its distance measured, by way of several physical phenomena, each with its own limitations. Thus, in order to achieve efficient depth measurements, various techniques have been developed and widely investigated; these techniques are basically based on structured light, triangulation, or time of flight [36]. Several common techniques exist to acquire depth measurements and information from the scene. In general, they are classified into two main groups, direct and indirect [32], and the direct techniques are further grouped into two main categories, active and passive. The first category, passive techniques, does not use a dedicated structured power source to form or construct an image [33]; thus, in the calculation of range, a powered light source might not be directly used. The second category, active techniques, uses a structured energy source to emit light: the entire object surface is scanned by an active source that moves around the object, such as a scanning laser or a projected pattern of light, while the camera works as the detector [34], [35].

Stereo Vision (Passive Triangulation)

To date, the most widespread and common direct approach for obtaining three-dimensional (3D) information and depth acquisition is the stereovision (SV) principle [37], based on Sir Charles Wheatstone's concept of the mirror stereoscope [20], using two images taken from two different points of view. The term "stereo" originates from the Greek word "stereos", meaning solid or firm: with stereo, objects are seen solid, in three dimensions (3-D), with range [38]. The stereo vision (passive triangulation) principle refers to the process of determining the 3D data and the range for any visible object in space by measuring the angles from two known points separated by a fixed baseline, instead of determining the distance to the object point directly; the target point can be considered the third point of the triangle [32]. Figure 1 illustrates the stereo vision principle.

Figure 1: Stereo-vision principle (passive triangulation).
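To make the triangulation geometry concrete, the minimal sketch below computes depth from disparity for a rectified stereo pair using the standard relation Z = f·B/d. The focal length, baseline, and disparity values here are illustrative assumptions, not parameters of any system discussed in this paper.

```python
# Minimal sketch of stereo (passive) triangulation on a rectified pair.
# Assumes pinhole cameras with parallel optical axes; all numbers are
# illustrative, not taken from the paper.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("Point must have positive disparity (visible in both views).")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 35 px disparity.
z = depth_from_disparity(focal_px=700.0, baseline_m=0.12, disparity_px=35.0)
print(f"Estimated depth: {z:.2f} m")  # -> 2.40 m
```

Because depth is inversely proportional to disparity, small matching errors translate into large depth errors for distant objects, which is one reason the correspondence problem discussed below dominates stereo system design.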


Common stereo vision systems use at least two cameras operating concurrently, separated by a horizontal distance called the baseline [39]. These cameras capture images of the same object from two different positions; the images can be further processed to provide reliable and sufficient features, and by applying a disparity approach to the two images, the depth (z-axis) can be obtained for the object in addition to the planar (x, y) coordinates. In principle, the stereo vision reconstruction approach follows this sequence of processing steps: (1) scene image acquisition, (2) stereo camera modeling, (3) feature extraction, (4) correspondence analysis, and (5) triangulation, with no need for extra equipment such as light sources or special projection. The remarkable features of the stereo vision approach are its low cost, its simplicity, and the achievement of a high resolution of the entire range scene without moving parts or energy emission. However, the major challenge in a stereo system is solving the correspondence problem (identifying the common points in image pairs) [40]: until correspondence can be established, disparity, and therefore depth, cannot be accurately determined. Solving the correspondence problem involves complex, computationally intensive algorithms for feature extraction and matching in order to minimize the percentage of erroneous matches and achieve high depth accuracy; to overcome these problems, various studies and techniques have been suggested and performed, such as [41]-[47]. Moreover, the limited field of view is another disadvantage of stereo systems, and several methods have been proposed to alleviate this issue, including the use of rotating cameras [48]-[51] and an increased number of cameras [52]-[54].

Structured light

The structured light technique represents the active version of triangulation for scene acquisition and 3D measurement, and it has found wide acceptance in scientific and industrial applications. In this technique, a combination of an active external projector (e.g., an LCD/DLP projector or a laser) and a single camera (with an imaging sensor) is utilized, known as an active stereo pair. The concept of structured light is illustrated in Figure 2. The light-emitting source (20) projects a light pattern, which can be a multi-stripe, stripe, single-spot, binary, sine-wave, or any complex pattern, toward the desired 3D object. The light is then distorted by the 3D object and detected by the camera (18), and one or more of the detected images are stored for further processing by an image processing part (12). During this process, the distortions of the structured light are analyzed to obtain spatial measurements for different points on the desired 3D object; the extraction of depth information is done by measuring the distortion between the projected pattern and the captured image. The main advantages of this system are its high data acquisition rate and intermediate measurement volume, with performance generally dependent on the ambient light. In contrast, the limitations of this system are missing data in correspondence with occlusions and shadows, and a moderate computational complexity [15], [55], [56].

Figure 2: Structured light system principle.
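As an illustration of how an active projector removes the stereo correspondence search, the sketch below decodes a sequence of binary Gray-code stripe patterns into a projector column index per camera pixel; once that index is known, depth follows by the same triangulation as in the stereo case. The simple global threshold and the pattern count are assumptions of this sketch rather than details taken from the references.

```python
# Sketch: decoding binary Gray-code structured-light patterns into projector
# column indices, the step that replaces stereo correspondence search.
# Thresholding and camera/projector geometry are simplified assumptions.
import numpy as np

def decode_gray_code(captures: list[np.ndarray], threshold: float = 0.5) -> np.ndarray:
    """captures[k] is the camera image of the k-th (most significant first)
    Gray-code stripe pattern, normalized to [0, 1]. Returns, per camera pixel,
    the projector column index that illuminated it."""
    bits = [(img > threshold).astype(np.uint32) for img in captures]
    # Convert Gray code to binary: b0 = g0, bi = b(i-1) XOR gi.
    binary = bits[0]
    code = bits[0]
    for g in bits[1:]:
        binary = binary ^ g
        code = (code << 1) | binary
    return code  # projector column per pixel; triangulate camera ray vs projector plane

# Example: 3 patterns covering an 8-column projector (one 4x4 camera frame each).
rng = np.random.default_rng(0)
frames = [rng.random((4, 4)) for _ in range(3)]
print(decode_gray_code(frames))
```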
Photogrammetry

As the name implies, photogrammetry is the science of acquiring accurate coordinate measurements and reliable information about desired objects from photographs. Based on the triangulation principle, photogrammetry is a method for precise three-dimensional (3D) measurement: photographs are taken from two or more locations, establishing what are known as lines of sight, and the point of interest is measured in each photograph. The intersection of these pairs of lines of sight can then be triangulated to produce the (3D) coordinates of the point on the target object. Remote sensing and high-speed imaging may be employed by photogrammetry for the purpose of detecting, recording, and measuring three-dimensional motion and complex 2D fields [57], where the information extracted from the photographs is transformed into a meaningful and reliable result.

In general, there are two typical types of photogrammetry: aerial photogrammetry and close-range photogrammetry. Aerial photogrammetry is often involved in the production of topographical maps, where the images are carefully collected to support mapping and the camera parameters are well known, for ultimate accuracy. Close-range photogrammetry, in contrast, involves photographs acquired from nearby distances; it may be used for creating 3D models, but it is generally not utilized for topographical mapping. The concept of this technique is applied to a diverse set of applications, including heritage, city modeling, architecture, medical imaging, security, anthropometric analysis, databases, face recognition, and more [58]. Figure 3 illustrates the operating photogrammetry principle.
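Since photogrammetry reduces to intersecting two or more lines of sight, the following sketch triangulates a point from two 3D rays by taking the midpoint of the shortest segment between them. Recovering the ray origins and directions from calibrated photographs is assumed to have been done already, and all numbers are purely illustrative.

```python
# Sketch: intersecting two photogrammetric "lines of sight" in 3D. Real
# photogrammetry recovers the rays from calibrated photographs; here the
# ray origins/directions are assumed known, and the midpoint of the
# shortest segment between the two (generally skew) rays is returned.
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(o1 + t1 d1) - (o2 + t2 d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b          # ~0 means near-parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Two cameras 1 m apart, both sighting a point near (0.5, 0, 2).
p = triangulate_rays(np.array([0.0, 0, 0]), np.array([0.5, 0, 2.0]),
                     np.array([1.0, 0, 0]), np.array([-0.5, 0, 2.0]))
print(p)  # ~[0.5, 0, 2]
```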


Figure 3: Photogrammetry principle.

Figure 4: Laser triangulation configuration and principle.

In addition, photogrammetry and stereo share the same objective of acquiring 3D models from images, and this is currently exploited in computer vision applications, where the techniques depend on the use of camera calibration methods [59].

Laser triangulation

Triangulation is a common formulation for images. The general laser triangulation configuration and principle is shown in Figure 4: a triangulation laser sensor comprises a laser source and a CCD/CMOS detector or position-sensitive detector (PSD). The laser generates a beam toward the surface or measurement target at position P; the reflected laser light then falls incident on the detector element at a specific angle that depends on the distance, where a lens is utilized to create an image of the beam at the image plane. From the laser beam's position on the detector, the distance between the sender and receiver elements, and the calculated angles, the distance to the target object can be measured.

Laser triangulation is a common instrument for acquiring 3D information and obtaining the distance of objects. Its ease of implementation, low price, and high data acquisition rate have made laser triangulation one of the prevalent technologies on the global market; its applications extend from biomedical uses, such as cardiovascular instruments, to industrial applications that require non-contact measurement [60], [61], [62].
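Below is a minimal numeric sketch of the single-point geometry just described, under the simplifying assumptions that the laser fires perpendicular to the baseline and that the lens is an ideal pinhole; the baseline, focal length, and spot offset are illustrative values, not taken from the cited systems.

```python
# Sketch of single-point laser triangulation (Figure 4 geometry, simplified):
# laser and detector are separated by a baseline b; the laser fires
# perpendicular to the baseline, and the spot's offset x on the detector
# (focal length f) gives the range. All values are illustrative assumptions.
import math

def laser_range(baseline_m: float, focal_px: float, spot_offset_px: float) -> float:
    """Range along the laser beam: the viewing angle from the camera is
    alpha = atan(spot_offset / focal), and z = baseline / tan(alpha)."""
    alpha = math.atan2(spot_offset_px, focal_px)
    return baseline_m / math.tan(alpha)

print(f"{laser_range(baseline_m=0.10, focal_px=800.0, spot_offset_px=40.0):.2f} m")
# 0.10 * 800 / 40 = 2.00 m
```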

Interferometry

Interferometry is a typical measurement method that uses the phenomenon of interference between waves (often light, sound, or radio waves). The measurement usually involves specific characteristics of the waves and of the surfaces the waves interact with. Interferometry (in terms of light waves) often operates by sending a coherent beam of light that is split into two identical beams using a beam splitter (also known as a half-mirror or half-transparent mirror). Each of these beams travels a different path, and they are recombined before arriving at a detector. The difference in the distance that each beam travels (the path difference) creates a certain phase difference between the two beams, which generates an interference pattern; this pattern is analyzed by the detector in order to evaluate material properties and wave characteristics and to calculate displacement changes. Interferometry has a set of applications in science and industry, including growth monitoring, concentration mapping, surface topography, and temperature sensing [63], [64].

Systems based on the interferometry principle are successfully used for the calibration of coordinate measuring machines (CMMs), which are utilized to perform high-accuracy measurements [65]. Another system based on interferometry is the Michelson interferometer; Figure 5 clarifies the interferometry concept in terms of the Michelson interferometer's working principle. The main feature of this kind of system is its high level of accuracy ("sub-micron accuracy for sub-micro ranges"), while the main disadvantages of the interferometry method are its high cost and limited applicability, especially in industrial environments.
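For displacement measurement with a Michelson interferometer, each full fringe cycle corresponds to half a wavelength of mirror travel, because the beam traverses the moved arm twice. The sketch below applies this relation; the fringe count and the HeNe wavelength are illustrative assumptions.

```python
# Sketch: displacement measurement with a Michelson interferometer.
# Each full fringe cycle corresponds to a half-wavelength of mirror travel
# (the beam traverses the moved arm twice); values are illustrative.

def displacement_from_fringes(fringe_count: float, wavelength_m: float) -> float:
    """d = N * lambda / 2 for N counted fringe transitions."""
    return fringe_count * wavelength_m / 2.0

# 1000 fringes of a 633 nm HeNe laser -> 316.5 micrometres of travel.
d = displacement_from_fringes(1000, 633e-9)
print(f"{d * 1e6:.1f} um")
```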


Figure 5: Interferometry principle based on the Michelson interferometer.

Figure 6: Shape from focus (SFF) basic schematic.

Shape from focusing

Shape from focus (SFF) is an optical technique that ranges from a passive approach, recovering the three-dimensional (3D) shapes of objects from two-dimensional (2D) images, to an active sensing approach that projects light toward target objects to overcome difficulties in discerning surface texture. In this method, a sequence of images at different focus levels is acquired; the objective of (SFF) is to determine the absolute depth by calculating or measuring, for each object point, the distance from the camera lens of the position at which it is well focused. Once these distances are found for every point of the object, the 3D shape can easily be recovered. Imaging devices with lenses of long focal length particularly suffer from a limited depth of field; thus, when acquiring an image, some parts of an object are defocused while others are well focused.

Figure 6 shows a simple schematic of (SFF). As an initial step, the object with unknown depth is located on a reference plane and then translated in the optical direction, in finite steps, with respect to the camera aperture. An image is captured at every step, and a large set of visual observations is acquired; the distance between the reference plane and the focus plane is known. Measuring the focus point requires a significant number of images, with gradual movement of the object toward the focus plane. Every shape-from-focus scheme depends on an approximation technique and a focus measure operator. The focus measure operator plays an important role in (3D) shape recovery because it constitutes the initial step in measuring the depth map; thus, it must provide an efficient estimate of the depth map, showing high robustness even in the presence of noise [66], [67].
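The sketch below illustrates one possible shape-from-focus pipeline under the scheme just described: an absolute-Laplacian-style focus measure is evaluated per pixel across the focus stack, and the depth of each pixel is taken as the translation distance of its best-focused frame. The focus operator choice, stack size, and step size are assumptions of this sketch, not prescriptions from [66], [67].

```python
# Sketch of a shape-from-focus depth estimate: for each pixel, pick the
# frame in the focus stack with the highest local focus measure (here an
# absolute-Laplacian-style response). Stack and step size are assumed.
import numpy as np

def focus_measure(img: np.ndarray) -> np.ndarray:
    """Absolute-Laplacian focus response per pixel (edge-padded borders)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    lap_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return lap_x + lap_y

def depth_from_focus_stack(stack: list[np.ndarray], step_mm: float) -> np.ndarray:
    """Depth map: translation distance of the best-focused frame per pixel."""
    responses = np.stack([focus_measure(f) for f in stack])   # (n, H, W)
    best = np.argmax(responses, axis=0)                       # frame index per pixel
    return best * step_mm                                     # distance from reference plane

# Example: a stack of 10 frames captured 0.5 mm apart.
rng = np.random.default_rng(1)
stack = [rng.random((32, 32)) for _ in range(10)]
print(depth_from_focus_stack(stack, step_mm=0.5).shape)  # (32, 32)
```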

Shape from Shading

Shape from shading (SFS) is a method that uses the pattern of shading in a single image to estimate or deduce the shape of the surfaces in the scene. (SFS) requires the acquisition of the target object from a single view angle while the position of the light source is varied. Contrary to other 3D reconstruction techniques (e.g., photometric stereo), in shape from shading the data are minimal (a single image is used) to infer the three-dimensional form of a surface. The variants of this technique correspond to variations in the lighting cases or conditions. Monochrome images of smooth surfaces, particularly those with homogeneous reflecting features, exhibit variations in image shading; these result from the interaction of various factors, including the shape of the surface, the reflecting characteristics of the material, the illumination, and the image projection. The difficulty of shape from shading lies in extracting the shape information from the shading data: because of the number of factors affecting the shading values, additional information must be provided in order to determine the shape depicted in an image. The shape information can then be extracted by suitable algorithms, starting from the reflectance map, which specifies the radiance of the surface as a function of its orientation [68], [69].

Time-of-flight (TOF)

During the last years, several time-of-flight (TOF) range technologies have established themselves in multiple fields, including industrial, entertainment, and academic environments, and the time-of-flight principle has been extensively studied and investigated. Time-of-flight (TOF) is a method of resolving or measuring the distance between the camera and the object surface based on the known speed of light in the transmitting medium and the absolute time difference between the emission of a signal from the transmitter and its return to the receiver of the camera after reflection by the object. Figure 7 illustrates the common time-of-flight (TOF) operating principle.

Figure 7: The basic time-of-flight (TOF) principle and configuration.

The term (TOF) refers to the time for the light to travel from the illumination unit, which is typically either a laser or a near-infrared LED array, to the target surface and back to the receiving unit, which contains a CMOS image sensor. The light pulse projected by the transmitter of the illumination unit travels to a visible object and back to the reference point (detector) of the receiving unit after being reflected by the object; in practice, the receiver and the active light source are located close to each other. The target distance is obtained from the time difference between the emitted and received light signals: thanks to precise knowledge of the speed of light, which is constant, the distance of the desired target can easily be determined [70].

In general, diverse illumination techniques are utilized in time-of-flight (TOF) measurements. They are commonly distinguished as continuous-wave modulation (CW), pulsed-light modulation, or a combination of both, and each of these operation modes has its specific advantages and disadvantages. In the pulsed-light operation mode, the time is measured directly, and the main feature is the possibility of transmitting a high amount of energy in a very short time. Therefore, a high short-term signal-to-noise ratio is achieved while a low mean optical power is maintained; this also lowers the requirements on the sensitivity and signal-to-noise ratio of the receiver (detector), and thus enables measurements over long distances. In contrast, the continuous-wave (CW) modulation operation mode has the possibility of utilizing alternative demodulation and detection mechanisms. For this operation mode, a larger variety of light sources is available, because fast rise and fall times are not required, and various signal shapes can be used, including square waves, sine waves, and "PN modulation". In this method, the phase difference between the sent and received signals is measured; since the modulation frequency is known, the measured phase corresponds directly to the time of flight. Finally, the operation mode combining pulsed light and CW modulation unites their specific features, including a better optical signal-to-noise performance, increased eye safety because of the low mean optical power, and large freedom in selecting receivers and modulators.
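The two distance computations described above can be sketched as follows: pulsed mode converts the measured round-trip time directly, while CW mode recovers the phase shift from four samples of the correlation signal taken 90 degrees apart (a common four-bucket convention; actual sensors differ in sign conventions and calibration). All numbers are illustrative.

```python
# Sketch of the two TOF depth computations described above, with assumed
# numbers: pulsed mode uses the round-trip time directly; CW mode recovers
# the phase shift from four samples A0..A3 taken 90 degrees apart.
import math

C = 299_792_458.0  # speed of light, m/s

def pulsed_tof_distance(round_trip_s: float) -> float:
    """Pulsed mode: d = c * t / 2."""
    return C * round_trip_s / 2.0

def cw_tof_distance(a0: float, a1: float, a2: float, a3: float, f_mod_hz: float) -> float:
    """CW mode: phase = atan2(A3 - A1, A0 - A2); d = c * phase / (4 * pi * f)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

print(f"{pulsed_tof_distance(20e-9):.2f} m")                  # 20 ns round trip -> ~3.00 m
print(f"{cw_tof_distance(0.5, 0.0, 0.5, 1.0, 20e6):.2f} m")   # phase pi/2 at 20 MHz -> ~1.87 m
```

Note that the CW result wraps at the unambiguous range c / (2·f_mod), about 7.5 m at 20 MHz, which is one reason commercial cameras often combine several modulation frequencies.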

Furthermore, recent decades have seen an enormous tendency to employ new technologies, characterized by greater accuracy, reliability, and inexpensive computational cost, to capture and analyze the objects in our surroundings in order to provide reliable information. These technologies are based on the many techniques explained above. The time-of-flight camera is a powerful device with the ability to capture 3D information in real time, and its particular characteristics have proved to offer important advantages in numerous fields.

Table 4 gives a comprehensive overview of the different applications in which several researchers have used (3-D) imaging systems, especially the time-of-flight camera, as the main part of their systems. These applications involve various object-related tasks with different advantages, including 3D acquisition at a high rate, depth enhancement, and accuracy and robustness improvement [71]-[77]; other tasks involve scenes and humans, such as in [78]-[82], with multiple features including high-quality depth data for richly textured objects, overcoming modular errors at a high rate, and generating and proposing solutions for systematic errors.

Moreover, time-of-flight is typically distinct from laser scanning systems in that it requires no moving parts and captures the entire field simultaneously rather than scanning the scene point by point. By capturing the entire scene at once, a time-of-flight system is capable of providing depth at high frame rates. In comparison to other systems, such as stereo-based range imaging, TOF systems require much less computation, are not affected by ambient light, and do not depend on an intensively textured surface. In comparison to structured light, a (TOF) system offers other features: its measurement resolution is independent of the hardware's geometry, and since the setup is confocal, it suffers no occlusion. Tables 5 and 6 illustrate a clear comparison between (TOF), stereo, and structured light systems in terms of several characteristics, including range, material cost, depth accuracy, power consumption, frame rate, correspondence problems, and more.


Table 4: The utilization of the time-of-flight principle and technologies by several researchers.

Year | Type of technology used | Application | Features | References
2007 | TOF SR3000 | Visual position tracking | Acquiring 3D data and depth with ease, without need for a model | [71]
2007 | TOF SR3000 | Human tracking applications | 3D data at a high rate | [79]
2008 | TOF SR3000 | 3D maps for planar surface extraction | Improved range data / 3D acquisition at a high rate | [72]
2008 | TOF camera | Car park assist and backup application | 3D acquisition at a high rate | [75]
2008 | TOF camera + stereo | Calibration method to improve (TOF) performance | Enhanced depth, improved accuracy and robustness | [76]
2009 | TOF SR3000 | 3D mapping and localization for larger indoor environments | 3D at a high rate | [80]
2009 | TOF SR4000 + stereo | Object categorization for a range of objects | Accurate categorization of objects, easy combination of TOF with stereo | [73]
2010 | TOF SR4000 camera | Close-range metrology applications | Providing sub-millimeter precision range measurement | [77]
2011 | TOF SR4000 + stereo | Surfaces with rich texture | High-quality depth data for richly textured objects | [74]
2012 | TOF SR4000 | Wide-range scene measurements | Overcoming the modular errors of a stereo TOF at a high rate | [78]
2013 | TOF camera | 3D distance measurement | Reduction of the temperature-induced error of the illumination source | [81]
2014 | TOF camera | Long-distance measurement systems | Accurate distance measurements | [82]
2015 | TOF SR4000 + color camera pair | 3D data and range measurement applications | Acquiring an accurate set of correspondences | [83]
2016 | TOF SR4000 + | Range measurement and scene acquisition applications | Overcoming systematic error | [84]

Table 5: Characteristic comparison between stereo vision, structured light, and time-of-flight technologies [2], [85], [86], [70].

Characteristic | Stereo Vision | Structured Light | Time-of-flight
Software complexity | High | High | Low
Material cost | Low | High/Middle | Middle
Response time | Middle | Slow | Fast
Low-light performance | Weak | Good (IR or visible light source) | Good (IR, laser)
Outdoor performance | Good | Weak | Fair
Depth ("z") accuracy | cm | µm to cm | mm to cm
Range | Mid-range | Very short range | Short range
Bright-light performance | Good | Weak | Good
Power consumption | Low | Medium | Scalable


Table 6: Comparison between time-of-flight, stereo vision, and structured light systems [87], [40].

Difference | Time-of-flight (TOF) | Stereoscopic vision | Structured light
Correspondence problem | No | Yes | Yes
Extrinsic calibration | No | Yes | Yes (when used alone)
Auto-illumination | Yes | No | Yes
Untextured surfaces | Good performance | Bad performance | Good performance
Image resolution | Up to 204x204 | High resolution (camera dependent) | High resolution (camera dependent)
Frame rate | Up to 25 fps | Typically 25 fps (camera dependent) | Typically 25 fps (camera dependent)

SUMMARY OF COMPARISON

Table 7: Comparison between the most popular 3D acquisition and range measurement techniques in terms of their advantages and disadvantages [86], [85], [70], [42], [48], [55], [15], [88], [60], [63].

Technique | Advantages | Disadvantages
Stereo vision | High precision on well-defined objects and targets; simple and inexpensive; achieves a high resolution of the entire range scene | Low data acquisition rate; computationally difficult and quite demanding; sparse information and data coverage; usually limited to well-defined objects; dependent on scene surface texture
Structured light | High data acquisition rate; intermediate (medium) measurement volume; performance generally dependent on ambient light | Missing data in correspondence with occlusions and shadows; safety constraints and expense if laser-based; moderate computational complexity
Photogrammetry | Simple and low-priced; high accuracy on well-defined targets | Limited to well-defined scenes; computationally demanding; sparse data coverage and low data acquisition rate
Laser triangulation | Intermediate (medium) measurement volume; high data acquisition rate; performance generally dependent on ambient light | Safety constraints if laser-based; cost; moderate computational complexity; missing data in correspondence with occlusions and shadows
Interferometry | High-level accuracy; sub-micron accuracy for micro-ranges | Measurement capability limited to quasi-flat surfaces; cost; limited applicability in industrial environments
Time-of-flight | Performance generally independent of ambient light; efficient data acquisition rate; medium-to-large measurement range; high accuracy | Accuracy at close ranges inferior to triangulation methods; requires more power for the active illumination; somewhat costly
Shape from focus | Simple and low-priced; sensors available for micro-profilometry and surface inspection | Performance affected by ambient light (if passive); limited field of view and non-uniform spatial resolution


Low system accuracy in controlling huge data volumes with a convenient method is one of the critical issues of 3D acquisition and imaging systems. It can be concluded that high accuracy, higher-quality information, a high frame rate of data processing, and reasonable cost are the main requirements for any unique system. As a brief conclusion to the table above, several 3D systems of various types and different methods have been discussed in terms of their features and limitations. Table 7 illustrates the reasons behind the special attention devoted to the use of time-of-flight technology for obtaining a system with high accuracy and higher-quality information, toward achieving further advances in 3D imaging and acquisition technologies. In contrast, most of the other systems are limited to well-defined scenes, with less control over complex data and information. Time-of-flight, with its accuracy, efficiency, reasonable price, and high data-processing rate, can be a robust alternative for data acquisition compared to other technologies. The use of time-of-flight technology has the potential to open up a massive field of supplemental applications and solutions in markets including multimedia, robotics, 3D imaging, and medical technologies. Another feature of time-of-flight is that (TOF) range data is complete, and thus it can be used for the avoidance or negotiation of obstacles in the surrounding environment at a high rate.

CONCLUSION

Three-dimensional (3D) acquisition and depth measurement have gained a significant role in industrial, academic, entertainment, and other application environments. Systems with high accuracy, higher-quality information, a high frame rate of data processing, and low cost have been a key point of research for numerous researchers aiming to provide convenient solutions in such different areas. A review of some common systems for (3-D) data extraction and range imaging measurement has been presented, covering different concerns including their respective performance, configuration, and applications. Range imaging techniques, with their operating principles, advantages, and disadvantages, have been discussed in more detail, together with a full clarification of the time-of-flight (TOF) concept, which has received special attention in the last few years due to its high performance, efficient measurement, and high accuracy. Thus, this work should be helpful in understanding (3D) perception and range measuring techniques, as well as in identifying an appropriate technique that offers low cost, high accuracy, ease of use, and rapid measurement. It aims to provide a reliable reference for researchers who are interested in (3D) image acquisition and depth measurement.

ACKNOWLEDGEMENT

The authors would like to thank Universiti Teknikal Malaysia Melaka (UTeM) and the Ministry of Science, Technology and Innovation (MOSTI) for supporting this research under Fund 06-01-14-SF0125 L00026.

REFERENCES

[1] A. Schmidt, "Ubiquitous Computing - Computing in Context," Dissertation, Lancaster University, 2003.
[2] C. Niclass, A. Rochas, and P. A. Besse, "Toward a 3-D camera based on single photon avalanche diodes," IEEE J. Sel. Top. Quantum Electron., vol. 10, no. 4, pp. 796-802, 2004.
[3] M. Tobias, K. Holger, F. Jochen, A. Martin, and L. Robert, "Robust 3D Measurement with PMD Sensors," Range Imaging Day, Zürich, 2005.
[4] Q. Yang, R. Yang, and J. Davis, "Spatial-Depth Super Resolution for Range Images," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2007.
[5] Q. Yang, K. Tan, and J. Apostolopoulos, "Fusion of Active and Passive Sensors for Fast 3D Capture," IEEE Int. Workshop on Multimedia Signal Processing (MMSP), 2010.
[6] J.-Y. Son, B. Javidi, S. Yano, and K.-H. Choi, "Recent Developments in 3-D Imaging Technologies," J. Display Technol., vol. 6, no. 10, pp. 394-403, 2010.
[7] M. J. McGrath and C. N. Scanaill, Sensor Technologies: Healthcare, Wellness, and Environmental Applications, 2013.
[8] L. L. Sutro and J. B. Lerman, "Robot Vision," MIT Press, 1986.
[9] T. Motoki, H. Isono, and I. Yuyama, "Present status of three-dimensional television research," Proc. IEEE, vol. 95, no. 6, pp. 1143-1145, 2007.
[10] L. Onural, "Television in 3-D: What are the prospects?," Proc. IEEE, vol. 95, no. 6, pp. 1143-1145, 2007.
[11] W. Sun, O. C. Au, S. H. Chui, and C. W. Kwok, "An Overview of Free Viewpoint Depth-Image-Based Rendering (DIBR)," APSIPA Annual Summit and Conference, 2010.
[12] L. Madracevic and S. Sogoric, "3D Modeling From 2D Images," Proc. 33rd Int. Convention MIPRO, IEEE, 2010.
[13] E. Spelke, "Principles of object perception," Cognitive Science, vol. 14, no. 1, pp. 29-56, 1990.
[14] M.-C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, "Laser ranging: a critical review of usual techniques for distance measurement," Opt. Eng., vol. 40, no. 1, pp. 10-19, 2001.
[15] S. Jecić and N. Drvar, "The assessment of structured light and laser scanning methods in 3D shape measurements," 4th Int. Congress of the Croatian Society of Mechanics, 2003.
[16] S. E. Reutebuch, H. Andersen, and R. J. McGaughey, "Light Detection and Ranging (LIDAR): An Emerging Tool for Multiple Resource Inventory," J. Forestry, vol. 103, no. 6, pp. 286-292, 2005.
[17] M. Hassan, M. Ali, and E. Aktas, "Radio Frequency Identification (RFID) Technologies for Locating Warehouse Resources: A Conceptual Framework," Proc. 2012 European Conf. on Smart Objects, Systems and Technologies (SmartSysTech), VDE, 2012.
[18] R. Rawal, V. Yadav, and S. Sharma, "RADAR - A Brief Study," IJIRT, vol. 1, no. 12, pp. 1017-1020, 2015.
[19] J. E. Bendis, "A History and Future of Stereoscopy in Science Education," Instructional Technology and Academic Computing, Case Western Reserve University, pp. 1-5, 2006.
[20] L. Lipton, Foundations of the Stereoscopic Cinema: A Study in Depth, vol. 1, 1982.
[21] V. K. Beckwith, "Stereoscopic theatre: the impact of gestalt perceptual organization in the stereoscopic theatre environment," 2013.
[22] E. Sammons, The World of 3-D Movies, 1992.


[23] J. M. Vaughan, K. O. Steinvall, C. Werner, and P. H. Flamant, "Coherent laser radar in Europe," Proc. IEEE, vol. 84, no. 2, pp. 205-226, 1996.
[24] K. Chrzanowski, Testing Thermal Imagers - Practical Book, Military University of Technology, Warsaw, Poland, 2010.
[25] Trimble, "The First Global Navigation Satellite System," Trimble Navigation Limited, California, United States, 2007.
[26] J. L. Whittaker et al., "Rehabilitative ultrasound imaging: understanding the technology and its applications," J. Orthop. Sports Phys. Ther., vol. 37, no. 8, pp. 434-449, 2007.
[27] A. Stern and B. Javidi, "Three-Dimensional Image Sensing, Visualization, and Processing Using Integral Imaging," Proc. IEEE, vol. 94, no. 3, pp. 591-607, 2006.
[28] W. F. Greene, "Pioneers of Early Cinema: William Friese-Greene (1855-1921)," 2006.
[29] A. Gaudreault, American Cinema 1890-1909: Themes and Variations, vol. 1, Rutgers University Press, 2009.
[30] S. J. Koppal, C. L. Zitnick, M. F. Cohen, S. B. Kang, and B. Ressler, "A Viewer-Centric Editor for 3D Movies," IEEE Comput. Graph. Appl., vol. 31, no. 1, pp. 20-35, 2011.
[31] "The California Indian in Three-Dimensional Photography," J. Calif. Gt. Basin Anthropol., 1979.
[32] R. Lange, "3-D Time-of-flight distance measurement with custom solid-state image sensors in CMOS/CCD-technology," Dissertation, 2000.
[33] G. Bianco, A. Gallo, F. Bruno, and M. Muzzupappa, "A Comparison Between Active and Passive Techniques for Underwater 3D Applications," Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. (ISPRS), pp. 357-363, 2012.
[34] N. Uchida, T. Shibahara, T. Aoki, H. Nakajima, and K. Kobayashi, "3D face recognition using passive stereo vision," IEEE Int. Conf. on Image Processing (ICIP), vol. 2, 2005.
[35] F. Da and Y. Sui, "3D reconstruction of human face based on an improved seeds-growing algorithm," Mach. Vis. Appl., vol. 22, no. 5, pp. 879-887, 2011.
[36] S. B. Gokturk, H. Yalcin, and C. Bamji, "A time-of-flight depth sensor - System description, issues and solutions," IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2004.
[37] J. Geng, "Three-dimensional display technologies," Adv. Opt. Photonics, vol. 5, no. 4, pp. 456-535, 2013.
[38] R. P. Jacobi, R. B. Cardoso, and G. A. Borges, "VoC: A reconfigurable matrix for stereo vision processing," 20th Int. Parallel and Distributed Processing Symp. (IPDPS), 2006.
[39] U. R. Dhond and J. K. Aggarwal, "Structure from Stereo - A Review," IEEE Trans. Syst. Man Cybern., vol. 19, no. 6, pp. 1489-1510, 1989.
[40] M. Ali, J. R. Shah, and M. Ahmed, "3D reconstruction and model acquisition of objects in real world scenes using stereo imagery," 7th Int. Multi Topic Conf. (INMIC), IEEE, 2003.
[41] S. Roy and I. J. Cox, "A Maximum-Flow Formulation of the N-camera Stereo Correspondence Problem," Sixth Int. Conf. on Computer Vision, pp. 492-499, IEEE, 1998.
[42] S. Roy and I. J. Cox, "A Maximum-Flow Formulation of the N-camera Stereo Correspondence Problem," Sixth Int. Conf. on Computer Vision, pp. 492-499, IEEE, 1998.
[43] J. Maciel and J. P. Costeira, "A global solution to sparse correspondence problems," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 2, pp. 187-199, 2003.
[44] J. Y. Goulermas, P. Liatsis, and T. Fernando, "A constrained nonlinear energy minimization framework for the regularization of the stereo correspondence problem," IEEE Trans. Circuits Syst. Video Technol., vol. 15, no. 4, pp. 550-565, 2005.
[45] I. Sarkar and M. Bansal, "A Wavelet-Based Multiresolution Approach to Solve the Stereo Correspondence Problem Using Mutual Information," IEEE Trans. Syst. Man Cybern. B, vol. 37, no. 4, pp. 1009-1014, 2007.
[46] S. Jegelka, A. Kapoor, and E. Horvitz, "An interactive approach to solving correspondence problems," Int. J. Comput. Vis., vol. 108, no. 1-2, pp. 49-58, 2014.
[47] T. Botterill, R. Green, and S. Mills, "A decision-theoretic formulation for sparse stereo correspondence problems," 2nd Int. Conf. on 3D Vision (3DV), vol. 1, pp. 224-231, IEEE, 2014.
[48] S. K. Nayar and A. Karmarkar, "360x360 Mosaics," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2000.
[49] N. Ahuja, "An Omnidirectional Stereo Vision System Using a Single Camera," 18th Int. Conf. on Pattern Recognition (ICPR), vol. 4, pp. 861-865, IEEE, 2006.
[50] B. Y. A. Krishna, "Stereo Vision-based Vehicular Proximity Estimation," RUcore: Rutgers University Community Repository, 2014.
[51] D. Caruso, J. Engel, and D. Cremers, "Large-Scale Direct SLAM for Omnidirectional Cameras," IEEE, 2015.
[52] Y. Pritch, M. Ben-Ezra, and S. Peleg, "Automatic disparity control in stereo panoramas (OmniStereo)," IEEE Workshop on Omnidirectional Vision, 2000.
[53] R. Orghidan, J. Salvi, and E. M. Mouaddib, "Modelling and accuracy estimation of a new omnidirectional depth computation sensor," Pattern Recognit. Lett., vol. 27, no. 7, pp. 843-853, 2006.
[54] J. Yamaguchi, "Three Dimensional Measurement Using Fisheye Stereo Vision," in Advances in Theory and Applications of Stereo Vision, InTech, pp. 151-164, 2011.
[55] J.-A. Beraldin, F. Blais, L. Cournoyer, G. Godin, and M. Rioux, "Active 3D sensing," National Research Council Canada, vol. 10, pp. 22-46, 2003.
[56] L. G. Hassebrook, D. L. Lau, and H. G. Dietz, "System and method for 3D imaging using structured light illumination," U.S. Patent 8,224,064, Jul. 17, 2012.
[57] J. B. and George, "Video Photogrammetry in Shipbuilding," 1996 Ship Production Symposium, San Diego, California, U.S., pp. 411-425, 1996.
[58] F. Remondino, S. El-Hakim, A. Gruen, and L. Zhang, "Turning images into 3-D models," IEEE Signal Process. Mag., vol. 25, no. 4, pp. 55-65, 2008.
[59] R. I. Hartley and J. L. Mundy, "The relationship between photogrammetry and computer vision," Optical Engineering and Photonics in Aerospace Sensing, pp. 92-105, 1993.


[60] T. A. Clarke, K. T. V. Grattan, and N. E. Lindsey, "Laser-based triangulation techniques in optical inspection of industrial structures," Proc. SPIE, pp. 474-486, International Society for Optics and Photonics, 1991.
[61] A. Walch and C. Eitzinger, "A Combined Calibration of 2D and 3D Sensors: A Novel Calibration for Laser Triangulation Sensors Based on Point Correspondences," IEEE, 2002.
[62] J. G. D. M. França, M. A. Gazziro, A. N. Ide, and J. H. Saito, "A 3D scanning system based on laser triangulation and variable field of view," IEEE Int. Conf. on Image Processing (ICIP), vol. 1, pp. I-425, 2005.
[63] K. J. Gåsvik, Optical Metrology, 3rd ed., John Wiley & Sons, 2003.
[64] R. Bommareddi, "Applications of Optical Interferometer Techniques for Precision Measurements of Changes in Temperature, Growth and Refractive Index of Materials," Technologies, vol. 2, no. 2, pp. 54-75, 2014.
[65] S. Zhang, "High-resolution, Real-time 3-D Shape Measurement," Phys. Rev. Lett., vol. 107, p. 21802, 2011.
[66] A. S. Malik, S.-O. Shim, and T.-S. Choi, "Depth Map Estimation Using a Robust Focus Measure," IEEE Int. Conf. on Image Processing (ICIP), vol. 6, 2007.
[67] M. T. Mahmood, "Shape from focus by total variation," IEEE 11th IVMSP Workshop, 2013.
[68] B. K. Horn and M. J. Brooks, "The Variational Approach to Shape from Shading," Comput. Vision Graph. Image Process., vol. 33, no. 2, pp. 174-208, 1986.
[69] G. Wang, S. Liu, J. Han, and X. Zhang, "A novel shape from shading algorithm for non-Lambertian surfaces," Third Int. Conf. on Measuring Technology and Mechatronics Automation (ICMTMA), vol. 1, IEEE, 2011.
[70] S. Foix and G. Alenyà, "Lock-in Time-of-Flight (ToF) Cameras: A Survey," IEEE Sensors J., vol. 11, no. 9, pp. 1917-1926, 2011.
[71] U. Reiser and J. Kubacki, "Using a 3D time-of-flight range camera for visual tracking," IFAC Proc., vol. 6, no. 1, pp. 355-360, 2007.
[72] A. Swadzba, A.-L. Vollmer, M. Hanheide, and S. Wachsmuth, "Reducing noise and redundancy in registered range data for planar surface extraction," 19th Int. Conf. on Pattern Recognition (ICPR), pp. 1-4, 2008.
[73] Z. C. Marton, R. B. Rusu, D. Jain, U. Klank, and M. Beetz, "Probabilistic categorization of kitchen objects in table settings with a composite sensor," IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 4777-4784, 2009.
[74] Z. Mingming and Z. Yu, "High Quality Range Data Acquisition Using Time-of-Flight Camera and Stereo Vision," Int. Conf. on Virtual Reality and Visualization (ICVRV), pp. 78-83, IEEE, 2011.
[75] S. Acharya, C. Tracey, and A. Rafii, "System design of time-of-flight range camera for car park assist and backup application," IEEE Computer Society Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1-6, 2008.
[76] J. Zhu and J. Davis, "Fusion of Time-of-Flight Depth and Stereo for High Accuracy Depth Maps," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2008.
[77] A. A. Dorrington, A. D. Payne, and M. J. Cree, "An evaluation of time-of-flight range cameras for close range metrology applications," Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 38, pp. 201-206, 2010.
[78] O. Choi and S. Lee, "Wide range stereo time-of-flight camera," 19th IEEE Int. Conf. on Image Processing (ICIP), pp. 557-560, 2012.
[79] T. Kahlmann, F. Remondino, and S. Guillaume, "Range imaging technology: new developments and applications for people identification and tracking," Electronic Imaging 2007, International Society for Optics and Photonics, p. 64910C, 2007.
[80] S. May, S. Fuchs, D. Droeschel, D. Holz, and A. Nüchter, "Robust 3D-Mapping with Time-of-Flight Cameras," IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2009.
[81] J. Seiter, M. Hofbauer, M. Davidovic, S. Schidl, and H. Zimmermann, "Correction of the temperature induced error of the illumination source in a time-of-flight distance measurement setup," IEEE Sensors Applications Symposium (SAS), 2013.
[82] B. Langmann, W. Weihs, K. Hartmann, and O. Loffeld, "Development and investigation of a long-range time-of-flight and color imaging system," IEEE Trans. Cybern., vol. 44, no. 8, pp. 1372-1382, 2014.
[83] J. Jung, J.-Y. Lee, Y. Jeong, and I. S. Kweon, "Time-of-Flight Sensor Calibration for a Color and Depth Camera Pair," IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 7, pp. 1501-1513, 2015.
[84] P. Fursattel et al., "A Comparative Error Analysis of Current Time-of-Flight Sensors," IEEE Trans. Comput. Imaging, 2015.
[85] D. Ko and G. Agarwal, "Gesture recognition: enabling natural interactions with electronics," Texas Instruments White Paper, 2012.
[86] L. Li, "Time-of-Flight Camera - An Introduction," Technical White Paper SLOA190B, 2014.
[87] L. Jovanov, A. Pižurica, and W. Philips, "Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras," Opt. Express, vol. 18, no. 22, pp. 22651-22676, 2010.
[88] P. Rönnholm, E. Honkavaara, P. Litkey, H. Hyyppä, and J. Hyyppä, "Integration of laser scanning and photogrammetry," Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., vol. 36, no. 3/W52, pp. 355-362, 2007.
