
3D Scanning with a Calibrated Stereo Camera Array

Simón Lorenzo, Kevin Johnson
Stanford University, Department of Electrical Engineering
[email protected], [email protected]

Abstract

We design and test an automated, low-cost scanner for producing three-dimensional models of sub-meter-scale objects with millimeter-scale resolution. We mount stereo camera pairs on an aluminum scanning arc controlled by a stepper motor. These camera pairs acquire image pairs of a centered object as the motor rotates the arc, performing a scan. By first calibrating these pairs of low-cost internet-protocol (IP) cameras, we can convert the pixel disparity between image pairs to depth. We process these calibrated depth maps to reduce noise and remove holes. We then generate aligned point clouds from each of the calibrated depth maps and merge them. By further processing the merged point cloud, we generate a color-correct 3D object model with millimeter-scale resolution.

1. Introduction

1.1. Motivation

Our primary motivation is to develop a low-cost, fully automated three-dimensional (3D) scanner for producing 3D models of sub-meter-scale objects with millimeter resolution. Current 3D scanning technologies are high-cost and not entirely automated [1,2]. Our final 3D scan could then be converted to the appropriate file format for existing 3D printer technologies. Ultimately, our device would allow rapid and accurate scanning and printing of any appropriately sized object.

1.2. Related Software

In this project, we use existing software to interface with, collect, and process image data from commercially available hardware. We use the MATLAB Computer Vision Toolbox for calibrating images and generating point clouds from stereo pairs of ELP Megapixel Mini™ IP cameras [3-7]. We collect data from these cameras using the open-source ONVIF Device Manager and iSpy surveillance software [8,9]. The scanning arc is controlled using the Tic stepper motor controller and its software [10].

1.3. Related Depth Mapping Systems

The depth mapping capabilities of our system are comparable to those of the Intel® RealSense™ D435 depth camera seen in Figure 1 [11]. This commercially available camera uses active infrared stereo vision: it generates infrared laser light, projects it onto the scene, and uses infrared-sensitive stereo cameras to determine depth. It also records visible-light data which can be overlaid onto the depth map. This device outputs a 1280 x 720 pixel resolution depth map in real time [11].

In contrast, our proposed design uses only the visible-light data recorded from pairs of stereo cameras. By thoroughly calibrating these camera pairs, we can convert the pixel disparity between left and right images to depth. This reduces cost by eliminating the need for infrared laser sources and detectors. Our static depth maps are 1280 x 720 pixels.

Figure 1: Intel® RealSense™ Depth Camera.

1.4. Related 3D Scanning Systems

Our envisioned final product is similar to stereo scanning technology already on the market. The David Vision HP 3D scanner in Stanford's Product Realization Laboratory (PRL) is one such example (Fig. 2) [1,2]. This scanner projects an array of known light patterns onto an object's surface; the 3D structure of the surface is then determined from the distortion of the patterns. This technology can achieve sub-millimeter resolution, but costs upwards of four thousand US dollars [2]. That resolution is finer than our target millimeter-scale resolution, but we can reduce costs by using only IP camera pairs for data collection. This 3D scanner is also not entirely automated, as the object must be rotated manually to yield a full scan. Our scanner autonomously moves about the object to reduce the workload on the user.

Figure 2: David Vision HP 3D Scanner.

2. Methods

We combine existing hardware and software techniques in novel configurations to create our functioning 3D scanner. We begin by describing the hardware setup of the constructed scanner. Afterwards, we outline the high-level software process used to both calibrate the cameras and generate depth maps from stereo image pairs. Finally, we describe the image processing used to remove noise and fill holes in both the generated two-dimensional (2D) and 3D data sets.

2.1. Hardware

The high-level hardware system is described in the block diagram of Figure 3. A central computer with a graphical user interface (GUI) interfaces with the stepper motor and the power-over-ethernet (POE) switch. The POE switch provides power to each of the IP camera pairs and allows data acquisition by the computer through a local network. Each camera was assigned an IP address through ONVIF Device Manager [192.168.1.100 – 192.168.1.107] [8]. An external LED strip was added to help illuminate the scanning bed. A power supply located in the scanner base provides the necessary power for all elements.

Figure 3: High-level hardware system design. A single central computer with graphical user interface (GUI) controls the IP camera pairs, scanner arc motion, and scene illumination.

The scanner consists of several hardware components: the IP cameras, aluminum scanning arc, aluminum stage, POE switch, motor control driver, motor with gearbox, 3D-printed motor mount, 3D-printed aluminum-to-motor-shaft interfaces, and a computer (Fig. 4).

Figure 4: Rendered CAD Model of 3D Scanner.

The IP stereo camera pairs are first mounted to an aluminum arc (Fig. 5). The aluminum arc was custom made in the PRL using a metal rod bender. The arc is free to rotate via D-shaft rods which are attached to the scanner's frame through pillow bearings. One of these D-shafts is coupled to and rotated by the stepper motor during the scan (Fig. 6). The computer controls the stepper motor using a Tic stepper motor controller chip [10].

Figure 5: Mounted Stereo Camera Pair.

Figure 6: Stepper Motor.

The rotation occurs about the object of interest, which is placed at the center of the scan bed (Fig. 4). The central computer acquires image data from each of the IP cameras via ethernet cables through the POE switch enclosed in the server rack-mount hardware. A CyberPower surge protector with an industrial-grade metal housing was used to power the system from a 120 V / 20 A wall outlet.

2.2. Software

The software used for the scanner falls into two main categories: software for interfacing with the hardware, and software for processing the images.

Several software packages are used to interface with the scanner hardware. The Tic stepper motor controller chip comes with GUI software to control the motor rotation range and speed [10]. The ONVIF Device Manager is used by the computer to detect and assign IP addresses to the IP cameras [8]. The iSpy home security software is used to view the real-time camera data and trigger imaging throughout the scan [9]. These images are saved on the computer for later processing.

The image processing was performed in MATLAB using both the Computer Vision Toolbox [3] and additional disparity map processing functions [12]. A high-level overview of the 3D scanning process, along with descriptions of the denoising and hole-filling algorithms, is provided below.

2.2.1 High-Level Software Overview

The camera calibration and three-dimensional (3D) point cloud generation are outlined in the high-level software block diagram of Figure 7. Calibrating each stereo pair determines the camera focal length, distortion, and image skew due to viewing angle.

To generate a merged 3D point cloud of an object of interest, we begin by taking stereo image pairs of the object during the scan (Fig. 7b). We compute the pixel disparity between these images using a block search algorithm, which searches for similar regions between the left and right images to determine their lateral pixel shift, or disparity. Disparity depends on the distance from the stereo camera pair: closer objects have a larger disparity than farther objects. This disparity map is processed to reduce noise and fill holes using the methods detailed in the following section [12]. We then use the calibration data from Figure 7a to convert the pixel disparity to a distance in mm.

This strategy yields a partial point cloud for each disparity map. We combine the point clouds from different viewing angles to fully reconstruct the object. To determine the extrinsic transforms necessary to align these point clouds, we use the checkerboard detection function on a fixed alignment checkerboard placed next to the object of interest. Finally, the result of the autonomous scan is a merged 3D point cloud.

Figure 7: High-level software design: a) We calibrate pairs of stereo cameras to provide the camera intrinsic and extrinsic data necessary to create depth maps. b) We capture stereo pair images of both an object and a small checkerboard. We then compute and process the disparity map for each of the image pairs and generate a 3D point cloud using the calibration data. These point clouds are then aligned using the small checkerboard and merged.

2.2.2 Disparity Map Processing

To improve the quality of the merged 3D point cloud, the disparity maps used to generate each individual point cloud went through two processing steps (Fig. 8). First, the rectified stereo image pairs were put through a Gaussian filter with a sigma of 3; this value was found through multiple trial-and-error runs with visual inspection. We further enhanced the results by pre-processing the disparity maps themselves with a Gaussian filter [12]. To remove unwanted holes, each disparity map was then fed into a hole-filling function. This process is detailed in Figure 8.
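The block search described in Section 2.2.1 can be illustrated with a toy example. The authors used MATLAB's Computer Vision Toolbox for this step; the following is a minimal pure-Python sketch of the same idea, matching fixed-size blocks along one rectified scanline by sum of absolute differences (SAD). The function name, block size, and search range are illustrative choices, not the paper's parameters.

```python
def sad(a, b):
    # Sum of absolute differences between two equal-length patches.
    return sum(abs(x - y) for x, y in zip(a, b))

def scanline_disparity(left, right, block=3, max_disp=8):
    """Toy block matcher on one rectified scanline.
    For each block in the left row, try lateral shifts in the right
    row and keep the shift with the lowest SAD cost."""
    half = block // 2
    disp = [0] * len(left)
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best, best_cost = 0, float("inf")
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1]
            cost = sad(patch, cand)
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# The right row is the left row shifted by 2 pixels, so the matcher
# should recover a disparity of 2 over the textured interior.
left = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
right = left[2:] + [0, 0]
print(scanline_disparity(left, right))
```

A real implementation operates on 2D blocks over the full image and adds sub-pixel refinement and uniqueness checks, which this sketch omits.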
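The conversion from pixel disparity to metric depth (Section 2.2.1) follows the standard rectified-stereo relation depth = f · B / d, where f is the focal length in pixels and B is the stereo baseline. A minimal sketch, with hypothetical calibration values in place of the paper's actual ones:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Standard rectified-stereo triangulation: depth = f * B / d.
    focal_px comes from calibration; baseline_mm is the distance
    between the two camera centers of a stereo pair."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to infinity
    return focal_px * baseline_mm / disparity_px

# Hypothetical values for illustration only:
f = 1000.0   # focal length in pixels
B = 60.0     # baseline in mm
print(disparity_to_depth(50.0, f, B))  # 1200.0 (mm)
```

This also shows why closer objects have larger disparity, as noted in the text: depth and disparity are inversely proportional.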
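The paper cites an external hole-filling function [12] without detailing its algorithm. One simple strategy that such functions commonly use, propagating the nearest valid disparity along each row, is sketched below in pure Python; treat it as an assumption about the general approach, not the authors' exact method.

```python
def fill_holes_row(row, invalid=None):
    """Fill invalid entries in one disparity row with the nearest
    valid value to the left; a backward pass then handles any
    holes at the start of the row."""
    out = list(row)
    last = None
    for i, v in enumerate(out):
        if v != invalid:
            last = v
        elif last is not None:
            out[i] = last
    nxt = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] != invalid:
            nxt = out[i]
        elif nxt is not None:
            out[i] = nxt
    return out

# Holes (None) are replaced by their nearest valid neighbors:
print(fill_holes_row([None, 5, None, None, 8, None]))
# [5, 5, 5, 5, 8, 8]
```

In practice this runs after the Gaussian smoothing described in Section 2.2.2, so the filled values are propagated from already-denoised disparities.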
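Once the alignment checkerboard yields an extrinsic transform for each viewing angle, merging reduces to applying a rigid transform (rotation R, translation t) to each partial cloud before concatenation. A self-contained sketch of that final step, with an example transform standing in for one recovered from the checkerboard:

```python
def apply_rigid_transform(points, R, t):
    """Map each 3D point p to R @ p + t (pure-Python 3x3 matmul)."""
    out = []
    for p in points:
        q = [sum(R[i][j] * p[j] for j in range(3)) + t[i]
             for i in range(3)]
        out.append(q)
    return out

# Example extrinsic: 90-degree rotation about z plus a translation.
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = [10.0, 0.0, 0.0]

# Register a second view's partial cloud into the reference frame,
# then merge by concatenation with the reference cloud.
cloud_view2 = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
merged = [[0.0, 0.0, 0.0]] + apply_rigid_transform(cloud_view2, R, t)
print(merged)
```

The paper performs this in MATLAB using the calibration and checkerboard-detection outputs of the Computer Vision Toolbox; this sketch only shows the geometry of the registration step.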