DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS STOCKHOLM, SWEDEN 2017

3d scanner Accuracy, performance and challenges with a low cost 3d scanning platform

JOHAN MOBERG

KTH ROYAL INSTITUTE OF TECHNOLOGY, SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT

Accuracy, performance and challenges with a low cost 3d scanning platform

JOHAN MOBERG

Bachelor’s Thesis at ITM Supervisor: Didem Gürdür Examiner: Nihad Subasic

TRITA MMK 2017:22 MDAB 640

Abstract

3d scanning of objects and their surroundings has many practical uses. During the last decade, reduced cost and increased performance have made 3d scanners accessible to larger consumer groups. The price point is however still high, with popular scanners in the range of 30,000-50,000 USD. The objective of this thesis is to investigate the accuracy and limitations of time-of-flight scanners and compare them to the results acquired with a low cost platform constructed from consumer grade parts. For validation purposes the constructed 3d scanner will be put through several tests to measure its accuracy and ability to create realistic representations of its environment.

The constructed demonstrator produced significantly less accurate results and its scanning time was much longer compared to a popular competitor. This was mainly due to the cheaper laser sensor and not the mechanical construction itself. There are however many applications where higher accuracy is not essential and, with some modifications, a low cost solution could have many potential use cases, especially since it only costs 1% of the compared product.

Referat

3d scanning of objects and the surroundings has many practical uses. Over the last decade, falling prices and new techniques have made the technology available to larger groups. The equipment is however still relatively expensive; popular scanners cost between 300,000 and 500,000 SEK. The purpose of this thesis is to evaluate and examine the accuracy of 3d scanning based on time-of-flight technology and compare the results with a cheap platform based on consumer products. To evaluate the process, a 3d scanner is constructed and then put through several tests in order to examine its accuracy and its ability to create a realistic model.

The constructed 3d scanner had considerably lower accuracy and the scanning took longer compared to a popular product on the market. This is mainly due to the cheaper laser sensor and not the mechanical construction. However, there are many applications where very high accuracy is not necessary. With some modifications the low cost platform could have many use cases, especially since it only costs 1% of the compared product.

Acknowledgments

I would like to thank my supervisor Didem Gürdür for good and efficient feedback.

Contents

Abstract
Sammanfattning
Acknowledgments

1 Introduction
1.1 Background
1.2 Purpose
1.3 Scope
1.4 Method

2 Theory
2.1 3d scanning concept
2.1.2 Non-contact active methods
2.1.3 Non-contact passive methods
2.1.4 Typical use cases

3 Demonstrator
3.1 Hardware – Construction overview
3.1.1 Upper body
3.1.2 Lower body
3.2 Mechanical limitations/constraints
3.2.1 Dynamic properties
3.2.2 Motor accuracy
3.2.3 Belt and pulley accuracy
3.3 Device overview
3.3.1 Electronics overview
3.4 Software overview
3.4.1 Image mapping
3.4.2 Virtual points
3.4.3 Connectivity

4 Results
4.1 Single color results
4.2 RGB results
4.3 Comparison

5 Discussion and conclusions
5.1 Sample density
5.2 Scan overlap
5.3 Scan speed and beam divergence
5.4 RGB challenges
5.5 A good compromise
5.6 Conclusion

6 Recommendations and future work
6.1 Redesign
6.2 Multipurpose 3d scanner
6.3 Potential use cases

Bibliography

Appendices
A PCB design
B C-code for micro-controller
C Optional Python RGB point mapping
D Main Python code for Raspberry Pi

Nomenclature

Symbol       Description
d            Axial offset distance from center of gravity
F            Friction force counteracting rotation
g            Gravity
Izz          Inertia around rotational axis
Mv1          Torque reaction
Mv2          Torque applied by motor
Mdt          Detent torque
Mht          Motor holding torque
m            Mass
N            Normal force
µfriction    Friction coefficient
ω1           Angular acceleration of lower body
ω2           Angular acceleration of upper body
ρsteps       Output torque factor per micro step

List of Abbreviations

Abbreviation  Description
3d            Three-dimensional
AC            Alternating current
ADC           Analog-to-digital converter
CAD           Computer-aided design
CoG           Center of gravity
CPU           Central processing unit
DC            Direct current
I2C           Inter-Integrated Circuit
ICP           Iterative closest point
IR            Infrared
ISP           In-System Programming
LED           Light emitting diode
LIDAR         Light detection and ranging
PCB           Printed circuit board
PPR           Pulses per revolution
RGB           Red, green, blue
RPS           Revolutions per second
SD            Secure Digital
SLAM          Simultaneous localization and mapping
SPI           Serial Peripheral Interface
TOF           Time of flight
UART          Universal asynchronous receiver/transmitter
USB           Universal Serial Bus

List of Tables

Table  Description
2.1    Product comparison between popular 3d scanners
3.1    Holding torque depending on micro step setting
3.2    Resulting torque with different micro step settings
3.3    Step motor accuracy test results
4.1    Product comparison between typical 3d scanners
4.2    Scanning time and accuracy depending on settings

Chapter 1 Introduction

1.1 Background

3d scanning of objects and their surroundings has many practical uses. During the last decade, reduced cost and increased performance have made 3d scanners accessible to larger consumer groups. The price point is however still high, with popular scanners in the range of 30,000-50,000 USD. The objective of this thesis is to investigate the accuracy and limitations of time-of-flight laser scanners and compare them to the results acquired with a low cost platform constructed from consumer grade parts.

1.2 Purpose

The purpose of this thesis is to answer two main research questions:

Research Question 1: What accuracy can be achieved with the developed low cost platform?

Research Question 2: How does this implementation differ from comparable popular, high end devices on the market?

1.3 Scope

A low cost device (<3,000 SEK) will be constructed out of parts available to the average consumer. The design and construction phase is limited to two months, part time for a single person.

1.4 Method

Before the device is constructed, some estimations need to be made to see what ideal performance can be expected from it. These estimations show that even an ideal platform based on cheap parts will be significantly slower and less accurate than a popular competitor, due to the lower performance of the cheaper laser sensor. The main design goal for the platform is therefore to maximize the performance of the laser sensor while being stable and easy to use. Using a better laser sensor is not an option, since it would increase the cost roughly tenfold.

When the demonstrator is completed, its performance will be tested in static tests such as repeatability and accuracy tests, but also practical tests such as scanning rooms and entire buildings. The resulting 3d models should be usable in commonly used 3d software.

Chapter 2 Theory

This chapter covers the concept and various techniques for 3d scanning.

2.1 3d scanning concept

The purpose of 3d scanning is to collect data of a real-world object or environment and recreate it in the form of a digital 3d model. This 3d model has many applications, ranging from movie production to production quality control.

To create a 3d model, the first step is the 3d scan. This results in a pointcloud, often millions of points, shaped like the scanned object and placed in a Cartesian coordinate system. These points are often so densely packed that they might appear as a solid 3d model (as seen in Figure 2.1). To capture the color of each sample point a camera can be used. This results in an "RGB pointcloud", a point cloud where each point has a color in the RGB (red, green, blue) color scale.

Figure 2.1: Concept image, 3d scanner sampling points from its environment. [Architecture Media]

When the 3d scan is completed, the pointcloud is analyzed in a computer program to create a surface model from it. Simply put and very simplified, it is "connecting the dots" to create surfaces (Figure 2.2). These surface reconstruction methods are generally very mathematically complex and are outside the scope of this thesis. It is however relevant to note that the quality of the final surface model is very much dependent on the quality of the 3d scan.

Figure 2.2: Concept illustration, from pointcloud to surface. [Made in Illustrator]

3d scanning devices can generally be divided into two main groups: contact and non-contact scanners [Curless 2000]. Contact scanners probe the object using mechanical sensors to measure shapes and distances of smaller objects. Non-contact scanning in turn can be divided into active and passive scanners. Active scanners emit various kinds of light, while passive scanners do not emit anything themselves but rely on ambient light.

Figure 2.3: Illustration of 3d scanning methods: contact and non-contact, where the non-contact methods are divided into active (time-of-flight, triangulation, structured light) and passive (stereoscopic, silhouette, photometric). [Made in Illustrator]

2.1.2 Non-contact active methods

Time-of-flight (TOF) scanners pulse a laser beam onto an object and calculate the time between emitting the light and receiving the diffusely reflected light. The equation for the distance is therefore:

d = (c · t) / 2    (2.1)

where d is the distance, c is the speed of light and t is the time of flight.
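As a quick illustration of equation (2.1), the minimal sketch below (Python, with illustrative values chosen here, not taken from any particular sensor) converts a measured round-trip time into a distance:

```python
# Minimal sketch of equation (2.1): distance from a measured round-trip time of flight.
C = 299_792_458  # speed of light [m/s]

def tof_distance(t_flight_s: float) -> float:
    """Distance [m] for a round-trip time of flight [s]; the pulse travels out and back."""
    return C * t_flight_s / 2

# Example: a 20 ns round trip corresponds to roughly 3 m.
print(tof_distance(20e-9))
```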

There are several methods for transmitting and analyzing the reflected light, mainly to reduce noise and increase accuracy. The range limitation of the system depends on how much diffuse light is reflected back to the sensor.

Figure 2.4: Illustration of the principle of TOF scanning. [Adams 2010]

As the amount of reflected light varies with both angle Θ and the reflectance of the material scanned, range limitations are often specified to a specific reflectance and angle instead of an absolute range. For example: “range of 0.6m - 30m with upright incidence to a 90% reflective surface”.

As the speed of light is nearly 300,000 km/s, the TOF for a 1 cm measurement would be approximately 6·10⁻¹¹ seconds. According to the Nyquist sampling theorem, the sample rate must be at least twice the sampled frequency. In the above example this would require at least a 1.2 THz analog-to-digital converter (ADC) or several coupled GHz-range ADCs [Texas instruments 2011].

Triangulation is commonly used where very precise and short range measurements are needed, such as verification of machined parts. The method utilizes at least one laser and image sensor placed with an offset to each other. The laser commonly projects a point, line or pattern on the scanned object and the image sensor captures its offset from the image center. When combining this offset with the focal length between the image sensor and the lens, the distance can be calculated.

Figure 2.5: Illustration of the principle of triangulation scanning. [János 2008]
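In its simplest form the triangulation calculation follows from similar triangles. The sketch below illustrates that idea; the baseline, focal length and sensor-offset values are illustrative assumptions, not parameters of any specific scanner, and the exact geometry in Figure 2.5 may differ:

```python
# Simplified laser triangulation sketch (similar triangles). Real scanners use a
# calibrated geometry; this only shows the basic relation described above.
def triangulated_distance(baseline_m: float, focal_length_mm: float,
                          offset_on_sensor_mm: float) -> float:
    """Distance to the laser spot from the offset of its image on the sensor."""
    return baseline_m * focal_length_mm / offset_on_sensor_mm

# Example: 50 mm baseline, 8 mm focal length, spot imaged 0.4 mm off-center -> 1.0 m
print(triangulated_distance(0.050, 8.0, 0.4))
```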

Structured light on the other hand uses a projector to project various striped patterns onto the scanned object. Initially the cameras take photos with the light source turned off to capture the colors of the object. During the second stage the projector is turned on and projects one or more patterns on the surface, which the cameras register. By analyzing the distortions in the patterns the 3d geometry can be determined. The concept of using cameras and a projector enables this method to quickly scan objects, but due to the use of visible light it is more sensitive to ambient light, reflections and shadows compared to triangulation [Nayar 2012].

Figure 2.6: Illustration of the principle of structured light scanning, with a projector and two cameras. [Vanessaezekowitz 2008]

Figure 2.7: HandyScan 300 [Creaform 3D 2017]
Figure 2.8: Go!Scan 300 [Creaform 3D 2017]
Figure 2.9: Faro 3D X30 [Faro 2015]

Table 2.1: Product comparison between popular products of each type

Product              HandySCAN 300         Go!SCAN 50           Faro 3D X 30
Technology           Laser triangulation   Structured light     Time of flight
Accuracy             0.040 mm              0.100 mm             2 mm
Samples per second   205,000               550,000              122,000
Scanning area        225 x 250 mm          143 x 108 mm         360° by 300° @ 30 m radius
Price approx.        400,000 SEK           +200,000 SEK         300,000 SEK
                     [iReviews 2014]       [Aniwaa 2016]        [Pobonline 2011]

2.1.3 Non-contact passive methods

Stereoscopic scanning is based on the same principle as human vision: by observing an object with two slightly offset cameras, the depth of the object can be calculated. Both triangulation and structured light can use the same principle of two cameras, but plain stereoscopic 3d scanning is passive in the sense that no external light (only ambient light) is projected onto the object.

Figure 2.10: Illustration of the principle of stereoscopic scanning. [Made in Illustrator]


Silhouette scanning uses only one camera and requires the scanned object to be placed in front of a single colored backplate.

Step 1: The object is rotated 360° around the z-axis while several photos are taken.

Step 2: Each photo is converted to a two colored silhouette of the object's outline. The object is filled with a single color on a black background.

Step 3: All silhouette images are merged to form a 3d object, with the surface texture added from the original photos.

Figure 2.11: Illustration of the principle of silhouette scanning. [Made in Illustrator]

Photometric scanning requires only a single hand-held camera and computer software to create a 3d model with high surface texture quality. Several overlapping images are taken of the object so that surface normals can be calculated by analyzing the shadows. Using these surface normals a 3d object can be created. By placing up to 100 cameras in fixed positions surrounding an object, an "instant" 3d scan can be completed. This is useful when doing high resolution scans of people.

Figure 2.12: Illustration of the principle of photometric scanning. [Meekohi 2017]

As many 3d scanners implement a combination of stereoscopic, silhouette and photometric techniques, they are not suitable for a direct comparison. Instead we will look at different applications where they can be used.

2.1.4 Typical use cases

Use case 1 - Scan of matte, gray coffee mug to use for

Cheap (5000 SEK [Amazon 2017]) and accurate 3d scanner with 0.5 mm resolution (using ideal objects). Shiny and dark objects are problematic. Creates a 3d model without texture (just the shape) and scanning is time consuming, a coffee mug would take about 30-40 minutes.

Figure 2.13: Matter and Form MFS1V1 3D Scanner [Amazon 2017]

Use case 2 - Scan of room for interior design company Scans large areas quickly, user can walk around and scan objects which are then 3d modeled with textured surface. Low cost (2500 SEK [Amazon 2017]) but poor accuracy and performance especially in dark areas. The primary use case for this type of device is more to create an acceptable representation of an object or area rather than an accurate one.

Figure 2.14: 3D Systems' iSense [Amazon 2017]

Use case 3 - Scan of a full size car to use in a or movie

Using a professional camera, proper lighting and correct techniques, the software produces a realistic high resolution 3d model. Poor lighting or incomplete 360° scans cause problems. The size of the 3d model is not related to the real object's size, which makes it unsuitable for many engineering applications.

Figure 2.15: Agisoft PhotoScan software [Agisoft 2017]

Use case 4 - Scan of an engine for .

Extremely high accuracy, suitable for smaller objects for engineering applications but comes at a high cost (400 000 SEK [iReviews 2014]).

Figure 2.16: HandySCAN300 [Creaform 3D 2017]

Use case 5 - Scan of a church for historical conservation

Accurate (2 mm), fast and long range enables the scanner to quickly scan large areas where using a hand-held device is not possible or would be too time consuming.

Figure 2.17: Faro 3D X 30 [Faro 2017]

Chapter 3 Demonstrator

This chapter covers the design, construction and testing of the demonstrator.

3.1 Hardware – Construction overview

The device is roughly 20x30 cm large. The top assembly rotates 360° around the θ-axis (red) and 97° around the φ-axis (blue). The main components in the top assembly are the camera and the light detection and ranging (LIDAR) sensor, which are responsible for capturing the environment. The focus of the design is small size and ease of prototyping. For example:

• Most parts are locked in place by a lip/groove design and gravity – to disassemble, parts can simply be lifted out.
• The most vital parts of the construction will need to be adjusted during development. To simplify this process they are placed together on a separate frame (inner assembly) which is easy to remove, making them easy to reach and adjust.
• Firmware can be updated over WiFi or by using a connector located on the lower body, so there is no need to disassemble the unit when revising code.
• 3d printed parts have been designed to be easy to manufacture even on entry level 3d printers.

Figure 3.1: Exploded view of main components: upper body (hat, LIDAR, camera, inner assembly, cylinder) and lower body (housing, bottom plate). [Model made in Solidworks and image rendered in Keyshot]

3.1.1 Upper body

The upper body contains virtually all complex components and electronics. The whole upper body is rotated around the Z-axis by the lower body. To increase stability during rotation of the upper body, all components have been placed so as to minimize the rotational inertia. The cylinder is kept in place by two bearings press fitted on the bottom axis. These bearings are in turn press fitted into the lower body.

The upper body is connected to the lower body with 12 wires. The wire harness is connected to a slip ring (see section 3.1.2 Lower body) which allows the upper body to rotate freely.

Figure 3.2: Hat and cylinder locking mechanism. [Model made in Solidworks and image rendered in Keyshot]

The three sub-assemblies of the upper body are linked together by lip/groove features and locking indents. This makes it easier to disassemble the unit, and fewer screws improve the aesthetics with a sleeker look.

Figure 3.3: Hat and cylinder locking mechanism. [Model made in Solidworks and image rendered in Keyshot]
Figure 3.4: Inner assembly and cylinder locking mechanism. [Model made in Solidworks and image rendered in Keyshot]

The unit can be operated in two ways, either using the two buttons located on the lower body or by connecting to the Raspberry Pi over WiFi. By pressing the green start button the unit will start to sample roughly 2 million points from a 360° (z-axis) by 96° (x-axis) area, while the red button resets the unit. Operating the device over WiFi however allows for more control of the operation, such as setting specific angles to scan.

The motors, sensors and LIDAR are run and managed by the micro-controller on the printed circuit board (PCB), which in turn is run by the Raspberry Pi. To verify the position and angle of the LIDAR two optical sensors are used, one in the inner assembly and one inside the cylinder. When an object obscures the infrared (IR) light, the micro-controller registers the position. In the inner assembly this is done by the position disc on the LIDAR bracket. 96° of the disc has a larger diameter compared to the rest; this area obscures the sensor until either the top or bottom position is reached.

By knowing the actual position, the unit can be re-calibrated during the sampling process. This minimizes offset errors and thereby improves accuracy.

Figure 3.5: Front and back view of the inner assembly. [Model made in Solidworks and image rendered in Keyshot]

Figure 3.6: Exploded view of the inner assembly: custom made PCB (simplified illustration), LIDAR bracket with position disc, belt, LIDAR, pulley, camera bracket, bearing, steel axis, step motor with rails, motor bracket, camera with harness and Raspberry Pi. [Model made in Solidworks and image rendered in Keyshot]

The cylinder is press-fitted to bearings located in the lower body. When the cylinder is in place, the second optic sensor is aligned. The extruded pin on the lower body will trigger the sensor once per revolution; this is the starting or "home" position. To avoid triggering the sensor instantly on the next run, the sensor reading is ignored until the sensor has passed the pin. To ensure the cylinder stops at the exact place every time, the cylinder will only rotate in one direction. This is because the sensor triggers on the edge of the extruded pin; if the cylinder switched direction, the position would be offset by the width of the extruded pin.

Figure 3.7: Connection between the cylinder and lower body. [Model made in Solidworks and image rendered in Keyshot]

3.1.2 Lower body

The lower body houses a stronger step motor with pulleys, two bearings and a belt. By using two aligned bearings, the belt can be tensioned without exerting a resulting torque on a single bearing. Both motors are attached to a rail instead of a fixed position to allow belt tension adjustments.

The lower body also holds a slip ring for wire connections with the upper body and two connectors on the side panel. These connectors power the unit and enable connections over the serial peripheral interface (SPI) to be made to the micro-controller on the printed circuit board (PCB). The ideal position for the motor would be centered below the upper body, as this would eliminate the need for a pulley and belt. However, unless the slip ring is placed in this position it would not be possible for the upper body to rotate freely without tangling the wires.

Figure 3.8: Cut-away view of the lower body: bearings, belt, pulleys, step motor, 15 V DC input, 8-pin connector for firmware updates, slip ring and bottom plate. [Model made in Solidworks and image rendered in Keyshot]

Figure 3.9: Sketch of the mechanical friction slip ring concept. [Powerbyproxi 2017]

The slip ring allows twelve wires to be connected to the rotating upper body assembly. The concept of the friction slip ring is similar to a traditional brushed DC motor, where brushes transfer electricity to a rotating shaft. This interface does however have its drawbacks, depending on the quality and type of the slip ring. In the demonstrator a mechanical friction slip ring is used, which adds some signal distortion and will with time be subject to corrosion and wear. Contactless slip rings are less prone to wear and signal distortion but are generally more expensive and complicated.

3.2 Mechanical limitations/constraints

To be able to use the unit in various environments such as outdoors, storage rooms etc., size and mobility are important factors. Moreover, the unit is not dependent on WiFi or external computers to take measurements. The unit only requires 230 V alternating current (AC) power for its external direct current (DC) adapter.

3.2.1 Dynamic properties

To acquire reliable and fast readings, the upper assembly of the unit needs to accelerate and maintain a relatively constant speed without any rotation or movement of the lower assembly. This sets some conditions regarding center of mass, total mass, geometry and torque. To find the requirements of the motors, a more detailed analysis of forces and inertia is needed.

Figure 3.10: Friction force and torque acting upon the body. [Model made in Solidworks, image rendered in Keyshot, drawings in Illustrator]

By using data from the computer-aided design (CAD) model with respect to mass, the center of gravity (CoG) and inertia, it is possible to estimate the maximum angular acceleration. If the angular acceleration of the upper assembly is higher than the lower assembly can counteract, the lower assembly will move and cause incorrect measurements. This can be solved either by balancing the torque of the motor to suit the unit or by controlling the acceleration through software ramping. However, if the angular acceleration is too low, the total scan period will be longer than necessary. To find the maximum acceleration we solve for M_v2,max:

M_v2,max = M_v1 + F · d    (3.1)

M_v2,max = (I_Frame + I_Rotor) · ω_2,max = µ_friction · N · d + I_1 · ω_1    (3.2)

ω_1 = 0    (3.3)

The maximum torque that can be applied without the lower assembly accelerating (ω_1 = 0) is equal to the counteracting torque created by the friction force between the body and the surface. With ω_1 = 0 the equation is reduced to:

ω_2,max = (µ_friction · N · d) / (I_Frame + I_Rotor) = 35.7 rad/s²    (3.4)

M_v1,max = F · d = 0.485 Nm    (3.5)

The maximal acceleration is slightly above 1 revolution per second per second (RPS²), well within the speed requirement. The chosen motor has a holding torque (M_ht) of 0.48 Nm [Wantmotor 2017], close to optimal for the angular acceleration. There are however more aspects to take into consideration. The motor and the axis are connected through a belt and pulleys with a gear ratio of 3 (η). By using a method called "micro stepping", the motor resolution is increased but torque is decreased. The motor also has a detent torque (M_dt) resisting rotation [Motioncontroltips 2017] and, depending on the micro step setting, the motor will only partially engage its coils (ρ_steps) in order to produce the "extra steps". The final output torque to the axis is therefore:

M_v2,max = η · (M_ht · ρ_steps − 2 · M_dt)    (3.6)

Table 3.1: Holding torque depending on micro step setting [MICROMO 2015]

Micro steps per full step    Output torque factor per micro step (ρ_steps)
1                            1.0
2                            0.71
4                            0.383
8                            0.195
16                           0.098

For example: using 2 micro steps per full step, the resolution of the motor is doubled but the output torque is reduced by 29%.

Table 3.2: Resulting Mv2max with different micro step settings.

Micro steps per full step    Low speed torque, M_v2,max [Nm]
1                            1.27
2                            0.85
4                            0.38
8                            0.11
16                           ≈ 0
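As a sanity check, the short sketch below recomputes Table 3.2 from equation (3.6). The detent torque M_dt is not stated in the text, so the value used here is an assumption chosen to land close to the tabulated numbers:

```python
# Recomputes Table 3.2 from equation (3.6): M_v2,max = eta * (M_ht * rho - 2 * M_dt).
ETA = 3          # belt/pulley gear ratio (eta)
M_HT = 0.48      # motor holding torque [Nm]
M_DT = 0.028     # assumed detent torque [Nm] (not given in the text)

RHO_STEPS = {1: 1.0, 2: 0.71, 4: 0.383, 8: 0.195, 16: 0.098}  # Table 3.1

for microsteps, rho in RHO_STEPS.items():
    torque = ETA * (M_HT * rho - 2 * M_DT)
    print(f"{microsteps:>2} micro steps: {max(torque, 0.0):.2f} Nm")
```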

There are two viable options without using software acceleration control, using either two or four micro steps per full step to maximize the acceleration without rotating the lower assembly. Depending on unaccounted or changing parameters (such as friction coefficient), we can either output slightly more or less torque than the theoretically optimal by simply changing one parameter in the software (micro stepping setting).

3.2.2 Motor accuracy

According to the motor manufacturer the step accuracy is ±5%. To test the accuracy of the entire upper assembly, it is set to rotate and count the number of steps until it has triggered the optic sensor (one full revolution). This is repeated 30 times and the results are logged. If all components were ideal this would, with the current step mode settings, be 2432 steps.

Table 3.3: Step motor accuracy test results

Sample size                           30
Mean [steps]                          2432
Geometric mean [steps]                2432
Standard deviation [steps]            1.0171
Resulting angle offset [°]            0.0156
Min/Max [steps]                       2430 / 2434
Mean absolute deviation [steps]       0.8
99.9% confidence interval [steps]     2431.3 – 2432.7

While this error (0.0156°) might seem minuscule for one revolution, a full scan consists of about 500 revolutions. As this error accumulates over time, the final error would be between 5 and 10 degrees. As the maximum range of the device is 40 m, a 5-10° offset would result in the LIDAR measuring a sample 3-7 m off the intended point.
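The effect of the accumulating error can be estimated with a short back-of-envelope calculation; the numbers below follow the figures quoted above:

```python
# Back-of-envelope estimate of the accumulated step error without recalibration.
import math

error_per_rev_deg = 0.0156   # mean angular error per revolution (Table 3.3)
revolutions = 500            # revolutions in a full scan
max_range_m = 40             # maximum LIDAR range

accumulated_deg = error_per_rev_deg * revolutions                 # ~7.8 degrees
offset_m = max_range_m * math.sin(math.radians(accumulated_deg))  # ~5.4 m at 40 m

print(f"accumulated error: {accumulated_deg:.1f} deg, offset at {max_range_m} m: {offset_m:.1f} m")
```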

The solution is to have the unit reset its position every time the upper assembly passes the sensor (every full revolution). For comparison, the two images below represents the samples taken of a plane ceiling, with and without recalibration.

Figure 3.11: Samples of ceiling, without recalibration. Clearly distorted (skewed). [Image captured from CloudCompare]

Figure 3.12: Samples of ceiling, with recalibration. Samples are now in the same plane. [Image captured from CloudCompare]

3.2.3 Belt and pulley accuracy

As with most types of power transmission there is some inaccuracy between the movement of the belt and the pulley. This is known as "backlash": the pulley is able to rotate some distance before gripping onto the belt. The magnitude of this error depends on the type of belt and pulley. There are many belt standards, each with its own benefits and drawbacks. In the demonstrator a T2.5 belt is used, mainly due to its wide availability and low cost. This type does however generally induce more backlash than the G-type belt. Around the demonstrator's z-axis the backlash is avoided by always rotating in the same direction and by using bearings with relatively high friction. The bearings always apply a small amount of friction torque, acting as a brake.

Figure 3.13: Illustration of the backlash concept. [TMD 2017]

Figure 3.14: T and G profile belts. The round profile of the G belt minimizes the gap between the teeth and is therefore subject to less backlash. [Printersketch3d 2017]

When the motor stops driving the belt, the rotational inertia of the upper body would force the pulley to move forward within the small gap between the teeth (backlash). But due to the friction torque applied by the bearings acting as a brake, the rotation of the upper body is halted and the pulley never loses contact with the belt's teeth.

This does however have a rather minuscule impact on the total offset. If the demonstrator constantly switched direction, this error would accumulate over time, but since it only rotates in one direction and frequently recalibrates, the effect only results in all samples being shifted by x degrees. This has no impact on the final result.

When the demonstrator rotates the LIDAR and camera around the φ-axis ("up and down"), the initial backlash is removed by rotating 3.5° past the "home position", switching direction and revolving back to the "home position" before stopping. This way the teeth are in full contact when the scanning begins. This is the reason why the demonstrator is able to rotate 97° around the φ-axis but only takes samples from 90°.

3.3 Device overview

To make it easier to understand how the demonstrator utilizes its components to 3d scan, a concept sketch is shown below. The micro-controller is central in the operation as it controls most of the hardware. However, the micro-controller fetches some of its initial parameters from the Raspberry Pi.

This method enables the user to easily modify parameters (such as scan area and scan detail) on the Raspberry Pi (which can be accessed over WiFi) without having to interfere with the more complex firmware on the micro-controller.

Figure 3.15: Concept illustration of components and their connections: Raspberry Pi (credit card-sized computer), camera, PC, logic level converter, start and stop buttons, micro-controller, step-down converter (15 V DC input to 5 V DC), light barrier sensors, step motor drivers, step motors and LIDAR, with an optional connection for revision or increased control. [Made in Illustrator]

3.3.1 Electronics overview

The demonstrator mainly depends on ten different electrical components to perform a 3d scan. In the following section their purpose and use will be briefly explained.

1. Camera The camera (Module v2) is made for Raspberry Pi and existing libraries make it easy to interface with. Python code running on the Raspberry Pi initiates the camera and sets parameters such as shutter speed, exposure and resolution. After initialization the camera takes several photos and the Raspberry Pi saves them on its memory card.

18 2. Micro-controller The micro-controller (Atmega328-pu) is a cost efficient and widely available device. It features 20 input/output pins for digital and analog signals. It has much more limited capabilities in terms of performance compared to the Raspberry Pi. It is however suitable for controlling all components of the demonstrator except the camera (image processing is too demanding).

3. Raspberry Pi The Raspberry Pi (Model 3, version B) is a credit card-sized computer. Users can connect monitors, keyboards and mice to it to run as a normal desktop computer with software and web browsing. It is a lot more powerful than the micro-controller and has necessary features such as WiFi and memory card which the demonstrator requires to operate.

4. Step motors Unlike traditional electrical motors, step motors rotate in small increments. These step motors (form factor NEMA 17, model 42BYGHW811) rotate 1.8° per step. This means that after 200 steps, the motor has made one full revolution. Step motors are commonly used in automation applications where precision is prioritized over high torque or speed.

5. Step motor drivers The step motor driver (A4988) is a component designed to optimize and add features to the step motors. Using these step motor drivers, the minimum step angle can be reduced by a factor of up to 16. A step motor driver using the 1/16 microstep setting has 3200 unique steps per revolution instead of the motor's original 200. The use of drivers also reduces vibrations and noise and adds some safety features, such as over-current/voltage protection.

6. LIDAR The LIDAR receives instructions over the inter-integrated circuit (I2C) protocol to take a sample. The device has several built-in features to minimize the error of the samples; in short, the lower the error, the more time each measurement takes. The micro-controller instructs the LIDAR to recalibrate every 4800th sample (after one full revolution).

7. Light barrier sensors Each of the two light barrier sensors (KTIR0621DS) consists of two parts, one light sensitive diode and one light emitting diode. These are aligned so when an object obscures the IR light, the micro-controller registers this change. For the demonstrator this means the rotating part is either in its start or end position.

8. Logic level converter The Raspberry Pi uses 3.3 V logic signals, but the micro-controller and the remaining components use 5 V logic. To enable communication between the devices, the logic level converter (BOB-12009) shifts the signals either from 3.3 V to 5 V or from 5 V to 3.3 V.

9. Step down converter The demonstrator is powered by 15 V DC. The main reason for this is the step motor drivers' ability to utilize a higher voltage. To be able to use this voltage source, it needs to be stepped down to 5 V, which all of the components can use. The step down converter (LM2596) does this far more efficiently than dropping the voltage over resistors.

10. Buttons Using the push buttons, the demonstrator can be used without a computer. When the green button is pushed the preset scan program will run. When the red button is pushed the unit resets.

Figure 3.16: Typical operation of the demonstrator, concept illustration: (1) the user starts a scan with the start button or over WiFi; (2) the Raspberry Pi sends motor settings and the micro-controller drives both motors to their home positions using the light barrier sensors; (3) the camera takes 24 pictures so the room is photographed from the relevant angles; (4) the micro-controller steps the motors, collects LIDAR readings in batches and sends them to the Raspberry Pi, which stores them and has the LIDAR tilted down slightly after each full revolution. [Made in Illustrator]

To maintain a small form factor and reliability, a two-layered PCB was designed and manufactured. To simplify assembly and prototyping, all wires are connected to the PCB with either screw terminals or connectors.

The PCB provides about 140 different connection points. This makes it a vital component for providing robust and reliable connections. It would not be practically possible to use a prototype breadboard due to the many connections.

Figure 3.17: Printed circuit board without components. [Designed in Eagle]

Figure 3.18: Printed circuit board with components: screw terminals for power connections, various sized electrolytic capacitors, step motor drivers, logic level converter, micro-controller and the REPROG connector. [Designed in Eagle]

Since the micro-controller can be re-programmed using the "REPROG" connector, it is soldered directly to the board without a socket.

3.4 Software overview

The 3d scanning operation requires 4 different software steps.

1. Micro-controller code written in C for 3d scanning.
2. Raspberry Pi code written in Python for 3d scanning.
3. Raspberry Pi code written in Python for mapping image pixels to sample points.
4. Merging and meshing of pointclouds in open source software.

The communication between the Raspberry Pi and the micro-controller can be summarized as a call-and-response dialog. Generally the Raspberry Pi issues a command to the micro-controller using the universal asynchronous receiver/transmitter (UART) protocol at 1 Mbit/s. The micro-controller then returns a confirmation when the operation has been completed. Most of the operations are stored in the micro-controller as predefined cases, and the micro-controller's standard mode is simply to listen for any commands issued by the Raspberry Pi.

For example, if the Raspberry Pi sends the number "16", the micro-controller will 3d scan 360° with the motor settings received on start up. Each comma separated list of samples it passes back to the Raspberry Pi is finished by appending "END".
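A minimal sketch of this dialog, seen from the Raspberry Pi side, could look as follows. The serial port name and the exact reply framing are assumptions; the command value "16", the 1 Mbit/s UART speed and the "END" marker follow the description above:

```python
# Hedged sketch of the call-and-response dialog with the micro-controller.
import serial  # pyserial

def read_batch(uart):
    """Read one comma separated batch of distance readings, terminated by "END"."""
    samples = []
    while True:
        line = uart.readline().decode(errors="ignore").strip()
        if line == "END":            # the micro-controller appends END after each batch
            return samples
        if line:
            samples.extend(int(v) for v in line.split(",") if v)

with serial.Serial("/dev/serial0", baudrate=1_000_000, timeout=2) as uart:
    uart.write(b"16\n")              # "16": scan 360 degrees with the start-up motor settings
    batch = read_batch(uart)
    print(f"received {len(batch)} distance readings")
```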

The micro-controller then continues to take more samples while the Raspberry Pi transforms the received packets of measurements to Cartesian coordinates. This way no time is wasted by having one device wait for the other to complete its task.

The micro-controller only passes a distance to the Raspberry Pi so the Raspberry keeps track of the angles when the sample was taken. Using this information it saves the samples in two different documents, one for spherical coordinates and one for Cartesian coordinates. The spherical coordinates are only used for debugging purposes.

To convert the samples into Cartesian coordinates, the Raspberry Pi applies the transformation from spherical (r, θ, φ) to Cartesian (x, y, z) coordinates:

x = r · sin(φ) · cos(θ)    (3.7)

y = r · sin(φ) · sin(θ)    (3.8)

z = r · cos(φ)    (3.9)

Figure 3.19: Spherical coordinates compared to Cartesian coordinates. [Made in Illustrator]
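Applied to a sample batch, the transformation (3.7)-(3.9) can be written as the following sketch:

```python
# Sketch of equations (3.7)-(3.9): spherical (r, theta, phi) to Cartesian (x, y, z).
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """r in metres, theta and phi in radians; accepts scalars or numpy arrays."""
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)
    return x, y, z

# Example: a sample 2.5 m away at theta = 45 degrees, phi = 60 degrees.
print(spherical_to_cartesian(2.5, np.radians(45.0), np.radians(60.0)))
```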

3.4.1 Image mapping

To synchronize the LIDAR samples with the pixels from the taken photos, the images are divided into four rows and six columns, in total 24 images. This distribution is based on the camera's view angle. The illustration below can give the impression of perfect overlaps between the images, but this is not the case: the images are taken with some overlap due to the camera's view angles, 62.2° horizontal and 48.8° vertical.

Figure 3.20: Concept of image sectioning, top and side views (images 1-6). [Made in Illustrator]

The mapping between the samples and the correct pixels of the images is quite complex and is further complicated by the vertical offset between the LIDAR and the camera. To begin with, the pixels of each sample image are divided into rows and columns, each with a unique coordinate.

Figure 3.21: Pixel coordinates of an image, from pixel (0,0) to pixel (1296,764). [Made in Illustrator]

First the Raspberry Pi calculates in which picture it will find the correct pixels, depending on the angles φ and θ. When the correct image is found, the pixels are organized in columns and rows as shown in Figure 3.21. Lastly the correct pixel is mapped with respect to the offset between the LIDAR and the camera.

Figure 3.22: Camera offset geometry, side view. The LIDAR and camera are vertically offset by 23.75 mm, r is the distance to the sample point, r_min is the minimum distance (5.75 cm), h and l are the distances used in the mapping and γ is the camera's vertical view angle. [Made in Illustrator]

When the correct image has been chosen, the x_pixel is the easiest to determine. Dividing the θ angle for the sample by the total number of images, we are left with a fraction. This fraction is equal to how far into the image the point is. To determine the y_pixel, the geometry shown above must be taken into consideration. The coordinates of the pixels can therefore be described as:

x_pixel = θ_fraction · 1296    (3.10)

y_pixel = 976 · (1 − h / l)    (3.11)

h = (r − r_min) · tan(γ / 2)    (3.12)

l = r · tan(γ / 2)    (3.13)

y_pixel = 976 · (1 − (r − r_min) / r)    (3.14)

Since image loading is a relatively central processing unit (CPU) intense operation, it is virtually impossible to go through each sample one by one, find the correct image, load it and acquire the pixel color. This would result in the program loading about two million images. Instead the points are sorted into a matrix depending on which image they are located on. This means the program only has to load 24 images in total.
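Put together, equations (3.10)-(3.14) amount to only a few lines of code. The sketch below assumes a 1296 x 976 image as in the equations, with theta_fraction interpreted as how far into the chosen image the sample's θ angle lies (0 to 1):

```python
# Sketch of equations (3.10)-(3.14): mapping a LIDAR sample to a pixel coordinate.
import math

def sample_to_pixel(theta_fraction, r, r_min, gamma_rad):
    h = (r - r_min) * math.tan(gamma_rad / 2)   # (3.12)
    l = r * math.tan(gamma_rad / 2)             # (3.13)
    x_pixel = theta_fraction * 1296             # (3.10)
    y_pixel = 976 * (1 - h / l)                 # (3.11), equivalent to (3.14)
    return int(x_pixel), int(y_pixel)

# Example: sample halfway across the image, 3 m away, 48.8 degree vertical view angle.
print(sample_to_pixel(0.5, r=3.0, r_min=0.0575, gamma_rad=math.radians(48.8)))
```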

To combine and mesh the different pointclouds into one 3D model, the software CloudCompare [Danielgm 2017] is used. This is a popular open source software for managing pointclouds.

3.4.2 Virtual points

The LIDAR takes 400 samples per second, while the camera captures 1.26 million pixels per image. To make use of these pixels without having to take the same number of LIDAR samples, the demonstrator creates points from non-existing samples.

These fictional points are interpolated: 4 points are placed between every real sample pair and 4 new rows are inserted between every sample row. Unless this method is applied, the final result would be poorly represented with few (1.5 million) color samples; with it, the number is increased to 13.5 million. The 24 images contain in total 23.8 million pixels; the remaining pixels are duplicates due to the overlap between the images.

Figure 3.23: Interpolated points compared to sample points. [Captured from CloudCompare]
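A minimal sketch of the interpolation idea for a single scan row is shown below; the real implementation also inserts extra rows between sample rows, which is omitted here:

```python
# Insert 4 evenly spaced virtual points between each pair of neighbouring samples in a row.
import numpy as np

def densify_row(points: np.ndarray, extra: int = 4) -> np.ndarray:
    """points: (N, 3) array of consecutive real samples in one scan row."""
    out = []
    for a, b in zip(points[:-1], points[1:]):
        for t in np.linspace(0.0, 1.0, extra + 2)[:-1]:  # keep a, skip the duplicate b
            out.append(a + t * (b - a))
    out.append(points[-1])
    return np.asarray(out)

row = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(densify_row(row).shape)  # (11, 3): 3 real samples + 8 interpolated points
```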

Considering that multiple scans can be combined, the resulting 3d object can be captured in high detail.

3.4.3 Connectivity

Ease of uploading new software and upgrading firmware was one of the priorities of the design. To run the demonstrator, the user simply connects to it using a virtual desktop program. This means the desktop of the Raspberry Pi is displayed in a window on the user's computer, with the user being able to use their own mouse and keyboard as input devices.

The Raspberry Pi can also be used as a network drive. This means the user can write and modify code on their desktop computer and simply save it to the Raspberry Pi.

To modify the firmware of the micro-controller, the user connects an in-system programmer (ISP) via USB. The programmer is then connected to the demonstrator. Simply put, the ISP adds the possibility to connect to the micro-controller using USB.

Figure 3.24: Connecting to the Raspberry Pi using a virtual desktop. [Made in Illustrator]

The purpose of this simplification is to be able to modify both software and firmware without having to disassemble any part of the demonstrator. This is vital since much of the code cannot be fully tested without the demonstrator being fully assembled and operational.


Figure 3.25: Concept illustration of using an ISP. [Made in Illustrator]
Figure 3.26: The connector with the colored cables is connected when modifying firmware. [Made in Photoshop]

Chapter 4 Results

In this chapter various test results are analyzed and presented.

4.1 Single color results

The pointclouds below consist of 1.7 million real sample points each, sampled from a living room. The full scan time is about 70 minutes (at 400 samples/second).

Figure 4.1: Sample Blue, top view, no interpolation. [Captured from CloudCompare]
Figure 4.2: Sample Blue, side view, no interpolation. [Captured from CloudCompare]


Figure 4.3: Outline of walls and objects (grey) compared to sample Blue with ceiling removed, no interpolation. [Captured from CloudCompare and edited in Illustrator]


Figure 4.4: Sample Red and Blue. Top view, no interpolation, ceiling removed. [Captured from CloudCompare]

Figure 4.5: Sample Red and Blue side by side. Top side view, no interpolation, ceiling removed. [Captured from CloudCompare]

Figure 4.6: Sample Red and Blue combined. Top side view, no interpolation, ceiling removed. [Captured from CloudCompare]

By reducing the sample size and increasing the laser sensor sample speed, faster scans can be accomplished. The scan below is of a roughly 20x15 m classroom and consists of 130,000 points. The scan time is 2½ minutes.

Figure 4.7: Sample Orange, 130,000 points, a 20x15 m classroom. Side view, wall removed. [Captured from CloudCompare]


Figure 4.8: Meshed 3d model of two samples, in total 260,000 points. Total scan time 5 minutes. [Mesh created in CloudCompare with the Poisson surface reconstruction method and model rendered in Keyshot]

4.2 RGB results

After the scan is complete, a Python script maps the correct pixel color to each point in the pointcloud. In the image below a single colored pointcloud is compared to an RGB colored pointcloud. Note that the pointclouds are the same; only the colors of the points differ.

Figure 4.9: Comparison between single colored points and RGB mapped points from images. Sample Blue, View from demonstrator viewpoint, no interpolation. [Captured from CloudCompare and combined with photo in Photoshop]

Since the demonstrator cannot photograph points obscured by objects, shifting the viewpoint slightly visualizes the limitations of taking samples from a single location (Figure 4.10).

The viewpoint from the demonstrators position is the only viewpoint from where the colors of all points are correctly projected. To overcome this limitation several scans have to be completed of the same area.

Figure 4.10: Sample Blue with image pixels mapped to each point. [Captured from CloudCompare]

4.3 Comparison between the demonstrator and a commonly used device

For comparison, a popular device using roughly the same technology and range is selected.

Figure 4.11: Figure 4.12: Demonstrator [Own image] Faro 3D X 30 [Faro 2015]

Table 4.1: Product comparison between typical products of each type

Product                          Demonstrator                       Faro 3D X 30 [Faro 2015]
Technology                       Time of flight                     Time of flight
Accuracy                         See table 4.2                      ± 2 mm
Samples per second               See table 4.2                      122,000
Scanning area                    360° by 90° @ 40 m radius          360° by 300° @ 30 m radius
Price approx.                    2,500 SEK                          300,000 SEK
Beam divergence                  Typical 8 mrad                     Typical 0.19 mrad
Laser class                      Laser class 1                      Laser class 1
Wavelength                       905 nm                             1550 nm
Step size horizontal/vertical    Adjustable, min. 0.0375°/0.0751°   0.009°/0.009°
WLAN                             Yes                                Yes
Battery life                     15 V DC required                   4.5 h
Scanner control                  WLAN, buttons                      WLAN, touch screen
Other features                   –                                  Dual axis compensator, height sensor, compass

The sample speed of the demonstrator can be varied. The table below shows the effects of increased speed.

Table 4.2: 300 samples taken at 450 cm distance, highly diffusive material (gray cloth)

Mode                                                    Samples per second   Standard deviation
Super fast                                              1472                 ± 1.67 cm
Super fast (error correction off + rotation 2.65 RPS)   1062                 –
Balanced (error correction on)                          317                  ± 1.20 cm
Balanced (error correction off)                         534                  ± 1.29 cm
Fast (error correction on)                              307                  ± 1.39 cm
Fast (error correction off)                             532                  ± 1.28 cm
Long range (error correction on)                        264                  ± 1.10 cm
Long range (error correction off)                       535                  ± 1.28 cm

Scan speed

The demonstrator's much slower scan speed is the main drawback between the two. The Faro is about 115 times faster than the demonstrator. However, the step size of the demonstrator is only four times larger horizontally and ten times larger vertically. The demonstrator captures 11.5 million points from a 360° x 90° sample area, compared to the Faro which captures 400 million points from the same sample area.

So the Faro captures only 34 times as many points but at 115 times the speed. A max resolution scan would take the Faro 54 min but for the demonstrator 3 hours. As 3 hours is too long for a scan, the step size of the demonstrator is increased. To put the resolution into perspective, at minimum step size the demonstrator has 1.3 mm between the horizontal points when scanning objects 2 m away.
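The 1.3 mm figure follows directly from the minimum step angle, as the short calculation below shows (a rough estimate; it ignores beam divergence and angle of incidence):

```python
# Horizontal spacing between two adjacent samples separated by the minimum step angle.
import math

def point_spacing(step_deg: float, r: float) -> float:
    """Spacing [m] between neighbouring samples on a surface r metres away."""
    return 2 * r * math.tan(math.radians(step_deg) / 2)

print(point_spacing(0.0375, 2.0) * 1000)  # ~1.3 mm at 2 m
```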

Beam divergence

The second largest drawback of the demonstrator is the much larger laser beam divergence, as seen in the figure below (42 times larger).

Figure 4.13: Illustration of the different beam divergence of the compared models: over 5 m, the demonstrator's 8 mrad (0.456°) beam spreads to about 4 cm, while the Faro's 0.19 mrad (0.011°) beam spreads to about 0.95 mm. [Made in Illustrator]

When scanning with a small step size at longer range, the larger beam divergence causes the scanned areas to overlap (Figure 4.14) and the average value of the surface is measured. This causes scans at longer range to have less detail.

Figure 4.14: Overlapping scan points due to large beam divergence. [Made in Illustrator]
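How quickly the laser footprint outgrows the sample spacing can be estimated with the small-angle sketch below, using the divergence and step figures from Table 4.1:

```python
# Laser footprint versus sample spacing at a given range (small-angle approximation).
import math

def footprint(divergence_mrad: float, r: float) -> float:
    return divergence_mrad / 1000 * r

def spacing(step_deg: float, r: float) -> float:
    return math.radians(step_deg) * r

r = 5.0  # metres
print(f"demonstrator footprint: {footprint(8, r) * 100:.1f} cm")       # ~4 cm
print(f"Faro footprint:         {footprint(0.19, r) * 1000:.2f} mm")   # ~0.95 mm
print(f"spacing at min. step:   {spacing(0.0375, r) * 1000:.2f} mm")   # ~3.3 mm
```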

Chapter 5 Discussion and conclusions

In this chapter the final conclusions are drawn whether the low cost platform is a feasible alternative.

5.1 Sample density

My initial belief that a very dense pointcloud would result in a very high resolution mesh turned out to be false. While a dense pointcloud captures the object's shape very well, the inaccuracy of the LIDAR sensor actually reduces the final mesh quality. Consider Figure 5.1, where the sensor captures points on a wall with a very small gap between them. To generate a surface, normals are calculated between a set of points, creating small planes between the points. The small spacing between the points, combined with the error in the readings, results in a jagged shape instead of a flat wall.

Figure 5.1: Illustration of a very dense pointcloud with errors and the resulting mesh. [Made in Illustrator]

Fewer samples on the other hand result in a less detailed pointcloud, but a more correct mesh. This is due to the "weighing" of point errors in the algorithm (Poisson surface reconstruction); simply put, points are averaged to create the most suitable plane. This can be visualized by the illustration below.

Figure 5.2: Illustration of a less dense pointcloud with errors and the resulting mesh. [Made in Illustrator]

5.2 Scan overlap

A full scan (about 2 million points), 360° by 90° takes about 33 minutes to complete. Multiple scans are needed to capture a correct representation of the space if objects are obscuring the view of the demonstrator.

To capture more angles but without increasing the sample time tremendously, one full scan can be complemented by several smaller scans. The idea is to initially capture the room completely and then use a more narrow scan angle of the missing areas and angles.

By comparing sample Blue and sample Red (Fig. 4.5) it is apparent that they share many identical points. Surfaces A, B and C are largely identical except for the orange marked areas. Only about 20-30% of the points are unique to one of the scans.

Figure 5.3: Illustration of scan overlaps. Unique surfaces highlighted in orange. [Made in Illustrator]

5.3 Scan speed and beam divergence

The demonstrator's much slower scan speed is the main drawback between the two. The Faro is about 115 times faster than the demonstrator. A max resolution scan would take the Faro 54 min but the demonstrator 3 hours. As 3 hours is too long for a scan, the step size of the demonstrator is increased. The second largest drawback of the demonstrator is the much larger laser beam divergence (42 times larger). This results in decreased detail with increased distance.

5.4 RGB challenges

The RGB mapping turned out to be more problematic than first anticipated. The three main problems with this feature are:

1. To achieve a realistic looking RGB pointcloud, many tightly spaced samples are needed. As discussed in section 5.1, a very dense scan causes faulty surfaces. To overcome this problem a more accurate laser sensor would be needed.

2. Due to the distortion that occurs when photographing objects from a tilted perspective, a lot of work is required to "straighten" the images and merge them. This can be done with 3rd party panorama software, but even then it cannot be fully automated. To illustrate the problem, observe the images below. The green line illustrates what the laser sensor samples at a fixed angle; the red line is what the camera angle corresponds to.

Figure 5.4: Photo taken perpendicular Figure 5.5: The distortion from the to surface. Pixel mapping works well. tilted perspective results in incorrectly [Made in Photoshop] mapped pixels. [Made in Photoshop]

The pixel mapping method works well if these distortions can be removed by appropriate transformations. To test the mapping, a test image was created and mapped to a 3d scan. The images below show that the pixels are projected correctly both horizontally and vertically. To conclude, if all images are taken perpendicular to the surfaces, the image mapping algorithm works very well.

Figure 5.6: Correct mapping of pixels from an ideal 2d image to 3d pointcloud. Pointcloud created from a room, including ceiling. [Captured from CloudCompare]

3. Camera dynamic range limitations cause overlapping images to look completely different, since the two images require different exposure settings. Using fixed camera settings removes this problem but introduces another one: many images being under- or overexposed.

Figure 5.7: The overlap of two images, each taken with correct exposure settings, which cannot be seamlessly overlapped. [Made in Photoshop]

5.5 A good compromise

Due to the slower scan speed, the wider beam divergence and the problems associated with meshing very dense pointclouds (section 5.1), I have found a faster and less detailed scan to be the best compromise. Due to the problems of RGB scanning I found it best to focus on single color scanning. A good "all around" setting consists of a pointcloud of 150,000 points that takes 2½ minutes to complete. The short scan time enables the user to capture, for example, a room with furniture from many different angles while still keeping the total scan time brief. This setting also results in good quality meshes.

5.6 Conclusion

The objective of this thesis was to investigate the accuracy and limitations of time-of-flight laser scanners and compare them to the results acquired with a low cost platform constructed from consumer grade parts. For validation purposes, a 3d scanner was constructed and tested. The objective was formulated into two main research questions, which can now be answered.

Research Question 1: What accuracy can be achieved with a constructed low cost platform?

The mechanical construction is able to rotate accurately within 0.0156° (Table 3.3) of a specified angle. Utilizing its built-in feature to self-correct, this error does not increase with time. The accuracy in measuring distance was ±1.10 cm at 450 cm (Table 4.2) and ±10 cm at ranges of 5-40 m.

Research Question 2: How does this implementation differ from comparable popular, high end devices on the market?

The low cost platform is, specification-wise, inferior in all aspects except size, weight and cost. It does however produce accurate enough scans for many applications where higher accuracy is not necessary, but where investing in a 200,000+ SEK device is not possible. In the next chapter I will discuss how this technology can be used in a more efficient way compared to using the demonstrator. Overall the low cost platform produced good results, especially considering that only 2 months of part time work was spent on development and that the platform costs about 1% of the compared product.

Chapter 6 Recommendations and future work

In this brief chapter the final conclusions are made and some recommendations for future work are listed.

6.1 Redesign

Most of the practical limitations of the demonstrator depend on the current design and not on the cheaper LIDAR sensor. Therefore it would be misleading to conclude that the low cost platform is inferior to a typical commercial TOF 3d scanner; it potentially has many use cases in which it actually would be superior. There are many improvements that can be made to give the low cost platform plenty of use cases.

6.1.2 Multipurpose 3d scanner

As seen in chapter 1, few or no devices except 200,000+ SEK scanners can scan large areas, houses, caves etc. reliably. Photometric scanners are improving, but they currently produce unreliable results, mainly depending on lighting conditions. Using a large TOF scanner is cumbersome, requires trained personnel to operate and is both very expensive and time consuming to use. The ideal device is a low cost, hand-held device capable of scanning while the user is walking.

Figure 6.1: Idea of multipurpose 3d scanner platform. [Construction worker: TF3DM, scanner made in Solidworks, image rendered in Keyshot]

Figure 6.2: Idea of multipurpose 3d scanner construction, with the LIDAR, LCD and scan switch labeled. [Scanner made in Solidworks, image rendered in Keyshot]

The device would scan 45° forwards and backwards with 4 laser scanners. The beams would be reflected by rotating mirrors. By only rotating the mirrors, the noise, rotational inertia and power consumption are kept at a minimum.

Four laser scanners can sample almost 6000 points per second, which, with a person walking at 5 km/h, results in about 4000 points per meter. This is sufficient for many applications. To enable accuracy while moving, three complementary systems are needed.

1. To compensate for the user's movement, a gyroscope and accelerometer are necessary. A suggestion would be a low cost, all-in-one solution such as the Bosch BNO055.
2. To accurately trace the user's movement, a combination of dead reckoning with the above sensor and LIDAR Simultaneous Localization And Mapping (SLAM) can be used. Google released an open source LIDAR SLAM library in October 2016.
3. To compensate for the inaccuracy in the readings, an iterative closest point (ICP) algorithm can be used to align the points "on the go", as sketched below.
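To illustrate point 3, the sketch below shows the core loop of a basic point-to-point ICP alignment. It assumes the numpy and scipy libraries are available and is only a minimal outline of the idea, not the algorithm any particular SLAM library actually uses.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    # Align the (N, 3) source cloud to the (M, 3) target cloud.
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                # closest target point for every source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T.dot(matched - mu_t)  # cross-covariance of the matched pairs
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T.dot(U.T)                       # best-fit rotation
        if np.linalg.det(R) < 0:                # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T.dot(U.T)
        t = mu_t - R.dot(mu_s)                  # best-fit translation
        src = src.dot(R.T) + t
    return src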

Other mechanical design improvements:
1. No need for a Raspberry Pi; a separate microprocessor can be used for the calculations together with a secure digital (SD) card slot to store captured scans.
2. Replacing the step motors with micro DC motors (+180 RPM) with high precision encoders (+1500 pulses per revolution (PPR)) for reduced power consumption, noise and cost.

Other notes:
1. Adding functionality to enable RGB pointclouds is appealing despite being a complicated process. However, it makes the device sensitive to lighting conditions in the same way as photometric scanning. A device that costs less than 15,000 SEK and works in complete darkness has more unique use cases.

6.1.3 Potential use cases

Underground structures
With a modular design the rear scanner pair can be removed and a rope attached so the device can be lowered into hard to reach places. This would enable it to scan underground tanks, wells, sinkholes and dangerous caves. Using the gyroscope and accelerometer, errors due to the device swinging on the rope can be corrected. By rotating the mirrors in opposite directions, torque due to rotational inertia is canceled out.

Figure 6.3: Rope attached to scanner. [Scanner made in Solidworks, image rendered in Keyshot]
Figure 6.4: Tank scanning concept. [Models made in Solidworks, image rendered in Keyshot]

Buildings, apartments or mines
The ability to scan "on the go", the small form factor and a rugged, water resistant casing would make the device ideal for harsh conditions where 3d scanners currently are limited. The user can also insert the scanner into narrow openings, such as under heavy machinery or into nooks and corners.

Figure 6.5: 3d scanner being used in harsh conditions. [Construction worker from TF3DM, scanner made in Solidworks, image rendered in Keyshot]

Volume changes
By mapping the surroundings it would be easy to compute any change in volume between two scans. In a mine this could be the amount of iron that has been removed since the last scan, or in a cargo ship, how much of the total capacity is currently being used, as sketched below.
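A crude way to estimate such a volume change is to count how many voxels of a fixed size each scan occupies. The sketch below assumes the numpy library is available; the result depends heavily on the chosen voxel size and on the two scans being aligned, so it is only an illustration.

import numpy as np

def occupied_volume(points, voxel=0.05):
    # Approximate the volume covered by an (N, 3) point cloud, in cubic metres,
    # by counting the unique voxels (here 5 cm cubes) that contain at least one point.
    cells = np.unique(np.floor(points / voxel).astype(int), axis=0)
    return len(cells) * voxel ** 3

# removed_volume = occupied_volume(scan_before) - occupied_volume(scan_after)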

Security
Since LIDAR works perfectly well regardless of light conditions, it would work well as a security solution. An initial scan would capture the surroundings and be compared with future scans to detect intruders. A single device could cover up to 5000 m2 (40 m radius). A minimal detection sketch is given after Figure 7.5.

Figure 6.6: Front scanners used for outdoor security application. [Scanner made in Solidworks, image rendered in Keyshot]

Figure 7.5: Comparison scanning, empty room versus room with intruders. Scan time: 20 sec. Samples: 20 000. [Captured from CloudCompare]
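A minimal version of such a comparison is sketched below: every point in the new scan is checked against the reference scan, and points far from any reference point are flagged. It assumes the numpy and scipy libraries are available and that both scans are taken from the same fixed position; the threshold values are hypothetical.

import numpy as np
from scipy.spatial import cKDTree

def detect_intruders(reference, new_scan, threshold=0.10, min_points=50):
    # Flag points in new_scan (N, 3) that lie more than `threshold` metres away
    # from every point in the reference scan; require a minimum number of
    # flagged points so that single noisy samples do not trigger an alarm.
    tree = cKDTree(reference)
    dist, _ = tree.query(new_scan)
    changed = new_scan[dist > threshold]
    return changed if len(changed) >= min_points else np.empty((0, 3))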

Bibliography

Adams, M (2010) Lidar Design, Use, and Calibration Concepts for Correct Environmental Detection. Institute of Electrical and Electronics Engineers

Agisoft (2017). Website. Available from: http://www.agisoft.com/fileadmin/templates/images/photo_store_1.png [cited 2017-03-07]

Amazon (2017). Website. Available from: https://www.amazon.com/Matter-Form-MFS1V1-3D-Scanner/dp/B00O2O5SS4 [cited 2017-04-13]

Aniwaa (2016). Website. Available from: http://www.aniwaa.com/product/3d-scanners/creaform-goscan-50/ [cited 2017-04-11]

Bernardini, H and Rushmeier, H (2002). The 3D Model Acquisition Pipeline. IBM Thomas J. Watson Research Center

Brian Curless (2000). From Range Scans to 3D Models. University of Washington

Creaform (2017). Website. Available from: https://www.creaform3d.com/en/metrology-solutions/portable-3d-scanners [cited 2017-04-11]

Danielgm (2017). Software. Available from: http://www.danielgm.net/cc/ [cited 2017-01-22]

Faro (2015). Website. Available from: http://www.faro.com/products/3d-surveying/laser-scanner-faro-focus-3d/overview [cited 2017-04-11]

iReviews (2014). Website. Available from: http://3d-scanners.www1.ireviews.com/2014-best-3d-scanners-under-50000-review [cited 2017-03-10]

János, T (2008). Website. Available from: http://www.tankonyvtar.hu/en/tartalom/tamop425/0032_precizios_mezogazdasag/ch02s04. html [cited 2017-03-10]

Johansson, H. (2013). Elektroteknik. Institutionen för Maskinkonstruktion, Mekatronik.

Meekohi (2017). Website. Available from: https://commons.wikimedia.org/w/index.php?curid=44925507 [cited 2017-02-21]

MICROMO (2015). Microstepping WP. Datasheet. Available from: www.micromo.com/media/wysiwyg/Technical-library/Stepper/6_Microstepping%20WP.pdf [cited 2017-02-16].

Motioncontroltips (2017). Website. Available from: www.motioncontroltips.com/faq-whats-the-difference-between-detent-torque-and-holding-torque/ [cited 2017-02-16].

Nayar, SK and M. Gupta (2012). Diffuse Structured Light. IEEE International Conference on Computational Photography

Paquit, V et al. (2006). Near-infrared imaging and structured light ranging for automatic catheter insertion. University of Tennessee.

Powerbyproxi (2017) Website. Available from: http://powerbyproxi.com/wp-content/uploads/2016/05/Traditional-slip-ring-01.jpg [cited 2017-03-29]

Printersketch3d (2017). Website. Available from: http://4.bp.blogspot.com/-X2T0xwXTtKA/Um640YBk5WI/AAAAAAAAAKk/BaXFdIbDyLU/s1600/GT2xT2_5.jpg [cited 2017-03-29]

Pobonline (2011) Website. Available from: http://www.pobonline.com/articles/95143-product-review-faro-focus3d-laser-scanner [cited 2017-04-23]

Quanhong F. (2012). Practical application of 3d techniques to underground projects. Rock engineering research foundation

Roy Mayer (1999). Scientific Canadian: Invention and Innovation From Canada’s National Research Council.

TF3DM User-name “Ultraflash” (2017). 3d model of construction worker. Available from: http://tf3dm.com/3d-model/road-worker-30617.html [cited 2017-04-10]

TMD -Todays medical developments (2017) Website. Available from: http://www.todaysmedicaldevelopments.com/article/tmd-0211-rolling-ring-linear-drives/ [cited 2017-02-21]

Vanessaezekowitz (2008) Website. Available from: https://upload.wikimedia.org/wikipedia/commons/a/aa/3-proj2cam.svg

Texas Instruments (2011). Product data sheet. LIDAR System Design for Automotive/Industrial/Military Applications. Available from: http://www.ti.com/lit/an/snaa123/snaa123.pdf

Wantmotor (2017) Product data sheet (42BYGHW811). Available from: www.wantmotor.com/product/42byghw.html [cited 2017-02-16].

Appendix A - PCB circuit diagram

Legend:
LIDAR - Lidar connector
T-Mot - Theta motor
PWR - Input power and step motor in lower body
Reprog - SPI to ISP device to program micro-controller
12V Output - To step down converter
5V Input - Power from step down converter
USB-Output - Power to Raspberry Pi
328P - Atmega 328P micro-controller
S-P - Optic Sensor Phi
S-T - Optic Sensor Theta
RASP-D - Raspberry Pi serial
P-MOT - Phi motor
SDT - Step Driver Theta
SDS - Step Driver Phi
LLC - Logic level converter

Appendix A - PCB board

Appendix B - C code for micro-controller
Compiled with Arduino compiler

// Microcontroller code for 3d scanner, Johan Moberg 19-05-2017
// MMK 2017:22 MDAB 640
// This program is compiled with the Arduino compiler and runs on an ATMEGA328-PU.
// The Wire and Lidar libraries are not created or modified by me.

#include <Wire.h>
#include <LIDARLite.h>

int const PMOT[2] = {A2, A3}; // PS, PD (phi step pin, phi direction pin)
int const TMOT[2] = {A0, A1}; // TS, TD (theta step pin, theta direction pin)
int const MS[3] = {5, 6, 7};  // Step mode parameters
int St = 8; // Sensor theta
int Sp = 9; // Sensor phi
int const motorPower = 3;
int pointDistance;
int d = 0;

LIDARLite Lidar;
int bootParams[4]; // pstep, tstep, hstep, stepMode
bool booted = false;

void setup(){
  pinMode(motorPower, OUTPUT);
  pinMode(MS[0], OUTPUT); pinMode(MS[1], OUTPUT); pinMode(MS[2], OUTPUT); // Step mode pins
  pinMode(PMOT[0], OUTPUT); // Phi step pin
  pinMode(PMOT[1], OUTPUT); // Writing to this pin tells the step driver which direction it should turn.
  pinMode(TMOT[0], OUTPUT); // Theta step pin
  pinMode(TMOT[1], OUTPUT); // Writing to this pin tells the step driver which direction it should turn.
  pinMode(Sp, INPUT); // Sensor Phi
  pinMode(St, INPUT); // Sensor Theta
  Lidar.begin(1, true);  // Set LIDAR configuration to balance mode and I2C to 400 kHz
  Serial.begin(1000000); // Initialize serial connection to display distance readings
  Serial.setTimeout(100);
  Serial.println("Scanner started");
  digitalWrite(motorPower, HIGH);
}

void loop(){
  // Listen for commands forever after boot sequence is completed.
  listenForCommand();
}

// Main step and sample function. //
void stepIt(int motor[], String dir, int steps, bool sample){ // Which motor? What direction? How many steps? Take measurements while stepping?
int r=500;

int temp = 0;
temp = Lidar.distance(true); // Calibrate lidar but don't save the reading
digitalWrite(motorPower, LOW);
delay(1);

if (dir == “forward”){ //Step step direction digitalWrite(motor[1], HIGH); }

else{ digitalWrite(motor[1], LOW); }

int data[417]; // Initiate the array storing the samples. Sending the data in one big packet is faster.
int j=0;
int k=0;

if (motor == PMOT){ r=1000; } for (int i=0; i < steps; i++){ //Run no. of steps of chosen motor, take a sample for each step and save in array. When array is full, send the data. digitalWrite(motor[0], HIGH); digitalWrite(motor[0], LOW); delayMicroseconds(500);

if (sample == true){ if (bootParams[1]==1){ data[j]=Lidar.distance(true); } if (bootParams[1]==0){ data[j]=Lidar.distance(false); } j++; if (j == 414){ for (int k=0; k < 414; k++){ Serial.print(data[k]); Serial.print(“;”); delay(1); } j= 0; } } else{ delayMicroseconds(500); digitalWrite(motor[0], HIGH); digitalWrite(motor[0], LOW); delayMicroseconds(500); 49 } digitalWrite(motor[0], HIGH); digitalWrite(motor[0], LOW); delayMicroseconds(500); } if (j != 0){ //Send any remaining entries for (int k=0; k < j; k++){ Serial.print(data[k]); Serial.print(“;”); delay(1); } } delay(10); } void listenForCommand(){ int temp; int incoming; int y = 0;

if (Serial.available() > 0){ temp = Serial.parseInt(); incoming = temp;

switch (temp) {

case 10: //Home homeTheta(); break;

case 1: //Boot case, get parameters, send them back for confirmation and then use them. setParams(); printConfig(); setStepMode(bootParams[3]); Lidar.configure(bootParams[0],true); // Update the mode for lidar depending on data recieved from Raspberry Pi. break;

case 3: //Motors off digitalWrite(motorPower, HIGH); break;

case 7: //Home phi homePhi(); break;

case 16: //Default scan mode.
changeInterrupt();
scan();

break;

break; } } }

// Main scan function. void scan(){ //Home theta with 2nd step settings (normal function steps to far in 2nd step mode) int j=0; digitalWrite(motorPower, LOW); digitalWrite(TMOT[1], HIGH); for (int i=0; i < 500; i++){ digitalWrite(TMOT[0], HIGH); digitalWrite(TMOT[0], LOW); delayMicroseconds(1000); }

while (digitalRead(St) == HIGH){ digitalWrite(TMOT[0], HIGH); digitalWrite(TMOT[0], LOW); delayMicroseconds(1000); j++; } //Home phi with 2nd step settings. digitalWrite(motorPower, LOW); digitalWrite(PMOT[1], HIGH); while (digitalRead(Sp) == LOW){ digitalWrite(PMOT[0], HIGH); digitalWrite(PMOT[0], LOW); delayMicroseconds(1000); }

digitalWrite(motorPower, LOW); //Activate step drivers Lidar.write(0x02, 0x0d); // Set maximum acquisition count Lidar.write(0x04, 0b00000100); // Edit reference acquisition count Lidar.write(0x12, 0x03); // Change ref. acquisition to 3 (def. 5) int x=0;

//For each of 120 phi steps, step 1200 steps theta.
for(int i = 0; i < 120; i++){
  for(int j = 0; j < 1200; j++){
    x = distanceFast(false);
    Serial.write(lowByte(x));
    Serial.write(highByte(x));
    digitalWrite(TMOT[0], HIGH);
    digitalWrite(TMOT[0], LOW);
  }

digitalWrite(PMOT[1], LOW); digitalWrite(PMOT[0], HIGH); digitalWrite(PMOT[0], LOW); } }

// Modify the firmware to increase scan speed // int distanceFast(bool biasCorrection) { byte isBusy = 1; int distance; int counter;

// Poll busy bit in status register until device is idle while(isBusy) { // Read status register Wire.beginTransmission(LIDARLITE_ADDR_DEFAULT); Wire.write(0x01); Wire.endTransmission(); Wire.requestFrom(LIDARLITE_ADDR_DEFAULT, 1); isBusy = Wire.read(); isBusy = bitRead(isBusy,0); // Take LSB of status register, busy bit

counter++; // Increment loop counter // Stop status register polling if stuck in loop if(counter > 9999) { break; } }

// Send measurement command Wire.beginTransmission(LIDARLITE_ADDR_DEFAULT); Wire.write(0X00); // Prepare write to register 0x00 if(biasCorrection == true){ Wire.write(0X04); // Perform measurement with receiver bias correction } else{ Wire.write(0X03); // Perform measurement without receiver bias correction } Wire.endTransmission();

// Immediately read previous distance measurement data. This is valid until the next measurement finishes.
// The I2C transaction finishes before new distance measurement data is acquired.
// Prepare 2 byte read from registers 0x0f and 0x10
Wire.beginTransmission(LIDARLITE_ADDR_DEFAULT);
Wire.write(0x8f);
Wire.endTransmission();

// Perform the read and repack the 2 bytes into 16-bit word Wire.requestFrom(LIDARLITE_ADDR_DEFAULT, 2); distance = Wire.read(); distance <<= 8; distance |= Wire.read();

// Return the measured distance return distance; } void homeTheta(){ //In case stopped in home position, step 500 steps then step until hit home pos. int j=0; digitalWrite(motorPower, LOW); digitalWrite(TMOT[1], HIGH); for (int i=0; i < 500; i++){ digitalWrite(TMOT[0], HIGH); digitalWrite(TMOT[0], LOW); delayMicroseconds(500); }

while (digitalRead(St) == HIGH){ digitalWrite(TMOT[0], HIGH); digitalWrite(TMOT[0], LOW); delayMicroseconds(500); j++; } } void homePhi(){ //In case stopped in top home position, step 50 steps then step until hit home pos.

digitalWrite(motorPower, LOW); digitalWrite(PMOT[1], HIGH); for (int i=0; i < 50; i++){ digitalWrite(PMOT[0], HIGH); digitalWrite(PMOT[0], LOW); delayMicroseconds(500); } while (digitalRead(Sp) == LOW){ digitalWrite(PMOT[0], HIGH); digitalWrite(PMOT[0], LOW); 53 delayMicroseconds(500); } for (int i=0; i < 30; i++){ digitalWrite(PMOT[0], HIGH); digitalWrite(PMOT[0], LOW); delayMicroseconds(500); } } void changeInterrupt(){ //To avoid having to poll the optical sensor pin each time, change the interrupt pin to P0. (Faster operation) DDRB &= ~(1 << DDB0); // Clear the PB0 pin // PB0 (PCINT0 pin) is now an input

PORTB |= (1 << PORTB0); // turn On the Pull-up // PB0 is now an input with pull-up enabled

PCICR |= (1 << PCIE0); // set PCIE0 to enable PCMSK0 scan PCMSK0 |= (1 << PCINT0); // set PCINT0 to trigger an interrupt on state change

sei(); // turn on interrupts }

ISR (PCINT0_vect){ if( (PINB & (1 << PINB0)) == 1 ) { //Do nothing } else { int x=9999; // Command sent to Raspberry to notify that a full rev. is completed. Serial.write(lowByte(x)); Serial.write(highByte(x)); } } void setStepMode(int x){ switch (x) { case 1: digitalWrite(MS[0], LOW); digitalWrite(MS[1], LOW); digitalWrite(MS[2], LOW); Serial.println(“Full step”); break; case 2: digitalWrite(MS[0], HIGH); digitalWrite(MS[1], LOW); digitalWrite(MS[2], LOW); 54 Serial.println(“Half step”); break; case 4: digitalWrite(MS[0], LOW); digitalWrite(MS[1], HIGH); digitalWrite(MS[2], LOW); Serial.println(“Quad step”); break; case 8: digitalWrite(MS[0], HIGH); digitalWrite(MS[1], HIGH); digitalWrite(MS[2], LOW); Serial.println(“8th step”); break; case 16: digitalWrite(MS[0], HIGH); digitalWrite(MS[1], HIGH); digitalWrite(MS[2], HIGH); Serial.println(“16th step”); break;

break; }

} void sendData(int data){ Serial.print(data); } void setParams(){ //When first booting get params from pi. If received “Boot” set ard. params to the received ones. int i=0; int incoming; while ( i < 4){ while (Serial.available() > 0 && booted == false){ incoming = Serial.parseInt(); bootParams[i] = incoming; i++; if (i == 4){ break; } } } } void printConfig(){ Serial.print(“Lidar mode: “); if (bootParams[0] == 0){ 55 Serial.println(“Balanced”); } if (bootParams[0] == 1){ Serial.println(“Fast”); } if (bootParams[0] == 2){ Serial.println(“Dynamic”); } if (bootParams[0] == 3){ Serial.println(“Max range”); } delay(1);

Serial.print(“Error correction: “);

if (bootParams[1] == 1){ Serial.println(“On”); } if (bootParams[1] == 0){ Serial.println(“Off”); }

delay(1); Serial.print(“Homestep toggle between: “); delay(1); Serial.println(bootParams[2]); delay(1); Serial.print(“Stepmode set to: “);

}

Appendix C - Optional Python RGB point mapping

## RGB point mapping for 3d scanner, Johan Moberg 19-05-2017
## MMK 2017:22 MDAB 640
## This program runs on Python 2.7. The csv, math and PIL libraries are not created or modified by me.
import csv
import math
import PIL

## Camera and image parameters ## hPixels = 1920 vPixels = 1080 CamXAngle = math.radians(62.2) CamYAngle = math.radians(48.8) ImXAngle = math.radians(22.5) ImYAngle = math.radians(24.15) deltaX=float(695) deltaY=float(534) skipX=float(deltaX/300) skipY=float(deltaY/80)

ImCenterX = hPixels / 2 ImCenterY = vPixels / 2

StartImageX = ImCenterX StartImageY = int(round(ImCenterY - deltaY/2)) WidthRange = ImCenterX - StartImageX HeightRange = ImCenterY - StartImageY print "Start x", StartImageX,"End x", StartImageX+deltaX print "Start y", StartImageY,"End y", StartImageY+deltaY print "Max delta x", deltaX print "Max delta y", deltaY print "Skip X", skipX print "Skip Y", skipY r=[] phi=[] theta=[] R="" G="" B=""

pix = []
fileNames = []
alldata = list()
trans = list()

## Imports r, phi, theta. Outputs lists where each row contains: r,phi,theta,z & id no. def importData(): z=float(0) i=0 j=0 s=0 fulldata=[] global alldata global trans

with open('polar.txt', 'r') as f: reader = csv.reader(f) data = list(reader) print data[5][0]

for x in range(0, 322, 1): ## Creates a list of 322 matrices, each containing 4800 rows of 4 elements. ## The 4800 rows in each matrix are sorted after ascending z. ## Example: Matrix 1/322 will contain 4800 rows. In the first row the z value will be the lowest, and row 4800 will contain the highest in that matrix.

fulldata=[] for x in range(0,4800,1): temp = [] r= round(float(data[i][0]),6) phi = round(float(data[i][1]), 6) theta = round(float(data[i][2]), 6) z = r * math.cos(phi) temp.append(r) temp.append(phi) temp.append(theta) temp.append(z) temp.append(j) fulldata.append(temp) j+=1 i+=1 j=0 s+=1

fulldata.sort(key=lambda x: x[4]) alldata.append(fulldata)

## All data outputs [row (1-322)][array (1-4800)][element (1-5)]
temp = zip(*alldata)
trans = temp

## Adjust center point of model and image ### def shiftPoints():

global trans print len(trans) rollOver=False shiftX=500 shiftY=200 transShifted=[] i=0

for g in range(0,4800,1):

if (shiftX + i) == 4800: rollOver = True i=0

if rollOver == True: transShifted.append(trans[i])

if rollOver == False: transShifted.append(trans[shiftX+i])

i+=1

trans=[] trans=transShifted print len(trans[0]) print trans[0][0] print trans[0][:30:10] print "Finished loop", i

## Get the RGB color from the right pixels of the image.
## This image is the result of merging all images into one using panoramic software.
## By using this software, most of the problems described in chapter 5 are solved.
def getPixels():
    file_name = "XYZ.txt"
    opened_file = open(file_name, 'w')

filename = "image00.jpg" from PIL import Image im = Image.open(filename) pix=im.load() print "Loaded image ",filename print "image size:", im.size j=0 i=0 k=0 counter=0 error=0 for c in range(0,4800,1): i=0 for v in range (0,322,1): r2=trans[j][k][0] phi2=trans[j][k][1] theta2=trans[j][k][2] thetastep=round(4799 - 4799/(math.pi*2)*theta2)

if thetastep < 0: thetastep= 0 c=pix[thetastep,i] x = r2 * math.cos(theta2) * math.sin(phi2) y = r2 * math.sin(theta2) * math.sin(phi2) z = r2 * math.cos(phi2) if (x**2+y**2+z**2) > 0: xdir=x/math.sqrt(x**2+y**2+z**2) ydir=y/math.sqrt(x**2+y**2+z**2) zdir=z/math.sqrt(x**2+y**2+z**2) else: xdir = math.sqrt(1.0/3) ydir = math.sqrt(1.0/3) zdir = math.sqrt(1.0/3) error+=1

opened_file.write("%f, %f, %f, %s, %s, %s, %f, %f, %f, \n" % (x,y,z,c[0], c[1], c[2], xdir, ydir, zdir)) i+=4 if i >= 1200: i = 1199 k+=1 counter+=1 j += 1 i=0 k=0 print "Created", counter, "points" print "Unit vector errors",error opened_file.close()

### Main program ###
importData()
shiftPoints()
getPixels()

Appendix D - Main Python code for Raspberry Pi

## Main Python code for 3d scanner, Johan Moberg 19-05-2017 ## MMK 2017:22 MDAB 640

## This program runs Python 2.7 on the Raspberry Pi.
## 1. First the program sends 4 boot parameters to the micro-controller. These set the step motor settings.
## 2. The user inputs parameters and controls the device using console inputs.
## None of the below listed libraries are made or modified by me.
import serial
import time
import picamera
import csv
import numpy as np
import math

frames = 1
Pskip = 1    # Set lidar mode; 0-Balanced, 1- Fast, 2- Dynamic, 3- Max range
Tskip = 0    # Error correction, 0- Off, 1- On
Htoggle = 1  # Home toggle between; not used, any int works.
Stepmode = 2 # Step mode, 1,2,4,8,16
buffer = [""]
distance = []
ser = serial.Serial('/dev/ttyAMA0', 1000000, timeout=10)

def menu():
    menuVar = raw_input("Press 1 to menu info, 16 for quick scan, 3 motors off, 0 for image capture \n")

if menuVar== "0": sendData("10") time.sleep(2) sendData("7") time.sleep(1) with picamera.PiCamera() as camera: camera.resolution = 1296, 976 camera.framerate = 60 time.sleep(2) camera.capture('/home/pi/share/image1.jpg') print ("photo saved")

sendData("6") time.sleep(1) j=0 with picamera.PiCamera() as camera: camera.resolution = 1296, 976 camera.framerate = 60 time.sleep(2) camera.capture('/home/pi/share/image2.jpg') print ("Photo saved") sendData("10") time.sleep(1) getDataSmall(325) menu()

if menuVar== "1": ##Get info on current settings boot() menu()

if menuVar== "16": ##Fast scan. Create random named .txt and save scan data. Scan takes about 2 minutes. ser.flushInput() from random import randint namn=randint(0,1000000) cart_fast = open(str(namn)+ '.txt','a') sendData("16") now = time.time() g=0 t=0 rAll=[] while len(rAll) < 144000: data=ser.read(2) r= (ord(data[1]) << 8) + ord(data[0]) rAll.append(r) phi = float(math.pi/2) - g*( float(math.radians(72) / 120)) theta = t*(math.pi*2/1200)

x = r * math.cos(theta) * math.sin(phi) y = r * math.sin(theta) * math.sin(phi) z = r * math.cos(phi) if r == 9999: g+=1 t=0

if r > 1 and r < 1500: cart_fast.write("%f, %f, %f \n" % (x, y, z)) t+=1

future = time.time() print (future-now) menu()

if menuVar== "3": ##Motors off sendData("3") 62 menu() def sendData(x): x=str(x) ser.write(x) time.sleep(0.2) def getDataSmall(x): file_name = "cart_small.txt" opened_file = open(file_name, 'a') counter = 0 file_name2 = "polar_small.txt" opened_file2 = open(file_name2, 'a') f = 0 menua=0 i = 0 s = 0 run=0 phi = float(math.pi/2) ##Start angle phistep = float(math.radians(48.8) / 325) right = True import time for i in range(0, x): run+=1 if right == True: sendData("8")

if right == False: sendData("9") t = -1 val4 = [] ser.flushInput() while 1 == 1: val = ser.readline(ser.inWaiting()) val2 = val.split(";") r = filter(None, val2)

if "END" in r: try: r.remove("END") except ValueError: print ("cant remove") break

val4.append(r) for element in val4: for rad in element: try: t += 1 rad2 = float(rad) if right == True: theta1 = t * (math.radians(62.2) / 430) if right == False: theta1 = (430-t) * (math.radians(62.2) / 430)

x = rad2 * np.cos(theta1) * np.sin(phi) y = rad2 * np.sin(theta1) * np.sin(phi) z = rad2 * np.cos(phi) opened_file.write("%f, %f, %f \n" % (x, y, z)) opened_file2.write("%f, %f, %f \n" % (rad2, phi, theta1)) counter += 1 except ValueError: print ("value error")

if right == True: right = False

elif right == False: right = True

s += 1 f += 1 print run,"/325" phi = phi - phistep opened_file.close() def boot(): # Send params to uc, Serial listen to get the back for confirmation and print them. sendData("1") # Phistep skip sendData(Pskip) # Phistep skip sendData(Tskip) # Thetastep skip sendData(Htoggle) # Home toogle between sendData(Stepmode) # Step mode

if ser.inWaiting() > 0: while ser.inWaiting(): val = ser.readline(ser.inWaiting()) val2=val.split("\r\n") filter(None, val2) print (val2[0]) buffer.append(val2) # stores all data received into a list if len(buffer) > 4: break

## Main program ## boot() menu()

TRITA MMK 2017:22 MDAB 640

www.kth.se