v.2, n.1, p.2-18, 2019. CarNotFound: Project and Development of an Autonomous R/C Car

Isabelle Diniz Orlandi, Daniel Pereira Cinalli, Italo Milhomem de Abreu Lanza, Thiago Lima de Almeida, Tito Caco Curimbaba Spadini, Pedro Ivo da Cruz, Filipe Ieda Fazanaro

Federal University of ABC (UFABC)

ABSTRACT

Robots are fascinating mainly due to the fact that knowledge from different areas - such as electrical engineering, computer engineering, mechanical engineering, physics, mathematics and biology - must be integrated so that the robot can interact with the environment. In the context of autonomous robots, the capability to locate themselves without any human interference, only by processing the information obtained by numerous types of sensors, is also fundamental. The work described here aims to study the essential concepts associated with autonomous robots, specifically, an autonomous R/C car capable of navigating in a closed environment, following a dashed reference line, using only a single camera. The processing of the acquired images and the execution of the control system were performed by a Raspberry Pi 3B+ using codes written in Python and OpenCV. The results obtained indicate a strong dependence on the rate of processed frames per second and, moreover, that a simple PD controller was sufficient to adjust the direction of the car along the trajectory to be followed.

Keywords: Monocular vision, OpenCV, PD controller, PiCamera, Python, Raspberry Pi 3B+.

INTRODUCTION

Since Leonardo da Vinci’s first sketches, through the initial decades of the 20th century, and gaining more attention after the publication of Isaac Asimov’s works, robots have become a topic that excites the human imagination. Although we are a long way from having robots as described in science fiction, i.e. humanoid robots interacting with human beings or assisting pilots on rebel spaceships fighting against the Empire, it is relatively affordable to buy simple structures - compared to fiction -, such as a vacuum cleaner robot, to help in daily activities.

An incipient view of robotics would be associated with robot manipulators [1] cooperatively operating in an automotive production line. Manipulators have transcended this barrier long ago and are nowadays employed in several aspects of society. In medicine, for example, studies have shown how robotics has assisted surgical procedures, diagnoses and the remote monitoring of patients [2], the Da Vinci medical robotic system being one of the most important examples [3].


Throughout the last decades, the world has contemplated the development of another class of robots, named mobile robots. Although there are studies and research related to different types of such robots, e.g. the ones capable of moving through the air [4, 5, 6] or in the water [7, 8], their origins date back to the 1920s, when the first radio controlled vehicles were developed [9]. In this context, autonomous cars are probably one of the most emblematic examples of mobile robots.

In recent years, it has been possible to see a constantly growing interest in research related to autonomous vehicles, led mainly by companies such as Google, Tesla, Amazon, NVidia and Intel. However, a project of this nature is extremely complex and still has open questions [10]. In this context, the work reported here aims to broadly understand such questions and to study the fundamental techniques necessary to allow a vehicle - in this case, a radio controlled (R/C) 1:14 scale car - to move autonomously within a closed environment, using a single camera and computer vision concepts, respecting the initial motivation of the project: the RoboCar Race [11]. In the next sections, the main aspects related to the vehicle's development are presented, followed by the concepts used to implement its control system and the results obtained during the experiments. Finally, some discussions and conclusions about the project are presented.

AN OVERVIEW ABOUT THE VEHICLE DEVELOPED

The vehicle's development can be analyzed considering the hardware - which is related to the motors, the power system and all the embedded electronics - and the software, which is associated with the computer vision algorithm and the control system. The main aspects of both are briefly described in the following.

Hardware: the main aspects

The embedded electronics of the vehicle were based on simple and basic components [12] and can be synthesized with the help of the block diagram presented in Figure 1.

Figure 1. An overview describing how the embedded electronics are structured (power system: powerbank and 7.2V battery; Raspberry Pi 3B+ with PiCamera V2; L298N driver, DC motors and servomotor).

As illustrated in Figure 1, a Raspberry Pi 3B+ (RPi) [13] is responsible for processing the visual information obtained by a PiCamera V2 [14] and for controlling the DC motors. The car contains two DC motors [15] (Motors A and B, installed, respectively, at the right and at the left sides of the chassis) which are driven by an H-bridge (L298N) [16]. In Figure 2, it is possible to observe the signals IN1A, IN1B, IN2A and IN2B, which are directly connected, in pairs, to the RPi's GPIO, i.e. IN1A and IN1B are both connected to GPIO BCM 23, and IN2A and IN2B to GPIO BCM 22. This solution made it possible to drive both DC motors at the same time (e.g. rotate clockwise, rotate counterclockwise and stop), because it was not necessary to control their rotation separately.

Figure 2. Illustration of how the DC motors were connected to the GPIOs of the Raspberry Pi.

The LD-1501MG servomotor [17] was installed at the front of the chassis and used to rotate the front wheels and, consequently, to adjust the direction of the car. This servomotor can be controlled by a PWM signal generated at the RPi either by software - using any GPIO - or by hardware (GPIO BCM 13; dedicated pin) [18]. As can be seen in Figure 3, the PWM signal generated by software is noisier than the signal generated by hardware. During some initial experiments, this characteristic caused the servomotor to move slightly when no command was being sent and, therefore, the hardware PWM was used to control the servomotor.

Figure 3. Comparison of the PWM signals generated by the RPi. The curve in blue (at the top) was generated by hardware. In both cases, the duty cycle is 25% and the high voltage is 3.3V.

From Figure 1, it is also possible to observe that the power system is divided into two subsystems. A common NiCd 7.2V/2200mAh battery - typically used in R/C cars - was employed to supply the necessary energy for the DC motors. A Xiaomi Mi Powerbank 2i (model PLM09ZM), with two 5V/2.4A outputs, was connected to the RPi. The servomotor was connected directly to the GPIO 2 (or 4) of the RPi.

Hardware: some information about the camera

The most important sensor employed in this project - in fact, the only one - was the PiCamera V2 [14]. Although it was possible to use standard USB webcams, the advantage of the PiCamera is that it is connected directly to the RPi's CPU (BCM2837 SoC) through a CSI-2 interface with a 2 Gbps bandwidth. The PiCamera captures the image using a rolling shutter mechanism [19], where each image is built from the pixel values of each line of its sensor, i.e. each line of a single frame is obtained at a different time instant. To avoid possible distortions in the captured frame, i.e. to ensure that the continuous reading of frames always occurs within the same time interval, the associated capture processing is performed by the RPi's GPU - which runs its own real-time operating system (VCOS). For further information, it is strongly recommended to review reference [20, section 6.1.5. - Division of labor].

Hardware: the “404” autonomous R/C car

As previously mentioned, the rules of the RoboCar Race [11] establish that only one camera may be used for sensing the environment and that all the processing must be done on a Raspberry Pi (any model). Additionally, the chassis dimensions must be smaller than 420mm (length) × 200mm (width). The chassis used [21] and all the embedded electronics are shown in Figure 4.

Figure 4. An image of the car. It is possible to see all the embedded electronics and the power system.

Some general considerations about the software

Since a Raspberry Pi was employed as the processing unit, Raspbian Lite (version 2018-10-09; Linux kernel 4.14.79) was used as the operating system. A script was defined which runs during OS boot and is responsible for configuring the PIGPIO library and the hardware PWM [18]. It must be observed that all the programs were developed using the Python 3.5.3 programming language and based on object oriented concepts - which facilitated the debugging and the development itself and, more important, the code portability. Some additional libraries and packages, such as NumPy (mathematical functions and array manipulation) and RPi.GPIO (control of the IO peripherals), were also employed. Finally, the computer vision algorithms were developed using the OpenCV (version 3.4.3) library, which was installed from source following the tutorial presented in [22]. Further explanations of how the algorithms were developed and some related discussions are presented in the following.

IMAGE PROCESSING AND CAR CONTROL

The main objective of the competition consists of an autonomous R/C car being capable of traveling along a pre-established course in the shortest possible time. Initially, it was considered to detect the side limits (the margins) of the track and, with this information, maintain the car in the central region of the track, as proposed in [23]. However, it was not possible to "see" these margins employing only one camera, mainly due to the dimensions of the car relative to the track - which is 5m wide, approximately 25 times the width of the car. Consequently, in order to successfully complete the designated task, a system was developed capable of maintaining the car orientation along a dashed reference line (defined by white rectangles with length equal to 300mm and width equal to 50mm, spaced 300mm apart) drawn in the central region of the track. Therefore, it is necessary to detect the line and the direction where the car should go and then send the necessary commands to the servomotor. This methodology has been divided into three main modules, which are illustrated in Figure 5 and described in the following.

Figure 5. This figure illustrates how the software is organized: the captured frame is processed by the ImageHandler, which provides the center of mass x_cm to the ControlHandler; the resulting direction angle γ = f(x_cm) is then converted by the CarHandler into a PWM command sent to the motor via the GPIO.

Capturing and processing the images

The development of the car control system was based on the identification of the dashed reference line and the adjustment of the car's direction relative to it. As previously described and illustrated in Figure 5, the software was organized into three main modules, the ImageHandler, the ControlHandler and the CarHandler, which were developed based on object oriented principles. The first class, the ImageHandler, is responsible for acquiring and processing the images captured by the PiCamera.

As previously described, due to the characteristics of the rolling shutter capture mechanism of the PiCamera, it is important that the capture rate be as fast as possible, which should avoid unwanted distortions in the final image. Furthermore, the image to be processed should not contain irrelevant information, such as the horizon (e.g. the sunset). This is one of the main motivations to employ, during the ImageHandler development, the results discussed in the work presented by Neto and Rittner in [23]. The methodology proposed in that work considers a Region Of Interest (ROI) approximately equal to 60% of the original height of an image with resolution equal to 320 × 240, discarding the horizon, accentuating the track where the car must travel and, more important in an embedded system, decreasing the computational effort. In the work presented here, the same resolution was considered, i.e. 320 × 240, with a slightly "taller" ROI, equal to 2/3 of the original height.
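To make this step concrete, a minimal sketch of the capture and cropping stage is given below, assuming the picamera package and its PiRGBArray helper; the variable names, the frame rate and the loop structure are illustrative and do not necessarily reproduce the original ImageHandler class.

import picamera
import picamera.array

WIDTH, HEIGHT = 320, 240
ROI_TOP = HEIGHT // 3    # discard the upper 1/3 (horizon), keep the lower 2/3

with picamera.PiCamera(resolution=(WIDTH, HEIGHT), framerate=60) as camera:
    stream = picamera.array.PiRGBArray(camera, size=(WIDTH, HEIGHT))
    # use_video_port=True favors capture speed over image quality
    for frame in camera.capture_continuous(stream, format="bgr", use_video_port=True):
        roi = frame.array[ROI_TOP:, :]   # 320 x 160 region of interest
        # ... grayscale conversion, blurring, thresholding, moments ...
        stream.truncate(0)               # reset the buffer before the next capture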

After being obtained, the frame is converted from RGB to 8-bit grayscale, which also helps to reduce the computational effort, as the number of layers to be processed is reduced from 3 (RGB) to just one. The next step executes the blurring procedure, i.e. convolving the image with a Gaussian filter with a kernel similar to that defined by Eq. (1), given by

  1 2 1   1   K = 2 4 2 . (1) 16   1 2 1

Here, the kernel employed has a size equal to 21 × 21 (see [24, section "Image Processing in OpenCV"] for further details). The aim of the Gaussian blurring is to smooth the image, decreasing the abrupt changes of pixel intensity in the border regions of the dashed reference line. Moreover, the Gaussian blurring contributes to decreasing the influence of noise in the image, improving the binarization of the frame - the next step in the image processing to be considered.
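A minimal sketch of the grayscale conversion and blurring steps, assuming the cv2 module of OpenCV and a ROI already available as a NumPy array (the variable names are illustrative):

import cv2

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)    # 3 channels (BGR) reduced to 1
blurred = cv2.GaussianBlur(gray, (21, 21), 0)   # 21 x 21 Gaussian kernel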

During the binarization, the relevant (foreground) and irrelevant (background) information are separated by a simple procedure: initially, a threshold, i.e. a value that represents a pixel intensity, is defined. For instance, as an 8-bit grayscale frame is considered, the values of each of its pixels are in the range 0 to 255. In the work presented here, as a white dashed reference line must be separated from a dark (almost black) track, the threshold considered is equal to 200. So, every pixel that has a value smaller than this threshold will be set to 0 (or 1; this decision should not interfere with the algorithm performance); similarly, if the pixel value is greater than the threshold, it will be set to 1 (or 0). In the end, the processed frame will contain the dashed reference line in white and all other information in black. It is interesting to note that, although the threshold was heuristically chosen, the OpenCV library implements the Otsu algorithm, which is capable of obtaining this value automatically by minimizing the intra-class (equivalently, maximizing the inter-class) variance. Its performance benefits from a bi-modal distribution of pixel values; however, it works with any kind of distribution [24, section "Image Processing in OpenCV: Image Thresholding"].
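The binarization described above can be sketched as follows, assuming the fixed threshold of 200 used in this work; the Otsu-based variant mentioned in the text is also shown for comparison (variable names are illustrative):

import cv2

# fixed threshold: pixels brighter than 200 become white (255), the rest black (0)
_, binary = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)

# alternative: let OpenCV choose the threshold automatically with the Otsu algorithm
otsu_value, binary_otsu = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)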

After the binarized image has been obtained, the use of two complementary morphological transformations was initially considered: erosion - which "erodes away the boundaries of the foreground object" - and dilation - which "increases the white region in the image or size of the foreground object" [25]. Both aim to remove noise that could eventually still be present after the binarization procedure. However, this idea of using erosion and dilation was replaced by the computation of the image moments [26], i.e. the "center of mass" of the rectangles of the dashed reference line. This decision was made because, during the tests and experiments performed, it was possible to observe that the approach based on the erosion-dilation transformations resulted in a very low frame-processing rate, of the order of 15 to 25 FPS, and, because of this, the positioning control of the servomotor was impaired. The alternative approach based on the calculation of the moments allowed the frames to be processed at a rate of approximately 50 to 75 FPS (with peaks of up to 90 FPS), which proved to be sufficient for the control system to work. In both cases, the main goal is to obtain the orientation of the dashed reference line relative to the car orientation, resulting in the error to be sent to the controller.
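A minimal sketch of the center of mass computation via image moments, assuming the binarized frame obtained in the previous step; the guard against an empty frame (no line segment visible) is an assumption added for robustness:

import cv2

M = cv2.moments(binary, binaryImage=True)
if M["m00"] > 0:                        # at least one foreground pixel was found
    x_cm = int(M["m10"] / M["m00"])     # horizontal coordinate of the center of mass
    y_cm = int(M["m01"] / M["m00"])     # vertical coordinate of the center of mass
else:
    x_cm, y_cm = None, None             # no dashed line segment visible in this frame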

Figure 6 illustrates frames obtained by the PiCamera and processed by the ImageHandler class. To improve the visualization, the center of mass is represented by a (large) red circle. In Figure 6(a), the center of mass is positioned at (x_cm, y_cm) = (226, 96) [pixels] and, in Figure 6(b), at (x_cm, y_cm) = (125, 70) [pixels]. It is important to note that the origin of the frame is positioned at its top left corner, with +x and +y oriented, respectively, to the right and to the bottom of the image (see Figure 7 and Ref. [27] for further details).

Figure 6. Two frames obtained by the PiCamera and processed by the ImageHandler class. The red circle marks the center of mass of the image. The white vertical lines drawn in the middle of the images are the reference lines, i.e. the direction toward which the car is oriented. During this experiment, the threshold was set to 50.

Implementing the control system

The ControlHandler is the second class to be executed, being responsible for processing the position of the center of mass of the dashed reference line in relation to the car orientation obtained by the ImageHandler class and, moreover, for implementing a closed loop control system based on a PD controller. Basically, the ControlHandler class returns how much the servomotor must "turn" to adjust the car orientation, ideally aligning it with the dashed reference line. To accomplish this task, the angle γ identified in Figure 7 must be computed.

As previously mentioned, a ROI with dimensions 320 × 160 was used. Additionally, two auxiliary reference lines were defined, one vertical and one horizontal, positioned in the middle of the image at x_ref = 160 and y_ref = 80, respectively, as illustrated in Figure 7. Since y_cm varies as the car moves, the need to compensate for this movement was circumvented by considering that y_cm is always fixed and positioned along the reference line at y_ref. Due to this simplification, the deviation of the car orientation depends only on x_cm and, moreover, the angles γ and β have the same absolute value. In this context, quantifying such deviation is carried out by calculating the angle β using Eq. (2),


Figure 7. The ControlHandler implementation is based on the calculation of the angles γ and β and on the generation of the control signal necessary to adjust the car orientation coherently. The frame origin (0,0) is located at the top left corner, with +x oriented to the right and +y to the bottom; the figure also indicates x_ref, y_ref, the center of mass CM at (x_cm, y_cm) and the angles α, β, γ and θ.

\beta = 90° - \arctan\left(\frac{y_{ref}}{x_{ref} - x_{cm}}\right) = 90° - \arctan\left(\frac{y_{ref}}{\Delta x}\right).    (2)

The implementation of the ControlHandler class is presented in Listing 1. In the very first line, a file is imported which itself imports the NumPy library (aliased as "np") and contains all the auxiliary constants employed by all the classes, the auxiliary functions and the main function. The variable ∆x is defined in line 6 and then it is checked whether it is null, which indicates that the car is perfectly aligned with the dashed reference line; otherwise, Eq. (2) is applied (lines 13 and 14).


 1  from constants import *
 2  class ControlHandler():
 3
 4      def Controle(x_cm, dt, last_error):
 5
 6          dX = X_REF - x_cm
 7
 8          if dX == 0:
 9              beta = 0
10              error = 0
11              alfa = 90
12          else:
13              alfa = np.degrees(np.arctan(Y_REF/dX))
14              beta = 90 - abs(alfa)
15
16              if dX > 0:
17                  error = ((-1.0)*beta)/ANG_ERROR_MAX
18              else:
19                  error = beta/ANG_ERROR_MAX
20
21          # Update the error
22          d_error = error - last_error
23
24          # PD controller
25          P = Kp*error
26          D = Kd*(d_error)/dt
27
28          # Control signal
29          control_signal = 90 + P + D
30
31          return error, beta, alfa, control_signal

Listing 1. This source code shows how the ControlHandler class was implemented.

At line 16, it is checked whether the center of mass is positioned on the right (∆x < 0) or on the left side of the reference line (x_ref), indicating how the controller should correct the car orientation. For instance, considering the center of mass previously indicated in Figure 6(a) and positioned at (x_cm, y_cm) = (226, 96) [pixels], applying Eq. (2) results in an orientation deviation of β = 39.5231°, which is normalized with respect to the maximum deviation that could be observed, indicated by the constant ANG_ERROR_MAX. This value is obtained by considering that the center of mass is positioned at the edge of the image, i.e. x_cm = 320 [pixels], and applying Eq. (2) again,

ANG\_ERROR\_MAX = 90° - \arctan\left(\frac{80}{160}\right) \approx 63.4350°.    (3)

This normalization procedure is interesting as it allows fine tuning of the control signal through the adjustment of the controller's gains. Therefore, for the considered example, the normalized value of the error is +0.6230.

Similarly, for the center of mass positioned at (x_cm, y_cm) = (125, 70) [pixels], β = 23.6294° and the error is -0.3725.
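These two worked examples can be verified with a few lines of Python; the snippet below is only a verification sketch that reuses the constants X_REF, Y_REF and ANG_ERROR_MAX defined in the text:

import numpy as np

X_REF, Y_REF = 160, 80
ANG_ERROR_MAX = 90 - np.degrees(np.arctan(Y_REF / X_REF))   # approximately 63.4350 degrees

for x_cm in (226, 125):
    dX = X_REF - x_cm
    beta = 90 - abs(np.degrees(np.arctan(Y_REF / dX)))
    error = -beta / ANG_ERROR_MAX if dX > 0 else beta / ANG_ERROR_MAX
    print(x_cm, round(beta, 4), round(error, 4))
# beta: 39.5231 and 23.6294 degrees; normalized errors: approximately +0.623 and -0.3725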

The controller is implemented in lines 21-29. Initially, the error variation between two successive iterations of the algorithm is obtained (line 22), i.e. after processing two consecutive frames. Then, a time differential equal to the total time required to execute the ImageHandler class twice in succession is considered. With this value, the signals related to the Proportional and Derivative components of the PD controller (lines 25 and 26) are computed. Finally, the control signal is generated in line 29. It is important to note that, in an ideal situation where both the error and the differential of the error are zero, i.e. when the car orientation is exactly the same as that of the dashed reference line, the control signal assumes a value equal to 90. This value indicates that the servomotor shaft is in its central position, aligning the front axle of the car so that it is capable of moving in a straight line. The execution of the ControlHandler class is then finished, returning the values related to the error, the α and β angles, and the control signal itself, which are used by the main code and by the CarHandler class, described in the following.

The CarHandler class: driving the DC motors and the servomotor

The CarHandler, the last but not least class to be executed, contains the dedicated functions necessary to drive the DC motors and the servomotor. To understand its importance, consider, initially, the servo. This class converts the control signal generated by the ControlHandler into a PWM signal used to control this motor and, consequently, to adjust the car orientation. This conversion needs to be carefully performed as it is necessary to analyze whether the received angle is smaller than the angle determined as the minimum stop angle (60°) or greater than the maximum stop angle (120°). Note that by "stops" is meant the "virtual" constraints defined by the physical structure of the car, which does not allow the wheels to reach all positions of the 180° rotation of the servomotor. If the received angle is outside these limits, the angle to be converted into a PWM signal is taken as the minimum or the maximum stop angle itself. This implementation ensures that no command will be sent to the servomotor that could force the front wheels to collide with the side structure of the car, which would cause a current spike in the servomotor.
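The saturation described above can be sketched as follows; the constant and function names are illustrative and do not necessarily match the original CarHandler implementation:

SERVO_MIN_ANGLE = 60.0    # minimum "stop" angle imposed by the car structure
SERVO_MAX_ANGLE = 120.0   # maximum "stop" angle imposed by the car structure

def clamp_servo_angle(control_signal):
    # limit the commanded angle so the front wheels never press against the chassis
    return max(SERVO_MIN_ANGLE, min(SERVO_MAX_ANGLE, control_signal))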

The PWM signal is generated by GPIO 13 (hardware PWM; see Figure 3) and is configured using the "hardware_PWM(gpio, PWMfreq, PWMduty)" function defined by the PIGPIO library [18], having as parameters the frequency and the duty cycle desired for the signal. The frequency is defined as 50 Hz. The duty cycle (PWMduty) must be an integer and is calculated by Eq. (4),

PWMduty = \left(\frac{control\_signal}{18.0} + 2.0\right) \times 10000.    (4)

For instance, consider again the center of mass positioned at (x_cm, y_cm) = (226, 96) [pixels] (Figure 6(a)). For simplicity, considering that Kp = 1 and Kd = 0, the related control signal is equal to 90.6230 and, using Eq. (4), the resulting duty cycle is equal to 70346. In other words, when the function hardware_PWM(13, 50, 70346) is executed, the servomotor shaft will be positioned at 90.6230°.
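Putting Eq. (4) and the PIGPIO call together, the servomotor command can be sketched as follows; the connection to the pigpio daemon follows the library documentation [18], while the helper function name is illustrative:

import pigpio

SERVO_GPIO = 13     # dedicated hardware PWM pin used for the servomotor
PWM_FREQ = 50       # Hz, as defined in the text

pi = pigpio.pi()    # connect to the local pigpio daemon

def set_servo(control_signal):
    # Eq. (4): convert the control signal (degrees) into the duty cycle expected by hardware_PWM
    pwm_duty = int((control_signal / 18.0 + 2.0) * 10000)
    pi.hardware_PWM(SERVO_GPIO, PWM_FREQ, pwm_duty)

set_servo(90.6230)  # positions the shaft at approximately 90.62 degrees (duty cycle 70346)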

The DC motors are positioned at the rear of the car and are directly connected to the rear wheels. The motors are driven by the L298N driver based on the signals received by its IN1 and IN2 inputs (see Figure 2 and section "Hardware: the main aspects"). Those signals are generated by the CarHandler (employing the RPi.GPIO library) and, depending on their combination - as can be seen in Table 1 -, define the car movement.


How the DC motors will respond    IN1     IN2
Forward                           HIGH    LOW
Backward                          LOW     HIGH
Stop                              HIGH    HIGH
Neutral                           LOW     LOW

Table 1. Configuration of the L298N inputs (associated GPIO pins) used to drive the DC motors. HIGH is equal to 3.3V and LOW is equal to 0V.
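The states in Table 1 can be reproduced with the RPi.GPIO library as sketched below, using the BCM pins described in section "Hardware: the main aspects" (IN1A/IN1B on BCM 23 and IN2A/IN2B on BCM 22); the function names are illustrative:

import RPi.GPIO as GPIO

IN1 = 23    # BCM pin shared by IN1A and IN1B
IN2 = 22    # BCM pin shared by IN2A and IN2B

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2], GPIO.OUT)

def forward():              # IN1 = HIGH, IN2 = LOW
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)

def backward():             # IN1 = LOW, IN2 = HIGH
    GPIO.output(IN1, GPIO.LOW)
    GPIO.output(IN2, GPIO.HIGH)

def stop():                 # IN1 = HIGH, IN2 = HIGH
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.HIGH)

def neutral():              # IN1 = LOW, IN2 = LOW
    GPIO.output(IN1, GPIO.LOW)
    GPIO.output(IN2, GPIO.LOW)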

RESULTS

Before beginning the analysis of the results obtained during the experiments, some observations and important definitions related to the configuration of the system are presented. Firstly, it must be observed that the L298N driver is designed to control the speed of the DC motors by means of an additional input that receives a PWM signal. The duty cycle of this PWM signal is responsible for controlling the speed (e.g. a duty cycle of 100% results in the maximum speed). As a first approach, it was decided not to control the speed, since it was expected that its maximum value wouldn't cause the car to drift even on the sharpest corners of the track. However, after some tests, it was verified that the maximum FPS value reached by the RPi during the code execution wasn't high enough to send commands to the motors within a time interval short enough for the camera not to lose track of the dashed line, as too much time would pass between captures. As previously mentioned (see section "IMAGE PROCESSING AND CAR CONTROL"), the frame rate obtained with the execution of the code presented values between 50 and 75 FPS, which were sufficient for the task of driving the car along the track when the duty cycle of the PWM used was kept within the interval [65%, 70%].
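As an illustration of this speed limitation, a software PWM signal can be applied to the additional speed input of the L298N (its enable pin) using RPi.GPIO, as sketched below; the GPIO number chosen for the enable pin and the PWM frequency are hypothetical, while the duty cycle follows the interval reported above:

import RPi.GPIO as GPIO

ENA = 18                          # hypothetical GPIO connected to the L298N enable input
GPIO.setmode(GPIO.BCM)
GPIO.setup(ENA, GPIO.OUT)

speed_pwm = GPIO.PWM(ENA, 1000)   # 1 kHz software PWM (frequency chosen arbitrarily)
speed_pwm.start(65)               # 65% duty cycle, within the [65%, 70%] interval used here
# speed_pwm.ChangeDutyCycle(70)   # the duty cycle can be changed at runtime if needed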

Figure 8(a) illustrates a sketch of the test track built for the experiments. Figures 8(b) and 8(c) illustrate its final arrangement. Its total length is approximately equal to 30m. Despite the fact that the dashed reference line was drawn in black, it respects the RoboCar Race's rules, i.e., rectangles with length equal to 300mm and width equal to 50mm, spaced 300mm apart.

Figure 8. (a) illustrates a sketch of the track built for the experiments. (b) and (c) contain photos of the track.

Figure 9(a) illustrates the evolution of the position x_cm over 5 laps (8722 points acquired). The total "run time" was approximately 139.59s. It can be seen that the average position of the center of mass of the dashed reference line is slightly smaller (approximately 149.9583 pixels; the horizontal dashed red line) than the reference (x_ref = 160; the horizontal black line), which could be related to how the servomotor is installed, i.e. its shaft turns in the vertical plane, an intrinsic characteristic of the structure of the car. To improve the analysis, Figure 9(b) illustrates the evolution of ∆x over an interval approximately equal to the first 2 laps.

Figure 9. (a) illustrates the evolution of the center of mass of the dashed reference line relative to the car orientation. (b) contains the ∆x evolution (see Eq. (2)).

Figure 10 contains the results obtained for the first 2 laps. It can be observed that the error is in the interval [-50, 50] pixels most of the time, with peaks equal to -62.7005 and 61.1135 pixels, which are smaller than ANG_ERROR_MAX (Eq. (3)). The error average is equal to -5.0212. The control signal has a mean value approximately equal to 87.6254°. Observe that, since a value of 90° sent to the servomotor indicates that the car moves straight ahead, both results indicate that the dashed reference line was positioned on the left side of the car most of the time, as the control signal had to correct the car's orientation to the left.

Figure 10. Behaviour of ∆x, the error and the control signal considering the first 2 laps.

To improve the comprehension of the software, consider the results illustrated in Figure 11. Figure 11(a) contains ∆x and the error. Moreover, the value 90 was subtracted from the control signal (see the ControlHandler class - Listing 1) and, consequently, the curve presented here represents the PD action itself. Figure 11(b) highlights one sample, for which ∆x(367) = 61 [pixels]. Applying Eq. (2), this value of ∆x, in pixels, representing the car orientation relative to the dashed reference line, is converted to an angle value, resulting in error(367) = 37.3256°. Then, this value is normalized in relation to ANG_ERROR_MAX (Eq. (3)) and multiplied by Kp = 30. From line 29 of the ControlHandler class (Listing 1), knowing that ∆x > 0 and considering Kd = 0, a value approximately equal to 107.6522° will be sent to the servomotor, indicating that it must turn to the right so that the car orientation can be aligned with the dashed reference line.

Figure 11. (a) illustrates the evolution of ∆x, the error and the control signal. The circles in (b) represent a sample of these signals.

CONCLUSIONS

The work presented here covers the main topics related to the development of a scaled autonomous car designed to follow a dashed line. Even though this task can be considered simple, the line follower framework for an autonomous car motivates the study of many applications, like the implementation of an autonomous public transportation system [28] or path planning in the context of "real" cars [29, 30, 31]. From a didactic point of view, the project studied here is particularly interesting as it can be considered an accessible and low cost platform, allowing students to implement basic concepts of electronics, control systems, embedded systems architecture and computer vision [32, 33, 34].

The methodology developed was based on (1) the use of the PiCamera to "see" the dashed reference line to be followed, (2) the use of the Python programming language and the OpenCV library to calculate the centers of mass of the rectangles which form the dashed line and (3) the quantification of the car's orientation deviation relative to such reference line. This approach allowed approximately 50 to 75 frames per second to be processed by the Raspberry Pi 3B+. The adjustment of the orientation was performed by a closed-loop PD control system.

The results showed that the car was able to efficiently follow the dashed reference line, moving with an average speed approximately equal to 3.87 km/h considering a PWM signal with a duty cycle equal to 65%, even though oscillatory behaviour was observed when adjusting the car orientation in relation to the reference (similar to a step response in a second order system context). Although not very high, this speed value is consistent with more sophisticated approaches using, for example, deep neural networks [35]. However, it is likely that the speed limitation and the car behaviour might be related to the frame rate obtained and, also, to the fact that the ROI used removed the upper third of the original image, removing the horizon. In spite of the fact that this methodology was based on the work presented in [23], the results obtained by its authors, Neto and Rittner, have been shown to be more efficient, perhaps due to the fact that their camera was positioned at a higher point, in that case, on the roof of an SUV. Here, the camera was positioned a little more than 30cm from the floor.

Based on previous works, it is possible to note that the influence of the camera position could be minimized, resulting in an increase of the general performance of the system, if information associated with the horizon were considered. In other words, a solution would be based on a predictive algorithm capable of identifying the path curvature and whether the direction should be changed or not. Therefore, it would be possible to develop a control system that would reduce the speed in curves and increase it on straight lines [36]. A more radical approach would be a system with more powerful computational resources, able to perform parallel processing, which, from the image processing perspective, is a great advantage [37, 38].

ACKNOWLEDGMENTS

The authors would like to thank Jeferson Cotrim, Sergio Polimante, Valério Cardoso and professors Dr. André Kazuo Takahata and Dr. Ricardo Suyama for their support. The authors would also like to thank, in alphabetical order, Dimitri Leandro de Oliveira Silva, George Salvino, Guilherme Lima, Henrique Ferreira, José Araújo, Leonardo Biazzuto, Lucas Passarelli, Matheus Costa and Yan Podkorytoff.

REFERENCES

[1] SICILIANO, B.; SCIAVICCO, L.; VILLANI, L.; ORIOLO, G. Robotics: Modelling, Planning and Control. [S.l.]: Springer, 2010.

[2] IEEE SPECTRUM: MEDICAL ROBOTS. [S.l.: s.n.], 2018. https://spectrum.ieee.org/robotics/medical-robots. Accessed on July 23, 2018.

[3] KAZANZIDES, P; CHEN, Z; DEGUET, A; FISCHER, G S; TAYLOR, R H; DIMAIO, S P. An Open-Source Research Kit for the Da Vinci Surgical System. In: 2014 IEEE International Conference on Robotics and Automation (ICRA). [S.l.: s.n.], 2014. p. 6434–6439.

[4] GRAETZEL, C. F.; MEDICI, V.; ROHRSEITZ, N.; NELSON, B. J.; FRY, S. N. The Fly: A biorobotic platform to investigate dynamic coupling effects between a fruit fly and a robot. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. [S.l.: s.n.], Sep. 2008. p. 14–19.

[5] HOSSEINI, A.; KARIMI, H.; ZARAFSHAN, P.; MASSAH, J.; PARANDIAN, Y. Modeling and Control of an Octorotor Flying Robot using the Software in a Loop. In: 2016 4th International Conference on Control, Instrumentation, and Automation (ICCIA). [S.l.: s.n.], jan. 2016. p. 52–57.

[6] KUBO, Y.; SHIMOYAMA, I.; MIURA, H. Study of Insect-based Flying Microrobots. In: [1993] Proceedings IEEE International Conference on Robotics and Automation. [S.l.: s.n.], May 1993. v. 2, p. 386–391.

[7] KUMAR, G. S.; PAINUMGAL, U. V.; KUMAR, M. N. V. C.; RAJESH, K. H. V. Autonomous Underwater Vehicle for Vision Based Tracking. Procedia Computer Science, v. 133, p. 169–180, 2018. International Conference on Robotics and Smart Manufacturing (RoSMa2018).


[8] LEE, D.; KIM, G.; KIM, D.; MYUNG, H.; CHOI, H.-T. Vision-based object detection and tracking for autonomous navigation of underwater robots. Ocean Engineering, v. 48, p. 59–68, 2012.

[9] BIMBRAW, K. Autonomous Cars: Past, Present and Future. A Review of the Developments in the Last Century, the Present Scenario and the Expected Future of Autonomous Vehicle Technology. In: 12TH International Conference on Informatics in Control, Automation and Robotics (ICINCO) 2015. Colmar, Alsace, France: [s.n.], 2015. v. 01, p. 191–198.

[10] POZNA, C; ANTONYA, C. Issues about autonomous cars. In: IEEE 11th International Symposium on Applied Computational Intelligence and Informatics (SACI) 2016. [S.l.: s.n.], 2016. p. 13–18.

[11] ROBOCAR RACE. [S.l.: s.n.], 2018. http://roborace.com.br/. Accessed on September 27, 2018.

[12] OMRANE, H; MASMOUDI, M S; MASMOUDI, M. Neural controller of autonomous driving by an embedded camera. In: 4TH International Conference on Advanced Technologies for Signal and Image Processing. ATSIP - 2018. Sousse, Tunisia: [s.n.], 2018. p. 1–5.

[13] RASPBERRY PI - FROM WIKIPEDIA, THE FREE ENCYCLOPEDIA. [S.l.: s.n.], 2018. https://en.wikipedia.org/wiki/Raspberry_Pi. Accessed on October 13, 2018.

[14] RASPBERRY PI CAMERA MODULE V2 - DOCUMENTATION. [S.l.: s.n.], 2018. https://www.raspberrypi.org/documentation/raspbian/applications/camera.md. Accessed on October 13, 2018.

[15] FREESCALE CUP DC MOTOR SPECIFICATIONS. [S.l.: s.n.], 2012. https://community.nxp.com/docs/DOC-93309. Accessed on October 13, 2018.

[16] DUAL FULL-BRIDGE DRIVER. [S.l.: s.n.]. https://www.mouser.com/datasheet/2/389/l298-954744.pdf. Accessed on October 17, 2018.

[17] POWER HD HIGH-TORQUE SERVO 1501MG. [S.l.: s.n.], 2018. https://www.pololu.com/file/0J729/HD-1501MG.pdf. Accessed on October 13, 2018.

[18] THE PIGPIO LIBRARY. [S.l.: s.n.], 2018. http://abyz.me.uk/rpi/pigpio/. Accessed on October 13, 2018.

[19] ROLLING SHUTTER - FROM WIKIPEDIA, THE FREE ENCYCLOPEDIA. [S.l.: s.n.], 2019. https://en.wikipedia.org/wiki/Rolling_shutter. Accessed on January 09, 2019.

[20] RASPBERRY PI CAMERA MODULE V2 - THE PICAMERA PACKAGE. [S.l.: s.n.], 2018. https://picamera.readthedocs.io/en/release-1.13/index.html. Accessed on October 13, 2018.

[21] ASSEMBLY OF THE FREESCALE CUP CAR CHASSIS. [S.l.: s.n.], 2012. https://community.nxp.com/docs/DOC-1014. Accessed on October 13, 2018.

[22] ROSEBROCK, A. Raspbian Stretch: Install OpenCV 3 + Python on your Raspberry Pi. [S.l.: s.n.], 2017. https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your- raspberry-pi/.

[23] NETO, A M; RITTNER, L. A Simple and Efficient Road Detection Algorithm for Real Time Autonomous Navigation based on Monocular Vision. In: 2006 IEEE 3rd Latin American Robotics Symposium. Santiago, Chile: [s.n.], 2006. p. 92–99.

[24] OPENCV-PYTHON TUTORIALS. [S.l.: s.n.], 2018. https://docs.opencv.org/3.4.3/d6/d00/tutorial_py_root.html. Accessed on October 14, 2018.


[25] OPENCV LIBRARY: MORPHOLOGICAL TRANSFORMATIONS. [S.l.: s.n.], 2019. https://docs.opencv.org/3.4.2/d4/d76/tutorial_js_morphological_ops.html. Accessed on January 14, 2019.

[26] IMAGE MOMENT - FROM WIKIPEDIA, THE FREE ENCYCLOPEDIA. [S.l.: s.n.], 2018. https://en.wikipedia.org/wiki/Image_moment. Accessed on October 24, 2018.

[27] OPENCV LIBRARY. [S.l.: s.n.], 2018. https://opencv.org/. Accessed on October 11, 2018.

[28] GUMUS, O.; TOPALOGLU, M.; OZCELIK, D. The Use of Computer Controlled Line Follower Robots in Public Transport. Procedia Computer Science, v. 102, p. 202–208, 2016.

[29] KATRAKAZAS, C.; QUDDUS, M.; CHEN, W.-H.; DEKA, L. Real-time motion planning methods for autonomous on-road driving: State-of-the-art and future research directions. Transportation Research Part C: Emerging Technologies, v. 60, p. 416–442, nov. 2015.

[30] KIM, J.; JO, K.; KIM, D.; CHU, K.; SUNWOO, M. Behavior and Path Planning Algorithm of Autonomous Vehicle A1 in Structured Environments. In: 8TH IFAC Symposium on Intelligent Autonomous Vehicles. [S.l.: s.n.], Jun. 2013. v. 46, p. 36–41.

[31] PENDLETON, S. D.; ANDERSEN, H.; DU, X.; SHEN, X.; MEGHJANI, M.; ENG, Y. H.; RUS, D.; ANG, M. H. Perception, Planning, Control, and Coordination for Autonomous Vehicles. Machines, v. 5, n. 1, mar. 2017.

[32] FAZILI, A.; IMAAN, D. ul; RASHID, M.M. Development of autonomous lane following and collision avoiding robot using image processing. In: 2012 International Conference on Computer and Communication Engineering (ICCCE). [S.l.: s.n.], jul. 2012. p. 331–337.

[33] LAU, T. K. Learning autonomous drift parking from one demonstration. In: 2011 IEEE International Conference on Robotics and Biomimetics. [S.l.: s.n.], Dec. 2011. p. 1456–1461.

[34] RACHUJ, S.; REICHENBACH, M.; VAAS, S.; FEY, D. Autonomous Driving in the Curriculum of Computer Architecture. In: 2018 12th European Workshop on Microelectronics Education (EWME). [S.l.: s.n.], Sep. 2018. p. 11–16.

[35] DO, T.; DUONG, M.; DANG, Q.; LE, M. Real-Time Self-Driving Car Navigation Using Deep Neural Network. In: 2018 4th International Conference on Green Technology and Sustainable Development (GTSD). [S.l.: s.n.], nov. 2018. p. 7–12.

[36] AUTONOMOUS RACING ROBOT WITH AN ARDUINO, A RASPBERRY PI AND A PI CAMERA. [S.l.: s.n.], 2017. https://becominghuman.ai/autonomous-racing-robot-with-an-arduino-a-raspberry- pi-and-a-pi-camera-3e72819e1e63. Accessed on February 08, 2019.

[37] ALDEGHERI, S.; BOMBIERI, N. Rapid Prototyping of Embedded Vision Systems: Embedding Computer Vision Applications into Low-Power Heterogeneous Architectures. In: 2018 International Symposium on Rapid System Prototyping (RSP). [S.l.: s.n.], Oct. 2018. p. 63–69.

[38] BABU, V. S. An Experiment Platform for Implementing Advanced Algorithm in Scaled Autonomous Cars. 2018. M.Sc. Thesis – Faculty of the School of Engineering and Applied Science, University of Virginia.


CONTACT INFORMATION

Isabelle Diniz Orlandi (corresponding author) [email protected]

Daniel Pereira Cinalli [email protected]

Italo Milhomem de Abreu Lanza [email protected]

Thiago Lima de Almeida [email protected]

Tito Caco Curimbaba Spadini [email protected]

Pedro Ivo da Cruz [email protected]

Filipe Ieda Fazanaro [email protected]
