International Journal of Advanced Science and Technology Vol. 29, No. 5, (2020), pp. 9099-9106

Self Driving Car using

B. Sivakumar¹, Anany Chitranshi², Suyash Srivastava³
¹Associate Professor, Department of CSE, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India 603203.
²,³Student, Department of CSE, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India 603203.
[email protected], [email protected], [email protected]

Abstract

The objective of this project is to facilitate the process of driving a car by automating it, which should reduce the number of car accidents occurring today. The idea is to create an autonomous vehicle (a hybrid of Level 2 and Level 3 autonomy) that uses only a few sensors (collision detectors, temperature detectors, etc.) and a camera module to travel between destinations with minimal or no human intervention. The car contains a trained Convolutional Neural Network (CNN) that predicts the parameters required to drive smoothly. Its outputs are connected directly to the main steering mechanism, so the deep learning model determines the steering angle of the vehicle. Algorithms such as lane detection and object detection are used in tandem to provide the remaining functionality in the car.

Keywords: Raspberry Pi, lane detection, obstacle detection, OpenCV, deep learning, Convolutional Neural Networks (CNN)

1. Introduction

Behaviour cloning is a method by which machines mimic human behaviour. It involves learning the patterns in data recorded while observing a human perform a task, and then replicating that behaviour when required. It is generally used in areas where the classic hand-engineered pipeline fails to perform. A self-driving car can be defined as a car whose driving has been automated by some algorithmic approach: it can drive anywhere and perform driving-related human tasks with little or no human intervention. A vast amount of data is collected while a human drives the car manually; this data is fed to a deep neural network, which processes it and predicts the appropriate outputs. The data thus enables the system to identify road lanes, detect pedestrians and other objects, recognise the state of traffic lights, and finally act according to the situation.

Advanced driver-assistance systems (ADAS) are already available in the market. They include features such as adaptive cruise control, emergency braking, lane detection, and more. The goal of ADAS is to make driving safer by reducing human error.

However, even the most advanced ADAS still requires the driver to pay full attention while driving and to intervene whenever necessary. The main advantage that autonomous vehicles promise is safety: the NHTSA estimates that the majority of life-threatening crashes are due to human error, and this risk can be sharply reduced if human involvement in driving is reduced. These cars remain prone to mechanical damage or circuit failure, but in the bigger picture the technology can be a boon. It can help people with disabilities and the elderly live the life they want without any dependencies, and it should eliminate many of the life-threatening accidents caused all over the world by careless driving errors made by humans.


2. State of the Art (Literature Survey)

This paper proposes a different type of architecture that does not follow the standard pipelined model; instead it is trainable end to end. Because it is end-to-end trainable, it requires a vast amount of data. In this architecture the output is predicted in one shot, so it is faster than the standard pipeline method. The approach performs a regression-based mapping and is able to predict the exact steering angle required to keep the vehicle in its lane. The results depict the considerable advantage that an RCNN has over a simple CNN. [1]

This paper proposes various modelling techniques and hyperparameter-tuning methods to facilitate driving in a wide variety of complex environmental conditions and agent behaviours. According to the paper, explicitly defining all possible scenarios is unrealistic, so in theory data from large fleets of human-driven cars can be leveraged to make predictions in an autonomous vehicle. The paper also proposes a new benchmark to measure the scalability and limitations of behaviour cloning, and the model properly handles the issues of overfitting and dataset bias. [2]

This work used a Convolutional Neural Network with NVIDIA's custom DAVE-2 architecture. It was far more complex than the LeNet model and gave better results in most conditions. The system automatically learns to drive by adjusting throttle and steering angle according to the current situation. The NVIDIA architecture processes images in YUV format, uses the DRIVE PX system as its computational platform, and has three camera modules capturing images from three different angles (centre, left, and right). [3]

In this paper, a deep Convolutional Neural Network is trained to learn safe driving behaviour, covering lane detection, automatic throttle control, traffic-signal detection, and more. Data is collected from a single camera module placed in the car so that it captures the road ahead, and the frames are fed to the CNN. The architecture, known as BCNet (Behaviour Cloning CNN), is a deep 17-layer model trained with the Adam optimizer and ELU activations. The paper describes each required step in detail from beginning to end, and the proposed approach proved successful in replicating sub-cognitive human driving behaviour. [4]

This paper proposes a technique known as Behaviour Cloning from Observation (BCO), a two-step process. The first step is to learn the important parameters by observing a human performing the specified task; the second is to build a model from these observations that predicts the outcome of an unseen situation. The paper also compares BCO against GAIL (Generative Adversarial Imitation Learning) in several simulation domains, discussing the advantages and disadvantages of each in a given environment and when each behaviour-cloning technique should be used. [5]

This paper explores the impact that self-driving cars might have on today's society and on the transport market, weighing the pros and cons of autonomous vehicles against manual ones and assessing how likely people are to switch to a self-driving car. The analysis indicates that the transition might begin in the 2030s. Among the benefits discussed are less traffic, safer roads, and fewer life-threatening accidents; the drawbacks mentioned include the possibility of system failure and poor performance in some weather conditions and at night. [6]


This paper focuses on DeepTest, a tool whose job is to detect inaccurate behaviour in autonomous cars. It aims to improve model accuracy by predicting the mistakes a deep learning network can make and the outcomes that might follow. The tested system combines several sensors, LIDAR, and artificial intelligence (AI) so that the model can adapt to extreme weather conditions. Because DeepTest explores many different environments while testing the model, it outperforms many other architectures in terms of accuracy. [7]

3. Hardware Design

3.1 List of Hardware

● Pre-built RoboCar chassis (4 wheels)
● Raspberry Pi Model B+ (with Wi-Fi module)
● Power bank
● L298N H-bridge motor driver
● Jumper wires to connect the individual components
● Pi Camera

3.2 Hardware and Software Description

3.2.1 Raspberry Pi

The Raspberry Pi is a small single-board computer; the model used here is the Model B+. It has an on-board Wi-Fi module and a slot for connecting the external camera module, and it is powered through the custom-built PCB that we made. The Raspberry Pi is our primary device and is used mainly for image processing.

Fig 1: Features offered in the Raspberry Pi Model B+

3.2.2 Pi Camera

It is an external 8 MP camera that provides high-definition images of the road to the Raspberry Pi. In this project it supplies the input frames for our deep learning model.
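For illustration, the minimal Python sketch below streams frames from the Pi Camera into OpenCV using the picamera library; the resolution and frame rate are assumptions, not values from the paper.

```python
# Minimal sketch: streaming Pi Camera frames into OpenCV.
# FRAME_SIZE and the frame rate are illustrative choices.
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

FRAME_SIZE = (320, 240)

camera = PiCamera(resolution=FRAME_SIZE, framerate=20)
buffer = PiRGBArray(camera, size=FRAME_SIZE)

# capture_continuous yields one frame per iteration without re-initialising
for frame in camera.capture_continuous(buffer, format="bgr", use_video_port=True):
    image = frame.array              # numpy array, ready for OpenCV
    cv2.imshow("road", image)
    buffer.truncate(0)               # reset the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```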

3.2.3 Raspbian OS

Of all the operating systems available, Raspbian is considered the best fit for the Raspberry Pi. It is the most user-friendly and polished option and ships with all the default software required. It is free, based on Debian, and can be downloaded easily from the official Raspberry Pi website.

3.2.4 Arduino UNO

The Arduino UNO acts as a slave device in our car, acting on the input provided by the Raspberry Pi: it regulates the motors and adjusts their speed as required.

3.2.5 L298N H-bridge Motor Driver

It allows simultaneous speed and direction control of two DC motors. It operates on voltages between 5 V and 35 V with a peak current of 2 A, and its two PWM pins are used to control the motor speeds.
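To make the control scheme concrete, here is an illustrative Python sketch of driving one L298N channel with two direction pins and a PWM enable pin. Note that in our build the Arduino, not the Pi, drives the L298N, and the pin numbers below are hypothetical.

```python
# Illustrative L298N control: two direction pins (IN1/IN2) plus one PWM
# enable pin (ENA) per motor channel. Pin numbers are hypothetical; the
# paper's build drives the L298N from the Arduino, not directly from the Pi.
import RPi.GPIO as GPIO

IN1, IN2, ENA = 23, 24, 18        # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA], GPIO.OUT)
pwm = GPIO.PWM(ENA, 1000)         # 1 kHz PWM on the enable pin
pwm.start(0)

def drive(speed_percent, forward=True):
    """Set direction via IN1/IN2 and speed via PWM duty cycle (0-100)."""
    GPIO.output(IN1, forward)
    GPIO.output(IN2, not forward)
    pwm.ChangeDutyCycle(speed_percent)

drive(60)                         # forward at 60% duty cycle
```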

Fig 2: An L298N dual H-bridge motor driver

4. Hardware Components Connection

The four wheels of the chassis are each connected to a DC motor, and the adjacent wheels are wired in series so that they turn in the same direction. The motors are connected to the L298N dual H-bridge motor driver, which regulates their speed through the Arduino as required. The Raspberry Pi, with the Pi Camera attached, is then connected to the Arduino UNO. We also built a custom PCB to provide the circuitry for all the peripheral components.
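The paper does not specify the protocol between the Raspberry Pi and the Arduino, but a simple serial link is typical. The sketch below assumes a hypothetical comma-separated "steering,throttle" command format over USB serial using pyserial.

```python
# Hypothetical Pi-to-Arduino link: the exact protocol is not given in the
# paper, so we assume a simple "steering,throttle\n" line over USB serial.
import serial

# /dev/ttyACM0 is the usual port for an Arduino UNO attached to a Pi
link = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1)

def send_command(steering_angle, throttle):
    """Send one command line; the UNO parses it and drives the motors."""
    link.write(f"{steering_angle:.2f},{throttle:.2f}\n".encode("ascii"))

send_command(-12.5, 0.4)   # e.g. steer slightly left at 40% throttle
```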

Fig 3: Motor connection demonstration


Fig 4: Hardware assembly

5. Proposed Work

5.1 Data Collection and Processing

We used the German Traffic Sign dataset to classify traffic signals and act on them. The Raspberry Pi camera provides high-resolution images of the road; each frame is first grayscaled and then cropped to the region of interest we specified, which greatly reduces the image size. The processed image is stored alongside the steering angle and throttle of the car at that moment. This dataset is fed to the CNN for training, producing a model that can then predict the steering angle and throttle of the car from a real-time image of the road.
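A minimal sketch of the preprocessing described above, in Python with OpenCV; the crop boundary ROI_TOP and the CSV logging format are illustrative assumptions.

```python
# Preprocessing sketch: grayscale, then crop to the region of interest.
import cv2

ROI_TOP = 60   # illustrative: rows above this are sky/background, discarded

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # drop colour information
    roi = gray[ROI_TOP:, :]                          # keep only the road region
    return roi

# During manual driving, each processed frame is logged together with the
# car's steering angle and throttle so the CNN can learn the mapping.
def log_sample(csv_writer, image_path, steering, throttle):
    csv_writer.writerow([image_path, steering, throttle])
```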


Fig 5: Comparison between processed and original frame

Fig 6: Overview of the system

5.2 Deep Learning Model

We used a sequential model:

a) Five 2D convolutional layers, each with a 5 × 5 kernel; the activation used in every layer is ELU.
b) One dropout layer of shape (1, 18, 64).
c) A max-pooling layer with a 2 × 2 window.
d) A convolutional layer with 64 kernels of size 3 × 3, padded to maintain size.
e) A max-pooling layer with a 2 × 2 window.

Data augmentation is required to train the model. The learning rate is initialised at 0.01, the Adam optimizer is used, and the model is trained for a total of 30 epochs; one possible implementation is sketched below. After building and testing the model, the accuracy came to around 95%. To improve it, we examined the distribution of the dataset, found that it was biased towards the centre, and trained the model further on turns.
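The following Keras sketch is one possible reconstruction of the network described above. The input shape, the filter counts and strides of the five 5 × 5 layers, and the dense head are assumptions (the exact summary appears in Fig 8); the ELU activations, dropout, pooling, 3 × 3 convolution, Adam optimizer, 0.01 learning rate, and 30 epochs follow the text.

```python
# One possible Keras reconstruction of the described model (assumptions noted
# in the text above); outputs a single predicted steering angle.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Conv2D(24, (5, 5), strides=(2, 2), activation="elu", input_shape=(66, 200, 3)),
    Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
    Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
    Conv2D(64, (5, 5), activation="elu", padding="same"),
    Conv2D(64, (5, 5), activation="elu", padding="same"),
    Dropout(0.5),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="elu", padding="same"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(100, activation="elu"),
    Dense(50, activation="elu"),
    Dense(1),                       # predicted steering angle
])

model.compile(optimizer=Adam(learning_rate=0.01), loss="mse")
# model.fit(X_train, y_train, epochs=30, validation_split=0.2)
```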


We also added augmented data to the dataset so that the car learns how to recover from a poor position or orientation. This raised the model's accuracy to around 98%.
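As an illustration, a common augmentation recipe pairs a horizontal flip (negating the steering label) with random brightness jitter; the exact recipe used in the project is not specified, so treat this sketch as an assumption.

```python
# Augmentation sketch: a horizontal flip simulates the mirrored recovery
# manoeuvre, so the steering label must be negated as well.
import cv2
import numpy as np

def augment(image, steering):
    if np.random.rand() < 0.5:                       # random horizontal flip
        image = cv2.flip(image, 1)
        steering = -steering
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] *= np.random.uniform(0.7, 1.3)       # random brightness
    hsv[..., 2] = np.clip(hsv[..., 2], 0, 255)
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    return image, steering
```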

Fig 7: Biased Dataset

Fig 8: Model Summary

5.3 Simulation

Before testing the model on real roads, we evaluated it in the Udacity self-driving-car simulator, which can be run in two modes: 1) autonomous mode and 2) training mode. We first switched to training mode and drove the car manually with the keyboard, generating the dataset. This dataset was used to train the deep learning model, which then drove the vehicle automatically in autonomous mode.
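For reference, the usual bridge between a trained model and the Udacity simulator looks roughly like the sketch below: the simulator streams telemetry over socket.io on port 4567 and accepts steering and throttle commands back. The model file name and the fixed throttle are illustrative, and the frame would need the same preprocessing used during training.

```python
# Bare-bones Udacity-simulator bridge; field names follow the standard
# simulator protocol, everything else is illustrative.
import base64, io
import numpy as np
import socketio
import eventlet
from PIL import Image
from tensorflow.keras.models import load_model

sio = socketio.Server()
model = load_model("model.h5")          # model file name is illustrative

@sio.on("telemetry")
def telemetry(sid, data):
    # decode the centre-camera frame sent by the simulator
    image = Image.open(io.BytesIO(base64.b64decode(data["image"])))
    frame = np.asarray(image)[None, ...]            # add batch dimension
    steering = float(model.predict(frame, verbose=0))
    sio.emit("steer", data={"steering_angle": str(steering), "throttle": "0.2"})

app = socketio.WSGIApp(sio)
eventlet.wsgi.server(eventlet.listen(("", 4567)), app)
```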


Fig 9: Simulator in autonomous mode

5.4 Working in a Real-Time Environment

5.4.1 Algorithm

The gist of the algorithm is that the Raspberry Pi extracts and processes every frame captured by its camera and then sends instructions to the slave device, the Arduino UNO in this case. The steps are as follows (an OpenCV sketch of steps 1-5 is given after the references):

1. Define a 3D array of scalars to store the intensities of every frame.
2. Select a threshold value and mask the image.
3. Adjust the region of interest to minimise the image size and thereby reduce the computation needed to process it.
4. Find the Hough lines on the road curve.
5. Take the average slope of all the lines and draw a line using that average slope.
6. Drive the car manually and record the throttle and turn intensity for every frame.
7. After training, use the deep learning model to drive the car in autonomous mode.
8. Use the YOLO technique to identify traffic signals and change the car's behaviour accordingly through the Arduino UNO.

6. Conclusion

In this paper, a method for building a self-driving car is discussed, covering both the hardware assembly and the deep learning part of the project. A single camera module, with the help of some image processing, was able to drive the car without human intervention; the camera even detects obstacles in front of the car so that it stops or slows accordingly. To make the car more versatile, more sensors (LIDAR, RADAR, etc.) would be required.

References

[1] S. Kumaar, N. Krishnan B., S. Hegde, P. Raja and R. M. Vishwanath, "Behavioural Cloning for Autonomous Driving" (IEEE).
[2] F. Codevilla, E. Santana, A. M. López and A. Gaidon, "Exploring the Limitations of Behavior Cloning for Autonomous Driving" (IEEE).
[3] "End to End Learning for Self-Driving Cars," NVIDIA (the DAVE-2 system).
[4] W. Farag, "Cloning Safe Driving Behavior for Self-Driving Cars using Convolutional Neural Networks," American University of the Middle East, Kuwait, and Cairo University, Egypt.
[5] F. Torabi, G. Warnell and P. Stone, "Behavioral Cloning from Observation," The University of Texas at Austin.
[6] T. Litman, "Autonomous Vehicle Implementation Predictions," Victoria Transport Policy Institute.
[7] Y. Tian, "Automated Testing of Deep-Neural-Network-driven Autonomous Cars," University of Virginia.
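As noted in Section 5.4.1, the following is an illustrative OpenCV sketch of steps 1-5 of the algorithm (threshold, region-of-interest mask, Hough lines, average slope). All numeric parameters are assumptions rather than values from the paper.

```python
# Lane-detection sketch for steps 1-5 of the algorithm in Section 5.4.1.
import cv2
import numpy as np

def detect_lane(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 170, 255, cv2.THRESH_BINARY)   # step 2
    h, w = mask.shape
    roi = np.zeros_like(mask)
    roi[h // 2:, :] = 255                    # step 3: keep the lower half only
    masked = cv2.bitwise_and(mask, roi)
    edges = cv2.Canny(masked, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)     # step 4
    if lines is None:
        return frame
    slopes = [(y2 - y1) / (x2 - x1 + 1e-6) for x1, y1, x2, y2 in lines[:, 0]]
    avg_slope = float(np.mean(slopes))       # step 5: average slope
    # draw a guide line from the bottom centre using the averaged slope
    x0, y0 = w // 2, h
    x1, y1 = int(x0 - (h // 2) / (avg_slope + 1e-6)), h // 2
    cv2.line(frame, (x0, y0), (x1, y1), (0, 255, 0), 3)
    return frame
```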
