International Journal of Advanced Science and Technology, Vol. 29, No. 5 (2020), pp. 9099-9106

Self Driving Car Using Raspberry Pi

B. Sivakumar¹, Anany Chitranshi², Suyash Srivastava³

¹ Associate Professor, Department of CSE, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India 603203
²,³ Students, Department of CSE, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India 603203
[email protected], [email protected], [email protected]

Abstract

The objective of this project is to facilitate driving a car by automating it. The outcome aims to reduce the number of car accidents occurring today. The idea is to create an autonomous vehicle (a hybrid of Level 2 and Level 3 autonomy) that uses only a few sensors (collision detectors, temperature detectors, etc.) and a camera module to travel between destinations with minimal or no human intervention. The car contains a trained Convolutional Neural Network (CNN) that predicts the parameters required to drive the car smoothly. The network's output is connected directly to the steering mechanism and determines the steering angle of the vehicle. Algorithms such as lane detection and object detection are used in tandem to provide the necessary functionality.

Keywords: Raspberry Pi, lane detection, obstacle detection, OpenCV, deep learning, Convolutional Neural Network (CNN)

1. Introduction

Behaviour cloning is a method by which machines mimic human behaviour. It involves learning patterns in data by observing human behaviour and then replicating that behaviour when required. It is generally used in areas where the generic pipeline approach fails to perform well. A self-driving car can be defined as a car that has been automated using such an algorithmic approach.
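The core idea above is that a trained CNN's output directly determines the steering angle. As a minimal, hypothetical sketch of that final mapping step (the function name, the normalized output range, and the 35° limit are assumptions for illustration, not values from the paper), a network output in [-1, 1] can be converted to a physical steering angle and clamped to the mechanism's limits:

```python
def output_to_steering_angle(model_output: float, max_angle_deg: float = 35.0) -> float:
    """Map a CNN output in [-1.0, 1.0] to a steering angle in degrees.

    Negative values steer left, positive values steer right. The value
    is clamped first, so a noisy prediction can never command an angle
    beyond the physical limits of the steering mechanism.
    """
    clamped = max(-1.0, min(1.0, model_output))
    return clamped * max_angle_deg

# A prediction of 0.5 maps to half the maximum angle; out-of-range
# predictions are clamped to the limit.
print(output_to_steering_angle(0.5))   # 17.5
print(output_to_steering_angle(1.7))   # 35.0
```

Clamping before scaling is the important design choice here: the model is a statistical predictor, and the actuator, not the network, must define the hard safety limits.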
Such a car can drive anywhere and perform driving-related tasks with little or no human intervention. A vast amount of data collected while a human drives the car manually is fed to a deep neural network, which processes it and predicts the desired outcome. This data helps the system identify road lanes, detect pedestrians and other objects, recognize traffic lights, and act according to the situation.

Advanced driver-assistance systems (ADAS) are already available on the market. They include features such as adaptive cruise control, emergency braking, lane detection, and more. The goal of ADAS is to make driving safer by reducing human error. However, even the most advanced ADAS requires the driver to pay full attention and to intervene whenever necessary.

The main advantage autonomous vehicles offer is safety: the NHTSA estimates that the majority of life-threatening crashes are caused by human error. These risks can be greatly reduced by reducing human intervention while driving, although such cars remain prone to mechanical damage and circuit failure. Looking at the bigger picture, the technology can be a boon: it can help people with disabilities and the elderly live independently, and it can prevent many of the life-threatening accidents caused worldwide by careless driving.

ISSN: 2005-4238 IJAST. Copyright ⓒ 2020 SERSC.

2. State of the Art (Literature Survey)

This paper proposes a different type of architecture that does not follow the standard pipeline model; instead, it uses an end-to-end trainable model.
Since it is end-to-end trainable, it requires a vast amount of data. In this modelling architecture the output is predicted in one shot, so it is faster than the standard pipeline method. The approach uses a regression-based mapping and is able to predict the exact steering angle required to keep the vehicle in its lane. The results in this paper show the considerable advantage an RCNN network has over a simple CNN network. [1]

This research paper proposes various modelling techniques and hyperparameter-tuning methods to facilitate driving a car in a wide variety of complex environmental conditions and agent behaviours. According to the paper, explicitly defining all possible scenarios is unrealistic, so in theory we can leverage data from large fleets of human-driven cars and use it to make predictions in an autonomous vehicle. The paper also proposes a new benchmark to measure the scalability and limitations of behaviour cloning, and the model properly handles issues of overfitting and dataset bias. [2]

This work uses a Convolutional Neural Network with a custom architecture known as the NVIDIA model. It is far more complex than the LeNet model and gives better results under most conditions. The system automatically learns how to drive a car by adjusting throttle and steering angle according to the given conditions. The NVIDIA architecture processes images in YUV format and uses the NVIDIA DRIVE PX system as its computational power source. It used three camera modules to capture images from three different angles (center, left, and right). [3]

In this paper, a deep Convolutional Neural Network was trained to learn safe driving behaviour, which consisted of lane detection, automatic throttle control, traffic-signal detection, and more.
The data is collected from a single camera module placed in the car so that it captures the upcoming road (the front view), and is then fed to the CNN. The CNN architecture used here is known as BCNet (Behaviour Cloning CNN), a deep 17-layer model. The optimizer used is Adam and the activation function is ELU. The paper discusses in detail each step from beginning to end, and the proposed approach proved successful, replicating sub-cognitive human driving behaviour. [4]

In this paper, the authors propose a technique known as Behaviour Cloning from Observation (BCO), a two-step process. The first step is to learn the important parameters by observing a human performing the specified task. The next step is to build a model from these observations that predicts the outcome of an unknown situation. The paper also compares GAIL (Generative Adversarial Imitation Learning) and BCO in several simulation domains, discussing the advantages and disadvantages of each in a given environment and when to use which behaviour-cloning technique. [5]

This paper explores the impact self-driving cars might have on today's society and on the transport market, along with the pros and cons of using an autonomous vehicle over a manual one. It discusses the likelihood of people switching to self-driving cars; the analysis indicates that the transition might begin in the 2030s. Among the benefits it mentions are less traffic, safer roads, and fewer life-threatening accidents; the drawbacks include the possibility of system failure and poor performance in some weather conditions and at night. [6]

In this paper, the main focus is DeepTest, a tool whose job is to detect inaccurate behaviour in autonomous cars. It focuses on enhancing a model's accuracy by predicting the mistakes the DL network can make and the outcomes that might result. It combines several sensors, LIDAR, and Artificial Intelligence (AI); these sensor combinations let the model adapt to extreme weather conditions. DeepTest explores different environments while testing the model, so it outperforms many other architectures when it comes to accuracy. [7]

3. Hardware Design

3.1 List of Hardware

● Pre-built RoboCar chassis (4 wheels)
● Raspberry Pi Model B+ (with Wi-Fi module)
● Power bank
● L298N H-bridge motor driver
● Jumper wires to connect individual components
● Pi Camera

3.2 Hardware and Software Description

3.2.1 Raspberry Pi

The Raspberry Pi is a small single-board computer. The model we are using is the Model B+. It has a Wi-Fi module and a slot to connect an external camera module. Power is provided through the custom-built PCB that we have made. Here the Raspberry Pi is our primary device, used mainly for image processing.

Fig 1: Features offered in the Raspberry Pi Model B+

3.2.2 Pi Camera

This is an external 8 MP camera that provides high-definition images of the road to the Raspberry Pi. In this project, this sensor provides the input to our deep learning model.

3.2.3 Raspbian OS

Of all the Linux operating systems, Raspbian is considered the best fit for the Raspberry Pi. It is user-friendly, includes all the default software required, is free, is based on Debian, and can be downloaded easily from the official Raspberry Pi website.
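Frames from the Pi Camera must be preprocessed before being fed to the network; the NVIDIA-style pipeline cited in the literature survey works on YUV images rather than RGB. As a sketch of that conversion for a single pixel (pure Python, using the standard BT.601 analog coefficients; on whole frames one would normally use OpenCV's `cv2.cvtColor` instead, which applies an equivalent conversion offset for 8-bit storage):

```python
def rgb_to_yuv(r: float, g: float, b: float) -> tuple:
    """Convert one RGB pixel (0-255 per channel) to YUV using the
    BT.601 analog coefficients: Y is the luma (brightness), U and V
    are the chroma (color-difference) components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return (y, u, v)

# Pure white carries full luma and (to rounding) zero chroma.
y, u, v = rgb_to_yuv(255, 255, 255)
print(round(y), round(u), round(v))  # 255 0 0
```

Separating luma from chroma is what makes YUV attractive for driving networks: lane markings and road edges are mostly brightness features, so they concentrate in the Y channel.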
3.2.4 Arduino UNO

This microcontroller is also used in our self-driving car.
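The L298N H-bridge listed in Section 3.1 drives the chassis motors with PWM signals. As a hypothetical sketch (the differential-drive mixing scheme and value ranges are illustrative assumptions, not taken from the paper), throttle and steering commands can be mixed into per-side duty cycles before being handed to the PWM hardware:

```python
def mix_to_duty_cycles(throttle: float, steering: float) -> tuple:
    """Mix throttle and steering (both in [-1.0, 1.0]) into left/right
    PWM duty cycles (0-100 %) for a differential-drive chassis.
    Positive steering slows the right side, turning the car right."""
    left = throttle + steering
    right = throttle - steering

    def to_duty(v: float) -> float:
        # Clamp to the valid duty-cycle range of the L298N inputs.
        return max(0.0, min(1.0, v)) * 100.0

    return (to_duty(left), to_duty(right))

print(mix_to_duty_cycles(0.5, 0.0))  # (50.0, 50.0): straight ahead
print(mix_to_duty_cycles(0.5, 0.5))  # (100.0, 0.0): hard right

# On the car itself these duty cycles would be applied via RPi.GPIO's
# PWM interface on the L298N enable pins -- omitted here because it
# requires the actual hardware.
```

Clamping each side independently means a hard turn at high throttle saturates one wheel rather than commanding an invalid duty cycle.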
