Journal of Interdisciplinary Cycle Research, ISSN: 0022-1945, Volume XII, Issue V, May 2020

Simulation of Self-Driving Car

Department of Computer Engineering, Xavier Institute of Engineering, Mumbai

1st Asst. Prof. Teena Varma, Department of Computer Engineering, Xavier Institute of Engineering, Mumbai University, Mumbai, India ([email protected])

2nd Mr. Aman Puranchand Sharma, Department of Computer Engineering, Xavier Institute of Engineering, Mumbai University, Mumbai, India ([email protected])

3rd Mr. Dion Philip, Department of Computer Engineering, Xavier Institute of Engineering, Mumbai University, Mumbai, India ([email protected])

4th Boris Alexander, Department of Computer Engineering, Xavier Institute of Engineering, Mumbai University, Mumbai, India ([email protected])

Abstract :

Self-driving cars have rapidly become one of the most transformative technologies to emerge. Fuelled by deep learning algorithms, they are continuously driving our society forward and creating new opportunities in the mobility sector. Over the past decade, interest in self-driving cars has increased. This is due to successes in the field of deep learning, where deep neural networks are trained to perform tasks that usually require human intervention. Convolutional Neural Networks (CNNs) apply learned models to identify patterns and features in images and are useful in the field of computer vision; examples include object detection, image classification, image captioning, etc. In this project, we trained a CNN on images captured from a simulated car to drive the car autonomously. The CNN learns unique features from the images and generates steering predictions that allow the car to run without a human. The simulator provided by Udacity was used for the integration of test objectives and datasets.

Keywords : Autonomous driving, deep learning, Convolutional Neural Network (CNN), steering commands, NVIDIA, end-to-end learning, deep steering.

1. INTRODUCTION :

During the last decade, deep learning and artificial intelligence (AI) became the dominant technology behind several breakthroughs in computer vision [1], robotics [2] and natural language processing (NLP) [3]. They have also had a major impact on the autonomous driving revolution seen today in both academia and industry. Autonomous vehicles (AVs) and self-driving cars began migrating from laboratory development and testing conditions to driving on public roads. Their deployment promises a reduction in road accidents and traffic congestion, as well as improved mobility in congested cities. The label "self-driving" may seem obvious, but five SAE levels are actually used to define autonomous driving. The SAE J3016 standard [4] introduces a scale from 0 to 5 for grading vehicle automation. Lower SAE levels feature basic driver assistance, while higher SAE levels move towards vehicles that do not require any human involvement; Level 5 cars are required to operate with no human input and usually without a steering wheel or foot pedals. One of the first autonomous cars was developed in the 1980s [5]. This paved the way for new research projects, such as Prometheus, which aimed to develop a fully functional autonomous automobile. In 1994, the VaMP driverless automobile managed to drive 1,600 km, out of which 95% were driven autonomously. Similarly, in 1995, CMU demonstrated autonomous driving over 6,000 km, with 98% driven autonomously.


Another important milestone in autonomous driving was the DARPA Grand Challenges in 2004 and 2005, as well as the DARPA Urban Challenge in 2007. The goal was for a driverless automobile to navigate an off-road course as fast as possible, without human intervention. In 2004, none of the fifteen vehicles completed the race. Stanley, the winner of the 2005 race, leveraged machine learning techniques for navigating the unstructured environment. This was a turning point in self-driving automobile development, acknowledging machine learning and AI as central components of autonomous driving; it is also notable because the bulk of the related work reviewed here is dated after 2005. In this paper, we review the computing and deep learning technologies used in autonomous driving, with a focus on state-of-the-art deep learning and AI methods applied to self-driving cars.

In recent years, autonomous driving algorithms using low-cost vehicle-mounted cameras have attracted increased research efforts from both academia and industry. Different levels of automation have been defined. At Level 0 there is no automation: the human driver controls the vehicle. Levels 1 and 2 are Advanced Driver Assistance Systems, where a human driver still controls the vehicle but some features, such as stability control, are automated. At Level 3, vehicles are autonomous, but the human driver still needs to monitor the system and intervene whenever necessary. Level 4 vehicles are fully autonomous but limited to the operational design domain of the vehicle, i.e. they do not cover every driving scenario. Level 5 vehicles are expected to be fully autonomous, with performance equal to that of a human driver. We are far from getting Level 5 autonomous vehicles in the near future; however, Level 3/4 autonomous vehicles may well become a reality soon. The main drivers of this progress are the excellent research being done in computer vision and machine learning, together with ever-lower-cost vehicle-mounted cameras, which can provide either directly actionable information or a complement to other sensors. Many vision-based driver support features are widely available in modern vehicles, including pedestrian/bicycle detection, collision estimation based on lane-distance estimation, front car departure warning, etc. In this project, however, we target autonomous steering, which is relatively unexplored territory in the fields of computer vision and machine learning.

Specifically, we implement a configuration that maps raw pixels from camera images to steering commands through a convolutional neural network (CNN). With minimal training data from humans, the system learns to steer on the road, with or without lane markings. A short summary of related work from previous years is included. Sections IV and V elaborate on data collection and data preprocessing, respectively. Section VI explains the deep learning model that we used, and Section VII describes the system design. System performance is evaluated in Section VIII, while Sections IX and X discuss future work and conclusions.

2. Overview of Deep Learning Technologies :

In this section, we describe the basis of the deep learning technologies used in autonomous vehicles and treat the capabilities of each paradigm. Throughout this section, we use the following notation to describe time-dependent sequences. The value of a variable is defined either for a single discrete time step t, written as superscript <t>, or as a discrete sequence defined within the interval <t, t+k>, where k denotes the length of the sequence. As an example, the value of a state variable z is defined either at discrete time t, as z^<t>, or within a sequence interval, z^<t,t+k>. Vectors and matrices are indicated by bold symbols.

2.1 Deep Convolutional Neural Networks :


Convolutional Neural Networks (CNNs) are mainly used for processing spatial information, such as images, and can be viewed as image feature extractors and universal non-linear function approximators [7], [8]. Before the rise of deep learning, computer vision systems used to be implemented with handcrafted features, such as HAAR [9], Local Binary Patterns (LBP) [10], or Histograms of Oriented Gradients (HoG) [11]. In comparison with these traditional handcrafted features, convolutional neural networks are able to automatically learn a representation of the feature space encoded within the training set.

CNNs can be loosely understood as very approximate analogies to different parts of the human visual cortex [12]. An image formed on the retina is sent to the visual cortex through the thalamus. The visual cortex receives this information in a crossed manner: the left visual cortex receives data from the right eye, while the right visual cortex is fed with visual data from the left eye. The information is processed according to the dual flux theory [13], which states that the visual flow follows two main fluxes: a ventral flux, responsible for visual identification and object recognition, and a dorsal flux, used for establishing spatial relations between objects. The earlier brain cells in the visual cortex are activated by sharp transitions in the visual field of view, in the same way in which an edge detector highlights sharp transitions in an image; these edges are further used in the brain to approximate object parts and finally to estimate abstract representations of objects.

A CNN is parametrized by its weights vector θ = [W, b], where W is the set of weights governing the inter-neural connections and b is the set of neuron bias values. The set of weights W is organized as image filters, with coefficients learned during training. Convolutional layers within a CNN exploit local spatial correlations of image pixels to learn translation-invariant convolution filters, which capture discriminant image features. Consider a multichannel signal representation M_k in layer k, which is a channel-wise integration of signal representations M_{k,c}, where c ∈ N. A signal representation is generated in layer k+1 as:

M_{k+1,l} = φ(M_k ∗ w_{k,l} + b_{k,l}),   (1)

where w_{k,l} ∈ W is a convolutional filter with the same number of channels as M_k, b_{k,l} ∈ b represents the bias, l is a channel index and φ(·) is an activation function applied to each pixel in the input. Typically, the Rectified Linear Unit (ReLU) is the most commonly used activation function in computer vision applications [1]. The final layer of a CNN is typically a fully-connected layer that acts as an object discriminator on a high-level abstract representation of objects. In a supervised manner, the response R(·; θ) of a CNN can be trained using a training database D = [(x_1, y_1), ..., (x_m, y_m)], where x_i is a data sample, y_i is the corresponding label and m is the number of training examples. The optimal network parameters can be calculated using Maximum Likelihood Estimation (MLE). For clarity of explanation, we take as an example the simple least-squares error function, which can be used to drive the MLE process:

θ̂ = arg max_θ L(θ; D) = arg min_θ Σ_{i=1..m} (R(x_i; θ) − y_i)²,   (2)

For classification purposes, the least-squares error is typically replaced by the cross-entropy or negative log-likelihood loss functions. Equation (2) is usually solved with Stochastic Gradient Descent (SGD) and the backpropagation algorithm for gradient estimation [14]. In practice, different variants of SGD are used, such as Adam [15] or AdaGrad [16].
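To make Eq. (1) concrete, a minimal NumPy sketch of a single convolutional output channel with ReLU activation is given below; the 'valid' padding, unit stride and function name are our illustrative assumptions, not details specified in the text.

import numpy as np

def conv_channel(M_k, w, b):
    # M_k: (H, W, C) input representation; w: (kh, kw, C) filter with the
    # same number of channels as M_k; b: scalar bias. Returns one output
    # channel, i.e. M_{k+1,l} = phi(M_k * w + b) with phi = ReLU.
    kh, kw, _ = w.shape
    H, W, _ = M_k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(M_k[i:i + kh, j:j + kw, :] * w) + b
    return np.maximum(out, 0.0)  # ReLU activation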
2.2 Recurrent Neural Networks :

Among deep learning techniques, Recurrent Neural Networks (RNNs) are particularly good at processing temporal sequence data, such as text or video streams. Different from conventional neural networks, an RNN contains a time-dependent feedback loop in its memory cell. Given a time-dependent input sequence [s^<t−τ_i>, ..., s^<t>] and an output sequence [z^<t+1>, ..., z^<t+τ_o>], an RNN can be "unfolded" τ_i + τ_o times to generate a loop-less network architecture matching the input length, as depicted in Fig. 2. Here t represents a temporal index, while τ_i and τ_o are the lengths of the input and output sequences, respectively. Such a neural network is also known as a sequence-to-sequence model. An unfolded network has τ_i + τ_o + 1 identical layers, i.e. each layer shares the same learned weights. Once unfolded, an RNN can be trained with backpropagation through time.


The main challenge in using basic RNNs is the vanishing gradient encountered during training. The gradient signal can be multiplied a large number of times, as many as the number of time steps. Therefore, a traditional RNN is not suitable for capturing long-term dependencies in sequence data. If a network is very deep, or processes long sequences, the gradient of the network's output will have a difficult time propagating back to affect the weights of the earlier layers. Under gradient vanishing, the weights of the network will not be effectively updated, ending up with very small weight values.

Long Short-Term Memory (LSTM) [17] networks are non-linear function approximators for estimating temporal dependencies in sequence data. Unlike traditional recurrent neural networks, LSTMs solve the vanishing gradient problem by incorporating three gates, which control the input, output and memory states. Recurrent layers exploit temporal correlations of sequence data to learn time-dependent neural structures. Consider the memory state c^<t−1> and the output state h^<t−1> of an LSTM network, sampled at time step t−1, as well as the input data s^<t> at time t. The opening or closing of a gate is controlled by a sigmoid function σ(·) of the current input signal s^<t> and the output signal of the last time point h^<t−1>, as follows:

Γ_u^<t> = σ(W_u s^<t> + U_u h^<t−1> + b_u),   (3)
Γ_f^<t> = σ(W_f s^<t> + U_f h^<t−1> + b_f),   (4)
Γ_o^<t> = σ(W_o s^<t> + U_o h^<t−1> + b_o),   (5)

where Γ_u^<t>, Γ_f^<t> and Γ_o^<t> are the gate functions of the input gate, forget gate and output gate, respectively. Given the current observation, the memory state c^<t> will be updated as:

c^<t> = Γ_u^<t> ∗ tanh(W_c s^<t> + U_c h^<t−1> + b_c) + Γ_f^<t> ∗ c^<t−1>,   (6)

and the new network output h^<t> is computed as:

h^<t> = Γ_o^<t> ∗ tanh(c^<t>).   (7)

An LSTM network Q is parametrized by θ = [W_i, U_i, b_i], where W_i represents the weights of the network's gates and memory cell multiplied with the input state, U_i are the weights governing the activations and b_i denotes the set of neuron bias values. ∗ symbolizes element-wise multiplication. In a supervised learning setup, given a set of training sequences D = [(s_1, z_1), ..., (s_q, z_q)], that is, q independent pairs of observed sequences with assignments z, one can train the response of an LSTM network Q(·; θ) using Maximum Likelihood Estimation:

θ̂ = arg max_θ L(θ; D) = arg min_θ Σ_{i=1..q} Σ_{t=1..τ_o} l_t(Q(s_i; θ), z_i),   (8)

where an input sequence of observations s = [s^<t−τ_i>, ..., s^<t−1>, s^<t>] is composed of τ_i consecutive data samples, l(·,·) is the logistic regression loss function and t represents a temporal index. In recurrent neural network terminology, the optimization procedure in Eq. (8) is typically used for training "many-to-many" RNN architectures, such as the one in Fig. 2, where the input and output states are represented by temporal sequences of τ_i and τ_o instances, respectively. This optimization problem is commonly solved using gradient-based methods, like Stochastic Gradient Descent (SGD), together with the backpropagation through time algorithm for calculating the network's gradients.
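The gate equations (3)-(7) translate directly into code. The following NumPy sketch performs a single LSTM step; the dictionary-based parameter layout and the shapes are our own illustrative choices.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(s_t, h_prev, c_prev, W, U, b):
    # W, U, b map gate names to parameters: "u" (input gate), "f" (forget
    # gate), "o" (output gate) and "c" (cell candidate).
    gamma_u = sigmoid(W["u"] @ s_t + U["u"] @ h_prev + b["u"])  # Eq. (3)
    gamma_f = sigmoid(W["f"] @ s_t + U["f"] @ h_prev + b["f"])  # Eq. (4)
    gamma_o = sigmoid(W["o"] @ s_t + U["o"] @ h_prev + b["o"])  # Eq. (5)
    c_t = gamma_u * np.tanh(W["c"] @ s_t + U["c"] @ h_prev + b["c"]) \
        + gamma_f * c_prev                                      # Eq. (6)
    h_t = gamma_o * np.tanh(c_t)                                # Eq. (7)
    return h_t, c_t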
3. RELATED WORK :

The DAVE system [1], built in a DARPA-funded project, used images from two cameras, together with the left and right steering commands of a human coach, to train a driving model. This demonstrated that end-to-end learning techniques can be applied to autonomous driving, and that intermediate features such as stop signs and lane markings need not be annotated or labeled for detection. DAVE was an early project in the field of autonomous driving; given the technology of the time, it relied in large part on wireless data exchange, since vehicles could not carry the computers and power sources required for the system, unlike the lightweight devices that exist today. The architecture of this model was a CNN made up of fully connected layers stemming from networks already in use.

The ALVINN system [2] is a 3-layer back-propagation network built by a group at CMU to complete the task of lane-following. It trains on images from a camera and a distance measure from a laser range finder to output the direction the vehicle should move. ALVINN's model uses a single-hidden-layer back-propagation network. We replicated a study by NVIDIA [3].


The NVIDIA system uses an end-to-end approach in which data is first collected under many different environmental conditions. The data is then augmented to teach the network to recover towards the center of the lane and to strengthen the system against different possible environments. The next step is to train the network on this data. The network architecture has a total of 9 layers, starting with convolutional layers followed by fully connected layers. This is the network that we have tried to replicate.

Recently, a paper by a couple of IEEE researchers introduced a very different neural network architecture, which also takes temporal information into consideration [4]. They achieved this in practice by combining standard vector-based long short-term memory (LSTM) and convolutional LSTM in various layers of the proposed deep network. Consecutive frames typically have a similar visual appearance, but microscopic per-pixel motion can be observed when optical flow is calculated. Traditional convolutions, as adopted by state-of-the-art image classification models, slide along both spatial dimensions of an image, meaning that they are essentially 2-D. Since these convolutions work on static images or multi-channel response maps, they are unable to capture temporal dynamics in video. The authors therefore adopted a spatiotemporal convolution (ST-Conv), which slides along both the spatial and temporal dimensions, so the convolution is applied in 3 dimensions as opposed to the traditional 2-D process. A similar paper also proposed this concept of incorporating temporal information within the model to learn steering [5]. In that paper, the authors quantitatively demonstrate that convolutional long short-term memory (C-LSTM) networks can significantly improve end-to-end learning performance in autonomous vehicle steering based on camera images. Inspired by the adequacy of CNNs for scene feature extraction and the efficiency of LSTM recurrent neural networks in handling long-range temporal dependencies, such a model estimates the steering angle from camera input while capturing the dynamic dependencies of the context.

4. DATA COLLECTION :

We used Udacity's self-driving car simulator for collecting data. This simulator is built in Unity and was used by Udacity for its Self-Driving Car Nanodegree program, but it has since been open-sourced [6]. It replicates in simulation what NVIDIA did [3]. We gather all our records from the simulator: using the keyboard to drive a car, we are able to give the car instructions to steer left or right and to speed up or slow down. Another critical aspect is that this simulator is used both for training and for testing the model. There are two modes: (i) training mode, and (ii) autonomous mode, shown in Fig. 1. The training mode is used to collect data, and the autonomous mode is used to evaluate the model. Additionally, there are two types of tracks in the simulator - the lake track and the jungle track. The lake track is quite small and easy to handle compared with the jungle track, as shown in Fig. 2 and Fig. 3. The simulator captures data while driving across the track, using the left and right keys to manipulate the steering angle and the up and down arrows to manage the speed.

Fig 1. Udacity Simulator


Fig 2. Udacity Simulator: Lake Track

Fig 4. Udacity Simulator: Left Image

Recording makes the simulator create a folder containing the images and a CSV file. There are three images for each frame in the folder, captured by the left, center and right cameras, and the CSV file holds four metrics in each row - steering angle, speed, throttle and brake - for each captured frame. Fig. 4, Fig. 5 and Fig. 6 show the left, center and right images for one frame.

Fig 5. Udacity Simulator: Center Image
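As an illustration, the driving log can be read as follows. The Udacity simulator writes driving_log.csv without a header row, so the column names below are assigned by us; treat this as a sketch rather than the exact loading code used in the project.

import pandas as pd

columns = ["center", "left", "right", "steering", "throttle", "brake", "speed"]
log = pd.read_csv("driving_log.csv", names=columns)

X = log[["center", "left", "right"]].values  # image paths for each frame
y = log["steering"].values                   # steering angle labels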

Fig 3. Udacity Simulator: Jungle Track

Fig 6. Udacity Simulator: Right Image

5. DATA PREPROCESSING :

The data we collect, i.e. the captured images, is preprocessed before the model is trained. During preprocessing, the images are cropped to remove the sky and the front of the car, converted from RGB to YUV, and resized to the input size used by the model. This is done because RGB is not the best mapping for visual perception: the YUV color space has more efficient coding and reduces bandwidth compared to what RGB can capture.
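A minimal sketch of this preprocessing is given below. We assume the 160x320 RGB frames produced by the Udacity simulator and the 66x200 YUV input size of the NVIDIA model [3]; the exact crop margins are illustrative.

import cv2
import numpy as np

def preprocess(image):
    # image: (160, 320, 3) RGB frame from the simulator
    image = image[60:-25, :, :]                     # crop away sky and car hood
    image = cv2.resize(image, (200, 66), interpolation=cv2.INTER_AREA)
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)  # RGB -> YUV
    return image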


After selecting the final set of frames, the data is augmented by adding artificial shifts and rotations to teach the network how to recover from a poor position or orientation. While augmenting, we randomly select the right, left or center image, randomly flip the image left/right, and adjust the steering angle accordingly. The steering angle is adjusted by +0.2 for the left image and -0.2 for the right image. Using left/right flipped images is useful for training the recovery-driving scenario. We also randomly translate the image horizontally or vertically with a corresponding steering angle adjustment; horizontal translation can be useful for handling difficult curve scenarios. The magnitudes of these changes are randomly chosen from a normal distribution with zero mean. We implemented this augmentation using a script from an open-source Udacity-related repository. The augmented images are added to the current set of images, and their corresponding steering angles are adjusted in line with the augmentation. The primary reason for the augmentation is to make the system more robust, and thus to learn as much of the environment as possible using diverse images with diverse adjustments. Examples of these augmentations are shown in Fig. 7 to Fig. 10.
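The following sketch shows the augmentation steps described above (camera selection with a +/-0.2 steering correction, random flipping and random translation). The 0.002 steering shift per translated pixel is an assumption borrowed from common behavioral-cloning implementations, not a value stated in this paper.

import random
import cv2
import numpy as np

def choose_camera(center, left, right, angle):
    pick = random.choice(["center", "left", "right"])
    if pick == "left":
        return left, angle + 0.2    # left camera: steer back to the right
    if pick == "right":
        return right, angle - 0.2   # right camera: steer back to the left
    return center, angle

def random_flip(image, angle):
    if random.random() < 0.5:
        return cv2.flip(image, 1), -angle  # mirror the image and the steering
    return image, angle

def random_translate(image, angle, range_x=100, range_y=10):
    tx = range_x * (np.random.rand() - 0.5)
    ty = range_y * (np.random.rand() - 0.5)
    angle += tx * 0.002  # adjust steering in proportion to horizontal shift
    m = np.float32([[1, 0, tx], [0, 1, ty]])
    h, w = image.shape[:2]
    return cv2.warpAffine(image, m, (w, h)), angle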

Fig 7. Brightness

Fig 8. Panned

Fig 9. Flipped

Fig 10. Zoom

6. DEEP LEARNING MODEL :

The deep learning model we developed is based on the research done by NVIDIA for their autonomous vehicle [3]. The model includes the following critical layers.

A. Convolutional layer
The convolutional layer applies a convolution filter to the input image and produces a 3D output of activation neurons. This layer helps find the various features of the image that are used for classification. The number of convolutional layers depends on the type of application of the network. Early convolutional layers in the network help find low-level features of the image that are simple, while later convolutional layers help detect high-level features of the image.

B. Max-pooling layer
The pooling/downsampling layer helps reduce the number of parameters/weights in the network and helps reduce training time without losing knowledge of any specific feature of the image. This layer creates a smaller image than the input image by downsampling it using pooling neurons. There are different types of pooling - max pooling, average pooling, L2-norm pooling, etc. - but max pooling is the one that is most commonly used.

C. Dense layer
A dense layer, or fully connected layer, is similar to a common neural network layer in that all the neurons in this layer are connected to each of the neurons of the previous layer. This layer is typically used as the last layer of the convolutional neural network.

The entire architecture is shown in Fig. 15. There are 5 convolutional layers with different numbers of filters and filter sizes, then a dropout layer to counter overfitting. Finally, 3 dense layers and the output layer were added.


The Adam optimizer with a fixed learning rate was used for parameter optimization. A batch size of 100 was selected, and 50-60 epochs were used. On a machine without a GPU, with 16 GB RAM and a Core i5 processor, training took about 6 hours. For performance evaluation during training, the mean squared error was used as the loss function to keep track of the performance of the model.
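A Keras sketch of the network and training configuration described above is shown below. The filter counts and sizes follow the NVIDIA architecture [3]; the normalization Lambda layer and the ELU activation are assumptions borrowed from common reimplementations rather than details stated in this paper.

from keras.models import Sequential
from keras.layers import Conv2D, Dense, Dropout, Flatten, Lambda

def build_model(drop_rate=0.5):
    model = Sequential([
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalize
        Conv2D(24, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(36, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(48, (5, 5), strides=(2, 2), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Conv2D(64, (3, 3), activation="elu"),
        Dropout(drop_rate),          # counter overfitting
        Flatten(),
        Dense(100, activation="elu"),
        Dense(50, activation="elu"),
        Dense(10, activation="elu"),
        Dense(1),                    # predicted steering angle
    ])
    model.compile(loss="mse", optimizer="adam")  # MSE loss, Adam optimizer
    return model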

7. SYSTEM ARCHITECTURE :

Fig. 16 shows the high-level architecture of the system. After data augmentation is performed on every input image, batches are created from the input images and fed to the CNN model for training. Once training is completed, the model is used for prediction: the model sends steering angle predictions to the Udacity simulator to drive the car in real time.
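A minimal sketch of this real-time prediction loop is shown below. It assumes the socketio telemetry protocol used by the open-source Udacity simulator (as in the well-known drive.py scripts, cf. [10], [11]); the event names, fields and fixed throttle are assumptions that may differ between simulator versions, and preprocess refers to the sketch in Section 5.

import base64
from io import BytesIO

import eventlet
import numpy as np
import socketio
from flask import Flask
from keras.models import load_model
from PIL import Image

sio = socketio.Server()
app = Flask(__name__)
model = load_model("model.h5")  # trained CNN from Section 6

@sio.on("telemetry")
def telemetry(sid, data):
    # Decode the center-camera frame streamed by the simulator.
    image = Image.open(BytesIO(base64.b64decode(data["image"])))
    frame = preprocess(np.asarray(image))  # crop/resize/YUV, see Section 5
    steering = float(model.predict(frame[None, ...], batch_size=1)[0][0])
    sio.emit("steer", data={"steering_angle": str(steering), "throttle": "0.2"})

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 4567)),
                         socketio.Middleware(sio, app))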

Fig 11. Convolutional neural network.

8. PERFORMANCE EVALUATION :

For real scenarios of driving on the road, the following metric is proposed in [3]. The metric is designated autonomy. It is computed by counting the number of interventions, multiplying by 6 seconds, dividing by the elapsed time of the simulated test, and subtracting the result from 1:

autonomy = (1 − (number of interventions · 6 s) / elapsed time) · 100%

For instance, a single intervention during a 48-second run would give 1 − 6/48 = 87.5% autonomy.
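Expressed as a small helper function (a direct transcription of the formula above):

def autonomy(num_interventions, elapsed_seconds):
    """Autonomy percentage from [3]: each intervention counts as 6 seconds
    of human-controlled driving."""
    return (1.0 - (num_interventions * 6.0) / elapsed_seconds) * 100.0

# e.g. one intervention in a 48-second run: autonomy(1, 48) == 87.5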

As a result of evaluating our system, we operated the simulator autonomously on both the jungle and lake tracks. On the jungle track, the vehicle drove off the road after 8 seconds of driving; by the above measure of autonomy, this would mean that the car was 87.5% autonomous. We retrained the model with more data, and the vehicle then never left the path, making it 100% autonomous. The complete driving behavior still does not look realistic, however: the car appears to weave back and forth between the sides of the lane.

During the first run on the lake track, the vehicle does not leave the track, but it hugs the left side of the lane, which is the direction towards which the curve in the track bends. The vehicle drives on the left of the lane and, when it moves close to the left lane marking, drives back towards the middle of the lane, to approximately halfway between the center of the lane and the left lane marking, and then drives back to the left of the lane. This behavior repeats. After the model was retrained, the driving was visibly much smoother: the vehicle no longer hugs the left lane, but it appears to drive on one side of the lane and then move to the other, and this behavior also repeats. In terms of autonomy, both models appear to be 100% autonomous. The second time we trained the model, we saw that the vehicle could drive autonomously, but the behavior was not natural; more care must be taken when refining the system.

Fig 12. NVIDIA Model in the Simulation of the Self-Driving Car.



Fig 13. Fitting the data in the self-driving car simulation

Fig 14. Training and validation data graph for the fitted model

9. FUTURE WORK:

We have many ideas for improving the self-driving car's performance, not all of which were implemented due to time constraints. The first idea would be to add a speed feature to the CNN so that, when the simulator is in autonomous mode, it also uses speed predictions, making the movement appear more realistic. The need for this element came from observing simulator driving in autonomous mode: after training the car and watching it drive, it accelerates to maximum speed and then automatically slows down to minimum speed.

Fig 15. Deep Network Architecture

Another possible improvement would be to consider each of the cameras separately and build a CNN model for each stream of images, so that a distinct steering command comes from the left, center and right model. Averaging the three of these would then give a more accurate prediction.

We would expect that, the majority of the time, this ensemble would make accurate predictions, but if one model predicts a steering angle that is very unlike the other two, it would skew the steering angle in an unexpected direction.


Fig. 17 gives a graphical representation of this idea, and a code sketch follows the figure.

Fig 17. Three-Model Prediction
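A sketch of the proposed three-model ensemble is given below; the model file names and the simple mean are our illustration of the idea, not an implemented component of this project.

import numpy as np
from keras.models import load_model

models = {cam: load_model("model_%s.h5" % cam)
          for cam in ("left", "center", "right")}

def ensemble_steering(frames):
    # frames maps camera name -> preprocessed image of shape (66, 200, 3)
    predictions = [float(models[cam].predict(frames[cam][None, ...])[0][0])
                   for cam in ("left", "center", "right")]
    return float(np.mean(predictions))  # one outlier can still skew the mean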

Fig 16. High-level system architecture

Another idea to improve the accuracy of steering is to implement the deep steering architecture proposed in [4], combining the standard vector-based LSTM with convolutional LSTM and thereby allowing the self-driving car model to use both spatial and temporal information when extracting features; a sketch of this idea appears at the end of this section. Another opportunity to gauge the strength of the system is to train it on one of the predefined jungle and lake tracks and see whether the trained model will be able to drive the car on the other track. If it can drive relatively well, it will also be interesting to observe the difference between how well it drives on the trained track versus the track the model was not trained on.

In addition, we would like to add training data where the car recovers after getting off the road. In the augmentation phase of this process, the system learns a way to overcome some minor incidents, but never to recover from a large deviation from the road: if the autonomous car drives completely off the road, it will not be able to recover. We want to add training data where the car starts off the road and makes its way back. This will require starting the data recording when the car is off the road and stopping it once the car is driving close to the road again, because we do not want the model to learn to drive the car off the road; we want it to learn to drive back onto the road.
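As a sketch of the temporal extension mentioned above, a TimeDistributed CNN feature extractor followed by an LSTM over a short frame sequence is shown below, in the spirit of [4] and [5]; the layer sizes and the sequence length are assumptions, not a tested design.

from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten, Lambda, LSTM, TimeDistributed

SEQ_LEN = 5  # number of consecutive frames fed to the network (assumed)

model = Sequential([
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=(SEQ_LEN, 66, 200, 3)),
    TimeDistributed(Conv2D(24, (5, 5), strides=(2, 2), activation="elu")),
    TimeDistributed(Conv2D(36, (5, 5), strides=(2, 2), activation="elu")),
    TimeDistributed(Conv2D(48, (5, 5), strides=(2, 2), activation="elu")),
    TimeDistributed(Flatten()),
    LSTM(64),                  # temporal aggregation across the sequence
    Dense(10, activation="elu"),
    Dense(1),                  # steering angle
])
model.compile(loss="mse", optimizer="adam")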

10. CONCLUSIONS :

In this project, we were able to successfully predict steering angles using a convolutional neural network, and were able to understand the inner workings of convolutional neural networks and the ways in which they can be tuned. The CNN is able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning and control. A small amount of training data, from less than a hundred hours of driving, was enough to train the car to operate in diverse conditions, on highways and local and residential roads, in sunny, cloudy and rainy weather. An interesting caveat is that the system was only able to drive successfully on the roads it was trained on.


Autonomous systems for vehicles that do not rely on the Udacity simulator require much more robustness, because they need to take into consideration unpaved roads and a large number of obstacles such as pedestrians and stop signs. However, the work outlined in our paper was successfully completed.

11. EXPERIMENTAL SETUP :

The machine configuration for our experiments was as follows:

HARDWARE:
● RAM - 16 GB
● OS - Windows 10
● Hard disk size - 1 TB

SOFTWARE:
● Python
● Unity 3D
● Keras (TensorFlow backend)
● Anaconda
● OpenCV

12. REFERENCES :

[1] LeCun, Y., et al. "DAVE: Autonomous off-road vehicle control using end-to-end learning." Technical Report DARPA-IPTO Final Report, Courant Institute/CBLL, http://www.cs.nyu.edu/yann/research/dave/index.html, 2004.
[2] Pomerleau, Dean A. "ALVINN: An autonomous land vehicle in a neural network." Advances in Neural Information Processing Systems, 1989.
[3] Bojarski, Mariusz, et al. "End to end learning for self-driving cars." arXiv preprint arXiv:1604.07316 (2016).
[4] Chi, Lu, and Yadong Mu. "Deep Steering: Learning End-to-End Driving Model from Spatial and Temporal Visual Cues." arXiv preprint arXiv:1708.03798 (2017).
[5] Eraqi, Hesham M., Mohamed N. Moustafa, and Jens Honer. "End-to-End Deep Learning for Steering Autonomous Vehicles Considering Temporal Dependencies." arXiv preprint arXiv:1710.03804 (2017).
[6] https://github.com/udacity/self-driving-car-sim
[7] Wu, Zuxuan, Ting Yao, Yanwei Fu, and Yu-Gang Jiang. "Deep learning for video classification and captioning."
[8] https://keras.io/backend
[9] https://github.com/llSourcell/How_to_simulate_a_self_driving_car
[10] https://github.com/naokishibuya/car-behavioral-cloning
[11] https://github.com/jeremy-shannon/CarND-Behavioral-Cloning-Project
[12] Zeiler, Matthew D., and Rob Fergus. "Visualizing and Understanding Convolutional Networks."
[13] Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. "Dropout: A Simple Way to Prevent Neural Networks from Overfitting."
