This project implements an end-to-end autonomous vehicle capable of completing routes and avoiding obstacles. The model processes input from a single RGB camera, high-level commands (HLC), current speed, and traffic light status to output steering, throttle, and brake values. Although the model was trained on a pre-existing dataset, additional data can be collected using the same methodology and format using the data collection scripts provided. Additionally, the provided evaluation script assesses trained models using CARLA leaderboard metrics.
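For orientation, here is a minimal sketch of that input/output contract in PyTorch. The class name, layer sizes, 224×224 resolution, and 4-way one-hot HLC encoding are all illustrative assumptions; the real ~345k-parameter architecture is the one defined in this repository.

```python
import torch
import torch.nn as nn

class TinyAVModel(nn.Module):
    """Illustrative sketch of the model's interface, not the real architecture."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(              # CNN over the RGB frame
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 32 image features + assumed 4-dim one-hot HLC + speed + traffic light
        self.head = nn.Sequential(
            nn.Linear(32 + 4 + 1 + 1, 64), nn.ReLU(),
            nn.Linear(64, 3),                       # steering, throttle, brake
        )

    def forward(self, image, hlc, speed, traffic_light):
        feats = self.backbone(image)
        return self.head(torch.cat([feats, hlc, speed, traffic_light], dim=1))

model = TinyAVModel()
controls = model(torch.rand(1, 3, 224, 224),  # RGB frame (assumed size)
                 torch.eye(4)[:1],            # one-hot HLC (assumed encoding)
                 torch.tensor([[0.3]]),       # current speed
                 torch.tensor([[1.0]]))       # traffic light status
print(controls.shape)  # torch.Size([1, 3])
```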
The diagram below was generated using the torchview library:
According to torchinfo, the model has 345,393 parameters in total.
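For reference, both tools can be invoked along these lines; the stand-in model and input below are placeholders, so substitute the real model and its expected inputs.

```python
import torch
import torch.nn as nn
from torchinfo import summary
from torchview import draw_graph

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten())  # placeholder model
x = torch.rand(1, 3, 64, 64)                                        # placeholder input

summary(model, input_data=x)                # per-layer shapes and total parameter count
graph = draw_graph(model, input_data=x)     # build the computation graph
graph.visual_graph.render("model_diagram")  # write the diagram to disk (needs graphviz)
```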
All training was conducted on Google Colab with PyTorch v2.3.1+cu121, using this dataset. The model was trained for 30 epochs with a batch size of 32, using the AdamW optimizer with a learning rate of 0.001 and a weight decay of 0.01. Cosine annealing was used to decay the learning rate, and AutoClip was used for gradient clipping. The training notebook is in the train directory.
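Those hyperparameters translate into roughly the following PyTorch setup. This is a minimal sketch with a placeholder model, dataset, and loss, and it assumes a 10th-percentile AutoClip threshold; the notebook in the train directory is the authoritative version.

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(10, 3)                              # placeholder model
data = [(torch.rand(32, 10), torch.rand(32, 3))] * 8  # placeholder batches
criterion = nn.MSELoss()                              # placeholder loss

# AdamW with lr=0.001 and weight decay 0.01, cosine-annealed over 30 epochs
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)

grad_norms = []  # AutoClip keeps a history of observed gradient norms

for epoch in range(30):
    for x, y in data:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        # Measure the total gradient norm (max_norm=inf clips nothing) ...
        norm = torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf"))
        grad_norms.append(norm.item())
        # ... then clip to a percentile of the history so far (AutoClip-style)
        torch.nn.utils.clip_grad_norm_(model.parameters(), np.percentile(grad_norms, 10))
        optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
```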
The following graph shows average loss during training for the current model:
Install CARLA Simulator 0.9.15 from here.
Set up an environment using conda + pip or venv + pip; Python 3.10.12 is required.
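For example, with conda (the environment name is arbitrary):

```
conda create -n carla-av python=3.10.12
conda activate carla-av
```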
To install the required packages, run:

```
pip install -r requirements.txt
```
To run the data collection script:
- Run `./CarlaUE4.sh` in your CARLA installation path if you're using Linux. If you're on Windows, run `CarlaUE4.exe`.
- Run `python data_collection.py`
The following command line arguments can be used:
| Argument | Description | Default Value |
| --- | --- | --- |
| `--town` | CARLA town to run data collection on | `Town01` |
| `--weather` | Weather condition for the world | `ClearNoon` |
| `--max_frames` | Maximum number of frames to run each episode for | 2000 |
| `--episodes` | Number of episodes to run data collection for | 200 |
| `--vehicles` | Number of vehicles present in the simulation | 50 |
| `--route_file` | Filepath for the route file | `routes/Town01_Train.txt` |
| `--noisy_agent` | Use the noisy agent instead of the default agent | Off |
| `--lane_invasion` | Enable the lane invasion sensor | Off |
| `--collect_steer` | Only collect data with high steering angles | Off |
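For example, a run that keeps most defaults but uses the noisy agent and steering-focused collection might look like this (assuming the on/off arguments are plain switches):

```
python data_collection.py --episodes 50 --noisy_agent --collect_steer
```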
To run the DAgger data collection script:
- Run `./CarlaUE4.sh` in your CARLA installation path if you're using Linux. If you're on Windows, run `CarlaUE4.exe`.
- Run `python data_collection_dagger.py`
The following command line arguments can be used:
| Argument | Description | Default Value |
| --- | --- | --- |
| `--town` | CARLA town to run data collection on | `Town01` |
| `--weather` | Weather condition for the world | `ClearNoon` |
| `--max_frames` | Maximum number of frames to run each episode for | 2000 |
| `--episodes` | Number of episodes to run data collection for | 5 |
| `--vehicles` | Number of vehicles present in the simulation | 50 |
| `--route_file` | Filepath for the route file | `routes/Town01_Train.txt` |
| `--lane_invasion` | Enable the lane invasion sensor | Off |
| `--model` | Filename of the trained model | `av_model.pt` |
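For example (the argument values here are arbitrary):

```
python data_collection_dagger.py --episodes 10 --lane_invasion --model av_model.pt
```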
To run the evaluation script:
- Run `./CarlaUE4.sh` in your CARLA installation path if you're using Linux. If you're on Windows, run `CarlaUE4.exe`.
- Run `python evaluation.py`
The following command line arguments can be used:
| Argument | Description | Default Value |
| --- | --- | --- |
| `--town` | CARLA town to run evaluation on | `Town01` |
| `--weather` | Weather condition for the world | `ClearNoon` |
| `--max_frames` | Maximum number of frames to run each episode for | 5000 |
| `--episodes` | Number of episodes to run the model for | 12 |
| `--vehicles` | Number of vehicles present in the simulation | 50 |
| `--route_file` | Filepath for the route file | `routes/Town02_All.txt` |
| `--model` | Filename of the trained model | `av_model.pt` |
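For example, a shorter evaluation run with lighter traffic:

```
python evaluation.py --episodes 6 --vehicles 30 --model av_model.pt
```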
When the script finishes running, the following metrics are saved to `evaluation.log`:
- Episode Completion Percentage
- Driving Score
- Route Completion
- Infraction Score
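For context, the CARLA leaderboard computes a route's driving score as route completion scaled by a multiplicative infraction penalty, so the metrics above are related roughly as in this sketch (the values are illustrative):

```python
def driving_score(route_completion: float, infraction_penalty: float) -> float:
    """CARLA-leaderboard-style per-route score: route completion (0-1)
    scaled by the multiplicative infraction penalty (0-1)."""
    return route_completion * infraction_penalty

print(driving_score(0.90, 0.80))  # 0.72
```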
The following papers were instrumental in guiding this project. Paper #1 was the most influential: its model architecture and methodologies directly inspired this work, and the incorporation of HLC, speed, and traffic light status during training was adapted directly from it.
- H. Haavaldsen, M. Aasbø, and F. Lindseth, 2019, NAIS 2019, Autonomous Vehicle Control: End-to-end Learning in Simulated Urban Environments
- I. Vasiljević, J. Musić, J. Mendes, and J. Lima, 2023, OL2A 2023, Adaptive Convolutional Neural Network for Predicting Steering Angle and Acceleration on Autonomous Driving Scenario
- Y. Wang, D. Liu, H. Jeon, Z. Chu, and E. T. Matson, 2019, ICAART 2019, End-to-end Learning Approach for Autonomous Driving: A Convolutional Neural Network Model
- NVIDIA Corporation, 2020, arXiv preprint, The NVIDIA PilotNet Experiments
- P. Viswanath, S. Nagori, M. Mody, M. Mathew, and P. Swami, 2018, ICEE 2018, End to End Learning based Self-Driving using JacintoNet
This project relies on the following repositories, with repo #1 being the most influential: many of its methodologies were used directly in this project, as its data collection and evaluation methods proved optimal for this task. The authors of repo #1 also provided the dataset used. A massive thank you to TheRoboticsClub, whose contributions were essential to the success of this project.