Team Members:

Name Email Task
Tianji Li [email protected] Implement waypoint updater
Xiao Chen [email protected] Integrate the model into the ROS code / README
Bhavesh Parkhe [email protected] Debug and test code
Hanyu Wu [email protected] Label data and train the object detection model
Arnab Dutta [email protected] Implement waypoint updater and dbw_node / manage GitHub

Team Lead: Arnab Dutta

  • The training set (JPEG images) captured from the simulator can be found here.
  • The training set used for training the on-site detector can be found here.

This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.

Implementation

  • Code Structure: see the ros_structure diagram.
  • TODO Lists:
    • waypoint_updater: This node publishes waypoints from the car's current position out to some fixed distance ahead (a minimal sketch of this node follows this list).

    • dbw_node: the drive-by-wire node, which subscribes to /twist_cmd and uses various controllers to provide appropriate throttle, brake, and steering commands. These commands are then published to the following topics:

      • /vehicle/throttle_cmd
      • /vehicle/brake_cmd
      • /vehicle/steering_cmd
    • twist_controller: contains the Controller class. Its control method takes twist data as input and returns throttle, brake, and steering values.

    • tl_detector: The traffic light detection node

      • Use the vehicle's location and the (x, y) coordinates of the traffic lights to find the nearest visible traffic light ahead of the vehicle. The ground-truth light state can be used to test the other parts without the detector: in the get_light_state method, simply return light.state.

      • Use the camera image data to classify the color of the traffic light.

    • tl_classifier: Takes a BGR image as input and outputs the ID of the traffic light color (specified in styx_msgs/TrafficLight).
      We reused the Object Detection Lab pipeline and replaced its .pb file with our own pretrained model (the training process is described below): frozen_inference_graph_sim.pb for the simulator and frozen_inference_graph_site.pb for the on-site test (an inference sketch follows this list).

    • Object detection model training: We use a MobileNet SSD to detect the different traffic light signals. It is accurate and fast, taking only about 60 ms per frame on a laptop CPU (Intel(R) Core(TM) i5-8265U @ 1.60 GHz). We trained each model for 200k steps (batch size = 24) on Google Cloud Platform using an NVIDIA® Tesla® K80, starting from the pretrained COCO model. The mean average precision reaches over 0.95 at 0.50 IoU on the simulation task and over 0.88 at 0.50 IoU on the on-site task. We initially trained the on-site detector on the Bosch Small Traffic Lights Dataset, but it performed very poorly on the on-site images, so we switched to training directly on the (very small) on-site dataset.
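
Below is a minimal sketch of the waypoint_updater node described above. The topic names and message types (/current_pose, /base_waypoints, final_waypoints, styx_msgs/Lane) follow the Udacity project skeleton; the 50 Hz loop rate, the 200-waypoint look-ahead, and the brute-force nearest-waypoint search are illustrative choices, and a full implementation would also verify that the chosen waypoint lies ahead of the vehicle (omitted here for brevity).

#!/usr/bin/env python
# Minimal sketch of the waypoint_updater node (illustrative, not the full project code).
import rospy
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 200  # number of waypoints to publish ahead of the car (assumed value)


class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        self.pose = None
        self.base_waypoints = None
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        self.final_waypoints_pub = rospy.Publisher('final_waypoints', Lane, queue_size=1)
        self.loop()

    def loop(self):
        rate = rospy.Rate(50)
        while not rospy.is_shutdown():
            if self.pose and self.base_waypoints:
                self.publish_waypoints(self.closest_waypoint_idx())
            rate.sleep()

    def closest_waypoint_idx(self):
        # Brute-force nearest-waypoint search; a KD-tree would be faster,
        # and a heading check would ensure the waypoint is actually ahead.
        px = self.pose.pose.position.x
        py = self.pose.pose.position.y
        dists = [(wp.pose.pose.position.x - px) ** 2 + (wp.pose.pose.position.y - py) ** 2
                 for wp in self.base_waypoints.waypoints]
        return dists.index(min(dists))

    def publish_waypoints(self, closest_idx):
        # Publish a fixed-length slice of the base waypoints starting at the car.
        lane = Lane()
        lane.waypoints = self.base_waypoints.waypoints[closest_idx:closest_idx + LOOKAHEAD_WPS]
        self.final_waypoints_pub.publish(lane)

    def pose_cb(self, msg):
        self.pose = msg

    def waypoints_cb(self, msg):
        self.base_waypoints = msg


if __name__ == '__main__':
    WaypointUpdater()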

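Below is a minimal sketch of the tl_classifier inference step, assuming a frozen TensorFlow Object Detection API graph such as frozen_inference_graph_sim.pb and the TensorFlow 1.x API. The tensor names are the standard ones exported by the Object Detection API; the class-ID-to-color mapping and the 0.5 score threshold are assumptions for illustration and must match the label map used during training.

# Minimal sketch of tl_classifier (illustrative; class IDs are assumptions).
import numpy as np
import tensorflow as tf
import cv2
from styx_msgs.msg import TrafficLight


class TLClassifier(object):
    def __init__(self, graph_path='frozen_inference_graph_sim.pb'):
        # Load the frozen detection graph once at start-up.
        self.graph = tf.Graph()
        with self.graph.as_default():
            graph_def = tf.GraphDef()
            with tf.gfile.GFile(graph_path, 'rb') as f:
                graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
            self.image_tensor = self.graph.get_tensor_by_name('image_tensor:0')
            self.scores = self.graph.get_tensor_by_name('detection_scores:0')
            self.classes = self.graph.get_tensor_by_name('detection_classes:0')
        self.sess = tf.Session(graph=self.graph)
        # Assumed label map: 1 = green, 2 = red, 3 = yellow (must match training).
        self.label_map = {1: TrafficLight.GREEN, 2: TrafficLight.RED, 3: TrafficLight.YELLOW}

    def get_classification(self, bgr_image):
        # The detector expects RGB input, so convert the BGR camera image first.
        rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
        scores, classes = self.sess.run(
            [self.scores, self.classes],
            feed_dict={self.image_tensor: np.expand_dims(rgb, axis=0)})
        best = int(np.argmax(scores[0]))
        if scores[0][best] < 0.5:  # low confidence -> report unknown
            return TrafficLight.UNKNOWN
        return self.label_map.get(int(classes[0][best]), TrafficLight.UNKNOWN)
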
Results

  • Test video of the simulation: click the screenshot below.

  • A snippet of the on-site traffic light detection is shown here.

Please use one of the two installation options below, either the native or the Docker installation.

Native Installation

  • Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.

  • If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:

    • 2 CPU
    • 2 GB system memory
    • 25 GB of free hard drive space

    The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.

  • Follow these instructions to install ROS

  • Install the Dataspeed DBW package.

  • Download the Udacity Simulator.

Docker Installation

Install Docker

Build the docker container

docker build . -t capstone

Run the docker file

docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone

Port Forwarding

To set up port forwarding, please refer to the "uWebSocketIO Starter Guide" found in the classroom (see Extended Kalman Filter Project lesson).

Usage

  1. Clone the project repository
git clone https://github.com/udacity/CarND-Capstone.git
  2. Install Python dependencies
cd CarND-Capstone
pip install -r requirements.txt
  3. Make and run styx
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
  4. Run the simulator

Real world testing

  1. Download training bag that was recorded on the Udacity self-driving car.
  2. Unzip the file
unzip traffic_light_bag_file.zip
  1. Play the bag file
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
  1. Launch your project in site mode
cd CarND-Capstone/ros
roslaunch launch/site.launch
  1. Confirm that traffic light detection works on real life images

Other library/driver information

Outside of requirements.txt, the simulator grader and Carla use the following driver/library versions:

                Simulator    Carla
Nvidia driver   384.130      384.130
CUDA            8.0.61       8.0.61
cuDNN           6.0.21       6.0.21
TensorRT        N/A          N/A
OpenCV          3.2.0-dev    2.4.8
OpenMP          N/A          N/A

We are working on a fix to line up the OpenCV versions between the two.
