Using LiDAR as Camera for End-to-End Driving

Name
Jan Aare van Gent
Abstract
Research on autonomous driving has surged in popularity over the last decade. One of the more interesting avenues, known as end-to-end (E2E) driving, involves training a neural network to predict control signals directly from sensor inputs.
Usually, the main sensor used for E2E driving is a regular front-facing camera. Cameras are the preferred sensor since they perceive the road and traffic much the way humans do. Additionally, to make self-driving affordable, the sensor set should be relatively simple and cost-effective, a requirement that simple front-facing dashcams satisfy well.
However, Light Detection And Ranging (LiDAR) instruments provide accurate distance measurements and can be more robust to weather and lighting conditions than regular cameras.
In this thesis, the feasibility of using LiDAR as a camera for E2E driving is evaluated. Specifically, the sensor examined is the Ouster OS1-128 LiDAR, which can output its measurements as a 360-degree raster image with range, intensity and ambient channels.
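
To make the data format concrete: per scan, the sensor delivers three co-registered channels that can be stacked into an ordinary multi-channel image. The sketch below is illustrative only and is not code from the thesis; the array names, the 1024-column horizontal resolution (one of several the sensor supports) and the min-max normalization are all assumptions.

import numpy as np

# Hypothetical per-scan channel arrays, shape (128, 1024):
# 128 beams (OS1-128) x 1024 horizontal steps (one supported resolution).
H, W = 128, 1024
rng = np.random.randint(0, 2**16, (H, W)).astype(np.float32)  # range (placeholder data)
sig = np.random.randint(0, 2**16, (H, W)).astype(np.float32)  # intensity (placeholder data)
amb = np.random.randint(0, 2**16, (H, W)).astype(np.float32)  # ambient (placeholder data)

def normalize(ch):
    # Scale a channel to [0, 1]; a real pipeline may clip or log-scale instead.
    return (ch - ch.min()) / (ch.max() - ch.min() + 1e-8)

# Stack into an image-like tensor that a standard CNN can consume.
lidar_image = np.stack([normalize(rng), normalize(sig), normalize(amb)], axis=-1)
print(lidar_image.shape)  # (128, 1024, 3)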
A convolutional neural network (CNN) was trained to predict steering angles from LiDAR images. In addition, multiple experiments were performed, including variations of the data and of the network architecture. The trained models were evaluated both offline, with open-loop metrics, and online, with closed-loop metrics. The evaluation results confirm that treating LiDAR measurements as a raster image (instead of a point cloud) makes it possible to reuse well-tested CNN architectures for E2E driving. This means that the Ouster OS1-128 LiDAR can be used as a drop-in replacement for the camera in E2E driving solutions, with potential improvements due to range sensing, reduced sensitivity to weather and lighting conditions, and novel data augmentation opportunities.
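
The abstract does not specify the network layout. As a hedged illustration, a PilotNet-style steering regressor of the kind commonly used in camera-based E2E driving could look as follows in PyTorch; the class name SteeringCNN, the layer sizes and the 128x256 input crop are hypothetical choices for this sketch, not the architecture used in the thesis.

import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    # PilotNet-style regressor: 3-channel LiDAR image -> single steering angle.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),  # LazyLinear avoids hand-computing the flattened size
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),               # single output: steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SteeringCNN()
# A forward-facing crop of the 128 x 1024 panorama (crop size illustrative).
batch = torch.randn(8, 3, 128, 256)
steering = model(batch)  # shape (8, 1)
loss = nn.functional.mse_loss(steering, torch.zeros(8, 1))  # regression vs. recorded angles

In this framing, open-loop evaluation compares the predicted angles against the angles recorded from a human driver on held-out data, while closed-loop evaluation deploys the model to actually steer and measures driving performance.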
Graduation Thesis language
English
Graduation Thesis type
Master - Computer Science
Supervisor(s)
Tambet Matiisen
Defence year
2021