Augmented Reality in Road Navigation
Supervised by Gil Shamai
Driving assistance systems for tasks such as navigation and safety improvement are an increasingly important part of modern vehicles. These systems rely more and more on precise knowledge of the vehicle's location. Localization estimates in current vehicles come predominantly from automotive GPS, but its coarse positioning and issues such as signal unavailability mean that it cannot be relied on for the accuracy this type of function requires. In this work, we propose a vision-based solution for globally localizing vehicles on the road using a single on-board camera, exploiting the availability of previously geo-tagged street-view images of the surrounding environment together with their associated local point clouds. Our approach integrates image-based localization into a tracking-and-mapping system in order to provide accurate, globally registered 6DoF tracking of the vehicle's position at all times. The method incrementally tracks the vehicle's position using mapping and tracking techniques, which inevitably drift over time, and uses the tagged images as a source of accurate global positioning to correct the accumulated drift whenever a good match is detected between the camera image and a tagged street-view image. The proposed approach is tested on the public KITTI dataset, which covers realistic driving situations, and achieves lane-level localization in global coordinates. As our results indicate, the solution provides a reliable, purely vision-based alternative to GPS. We also show how the localization produced by our method can be used to render accurate Augmented Reality overlays for a driver-assistance application.
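The drift-correction step described above can be illustrated with a minimal sketch. This is not the project's actual implementation; it only shows the core idea under a simplifying assumption: poses are 4x4 SE(3) matrices, and whenever the camera image matches a geo-tagged street-view image, that match yields the vehicle's true global pose, from which a rigid correction is computed and applied to the locally tracked trajectory. The function names (`correction_transform`, `apply_correction`) are hypothetical.

```python
import numpy as np

def correction_transform(T_local, T_global):
    """Rigid transform mapping the drifted, locally tracked pose onto the
    globally registered pose recovered from a matched street-view image.

    Both inputs are 4x4 homogeneous SE(3) matrices.
    """
    return T_global @ np.linalg.inv(T_local)

def apply_correction(trajectory, T_corr):
    """Re-register a locally tracked trajectory into global coordinates
    by left-multiplying every pose with the correction transform."""
    return [T_corr @ T for T in trajectory]

# Example: the tracker has drifted 0.5 m along x and 0.2 m along y by the
# time a street-view match is found; the correction removes that offset.
T_local = np.eye(4)
T_local[:3, 3] = [1.0, 0.0, 0.0]     # drifted estimate at the match frame
T_global = np.eye(4)
T_global[:3, 3] = [1.5, 0.2, 0.0]    # pose recovered from the tagged image

T_corr = correction_transform(T_local, T_global)
corrected = apply_correction([T_local], T_corr)
```

In the full system the correction would be distributed smoothly over past keyframes (as in pose-graph optimization) rather than applied as a single hard jump, but the sketch captures how a single good match anchors the drifting local map to global coordinates.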
Please see the project report.
Please see the final presentation.
Please see the KITTI Odometry Benchmark.