This paper is concerned with large-scale localisation at city scale with monocular cameras. Our primary motivation lies in the development of autonomous road vehicles, an application domain in which low-cost sensing is particularly important. Here we present a method for localising against a textured three-dimensional prior mesh using a monocular camera. We first present a system for generating and texturing the prior using a LIDAR scanner and camera. We then describe how we can localise against that prior with a single camera, using an information-theoretic measure of image similarity. This process requires dealing with the distortions induced by a wide-angle camera. We present and justify an approach to this issue in which we distort the prior map into the image, rather than vice versa. Finally, we explain how the general-purpose computation functionality of a modern GPU is particularly well suited to our task, allowing us to run the system in real time. We present results showing centimetre-level localisation accuracy over six kilometres of driving through a city.
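
The abstract does not specify which information-theoretic similarity measure is used, so the following is only an illustrative sketch, not the paper's implementation: a normalised-information-distance style score computed from the joint intensity histogram of a live camera frame and a synthetic view rendered from the textured prior. The render_prior_at_pose helper is hypothetical, and the sketch assumes 8-bit greyscale images and plain NumPy.

    import numpy as np

    def normalised_information_distance(img_a, img_b, bins=32):
        """Information-theoretic similarity between two greyscale images.

        Returns (H(A,B) - I(A;B)) / H(A,B), where H(A,B) is the joint entropy
        and I(A;B) the mutual information, both estimated from a joint
        histogram. Values lie in [0, 1]; lower means more similar.
        """
        # Joint histogram of corresponding pixel intensities (8-bit assumed).
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                    bins=bins, range=[[0, 256], [0, 256]])
        p_ab = hist / hist.sum()          # joint distribution P(a, b)
        p_a = p_ab.sum(axis=1)            # marginal P(a)
        p_b = p_ab.sum(axis=0)            # marginal P(b)

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        h_ab = entropy(p_ab.ravel())                    # joint entropy H(A,B)
        mi = entropy(p_a) + entropy(p_b) - h_ab         # mutual information I(A;B)
        return (h_ab - mi) / h_ab

    # Hypothetical usage: score a candidate pose by comparing the live frame
    # against a view of the textured prior rendered at that pose.
    # live = load_greyscale('frame.png')
    # rendered = render_prior_at_pose(candidate_pose)   # hypothetical renderer
    # score = normalised_information_distance(live, rendered)

In a localisation loop of this kind, one would typically minimise such a score over candidate poses; how the paper actually optimises it (and how the GPU is used to evaluate it) is described in the full text rather than here.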

  • [PDF] G. Pascoe, W. Maddern, A. D. Stewart, and P. Newman, “FARLAP: Fast Robust Localisation using Appearance Priors,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015.
    [Bibtex]

    @inproceedings{PascoeICRA2015,
    Address = {Seattle, WA, USA},
    Author = {Pascoe, Geoffrey and Maddern, Will and Stewart, Alexander D. and Newman, Paul},
    Booktitle = {{P}roceedings of the {IEEE} {I}nternational {C}onference on {R}obotics and {A}utomation ({ICRA})},
    Month = {May},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/2015ICRA_pascoe.pdf},
    Title = {{FARLAP}: {F}ast {R}obust {L}ocalisation using {A}ppearance {P}riors},
    Year = {2015}}