This work addresses the challenging problem of vision-based pose estimation in busy and distracting urban environments. By leveraging laser-generated 3D scene priors, we demonstrate how distracting objects of arbitrary types can be identified and masked to improve egomotion estimation. Results from data collected in central London during the Olympics show that our system copes with situations in which most of the image is obscured by dynamic objects.
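
As a rough illustration of the masking idea described above, the sketch below (Python with OpenCV, not the authors' implementation) restricts feature detection to a precomputed static-region mask and then estimates relative camera motion from the surviving features. The function name, the assumption that a binary mask is already available (in the paper it would be derived from a view rendered out of the laser-built 3D prior), and all parameter values are illustrative assumptions.

    # Minimal sketch, assuming a binary mask of static scene regions is given.
    import cv2
    import numpy as np

    def masked_egomotion(img_prev, img_curr, static_mask, K):
        """Estimate relative camera motion using only features in static regions.

        img_prev, img_curr : greyscale uint8 images
        static_mask        : uint8 mask, nonzero where the scene is believed static
        K                  : 3x3 camera intrinsic matrix
        """
        orb = cv2.ORB_create(2000)
        # detectAndCompute accepts a mask that restricts where features are found,
        # so distracting (masked-out) regions contribute no correspondences.
        kp1, des1 = orb.detectAndCompute(img_prev, static_mask)
        kp2, des2 = orb.detectAndCompute(img_curr, static_mask)
        if des1 is None or des2 is None:
            return None

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        if len(matches) < 8:
            return None

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC adds a further layer of robustness to any distractors the mask missed.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # rotation and unit-scale translation

The point of the sketch is only to show where the mask enters the pipeline: everything downstream of feature detection is a standard two-view egomotion estimate.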


  • [PDF] C. McManus, W. Churchill, A. Napier, B. Davis, and P. Newman, “Distraction Suppression for Vision-Based Pose Estimation at City Scales,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 2013.
    [Bibtex]

    @inproceedings{McManusICRA2013,
    Address = {Karlsruhe, Germany},
    Author = {Colin McManus and Winston Churchill and Ashley Napier and Ben Davis and Paul Newman},
    Booktitle = {Proc. IEEE International Conference on Robotics and Automation (ICRA)},
    Keywords = {Experience Based Navigation},
    Month = {May},
    Pdf = {http://www.robots.ox.ac.uk/~mobile/Papers/2013ICRA_cm.pdf},
    Title = {Distraction Suppression for Vision-Based Pose Estimation at City Scales},
    Year = {2013}}