Autonomous vehicles operating in places like parking lots can leverage a higher-level understanding of the objects around them. For instance, the knowledge that there is an upcoming zebra crossing should be taken into account in the vehicle's current motion plan and speed. Likewise, the labelling of parking spots can be crucial for other tasks such as the efficient assignment of parking spaces and local planning. Because many of these useful semantic cues are fixed in place, it makes sense to build up reusable semantic maps of the areas in which we want our autonomous cars to operate. Usually these maps are created manually, which offers guarantees that are currently not available when using unsupervised machine learning classifiers. We use an active learning framework for semantic mapping in mobile robotics and demonstrate it in the context of autonomous driving. Intuitively, an introspective classification framework, i.e. one which moderates its predictions by an estimate of how well it is placed to make a call in a particular situation, is particularly well suited to this task.
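As a rough illustration of the introspection idea, the sketch below wraps a toy ensemble and abstains whenever its members disagree too strongly. The class name, threshold, and toy models are illustrative assumptions for this sketch only, not the actual framework from the paper.

```python
import numpy as np

class IntrospectiveClassifier:
    """Toy sketch: moderate an ensemble's vote by member disagreement.

    `models` is any list of callables mapping an input to P(class = 1 | x).
    All names here are hypothetical, chosen for illustration.
    """

    def __init__(self, models, uncertainty_threshold=0.2):
        self.models = models
        self.uncertainty_threshold = uncertainty_threshold

    def predict(self, x):
        # Each ensemble member gives its own estimate of P(class = 1 | x).
        probs = np.array([m(x) for m in self.models])
        mean, spread = probs.mean(), probs.std()
        # High spread means the members disagree: the classifier is poorly
        # placed to make a call here, so it abstains (e.g. defers to a human
        # or requests a label, as in an active learning loop).
        if spread > self.uncertainty_threshold:
            return None, spread
        # Low spread: commit to the majority prediction.
        return int(mean > 0.5), spread


# Two toy "models" that agree closely on this input.
clf = IntrospectiveClassifier([lambda x: 0.9 - 0.01 * x,
                               lambda x: 0.88 + 0.01 * x])
label, spread = clf.predict(1.0)        # members agree -> committed label

# Two conflicting "models": the classifier abstains instead of guessing.
clf2 = IntrospectiveClassifier([lambda x: 0.95, lambda x: 0.05])
label2, spread2 = clf2.predict(1.0)
```

The design point is simply that the uncertainty estimate, not just the mean score, gates whether a prediction is emitted at all.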

  • H. Grimmett, M. Buerki, L. Paz, P. Piniés, P. Furgale, I. Posner, and P. Newman, “Integrating Metric and Semantic Maps for Vision-Only Automated Parking,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 2015.

@inproceedings{grimmett2015integrating,
    Address = {Seattle, WA, USA},
    Author = {Grimmett, Hugo and Buerki, Mathias and Paz, Lina and Pini{\'e}s, Pedro and Furgale, Paul and Posner, Ingmar and Newman, Paul},
    Booktitle = {{P}roceedings of the {IEEE} {I}nternational {C}onference on {R}obotics and {A}utomation ({ICRA})},
    Month = {May},
    Pdf = {},
    Title = {{I}ntegrating {M}etric and {S}emantic {M}aps for {V}ision-{O}nly {A}utomated {P}arking},
    Year = {2015}}