Classification precision and recall have been widely adopted by roboticists as canonical metrics to quantify the performance of learning algorithms. This paper advocates that for robotics applications, which often involve mission-critical decision making, good performance according to these standard metrics is desirable but insufficient to appropriately characterise system performance. We introduce and motivate the importance of a classifier’s introspective capacity: the ability to mitigate potentially overconfident classifications by an appropriate assessment of how qualified the system is to make a judgement on the current test datum. We provide an intuition as to how this introspective capacity can be achieved and systematically investigate it in a selection of classification frameworks commonly used in robotics: support vector machines, LogitBoost classifiers and Gaussian Process classifiers (GPCs). Our experiments demonstrate that for common robotics tasks a framework such as a GPC exhibits a superior introspective capacity while maintaining classification performance commensurate with more popular, alternative approaches.
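The intuition behind introspective capacity can be made concrete with a small sketch (this is not the experimental setup from the paper): train a Gaussian Process classifier and an SVM on the same toy data, then query a point far from anything seen in training. The GPC's predictive probability falls back towards 0.5, signalling that it is poorly qualified to judge that datum, whereas a standard SVM can remain confidently committed to one class. The sketch below uses scikit-learn; the dataset, query points and hyperparameters are illustrative choices, not taken from the paper.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.svm import SVC

    # Toy 2-D training set: two well-separated clusters (class 0 and class 1).
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([-2.0, 0.0], 0.5, (20, 2)),
                   rng.normal([+2.0, 0.0], 0.5, (20, 2))])
    y = np.r_[np.zeros(20), np.ones(20)]

    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)
    svm = SVC(kernel="linear", probability=True).fit(X, y)

    near = np.array([[1.8, 0.2]])    # close to the class-1 cluster
    far  = np.array([[10.0, 5.0]])   # nothing like this was seen in training

    for name, clf in [("GPC", gpc), ("SVM", svm)]:
        p_near = clf.predict_proba(near)[0, 1]
        p_far  = clf.predict_proba(far)[0, 1]
        print(f"{name}: p(class 1 | near) = {p_near:.2f}, "
              f"p(class 1 | far) = {p_far:.2f}")

    # The GPC's latent function reverts to its prior far from the training
    # data, so p(class 1 | far) drifts back towards 0.5 -- an explicit
    # "I don't know". The linear SVM's decision value keeps growing with
    # distance from the boundary, so its Platt-scaled probability saturates
    # near 1.0 instead.

This kind of fallback to the prior away from the training data is the introspective behaviour the paper measures; the paper's experiments quantify it on robotics classification tasks rather than a toy problem.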

  • [PDF] H. Grimmett, R. Paul, R. Triebel, and I. Posner, “Knowing When We Don’t Know: Introspective Classification for Mission-Critical Decision Making,” in Proc. IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 2013.
    [Bibtex]

    @inproceedings{GrimmettICRA2013,
      Address   = {Karlsruhe, Germany},
      Author    = {Hugo Grimmett and Rohan Paul and Rudolph Triebel and Ingmar Posner},
      Booktitle = {Proc. IEEE International Conference on Robotics and Automation (ICRA)},
      Keywords  = {conference_posner},
      Month     = {May},
      Pdf       = {http://www.robots.ox.ac.uk/~mobile/Papers/2013ICRA_hg.pdf},
      Title     = {Knowing When We Don't Know: Introspective Classification for Mission-Critical Decision Making},
      Year      = {2013}}