We present a weakly-supervised approach to segmenting proposed drivable paths in images, with the goal of enabling autonomous driving in complex urban environments.

Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation; we then use these images to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run time from a vehicle equipped with only a monocular camera, without relying on explicit modelling of road or lane markings.
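To give a flavour of the label-generation step, the sketch below shows one plausible way to rasterise a weak "proposed path" mask by projecting a recorded future trajectory into the current camera image. This is a minimal illustration rather than the authors' actual pipeline: the function name `project_path_label`, the vehicle coordinate convention (x forward, y lateral), the default `path_width`, and the use of OpenCV for rasterisation are all assumptions made for this example.

```python
import cv2
import numpy as np


def project_path_label(future_poses, K, T_cam_vehicle, image_shape, path_width=1.5):
    """Rasterise a weak 'proposed path' mask by projecting the recorded future
    vehicle trajectory into the current camera image.

    future_poses:  sequence of (4, 4) vehicle poses relative to the current frame
    K:             (3, 3) camera intrinsic matrix
    T_cam_vehicle: (4, 4) transform from the current vehicle frame to the camera frame
    path_width:    assumed swept width of the vehicle in metres (hypothetical value)
    """
    h, w = image_shape
    label = np.zeros((h, w), dtype=np.uint8)
    half = path_width / 2.0

    def to_pixels(p_vehicle, T_pose):
        """Project a homogeneous point attached to one future pose into pixels."""
        p_cam = T_cam_vehicle @ T_pose @ p_vehicle
        if p_cam[2] <= 0.1:                      # behind or too close to the camera
            return None
        uv = K @ p_cam[:3]
        return (uv[:2] / uv[2]).astype(np.int32)

    # Left/right edges of the swept path in the vehicle frame
    # (assumes x forward, y lateral).
    left = np.array([0.0, -half, 0.0, 1.0])
    right = np.array([0.0, half, 0.0, 1.0])

    # Fill one quadrilateral per pair of consecutive poses along the route.
    for T_a, T_b in zip(future_poses[:-1], future_poses[1:]):
        corners = [to_pixels(left, T_a), to_pixels(right, T_a),
                   to_pixels(right, T_b), to_pixels(left, T_b)]
        if any(c is None for c in corners):
            continue
        cv2.fillConvexPoly(label, np.array(corners, dtype=np.int32), 1)
    return label
```

Masks produced this way (together with obstacle labels) can then be paired with the corresponding camera images to train a standard semantic segmentation network, with no manual annotation in the loop.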

We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

Please see the video and paper below for more information.

  • [PDF] D. Barnes, W. Maddern, and I. Posner, “Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy,” ArXiv e-prints, 2016.
    [Bibtex]

    @article{BarnesArXivOctober2016,
      author        = {Barnes, D. and Maddern, W. and Posner, I.},
      title         = {{Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy}},
      journal       = {ArXiv e-prints},
      archiveprefix = {arXiv},
      eprint        = {1610.01238},
      primaryclass  = {cs.RO},
      keywords      = {Computer Science - Robotics, Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Learning},
      year          = {2016},
      month         = oct,
      adsurl        = {http://adsabs.harvard.edu/abs/2016arXiv161001238B},
      adsnote       = {Provided by the SAO/NASA Astrophysics Data System},
      pdf           = {https://arxiv.org/abs/1610.01238}
    }