Poster in Workshop: Machine Learning for Autonomous Driving

NSS-VAEs: Generative Scene Decomposition for Visual Navigable Space Construction

Zheng Chen · Lantao Liu


Abstract:

Detecting navigable space is the first and a critical step for successful robot navigation. In this work, we treat visual navigable space segmentation as a scene decomposition problem and propose a new network, NSS-VAEs (Navigable Space Segmentation Variational AutoEncoders), a representation-learning-based framework that enables robots to learn navigable space segmentation in an unsupervised manner. Unlike prevalent segmentation techniques, which rely heavily on supervised learning and typically demand large numbers of pixel-level annotated images, the proposed framework leverages a generative model -- the Variational Auto-Encoder (VAE) -- to learn a probabilistic polyline representation that compactly outlines the boundary of the navigable space. Uniquely, our method also assesses the prediction uncertainty arising from the unstructuredness of the scene, which is important for robot navigation in unstructured environments. Through extensive experiments, we validate that the proposed method achieves remarkably high accuracy (>90%) without a single label. We also show that the predictions of NSS-VAEs can be further improved with a few labels, with results significantly outperforming the state-of-the-art (SOTA) fully supervised method.
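The abstract describes a VAE whose output is a probabilistic polyline, i.e., a distribution over boundary vertices rather than a dense segmentation mask. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the authors' implementation: the model, class name `NSSVAESketch`, the fixed vertex count `num_vertices`, and all layer sizes are assumptions made here for clarity. The decoder emits a mean and a log-variance per vertex, so the per-vertex variance can serve as the kind of uncertainty estimate the paper associates with unstructured scenes.

```python
import torch
import torch.nn as nn

class NSSVAESketch(nn.Module):
    """Hypothetical sketch (not the authors' code): a VAE mapping an image
    to a probabilistic boundary polyline with per-vertex uncertainty."""

    def __init__(self, latent_dim=32, num_vertices=16):
        super().__init__()
        self.num_vertices = num_vertices
        # Encoder: image -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Latent Gaussian parameters (mu, log-variance).
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        # Decoder: latent code -> (x, y) mean and log-variance per vertex.
        self.decoder = nn.Linear(latent_dim, num_vertices * 4)

    def forward(self, image):
        h = self.encoder(image)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(logvar)
        out = self.decoder(z).view(-1, self.num_vertices, 4)
        vertex_mean, vertex_logvar = out[..., :2], out[..., 2:]
        # vertex_logvar acts as a per-vertex uncertainty estimate,
        # expected to be high in unstructured parts of the scene.
        return vertex_mean, vertex_logvar, mu, logvar

# Usage: predict a boundary polyline with per-vertex uncertainty.
model = NSSVAESketch()
verts, verts_logvar, mu, logvar = model(torch.randn(1, 3, 128, 128))
```

A polyline head of this form is far more compact than a pixel-level mask (a handful of vertices versus hundreds of thousands of labels), which is consistent with the paper's claim that the representation can be learned without per-pixel annotation.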
