

Poster

One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection

Zhenyu Wang · Ya-Li Li · Hengshuang Zhao · Shengjin Wang

East Exhibit Hall A-C #4904
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The current trend in computer vision is to use a single universal model to address a wide variety of tasks. Achieving such a universal model inevitably requires incorporating multi-domain data for joint training so the model can learn across multiple problem scenarios. In point cloud based 3D object detection, however, such multi-domain joint training is highly challenging, because large domain gaps among point clouds from different datasets lead to severe domain interference. In this paper, we propose OneDet3D, a universal one-for-all model that addresses 3D detection across different domains, including diverse indoor and outdoor scenes, within the same framework and with only one set of parameters. We propose domain-aware partitioning in scatter and context, guided by a routing mechanism, to address the data-interference issue, and further incorporate the text modality for language-guided classification, which unifies the multi-dataset label spaces and mitigates the category-interference issue. A fully sparse structure and an anchor-free head further accommodate point clouds with significant scale disparities. Extensive experiments demonstrate the strong universal ability of OneDet3D: a single trained model addresses almost all 3D object detection tasks (Fig. 1). We will open-source the code for future research and applications.
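The abstract does not give implementation details, but a minimal sketch may help convey the two mechanisms it names: routing-based domain-aware partitioning and language-guided classification. The sketch below is an assumption of how such components could look in PyTorch, not the authors' actual code; all module names, tensor shapes, and the soft-routing and cosine-similarity choices are hypothetical.

```python
# Hedged sketch, not OneDet3D's implementation:
# (1) DomainRoutedNorm routes each sample to domain-specific normalization
#     branches so that different domains do not interfere;
# (2) LanguageGuidedClassifier scores box features against text embeddings
#     of category names, unifying label spaces across datasets.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainRoutedNorm(nn.Module):
    """Softly route each sample to one of several domain-specific LayerNorms."""

    def __init__(self, dim: int, num_domains: int):
        super().__init__()
        self.router = nn.Linear(dim, num_domains)  # predicts per-sample domain logits
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(num_domains))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) point or voxel features
        weights = F.softmax(self.router(x.mean(dim=1)), dim=-1)          # (B, D)
        branches = torch.stack([norm(x) for norm in self.norms], dim=1)  # (B, D, N, C)
        return (weights[:, :, None, None] * branches).sum(dim=1)        # (B, N, C)


class LanguageGuidedClassifier(nn.Module):
    """Score box features against frozen text embeddings of class names."""

    def __init__(self, feat_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, text_dim)  # map box features into text space

    def forward(self, box_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # box_feats: (B, K, feat_dim); text_embeds: (C, text_dim), one row per class name
        f = F.normalize(self.proj(box_feats), dim=-1)
        t = F.normalize(text_embeds, dim=-1)
        return f @ t.t()  # (B, K, C) cosine logits over the union label space
```

Because the classifier compares against text embeddings rather than a fixed softmax head, datasets with disjoint label sets can share one classification branch: each dataset simply supplies the embeddings of its own class names.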
