Poster

Multiview Human Body Reconstruction from Uncalibrated Cameras

Zhixuan Yu · Linguang Zhang · Yuanlu Xu · Chengcheng Tang · Luan Tran · Cem Keskin · Hyun Soo Park

Hall J (level 1) #630

Keywords: [ Fusion ] [ Dense keypoints ] [ Multiview ] [ 3D human body reconstruction ] [ Uncalibrated ]


Abstract:

We present a new method to reconstruct 3D human body pose and shape by fusing visual features from multiview images captured by uncalibrated cameras. Existing multiview approaches often use spatial camera calibration (intrinsic and extrinsic parameters) to geometrically align and fuse visual features. Despite remarkable performance, the requirement of camera calibration restricts their applicability to real-world scenarios, e.g., reconstruction from social videos with wide-baseline cameras. We address this challenge by leveraging the commonly observed human body as a semantic calibration target, which eliminates the requirement of camera calibration. Specifically, we map per-pixel image features to a canonical body surface coordinate system, agnostic to views and poses, using dense keypoints (correspondences). This feature mapping allows us to semantically, instead of geometrically, align and fuse visual features from multiview images. We learn a self-attention mechanism to reason about the confidence of visual features across and within views. From the fused visual features, a regressor is learned to predict the parameters of a body model. We demonstrate that our calibration-free multiview fusion method reliably reconstructs 3D body pose and shape, outperforming state-of-the-art single-view methods with post-hoc multiview fusion, particularly in the presence of non-trivial occlusion, and achieving accuracy comparable to multiview methods that require calibration.
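The core fusion idea in the abstract — scattering per-pixel features into a canonical body surface (UV) map via dense keypoints, then fusing the per-view maps with attention weights — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the grid resolution, the scalar confidence score `w_score`, and the random inputs are all illustrative assumptions standing in for learned components (e.g., DensePose-style correspondences and a learned attention head).

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scatter_to_canonical(feats, uv, grid=8):
    """Average per-pixel features into cells of a canonical UV map.

    feats: (N, C) per-pixel image features for one view.
    uv:    (N, 2) dense-keypoint body-surface coordinates in [0, 1).
    Returns the (grid*grid, C) canonical map and a validity mask for
    cells that received at least one pixel.
    """
    C = feats.shape[1]
    canon = np.zeros((grid * grid, C))
    count = np.zeros(grid * grid)
    cell = (np.clip(uv, 0.0, 1.0 - 1e-6) * grid).astype(int)
    idx = cell[:, 0] * grid + cell[:, 1]
    np.add.at(canon, idx, feats)   # accumulate features per cell
    np.add.at(count, idx, 1)
    valid = count > 0
    canon[valid] /= count[valid, None]
    return canon, valid

def fuse_views(canon_maps, valid_masks, w_score):
    """Attention-style fusion across views in the canonical frame.

    canon_maps:  (V, G, C) per-view canonical maps.
    valid_masks: (V, G) which cells each view observed.
    w_score:     (C,) hypothetical learned projection to a scalar score.
    """
    scores = canon_maps @ w_score                 # (V, G) confidence per cell
    scores = np.where(valid_masks, scores, -1e9)  # mask unobserved cells
    attn = softmax(scores, axis=0)                # normalize across views
    return (attn[..., None] * canon_maps).sum(axis=0)  # (G, C) fused map

# Toy example: 3 uncalibrated views, 500 pixels each, 32-dim features.
rng = np.random.default_rng(0)
V, N, C, G = 3, 500, 32, 8
feats = rng.standard_normal((V, N, C))
uvs = rng.random((V, N, 2))
w_score = rng.standard_normal(C) / np.sqrt(C)
maps, masks = zip(*(scatter_to_canonical(feats[v], uvs[v], G) for v in range(V)))
fused = fuse_views(np.stack(maps), np.stack(masks), w_score)
print(fused.shape)  # (64, 32)
```

Because the alignment happens in body-surface coordinates rather than in 3D space, no camera intrinsics or extrinsics appear anywhere in the fusion step; the fused canonical map would then feed a regressor that predicts body-model parameters.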
