

Poster

Human Parsing Based Texture Transfer from Single Image to 3D Human via Cross-View Consistency

Fang Zhao · Shengcai Liao · Kaihao Zhang · Ling Shao

Poster Session 1 #397

Abstract:

This paper proposes a human parsing based texture transfer model via cross-view consistency learning to generate the texture of a 3D human body from a single image. We use the semantic parsing of the human body as input, which provides both shape and pose information, to reduce the appearance variation of human images and preserve the spatial distribution of semantic parts. Meanwhile, to improve texture prediction for invisible parts, we explicitly enforce consistency across different views of the same subject by exchanging the textures predicted from the two views to render images during training. A perceptual loss and total variation regularization are optimized to maximize the similarity between rendered and input images, so no extra 3D texture supervision is required. Experimental results on pedestrian images and fashion photos demonstrate that our method produces higher-quality textures with more convincing details than other texture generation methods.
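The cross-view training scheme described above can be illustrated with a minimal PyTorch sketch. The `TextureTransfer` module, the `render` stand-in, the `feat_net` feature extractor, and the loss weight are all hypothetical placeholders rather than the authors' implementation; only the texture exchange between two views and the perceptual plus total variation objectives follow the abstract.

```python
# Minimal sketch of cross-view consistent texture transfer training.
# All module and function names below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def total_variation(img):
    """Total variation regularization on a batch of images (B, C, H, W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw


class TextureTransfer(nn.Module):
    """Hypothetical texture predictor: human parsing map -> texture map."""
    def __init__(self, num_parts=20, tex_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_parts, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, tex_channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, parsing):
        return self.net(parsing)


def render(texture, view_params):
    """Stand-in for a differentiable renderer that maps a texture plus the
    camera/pose parameters of one view to an image; a real implementation
    would rasterize a textured body mesh."""
    return texture  # placeholder so the sketch runs end to end


def perceptual_loss(pred, target, feat_net):
    """L1 distance in the feature space of a frozen feature network."""
    return F.l1_loss(feat_net(pred), feat_net(target))


model = TextureTransfer()
feat_net = nn.Conv2d(3, 16, 3, padding=1)   # stand-in for a pretrained VGG
for p in feat_net.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Two views of the same subject: parsing maps, images, and view parameters.
parsing_a, parsing_b = torch.rand(1, 20, 128, 128), torch.rand(1, 20, 128, 128)
image_a, image_b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
view_a, view_b = None, None  # camera/pose parameters in a real setup

tex_a = model(parsing_a)
tex_b = model(parsing_b)

# Exchange the predicted textures across views before rendering, so the
# texture predicted from view A must also explain the image seen from view B.
render_b_from_a = render(tex_a, view_b)
render_a_from_b = render(tex_b, view_a)

loss = (perceptual_loss(render_b_from_a, image_b, feat_net)
        + perceptual_loss(render_a_from_b, image_a, feat_net)
        + 1e-4 * (total_variation(tex_a) + total_variation(tex_b)))

optimizer.zero_grad()
loss.backward()
optimizer.step()
```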
