In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually required as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is conditioned on human poses and represents a clothed neural avatar that deforms non-rigidly according to the input poses. Meanwhile, it is meta-learned to effectively incorporate priors of diverse body shapes and cloth types, and thus can be fine-tuned much faster than models trained from scratch. We show qualitatively and quantitatively that our approach outperforms state-of-the-art approaches that require complete meshes as inputs, while our approach requires only depth frames and runs orders of magnitude faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
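To make the two core ideas in the abstract concrete, the sketch below illustrates (a) a hypernetwork that regresses the weights of a small SDF MLP from a pose vector, and (b) a Reptile-style meta-update that learns a shared initialization of that hypernetwork across subjects. This is a minimal illustration under stated assumptions, not the authors' implementation: the 72-D pose input (e.g. SMPL pose parameters), the sine activation in the SDF MLP, the Reptile update rule, the `HyperSDF` class, the `subject.sample_batch()` interface, and all hyperparameters are illustrative choices not confirmed by the abstract.

```python
# Minimal sketch (PyTorch). All names and hyperparameters are assumptions
# for illustration; the actual method may differ in architecture and losses.
import copy
import torch
import torch.nn as nn

class HyperSDF(nn.Module):
    """Hypernetwork: pose vector -> flattened weights of a tiny SDF MLP."""
    def __init__(self, pose_dim=72, hidden=256, sdf_hidden=64):
        super().__init__()
        # Target SDF MLP: 3 -> sdf_hidden -> 1 (weight and bias shapes).
        self.shapes = [(sdf_hidden, 3), (sdf_hidden,), (1, sdf_hidden), (1,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params))

    def forward(self, pose, pts):
        """Evaluate the pose-dependent SDF at 3D query points `pts` (N, 3)."""
        params, out, i = self.net(pose), pts, 0
        for w_shape, b_shape in zip(self.shapes[0::2], self.shapes[1::2]):
            w = params[i:i + w_shape[0] * w_shape[1]].view(w_shape)
            i += w.numel()
            b = params[i:i + b_shape[0]]
            i += b.numel()
            out = out @ w.t() + b
            if w_shape[0] != 1:           # hidden layer: apply nonlinearity
                out = torch.sin(out)      # SIREN-style activation (assumed)
        return out                        # (N, 1) signed distance per point

def reptile_meta_step(model, subjects, inner_steps=5, inner_lr=1e-4, meta_lr=0.1):
    """One Reptile-style outer update over a batch of subjects.

    Each subject is assumed to supply (pose, pts, sdf_gt) batches, e.g.
    derived from its monocular depth frames.
    """
    init = copy.deepcopy(model.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in init.items()}
    for subject in subjects:
        model.load_state_dict(init)       # fine-tune from the shared init
        opt = torch.optim.Adam(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            pose, pts, sdf_gt = subject.sample_batch()
            loss = (model(pose, pts).squeeze(-1) - sdf_gt).abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        for k, v in model.state_dict().items():
            deltas[k] += (v - init[k]) / len(subjects)
    # Move the shared initialization toward the per-subject solutions.
    model.load_state_dict({k: init[k] + meta_lr * deltas[k] for k in init})
```

At test time, the same inner loop is all that is needed: starting from the meta-learned initialization, a few gradient steps on a handful of depth-derived SDF samples specialize the hypernetwork to a new subject, which is what makes fine-tuning fast compared to training from scratch.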
Author Information
Shaofei Wang (Department of Computer Science, ETH Zurich)
Marko Mihajlovic (Swiss Federal Institute of Technology)
Qianli Ma
Andreas Geiger (MPI Tübingen)
Siyu Tang (ETH Zurich)
More from the Same Authors
- 2021: STEP: Segmenting and Tracking Every Pixel
  Mark Weber · Jun Xie · Maxwell Collins · Yukun Zhu · Paul Voigtlaender · Hartwig Adam · Bradley Green · Andreas Geiger · Bastian Leibe · Daniel Cremers · Aljosa Osep · Laura Leal-Taixé · Liang-Chieh Chen
- 2021 Poster: On the Frequency Bias of Generative Models
  Katja Schwarz · Yiyi Liao · Andreas Geiger
- 2021 Oral: Shape As Points: A Differentiable Poisson Solver
  Songyou Peng · Chiyu Jiang · Yiyi Liao · Michael Niemeyer · Marc Pollefeys · Andreas Geiger
- 2021 Poster: ATISS: Autoregressive Transformers for Indoor Scene Synthesis
  Despoina Paschalidou · Amlan Kar · Maria Shugrina · Karsten Kreis · Andreas Geiger · Sanja Fidler
- 2021 Poster: Shape As Points: A Differentiable Poisson Solver
  Songyou Peng · Chiyu Jiang · Yiyi Liao · Michael Niemeyer · Marc Pollefeys · Andreas Geiger
- 2021 Poster: Projected GANs Converge Faster
  Axel Sauer · Kashyap Chitta · Jens Müller · Andreas Geiger
- 2020 Poster: MATE: Plugging in Model Awareness to Task Embedding for Meta Learning
  Xiaohan Chen · Zhangyang Wang · Siyu Tang · Krikamol Muandet
- 2017 Poster: The Numerics of GANs
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger
- 2017 Spotlight: The Numerics of GANs
  Lars Mescheder · Sebastian Nowozin · Andreas Geiger