Poster
Touch and Go: Learning from Human-Collected Vision and Touch
Fengyu Yang · Chenyang Ma · Jiacheng Zhang · Jing Zhu · Wenzhen Yuan · Andrew Owens

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #1024

The ability to associate sight with touch is essential for understanding material properties, and for physically interacting with the world. Learning these correlations, however, has proven challenging, since existing datasets have not captured the full diversity of these modalities. To address this shortcoming, we propose a dataset for multimodal visuo-tactile learning called Touch and Go, in which human data collectors probe objects in natural environments with tactile sensors, while recording egocentric video. The objects and scenes in our dataset are significantly more diverse than prior efforts, making the data well-suited to tasks that involve understanding material properties and physical interactions in the wild. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
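Because every tactile reading in the dataset is paired with a co-recorded egocentric video frame, a natural way to approach task (1) is a two-tower contrastive objective that pulls matching vision/touch pairs together and pushes mismatched pairs apart. The sketch below illustrates this idea only; it is not the authors' released code, and the ResNet-18 backbones, 128-d embeddings, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class VisuoTactileContrastive(nn.Module):
    """Minimal two-tower contrastive model for paired vision/touch frames.
    Backbones, embedding size, and temperature are assumptions for illustration,
    not the paper's exact configuration."""
    def __init__(self, dim=128, temperature=0.07):
        super().__init__()
        self.vision_enc = resnet18(num_classes=dim)  # encodes RGB video frames
        self.touch_enc = resnet18(num_classes=dim)   # encodes tactile sensor images
        self.temperature = temperature

    def forward(self, frames, tactile):
        # frames, tactile: (B, 3, H, W) co-recorded pairs from the same probe
        v = F.normalize(self.vision_enc(frames), dim=1)
        t = F.normalize(self.touch_enc(tactile), dim=1)
        logits = v @ t.T / self.temperature  # (B, B) cross-modal similarities
        targets = torch.arange(v.size(0), device=v.device)
        # symmetric InfoNCE: matching (frame, touch) pairs lie on the diagonal
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.T, targets))

# Example training step on a batch of paired crops
model = VisuoTactileContrastive()
frames = torch.randn(8, 3, 224, 224)   # visual crops around the contact point
tactile = torch.randn(8, 3, 224, 224)  # corresponding tactile images
loss = model(frames, tactile)
loss.backward()
```

The learned visual and tactile encoders could then serve as pretrained features for downstream material-understanding tasks such as those in (2) and (3).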

Author Information

Fengyu Yang (University of Michigan - Ann Arbor)
Chenyang Ma (University of Michigan - Ann Arbor)
Jiacheng Zhang (Electrical Engineering and Computer Science, University of Michigan - Ann Arbor)
Jing Zhu (University of Michigan - Ann Arbor)
Wenzhen Yuan
Andrew Owens (University of Michigan)

More from the Same Authors

  • 2020 Workshop: Self-Supervised Learning -- Theory and Practice
    Pengtao Xie · Shanghang Zhang · Pulkit Agrawal · Ishan Misra · Cynthia Rudin · Abdelrahman Mohamed · Wenzhen Yuan · Barret Zoph · Laurens van der Maaten · Xingyi Yang · Eric Xing