Procedural Image Programs for Representation Learning
Manel Baradad · Richard Chen · Jonas Wulff · Tongzhou Wang · Rogerio Feris · Antonio Torralba · Phillip Isola

Wed Nov 30 02:00 PM -- 04:00 PM (PST) @ Hall J #513

Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias. Existing work focuses on a handful of curated generative processes, each of which requires expert knowledge to design, making this approach hard to scale up. To overcome this, we propose training with a large dataset of twenty-one thousand programs, each one generating a diverse set of synthetic images. These programs are short code snippets, which are easy to modify and fast to execute using OpenGL. The proposed dataset can be used for both supervised and unsupervised representation learning, and reduces the gap between pre-training with real and procedurally generated images by 38%.
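To give a flavor of the idea, the following is a minimal, illustrative sketch of a seed-driven procedural image program. It is an assumption for illustration only: the paper's actual programs are short OpenGL snippets, whereas this NumPy version simply shows how a few random parameters can define a cheap, deterministic generator of diverse synthetic images.

```python
import numpy as np

def procedural_image(seed, size=64):
    """Generate a deterministic synthetic RGB image from a short
    procedural "program" defined by a random seed.

    Illustrative sketch only: the paper's dataset uses OpenGL code
    snippets; here a few random sinusoid parameters stand in for
    one such program."""
    rng = np.random.default_rng(seed)
    # Random per-channel frequencies and phases define one "program".
    freq = rng.uniform(0.5, 8.0, size=(3, 2))
    phase = rng.uniform(0.0, 2 * np.pi, size=3)
    # Normalized pixel coordinates in [0, 1).
    ys, xs = np.mgrid[0:size, 0:size] / size
    img = np.stack(
        [np.sin(2 * np.pi * (freq[c, 0] * xs + freq[c, 1] * ys) + phase[c])
         for c in range(3)],
        axis=-1,
    )
    # Map from [-1, 1] to 8-bit pixel values.
    return ((img + 1.0) * 127.5).astype(np.uint8)

img = procedural_image(seed=0)
print(img.shape)  # (64, 64, 3)
```

Each seed yields a different image family, so sampling many seeds mimics (in spirit) drawing training data from a large collection of distinct generative programs.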

Author Information

Manel Baradad (Massachusetts Institute of Technology)
Richard Chen (JPMorgan Chase)
Jonas Wulff (CSAIL MIT / Xyla)
Tongzhou Wang (MIT)
Rogerio Feris (MIT-IBM Watson AI Lab, IBM Research)
Antonio Torralba (MIT)
Phillip Isola (Massachusetts Institute of Technology)