

Poster

Flow-based Image-to-Image Translation with Feature Disentanglement

Ruho Kondo · Keisuke Kawano · Satoshi Koide · Takuro Kutsuna

East Exhibition Hall B + C #126

Keywords: [ Deep Learning ] [ Generative Models ]


Abstract:

Learning non-deterministic dynamics and intrinsic factors from images obtained through physical experiments is at the intersection of machine learning and material science. Disentangling the origins of uncertainties involved in microstructure growth, for example, is of great interest because future states vary due to thermal fluctuations and other environmental factors. To this end, we propose a flow-based image-to-image model, called Flow U-Net with Squeeze modules (FUNS), that allows us to disentangle the features while retaining the ability to generate high-quality, diverse images from condition images. Our model successfully captures probabilistic phenomena by incorporating a U-Net-like architecture into the flow-based model. In addition, our model automatically separates the diversity of target images into condition-dependent and condition-independent parts. We demonstrate that, on microstructure growth and CelebA datasets, the quality and diversity of the generated images outperform those of existing variational generative models.
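To make the architectural idea concrete, here is a minimal sketch of a conditional normalizing flow in the spirit of what the abstract describes: an invertible squeeze operation interleaved with affine coupling layers whose parameters depend on features from a downsampling (U-Net-encoder-like) path over the condition image. All module names, layer sizes, and the coupling design are illustrative assumptions, not the paper's actual FUNS implementation.

```python
# Hypothetical sketch of a conditional flow with squeeze modules (PyTorch).
# The real FUNS architecture may differ; this only illustrates the pattern
# of conditioning an invertible flow on encoder features of another image.

import torch
import torch.nn as nn


def squeeze(x):
    """Space-to-depth: (B, C, H, W) -> (B, 4C, H/2, W/2); invertible."""
    b, c, h, w = x.shape
    x = x.view(b, c, h // 2, 2, w // 2, 2)
    x = x.permute(0, 1, 3, 5, 2, 4).contiguous()
    return x.view(b, 4 * c, h // 2, w // 2)


class AffineCoupling(nn.Module):
    """Affine coupling whose scale/shift also depend on a condition map."""

    def __init__(self, channels, cond_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),  # outputs (log_s, t)
        )

    def forward(self, x, cond):
        x_a, x_b = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x_a, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)  # bounded scales for numerical stability
        y_b = x_b * log_s.exp() + t
        log_det = log_s.flatten(1).sum(dim=1)
        return torch.cat([x_a, y_b], dim=1), log_det


class CondFlow(nn.Module):
    """One squeeze step followed by couplings conditioned on an encoded
    condition image (a stand-in for the U-Net-like conditioning path)."""

    def __init__(self, in_channels=3, cond_channels=16):
        super().__init__()
        # Strided conv halves resolution so the condition map matches
        # the squeezed latent spatially.
        self.cond_enc = nn.Sequential(
            nn.Conv2d(in_channels, cond_channels, 3, stride=2, padding=1),
            nn.ReLU(),
        )
        self.couplings = nn.ModuleList(
            [AffineCoupling(4 * in_channels, cond_channels) for _ in range(2)]
        )

    def forward(self, x, x_cond):
        z = squeeze(x)
        cond = self.cond_enc(x_cond)
        log_det = x.new_zeros(x.shape[0])
        for coupling in self.couplings:
            z = torch.flip(z, dims=[1])  # channel reversal: invertible mixing
            z, ld = coupling(z, cond)
            log_det = log_det + ld
        return z, log_det


if __name__ == "__main__":
    x = torch.randn(2, 3, 32, 32)       # target image
    x_cond = torch.randn(2, 3, 32, 32)  # condition image
    flow = CondFlow()
    z, log_det = flow(x, x_cond)
    # Exact conditional likelihood, up to a constant:
    # log p(x | x_cond) = log N(z; 0, I) + log_det
    nll = 0.5 * (z ** 2).flatten(1).sum(1) - log_det
    print(z.shape, nll.mean().item())
```

Because the flow is invertible, the same model supports exact likelihood training (as above) and diverse sampling by inverting the couplings from Gaussian noise given a condition image; in FUNS, per the abstract, the latent diversity further separates into condition-dependent and condition-independent parts, a mechanism this sketch does not reproduce.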
