Poster

Optical Diffusion Models for Image Generation

Ilker Oguz · Niyazi Dinc · Mustafa Yildirim · Junjie Ke · Innfarn Yoo · Qifei Wang · Feng Yang · Christophe Moser · Demetri Psaltis

East Exhibit Hall A-C #2707
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Diffusion models generate new samples by progressively removing noise from an initially provided random distribution. This inference procedure typically invokes a trained neural network many times to obtain the final output, incurring significant latency and energy consumption on digital electronic hardware such as GPUs. In this study, we demonstrate that the propagation of a light beam through a transparent medium can be programmed to implement a denoising diffusion model on image samples. This framework projects noisy image patterns through passive diffractive optical layers, which collectively transmit only the predicted noise term in the image. The transparent optical layers are trained with an online approach that backpropagates the error to an analytical model of the system; they are passive and remain the same across the denoising steps. This method therefore enables high-speed image generation with minimal power consumption, benefiting from the bandwidth and energy efficiency of optical information processing.
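The abstract describes a diffusion sampling loop in which a single fixed (step-independent) operator predicts the noise at every denoising step, mirroring the reuse of the same passive diffractive layers. The sketch below illustrates that loop with a standard DDPM ancestral sampler; the linear operator `W` standing in for the optical layers, the image size, and the noise schedule values are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained passive optical layers: a fixed
# linear operator reused unchanged at every denoising step (assumed form).
D = 16 * 16                     # flattened image size (assumed)
W = rng.normal(0, 1.0 / np.sqrt(D), (D, D))

def predict_noise(x):
    """Step-independent noise prediction (stand-in for the optical system)."""
    return W @ x

# Standard DDPM linear beta schedule (illustrative values).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

# Ancestral sampling: start from pure noise, denoise step by step,
# calling the SAME predictor at every step.
x = rng.normal(size=D)
for t in reversed(range(T)):
    eps = predict_noise(x)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps) / np.sqrt(alphas[t])
    x = mean + (np.sqrt(betas[t]) * rng.normal(size=D) if t > 0 else 0.0)

print(x.shape)
```

Because the predictor has no dependence on the step index `t`, it could in principle be realized by a static physical system traversed repeatedly, which is the efficiency argument the abstract makes.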
