
Invited Talk
in
Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision

Invited Talk #6 - Generating and Editing Images Using StyleGAN and CLIP (Or Patashnik)

Abstract: Recently, there has been increased interest in leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models. Specifically, combining the semantic power of CLIP with the generative power of StyleGAN has led to novel text-driven methods with unprecedented generative performance.
In this talk, I will start by presenting StyleCLIP. I will show three approaches for pairing CLIP with StyleGAN to provide endless expressive power for image editing. Then I will present our recent follow-up work, StyleGAN-NADA, in which CLIP facilitates shifting a trained StyleGAN to new domains without collecting even a single image from those domains.
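One of the approaches described in the StyleCLIP paper is latent optimization: given a source latent code, optimize it so that the generated image matches a text prompt under a CLIP similarity loss, while a regularizer keeps the result close to the original. The sketch below illustrates only that optimization pattern with toy stand-ins; the linear "generator", the fixed "text embedding", the cosine-style loss, and all names (`generate`, `clip_loss`, `objective`, `w_s`, `lam`) are hypothetical placeholders, not the actual StyleGAN or CLIP models or the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))  # toy linear "generator": image = A @ w (hypothetical)


def generate(w):
    # Stands in for a pretrained StyleGAN generator G(w).
    return A @ w


t = rng.normal(size=8)  # stands in for CLIP's embedding of the text prompt


def clip_loss(img):
    # Stands in for 1 - cosine similarity between CLIP(image) and CLIP(text).
    return 1.0 - img @ t / (np.linalg.norm(img) * np.linalg.norm(t))


w_s = rng.normal(size=4)  # source latent code to edit
lam = 0.1                 # weight keeping the edit close to the source


def objective(w):
    # CLIP-guidance term plus an L2 locality regularizer on the latent code.
    return clip_loss(generate(w)) + lam * np.sum((w - w_s) ** 2)


# Plain finite-difference gradient descent on the latent code w.
w = w_s.copy()
eps, lr = 1e-4, 0.05
for _ in range(200):
    grad = np.array([
        (objective(w + eps * np.eye(4)[i]) - objective(w - eps * np.eye(4)[i])) / (2 * eps)
        for i in range(4)
    ])
    w -= lr * grad
```

In the real method, the generator and CLIP encoders are differentiable networks, so the latent code is updated by backpropagation rather than finite differences; the structure of the objective is the same.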

Bio: Or Patashnik is a graduate student in the School of Computer Science at Tel Aviv University, under the supervision of Daniel Cohen-Or. Her research focuses on image generation tasks such as image-to-image translation and image editing.