Fine-tuning Diffusion Models with Limited Data
Taehong Moon · Moonseok Choi · Gayoung Lee · Jung-Woo Ha · Juho Lee
Event URL: https://openreview.net/forum?id=0J6afk9DqrR

Diffusion models have recently shown remarkable progress, demonstrating state-of-the-art image generation quality. Like other high-fidelity generative models, diffusion models require large amounts of data and computing time for stable training, which hinders their application in limited-data settings. To overcome this issue, one can take a diffusion model pre-trained on a large-scale dataset and fine-tune it on a target dataset. Unfortunately, as we show empirically, this easily results in overfitting. In this paper, we propose a fine-tuning algorithm for diffusion models that trains efficiently and robustly on limited data. We first show that fine-tuning only a small subset of the pre-trained parameters can learn the target dataset with much less overfitting. We then introduce a lightweight adapter module that can be attached to the pre-trained model with minimal overhead, and show that fine-tuning with our adapter module significantly improves image generation quality. We demonstrate the effectiveness of our method on various real-world image datasets.
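The abstract does not give implementation details, but the general recipe it describes (freeze the pre-trained weights, then train a small residual bottleneck adapter attached to the network) can be sketched in PyTorch. The names Adapter, AdaptedLinear, and attach_adapters, the bottleneck width, and the choice to adapt the linear layers of the network are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck adapter with a residual connection.
    Zero-initialising the up-projection makes the module an identity
    map at the start of fine-tuning, preserving pre-trained behaviour.
    (Illustrative sketch, not the paper's exact architecture.)"""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedLinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable adapter."""
    def __init__(self, pretrained: nn.Linear, bottleneck: int = 16):
        super().__init__()
        self.base = pretrained
        self.adapter = Adapter(pretrained.out_features, bottleneck)

    def forward(self, x):
        return self.adapter(self.base(x))

def attach_adapters(model: nn.Module, bottleneck: int = 16) -> nn.Module:
    # Freeze every pre-trained parameter, then wrap each linear layer
    # (e.g. attention projections in a diffusion U-Net) with an adapter;
    # only the adapter parameters remain trainable.
    for p in model.parameters():
        p.requires_grad = False
    for name, child in list(model.named_children()):
        if isinstance(child, nn.Linear):
            setattr(model, name, AdaptedLinear(child, bottleneck))
        else:
            attach_adapters(child, bottleneck)
    return model

During fine-tuning, the optimizer would then be given only the trainable parameters, e.g. torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4), so the pre-trained weights stay fixed while the adapters learn the target dataset.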

Author Information

Taehong Moon (Korea Advanced Institute of Science and Technology)
Moonseok Choi (Korea Advanced Institute of Science and Technology)
Gayoung Lee (NAVER)
Jung-Woo Ha (NAVER CLOVA AI Lab)

- Head, AI Innovation, NAVER Cloud
- Research Fellow, NAVER AI Lab
- Datasets and Benchmarks Co-Chair, NeurIPS 2023
- Socials Co-Chair, ICML 2023
- Socials Co-Chair, NeurIPS 2022
- BS, Seoul National University
- PhD, Seoul National University

Juho Lee (KAIST, AITRICS)
