Poster in Workshop: Generative AI for Education (GAIED): Advances, Opportunities, and Challenges

Paper 16: Diffusion Models in Dermatological Education: Flexible High Quality Image Generation for VR-based Clinical Simulations

Leon Pielage · Paul Schmidle · Bernhard Marschall · Benjamin Risse

Keywords: [ Medical Education ] [ Image Generation ] [ Upsampling ] [ Virtual Reality ] [ Diffusion Models ] [ Generative AI ] [ Simulation Training ] [ Guidance Strategies ] [ Deep Learning ]


Abstract:

Training medical students to accurately recognize malignant melanoma is a crucial competence and part of almost all medical curricula. Here we present a pipeline to generate realistic high-resolution imagery of nevus and melanoma skin lesions using diffusion models. To ensure the required quality and flexibility, we introduce three novel guidance strategies and an adapted upsampling approach, which enable the generation of user-specified lesion shapes and the integration of the lesions onto pre-defined skin textures. We evaluate our lesions qualitatively and quantitatively and integrate our results into a virtual reality (VR) simulation for clinical education. Moreover, we discuss several advantages of synthetic over real images, such as the ability to facilitate adjustable learning scenarios and the preservation of patient privacy, underlining the considerable potential of generative image synthesis for medical education.
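The abstract does not detail the three guidance strategies, so the following is only a minimal, hypothetical sketch of how a lesion can be composited onto a pre-defined skin texture during diffusion sampling, using a generic mask-blending (inpainting-style) step with a placeholder noise-prediction model `eps_model`. It is not the authors' method, merely an illustration of the general idea of shape- and background-constrained generation.

```python
# Hypothetical sketch: mask-guided DDPM sampling that composites a generated
# lesion onto a given skin texture at every denoising step. `eps_model` is a
# placeholder noise-prediction network; the paper's actual guidance strategies
# and upsampling stage are not reproduced here.

import torch

@torch.no_grad()
def sample_lesion_on_texture(eps_model, skin_texture, lesion_mask, betas):
    """skin_texture: (1, 3, H, W) background image scaled to [-1, 1]
       lesion_mask:  (1, 1, H, W) binary mask, 1 = lesion region
       betas:        (T,) noise schedule"""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    T = betas.shape[0]

    x = torch.randn_like(skin_texture)  # start from pure noise
    for t in reversed(range(T)):
        a_t, ab_t = alphas[t], alpha_bars[t]

        # Predict noise and take one ancestral DDPM step.
        eps = eps_model(x, torch.tensor([t], device=x.device))
        mean = (x - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise

        # Guidance: outside the lesion mask, keep a correspondingly noised copy
        # of the target skin texture, so the lesion blends onto that background.
        if t > 0:
            ab_prev = alpha_bars[t - 1]
            noised_bg = (torch.sqrt(ab_prev) * skin_texture
                         + torch.sqrt(1.0 - ab_prev) * torch.randn_like(x))
            x = lesion_mask * x + (1.0 - lesion_mask) * noised_bg
        else:
            x = lesion_mask * x + (1.0 - lesion_mask) * skin_texture
    return x
```

The mask constrains where the lesion appears (user-specified shape), while the repeated re-insertion of the noised background keeps the surrounding skin consistent with the pre-defined texture.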
