

Poster in the Workshop on Machine Learning for Creativity and Design

CICADA: Interface for Concept Sketches Using CLIP

Tomas Lawton


Abstract:

From Stable Diffusion to DALL·E 2, state-of-the-art models for high-resolution text-to-image generation seem to arrive nearly every week [1, 2], promising significant disruption in the creative industries. However, professional designers – from illustrators to architects to engineers – use low-fidelity representations such as sketches to refine their understanding of a problem rather than to develop completed solutions [3, 4]. Conceptual stages of design have been operationalised as the co-evolution of problem and solution “spaces” [5]. We introduce the Collaborative, Interactive, Context-Aware Design Agent (CICADA) [6], which uses CLIP-guided [7] synthesis-by-optimisation to support conceptual designing. Building on previous approaches [8], we optimise a set of Bézier curves to match a given text prompt. In CICADA, users sketch collaboratively with the system in real time. Users retain editorial control, while additions to both the optimiser and the interaction model enable the designer and CICADA to influence one another by engaging with the sketch. CICADA provides an instrument for exploring how text-to-image generative systems can assist designers, and we conducted a qualitative user study to examine its impact on designing.
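Concretely, this family of approaches renders a set of parametric curves with a differentiable rasteriser and nudges their control points so that the CLIP embedding of the rendered image moves towards the embedding of the text prompt. The sketch below is a minimal illustration of that loop, loosely following CLIPDraw-style optimisation; it assumes the pydiffvg differentiable rasteriser and OpenAI's clip package are available, and the prompt, stroke count, learning rate, and iteration budget are placeholder values rather than CICADA's actual configuration (none of CICADA's collaborative interaction model is shown).

import torch
import clip
import pydiffvg

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # fp32 weights so gradients flow cleanly through CLIP

# Encode the text prompt once; it stays fixed during optimisation.
text = clip.tokenize(["a sketch of a chair"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

canvas = 224       # CLIP ViT-B/32 input resolution
num_curves = 16    # illustrative stroke count
shapes, groups, point_params = [], [], []
for i in range(num_curves):
    # One open cubic Bezier segment per stroke, randomly initialised.
    points = (torch.rand(4, 2) * canvas).requires_grad_(True)
    point_params.append(points)
    shapes.append(pydiffvg.Path(num_control_points=torch.tensor([2]),
                                points=points,
                                stroke_width=torch.tensor(2.0),
                                is_closed=False))
    groups.append(pydiffvg.ShapeGroup(shape_ids=torch.tensor([i]),
                                      fill_color=None,
                                      stroke_color=torch.tensor([0.0, 0.0, 0.0, 1.0])))

render = pydiffvg.RenderFunction.apply
optimizer = torch.optim.Adam(point_params, lr=1.0)

# CLIP's expected input normalisation.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(250):
    optimizer.zero_grad()
    # Re-serialise the scene each step so gradients reach the control points.
    scene_args = pydiffvg.RenderFunction.serialize_scene(canvas, canvas, shapes, groups)
    img = render(canvas, canvas, 2, 2, step, None, *scene_args)  # H x W x RGBA
    # Composite the strokes over a white background and move to NCHW.
    img = img[:, :, 3:4] * img[:, :, :3] + (1 - img[:, :, 3:4])
    img = img.permute(2, 0, 1).unsqueeze(0).to(device)
    image_features = model.encode_image((img - mean) / std)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Maximise cosine similarity between the rendered sketch and the prompt.
    loss = -(image_features * text_features).sum()
    loss.backward()
    optimizer.step()

Because the loss is differentiable in the curve parameters, an interactive system can in principle interleave optimisation steps with manual edits to the same strokes, which is presumably what lets a designer and an agent like CICADA influence one another through the evolving sketch.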
