Hacking Generative Models with Differentiable Network Bending
Giacomo Aldegheri · Alina Rogalska · Ahmed Youssef · Eugenia Iofinova
2023 Poster in Workshop: NeurIPS 2023 Workshop on Machine Learning for Creativity and Design
Abstract
In this work, we propose a method to 'hack' generative models, pushing their outputs away from the original training distribution towards a new objective. We inject a small-scale trainable module between the intermediate layers of the model and train it for a small number of iterations, keeping the rest of the network frozen. The resulting output images display an uncanny quality, arising from the tension between the original and new objectives, which can be exploited for artistic purposes.
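To make the idea concrete, below is a minimal sketch of the general recipe the abstract describes: freeze a pretrained generator, splice a small trainable module between two of its intermediate layers, and optimize only that module for a few iterations against a new objective. The `TinyGenerator` stand-in, the injection point, the near-identity initialization, and the `new_objective` loss are all illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained generator; the paper's actual
# model and injection point are not specified here.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.block2 = nn.Sequential(nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z):
        return self.block2(self.block1(z))

class BendingModule(nn.Module):
    """Small trainable module injected between intermediate layers."""
    def __init__(self, channels):
        super().__init__()
        self.transform = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.dirac_(self.transform.weight)  # start near the identity map
        nn.init.zeros_(self.transform.bias)

    def forward(self, x):
        return self.transform(x)

generator = TinyGenerator()
for p in generator.parameters():
    p.requires_grad_(False)  # keep the original network frozen

# Re-route the forward pass through the injected module.
bender = BendingModule(channels=32)
generator.block2 = nn.Sequential(bender, generator.block2)

optimizer = torch.optim.Adam(bender.parameters(), lr=1e-3)

def new_objective(images):
    # Placeholder loss standing in for the new target objective
    # (e.g., a CLIP-based similarity to a text prompt).
    return -images.mean()

# Train for a small number of iterations, as the abstract describes.
for step in range(100):
    z = torch.randn(4, 64, 8, 8)
    loss = new_objective(generator(z))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the injected module is differentiable and the rest of the network stays frozen, gradients from the new objective flow back through the frozen layers into the module alone, which keeps training cheap and preserves most of the original model's behavior.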