

Poster in Workshop on Machine Learning for Creativity and Design

Datasets That Are Not: Evolving Novelty Through Sparsity and Iterated Learning

Yusong Wu · Kyle Kastner · Tim Cooijmans · Cheng-Zhi Anna Huang · Aaron Courville


Abstract:

Creative machines have long been a subject of interest in generative modeling research. One goal of machine creativity research is to build data-adaptive processes that develop new creative directions, which may inspire users or provide creative expansions of existing ideas. Several prior works propose data-driven deep learning models that generate "out-of-domain" or novel samples deviating from the dataset on which they are trained. In these works, however, model weights are optimized only on real datasets; model-generated outputs are never incorporated back into the training loop. In this work, we propose Datasets That Are Not, a procedure that expands the scope of a generative model by accumulating generated samples and iteratively training on this expanding dataset alongside the original training data. Specifically, we build on Digits That Are Not, a sparsity-based autoencoder, as the inner generative model, chosen for the variety and novelty of its outputs when trained on the standard MNIST dataset. Our results show that by learning on generated data, the model effectively reinforces its own hallucinations, directing generated outputs in new and unexpected directions *away* from the initial training data while retaining core semantics.
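The accumulate-and-retrain loop described in the abstract is simple to state in code. The sketch below is a minimal, hypothetical rendering in PyTorch: a toy top-k sparse autoencoder stands in for the Digits That Are Not model, and random sparse codes stand in for whatever sampling procedure the paper uses. The architecture, sampling, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the "Datasets That Are Not" loop: train on the
# current dataset, generate new samples, fold them back into the dataset,
# and repeat. The model below is a generic stand-in, not the authors'
# sparsity-based autoencoder.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, hidden=64, k=8):
        super().__init__()
        self.k = k
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)
        # Crude sparsity proxy: keep only the top-k activations per sample.
        topk = torch.topk(z, self.k, dim=-1)
        sparse_z = torch.zeros_like(z).scatter(-1, topk.indices, topk.values)
        return self.dec(sparse_z)

def train_rounds(model, data, steps=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        recon = model(data)
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad()
        loss.backward()
        opt.step()

def generate(model, n=128, hidden=64):
    # Decode random non-negative codes as "hallucinated" samples; the
    # paper's actual sampling procedure may differ.
    with torch.no_grad():
        z = torch.relu(torch.randn(n, hidden))
        return model.dec(z)

# Iterated learning: each round trains on the expanding dataset, then
# appends the model's own generations to it.
data = torch.rand(1024, 784)  # stand-in for flattened MNIST images in [0, 1]
model = TinyAutoencoder()
for round_idx in range(10):
    train_rounds(model, data)
    data = torch.cat([data, generate(model)], dim=0)  # expanding dataset
```

Because each round's generations become the next round's training data, any drift away from the original distribution compounds over rounds, which is the mechanism the abstract credits for pushing outputs in new directions.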
