GANs, or Generative Adversarial Networks, have been a cornerstone of the machine learning and computer vision communities due to their ability to generate high-quality, realistic images by learning the complex distributions underlying the training data. However, adapting a generative model by retraining it on a target dataset is computationally expensive, so there is a need for data-efficient transfer of generative models. We hypothesize that approaches which seek to align the source and target distributions generally tend to overfit the target data. Instead, we propose Feature-wise Linear Modulation (FiLM), which applies linear modulation to the features of a non-linear network and allows fine-grained control over the learned parameters during retraining. Originally introduced for visual reasoning, FiLM learns scaling and bias parameters, gamma and beta respectively, that help close the divergence between the source and target distributions. Using a DCGAN trained on the MNIST digits dataset, we obtain results when adapting it to the Rotated MNIST dataset. As future work, we hope to explore the performance of FiLM in our proposed GAN structure in conjunction with further fine-tuning and manipulation of batchnorm statistics.
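As a minimal sketch of the FiLM mechanism described above (illustrative NumPy code, not the paper's implementation): each feature channel of an intermediate activation is scaled by a learned gamma and shifted by a learned beta, leaving the rest of the network untouched.

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: scale and shift each channel.

    features: (batch, channels, height, width)
    gamma, beta: (channels,) learned per-channel parameters
    """
    # Broadcast the per-channel parameters over batch and spatial dims.
    return gamma[None, :, None, None] * features + beta[None, :, None, None]

# Toy example: 2 images, 3 channels, 4x4 spatial grid of ones.
x = np.ones((2, 3, 4, 4))
gamma = np.array([2.0, 0.5, 1.0])  # per-channel scale
beta = np.array([1.0, 0.0, -1.0])  # per-channel shift
out = film(x, gamma, beta)
```

During adaptation, only gamma and beta would be retrained on the target data, which is what gives FiLM its parameter efficiency relative to full fine-tuning.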