Interest in interpretable machine learning has grown significantly in recent years. In this work, we present an approach belonging to the family of symbolic regression, which represents models as explicit mathematical formulas, thereby creating new features from the input variables as well as capturing interactions between them. Symbolic regression is inherently interpretable: it searches for models that are usually much simpler than random forests or neural networks, and whose formulas are explicit. Our approach, called Zoetrope Genetic Programming (ZGP), combines advances in symbolic regression with sparse linear regression to build models that are both interpretable and accurate. We demonstrate this performance on a benchmark of 97 regression datasets, comparing ZGP with state-of-the-art classical regression and symbolic regression algorithms.