Besides increasing trust in the human-AI relationship, XAI methods have the potential to promote new scientific insight. Graph neural networks (GNNs) have recently established themselves as valuable tools in chemistry and materials science, and various XAI methods have already been applied to gain new understanding of real-world scientific questions in these application domains. To that end, we propose MEGAN, a multi-explanation graph attention network. Unlike common post-hoc XAI methods, our model is self-explaining and features multiple explanation channels, which can be chosen independently of the task specifications. We first validate our model on a synthetic graph regression dataset and then apply it to the prediction of water solubility for chemical compounds. We find that it learns to produce explanations consistent with human intuition, opening the way to learning from our model in less well-understood tasks.