Probabilistic backpropagation is an approximate Bayesian inference method for deep neural networks, built on a message-passing framework. These messages---which correspond to the distributions that arise as an input is propagated through a probabilistic neural network---are approximated as Gaussian. In practice, however, the exact distributions may be highly non-Gaussian. In this paper, we propose a more faithful approximation based on a spike-and-slab distribution. Unfortunately, in this case, a better approximation of the messages does not translate into better downstream performance. We present results comparing the two schemes and discuss why we do not see a benefit from the spike-and-slab approach.
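To make the mismatch concrete, the following sketch samples from a spike-and-slab distribution (a point mass at zero mixed with a Gaussian "slab") and moment-matches it with a single Gaussian, as a Gaussian message approximation would. The parameters and helper names here are purely illustrative, not taken from the paper; the point is only that the two-moment summary discards the spike structure entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike-and-slab parameters (not from the paper): with
# probability p_spike a draw is exactly zero; otherwise it comes from
# the Gaussian slab N(slab_mean, slab_std^2).
p_spike, slab_mean, slab_std = 0.3, 1.0, 0.5

def sample_spike_and_slab(n):
    spike = rng.random(n) < p_spike
    slab = rng.normal(slab_mean, slab_std, size=n)
    return np.where(spike, 0.0, slab)

samples = sample_spike_and_slab(100_000)

# A Gaussian message keeps only the first two moments, so the point
# mass at zero -- which makes the true distribution bimodal -- is lost.
mean, var = samples.mean(), samples.var()
print(f"moment-matched Gaussian: mean={mean:.3f}, var={var:.3f}")
print(f"fraction of exact zeros: {(samples == 0).mean():.3f}")
```

Analytically, the mixture above has mean $(1-p)\mu = 0.7$ and variance $(1-p)(\sigma^2+\mu^2) - ((1-p)\mu)^2 = 0.385$, so the moment-matched Gaussian places almost no mass exactly at zero even though the true distribution puts 30% of its mass there.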