Deep learning has made great strides in recent years. For example, it is now possible to train networks with millions of neurons, using gradient-based learning methods, to classify images at near-human performance. One exciting possibility is to run these networks on energy-efficient neuromorphic hardware, such as IBM's TrueNorth chip. However, these specialized architectures impose constraints that are not typically considered in deep learning; for example, to achieve energy efficiency, TrueNorth uses low-precision synapses, spiking neurons, and restricted fan-in. In this talk, I will describe our recent work that modifies deep learning to be compatible with typical neuromorphic constraints. Using this approach, we demonstrate near state-of-the-art accuracy on 8 datasets while running at between 1,200 and 2,600 frames per second and consuming between 25 mW and 275 mW on TrueNorth.
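To give a flavor of what training under a low-precision constraint can look like, the sketch below shows one common pattern: keep a real-valued "shadow" copy of the weights for gradient updates, but project them onto a small discrete set (here {-1, 0, +1}) for the forward pass. This is an illustrative sketch only, not the talk's actual algorithm; the threshold, shapes, and the `trinarize` helper are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued "shadow" weights, updated by gradient descent as usual.
w_real = rng.normal(0.0, 0.5, size=(4, 3))

def trinarize(w, threshold=0.25):
    """Project real weights onto {-1, 0, +1} (hypothetical constraint)."""
    return np.sign(w) * (np.abs(w) > threshold)

# Forward pass would use the constrained weights...
w_low = trinarize(w_real)

# ...while a straight-through-style update applies the gradient
# (a random stand-in here) to the real-valued copy.
grad = rng.normal(size=w_real.shape)
w_real -= 0.1 * grad
```

After training, only the discrete weights need to be deployed, which is what makes this style of approach a natural fit for hardware with low-precision synapses.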