Automated facial expression recognition is useful for many applications, but models often exhibit racial biases. These biases can be hard to reveal because of the complexity and opacity of the deep networks needed for state-of-the-art performance, and hard to demonstrate because facial expressions cannot be fully matched across real people. In this paper we use artificially created faces whose facial expressions can be carefully manipulated and matched across faces with different skin colors and facial shapes. We show that several public facial expression models appear to exhibit racial bias. In future work, we will use this artificial data to help understand the basis of these biases and to remove them from facial expression models.