Recent work has shown promise in training policies for contact-rich tasks in simulation, with the hope that they can be transferred to the real world. However, closing the sim2real gap requires accounting for the effects of partial observability, which are unavoidable in the real world and particularly relevant when dealing with small parts that require precise manipulation. In this work, we perform a detailed simulation-based analysis of how pose-estimation error, object-geometry variability, and controller variability affect deep reinforcement learning algorithms. We show that asymmetric actor-critic architectures lead to more robust training under noise.
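To make the asymmetric actor-critic idea concrete, below is a minimal PyTorch sketch, not the paper's implementation: the critic consumes the privileged, noise-free simulator state (available only during training in simulation), while the actor acts solely on the noisy observation (e.g., an estimated pose), mirroring real-world partial observability. All dimensions, layer sizes, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Policy network: acts on noisy observations only (deployable in the real world)."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class Critic(nn.Module):
    """Value network: sees privileged, noise-free simulator state (training only)."""
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

if __name__ == "__main__":
    obs_dim, state_dim, act_dim = 14, 20, 7  # hypothetical sizes
    actor, critic = Actor(obs_dim, act_dim), Critic(state_dim)

    true_state = torch.randn(1, state_dim)   # privileged simulator state
    noisy_obs = torch.randn(1, obs_dim)      # e.g., pose estimate corrupted by sensor noise
    action = actor(noisy_obs)                # actor never sees the true state
    value = critic(true_state)               # critic exploits privileged information
    print(action.shape, value.shape)
```

Because the critic is discarded at deployment, giving it privileged state costs nothing at test time, while the lower-variance value estimates it provides can stabilize policy-gradient training under observation noise.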