In this paper, we propose an MPC method for robot motion that formulates MPC as Bayesian inference. We use amortized variational inference to approximate the posterior with a normalizing flow conditioned on the start, goal, and environment. Representing the posterior with a normalizing flow lets us model complex distributions, which is important in robotics, where real environments impose difficult constraints on trajectories. We also present an approach for generalizing the learned sampling distribution to novel environments outside the training distribution. We demonstrate that our approach generalizes to a difficult novel environment and outperforms a baseline sampling-based MPC method on a navigation problem.
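To make the idea of a conditioned normalizing flow concrete, the sketch below shows a minimal conditional flow built from affine coupling layers, where base Gaussian noise is pushed through invertible transforms whose scale and shift depend on a context vector (standing in for start, goal, and environment features). This is an illustrative sketch only, not the paper's implementation: the dimensions, weights, and two-layer architecture are hypothetical, and in practice the weights would be trained by amortized variational inference.

```python
import numpy as np

rng = np.random.default_rng(0)

D, C = 4, 3        # trajectory-parameter dim and context dim (both hypothetical)
half = D // 2

# Hypothetical untrained weights; amortized variational inference would
# fit these across many (start, goal, environment) tuples.
W_s1 = rng.normal(size=(half + C, half)) * 0.1
W_t1 = rng.normal(size=(half + C, half)) * 0.1
W_s2 = rng.normal(size=(half + C, half)) * 0.1
W_t2 = rng.normal(size=(half + C, half)) * 0.1

def coupling(z, ctx, W_s, W_t, invert=False):
    """Affine coupling layer: the first half of z, together with the
    context, determines the scale/shift applied to the second half."""
    z1, z2 = z[:, :half], z[:, half:]
    h = np.concatenate([z1, np.tile(ctx, (z.shape[0], 1))], axis=1)
    s, t = np.tanh(h @ W_s), h @ W_t      # bounded log-scale for stability
    z2 = (z2 - t) * np.exp(-s) if invert else z2 * np.exp(s) + t
    return np.concatenate([z1, z2], axis=1)

def flip(z):
    # Permutation between layers so all dimensions get transformed.
    return z[:, ::-1]

def forward(z, ctx):
    return coupling(flip(coupling(z, ctx, W_s1, W_t1)), ctx, W_s2, W_t2)

def inverse(x, ctx):
    z = coupling(x, ctx, W_s2, W_t2, invert=True)
    return coupling(flip(z), ctx, W_s1, W_t1, invert=True)

def sample(ctx, n=8):
    """Draw candidate trajectory parameters for an MPC sampler by pushing
    base Gaussian noise through the conditional flow."""
    return forward(rng.normal(size=(n, D)), ctx)
```

Because each coupling layer is exactly invertible, the same network supports both sampling and density evaluation, which is what allows the flow to serve as a learned, environment-conditioned sampling distribution inside a sampling-based MPC loop.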