Convergence guarantees of GFlowNets
Abstract
Although they were introduced to approximate complex distributions defined up to normalization, Generative Flow Networks (GFlowNets) only provide strong guarantees under idealized conditions. In practice, however, these conditions are never satisfied exactly when GFlowNets are trained with gradient-based methods. In this paper, we prove that minimizing the Trajectory Balance loss, a popular GFlowNet objective, does drive the distribution induced by the model closer to the target distribution of interest, theoretically confirming a long-standing intuition from the GFlowNet literature. Specifically, we show that the KL divergence between the two distributions is upper-bounded by the quantity being minimized, and we verify this theoretical result on a simple sampling task.
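For reference, the Trajectory Balance objective mentioned above is usually written as follows in the GFlowNet literature (the notation here is a sketch and may differ from that used in the body of the paper): for a complete trajectory $\tau = (s_0 \to s_1 \to \cdots \to s_n = x)$ with terminal reward $R(x)$, learned partition function estimate $Z_\theta$, forward policy $P_F$, and backward policy $P_B$,
\[
\mathcal{L}_{\mathrm{TB}}(\tau; \theta) = \left( \log \frac{Z_\theta \prod_{t=0}^{n-1} P_F(s_{t+1} \mid s_t; \theta)}{R(x) \prod_{t=0}^{n-1} P_B(s_t \mid s_{t+1}; \theta)} \right)^2 .
\]
Minimizing this quantity over trajectories is the training signal whose effect on the induced terminal distribution is analyzed in this paper.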