Poster in Workshop: Differentiable Programming Workshop

Unbiased Reparametrisation Gradient via Smoothing and Diagonalisation

Dominik Wagner · Luke Ong


Abstract:

It is well known that the reparametrisation gradient estimator is biased for non-differentiable models. To formalise the problem, we consider a variant of the simply typed lambda calculus that supports the reparametrisation of arguments. We endow this language with a denotational semantics based on the cartesian closed category of Frölicher spaces (parametrised by a smoothing accuracy), which generalise smooth manifolds. Finally, we apply the standard reparametrisation gradient to the smoothed model and show that, by enhancing the accuracy of the smoothing in a diagonalisation fashion, we converge to a critical point of the original optimisation problem.
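To illustrate the bias the abstract refers to, here is a minimal NumPy sketch (not from the paper; the example objective, the sigmoid smoothing, and all function names are illustrative assumptions). The objective is E_{z~N(θ,1)}[ [z > 0] ] = Φ(θ), whose true gradient is the Gaussian density φ(θ). The naive reparametrisation estimator differentiates the non-differentiable indicator pathwise and collapses to 0; replacing the indicator by a sigmoid σ(z/η) yields a differentiable surrogate whose reparametrisation gradient approaches φ(θ) as the smoothing accuracy η shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_reparam_grad(theta, n=100_000):
    # Reparametrise z = theta + eps, eps ~ N(0, 1), and differentiate
    # the indicator [z > 0] pathwise: the derivative is 0 for almost
    # every sample, so the estimator is identically 0 -- biased, since
    # the true gradient of E[[z > 0]] = Phi(theta) is phi(theta) > 0.
    eps = rng.standard_normal(n)
    z = theta + eps
    return np.zeros_like(z).mean()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smoothed_reparam_grad(theta, eta, n=100_000):
    # Smooth the indicator to sigma(z / eta); the surrogate is
    # differentiable, so the pathwise (reparametrisation) gradient
    # d/dtheta sigma((theta + eps) / eta) = sigma'(z/eta) / eta is sound.
    eps = rng.standard_normal(n)
    z = theta + eps
    s = sigmoid(z / eta)
    return (s * (1.0 - s) / eta).mean()

theta = 0.5
true_grad = np.exp(-theta**2 / 2) / np.sqrt(2 * np.pi)  # phi(theta)
# Sharpening the smoothing (eta -> 0) drives the surrogate gradient
# towards the true gradient of the original, non-smooth objective.
for eta in [1.0, 0.3, 0.1]:
    print(f"eta={eta}: {smoothed_reparam_grad(theta, eta):.4f} "
          f"(true {true_grad:.4f})")
```

The paper's diagonalisation argument interleaves this sharpening of the accuracy with the optimisation steps themselves; the sketch above only shows the static bias/convergence picture for a single parameter value.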