
Invited Talk in Workshop: Mathematics of Modern Machine Learning (M3L)

From algorithms to neural networks and back

Andrej Risteski

Sat 16 Dec 7 a.m. PST — 7:45 a.m. PST

Abstract:

An increasingly common design and analysis paradigm for neural networks is to think of them as parametrizing, implicitly or explicitly, some algorithm. In images, score-based generative models can be thought of as parametrizing a learned sampler (a stochastic differential equation or a Markov chain). In scientific applications, PDE solvers are trained as neural analogues of numerical solvers. In language, we probe whether transformers can solve simple algorithmic tasks like parsing. In this talk, I’ll share several vignettes illustrating the value of an algorithmic lens in these settings: understanding the performance of “natural” algorithms allows us to understand the performance of neural methods, as well as to explore and elucidate the architectural design space.
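As a concrete instance of the "learned sampler" view mentioned in the abstract, score-based generative models pair a forward noising diffusion with a reverse-time SDE whose drift involves the score of the noised data distribution; a network trained to approximate that score is then plugged into the reverse dynamics to sample. A minimal sketch in the standard SDE notation (the symbols f, g, and the score network s_theta follow the usual formulation and are not taken from the talk itself):

```latex
\begin{align*}
  \text{forward (noising) SDE:} \quad
    & dx = f(x,t)\,dt + g(t)\,dW_t, \qquad t \in [0,T],\\
  \text{reverse-time SDE:} \quad
    & dx = \bigl[f(x,t) - g(t)^2 \nabla_x \log p_t(x)\bigr]\,dt + g(t)\,d\bar{W}_t,\\
  \text{learned sampler:} \quad
    & dx = \bigl[f(x,t) - g(t)^2 s_\theta(x,t)\bigr]\,dt + g(t)\,d\bar{W}_t,
\end{align*}
```

Here $s_\theta(x,t)$ is trained (e.g. by denoising score matching) to approximate the score $\nabla_x \log p_t(x)$, so the neural network literally parametrizes the sampler: analyzing the resulting dynamics as an algorithm is what the algorithmic lens refers to.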
