

Poster

An interior-point stochastic approximation method and an L1-regularized delta rule

Peter Carbonetto · Mark Schmidt · Nando de Freitas


Abstract:

The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its far-reaching applicability, there is almost no work on applying stochastic approximation to learning problems with constraints. The reason for this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of an interior-point stochastic approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
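The abstract does not spell out the update rule, so the following is a minimal sketch of the general flavor only: an on-line least-squares delta rule in which the L1 penalty is handled interior-point style by splitting the weights as w = u - v with u, v >= 0 and keeping the iterates strictly positive with a log barrier whose parameter is annealed to zero. The function name, step-size schedule, and barrier schedule are all assumptions made for illustration; the paper's stability analysis concerns a genuine interior-point method, not this simplified gradient version.

```python
import numpy as np

def l1_delta_rule_sketch(X, y, lam=0.1, n_passes=1, seed=0):
    """Hypothetical sketch: on-line least-squares (delta rule) with an
    L1 penalty handled via a log barrier on the split w = u - v,
    u, v >= 0. The schedules below are common textbook choices, not
    the schedules analyzed in the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    u = np.full(d, 1e-2)   # positive part of w, kept strictly > 0
    v = np.full(d, 1e-2)   # negative part of w, kept strictly > 0
    t = 0
    for _ in range(n_passes):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (t + 10.0)        # Robbins-Monro step size (assumed)
            mu = 1.0 / np.sqrt(t + 10.0)  # barrier parameter annealed to 0 (assumed)
            err = X[i] @ (u - v) - y[i]   # delta-rule prediction error
            g = err * X[i]                # stochastic gradient w.r.t. w
            # Gradients of loss + lam*(u + v) - mu*(sum log u + sum log v):
            gu = g + lam - mu / u
            gv = -g + lam - mu / v
            # Crude clipping to keep the iterates interior; a real
            # interior-point method would instead use a damped,
            # fraction-to-the-boundary step-length rule.
            u = np.maximum(u - eta * gu, 1e-10)
            v = np.maximum(v - eta * gv, 1e-10)
    return u - v

# Example: recover a sparse weight vector from noisy linear measurements.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=500)
w_hat = l1_delta_rule_sketch(X, y, lam=0.05, n_passes=5)
print(np.round(w_hat, 2))  # irrelevant weights are driven close to zero
```

The clipping in the last two update lines is only a stand-in to keep the sketch short; the paper's point is precisely that a properly damped interior-point step gives the stochastic iteration a robustness that such ad hoc safeguards do not.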
