Oral
An interior-point stochastic approximation method and an L1-regularized delta rule
Peter Carbonetto · Mark Schmidt · Nando de Freitas

Wed Dec 10 04:20 PM -- 04:40 PM (PST)

The stochastic approximation method underlies the solution to many important, actively studied problems in machine learning. Despite its far-reaching applications, there is almost no work on applying stochastic approximation to learning problems with constraints. The reason, we hypothesize, is that no robust, widely applicable stochastic approximation method exists for handling such problems. We propose that interior-point methods are a natural solution. We establish the stability of a stochastic interior-point approximation method both analytically and empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
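To make the idea concrete, the sketch below illustrates one way an on-line L1-regularized delta rule can be cast as a constrained problem and handled with a log-barrier: the weights are split into positive and negative parts so the L1 penalty becomes a smooth linear term under non-negativity constraints, and stochastic gradient steps are damped to stay strictly feasible while the barrier parameter is annealed. This is a hypothetical, simplified primal sketch with assumed synthetic data, step sizes, and schedules; it is not the algorithm derived in the paper.

```python
# Hypothetical illustration only: a primal log-barrier sketch of an
# on-line L1-regularized delta rule. The synthetic data, schedules, and
# all parameter values below are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse linear-regression stream (assumed for illustration).
n_features, n_steps = 20, 5000
w_true = np.zeros(n_features)
w_true[:3] = [2.0, -1.5, 1.0]           # only three relevant features

lam = 0.1        # L1 penalty strength
eta0 = 0.05      # base learning rate
mu = 1.0         # barrier parameter, annealed toward zero

# Split w = w_pos - w_neg with w_pos, w_neg > 0, so the L1 penalty
# lam*||w||_1 becomes the smooth linear term lam*sum(w_pos + w_neg)
# subject to non-negativity constraints handled by a log-barrier.
w_pos = np.full(n_features, 0.1)
w_neg = np.full(n_features, 0.1)


def feasible_scale(w, step, frac=0.95):
    """Largest multiple of `step` (capped at 1) keeping w strictly positive."""
    shrinking = step < 0
    if not np.any(shrinking):
        return 1.0
    return min(1.0, frac * np.min(-w[shrinking] / step[shrinking]))


for t in range(1, n_steps + 1):
    x = rng.normal(size=n_features)
    y = w_true @ x + 0.1 * rng.normal()

    err = (w_pos - w_neg) @ x - y        # delta-rule prediction error

    # Stochastic gradient of
    #   0.5*err**2 + lam*sum(w_pos + w_neg) - mu*sum(log w_pos + log w_neg)
    g_pos = err * x + lam - mu / w_pos
    g_neg = -err * x + lam - mu / w_neg

    eta = eta0 / np.sqrt(t)              # Robbins-Monro-style decaying step size
    step_pos, step_neg = -eta * g_pos, -eta * g_neg

    # Damp the step so the iterate stays in the interior of the feasible
    # region (a crude stand-in for an interior-point line search).
    scale = min(feasible_scale(w_pos, step_pos),
                feasible_scale(w_neg, step_neg))
    w_pos += scale * step_pos
    w_neg += scale * step_neg

    mu *= 0.999                          # slowly drive the barrier to zero

print("w_true  :", np.round(w_true, 2))
print("estimate:", np.round(w_pos - w_neg, 2))
```

Splitting the weights into positive and negative parts is one standard way to turn the nondifferentiable L1 term into smooth bound constraints, which is what makes an interior-point treatment of the constraints possible in the first place.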

Author Information

Peter Carbonetto (University of British Columbia)
Mark Schmidt (INRIA - SIERRA Project Team)
Nando de Freitas (University of Oxford)
