Poster
First order expansion of convex regularized estimators
Pierre C Bellec · Arun Kuchibhotla
East Exhibition Hall B, C #233
Keywords: [ Learning Theory ] [ Theory ] [ Large Deviations and Asymptotic Analysis ] [ Algorithms -> Regression ] [ Algorithms -> Sparsity and Compressed Sensing ]
Abstract:
We consider first order expansions of convex penalized estimators in
high-dimensional regression problems with random designs. Our setting includes
linear regression and logistic regression as special cases. For a given
penalty function $h$ and the corresponding penalized estimator $\hat{\beta}$, we
construct a quantity $\eta$, the first order expansion of $\hat{\beta}$, such that
the distance between $\hat{\beta}$ and $\eta$ is an order of magnitude smaller than
the estimation error $\|\hat{\beta} - \beta^*\|$. In this sense, the first
order expansion $\eta$ can be thought of as a generalization of influence
functions from the mathematical statistics literature to regularized estimators
in high dimensions. This first order expansion implies that the risk of
$\hat{\beta}$ is asymptotically the same as the risk of $\eta$, which leads to a
precise characterization of the MSE of $\hat{\beta}$; this characterization takes a
particularly simple form for isotropic designs. The first order expansion also
yields inference results based on $\hat{\beta}$. We provide sufficient
conditions for the existence of such a first order expansion for three
regularizers: the Lasso in its constrained form, the Lasso in its penalized
form, and the Group-Lasso. The results apply to general loss functions under
conditions that are satisfied by the squared loss in linear regression and by
the logistic loss in the logistic model.
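
To make the expansion property concrete, here is a hedged formalization (our paraphrase of "an order of magnitude smaller"; the exact norm and probabilistic quantifiers are those of the paper). The defining property of $\eta$ is that the approximation error is negligible relative to the estimation error,
$$\|\hat{\beta} - \eta\| = o\bigl(\|\hat{\beta} - \beta^*\|\bigr),$$
in analogy with the classical influence-function expansion for a fixed-dimensional M-estimator, $\hat{\beta} = \beta^* + \frac{1}{n}\sum_{i=1}^{n} \psi(x_i, y_i) + o_P(n^{-1/2})$, where $\psi$ is the influence function and the leading term plays the role of $\eta$. In particular, since $\hat{\beta}$ and $\eta$ are this close, the risk of $\hat{\beta}$ agrees with the risk of $\eta$ to first order, which is what drives the MSE characterization and the inference results mentioned above.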