

Poster

Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness

Garvesh Raskutti · Martin J Wainwright · Bin Yu


Abstract: This paper uses information-theoretic techniques to determine minimax rates for estimating nonparametric sparse additive regression models under high-dimensional scaling. We assume an additive decomposition of the form $f(X_1,\ldots,X_p) = \sum_{j \in S} h_j(X_j)$, where each component function $h_j$ lies in some Hilbert space $\mathcal{H}$ and $S \subset \{1,\ldots,p\}$ is an unknown subset with cardinality $s = |S|$. Given $n$ i.i.d. observations of $f(X)$ corrupted with white Gaussian noise, where the covariate vectors $(X_1, X_2, \ldots, X_p)$ are drawn with i.i.d. components from some distribution $\mathbb{P}$, we determine tight lower bounds on the minimax rate for estimating the regression function with respect to squared $L^2(\mathbb{P})$ error. The main result shows that the minimax rate is $\max\big(s \log(p/s)/n, \; s\,\epsilon_n^2(\mathcal{H})\big)$. The first term reflects the difficulty of performing *subset selection* and is independent of the Hilbert space $\mathcal{H}$; the second term $s\,\epsilon_n^2(\mathcal{H})$ is an *$s$-dimensional estimation* term, depending only on the low dimension $s$ and not on the ambient dimension $p$, that captures the difficulty of estimating a sum of $s$ univariate functions in the Hilbert space $\mathcal{H}$. As a special case, if $\mathcal{H}$ corresponds to the $m$-th order Sobolev space $\mathcal{S}^m$ of functions that are $m$-times differentiable, the $s$-dimensional estimation term takes the form $s\,\epsilon_n^2(\mathcal{H}) \asymp s\, n^{-2m/(2m+1)}$. The minimax rates are compared with the rates achieved by an $\ell_1$-penalty based approach, and it can be shown that a certain $\ell_1$-based approach achieves the minimax-optimal rate.
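For concreteness, the result can be sketched in the standard sampling model implicit in the abstract; the noise level $\sigma$ and the universal constant $c$ below are illustrative assumptions, not quantities stated in the abstract. The observations take the form

\[
y_i = \sum_{j \in S} h_j(x_{ij}) + w_i, \qquad w_i \sim N(0, \sigma^2), \quad i = 1, \ldots, n,
\]

and the lower bound asserts that, with the supremum taken over the class of $s$-sparse additive functions described above,

\[
\inf_{\hat{f}} \, \sup_{f} \, \mathbb{E}\big\|\hat{f} - f\big\|_{L^2(\mathbb{P})}^2 \;\geq\; c \, \max\!\Big( \frac{s \log(p/s)}{n}, \; s\,\epsilon_n^2(\mathcal{H}) \Big).
\]

In the Sobolev special case $\mathcal{H} = \mathcal{S}^m$, the right-hand side becomes $c \, \max\big( s \log(p/s)/n, \; s\, n^{-2m/(2m+1)} \big)$, so subset selection dominates when $p$ is large relative to $n$, and the $s$-dimensional estimation term dominates otherwise.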
