Poster

Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods

Veeranjaneyulu Sadhanala · Yu-Xiang Wang · James Sharpnack · Ryan Tibshirani

Pacific Ballroom #209

Keywords: [ Sparsity and Compressed Sensing ] [ Frequentist Statistics ] [ Regularization ] [ Denoising ] [ Signal Processing ] [ Spaces of Functions and Kernels ]


Abstract: We consider the problem of estimating the values of a function over $n$ nodes of a $d$-dimensional grid graph (having equal side lengths $n^{1/d}$) from noisy observations. The function is assumed to be smooth, but is allowed to exhibit different amounts of smoothness at different regions in the grid. Such heterogeneity eludes classical measures of smoothness from nonparametric statistics, such as Hölder smoothness. Meanwhile, total variation (TV) smoothness classes allow for heterogeneity, but are restrictive in another sense: only constant functions count as perfectly smooth (achieve zero TV). To move past this, we define two new higher-order TV classes, based on two ways of compiling the discrete derivatives of a parameter across the nodes. We relate these two new classes to Hölder classes, and derive lower bounds on their minimax errors. We also analyze two naturally associated trend filtering methods; when $d=2$, each is seen to be rate optimal over the appropriate class.
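To make the distinction concrete, the following sketch computes first-order TV on a 2D grid (sum of absolute differences across grid edges, under which only constant signals are perfectly smooth) alongside a second-order penalty built from discrete second derivatives, under which linear trends are also perfectly smooth. This is an illustrative assumption-laden toy, not the paper's estimators; the function names `grid_tv` and `grid_tv2` are hypothetical.

```python
import numpy as np

def grid_tv(theta):
    """First-order TV of a signal on a 2D grid graph:
    sum of |differences| across horizontal and vertical edges."""
    horiz = np.abs(np.diff(theta, axis=1)).sum()
    vert = np.abs(np.diff(theta, axis=0)).sum()
    return horiz + vert

def grid_tv2(theta):
    """A simple second-order penalty: sum of |discrete second
    differences| along each axis. Linear ramps incur zero penalty,
    illustrating how higher-order classes enlarge the set of
    'perfectly smooth' signals beyond constants."""
    h2 = np.abs(np.diff(theta, n=2, axis=1)).sum()
    v2 = np.abs(np.diff(theta, n=2, axis=0)).sum()
    return h2 + v2

# A constant signal: zero first-order TV.
flat = np.ones((4, 4))

# A piecewise-constant step: positive first-order TV.
step = np.zeros((4, 4))
step[:, 2:] = 1.0

# A linear ramp: positive first-order TV, but zero second-order penalty.
ramp = np.tile(np.arange(4.0), (4, 1))
```

The trend filtering methods in the paper minimize a least-squares fit plus a penalty of this higher-order kind, so a ramp-like trend is not penalized even though its first-order TV is large.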
