

Workshop

Practical Application of Sparse Modeling: Open Issues and New Directions

Irina Rish · Alexandru Niculescu-Mizil · Guillermo Cecchi · Aurelie Lozano

Hilton: Sutcliffe A

Fri 10 Dec, 7:30 a.m. PST

Sparse modeling is a rapidly developing area at the intersection of statistics, machine learning and signal processing, focused on the problem of variable selection in high-dimensional datasets. Selection (and, moreover, construction) of a small set of highly predictive variables is central to many applications where the ultimate objective is to enhance our understanding of underlying physical, biological and other natural processes, beyond just building accurate "black-box" predictors.

Recent years have witnessed a flurry of research on algorithms and theory for sparse modeling, mainly focused on l1-regularized optimization, a convex relaxation of the (NP-hard) smallest subset selection problem. Examples include sparse regression, such as Lasso and its various extensions (Elastic Net, fused Lasso, group Lasso, simultaneous (multi-task) Lasso, adaptive Lasso, bootstrap Lasso, etc.); sparse graphical model selection; sparse dimensionality reduction (sparse PCA, CCA, NMF, etc.); and learning dictionaries that allow sparse representations. Applications of these methods are wide-ranging, including computational biology, neuroscience, image processing, stock market prediction and social network analysis, as well as compressed sensing, an extremely fast-growing area of signal processing.
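To make the core idea concrete, here is a minimal sketch (not part of the original call) of l1-regularized regression using scikit-learn's Lasso. The problem sizes, noise level and alpha value are illustrative assumptions; the point is only that the l1 penalty drives most coefficients exactly to zero.

```python
# Minimal sketch of l1-regularized regression (Lasso) on synthetic data.
# All sizes and the alpha value below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 100, 50, 5                     # samples, features, true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                           # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)       # alpha weights the l1 penalty
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```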

However, is the promise of sparse modeling realized in practice? It turns out that, despite the significant advances in the field, a number of open issues remain when sparse modeling meets real-life applications. Below we mention only a few of them (see the workshop website for a more detailed discussion): stability of sparse models; selection of the "right" regularization parameter (model selection); finding the "right" representation (dictionary learning); handling structured sparsity; evaluation of the results; and interpretability.
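As one common, if partial, answer to the regularization-parameter question, the following sketch selects alpha by cross-validation with scikit-learn's LassoCV. The synthetic data, fold count and grid size are illustrative assumptions, not workshop recommendations:

```python
# Sketch: choosing the l1 regularization parameter by cross-validation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.concatenate([np.full(5, 2.0), np.zeros(p - 5)])
y = X @ beta + 0.1 * rng.standard_normal(n)

# 5-fold cross-validation over an automatically generated grid of 100 alphas
cv_model = LassoCV(cv=5, n_alphas=100).fit(X, y)
print("selected alpha:", cv_model.alpha_)
```

Note that cross-validation optimizes predictive error, which need not coincide with the alpha that best recovers the true support; that gap is precisely one of the open issues above.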

We would like to invite researchers working on the methodology, theory and especially applications of sparse modeling to share their experiences and insights into both the basic properties of the methods and the properties of the application domains that make particular methods more (or less) suitable. Moreover, we plan to have a brainstorming session on various open issues, including (but not limited to) the ones mentioned above, and hope to come up with a set of new research directions motivated by problems encountered in practical applications.

We welcome submissions on various practical aspects of sparse modeling, specifically focusing on the following questions:

- Does sparse modeling provide a meaningful interpretation of interest to domain experts?
- What other properties of sparse models are desirable for better interpretability?
- How robust is the method with respect to various types of noise in the data?
- What type of method (e.g., combination of regularizers) is best suited for a particular application, and why?
- What is the best representation allowing for sparse modeling in your domain, and how do you find such a representation efficiently?
- How is the model evaluated with respect to its structure-recovery quality? (One such evaluation is sketched below.)
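As an illustration of the last question, the following sketch shows one possible evaluation protocol (an assumption for illustration, not a workshop recommendation): on synthetic data with a known ground-truth support, compare the support recovered by Lasso against the truth via precision and recall.

```python
# Sketch: evaluating structure-recovery quality on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, k = 150, 60, 6
true_support = np.zeros(p, dtype=bool)
true_support[:k] = True                  # known ground-truth support
beta = np.where(true_support, 1.5, 0.0)
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)

est_support = Lasso(alpha=0.1).fit(X, y).coef_ != 0
tp = np.sum(est_support & true_support)  # correctly recovered variables
precision = tp / max(int(est_support.sum()), 1)
recall = tp / int(true_support.sum())
print(f"support precision: {precision:.2f}, recall: {recall:.2f}")
```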
