Many problems in machine learning involve collecting high-dimensional multivariate observations, or sequences of observations, and then fitting a compact model that explains them. Recently, linear algebra techniques have offered a fundamentally different perspective on how to fit these models and perform inference in them. Exploiting the underlying spectral properties of the model parameters has led to fast, provably consistent methods for parameter learning, in contrast to previous approaches, such as Expectation Maximization, which suffer from poor local optima and slow convergence.
In the past several years, these spectral learning algorithms have become increasingly popular. They have been applied to learn the structure and parameters of many models, including predictive state representations, finite state transducers, hidden Markov models, latent trees, latent junction trees, probabilistic context-free grammars, and mixture/admixture models. Spectral learning algorithms have also been applied to a wide range of application domains, including system identification, video modeling, speech modeling, robotics, and natural language processing.
The focus of this workshop will be on spectral learning algorithms, broadly construed as any method that fits a model by way of a spectral decomposition of moments of (features of) observations. We would like the workshop to be as inclusive as possible and encourage paper submissions and participation from a wide range of research related to this focus.
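To make the definition above concrete, here is a minimal sketch in Python (NumPy) of the moment-based spectral idea applied to a toy hidden Markov model. The model sizes, the variable names (`P21`, `U_k`), and the specific construction are illustrative assumptions in the spirit of spectral HMM methods, not an algorithm taken from the workshop itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a discrete HMM with k hidden states and
# d observation symbols, with d > k.
k, d, n = 3, 10, 50_000
T = rng.dirichlet(np.ones(k), size=k)      # k x k transition matrix (rows sum to 1)
O = rng.dirichlet(np.ones(d), size=k).T    # d x k observation matrix (columns sum to 1)

# Sample a length-n observation sequence from the HMM.
states = np.empty(n, dtype=int)
states[0] = rng.integers(k)
for t in range(1, n):
    states[t] = rng.choice(k, p=T[states[t - 1]])
obs = np.array([rng.choice(d, p=O[:, s]) for s in states])

# Empirical second moment of consecutive observation pairs:
# P21[i, j] ~= Pr(x_{t+1} = i, x_t = j).
P21 = np.zeros((d, d))
np.add.at(P21, (obs[1:], obs[:-1]), 1.0)
P21 /= n - 1

# Spectral step: the population P21 factors as O @ M @ O.T for some k x k
# matrix M, so it has rank at most k. The top-k left singular vectors of
# the empirical P21 therefore estimate the column span of O.
U, s_vals, _ = np.linalg.svd(P21)
U_k = U[:, :k]
print("singular values:", np.round(s_vals[: k + 1], 4))
```

The singular values of `P21` drop sharply after the first `k`, revealing the number of hidden states, and `U_k` gives a consistent estimate of the observable subspace; full spectral HMM algorithms go on to combine this subspace with third-order moments to recover observable operators, all without the iterative likelihood ascent that EM requires.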
We will encourage submissions on the following themes:
- How can spectral techniques help us develop fast, local-minima-free solutions to real-world problems where existing methods such as Expectation Maximization are unsatisfactory?
- How do spectral/moment methods compare to maximum-likelihood estimators and Bayesian methods, especially in terms of robustness, statistical efficiency, and computational efficiency?
- What notions of spectral decompositions are appropriate for latent variable models and structured prediction problems?
- How can spectral methods take advantage of multi-core/multi-node computing environments?
- What computational problems, besides parameter estimation, can benefit from spectral decompositions and operator parameterizations? (For example, applications to parsing.)
Author Information
Byron Boots (University of Washington)
Daniel Hsu (Columbia University), <https://www.cs.columbia.edu/~djhsu/>
Borja Balle (McGill University)
More from the Same Authors
- 2020 : Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics »
  Bo Cowgill · Fabrizio Dell'Acqua · Augustin Chaintreau · Nakul Verma · Samuel Deng · Daniel Hsu
- 2021 Spotlight: Bayesian decision-making under misspecified priors with applications to meta-learning »
  Max Simchowitz · Christopher Tosh · Akshay Krishnamurthy · Daniel Hsu · Thodoris Lykouris · Miro Dudik · Robert Schapire
- 2021 : Towards a Trace-Preserving Tensor Network Representation of Quantum Channels »
  Siddarth Srinivasan · Sandesh Adhikary · Jacob Miller · Guillaume Rabusseau · Byron Boots
- 2022 : Learning Semantics-Aware Locomotion Skills from Human Demonstrations »
  Yuxiang Yang · Xiangyun Meng · Wenhao Yu · Tingnan Zhang · Jie Tan · Byron Boots
- 2023 Poster: Adversarial Model for Offline Reinforcement Learning »
  Mohak Bhardwaj · Tengyang Xie · Byron Boots · Nan Jiang · Ching-An Cheng
- 2023 Poster: Representational Strengths and Limitations of Transformers »
  Clayton Sanford · Daniel Hsu · Matus Telgarsky
- 2022 Poster: Masked Prediction: A Parameter Identifiability View »
  Bingbin Liu · Daniel Hsu · Pradeep Ravikumar · Andrej Risteski
- 2021 Poster: Support vector machines and linear regression coincide with very high-dimensional features »
  Navid Ardeshir · Clayton Sanford · Daniel Hsu
- 2021 Poster: Bayesian decision-making under misspecified priors with applications to meta-learning »
  Max Simchowitz · Christopher Tosh · Akshay Krishnamurthy · Daniel Hsu · Thodoris Lykouris · Miro Dudik · Robert Schapire
- 2020 : Q&A: Byron Boots »
  Byron Boots
- 2020 : Invited Talk: Byron Boots »
  Byron Boots
- 2020 Poster: Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks »
  Amir Rahimi · Amirreza Shaban · Ching-An Cheng · Richard I Hartley · Byron Boots
- 2020 Poster: Ensuring Fairness Beyond the Training Data »
  Debmalya Mandal · Samuel Deng · Suman Jana · Jeannette Wing · Daniel Hsu
- 2019 : Continuous Online Learning and New Insights to Online Imitation Learning »
  Jonathan Lee · Ching-An Cheng · Ken Goldberg · Byron Boots
- 2019 Poster: On the number of variables to use in principal component regression »
  Ji Xu · Daniel Hsu
- 2018 Poster: Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate »
  Mikhail Belkin · Daniel Hsu · Partha P Mitra
- 2018 Poster: Benefits of over-parameterization with EM »
  Ji Xu · Daniel Hsu · Arian Maleki
- 2018 Poster: Leveraged volume sampling for linear regression »
  Michal Derezinski · Manfred K. Warmuth · Daniel Hsu
- 2018 Spotlight: Leveraged volume sampling for linear regression »
  Michal Derezinski · Manfred K. Warmuth · Daniel Hsu
- 2017 Poster: Linear regression without correspondence »
  Daniel Hsu · Kevin Shi · Xiaorui Sun
- 2016 Poster: Global Analysis of Expectation Maximization for Mixtures of Two Gaussians »
  Ji Xu · Daniel Hsu · Arian Maleki
- 2016 Oral: Global Analysis of Expectation Maximization for Mixtures of Two Gaussians »
  Ji Xu · Daniel Hsu · Arian Maleki
- 2016 Poster: Search Improves Label for Active Learning »
  Alina Beygelzimer · Daniel Hsu · John Langford · Chicheng Zhang
- 2015 Poster: Mixing Time Estimation in Reversible Markov Chains from a Single Sample Path »
  Daniel Hsu · Aryeh Kontorovich · Csaba Szepesvari
- 2015 Poster: Efficient and Parsimonious Agnostic Active Learning »
  Tzu-Kuo Huang · Alekh Agarwal · Daniel Hsu · John Langford · Robert Schapire
- 2015 Spotlight: Efficient and Parsimonious Agnostic Active Learning »
  Tzu-Kuo Huang · Alekh Agarwal · Daniel Hsu · John Langford · Robert Schapire
- 2014 Poster: Scalable Non-linear Learning with Adaptive Polynomial Expansions »
  Alekh Agarwal · Alina Beygelzimer · Daniel Hsu · John Langford · Matus J Telgarsky
- 2014 Poster: The Large Margin Mechanism for Differentially Private Maximization »
  Kamalika Chaudhuri · Daniel Hsu · Shuang Song
- 2013 Poster: When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity »
  Anima Anandkumar · Daniel Hsu · Majid Janzamin · Sham M Kakade
- 2013 Poster: Contrastive Learning Using Spectral Methods »
  James Y Zou · Daniel Hsu · David Parkes · Ryan Adams
- 2012 Poster: Learning Mixtures of Tree Graphical Models »
  Anima Anandkumar · Daniel Hsu · Furong Huang · Sham M Kakade
- 2012 Poster: A Spectral Algorithm for Latent Dirichlet Allocation »
  Anima Anandkumar · Dean P Foster · Daniel Hsu · Sham M Kakade · Yi-Kai Liu
- 2012 Poster: Identifiability and Unmixing of Latent Parse Trees »
  Percy Liang · Sham M Kakade · Daniel Hsu
- 2012 Spotlight: A Spectral Algorithm for Latent Dirichlet Allocation »
  Anima Anandkumar · Dean P Foster · Daniel Hsu · Sham M Kakade · Yi-Kai Liu
- 2012 Poster: Spectral Learning of General Weighted Automata via Constrained Matrix Completion »
  Borja Balle · Mehryar Mohri
- 2012 Oral: Spectral Learning of General Weighted Automata via Constrained Matrix Completion »
  Borja Balle · Mehryar Mohri
- 2011 Poster: Stochastic convex optimization with bandit feedback »
  Alekh Agarwal · Dean P Foster · Daniel Hsu · Sham M Kakade · Sasha Rakhlin
- 2011 Poster: Spectral Methods for Learning Multivariate Latent Tree Structure »
  Anima Anandkumar · Kamalika Chaudhuri · Daniel Hsu · Sham M Kakade · Le Song · Tong Zhang
- 2010 Poster: Agnostic Active Learning Without Constraints »
  Alina Beygelzimer · Daniel Hsu · John Langford · Tong Zhang
- 2010 Poster: Predictive State Temporal Difference Learning »
  Byron Boots · Geoffrey Gordon
- 2009 Poster: A Parameter-free Hedging Algorithm »
  Kamalika Chaudhuri · Yoav Freund · Daniel Hsu
- 2009 Poster: Multi-Label Prediction via Compressed Sensing »
  Daniel Hsu · Sham M Kakade · John Langford · Tong Zhang
- 2009 Oral: Multi-Label Prediction via Compressed Sensing »
  Daniel Hsu · Sham M Kakade · John Langford · Tong Zhang
- 2007 Oral: A Constraint Generation Approach to Learning Stable Linear Dynamical Systems »
  Sajid M Siddiqi · Byron Boots · Geoffrey Gordon
- 2007 Poster: A Constraint Generation Approach to Learning Stable Linear Dynamical Systems »
  Sajid M Siddiqi · Byron Boots · Geoffrey Gordon
- 2007 Spotlight: A general agnostic active learning algorithm »
  Sanjoy Dasgupta · Daniel Hsu · Claire Monteleoni
- 2007 Poster: A general agnostic active learning algorithm »
  Sanjoy Dasgupta · Daniel Hsu · Claire Monteleoni