A large body of machine learning problems requires solving nonconvex optimization problems. These include deep learning, Bayesian inference, clustering, and so on. The objective functions in all these instances are highly nonconvex, and it is an open question whether provable, polynomial-time algorithms exist for these problems under realistic assumptions.
A diverse set of approaches has been devised to solve nonconvex problems. They range from simple local search methods, such as gradient descent and alternating minimization, to more involved frameworks such as simulated annealing, continuation methods, convex hierarchies, Bayesian optimization, and branch and bound. Moreover, efficient methods exist for special classes of nonconvex problems, such as quasi-convex optimization, star-convex optimization, submodular optimization, and matrix/tensor decomposition.
There has been a burst of recent research activity in all these areas. This workshop brings together researchers from these vastly different domains and aims to create a dialogue among them. In addition to the theoretical frameworks, the workshop will also feature practitioners, especially in the area of deep learning, who are developing new methodologies for training large-scale neural networks. The result will be a cross-fertilization of ideas from diverse areas and schools of thought.
Thu 11:15 p.m. - 11:30 p.m. | Opening Remarks (Talk)
Thu 11:30 p.m. - 12:00 a.m. | Learning To Optimize (Talk) | Nando de Freitas

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this talk I describe how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. The learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure.
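To make the premise concrete, here is a minimal sketch of a learned optimizer (our illustration in PyTorch, not the speaker's implementation; the toy quadratic task distribution, unroll length, and hyperparameters are all assumptions):

```python
# Minimal "learning to optimize" sketch (illustrative; assumes PyTorch).
# An LSTM observes each coordinate's gradient and proposes its update; the LSTM
# itself is meta-trained to minimize the summed loss along the inner trajectory.
import torch
import torch.nn as nn

opt_net = nn.LSTMCell(1, 8)          # per-coordinate optimizer RNN
head = nn.Linear(8, 1)               # maps hidden state to a parameter update
meta_opt = torch.optim.Adam(
    list(opt_net.parameters()) + list(head.parameters()), lr=1e-3)

def quadratic_task(dim=10):
    """Sample a random task from a toy distribution of quadratics."""
    A, b = torch.randn(dim, dim), torch.randn(dim)
    return lambda x: ((A @ x - b) ** 2).mean()

for step in range(200):              # meta-training loop
    f = quadratic_task()
    x = torch.zeros(10, requires_grad=True)
    h = c = torch.zeros(10, 8)
    meta_loss = 0.0
    for t in range(20):              # unrolled inner optimization
        loss = f(x)
        meta_loss = meta_loss + loss
        g, = torch.autograd.grad(loss, x, create_graph=True)
        h, c = opt_net(g.unsqueeze(1), (h, c))
        x = x + 0.1 * head(h).squeeze(1)   # learned per-coordinate update
    meta_opt.zero_grad()
    meta_loss.backward()             # backprop through the whole trajectory
    meta_opt.step()
```

Because the inner loop is unrolled with create_graph=True, the meta-gradient flows through the entire optimization trajectory, which is what lets the RNN learn update rules adapted to the task distribution.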
Fri 12:00 a.m. - 12:30 a.m. | Morning Poster Spotlight (Spotlight)
Fri 12:30 a.m. - 1:30 a.m. | Morning Poster Session (Posters)
Fri 1:30 a.m. - 2:00 a.m. | Coffee Break
Fri 2:00 a.m. - 2:30 a.m. | The moment-LP and moment-SOS approaches in optimization and some related applications (Talk) | Jean Lasserre

In a first part we provide an introduction to the basics of the moment-LP and moment-SOS approaches to global polynomial optimization. In particular, we describe the hierarchy of LP and semidefinite programs that approximate the optimal value of such problems. In fact, the same methodology also applies to solve (or approximate) Generalized Moment Problems (GMP) whose data are described by basic semi-algebraic sets and polynomials (or even semi-algebraic functions). Indeed, polynomial optimization is a particular (and even the simplest) instance of the GMP. In a second part, we describe how to use this methodology for solving some problems (outside optimization) viewed as particular instances of the GMP. This includes:
- Approximating compact basic semi-algebraic sets defined by quantifiers.
- Computing convex polynomial underestimators of polynomials on a box.
- Bounds on measures satisfying some moment conditions.
- Approximating the volume of compact basic semi-algebraic sets.
- Approximating the Gaussian measure of non-compact basic semi-algebraic sets.
- Approximating the Lebesgue decomposition of a measure μ w.r.t. another measure ν, based only on the moments of μ and ν.
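For orientation, the order-d relaxation at the heart of the semidefinite hierarchy can be written as follows (a standard textbook formulation in our own notation, not taken verbatim from the talk):

```latex
% Polynomial optimization:  f^* = \min \{ f(x) : x \in K \},  where
% K = \{ x \in \mathbb{R}^n : g_j(x) \ge 0,\ j = 1, \dots, m \} is basic semi-algebraic.
% Order-d moment relaxation: a semidefinite program over pseudo-moments y = (y_\alpha).
\begin{aligned}
\rho_d = \min_{y}\ & L_y(f) \\
\text{s.t.}\ & y_0 = 1, \quad M_d(y) \succeq 0, \\
& M_{d - \lceil \deg g_j / 2 \rceil}(g_j\, y) \succeq 0, \quad j = 1, \dots, m.
\end{aligned}
```

Here $L_y$ is the Riesz functional mapping a polynomial $\sum_\alpha f_\alpha x^\alpha$ to $\sum_\alpha f_\alpha y_\alpha$, $M_d(y)$ is the moment matrix, and the $M(g_j\,y)$ are localizing matrices; under an Archimedean assumption, $\rho_d$ increases monotonically to $f^*$ as $d \to \infty$.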
Fri 2:30 a.m. - 3:00 a.m. | Non-convexity in the error landscape and the expressive capacity of deep neural networks (Talk) | Surya Ganguli

A variety of recent work has studied saddle points in the error landscape of deep neural networks. A clearer understanding of these saddle points is likely to arise from an understanding of the geometry of deep functions. In particular, what do the generic functions computed by a deep network "look like?" How can we quantify and understand their geometry, and what implications does this geometry have for reducing generalization error as well as training error? We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of generic deep functions. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially with depth but not width. Moreover, we formalize and quantitatively demonstrate the long conjectured idea that deep networks can disentangle highly curved manifolds in input space into flat manifolds in hidden space. Our theoretical analysis of the expressive power of deep networks broadly applies to arbitrary nonlinearities, and provides intuition for why initializations at the edge of chaos can lead to both better optimization as well as superior generalization capabilities.
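The order-to-chaos transition described above can be reproduced numerically from the mean-field recursions for a deep random tanh network. A rough sketch under our own assumptions (Gaussian weights and biases, Monte Carlo expectations), not the speaker's code:

```python
# Mean-field length map and chaos criterion for a deep random tanh network.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)      # shared Gaussian samples for MC estimates

def q_next(q, sigma_w, sigma_b):
    """Length map: q^{l+1} = sigma_w^2 * E[tanh(sqrt(q) z)^2] + sigma_b^2."""
    return sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2

def chi(q_star, sigma_w):
    """Slope of the correlation map at c = 1; chi > 1 marks the chaotic phase."""
    phi_prime = 1.0 - np.tanh(np.sqrt(q_star) * z) ** 2   # derivative of tanh
    return sigma_w**2 * np.mean(phi_prime**2)

sigma_b = 0.1
for sigma_w in (0.5, 1.0, 2.0):
    q = 1.0
    for _ in range(50):               # iterate the length map to its fixed point q*
        q = q_next(q, sigma_w, sigma_b)
    phase = "chaotic" if chi(q, sigma_w) > 1 else "ordered"
    print(f"sigma_w={sigma_w}: q*={q:.3f}, chi={chi(q, sigma_w):.3f} ({phase})")
```

Networks with chi > 1 sit in the chaotic phase, where nearby inputs decorrelate exponentially with depth; initializations near chi = 1, the "edge of chaos," are the ones the talk connects to better optimization and generalization.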
Fri 3:00 a.m. - 3:30 a.m. | Leveraging Structure in Bayesian Optimization (Talk) | Ryan Adams

Bayesian optimization is an approach to non-convex optimization that leverages a probabilistic model to make decisions about candidate points to evaluate. The primary advantage of this approach is the ability to incorporate prior knowledge about the objective function in an explicit way. While such prior information has typically been information about the smoothness of the function, many machine learning problems have additional structure that can be leveraged. I will talk about how such prior information can be found across tasks, within inner-loop optimizations, and in constraints.
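As a point of reference for the approach sketched above, a bare-bones Bayesian optimization loop looks roughly like this (a generic sketch using scikit-learn and an expected-improvement acquisition; the toy objective and all settings are our own, not the speaker's):

```python
# Minimal Bayesian optimization: GP surrogate + expected improvement (minimization).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                     # hypothetical nonconvex 1-D test function
    return np.sin(3 * x) + 0.5 * np.cos(5 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(3, 1))        # small initial design
y = objective(X).ravel()
grid = np.linspace(0, 2 * np.pi, 500).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    imp = y.min() - mu                            # improvement over incumbent
    zscore = imp / np.maximum(sd, 1e-12)
    ei = imp * norm.cdf(zscore) + sd * norm.pdf(zscore)
    x_next = grid[np.argmax(ei)]                  # evaluate where EI is largest
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)].item(), "best f:", y.min())
```

The probabilistic surrogate is exactly where the prior knowledge the talk discusses enters: the kernel encodes smoothness assumptions, and richer structure (multi-task priors, constraints) can be folded into the same model.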
Fri 3:30 a.m. - 4:30 a.m. | Lunch Break
Fri 4:30 a.m. - 5:00 a.m. | Submodular Optimization and Nonconvexity (Talk) | Stefanie Jegelka

Despite the analogies between submodularity and convexity, submodular optimization is closely connected with certain "nice" nonconvex optimization problems for which theoretical guarantees are still possible. In this talk, I will review some of these connections and make them concrete using the example of a challenging robust influence maximization problem, for which we obtain new, tractable formulations and algorithms.
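For readers new to the area, the canonical algorithmic handle on monotone submodular maximization is the greedy algorithm with its (1 - 1/e) guarantee. A minimal sketch on a toy coverage function (our example, unrelated to the influence maximization instance in the talk):

```python
# Greedy maximization of a monotone submodular function under |S| <= k.
def greedy_max(f, ground_set, k):
    """f: set -> float, assumed monotone submodular; returns a set of size <= k."""
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        best = max(gains, key=gains.get)          # element with largest marginal gain
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Toy coverage function: f(S) = size of the union of the sets indexed by S.
subsets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
cover = lambda S: len(set().union(*(subsets[i] for i in S))) if S else 0
print(greedy_max(cover, set(subsets), k=2))       # {0, 2}: covers all 6 elements
```

Influence maximization has the same monotone submodular structure; the robust variant in the talk is harder precisely because a worst case over several such objectives must be handled at once.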
Fri 5:00 a.m. - 5:30 a.m. | Submodular Functions: from Discrete to Continuous Domains (Talk) | Francis Bach

Submodular set-functions have many applications in combinatorial optimization, as they can be minimized and approximately maximized in polynomial time. A key element in many of the algorithms and analyses is the possibility of extending the submodular set-function to a convex function, which opens up tools from convex optimization. Submodularity goes beyond set-functions and has naturally been considered for problems with multiple labels or for functions defined on continuous domains, where it corresponds essentially to cross second-derivatives being nonpositive. In this work, we show that most results relating submodularity and convexity for set-functions can be extended to all submodular functions. In particular, (a) we naturally define a continuous extension in a set of probability measures, (b) show that the extension is convex if and only if the original function is submodular, (c) prove that the problem of minimizing a submodular function is equivalent to a typically non-smooth convex optimization problem, and (d) propose another convex optimization problem with better computational properties (e.g., a smooth dual problem). Most of these extensions from the set-function situation are obtained by drawing links with the theory of multi-marginal optimal transport, which also provides a new interpretation of existing results for set-functions. We then provide practical algorithms to minimize generic submodular functions on discrete domains, with associated convergence rates.
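The convex extension mentioned in the abstract is, in the set-function case, the Lovász extension. A small sketch of how it is evaluated (the standard construction; the toy function is our own):

```python
# Evaluating the Lovász extension of a submodular set-function at a point w.
import numpy as np

def lovasz_extension(f, w):
    """f: set -> float with f(set()) == 0; w: a point in [0, 1]^n."""
    order = np.argsort(-w)            # visit coordinates in decreasing order of w
    val, S = 0.0, set()
    for i in order:
        val += w[int(i)] * (f(S | {int(i)}) - f(S))   # weight times marginal gain
        S.add(int(i))
    return val

# Toy submodular function: f(S) = sqrt(|S|); its extension is convex in w.
f = lambda S: len(S) ** 0.5
print(lovasz_extension(f, np.array([0.2, 0.9, 0.5])))
```

The extension is convex exactly when f is submodular, which is what turns exact submodular minimization into a convex problem; the talk's contribution is to carry this picture over to continuous domains.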
Fri 5:30 a.m. - 6:00 a.m. | Taming non-convexity via geometry (Talk) | Suvrit Sra

In this talk, I will highlight some aspects of geometry and its role in optimization. In particular, I will talk about optimization problems whose parameters are constrained to lie on a manifold or in a specific metric space. These geometric constraints often make the problems numerically challenging, but they can also unravel properties that ensure tractable attainment of global optimality for certain otherwise non-convex problems. We'll make our foray into geometric optimization via geodesic convexity, a concept that generalizes the usual notion of convexity to nonlinear metric spaces such as Riemannian manifolds. I will outline some of our results that contribute to g-convex analysis as well as to the theory of first-order g-convex optimization. I will mention several very interesting optimization problems where g-convexity proves remarkably useful. In closing, I will mention extensions to large-scale non-convex geometric optimization as well as key open problems.
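For a tiny taste of optimization under manifold constraints (our toy example, not from the talk), here is Riemannian gradient descent on the unit sphere for the Rayleigh quotient, a nonconvex problem that the geometric viewpoint renders tractable:

```python
# Riemannian gradient descent on the unit sphere: minimize x^T A x s.t. ||x|| = 1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T                                   # symmetric test matrix

x = rng.standard_normal(5)
x /= np.linalg.norm(x)                        # start on the sphere

for _ in range(500):
    egrad = 2 * A @ x                         # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x           # project onto the tangent space at x
    x = x - 0.01 * rgrad                      # gradient step in the tangent space
    x /= np.linalg.norm(x)                    # retract back onto the sphere

print("found:", x @ A @ x, "min eigenvalue:", np.linalg.eigvalsh(A).min())
```

The two geometric ingredients, projecting the gradient onto the tangent space and retracting back onto the manifold, are exactly what generalizes in the g-convex framework the talk describes.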
Fri 6:00 a.m. - 6:30 a.m. | Break (Coffee Break)
Fri 6:30 a.m. - 7:30 a.m. | Discussion Panel
Fri 7:30 a.m. - 8:00 a.m. | Afternoon Poster Spotlight (Spotlight)
Fri 8:00 a.m. - 9:00 a.m. | Afternoon Poster Session (Posters)
Author Information
Hossein Mobahi (Google Research)
Anima Anandkumar (Caltech)
Percy Liang (Stanford University)
Stefanie Jegelka (MIT)
Anna Choromanska (NYU Tandon School of Engineering)