Finding a good way to model probability densities is key to probabilistic inference. An ideal model should concisely approximate any probability density while also supporting two main operations: multiplication of two models (product rule) and marginalization with respect to a subset of the random variables (sum rule). In this work, we show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly well suited to this end. In particular, we characterize both the approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees. Moreover, we show that both the sum and product rules can be performed efficiently in closed form via matrix operations, giving PSD models the same versatility as mixture models. Our results open the way to applications of PSD models in density estimation, decision theory, and inference.
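As a rough illustration of the two properties the abstract relies on (a minimal sketch under assumed notation, not the authors' implementation), a PSD model takes the form f(x) = φ(x)ᵀ M φ(x) with M positive semi-definite, which makes f non-negative by construction; and the product of two PSD models sharing the same feature map is again a PSD model whose matrix is the Kronecker product M ⊗ N. All names below (`gaussian_features`, `centers`, `bandwidth`) are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_features(centers, x, bandwidth=1.0):
    """phi(x)_i = k(x_i, x) = exp(-(x - x_i)^2 / (2 * bandwidth^2))."""
    return np.exp(-((x - centers) ** 2) / (2 * bandwidth ** 2))

centers = rng.normal(size=5)       # base points x_1, ..., x_5
A = rng.normal(size=(5, 5))
M = A.T @ A                        # PSD by construction
B = rng.normal(size=(5, 5))
N = B.T @ B                        # a second PSD model

def psd_model(x, M):
    phi = gaussian_features(centers, x)
    return phi @ M @ phi           # non-negative since M is PSD

# Product rule via matrix operations: f(x) * g(x) is itself a PSD model
# with matrix kron(M, N) over the product features phi(x) (x) phi(x),
# because (a (x) a)^T (M (x) N) (a (x) a) = (a^T M a)(a^T N a).
x = 0.3
phi = gaussian_features(centers, x)
product_direct = psd_model(x, M) * psd_model(x, N)
product_as_psd = np.kron(phi, phi) @ np.kron(M, N) @ np.kron(phi, phi)
```

The marginalization (sum rule) admits a similar closed form for Gaussian kernels, since the integral of each product of kernel evaluations is available analytically; the sketch above only checks non-negativity and the product rule numerically.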
Author Information
Alessandro Rudi (INRIA, Ecole Normale Superieure)
Carlo Ciliberto (University College London)
More from the Same Authors
- 2021 Spotlight: Mixability made efficient: Fast online multiclass logistic regression
  Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi
- 2021 Spotlight: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2022 Poster: Conditional Meta-Learning of Linear Representations
  Giulia Denevi · Massimiliano Pontil · Carlo Ciliberto
- 2023 Poster: Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models
  Anant Raj · Umut Simsekli · Alessandro Rudi
- 2023 Poster: GloptiNets: Scalable Non-Convex Optimization with Certificates
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2022 Spotlight: Conditional Meta-Learning of Linear Representations
  Giulia Denevi · Massimiliano Pontil · Carlo Ciliberto
- 2022 Spotlight: Lightning Talks 3B-1
  Tianying Ji · Tongda Xu · Giulia Denevi · Aibek Alanov · Martin Wistuba · Wei Zhang · Yuesong Shen · Massimiliano Pontil · Vadim Titov · Yan Wang · Yu Luo · Daniel Cremers · Yanjun Han · Arlind Kadra · Dailan He · Josif Grabocka · Zhengyuan Zhou · Fuchun Sun · Carlo Ciliberto · Dmitry Vetrov · Mingxuan Jing · Chenjian Gao · Aaron Flores · Tsachy Weissman · Han Gao · Fengxiang He · Kunzan Liu · Wenbing Huang · Hongwei Qin
- 2022 Poster: Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces
  Vladimir Kostic · Pietro Novelli · Andreas Maurer · Carlo Ciliberto · Lorenzo Rosasco · Massimiliano Pontil
- 2022 Poster: Active Labeling: Streaming Stochastic Gradients
  Vivien Cabannes · Francis Bach · Vianney Perchet · Alessandro Rudi
- 2021: Carlo Ciliberto Q&A
  Carlo Ciliberto
- 2021: Carlo Ciliberto
  Carlo Ciliberto
- 2021 Poster: Mixability made efficient: Fast online multiclass logistic regression
  Rémi Jézéquel · Pierre Gaillard · Alessandro Rudi
- 2021 Poster: Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning
  Vivien Cabannes · Loucas Pillaud-Vivien · Francis Bach · Alessandro Rudi
- 2021 Poster: The Role of Global Labels in Few-Shot Classification and How to Infer Them
  Ruohan Wang · Massimiliano Pontil · Carlo Ciliberto
- 2021 Poster: Beyond Tikhonov: faster learning with self-concordant losses, via iterative regularization
  Gaspard Beugnot · Julien Mairal · Alessandro Rudi
- 2020 Poster: The Advantage of Conditional Meta-Learning for Biased Regularization and Fine Tuning
  Giulia Denevi · Massimiliano Pontil · Carlo Ciliberto
- 2020 Poster: Structured Prediction for Conditional Meta-Learning
  Ruohan Wang · Yiannis Demiris · Carlo Ciliberto
- 2020 Poster: Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning
  Luca Oneto · Michele Donini · Giulia Luise · Carlo Ciliberto · Andreas Maurer · Massimiliano Pontil
- 2019 Poster: Online-Within-Online Meta-Learning
  Giulia Denevi · Dimitris Stamos · Carlo Ciliberto · Massimiliano Pontil
- 2019 Poster: Localized Structured Prediction
  Carlo Ciliberto · Francis Bach · Alessandro Rudi
- 2019 Poster: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2019 Spotlight: Sinkhorn Barycenters with Free Support via Frank-Wolfe Algorithm
  Giulia Luise · Saverio Salzo · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Learning To Learn Around A Common Mean
  Giulia Denevi · Carlo Ciliberto · Dimitris Stamos · Massimiliano Pontil
- 2018 Poster: Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance
  Giulia Luise · Alessandro Rudi · Massimiliano Pontil · Carlo Ciliberto
- 2018 Poster: Manifold Structured Prediction
  Alessandro Rudi · Carlo Ciliberto · Gian Maria Marconi · Lorenzo Rosasco
- 2017 Poster: Generalization Properties of Learning with Random Features
  Alessandro Rudi · Lorenzo Rosasco
- 2017 Oral: Generalization Properties of Learning with Random Features
  Alessandro Rudi · Lorenzo Rosasco
- 2017 Poster: Consistent Multitask Learning with Nonlinear Output Relations
  Carlo Ciliberto · Alessandro Rudi · Lorenzo Rosasco · Massimiliano Pontil
- 2017 Poster: FALKON: An Optimal Large Scale Kernel Method
  Alessandro Rudi · Luigi Carratino · Lorenzo Rosasco
- 2016 Poster: A Consistent Regularization Approach for Structured Prediction
  Carlo Ciliberto · Lorenzo Rosasco · Alessandro Rudi
- 2015 Poster: Less is More: Nyström Computational Regularization
  Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco
- 2015 Oral: Less is More: Nyström Computational Regularization
  Alessandro Rudi · Raffaello Camoriano · Lorenzo Rosasco
- 2013 Workshop: Output Representation Learning
  Yuhong Guo · Dale Schuurmans · Richard Zemel · Samy Bengio · Yoshua Bengio · Li Deng · Dan Roth · Kilian Q Weinberger · Jason Weston · Kihyuk Sohn · Florent Perronnin · Gabriel Synnaeve · Pablo R Strasser · Julien Audiffren · Carlo Ciliberto · Dan Goldwasser