The learning with privileged information setting has recently attracted a lot of attention within the machine learning community, as it allows the integration of additional knowledge into the training process of a classifier, even when this comes in the form of a data modality that is not available at test time. Here, we show that privileged information can naturally be treated as noise in the latent function of a Gaussian process classifier (GPC). That is, in contrast to the standard GPC setting, the latent function is not just a nuisance but a feature: it becomes a natural measure of confidence about the training data by modulating the slope of the GPC probit likelihood function. Extensive experiments on public datasets show that the proposed GPC method using privileged noise, called GPC+, improves over a standard GPC without privileged knowledge, and also over the current state-of-the-art SVM-based method, SVM+. Moreover, we show that advanced neural networks and deep learning methods can be compressed as privileged information.
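To make the mechanism described in the abstract concrete, the sketch below shows a probit likelihood whose slope is modulated per training example by a noise term derived from the privileged data, which is the core idea behind GPC+. The function name, the log-variance link exp(g), and the toy numbers are illustrative assumptions rather than the paper's implementation; the actual method places GP priors on both latent functions and performs approximate Bayesian inference over them.

```python
import numpy as np
from scipy.stats import norm


def privileged_probit_likelihood(y, f, g):
    """Probit likelihood with a per-example slope set by privileged noise.

    y : labels in {-1, +1}
    f : latent function values of a GP on the regular features x
    g : latent function values of a GP on the privileged features x*
        (read here as log noise variances -- an assumed link function)

    Returns p(y | f, g) = Phi(y * f / sqrt(exp(g))).
    """
    noise_std = np.sqrt(np.exp(g))   # heteroscedastic noise from privileged data
    return norm.cdf(y * f / noise_std)


# Toy illustration: the same latent value f under increasing privileged noise.
y = np.array([1, 1, 1])
f = np.array([0.5, 0.5, 0.5])
g = np.array([-2.0, 0.0, 2.0])
print(privileged_probit_likelihood(y, f, g))
# The probabilities shrink toward 0.5 as the privileged noise grows, so a
# "hard" training example (large noise) imposes a weaker, less confident
# constraint on the classifier than an "easy" one.
```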
Author Information
Daniel Hernández-Lobato (Universidad Autónoma de Madrid)
Viktoriia Sharmanska (University of Sussex, Imperial College London)
Kristian Kersting (University of Bonn and Fraunhofer IAIS)
Christoph Lampert (Institute of Science and Technology Austria (ISTA))

Christoph Lampert received his PhD in mathematics from the University of Bonn in 2003. In 2010 he joined the Institute of Science and Technology Austria (ISTA), first as an Assistant Professor and, since 2015, as a Professor. There he leads the research group for Machine Learning and Computer Vision, and since 2019 he has also headed ISTA's ELLIS unit.
Novi Quadrianto (University of Sussex and HSE)
More from the Same Authors
- 2021 : SSSE: Efficiently Erasing Samples from Trained Machine Learning Models »
  Alexandra Peste · Dan Alistarh · Christoph Lampert
- 2021 : Poster: On the Impossibility of Fairness-Aware Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2023 Poster: Deep Neural Collapse Is Provably Optimal for the Deep Unconstrained Features Model »
  Peter Súkeník · Marco Mondelli · Christoph Lampert
- 2022 Poster: Fairness-Aware PAC Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2022 Poster: Okapi: Generalising Better by Making Statistical Matches Match »
  Myles Bartlett · Sara Romiti · Viktoriia Sharmanska · Novi Quadrianto
- 2021 : On the Impossibility of Fairness-Aware Learning from Corrupted Data »
  Nikola Konstantinov · Christoph Lampert
- 2020 Poster: Unsupervised object-centric video generation and decomposition in 3D »
  Paul Henderson · Christoph Lampert
- 2017 Workshop: Learning with Limited Labeled Data: Weak Supervision and Beyond »
  Isabelle Augenstein · Stephen Bach · Eugene Belilovsky · Matthew Blaschko · Christoph Lampert · Edouard Oyallon · Emmanouil Antonios Platanios · Alexander Ratner · Christopher Ré
- 2017 Poster: Recycling Privileged Learning and Distribution Matching for Fairness »
  Novi Quadrianto · Viktoriia Sharmanska
- 2015 Workshop: Transfer and Multi-Task Learning: Trends and New Perspectives »
  Anastasia Pentina · Christoph Lampert · Sinno Jialin Pan · Mingsheng Long · Judy Hoffman · Baochen Sun · Kate Saenko
- 2015 Poster: Lifelong Learning with Non-i.i.d. Tasks »
  Anastasia Pentina · Christoph Lampert
- 2014 Workshop: Second Workshop on Transfer and Multi-Task Learning: Theory meets Practice »
  Urun Dogan · Tatiana Tommasi · Yoshua Bengio · Francesco Orabona · Marius Kloft · Andres Munoz · Gunnar Rätsch · Hal Daumé III · Mehryar Mohri · Xuezhi Wang · Daniel Hernández-Lobato · Song Liu · Thomas Unterthiner · Pascal Germain · Vinay P Namboodiri · Michael Goetz · Christopher Berlind · Sigurd Spieckermann · Marta Soare · Yujia Li · Vitaly Kuznetsov · Wenzhao Lian · Daniele Calandriello · Emilie Morvant
- 2013 Workshop: New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks »
  Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi Azar · Ruth Urner · Christoph Lampert · Jonathan How
- 2013 Poster: Learning Feature Selection Dependencies in Multi-task Learning »
  Daniel Hernández-Lobato · José Miguel Hernández-Lobato
- 2013 Poster: Gaussian Process Conditional Copulas with Applications to Financial Time Series »
  José Miguel Hernández-Lobato · James R Lloyd · Daniel Hernández-Lobato
- 2012 Poster: Dynamic Pruning of Factor Graphs for Maximum Marginal Prediction »
  Christoph Lampert
- 2012 Poster: Symbolic Dynamic Programming for Continuous State and Observation POMDPs »
  Zahra Zamani · Scott Sanner · Pascal Poupart · Kristian Kersting
- 2011 Workshop: Choice Models and Preference Learning »
  Jean-Marc Andreoli · Cedric Archambeau · Guillaume Bouchard · Shengbo Guo · Kristian Kersting · Scott Sanner · Martin Szummer · Paolo Viappiani · Onno Zoeter
- 2011 Poster: Maximum Margin Multi-Label Structured Prediction »
  Christoph Lampert
- 2011 Poster: Robust Multi-Class Gaussian Process Classification »
  Daniel Hernández-Lobato · José Miguel Hernández-Lobato · Pierre Dupont
- 2010 Demonstration: Globby: It's a Search Engine with a Sorting View »
  Novi Quadrianto
- 2010 Poster: Optimal Web-Scale Tiering as a Flow Problem »
  Gilbert Leung · Novi Quadrianto · Alexander Smola · Kostas Tsioutsiouliklis
- 2010 Poster: Multitask Learning without Label Correspondences »
  Novi Quadrianto · Alexander Smola · Tiberio Caetano · S.V.N. Vishwanathan · James Petterson
- 2009 Poster: Convex Relaxation of Mixture Regression with Efficient Algorithms »
  Novi Quadrianto · Tiberio Caetano · John Lim · Dale Schuurmans
- 2009 Poster: Distribution Matching for Transduction »
  Novi Quadrianto · James Petterson · Alexander Smola
- 2008 Poster: Kernelized Sorting »
  Novi Quadrianto · Le Song · Alexander Smola
- 2008 Spotlight: Kernelized Sorting »
  Novi Quadrianto · Le Song · Alexander Smola