Large amounts of high-dimensional data are routinely acquired in scientific fields ranging from biology, genomics and health sciences to astronomy and economics, thanks to improvements in engineering and data acquisition techniques. Nonparametric methods allow the complex systems underlying data-generating processes to be modelled more faithfully than the traditionally used linear and parametric models. From a statistical point of view, scientists now have enough data to reliably fit nonparametric models; from a computational point of view, however, nonparametric methods often do not scale well to big data problems.
The aim of this workshop is to bring together practitioners, who are interested in developing and applying nonparametric methods in their domains, and theoreticians, who are interested in providing sound methodology. We hope to communicate advances in the development of computational tools for fitting nonparametric models, and to discuss the challenges that currently prevent nonparametric methods from being applied to big data problems.
We encourage submissions on a variety of topics, including but not limited to:
- Randomized procedures for fitting nonparametric models, e.g., sketching, random projections, coreset selection
- Nonparametric probabilistic graphical models
- Scalable nonparametric methods
- Multiple kernel learning
- Random feature expansion
- Novel applications of nonparametric methods
- Bayesian nonparametric methods
- Nonparametric network models
This workshop is the fourth in a series of NIPS workshops on modern nonparametric methods in machine learning. Previous workshops focused on time/accuracy trade-offs, high dimensionality and dimension-reduction strategies, and automating the learning pipeline.
Fri 11:30 p.m. – 12:00 a.m.

Richard Samworth. Adaptation in log-concave density estimation
(Invited talk)
The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to, e.g., squared Hellinger distance. In this talk, we show that it also enjoys attractive adaptation properties, in the sense that it achieves a faster rate of convergence when the logarithm of the true density is $k$-affine (i.e. made up of $k$ affine pieces), provided $k$ is not too large. Our results use two different techniques: the first relies on a new Marshall's inequality for log-concave density estimation, and reveals that when the true density is close to log-linear on its support, the log-concave maximum likelihood estimator can achieve the parametric rate of convergence in total variation distance. Our second approach depends on local bracketing entropy methods, and allows us to prove a sharp oracle inequality, which implies in particular that the rate of convergence with respect to various global loss functions, including Kullback-Leibler divergence, is $O(k n^{-1} \log^{5/4} n)$ when the true density is log-concave and its logarithm is close to $k$-affine.
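To make the estimator itself concrete, here is a minimal grid-based sketch in Python, assuming `cvxpy` is available for the convex program; the discretization, the grid size `m`, and the unnormalized-likelihood objective (whose maximizer integrates to one automatically) are our simplifications, and nothing here reproduces the adaptation analysis of the talk.

```python
# A minimal sketch of univariate log-concave maximum likelihood estimation
# on a grid, assuming cvxpy; illustrative only.
import numpy as np
import cvxpy as cp

def logconcave_mle(x, m=200):
    grid = np.linspace(x.min(), x.max(), m)
    dx = grid[1] - grid[0]
    idx = np.clip(np.searchsorted(grid, x), 0, m - 1)   # nearest grid bin
    counts = np.bincount(idx, minlength=m)              # observations per bin
    phi = cp.Variable(m)                                # log-density on the grid
    concavity = [phi[2:] - 2 * phi[1:-1] + phi[:-2] <= 0]  # second differences <= 0
    # maximize (1/n) sum_i phi(X_i) - integral exp(phi); the maximizer is
    # automatically a density, so no explicit normalization constraint is needed
    objective = cp.Maximize(counts @ phi / len(x) - dx * cp.sum(cp.exp(phi)))
    cp.Problem(objective, concavity).solve()
    return grid, np.exp(phi.value)

grid, f_hat = logconcave_mle(np.random.randn(500))
```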

Richard J Samworth 
Sat 12:00 a.m. – 12:30 a.m.

Ming Yuan. Functional nuclear norm and low rank function estimation.
(Invited talk)
The problem of low rank estimation naturally arises in a number of functional or relational data analysis settings, for example when dealing with spatiotemporal data or link prediction with attributes. We consider a unified framework for these problems and devise a novel penalty function to exploit the low rank structure in such contexts. The resulting empirical risk minimization estimator can be shown to be optimal under fairly general conditions. 
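The functional setting of the talk is infinite-dimensional, but the mechanics of nuclear-norm-penalized low rank estimation can be illustrated with a hypothetical finite matrix analogue: proximal gradient descent with singular value thresholding on partially observed entries. The loss, step size, and masking setup below are illustrative assumptions, not the talk's estimator.

```python
# A finite-dimensional analogue of nuclear-norm-penalized low rank estimation.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_fit(Y, mask, lam=0.5, step=1.0, iters=200):
    """Minimize 0.5 * ||mask * (X - Y)||_F^2 + lam * ||X||_* by proximal gradient."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)                 # gradient of the squared loss
        X = svt(X - step * grad, step * lam)  # proximal step
    return X

# toy usage: recover a rank-2 matrix from 50% observed noisy entries
rng = np.random.default_rng(0)
truth = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
mask = (rng.random(truth.shape) < 0.5).astype(float)
X_hat = low_rank_fit(mask * (truth + 0.1 * rng.standard_normal(truth.shape)), mask)
```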
Ming Yuan 
Sat 12:30 a.m. – 1:00 a.m.

Mladen Kolar. Post-Regularization Inference for Dynamic Nonparanormal Graphical Models.
(Invited talk)
We propose a novel class of dynamic nonparanormal graphical models, which allows us to model high-dimensional heavy-tailed systems and the evolution of their latent network structures. Under this model we develop statistical tests for the presence of edges, both locally at a fixed index value and globally over a range of values. The tests are developed for a high-dimensional regime, are robust to model selection mistakes and do not require the commonly assumed minimum signal strength. The testing procedures are based on a high-dimensional, debiasing-free moment estimator, which uses a novel kernel-smoothed Kendall's tau correlation matrix as an input statistic. The estimator consistently estimates the latent inverse Pearson correlation matrix uniformly in both the index variable and the kernel bandwidth, and its rate of convergence is shown to be minimax optimal. Thorough numerical simulations and an application to a neural imaging dataset support the usefulness of our method. Joint work with Junwei Lu and Han Liu.
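As a rough illustration of the input statistic, the sketch below computes a kernel-smoothed Kendall's tau at a target index value and maps it to a latent Pearson correlation via the usual nonparanormal sin transform. The Gaussian kernel, the pairwise weighting scheme, and the fixed bandwidth are our assumptions; the testing machinery of the talk is not included.

```python
# A minimal sketch of a kernel-smoothed Kendall's tau and the nonparanormal
# sin(pi * tau / 2) bridge to a latent Pearson correlation.
import numpy as np

def smoothed_kendall(x, y, t_obs, t0, h=0.1):
    w = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)     # kernel weight per sample
    i, j = np.triu_indices(len(x), k=1)            # all sample pairs
    sign = np.sign((x[i] - x[j]) * (y[i] - y[j]))  # concordant vs. discordant
    wij = w[i] * w[j]
    return np.sum(wij * sign) / np.sum(wij)        # weighted Kendall's tau at t0

def latent_correlation(x, y, t_obs, t0, h=0.1):
    return np.sin(np.pi / 2 * smoothed_kendall(x, y, t_obs, t0, h))
```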
Mladen Kolar 
Sat 2:00 a.m. – 2:20 a.m.

Debarghya Ghoshdastidar, Ulrike von Luxburg. Do Nonparametric Two-sample Tests Work for Small Sample Size? A Study on Random Graphs.
(Contributed talks)
We consider the problem of two-sample hypothesis testing for inhomogeneous unweighted random graphs, where one has access to only a very small number of samples from each model. Standard tests cannot be guaranteed to perform well in this setting due to the small sample size. We present a nonparametric test based on a comparison of the adjacency matrices of the graphs, and prove that the test is consistent as the sample size increases, as well as when the graph size increases with the sample size held fixed. Numerical simulations demonstrate the practical significance of the test.
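For intuition, here is a simplified two-sample statistic in the same spirit: compare the averaged adjacency matrices of the two samples in spectral norm and calibrate by permuting graphs between samples. This is an illustrative stand-in under our own assumptions, not the test statistic or threshold proposed in the paper.

```python
# A simplified graph two-sample test: spectral norm of the difference of
# averaged adjacency matrices, calibrated by a naive permutation scheme.
import numpy as np

def spectral_stat(As, Bs):
    # As, Bs: arrays of shape (m, n, n) holding m adjacency matrices each
    return np.linalg.norm(np.mean(As, axis=0) - np.mean(Bs, axis=0), ord=2)

def permutation_test(As, Bs, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    obs = spectral_stat(As, Bs)
    pooled = np.concatenate([As, Bs])
    m = len(As)
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        null.append(spectral_stat(pooled[perm[:m]], pooled[perm[m:]]))
    return np.mean(np.array(null) >= obs)  # permutation p-value
```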

Sat 2:20 a.m. – 2:40 a.m.

Diana Cai, Trevor Campbell, Tamara Broderick. Paintboxes and probability functions for edge-exchangeable graphs. (Contributed talks)
Sat 2:40 a.m. – 3:00 a.m.

Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco. Generalization Properties of Learning with Random Features.
(Contributed talks)
We study the generalization properties of regularized learning with random features in the statistical learning theory framework. We show that optimal learning errors can be achieved with a number of features smaller than the number of examples. 
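A minimal sketch of the setting: ridge regression on random Fourier features approximating a Gaussian kernel, where the number of features M is deliberately much smaller than the number of examples n. The kernel choice and hyperparameters below are placeholders, not values from the paper.

```python
# Ridge regression with random Fourier features for the Gaussian kernel
# k(x, y) = exp(-gamma * ||x - y||^2).
import numpy as np

def rff(X, M, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, M))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=M)                  # random phases
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

def ridge_fit(Z, y, lam=1e-3):
    M = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(M), Z.T @ y)

# usage: Z = rff(X_train, M=100); w = ridge_fit(Z, y_train)
#        y_pred = rff(X_test, M=100) @ w
```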

Sat 3:00 a.m. – 3:20 a.m.

Makoto Yamada, Yuta Umezu, Kenji Fukumizu, Ichiro Takeuchi. Post Selection Inference with Kernels.
(Contributed talks)
We propose a novel kernel-based post-selection inference (PSI) algorithm, which can handle not only nonlinearity in the data but also structured outputs such as multi-dimensional and multi-label outputs. Specifically, we develop a PSI algorithm for independence measures, and propose the Hilbert-Schmidt Independence Criterion (HSIC) based PSI algorithm (hsicInf). We apply the hsicInf algorithm to real-world data, and show that hsicInf can successfully identify important features.
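For reference, the core ingredient is the empirical HSIC statistic; a minimal sketch with Gaussian kernels follows. The selection-event and truncated-distribution machinery that makes hsicInf a valid post-selection procedure is omitted, so this shows only the test statistic, not the inference algorithm.

```python
# The (biased) empirical HSIC statistic with Gaussian kernels.
import numpy as np

def gram(X, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-D / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    K, L = gram(X, sigma), gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```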

Sat 3:20 a.m. – 3:40 a.m.

Yunpeng Pan, Xinyan Yan, Evangelos Theodorou, Byron Boots. Solving the Linear Bellman Equation via Kernel Embeddings and Stochastic Gradient Descent.
(Contributed talks)
We introduce a data-efficient approach for solving the linear Bellman equation, which corresponds to a class of Markov decision processes (MDPs) and stochastic optimal control (SOC) problems. We show that this class of control problems can be reformulated as a stochastic composition optimization problem, which can in turn be reformulated as a saddle point problem and solved via dual kernel embeddings. Our method is model-free and uses only one sample per state transition from stochastic dynamical systems. Unlike related work such as Z-learning, which is based on temporal-difference learning, our method is an online algorithm exploiting stochastic optimization. Numerical results show that our method outperforms the Z-learning algorithm.
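For background, the linear Bellman equation of a linearly solvable MDP is a fixed-point equation z = diag(e^{-q}) P z in the desirability function z = e^{-V}. The tabular sketch below solves it when the passive dynamics P and state costs q are known; since the paper's method is model-free and sample-based, this only illustrates the equation being solved, not their algorithm.

```python
# Solving the linear Bellman equation z = diag(exp(-q)) P z by fixed-point
# (power) iteration on a known tabular model.
import numpy as np

def solve_linear_bellman(P, q, iters=1000):
    z = np.ones(len(q))
    G = np.exp(-q)[:, None] * P       # diag(exp(-q)) @ P
    for _ in range(iters):
        z = G @ z                     # fixed-point iteration
        z /= z.max()                  # rescale for numerical stability
    return -np.log(z)                 # value function, up to an additive constant

# toy 3-state chain with uniform passive dynamics
P = np.full((3, 3), 1.0 / 3)
q = np.array([1.0, 0.5, 0.0])
V = solve_linear_bellman(P, q)
```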

Sat 3:40 a.m. – 5:30 a.m.

Lunch break


Sat 5:30 a.m. – 6:00 a.m.

Francis Bach. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression.
(Invited talk)
We consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean, finite-variance random error. We present the first algorithm that jointly achieves the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions, in O(1/n^2), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on the initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the "optimal" terms above are large. In order to characterize the tightness of these new bounds, we consider an application to nonparametric regression and use the known lower bounds on the statistical performance (without computational limits), which happen to match our bounds obtained from a single pass on the data, and thus show the optimality of our algorithm in a wide variety of particular trade-offs between bias and variance. Joint work with Aymeric Dieuleveut and Nicolas Flammarion.
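As a baseline for the quantities in the abstract, here is a sketch of single-pass, constant-step-size averaged SGD for least squares (Polyak-Ruppert averaging). The talk's algorithm adds acceleration and regularization on top of this to obtain the O(1/n^2) and O(d/n) terms jointly; the step-size heuristic below is our assumption.

```python
# Single-pass averaged SGD for least-squares regression.
import numpy as np

def averaged_sgd(X, y, step=None):
    n, d = X.shape
    # constant step ~ 1 / (4 R^2), with R^2 the mean squared feature norm
    step = step or 1.0 / (4 * np.mean(np.sum(X**2, axis=1)))
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for i in range(n):                    # one pass over the data
        grad = (X[i] @ w - y[i]) * X[i]   # stochastic gradient at sample i
        w -= step * grad
        w_bar += (w - w_bar) / (i + 1)    # running Polyak-Ruppert average
    return w_bar
```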
Francis Bach 
Sat 6:00 a.m. – 6:30 a.m.

Richard (Fangjian) Guo. Boosting Variational Inference.
(Invited talk)
Modern Bayesian inference typically requires some form of posterior approximation, and mean-field variational inference (MFVI) is an increasingly popular choice due to its speed. But MFVI is inaccurate in several aspects, including an inability to capture multimodality in the posterior and underestimation of the posterior covariance. These issues arise since MFVI considers approximations to the posterior only in a family of factorized parametric distributions. We instead consider a much more flexible approximating family consisting of all possible mixtures of a parametric base distribution (e.g., Gaussians) without constraining the number of mixture components. In order to efficiently find a high-quality posterior approximation within this family, we borrow ideas from gradient boosting and propose the boosting variational inference (BVI) method, which iteratively improves the current approximation by mixing it with a new component from the base distribution family. We develop practical algorithms for BVI and demonstrate their performance on both real and simulated data. Joint work with Xiangyu Wang, Kai Fan, Tamara Broderick and David Dunson.
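A heavily simplified one-dimensional sketch of the boosting idea follows: q is a growing mixture of Gaussians, and each round adds a component chosen by crudely maximizing a Monte Carlo estimate of E_h[log p - log q], then mixes it in with a Frank-Wolfe style step size. The selection objective, optimizer, and step sizes here are our assumptions; the actual BVI algorithms choose components and weights more carefully.

```python
# A crude 1-D sketch of boosting variational inference: greedily grow a
# Gaussian mixture approximation to an unnormalized target log-density log_p.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def boosting_vi(log_p, rounds=5, n_samp=400, seed=0):
    rng = np.random.default_rng(seed)
    comps = [(1.0, 0.0, 5.0)]                      # (weight, mean, sd): broad start

    def log_q(x):                                  # current mixture log-density
        ws = np.array([w for w, _, _ in comps])
        pdfs = np.array([norm.pdf(x, m, s) for _, m, s in comps])
        return np.log(ws @ pdfs + 1e-300)

    for t in range(rounds):
        eps = rng.standard_normal(n_samp)          # common random numbers
        def neg_gain(theta):                       # crude residual criterion:
            mu, log_sd = theta                     # maximize E_h[log p - log q]
            x = mu + np.exp(log_sd) * eps
            return -np.mean(log_p(x) - log_q(x))
        theta = minimize(neg_gain, x0=np.array([rng.normal(), 0.0]),
                         method="Nelder-Mead").x
        gamma = 2.0 / (t + 3.0)                    # Frank-Wolfe style step size
        comps = [(w * (1 - gamma), m, s) for w, m, s in comps]
        comps.append((gamma, theta[0], float(np.exp(theta[1]))))
    return comps

# toy usage: approximate an equal-weight bimodal target
log_p = lambda x: np.logaddexp(norm.logpdf(x, -2, 0.5),
                               norm.logpdf(x, 2, 0.5)) - np.log(2)
mixture = boosting_vi(log_p)
```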
Fangjian Guo 
Sat 6:30 a.m. – 6:45 a.m.

Break


Sat 6:45 a.m. – 7:15 a.m.

Olga Klopp. Network models and sparse graphon estimation.
(Invited talk)
Inhomogeneous random graph models encompass many network models such as stochastic block models and latent position models. We consider the problem of statistical estimation of the matrix of connection probabilities based on the observations of the adjacency matrix of the network and derive optimal rates of convergence for this problem. Our results cover the important setting of sparse networks. We also establish upper bounds on the minimax risk for graphon estimation when the probability matrix is sampled according to a graphon model. 
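To fix ideas, here is a minimal estimator of the connection-probability matrix under a k-block stochastic block model: spectral clustering on the adjacency matrix followed by block averaging. The clustering heuristic and the choice of k are our assumptions; the least-squares estimator and sparse-regime analysis of the talk are not reproduced.

```python
# A simple block-model estimate of the connection-probability matrix:
# spectral clustering, then average edge rates within each pair of blocks.
import numpy as np
from scipy.cluster.vq import kmeans2

def block_probability_estimate(A, k):
    vals, vecs = np.linalg.eigh(A)
    top = vecs[:, np.argsort(np.abs(vals))[-k:]]      # k leading eigenvectors
    _, labels = kmeans2(top, k, minit="++", seed=0)   # spectral clustering
    n = A.shape[0]
    P_hat = np.zeros((n, n))
    for a in range(k):
        for b in range(k):
            ia, ib = labels == a, labels == b
            block = A[np.ix_(ia, ib)]
            if block.size:
                P_hat[np.ix_(ia, ib)] = block.mean()  # within-block edge rate
    return P_hat
```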
Olga Klopp 
Sat 7:15 a.m. – 7:45 a.m.

Emily Fox. Sparse Graphs via Exchangeable Random Measures.
(Invited talk)
Statistical network modeling has focused on representing the graph as a discrete structure, namely the adjacency matrix. Assuming exchangeability of this array, the Aldous-Hoover theorem informs us that the graph is necessarily either dense or empty. We instead consider representing the graph as a point process on the positive quadrant. We then propose a graph construction leveraging completely random measures (CRMs) that leads to an exchangeable point process representation of graphs ranging from dense to sparse and exhibiting power-law degree distributions. We show how these properties are simply tuned by three hyperparameters. The resulting model lends itself to an efficient MCMC scheme from which we can infer these network attributes. We demonstrate our methods on a series of real-world networks with up to hundreds of thousands of nodes and millions of edges. We also discuss some recent advances in this area and open challenges. Joint work with Francois Caron.
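A crude finite truncation of this style of construction can be sampled as follows: heavy-tailed node weights stand in for the atoms of a completely random measure, and edges appear independently with probability 1 - exp(-2 w_i w_j). The Pareto weights and truncation level are our simplifying assumptions rather than the generalized gamma process used in the paper.

```python
# A finite-truncation sketch of a sparse, heavy-tailed random graph built
# from node weights, in the spirit of CRM-based graph constructions.
import numpy as np

def sample_sparse_graph(n_atoms=2000, alpha=1.5, scale=0.02, seed=0):
    rng = np.random.default_rng(seed)
    w = scale * rng.pareto(alpha, size=n_atoms)   # heavy-tailed node weights
    p = 1.0 - np.exp(-2.0 * np.outer(w, w))       # edge probabilities
    A = (rng.random((n_atoms, n_atoms)) < p).astype(int)
    A = np.triu(A, 1)                             # drop self-loops and duplicates
    A = A + A.T                                   # symmetrize
    keep = A.sum(axis=1) > 0                      # keep only nodes with edges
    return A[np.ix_(keep, keep)]

G = sample_sparse_graph()
```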
Emily Fox 
Sat 7:45 a.m. – 9:00 a.m.

Coffee break + posters 
Author Information
Aaditya Ramdas (UC Berkeley)
Arthur Gretton (Gatsby Unit, UCL)
Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics, and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high/infinite dimensional exponential family models), nonparametric hypothesis testing, and kernel methods. He has been an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, an Action Editor for JMLR since April 2013, an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the DALI workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).
Bharath Sriperumbudur (Penn State University)
Han Liu (Tencent AI Lab)
John Lafferty (University of Chicago)
Samory Kpotufe (Princeton University)
Zoltán Szabó (École Polytechnique)
[Homepage](http://www.cmap.polytechnique.fr/~zoltan.szabo/)
More from the Same Authors

2019 Poster: Exponential Family Estimation via Adversarial Dynamics Embedding »
Bo Dai · Zhen Liu · Hanjun Dai · Niao He · Arthur Gretton · Le Song · Dale Schuurmans 
2019 Poster: Maximum Mean Discrepancy Gradient Flow »
Michael Arbel · Anna Korba · Adil SALIM · Arthur Gretton 
2019 Poster: Kernel Instrumental Variable Regression »
Rahul Singh · Maneesh Sahani · Arthur Gretton 
2019 Oral: Kernel Instrumental Variable Regression »
Rahul Singh · Maneesh Sahani · Arthur Gretton 
2019 Tutorial: Interpretable Comparison of Distributions and Models »
Wittawat Jitkrittum · Dougal J Sutherland · Arthur Gretton 
2018 Poster: Informative Features for Model Comparison »
Wittawat Jitkrittum · Heishiro Kanagawa · Patsorn Sangkloy · James Hays · Bernhard Schölkopf · Arthur Gretton 
2018 Poster: BRUNO: A Deep Recurrent Model for Exchangeable Data »
Iryna Korshunova · Jonas Degrave · Ferenc Huszar · Yarin Gal · Arthur Gretton · Joni Dambre 
2018 Poster: PAC-Bayes Tree: weighted subtrees with guarantees »
Tin D Nguyen · Samory Kpotufe 
2018 Poster: Exponentially Weighted Imitation Learning for Batched Historical Data »
Qing Wang · Jiechao Xiong · Lei Han · Peng Sun · Han Liu · Tong Zhang
2018 Poster: On gradient regularizers for MMD GANs »
Michael Arbel · Dougal J Sutherland · Mikołaj Bińkowski · Arthur Gretton 
2017 Workshop: Learning on Distributions, Functions, Graphs and Groups »
Florence d'Alché-Buc · Krikamol Muandet · Bharath Sriperumbudur · Zoltán Szabó
2017 Poster: A Linear-Time Kernel Goodness-of-Fit Test »
Wittawat Jitkrittum · Wenkai Xu · Zoltan Szabo · Kenji Fukumizu · Arthur Gretton 
2017 Poster: Estimating High-dimensional Non-Gaussian Multiple Index Models via Stein’s Lemma »
Zhuoran Yang · Krishnakumar Balasubramanian · Zhaoran Wang · Han Liu 
2017 Oral: A Linear-Time Kernel Goodness-of-Fit Test »
Wittawat Jitkrittum · Wenkai Xu · Zoltan Szabo · Kenji Fukumizu · Arthur Gretton 
2017 Poster: Parametric Simplex Method for Sparse Learning »
Haotian Pang · Han Liu · Robert J Vanderbei · Tuo Zhao 
2016 Workshop: Adaptive Data Analysis »
Vitaly Feldman · Aaditya Ramdas · Aaron Roth · Adam Smith 
2016 Oral: Interpretable Distribution Features with Maximum Testing Power »
Wittawat Jitkrittum · Zoltán Szabó · Kacper P Chwialkowski · Arthur Gretton 
2016 Poster: Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels »
Ilya Tolstikhin · Bharath Sriperumbudur · Bernhard Schölkopf 
2016 Poster: Interpretable Distribution Features with Maximum Testing Power »
Wittawat Jitkrittum · Zoltán Szabó · Kacper P Chwialkowski · Arthur Gretton 
2016 Poster: Agnostic Estimation for Misspecified Phase Retrieval Models »
Matey Neykov · Zhaoran Wang · Han Liu 
2016 Poster: Online ICA: Understanding Global Dynamics of Nonconvex Optimization via Diffusion Processes »
Chris Junchi Li · Zhaoran Wang · Han Liu 
2016 Poster: Blind Attacks on Machine Learners »
Alex Beatson · Zhaoran Wang · Han Liu 
2016 Poster: Convergence guarantees for kernel-based quadrature rules in misspecified settings »
Motonobu Kanagawa · Bharath Sriperumbudur · Kenji Fukumizu 
2016 Poster: More Supervision, Less Computation: Statistical-Computational Tradeoffs in Weakly Supervised Learning »
Xinyang Yi · Zhaoran Wang · Zhuoran Yang · Constantine Caramanis · Han Liu 
2015 Poster: Optimal Linear Estimation under Unknown Nonlinear Transform »
Xinyang Yi · Zhaoran Wang · Constantine Caramanis · Han Liu 
2015 Poster: Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families »
Heiko Strathmann · Dino Sejdinovic · Samuel Livingstone · Zoltan Szabo · Arthur Gretton 
2015 Poster: Nonconvex Statistical Optimization for Sparse Tensor Graphical Model »
Wei Sun · Zhaoran Wang · Han Liu · Guang Cheng 
2015 Poster: Local Smoothness in Variance Reduced Optimization »
Daniel Vainsencher · Han Liu · Tong Zhang 
2015 Poster: High Dimensional EM Algorithm: Statistical Optimization and Asymptotic Normality »
Zhaoran Wang · Quanquan Gu · Yang Ning · Han Liu 
2015 Poster: A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements »
Qinqing Zheng · John Lafferty 
2015 Poster: Optimal Rates for Random Fourier Features »
Bharath Sriperumbudur · Zoltan Szabo 
2015 Spotlight: Optimal Rates for Random Fourier Features »
Bharath Sriperumbudur · Zoltan Szabo 
2015 Poster: Robust Portfolio Optimization »
Huitong Qiu · Fang Han · Han Liu · Brian Caffo 
2015 Poster: A Nonconvex Optimization Framework for Low Rank Matrix Estimation »
Tuo Zhao · Zhaoran Wang · Han Liu 
2015 Poster: Fast Two-Sample Testing with Analytic Representations of Probability Measures »
Kacper P Chwialkowski · Aaditya Ramdas · Dino Sejdinovic · Arthur Gretton 
2014 Workshop: Modern Nonparametrics 3: Automating the Learning Pipeline »
Eric Xing · Mladen Kolar · Arthur Gretton · Samory Kpotufe · Han Liu · Zoltán Szabó · Alan L Yuille · Andrew G Wilson · Ryan Tibshirani · Sasha Rakhlin · Damian Kozbur · Bharath Sriperumbudur · David LopezPaz · Kirthevasan Kandasamy · Francesco Orabona · Andreas Damianou · Wacha Bounliphone · Yanshuai Cao · Arijit Das · Yingzhen Yang · Giulia DeSalvo · Dmitry Storcheus · Roberto Valerio 
2014 Poster: Mode Estimation for High Dimensional Discrete Tree Graphical Models »
Chao Chen · Han Liu · Dimitris Metaxas · Tianqi Zhao 
2014 Poster: Accelerated Mini-batch Randomized Block Coordinate Descent Method »
Tuo Zhao · Mo Yu · Yiming Wang · Raman Arora · Han Liu 
2014 Poster: Multivariate Regression with Calibration »
Han Liu · Lie Wang · Tuo Zhao 
2014 Poster: Sparse PCA with Oracle Property »
Quanquan Gu · Zhaoran Wang · Han Liu 
2014 Spotlight: Mode Estimation for High Dimensional Discrete Tree Graphical Models »
Chao Chen · Han Liu · Dimitris Metaxas · Tianqi Zhao 
2014 Poster: A Wild Bootstrap for Degenerate Kernel Tests »
Kacper P Chwialkowski · Dino Sejdinovic · Arthur Gretton 
2014 Oral: A Wild Bootstrap for Degenerate Kernel Tests »
Kacper P Chwialkowski · Dino Sejdinovic · Arthur Gretton 
2014 Poster: Optimal rates for k-NN density and mode estimation »
Sanjoy Dasgupta · Samory Kpotufe 
2014 Poster: Kernel Mean Estimation via Spectral Filtering »
Krikamol Muandet · Bharath Sriperumbudur · Bernhard Schölkopf 
2014 Poster: Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time »
Zhaoran Wang · Huanran Lu · Han Liu 
2013 Workshop: New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks »
Urun Dogan · Marius Kloft · Tatiana Tommasi · Francesco Orabona · Massimiliano Pontil · Sinno Jialin Pan · Shai Ben-David · Arthur Gretton · Fei Sha · Marco Signoretto · Rajhans Samdani · Yun-Qian Miao · Mohammad Gheshlaghi Azar · Ruth Urner · Christoph Lampert · Jonathan How
2013 Workshop: Modern Nonparametric Methods in Machine Learning »
Arthur Gretton · Mladen Kolar · Samory Kpotufe · John Lafferty · Han Liu · Bernhard Schölkopf · Alexander Smola · Rob Nowak · Mikhail Belkin · Lorenzo Rosasco · Peter Bickel · Yue Zhao
2013 Poster: Sparse Inverse Covariance Estimation with Calibration »
Tuo Zhao · Han Liu 
2013 Poster: B-test: A Nonparametric, Low Variance Kernel Two-sample Test »
Wojciech Zaremba · Arthur Gretton · Matthew B Blaschko 
2013 Poster: A Kernel Test for Three-Variable Interactions »
Dino Sejdinovic · Arthur Gretton · Wicher Bergsma 
2013 Poster: Regression-tree Tuning in a Streaming Setting »
Samory Kpotufe · Francesco Orabona 
2013 Poster: Adaptivity to Local Smoothness and Dimension in Kernel Regression »
Samory Kpotufe · Vikas K Garg 
2013 Spotlight: Regression-tree Tuning in a Streaming Setting »
Samory Kpotufe · Francesco Orabona 
2013 Oral: A Kernel Test for Three-Variable Interactions »
Dino Sejdinovic · Arthur Gretton · Wicher Bergsma 
2013 Poster: Robust Sparse Principal Component Regression under the High Dimensional Elliptical Model »
Fang Han · Han Liu 
2013 Spotlight: Robust Sparse Principal Component Regression under the High Dimensional Elliptical Model »
Fang Han · Han Liu 
2012 Workshop: Confluence between Kernel Methods and Graphical Models »
Le Song · Arthur Gretton · Alexander Smola 
2012 Workshop: Modern Nonparametric Methods in Machine Learning »
Sivaraman Balakrishnan · Arthur Gretton · Mladen Kolar · John Lafferty · Han Liu · Tong Zhang 
2012 Poster: Gradient Weights help Nonparametric Regressors »
Samory Kpotufe · Abdeslam Boularias 
2012 Oral: Gradient Weights help Nonparametric Regressors »
Samory Kpotufe · Abdeslam Boularias 
2012 Poster: High-dimensional Nonparanormal Graph Estimation via Smooth-projected Neighborhood Pursuit »
Tuo Zhao · Kathryn Roeder · Han Liu 
2012 Poster: Optimal kernel choice for large-scale two-sample tests »
Arthur Gretton · Bharath Sriperumbudur · Dino Sejdinovic · Heiko Strathmann · Sivaraman Balakrishnan · Massimiliano Pontil · Kenji Fukumizu 
2012 Poster: Exponential Concentration for Mutual Information Estimation with Application to Forests »
Han Liu · John Lafferty · Larry Wasserman 
2011 Poster: k-NN Regression Adapts to Local Intrinsic Dimension »
Samory Kpotufe 
2011 Poster: Kernel Bayes' Rule »
Kenji Fukumizu · Le Song · Arthur Gretton 
2011 Oral: k-NN Regression Adapts to Local Intrinsic Dimension »
Samory Kpotufe 
2011 Poster: Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint »
Bharath Sriperumbudur · Kenji Fukumizu · Gert Lanckriet 
2010 Workshop: Low-rank Methods for Large-scale Machine Learning »
Arthur Gretton · Michael W Mahoney · Mehryar Mohri · Ameet S Talwalkar 
2009 Workshop: Temporal Segmentation: Perspectives from Statistics, Machine Learning, and Signal Processing »
Stephane Canu · Olivier Cappé · Arthur Gretton · Zaid Harchaoui · Alain Rakotomamonjy · Jean-Philippe Vert
2009 Workshop: Large-Scale Machine Learning: Parallelism and Massive Datasets »
Alexander Gray · Arthur Gretton · Alexander Smola · Joseph E Gonzalez · Carlos Guestrin 
2009 Session: Oral session 10: Neural Modeling and Imaging »
Arthur Gretton 
2009 Poster: Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions »
Bharath Sriperumbudur · Kenji Fukumizu · Arthur Gretton · Gert Lanckriet · Bernhard Schölkopf 
2009 Oral: Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions »
Bharath Sriperumbudur · Kenji Fukumizu · Arthur Gretton · Gert Lanckriet · Bernhard Schölkopf 
2009 Poster: On the Convergence of the Concave-Convex Procedure »
Bharath Sriperumbudur · Gert Lanckriet 
2009 Poster: Fast, smooth and adaptive regression in metric spaces »
Samory Kpotufe 
2009 Poster: Nonlinear directed acyclic structure learning with weakly additive noise models »
Robert E Tillman · Arthur Gretton · Peter Spirtes 
2009 Poster: A Fast, Consistent Kernel Two-Sample Test »
Arthur Gretton · Kenji Fukumizu · Zaid Harchaoui · Bharath Sriperumbudur 
2009 Spotlight: A Fast, Consistent Kernel Two-Sample Test »
Arthur Gretton · Kenji Fukumizu · Zaid Harchaoui · Bharath Sriperumbudur 
2008 Workshop: Kernel Learning: Automatic Selection of Optimal Kernels »
Corinna Cortes · Arthur Gretton · Gert Lanckriet · Mehryar Mohri · Afshin Rostamizadeh 
2008 Poster: Kernel Measures of Independence for non-iid Data »
Xinhua Zhang · Le Song · Arthur Gretton · Alexander Smola 
2008 Poster: Characteristic Kernels on Groups and Semigroups »
Kenji Fukumizu · Bharath Sriperumbudur · Arthur Gretton · Bernhard Schölkopf 
2008 Spotlight: Kernel Measures of Independence for non-iid Data »
Xinhua Zhang · Le Song · Arthur Gretton · Alexander Smola 
2008 Oral: Characteristic Kernels on Groups and Semigroups »
Kenji Fukumizu · Bharath Sriperumbudur · Arthur Gretton · Bernhard Schölkopf 
2008 Session: Oral session 2: Sensorimotor Control »
Arthur Gretton 
2008 Poster: Learning Taxonomies by Dependence Maximization »
Matthew B Blaschko · Arthur Gretton 
2007 Workshop: Representations and Inference on Probability Distributions »
Kenji Fukumizu · Arthur Gretton · Alexander Smola 
2007 Spotlight: Kernel Measures of Conditional Dependence »
Kenji Fukumizu · Arthur Gretton · Xiaohai Sun · Bernhard Schölkopf 
2007 Poster: Kernel Measures of Conditional Dependence »
Kenji Fukumizu · Arthur Gretton · Xiaohai Sun · Bernhard Schölkopf 
2007 Spotlight: A Kernel Statistical Test of Independence »
Arthur Gretton · Kenji Fukumizu · Choon Hui Teo · Le Song · Bernhard Schölkopf · Alexander Smola 
2007 Oral: Colored Maximum Variance Unfolding »
Le Song · Alexander Smola · Karsten Borgwardt · Arthur Gretton 
2007 Poster: Colored Maximum Variance Unfolding »
Le Song · Alexander Smola · Karsten Borgwardt · Arthur Gretton 
2007 Poster: A Kernel Statistical Test of Independence »
Arthur Gretton · Kenji Fukumizu · Choon Hui Teo · Le Song · Bernhard Schölkopf · Alexander Smola 
2006 Poster: A Kernel Method for the Two-Sample-Problem »
Arthur Gretton · Karsten Borgwardt · Malte J Rasch · Bernhard Schölkopf · Alexander Smola 
2006 Poster: Correcting Sample Selection Bias by Unlabeled Data »
Jiayuan Huang · Alexander Smola · Arthur Gretton · Karsten Borgwardt · Bernhard Schölkopf 
2006 Spotlight: Correcting Sample Selection Bias by Unlabeled Data »
Jiayuan Huang · Alexander Smola · Arthur Gretton · Karsten Borgwardt · Bernhard Schölkopf 
2006 Talk: A Kernel Method for the Two-Sample-Problem »
Arthur Gretton · Karsten Borgwardt · Malte J Rasch · Bernhard Schölkopf · Alexander Smola