Poster
Informative Features for Model Comparison
Wittawat Jitkrittum · Heishiro Kanagawa · Patsorn Sangkloy · James Hays · Bernhard Schölkopf · Arthur Gretton

Thu Dec 6th 05:00 -- 07:00 PM @ Room 210 #79

Given two candidate models and a set of target observations, we address the problem of measuring the relative goodness of fit of the two models. We propose two new statistical tests which are nonparametric, computationally efficient (runtime complexity is linear in the sample size), and interpretable. As a unique advantage, our tests can produce a set of examples (informative features) indicating the regions in the data domain where one model fits significantly better than the other. In a real-world problem of comparing GAN models, the test power of our new test matches that of the state-of-the-art test of relative goodness of fit, while being one order of magnitude faster.
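The core idea can be illustrated with a minimal sketch. This is not the authors' exact test statistic, but a simplified, hypothetical analogue of the same principle: embed samples from each candidate model and from the reference data via kernel evaluations at a small set of test locations, and compare each model's squared discrepancy from the data per location. Locations where the difference is large are "informative features" pointing to regions where one model fits worse. All function and variable names below are illustrative choices, not from the paper; runtime is linear in the sample size for a fixed number of locations, as the abstract notes.

```python
import numpy as np

def gauss_kernel(X, V, sigma=1.0):
    # Gaussian kernel matrix between samples X (n, d) and test locations V (J, d).
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def relative_fit_statistic(Xp, Xq, Xr, V, sigma=1.0):
    # Mean feature vectors: average kernel similarity to each test location.
    mp = gauss_kernel(Xp, V, sigma).mean(axis=0)  # model P
    mq = gauss_kernel(Xq, V, sigma).mean(axis=0)  # model Q
    mr = gauss_kernel(Xr, V, sigma).mean(axis=0)  # reference data
    # Per-location squared discrepancy of each model from the data.
    dp = (mp - mr) ** 2
    dq = (mq - mr) ** 2
    # Positive entries mark locations where Q fits better than P.
    return dp - dq

rng = np.random.default_rng(0)
Xr = rng.normal(0.0, 1.0, size=(500, 1))   # observed data
Xp = rng.normal(1.0, 1.0, size=(500, 1))   # model P: shifted, poorer fit
Xq = rng.normal(0.0, 1.0, size=(500, 1))   # model Q: matches the data
V = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)  # candidate test locations

diff = relative_fit_statistic(Xp, Xq, Xr, V)
print(diff.round(4))  # inspect per-location evidence that P fits worse than Q
```

In the paper's tests, the locations would be optimized to maximize test power and the statistic calibrated against a null distribution; here the location with the largest positive entry simply flags the most informative region.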

Author Information

Wittawat Jitkrittum (Max Planck Institute for Intelligent Systems)

Wittawat Jitkrittum is a postdoctoral researcher at the Max Planck Institute for Intelligent Systems, Germany. He earned his PhD from the Gatsby Unit, University College London, with a thesis on informative features for comparing distributions. He received a best paper award at NeurIPS 2017 and the ELLIS PhD Award 2019 for an outstanding dissertation. Wittawat has broad research interests covering kernel methods, deep generative models, and approximate Bayesian inference. He served as a publication chair for AISTATS 2016 and as a program committee member for NeurIPS, ICML, and AISTATS, among others, and is a co-organizer of the first Southeast Asia Machine Learning School (SEAMLS 2019) in Indonesia and of the first Machine Learning Research School (MLRS 2019) in Thailand.

Heishiro Kanagawa (Gatsby Unit, University College London)
Patsorn Sangkloy (Georgia Institute of Technology)
James Hays (Georgia Institute of Technology, USA)
Bernhard Schölkopf (MPI for Intelligent Systems)

Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University of Berlin. He has conducted research at AT&T Bell Labs; at GMD FIRST, Berlin; at the Australian National University, Canberra; and at Microsoft Research Cambridge (UK). In 2001, he was appointed a scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.

Arthur Gretton (Gatsby Unit, UCL)

Arthur Gretton is a Professor with the Gatsby Computational Neuroscience Unit at UCL. He received degrees in Physics and Systems Engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He previously worked at the MPI for Biological Cybernetics and at the Machine Learning Department, Carnegie Mellon University. Arthur's recent research interests in machine learning include the design and training of generative models, both implicit (e.g. GANs) and explicit (high/infinite-dimensional exponential family models), nonparametric hypothesis testing, and kernel methods. He was an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, and has been an Action Editor for JMLR since April 2013; he served as an Area Chair for NeurIPS in 2008 and 2009, a Senior Area Chair for NeurIPS in 2018, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013. Arthur was program chair for AISTATS in 2016 (with Christian Robert), tutorials chair for ICML 2018 (with Ruslan Salakhutdinov), workshops chair for ICML 2019 (with Honglak Lee), program chair for the DALI workshop in 2019 (with Krikamol Muandet and Shakir Mohamed), and co-organiser of the Machine Learning Summer School 2019 in London (with Marc Deisenroth).
