Workshop Webpage: https://ml-critique-correct.github.io/
Recently there have been calls to make machine learning more reproducible, less hand-tailored, fairer, and generally more thoughtful about how research is conducted and put into practice. These are hallmarks of a mature scientific field and will be crucial for machine learning to have the wide-ranging, positive impact it is expected to have. Without careful consideration, we as a field risk inflating expectations beyond what is possible. To address this, the workshop aims to better understand, and to improve, all stages of the research process in machine learning.
A number of recent papers have carefully considered trends in machine learning as well as the needs of the field when its methods are applied in real-world scenarios [1-18]. Each of these works introspectively analyzes what we often take for granted as a field, and many propose solutions for moving forward. The goal of this workshop is to bring together researchers from all subfields of machine learning to highlight open problems and widespread dubious practices in the field, and, crucially, to propose solutions. We hope to highlight issues and propose solutions in areas such as:
- Common practices [1, 8]
- Implicit technical and empirical assumptions that go unquestioned [2, 3, 5, 7, 11, 12, 13, 17, 18]
- Shortfalls in publication and reviewing setups [15, 16]
- Disconnects between research focus and application requirements [9, 10, 14]
- Surprising observations that make us rethink our research priorities [4, 6]
The workshop program is a collection of invited talks alongside contributed posters and talks. For some of these talks, we plan a unique open format of 10 minutes of talk plus 10 minutes of follow-up discussion. Additionally, a separate panel discussion will bring together researchers with a diverse set of viewpoints on the current challenges and potential solutions. During the panel, we will also open the conversation to the audience, and the discussion will draw on an online Q&A solicited prior to the workshop.
A key expected outcome of the workshop is a collection of important open problems at all levels of machine learning research, along with a record of bad practices that we should no longer consider acceptable. Further, we hope that the workshop will make inroads into addressing these problems, highlighting promising new frontiers for making machine learning practical, robust, reproducible, and fair when applied to real-world problems.
Call for Papers:
Deadline: October 30th, 2018, 11:59 UTC
The one-day NIPS 2018 workshop Critiquing and Correcting Trends in Machine Learning calls for papers that critically examine current common practices and/or trends in methodology, datasets, empirical standards, publication models, or any other aspect of machine learning research. Though we are happy to receive papers that bring attention to problems for which there is no clear immediate remedy, we particularly encourage papers that propose a solution or indicate a way forward. Papers should motivate their arguments by describing gaps in the field. Crucially, this is not a venue for settling scores or character attacks, but for moving machine learning forward as a scientific discipline.
To help guide submissions, we have split the call for papers into the following tracks. Please indicate the intended track when making your submission. Papers are welcome from all subfields of machine learning. If you have a paper which you feel falls within the remit of the workshop but does not clearly fit one of these tracks, please contact the organizers at: ml.critique.correct@gmail.com.
Bad Practices (1-4 pages)
Papers that highlight common bad practices or unjustified assumptions at any stage of the research process. These can be either technical shortfalls in a particular machine learning subfield or more procedural bad practices akin to those discussed in [17].
Flawed Intuitions or Unjustified Assumptions (3-4 pages)
Papers that call into question commonly held intuitions or provide clear evidence either for or against assumptions that are regularly taken for granted without proper justification. For example, we would like to see papers which provide empirical assessments that test out metrics, verify intuitions, or compare popular current approaches with historic baselines that may have unfairly fallen out of favour (see e.g. [2]). We would also like to see work whose results make us rethink our intuitions or the assumptions we typically make.
Negative Results (3-4 pages)
Papers which show failure modes of existing algorithms or suggest new approaches which one might expect to perform well but which do not. The aim of the latter is to provide a venue for work which might otherwise go unpublished but which is still of interest to the community, for example by dissuading other researchers from pursuing similar, ultimately unsuccessful, approaches. Though it is naturally preferable that papers explain why the approach performs poorly, this is not essential if the paper can demonstrate why the negative result is of interest to the community in its own right.
Research Process (1-4 pages)
Papers which provide carefully thought-through critiques of, discussion on, or new approaches to areas such as the conference model, the reviewing process, the role of industry in research, open sourcing of code and data, institutional biases and discrimination in the field, research ethics, reproducibility standards, and allocation of conference tickets.
Debates (1-2 pages)
Short proposition papers which discuss issues affecting either machine learning as a whole or sizeable subfields (e.g. reinforcement learning, Bayesian methods, etc.). Selected papers will be used as the basis for instigating online forum debates before the workshop, leading up to live discussions on the day itself.
Open Problems (1-4 pages / short talks)
Papers that describe (a) unresolved questions in existing fields that need to be addressed, (b) desirable operating characteristics for ML in particular application areas that have yet to be achieved, or (c) new frontiers of machine learning research that require rethinking current practices (e.g., error diagnosis when many ML components interoperate within a system, or automating dataset collection/creation).
Submission Instructions
Papers should be submitted as PDFs using the NIPS LaTeX style file. Author names should be anonymized.
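For reference, below is a minimal sketch of what an anonymized submission source might look like. It assumes the nips_2018.sty style file from the NIPS 2018 author kit is available on the LaTeX path; the file name, options, and template details here are assumptions, so defer to the official style file and its documentation.

% Minimal submission skeleton (sketch); assumes nips_2018.sty is available.
\documentclass{article}

% Without the [final] option the style produces the submission (non-camera-ready) format.
\usepackage{nips_2018}
\usepackage[utf8]{inputenc}

\title{Working Title of Your Submission}
\author{Anonymous Author(s)}  % keep author names anonymized for review

\begin{document}
\maketitle

\begin{abstract}
  One-paragraph summary of the practice being critiqued, the evidence,
  and the proposed way forward.
\end{abstract}

\section{Introduction}
% State the intended track (e.g., Negative Results, Debates) when submitting.

\end{document}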
All accepted papers will be made available through the workshop website and presented as a poster. Selected papers will also be given contributed talks. We have a small number of complimentary workshop registrations to hand out to students. If you would like to apply for one of these, please email a one-paragraph supporting statement. We also have a limited number of reserved ticket slots to assign to authors of accepted papers. If any authors are unable to attend the workshop due to ticketing, visa, or funding issues, they will be allowed to provide a video presentation of their work that will be made available through the workshop website in lieu of a poster presentation.
Please submit papers here: https://easychair.org/conferences/?conf=cract2018
Deadline: October 30th, 2018, 11:59 UTC
References
[1] Mania, H., Guy, A., & Recht, B. (2018). Simple random search provides a competitive approach to reinforcement learning. arXiv preprint arXiv:1803.07055.
[2] Rainforth, T., Kosiorek, A. R., Le, T. A., Maddison, C. J., Igl, M., Wood, F., & Teh, Y. W. (2018). Tighter variational bounds are not necessarily better. ICML.
[3] Torralba, A., & Efros, A. A. (2011). Unbiased look at dataset bias. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1521-1528).
[4] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
[5] Mescheder, L., Geiger, A., & Nowozin, S. (2018). Which Training Methods for GANs do actually Converge? ICML.
[6] Daumé III, H. (2009). Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815.
[7] Urban, G., Geras, K. J., Kahou, S. E., Aslan, O., Wang, S., Caruana, R., Mohamed, A., ... & Richardson, M. (2016). Do deep convolutional nets really need to be deep (or even convolutional)?
[8] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.
[9] Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S., & Doshi-Velez, F. (2018). How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation. arXiv preprint arXiv:1802.00682.
[10] Schulam, P., & Saria, S. (2017). Reliable Decision Support using Counterfactual Models. NIPS.
[11] Rahimi, A. (2017). Let's take machine learning from alchemy to electricity. Test-of-time award presentation, NIPS.
[12] Lucic, M., Kurach, K., Michalski, M., Gelly, S., & Bousquet, O. (2018). Are GANs Created Equal? A Large-Scale Study. arXiv preprint arXiv:1711.10337.
[13] Le, T. A., Kosiorek, A. R., Siddharth, N., Teh, Y. W., & Wood, F. (2018). Revisiting Reweighted Wake-Sleep. arXiv preprint arXiv:1805.10469.
[14] Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
[15] Sutton, C. (2018). Making unblinding manageable: Towards reconciling prepublication and double-blind review. http://www.theexclusive.org/2017/09/arxiv-double-blind.html
[16] Langford, J. (2018). ICML Board and Reviewer profiles. http://hunch.net/?p=8962378
Schedule
Fri 5:30 a.m. - 5:40 a.m. | Opening Remarks (Talk)
Fri 5:40 a.m. - 6:05 a.m. | Invited Talk: Zachary Lipton
Fri 6:05 a.m. - 6:30 a.m. | Invited Talk: Kim Hazelwood
Fri 6:30 a.m. - 6:40 a.m. | Contributed Talk: Expanding search in the space of empirical ML (Bronwyn Woods)
Fri 6:40 a.m. - 6:50 a.m. | Contributed Talk: Opportunities for machine learning research to support fairness in industry practice (Kenneth Holstein)
Fri 6:50 a.m. - 7:20 a.m. | Spotlights - Papers 2, 23, 24, 36, 40, 44 (Contributed Talks)
Fri 7:20 a.m. - 8:10 a.m. | Poster Session 1 (note: numerous presenter names are missing here; all papers appear in all poster sessions): Akhilesh Gotmare · Kenneth Holstein · Jan Brabec · Michal Uricar · Kaleigh Clary · Cynthia Rudin · Sam Witty · Andrew Ross · Shayne O'Brien · Babak Esmaeili · Jessica Forde · Massimo Caccia · Ali Emami · Scott Jordan · Bronwyn Woods · D. Sculley · Rebekah Overdorf · Nicolas Le Roux · Peter Henderson · Brandon Yang · Tzu-Yu Liu · David Jensen · Niccolo Dalmasso · Weitang Liu · Paul Marc TRICHELAIR · Jun Ki Lee · Akanksha Atrey · Matt Groh · Yotam Hechtlinger · Emma Tosch
Fri 8:10 a.m. - 8:35 a.m. | Invited Talk: Finale Doshi-Velez
Fri 8:35 a.m. - 9:00 a.m. | Invited Talk: Suchi Saria
Fri 9:00 a.m. - 10:30 a.m. | Lunch
Fri 10:30 a.m. - 10:55 a.m. | Invited Talk: Sebastian Nowozin
Fri 10:55 a.m. - 11:05 a.m. | Contributed Talk: Using Cumulative Distribution Based Performance Analysis to Benchmark Models (Scott Jordan)
Fri 11:05 a.m. - 11:30 a.m. | Invited Talk: Charles Sutton
Fri 11:30 a.m. - 11:40 a.m. | Contributed Talk: On Avoiding Tragedy of the Commons in the Peer Review Process (D. Sculley)
Fri 11:40 a.m. - 12:00 p.m. | Spotlights - Papers 10, 20, 35, 42 (Contributed Talks)
Fri 12:00 p.m. - 12:30 p.m. | Coffee Break and Posters
Fri 12:30 p.m. - 1:30 p.m. | Panel on research process: Zachary Lipton · Charles Sutton · Finale Doshi-Velez · Hanna Wallach · Suchi Saria · Rich Caruana · Thomas Rainforth
Fri 1:30 p.m. - 3:00 p.m. | Poster Session 2
Author Information
Thomas Rainforth (University of Oxford)
Matt Kusner (University of Oxford)
Benjamin Bloem-Reddy (University of Oxford)
Brooks Paige (Alan Turing Institute / University of Cambridge)
Rich Caruana (Microsoft)
Yee Whye Teh (University of Oxford, DeepMind)
I am a Professor of Statistical Machine Learning at the Department of Statistics, University of Oxford and a Research Scientist at DeepMind. I am also an Alan Turing Institute Fellow and a European Research Council Consolidator Fellow. I obtained my Ph.D. at the University of Toronto (working with Geoffrey Hinton), and did postdoctoral work at the University of California at Berkeley (with Michael Jordan) and National University of Singapore (as Lee Kuan Yew Postdoctoral Fellow). I was a Lecturer then a Reader at the Gatsby Computational Neuroscience Unit, UCL, and a tutorial fellow at University College Oxford, prior to my current appointment. I am interested in the statistical and computational foundations of intelligence, and work on scalable machine learning, probabilistic models, Bayesian nonparametrics and deep learning. I was programme co-chair of ICML 2017 and AISTATS 2010.