As adoption of machine learning grows in high-stakes application areas (e.g., industry, government and health care), so does the need for guarantees: how accurate a learned model will be; whether its predictions will be fair; whether it will divulge information about individuals; or whether it is vulnerable to adversarial attacks. Many of these questions involve unknown or intractable quantities (e.g., risk, regret or posterior likelihood) and complex constraints (e.g., differential privacy, fairness, and adversarial robustness). Thus, learning algorithms are often designed to yield (and optimize) bounds on the quantities of interest. Beyond providing guarantees, these bounds also shed light on black-box machine learning systems.
Classical examples include structural risk minimization (Vapnik, 1991) and support vector machines (Cristianini & Shawe-Taylor, 2000), while more recent examples include non-vacuous risk bounds for neural networks (Dziugaite & Roy, 2017, 2018), algorithms that optimize both the weights and structure of a neural network (Cortes et al., 2017), counterfactual risk minimization for learning from logged bandit feedback (Swaminathan & Joachims, 2015; London & Sandler, 2019), robustness to adversarial attacks (Schmidt et al., 2018; Wong & Kolter, 2018), differentially private learning (Dwork et al., 2006; Chaudhuri et al., 2011), and algorithms that ensure fairness (Dwork et al., 2012).
This one-day workshop will bring together researchers in both theoretical and applied machine learning, across areas such as statistical learning theory, adversarial learning, fairness and privacy, to discuss the problem of obtaining performance guarantees and algorithms to optimize them. The program will include invited and contributed talks, poster sessions and a panel discussion. We particularly welcome contributions describing fundamentally new problems, novel learning principles, creative bound optimization techniques, and empirical studies of theoretical findings.
Sat 8:45 a.m. - 9:00 a.m. | Welcome Address (Talk) | Ben London
Sat 9:00 a.m. - 9:45 a.m. | Tengyu Ma, "Designing Explicit Regularizers for Deep Models" (Invited Talk) | Tengyu Ma

I will discuss some recent results on designing explicit regularizers to improve the generalization performance of deep neural networks. We derive data-dependent generalization bounds for deep neural networks, empirically regularize these bounds, and obtain improved generalization performance (in terms of standard accuracy or robust accuracy). I will also touch on recent results on applying these techniques to imbalanced datasets. Based on joint work with Colin Wei, Kaidi Cao, Adrien Gaidon, and Nikos Arechiga: https://arxiv.org/abs/1910.04284, https://arxiv.org/abs/1906.07413, https://arxiv.org/abs/1905.03684
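As a rough illustration of the bound-regularization pattern described above (not the talk's actual regularizers, which are data-dependent and more refined), one can add a complexity term lifted from a norm-based generalization bound directly to the training objective; here a product of Frobenius norms stands in for that term:

```python
import torch

def bound_regularized_loss(model, x, y, lam=0.1):
    """Illustrative pattern only: empirical loss plus a penalty term lifted
    from a generalization bound. A product of Frobenius norms of the weight
    matrices stands in for the bound's complexity term; the talk's
    regularizers are more refined and data-dependent."""
    logits = model(x)
    fit = torch.nn.functional.cross_entropy(logits, y)
    complexity = torch.tensor(1.0)
    for p in model.parameters():
        if p.dim() > 1:  # weight tensors only, skip biases
            complexity = complexity * p.norm()
    return fit + lam * complexity
```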
Sat 9:45 a.m. - 10:15 a.m. | Vatsal Sharan, "Sample Amplification: Increasing Dataset Size even when Learning is Impossible" (Contributed Talk) | Vatsal Sharan

Given data drawn from an unknown distribution $D$, to what extent is it possible to ``amplify'' this dataset and faithfully output a larger set of samples that appear to have been drawn from $D$? We formalize this question as follows: an $(n,m)$ amplification procedure takes as input $n$ independent draws from an unknown distribution $D$ and outputs a set of $m > n$ ``samples'' which must be indistinguishable from $m$ samples drawn i.i.d. from $D$. We consider this sample amplification problem in two fundamental settings: the case where $D$ is an arbitrary discrete distribution supported on $k$ elements, and the case where $D$ is a $d$-dimensional Gaussian with unknown mean and fixed covariance matrix. Perhaps surprisingly, we show that a valid amplification procedure exists for both of these settings, even in the regime where the size of the input dataset, $n$, is significantly smaller than what would be necessary to learn $D$ to non-trivial accuracy. We also show that our procedures are optimal up to constant factors. Beyond these results, we formalize a number of curious directions for future research along this vein.
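To make the notion of indistinguishability concrete, one natural formalization (stated here with total variation distance; the talk's exact definition may differ in constants and quantifiers) requires the amplifier $\mathcal{A}$ to satisfy

$$\sup_{D \in \mathcal{C}} \; d_{\mathrm{TV}}\!\big(\mathcal{A}(X_1, \dots, X_n),\; D^{\otimes m}\big) \le \varepsilon, \qquad X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} D,$$

where $\mathcal{C}$ is the class of candidate distributions (e.g., discrete distributions on $k$ elements, or $d$-dimensional Gaussians with unknown mean) and $\varepsilon$ is a small constant such as $1/3$.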
Sat 10:15 a.m. - 10:45 a.m. | Break / Poster Session 1

Visit https://sites.google.com/view/mlwithguarantees/accepted-papers for the list of papers. Posters will be up all day.

Presenters: Antonia Marcu · Yao-Yuan Yang · Pascale Gourdeau · Chen Zhu · Thodoris Lykouris · Jianfeng Chi · Mark Kozdoba · Arjun Nitin Bhagoji · Xiaoxia Wu · Jay Nandy · Michael T Smith · Bingyang Wen · Yuege Xie · Konstantinos Pitas · Suprosanna Shit · Maksym Andriushchenko · Dingli Yu · Gaël Letarte · Misha Khodak · Hussein Mozannar · Chara Podimata · James Foulds · Yizhen Wang · Huishuai Zhang · Ondrej Kuzelka · Alexander Levine · Nan Lu · Zakaria Mhammedi · Paul Viallard · Diana Cai · Lovedeep Gondara · James Lucas · Yasaman Mahdaviyeh · Aristide Baratin · Rishi Bommasani · Alessandro Barp · Andrew Ilyas · Kaiwen Wu · Jens Behrmann · Omar Rivasplata · Amir Nazemi · Aditi Raghunathan · Will Stephenson · Sahil Singla · Akhil Gupta · YooJung Choi · Yannic Kilcher · Clare Lyle · Edoardo Manino · Andrew Bennett · Zhi Xu · Niladri Chatterji · Emre Barut · Flavien Prost · Rodrigo Toro Icarte · Arno Blaas · Chulhee Yun · Sahin Lale · YiDing Jiang · Tharun Kumar Reddy Medini · Ashkan Rezaei · Alexander Meinke · Stephen Mell · Gary Kazantsev · Shivam Garg · Aradhana Sinha · Vishnu Lokhande · Geovani Rizk · Han Zhao · Aditya Kumar Akash · Jikai Hou · Ali Ghodsi · Matthias Hein · Tyler Sypherd · Yichen Yang · Anastasia Pentina · Pierre Gillot · Antoine Ledent · Guy Gur-Ari · Noah MacAulay · Tianzong Zhang
Sat 10:45 a.m. - 11:30 a.m. | Mehryar Mohri, "Learning with Sample-Dependent Hypothesis Sets" (Invited Talk) | Mehryar Mohri
Sat 11:30 a.m. - 12:00 p.m. | James Lucas, "Information-theoretic limitations on novel task generalization" (Contributed Talk) | James Lucas

Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly. However, recent successes in few-shot learning and related problems are encouraging signs that these models can be adapted to more realistic settings where train and test distributions differ. Unfortunately, theoretical support for these algorithms is severely limited, and little is known about the difficulty of these problems. In this work, we provide novel information-theoretic lower bounds on minimax rates of convergence for algorithms that are trained on data from multiple sources and tested on novel data. Our bounds depend intuitively on the information shared between sources of data and characterize the difficulty of learning in this setting for arbitrary algorithms.
Sat 12:00 p.m. - 1:45 p.m. | Lunch Break
Sat 1:45 p.m. - 2:30 p.m. | Soheil Feizi, "Certifiable Defenses against Adversarial Attacks" (Invited Talk) | Soheil Feizi

While neural networks have achieved high performance in different learning tasks, their accuracy drops significantly in the presence of small adversarial perturbations to inputs. In the last couple of years, several practical defenses based on regularization and adversarial training have been proposed, which are often followed by stronger attacks that defeat them. To escape this cycle, a new line of work focuses on developing certifiably robust classifiers. In these models, for a given input sample, one can compute a robustness certificate such that for ‘any’ perturbation of the input within the robustness radius, the classification output will ‘provably’ remain unchanged. In this talk, I will present two certifiable defenses: (1) Wasserstein smoothing, to defend against non-additive Wasserstein adversarial attacks, and (2) curvature-based robust training, to certifiably defend against $\ell_2$ attacks by globally bounding the curvature of the network. This is joint work with Alex Levine and Sahil Singla.
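To sketch the idea behind the curvature-based certificate (an illustrative simplification, not necessarily the talk's exact construction): if the margin function $m(x)$, the true-class logit minus the largest other logit, has Hessian spectral norm globally bounded by $K$, then $m(x+\delta) \ge m(x) - \lVert\nabla m(x)\rVert\, r - \tfrac{K}{2} r^2$ for all $\lVert\delta\rVert_2 \le r$, and solving for the largest safe $r$ gives a closed-form $\ell_2$ certificate:

```python
import math

def certified_radius_l2(margin: float, grad_norm: float, curvature_bound: float) -> float:
    """Lower bound on the l2 robustness radius of a classifier whose margin
    function m(x) (true-class logit minus best other logit) has Hessian
    spectral norm globally bounded by K. From the second-order bound
        m(x + d) >= m(x) - ||grad m(x)|| * r - (K/2) * r**2,  r = ||d||_2,
    the prediction provably cannot flip for any ||d||_2 < r*, where r*
    is the positive root of the quadratic below."""
    if margin <= 0:            # already misclassified: no certificate
        return 0.0
    K, g = curvature_bound, grad_norm
    if K == 0:                 # linear (zero-curvature) special case
        return margin / g if g > 0 else math.inf
    return (-g + math.sqrt(g * g + 2.0 * K * margin)) / K
```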
Sat 2:30 p.m. - 3:00 p.m. | Maksym Andriushchenko, "Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks" (Contributed Talk) | Maksym Andriushchenko

The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are widely used in practice (e.g., XGBoost) due to their accuracy, interpretability, and efficiency. We show in this paper that for boosted decision stumps the \textit{exact} min-max robust loss and test error for an $l_\infty$-attack can be computed in $O(T\log T)$ time per input, where $T$ is the number of decision stumps, and the optimal update step of the ensemble can be done in $O(n^2\,T\log T)$ time, where $n$ is the number of data points. For boosted trees, we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5\% for $\epsilon_\infty=0.3$), FMNIST (23.2\% for $\epsilon_\infty=0.1$), and CIFAR-10 (74.7\% for $\epsilon_\infty=8/255$). Moreover, the robust test error rates we achieve are competitive with those of provably robust CNNs. Code for our method is available at \url{https://git.io/Je18r}. This is a short version of the corresponding NeurIPS 2019 paper (Andriushchenko & Hein, 2019).
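To see why the exact robust loss is tractable for stump ensembles: each stump reads a single coordinate, so an $l_\infty$-bounded adversary optimizes every coordinate independently, and within $[x_j - \epsilon, x_j + \epsilon]$ the coordinate-$j$ partial sum can only change at stump thresholds. The toy sketch below (data layout and names are illustrative; sorting the thresholds per coordinate yields the $O(T\log T)$ bound from the abstract) evaluates every piece of the resulting step function:

```python
from collections import defaultdict

def robust_margin(stumps, x, y, eps):
    """Exact worst-case margin y * f(x') over ||x' - x||_inf <= eps for an
    ensemble of decision stumps f(x) = sum_t (w_l if x[j_t] <= thr_t else w_r).
    Each stump is a tuple (j, thr, w_left, w_right). Because each stump reads
    one coordinate, the adversary minimizes each coordinate's contribution
    independently; per coordinate it suffices to evaluate the partial sum at
    the interval endpoints and just past each threshold inside the interval."""
    per_coord = defaultdict(list)
    for j, thr, wl, wr in stumps:
        per_coord[j].append((thr, wl, wr))

    total = 0.0
    for j, group in per_coord.items():
        lo, hi = x[j] - eps, x[j] + eps
        # candidate attack points: interval ends, plus both sides of each
        # threshold falling inside the interval (step function breakpoints)
        candidates = {lo, hi}
        for thr, _, _ in group:
            if lo <= thr <= hi:
                candidates.update((thr, min(thr + 1e-12, hi)))
        coord_sum = lambda v: sum(wl if v <= thr else wr for thr, wl, wr in group)
        total += min(y * coord_sum(v) for v in candidates)
    return total
```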
Sat 3:00 p.m. - 3:30 p.m. | Coffee Break / Poster Session 2

Visit https://sites.google.com/view/mlwithguarantees/accepted-papers for the list of papers. Posters will be up all day.
Sat 3:30 p.m. - 4:15 p.m. | Aaron Roth, "Average Individual Fairness" (Invited Talk) | Aaron Roth
Sat 4:15 p.m. - 4:45 p.m. | Hussein Mozannar, "Fair Learning with Private Data" (Contributed Talk) | Hussein Mozannar

We study learning non-discriminatory predictors when the protected attributes are privatized or noisy. We observe that, in the population limit, non-discrimination against noisy attributes is equivalent to non-discrimination against the original attributes, and we show this holds for various fairness criteria. We then characterize the cost, in sample complexity, that privacy adds to testing non-discrimination. Using this relationship, we propose how to carefully adapt existing non-discriminatory learners to work with privatized protected attributes. Care is crucial: naively using these learners may create the illusion of non-discrimination while remaining highly discriminatory.
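As a small illustration of the population-limit relationship between noisy and true attributes (a sketch under an assumed randomized-response noise model, not necessarily the paper's exact mechanism): if the reported attribute equals the true one with probability $1-p$ and is flipped otherwise, the indicator of the true group can be reconstructed in expectation, so a fairness metric such as the demographic-parity gap can be estimated consistently from noisy attributes:

```python
import numpy as np

def dp_gap_from_noisy_attr(y_pred, z_noisy, p):
    """Consistent estimate of the demographic-parity gap
    P(yhat=1 | z=1) - P(yhat=1 | z=0) when the protected attribute is
    observed through randomized response: z_noisy equals the true z with
    probability 1-p and is flipped with probability p (independent noise).
    Uses the unbiased reconstruction 1{z=1} ~ (1{z_noisy=1} - p) / (1-2p)."""
    assert p < 0.5, "randomized response needs flip probability below 1/2"
    y = np.asarray(y_pred, dtype=float)
    zn = np.asarray(z_noisy, dtype=float)
    w1 = (zn - p) / (1.0 - 2.0 * p)   # unbiased indicator of z = 1
    w0 = 1.0 - w1                     # unbiased indicator of z = 0
    rate1 = (y * w1).mean() / w1.mean()
    rate0 = (y * w0).mean() / w0.mean()
    return rate1 - rate0
```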
Sat 4:45 p.m. - 5:30 p.m. | Emma Brunskill, "Some Theory RL Challenges Inspired by Education" (Invited Talk) | Emma Brunskill
Sat 5:30 p.m. - 6:00 p.m. | Discussion Panel
Author Information
Ben London (Amazon)
Gintare Karolina Dziugaite (Element AI)
Daniel Roy (University of Toronto & Vector Institute)
Thorsten Joachims (Cornell)
Aleksander Madry (MIT)
Aleksander Madry is the NBX Associate Professor of Computer Science in the MIT EECS Department and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 2011 and, prior to joining the MIT faculty, spent time at Microsoft Research New England and on the faculty of EPFL. Aleksander's research interests span algorithms, continuous optimization, the science of deep learning, and understanding machine learning from a robustness perspective. His work has been recognized with a number of awards, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, an ACM Doctoral Dissertation Award Honorable Mention, and the 2018 Presburger Award.
John Shawe-Taylor (UCL)