Workshop
Probabilistic Programming: Universal Languages, Systems and Applications
Daniel Roy · John Winn · David A McAllester · Vikash Mansinghka · Josh Tenenbaum

Sat Dec 13 07:30 AM -- 06:30 PM (PST) @ Westin: Alpine CD
Event URL: http://probabilistic-programming.org

Probabilistic graphical models provide a formal lingua franca for modeling and a common target for efficient inference algorithms. Their introduction gave rise to an extensive body of work in machine learning, statistics, robotics, vision, biology, neuroscience, AI and cognitive science. However, many of the most innovative and exciting probabilistic models published by the NIPS community far outstrip the representational capacity of graphical models. They are instead communicated using a mix of natural language, pseudocode and mathematical formulae, and solved with special-purpose, one-off inference methods. Very often, graphical models are used only to describe the coarse, high-level structure of a model rather than the precise specification needed for automated inference. Probabilistic programming languages aim to close this representational gap: users literally specify a probabilistic model in its entirety (for example, by writing code that generates a sample from the joint distribution), and inference follows automatically from that specification. Several existing systems already support this approach, with varying degrees of expressiveness, compositionality, universality and efficiency. We believe that the probabilistic programming language approach, which has been emerging over the last ten years from a range of diverse fields including machine learning, computational statistics, systems biology, probabilistic AI, mathematical logic, theoretical computer science and programming language theory, has the potential to fundamentally change the way we understand, design, build, test and deploy probabilistic systems. The NIPS workshop will be a unique opportunity for this diverse community to meet, share ideas, collaborate, and help plot the course of this exciting research area.
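To make the "model as code, inference for free" idea concrete, here is a minimal sketch in Python with NumPy. It is not drawn from any particular system discussed at the workshop: the names model and infer_by_rejection and the Beta-Bernoulli coin example are illustrative assumptions. The model is ordinary code that samples once from the joint distribution; the inference routine is generic rejection sampling that only needs the ability to run that code and compare its simulated data to the observations.

import numpy as np

def model(rng):
    """Generative code: one draw from the joint distribution over (theta, flips)."""
    theta = rng.beta(1.0, 1.0)                # latent coin bias, uniform on [0, 1]
    flips = rng.binomial(1, theta, size=10)   # ten observable coin flips
    return theta, flips

def infer_by_rejection(model, observed, num_runs=200_000, seed=0):
    """Generic inference: run the model forward and keep only the runs whose
    simulated data match the observations; the engine never inspects the model,
    it only executes it."""
    rng = np.random.default_rng(seed)
    accepted = [theta
                for theta, flips in (model(rng) for _ in range(num_runs))
                if np.array_equal(flips, observed)]
    return float(np.mean(accepted))           # estimate of E[theta | observed]

observed = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 1])   # 8 heads in 10 flips
print(infer_by_rejection(model, observed))            # analytic posterior mean is 0.75

Real probabilistic programming systems replace this brute-force rejection step with far more efficient general-purpose inference, but the division of labor is the same: the user writes the generative code, and the system supplies the inference.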

Author Information

Daniel Roy (University of Toronto; Vector Institute)
John Winn (Microsoft Research)
David A McAllester (TTI-Chicago)
Vikash Mansinghka (Massachusetts Institute of Technology)

Vikash Mansinghka is a research scientist at MIT, where he leads the Probabilistic Computing Project. Vikash holds S.B. degrees in Mathematics and in Computer Science from MIT, as well as an M.Eng. in Computer Science and a PhD in Computation. He also held graduate fellowships from the National Science Foundation and MIT's Lincoln Laboratory. His PhD dissertation on natively probabilistic computation won the MIT George M. Sprowls dissertation award in computer science, and his research on the Picture probabilistic programming language won an award at CVPR. He served on DARPA's Information Science and Technology advisory board from 2010 to 2012, and currently serves on the editorial boards of the Journal of Machine Learning Research and the journal Statistics and Computing. He was an advisor to Google DeepMind and has co-founded two AI-related startups, one acquired and one currently operational.

Josh Tenenbaum (MIT)

Josh Tenenbaum is an Associate Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from MIT in 1999, and was an Assistant Professor at Stanford University from 1999 to 2002. He studies learning and inference in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. He focuses on problems of inductive generalization from limited data -- learning concepts and word meanings, inferring causal relations or goals -- and learning abstract knowledge that supports these inductive leaps in the form of probabilistic generative models or 'intuitive theories'. He has also developed several novel machine learning methods inspired by human learning and perception, most notably Isomap, an approach to unsupervised learning of nonlinear manifolds in high-dimensional data. He has been Associate Editor for the journal Cognitive Science, has been active on program committees for the CogSci and NIPS conferences, and has co-organized a number of workshops, tutorials and summer schools in human and machine learning. Several of his papers have received outstanding paper awards or best student paper awards at the IEEE Computer Vision and Pattern Recognition (CVPR), NIPS, and Cognitive Science conferences. He is the recipient of the New Investigator Award from the Society for Mathematical Psychology (2005), the Early Investigator Award from the Society of Experimental Psychologists (2007), and the Distinguished Scientific Award for Early Career Contribution to Psychology (in the area of cognition and human learning) from the American Psychological Association (2008).
