One of the greatest challenges facing biologists and the statisticians who work with them is representation learning: discovering and defining appropriate representations of data in order to perform complex, multi-scale machine learning tasks. This workshop is designed to bring together trainee and expert machine learning scientists with those at the very forefront of biological research for this purpose. Our full-day workshop will advance the joint project of the CS and biology communities with the goal of "Learning Meaningful Representations of Life" (LMRL), emphasizing interpretable representation learning of structure and principle.
We will organize around the theme "From Genomes to Phenotype, and Back Again": an extension of a long-standing effort in the biological sciences to assign biochemical and cellular functions to the millions of as-yet uncharacterized gene products discovered by genome sequencing. ML methods to predict phenotype from genotype are rapidly advancing and starting to achieve widespread success. At the same time, large-scale gene synthesis and genome editing technologies have rapidly matured, and become the foundation for new scientific insight as well as biomedical and industrial advances. ML-based methods have the potential to accelerate and extend these technologies' application by providing tools for solving the key problem of going "back again," from a desired phenotype to the genotype necessary to achieve that desired set of observable characteristics. We will focus on this foundational design problem and its application to areas ranging from protein engineering to phylogeny, immunology, vaccine design and next-generation therapies.
Generative modeling, semi-supervised learning, optimal experimental design, Bayesian optimization, and many other areas of machine learning have the potential to address the phenotype-to-genotype problem, and we propose to bring together experts in these fields as well as many others.
LMRL will take place on Dec 13, 2021.
Tue 4:00 a.m. - 5:00 a.m. | All LMRL events are accessible from our Gather.Town! (GatherTown)
Tue 5:00 a.m. - 5:30 a.m. | Fritz Obermeyer (Live Talk, Zoom 2)
Tue 5:00 a.m. - 5:30 a.m. | Dagmar Kainmueller (Live Talk, Zoom 1)
Tue 5:30 a.m. - 6:00 a.m. | Mohammad (Mo) Lotfollahi (Live Talk, Zoom 2)
Tue 5:59 a.m. - 6:00 a.m. | Steven Frank - The evolutionary paradox of robustness, genome overwiring, and analogies with deep learning (Live Talk, Zoom 1; 8:30-9:00 EST)

I start with the paradox of robustness, which is roughly: Greater protection from errors at the system level leads to more errors at the component level. The paradox of robustness may be an important force shaping the architecture of evolutionary systems. I then turn to the question: Why are genomes overwired? By which I mean that genetic regulatory networks seem to be more deeply and densely connected than one might expect. I suggest that the paradox of robustness may explain some of the observed complexity in genetic networks. Finally, I ask: What are the consequences of deeply and densely connected genetic networks? That question brings up possible links between genetic networks, evolutionary dynamics, and learning dynamics in the deep neural networks of modern AI.
Tue 6:00 a.m. - 6:30 a.m. | Nancy Zhang - Data Denoising and Transfer Learning in Single Cell Transcriptomics (Live Talk, Zoom 1)

Cells are the basic biological units of multicellular organisms. The development of single-cell RNA sequencing (scRNA-seq) technologies has enabled us to study the diversity of cell types in tissue and to elucidate the roles of individual cell types in disease. Yet scRNA-seq data are noisy and sparse, with only a small proportion of the transcripts present in each cell represented in the final data matrix. We propose a transfer learning framework based on deep neural nets to borrow information across related single-cell data sets for denoising and expression recovery. Our goal is to leverage the expanding resources of publicly available scRNA-seq data, for example the Human Cell Atlas, which aims to be a comprehensive map of cell types in the human body. Our method is based on a Bayesian hierarchical model coupled to a deep autoencoder, the latter trained to extract transferable gene expression features across studies coming from different labs, generated by different technologies, and/or obtained from different species. Through this framework, we explore the limits of data sharing: How much can be learned across cell types, tissues, and species? How useful are data from other technologies and labs in improving the estimates from your own study? If time allows, I will also discuss the implications of such data denoising for downstream statistical inference.
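The core transfer idea - learn a shared low-dimensional expression code on a large reference, then reuse it to denoise a smaller study - can be illustrated with a linear autoencoder (truncated SVD) standing in for the talk's deep autoencoder. All data, dimensions, and noise levels below are simulated assumptions, not the actual method or real scRNA-seq data:

```python
import numpy as np

rng = np.random.default_rng(0)
genes, latent = 200, 5
W = rng.normal(size=(genes, latent))              # shared expression programs
ref = rng.normal(size=(1000, latent)) @ W.T       # large reference atlas
target_true = rng.normal(size=(30, latent)) @ W.T # small target study (clean)
target_obs = target_true + 2.0 * rng.normal(size=target_true.shape)  # noisy

# "Train" a linear encoder on the reference only
_, _, Vt = np.linalg.svd(ref, full_matrices=False)
encoder = Vt[:latent].T                           # genes x latent

# Transfer: denoise the target by projecting onto the reference code
denoised = target_obs @ encoder @ encoder.T
err_raw = np.linalg.norm(target_obs - target_true)
err_den = np.linalg.norm(denoised - target_true)
assert err_den < err_raw  # borrowing reference structure reduces noise
```

The projection keeps only the variation explainable by the reference's expression programs, so most of the independent noise is discarded; a deep autoencoder plays the same role nonlinearly.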
Tue 6:00 a.m. - 6:30 a.m. | Matt Raybould (Live Talk, Zoom 2)
Tue 6:30 a.m. - 6:45 a.m. | 15 min Break - Check out the posters on Gather Town! (GatherTown)
Tue 6:45 a.m. - 7:15 a.m. | Frank Noe (Live Talk, Zoom 1)
Tue 6:45 a.m. - 7:15 a.m. | Jennifer Wei - Machine Learning for Chemical Sensing (Live Talk, Zoom 2)

I will present two applications of machine learning for molecular sensing: mass spectrometry and olfaction. Mass spectrometry is a method chemists use to identify unknown molecules. Spectra from unknown samples are compared against existing libraries of mass spectra; highly matching spectra are considered candidates for the identity of the molecule. I will discuss work using machine learning models to predict mass spectra, expanding the coverage of these libraries and improving our ability to identify molecules through mass spectrometry. The second application is a more natural form of molecular sensing: olfaction. I will discuss work my team has done in predicting human odor labels for individual molecules, and some of the resulting consequences.
Tue 7:15 a.m. - 7:45 a.m. | Su-In Lee (Live Talk, Zoom 2)
Tue 7:15 a.m. - 7:45 a.m. | Lyla Atta - RNA velocity-informed embeddings for visualizing cellular trajectories (Live Talk, Zoom 1)

Single-cell transcriptomics profiling technologies enable genome-wide gene expression measurements in individual cells but can currently only provide a static snapshot of cellular transcriptional states. RNA velocity analysis can help infer cell state changes from such single-cell transcriptomics data. To interpret these inferred cell state changes as part of underlying cellular trajectories, current approaches rely on visualization with principal components, t-distributed stochastic neighbor embedding, and other 2D embeddings derived from the observed single-cell transcriptional states. However, these 2D embeddings can yield different representations of the underlying cellular trajectories, hindering the interpretation of cell state changes. We developed VeloViz to create RNA velocity-informed 2D and 3D embeddings from single-cell transcriptomics data. Using both real and simulated data, we demonstrate that VeloViz embeddings capture underlying cellular trajectories across diverse trajectory topologies, even when intermediate cell states are missing. By considering the predicted future transcriptional states from RNA velocity analysis, VeloViz can help visualize a more reliable representation of underlying cellular trajectories.
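The velocity-informed idea - connect each cell to cells near its *predicted future* state rather than its current neighbors, then lay out the resulting graph - can be sketched in a few lines. This is a toy illustration of the general principle with simulated arrays, not the VeloViz algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5, 3
X = rng.normal(size=(n, d))        # observed transcriptional states
V = 0.5 * rng.normal(size=(n, d))  # RNA-velocity estimate per cell
future = X + V                     # predicted future state of each cell

# Connect each cell to the k cells closest to its projected future state
D = np.linalg.norm(future[:, None, :] - X[None, :, :], axis=2)
np.fill_diagonal(D, np.inf)        # no self-edges
neighbors = np.argsort(D, axis=1)[:, :k]

A = np.zeros((n, n))
A[np.arange(n)[:, None], neighbors] = 1.0
# A 2D/3D embedding can now be computed from this directed graph with any
# graph-layout method; its edges encode inferred cell-state transitions.
assert A.sum() == n * k
```

Because edges point toward likely future states, a layout of this graph tends to arrange cells along the direction of the inferred trajectory instead of only by transcriptional similarity.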
Tue 7:45 a.m. - 8:15 a.m. | Jean-Philippe Vert - Deep learning for DNA and proteins: equivariance and alignment (Live Talk, Zoom 1)

Deep learning and language models are increasingly used to model DNA and protein sequences. While many models and tasks are inspired by and borrowed from the field of natural language processing, biological sequences have specificities that deserve attention. In this talk I will discuss two such specificities: 1) the inherent symmetry of double-stranded DNA sequences due to reverse-complement pairing, which calls for equivariant architectures, and 2) the fact that sequence alignment is a natural way to compare evolutionarily related sequences.
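The reverse-complement symmetry in point 1) can be made concrete: with tied weights scanning both strands, the per-position scores of a sequence and of its reverse complement are the same up to reversal. A minimal numpy sketch (the one-hot channel order A, C, G, T is an assumption, and this illustrates the symmetry rather than any specific published architecture):

```python
import numpy as np

def one_hot(seq):
    # channel order A, C, G, T, so complementation maps channel i -> 3 - i
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    return np.eye(4)[[idx[b] for b in seq]]

def reverse_complement(x):
    # reverse along the length axis AND the channel axis (A<->T, C<->G)
    return x[::-1, ::-1]

def conv1d(x, w):
    # valid cross-correlation of a (L, 4) one-hot with a (k, 4) motif filter
    k = w.shape[0]
    return np.array([np.sum(x[i:i + k] * w) for i in range(len(x) - k + 1)])

def rc_equivariant_scan(x, w):
    # score both strands with one tied filter: the motif and its RC
    fwd = conv1d(x, w)
    rev = conv1d(x, reverse_complement(w))
    return np.maximum(fwd, rev)  # strand-symmetric per-position score

x = one_hot("ACGTTGCA")
w = np.random.default_rng(0).normal(size=(3, 4))
out = rc_equivariant_scan(x, w)
out_rc = rc_equivariant_scan(reverse_complement(x), w)
# Equivariance: scanning the reverse complement reverses the output
assert np.allclose(out, out_rc[::-1])
```

Weight tying like this guarantees the network assigns the same score to a site regardless of which strand it was sequenced from, instead of hoping data augmentation teaches the symmetry.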
Tue 7:45 a.m. - 8:15 a.m. | Kristin Branson (Live Talk, Zoom 2)
Tue 8:15 a.m. - 8:30 a.m. | 15 min Break - Check out the posters on Gather Town! (GatherTown)
Tue 8:30 a.m. - 9:00 a.m. | Milo Lin - Distilling generalizable rules from data using Essence Neural Networks (Live Talk, Zoom 1)

Human reasoning can distill principles from observed patterns and generalize them to explain and solve novel problems, as exemplified by the success of scientific theories. The patterns in biological data are often complex and high dimensional, suggesting that machine learning could play a vital role in distilling collective rules from patterns that may be challenging for human reasoning. However, the most powerful artificial intelligence systems are currently limited in interpretability and symbolic reasoning ability. Recently, we developed essence neural networks (ENNs), which train to do general supervised learning tasks without requiring gradient optimization, and showed that ENNs are intrinsically interpretable, can generalize out-of-distribution, and perform symbolic learning on sparse data. Here, I discuss our current progress in automatically translating the weights of an ENN into concise, executable computer code for general symbolic tasks, an implementation of data-based automatic programming which we call deep distilling. The distilled code, which can contain loops, nested logical statements, and useful intermediate variables, is equivalent to the ENN but is generally orders of magnitude more compact and human-comprehensible. Because the code is distilled from a general-purpose neural network rather than constructed by searching through libraries of logical functions, deep distilling is flexible in terms of problem domain and size. On a diverse set of problems involving arithmetic, computer vision, and optimization, we show that deep distilling generates concise code that generalizes out-of-distribution to solve problems orders of magnitude larger and more complex than the training data. For problems with a known ground-truth rule set, including cellular automata which encode a type of sequence-to-function mapping, deep distilling discovers the rule set exactly with scalable guarantees. For problems that are ambiguous or computationally intractable, the distilled rules are similar to existing human-derived algorithms and perform on par or better. Our approach demonstrates that unassisted machine intelligence can build generalizable and intuitive rules explaining patterns in large datasets that would otherwise overwhelm human detection and reasoning.
Tue 8:30 a.m. - 9:00 a.m. | Georg Seelig - Machine learning-guided design of functional DNA, RNA and protein sequences (Live Talk, Zoom 2)
Tue 9:00 a.m. - 9:30 a.m. | Lacramioara (Lacra) Bintu - High-throughput discovery and characterization of human transcriptional repressor and activator domains (Live Talk, Zoom 1)

Human gene expression is regulated by thousands of proteins that can activate or repress transcription. To predict and control gene expression, we need to know where in these proteins the effector domains are, and how strongly they activate or repress. To systematically measure the function of transcriptional effector domains in human cells, we developed a high-throughput assay in which pooled libraries of thousands of domains are recruited individually to a reporter gene. Cells are then separated by reporter expression level, and the library of protein domains is sequenced to determine the frequency of each domain in silenced versus active cell populations. We used this method to: 1) quantify the activation, silencing, and epigenetic memory capability of all nuclear protein domains annotated in Pfam, including the KRAB family of >300 domains; we find that while evolutionarily young KRABs are strong repressors, some of the old KRABs are activators; 2) characterize the amino acids responsible for effector function via deep mutational scanning, which we applied to the KRAB used in CRISPRi to map the co-repressor binding surface and identify substitutions that improve stability, silencing, and epigenetic memory; 3) discover novel functional domains in unannotated regions of large transcription factors, including repressors as short as 10 amino acids. Together, these results provide a resource of 600 human proteins containing effectors and demonstrate a scalable strategy for assigning functions to protein domains.
Tue 9:00 a.m. - 9:30 a.m. | Jingshu Wang - Model-based trajectory analysis for Single-Cell RNA Sequencing using deep learning with a mixture prior (Live Talk, Zoom 2)
Tue 9:30 a.m. - 10:00 a.m. | Qingyuan Zhao (Live Talk, Zoom 1)
Tue 9:30 a.m. - 10:00 a.m. | Jackson Loper - Latent representations reveal that stationary covariances are always secretly linear (Live Talk, Zoom 2)

We recently found that any continuous stationary covariance for time-series data, no matter how intricate, can be approximated arbitrarily well by a well-behaved parametric family of linear projections of linear stochastic dynamical systems. This family makes efficient exact inference a breeze, even for millions of time points. Applied to ATAC-seq data, this machinery infers smooth representations that encode how chromatin accessibility varies (1) along the one-dimensional topology of each chromosome and (2) throughout the diversity of cells.
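The construction can be illustrated in the simplest discrete-time case: a linear readout of a stable linear-Gaussian state-space model has a covariance that depends only on the lag, i.e. is stationary, and inference for such models scales linearly in the number of time points via Kalman filtering. The two-state parameters below are hypothetical, chosen only to demonstrate the idea:

```python
import numpy as np

# Linear stochastic dynamical system with a linear projection:
#   z_{t+1} = A z_t + eps_t,  eps_t ~ N(0, Q),   y_t = b^T z_t
theta = 0.3
A = 0.95 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])  # stable rotation
Q = 0.1 * np.eye(2)
b = np.array([1.0, 0.0])

# Stationary state covariance: fixed point of P = A P A^T + Q
P = np.eye(2)
for _ in range(500):
    P = A @ P @ A.T + Q
assert np.allclose(P, A @ P @ A.T + Q)

def cov(h):
    # Cov(y_{t+h}, y_t) = b^T A^h P b -- depends only on the lag h,
    # so the induced kernel k(t, s) = cov(|t - s|) is stationary.
    return b @ np.linalg.matrix_power(A, h) @ P @ b

k = np.array([cov(h) for h in range(10)])
assert np.isclose(k[0], b @ P @ b)
```

Richer covariances come from larger state dimensions and different projections; the damped-rotation example above already produces an oscillating, decaying kernel that a naive kernel parameterization would have to fit directly.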
Tue 10:00 a.m. - 10:15 a.m. | 15 min Break - Check out the posters on Gather Town! (GatherTown)
Tue 10:15 a.m. - 10:45 a.m. | Tatyana Sharpee (Live Talk, Zoom 2)
Tue 10:15 a.m. - 10:45 a.m. | Brian Trippe (Live Talk, Zoom 1)
Tue 10:45 a.m. - 11:15 a.m. | Jian Tang (Live Talk, Zoom 1)
Tue 10:45 a.m. - 11:15 a.m. | Antonio Moretti (Live Talk, Zoom 2)
Tue 11:15 a.m. - 11:45 a.m. | Mackenzie Mathis (Live Talk, Zoom 2)
Tue 11:15 a.m. - 11:45 a.m. | Žiga Avsec (Live Talk, Zoom 1)
Tue 11:45 a.m. - 1:15 p.m. | Poster Session in Gather Town! (Poster Session)
Tue 11:45 a.m. - 12:00 p.m. | 15 min Break - Check out the posters on Gather Town! (GatherTown)
Tue 11:45 a.m. - 1:05 p.m. | Panel: How do we define Meaningful Research in ML/Bio? (Discussion Panel, Zoom 1)
Tue 12:00 p.m. - 12:30 p.m. | Bianca Dumitrascu - Beyond multimodality in genomics (Live Talk, Zoom 2)
Author Information
Elizabeth Wood (Broad Institute)
Elizabeth Wood co-founded and co-runs JURA Bio, Inc., an early-stage therapeutics start-up focusing on developing and delivering cell-based therapies for the treatment of autoimmune and immune-related neurodegenerative disease. Before founding JURA, Wood was a post-doc in the lab of Adam Cohen at Harvard, after completing her PhD studies with Angela Belcher and Markus Buehler at MIT, and Claus Hélix-Nielsen at the Technical University of Denmark. She has also worked at the University of Copenhagen's Biocenter with Kresten Lindorff-Larsen, integrating computational methods with experimental studies to understand how the ability of proteins to change their shape helps modulate their function. Elizabeth Wood is a visiting scientist at the Broad Institute, where she serves on the steering committee of the Machine Inference Algorithms Initiative.
Adji Bousso Dieng (Princeton University & Google AI)
Aleksandrina Goeva (Broad Institute)
Anshul Kundaje (Stanford University)
Barbara Engelhardt (Princeton University)
Chang Liu (UC Irvine)
Professor Liu’s research is in the fields of synthetic biology, chemical biology, and directed evolution. He is particularly interested in engineering specialized genetic systems for rapid mutation and evolution of genes in vivo. These systems can then be widely applied for the engineering, discovery, and understanding of biological function.
David Van Valen (Caltech)
Debora Marks (Harvard University)
Debora is a mathematician and computational biologist with a track record of using novel algorithms and statistics to successfully address unsolved biological problems. She has a passion for interpreting genetic variation in a way that impacts biomedical applications. During her PhD, she quantified the pan-genomic scope of microRNA targeting - the combinatorial regulation of protein expression - and co-discovered the first microRNA in a virus. As a postdoc she made a breakthrough in the classic, unsolved problem of ab initio 3D structure prediction of proteins, using undirected graphical probability models of evolutionary sequences. She has developed this approach to determine functional interactions and biomolecular structures, including the 3D structures of RNA and RNA-protein complexes and the conformational ensembles of apparently disordered proteins. Her new lab at Harvard is interested in developing methods in deep learning to address a wide range of biological challenges, including designing drug affinity libraries for large numbers of human genes, predicting epistasis in antibiotic resistance, the effects of genetic variation on human disease etiology and drug response, and sequence design for biosynthetic applications.
Edward Boyden (Massachusetts Institute of Technology)
Eli N Weinstein (Harvard)
Lorin Crawford (Microsoft Research)
I am a Senior Researcher at Microsoft Research New England. I also maintain a faculty position in the School of Public Health as the RGSS Assistant Professor of Biostatistics, with an affiliation in the Center for Computational Molecular Biology at Brown University. The central aim of my research program is to build machine learning algorithms and statistical tools that aid in understanding how nonlinear interactions between genetic features affect the architecture of complex traits and contribute to disease etiology. An overarching theme of the research done in the Crawford Lab group is to take modern computational approaches and develop theory that enables their interpretations to be related back to classical genomic principles. Some of my most recent work has landed me a place on the Forbes 30 Under 30 list and recognition as a member of The Root 100 Most Influential African Americans in 2019. I have also been fortunate enough to be awarded an Alfred P. Sloan Research Fellowship and a David & Lucile Packard Foundation Fellowship for Science and Engineering. Prior to joining both MSR and Brown, I received my PhD from the Department of Statistical Science at Duke University, where I was co-advised by Sayan Mukherjee and Kris C. Wood. As a Duke Dean's Graduate Fellow and NSF Graduate Research Fellow, I completed my PhD dissertation entitled "Bayesian Kernel Models for Statistical Genetics and Cancer Genomics." I also received my Bachelor of Science degree in Mathematics from Clark Atlanta University.
Mor Nitzan (The Hebrew University of Jerusalem)
Mor Nitzan is a Senior Lecturer (Assistant Professor) in the School of Computer Science and Engineering, and affiliated with the Institute of Physics and the Faculty of Medicine, at the Hebrew University of Jerusalem. Her research is at the interface of Computer Science, Physics, and Biology, focusing on the representation, inference and design of multicellular systems. Her group develops computational frameworks to better understand how cells encode multiple layers of spatiotemporal information, and how to efficiently decode that information from single-cell data. They do so by employing concepts derived from diverse fields, including machine learning, information theory and dynamical systems, while working in collaboration with experimentalists and capitalizing on vast publicly available data. Mor aims to uncover organization principles underlying information processing, division of labor, and self-organization of multicellular systems such as tissues, and how cell-to-cell interactions can be manipulated to optimize tissue structure and function. Prior to joining the Hebrew University as a faculty member, Mor was a John Harvard Distinguished Science Fellow and James S. McDonnell Fellow at Harvard University. She completed a BSc in Physics, and obtained a PhD in Physics and Computational Biology at the Hebrew University, working with Profs. Hanah Margalit and Ofer Biham on the interplay between structure and dynamics in gene regulatory networks. She was then hosted as a postdoctoral fellow by Prof. Nir Friedman (Hebrew University) and Prof. Aviv Regev (Broad Institute). Mor is a recipient of the Azrieli Foundation Early Career Faculty Fellowship, Google Research Scholar Award, Researcher Recruitment Award by the Israeli Ministry of Science and Technology, John Harvard Distinguished Science Fellowship, James S. McDonnell Fellowship, and the Schmidt Postdoctoral Award for Women in Mathematical and Computing Sciences.
Romain Lopez (Genentech & Stanford University)
Tamara Broderick (MIT)
Ray Jones (Broad Institute)
Wouter Boomsma (University of Copenhagen)
Yixin Wang (Columbia University)
More from the Same Authors
-
2021 Spotlight: Learning Equilibria in Matching Markets from Bandit Feedback »
Meena Jagadeesan · Alexander Wei · Yixin Wang · Michael Jordan · Jacob Steinhardt -
2021 : Polaris: accurate spot detection for biological images with deep learning and weak supervision »
Emily Laubscher · William Graf · David Van Valen -
2021 : A kernel for continuously relaxed, discrete Bayesian optimization of protein sequences »
Yevgen Zainchkovskyy · Simon Bartels · Søren Hauberg · Jes Frellsen · Wouter Boomsma -
2021 : Bayesian Data Selection »
Eli N Weinstein · Jeffrey Miller -
2021 : Measuring the sensitivity of Gaussian processes to kernel choice »
Will Stephenson · Soumya Ghosh · Tin Nguyen · Mikhail Yurochkin · Sameer Deshpande · Tamara Broderick -
2021 : Desiderata for Representation Learning: A Causal Perspective »
Yixin Wang · Michael Jordan -
2021 : Multi-Group Reinforcement Learning for Maternal Health in Childbirth »
Barbara Engelhardt · Promise Ekpo -
2022 : Multi-fidelity Bayesian experimental design using power posteriors »
Andrew Jones · Diana Cai · Barbara Engelhardt -
2022 : Sequential Gaussian Processes for Online Learning of Nonstationary Functions »
Michael Minyi Zhang · Bianca Dumitrascu · Sinead Williamson · Barbara Engelhardt -
2022 : Multi-group Reinforcement Learning for Electrolyte Repletion »
Promise Ekpo · Barbara Engelhardt -
2022 : A Bayesian Causal Inference Approach for Assessing Fairness in Clinical Decision-Making »
Linying Zhang · Lauren Richter · Yixin Wang · Anna Ostropolets · Noemie Elhadad · David Blei · George Hripcsak -
2022 : How can we use natural evolution and genetic experiments to design protein functions? »
Ada Shaw · June Shin · Debora Marks -
2022 : TranceptEVE: Combining Family-specific and Family-agnostic Models of Protein Sequences for Improved Fitness Prediction »
Pascal Notin · Lodevicus van Niekerk · Aaron Kollasch · Daniel Ritter · Yarin Gal · Debora Marks -
2022 : Kernelized Stein Discrepancies for Biological Sequences »
Alan Amin · Eli Weinstein · Debora Marks -
2022 : scPerturb: Information Resource for Harmonized Single-Cell Perturbation Data »
Tessa Green · Stefan Peidli · Ciyue Shen · Torsten Gross · Joseph Min · Samuele Garda · Jake Taylor-King · Debora Marks · Augustin Luna · Nils Blüthgen · Chris Sander -
2022 : Designing and Evolving Neuron-Specific Proteases »
Han Spinner · Colin Hemez · Julia McCreary · David Liu · Debora Marks -
2022 : Valid Inference after Causal Discovery »
Paula Gradu · Tijana Zrnic · Yixin Wang · Michael Jordan -
2022 : Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling »
Romain Lopez · Nataša Tagasovska · Stephen Ra · Kyunghyun Cho · Jonathan Pritchard · Aviv Regev -
2022 : Interventional Causal Representation Learning »
Kartik Ahuja · Yixin Wang · Divyat Mahajan · Yoshua Bengio -
2022 Workshop: Learning Meaningful Representations of Life »
Elizabeth Wood · Adji Bousso Dieng · Aleksandrina Goeva · Alex X Lu · Anshul Kundaje · Chang Liu · Debora Marks · Ed Boyden · Eli N Weinstein · Lorin Crawford · Mor Nitzan · Rebecca Boiarsky · Romain Lopez · Tamara Broderick · Ray Jones · Wouter Boomsma · Yixin Wang · Stephen Ra -
2022 Workshop: Machine Learning and the Physical Sciences »
Atilim Gunes Baydin · Adji Bousso Dieng · Emine Kucukbenli · Gilles Louppe · Siddharth Mishra-Sharma · Benjamin Nachman · Brian Nord · Savannah Thais · Anima Anandkumar · Kyle Cranmer · Lenka Zdeborová · Rianne van den Berg -
2022 : Dynamic Survival Transformers for Causal Inference with Electronic Health Records »
Prayag Chatha · Yixin Wang · Zhenke Wu · Jeffrey Regier -
2022 Poster: Markov Chain Score Ascent: A Unifying Framework of Variational Inference with Markovian Gradients »
Kyurae Kim · Jisu Oh · Jacob Gardner · Adji Bousso Dieng · Hongseok Kim -
2022 Poster: Non-identifiability and the Blessings of Misspecification in Models of Molecular Fitness »
Eli Weinstein · Alan Amin · Jonathan Frazer · Debora Marks -
2022 Poster: Large-Scale Differentiable Causal Discovery of Factor Graphs »
Romain Lopez · Jan-Christian Huetter · Jonathan Pritchard · Aviv Regev -
2022 Poster: Anticipating Performativity by Predicting from Predictions »
Celestine Mendler-Dünner · Frances Ding · Yixin Wang -
2022 Poster: Empirical Gateaux Derivatives for Causal Inference »
Michael Jordan · Yixin Wang · Angela Zhou -
2021 : Invited Talk 6 Q&A »
Yixin Wang -
2021 : Statistical and Computational Tradeoffs in Variational Bayes »
Yixin Wang -
2021 : Invited talk (ML) - Barbara Engelhardt »
Barbara Engelhardt -
2021 : Bayesian Data Selection »
Eli N Weinstein -
2021 : Invited Talk #7: Romain Lopez »
Romain Lopez -
2021 Workshop: Your Model is Wrong: Robustness and misspecification in probabilistic modeling »
Diana Cai · Sameer Deshpande · Michael Hughes · Tamara Broderick · Trevor Campbell · Nick Foti · Barbara Engelhardt · Sinead Williamson -
2021 Workshop: Machine Learning in Structural Biology »
Ellen Zhong · Raphael Townshend · Stephan Eismann · Namrata Anand · Roshan Rao · John Ingraham · Wouter Boomsma · Sergey Ovchinnikov · Bonnie Berger -
2021 Poster: Consistency Regularization for Variational Auto-Encoders »
Samarth Sinha · Adji Bousso Dieng -
2021 Poster: Can we globally optimize cross-validation loss? Quasiconvexity in ridge regression »
Will Stephenson · Zachary Frangella · Madeleine Udell · Tamara Broderick -
2021 Poster: Posterior Collapse and Latent Variable Non-identifiability »
Yixin Wang · David Blei · John Cunningham -
2021 Poster: For high-dimensional hierarchical models, consider exchangeability of effects across covariates instead of across datasets »
Brian Trippe · Hilary Finucane · Tamara Broderick -
2021 Poster: A generative nonparametric Bayesian model for whole genomes »
Alan Amin · Eli N Weinstein · Debora Marks -
2021 Poster: Learning Equilibria in Matching Markets from Bandit Feedback »
Meena Jagadeesan · Alexander Wei · Yixin Wang · Michael Jordan · Jacob Steinhardt -
2020 : Panel & Closing »
Tamara Broderick · Laurent Dinh · Neil Lawrence · Kristian Lum · Hanna Wallach · Sinead Williamson -
2020 : Frank Noe Intro »
Wouter Boomsma -
2020 Workshop: Machine Learning for Structural Biology »
Raphael Townshend · Stephan Eismann · Ron Dror · Ellen Zhong · Namrata Anand · John Ingraham · Wouter Boomsma · Sergey Ovchinnikov · Roshan Rao · Per Greisen · Rachel Kolodny · Bonnie Berger -
2020 : Chang Liu »
Chang Liu -
2020 : Tamara Broderick »
Tamara Broderick -
2020 Workshop: Machine Learning and the Physical Sciences »
Anima Anandkumar · Kyle Cranmer · Shirley Ho · Mr. Prabhat · Lenka Zdeborová · Atilim Gunes Baydin · Juan Carrasquilla · Adji Bousso Dieng · Karthik Kashinath · Gilles Louppe · Brian Nord · Michela Paganini · Savannah Thais -
2020 Workshop: Learning Meaningful Representations of Life (LMRL.org) »
Elizabeth Wood · Debora Marks · Ray Jones · Adji Bousso Dieng · Alan Aspuru-Guzik · Anshul Kundaje · Barbara Engelhardt · Chang Liu · Edward Boyden · Kresten Lindorff-Larsen · Mor Nitzan · Smita Krishnaswamy · Wouter Boomsma · Yixin Wang · David Van Valen · Orr Ashenberg -
2020 : Invited Talk: Lorin Crawford: A Machine Learning Pipeline for Feature Selection and Association Mapping with 3D Shapes »
Lorin Crawford -
2020 Poster: Point process models for sequence detection in high-dimensional neural spike trains »
Alex Williams · Anthony Degleris · Yixin Wang · Scott Linderman -
2020 Oral: Point process models for sequence detection in high-dimensional neural spike trains »
Alex Williams · Anthony Degleris · Yixin Wang · Scott Linderman -
2020 Poster: Decision-Making with Auto-Encoding Variational Bayes »
Romain Lopez · Pierre Boyeau · Nir Yosef · Michael Jordan · Jeffrey Regier -
2020 Poster: Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics »
Alex Tseng · Avanti Shrikumar · Anshul Kundaje -
2020 Poster: Approximate Cross-Validation for Structured Models »
Soumya Ghosh · Will Stephenson · Tin Nguyen · Sameer Deshpande · Tamara Broderick -
2020 Affinity Workshop: Women in Machine Learning »
Xinyi Chen · Erin Grant · Kristy Choi · Krystal Maughan · Xenia Miscouridou · Judy Hanwen Shen · Raquel Aoki · Belén Saldías · Mel Woghiren · Elizabeth Wood -
2020 Poster: Approximate Cross-Validation with Low-Rank Data in High Dimensions »
Will Stephenson · Madeleine Udell · Tamara Broderick -
2019 : Surya Ganguli, Yasaman Bahri, Florent Krzakala moderated by Lenka Zdeborova »
Florent Krzakala · Yasaman Bahri · Surya Ganguli · Lenka Zdeborová · Adji Bousso Dieng · Joan Bruna -
2019 : Synthetic Systems »
Pamela Silver · Debora Marks · Chang Liu · Possu Huang -
2019 : In conversations: Daphne Koller and Barbara Englehardt »
Daphne Koller · Barbara Engelhardt -
2019 Workshop: Learning Meaningful Representations of Life »
Elizabeth Wood · Yakir Reshef · Jonathan Bloom · Jasper Snoek · Barbara Engelhardt · Scott Linderman · Suchi Saria · Alexander Wiltschko · Casey Greene · Chang Liu · Kresten Lindorff-Larsen · Debora Marks -
2019 Poster: Variational Bayes under Model Misspecification »
Yixin Wang · David Blei -
2019 Poster: Using Embeddings to Correct for Unobserved Confounding in Networks »
Victor Veitch · Yixin Wang · David Blei -
2018 : Invited Talk Session 2 »
Debora Marks · Olexandr Isayev · Tess Smidt · Nathaniel Thomas -
2018 : Barbara Engelhardt »
Barbara Engelhardt -
2018 : Research Panel »
Sinead Williamson · Barbara Engelhardt · Tom Griffiths · Neil Lawrence · Hanna Wallach -
2018 : TBC 4 »
Debora Marks -
2018 Workshop: All of Bayesian Nonparametrics (Especially the Useful Bits) »
Diana Cai · Trevor Campbell · Michael Hughes · Tamara Broderick · Nick Foti · Sinead Williamson -
2018 Poster: Information Constraints on Auto-Encoding Variational Bayes »
Romain Lopez · Jeffrey Regier · Michael Jordan · Nir Yosef -
2018 Poster: 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data »
Maurice Weiler · Wouter Boomsma · Mario Geiger · Max Welling · Taco Cohen -
2017 : Functional Data Analysis using a Topological Summary Statistic: the Smooth Euler Characteristic Transform, »
Lorin Crawford -
2017 Workshop: Advances in Approximate Bayesian Inference »
Francisco Ruiz · Stephan Mandt · Cheng Zhang · James McInerney · James McInerney · Dustin Tran · Dustin Tran · David Blei · Max Welling · Tamara Broderick · Michalis Titsias -
2017 Spotlight: Spherical convolutions and their application in molecular modelling »
Wouter Boomsma · Jes Frellsen -
2017 Poster: Spherical convolutions and their application in molecular modelling »
Wouter Boomsma · Jes Frellsen -
2017 Poster: PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference »
Jonathan Huggins · Ryan Adams · Tamara Broderick -
2017 Spotlight: PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference »
Jonathan Huggins · Ryan Adams · Tamara Broderick -
2017 Poster: Variational Inference via $\chi$ Upper Bound Minimization »
Adji Bousso Dieng · Dustin Tran · Rajesh Ranganath · John Paisley · David Blei -
2016 Workshop: Machine Learning in Computational Biology »
Gerald Quon · Sara Mostafavi · James Y Zou · Barbara Engelhardt · Oliver Stegle · Nicolo Fusi -
2016 : Tamara Broderick: Foundations Talk »
Tamara Broderick -
2016 Workshop: Advances in Approximate Bayesian Inference »
Tamara Broderick · Stephan Mandt · James McInerney · Dustin Tran · David Blei · Kevin Murphy · Andrew Gelman · Michael I Jordan -
2016 Workshop: Practical Bayesian Nonparametrics »
Nick Foti · Tamara Broderick · Trevor Campbell · Michael Hughes · Jeffrey Miller · Aaron Schein · Sinead Williamson · Yanxun Xu -
2016 Poster: Unsupervised Learning from Noisy Networks with Applications to Hi-C Data »
Bo Wang · Junjie Zhu · Armin Pourshafeie · Oana Ursu · Serafim Batzoglou · Anshul Kundaje -
2016 Poster: Coresets for Scalable Bayesian Logistic Regression »
Jonathan Huggins · Trevor Campbell · Tamara Broderick -
2016 Poster: Edge-exchangeable graphs and sparsity »
Diana Cai · Trevor Campbell · Tamara Broderick -
2016 Poster: Automated scalable segmentation of neurons from multispectral images »
Uygar Sümbül · Douglas Roossien · Dawen Cai · Fei Chen · Nicholas Barry · John Cunningham · Edward Boyden · Liam Paninski -
2015 Workshop: Bayesian Nonparametrics: The Next Generation »
Tamara Broderick · Nick Foti · Aaron Schein · Alex Tank · Hanna Wallach · Sinead Williamson -
2015 Workshop: Advances in Approximate Bayesian Inference »
Dustin Tran · Tamara Broderick · Stephan Mandt · James McInerney · Shakir Mohamed · Alp Kucukelbir · Matthew D. Hoffman · Neil Lawrence · David Blei -
2015 Poster: Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes »
Ryan Giordano · Tamara Broderick · Michael Jordan -
2015 Spotlight: Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes »
Ryan Giordano · Tamara Broderick · Michael Jordan -
2014 Workshop: Advances in Variational Inference »
David Blei · Shakir Mohamed · Michael Jordan · Charles Blundell · Tamara Broderick · Matthew D. Hoffman -
2014 Workshop: Machine Learning in Computational Biology »
Oliver Stegle · Sara Mostafavi · Anna Goldenberg · Su-In Lee · Michael Leung · Anshul Kundaje · Mark B Gerstein · Martin Renqiang Min · Hannes Bretschneider · Francesco Paolo Casale · Loïc Schwaller · Amit G Deshwar · Benjamin A Logsdon · Yuanyang Zhang · Ali Punjani · Derek C Aguiar · Samuel Kaski -
2013 Poster: Optimistic Concurrency Control for Distributed Unsupervised Learning »
Xinghao Pan · Joseph Gonzalez · Stefanie Jegelka · Tamara Broderick · Michael Jordan -
2013 Poster: Streaming Variational Bayes »
Tamara Broderick · Nicholas Boyd · Andre Wibisono · Ashia C Wilson · Michael Jordan