NIPS 2013


Workshop

Topic Models: Computation, Application, and Evaluation

David Mimno · Amr Ahmed · Jordan Boyd-Graber · Ankur Moitra · Hanna Wallach · Alexander Smola · David Blei · Anima Anandkumar

Harvey's Emerald Bay B

Since the most recent NIPS topic model workshop in 2010, interest in statistical topic modeling has continued to grow in a wide range of research areas, from theoretical computer science to English literature. The goal of this workshop, which marks the 10th anniversary of the original LDA NIPS paper, is to bring together researchers from the NIPS community and beyond to share results, ideas, and perspectives.

We will organize the workshop around the following three themes:

Computation: The computationally intensive process of training topic models has been a useful testbed for novel inference methods in machine learning, such as stochastic variational inference and spectral inference. Theoretical computer scientists have used LDA as a test case to begin to establish provable bounds in unsupervised machine learning. This workshop will provide a forum for researchers developing new inference methods and theoretical analyses to present work in progress, as well as for practitioners to learn about state-of-the-art research in efficient and provable computing.
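As one concrete illustration of such an inference method (a minimal sketch for readers new to the area, not part of the workshop program), the following Python snippet fits LDA with online (stochastic) variational Bayes via scikit-learn; the toy corpus and all hyperparameter settings are assumptions chosen only for brevity.

# Minimal sketch: LDA trained with online (stochastic) variational inference.
# Assumes scikit-learn is installed; the corpus and settings are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic models describe documents as mixtures of topics",
    "variational inference scales topic models to large corpora",
    "spectral methods give provable guarantees for learning LDA",
    "digital humanities researchers explore archives with topic models",
]

# Bag-of-words representation of the toy corpus.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# learning_method="online" runs stochastic variational Bayes over mini-batches.
lda = LatentDirichletAllocation(
    n_components=2, learning_method="online", batch_size=2, random_state=0
)
lda.fit(X)

# Print the top words of each learned topic.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))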

Applications: Topic models are now commonly used in a broad array of applications to solve real-world problems, from questions in digital humanities and computational social science to e-commerce and government science policy. This workshop will share new application areas and discuss our experiences adapting general tools to the particular needs of different settings. Participants will look for commonalities between diverse applications, while also using the particular challenges of each application to define theoretical research agendas.

Evaluation: A key strength of topic modeling is its exceptional capability for exploratory analysis, but evaluating such use can be challenging: there may be no single right answer. As topic models become widely used outside machine learning, it becomes increasingly important to find evaluation strategies that match user needs. The workshop will focus both on the specifics of individual evaluation metrics and on the more general process of iteratively criticizing and improving models. We will also consider questions of interface design, visualization, and user experience.
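As one concrete example of an individual evaluation metric (an illustration under stated assumptions, not a workshop prescription), the sketch below computes a UMass-style topic coherence score from document co-occurrence counts, in the spirit of Mimno et al. (2011); the function name, toy inputs, and tokenization are assumptions.

import math
from itertools import combinations

def umass_coherence(top_words, documents):
    """UMass-style coherence: sum over word pairs from a topic's top words of
    log((D(w_high, w_low) + 1) / D(w_high)), where D counts documents and
    top_words is ordered from most to least probable under the topic.
    documents is a list of token lists."""
    doc_sets = [set(doc) for doc in documents]

    def doc_count(*words):
        # Number of documents containing all of the given words.
        return sum(1 for d in doc_sets if all(w in d for w in words))

    score = 0.0
    # Pair each word with every word ranked above it; assumes each top word
    # occurs in at least one document (true for top words of a fitted topic).
    for i, j in combinations(range(len(top_words)), 2):
        w_high, w_low = top_words[i], top_words[j]
        score += math.log((doc_count(w_high, w_low) + 1) / doc_count(w_high))
    return score

# Toy usage: higher (less negative) scores indicate more coherent topics.
docs = [["topic", "model", "corpus"], ["topic", "model"], ["corpus", "archive"]]
print(umass_coherence(["topic", "model", "corpus"], docs))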

Program committee (confirmed):

Edo Airoldi (Harvard), Laura Dietz (UMass), Jacob Eisenstein (GTech), Justin Grimmer (Stanford), Yoni Halpern (NYU), Daniel Hsu (Columbia), Brendan O'Connor (CMU), Michael Paul (JHU), Eric Ringger (BYU), Brandon Stewart (Harvard), Chong Wang (CMU), Sinead Williamson (UT-Austin)
