Invited Talks
Computations in Human Sensorimotor Control
The effortless ease with which humans move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. While computers can now beat grandmasters at chess, no computer can yet control a robot to manipulate a chess piece with the dexterity of a six-year-old child. I will review our recent work on how humans learn to make skilled movements, covering structural learning and generalization, how we learn the dynamics of tools, and how we make decisions in the face of uncertainty.
Speaker
Daniel M Wolpert
Daniel Wolpert read medical sciences at Cambridge and clinical medicine at Oxford. After working as a medical doctor for a year, he completed a PhD in the Physiology Department at Oxford. He then worked as a postdoctoral fellow at MIT before moving to the Institute of Neurology, UCL. In 2005 he took up the post of Professor of Engineering for the Life Sciences at the University of Cambridge and is a Fellow of Trinity College. His research interests are computational and experimental approaches to human sensorimotor control (www.wolpertlab.com).
Online Stochastic Combinatorial Optimization
Advances in telecommunication technologies, combined with the
increasingly integrated nature of optimization applications, create a
wealth of online optimization problems in scheduling, routing, and
resource allocation. Moreover, in many applications, stochastic and
simulation models, or massive amounts of historical data, are typically
available to the decision-maker.
This talk presents an overview of online anticipatory algorithms for
addressing this new class of applications and reports on their
performance in a variety of settings. Anticipatory algorithms make
decisions online by conditionally sampling a distribution and solving
the resulting optimization problems. Interestingly, many of these
algorithms feature an innovative integration of artificial intelligence,
discrete optimization, and stochastic programming techniques.
Speaker
Pascal Van Hentenryck
Pascal Van Hentenryck is a professor of computer science at Brown
University and the director of the optimization laboratory. During the
past 20 years, he developed a number of influential systems, including
the pioneering CHIP system which is the foundation of all modern
constraint programming systems, the Numerica system for global
optimization, the optimization programming language OPL, and the
programming language Comet which supports constraint-based local
search, constraint programming, and mathematical programming. Most of
these systems, and their foundations, are described in books published
by the MIT Press and have been licensed to industry. His current
research in online stochastic optimization integrates techniques from
artificial intelligence, stochastic optimization, and combinatorial
optimization to tackle complex decision-making applications under
uncertainty.
Van Hentenryck is the recipient of a 1993 NSF National Young
Investigator (NYI) award, the 2002 INFORMS ICS Award for research
excellence at the interface between computer science and operations
research, the 2006 ACP Award for Research Excellence in Constraint
Programming, best paper awards at CP'03, CP'04, and IJCAI'07, and an
IBM Faculty Award in 2004. He has given invited talks at many
premier conferences in artificial intelligence, operations research,
and programming languages, including IJCAI'97, CP'97, UAI'06,
CPAIOR'08, SIOP'08, and ECAI'08.
Connectome: The Quest to Deconstruct the Brain
Recent innovations in 3D nanoscale imaging are expected to produce teravoxel- and petavoxel-sized images of the brain's neural networks. These datasets will only become useful for neuroscience if computer scientists can develop algorithms for automated image analysis. Chief among the challenges is accurate tracing of the "wires" of the brain, its axons and dendrites, through the 3D images. Achieving the necessary accuracy will require the use of machine learning, rather than hand-designed algorithms. If the tracing problem is solved, it will become possible to create automated systems that take a sample of brain tissue as input and generate its "wiring diagram" or "connectome". Such systems would revolutionize neuroscience by giving rise to a new field called "connectomics," defined by the high-throughput generation of data about neural connectivity, and the subsequent mining of that data for knowledge about the brain. I will discuss the impact that connectomics could have on our understanding of how the brain wires and rewires itself, the dynamics of activity in neural networks, and the neuropathological basis of mental disorders.
Speaker
Machine Learning in High Energy Physics
I begin with a brief discussion of the nature of high energy physics and
follow with a review of a few real-world examples of the application of
machine learning methods in this field. I focus on the common but
difficult task of extracting small signals masked by enormous
backgrounds. The talk ends with a discussion of the computational
challenges we expect to face in the very near future at the Large Hadron
Collider and an enumeration of what my colleagues and I see as open
questions.
Speaker
Harrison B Prosper
Harrison Prosper received his doctorate in particle physics from the University of
Manchester, England, in 1980 and, from 1982 to 1986, was a postdoctoral
fellow at the Rutherford Appleton Laboratory, but stationed at the
Institut Laue-Langevin, Grenoble, France. In 1988, after a brief stint at
Virginia Tech, Blacksburg, he joined the Fermi National Accelerator
Laboratory as an Associate Scientist. In 1993, he joined the faculty at
Florida State University, became a full professor in 1998, was elected a
fellow of the American Physical Society in 2002, and became the Kirby W.
Kemper Professor of Physics in 2006. A principal interest of his is the
application of machine learning and Bayesian methods to particle physics
research.
Theory of Mind with fMRI
Externally observable components of human actions carry only a tiny fraction of the information that matters. Human observers are vastly more interested in perceiving or inferring the mental states - the beliefs, desires and intentions - that lie behind the observable shell. If a person checks her watch, is she uncertain about the time, late for an appointment, or bored with the conversation? If a person shoots his friend on a hunting trip, did he intend revenge or just mistake his friend for a partridge? The mechanism people use to infer and reason about another person's states of mind is called a 'Theory of Mind' (ToM). One of the most striking discoveries of recent human cognitive neuroscience is that there is a group of brain regions in human cortex that selectively and specifically underlie this mechanism. I will describe recent studies from my lab characterising the functional profile of one of these regions, the right temporo-parietal junction. The challenge for the future remains: to construct an adequate computational description of a neurally implemented mechanism that could reason about another person's thoughts.
Speaker
Rebecca Saxe
Rebecca Saxe studied Psychology and Philosophy at Oxford, and then received a PhD in Cognitive Science from MIT, working under the supervision of Nancy Kanwisher. For the next three years, she was a Junior Fellow at Harvard University, moonlighting in the developmental psychology lab of Susan Carey. Since 2006 she has been an assistant professor of Cognitive Neuroscience, back at MIT. Her work investigates the development and neural basis of human social cognition (saxelab.mit.edu).