Poster
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
Hongyuan Mei · Jason Eisner

Mon Dec 4th 06:30 -- 10:30 PM @ Pacific Ballroom #69

Many events occur in the world. Some event types are stochastically excited or inhibited—in the sense of having their probabilities elevated or decreased—by patterns in the sequence of previous events. Discovering such patterns can help us predict which type of event will happen next and when. We model streams of discrete events in continuous time, by constructing a neurally self-modulating multivariate point process in which the intensities of multiple event types evolve according to a novel continuous-time LSTM. This generative model allows past events to influence the future in complex and realistic ways, by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events. Our model has desirable qualitative properties. It achieves competitive likelihood and predictive accuracy on real and synthetic datasets, including under missing-data conditions.
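To make the idea concrete, here is a hypothetical sketch (not the authors' released code) of the intensity parameterization the abstract describes: between events, each memory cell of the continuous-time LSTM decays exponentially toward a target value, and the intensity of each event type is a positive transform (a scaled softplus) of a linear readout of the decayed hidden state. All dimensions and parameter values below are made up for illustration.

```python
# Hypothetical sketch of neural-Hawkes-style intensities; parameters are
# random stand-ins for what a trained continuous-time LSTM would provide.
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 3            # hidden size, number of event types (illustrative)

c     = rng.normal(size=D)                      # cell state just after the last event
c_bar = rng.normal(size=D)                      # target the cell decays toward
delta = np.exp(rng.normal(size=D))              # positive per-dimension decay rates
o     = 1 / (1 + np.exp(-rng.normal(size=D)))   # output gate in (0, 1)
W     = rng.normal(size=(K, D))                 # per-type readout weights
s     = np.exp(rng.normal(size=K))              # positive softplus scales

def intensities(t_since_last_event: float) -> np.ndarray:
    """Intensity lambda_k(t) of each event type k after the given elapsed time."""
    # Cell decays exponentially between events, so intensities drift even
    # when no new event arrives -- the "self-modulating" behavior.
    c_t = c_bar + (c - c_bar) * np.exp(-delta * t_since_last_event)
    h_t = o * np.tanh(c_t)                      # decayed hidden state
    return s * np.log1p(np.exp(W @ h_t / s))    # scaled softplus keeps rates > 0

lam = intensities(0.5)
print(lam)  # K positive intensities; they change smoothly as time elapses
```

Because the softplus output can sit close to zero, past events can effectively inhibit an event type as well as excite it, which is the flexibility the abstract contrasts with classical Hawkes processes.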

Author Information

Hongyuan Mei (JOHNS HOPKINS UNIVERSITY)

I am a second-year Ph.D. student (2016-) in the Department of Computer Science at Johns Hopkins University, affiliated with the Center for Language and Speech Processing, where I am advised by Jason Eisner. My research interests are rooted in designing models and algorithms to solve challenging real-life problems (especially in continuous-time scheduling and natural language processing). I am currently working on continuous-time sequential modeling (e.g., the neural Hawkes process).

Jason Eisner (Johns Hopkins University)

Jason Eisner is Professor of Computer Science at Johns Hopkins University, as well as Director of Research at Microsoft Semantic Machines. He is a Fellow of the Association for Computational Linguistics. At Johns Hopkins, he is also affiliated with the Center for Language and Speech Processing, the Machine Learning Group, the Cognitive Science Department, and the national Center of Excellence in Human Language Technology. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 135+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a new declarative programming language that provides an infrastructure for AI research. He has received two school-wide awards for excellence in teaching, as well as recent Best Paper Awards at ACL 2017 and EMNLP 2019.
