Spotlight

Multi-task Gaussian Process Prediction

Edwin Bonilla · Kian Ming A. Chai · Chris Williams

Abstract:

In this paper we investigate multi-task learning in the context of Gaussian processes (GPs). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values, and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
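A minimal sketch of the covariance structure the abstract describes: the joint covariance over all tasks and inputs factors as the Kronecker product of a free-form inter-task covariance K_f and a shared input covariance K_x. The sketch below assumes an RBF input kernel and parameterises K_f through its Cholesky factor so it stays positive semi-definite; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Shared input covariance k_x(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * sq / lengthscale**2)

def multitask_predict(X, Y, Xs, L, lengthscale=1.0, jitter=1e-6):
    """Posterior mean for all m tasks at test inputs Xs under a block design:
    every task is observed at every row of X, so Y has shape (n, m).
    The joint train covariance is the Kronecker product K_f (x) K_x."""
    Kf = L @ L.T                          # free-form inter-task covariance
    Kx = rbf_kernel(X, X, lengthscale)    # shared input covariance
    Ks = rbf_kernel(Xs, X, lengthscale)   # test-vs-train input covariance
    n, m = Y.shape
    K = np.kron(Kf, Kx) + jitter * np.eye(n * m)
    # vec(Y) stacked task by task matches the Kronecker block ordering
    alpha = np.linalg.solve(K, Y.T.reshape(-1))
    mean = np.kron(Kf, Ks) @ alpha
    return mean.reshape(m, -1).T          # (n_test, m) predictions

# Tiny usage example: two correlated tasks on a 1-D input.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
f = np.sin(X).ravel()
Y = np.column_stack([f, 0.8 * f])         # task 2 is a scaled copy of task 1
L = np.array([[1.0, 0.0], [0.8, 0.1]])    # Cholesky factor of K_f
Xs = np.linspace(-3, 3, 5)[:, None]
print(multitask_predict(X, Y, Xs, L))
```

The small jitter term stands in for the paper's noise-free setting; with it near zero and a full block design, predictions for each task are driven by that task's own targets, illustrating the cancellation of inter-task transfer the abstract mentions.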
