Neural processes (NPs) formulate exchangeable stochastic processes and are promising models for meta-learning that require no gradient updates at test time. However, most NP variants place a strong emphasis on a single global latent variable, which weakens their approximation power and restricts the scope of their applications, especially when the data-generating processes are complicated. To resolve these issues, we propose to combine Mixture of Experts models with Neural Processes to develop more expressive exchangeable stochastic processes, referred to as Mixture of Expert Neural Processes (MoE-NPs). We then apply MoE-NPs to both few-shot supervised learning and meta reinforcement learning tasks. Empirical results demonstrate that MoE-NPs generalize strongly to unseen tasks in these benchmarks.
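To make the core idea concrete, below is a minimal sketch of a Neural-Process-style model whose decoder is a mixture of experts selected by a learned gating network. This is not the authors' implementation: the module names, mean-pooling encoder, Gaussian expert heads, and all sizes are illustrative assumptions only.

```python
# Minimal sketch (assumptions, not the paper's architecture): a set encoder with
# mean pooling, a gating network over K expert decoders, and a mixture prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoENPSketch(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, n_experts=3):
        super().__init__()
        # Encoder: maps each context pair (x, y) to a representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim)
        )
        # Gating network: turns the aggregated context into expert probabilities.
        self.gate = nn.Linear(r_dim, n_experts)
        # Expert decoders: each predicts a Gaussian over y given (x, r).
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
                nn.Linear(r_dim, 2 * y_dim)
            )
            for _ in range(n_experts)
        )

    def forward(self, x_context, y_context, x_target):
        # Permutation-invariant aggregation over the context set (mean pooling),
        # which keeps predictions exchangeable in the context points.
        r = self.encoder(torch.cat([x_context, y_context], dim=-1)).mean(dim=1)
        gate_probs = torch.softmax(self.gate(r), dim=-1)            # (B, K)
        r_rep = r.unsqueeze(1).expand(-1, x_target.size(1), -1)     # (B, T, r_dim)
        inp = torch.cat([x_target, r_rep], dim=-1)
        means, stds = [], []
        for expert in self.experts:
            mu, raw_sigma = expert(inp).chunk(2, dim=-1)
            means.append(mu)
            stds.append(0.1 + 0.9 * F.softplus(raw_sigma))
        # Gate-weighted mixture prediction over the target points.
        mean = torch.stack(means, dim=-1)                           # (B, T, y_dim, K)
        std = torch.stack(stds, dim=-1)
        w = gate_probs.view(-1, 1, 1, gate_probs.size(-1))
        return (mean * w).sum(-1), (std * w).sum(-1)
```

As a usage sketch, calling `MoENPSketch()(x_c, y_c, x_t)` with context tensors of shape `(batch, n_context, dim)` and targets of shape `(batch, n_target, dim)` returns a predictive mean and standard deviation per target point; training and the latent-variable treatment in the actual paper may differ.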
Author Information
Qi Wang (University of Amsterdam)
Herke van Hoof (University of Amsterdam)
More from the Same Authors
- 2022: Training graph neural networks with policy gradients to perform tree search
  Matthew Macfarlane · Diederik Roijers · Herke van Hoof
- 2022 Poster: Neural Topological Ordering for Computation Graphs
  Mukul Gagrani · Corrado Rainone · Yang Yang · Harris Teague · Wonseok Jeon · Roberto Bondesan · Herke van Hoof · Christopher Lott · Weiliang Zeng · Piero Zappi
- 2020 Poster: Experimental design for MRI by greedy policy search
  Tim Bakker · Herke van Hoof · Max Welling
- 2020 Spotlight: Experimental design for MRI by greedy policy search
  Tim Bakker · Herke van Hoof · Max Welling
- 2020 Poster: MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning
  Elise van der Pol · Daniel E Worrall · Herke van Hoof · Frans Oliehoek · Max Welling