Poster
Adversarial Task Up-sampling for Meta-learning
Yichen WU · Long-Kai Huang · Ying Wei

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #425

The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers the meta-testing tasks. This assumption is frequently violated in applications with either insufficient tasks or a very narrow meta-training task distribution, leading to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, but generating tasks that are both correct and sufficiently imaginary remains an open question. In this paper, we propose an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. The resulting approach, named Adversarial Task Up-sampling (ATU), further generates tasks that maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate the marked improvement of ATU over state-of-the-art task augmentation strategies, both in meta-testing performance and in the quality of the up-sampled tasks.
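The abstract describes the method only at a high level, so the following is a minimal PyTorch sketch of the adversarial up-sampling idea under stated assumptions, not the authors' implementation. The `TaskUpsampler` name, architecture, mean-embedding task representation, and the caller-supplied `meta_loss_fn` are all illustrative assumptions; the sketch shows only the core mechanism, a generator updated by gradient ascent on the meta-learner's loss so that synthesized tasks target the latest meta-learner's weaknesses.

```python
import torch
import torch.nn as nn


class TaskUpsampler(nn.Module):
    """Hypothetical task up-sampling network (illustrative; the name,
    shapes, and architecture are assumptions, not the paper's).
    Maps a task representation to a new synthetic task."""

    def __init__(self, input_dim: int, task_size: int, hidden: int = 128):
        super().__init__()
        self.task_size, self.input_dim = task_size, input_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, task_size * input_dim),
        )

    def forward(self, task_repr: torch.Tensor) -> torch.Tensor:
        # (batch, input_dim) -> (batch, task_size, input_dim)
        return self.net(task_repr).view(-1, self.task_size, self.input_dim)


def adversarial_upsampling_step(upsampler, meta_loss_fn, real_tasks, optimizer):
    """One adversarial update: synthesize tasks that maximize the current
    meta-learner's loss, so generated tasks probe where it is weakest."""
    # Crude task representation: the mean over each task's examples
    # (the paper's actual task encoding may differ).
    task_repr = real_tasks.mean(dim=1)
    synthetic_tasks = upsampler(task_repr)
    # meta_loss_fn is assumed to evaluate the meta-learner on the tasks
    # and be differentiable with respect to the task inputs.
    loss = meta_loss_fn(synthetic_tasks)
    optimizer.zero_grad()
    (-loss).backward()  # negate for gradient *ascent* on the loss
    optimizer.step()
    return synthetic_tasks.detach()


# Usage sketch on toy sine-regression-shaped data; the squared-magnitude
# loss is a stand-in for a real meta-learner's loss.
upsampler = TaskUpsampler(input_dim=1, task_size=10)
opt = torch.optim.Adam(upsampler.parameters(), lr=1e-3)
tasks = torch.randn(32, 10, 1)  # 32 tasks of 10 one-dimensional points
new_tasks = adversarial_upsampling_step(
    upsampler, lambda t: t.pow(2).mean(), tasks, opt
)
```

In the paper's framing, maximizing the adversarial loss is what makes the up-sampled tasks "maximally contribute to the latest meta-learner"; in practice the generated tasks would also be constrained to stay consistent with the real task distribution, which this sketch omits.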

Author Information

Yichen WU (City University of Hong Kong)
Long-Kai Huang (Nanyang Technological University)
Ying Wei (City University of Hong Kong)
