

Poster

Preference Based Adaptation for Learning Objectives

Yao-Xiang Ding · Zhi-Hua Zhou

Room 517 AB #163

Keywords: [ Bandit Algorithms ] [ Online Learning ] [ Ranking and Preference Learning ] [ Multitask and Transfer Learning ] [ Boosting and Ensemble Methods ]


Abstract:

In many real-world learning tasks, the true performance measures are hard to optimize directly, and choosing the right surrogate objectives is also difficult. In this situation, it is desirable to incorporate an objective-optimization process into the learning loop, based on weak modeling of the relationship between the true measure and the objective. In this work, we discuss the task of objective adaptation, in which the learner iteratively adapts the learning objective to the underlying true objective based on preference feedback from an oracle. We show that when the objective can be linearly parameterized, this preference-based learning problem can be solved with the dueling bandit model. A novel sampling-based algorithm, DL^2M, is proposed to learn the optimal parameter; it enjoys strong theoretical guarantees and efficient empirical performance. To avoid learning a hypothesis from scratch after each objective function update, a boosting-based hypothesis adaptation approach is proposed to efficiently adapt any pre-learned element hypothesis to the current objective. We apply the overall approach to multi-label learning, and show that it achieves strong performance under various multi-label performance measures.
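Below is a minimal, hypothetical sketch of the preference-driven loop described in the abstract, not the paper's DL^2M algorithm: the objective is linearly parameterized by a weight vector over surrogate losses, a perturbed candidate parameter is compared against the current one by training hypotheses and querying a preference oracle, and the weights move toward whichever candidate the oracle prefers. The helpers fit_hypothesis and oracle_prefers are assumed placeholders for problem-specific components.

import numpy as np

# Minimal sketch of a preference-driven objective adaptation loop.
# This is NOT the paper's DL^2M algorithm; fit_hypothesis and oracle_prefers
# are hypothetical placeholders standing in for problem-specific components.

def fit_hypothesis(weights, data):
    """Hypothetical: train a model minimizing the weighted surrogate objective."""
    raise NotImplementedError

def oracle_prefers(h_candidate, h_current):
    """Hypothetical oracle: True if h_candidate is preferred under the true measure."""
    raise NotImplementedError

def adapt_objective(data, dim, rounds=50, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.ones(dim) / dim                       # weights of the linear objective
    for _ in range(rounds):
        # Propose a perturbed candidate parameter along a random direction.
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        w_candidate = w + step * direction

        # Train hypotheses under the current and candidate objectives,
        # then ask the oracle for a pairwise preference (a "duel").
        h_current = fit_hypothesis(w, data)
        h_candidate = fit_hypothesis(w_candidate, data)
        if oracle_prefers(h_candidate, h_current):
            w = w_candidate                      # move toward the preferred objective
    return w

In practice, retraining a hypothesis from scratch at every comparison is expensive, which is the gap the paper's boosting-based hypothesis adaptation is designed to close.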
