

Poster

Inference Aided Reinforcement Learning for Incentive Mechanism Design in Crowdsourcing

Zehong Hu · Yitao Liang · Jie Zhang · Zhao Li · Yang Liu

Room 210 #69

Keywords: [ Markov Decision Processes ] [ Reinforcement Learning ] [ Game Theory and Computational Economics ] [ Bayesian Nonparametrics ]


Abstract:

Incentive mechanisms for crowdsourcing are designed to incentivize financially self-interested workers to generate and report high-quality labels. Existing mechanisms are often developed as one-shot static solutions, assuming a certain level of knowledge about worker models (expertise levels, costs for exerting effort, etc.). In this paper, we propose a novel inference-aided reinforcement mechanism that acquires data sequentially and requires no such prior assumptions. Specifically, we first design a Gibbs-sampling-augmented Bayesian inference algorithm to estimate workers' labeling strategies from the labels collected at each step. Building on these estimates, we then propose a reinforcement incentive learning (RIL) method that uncovers how workers respond to different payments. RIL dynamically determines payments without accessing any ground-truth labels. We theoretically prove that RIL incentivizes rational workers to provide high-quality labels both at each step and in the long run. Empirical results show that our mechanism performs consistently well under both rational and non-fully-rational (adaptive-learning) worker models. In addition, the payments offered by RIL are more robust and have lower variance than those of existing one-shot mechanisms.
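To make the two-stage pipeline concrete, below is a minimal, hypothetical sketch (not the authors' implementation). It assumes binary labels and a simple symmetric-accuracy worker model with a Beta prior: a Gibbs sampler alternately resamples latent true labels and worker accuracies, and a bandit-style epsilon-greedy learner stands in for RIL, choosing among a few discrete payment levels using the inferred label quality minus payment cost as its reward, with no access to ground-truth labels. All function names, worker models, and parameters here are illustrative assumptions.

```python
# Illustrative sketch only: binary tasks, workers modeled by one accuracy
# parameter with a Beta(2, 2) prior, payments picked from a small discrete set.
import numpy as np

rng = np.random.default_rng(0)

def gibbs_estimate(labels, n_iter=200, burn_in=100):
    """Estimate true labels and worker accuracies from a (tasks x workers) 0/1 matrix."""
    n_tasks, n_workers = labels.shape
    true = rng.integers(0, 2, size=n_tasks)          # initialize latent true labels
    acc = np.full(n_workers, 0.7)                    # initialize worker accuracies
    acc_samples, label_votes = [], np.zeros(n_tasks)
    for it in range(n_iter):
        # Resample each task's true label given current worker accuracies.
        for i in range(n_tasks):
            reported_one = labels[i] == 1
            p1 = np.prod(np.where(reported_one, acc, 1 - acc))   # likelihood if true = 1
            p0 = np.prod(np.where(~reported_one, acc, 1 - acc))  # likelihood if true = 0
            true[i] = rng.random() < p1 / (p1 + p0)
        # Resample each worker's accuracy from its Beta posterior.
        correct = (labels == true[:, None]).sum(axis=0)
        acc = rng.beta(2 + correct, 2 + n_tasks - correct)
        if it >= burn_in:
            acc_samples.append(acc.copy())
            label_votes += true
    est_labels = (label_votes / (n_iter - burn_in)) > 0.5
    return est_labels.astype(int), np.mean(acc_samples, axis=0)

def simulate_workers(payment, truth, n_workers=5):
    """Hypothetical rational workers: higher payment leads to higher effort/accuracy."""
    acc = 0.5 + 0.45 * min(payment, 1.0)
    keep = rng.random((truth.size, n_workers)) < acc
    return np.where(keep, truth[:, None], 1 - truth[:, None])

# Bandit-style payment selection standing in for RIL: track a running average of
# (estimated label quality - payment cost) per payment level, choose epsilon-greedily.
payments = np.array([0.1, 0.5, 1.0])
value, counts = np.zeros(3), np.zeros(3)
for step in range(50):
    a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(value))
    truth = rng.integers(0, 2, size=20)              # never revealed to the mechanism
    labels = simulate_workers(payments[a], truth)
    _, acc_hat = gibbs_estimate(labels)
    reward = acc_hat.mean() - 0.3 * payments[a]      # inferred quality minus payment cost
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]
print("preferred payment level:", payments[int(np.argmax(value))])
```

Under these toy assumptions the learner tends to settle on the payment level whose induced label quality best justifies its cost; the paper's RIL method addresses the same trade-off with a learned state representation rather than this simple bandit update.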
