
Scoring Workers in Crowdsourcing: How Many Control Questions are Enough?
Qiang Liu · Alexander Ihler · Mark Steyvers

Sat Dec 07 07:00 PM -- 11:59 PM (PST) @ Harrah's Special Events Center, 2nd Floor

We study the problem of estimating continuous quantities, such as prices, probabilities, and point spreads, using a crowdsourcing approach. A challenging aspect of combining the crowd's answers is that workers' reliabilities and biases are usually unknown and highly diverse. Control items with known answers can be used to evaluate workers' performance, and hence improve the combined results on the target items with unknown answers. This raises the question of how many control items to use when the total number of items each worker can answer is limited: more control items evaluate the workers better, but leave fewer resources for the target items that are of direct interest, and vice versa. We give theoretical results for this problem under different scenarios, and provide a simple rule of thumb for crowdsourcing practitioners. As a byproduct, we also provide theoretical analysis of the accuracy of different consensus methods.
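The paper's own estimators and theory are not reproduced on this page; as a minimal illustrative sketch of the control-item idea it describes, the following simulates workers with unknown, diverse noise levels, scores each worker's error variance on control items with known answers, and then forms an inverse-variance weighted consensus on the target items. All names, sizes, and distributions here are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 workers, each with an unknown noise level,
# answering 10 control items (true answers known) and 20 target items.
n_workers, n_control, n_target = 5, 10, 20
noise = rng.uniform(0.5, 3.0, size=n_workers)        # workers' true (unknown) error std devs
control_truth = rng.normal(0.0, 10.0, size=n_control)
target_truth = rng.normal(0.0, 10.0, size=n_target)

control_ans = control_truth + noise[:, None] * rng.standard_normal((n_workers, n_control))
target_ans = target_truth + noise[:, None] * rng.standard_normal((n_workers, n_target))

# Score workers on the control items: estimate each worker's error variance.
est_var = ((control_ans - control_truth) ** 2).mean(axis=1)

# Consensus on target items: inverse-variance weighted average,
# compared against a naive unweighted mean.
weights = 1.0 / est_var
weighted = (weights[:, None] * target_ans).sum(axis=0) / weights.sum()
naive = target_ans.mean(axis=0)

mse_weighted = ((weighted - target_truth) ** 2).mean()
mse_naive = ((naive - target_truth) ** 2).mean()
```

With only a few control items the variance estimates are noisy, while with many the target items go under-covered; the paper's rule of thumb concerns exactly this trade-off.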

Author Information

Qiang Liu (UC Irvine)
Alexander Ihler (UC Irvine)
Mark Steyvers (UC Irvine)
