

Session

Oral Session 10

Joelle Pineau


Wed 14 Dec. 7:00 - 7:50 PST

Invited Talk
Natural Algorithms

Bernard Chazelle

I will discuss the merits of an algorithmic approach to the analysis of complex self-organizing systems. I will argue that computer science, and algorithms in particular, offer a fruitful perspective on the complex dynamics of multiagent systems: for example, opinion dynamics, bird flocking, and firefly synchronization. I will give many examples and try to touch on some of the theory behind them, with an emphasis on their algorithmic nature and the particular challenges to machine learning that an algorithmic approach to dynamical systems raises.

Wed 14 Dec. 7:50 - 8:10 PST

Oral
Iterative Learning for Reliable Crowdsourcing Systems

David R Karger · Sewoong Oh · Devavrat Shah

Crowdsourcing systems, in which tasks are electronically distributed to numerous “information piece-workers”, have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way, such as majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give new algorithms for deciding which tasks to assign to which workers and for inferring correct answers from the workers’ answers. We show that our algorithm significantly outperforms majority voting and is, in fact, asymptotically optimal through comparison to an oracle that knows the reliability of every worker.
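The core idea of inferring answers jointly with worker reliabilities can be sketched as an alternating procedure: estimate each task's answer by a reliability-weighted vote, then re-estimate each worker's reliability from agreement with those estimates. This is a simplified illustration of the general approach, not the exact message-passing algorithm of the paper; the function and variable names are assumptions.

```python
def iterative_inference(answers, n_iters=10):
    """Simplified iterative estimation for binary crowdsourcing tasks.

    `answers` maps (task, worker) -> label in {-1, +1}. Alternates between
    (1) estimating each task's answer as a reliability-weighted vote and
    (2) re-weighting each worker by agreement with the current estimates.
    Illustrative sketch only, not the authors' algorithm.
    """
    tasks = {t for t, _ in answers}
    workers = {w for _, w in answers}
    weight = {w: 1.0 for w in workers}  # start all workers equally reliable
    estimate = {}
    for _ in range(n_iters):
        # Step 1: weighted majority vote per task
        for t in tasks:
            s = sum(weight[w] * a for (tt, w), a in answers.items() if tt == t)
            estimate[t] = 1 if s >= 0 else -1
        # Step 2: reweight workers by agreement with current estimates
        for w in workers:
            labelled = [(t, a) for (t, ww), a in answers.items() if ww == w]
            agree = sum(1 for t, a in labelled if a == estimate[t])
            weight[w] = 2.0 * agree / len(labelled) - 1.0  # in [-1, 1]
    return estimate
```

Note that a consistently wrong worker ends up with a negative weight, so the scheme can extract signal even from adversarial answers, which plain majority voting cannot.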

Wed 14 Dec. 8:10 - 8:30 PST

Oral
A Collaborative Mechanism for Crowdsourcing Prediction Problems

Jacob D Abernethy · Rafael Frongillo

Machine Learning competitions such as the Netflix Prize have proven reasonably successful as a method of “crowdsourcing” prediction tasks. But these competitions have a number of weaknesses, particularly in the incentive structure they create for the participants. We propose a new approach, called a Crowdsourced Learning Mechanism, in which participants collaboratively “learn” a hypothesis for a given prediction task. The approach draws heavily from the concept of a prediction market, where traders bet on the likelihood of a future event. In our framework, the mechanism continues to publish the current hypothesis, and participants can modify this hypothesis by wagering on an update. The critical incentive property is that a participant will profit an amount that scales according to how much her update improves performance on a released test set.
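The incentive property described above can be illustrated with a toy payoff rule: the mechanism publishes the proposed hypothesis and pays the participant in proportion to the test-set improvement, with losses capped at the wager. The function name, signature, and exact payoff formula are illustrative assumptions, not the mechanism defined in the paper.

```python
def crowdsourced_update(loss_fn, test_set, current_h, proposed_h, wager, rate=1.0):
    """Toy profit rule for a crowdsourced learning mechanism.

    Profit scales with the reduction in mean test-set loss achieved by
    replacing `current_h` with `proposed_h`; a bad update costs at most
    the wager. Illustrative sketch, not the paper's actual mechanism.
    """
    def mean_loss(h):
        return sum(loss_fn(h, x, y) for x, y in test_set) / len(test_set)

    improvement = mean_loss(current_h) - mean_loss(proposed_h)
    profit = max(-wager, rate * improvement)  # downside bounded by the wager
    return proposed_h, profit  # the update is published; the profit is paid
```

Capping the downside at the wager mirrors a prediction-market trade: a participant risks only her stake, while her upside grows with how much her update actually improves held-out performance.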