

Workshop

NIPS’14 Workshop on Crowdsourcing and Machine Learning

David Parkes · Denny Zhou · Chien-Ju Ho · Nihar Bhadresh Shah · Adish Singla · Jared Heyman · Edwin Simpson · Andreas Krause · Rafael Frongillo · Jennifer Wortman Vaughan · Panagiotis Papadimitriou · Damien Peters

Level 5, room 511 a

Sat 13 Dec, 5:30 a.m. PST

Motivation
Crowdsourcing aims to combine human knowledge and expertise with computing to help solve problems and scientific challenges that neither machines nor humans can solve alone. In addition to a number of human-powered scientific projects, including GalaxyZoo, eBird, and Foldit, crowdsourcing is changing how academic researchers build new systems and run new experiments involving people, and it is increasingly used in industry to collect training data for machine learning. There are a number of online marketplaces for crowdsourcing, including Amazon's Mechanical Turk, oDesk, and MobileWorks. The fundamental question that we plan to explore in this workshop is:

How can we build systems that combine the intelligence of humans and the computing power of machines for solving challenging scientific and engineering problems?

The goal is to improve the performance of complex human-powered systems by making them more efficient, robust, and scalable.

Current research in crowdsourcing often focuses on micro-tasking (for example, labeling a set of images) and on designing algorithms that optimize from the job requester's perspective under simple models of worker behavior. However, the participants are people with rich capabilities, including the ability to learn and collaborate, suggesting the need for more nuanced approaches that place special emphasis on the participants themselves. Such human-powered systems could involve large numbers of people with varying expertise, skills, interests, and incentives. This poses many interesting research questions and exciting opportunities for the machine learning community. The goal of this workshop is to foster these ideas and work towards this goal by bringing together experts from the fields of machine learning, cognitive science, economics, game theory, and human-computer interaction.


Topics of Interest
Topics of interest in the workshop include:

* Social aspects and collaboration: How can systems exploit the social ties of the underlying participants or users to create incentives for users to collaborate? How can online social networks be used to create tasks with a gamification component and engage users in useful activities? As users spend an ever-increasing share of their online time on social networks, carefully designed tasks offer a huge opportunity to elicit useful contributions at scale.

* Incentives, pricing mechanisms and budget allocation: How can we design incentive structures and pricing policies that maximize both participant satisfaction and the job requester's utility for a given budget? How can techniques from machine learning, economics, and game theory be used to learn optimal pricing policies and to infer optimal incentive designs?

* Learning by participants: How can we use insights from machine learning to build tools that train and teach participants to carry out complex or difficult tasks? How can this training be actively adapted to participants' skills and expertise by tracking their learning progress?

* Peer prediction and knowledge aggregation: How can complex crowdsourcing tasks be decomposed into simpler micro-tasks? How can peer-prediction techniques be used to elicit informative responses from participants and incentivize effort? Can we design models and algorithms to effectively aggregate responses and knowledge, especially for complex tasks? (A minimal aggregation sketch appears after this list.)

* Privacy aspects: Privacy in human-powered systems has often been ignored; we seek to understand privacy from the perspective of the job requester as well as that of the participants. How can a job requester (such as a firm interested in translating legal documents) carry out crowdsourcing tasks without revealing private information to the crowd? How can systems negotiate access to participants' private information (such as GPS location in community sensing applications) in return for appropriate incentives?

* Open theoretical questions and novel applications: What are the open research questions, emerging trends and novel applications related to design of incentives in human computation and crowdsourcing systems?
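To make the aggregation question above concrete, here is a minimal, illustrative Python sketch of combining noisy binary labels from several workers: a reliability-weighted vote per task alternates with re-estimating each worker's accuracy, in the spirit of a simplified Dawid-Skene-style EM. The worker and task identifiers and the response data are hypothetical, and this is not tied to any particular workshop contribution.

# Illustrative sketch: aggregate noisy binary labels from several workers by
# alternating between (1) a reliability-weighted vote per task and (2)
# re-estimating each worker's accuracy against the current label estimates.
# Simplified, Dawid-Skene-flavoured EM loop; all data below are hypothetical.
from collections import defaultdict

# hypothetical responses: (worker_id, task_id, label)
responses = [
    ("w1", "t1", 1), ("w2", "t1", 1), ("w3", "t1", 0),
    ("w1", "t2", 0), ("w2", "t2", 1), ("w3", "t2", 0),
    ("w1", "t3", 1), ("w2", "t3", 1), ("w3", "t3", 1),
]

def aggregate(responses, n_iters=5):
    workers = {w for w, _, _ in responses}
    weights = {w: 1.0 for w in workers}  # start with equal trust in every worker
    estimates = {}
    for _ in range(n_iters):
        # E-step: weighted vote per task using current worker reliabilities
        votes = defaultdict(lambda: defaultdict(float))
        for w, t, y in responses:
            votes[t][y] += weights[w]
        estimates = {t: max(v, key=v.get) for t, v in votes.items()}
        # M-step: each worker's weight becomes their agreement rate with the estimates
        correct, total = defaultdict(float), defaultdict(float)
        for w, t, y in responses:
            total[w] += 1.0
            correct[w] += float(y == estimates[t])
        weights = {w: correct[w] / total[w] for w in workers}
    return estimates, weights

labels, reliability = aggregate(responses)
print(labels)       # e.g. {'t1': 1, 't2': 0, 't3': 1}
print(reliability)  # estimated per-worker agreement with the aggregated labels

In practice such estimators run on much larger response matrices and are extended with per-worker confusion matrices, priors over label classes, and budget-aware task assignment; the sketch only illustrates the basic alternation between label estimation and worker-reliability estimation.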


Participants
We expect diverse participation from researchers with a wide variety of scientific interests spanning economics, game theory, cognitive science, and human-computer interaction. Given the widespread use of crowdsourcing in industry, for example at Amazon, Google, and Bing, we also expect active participation from industry practitioners.
