The Strategic Perceptron
Saba Ahmadi · Hedyeh Beyhaghi · Avrim Blum · Keziah Naggita
Event URL: https://openreview.net/forum?id=qhL1W4nE9sX
The classical Perceptron algorithm provides a simple and elegant procedure for learning a linear classifier. In each step, the algorithm observes the sample's position and label and updates the current predictor accordingly if it makes a mistake. However, in the presence of strategic agents that desire to be classified as positive and that are able to modify their position by a limited amount, the classifier may not observe an agent's true position but rather the position the agent pretends to occupy. Unlike the original setting with perfect knowledge of positions, in this situation the Perceptron algorithm fails to achieve its guarantees: we illustrate examples where the predictor oscillates between two solutions forever, making an unbounded number of mistakes even though a perfect large-margin linear classifier exists. Our main contribution is a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents with both $\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model, knowledge of the manipulation costs (i.e., the extent to which an agent may manipulate) is assumed. In our most general model, we relax this assumption and provide an algorithm which learns and refines both the classifier and its cost estimates to achieve good mistake bounds even when manipulation costs are unknown.
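To make the failure mode concrete, here is a minimal runnable sketch in Python/NumPy. It is not the authors' algorithm: the function names and the specific rule of classifying positive only beyond the manipulation band are illustrative assumptions about what a manipulation-aware Perceptron variant might look like under a known $\ell_2$ budget alpha; the paper's actual algorithms come with formal mistake bounds for $\ell_2$ and weighted $\ell_1$ costs.

```python
import numpy as np

def best_response(w, x, alpha, threshold):
    """Agents want a positive label: report the cheapest point within l2
    budget alpha that satisfies w . x' >= threshold; otherwise be truthful.
    (Illustrative model of the agents' behavior, not taken from the paper.)"""
    norm = np.linalg.norm(w)
    if norm == 0.0:
        return x
    margin = (np.dot(w, x) - threshold) / norm  # signed distance to the boundary
    if -alpha <= margin < 0:                    # close enough to cross within budget
        return x - margin * w / norm            # move just onto the positive side
    return x

def perceptron_with_shift(stream, alpha, shift, rounds=1, dim=2):
    """Mistake-driven Perceptron run on *reported* positions.
    shift = 0.0   reproduces the classical rule sign(w . x);
    shift = alpha classifies positive only beyond the manipulation band,
                  an illustrative guess at the flavor of the fix."""
    w = np.zeros(dim)
    mistakes = 0
    for _ in range(rounds):
        for x_true, y in stream:
            threshold = shift * np.linalg.norm(w)
            x_seen = best_response(w, x_true, alpha, threshold)
            pred = 1 if np.dot(w, x_seen) >= threshold else -1
            if pred != y:                       # mistake: standard Perceptron update
                mistakes += 1
                w = w + y * x_seen
    return mistakes

# Toy stream: a true positive far from the boundary and a true negative
# close enough to game its way across with budget alpha = 1.
data = [(np.array([2.0, 0.0]), 1), (np.array([-0.5, 0.0]), -1)]
print(perceptron_with_shift(data, alpha=1.0, shift=0.0, rounds=25))  # classical rule
print(perceptron_with_shift(data, alpha=1.0, shift=1.0, rounds=25))  # shifted rule
```

On this toy stream the classical rule (shift = 0) is gamed by the nearby negative agent on every pass, so its mistake count grows with the number of rounds, while the shifted rule stops being fooled after a single mistake.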
Author Information
Saba Ahmadi (Toyota Technological Institute at Chicago)
Hedyeh Beyhaghi (Toyota Technological Institute at Chicago)
Avrim Blum (Toyota Technological Institute at Chicago)
Keziah Naggita (Toyota Technological Institute at Chicago)
More from the Same Authors
- 2021 Spotlight: Excess Capacity and Backdoor Poisoning
  Naren Manoj · Avrim Blum
- 2021: One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning
  Richard Phillips · Han Shao · Avrim Blum · Nika Haghtalab
- 2021: On classification of strategic agents who can both game and improve
  Saba Ahmadi · Hedyeh Beyhaghi · Avrim Blum · Keziah Naggita
- 2021: The Strategic Perceptron
  Saba Ahmadi · Hedyeh Beyhaghi · Avrim Blum · Keziah Naggita
- 2021: Ethics: The Equity Framework
  Keziah Naggita · Julius Aguma
- 2021 Poster: Excess Capacity and Backdoor Poisoning
  Naren Manoj · Avrim Blum
- 2014 Poster: Learning Optimal Commitment to Overcome Insecurity
  Avrim Blum · Nika Haghtalab · Ariel Procaccia
- 2014 Poster: Learning Mixtures of Ranking Models
  Pranjal Awasthi · Avrim Blum · Or Sheffet · Aravindan Vijayaraghavan
- 2014 Poster: Active Learning and Best-Response Dynamics
  Maria-Florina F Balcan · Christopher Berlind · Avrim Blum · Emma Cohen · Kaushik Patnaik · Le Song
- 2014 Spotlight: Learning Mixtures of Ranking Models
  Pranjal Awasthi · Avrim Blum · Or Sheffet · Aravindan Vijayaraghavan
- 2010 Spotlight: Trading off Mistakes and Don't-Know Predictions
  Amin Sayedi · Avrim Blum · Morteza Zadimoghaddam
- 2010 Poster: Trading off Mistakes and Don't-Know Predictions
  Amin Sayedi · Morteza Zadimoghaddam · Avrim Blum
- 2009 Workshop: Clustering: Science or art? Towards principled approaches
  Margareta Ackerman · Shai Ben-David · Avrim Blum · Isabelle Guyon · Ulrike von Luxburg · Robert Williamson · Reza Zadeh
- 2009 Poster: Tracking Dynamic Sources of Malicious Activity at Internet Scale
  Shobha Venkataraman · Avrim Blum · Dawn Song · Subhabrata Sen · Oliver Spatscheck
- 2009 Spotlight: Tracking Dynamic Sources of Malicious Activity at Internet Scale
  Shobha Venkataraman · Avrim Blum · Dawn Song · Subhabrata Sen · Oliver Spatscheck
- 2008 Workshop: New Challenges in Theoretical Machine Learning: Data Dependent Concept Spaces
  Maria-Florina F Balcan · Shai Ben-David · Avrim Blum · Kristiaan Pelckmans · John Shawe-Taylor