

Oral in Workshop: Learning in Presence of Strategic Behavior

Regret, stability, and fairness in matching markets with bandit learners

Sarah Cen · Devavrat Shah


Abstract:

We consider a two-sided matching market with bandit learners. In a matching market, there are two sets of agents (users and providers) who seek to be matched one-to-one. Contrary to a core assumption in the literature, users and providers often do not know their true matching preferences a priori and must learn them. To relax this assumption, recent works propose blending the matching and multi-armed bandit problems. One such work finds that it is possible to assign matchings that are stable (i.e., no pair of agents is incentivized to defect) at every time step while also allowing agents to learn enough that the system converges to matchings that are stable under the agents' true preferences. However, the authors also uncover an impossibility result: under their setup, one cannot simultaneously guarantee stability and low optimal regret. In this work, we study the two-sided matching market with bandit learners under costs and transfers in order to faithfully model competition between agents. We show that, in contrast to previous work, it is possible to simultaneously guarantee four desiderata: (1) stability, (2) low optimal regret, (3) fairness in the distribution of regret, and (4) high social welfare.
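For readers unfamiliar with the matching-bandit setup, the following is a minimal sketch (not the paper's algorithm) of how matching and bandit learning can be blended: a platform repeatedly matches users to providers using optimistic (UCB) estimates of the unknown pairwise surplus and updates those estimates from noisy feedback on the matched pairs. The market size, noise level, and confidence-bonus form are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative sketch of matching with bandit learners (assumed setup,
# not the paper's mechanism): the unknown surplus of each user-provider
# pair is learned from noisy observations, and each round the platform
# assigns a max-weight matching on optimistic (UCB) estimates.

rng = np.random.default_rng(0)
n = 4                                      # users and providers, matched one-to-one
true_surplus = rng.uniform(0, 1, (n, n))   # unknown joint value of each pair

sums = np.zeros((n, n))     # running sum of observed surplus per pair
counts = np.zeros((n, n))   # number of times each pair has been matched
T = 2000                    # horizon

for t in range(1, T + 1):
    means = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(T) / np.maximum(counts, 1))
    # Unvisited pairs get a large index so every pair is explored early on.
    ucb = np.where(counts > 0, means + bonus, 1e6)
    # Max-weight matching on optimistic estimates maximizes estimated welfare.
    rows, cols = linear_sum_assignment(ucb, maximize=True)
    for i, j in zip(rows, cols):
        reward = true_surplus[i, j] + rng.normal(0, 0.1)  # noisy feedback
        sums[i, j] += reward
        counts[i, j] += 1

# Compare the matching learned from estimates with the welfare-optimal one.
learned = linear_sum_assignment(sums / np.maximum(counts, 1), maximize=True)
optimal = linear_sum_assignment(true_surplus, maximize=True)
print("learned matching:", list(zip(*learned)))
print("welfare-optimal matching:", list(zip(*optimal)))
```

This sketch omits the costs and transfers that the paper adds to model competition; in that setting, stability means no user-provider pair could defect, match with each other, and split their surplus so that both strictly improve on their current payoffs.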
