The case for targeted immunization has gained prominence since the Covid-19 pandemic. However, sophisticated techniques for identifying “superspreaders” for targeted vaccination may lead to inequalities in vaccine distribution, and hence in immunity to Covid-19, between social communities. This is particularly poignant in social networks that exhibit homophily: our tendency to interact more with those who share our demographics. If our contact networks likewise show that we move in close-knit communities, can we ensure that targeted immunization does not benefit one community over another? Here, we answer this question by applying group fairness constraints, which ensure immunity is balanced across different sub-populations, to an Influence Maximization (IM) task. IM is a technique that identifies the most influential members of a social network, i.e. those responsible for the greatest spread of, for example, disease or information. Previous work has demonstrated the equivalence of outbreak minimization and IM for detecting superspreaders, and has shown that homophilic social networks lead to a more unbalanced spread of information. While the fair IM problem has been approached from a time-critical perspective, no attempt has yet been made to achieve group fairness on dynamic social networks. We therefore propose a novel method for applying fairness constraints to IM on dynamic and homophilic social networks to detect superspreaders.
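To make the setup concrete, the sketch below shows one way a group fairness constraint can be combined with greedy Influence Maximization: seeds are chosen to maximize the minimum expected coverage across demographic groups under an independent-cascade spread model. This is an illustration only, not the authors' method; the toy homophilic graph, the group labels, the maximin objective, and all parameter values are assumptions, and the paper's handling of dynamic networks is not reproduced here.

```python
# Minimal sketch (illustrative, not the paper's implementation) of greedy
# Influence Maximization with a maximin group-fairness objective.
import random
import networkx as nx


def independent_cascade(G, seeds, p=0.1, rng=random):
    """Simulate one independent-cascade outbreak; return the set of reached nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active


def group_spread(G, seeds, groups, p=0.1, n_sim=100):
    """Estimate, for each group, the expected fraction of its members reached."""
    totals = {g: 0.0 for g in set(groups.values())}
    sizes = {g: sum(1 for v in G if groups[v] == g) for g in totals}
    for _ in range(n_sim):
        for v in independent_cascade(G, seeds, p):
            totals[groups[v]] += 1
    return {g: totals[g] / (n_sim * sizes[g]) for g in totals}


def fair_greedy_im(G, groups, k, p=0.1, n_sim=100):
    """Greedily pick k seeds that maximize the minimum per-group spread (maximin fairness)."""
    seeds = []
    for _ in range(k):
        best, best_score = None, -1.0
        for v in G:
            if v in seeds:
                continue
            score = min(group_spread(G, seeds + [v], groups, p, n_sim).values())
            if score > best_score:
                best, best_score = v, score
        seeds.append(best)
    return seeds


if __name__ == "__main__":
    # Toy homophilic network: two dense communities joined by sparse cross links.
    G = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.01, seed=0)
    groups = {v: ("A" if v < 50 else "B") for v in G}
    seeds = fair_greedy_im(G, groups, k=4)
    print("Selected seeds:", seeds)
    print("Per-group spread:", group_spread(G, seeds, groups))
```

On such a homophilic toy graph, an unconstrained greedy IM baseline tends to concentrate the selected seeds, and hence the resulting coverage, in one community; the maximin objective above trades some total spread for balance across the two groups, which is the kind of trade-off the group fairness constraints in the abstract address.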
Author Information
Nicola Neophytou (Mila / Universite de Montreal)
Golnoosh Farnadi (Mila)