Workshop

Algorithmic Fairness through the Lens of Time

Awa Dieng · Miriam Rateike · Golnoosh Farnadi · Ferdinando Fioretto · Jessica Schrouff

Room 252 - 254
Fri 15 Dec, 7 a.m. PST

We are proposing the Algorithmic Fairness through the Lens of Time (AFLT) workshop, which is the fourth edition of this workshop series on algorithmic fairness. Previous editions have looked at causal approaches to fairness and the intersection of fairness with other fields of trustworthy machine learning, namely interpretability, robustness, and privacy. The aim of this year's workshop is to provide a venue to discuss foundational work on fairness, challenge existing static definitions of fairness (group, individual, causal), and explore the long-term effects of fairness methods. More importantly, the workshop aims to foster an open discussion on how to reconcile existing fairness frameworks with the development and proliferation of large generative models.

Topic

Fairness has been predominantly studied under the static regime, assuming an unchanging data generation process [Hardt et al., 2016a, Dwork et al., 2012, Agarwal et al., 2018, Zafar et al., 2017]. However, these approaches neglect the dynamic interplay between algorithmic decisions and the individuals they impact, which has been shown to be prevalent in practical settings [Chaney et al., 2018, Fuster et al., 2022]. This observation has highlighted the need to study the long-term effects of fairness mitigation strategies and to incorporate dynamic systems within the development of fair algorithms. Despite prior research identifying several impactful scenarios where such dynamics can occur, including bureaucratic processes [Liu et al., 2018], social learning [Heidari et al., 2019], recourse [Karimi et al., 2020], and strategic behavior [Hardt et al., 2016b, Perdomo et al., 2020], extensive investigation of the long-term effects of fairness methods remains limited.
Initial studies have shown how enforcing static fairness constraints in dynamical systems can lead to unfair data distributions and may perpetuate or even amplify biases [Zhang et al., 2020, Creager et al., 2020, D'Amour et al., 2020]. Additionally, the rise of powerful large generative models has brought to the forefront the need to understand fairness in evolving systems. The general capabilities and widespread use of these models raise the critical question of how to assess these models for fairness [Luccioni et al., 2023] and mitigate observed biases [Ranaldi et al., 2023, Ma et al., 2023] from a long-term perspective. Importantly, mainstream fairness frameworks have been developed around classification and prediction tasks. How can we reconcile these existing techniques (pre-processing, in-processing, and post-processing) with the development of large generative models?

Given these questions, this workshop aims to investigate in depth how to address fairness concerns in settings where learning occurs sequentially or in evolving environments. We are particularly interested in addressing open questions in the field, such as:

• What are the long-term effects of static fairness methods?
• How can we develop adaptable fairness approaches under known or unknown dynamic environments?
• Are there trade-offs between short-term and long-term fairness?
• How can we incorporate existing fairness frameworks into the development of large generative models?
• How can we ensure long-term fairness in large generative models via feedback loops?
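The feedback-loop setting described above can be sketched in a few lines. The following is a minimal, hypothetical toy simulation (not drawn from any of the cited papers): a decision maker repeatedly applies a fixed score threshold, approved individuals' scores drift depending on their outcome, and we track the demographic-parity gap (the absolute difference in approval rates between two groups) over time. All group parameters, drift amounts, and function names are illustrative assumptions.

```python
# Toy sketch of a decision feedback loop: a static threshold is applied
# repeatedly while approved individuals' scores drift with their outcomes.
# All parameters below are illustrative assumptions, not from the literature.
import random

random.seed(0)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in approval rates between two groups."""
    rate_a = sum(decisions_a) / len(decisions_a)
    rate_b = sum(decisions_b) / len(decisions_b)
    return abs(rate_a - rate_b)

def step(scores, threshold, success_prob):
    """One round: approve scores above threshold; approved scores drift
    up on success and down on failure, clipped to [0, 100]."""
    decisions = [s >= threshold for s in scores]
    new_scores = []
    for s, approved in zip(scores, decisions):
        if approved:
            s += 5 if random.random() < success_prob else -10
        new_scores.append(min(max(s, 0.0), 100.0))
    return new_scores, decisions

# Two groups with different (hypothetical) initial score distributions
# and success probabilities.
group_a = [random.gauss(60, 10) for _ in range(500)]
group_b = [random.gauss(50, 10) for _ in range(500)]

gaps = []
for t in range(20):
    group_a, dec_a = step(group_a, threshold=55, success_prob=0.9)
    group_b, dec_b = step(group_b, threshold=55, success_prob=0.7)
    gaps.append(demographic_parity_gap(dec_a, dec_b))

print(f"parity gap at t=0: {gaps[0]:.2f}, at t=19: {gaps[-1]:.2f}")
```

Plotting `gaps` over the rounds makes the long-term behavior visible: a policy that looks acceptable at t=0 can drift as the score distributions respond to the decisions, which is precisely the dynamic the workshop questions above target.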

Timezone: America/Los_Angeles

Schedule