Keynote talk by Helen Nissenbaum
Abstract
Ethical Obfuscation: Why’s, How’s, and Should Have’s
This talk is about data obfuscation, defined as the “production, inclusion, addition, or communication of misleading, ambiguous, or false data in an effort to evade, distract, or confuse” [Brunton & Nissenbaum 2015]. This theoretical abstraction followed on the heels of TrackMeNot [2006] and AdNauseam [2014], two browser extensions aimed at disrupting business-as-usual for behavioral profiling, which have both earned praise and provoked rebuke [Howe and Nissenbaum 2018]. I will revisit our efforts of twenty years ago, explaining why we chose obfuscation and why we settled on those designs rather than others (possibly more effective ones). Confronting the contemporary landscape of GenAI and LLMs, in which diverse privacy threats have greatly intensified, I ask whether this landscape offers new opportunities to instantiate past ethical and practical successes while mitigating prior limitations.