Responsible AI Research & Unlearning: From Consent to Compliance to Critique
Abstract
This panel brings together three timely position papers that tackle different dimensions of how the AI/ML research and deployment ecosystem must evolve around consent, forgetting, and governance. The first paper, Stop the Nonconsensual Use of Nude Images in Research, highlights how research practices around nudity detection and nude-image datasets often proceed without the consent of the people depicted, perpetuating harm and normalising the distribution of non-consensual intimate content. The second paper, Bridge the Gaps between Machine Unlearning and AI Regulation, examines the promises of machine unlearning (e.g., removing the influence of specific data from a trained model) and contrasts them with existing regulatory frameworks such as the EU AI Act, identifying both legal and technical gaps. The third paper, Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice, probes the mismatch between what technical unlearning methods actually achieve in generative-AI systems and what law and policy stakeholders expect of them.