Mexico City Oral Session
Oral 4C Position Paper Panels
Don Alberto 3
Moderators: Olawale Salaudeen · Muhan Zhang
Responsible AI Research & Unlearning: From Consent to Compliance to Critique
Princessa Cintaqia · Bill Marino · A. Feder Cooper
This panel brings together three timely position papers that tackle different dimensions of how the AI/ML research and deployment ecosystem must evolve around consent, forgetting, and governance. The first paper, Stop the Nonconsensual Use of Nude Images in Research, highlights how research practices around nudity detection and nude-image datasets often proceed without consent, perpetuating harm and normalising the distribution of non-consensual intimate content. The second paper, Bridge the Gaps between Machine Unlearning and AI Regulation, examines the promises of machine unlearning (e.g., removal of data influence) and contrasts them with existing regulatory frameworks such as the EU AI Act, pointing out legal and technical gaps. The third paper, Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy, Research, and Practice, digs deeper into the mismatch between what technical unlearning methods in generative-AI systems actually achieve and what law and policy stakeholders expect of them.
Strengthening the AI Research Ecosystem: Integrity, Critique, and Consensus
Jianghao Lin · Rylan Schaeffer · Rishi Bommasani
This panel brings together three recent, challenging position papers from NeurIPS 2025 that collectively spotlight structural vulnerabilities in the machine-learning research ecosystem and propose bold reforms. The first paper, “Stop DDoS Attacking the Research Community with AI-Generated Survey Papers”, identifies the surge of AI-generated, mass-produced survey manuscripts as a form of “survey-paper DDoS” that threatens to flood and degrade the research record. The second, “Position: Machine Learning Conferences Should Establish a ‘Refutations and Critiques’ Track”, argues that major ML conferences currently lack a credible, high-visibility venue for rigorous critiques and corrections of prior work, and proposes a dedicated “Refutations & Critiques” track. The third paper, “NeurIPS should lead scientific consensus on AI policy”, makes the case that NeurIPS (and, by extension, the ML community) should play an active role in building scientific consensus on AI policy, filling an important gap in evidence synthesis and decision-making.