Schedule (Fri Dec 09, all times PST)

06:45 AM -- 07:00 AM  Introduction and Opening Remarks
07:00 AM -- 07:30 AM  Invited Talk: Aleksander Mądry
07:30 AM -- 07:45 AM  Contributed Talk: Revisiting Robustness in Graph Machine Learning
07:45 AM -- 08:00 AM  Contributed Talk: Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning
08:00 AM -- 08:30 AM  Invited Talk: Milind Tambe
08:30 AM -- 08:40 AM  Coffee Break I
08:40 AM -- 09:30 AM  Morning Poster Session
09:30 AM -- 10:00 AM  Invited Talk: Nika Haghtalab
10:00 AM -- 10:30 AM  Invited Talk: Kamalika Chaudhuri
10:30 AM -- 11:00 AM  Invited Talk: Been Kim
11:00 AM -- 12:00 PM  Lunch Break
12:00 PM -- 12:30 PM  Invited Talk: Yi Ma
12:30 PM -- 01:00 PM  Invited Talk: Dorsa Sadigh
01:00 PM -- 01:30 PM  Invited Talk: Marco Pavone
01:30 PM -- 01:40 PM  Contributed Talk: DensePure: Understanding Diffusion Models towards Adversarial Robustness
01:40 PM -- 02:30 PM  Afternoon Poster Session
02:30 PM -- 02:45 PM  Contributed Talk: Differentially Private Bias-Term only Fine-tuning of Foundation Models
02:45 PM -- 03:00 PM  Contributed Talk: TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
03:00 PM -- 03:15 PM  Contributed Talk: zPROBE: Zero Peek Robustness Checks for Federated Learning
03:15 PM -- 04:00 PM  Panel Discussion
04:00 PM -- 04:15 PM  Closing Remarks
Posters

Forgetting Data from Pre-trained GANs
Training Differentially Private Graph Neural Networks with Random Walk Sampling
Striving for data-model efficiency: Identifying data externalities on group performance
Cooperation or Competition: Avoiding Player Domination for Multi-target Robustness by Adaptive Budgets
Individual Privacy Accounting with Gaussian Differential Privacy
Learning to Take a Break: Sustainable Optimization of Long-Term User Engagement
Is the Next Winter Coming for AI? The Elements of Making Secure and Robust AI
Differentially Private Gradient Boosting on Linear Learners for Tabular Data
zPROBE: Zero Peek Robustness Checks for Federated Learning
A Closer Look at the Intervention Procedure of Concept Bottleneck Models
A View From Somewhere: Human-Centric Face Representations
Cold Posteriors through PAC-Bayes
A Stochastic Optimization Framework for Fair Risk Minimization
Denoised Smoothing with Sample Rejection for Robustifying Pretrained Classifiers
Revisiting Robustness in Graph Machine Learning
Information-Theoretic Evaluation of Free-Text Rationales with Conditional $\mathcal{V}$-Information
FL-Talk: Covert Communication in Federated Learning via Spectral Steganography
A Theory of Learning with Competing Objectives and User Feedback
Explainability in Practice: Estimating Electrification Rates from Mobile Phone Data in Senegal
Addressing Bias in Face Detectors using Decentralised Data collection with incentives
Evaluating the Practicality of Counterfactual Explanation
Participatory Systems for Personalized Prediction
REGLO: Provable Neural Network Repair for Global Robustness Properties
On Causal Rationalization
Indiscriminate Data Poisoning Attacks on Neural Networks
Bias Amplification in Image Classification
Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation
Men Also Do Laundry: Multi-Attribute Bias Amplification
Anonymization for Skeleton Action Recognition
Provable Re-Identification Privacy
Membership Inference Attacks via Adversarial Examples
Take 5: Interpretable Image Classification with a Handful of Features
Generating Intuitive Fairness Specifications for Natural Language Processing
Learning from uncertain concepts via test time interventions
Improving Fairness in Image Classification via Sketching
Inferring Class Label Distribution of Training Data from Classifiers: An Accuracy-Augmented Meta-Classifier Attack
Case Study: Applying Decision Focused Learning in the Real World
Assessing Performance and Fairness Metrics in Face Recognition - Bootstrap Methods
Beyond Protected Attributes: Disciplined Detection of Systematic Deviations in Data
Quantifying Social Biases Using Templates is Unreliable
A Fair Loss Function for Network Pruning
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of the Final Model
Physically-Constrained Adversarial Attacks on Brain-Machine Interfaces
Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning
Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Interactive Rationale Extraction for Text Classification
When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction
Few-shot Backdoor Attacks via Neural Tangent Kernels
Towards Reasoning-Aware Explainable VQA
A Deep Dive into Dataset Imbalance and Bias in Face Identification
On the Robustness of deep learning-based MRI Reconstruction to image transformations
DensePure: Understanding Diffusion Models towards Adversarial Robustness
Just Avoid Robust Inaccuracy: Boosting Robustness Without Sacrificing Accuracy
Accelerating Open Science for AI in Heliophysics
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
On the Impact of Adversarially Robust Models on Algorithmic Recourse
TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
COVID-Net Biochem: An Explainability-driven Framework to Building Machine Learning Models for Predicting Survival and Kidney Injury of COVID-19 Patients from Clinical and Biochemistry Data
Visual Prompting for Adversarial Robustness
On the Feasibility of Compressing Certifiably Robust Neural Networks
Just Following AI Orders: When Unbiased People Are Influenced By Biased AI
Towards Algorithmic Fairness in Space-Time: Filling in Black Holes
Scalable and Improved Algorithms for Individually Fair Clustering
On the Trade-Off between Actionable Explanations and the Right to be Forgotten
Real world relevance of generative counterfactual explanations
GFairHint: Improving Individual Fairness for Graph Neural Networks via Fairness Hint
Strategy-Aware Contextual Bandits
Group Excess Risk Bound of Overparameterized Linear Regression with Constant-Stepsize SGD
Not All Knowledge Is Created Equal: Mutual Distillation of Confident Knowledge
Poisoning Generative Models to Promote Catastrophic Forgetting
Certified Training: Small Boxes are All You Need
What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML
On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition
Finding Safe Zones of Markov Decision Processes Policies
Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data
Attack-Agnostic Adversarial Detection
When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes
Differentially Private Bias-Term only Fine-tuning of Foundation Models
Uncertainty-aware predictive modeling for fair data-driven decisions
Socially Responsible Reasoning with Large Language Models and The Impact of Proper Nouns
An Analysis of Social Biases Present in BERT Variants Across Multiple Languages
Private Data Leakage via Exploiting Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems
A Brief Overview of AI Governance for Responsible Machine Learning Systems