Search All 2024 Events

22 Results · Page 2 of 2
Poster · Fri 16:30 · Data Free Backdoor Attacks
Bochuan Cao · Jinyuan Jia · Chuxuan Hu · Wenbo Guo · Zhen Xiang · Jinghui Chen · Bo Li · Dawn Song

Poster · Fri 11:00 · SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
Ethan Rathbun · Christopher Amato · Alina Oprea

Workshop · Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?
Michael-Andrei Panaitescu-Liess · Zora Che · Bang An · Yuancheng Xu · Pankayaraj Pathmanathan · Souradip Chakraborty · Sicheng Zhu · Tom Goldstein · Furong Huang

Poster · Wed 16:30 · RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation
Peihua Mai · Ran Yan · Yan Pang

Workshop · PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
Michael-Andrei Panaitescu-Liess · Pankayaraj Pathmanathan · Yigitcan Kaya · Zora Che · Bang An · Sicheng Zhu · Aakriti Agrawal · Furong Huang

Poster · Thu 16:30 · PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Omead Pooladzandi · Sunay Bhat · Jeffrey Jiang · Alexander Branch · Gregory Pottie

Poster · Wed 11:00 · Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning
Runhua Xu · Shiqi Gao · Chao Li · James Joshi · Jianxin Li

Poster · Wed 11:00 · From Trojan Horses to Castle Walls: Unveiling Bilateral Data Poisoning Effects in Diffusion Models
Zhuoshi Pan · Yuguang Yao · Gaowen Liu · Bingquan Shen · H. Vicky Zhao · Ramana Kompella · Sijia Liu

Workshop · The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Wencong You · Daniel Lowd

Workshop · Mitigating Downstream Model Risks via Model Provenance
Keyu Wang · Scott Schaffter · Abdullah Norozi Iranzad · Doina Precup · Jonathan Lebensold · Megan Risdal