Workshop
Safety-Aware Fine-Tuning of Large Language Models
Hyeong Kyu Choi · Xuefeng Du · Sharon Li

Workshop
A Safety-aware Framework for Generative Enzyme Design with Foundation Models
Xiaoyi Fu · Tao Han · Yuan Yao · Song Guo

Workshop | Sun 11:21
Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models
Rui Ye · Jingyi Chai · Xiangrui Liu · Yaodong Yang · Yanfeng Wang · Siheng Chen

Workshop
Uncertainty as a criterion for SOTIF evaluation of deep learning models in autonomous driving systems
Ho Suk

Poster | Thu 11:00
MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models
Tessa Han · Aounon Kumar · Chirag Agarwal · Himabindu Lakkaraju

Poster | Fri 16:30
Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack
Tiansheng Huang · Sihao Hu · Fatih Ilhan · Selim Tekin · Ling Liu

Poster | Fri 16:30
T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models
Yibo Miao · Yifan Zhu · Lijia Yu · Jun Zhu · Xiao-Shan Gao · Yinpeng Dong

Workshop
MISR: Measuring Instrumental Self-Reasoning in Frontier Models
Kai Fronsdal · David Lindner

Workshop | Sat 13:15
Keynote 3: Risk assessment, safety alignment, and guardrails for multimodal foundation models
Bo Li

Workshop | Sat 11:30
Multimodal Situational Safety
Kaiwen Zhou · Chengzhi Liu · Xuandong Zhao · Anderson Compalas · Xin Eric Wang

Workshop
Adversarial Negotiation Dynamics in Generative Language Models
Arinbjörn Kolbeinsson · Benedikt Kolbeinsson