Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack

Tiansheng Huang ⋅ Sihao Hu ⋅ Fatih Ilhan ⋅ Selim Tekin ⋅ Ling Liu
2024 Poster
