

Search All 2024 Events
 

7 Results

Workshop
CryptoFormalEval: Integrating Large Language Models and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection
Cristian Curaba · D'Ambrosi Denis · Alessandro Minisini
Workshop
Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting
Fuqiang Liu · Sicong Jiang · Luis Miranda-Moreno · Seongjin Choi · Lijun Sun
Poster
Thu 11:00 · Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
Martin Bertran · Shuai Tang · Michael Kearns · Jamie Morgenstern · Aaron Roth · Steven Wu
Workshop
On Adversarial Robustness of Language Models in Transfer Learning
Bohdan Turbal · Anastasiia Mazur · Jiaxu Zhao · Mykola Pechenizkiy
Workshop
Decompose, Recompose, and Conquer: Multi-modal LLMs are Vulnerable to Compositional Adversarial Attacks in Multi-Image Queries
Julius Broomfield · George Ingebretsen · Reihaneh Iranmanesh · Sara Pieri · Ethan Kosak-Hine · Tom Gibbs · Reihaneh Rabbany · Kellin Pelrine
Workshop
A Formal Framework for Assessing and Mitigating Emergent Security Risks in Generative AI Models: Bridging Theory and Dynamic Risk Mitigation
Aviral Srivastava · Sourav Panda